Multimedia enhancements in Sun Ray Software 4 Update 3 (aka SRS4U3) FAQ
Principles and libraries (written by Ottomeister; here is the original post):
On 4/10/06, Jason T. Hallahan <jthallah at gmail.com> wrote:
Can somebody (again, maybe), explain to me the primary differences between libutmedia, Sun Ray Protocol (SRP), libvis, XVideo, and Direct Pixel Access (DPA)?
Sun Ray Protocol is, I suppose, the set of rules for exchanging data with a Sun Ray in such a way that some useful result is produced.
In principle it’s possible for any application to talk Sun Ray Protocol to a Sun Ray. In practice the Xserver is usually the only process that talks SRP, and it does that on behalf of its clients. The X clients make requests to the Xserver and the Xserver translates those requests into SRP and delivers them to the Sun Ray.
X clients usually interact with the X server by using the core X operations, but X has an “extension” mechanism that allows additional operations to be supported. Extensions are often tailored to providing additional features over what’s in the core set, or providing improved performance for certain activities.
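To make the extension mechanism a little more concrete, here is a minimal C sketch of how extension requests are addressed on the wire: the server assigns each extension a "major" opcode when the client queries for it (e.g. via XQueryExtension), and the extension multiplexes its own operations through a one-byte "minor" opcode. The opcode values used in the test are made up for illustration; only the header layout and the 4-byte length rounding follow the X protocol.

```c
#include <stdint.h>
#include <stddef.h>

/* Every X request begins with a one-byte opcode. Core requests use a
 * fixed set of opcodes; an extension is assigned a "major" opcode by
 * the server at query time, and it distinguishes its own operations
 * with a one-byte "minor" opcode in the next byte of the request. */
typedef struct {
    uint8_t  major_opcode;  /* assigned to the extension by the server */
    uint8_t  minor_opcode;  /* which of the extension's operations this is */
    uint16_t length;        /* total request length in 4-byte units */
} ext_request_header;

/* Build a header for an extension request carrying body_bytes of payload.
 * The 4-byte header is included in the length, and the total is rounded
 * up to a 4-byte boundary as the X protocol requires. */
ext_request_header make_ext_request(uint8_t major, uint8_t minor,
                                    size_t body_bytes)
{
    ext_request_header h;
    h.major_opcode = major;
    h.minor_opcode = minor;
    h.length = (uint16_t)((4 + body_bytes + 3) / 4);
    return h;
}
```

This is why an extension like XVideo or DPA can add operations without touching the core protocol: the server only needs to route requests with that major opcode to the extension's handler.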
Direct Pixel Access is an X extension whose purpose is to allow an X client to perform frame buffer updates more quickly and efficiently than would be possible through the core X operations. X clients that use DPA can manipulate pixels in the frame buffer at a lower processing cost than would have been achievable by
using core X operations. DPA is a Sun-specific extension. It is available on Sun Ray only when Sun Ray is driven by Sun’s ‘Xsun’ Xserver. At least I’m sure it’s available on Solaris/SPARC; I’m not completely sure about Solaris/x86. OpenGL is the big (only?) user of DPA, and the state of OpenGL on Solaris/x86 is
somewhat fuzzy. Fuzzy to me, anyway. When DPA is in use the client manipulates pixels in a virtual frame buffer in the Xserver’s memory. The Xserver is then responsible for delivering the updated frame buffer contents via SRP to the Sun Ray.
XVideo is an X extension whose purpose is to allow an X client to deliver video frames to the X server’s frame buffer more quickly and efficiently than would be possible by using the core X operations. The XVideo extension also defines some (optional) operations intended to support video capture. The XVideo extension is a standard extension. It is not supported by Sun Ray. If Sun Ray did support XVideo then the most straightforward implementation would be to have the Xserver accept video frame data from the client and then encode that data into SRP for delivery to the Sun Ray.
libutmedia is a private Sun Ray library that understands a subset of SRP and can deliver video frame images directly to a Sun Ray from within an application, completely bypassing the Xserver. Because libutmedia bypasses the Xserver it should offer the most efficient way to drive video to a Sun Ray. The libutmedia API is not documented for use outside Sun. It has been used by some groups inside Sun to enhance the performance of some products when those products run in a Sun Ray session. I believe that
ShowMeTV and Sun Forum are able to use libutmedia, and I think that the Java Media Framework is also able to use it.
libvis used to be a library for Solaris/SPARC that contained routines that were optimised through the use of the VIS extensions to the SPARC instruction set. It has been replaced by mediaLib, a Sun library that contains highly-tuned architecture-dependent implementations of algorithms that are commonly used in the manipulation of multimedia streams. mediaLib for SPARC will make use of VIS if it executes on a SPARC that implements VIS; similarly, mediaLib on x86/x64 can use MMX or SSE when executing on a processor that has those extensions. mediaLib is used by Gnome to accelerate some GTK widget operations, and I think it’s used by Xsun too.
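The architecture-dependent dispatch that a library like mediaLib performs can be sketched in plain C. This is not mediaLib's actual API — the function names and the scalar fallback below are mine — it just shows the pattern: probe the CPU once, then route a per-pixel operation through a function pointer to the best implementation available.

```c
#include <stddef.h>

/* Signature for a byte-wise saturating add over n pixels: the kind of
 * operation VIS or MMX can do on 8 bytes per instruction. */
typedef void (*saturating_add_fn)(unsigned char *dst, const unsigned char *a,
                                  const unsigned char *b, size_t n);

/* Portable fallback: clamp each byte-wise sum to 255. */
static void sat_add_scalar(unsigned char *dst, const unsigned char *a,
                           const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned s = (unsigned)a[i] + b[i];
        dst[i] = (unsigned char)(s > 255 ? 255 : s);
    }
}

/* In a real library this would probe the CPU (cpuid on x86, the
 * hardware-capabilities interface on SPARC) and return a VIS-, MMX-,
 * or SSE-backed routine when one is available. Here we assume no SIMD
 * was detected and always hand back the portable path. */
saturating_add_fn select_sat_add(void)
{
    return sat_add_scalar;
}
```

The selection happens once at startup; after that every call pays only an indirect-call overhead, which is why the same binary can run well on both VIS and non-VIS SPARCs.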
ShowMeTV uses DPA and libvis, many video players (like MPlayer) use XVideo, and SRSS uses libutmedia.
I know the Sun Ray Protocol attempts to update as little of the screen as possible and tries to compress as much image data as possible… can anybody explain to me any other video-specific operations that SRP is responsible for?
The protocol doesn’t try to do anything; it just is what it is. The application that is generating SRP might try to be clever about updating only modified regions of the screen in an efficient way; the Xserver tries pretty hard to do that.
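As an illustration of what "updating only modified regions" means, here is a toy C sketch of dirty-rectangle tracking. A real Xserver keeps much finer-grained damage regions than a single bounding box; this just shows the idea that drawing operations accumulate a damaged area, and only that area needs to be re-encoded and sent.

```c
/* Accumulated bounding box of modified pixels. When it's time to send
 * an update, only the (x0,y0)-(x1,y1) rectangle needs to go out on the
 * wire, after which the box is reset. */
typedef struct {
    int x0, y0;   /* top-left corner of the damaged area */
    int x1, y1;   /* bottom-right corner (exclusive) */
    int dirty;    /* 0 until the first damage is recorded */
} dirty_box;

/* Record that a w-by-h rectangle at (x, y) was modified, growing the
 * bounding box to cover it. */
void mark_dirty(dirty_box *d, int x, int y, int w, int h)
{
    if (!d->dirty) {
        d->x0 = x; d->y0 = y;
        d->x1 = x + w; d->y1 = y + h;
        d->dirty = 1;
        return;
    }
    if (x < d->x0)     d->x0 = x;
    if (y < d->y0)     d->y0 = y;
    if (x + w > d->x1) d->x1 = x + w;
    if (y + h > d->y1) d->y1 = y + h;
}
```

For a blinking cursor or a small video window this keeps the transmitted area a tiny fraction of the full screen, which is where most of SRP's apparent efficiency actually comes from.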
The only video-specific coverage in SRP is that it is able to carry data encoded as mildly compressed YUV. This generally provides a small data-size improvement over carrying that data as RGB.
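The exact YUV variant SRP carries isn't spelled out above, but the size advantage over RGB is easy to quantify for the common chroma subsamplings. Assuming 24-bit RGB compared against packed 4:2:2 and planar 4:2:0 YUV (my choice of formats, purely for illustration):

```c
#include <stddef.h>

/* 24-bit RGB: 3 bytes for every pixel. */
size_t rgb24_bytes(size_t w, size_t h) { return w * h * 3; }

/* YUV 4:2:2: full-resolution luma, chroma halved horizontally,
 * averaging 2 bytes per pixel. */
size_t yuv422_bytes(size_t w, size_t h) { return w * h * 2; }

/* YUV 4:2:0: chroma halved both horizontally and vertically,
 * averaging 1.5 bytes per pixel. */
size_t yuv420_bytes(size_t w, size_t h) { return w * h * 3 / 2; }
```

For a 640x480 frame that works out to 900 KB as RGB, 600 KB as 4:2:2, and 450 KB as 4:2:0 — a one-third to one-half saving before any further compression, which matches the "small data-size improvement" described above.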