Towards frameless (clockless) video
Recently I wrote about the desire to provide power in every sort of cable, in particular the video cable. And while we'll be using the existing video cables (VGA and DVI/HDMI) for some time to come, I think it's time to investigate new thinking in sending video to monitors. The video cable has generally been the highest bandwidth cable going out of a computer, though the fairly rare 10 gigabit ethernet is around the speed of HDMI 1.3 and DisplayPort, and 100gb ethernet will be faster yet.
Even though digital video methods are so fast, the standard DVI cable is not able to drive my 4 megapixel monitor -- this requires dual-link DVI, which, as the name suggests, runs 2 sets of DVI wires (in the same cable and plug) to double the bandwidth. The expensive 8 megapixel monitors need two dual-link DVI cables.
Now we want enough bandwidth to completely redraw a screen at a suitable refresh rate (100Hz) if we can get it. But we should consider how we can get that, and what to do if we can't -- because our equipment is older than our display, because it is too small to have the right connector, or because it must send the data over a medium that can't deliver the bandwidth (like wireless, or long wires).
Today all video is delivered in "frames," which are an update of the complete display. This was the only way to do things with analog rasterized (scan line) displays. Earlier displays were actually vector based: the computer sent the display a series of lines (start at x,y, then draw to w,z) to make the images. There was still a non-fixed refresh interval, as the phosphors would persist for only a limited time and any line had to be redrawn again quickly. However, the background of the display was never drawn -- you only sent what was white.
Today, the world has changed. Displays are made of pixels but they all have, or can cheaply add, a "frame buffer" -- memory containing the current image. Refresh of pixels that are not changing need not be done on any particular schedule. We usually want to be able to change only some pixels very quickly. Even in video we only rarely change all the pixels at once.
This approach of sending changes rather than complete frames was common in the early remote terminals that worked over ethernet, such as X windows. In X, the program sends more complex commands to draw things on the screen, rather than sending a complete frame 60 times every second. X can be very efficient when sending things like text, as the letters themselves are sent, not the bitmaps. There are also a number of protocols used to map screens over much slower networks, like the internet. The VNC protocol is popular -- it works with frames, but calculates the difference between them and only transmits what changed, at a fairly basic level.
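To make the VNC idea concrete, here's a minimal sketch (Python, all names invented for illustration) of that kind of difference computation: compare the previous and current frames, find the changed region, and transmit only that rectangle.

```python
# Minimal sketch of VNC-style delta updates: find the bounding box of
# the changed pixels and transmit only that rectangle. Real protocols
# tile the screen and encode more cleverly; this shows the core idea.

def dirty_rect(prev, curr, width, height):
    """Return (x, y, w, h) bounding the pixels that differ, or None."""
    min_x, min_y, max_x, max_y = width, height, -1, -1
    for y in range(height):
        for x in range(width):
            if prev[y * width + x] != curr[y * width + x]:
                min_x, min_y = min(min_x, x), min(min_y, y)
                max_x, max_y = max(max_x, x), max(max_y, y)
    if max_x < 0:
        return None  # nothing changed; send nothing this cycle
    return (min_x, min_y, max_x - min_x + 1, max_y - min_y + 1)

prev = [0, 0, 0, 0]; curr = [0, 7, 0, 0]    # a 2x2 "screen", one pixel changed
print(dirty_rect(prev, curr, 2, 2))          # (1, 0, 1, 1): send 1 pixel, not 4
```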
We're also changing how we generate video. Only video captured by cameras has an inherent frame rate any more. Computer screens, and even computer-generated animation, are expressed as a series of changes and movements of screen objects, though sometimes they are rendered to frames for historical reasons. Finally, many applications, notably games, do not even work in terms of pixels any more, but express what they want to display as polygons. Even videos are now all delivered compressed, by compressors that break up the scene into rarely updated backgrounds, new draws of changing objects, and moves and transformations of existing ones.
So I propose two distinct things:
- A unification of our high speed data protocols so that all of them (external disks, SAN, high speed networking, peripheral connectors such as USB and video) benefit together from improvements, and one family of cables can support all of them.
- A new protocol for displays which, in addition to being able to send frames, sends video as changes to segments of the screen, with timestamps as to when they should happen.
The case for approach #2 is obvious. You can have an old-school timed-frame protocol within a more complex protocol able to work with subsets of the screen. The main issue is how much complexity you want to demand in the protocol. You can't demand too much or you make the equipment too expensive to make and too hard to modify. Indeed, you want to be able to support many different levels, but not insist on support for all of them. Levels can include:
- Full frames (ie. what we do today)
- Rastered updates to specific rectangles, with ability to scale them.
- More arbitrary shapes (alpha) and ability to move the shapes with any timebase
- VNC level abilities
- X windows level abilities
- Graphics card (polygon) level abilities
- In the unlikely extreme, the abilities of high-level languages like Display PostScript.
I'm not sure the last layers are good to standardize in hardware, but let's consider the first few levels. When I bought my 4 megapixel (2560x1600) monitor, it was annoying to learn that none of my computers could actually display on it, even at a low frame rate. Technically single DVI has the bandwidth to do it at 30Hz, but this is not a desirable option if it's all you ever get to do. While I did indeed want to get a card able to make full use of it, the reality is that 99.9% of what I do on it could be done over the single-DVI bandwidth with just the ability to update and move rectangles, or to do so at a slower speed. The whole screen is almost never completely replaced in a situation where waiting 1/30th of a second would be a problem. But the ability to paint a small window at 120Hz on displays that can do this might well be very handy. Adoption of a system like this would allow even a device with a very slow output (such as USB 2 at ~400 megabits) to still use all the resolution for typical activities of a computer desktop. While you might think that video would be impossible over such a slow bus, if the rectangles could scale, the 400 megabit bus could still do things like playing DVDs. While I do not suggest every monitor be able to decode our latest video compression schemes in hardware, the ability to use the post-compression primitives (drawing subsections and doing basic transforms on them) might well be enough to feed quite a bit of video through a small cable.
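The arithmetic behind those claims is easy to check. A rough sketch (my numbers: 24 bits per pixel, blanking overhead ignored, single-link DVI taken as 165MHz x 24 bits):

```python
# Rough bandwidth arithmetic for the claims above. 24 bits/pixel,
# blanking intervals ignored, so real links need somewhat more.

def gbps(pixels, hz, bits_per_pixel=24):
    return pixels * hz * bits_per_pixel / 1e9

DVI_SINGLE = 165e6 * 24 / 1e9            # single-link DVI: ~3.96 Gbit/s of pixels

full = 2560 * 1600                       # the 4 megapixel monitor
print(gbps(full, 60), "vs", DVI_SINGLE)  # ~5.9 vs ~3.96: won't fit at 60Hz
print(gbps(full, 30))                    # ~2.9: fits, but 30Hz for everything
print(gbps(1920 * 1080, 24))             # ~1.2 for raw 1080p24: far beyond USB 2
# A rectangle-update protocol needs the full rate only when the entire
# screen changes at once, which on a desktop is rare.
```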
One could imagine even the use of wireless video protocols for devices like cell phones. One could connect a cell phone to an HDTV screen (as found in a hotel) and have it reasonably use the entire screen, even though it would not have the gigabit bandwidths needed to display 1080p framed video.
Sending changes to a screen with timestamps of when they should happen also allows the potential for super-smooth movement on screens that have very low latency display elements. For example, commands to the display might involve describing a foreground object, and updating just that object hundreds of times a second. Very fast displays would show those updates and present completely smooth motion. Slower displays would discard the intermediate steps (or just ask that they not be sent). Animations could also be sent as instructions to move (and perhaps rotate) a rectangle, doing it as smoothly as possible from A to B. This would allow the display to decide what rate this should be done at. (Though I think the display and video generator should work together on this in most cases.)
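Here's a sketch of how a display might consume such timestamped updates (Python, names invented): a fast panel shows every step, while a slow one keeps only the newest update per object that is due by its next refresh, silently discarding the intermediates.

```python
# Sketch: a display consuming timestamped object updates. Per object,
# keep only the newest update due before the next refresh; slower
# panels thereby drop intermediate steps automatically.

def due_updates(queue, now, refresh_interval):
    """Pick, per object, the newest update that is due by the next refresh."""
    deadline = now + refresh_interval
    latest = {}
    for when, obj_id, payload in queue:
        if when <= deadline and (obj_id not in latest or when > latest[obj_id][0]):
            latest[obj_id] = (when, payload)
    return latest

updates = [(0.000, "cursor", "x=10"), (0.004, "cursor", "x=12"),
           (0.008, "cursor", "x=14")]
print(due_updates(updates, now=0.0, refresh_interval=1/60))   # 60Hz: only x=14
print(due_updates(updates, now=0.0, refresh_interval=1/240))  # 240Hz: x=12 now, x=14 next tick
```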
Note that this approach also delivers something I asked for in 2004 -- that it be easy to make any LCD act as a wireless digital picture frame.
It should be noted that HDMI supports a small amount of power (5 volts at 50mA), and in newer forms both it and DisplayPort have stopped acting like digitized versions of analog signals and act more like highly specialized digital buses. Too bad they didn't go all the way.
Protocol
As noted, it is key that the basic levels be simple, to promote universal adoption. As such, the elements in such a protocol would start simple. All commands could specify a time they are to be executed, if it is not immediate.
- Paint line or rectangle with specified values, or gradient fill.
- Move object, and move entire screen
- Adjust brightness of rectangle (fade)
- Load pre-buffered rectangle. (Fonts, standard shapes, quick transitions)
- Display pre-buffered rectangle
However, lessons learned from other protocols might expand this list slightly.
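To show how simple the wire format could stay at these basic levels, here is a sketch of one possible encoding (Python; the opcodes, field layout, and sizes are all invented for illustration, not any existing standard):

```python
# Sketch of a possible wire encoding for the commands above. Every
# command carries an execute-at timestamp (0 = immediate). Opcodes and
# field sizes are invented; a real standard would pin these down.
import struct

OP_FILL_RECT   = 0x01  # paint line/rectangle with specified values or gradient
OP_MOVE_OBJECT = 0x02  # move object, or the entire screen
OP_FADE_RECT   = 0x03  # adjust brightness of a rectangle (fade)
OP_LOAD_BUF    = 0x04  # load pre-buffered rectangle (fonts, standard shapes)
OP_SHOW_BUF    = 0x05  # display pre-buffered rectangle

HEADER = struct.Struct("<BQ")    # opcode, 64-bit execute-at time in microseconds
RECT   = struct.Struct("<HHHH")  # x, y, width, height

def fill_rect(x, y, w, h, rgb, at_us=0):
    return (HEADER.pack(OP_FILL_RECT, at_us)
            + RECT.pack(x, y, w, h)
            + struct.pack("<I", rgb))

# A solid 640x480 window painted 10ms from now: ~21 bytes on the wire,
# versus ~900KB to retransmit it as raw 24-bit pixels.
packet = fill_rect(100, 100, 640, 480, 0x0000FF, at_us=10_000)
print(len(packet), "bytes")
```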
One connector?
This, in theory, allows the creation of a single connector (or compatible family of connectors) for lots of data and lots of power. It can't be just one connector, though, because some devices need very small connectors which can't handle the number of wires others need, or deliver the amount of power some devices need. Most devices would probably get by with a single data wire, and ideally technology would keep pushing just how much data can go down that wire, but any design should allow for simply increasing the number of wires when more bandwidth than a single wire can carry is needed. (Presumably a year later, the same device would start being able to use a single wire as the bandwidth increases.) We may, of course, not be able to figure out how to do connectors for tomorrow's high bandwidth single wires, so you also want a way to design an upwards compatible connector with blank spaces -- or expansion ability -- for the pins of the future, which might well be optical.
There is also a security implication to all of this. While a single wire that brings you power, a link to a video monitor, LAN and local peripherals would be fabulous, caution is required. You don't want to be able to plug into a video projector in a new conference room and have it pretend to be a keyboard that takes over your computer. As this is a problem with USB in general, it is worth solving regardless. One approach would be to have every device use a unique ID (possibly a certified ID) so that you can declare trust for all your devices at home, and perhaps everything plugged into your home hubs, but be notified when a never-before-seen device that needs trust (like a keyboard or drive) is connected.
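A sketch of the trust policy that implies (Python, all names invented): keep a registry keyed by device ID, wave through benign device classes, and require explicit one-time approval for classes that can inject input or carry data.

```python
# Sketch of the device-trust idea: devices present a (possibly
# certified) unique ID; classes that can inject input or carry data
# need explicit approval the first time. All names are invented.

TRUST_NEEDED = {"keyboard", "mouse", "storage", "network"}
trusted_ids = set()   # persisted set of device IDs the user has approved

def on_device_connect(device_id, device_class, ask_user):
    if device_class not in TRUST_NEEDED:
        return True                        # e.g. a plain display: harmless
    if device_id in trusted_ids:
        return True                        # seen and approved before
    if ask_user(device_id, device_class):  # notify the user, ask exactly once
        trusted_ids.add(device_id)
        return True
    return False   # the projector pretending to be a keyboard gets refused

# A hub or OS would call this on every hotplug event:
ok = on_device_connect("cert:4f2a9b01", "keyboard",
                       ask_user=lambda i, c: False)  # unknown keyboard: rejected
print(ok)
```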
To some extent having different connectors helps this problem a little bit, in that if you plug an ethernet cable into the dedicated ethernet jack, it is clear what you are doing, and that you probably want to trust the LAN you are connecting to. The implicit LAN coming down a universal cable needs a more explicit approval.
Final rough description
Here's a more refined rough set of features of the universal connector (a sketch of the bring-up sequence follows the list):
- Shielded twisted pair with ability to have varying lengths of shield to add more pins or different types of pins.
- Briefly asserts a low voltage on pin 1, highly current limited, to power a negotiator circuit in unpowered devices
- Negotiator circuits work out actual power transfer, at what voltages and currents and on what pins, and initial data negotiation about what pins, what protocols and what data rates.
- If no response is given to negotiation (ie. no negotiator circuit) then measure resistance on various pins and provide specified power based on that, but abort if current goes too high initially.
- Full power is applied, unpowered devices boot up and perform more advanced negotiation of what data goes on what pins.
- When full data handshake is obtained, negotiate further functions (hubs, video, network, storage, peripherals etc.)
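That sequence maps naturally onto a small state machine. Here's a sketch under those assumptions (Python; the Port class is a fake stand-in for the electrical layer, and every name is invented):

```python
# Sketch of the connector bring-up sequence above. The Port class
# simulates the electrical layer; a real one would do measurements
# and signaling. All names are invented for illustration.

class Port:
    """Fake port simulating a device that answers the probe."""
    def probe(self):                  return {"max_watts": 20, "rates": [10e9]}
    def measure_resistance(self):     return 550   # ohms: legacy power hint
    def apply_power(self, spec):      print("power:", spec)
    def negotiate_link(self, reply):  print("link:", max(reply["rates"]), "bit/s")
    def enumerate_functions(self):    return ["hub", "video", "network"]

def bring_up(port):
    reply = port.probe()                    # brief current-limited voltage on pin 1
    if reply is None:                       # no negotiator circuit present
        ohms = port.measure_resistance()    # infer safe power from pin resistance
        port.apply_power({"volts": 5, "ohms": ohms, "abort_on_overcurrent": True})
    else:
        port.apply_power({"watts": reply["max_watts"]})  # negotiated; device boots
        port.negotiate_link(reply)          # what data on what pins, at what rates
    return port.enumerate_functions()       # hubs, video, network, storage...

print(bring_up(Port()))
```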
Comments
Frank Ch. Eigler
Tue, 2009-11-10 16:06
Brad, are you aware of DisplayLink?
http://www.displaylink.com/technology_overview.html
brad
Tue, 2009-11-10 18:16
Various protocols
Have not tried that one, but there are, as I said, many protocols that have existed in the past to squeeze video over a slower channel like USB. I've had one (of a different make) and have seen projectors that use USB and wifi to receive video (this is nice for shared projectors in presentation rooms).
However, the goal is not to do this, but rather to still try for a lot of bandwidth but make the best of the bandwidth you have if you don't have enough -- rather than to design for specific low bandwidth applications.
While it would be nice to do this (and of course things like Displaylink, X windows, VNC and other protocols exist to do this) I feel they are hard to standardize in hardware. The more complex the protocol the less likely we would get a good hardware standard. Though of course if we start demanding the monitors be smarter and smarter they can have firmware of their own to allow support of new protocols and extensions.
My goal is a little different, to find a way to do full resolution video over a channel that is "almost fast enough" -- ie. from an older generation of hardware.
USB 3.0 is actually fast enough to be "almost fast enough" for example.
corysama
Tue, 2009-11-10 21:57
Your proposed protocol is pretty similar to a simple GPU command buffer protocol. Reading it gives me the feeling that what you are going to end up with is duplicating or moving the GPU into the display device -- effectively stretching the PCI-Express bus over a DVI cable.
Whether or not that's a good idea... I have no comment. I just wanted to mention which wheel your invention looks like from a distance.
brad
Wed, 2009-11-11 12:35
GPUs
GPUs work with polygons, have alpha channels and much more. They can do complex translations and 3-D mappings. Though basic GPU function in the display is probably something that happens over time, the function I describe is way below what GPUs do today. It's more like the bitblt terminal from Bell Labs 30 years ago.
tuacker
Wed, 2009-11-11 02:22
Intel Light Peak
Intel is currently working on Light Peak, a new high-speed optical cable technology which shows some promise of replacing all connectors with one. It also allows running multiple protocols over the same wire; you can read more here: http://techresearch.intel.com/articles/None/1813.htm
brad
Wed, 2009-11-11 12:32
Optical and power
Yes, I do expect optical to play a role. The main issues are the fragility of the cable, and its inability to carry power. We can get 10gb on copper but that may be close to the limit -- though we keep breaking our limits. A mixture of copper and (optional) fiber might be a good approach: copper for power and some gigabits, optical for the rest of the gigabits.
As for protocols, the universal cable should of course carry everything. We already have a working solution for that called IP.
CyberFonic
Thu, 2009-11-12 01:03
Frameless Video
There is another way, and it is here! Right now. You can buy a 27" Apple iMac for hardly much more than what it would cost to buy the monitor. For simple applications you'd just run X-windows, VNC, whatever and your connectivity is a mere gigE LAN. All standard and available.
For real power, you'd actually run the "View" portion of a true MVC designed application (one where each of M, V, C run as one or more separate processes).
So basically, you simply have the LCD and GPU tightly coupled, and the applications would run headless -- or should that be Hydra-like, with as many heads as you like. Plan9 and Inferno (from Bell Labs / Lucent) implemented this type of architecture ages ago, and IBM has a considerable amount of compute resources running in this manner. So why not your lowly desktop environment? Silicon is getting cheaper every day.
brad
Thu, 2009-11-12 11:28
That's not it
First of all, the bandwidth to that device would not be nearly enough for the things I discuss. And as far as I know it costs a good deal more than the $400 or so such monitors cost.
Realize that one of the goals would be to drive a 4 megapixel monitor with single-link DVI, which only does just over 2 megapixels at 60fps. With a simple protocol like the one described above, you would be able to drive a 2 megapixel rectangle of the monitor at 60fps if you did not want to paint the other windows often. You would also be able to display a 4 megapixel movie from any of today's modern compression formats (which do not update more than a fraction of the screen per frame in almost all frames) with occasional drops down to 30fps for complete scene changes. (Rough numbers below.)
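Rough arithmetic for that claim (my numbers: 24 bits per pixel, single-link DVI taken as 165MHz x 24 = ~3.96 Gbit/s):

```python
# Rough numbers behind the reply above. Single-link DVI carries about
# 165MHz x 24 bits of pixel data; a 4 megapixel frame is 2560x1600x24.
DVI_SINGLE = 165e6 * 24                 # ~3.96e9 bits/second
frame_bits = 2560 * 1600 * 24           # ~9.8e7 bits per full frame
print(DVI_SINGLE / frame_bits)          # ~40 full redraws/second at best
# If a typical compressed frame touches only half the pixels, 60fps fits:
print(frame_bits * 0.5 * 60 / 1e9, "Gbit/s needed vs", DVI_SINGLE / 1e9)
```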
There already are protocols for much lower bandwidth (sub-gigabit) video which require a lot of smarts in the monitor, such as VNC and X. This is a protocol for medium (and lower) bandwidth that requires minimal smarts in the monitor, just the ability to specify where to draw in the frame buffer, and when. The monitor would be (as it already is) responsible for displaying the contents of its frame buffer on the actual screen.