Video windows that simulate 3-D
I'm waiting for the price of a good >24" monitor with a narrow bezel to drop low enough that I can buy 4 or 5 of them and make a panoramic display wall without the gaps being too large.
However, another idea that I think would be very cool would be to exploit the gaps between the monitors to create a simulated set of windows in a wall looking out onto a scene. It's been done before in lab experiments with single monitors, but as far as I know not as a large panoramic or long-term installation. The value of the multi-display approach is that the gap between displays becomes a feature rather than a problem, and viewers can see the whole picture by moving. (Video walls must edit the seams out of the picture, spoiling the wonderful seamlessness of a good panorama.) Here we restore the seamlessness in the temporal dimension.
To do this, it would be necessary to track the exact location of the eyes of a single viewer; the illusion only works for one person at a time. From the position of the eyes (in all 3 dimensions) and the positions of the monitors, the graphics card would then project the panoramic image onto the monitors as though they were windows in a wall. As the viewer's head moved, the image would move the other way. As the viewer approached the wall (up to a point) the images would expand and shift, and likewise shrink when moving away. Fortunately this sort of real-time 3-D projection is just what modern GPUs are good at.
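Here is a minimal sketch (not from the original post) of the projection each monitor would need, using the standard "generalized perspective projection" off-axis frustum construction; the corner coordinates, near/far planes and units are all assumptions for illustration:

```python
# Build an asymmetric view frustum so one monitor behaves like a window into
# the scene, given its corners in room coordinates and the tracked eye position.
import numpy as np

def window_projection(lower_left, lower_right, upper_left, eye, near=0.1, far=100.0):
    """Return a 4x4 OpenGL-style projection*view matrix for one monitor.

    lower_left, lower_right, upper_left: 3-vectors, monitor corners in room space.
    eye: 3-vector, tracked eye position in the same space.
    """
    pa, pb, pc = (np.asarray(p, dtype=float) for p in (lower_left, lower_right, upper_left))
    pe = np.asarray(eye, dtype=float)

    # Orthonormal basis of the screen plane.
    vr = pb - pa; vr /= np.linalg.norm(vr)           # right
    vu = pc - pa; vu /= np.linalg.norm(vu)           # up
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # normal, toward the viewer

    # Vectors from the eye to the screen corners, and eye-to-screen distance.
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)

    # Frustum extents on the near plane (asymmetric when the eye is off-centre).
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    P = np.array([
        [2*near/(r-l), 0,            (r+l)/(r-l),            0],
        [0,            2*near/(t-b), (t+b)/(t-b),            0],
        [0,            0,           -(far+near)/(far-near), -2*far*near/(far-near)],
        [0,            0,           -1,                      0]])

    # Rotate the world into the screen's frame, then move the eye to the origin.
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
    T = np.eye(4)
    T[:3, 3] = -pe
    return P @ M @ T

# Example: a 0.55 m x 0.31 m monitor centred on the wall, with the viewer's eye
# tracked 0.2 m right, 0.1 m up and 1.5 m back from it (hypothetical numbers).
# mvp = window_projection((-0.275, -0.155, 0), (0.275, -0.155, 0), (-0.275, 0.155, 0),
#                         eye=(0.2, 0.1, 1.5))
```

Each monitor gets its own matrix from its own corners, but all of them share the same tracked eye position, which is what makes the separate panels cohere into one scene behind the wall.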
The monitors could be close together, like window panes with bars between them, or further apart like independent windows. Now the size of the bezels is not important.
For extra credit, the panoramic scene could be shot in layers, so it has a foreground and a background, and these could be moved independently. To do this it would be necessary to shoot the panorama from spots along a line and both isolate the foreground from the background (using parallax, focus and hand editing) and merge the backgrounds from the shots, so that the background pixels hidden behind foreground objects in one shot are filled in from the left and right shots. This is known as "background subtraction" and there has been quite a lot of work in this area. I'm less certain over what range this would look good. You might also want to shoot from above and below to capture as much of the hidden background as possible for that layer. Of course having several layers is even better.
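As a rough illustration of how two layers might be recombined, here is a sketch under assumed numbers: the file names, layer depths and pixel scale below are made up, and a real installation would render each layer with the full window projection above rather than a flat sideways shift.

```python
# Two pre-separated panorama layers -- a background and a foreground with an
# alpha channel -- are drawn with different sideways shifts as the head moves.
# A layer d metres behind the wall shifts on screen by roughly eye_x * d/(D+d),
# where D is the viewer's distance from the wall, so the distant background
# "follows" the head more than the near foreground, as through a real window.
import numpy as np
from PIL import Image

def layer_shift_px(eye_x_m, layer_depth_m, viewer_dist_m, px_per_metre):
    """On-screen shift, in pixels, of a layer that sits behind the wall."""
    return int(eye_x_m * layer_depth_m / (viewer_dist_m + layer_depth_m) * px_per_metre)

def composite_frame(background, foreground, eye_x_m, viewer_dist_m=2.0,
                    bg_depth_m=30.0, fg_depth_m=3.0, px_per_metre=1000.0,
                    out_w=1920, out_h=1080):
    """Crop each layer around its shifted centre and alpha-composite them."""
    def crop(img, shift_px):
        # Cropping further to the left shows content that has "moved right" on screen.
        x0 = img.width // 2 - out_w // 2 - shift_px
        y0 = img.height // 2 - out_h // 2
        return np.asarray(img.crop((x0, y0, x0 + out_w, y0 + out_h)), dtype=float)

    bg = crop(background, layer_shift_px(eye_x_m, bg_depth_m, viewer_dist_m, px_per_metre))  # RGB
    fg = crop(foreground, layer_shift_px(eye_x_m, fg_depth_m, viewer_dist_m, px_per_metre))  # RGBA
    alpha = fg[..., 3:4] / 255.0
    out = fg[..., :3] * alpha + bg[..., :3] * (1.0 - alpha)  # foreground over background
    return Image.fromarray(out.astype(np.uint8))

# Hypothetical layer files, viewer's head 0.3 m right of the centre spot:
# frame = composite_frame(Image.open("background.png"), Image.open("foreground.png"), eye_x_m=0.3)
```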
The next challenge is to spot the viewer's head very quickly. One easy approach, which has been done at least with single screens, is to give the viewer a special hat or glasses with easily identified coloured dots or LEDs. It would be much nicer if we could do face detection quickly enough to track an unadorned person. Chips that do this for video cameras are becoming common; the key issue is whether the detection can be done with very low latency -- I think 10 milliseconds (100 Hz) would be a likely goal. The use of cameras lets the system work for anybody who walks into the room, and quickly switch among people to give them turns. A camera on the wall plus one above would work easily; two cameras on the left and right sides of the wall should also be able to get position fairly quickly.
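A minimal sketch of the detection step, assuming OpenCV's stock Haar-cascade face detector (the free library a commenter below also mentions); the camera index, frame size and thresholds are assumptions, and in practice you would likely track between full detections rather than re-detect every frame to stay within a ~10 ms budget.

```python
# Detect the viewer's face in each camera frame and report its image-plane
# position, apparent width and the per-frame detection latency.
import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # hypothetical wall-mounted camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5,
                                     minSize=(60, 60))
    latency_ms = (time.perf_counter() - t0) * 1000.0
    if len(faces) > 0:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face = nearest viewer
        cx, cy = x + w / 2, y + h / 2                       # image-plane head position
        print(f"head at ({cx:.0f},{cy:.0f}) px, width {w} px, {latency_ms:.1f} ms")
```

With two such cameras (wall plus overhead, or left and right), the two image-plane positions can be triangulated into a full 3-D eye position.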
Even better would be doing it with one camera. With one camera, one can still get a distance to the subject (with less resolution) by examining changes in the size of features on the head or body. However, that only provides relative distance: for example, you can tell if the viewer got 20% closer, but not where they started from. You would have to guess that starting distance, learn it from other cues (such as an object of known size, like the hat), or even have the viewer begin the process by standing on a specific spot. This could also be a good way to initiate the process, especially for a group of people coming to view the illusion: stand still on the spot for 5 seconds until it beeps or flashes, and then start moving around.
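A small worked example of the relative-distance idea, under a pinhole-camera assumption; the focal length, face width and pixel counts below are made up for illustration.

```python
# Under a pinhole model the apparent width of a face scales inversely with its
# distance from the camera, so a change in width gives a change in distance.

def relative_distance(d0_m, w0_px, w1_px):
    """Distance now, given a reference distance d0 and the two apparent face widths."""
    return d0_m * (w0_px / w1_px)

def absolute_distance(focal_px, real_width_m, width_px):
    """Pinhole estimate: distance = focal length (in pixels) * real width / apparent width."""
    return focal_px * real_width_m / width_px

# The viewer calibrates by standing on the marked spot 2.0 m away, where the
# face measures 180 px wide; later it measures 225 px, so they are ~1.6 m away.
print(relative_distance(2.0, 180, 225))    # -> 1.6
# Or, with a calibrated camera (~2400 px focal length, ~0.15 m face width):
print(absolute_distance(2400, 0.15, 225))  # -> 1.6, the same answer without the marked spot
```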
If the face can be detected quickly and with high accuracy, a decent illusion should be possible. I was inspired by this clever videoconferencing system, which simulates 3-D in the same way by watching the face of the viewer.
You need high-resolution photos for this, as only a subset of the image appears in the "windows" at any given time, particularly when the viewer stands back from the wall. It would be possible to let the viewer get reasonably close to a "window" if you have a gigapan-style panorama, though a physical barrier (even a symbolic one) to stop people from getting so close that the illusion breaks would be a good idea.
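A back-of-the-envelope check of how big the panorama needs to be (every number below is an assumption, just to show the shape of the arithmetic):

```python
# A monitor always shows its full pixel width, but from a few metres back that
# width covers only a narrow angular slice of the scene, so the panorama needs
# very high angular resolution.
import math

monitor_px = 1920   # horizontal pixels on one panel
monitor_w = 0.55    # metres, visible width of a ~24" panel
viewer_d = 4.0      # metres, viewer to wall
scene_d = 30.0      # metres, wall to photographed scene (panorama shot from the wall)

# Width of scene visible through one window (similar triangles), and the
# angle that width subtends from the spot where the panorama was shot.
visible_w = monitor_w * (viewer_d + scene_d) / viewer_d
slice_deg = math.degrees(visible_w / scene_d)

px_per_deg = monitor_px / slice_deg
print(f"{slice_deg:.1f} degree slice -> {px_per_deg:.0f} px/deg "
      f"-> {px_per_deg * 360:.0f} px for a full 360 panorama")
# With these numbers: roughly a 9 degree slice, ~215 px/deg, and ~77,000 px
# across for a full circle -- gigapan territory, as the post suggests.
```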
Comments
Adam Drew
Fri, 2009-12-18 16:34
3d head tracking
You mean... like this?
brad
Fri, 2009-12-18 18:26
Yup, that's it
That's the sort of thing that has been done in the lab. I'm interested in seeing it taken further: working without special glasses or a hat, and with multiple monitors on a wall.
Remember, the issue is that video walls have seams and are thus imperfect, and today's mass-market LCD panels have bezels too large to make a nice small-seam video wall. So the idea is to create the illusion of seamlessness by making the bezels and gaps part of the whole thing.
drewp
Mon, 2009-12-28 12:24
siggraph demo
Someone had a system like this at SIGGRAPH about 1-4 years ago. Their focus was actually on their ability to use a few graphics cards at once to render a scene in real time, but to give them something to render, they were face-tracking people who came up to the screen and adjusting the camera based on the position of the viewer's face. IIRC, they were moving the camera backwards (like you might do for a game, as opposed to a window?), and the guy there didn't seem to care. I think they were using OpenCV for the face tracking, which is a common free choice for that task. Here's my own OpenCV tester code, in fact: http://bigasterisk.com/darcs/?r=headtrack;a=headblob;f=/cap
Shava Nerad
Mon, 2010-01-11 00:53
If you wait two years...
My company will be producing a similar display tech with AMOLED panels. We're just waiting for the price of large-format, inkjet-produced AMOLED stock to come down. It's really not very far out there.
Also, for interface ideas, a halfway point is M$'s Project Natal.