Photography

Near-perfect virtual reality of recent times and tourism

Recently I tried the Facebook/Oculus Rift Crescent Bay prototype. It has higher resolution (my guess is 1280 x 1600 per eye or similar) and runs at 90 frames/second. It also has better head tracking, so you can walk around a small space with some realism — but only a very small space. Still, it was much more impressive than the DK2 and a sign of where things are going. I could still see a faint screen-door effect; they were annoyed that I could see it.

We still have a lot of resolution gain left to go. The human eye resolves about a minute of arc, which means about 5,400 pixels for a 90 degree field of view. Since we have some ability for sub-pixel resolution, one might argue that 10,000 pixels of width are needed to reproduce the world. But that’s not that many Moore’s law generations from where we are today. The graphics rendering problem is harder, though with high frame rates, if you can track the eyes, you need only render full resolution where the fovea of the eye is. This actually gives a boost to onto-the-eye systems like a contact lens projector or the rumoured Magic Leap technology, which may project with lasers onto the retina, as they actually need to render far fewer pixels. (Get really clever, and realize the optic nerve only has about 600,000 neurons, and in theory you can get full real-world resolution with half a megapixel if you do it right.)
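For those who like to see the arithmetic, here is a tiny sketch of that estimate. It is my own toy calculation; the one-arcminute acuity figure and the sub-pixel oversampling factor are the assumptions, nothing more:

```python
def pixels_for_fov(fov_degrees, arcmin_per_pixel=1.0, subpixel_factor=1.0):
    """Rough pixel count needed across a field of view, assuming the eye
    resolves about one arcminute, with an optional oversampling factor to
    allow for sub-pixel (vernier) acuity."""
    arcminutes = fov_degrees * 60
    return int(arcminutes / arcmin_per_pixel * subpixel_factor)

# ~5,400 pixels for a 90 degree field at one arcminute per pixel,
# ~10,800 if you double it to allow for sub-pixel resolution.
print(pixels_for_fov(90))
print(pixels_for_fov(90, subpixel_factor=2))
```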

Walking around Rome, I realized something else — we are now digitizing our world, at least the popular outdoor spaces, at a very high resolution. That’s because millions of tourists are taking billions of pictures every day of everything, from every angle, in every kind of light. Software of the future will be able to produce very accurate 3D representations of all these spaces, both with real data and reasonably interpolated data. They will use our photographs today and the better photographs tomorrow to produce a highly accurate version of our world today.

This means that anybody in the future will be able to take a highly realistic walk around the early 21st century version of almost everything. Even many interiors will be captured, though in smaller numbers of photos. Only things that are normally covered or hidden will not be recorded, but in most cases it should be possible to figure out what was there. This will be trivial for fairly permanent things, like the ruins in Rome, but even possible for things that changed from day to day in our highly photographed world. A bit of AI will be able to turn the people in photos into 3-D animated models that can move within these VRs.

It will also be possible to extend this VR back into the past. The 20th century, before the advent of the digital camera, was not nearly so photographed, but it was still photographed quite a lot. For persistent things, the combination of modern (and future) recordings with older, less frequent and lower resolution recordings should still allow the creation of a fairly accurate model. The further back in time we go, the more interpolation and eventually artistic interpretation will be needed, but very realistic-seeming experiences will be possible. Even some of the 19th century should be doable, at least in some areas.

This is a good thing, because as I have written, the world’s tourist destinations are unable to bear the brunt of the rising middle class. As the Chinese, Indians and other nations get richer and begin to tour the world, their greater numbers will overcrowd those destinations even more than the waves of Americans, Germans and Japanese that already mobbed them in the 20th century. Indeed, with walking chairs (successors of the BigDog Robot) every spot will be accessible to everybody of any level of physical ability.

VR offers one answer to this. In VR, people will visit such places and get the views and the sounds — and perhaps even the smells. They will get a view captured at the perfect time in the perfect light, perhaps while the location is closed for digitization and thus empty of crowds. It might be, in many ways, a superior experience. That experience might satisfy people, though some might find themselves more driven to visit the real thing.

In the future, everybody will have had a chance to visit all the world’s great sites in VR while they are young. In fact, doing so might take no more than a few weekends, changing the nature of tourism greatly. This doesn’t alter the demand for the other half of tourism — true experience of the culture, eating the food, interacting with the locals and making friends. But so much commercial tourism — people being herded in tour groups to major sites and museums, then eating at tour-group restaurants — can be replaced.

I expect VR to reproduce the sights and sounds and a few other things. Special rooms could also reproduce winds and even some movement (for example, the feeling of being on a ship.) Right now, walking is harder to reproduce. With the Oculus Rift Crescent Bay you could only walk 2-3 feet, but one could imagine warehouse-sized spaces or even outdoor stadia where large amounts of real walking might be possible if the simulated surface is also flat. Simulating walking over rough surfaces and stairs offers real challenges. I have tried systems where you walk inside a sphere but they don’t yet quite do it for me. I’ve also seen a system where you are held in place and move your feet in slippery socks on a smooth surface. Fun, but not quite there. Your body knows when it is staying in one place, at least for now. Touching other things in a realistic way would require a very involved robotic system — not impossible, but quite difficult.

Also interesting will be immersive augmented reality. There are a few ways I know of that people are developing:

  • With a VR headset, bring in the real world with cameras, modify it and present that view to the screens, so they are seeing the world through the headset. This provides a complete image, but the real world is reduced significantly in quality, at least for now, and latency must be extremely low.
  • With a semi-transparent screen, show the augmentation with the real world behind it. This is very difficult outdoors, and you can’t really stop bright items from the background mixing with your augmentation. Focus depth is an issue here (and is with most other systems.) In some plans, the screens have LCDs that can go opaque to block the background where an augmentation is being placed.
  • CastAR has you place retroreflective cloth in your environment, and it can present objects on that cloth. They do not blend with the existing reality, but replace it where the cloth is.
  • Projecting into the eye with lasers from glasses, or from a contact lens, can be brighter than the outside world, but again you can’t really paint over the bright objects in your environment.

Getting back to Rome, my goal would be to create an augmented reality that let you walk around ancient Rome, seeing the buildings as they were. The people around you would be converted to Romans, and the modern roads and buildings would be turned into areas you can’t enter (since we don’t want to see the cars, and turning them into fast chariots would look silly.) There have been attempts to create a virtual walk through ancient Rome, but being able to do it in the real location would be very cool.

Shuttle fly-by most photographed event in history?

A follow-up thought about yesterday’s shuttle fly-by and panorama. I was musing, might this be perhaps the most photographed single thing in human history to date?

Here’s the reasoning. Today there are more cameras and more photographers than ever, and people use them all the time in a way that continues to grow. To be a candidate for most-photographed event, an event would need to be recent, and it would need to take place in front of a huge number of people, ideally with advance notice. It seemed like just about everybody in Sacramento, the Bay Area and LA was out for this and holding up a phone or camera.

Of course, many objects are more photographed, like the Golden Gate Bridge the shuttle flew over, but I’m talking here of the event rather than the object. Admittedly, this is an event which moved over the course of thousands of miles.

Other candidates:

  • The other shuttle fly-overs done over New York and Washington — also with large populations
  • Total eclipses of the sun which go over highly populated areas. The 2009 eclipse went over Shanghai, Varanasi and many other hugely populated areas but was clouded out for many. Nobody has yet made a photo of an eclipse that looks like an eclipse, of course — I’ve seen them all, including many of the clever HDRs and overlays — but that doesn’t stop people from trying.
  • The 1999 eclipse did go over a number of large European cities, but this was before the everybody-is-photographing era
  • Most lunar eclipses are seen by as much as half the world. They are hard to photograph with consumer camera gear, and only a fraction of people go out to watch and photograph them, but they could easily be a winner.

Prior to the digital era, a possible winner might be the moon landing. Back in 1969, every family had a camera, though usage wasn’t nearly what it is today. However, I remember the TV giving lessons on how to photograph a TV screen. Everybody was shooting their TV for the launches and the walk on the moon. Terrible pictures (much like early camera phone pictures) but people took them to be a part of the event. I recall taking one myself though I have no idea where it is.

Of course there may be objective ways to measure this today, by tracking the number of photos on photo sharing and social sites, and extrapolating the winner. If the shuttle is the winner for now, it won’t last long. Photography is going to grow even more.

I should also note that remote photography, like what we did for Apollo, is clearly much larger, in the form of recorded video. For those giant events viewed by billions — World Cup, Olympics, Oscars etc. — huge numbers of people are recording them, at least temporarily.

Panorama of Shuttle fly-by at Moffett Field

Today marked the last trip through the air for the space shuttle, as the Endeavour was carried to LA to be installed in a museum. The trip included fly-overs of the Golden Gate bridge and many other landmarks in SF and LA, and also a low pass over NASA Ames at Moffett Field, where I work at Singularity University. A special ceremony was held on the tarmac, and I went to get a panoramic photo. We all figured the plane would come along the airstrip, but they surprised us, having it fly a bit to the west so it suddenly appeared from behind the skeleton of Hangar One, the old dirigible hangar. That turned out to be bad for my photography, as I didn’t get much advance notice, and the shot of the crowd I had done a few minutes before had everybody expectantly looking along the runway, and not towards the west where the plane and shuttle appear in my photo.

However, it did make for a very dramatic arrival. So while different parts of this shot are at slightly different times, it does capture the scene of Moffett field and the crowd awaiting the shuttle, and its arrival. I do however have a nice hi-res photo for you to enjoy as well as the panoramic shot of the Endeavour shuttle fly-by.

New panoramas of Israel, and of course a proposal for peace

I’m back from our fun “Singularity Week” in Tel Aviv, where we ran a two-day and a one-day Singularity University program. We judged a contest awarding two SU scholarships to Israelis, and I spoke to groups like Garage Geeks, Israeli Defcon and GizaVC’s monthly gathering, and even went into the West Bank to address the Palestinian IT Society and announce a scholarship contest for SU.

Of course I did more photography, though the weather did not cooperate. However, you will see six new panoramas on my Israel Panorama Page and my Additional Israeli panoramas. My favourite is the shot of the Western Wall during a brief period of sun in a rainstorm.

In Ramallah, the telecom minister for the Palestinian Authority asked us, jokingly, “how can this technology end the occupation?” But I wanted to come up with a serious answer. Everybody who goes to the Middle East tries to come up with a solution or at least some sort of understanding. Israelis get a bit sick of it, annoyed that outsiders just don’t understand the incredible depth and nuance of the problem. Outsiders imagine the Israelis and Palestinians are so deep in their conflict that they are like fish who no longer see the water.

In spite of those warnings, here’s my humble proposal for how to use new media technology to help.

Take classrooms of Israelis and classrooms of Palestinians and give them a mandatory school assignment. Their assignment is to be paired with an online buddy from the “other side.” Students would be paired based on a matching algorithm, considering things like their backgrounds, language skills, and the languages and subjects they want to learn. The other student, with whom they would interact over online media and video-conferencing (like Skype or Google Hangouts), would become a study partner, and the students would collaborate on projects suitable to them. They might also help one another learn a language, like English, Arabic or Hebrew. Students would be encouraged to add their counterpart to their social networking circles.

Both students would also be challenged to write an essay attempting to see the world from the point of view of the other. They would not be asked to agree with it, but simply to be able to write from that point of view. And their counterpart would have to agree at the end that it mostly reflects their point of view. Students would be graded on this.

It would be important not to have this be a “forced friendship.” The students would be told it was not demanded that they forget their preconceptions, nor that they agree with everything their counterpart says. In fact, they would be encouraged to avoid conflict, to not immediately contradict statements they think are false. The goal would not be to convince their counterpart of things but to understand, and to help them understand. And in particular, projects should be set up where the students naturally work together, viewing the teachers as the common enemy.

At the end of the year, a meeting would be arranged. For example, West Bank students would be thrilled at a chance to visit the beach or some amusement park. A meeting on the West Bank border on neutral ground might make sense too, though parents would be paranoid about safety and many would veto trips by their children into the West Bank.

Would this bring peace? Hardly on its own. But it would improve things if every student at least knew somebody from outside their world, and had tried to understand their viewpoint even without necessarily agreeing with it. And some of the relationships would last, and the social networks would grow. Soon each student would have at least one person in their network from outside their formerly insular world. This would start with some schools, but ideally it would be something for every student to do. And it could even be expanded to include online pen-pals from other countries. With some students it would fail, particularly older ones whose views are already set. Alas, for younger ones, finding a common language might be difficult. Few Israelis learn Arabic, more Palestinians learn Hebrew and all eventually want to learn English. Somebody has to provide computers and networking to the poorer students, but it seems the cost of this is small compared to the benefit.

A foveal digital camera sensor

Earlier I wrote about desires for the next generation of DSLR camera and a number of readers wrote back that they wanted to be able to swap the sensor in their camera, most notably so they could put in a B&W sensor with no colour filter mask on it. This would give you better B&W photos and triple your light gathering ability, though for now only astronomers are keen enough on this to justify filterless cameras.

I’m not sure how easy it would be to make a sensor that could be swapped, due to a number of problems — dust, connectivity and more. In fact I wonder if an idea I wrote about earlier — lenses with integrated sensors — might have a better chance of being the future.

Here’s another step in that direction — a “foveal” digital camera that has tiny sensors in the middle of the frame and larger ones out at the edges. Such sensors have been built for a variety of purposes in the past, but might they have application for serious photography?

For example, the 5D Mark II I use has 22 million 6.4 micron photosites. Being that large, they are low noise compared to the smaller photosites found in P&S cameras. But the full frame requires very large, very heavy, very expensive lenses. Getting top quality over the large image circle is difficult and you pay a lot for it.

Imagine that this camera has another array, perhaps of around 16 million pixels of 1.6 micron size in the center. This allows it to shoot a 16MP picture in the small crop zone or a 22MP picture on the full frame. (It also allows it to shoot a huge 252 megapixel image that is sharp in the center but interpolated around the edges.) The central region would have transistors that could combine all the wells of a particular colour in the 4x4 array that maps to one large pixel. This is common in the video modes on DSLR cameras, and helps produce pixels that are much lower noise than the tiny pixels are on their own, but not as good as the 16x larger big pixels, though the green pixels, which make up half the area, would probably do decently well.
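To make the well-combining idea concrete, here is a toy sketch of how the small wells under one large Bayer pixel might be binned. It assumes a standard RGGB layout and a 4x4 grouping; it is purely illustrative and not based on any real sensor design:

```python
import numpy as np

# Standard RGGB Bayer colour-filter layout, repeated across the sensor.
def bayer_color(row, col):
    return [['R', 'G'], ['G', 'B']][row % 2][col % 2]

def bin_to_large_pixel(raw, block_row, block_col, n=4):
    """Simulate combining the wells of one colour inside the n x n group of
    small pixels that sits under a single large pixel.  The large pixel's
    colour comes from the Bayer position of the block itself, so only
    matching wells contribute (4 of 16 for red or blue, 8 of 16 for green,
    which is why the green sites would fare best)."""
    target = bayer_color(block_row, block_col)
    total = 0.0
    for r in range(n):
        for c in range(n):
            if bayer_color(block_row * n + r, block_col * n + c) == target:
                total += raw[block_row * n + r, block_col * n + c]
    return total

# Toy example: an 8x8 patch of small-pixel readings binned into a 2x2 patch
# of "large" pixels, as the hypothetical foveal centre might do.
raw = np.random.poisson(100, size=(8, 8)).astype(float)
large = np.array([[bin_to_large_pixel(raw, br, bc) for bc in range(2)]
                  for br in range(2)])
print(large)
```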

As a result, this camera would not be as good in low light, and the central region would be no better in low light than today’s quality P&S cameras. But that’s actually getting pretty good, and the results at higher light levels are excellent.

The win is that you would be able to use a 100mm/f2 lens with the field of view of a 400mm lens for a 16MP picture. It would not be quite as good as a real 400mm f/2.8L Canon lens of course. But it could compare decently — and that 400mm lens is immense, heavy and costs $10,000 — far more than the camera body. On the other hand a decent 100mm f/2.8 lens aimed at the smaller image circle would cost a few hundred dollars at most, and do a very good job. A professional wildlife or sports photographer might still seek the $10K lens but a lot of photographers would be much happier to carry the small one, and not just for the saved cost. You would not get the very shallow depth of field of the 400mm f/2.8 — it would be about double with a small sensor 100mm f/2 — but many would consider that a plus in this situation, not a minus.

You could also use 3.2 or 2.1 micron pixels for better noise performance and less of a crop (or “focal length multiplier,” as it is sometimes incorrectly called.)

One other benefit is that, if your lens can deliver it, and particularly when you have decent lighting, you would get superb resolution in the center of your full frame photos, as the smaller pixels are combined. You would get better colour accuracy, without as many Bayer interpolation artifacts, as you would truly sense each colour in every pixel, and much better contrast in general. You would be making use of the fact that your lens is sharper in the center. JPEG outputs would probably never do the 250 megapixel interpolated image, but the raw output could record all the pixels if it is not necessary to combine the wells to improve signal-to-noise.

Botswana / Falls panorama gallery up

I have put up a new gallery of panoramic photos from my trip earlier this year to Botswana (with short stays in South Africa and Zimbabwe.) There are some interesting animal and scenic shots, and also some technically difficult shots such as Victoria Falls from a helicopter. (I also have some new shots of Niagara Falls from a fixed-wing plane, which is even harder.)

In the case of the helicopter, which is still moving as it was just a regular tour helicopter, the challenge is to shoot very fast and still not make mistakes in coverage. I took several panos but only a few turned out. Victoria Falls can really only be viewed from the air — on the ground the viewing spots during high water season are in so much mist that it’s actually raining hard all around you, and in any event you can’t see the whole falls. One lesson is not to be greedy and shoot a 200mm pano; stick to 50 to 100mm at most.

On this trip I took along a 100-400mm lens, and it was my first time routinely shooting with such a long lens. I knew intellectually about the much smaller depth of field at 400mm, but in spite of this I still screwed up a number of panoramas, since I normally set focus at one fixed distance for the whole pano. Stopping down at 400mm only helps a little bit. Wildlife will not sit still for you, creating extra challenges. I already showed you this elephant shot, but I am also quite fond of this sunset on the Okavango Delta. While this shot may not appear to have wildlife, the sun is beaming through giant spiderwebs, the work of “social spiders” which live in communal nests, all building the same web. I recommend zooming in on the scene in the center. I also have some nice regular photos of this which will be up later.


I am still a bit torn about the gallery of ordinary aspect ratio photos. I could put them up on my photo site easily enough, but I’ve noticed photos get a lot more commentary and possibly viewing when done on Google+/Picasa. This is a sign of a disturbing trend away from the distributed web, where people and companies had their own web sites and got pagerank and subscribers, to the centralized AOL style model of one big site (be it Facebook or Google Plus) which is attractive because of its social synergies.


What do I want in a 5d Mark 3 (next generation digital SLR)

I shoot with the Canon 5D Mark II. While officially not a pro camera, the reality is that a large fraction of professional photographers use this camera rather than the EOS-1D cameras, which are faster but much bulkier and in some ways even inferior to the 5D. But it’s been out a long time now, and everybody is wondering when its successor will come and what features it will have.

Each increment in the DSLR world has been quite dramatic over the last decade. There’s always been a big increase in resolution with the new generation, but now at 22 megapixels there’s less call for that. While there are lenses that deliver more than 22 megapixels sharply, they are usually quite expensive, and while nobody would turn down 50mp for free, there just wouldn’t be nearly as much benefit from it as there was from the last doubling. Here’s a look at features that might come, or at least be wished for.

Better Pixels

More pixels may not be important, but everybody wants better pixels.

  • Low noise / higher ISO: The 5D2 astounded us with ISO 3200 shots that aren’t very noisy. Unlike megapixels, there is almost no limit to how high we would like ISO to go at low noise levels. Let’s hope we see 12,500 or more at low noise, plus even 50,000 noisy. Due to physics, smaller pixels have higher noise, so this is another reason not to increase the megapixel count.
  • 3 colour: The value of full 3-colour samples at every pixel has been overstated in the past. The reason is that Bayer interpolation is actually quite good, and almost every photographer would rather have 18 million Bayer pixels over 6 million full RGB pixels. It’s not even a contest. As we start maxing out our megapixels to match our lenses, this is one way to get more out of a picture. But if it means smaller pixels, it causes noise. The Foveon approach which stacked the 3 pixels would be OK here — finally. But I don’t expect this to be very likely.
  • Higher dynamic range: How about 16 bits per pixel, or even 24? HDR photography is cool but difficult. But nobody doesn’t want more range, if only for the ability to change exposure decisions after the fact and bring out those shadows or highlights. Automatic HDR in the camera would be nice but it’s no substitute for truly high-range pixels.

Video & Audio

Due to the high quality video in the 5D2, many professional videographers now use it. Last week Canon announced new high-end video cameras aimed at that market, so they may not focus on improvements in this area. If they do, people might like to see things like 60 frame/second video, the ability to focus while shooting, higher ISO, and 4K video.

Panoramic article in Photo Technique

A little self-plug. I have an article introducing panoramic photographic technique in the November issue of Photo Technique, with a few panos in it. This is old world journalism, folks — you have to read it on paper, at least for now.

In the meantime, I’m working on upcoming galleries of photos from Botswana, Eastern Europe and Burning Man for you. I have already placed two of my Botswana photos into my gallery of favourite panoramas. This includes a lovely group of elephants in Savuti and a sunset on the Okavango Delta that is one of my new favourites.

We decided to go to Harvey’s Pan in Savuti one afternoon and lucked upon a large breeding herd of elephants just on their way there. I caught them in one of my first long lens panoramas. Long lens panos are fairly difficult due to the limited depth of field, but this one captures great detail on the baby elephants.

Much more to come!

Gallery of regular photos from Burning Man 2010

As I prepare for Burning Man 2011, I realized I had not put my gallery of regular sized photos up on the web.

Much earlier I announced my gallery of giant panoramas of 2010, which features my largest photos in a new pan-and-zoom fullscreen viewer, but I had neglected to put up the regular sized photos.

So enjoy: Gallery of photos of Burning Man 2010

I still need to select and caption 2007 and 2009 some day.

Back from Botswana, I want better audio for my video

This blog has been silent the last month because I’ve been on an amazing trip to Botswana and a few other places. There will be full reports and lots of pictures later, but today’s idea comes from experiments in shooting HD video using my Canon 5D Mark II. As many people know, while the 5D is an SLR designed for stills, it also shoots better HD video than all but the most expensive pro video cameras, so I did a bit of experimenting.

The internal mic in the camera is not very good, and picks up not just wind but every little noise on the camera, including the noises of the image stabilizer found in many longer lenses. I brought a higher quality mic that mounts on the camera, but it wasn’t always mounted because it gets a little in the way of both regular shooting and putting the camera away. When I used it, I got decent audio, but I also got audio of my companion and our guide rustling or shooting stills with their own cameras. To shoot a real video with audio I had to have everybody be silent. This is why much of the sound you hear in nature documentaries is actually added later, and very often just created by Foley artists. I also forgot to turn on my external mic, which requires a small amount of power, a few times. That was just me being stupid — as the small battery lasts for 300 hours I could have just left it on the whole trip. (Another fault I had with the mic, the Sennheiser MKE 400, was that the foam wind sleeve kept coming off, and after a few times I finally lost it.)

Definition of pixels for the world's biggest photos

I shoot lots of large panoramas, and the arrival of various cheaper robotic mounts to shoot them, such as the Gigapan Epic Pro and the Merlin/Skywatcher (which I have), has resulted in a bit of a “mine’s bigger than yours” contest to take the biggest photo. Some would argue that the stitched version of the Sloan Digital Sky Survey, which has been rated at a trillion pixels, is the winner, but most of the competition has been on the ground.

Many of these photos have special web sites to display them, such as Paris 26 gigapixels; the rest are usually found at the Gigapan.org site where you can even view the gigapans sorted by size to see which ones claim to be the largest.

Most of these big ones are stitched with AutopanoPro, which is the software I use, or the Gigapan stitcher. The largest I have done so far is smaller: my 1.4 gigapixel shot of Burning Man 2010, which you will find on my page of my biggest panoramas, most of which are in the 100mp to 500mp range.

The Paris one is pretty good, but some of the other contenders provide a misleading number, because as you zoom in, you find the panorama at its base is quite blurry. Some of these panoramas have even just been expanded with software interpolation, which is a complete cheat, and some have been shot at mixed focal length, where sections of the panorama are sharp but others are not. I myself have done this: for example, in my Gigapixel San Francisco from the end of the Golden Gate, I shot the city close up but shot the sky and some of the water at 1/4 the resolution, because there isn’t really any fine detail in the sky. I think this is partially acceptable, though having real landscape features not at full resolution should otherwise disqualify a panorama. However, the truth is that sections of sky perhaps should not count at all, and anybody can make their panorama larger by just including more sky all the way to the zenith if they choose to.

There is a difficult craft to making such large photos, and there are also aesthetic elements. To really count the pixels for the world’s largest photos, I think we should count “quality” pixels. As such, sky pixels are not generally quality pixels, and distant terrain lost in haze also does not provide quality pixels. The haze is not the technical fault of the photographer, but it is the artistic fault, at least if the goal is to provide a sharp photo to explore. You get rid of haze only through the hard work of being there at the right time, and in some cities you may never get a chance.

Some of the shots are done through less than ideal lenses, and many of them are done using tele-extenders. These extenders do get more detail but the truth is a 2x tele-extender does not provide 4 times as many quality pixels. A common lens today is a 400mm with a 2x extender to get 800mm. Fairly expensive, but a lot cheaper than a quality 800mm lens. I think using that big expensive glass should count for more in the race to the biggest, even though some might view it as unfair. (A lens that big costs a great deal and also weighs a lot, making it harder to get a mount to hold it and to keep it stable.) One can get very long mirror “lens” setups that are inexpensive, but they don’t deliver the quality, and I don’t believe work done with them should score as high as work with higher quality lenses. (It may be the case that images from a long telescope, which tend to be poor, could be scaled down to match the quality of a shorter but more expensive lens, and this is how it should be done.)

Ideally we should seek an objective measure of this. I would propose:

  • There should be a sufficient number of high contrast edges in the image — sharp edges where the intensity goes from bright to dark in the space of just 1 or 2 pixels. If there are none of these, the image must be shrunk until there are.
  • The image can then be divided up into sections and the contrast range in each evaluated. If the segment is very low contrast, such as sky, it is not counted in the pixel count. Possibly each block will be given a score based on how sharp it is, so that background items which are hazy count for more than nothing, but not as much as good sharp sections.
  • I believe that, to win, a pano should not contain gross flaws. Examples of such flaws include stripes of brightness or shadow due to cloud movement, big stitching errors and checkerboard patterns due to bad overlap or stitching software. In general that means manual exposure rather than shots where the stitcher tries to fix mixed exposures, unless it does so undetectably.

Some will argue with the last one in particular, since for some the goal is just to get as many useful pixels as possible for browsing around. Gigapixel panoramas after all are only good for zooming around in with a digital viewer. No monitor can display them and sometimes even printing them 12 feet high won’t show all their detail, and people rarely do that. (Though you can see my above San Francisco picture as the back wall of a bar in SF.) Still, I believe it should be a minimum bar that when you look at the picture at more normal sizes, or print it out a few feet in size, it still looks like an interesting, if extremely sharp, picture.

Ideally an objective formula can be produced for how much you have to shrink what is present to get a baseline. It’s very rare that any such panorama not contain a fair number of segments with high contrast edges and lines in them. For starters, one could just put in the requirement that the picture be shrunk until you have a frame that just about anybody would agree is sharp like an ordinary quality photo when viewed 1:1. Ideally lots of frames like that, all over the photo.
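As one concrete illustration of the block-by-block scoring proposed above (my own rough sketch, not a proposed standard; the block size and thresholds are arbitrary), something along these lines would count only pixels in blocks that contain genuinely sharp, high-contrast edges:

```python
import numpy as np

def quality_pixels(image, block=64, edge_thresh=0.25, min_edge_fraction=0.001):
    """Crude 'quality pixel' count.  image is a 2-D greyscale array scaled
    0..1.  A block only counts if it contains enough sharp, high-contrast
    pixel-to-pixel transitions; low-contrast blocks such as empty sky or
    hazy distance add nothing to the total."""
    h, w = image.shape
    counted = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = image[y:y + block, x:x + block]
            # Neighbour differences approximate edges 1-2 pixels wide.
            gx = np.abs(np.diff(tile, axis=1))
            gy = np.abs(np.diff(tile, axis=0))
            sharp_edges = (gx > edge_thresh).sum() + (gy > edge_thresh).sum()
            if sharp_edges >= min_edge_fraction * tile.size:
                counted += tile.size
    return counted

# If the count is only a small fraction of the total, the panorama would have
# to be shrunk and re-scored before its pixel count is taken at face value.
```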

Under these criteria a number of the large shots on gigapan fall short. (Though not as short as you think. The gigapan.org zoom viewer lets you zoom in well past 1:1, so even sharp images are blurry when zoomed in fully. On my own site I set maximum zoom at 200%.)

These requirements are quite strict. Some of my own photos would have to be shrunk to meet these tests, but I believe the test should be hard.

Shoot Nikon? Please help review my article on choosing lenses for Nikon cameras

For many years I have had a popular article on what lenses to buy for a Canon DSLR. I shoot with Canon, but much of the advice is universal, so I am translating the article into Nikon.

If you shoot Nikon and are familiar with a variety of lenses for them, I would appreciate your comments. At the start of the article I indicate the main questions I would like people’s opinions on, such as moderately priced wide angle lenses, as well as regular zooms.

If you “got a Nikon camera and love to take photographs” please read the article on what lens to buy for your Nikon DSLR and leave comments here or send them by email to btm@templetons.com. I’m also interested in lists of “what’s in your kit” today.

Burning Man 2010 Panoramas with new Flash Viewer

I have put up a page of panoramas from Burning Man 2010. This page includes my largest yet, a 1.2 billion pixel image of the whole of Black Rock City which you will find first on the page. I am particularly proud of it, and I hope you find it as amazing as I do.

There are many others, including a nice one of the Man, with the whole circle of people dancing before the burn, a hi-res of the temple and the temple burn, and more.

However, what’s really new is I have put in a Flash-based panorama zoom viewer. This application lets you see my photos for the first time at their full resolution, even the gigapixel ones. You can pan around, zoom in and see everything. For many of them, I strongly recommend you click the button (or use right-click menu) to enter fullscreen mode, especially if you have a big monitor as I do. There you can pan around with the arrow keys and zoom in and out with your mouse wheel. There are other controls (and when not in fullscreen mode you can also use shift/ctrl or +/- for zooming.) A help page has full details.

Go into the gigapixel shot and zoom around. You’ll be amazed what you find. I have also converted most of my super-size city photos of Black Rock City to the zoom viewer; they can be found at the page of Giant BRC photos as well as many of my favourites from the various years. I’m also working at converting some of my other photos, including the gallery of my largest images which I built recently. It takes time to build and upload these so it will be some while before the big ones are all converted. I may not do the smaller ones.

If you don’t have Flash, it displays the older 1100 pixel high image, and you can still get to that via a link. If you have Flashblock, you will need to enable Flash for my photo site, because it will detect you have no Flash player and display the old one.

Get out the big monitor and it will feel like you’re standing on a tower in Black Rock City with a pair of binoculars. The gigapixel image is also up on gigapan.

New 400 megapixel Moraine Lake plus gallery of Moraine & Louise

Moraine Lake, in Banff National Park, is one of the world’s most beautiful mountain scenes. I’ve returned to Banff, Moraine Lake and Lake Louise many times, and in June, I took my new robotic panorama mount to take some very high resolution photos of it and other scenes.

Rather than filling my Alberta Panorama Gallery with all those pictures, I have created a special page with panoramas of just Moraine Lake and its more famous sister Lake Louise. While I like my new 400 megapixel shot the best, an earlier shot was selected by the respected German Ravensburger puzzle company for a panoramic jigsaw puzzle along with my shot of Burney Falls, CA.

It was a bit of work carrying the motorized mount, laptop computer, tripod and camera gear to the top of the Moraine, but the result is worth it. While my own printer is only 24” high, this picture has enough resolution to be done 6 feet high and still be tack sharp up close, so I’m hoping to find somebody who wants to do a wall with it.

So check out the new gallery of photos of Moraine Lake and Lake Louise. I’ve also added some other shots from that trip to the Alberta gallery and will be adding more shortly. When on the panorama page ask for the “Full Rez Slice” to see how much there is in the underlying image.

Total Eclipse at Hao, French Polynesia

I got a chance to see my 5th eclipse on July 11 — well sort of. In spite of many tools at our disposal, including a small cruise ship devoted to the eclipse, we saw only about 30 seconds of the possible 4 minutes due to clouds. But I still have a gallery of pictures.

Many people chose the Hao atoll for eclipse viewing because of its very long airstrip and 3 minute 30 second duration. Moving north would provide even more, either from water or the Amanu atoll. Weather reports kept changing, suggesting moving north was a bad idea, so our boat remained at the Hao dock until the morning of the eclipse. In spite of storm reports, it dawned effectively cloudless so we decided to stay put and set up all instruments and cameras. Seeing an eclipse on land is best in my view, ideally a place with trees and animals and water. And it’s really the only choice for good photography.

As the eclipse came on, clouds started building, moving quickly in the brisk winds. The clouds may have been the result of eclipse-generated cooling, and they did increase as totality approached. However, having set up, we decided not to move. The clouds were fast and small, and it seemed they would not block the whole eclipse, until a big cloud came just near totality and almost did exactly that. We did get 30 seconds of fairly clear skies, so the crowd of first-timers were just as awed as first-timers always are. Disappointment was only felt by those who had seen a few.

Later I realized a better strategy for an eclipse cruise interested in land observation. When the clouds thickened, we should have left all the gear on land with a crewman from the ship to watch it. The cameras were all computer controlled, and so they would take whatever images they would take — in theory. We, on the other hand, could have run onto the boat and had it sail to find a hole in the clouds. It would have found one — just 2 miles away at the airport, people gathered there saw the complete eclipse. For us it was just the luck of the draw with our observing spot. Mobility can change that luck. Photographs and being on land are great, but seeing the whole eclipse is better.

I said “in theory” above because one person’s computer did not start the photos properly, and he had to start them again by hand. In addition, while we forgot to use it, the photo program has an “emergency mode” for just such a contingency. This mode puts the camera into a quick series of shots at all the major exposures, designed to be used in a brief hole in the clouds. In the panic we never thought to hit the panic button.

I was lucky last year in spite of my rush. I was fooled into thinking I could duplicate that luck. You have to learn to rehearse everything you will do during an eclipse. This also applied to my panoramas. I had brought a robotic panoramic mount controlled by Bluetooth from my laptop. In spite of bringing two laptops, and doing test shots the day before, I could not get the Bluetooth link going as the eclipse approached. I abandoned the robotic mount to do manual panos. I had been considering that anyway, since the robotic mount is slow and takes about 10 seconds between shots, limiting how much pano it could do. By hand I can do a shot every second or so. Of course the robot in theory takes none of my personal eclipse time, while doing the hand pano took away precious views, but taking 3 minutes means too much changing light and moving people.

Even so a few things went wrong. I was doing a bracket, which in retrospect I really did not need. A friend loaned me a higher quality 24mm lens than the one I had, and this lens was also much faster (f/1.8) than mine. While I had meant to go into manual mode, at first I forgot, and in the darkness the camera tried to shoot at f/1.8 — meaning very shallow depth of field and poor focus on all things in the foreground. I then realized this and switched to manual mode for my full pano. This pano was shot while the eclipse was behind clouds. I had taken a shot a bit earlier where it was visible and of course used that for that frame of the pano, but the different exposure causes some lessening of quality. Modern pano software handles different exposure levels, but the best pano comes from having everything fixed.

More lessons learned. After the eclipse we relaxed and cruised the Atoll, swam, dove, surfed, bought black pearls and had a great time.

The next eclipse is really only visible in one reachable place: Cairns, Australia in November of 2012. (There is an annular eclipse in 2012 that passes over Redding and Reno and hits sunset at Lubbock, but an annular is just a big partial eclipse, not a total.)

Cairns and the Great Barrier Reef are astounding. I have a page about my prior trip to Australia and Cairns, and any trip there will be good even with a cloudy eclipse. Alas, a cloudy eclipse is a risk, because the sun will be quite low in the morning sky over the mountains, and worse, Nov 13 is right at the beginning of the wet season. If the wet starts then, it’s probably bad news. For many, the next eclipse will be the one that crosses the USA in 2017. However, there are other opportunities in Africa/2013 (for the most keen), Svalbard/2015 and Indonesia/2016 before then.

I’ll have some panoramas in the future. Meanwhile check out the gallery. Of course I got better eclipse pictures last year.

Travel notes from the Alps, Davos and elsewhere

I recently went to the DLD conference in Germany, briefly to Davos during the World Economic Forum and then drove around the Alps for a few days, including a visit to an old friend in Grenoble. I have some panoramic galleries of the Alps in Winter up already.

Each trip brings some new observations and notes.

  • For the first time, I got a rental car which had a USB port in it, as I’ve been wanting for years. The USB port was really part of the radio, and if you plugged a USB stick in, it would play the music on it, but for me its main use was a handy charging port without the need for a 12v adapter. As I’ve said before, let’s see this all the time, and let’s put them in a few places — up on the dashboard ledge to power a GPS, and for front and rear seats, and even the trunk. And have a plug so the computer can access the devices, or even data about the car.
  • The huge network of tunnels in the alpine countries continues to amaze me, considering the staggering cost. Sadly, some seem to simply bypass towns that are pretty.
  • I’ve had good luck on winter travel, but this trip reminded me why there are no crowds. The weather can curse you, and especially curse your photography, though the snow-covered landscapes are wonderful when you do get sun. Three trips to Lake Constance/Bodensee now, and never any good weather!
  • Davos was a trip. While there was a lot of security, it was far easier than, say, flying in the USA. I was surprised how many people I knew at Davos. I was able to get a hotel in a village about 20 minutes away.

On to Part Two

Best collaborative processing and tagging of a group's photo archive?

I have the photo archives of a theatre company I was involved with for 12 years. It is coming upon its 50th anniversary. I have a high speed automatic scanner, so I am going to generate scans of many of the photos — that part is not too hard. Even easier for modern groups in the digital age, where the photos are already digital and date-tagged.

But now I want members of the group to be able to rotate the photos, tag them with the names of people in them and other tags, group them into folders where needed, and add comments. I can’t do this on my own, it is a collaborative project.

Lots of photo sharing sites let other people add comments. Few sites let you add tags or let trusted other people do things like rotations. Flickr lets others draw annotations and add tags/people which would make it a likely choice, but they can’t rotate.

Facebook has an interesting set of features. It’s easy to tag photos with friends’ names, and they get notified of it and the photos appear on their page, which is both good and bad. (The need for the owner to approve is a burden here.) Tagging non-friends is annoying because when somebody adds a real friend tag you must delete the old one, and the old ones may be spelled differently. However, the real deal-breaker on Facebook is that the resolution is unacceptably small.

The recent killer feature I really want is face recognition, which makes tagging with people’s names vastly easier. Even the fact that it auto-draws boxes around the faces for you to tag is a win, even without the recognition feature. The algorithms are far from perfect but they speed up the task a great deal. As such, right now an obvious choice is Picasa and Picasa Web Albums. However, while PWA lets you allow others to upload photos to your albums and tag their own photos, they can’t tag yours.

There is also face recognition in iPhoto, but I am not a Mac user so I don’t know if that can meet this need.

So right now two choices seem to be Flickr (but I must do all rotates) or a newly created Picasa account to which the password is shared. That’s a bit of a kludge but it seems to be the only way to get shared face recognition tagging.

Facebook can be integrated with a face recognizer called “Polar Rose” which also works with the 23hq photo sharing site. However, Facebook’s resolution is way, way too small and you need to approve tags.

I have not tried all the photo sharing sites so I wonder if people know of one that can do what I want?

Video windows that simulate 3-D

I’m waiting for the price of a good >24” monitor with a narrow bezel to drop low enough that I can buy 4 or 5 of them to make a panoramic display wall without the gaps being too large.

However, another idea that I think would be very cool would be to exploit the gaps between the monitors to create a simulated set of windows in a wall looking out onto a scene. It’s been done before in lab experiments with single monitors, but not as a large panoramic installation or something long term from what I understand. The value in the multi display approach is that now the gap between displays is a feature rather than a problem, and viewers can see the whole picture by moving. (Video walls must edit out the seams from the picture, removing the wonderful seamlessness of a good panorama.) We restore the seamlessness in the temporal dimension.

To do this, it would be necessary to track the exact location of the eyes of the single viewer. This would only work for one person. From the position of the eyes (in all 3 dimensions) and the positions of the monitors, the graphics card would then project the panoramic image on the monitors as though they were windows in a wall. As the viewer’s head moved, the image would move the other way. As the viewer approached the wall (to a point) the images would expand and move, and likewise shrink when moving away. Fortunately this sort of real time 3-D projection is just what modern GPUs are good at.
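Here is a minimal sketch of the projection math, assuming the panorama is treated as a flat image hung a fixed distance behind the wall (my own illustrative geometry; a cylindrical projection would be similar in spirit):

```python
def window_crop(eye, window, image_depth):
    """Given the viewer's eye position, one monitor treated as a window in the
    wall plane z = 0, and a flat panoramic image placed at z = -image_depth,
    return the rectangle of the image visible through that window.
    eye = (ex, ey, ez) with ez > 0 in front of the wall;
    window = (x0, y0, x1, y1) are the monitor's corners in wall coordinates.
    A simple similar-triangles sketch, not production code."""
    ex, ey, ez = eye
    x0, y0, x1, y1 = window
    s = (ez + image_depth) / ez   # how far past the window the view rays travel
    def project(wx, wy):
        return (ex + s * (wx - ex), ey + s * (wy - ey))
    return project(x0, y0), project(x1, y1)

# As the viewer moves right, the visible crop slides left, just as through a
# real window; as they approach the wall, the crop grows to reveal more scene.
print(window_crop(eye=(0.0, 0.0, 1.5), window=(-0.3, -0.2, 0.3, 0.2), image_depth=3.0))
```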

The monitors could be close together, like window panes with bars between them, or further apart like independent windows. Now the size of the bezels is not important.

For extra credit, the panoramic scene could be shot on layers, so it has a foreground and background, and these could be moved independently. To do this it would be necessary to shoot the panorama from spots along a line and both isolate foreground and background (using parallax, focus and hand editing) and also merge the backgrounds from the shots so that the background pixels behind the foreground ones are combined from the left and right shots. This is known as “background subtraction” and there has been quite a lot of work in this area. I’m less certain over what range this would look good. You might want to shoot above and below to get as much of the hidden background as possible in that layer. Of course having several layers is even better.

The next challenge is to very quickly spot the viewer’s head. One easy approach that has been done, at least with single screens, is to give the viewer a special hat or glasses with easily identified coloured dots or LEDs. It would be much nicer if we could do face detection as quickly as possible to identify an unadorned person. Chips that do this for video cameras are becoming common; the key issue is whether the detection can be done with very low latency — I think 10 milliseconds (100 Hz) would be a likely goal. The use of cameras lets the system work for anybody who walks in the room, and quickly switch among people to give them turns. A camera on the wall plus one above would work easily, and two cameras on the left and right sides of the wall should also be able to get position fairly quickly.

Even better would be doing it with one camera. With one camera, one can still get a distance to the subject (with less resolution) by examining changes in the size of features on the head or body. However, that only provides relative distance; for example you can tell if the viewer got 20% closer but not where they started from. You would have to guess that distance, or learn it from other cues (such as a known-sized object like the hat) or even have the viewer begin the process by standing on a specific spot. This could also be a good way to initiate the process, especially for a group of people coming to view the illusion. Stand still in the spot for 5 seconds until it beeps or flashes, and then start moving around.
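The size-based range estimate is just a ratio. A sketch, with made-up numbers, of how the single-camera case might track distance once a reference is known:

```python
def relative_distance(ref_distance, ref_feature_pixels, feature_pixels):
    """Single-camera range estimate from apparent size: if a facial feature
    that spanned ref_feature_pixels at ref_distance now spans feature_pixels,
    the viewer is roughly this far away.  The reference distance has to come
    from somewhere else (a known-size marker on a hat, or starting on a marked
    spot), which is exactly the limitation described above."""
    return ref_distance * ref_feature_pixels / feature_pixels

# A face that grows from 80 to 100 pixels across has moved from 2.0 m
# to roughly 1.6 m from the camera.
print(relative_distance(2.0, 80, 100))
```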

If the face can be detected with high accuracy and quickly, a decent illusion should be possible. I was inspired by this clever simulated 3-D videoconferencing system which simulates 3-D in this way and watches the face of the viewer.

You need high resolution photos for this, as only a subset of the image appears in the “windows” at any given time, particularly when standing away from the windows. It could be possible to let the viewer get reasonably close to the “window” if you have a gigapan style panorama, though a physical barrier (even symbolic) to stop people from getting so close that the illusion breaks would be a good idea.

Negative copier for digital camera

As digital cameras have developed enough resolution to work as scanners, such as in the scanning table proposal I wrote about earlier, some people are also using them to digitize slides. You can purchase what is called a “slide copier” which is just a simple lens and holder which goes in front of the camera to take pictures of slides. These have existed for a long time as they were used to duplicate slides in film days. However, they were not adapted for negatives since you can’t readily duplicate a colour negative this way, because it is a negative and because it has an orange cast from the substrate.

There is at least one slide copier (the Opteka) which offers a negative strip holder; however, that requires a bit of manual manipulation, and the orange cast reduces the colour gamut you will get after processing the image. Digital photography allows imaging of negatives because we can invert and colour adjust the result.
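Since the whole approach hinges on that invert-and-adjust step, here is a rough sketch of how it might look in software. It assumes Python with PIL and numpy; the file name and the idea of sampling the unexposed film border to estimate the orange mask are my own illustrative assumptions, and real raw-level processing with proper tone curves would do much better:

```python
import numpy as np
from PIL import Image

def invert_negative(path, mask_sample_box):
    """Very rough colour-negative inversion: sample the orange mask from an
    unexposed strip of film (mask_sample_box = left, upper, right, lower in
    pixels), divide it out channel by channel, then invert."""
    img = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64) / 255.0
    l, u, r, d = mask_sample_box
    mask = img[u:d, l:r].reshape(-1, 3).mean(axis=0)   # average orange cast
    balanced = np.clip(img / mask, 0.0, 1.0)            # divide out the cast
    positive = 1.0 - balanced                            # invert the negative
    return Image.fromarray((positive * 255).astype(np.uint8))

# Hypothetical usage, sampling the mask from the top-left film border:
# invert_negative("strip.jpg", mask_sample_box=(0, 0, 50, 50)).save("positive.jpg")
```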

To get the product I want, we don’t have too far to go. First of all, you want a negative strip holder which has wheels in the sprocket holes. Once you have placed your negative strip correctly with one wheel, a second wheel should be able to advance exactly one frame, just like the reel in the camera did when it was shooting. You may need to do some fine adjustments, but it is also satisfactory to have the image cover more than 36mm so that you don’t have to be perfectly accurate, and have the software do some cropping.

Secondly, you would like it so that ideally, after you wind one frame, it triggers the shutter using a remote release. (Remote release is sadly a complex thing, with many different ways for different cameras, including wired cable releases where you just close a contact but need a proprietary connector, infrared remote controls and USB shooting. Sadly, this complexity might end up adding more to the cost than everything else, so you may have to suffer and squeeze it yourself.) As a plus, a little air bulb should be available to blow air over negatives before shooting them.

Next, you want an illuminator behind the negative or slide. For slides you want white of course. For negatives however, you would like a colour chosen to undo the effects of the orange cast, so that the gamut of light received matches the range of the camera sensors. This might be done most easily with 3 LEDs matched to camera sensors in the appropriate range of brightness.

You could also simply make a product out of this light, to be used with existing slide duplicators; that’s the simplest way to do this in the small scale.

Why do all this, when a real negative scanner is not that expensive, and higher quality? Digitizing your negatives this way would be fast. Negative scanners all tend to be very slow. This approach would let you slot in a negative strip, and go wind-click-wind-click-wind-click-wind-click in just a couple of seconds, not unlike shooting fast on an old film camera. You would get quite decent scans with today’s high quality DSLRs. My 5D Mark II with 21 megapixels would effectively be getting around 4000 dpi, though with Bayer interpolation. If you wanted a scan for professional work or printing, you could then go back to that negative and do it on a more expensive negative scanner, cleaning it first etc.

Another solution is just to send all the negatives off to one of the services which send them to India for cheap scanning, though these tend to be at a more modest resolution. This approach would let you quickly get a catalog of your negatives.

Light Table

Of course, to get a really quick catalog, another approach would be to create a grid of 3 rows of negative strip holder which could then be placed on a light table — ideally a light table with a blueish light to compensate for the orange cast. Take a photo of the entire grid to get 12 individual photos in one shot. This will result (on the 5D) in about 1.5 megapixel versions of each negative. Not sufficient to work with but fine for screen and web use, and not too far off the basic service you get from the consumer scanning companies.

I have some of my old negatives in plastic sheets that go in binders, so I could do it directly with them, but it’s work to put negatives into these and would be much easier to slide strips into a plastic holder which keeps them flat. Of course, another approach would be to simply lay the strips on the light table and put a sheet of clear plexiglass on top of them, and shoot in a dim room to avoid reflections.

Negative viewer

It would also be useful if digital cameras or video cameras tossed in a “view colour negative” mode which did its best to show an inverted live preview image with the orange cast removed. Then you could browse your negatives by holding them up to your camera (in macro mode) and see them in their true form, if at lower resolution. Of course you can usually figure out what’s in a negative, but sometimes it’s not so easy and requires the loupe; with this mode it might not.

Bluetooth in all video cameras, and smart microphones

I suggested this as a feature for my Canon 5D SLR which shoots video, but let me expand it for all video cameras, indeed all cameras. They should all include Bluetooth, notably the 480 megabit Bluetooth 3.0. It’s cheap and the chips are readily available.

The first application is the use of the high-fidelity audio profile for microphones. Everybody knows the worst thing about today’s consumer video cameras is the sound. Good mics are often large, heavy and expensive, and people don’t want to carry them on the camera. Mics on the subjects of the video are always better. While they are not readily available today, if consumer video cameras supported them, there would be a huge market in remote Bluetooth microphones for use in filming.

For quality, you would want to support an error correcting protocol, which means mixing the sound onto the video a few seconds after the video is laid down. That’s not a big deal with digital recorded to flash.

Such a system easily supports multiple microphones too, mixing them or ideally just recording them as independent tracks to be mixed later. And that includes an off-camera microphone for ambient sounds. You could even put down multiples of those, and then do clever noise reduction tricks after the fact with the tracks.

The cameraman or director could also have a bluetooth headset on (those are cheap but low fidelity) to record a track of notes and commentary, something you can’t do if there is an on-camera mic being used.

I also noted a number of features for still cameras as well as video ones:

  • Notes by the photographer, as above
  • Universal protocol for control of remote flashes
  • Remote control firing of the camera, with all the capabilities USB shooting has
  • At 480 megabits, downloading of photos and even live video streams to a master recorder somewhere

It might also be interesting to experiment with smart microphones. A smart microphone would be placed away from the camera, nearer the action being filmed (sporting events, for example.) The camera user would then zoom in on the microphone, and with the camera’s autofocus determine how far away it is, and with a compass, the direction. Then the microphone, which could either be motorized or an array, could be aimed in the direction of the action. (It would be told the distance and direction of the action from the camera in the same fashion as the mic was located.) When you pointed the camera at something, the off-camera mic would also point at it, except during focus hunts.
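The aiming math is simple triangulation. Here is an illustrative sketch (my own, not any real camera or microphone protocol), assuming the camera has measured range and compass bearing to both the mic and the current subject:

```python
import math

def mic_aim_bearing(mic_range, mic_bearing, target_range, target_bearing):
    """Where the off-camera mic should point.  The camera has measured, by
    zooming and focusing, the range and compass bearing of both the microphone
    and the current subject; convert both to x/y, then take the bearing from
    mic to subject.  Bearings in degrees clockwise from north, ranges in metres."""
    def to_xy(rng, brg):
        rad = math.radians(brg)
        return rng * math.sin(rad), rng * math.cos(rad)   # x = east, y = north
    mx, my = to_xy(mic_range, mic_bearing)
    tx, ty = to_xy(target_range, target_bearing)
    return math.degrees(math.atan2(tx - mx, ty - my)) % 360

# Mic 30 m due north of the camera, subject 80 m away to the north-east:
# the mic should swing to roughly 65 degrees to cover the same action.
print(mic_aim_bearing(30, 0, 80, 45))
```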

There could, as before, be more than one of these, and this could be combined with on-person microphones as above. And none of this has to be particularly expensive. The servo-controlled mic would be a high end item but within consumer range, and fancy versions would be of interest to pros. Remote mics would also be good for getting better stereo on scenes. Key to all this is that adding the Bluetooth to the camera is a minor cost (possibly compensated for by dropping the microphone jack) but it opens up a world of options, even for cheap cameras.

And of course, the most common cameras out there now — cell phones — already have Bluetooth and compasses and these other features. In fact, cell phones could readily be your off-camera microphones. If there were a nice app with a quick pairing protocol, you could ask all the people in the scene to just run it on their cell phone and put the phone in their front pocket. Suddenly you have a mic on each participant (up to the limit of Bluetooth, which is about 8 devices at once.)
