Submitted by brad on Sat, 2012-09-22 17:10.
A follow-up thought about yesterday’s shuttle fly-by and panorama. I was musing: might this be the most photographed single thing in human history to date?
Here’s the reasoning. Today there are more cameras and more photographers than ever, and people use them all the time in a way that continues to grow. To be a candidate for most-photographed event, you would need to be recent, and you would need to take place in front of a ton of people, ideally with notice. It seemed like just about everybody in Sacramento, the Bay Area and LA was out for this, holding up a phone or camera.
Of course, many objects are more photographed, like the Golden Gate Bridge the shuttle flew over, but I’m talking here about the event rather than the object. And this was an event that moved over the course of thousands of miles. Some other candidates:
- The other shuttle fly-overs done over New York and Washington — also with large populations
- Total eclipses of the sun which pass over highly populated areas. The 2009 eclipse went over Shanghai, Varanasi and many other hugely populated areas, but was clouded out for many. Nobody has yet made a photo of an eclipse that looks like an eclipse, of course — I’ve seen them all, including many of the clever HDRs and overlays — but that doesn’t stop people from trying.
- The 1999 eclipse did go over a number of large European cities, but this was before the everybody-is-photographing era
- Most lunar eclipses are visible to as much as half the world, though they are hard to photograph with consumer camera gear, and only a fraction of people go out to watch and photograph them. Even so, they could be a winner.
Prior to the digital era, a possible winner might be the moon landing. Back in 1969, every family had a camera, though usage wasn’t nearly what it is today. However, I remember the TV giving lessons on how to photograph a TV screen. Everybody was shooting their TV for the launches and the walk on the moon. Terrible pictures (much like early camera phone pictures) but people took them to be a part of the event. I recall taking one myself though I have no idea where it is.
Of course there may be objective ways to measure this today, by tracking the number of photos on photo sharing and social sites, and extrapolating the winner. If the shuttle is the winner for now, it won’t last long. Photography is going to grow even more.
I should also note that remote photography, like we did for Apollo, is clearly much larger still, in the form of recorded video. For those giant events viewed by billions — World Cup, Olympics, Oscars etc. — huge numbers of people record them, at least temporarily.
Submitted by brad on Fri, 2012-09-21 18:28.
Today marked the last trip through the air for the space shuttle, as the Endeavour was carried to LA to be installed in a museum. The trip included fly-overs of the Golden Gate Bridge and many other landmarks in SF and LA, and also a low pass over NASA Ames at Moffett Field, where I work at Singularity University. A special ceremony was held on the tarmac, and I went to get a panoramic photo. We all figured the plane would come along the airstrip, but they surprised us, having it fly a bit to the west so it suddenly appeared from behind the skeleton of Hangar One, the old dirigible hangar. That turned out to be bad for my photography: I didn’t get much advance notice, and the shot of the crowd I had done a few minutes before had everybody expectantly looking along the runway, not towards the west where the plane and shuttle appear in my photo.
However, it did make for a very dramatic arrival. So while different parts of this shot were taken at slightly different times, it does capture the scene at Moffett Field: the crowd awaiting the shuttle, and its arrival. I also have a nice hi-res photo for you to enjoy, as well as the panoramic shot of the Endeavour shuttle fly-by.
Submitted by brad on Tue, 2012-03-20 10:05.
I’m back from our fun “Singularity Week” in Tel Aviv, where we did a two-day and a one-day Singularity University program. We judged a contest awarding two SU scholarships to Israelis, and I spoke to groups like Garage Geeks, Israeli Defcon and GizaVC’s monthly gathering, and even went into the West Bank to address the Palestinian IT Society and announce a scholarship contest for SU.
Of course I did more photography, though the weather did not cooperate. Still, you will see six new panoramas on my Israel Panorama Page and my Additional Israeli panoramas page. My favourite is the shot of the Western Wall during a brief period of sun in a rainstorm.
In Ramallah, the telecom minister for the Palestinian Authority asked us, jokingly, “how can this technology end the occupation?” But I wanted to come up with a serious answer. Everybody who goes to the Middle East tries to come up with a solution, or at least some sort of understanding. Israelis get a bit sick of this, annoyed that outsiders just don’t understand the incredible depth and nuance of the problem. Outsiders imagine the Israelis and Palestinians are so deep in their conflict that they are like fish who no longer see the water.
In spite of those warnings, here’s my humble proposal for how to use new media technology to help.
Take classrooms of Israelis and classrooms of Palestinians and give them a mandatory school assignment. Their assignment is to be paired with an online buddy from the “other side.” Students would be paired by a matching algorithm, considering things like their backgrounds, language skills, and the languages and subjects they want to learn. The other student, with whom they would interact over online media and video-conferencing (like Skype or Google Hangouts), would become a study partner, and the students would collaborate on projects suitable to them. They might also help one another learn a language, like English, Arabic or Hebrew. Students would be encouraged to add their counterpart to their social networking circles.
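As a hypothetical sketch of how such a pairing might work: score each cross-side pair on shared languages and interests, then match the best-scoring pairs greedily. The field names, weights and greedy approach here are purely my own illustrative assumptions, not part of the proposal.

```python
# Toy matching sketch: all fields and weights are illustrative guesses.

def pair_score(a, b):
    shared_langs = len(set(a["languages"]) & set(b["languages"]))
    shared_topics = len(set(a["interests"]) & set(b["interests"]))
    # A shared working language matters most; shared subjects help too.
    return 3 * shared_langs + shared_topics

def match_students(side_a, side_b):
    # Score every cross-side pair, then take the best pairs greedily.
    scored = sorted(
        ((pair_score(a, b), i, j)
         for i, a in enumerate(side_a)
         for j, b in enumerate(side_b)),
        reverse=True)
    used_a, used_b, pairs = set(), set(), []
    for score, i, j in scored:
        if i not in used_a and j not in used_b:
            pairs.append((side_a[i]["name"], side_b[j]["name"]))
            used_a.add(i)
            used_b.add(j)
    return pairs
```

A real deployment would want a proper assignment algorithm and teacher review of the pairs, but greedy matching shows the shape of the idea.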
Both students would also be challenged to write an essay attempting to see the world from the point of view of the other. They would not be asked to agree with it, but simply to be able to write from that point of view, and their counterpart would have to agree at the end that it mostly does reflect their point of view. Students would be graded on this.
It would be important not to have this become a “forced friendship.” The students would be told they were not required to forget their preconceptions, nor to agree with everything their counterpart says. In fact, they would be encouraged to avoid conflict and not immediately contradict statements they think are false; the goal is not to convince their counterpart of things, but to understand and help them understand. In particular, projects should be set up so the students naturally work together, viewing the teachers as the common enemy.
At the end of the year, a meeting would be arranged. For example, West Bank students would be thrilled at a chance to visit the beach or an amusement park. A meeting on the West Bank border on neutral ground might make sense too, though parents would be paranoid about safety and many would veto trips by their children into the West Bank.
Would this bring peace? Hardly, on its own. But it would improve things if every student at least knew somebody from outside their world, and had tried to understand that person’s viewpoint without necessarily agreeing with it. Some of the relationships would last, and the social networks would grow. Soon each student would have at least one person in their network from outside their formerly insular world. This would start with some schools, but ideally it would become something every student does, and it could even be expanded to include online pen-pals from other countries. With some students it would fail, particularly older ones whose views are already set. For younger ones, alas, finding a common language might be difficult: few Israelis learn Arabic, more Palestinians learn Hebrew, and all eventually want to learn English. Somebody would have to provide computers and networking to the poorer students, but the cost of this seems small compared to the benefit.
Submitted by brad on Sun, 2011-12-18 14:27.
Earlier I wrote about desires for the next generation of DSLR camera and a number of readers wrote back that they wanted to be able to swap the sensor in their camera, most notably so they could put in a B&W sensor with no colour filter mask on it. This would give you better B&W photos and triple your light gathering ability, though for now only astronomers are keen enough on this to justify filterless cameras.
I’m not sure how easy it would be to make a sensor that could be swapped, due to a number of problems — dust, connectivity and more. In fact I wonder if an idea I wrote about earlier — lenses with integrated sensors — might have a better chance of being the future.
Here’s another step in that direction: a “foveal” digital camera that has tiny pixels in the middle of the frame and larger ones out at the edges. Such sensors have been built for a variety of purposes in the past, but might they have application in serious photography?
For example, the 5D Mark II I use has 22 million 6.4 micron pixels. Being that large, they are low noise compared to the smaller pixels found in P&S cameras. But the full frame requires very large, very heavy, very expensive lenses: getting top quality over the large image circle is difficult, and you pay a lot for it.
Imagine that this camera had another array in the center, perhaps of around 16 million pixels of 1.6 micron size. This would allow it to shoot a 16MP picture in the small crop zone or a 22MP picture on the full frame. (It would also allow a huge 252 megapixel image that is sharp in the center but interpolated around the edges.) The central region would have transistors that could combine all the wells of a particular colour in each 4x4 array that maps to one large pixel. This is common in the video modes of DSLR cameras, and helps produce pixels that are much lower noise than the tiny pixels are on their own, though not as good as the 16x larger big pixels. The green pixels, which make up half the area, would probably do decently well.
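The well-combining step can be illustrated with a toy sketch: sum each 4x4 block of small-pixel well counts into one value standing in for a single 6.4 micron pixel. This is a plain-Python illustration of the binning arithmetic only; a real sensor would combine wells of the same colour within the Bayer pattern, which is glossed over here.

```python
# Toy 4x4 well binning: combine blocks of small (1.6 micron) pixels
# into values covering the area of one 6.4 micron pixel. Illustrative
# only; ignores the Bayer colour pattern.

def bin_4x4(frame):
    """frame: 2D list of well counts, dimensions divisible by 4."""
    h, w = len(frame), len(frame[0])
    binned = []
    for by in range(0, h, 4):
        row = []
        for bx in range(0, w, 4):
            # Summing 16 wells gathers 16x the photons of one small
            # pixel, which is what recovers the low-light performance.
            row.append(sum(frame[y][x]
                           for y in range(by, by + 4)
                           for x in range(bx, bx + 4)))
        binned.append(row)
    return binned
```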
As a result, this camera would not be as good in low light, and the central region would be no better in low light than today’s quality P&S cameras. But that’s actually getting pretty good, and the results at higher light levels are excellent.
The win is that you would be able to use a 100mm f/2 lens with the field of view of a 400mm lens for a 16MP picture. It would not be quite as good as a real 400mm f/2.8L Canon lens, of course. But it could compare decently — and that 400mm lens is immense, heavy and costs $10,000, far more than the camera body. On the other hand, a decent 100mm f/2.8 lens aimed at the smaller image circle would cost a few hundred dollars at most, and do a very good job. A professional wildlife or sports photographer might still seek the $10K lens, but a lot of photographers would be much happier to carry the small one, and not just for the saved cost. You would not get the very shallow depth of field of the 400mm f/2.8 — it would be roughly doubled with the small-sensor 100mm f/2 — but many would consider that a plus in this situation, not a minus.
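The arithmetic behind this is simple enough to write down. The numbers below come from the pixel sizes in the text, and use the common approximation that equivalent focal length and equivalent (depth-of-field) aperture both scale by the crop factor:

```python
# Back-of-envelope numbers for the 100mm-acting-like-400mm claim.
# Values come from the text; the "equivalent aperture" rule is a
# standard approximation, not a measurement.

big_pixel_um = 6.4     # full-frame pixel pitch
small_pixel_um = 1.6   # central "foveal" pixel pitch
crop = big_pixel_um / small_pixel_um        # 4x crop factor

lens_mm = 100.0
equivalent_mm = lens_mm * crop              # field of view of a 400mm lens

f_number = 2.0
equivalent_f = f_number * crop              # DOF roughly like f/8 full frame

print(crop, equivalent_mm, equivalent_f)
```

By this rule the small-sensor 100mm f/2 gives depth of field like a full-frame f/8, versus the big lens’s f/2.8, which is in the same ballpark as the doubling estimated above.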
You could also use 3.2 or 2.1 micron pixels for better noise performance and less of a crop (or focal length multiplier, as it is sometimes incorrectly called).
One other benefit: if your lens can deliver it, and particularly when you have decent lighting, you would get superb resolution in the center of your full frame photos as the smaller pixels are combined. You would get better colour accuracy, without as many Bayer interpolation artifacts, since you would truly sense each colour in every pixel, and much better contrast in general. You would be making use of the fact that your lens is sharper in the center. JPEG output would probably never include the 250 megapixel interpolated image, but the raw output could record all the pixels when it is not necessary to combine the wells to improve signal/noise.
Submitted by brad on Thu, 2011-12-15 18:10.
I have put up a new gallery of panoramic photos from my trip earlier this year to Botswana (with short stays in South Africa and Zimbabwe). There are some interesting animal and scenic shots, and also some technically difficult shots such as Victoria Falls from a helicopter. (I also have some new shots of Niagara Falls from a fixed-wing plane, which is even harder.)
In the case of the helicopter, which is always moving since it was just a regular tour helicopter, the challenge is to shoot very fast and still not make mistakes in coverage. I took several panos but only a few turned out. Victoria Falls can really only be viewed from the air — on the ground, the viewing spots during high water season are in so much mist that it’s actually raining hard all around you, and in any event you can’t see the whole falls. One lesson is not to be greedy and attempt a 200mm pano. Stick to 50 to 100mm at most.
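Part of why long focal lengths are punishing from a moving aircraft is simple frame count: the narrower the field of view, the more frames a sweep takes, and the longer everything has to hold still. A rough sketch, assuming a 36mm-wide full-frame sensor, a single-row 180 degree sweep and 30% overlap (all assumed values, not from the post):

```python
import math

def hfov_deg(focal_mm, sensor_w_mm=36.0):
    # Horizontal angle of view for a rectilinear lens on full frame.
    return math.degrees(2 * math.atan(sensor_w_mm / (2 * focal_mm)))

def frames_for_sweep(focal_mm, sweep_deg=180, overlap=0.3):
    # Each frame advances by its field of view minus the overlap.
    step = hfov_deg(focal_mm) * (1 - overlap)
    return math.ceil(sweep_deg / step)
```

At 50mm the sweep takes only a handful of frames; at 200mm it takes several times as many, each needing accurate coverage while the helicopter drifts.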
On this trip I took along a 100-400mm lens, and it was my first time shooting with such a long lens routinely. I knew intellectually about the much smaller depth of field at 400mm, but in spite of this I still screwed up a number of panoramas, since I normally set focus at one fixed distance for the whole pano. Stopping down 400mm only helps a little bit. Wildlife will not sit still for you, creating extra challenges. I already showed you this elephant shot but I am also quite fond of this sunset on the Okavango delta. While this shot may not appear to have wildlife, the sun is beaming through giant spiderwebs which are the work of “social spiders” which live in nests, all building the same web. I recommend zooming in on the scene in the center. I also have some nice regular photos of this which will be up later.
I am still a bit torn about the gallery of ordinary aspect ratio photos. I could put them up on my photo site easily enough, but I’ve noticed photos get a lot more commentary and possibly viewing when done on Google+/Picasa. This is a sign of a disturbing trend away from the distributed web, where people and companies had their own web sites and got pagerank and subscribers, to the centralized AOL style model of one big site (be it Facebook or Google Plus) which is attractive because of its social synergies.
Submitted by brad on Thu, 2011-11-17 23:03.
I shoot with the Canon 5D Mark II. While officially not a pro camera, the reality is that a large fraction of professional photographers use it rather than the EOS-1D cameras, which are faster but much bulkier and in some ways even inferior to the 5D. But it’s been out a long time now, and everybody is wondering when its successor will come and what features it will have.
Each increment in the DSLR world has been quite dramatic over the last decade. There’s always been a big increase in resolution with each new generation, but now at 22 megapixels there’s less call for that. While there are lenses that deliver more than 22 megapixels sharply, they are usually quite expensive, and while nobody would turn down 50MP for free, there just wouldn’t be nearly as much benefit as from the last doubling. Here’s a look at features that might come, or at least be wished for.
More pixels may not be important, but everybody wants better pixels.
- Low noise / higher ISO: The 5D2 astounded us with ISO 3200 shots that aren’t very noisy. Unlike megapixels, there is almost no limit to how high we would like ISO to go at low noise levels. Let’s hope we see 12,500 or more at low noise, plus even 50,000 noisy. Due to physics, smaller pixels have higher noise, so this is another reason not to increase the megapixel count.
- 3 colour: The value of full 3-colour samples at every pixel has been overstated in the past. The reason is that Bayer interpolation is actually quite good, and almost every photographer would rather have 18 million Bayer pixels than 6 million full RGB pixels. It’s not even a contest. As we start maxing out our megapixels to match our lenses, though, this is one way to get more out of a picture. But if it means smaller pixels, it causes noise. The Foveon approach, which stacks the 3 colour sensors at each pixel, would finally be OK here. But I don’t expect this to be very likely.
- Higher dynamic range: How about 16 bits per pixel, or even 24? HDR photography is cool but difficult. But nobody doesn’t want more range, if only for the ability to change exposure decisions after the fact and bring out those shadows or highlights. Automatic HDR in the camera would be nice, but it’s no substitute for true high-range pixels.
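The low-noise point above rests on photon shot noise: a well that collects N photons has noise of roughly sqrt(N), so signal-to-noise ratio also grows as sqrt(N). A quick sketch with illustrative (not measured) photon counts shows why a pixel with 16x the area gains 4x in SNR:

```python
import math

def snr(photons):
    # Shot-noise-limited signal-to-noise: signal N over noise sqrt(N).
    return photons / math.sqrt(photons)  # equals sqrt(photons)

# Illustrative well counts, not real sensor data: a small pixel
# catching 1,000 photons vs a 16x-area pixel catching 16,000.
small = snr(1_000)
big = snr(16_000)   # 4x the SNR of the small pixel
```

This is why holding the megapixel count steady (keeping pixels large) buys cleaner high-ISO images.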
Video & Audio
Due to the high quality video in the 5D2, many professional videographers now use it. Last week Canon announced new high-end video cameras aimed at that market, so they may not focus on improvements in this area. If they do, people might like to see things like 60 frame-per-second video, the ability to focus while shooting, higher ISO, and 4K video.
Submitted by brad on Mon, 2011-11-07 22:23.
A little self-plug: I have an article introducing panoramic photographic technique in the November issue of Photo Technique, with a few panos in it. This is old world journalism, folks — you have to read it on paper, at least for now.
In the meantime, I’m working on upcoming galleries of photos from Botswana, Eastern Europe and Burning Man for you. I have already placed two of my Botswana photos into my gallery of favourite panoramas. This includes a lovely group of elephants in Savuti and a sunset on the Okavango delta that is one of my new favourites.
We decided to go to Harvey’s pan in Savuti one afternoon and lucked upon a large breeding group of elephants just on their way there. I caught them in one of my first long lens panoramas. Long lens panos are fairly difficult due to the limited depth of field, but this one got great detail on the baby elephant.
Much more to come!
Submitted by brad on Fri, 2011-08-12 16:40.
As I prepare for Burning Man 2011, I realized I had not put my gallery of regular sized photos up on the web.
Much earlier I announced my gallery of giant panoramas of 2010, which features my largest photos in a new pan-and-zoom fullscreen viewer, but I had neglected to put up the regular sized photos.
So enjoy: Gallery of photos of Burning Man 2010
I still need to select and caption the 2007 and 2009 photos some day.
Submitted by brad on Mon, 2011-06-13 10:30.
This blog has been silent the last month because I’ve been on an amazing trip to Botswana and a few other places. There will be full reports and lots of pictures later, but today’s idea comes from experiments in shooting HD video with my Canon 5D Mark II. As many people know, while the 5D is an SLR designed for stills, it also shoots better HD video than all but the most expensive pro video cameras, so I did a bit of experimenting.
The internal mic in the camera is not very good, and picks up not just wind but every little noise on the camera, including the noises of the image stabilizer found in many longer lenses. I brought a higher quality mic that mounts on the camera, but it wasn’t always mounted, because it gets a little in the way of both regular shooting and putting the camera away. When I used it, I got decent audio, but I also got audio of my companion and our guide rustling or shooting stills with their own cameras. To shoot a real video with audio I had to have everybody be silent. This is why much of the sound you hear in nature documentaries is actually added later, and very often just created by Foley artists. I also forgot a few times to turn on my external mic, which requires a small amount of power. That was just me being stupid — as the small battery lasts for 300 hours, I could have just left it on the whole trip. (Another fault I had with the mic, the Sennheiser MKE 400, was that the foam wind sleeve kept coming off, and after a few times I finally lost it.)
Submitted by brad on Tue, 2011-02-08 13:36.
I shoot lots of large panoramas, and the arrival of various cheaper robotic mounts to shoot them, such as the Gigapan Epic Pro and the Merlin/Skywatcher (which I have), has resulted in a bit of a “mine’s bigger than yours” contest to take the biggest photo. Some would argue that the stitched version of the Sloan Digital Sky Survey, which has been rated at a trillion pixels, is the winner, but most of the competition has been on the ground.
Many of these photos have special web sites to display them, such as Paris 26 gigapixels; the rest are usually found at the Gigapan.org site, where you can even view the gigapans sorted by size to see which ones claim to be the largest.
Most of these big ones are stitched with AutopanoPro, which is the software I use, or the Gigapan stitcher. The largest I have done so far is smaller: my 1.4 gigapixel shot of Burning Man 2010, which you will find on my page of my biggest panoramas, most of which are in the 100MP to 500MP range.
The Paris one is pretty good, but some of the other contenders report a misleading number, because as you zoom in, you find the panorama at its base is quite blurry. Some of these panoramas have even been expanded with software interpolation, which is a complete cheat, and some have been shot at mixed focal lengths, where sections of the panorama are sharp but others are not. I have done this myself: in my Gigapixel San Francisco from the end of the Golden Gate, I shot the city close up, but shot the sky and some of the water at 1/4 the resolution, because there isn’t really any fine detail in the sky. I think this is partially acceptable, though having real landscape features not at full resolution should otherwise disqualify a panorama. The truth is that sections of sky perhaps should not count at all; anybody can make their panorama larger simply by including more sky, all the way to the zenith if they choose.
There is a difficult craft to making such large photos, and there are also aesthetic elements. To really count the pixels for the world’s largest photos, I think we should count “quality” pixels. As such, sky pixels are not generally quality pixels, and distant terrain lost in haze also does not provide quality pixels. The haze is not the technical fault of the photographer, but it is the artistic fault, at least if the goal is to provide a sharp photo to explore. You get rid of haze only through the hard work of being there at the right time, and in some cities you may never get a chance.
Some of the shots are done through less than ideal lenses, and many of them are done using tele-extenders. These extenders do capture more detail, but the truth is a 2x tele-extender does not provide 4 times as many quality pixels. A common setup today is a 400mm lens with a 2x extender to get 800mm — fairly expensive, but a lot cheaper than a quality 800mm lens. I think using the big expensive glass should count for more in the race to the biggest, even though some might view it as unfair. (A lens that big costs a great deal and weighs a lot, making it harder to find a mount to hold it and keep it stable.) One can get very long mirror “lens” setups that are inexpensive, but they don’t deliver the quality, and I don’t believe work done with them should score as high as work with higher quality lenses. (It may be that images from a long telescope, which tend to be poor, should be scaled down to match the quality of a shorter but more expensive lens, and that is how they should be scored.)
Ideally we should seek an objective measure of this. I would propose:
- There should be a sufficient number of high contrast edges in the image — sharp edges where the intensity goes from bright to dark in the space of just 1 or 2 pixels. If there are none of these, the image must be shrunk until there are.
- The image can then be divided up into sections and the contrast range in each evaluated. If the segment is very low contrast, such as sky, it is not counted in the pixel count. Possibly each block will be given a score based on how sharp it is, so that background items which are hazy count for more than nothing, but not as much as good sharp sections.
- I believe that to win, a pano should not contain gross flaws. Examples of such flaws include stripes of brightness or shadow due to cloud movement, big stitching errors and checkerboard patterns due to bad overlap or stitching software. In general that means manual exposure, rather than shots where the stitcher tries to fix mixed exposures, unless it does so undetectably.
Some will argue with the last one in particular, since for some the goal is just to get as many useful pixels as possible for browsing around. Gigapixel panoramas, after all, are only good for zooming around in with a digital viewer. No monitor can display them, and sometimes even printing them 12 feet high won’t show all their detail, and people rarely do that. (Though you can see my above San Francisco picture as the back wall of a bar in SF.) Still, I believe it should be a minimum bar that when you look at the picture at more normal sizes, or print it out a few feet in size, it still looks like an interesting, if extremely sharp, picture.
Ideally an objective formula can be produced for how much you have to shrink what is present to get a baseline. It is rare that such a panorama does not contain a fair number of segments with high contrast edges and lines in them. For starters, one could simply require that the picture be shrunk until it has frames that just about anybody would agree are sharp, like an ordinary quality photo viewed 1:1; ideally lots of frames like that, all over the photo.
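A toy version of this scoring could look like the following: split a grayscale image into blocks, score each by its local contrast range, and count only the blocks that pass. The block size and contrast threshold are arbitrary guesses of mine, just to show the shape of the test:

```python
# Toy "quality pixel" counter. Block size and threshold are arbitrary
# illustrative values, not a calibrated standard.

def quality_pixels(image, block=16, min_range=32):
    """image: 2D list of 0-255 intensities; returns counted pixels."""
    h, w = len(image), len(image[0])
    counted = 0
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vals = [image[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            # Flat regions like clear sky have a tiny intensity range
            # and contribute nothing to the total.
            if max(vals) - min(vals) >= min_range:
                counted += block * block
    return counted
```

A serious implementation would measure edge sharpness (how fast intensity transitions occur), not just contrast range, and might weight hazy blocks partially rather than discarding them, as suggested above.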
Under these criteria a number of the large shots on gigapan fall short. (Though not by as much as you might think: the gigapan.org zoom viewer lets you zoom in well past 1:1, so even sharp images look blurry when fully zoomed. On my own site I set maximum zoom at 200%.)
These requirements are quite strict. Some of my own photos would have to be shrunk to meet these tests, but I believe the test should be hard.
Submitted by brad on Sun, 2010-11-14 16:47.
For many years I have had a popular article on what lenses to buy for a Canon DSLR. I shoot with Canon, but much of the advice is universal, so I am translating the article for Nikon.
If you shoot Nikon and are familiar with a variety of lenses for them, I would appreciate your comments. At the start of the article I indicate the main questions I would like people’s opinions on, such as moderately priced wide angle lenses, as well as regular zooms.
If you “got a Nikon camera and love to take photographs” please read the article on what lens to buy for your Nikon DSLR and leave comments here or send them by email to firstname.lastname@example.org. I’m also interested in lists of “what’s in your kit” today.
Submitted by brad on Tue, 2010-10-05 21:53.
I have put up a page of panoramas from Burning Man 2010. This page includes my largest yet, a 1.2 billion pixel image of the whole of Black Rock City, which you will find first on the page. I am particularly proud of it, and I hope you find it as amazing as I do.
There are many others, including a nice one of the Man while they dance before the burn with the whole circle of people, a hi-res of the temple and the temple burn, and more.
However, what’s really new is I have put in a Flash-based panorama zoom viewer. This application lets you see my photos for the first time at their full resolution, even the gigapixel ones. You can pan around, zoom in and see everything. For many of them, I strongly recommend you click the button (or use right-click menu) to enter fullscreen mode, especially if you have a big monitor as I do. There you can pan around with the arrow keys and zoom in and out with your mouse wheel. There are other controls (and when not in fullscreen mode you can also use shift/ctrl or +/- for zooming.) A help page has full details.
Go into the gigapixel shot and zoom around. You’ll be amazed what you find. I have also converted most of my super-size city photos of Black Rock City to the zoom viewer; they can be found at the page of Giant BRC photos, as well as many of my favourites from the various years. I’m also working at converting some of my other photos, including the gallery of my largest images which I built recently. It takes time to build and upload these, so it will be some while before the big ones are all converted. I may not do the smaller ones.
If you don’t have Flash, it displays the older 1100 pixel high image, and you can still get to that via a link. If you use Flashblock, you will need to enable Flash for my photo site, because it will detect that you have no Flash player and display the old one.
Get out the big monitor and it will feel like you’re standing on a tower in Black Rock City with a pair of binoculars. The gigapixel image is also up on gigapan.
Submitted by brad on Wed, 2010-08-11 21:32.
Moraine Lake, in Banff National Park, is one of the world’s most beautiful mountain scenes. I’ve returned to Banff, Moraine Lake and Lake Louise many times, and in June, I took my new robotic panorama mount to take some very high resolution photos of it and other scenes.
Rather than filling my Alberta Panorama Gallery with all those pictures, I have created a special page with panoramas of just Moraine Lake and its more famous sister Lake Louise. While I like my new 400 megapixel shot the best, an earlier shot was selected by the respected German Ravensburger puzzle company for a panoramic jigsaw puzzle along with my shot of Burney Falls, CA.
It was a bit of work carrying the motorized mount, laptop computer, tripod and camera gear to the top of the Moraine, but the result is worth it. While my own printer is only 24” high, this picture has enough resolution to be done 6 feet high and still be tack sharp up close, so I’m hoping to find somebody who wants to do a wall with it.
So check out the new gallery of photos of Moraine Lake and Lake Louise. I’ve also added some other shots from that trip to the Alberta gallery and will be adding more shortly. When on the panorama page ask for the “Full Rez Slice” to see how much there is in the underlying image.
Submitted by brad on Wed, 2010-07-28 22:02.
I got a chance to see my 5th eclipse on July 11 — well sort of. In spite of many tools at our disposal, including a small cruise ship devoted to the eclipse, we saw only about 30 seconds of the possible 4 minutes due to clouds. But I still have a gallery of pictures.
Many people chose the Hao atoll for eclipse viewing because of its very long airstrip and 3 minute 30 second duration. Moving north would provide even more, either from water or the Amanu atoll. Weather reports kept changing, suggesting moving north was a bad idea, so our boat remained at the Hao dock until the morning of the eclipse. In spite of storm reports, it dawned effectively cloudless so we decided to stay put and set up all instruments and cameras. Seeing an eclipse on land is best in my view, ideally a place with trees and animals and water. And it’s really the only choice for good photography.
As the eclipse came, clouds started building, moving quickly in the brisk winds. The clouds may have been the result of eclipse-generated cooling, and they did increase as the eclipse progressed. However, having set up, we decided not to move. The clouds were fast and small, and it seemed clear none would block the whole eclipse — until a big cloud came just near totality and almost did exactly that. We did get 30 seconds of fairly clear skies, so the crowd of first-timers were just as awed as first-timers always are. Disappointment was felt only by those who had seen a few.
Later I realized a better strategy for an eclipse cruise interested in land observation. When the clouds thickened, we should have left all the gear on land with a crewman from the ship to watch it. The cameras were all computer controlled, and so they would take whatever images they would take — in theory. We, on the other hand, could have run onto the boat and had it sail to find a hole in the clouds. It would have found one: just 2 miles away at the airport, people gathered there saw the complete eclipse. For us it was just the luck of the draw on our observing spot. Mobility can change that luck. Photographs and being on land are great, but seeing the whole eclipse is better.
I said “in theory” above because one person’s computer did not start the photos properly, and he had to restart them by hand. In addition, while we forgot to use it, the photo program has an “emergency mode” for just such a contingency. This mode fires a quick series of shots at all the major exposures, designed to be used in a brief hole in the clouds. In the panic we never thought to hit the panic button.
I was lucky last year in spite of my rush, and was fooled into thinking I could duplicate that luck. You have to rehearse everything you will do during an eclipse. This also applied to my panoramas. I had brought a robotic panoramic mount controlled over Bluetooth from my laptop. In spite of bringing two laptops, and doing test shots the day before, I could not get the Bluetooth link going as the eclipse approached. I abandoned the robotic mount and did manual panos. I had been considering that anyway, since the robotic mount is slow, taking about 10 seconds between shots, which limits how much pano it can do. By hand I can do a shot every second or so. Of course the robot in theory takes none of my personal eclipse time, while doing the hand pano took away precious views, but taking 3 minutes means too much changing light and too many moving people.
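The robot-versus-hand tradeoff is easy to put in numbers. A minimal sketch, using the figures from the text (about 10 seconds per frame for the robotic mount, about 1 second by hand) and an assumed 3.5-minute totality:

```python
def frames_in_totality(totality_seconds, seconds_per_frame):
    """How many pano frames fit into totality at a given shooting rate."""
    return int(totality_seconds // seconds_per_frame)

totality = 3.5 * 60  # 210 seconds of totality (assumed)

robot_frames = frames_in_totality(totality, 10)  # robotic mount: 21 frames
hand_frames = frames_in_totality(totality, 1)    # by hand: 210 frames
```

So the robot captures roughly a tenth the frames in the same window, which is why a hand pano can be finished before the light changes too much.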
Even so, a few things went wrong. I was doing a bracket, which in retrospect I really did not need. A friend had loaned me a higher quality 24mm lens than my own, one that was also much faster (f/1.8). While I had meant to switch into manual mode, at first I forgot, and in the darkness the camera tried to shoot at f/1.8, meaning very shallow depth of field and poor focus on everything in the foreground. I then realized this and switched to manual mode for my full pano. This pano was shot while the eclipse was behind clouds. I had taken a shot a bit earlier when it was visible, and of course used that for that frame of the pano, but the different exposure causes some loss of quality. Modern pano software handles different exposure levels, but the best pano comes from having everything fixed.
More lessons learned. After the eclipse we relaxed and cruised the Atoll, swam, dove, surfed, bought black pearls and had a great time.
The next eclipse is really only visible in one reachable place: Cairns, Australia, in November of 2012. (There is an annular eclipse in early 2012 that passes over Redding and Reno and hits sunset at Lubbock, but an annular is just a big partial eclipse, not a total.)
Cairns and the Great Barrier Reef are astounding. I have a page about my prior trip to Australia and Cairns, and any trip there will be good even with a cloudy eclipse. Alas, a cloudy eclipse is a risk, because the sun will be quite low in the morning sky over the mountains, and worse, Nov 13 is right at the beginning of the wet season. If the wet starts early, it’s probably bad news. For many, the next eclipse will be the one that crosses the USA in 2017. However, there are other opportunities before then in Africa in 2013 (for the most keen), Svalbard in 2015 and Indonesia in 2016.
I’ll have some panoramas in the future. Meanwhile check out the gallery. Of course I got better eclipse pictures last year.
Submitted by brad on Tue, 2010-02-16 19:02.
I recently went to the DLD conference in Germany, briefly to Davos during the World Economic Forum and then drove around the Alps for a few days, including a visit to an old friend in Grenoble. I have some panoramic galleries of the Alps in Winter up already.
Each trip brings some new observations and notes.
- For the first time, I got a rental car which had a USB port in it, as I’ve been wanting for years. The USB port was really part of the radio, and if you plugged a USB stick in, it would play the music on it, but for me its main use was a handy charging port without the need for a 12v adapter. As I’ve said before, let’s see this all the time, and let’s put them in a few places — up on the dashboard ledge to power a GPS, and for front and rear seats, and even the trunk. And have a plug so the computer can access the devices, or even data about the car.
- The huge network of tunnels in the alpine countries continues to amaze me, considering the staggering cost. Sadly, some seem to simply bypass towns that are pretty.
- I’ve had good luck with winter travel, but this trip reminded me why there are no crowds. The weather can curse you, and especially curse your photography, though the snow-covered landscapes are wonderful when you do get sun. Three trips to Lake Constance/Bodensee now, and never any good weather!
- Davos was a trip. While there was a lot of security, it was far easier than say, flying in the USA. I was surprised how many people I knew at Davos. I was able to get a hotel in a village about 20 minutes away.
On to Part Two.
Submitted by brad on Sat, 2010-01-02 14:10.
I have the photo archives of a theatre company I was involved with for 12 years. It is coming upon its 50th anniversary. I have a high speed automatic scanner, so I am going to generate scans of many of the photos — that part is not too hard.
Even easier for modern groups in the digital age, where the photos are already digital and date-tagged.
But now I want members of the group to be able to rotate the photos, tag them with the names of people in them and other tags, group them into folders where needed, and add comments. I can’t do this on my own, it is a collaborative project.
Lots of photo sharing sites let other people add comments. Few sites let you add tags or let trusted other people do things like rotations. Flickr lets others draw annotations and add tags/people which would make it a likely choice, but they can’t rotate.
Facebook has an interesting set of features. It’s easy to tag photos with friends’ names, and they get notified and the photos appear on their page, which is both good and bad. (The need for the owner to approve is a burden here.) Tagging non-friends is annoying because when somebody later adds a real friend tag you must delete the old one, and the old ones may be spelled differently. However, the real deal-breaker on Facebook is that the resolution is unacceptably small.
The recent killer feature I really want is face recognition, which makes tagging with people’s names vastly easier. Even the fact that it auto-draws boxes around the faces for you to tag is a win, even without the recognition. The algorithms are far from perfect but they speed up the task a great deal. As such, right now an obvious choice is Picasa and Picasa Web Albums. However, while PWA lets you allow others to upload photos to your albums and tag their own photos, they can’t tag yours.
There is also face recognition in iPhoto, but I am not a Mac user so I don’t know if that can meet this need.
So right now two choices seem to be Flickr (but I must do all rotates) or a newly created Picasa account to which the password is shared. That’s a bit of a kludge but it seems to be the only way to get shared face recognition tagging.
Facebook can be integrated with a face recognizer called “Polar Rose” which also works with the 23hq photo sharing site. However, Facebook’s resolution is way, way too small and you need to approve tags.
I have not tried all the photo sharing sites so I wonder if people know of one that can do what I want?
Submitted by brad on Fri, 2009-12-18 15:18.
I’m waiting for the right price point on a good >24” monitor with a narrow bezel to drop low enough that I can buy 4 or 5 of them to make a panoramic display wall without the gaps being too large.
However, another idea that I think would be very cool would be to exploit the gaps between the monitors to create a simulated set of windows in a wall looking out onto a scene. It’s been done before in lab experiments with single monitors, but not as a large panoramic installation or something long term from what I understand. The value in the multi display approach is that now the gap between displays is a feature rather than a problem, and viewers can see the whole picture by moving. (Video walls must edit out the seams from the picture, removing the wonderful seamlessness of a good panorama.) We restore the seamlessness in the temporal dimension.
To do this, it would be necessary to track the exact location of the eyes of the single viewer. This would only work for one person. From the position of the eyes (in all 3 dimensions) and the monitors the graphics card would then project the panoramic image on the monitors as though they were windows in a wall. As the viewer’s head moved, the image would move the other way. As the viewer approached the wall (to a point) the images would expand and move, and likewise shrink when moving away. Fortunately this sort of real time 3-D projection is just what modern GPUs are good at.
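A minimal 2-D sketch of that projection, with hypothetical coordinates: the wall of monitors lies along the x axis at z = 0, and the viewer stands at z > 0 in front of it. Given the eye position and a monitor’s left and right edges, the slice of the panorama visible “through” that window is:

```python
import math

def visible_slice(eye_x, eye_z, win_left, win_right):
    """Return (left, right) viewing angles in degrees through one window.

    Angles are measured from straight-ahead; negative means to the left.
    The wall is the x axis at z = 0; the viewer is at (eye_x, eye_z).
    """
    # Cast a ray from the eye through each window edge into the scene.
    a_left = math.degrees(math.atan2(win_left - eye_x, eye_z))
    a_right = math.degrees(math.atan2(win_right - eye_x, eye_z))
    return a_left, a_right
```

Moving the eye to the right swings the visible slice to the left (the image “moves the other way”), and stepping closer widens the slice, matching the behaviour described above. A real implementation would do this as a full off-axis 3-D projection on the GPU, but the geometry is the same.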
The monitors could be close together, like window panes with bars between them, or further apart like independent windows. Now the size of the bezels is not important.
For extra credit, the panoramic scene could be shot in layers, so it has a foreground and background, and these could be moved independently. To do this it would be necessary to shoot the panorama from spots along a line and both isolate foreground and background (using parallax, focus and hand editing) and also merge the backgrounds from the shots so that the background pixels behind the foreground ones are combined from the left and right shots. This is known as “background subtraction” and there has been quite a lot of work in this area. I’m less certain over what range this would look good. You might want to shoot from above and below as well to get as much of the hidden background as possible into that layer. Of course having several layers is even better.
The next challenge is to very quickly spot the viewer’s head. One easy approach that has been done, at least with single screens, is to give the viewer a special hat or glasses with easily identified coloured dots or LEDs. It would be much nicer if we could do face detection quickly enough to identify an unadorned person. Chips that do this for video cameras are becoming common; the key issue is whether the detection can be done with very low latency. I think 10 milliseconds (100 Hz) would be a likely goal. The use of cameras lets the system work for anybody who walks in the room, and quickly switch among people to give them turns. A camera on the wall plus one above would work easily; two cameras on the left and right sides of the wall should also be able to get position fairly quickly.
Even better would be doing it with one camera. With one camera, one can still get a distance to the subject (with less resolution) by examining changes in the size of features on the head or body. However, that only provides relative distance; for example, you can tell if the viewer got 20% closer but not where they started from. You would have to guess that distance, learn it from other cues (such as a known-sized object like the hat) or even have the viewer begin the process by standing on a specific spot. This could also be a good way to initiate the process, especially for a group of people coming to view the illusion. Stand still on the spot for 5 seconds until it beeps or flashes, and then start moving around.
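The calibration-spot idea can be sketched simply: apparent size scales inversely with distance, so one measurement at a known distance fixes a constant that converts every later measurement into an absolute distance. Function names and numbers here are mine, not from any particular tracking library:

```python
def calibrate(known_distance_m, face_width_px):
    """One measurement at a known distance fixes the scale constant k,
    where distance = k / apparent_face_width."""
    return known_distance_m * face_width_px

def estimate_distance(k, face_width_px):
    """Absolute distance from an apparent face width, after calibration."""
    return k / face_width_px

# Viewer stands on the marked spot, known to be 2 m from the camera,
# and the face detector reports a 100-pixel-wide face:
k = calibrate(2.0, 100)

# Later the face measures 80 px, so the viewer has stepped back to 2.5 m:
d = estimate_distance(k, 80)
```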
If the face can be detected with high accuracy and quickly, a decent illusion should be possible. I was inspired by this clever simulated 3-D videoconferencing system which simulates 3-D in this way and watches the face of the viewer.
You need high resolution photos for this, as only a subset of the image appears in the “windows” at any given time, particularly when standing away from the windows. It could be possible to let the viewer get reasonably close to the “window” if you have a gigapan style panorama, though a physical barrier (even symbolic) to stop people from getting so close that the illusion breaks would be a good idea.
Submitted by brad on Mon, 2009-11-23 14:29.
As digital cameras have developed enough resolution to work as scanners, such as in the scanning table proposal I wrote about earlier, some people are also using them to digitize slides. You can purchase what is called a “slide copier” which is just a simple lens and holder which goes in front of the camera to take pictures of slides. These have existed for a long time as they were used to duplicate slides in film days. However, they were not adapted for negatives since you can’t readily duplicate a colour negative this way, because it is a negative and because it has an orange cast from the substrate.
There is at least one slide copier (The Opteka) which offers a negative strip holder, however that requires a bit of manual manipulation and the orange cast reduces the color gamut you will get after processing the image. Digital photography allows imaging of negatives because we can invert and colour adjust the result.
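As a sketch of what “invert and colour adjust” means in software: sample the colour of the unexposed film base (the orange mask, visible at the edge of the strip), divide each channel by it, then invert. This is a simplification, since a real conversion also handles the film’s contrast curve; pixel values here are floats in 0..1 and the base colour is an assumed sample:

```python
def invert_negative(pixel, base):
    """Invert a colour-negative pixel, removing the orange mask.

    pixel, base: (r, g, b) tuples of floats in 0..1, where base is the
    colour of the unexposed film base. Dividing by the base normalizes
    the mask; inverting then maps the base itself to scene black
    (unexposed negative = no light hit the film).
    """
    return tuple(
        max(0.0, min(1.0, 1.0 - p / b)) for p, b in zip(pixel, base)
    )
```

The film base comes out as pure black, and a pixel at half the base brightness in every channel comes out neutral grey, so the orange cast is gone.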
To get the product I want, we don’t have too far to go. First of all, you want a negative strip holder which has wheels in the sprocket holes. Once you have placed your negative strip correctly with one wheel, a second wheel should be able to advance exactly one frame, just like the reel in the camera did when it was shooting. You may need to do some fine adjustments, but it is also satisfactory to have the image cover more than 36mm so that you don’t have to be perfectly accurate, and have the software do some cropping.
Secondly, you would like it so that ideally, after you wind one frame, it triggers the shutter using a remote release. (Remote release is sadly a complex thing, with many different ways for different cameras, including wired cable releases where you just close a contact but need a proprietary connector, infrared remote controls and USB shooting. Sadly, this complexity might end up adding more to the cost than everything else, so you may have to suffer and squeeze it yourself.) As a plus, a little air bulb should be available to blow air over negatives before shooting them.
Next, you want an illuminator behind the negative or slide. For slides you want white of course. For negatives however, you would like a colour chosen to undo the effects of the orange cast, so that the gamut of light received matches the range of the camera sensors. This might be done most easily with 3 LEDs matched to camera sensors in the appropriate range of brightness.
You could also simply make a product out of this light, to be used with existing slide duplicators; that’s the simplest way to do this in the small scale.
Why do all this, when a real negative scanner is not that expensive, and higher quality? Digitizing your negatives this way would be fast. Negative scanners all tend to be very slow. This approach would let you slot in a negative strip and go wind-click-wind-click-wind-click in just a couple of seconds, not unlike shooting fast on an old film camera. You would get quite decent scans with today’s high quality DSLRs. My 5D Mark II with 21 megapixels would effectively be getting around 4000 dpi, though with Bayer interpolation. If you wanted a scan for professional work or printing, you could then go back to that negative and do it on a more expensive negative scanner, cleaning it first, etc.
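The 4000 dpi figure checks out. The 5D Mark II’s 21 megapixels put 5616 pixels across the long side of the sensor; spread over a 36 mm film frame that gives:

```python
def scan_dpi(pixels_across, frame_width_mm):
    """Effective scan resolution when a sensor image spans a film frame."""
    return pixels_across / (frame_width_mm / 25.4)  # 25.4 mm per inch

dpi = scan_dpi(5616, 36.0)  # about 3962 dpi, i.e. "around 4000 dpi"
```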
Another solution is just to send all the negatives off to one of the services which send them to India for cheap scanning, though these tend to be at a more modest resolution. This approach would let you quickly get a catalog of your negatives.
Of course, to get a really quick catalog, another approach would be to create a grid of 3 rows of negative strip holder which could then be placed on a light table — ideally a light table with a blueish light to compensate for the orange cast. Take a photo of the entire grid to get 12 individual photos in one shot. This will result (on the 5D) in about 1.5 megapixel versions of each negative. Not sufficient to work with but fine for screen and web use, and not too far off the basic service you get from the consumer scanning companies.
I have some of my old negatives in plastic sheets that go in binders, so I could do it directly with them, but it’s work to put negatives into these and would be much easier to slide strips into a plastic holder which keeps them flat. Of course, another approach would be to simply lay the strips on the light table and put a sheet of clear plexiglass on top of them, and shoot in a dim room to avoid reflections.
It would also be useful if digital cameras or video cameras included a “view colour negative” mode which did its best to show an inverted live preview with the orange cast removed. Then you could browse your negatives by holding them up to your camera (in macro mode) and see them in their true form, if at lower resolution. You can usually figure out what’s in a negative, but sometimes it’s not so easy without a loupe, and this mode would make that unnecessary.
Submitted by brad on Tue, 2009-11-03 15:41.
I suggested this as a feature for my Canon 5D SLR, which shoots video, but let me expand it to all video cameras, indeed all cameras. They should all include Bluetooth, notably the 480 megabit Bluetooth 3.0. It’s cheap and the chips are readily available.
The first application is the use of the high-fidelity audio profile for microphones. Everybody knows the worst thing about today’s consumer video cameras is the sound. Good mics are often large and heavy and expensive, people don’t want to carry them on the camera. Mics on the subjects of the video are always better. While they are not readily available today, if consumer video cameras supported them, there would be a huge market in remote bluetooth microphones for use in filming.
For quality, you would want to support an error correcting protocol, which means mixing the sound onto the video a few seconds after the video is laid down. That’s not a big deal with digital recorded to flash.
Such a system easily supports multiple microphones too, mixing them or ideally just recording them as independent tracks to be mixed later. And that includes an off-camera microphone for ambient sounds. You could even put down multiples of those, and then do clever noise reduction tricks after the fact with the tracks.
The cameraman or director could also have a bluetooth headset on (those are cheap but low fidelity) to record a track of notes and commentary, something you can’t do if there is an on-camera mic being used.
I also noted a number of features for still cameras as well as video ones:
- Notes by the photographer, as above
- Universal protocol for control of remote flashes
- Remote control firing of the camera, with all the control that USB offers
- At 480 Mbit/s, downloading of photos and even live video streams to a master recorder somewhere
It might also be interesting to experiment with smart microphones. A smart microphone would be placed away from the camera, nearer the action being filmed (sporting events, for example). The camera user would zoom in on the microphone, using the camera’s autofocus to determine how far away it is, and a compass for the direction. Then the microphone, which could either be motorized or an array, could be aimed at the action. (It would be told the distance and direction of the action from the camera in the same fashion as the mic itself was located.) When you pointed the camera at something, the off-camera mic would also point at it, except during focus hunts.
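The aiming geometry is straightforward. A sketch with made-up coordinates: the camera measures distance and compass bearing to both the mic and the action, each from its own position; converting those to x/y offsets gives the bearing the mic must turn to face the action:

```python
import math

def to_xy(distance, bearing_deg):
    """Compass bearing (0 = north, clockwise) to (x east, y north) offsets."""
    b = math.radians(bearing_deg)
    return distance * math.sin(b), distance * math.cos(b)

def mic_bearing_to_action(mic_dist, mic_brg, act_dist, act_brg):
    """Bearing the mic must point, given camera-relative fixes on both."""
    mx, my = to_xy(mic_dist, mic_brg)    # mic position relative to camera
    ax, ay = to_xy(act_dist, act_brg)    # action position relative to camera
    return math.degrees(math.atan2(ax - mx, ay - my)) % 360.0
```

For example, with the mic 10 m due north of the camera and the action 10 m due east, the mic must swing to a bearing of 135 degrees (southeast).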
There could, as before be more than one of these, and this could be combined with on-person microphones as above. And none of this has to be particularly expensive. The servo-controlled mic would be a high end item but within consumer range, and fancy versions would be of interest to pros. Remote mics would also be good for getting better stereo on scenes.
Key to all this is that adding the bluetooth to the camera is a minor cost (possibly compensated for by dropping the microphone jack) but it opens up a world of options, even for cheap cameras.
And of course, the most common cameras out there now — cell phones — already have Bluetooth, compasses and these other features. In fact, cell phones could readily be your off-camera microphones. If there were a nice app with a quick pairing protocol, you could ask all the people in the scene to run it on their phones and put the phone in a front pocket. Suddenly you have a mic on each participant (up to the limit of Bluetooth, which is about 8 devices at once).
Submitted by brad on Wed, 2009-09-30 14:47.
I have several sheetfed scanners. They are great in many ways — though not nearly as automatic as they could be — but they are expensive and have their limitations when it comes to real-world documents, which are often not in pristine shape.
I still believe in sheetfed scanners for the home, in fact one of my first blog posts here was about the paperless home, and some products are now on the market similar to this design, though none have the concept I really wanted — a battery powered scanner which simply scans to flash cards, and you take the flash card to a computer later for processing.
My multi-page document scanners will do a whole document, but they sometimes mis-feed. My single-page sheetfed scanner isn’t as fast or fancy but it’s still faster than using a flatbed because the act of putting the paper in the scanner is the act of scanning. There is no “open the top, remove old document, put in new one, lower top, push scan button” process.
Here’s a design that might be cheap and just what a house needs to get rid of its documents. It begins with a table which has an arm coming out from one side, with a tripod screw to hold a digital camera. Running up the arm is a USB cable to the camera. Also on the arm, at enough of an angle to avoid glare and reflections, are lights, either white LEDs or CCFL tubes.
In the bed of the table is a capacitive sensor able to tell if your hand is near the table, as well as a simple photosensor to tell if there is a document on the table. All of this plugs into a laptop for control.
You slap a document on the table. As soon as you draw your hand away, the light flashes and the camera takes a picture. Then go and replace or flip the document and it happens again. No need to push a button, the removal of your hand with a document in place causes the photo. A button will be present to say “take it again” or “erase that” but you should not need to push it much. The light should be bright enough so the camera can shoot fairly stopped down, allowing a sharp image with good depth of field. The light might be on all the time in the single-sided version.
The camera can’t be just any camera, alas, but many older cameras in the 6MP range would get about 300 dpi colour from a typical letter sized page, which is quite fine. Key is that the camera has a macro mode (or can otherwise focus close) and can be made to shoot over USB. An infrared LED could also be used to trigger many consumer cameras. Another plus is manual focus. It would be nice if the camera could just be locked in focus at the right distance, as that means much faster shooting for typical consumer digital cameras. And ideally all of this (macro mode, manual focus) could be set over USB and thus be done under the control of the computer.
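A quick sanity check on that figure, assuming a roughly 3000 × 2000 pixel sensor (typical for 6 MP) framed tightly on a US letter page of 11 × 8.5 inches:

```python
def effective_dpi(pixels, inches):
    """Pixels spread across a physical span, as dots per inch."""
    return pixels / inches

long_side = effective_dpi(3000, 11.0)   # about 273 dpi along the page
short_side = effective_dpi(2000, 8.5)   # about 235 dpi across it
```

So a 6 MP camera lands in the 235 to 275 dpi range, in the ballpark of 300 dpi; a tighter crop or a slightly larger sensor closes the gap.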
Of course, 3-D objects can also be shot in this way, though they might get glare from the lights if they have surfaces at the wrong angles. A fancier box would put the lights behind cloth diffusers, making things bulkier, though it can all pack down pretty small. In fact, since the arm can be designed to be easily removed, the whole thing can pack down into a very small box. A sheet of plexi would be available to flatten crumpled papers, though with good depth of field, this might not strictly be necessary.
One nice option might be a table filled with holes and a small suction pump. This would hold paper flat to the table. It would also make it easy to determine when paper is on the table. It would not help stacks of paper much but could be turned off, of course.
A fancier and bulkier version would have legs and support a second camera below the table, which would now be a transparent piece of plexiglass. Double sided shots could then be taken, though in this case the lights on the other side would have to be turned off when shooting, and a darkened room or a shade around the bottom and part of the top would be a good idea, to avoid bleed through the page. Suction might not be such a good idea here. The software should figure out whether the other side is blank and discard or highly compress that image. Of course the software must also crop images to size, and straighten rectangular items.
There are other options besides the capacitive hand sensor. These include a button, of course, a simple voice command detector, and clever use of the preview video mode that many digital cameras now offer over USB (i.e. the computer can look through the camera and see when the document is in place and the hand is removed). This approach would also allow gesture commands, little hand signals to indicate if the document is single sided, or B&W, or needs other special treatment.
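The hand-away trigger described above is just a tiny state machine: fire exactly when a document is present and the hand has just been withdrawn. A sketch, assuming the two sensor readings arrive as booleans on every tick:

```python
def make_trigger():
    """Return a tick(document_present, hand_near) function that is True
    exactly once, at the moment the hand is withdrawn over a document."""
    state = {"hand_was_near": False}

    def tick(document_present, hand_near):
        fire = (
            document_present
            and not hand_near
            and state["hand_was_near"]  # hand was near last tick: just withdrew
        )
        state["hand_was_near"] = hand_near
        return fire

    return tick
```

Because it fires only on the near-to-away transition, it won’t re-shoot while the document sits untouched, and the “take it again” button remains the manual override.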
The goal however, is a table where you can just slap pages down, move your hand away slightly and then slap down another. For stacks of documents one could even put down the whole stack and take pages off one at a time though this would surely bump the stack a bit requiring a bit of cleverness in straightening and cropping. Many people would find they could do this as fast as some of the faster professional document scanners, and with no errors on imperfect pages. The scans would not be as good as true scanner output, but good enough for many purposes.
In fact, digital camera photography’s speed (and ability to handle 3-D objects) led both Google Books and the Internet Archive to use it for their book scanning projects. This was of course primarily because they were unwilling to destroy books. Google came up with the idea of using a laser rangefinder to map the shape of the curved book page to correct any distortions in it. While this could be done here it is probably overkill.
One nice bonus here is that it’s very easy to design this to handle large documents, and even to be adjustable to handle both small and large documents. Normally scanners wide enough for large items are very expensive.