Submitted by brad on Fri, 2009-12-18 15:18.
I’m waiting for the right price point on a good >24” monitor with a narrow bezel to drop low enough that I can buy 4 or 5 of them to make a panoramic display wall without the gaps being too large.
However, another idea that I think would be very cool would be to exploit the gaps between the monitors to create a simulated set of windows in a wall looking out onto a scene. It’s been done before in lab experiments with single monitors, but not, as far as I understand, as a large panoramic or long-term installation. The value in the multi-display approach is that the gap between displays becomes a feature rather than a problem, and viewers can see the whole picture by moving. (Video walls must edit the seams out of the picture, removing the wonderful seamlessness of a good panorama.) We restore the seamlessness in the temporal dimension.
To do this, it would be necessary to track the exact location of the eyes of the single viewer. This would only work for one person. From the position of the eyes (in all 3 dimensions) and the monitors the graphics card would then project the panoramic image on the monitors as though they were windows in a wall. As the viewer’s head moved, the image would move the other way. As the viewer approached the wall (to a point) the images would expand and move, and likewise shrink when moving away. Fortunately this sort of real time 3-D projection is just what modern GPUs are good at.
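As a rough sketch of the projection (this models the panorama as a cylinder at infinity and treats each monitor as a rectangle in the wall plane; a real implementation would use the GPU’s off-axis projection, and all the names and conventions here are mine):

```python
import math

def window_crop(eye, win_left, win_right, win_bottom, win_top,
                pano_w, pano_h, fov_h=math.pi, fov_v=math.pi / 2):
    """Return the (col0, col1, row0, row1) region of a cylindrical
    panorama visible through one rectangular 'window' in the wall.

    The wall is the z=0 plane; the viewer's eye is at eye=(x, y, z)
    with z > 0, looking toward -z.  The panorama is treated as if at
    infinity, spanning fov_h radians horizontally and fov_v vertically,
    centred straight ahead.  An illustrative model, not a renderer.
    """
    ex, ey, ez = eye
    # Angle from the straight-ahead axis to a point on the wall plane.
    h_ang = lambda px: math.atan2(px - ex, ez)
    v_ang = lambda py: math.atan2(py - ey, ez)
    # Map angles to pixel coordinates in the panorama.
    to_col = lambda a: (0.5 + a / fov_h) * pano_w
    to_row = lambda a: (0.5 + a / fov_v) * pano_h
    return (to_col(h_ang(win_left)), to_col(h_ang(win_right)),
            to_row(v_ang(win_bottom)), to_row(v_ang(win_top)))
```

Note that the model naturally produces the behaviour described above: moving the eye to the right slides the crop left, and stepping closer widens the slice seen through the window.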
The monitors could be close together, like window panes with bars between them, or further apart like independent windows. Now the size of the bezels is not important.
For extra credit, the panoramic scene could be shot in layers, so it has a foreground and background, and these could be moved independently. To do this it would be necessary to shoot the panorama from spots along a line, and both isolate foreground from background (using parallax, focus and hand editing) and merge the backgrounds from the shots, so that the background pixels behind the foreground ones are combined from the left and right shots. This is known as “background subtraction” and there has been quite a lot of work in this area. I’m less certain about the range over which this would look good. You might want to shoot above and below as well to get as much of the hidden background as possible in that layer. Of course, having several layers is even better.
The next challenge is to very quickly spot the viewer’s head. One easy approach, which has been done at least with single screens, is to give the viewer a special hat or glasses with easily identified coloured dots or LEDs. It would be much nicer if we could do face detection quickly enough to identify an unadorned person. Chips that do this for video cameras are becoming common; the key issue is whether the detection can be done with very low latency. I think 10 milliseconds (100 Hz) would be a likely goal. The use of cameras lets the system work for anybody who walks into the room, and quickly switch among people to give them turns. A camera on the wall plus one above would work easily; two cameras on the left and right sides of the wall should also be able to get position fairly quickly.
Even better would be doing it with one camera. With one camera, one can still get a distance to the subject (with less resolution) by examining changes in the size of features on the head or body. However, that only provides relative distance; for example you can tell if the viewer got 20% closer but not where they started from. You would have to guess that distance, or learn it from other cues (such as an object of known size, like the hat), or even have the viewer begin the process by standing on a specific spot. This could also be a good way to initiate the process, especially for a group of people coming to view the illusion. Stand still in the spot for 5 seconds until it beeps or flashes, and then start moving around.
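The size-to-distance relationship is just the pinhole camera model; a sketch:

```python
def relative_distance(size_then, size_now, distance_then=1.0):
    """Pinhole-camera scaling: apparent size is inversely proportional
    to distance, so a feature that grows by 25% is 1/1.25 as far away.
    distance_then is the unknown starting distance; with a single
    camera it must be guessed or calibrated, e.g. by the
    'stand on the spot for 5 seconds' trick described above."""
    return distance_then * size_then / size_now
```

For example, if the face measured 100 pixels across when the viewer stood on the 2-metre calibration spot and now measures 125, the viewer is 1.6 metres away.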
If the face can be detected with high accuracy and quickly, a decent illusion should be possible. I was inspired by this clever simulated 3-D videoconferencing system which simulates 3-D in this way and watches the face of the viewer.
You need high resolution photos for this, as only a subset of the image appears in the “windows” at any given time, particularly when standing away from the windows. It could be possible to let the viewer get reasonably close to the “window” if you have a gigapan style panorama, though a physical barrier (even symbolic) to stop people from getting so close that the illusion breaks would be a good idea.
Submitted by brad on Mon, 2009-11-23 14:29.
As digital cameras have developed enough resolution to work as scanners, such as in the scanning table proposal I wrote about earlier, some people are also using them to digitize slides. You can purchase what is called a “slide copier,” which is just a simple lens and holder that goes in front of the camera to take pictures of slides. These have existed for a long time, as they were used to duplicate slides in film days. However, they were not adapted for negatives, since you can’t readily duplicate a colour negative this way: the copy would still be a negative, and would still carry the orange cast of the substrate.
There is at least one slide copier (the Opteka) which offers a negative strip holder, but that requires a bit of manual manipulation, and the orange cast reduces the colour gamut you will get after processing the image. Digital photography allows imaging of negatives because we can invert and colour-adjust the result.
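A first-order sketch of that invert-and-adjust step, assuming the orange mask colour can be sampled from an unexposed strip of the film border (real negative conversion also needs per-channel gamma, which is omitted here):

```python
import numpy as np

def invert_negative(img, mask_rgb):
    """Invert a colour negative shot with a digital camera.

    img      -- float RGB array with values in [0, 1]
    mask_rgb -- colour of the unexposed film border (the orange mask),
                sampled from the same frame

    Dividing by the mask colour normalises each channel so the orange
    cast becomes neutral; inverting then yields a positive.  The
    unexposed border itself maps to black, as it should."""
    mask = np.asarray(mask_rgb, dtype=float)
    balanced = np.clip(img / mask, 0.0, 1.0)
    return 1.0 - balanced
```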
To get the product I want, we don’t have too far to go. First of all, you want a negative strip holder which has wheels in the sprocket holes. Once you have placed your negative strip correctly with one wheel, a second wheel should be able to advance exactly one frame, just like the reel in the camera did when it was shooting. You may need to do some fine adjustments, but it is also satisfactory to have the image cover more than 36mm so that you don’t have to be perfectly accurate, and have the software do some cropping.
Secondly, ideally, after you wind one frame, the holder would trigger the shutter using a remote release. (Remote release is sadly a complex thing, with many different ways for different cameras, including wired cable releases where you just close a contact but need a proprietary connector, infrared remote controls and USB shooting. Sadly, this complexity might end up adding more to the cost than everything else, so you may have to suffer and press the shutter yourself.) As a plus, a little air bulb should be available to blow air over the negatives before shooting them.
Next, you want an illuminator behind the negative or slide. For slides you want white of course. For negatives, however, you would like a colour chosen to undo the effects of the orange cast, so that the gamut of light received matches the range of the camera sensors. This might be done most easily with 3 LEDs matched to the camera’s colour sensors, set at the appropriate relative brightnesses.
You could also simply make a product out of this light, to be used with existing slide duplicators; that’s the simplest way to do this in the small scale.
Why do all this, when a real negative scanner is not that expensive, and higher quality? Digitizing your negatives this way would be fast. Negative scanners all tend to be very slow. This approach would let you slot in a negative strip, and go wind-click-wind-click-wind-click-wind-click in just a couple of seconds, not unlike shooting fast on an old film camera. You would get quite decent scans with today’s high-quality DSLRs. My 5D Mark II with 21 megapixels would effectively be getting around 4000 dpi, though with Bayer interpolation. If you wanted a scan for professional work or printing, you could then go back to that negative and do it on a more expensive negative scanner, cleaning it first etc.
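The arithmetic behind that estimate, assuming the 5D Mark II’s 5616-pixel sensor width filling a 36mm film frame:

```python
# Effective scan resolution when a sensor_px_wide-pixel sensor
# photographs the full width of a 35mm frame (36mm across).
MM_PER_INCH = 25.4

def effective_dpi(sensor_px_wide, frame_mm_wide=36.0):
    return sensor_px_wide / (frame_mm_wide / MM_PER_INCH)

# 5616 / (36 / 25.4) comes to roughly 3960 dpi, i.e. "around 4000 dpi".
```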
Another solution is just to send all the negatives off to one of the services which send them to India for cheap scanning, though these tend to be at a more modest resolution. This approach would let you quickly get a catalog of your negatives.
Of course, to get a really quick catalog, another approach would be to create a grid of 3 rows of negative strip holders which could then be placed on a light table, ideally a light table with a bluish light to compensate for the orange cast. Take a photo of the entire grid to get 12 individual frames in one shot. This will result (on the 5D) in about 1.5 megapixel versions of each negative. Not sufficient to work with, but fine for screen and web use, and not too far off the basic service you get from the consumer scanning companies.
I have some of my old negatives in plastic sheets that go in binders, so I could do it directly with them, but it’s work to put negatives into these and would be much easier to slide strips into a plastic holder which keeps them flat. Of course, another approach would be to simply lay the strips on the light table and put a sheet of clear plexiglass on top of them, and shoot in a dim room to avoid reflections.
It would also be useful if digital cameras or video cameras tossed in a “view colour negative” mode which did its best to show an invert of the live preview image with the orange cast reverted. Then you could browse your negatives by holding them up to your camera (in macro mode) and see them in their true form, if at lower resolution. Of course you can usually figure out what’s in a negative, but sometimes it’s not so easy and requires the loupe; with this mode it might not.
Submitted by brad on Tue, 2009-11-03 15:41.
I suggested this as a feature for my Canon 5D SLR which shoots video, but let me expand it for all video cameras, indeed all cameras. They should all include Bluetooth, notably high-speed Bluetooth 3.0, which can reach 24 megabits/second. It’s cheap and the chips are readily available.
The first application is the use of the high-fidelity audio profile for microphones. Everybody knows the worst thing about today’s consumer video cameras is the sound. Good mics are often large, heavy and expensive, and people don’t want to carry them on the camera. Mics on the subjects of the video are always better. While such mics are not readily available today, if consumer video cameras supported them, there would be a huge market in remote Bluetooth microphones for use in filming.
For quality, you would want to support an error correcting protocol, which means mixing the sound onto the video a few seconds after the video is laid down. That’s not a big deal with digital recorded to flash.
Such a system easily supports multiple microphones too, mixing them or ideally just recording them as independent tracks to be mixed later. And that includes an off-camera microphone for ambient sounds. You could even put down multiples of those, and then do clever noise reduction tricks after the fact with the tracks.
The cameraman or director could also have a bluetooth headset on (those are cheap but low fidelity) to record a track of notes and commentary, something you can’t do if there is an on-camera mic being used.
I also noted a number of features for still cameras as well as video ones:
- Notes by the photographer, as above
- Universal protocol for control of remote flashes
- Remote-control firing of the camera, with everything USB control offers
- At 24 megabits, downloading of photos and even live video streams to a master recorder somewhere
It might also be interesting to experiment in smart microphones. A smart microphone would be placed away from the camera, nearer the action being filmed (sporting events, for example). The camera user would then zoom in on the microphone, and with the camera’s autofocus determine how far away it is, and with a compass, the direction. Then the microphone, which could either be motorized or an array, could be steered toward the action. (It would be told the distance and direction of the action from the camera, in the same fashion as the mic itself was located.) When you pointed the camera at something, the off-camera mic would also point at it, except during focus hunts.
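The aiming geometry is simple flat-ground trigonometry; a sketch with bearings in radians clockwise from north (the function name and conventions are mine):

```python
import math

def mic_aim_bearing(mic_bearing, mic_dist, subj_bearing, subj_dist):
    """Where should an off-camera mic point?

    The camera locates both the mic and the subject the same way:
    autofocus distance plus compass bearing.  Convert both to
    positions relative to the camera, then return the compass bearing
    from the mic to the subject, which is what a motorised or array
    mic would steer to.  Flat-ground 2-D geometry only."""
    mx, my = mic_dist * math.sin(mic_bearing), mic_dist * math.cos(mic_bearing)
    sx, sy = subj_dist * math.sin(subj_bearing), subj_dist * math.cos(subj_bearing)
    return math.atan2(sx - mx, sy - my) % (2 * math.pi)
```

For instance, with the mic 10m due north of the camera and the subject 10m due east, the mic must point southeast (a bearing of 135 degrees).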
There could, as before be more than one of these, and this could be combined with on-person microphones as above. And none of this has to be particularly expensive. The servo-controlled mic would be a high end item but within consumer range, and fancy versions would be of interest to pros. Remote mics would also be good for getting better stereo on scenes.
Key to all this is that adding the bluetooth to the camera is a minor cost (possibly compensated for by dropping the microphone jack) but it opens up a world of options, even for cheap cameras.
And of course, the most common cameras out there now, cell phones, already have Bluetooth and compasses and these other features. In fact, cell phones could readily be your off-camera microphones. If there were a nice app with a quick pairing protocol, you could ask all the people in the scene to just run it on their cell phones and put the phones in their front pockets. Suddenly you have a mic on each participant (up to the Bluetooth piconet limit of about 7 active devices at once).
Submitted by brad on Wed, 2009-09-30 14:47.
I have several sheetfed scanners. They are great in many ways — though not nearly as automatic as they could be — but they are expensive and have their limitations when it comes to real-world documents, which are often not in pristine shape.
I still believe in sheetfed scanners for the home, in fact one of my first blog posts here was about the paperless home, and some products are now on the market similar to this design, though none have the concept I really wanted — a battery powered scanner which simply scans to flash cards, and you take the flash card to a computer later for processing.
My multi-page document scanners will do a whole document, but they sometimes mis-feed. My single-page sheetfed scanner isn’t as fast or fancy but it’s still faster than using a flatbed because the act of putting the paper in the scanner is the act of scanning. There is no “open the top, remove old document, put in new one, lower top, push scan button” process.
Here’s a design that might be cheap and just what a house needs to get rid of its documents. It begins with a table with an arm coming out from one side; the arm has a tripod screw to hold a digital camera, and a USB cable runs up the arm to the camera. Also on the arm, at enough of an angle to avoid glare and reflections, are lights: either white LEDs or CCFL tubes.
In the bed of the table is a capacitive sensor able to tell if your hand is near the table, as well as a simple photosensor to tell if there is a document on the table. All of this plugs into a laptop for control.
You slap a document on the table. As soon as you draw your hand away, the light flashes and the camera takes a picture. Then go and replace or flip the document and it happens again. No need to push a button, the removal of your hand with a document in place causes the photo. A button will be present to say “take it again” or “erase that” but you should not need to push it much. The light should be bright enough so the camera can shoot fairly stopped down, allowing a sharp image with good depth of field. The light might be on all the time in the single-sided version.
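That trigger logic is a small state machine: arm when the table is cleared, fire once when a document is present and the hand has been withdrawn. A sketch, with the sensor inputs simplified to booleans:

```python
def should_fire(hand_near, doc_present, already_shot):
    """Fire the camera exactly once per document, the moment the hand
    has been withdrawn with a fresh document in place."""
    return doc_present and not hand_near and not already_shot

def run(events):
    """events: a sequence of (hand_near, doc_present) sensor samples.
    Returns the indices at which the camera would fire."""
    shots, already_shot = [], False
    for i, (hand, doc) in enumerate(events):
        if not doc:
            already_shot = False          # table cleared: re-arm
        elif should_fire(hand, doc, already_shot):
            shots.append(i)
            already_shot = True
    return shots
```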
The camera can’t be just any camera, alas, but many older cameras in the 6MP range would get about 300 dpi colour from a typical letter-sized page, which is quite fine. Key is that the camera has a macro mode (or can otherwise focus close) and can be made to shoot over USB. An infrared LED could also be used to trigger many consumer cameras. Another plus is manual focus: it would be nice if the camera can just be locked in focus at the right distance, as that means much faster shooting for typical consumer digital cameras. And ideally all of this (macro mode, manual focus) can be set over USB and thus done under the control of the computer.
Of course, 3-D objects can also be shot in this way, though they might get glare from the lights if they have surfaces at the wrong angles. A fancier box would put the lights behind cloth diffusers, making things bulkier, though it can all pack down pretty small. In fact, since the arm can be designed to be easily removed, the whole thing can pack down into a very small box. A sheet of plexi would be available to flatten crumpled papers, though with good depth of field, this might not strictly be necessary.
One nice option might be a table filled with holes and a small suction pump. This would hold paper flat to the table. It would also make it easy to determine when paper is on the table. It would not help stacks of paper much but could be turned off, of course.
A fancier and bulkier version would have legs and support a 2nd camera below the table, which would now be a transparent piece of plexiglass. Double-sided shots could then be taken, though in this case the lights would have to be turned off on the other side when shooting, and a darkened room or a shade around the bottom and part of the top would be a good idea, to avoid bleed through the page. Suction might not be such a good idea here. The software should figure out whether the other side is blank and discard or highly compress that image. Of course the software must also crop images to size, and straighten rectangular items.
There are other options besides the capacitive hand sensor. These include a button, of course, a simple voice command detector, and clever use of the preview video mode that many digital cameras now offer over USB (i.e. the computer can look through the camera and see when the document is in place and the hand is removed). This approach would also allow gesture commands: little hand signals to indicate if the document is single-sided, or B&W, or needs other special treatment.
The goal however, is a table where you can just slap pages down, move your hand away slightly and then slap down another. For stacks of documents one could even put down the whole stack and take pages off one at a time though this would surely bump the stack a bit requiring a bit of cleverness in straightening and cropping. Many people would find they could do this as fast as some of the faster professional document scanners, and with no errors on imperfect pages. The scans would not be as good as true scanner output, but good enough for many purposes.
In fact, digital camera photography’s speed (and ability to handle 3-D objects) led both Google Books and the Internet Archive to use it for their book scanning projects. This was of course primarily because they were unwilling to destroy books. Google came up with the idea of using a laser rangefinder to map the shape of the curved book page to correct any distortions in it. While this could be done here it is probably overkill.
One nice bonus here is that it’s very easy to design this to handle large documents, and even to be adjustable to handle both small and large documents. Normally scanners wide enough for large items are very expensive.
Submitted by brad on Tue, 2009-09-22 16:05.
I have put up a gallery of panoramas for Burning Man 2009. This year I went with the new Canon 5D Mark II, which has remarkable low-light shooting capabilities. As such, I generated a number of interesting new night panoramas in addition to the giant ones of the day.
In particular, you will want to check out the panorama of the crowd around the burn, as seen from the Esplanade, and the night scene around the Temple, and a twilight shot.
Below you see a shot of the Gothic Raygun Rocket, not because it is the best of the panoramas, but because it is one of the shortest and thus fits in the blog!
Some of these are still in progress. Check back for more results, particularly in the HDR department. The regular sized photos will also be processed and available in the future.
Finally, I have gone back and rebuilt the web pages for the last 5 years of panoramas at a higher resolution and with better scaling. So you may want to look at them again to see more detail. A few are also up as gigapans including one super high-res 2009 shot in a zoomable viewer.
Submitted by brad on Sat, 2009-08-15 14:24.
Today, fewer and fewer photos are printed. We usually see them on screen. And more and more commonly, we see them on a widescreen monitor. 16:9 screens are quite common as are 16:10. You can hardly find a 4:3 screen any more, though that is the aspect ratio of most P&S cameras. Most SLRs are 3:2, which still doesn’t fit on the widescreen monitor.
So there should be a standard tag to put in photos saying, “It’s OK to crop this photo to fill aspect ratio X:Y.” Then display programs could know to do this, instead of shrinking the photo and putting black bars at the sides. Since most photos exceed the resolution of the screen by a large margin these days, there is no loss of detail in doing this; in fact there is probably a gain.
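For a viewer honouring the tag, the crop itself is simple arithmetic; a sketch using integer pixel coordinates and a centred crop:

```python
def crop_to_fill(img_w, img_h, screen_w, screen_h):
    """Largest centred crop of an img_w x img_h photo that matches the
    screen's aspect ratio -- what a display program would do for a
    photo carrying the proposed 'OK to crop' tag.
    Returns (x, y, w, h) of the crop rectangle."""
    if img_w * screen_h > img_h * screen_w:
        # Image is wider than the screen: trim the sides.
        w = img_h * screen_w // screen_h
        return ((img_w - w) // 2, 0, w, img_h)
    # Image is taller: trim top and bottom (the common 3:2 -> 16:9 case).
    h = img_w * screen_h // screen_w
    return (0, (img_h - h) // 2, img_w, h)
```

A 3:2 SLR frame of 3000x2000 shown on a 16:9 screen, for example, loses a strip of about 156 pixels from the top and bottom and nothing from the sides.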
One could apply this tag (or perhaps its converse, one saying, “please display the entirety of this photo without crop”) in a photo organizer program of course. It could also be applied by cameras. To do this, the camera might display a dim outline of a widescreen aspect ratio, so you can compose the shot to fit in that. Many people might decide to do this as the default, and push a button when they need the whole field of view and want to set a “don’t crop” flag. Of course you can fix this after the fact.
Should sensors just go widescreen? Probably not. The lens produces a circular image, so more square aspect ratios make sense. A widescreen sensor would be too narrow in portrait mode. In fact, there’s an argument that as sensors get cheaper, they should go circular and then the user can decide after the fact if they want landscape, portrait or some particular aspect ratio in either.
The simplest way to start this plan would be to add a “crop top/bottom to fit width” option to photo viewers. And to add a “flag this picture to not do that” command to the photo viewer. A quick run through the slideshow, tagging the few photos that can’t be expanded to fill the screen, would prepare the slideshow for showing to others, or it could be done right during the show.
Submitted by brad on Mon, 2009-07-27 23:05.
The total eclipse of the sun is the most visually stunning natural phenomenon there is. It leaves the other natural wonders like the Grand Canyon far behind. Through an amazing set of circumstances I got to see my 4th on Enewetak, an isolated atoll in the Marshall Islands. Enewetak was the site of 43 nuclear explosions, including Mike, the first H-bomb (which erased one of the islands in the chain).
The eclipse was astounding and we saw it clearly, other than one cloud which intruded for the first 30 seconds of our 5 minute and 40 second totality in otherwise generally clear skies. We were fortunate, as most of the eclipse path, which went over hundreds of millions of people, was clouded out in India and China. After leaving China the eclipse visited just a few islands, including Enewetak, and many of those were also clouded.
What makes the story even more dramatic is the effort to get there, and the fact that we only confirmed we were going 48 hours before the eclipse. We tracked the weather and found that only Enewetak had good cloud prospects and a long runway, but the runway there has not been maintained for several years, and hasn’t seen a jet for a long time. We left not knowing if we would be able to land there, but in the end all was glorious.
I have written up the story and included my first round of eclipse photos (my best to date) as well as photos of the islands and the nuke craters. I will be updating with new photos, including experiments in high-dynamic-range photography. An eclipse is so amazing in part because it covers a huge range of brightnesses — from prominences almost as hot as the sun, to the inner corona (solar atmosphere) brighter than the full moon to the streamers of the outer corona, and the stars and planets. No photograph has ever remotely done it justice, but I am working on that.
This eclipse had terror, drama, excitement and great beauty. The corona was more compact than it has been in the past, due to the strange minimum the sun has been going through, and there were few prominences, but the adventure getting there and the fantastic tropical setting made up for it.
Enjoy the story of the jet trip to the 2009 Eclipse at Enewetak. You’ll be a bit jealous, but it was so great I can make no apologies.
Submitted by brad on Tue, 2009-06-30 13:24.
Back in March, I took my first trip to the middle east, to attend Yossi Vardi’s “Kinnernet” unconference on the shores of Lake Kinneret, also known as the Sea of Galilee. This is an invite-only conference and a great time, but being only 2 days long, it’s hard to justify 2 days of flying just to go to it. So I also conducted a tour of sites in Israel and a bit of Jordan.
Israel is another one of the fascinating must-do countries for an English speaker, not simply for its immense history and impressive scenery, but because it is fascinating politically, and a large segment of the population speaks English. There are other countries which are interesting politically and culturally, but you will only get to speak to that segment of the population that has learned English.
Israel is a complex country and of course one can’t understand it on a visit, since many of the natives will admit to not understanding it. Most of the people I associated with, being high-tech internet people, seemed to be on the less aggressive side, if I can call them that; people opposed to the settlers, for example, and eager for land-for-peace or two-state solutions. During my trip Gaza was in turmoil and I did not visit it. I drove through West Bank areas a couple of times but only to get from A to B, though many Israelis expressed shock that I would be willing to do that. (On our way back from Jordan, on the outskirts of Jericho, we saw a lone Haredi, wearing black hat and black coat, hitch-hiking after dark on the side of the road. Our car was full, but our driver, who was not much afraid of the West Bank, did agree that was a man of particular bravery or foolishness.)
The Israelis have come to accept, like fish in water, many things that to an outsider seem shocking. Having two very different levels of rights for large sections of the population. Having your car, and then later your bag, searched as you do something as simple as visiting a shopping mall. The presence of soldiers with machine guns slung on their backs almost everywhere you look. Being on the bus that simply shuttles all day along a 400 foot trip between the Jordan and Israel border stations, and having to go through a 20 minute security inspection even though it’s been in view of the Israel station the whole time. Showing ID cards all the time.
The latter is of course not unexpected but disturbing. Israelis are taught more than anybody else in school about the dangers of a society with too much identity information on its people, and which requires them to carry and show papers. So they would have been the last to accept this, but they have. It shows how extreme their situation is more than some of the other less subtle signs. If more buildings fall in the USA, we’ll become more and more like Israel.
And yet the people, both Israelis and Arabs, are all intensely friendly and gregarious. (The same whether I would reveal my Jewish ancestry or not. I do not, however, look Jewish.) Famously brusque but still warm hearted.
The food in Israel is much better than I expected. It starts with the extremely fresh ingredients grown in the tropical climate. The falafel stands on the sides of the streets put anything elsewhere to shame, and I became addicted to the fresh squeezed juices also found everywhere.
In Jerusalem, around my hotel near King George and Jaffa, I experienced an amazing contrast. On Thursday night the streets were packed full of young people, starting their weekend. On Friday night, Shabbat was observed so strictly in that area that you could hear nothing but the chirping of birds and a few distant cars. In Tel Aviv, and among the high-tech crowd, Shabbat was hard to detect.
The old city of Jerusalem is a great trip, and the Muslim quarter, which is the most lively, is not nearly so dangerous or scary, even after hours, as Israelis described it to be. Along it is the “Stations of the Cross” route which gets Christians all excited, even though it’s clearly not the original route, which was not dotted with hundreds of Muslim-run souvenir shops. Seeing an internet cafe, I joked, “And here, at station 5.5, is where Jesus stopped to check his E-mail and twitter about how tired he was.” Jerusalem, and the rest of Israel, is packed full of Christians on “holy land” tours. A friend described it as like Houston, in that it was full of Texans.
I have a very large gallery of panoramas of Israel, along with a second page of panos and a still yet to be processed gallery of regular photos to come. Also to come is the 2-day trip into Jordan to see Petra. I’m particularly pleased with the first one that I show here, a 360 degree view of the western wall (wailing wall) male section just before Shabbat. Check out the full sized version.
Submitted by brad on Tue, 2009-06-23 10:12.
I’m really enjoying my Canon EOS 5D Mark II, especially its ability to shoot at 3200 ISO without much noise, allowing it to be used indoors, handheld without flash. But as fine as this (and other high end) cameras are, I still see a raft of features missing that I hope will appear in future cameras.
Help me fix my mistakes
A high end camera has full manual settings, which is good. But even the best of us make mistakes with these settings, mistakes the camera should know about and warn us about. It should not stop us from making shots, or in many circumstances try to correct the mistakes. But it should notice them, and beep when I take a picture, and show the mistake on the display with menu options to correct it, to always correct it, or to not warn me again about it for a day, or forever.
I wrote earlier about the general principle of noticing when we’ve left the camera in an odd mode. If we put the camera into incandescent white balance in the evening and then a day later the camera notices we’re shooting in a sunny environment, it should alert us, or even fix it. This is true of a variety of settings that are retained through a non-shooting period, including exposure compensation, white balance, shooting modes, ISO changes and many others. The camera should learn over time what our “normal” modes are that we do like to leave the camera in, and not warn us about them, but warn us about other unusual things.
Many things will be obvious to the camera. If I shoot in manual mode and then later take another shot in manual mode that’s obviously way overexposed or underexposed, I probably just forgot, and would not mind the reminder. The reminder might also offer to delete the bad shot.
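A first cut at that detection could be as crude as checking the luminance histogram the camera already computes. The bin ranges and threshold below are my guesses for illustration, not anything a real camera uses:

```python
def exposure_warning(histogram, clip_fraction=0.25):
    """Flag a shot as 'obviously' over- or under-exposed.

    histogram is a 256-bin luminance histogram.  If more than
    clip_fraction of the pixels sit in the darkest or brightest few
    bins, the manual settings are probably stale, and the camera
    should beep and offer to fix or delete."""
    total = sum(histogram)
    if total == 0:
        return None
    if sum(histogram[:8]) / total > clip_fraction:
        return "underexposed"
    if sum(histogram[-8:]) / total > clip_fraction:
        return "overexposed"
    return None
```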
There are many things the camera can detect, including big blobs of sensor dust. Lenses left in manual focus should be noticed after a long gap of time, and especially if the lens has been removed and returned to the camera. Again, this should not impinge on the UI much — just a beep and a chance to see what the problem was on the screen.
Add bluetooth and other communications protocols to the camera
Let the camera talk to other devices. One obvious method would be bluetooth. With that, let the camera use bluetooth microphones and headsets when it records video and annotations. Let me hear the camera’s beeps and audio in a bluetooth headset so as not to disturb others. Let the camera talk to a Bluetooth GPS or GPS equipped phone to get geolocation data for photos. Let the camera be controlled via bluetooth from a laptop, and let it upload photos to a computer as it currently can do over USB. Let me use my phone or any other bluetooth remote as a remote control for the camera — indeed, on a smart phone, let me go so far as to control all aspects of the camera and see the live preview. Start making bluetooth controlled flash modules to replace the infrared protocols — it’s more reliable and won’t trigger other people’s flashes. Build simple bluetooth modules that can connect to the hotshoe or IR of existing flashes to convert them to this new system. Bluetooth would also allow keyboards (and even mice) for fancier control of the camera, and configuration of parameters that today require software on a PC. A bluetooth mouse, with its wheels (like the camera’s wheels) could make an interesting remote control.
With Bluetooth 3.0, which can go 480 megabits, this is also a suitable protocol for downloading photos or live tethering. Wireless USB (also 480 megabits at short range) is another contender.
Let it be a USB master as well as slave, so it can also be connected to USB GPS units and other peripherals people dream up, including cell phones, most of which can now act as a USB slave. This would also allow USB microphones, speakers and video displays.
Finally, add a protocol (USB or just plain IP) to the hot shoe to make this happen. (See below.)
Make more use of the microphone
I’ve always liked the idea of capturing a few seconds of sound around every still photo. This can be used for mood, or it can be used for notes on the photo. Particularly if we can do speech-to-text on the audio later, so that I can put captions on photos right then and there. This would work especially well if I can get a bluetooth headset with high quality microphone audio, something that is still hard to do right now.
If your camera can shoot video, it can of course be used as an audio recorder by putting on the lens cap, but why not just offer a voice recorder mode once you have gone to the trouble of supporting a good microphone?
Treat the camera as a software platform
Let other people write code to run on the camera. Add-on modules and new features. For low-end, deliberately crippled cameras this might not be allowed, but if I’m paying more for my camera than a computer, I should be able to program it, or download other people’s interesting programs.
Furthermore, let this code send signals to other devices, over USB, the flash shoe, and even bluetooth. Consider including a few general purpose digital read/write pins for general microcontroller function, or make a simple module to allow that.
Letting others write code for your product has a cost — you must define APIs and support them. But the benefits are vast, and doing this first would generate great loyalty to the camera. I imagine software for panorama taking, high-dynamic-range photography, timelapse, automatic exposure evaluation and much more — perhaps even the mistake-detection described above.
Create a fancy new hotshoe with data flow and power flow
The hotshoe should include a generalized data bus like two-way USB or just IP over something. Make all peripherals, including flashes, speak this protocol for control. But also allow the unit on the flash hot shoe to control the camera — this will be a two-way street.
In the hotshoe, include pins for power — both to access the power from the camera, and to allow hotshoe devices to assist powering the camera and to charge the battery. This would allow the creation of low-powered flashes which are small and don’t need a battery because they draw from the camera battery. Not big, but suitable for fill flash and other purposes. The 5D has no built-in flash and I miss the fill-flash of the on-camera flash of the 40D. Obviously you don’t want devices sucking all the battery, and some might have their own batteries, but I would rather carry two camera batteries than have to carry a camera battery and then another battery type and charger type for my flash!
One could make a hotshoe device that holds more camera batteries, as an alternative to the battery grip. But hotshoe devices, with their data link, could do much more than control flashes. They could include fancy audio equipment, even a controller for the servo motors of a rotating pano-head or pan and scan tripod. Hotshoe devices could include wifi or Bluetooth if it’s not already in the camera. Or GPS location.
The hotshoe would offer 5V USB-style power to start, but on approval, switch the power lines to high-current direct battery access, to allow extra power devices, and even battery chargers or AC adapters.
Support incremental download
Perhaps some cameras do this but I have not seen it. Instead of deleting photos from cards, just let things cycle through, and have the downloader only fetch the new photos, and mark the ones fetched as ready for deletion when needed. It’s always good to have your photos in multiple places — why delete them from the card before you need to? Possibly make the old photos semi-invisible. And, as I have asked before, when a photo is deleted, don’t delete it, but move it to a recycle bin where I can undelete. Of course, as space is needed, purge things from that bin in order. Though still call it delete, so that when rent-a-cops try to make you delete photos, you can fake it.
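A minimal sketch of the incremental-download idea: keep a small log on the card of what has already been fetched, copy only the new files, and never delete anything — the log simply marks photos as “ready for deletion when needed.” The `.fetched` filename and flat folder layout are assumptions for illustration.

```python
import shutil
from pathlib import Path

def incremental_download(card_dir, archive_dir, fetched_log=".fetched"):
    """Copy only photos not previously fetched from the card to the
    archive, and record what was fetched rather than deleting it.
    Returns the list of newly copied filenames."""
    card = Path(card_dir)
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    log = card / fetched_log
    already = set(log.read_text().split()) if log.exists() else set()
    new = []
    for photo in sorted(card.glob("*.jpg")):
        if photo.name not in already:
            shutil.copy2(photo, archive / photo.name)
            new.append(photo.name)
    # Mark as fetched; the camera would purge these oldest-first
    # only when the card actually fills.
    log.write_text("\n".join(sorted(already | set(new))))
    return new
```

Run it twice and the second pass copies nothing — the photos stay on the card in both places until space is truly needed.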
Put an Arca-swiss style plate on the bottom of the camera
Serious photographers have all settled on this plate, and have one stuck to the bottom of their camera, which is annoying when the camera is on your neck. Put these dovetails right into the base of the camera, with a standard tripod hole in the center (something the add-on plates often can’t quite manage, as they must put the screw in the center). I pay $50 for every new camera to get a custom plate. Just build it in. Those with other QR systems can still connect to the 1/4-20 tripod hole.
Consider a new format between jpeg and raw
The jpeg compression is good enough that detail is not lost. What is lost is exposure range. Raw format preserves everything, but is very large, and slower and harder to use when organizing photographs — its main value is in post-processing. A 12-bit jpeg standard exists but is not widely used; if cameras started offering it, I expect we would see support for it proliferate, even faster than support for raw has done.
Show me the blurries
A feature I have been requesting for some time. After I take a photo, let one of the review modes offered provide a zoom in of something that is supposed to be in focus. That could be the best focus point, or simply the most contrasty part of the photo. If, when I see the most contrasty part of the photo, it’s still blurry, I can know I didn’t focus right or hold the camera steady enough. If using focus points, the wheel could rotate around the focus points that were supposed to be in focus, so I can see what was probably my subject and how well it was shot.
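The “most contrasty part” could be found with something as simple as gradient energy per tile — the sharper a region, the larger the sum of squared differences between neighbouring pixels. A toy version follows; the 64-pixel tile size is arbitrary, and `gray` is just a list of rows of pixel values standing in for the sensor data.

```python
def most_contrasty_tile(gray, tile=64):
    """Scan the image in tiles and return the top-left corner of the
    tile with the highest gradient energy -- the region most likely
    meant to be in focus, so the review screen can zoom there and
    make any blur obvious."""
    h, w = len(gray), len(gray[0])
    best, best_score = (0, 0), -1.0
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            score = 0
            for r in range(y, y + tile):
                for c in range(x, x + tile):
                    # Squared differences with right and lower
                    # neighbours inside the tile.
                    if c + 1 < x + tile:
                        score += (gray[r][c + 1] - gray[r][c]) ** 2
                    if r + 1 < y + tile:
                        score += (gray[r + 1][c] - gray[r][c]) ** 2
            if score > best_score:
                best_score, best = score, (y, x)
    return best
```

A sharp photo shows crisp detail in the winning tile; a missed focus or shaky hand shows mush, and you know to reshoot on the spot.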
Have a good accelerometer, and use it
Most cameras have a basic accelerometer to know if the camera is in portrait mode. (Oddly, they don’t all use it to know how to display photos on the screen.) But you can do much more. For example, you should be able to tell if the camera is on a tripod or handheld, based on how steady it is. That knowledge can be used to enable or disable the image stabilizer. It can also be used to add stability, by offering to delay the shutter release until the camera is being held steady when doing longer exposures. (Nikon had a feature called BSS, where it would shoot several long exposure shots, and retain the one that was least blurry. This should be a regular feature for all cameras.) Knowing the camera is stable on a tripod should also allow automatic exposure controllers to make more use of longer exposures if they need to in low light, though of course with moving subjects you still need manual control. (The camera should also be able to tell if the subjects are moving if it knows the camera itself is stable.)
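Tripod detection could be as simple as looking at the spread of a short window of accelerometer magnitude readings: a tripod-mounted camera barely varies around 1 g, while a handheld one jitters. A sketch only — the 0.002 g threshold is invented, and real firmware would filter the signal first.

```python
import statistics

def on_tripod(samples, threshold=0.002):
    """Guess tripod vs handheld from recent accelerometer magnitude
    readings (in g).  The threshold is an illustrative figure, not a
    measured one."""
    return statistics.pstdev(samples) < threshold
```

The same steadiness test could drive the delayed shutter release: keep sampling, and fire only once `on_tripod`-level stillness is reached.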
Like new phones, also have a compass, and record the direction of all photos, to add to GPS data. This would allow identification of subjects. It would also allow “panorama” modes that know when you have rotated the camera sufficiently for the next overlapping shot. Finally, the accelerometer should offer me a digital level on the screen so I can quickly level the camera.
Embrace your inner eBook
I wrote about this last month — realize we are using cameras to do more than just take pictures.
Submitted by brad on Sat, 2009-05-30 20:16.
While I have over 30 galleries of panoramic photos up on the web, a while ago I decided to generate some pages of favourites as an introduction to the photography. I’m way behind on putting up galleries from recent trips to Israel, Jordan, Russia and various other places, but in the meantime you can enjoy these three galleries:
My Best Panoramas — favourites from around the world
Burning Man Sampler — different sorts of shots from each year of Burning Man
Giant Black Rock City Shots — Each year I shoot a super-large shot of the whole of Black Rock City. This shows this shot for each year.
As always, I recommend you put your browser in full-screen mode (F11 in Firefox) to get the full width when clicking on the panos.
Submitted by brad on Wed, 2009-05-20 14:10.
In my quest for the ideal panorama head, I have recently written up some design notes and reviews. I found that the automatic head I tried, the beta version of the Gigapan, turned out to be too slow for my tastes. I can shoot by hand much more quickly.
Manual pano heads either come with a smooth turning rotator with markers, or with a detent system that offers click-stops at intervals, like 15, 20 or 30 degrees. Having click-stops is great in theory — easy to turn, much less chance of error, more exact positioning. But it turns out to have its problems.
First, unless you shoot with just one lens, no one interval is perfect. I used to shoot all my large panos with a 10 degree interval which most detent systems didn’t even want to support. Your best compromise is to pick a series of focal lengths that are multiples. So if you shoot with say a 50mm and near-25mm lens, you can use a 15 degree interval, and just go 2-clicks for 30 degrees and so on. (It’s not quite this simple, you need more overlap at the wider focal lengths.)
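The interval arithmetic is easy to sketch: the horizontal field of view of the lens, reduced by the desired overlap, gives the widest usable interval. It also shows why it’s “not quite this simple” — field of view grows slower than 1/focal-length, so a 25mm lens does not get exactly twice the interval of a 50mm. (This assumes a full-frame sensor shot in landscape; the 25% overlap figure is illustrative.)

```python
import math

def shot_interval_deg(focal_mm, sensor_width_mm=36.0, overlap=0.25):
    """Widest usable interval between shots, in degrees: the lens's
    horizontal field of view minus the desired overlap fraction."""
    fov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))
    return fov * (1 - overlap)
```

For a 50mm lens this comes out just under 30 degrees, which is why a 15-degree detent ring with 2-click jumps is a workable compromise — but the 25mm companion gets slightly less than double that interval.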
Changing the click stops is a pain on some rotators — it involves taking apart the rotator, which is too much no matter how easy they make that. The new Nodal Ninja rotators and some others use a fat rotator with a series of pins. This is good, but the rotator alone is $200.
Click stops have another downside. You want them to be firm, but when they are, the “click” sets up vibrations in the assembly, which has a long lever arm, especially if there is a telephoto lens. Depending on the assembly it can take a few seconds for those vibrations to die down.
So here’s a proposal that might be a winner: electronic click stops. The rotator ring would have fine sensor marks on it, which would be read by a standard index photosensor. This would be hooked up to an inexpensive microcontroller. The microcontroller in turn would have a small piezo speaker and/or a couple of LEDs. The speaker would issue a beep when the camera was in the right place, and also issue a sub-tone which changes as you get close to the right spot — a “warmer/colder” signal to let you find it quickly. LEDs could blink faster and faster as you get warmer, and go solid when on the right spot. They would also warn you if you drifted too far from the spot before shooting.
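The core of the warmer/colder logic is just distance to the nearest multiple of the interval. A sketch, with an invented 0.3° tolerance; a real unit would map the error onto beep pitch or LED blink rate.

```python
def feedback(position_deg, interval_deg, tolerance_deg=0.3):
    """Given the encoder reading and the configured click-stop
    interval, return (on_stop, error): whether we are within
    tolerance of the nearest stop, and how far off we are."""
    nearest = round(position_deg / interval_deg) * interval_deg
    error = abs(position_deg - nearest)
    return error <= tolerance_deg, error
```

Because the stop positions are computed rather than machined, any interval works, and changing it costs nothing but a gesture.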
Now this alone would be quite useful, and of course, fully general as it could handle any interval desired. Two more things are needed — a way to set the interval, and optionally a way to ease the taking of the photos.
To set the interval, you might first reset the device by giving it a quick spin of 360 degrees. It would give a distinctive beep when ready. Then you would look through the viewfinder and move the desired interval. Your interval would be set. If doing a multi-row you would have 2 sensors for angle, and you would do this twice. You could have a button for this, but I am interested in avoiding buttons.
Now you would be ready to shoot. It would give a special signal after you had shot 360 degrees or the width of the first row in a multi-row.
Other modes could be set with other large motions of the rotator, such as moving it back and forth 2 times quickly, or other highly atypical rotations.
(If you want buttons, an interesting way to do this is to have an IR sensor and to accept controls from other remotes, such as a universal TV remote set to a Sony TV, or some other tiny remote control which is readily available. Then you can have all the buttons and modes you want.)
We might need to have one button (for on/off) and since off could be a long press-and-hold, the button could also be used for interval setting and panorama starting.
The next issue is automatic shooting or shot detection. The sensor, since it will be finely tuned, will be able to tell when you’ve stopped at the proper stop. When all movement ceases, it could take the shot without you pressing the shutter, using any of several methods. It might also be useful to have you manually control the shutter, but via a button on the panohead rather than the camera’s own shutter or cable release. First of all, this would let the head know you had taken the shot, so it could warn you about any shot that was missing. It could also know if you bumped the head or moved it during any shot — when doing long exposures there is a risk of doing this, especially if you are too eager for the next shot.
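Shot detection could then be a small state machine over the encoder stream: fire once the head has sat on a click-stop for several consecutive readings, and never re-fire on the same stop. A sketch with invented settle and tolerance figures:

```python
def auto_fire(positions, interval_deg, settle_samples=5, tol=0.3):
    """Walk a stream of encoder readings (degrees) and return the
    indices at which the shutter would fire: the head must sit within
    tol of a click-stop for settle_samples consecutive readings, and
    each stop fires at most once."""
    shots, still, last_stop = [], 0, None
    for i, p in enumerate(positions):
        stop = round(p / interval_deg) * interval_deg
        if abs(p - stop) <= tol and stop != last_stop:
            still += 1
            if still >= settle_samples:
                shots.append(i)
                last_stop = stop
                still = 0
        else:
            still = 0
    return shots
```

The settle requirement doubles as vibration handling: the click-free electronic stop fires only after the assembly has genuinely come to rest.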
Secondly, you should always be using a cable release anyway, so building one into the pano head makes some sense. However, this need not be included in the simplest form of the product.
One very cheap way of having the pano head fire the shutter is infrared. Many cameras, though sadly not all, will let you control the shutter with infrared. Digital SLRs stopped doing this for a while, but now Canon at least has reversed things and supports infrared remote on the 5D Mark II. I think we can expect to see more of this in future. Another way is with a custom cable into the camera’s cable release port. The non-standard connectors, such as the Canon N3, can now be bought but this does mean having various connector adapters available, and plugging them in.
A third way is via USB. This is cheap and the connector is standard, but not all cameras will fire via USB. Fortunately more and more microcontroller chipsets are getting USB built in. The libgphoto2 open source library will control a lot of cameras. Of course, if you have a fancy controller, you can do much more with USB, such as figure out the field of view of the camera from EXIF but that’s beyond the scope of a simple system like this.
The fourth way is a shutter servo, again beyond the scope of a small system like this. In addition, all these methods beg more UI, and that means more buttons and even eventually a screen if an LED and speaker can’t tell you all you need. However, in this case what’s called for is a button which you can use to fire the shutter, and which you can press and hold before starting a pano to ask for auto firing.
The parts cost of all this is quite small, especially in any bulk. Cheaper than a machined detent system, in fact. In smaller volumes, a pre-assembled microcontroller board could be used, such as the Arduino or its clones. The only custom part might be the optical rotary encoder disk, but a number of vendors make these in various sizes.
I’ve talked about this system being cheap but in fact it has another big advantage, which is it can be small. It’s also not out of the question that it could be retrofitted onto existing pano heads, as just about everybody is already carrying a ballhead or pan/tilt head. For retrofit, one would glue an index mark tape around the outside of your existing head near where it turns, and mount the sensor and other equipment on the other part. The result is a panohead that weighs nothing because you are already carrying it.
Update: I am working on even more sophisticated plans than this which could generate a panohead which is the strongest, smallest, fastest, most versatile and lightest all at the same time — and among the less expensive too. But I would probably want some partners if I were to manufacture it.
Submitted by brad on Sat, 2009-04-11 08:33.
Lots of people are doing it — using their digital camera as a quick way to copy documents, not just for taking home, but to carry around. Rather than carry around a large travel guidebook (where most of the weight is devoted to hotels and restaurants in other towns) we normally just photograph the relevant pages for the area we will be exploring. We also do it even with portable items like guides and travel maps since we don’t really want the paper. We also find ourselves regularly photographing maps of cities, facilities and transit systems found on walls. We will photograph transit timetables: take a ferry out, photograph the schedule of ferries going back. In countries where you can’t write the language, photographing the names of destinations so you can show them to cab drivers and locals is handy.
Yes, I have also seen copyright violation going on, with people taking a temporary photograph of somebody else’s guidebook, or one in a library or hotel. Not to save money, but for the convenience.
While I still think a dedicated travel device makes sense when doing tourism, cameras should embrace this function. Some travel guides, such as Lonely Planet, will sell you a PDF version of the book or chapters in it. Perhaps being able to read PDFs is more than a camera wants to do, but these could be converted to PNGs or some other clear and compact format. A very simple book browser in the camera is not a tall order, considering the level of processing they now have. Though there seems to be a lot to be said for the simplicity of the camera’s interface, where you turn a wheel to find a page and then zoom in. If there’s a browser it had better be easier to use than that.
However, even simpler would be a way to tag a photo as being text (indeed, many cameras could probably figure out that a photo is dense with text on their own.) Such photos would be put into their own special folder, and the camera’s menu should offer a way to directly go to those photos for browsing.
I realize the risk here. Forced convergence often results in a device that does nothing well. In this case people are already using the camera for this, because it is what they are carrying. There is already pressure to make camera screens bigger and higher resolution, and to give them good interfaces to move around and zoom in.
In time, though, travel guides might deliberately make versions that you store on the flash card of your camera. Of course, you can already do this on your PDA, and I read eBooks on my PDA all the time. And sometimes your cell phone/PDA is your camera.
Submitted by brad on Sat, 2009-01-24 17:12.
Since I do so many of my own, you won’t find me blogging about other people’s panoramas very much but this gigapixel shot of the crowd as Obama gives his inaugural speech is well worth exploring full screen. David Bergman’s story of the photo is available.
It was taken with the gigapan imager that I gave a negative review to last month. You can see why I want a better version of this imager. The shot is a great recording of history, as you can see the faces of almost all the dignitaries and high rollers who were there. It has a few stitch errors which would be a lot of work to remove by hand, so I don’t blame the creator for doing just one 5 hour automated pass. When such an imager becomes available for quality DSLRs, the image will be even better — this one faces the limitations of the G10. And due to the long time required to shoot any panorama of this scope, it looks like only some of the crowd are applauding, while others are bored.
I would love to see a shot of the ordinary folks in the far-away crowd too, but he wasn’t in range to get that, and it would have needed a longer lens. A computer might be able to count the faces then, or even tell you their racial mix. The made-the-list area probably has more black faces than ever before, but still a small minority.
A few years in the future, every event will be captured at this resolution, until we start having privacy worries about it.
Submitted by brad on Tue, 2009-01-13 20:58.
I just got my new Canon 5D Mark II. (Let me know if you want to buy some of my old gear, see below…) This camera is creating a lot of attention because of several ground-breaking features. First, it’s 21MP full-frame. Second, it shoots at up to 25,600 ISO — 8 stops faster than the 100 ISO that was standard not so long ago, and is still the approximate speed of typical P&S today. It’s grainy at that speed (though makes a perfectly good shot for web display) and it’s really not very grainy at all at 3200 ISO.
Third, they “threw in” HDTV video capture at the full 1920x1080, and I must say the video is stunning. There are a few flaws with it — the compression rate is poor (5 megabytes/second) and there is no autofocus available while shooting, but most of us were not expecting it to be there at all.
Another “flaw” I found — for years I have had a 2x tele-extender but the cameras refuse to autofocus with them on f/4 lenses (f/8 being too dark, while f/5.6 is OK.) But I figured, with the way sensors have been getting so much better and more sensitive of late, surely the newest cameras would be able to do it? No dice. I will later try an experiment blocking the pins that tell it not to autofocus, maybe it will work.
Anyway, on to the little surprise for those photographing friends who want this camera. Normally, cameras and most other gear are more expensive in Canada. But there was a lucky accident with this camera. When they priced it, the Canadian dollar was much stronger against the U.S. dollar, and so they priced it at only $450 over the USD price. That is to say, the camera with 24-105L lens is $3500 in the USA and $3950 in Canada. But due to the shift in the U.S. dollar, $3950 CDN is only about $3250 USD. And the camera comes with full USA/Canada warranty, so it is not gray market.
There is a smaller saving on the body-only — $3100 CDN vs $2700 USD, a saving of only about $130. If you want the body only, I recommend you buy the kit with lens for $3250 and sell the lens (you can get about $900 for it in the USA), which gets you the body for $2350, a $350 saving, with some work. Boy, at that price this camera is pretty amazing, considering I paid over $3000 for my first D30!
In Canada, two good stores are Henry’s Camera and Camera Canada. All stores sell this camera at list price right now (because it’s hot) but I talked Henry’s into knocking $75 because their Boxing Day sales ads proclaimed “All Digital SLRs on sale.” At first they said, “not that one” but I said, “So all doesn’t mean all?” so they were nice and gave the discount. You probably won’t. Shipping was $10 and I got it in about 3 shipping days via international Priority Mail. No taxes or duties if exported from Canada.
Of course, if you prefer to order from a U.S. retailer you can do me a favour and follow the links on my Camera Advice pages, where I get a modest cut if you buy from Amazon or B&H, both quality online retailers.
Now that I have my 5D, I don’t really need my 20D or 40D. I may keep one of them as a backup body. Based on eBay prices, the 20D is worth about $325 and the 40D about $620 — make me an offer. I will also sell the 10-22mm EF-S lens which works with those bodies but not with the 5D. Those go for about $550 on eBay, mine comes with an aftermarket lens hood — always a good idea. The 10mm lens is incredibly wide and gets shots you won’t get other ways. I am slightly more inclined to sell the superior 40D, as I only want to keep the other camera as a backup. The 40D’s main advantages are a few extra pixels, a much nicer display screen and the vibrating sensor cleaner. I have Arca-swiss style quick release plates for each camera, and want to sell them with the cameras. They cost $55 new, and don’t wear out, so I would want at least $40 added for them.
More on the 5D/II after I have shot with it for a while.
Update: The Canadian dollar has fallen more, it’s $1.29 CDN to $1 USD, so the 5D Mark II with lens kit at $3950 CDN is just $3060 USD, a bargain hard to resist over the $3500 US price. Sell that kit lens if you don’t need it for $850 and you’re talking $2200 for your 5D.
Update 2: The Canadian dollar has risen again, reducing the value of this bargain. It is unlikely to make sense with the currencies near even in value.
Submitted by brad on Tue, 2008-12-09 21:17.
This is an unfair review of the “Gigapan” motorized panoramic mount. It’s unfair because the unit I received did not work properly, and I returned it. But I learned enough to know I did not want it so I did not ask for an exchange. The other thing that’s unfair is that this unit is still listed as a “beta” model by the vendor.
I’ve been wanting something like the Gigapan for a long time. It’s got computerized servos, and thus is able to shoot a panorama, in particular a multi-row panorama, automatically. You specify the corners of the panorama and it moves the camera through all the needed shots, clicking the shutter, in this case with a manual servo that mounts over the shutter release and physically presses it.
I shoot a lot of panos, as readers know, and so I seek a motorized mount for these reasons:
- I want to shoot panos faster. Press a button and have it do the work as quickly as possible
- I want to shoot them more reliably. With manual shooting, I may miss a shot or overshoot the angle, ruining a whole pano
- For multi-row, there’s a lot of shooting and it can be tiresome.
- With the right shutter release, there can be lower vibration. You can also raise the mirror just once for the whole pano, with no need to see through the viewfinder.
Submitted by brad on Wed, 2008-12-03 01:33.
Here’s a new panorama gallery for Helsinki in Finland/Suomi with a few extra shots in a link off the end.
As I noted, I went to Finland to talk to the members of Alternative Party, a Demoscene gathering, but I always seek new photographs. The weather gods were not with me, however, so I only got a few usable periods of sun in the short days. And it involved some more playing with Autopano Pro. The regular photographs will come much later.
The Finns, not unlike the Dutch, all spoke to me in very good English. It was rather embarrassing, really, and indeed they conducted their conference entirely in English and tolerated my fast speaking style. As such I learned hardly any words of Finnish. It’s not hard to see why this has taken place, however. There are only about 6 million people who speak it, and while it is weakly related to Hungarian, it’s not really understood by anybody else. In the global village, the Finns see which way the wind is blowing and teach their children English.
I did learn however, that I’ve been saying the Finnish word “Sauna” wrong all my life. It’s “Sow-na” not “Saw-na.” And there was a sauna after the conference, of course!
Here’s a shot of the Helsinki harbour taken from an approaching ferry boat in a glorious moment of sun. It’s not perfect because the boat was moving, but it shows the central landmarks.
More about Helsinki is yet to come.
Update: Silly me, there were two other panos of Helsinki I forgot to include, one of Senate Square on the main page, and The Cable Factory area on the secondary page.
Submitted by brad on Sat, 2008-11-22 14:14.
I now have a gallery up of the panoramas from Stockholm, Sweden. While this was not the best time of year to be photographing that far north (except for the availability of fall colour) I generated a lot of panoramas of various sorts. The main reason was I am trying some new panorama software, known as AutoPano Pro. This software is one of the licensees of the interesting SIFT algorithm, which is able to take a giant pile of pictures, figure out which ones overlap, and set up the blend. The finding algorithm isn’t as important to me, because I recently wrote a perl program that goes through my pictures and finds all the runs of portrait shots with fixed parameters taken over a short period of time, and that helps me isolate my panoramas. However, the auto blending, even for handheld shots, means that it’s a lot easier to put together a larger number of panoramas.
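For the curious, here is roughly what that run-finding pass looks like. My actual program was in perl; this is a Python re-imagining, and the field names are invented: group consecutive portrait shots with identical exposure settings taken within a short gap of each other.

```python
def find_pano_runs(photos, min_run=4, max_gap_s=30):
    """Scan EXIF-like records in time order and pull out runs of
    portrait-orientation shots with fixed settings taken in quick
    succession -- likely panorama segments.  Each photo is a dict
    with 'time' (seconds), 'portrait', 'focal' and 'exposure'."""
    runs, run = [], []
    for p in sorted(photos, key=lambda p: p["time"]):
        same = (run
                and p["portrait"]
                and p["focal"] == run[-1]["focal"]
                and p["exposure"] == run[-1]["exposure"]
                and p["time"] - run[-1]["time"] <= max_gap_s)
        if same:
            run.append(p)
        else:
            if len(run) >= min_run:
                runs.append(run)
            run = [p] if p["portrait"] else []
    if len(run) >= min_run:
        runs.append(run)
    return runs
```

Anything the heuristic misses still shows up in AutoPano Pro’s own overlap detection, so the two approaches complement each other.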
I will be doing a fuller review of the software later. Unfortunately, while it is great at finding and building panos, and does an automatic job a fair bit of the time, when it does goof up it’s harder to fix, so no one tool is yet ideal. This software also does HDR, and not just multi-row but random “shoot everywhere” panos, so you may see more of these from me.
One difference — because this made it easier to assemble my lesser and redundant panos, I did assemble them, and they can be found on a page of extra panoramas of Stockholm.
Submitted by brad on Sat, 2008-10-11 12:48.
I have tripods with both 3 segments and 4 segments. A 4-segment tripod has 3 clamps per leg, which means 9 of them to open and close in extending and collapsing the tripod. That’s a pain. Enough of one that you sometimes find yourself asking whether a shot is worth setting up the tripod. But even 3 segment tripods are only a bit better.
I have my 4-segment legs because I can pack the tripod down into a reasonably small suitcase. I do most shooting when I travel so this is actually my best carbon fiber tripod. But when I am out carrying the tripod, or more commonly carrying it in the car, it doesn’t need to be this short. Unfortunately, the tripod fully extended, with camera and pano mount on it, is too long to fit in most cars, so I have to collapse one set of legs. That’s not so hard but it’s still very long and unwieldy with just one set collapsed.
Here’s a possible answer: A 4 segment tripod where the bottom two segments join not with an external clamp, but which screw or snap together to make a smooth double-length segment. You used to be able to get monopods like this. Of course, the threaded join is not very convenient, and is not adjustable. However, you could readily take it apart to pack the tripod in a suitcase. If it can be made strong enough, a snap-together join would be best, with some recessed buttons to push to pull the legs apart. Then takedown and setup could be quick enough that you would also use it to put the pod into a backpack.
However, what you would have when put together is a 2-segment tripod, because the lower pair of segments, with no bumpy clamp, could feed up into the upper two segments when both of those are extended. In other words, you would have a nice tripod you could quickly reduce to half its length and back with just 3 clamps. A reasonable length for carrying and a very easy length to put into a car trunk or back seat.
You would not, however, be able to make the tripod any shorter than half-length without undoing the bottom join. Then you could get the tripod down to 1/4 length for low shots and for placing on tables and stone walls if half-length was just too high. That use is rare enough that I could handle that, especially if it’s just snaps.
The same approach could apply to your center column, or you could have just a 1/4 length center column, which is fine for most applications, since you don’t want to extend the column unless you have to, normally.
Note that the top join would be normal, so you would have 2 clamps per leg, and one hard-join. You don’t want a hard join at the top because presumably that will thin the inner diameter of the pole if you want it strong, stopping the lower segment from telescoping inside.
The 3rd segment (2nd from the ground) into which the bottom segment snaps, could also possibly have a spike or small foot coming out the center, which goes into a hole in the bottom segment. Or a place to attach such a foot. This would allow you to also configure a shorter, 3-segment standard tripod when you don’t want to snap in the lowest segment.
Submitted by brad on Sun, 2008-09-21 00:39.
A friend (Larry P.) suggested that the time was here for serious (i.e. DSLR) cameras to undertake a design revolution. The old SLR design, with a mirror that flips up and must sit between the last lens element and the sensor, creates a lot of problems in designing the lens and camera systems. Yes, being able to view directly through the lens with your eye is a very useful thing. But at what cost?
We’re already seeing the disappearance of optical viewfinders, even rangefinders, from small consumer cameras, if only to save space. Few people were using them any more, since the screen display turns out to show a lot more, and is even better than the eye in low light.
Serious cameras aren’t seeking (too much) to save space. We want image quality most of all, and the tools to shoot good images. Looking through the viewfinder is one of those tools, but again, at what cost?
So a proposal is put forward: now that sensors, even full-frame sensors, are dropping in price, each lens should have its own sensor and shutter built in. There would be a body with a digital (and mounting) connection to the lens. The body would have the display, processor, controls, battery and so on. It’s a pretty radical proposal. Let’s look at the advantages:
- There is much more freedom in lens design, and lenses can be smaller, less expensive (for the lens at least) and lighter.
- Each sensor can be custom fit to the lens and its image circle. Some lenses could have small sensors and some have large ones. You could work with both super large hi-res sensors on a 28-70mm zoom, and also carry a small, dense sensor which offers you a (higher noise) super-tele in a tiny package.
- Each sensor can be tuned to the flaws of a particular lens, ready to correct distortions and other problems. (This could be done with a protocol for communicating those distortions to the camera too, and we’re finally starting to see things like the 5D’s database of lens light fall-off.)
- You would not get dust on the sensor, since it would be sealed inside the lens.
- You could build special bodies and/or lens holders that could hold multiple lenses, as now there is only an electronic connection to each lens. As a result you could switch among lenses instantly!
- It might be possible to have standardization, so you could mix and match lenses from different vendors as you choose.
- Image stabilization designs could be done with both sensor and optics, whatever works best.
- The lens could be some modest distance from the “body.”
- Body design can also be liberated, as the mechanical linkage with the lens can be designed without the need for a light path.
There are some downsides:
- Obviously, sensors are not yet so cheap that this isn’t a more expensive approach initially. But serious lenses are often more than $1,000 and this approach might not increase their cost by more than a few hundred dollars. For cheaper lenses, putting on a high quality sensor would not make sense, cost-wise.
- In turn, where now you might put a lot of money into your one sensor, here that money must be spread across several sensors.
- Today, if you get a new body with a better sensor, all your lenses benefit at once; here, each lens’s sensor would have to be upgraded separately.
- You lose the TTL viewfinder and focusing screen.
- You need all new equipment, and probably want new mounting hardware too.
Sensors may not be cheap enough to do this today, but they are getting cheaper, and thanks to Moore’s law this will continue. We’ve pretty much got all the megapixels we want now, so the main focus will be on improving sensor quality and ISO speed. Until sensors get so cheap that we are willing to buy several we know will be obsolete in a few years, one approach would be to still have a mount, so that the sensor on a lens can be changed. However, this need not be a quick-disconnect mount; it would be intended only for occasionally swapping out the sensor on a lens.
And of course, there could be a “sensor” on the lens which is not a sensor, but rather a mount to go on a body with a sensor, as we have today. However, this would have to be a body without a flip-up mirror, as the focal planes of these lenses would be much closer to the last lens element than they can be with an SLR. And I could also see the potential of a super-fancy rangefinder, which uses its own lens, but is digitally tied to focus and other information from the real lens to give you a view identical to the main lens, though DOF preview and manual focus would still be best on the screen.
Aside from the option of better lens design (and thus better image quality for the money) the two most appealing features to me are the instant electronic lens switch, and the ability to use different size sensors. Much as I would like to, even if I wanted to pay $6,000 for one of those amazing super-tele fast lenses that sports photographers use, I would only carry it around rarely. On the other hand, I might very well carry a short 85mm lens with a small sensor of the sort found in P&S cameras that gave me the field of view of a 600mm lens with 10 megapixels. It’s going to get me photos I would not otherwise get because I’m just not going to carry a 600mm f/2.8 in my bag. Instant lens switch might also change your desires about what zooms you want, since one of the goals of a zoom is to switch focal lengths quickly, though another goal is to have fewer lenses in the bag. If not using a mount that holds multiple lenses, lens switch could still be a very quick unsnap/snap, with no caps to remove and no seal to make.
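The small-sensor super-tele claim is just crop-factor arithmetic, sketched here in Python. The sensor dimensions are an illustrative assumption (roughly a 1/3” point-and-shoot sensor), not a spec from any real camera.

```python
import math

# Full-frame-equivalent focal length from the ratio of sensor diagonals
# (the "crop factor"). 35mm full frame is 36 x 24 mm.
FULL_FRAME_DIAG = math.hypot(36.0, 24.0)  # mm

def equivalent_focal_length(focal_mm, sensor_w_mm, sensor_h_mm):
    crop = FULL_FRAME_DIAG / math.hypot(sensor_w_mm, sensor_h_mm)
    return focal_mm * crop

# An 85mm lens over a small 4.8 x 3.6 mm sensor (crop factor ~7.2):
print(round(equivalent_focal_length(85, 4.8, 3.6)))  # 613, i.e. ~600mm
```

So an 85mm optic over a point-and-shoot-class sensor does land in 600mm-equivalent territory, at the cost of the small sensor’s higher noise.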
Of course, to do this would require a very high-bandwidth data/control/power bus, ideally standardized across vendors and designed to be upward compatible with the faster buses that will come along. There is already a Camera Link bus specification, but the technologies behind SATA-600 (6 Gbit/s) or 10-gigabit Ethernet might make sense.
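A back-of-envelope calculation, with assumed rather than measured numbers, shows why a link in that multi-gigabit class is the right ballpark:

```python
# Rough sustained bandwidth for a lens-to-body sensor link.
# All figures are illustrative assumptions, not any camera's spec.
megapixels = 21        # full-frame sensor of the era
bits_per_pixel = 14    # raw sample depth
fps = 5                # continuous-shooting rate
overhead = 1.25        # assumed protocol/framing overhead

bits_per_second = megapixels * 1e6 * bits_per_pixel * fps * overhead
print(round(bits_per_second / 1e9, 2), "Gbit/s")  # 1.84 Gbit/s
```

Faster burst rates or full-rate live view would push this higher still, which is why a bus with headroom matters.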
So I suspect that as sensors get cheap enough, we might see things move this way.
Wide angle lens
Let’s consider how this might help us produce a wide angle lens. Good wide angle lenses are expensive. It takes work and good design to keep them free of distortion and vignetting, and to make them rectilinear with a flat focal plane. Flare is also always a problem, as is doing all this for a sensor that is far from the last element. And these things are hard to do for a big image circle, though smaller image circles require very short focal lengths.
A sensor-included wide-angle could select the right focal length and image circle to get the best price/performance at suitably low noise. The sensor’s pixels could sit in distorted rows to match the distortions of the lens; indeed, one could go all the way to a fish-eye lens and put a fish-eye sensor on it to make the result rectilinear. (This could also be done in software with some loss of sharpness.) The pixels could be made larger (or given larger microlenses) toward the corners, to perfectly compensate for vignetting. And of course, one could use the short back-focus designs common in view cameras that can’t be used in SLRs because the focal plane sits too close to the last lens element.
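The software version of that fish-eye correction is just a radial remap. As a sketch, assuming the common equidistant fisheye projection (r = f·θ), the coordinate each rectilinear output pixel should be resampled from is:

```python
import math

def fisheye_source_radius(r_rect, f):
    """Radius on an equidistant fisheye image that a rectilinear radius
    r_rect should be resampled from; both in the same units as f."""
    theta = math.atan2(r_rect, f)  # field angle of the rectilinear pixel
    return f * theta               # equidistant fisheye maps r = f * theta

# A point at r = f on the rectilinear output (45 degrees off-axis) is
# pulled in from r = f * pi/4 on the fisheye image:
print(round(fisheye_source_radius(1.0, 1.0), 3))  # 0.785
```

The interpolation this resampling requires is exactly where the “some loss of sharpness” comes from; a sensor whose photosites already sit on the distorted grid would skip it.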
It’s not out of the question that such a lens/sensor could even be cheaper than a high quality lens able to put a great image on a 36mm full-frame sensor, and take better photos.
Submitted by brad on Mon, 2008-09-08 22:34.
In my previous post, I noted that I had not done many night panoramas of Burning Man. I thought I should outline just why they are such a challenge.
To shoot at night, you need a time exposure, typically a second or more. You can capture lights and fires with far less, but if you want to capture the things illuminated by those lights and fires, you need a long exposure. Having both the light source and the illuminated subject in a shot is like shooting into the sun. There are a few things you can do to get away with a shorter exposure, but they don’t work well for this sort of work.
- You can bump up the ISO on your camera. If you do that, you make the picture noisier, which ruins it when you try the next technique…
- You can apply curves in Photoshop to brighten the shadows but not the highlights, which tend to be much brighter than the shadows because they are the light sources themselves. But if you used a high ISO, the curves will immediately bring out the noise. You can’t do both.
- You can be tricky about how you do your curves. I recommend first using a Color Range selection to mask out the actual light sources and the areas near them (heavily feathered), and then applying your curves so you are not brightening the areas right next to the lights at all.
- You can use a fast lens, wide open. But if you do this, you will get a shallow depth of field, meaning that if the foreground is in focus, the background is blurry, or vice versa. The problem is that for panoramas, which try to capture a large sweeping area, shallow depth of field is not a good idea. My daytime panos are shot at f/8 or f/11.
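The masked-curves trick above can be sketched in a few lines of Python with NumPy. The threshold, gamma and feather values are assumptions for illustration, and a repeated box blur stands in for a proper Gaussian feather:

```python
import numpy as np

def box_blur(a, k):
    """Separable box blur; applied twice it gives a triangular feather."""
    kernel = np.ones(k) / k
    for axis in (0, 1):
        a = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), axis, a)
    return a

def brighten_shadows(img, threshold=0.8, gamma=0.5, feather=15):
    """img: luminance in [0,1]. Lift shadows, protect the light sources."""
    mask = (img > threshold).astype(float)        # select the lights
    mask = box_blur(box_blur(mask, feather), feather)  # feather selection
    if mask.max() > 0:
        mask = mask / mask.max()                  # 1.0 at the sources
    lifted = img ** gamma                         # gamma < 1 lifts shadows
    return mask * img + (1.0 - mask) * lifted

scene = np.full((64, 64), 0.1)   # dim desert night
scene[32, 32] = 1.0              # a fire / light source
out = brighten_shadows(scene)
print(round(float(out[0, 0]), 3))   # far corner lifted: 0.1**0.5 = 0.316
print(float(out[32, 32]))           # the light itself unchanged: 1.0
```

The same idea as the Color Range workflow: shadows far from any light get the full curve, while pixels at and near the sources are left alone.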
So you’re stuck with a long exposure. Right away that’s going to cause a problem with moving things, notably people and vehicles. There is simply nothing you can do about this with a long exposure, unless you can command the world to stop.
- I like to shoot panoramas from up towers, to capture the whole city. But towers at Burning Man are rarely built super-stable. They are usually scaffolding. If other people get on them, they wobble. That ruins almost any length of exposure.
- Over the years, the only really stable platforms have been the Man, when he was a pyramid, and the Black Rock Refinery of 2002. Other platforms would be stable if I could get them to myself, but that’s hard at Burning Man.
- A boomlift can be good if you get it to yourself. But nobody on the boomlift can even shift their weight while the shutter is open.
- In the dark, it’s easier to make mistakes, like leaving autofocus on. Manual focus is much harder too: autofocus often doesn’t work, and your eye may not have anything good to lock onto either.
- If what you are shooting is lit by fire, then the lighting is going to change from one frame to the next!
Now it gets worse. Since a full panorama like the ones I take uses 36 shots, every single one of them must be good to get a perfect pano. And that’s not going to happen. So you tend to take each shot 2 or 3 times and hope that one of them works out. The problem is that the longer you wait between moves of the camera, the more likely something in the scene will move between frames, causing a blending problem.
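A little probability shows why the retakes matter so much. The per-frame success rate here is an assumed illustrative number, not a measurement:

```python
# A 36-frame pano is only good if every single frame is good.
p = 0.8                              # assumed chance one night exposure works
single_pass = p ** 36                # no retakes
retried = (1 - (1 - p) ** 3) ** 36   # best of 3 tries per position
print(round(single_pass, 4))  # 0.0003: essentially never
print(round(retried, 2))      # 0.75: usually
```

Even with a decent hit rate per frame, one pass almost never yields a clean pano; three tries per position turns hopeless into merely hard, at the cost of more time for the scene to change.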
You can check on the camera screen whether the shot came out, but that’s very time-consuming and just makes the moving-car problem even worse. I have wished for some time that cameras had a review mode that was “Show me a full 1:1 pixel zoom of the region of the photo with the highest contrast and sharpest edges.” If that region is blurry, you know your photo is blurry. If that region is not your subject, you know you had bad focus. A button to cycle through the sharpest edges in the photo would help confirm this.
Some Nikon cameras had a mode to do this automatically — take 3 photos, and save the one with the least blur. I wish that mode appeared on my cameras.
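Both the review mode I wish for and that Nikon mode come down to the same primitive: a blur score. A standard one is the variance of the Laplacian, sketched here on synthetic test frames (the checkerboard and its smeared copy are stand-ins for a sharp and a shaken exposure):

```python
import numpy as np

def sharpness(img):
    """Variance of a 4-neighbour Laplacian; higher means sharper edges."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def keep_sharpest(frames):
    """Keep the frame with the highest blur score (the save-the-best idea)."""
    return max(frames, key=sharpness)

# Synthetic demo: a checkerboard versus a smeared copy of it.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3
print(keep_sharpest([blurred, sharp]) is sharp)  # True
```

Run per tile instead of per frame, the same score would also find the sharpest-edge region to zoom into for review.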
So all in all, it’s a wonder they work at all sometimes. This year I had high hopes, because one crew built an 11-floor tower out of giant steel I-beams. But it wobbled a great deal at the top with all the constant traffic. It didn’t wobble as much on the lower floors, but sadly at night they put up a giant screen and projected rather uninteresting photos onto it. The combination of the screen and the projector light shining right at you made photos from the stable levels impossible.