Submitted by brad on Mon, 2015-01-19 12:38.
For many decades, cameras have come with a machine screw socket (1/4”-20) in the bottom to mount them on a tripod. This is slow to use and easy to get loose, so most photographers prefer to use a quick-release plate system. You screw a plate on the camera, and your tripod head has a clamp to hold those plates. The plates are ideally custom made so they grip an edge on the camera to be sure they can’t twist.
There are different kinds of plates, but in the middle to high end, most people have settled on a metal dovetail plate first made by Arca Swiss. It’s very common with ball-heads, but still rare on pan-heads and lower end tripods, which use an array of different plate styles, including rectangles and hexagons.
The plates have issues — they add weight to your camera and put something with protruding or semi-sharp edges on the bottom. They sometimes block doors on the bottom of the camera. If they are not custom, they can twist, and if they are custom, they can be quite expensive. They often have tripod holes of their own, but those must be off-center.
Arca style dovetails are quite sturdy, but must be made of metal. With only the two sides clamped, they can slide to help you position the camera. It is hard, but not impossible, to make them snap in, so they are usually screwed and unscrewed, which takes time and work and often involves a knob that can get in the way of other things. They are 38mm wide, and normally the dovetails run parallel to the sensor plane, though for strength the plates on big lenses are sometimes perpendicular, which is not an issue for most ball heads.
It’s time the camera vendors accepted that the tripod screw is a legacy part and move to some sort of quick release system standardized and built right into the cameras. The dovetail can probably be improved on if you’re going to start from scratch, and I’m in favour of that, but for now it is almost universal among serious photographers so I will discuss how to use that.
I have seen a few products like this — for example, the E-mount to EOS adapter I bought includes a tripod wedge which has both a screw and an Arca dovetail. (Considering the huge difference in weight between my mirrorless cameras and old Canon glass, this mount is a good idea.)
Many cameras are deep enough that a 38mm wide dovetail (with tripod hole) could be built into the base of the camera. You would have to open the clamp fully to insert unless you wanted the dovetails to run the entire length, which you don’t, but I think most photographers would accept that to have something flush. It would expand the size of the camera slightly, perhaps, but much less than putting on a plate does — and everybody with high end cameras puts on a plate.
Today, though, many cameras have flip-up screens. They are certainly very handy. As people want their screens as big as possible, this can be an issue as the screen goes down flush with the bottom. If there’s a clamp on the bottom, it can block your screen from getting out. One idea would be to design clamps that taper away at the back, or to accept the screen won’t go down all the way.
The smaller cameras
A lot of new cameras are not 38mm deep, though. Putting plates on them is even worse, as the plates stick out a lot. While again a new design would help solve this problem, one option would be to standardize on a narrower dovetail, and make clamps with an adapter that can slide in, seat securely so it won’t pop out when pressure is applied, and hold the narrower plate. The alternative is a clamp with a great deal of travel, but that tends to take a lot of time to adjust. (I will note that there are two larger classes of dovetails used for heavy telescopes, known as the Vixen and the Losmandy “D”. Some Vixen clamps are actually able to grab an Arca plate, even though Arca plates are not as deep, because of the valley often formed between the dovetail and the top of the plate.)
It’s also possible to have a 2-level clamp that can grab a smaller plate, though there must be a height gap between the two levels, which may or may not work.
Narrower plates would be used only on smaller and lighter cameras, where not as much strength is needed. However, here again it might be time to design something new.
A locking pin
For some time, camcorders have established a pattern of having a small hole forward of the tripod screw for a locking pin. This allows a much sturdier mount that can’t twist, with no need to grab edges of the camera body. Still cameras could do well to establish standard pin positions — perhaps one forward, and one to the side. All they have to do is add small indentations for these pins, which typically come spring-loaded on the plates so you can still use the plates if the hole is not there. (The camcorder pin is placed forward of the tripod hole, but often “forward” is in the direction of the rails.)
For small cameras, it would be necessary to put the dovetail rails perpendicular to the sensor, and they would be very short. That’s OK because those cameras are small and light. The clamp’s screws would need to be flush with the top of the clamp. (This is sometimes true today but not always.)
The presence of a pin would allow small, generic clamps to sturdily hold many cameras. For larger cameras, bigger plates would be available. The cost and size of plates would go down considerably.
The tripod leg screw
The world also standardized on a bigger machine screw — 3/8”-16 thread — to connect tripod legs to tripod heads. This is a stronger screw, but it could also use improvement. The fact that it takes time to switch tripod heads is not that big a deal for most photographers, but the biggest problem is that there is no way, other than friction, to lock it, and many is the time my tripod head has come loose from the legs. Here, some sort of clamp or retractable pin would be good, but frankly another clamp (quick release or not) might make sense, and it could become a standard for heavier duty cameras as well.
Something entirely new
I would leave it to a professional mechanical engineer to design something new, but I think a great system would scale to different sizes, so that one can have variants of it for small, light devices, and variants for big, heavy gear, with a way that the larger clamps could easily adapt to hold some of the smaller sizes. I would also design it to be backwards compatible if practical — it is probably easy to leave a 1/4-20 hole in the center, and it may even be possible in the larger sizes to have dovetails that can be gripped by such clamps.
Submitted by brad on Mon, 2014-12-01 09:52.
On Saturday I wrote about how we’re now capturing the world so completely that people of the future will be able to wander around it in accurate VR. Let’s go further and see how we might shoot the video resolutions of the future, today.
Almost everybody has a 1080p HD camera with them — almost all phones and pocket cameras do this. HD looks great but the future’s video displays will do 4K, 8K and full eye-resolution VR, and so our video today will look blurry the way old NTSC video looks blurry to us. In a bizarre twist, in the middle of the 20th century, everything was shot on film at a resolution comparable to HD. But from the 70s to 90s our TV shows were shot on NTSC tape, and thus dropped in resolution. That’s why you can watch Star Trek in high-def but not “The Wire.”
I predict that complex software in the future will be able to do a very good job of increasing the resolution of video. One way it will do this is through making full 3-D models of things in the scene using data from the video and elsewhere, and re-rendering at higher resolution. Another way it will do this is to take advantage of the “sub-pixel” resolution techniques you can do with video. One video frame only has the pixels it has, but as the camera moves or things move in a shot, we get multiple frames that tell us more information. If the camera moves half a pixel, you suddenly have a lot more detail. Over lots of frames you can gather even more.
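The sub-pixel idea can be sketched in toy form. In this illustrative Python snippet (the names and the point-sampling model are my own simplification — real sensors average light over each pixel’s area and the shifts must be estimated from the footage), four low-resolution frames captured at different sub-pixel shifts are interleaved back onto a fine grid, recovering the full-resolution signal:

```python
import numpy as np

def capture_frames(hr, factor):
    """Simulate low-res frames, each sampled at a different sub-pixel offset.
    (Point sampling keeps the idea clear; real sensors average over the pixel.)"""
    return [hr[k::factor] for k in range(factor)]

def shift_and_add(frames, factor):
    """Interleave the shifted frames back onto the fine grid."""
    out = np.empty(len(frames[0]) * factor, dtype=frames[0].dtype)
    for k, frame in enumerate(frames):
        out[k::factor] = frame
    return out

hr = np.sin(np.linspace(0, 12 * np.pi, 256))   # a "scene" with fine detail
frames = capture_frames(hr, factor=4)          # four shifted low-res views
recovered = shift_and_add(frames, factor=4)
print(np.allclose(recovered, hr))              # True: the shifts restore full detail
```

Real super-resolution has to cope with noise, unknown motion and pixel-area averaging, so it only approaches, rather than exactly recovers, the fine grid — but the extra information really is there.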
This will already happen with today’s videos, but what if we help them out? For example, if you have still photographs of the things in the video, this will allow clever software to fill in more detail. At first, it will look strange, but eventually the uncanny valley will be crossed and it will just look sharp. Today I suspect most people shooting video on still cameras also shoot some stills, so this will help, but there’s not quite enough information if things are moving quickly, or new sides of objects are exposed. A still of your friend can help render them in high-res in a video, but not if they turn around. For that the software just has to guess.
We might improve this process by designing video systems that capture high-res still frames as often as they can and embed them to the video. Storage is cheap, so why not?
A typical digital video/still camera has 16 to 20 million pixels today. When it shoots 1080p HD video, it combines those pixels, so that 6 to 10 still pixels go into every video pixel. Ideally this is done by hardware right in the imaging chip, but it can also be done to a lesser extent in software. A few cameras already shoot 4K, and this will become common in the next couple of years. In this case, they may just use the pixels one for one, since it’s not so easy to map a 16 megapixel 3:2 still array onto a 16:9 8 megapixel 4K image — you can’t just combine 2 pixels per pixel.
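To make the arithmetic concrete, here is a quick check — the sensor dimensions are illustrative, chosen to give roughly 16 megapixels at 3:2 — of how many still pixels feed each video pixel, and why 4K does not bin to an integer:

```python
# Hypothetical sensor and output sizes, just to make the ratios concrete.
sensor = (4899, 3266)          # ~16 MP at a 3:2 aspect ratio
hd, uhd = (1920, 1080), (3840, 2160)

def ratios(src, dst):
    """Linear scale factor per axis from source array to output frame."""
    return src[0] / dst[0], src[1] / dst[1]

print(ratios(sensor, hd))      # ~(2.55, 3.02): roughly 7-8 still pixels per HD pixel
print(ratios(sensor, uhd))     # ~(1.28, 1.51): no clean integer binning for 4K
```

The HD case lands in the 6-to-10-pixels range mentioned above; the 4K case shows why a chip may simply read a one-for-one crop instead.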
Most still cameras won’t shoot a full-resolution video (i.e. a 6K or 8K video) for several reasons:
- As designed, you simply can’t pull that much data off the chip per unit time. It’s a huge amount of data. Even with today’s cheap storage, it’s also a lot to store.
- Still camera pipelines are built to compress JPEGs, but to record video you want a video compression algorithm, even if you can afford the storage for individually compressed frames.
- Nobody has displays to display 6K or 8K video, and only a few people have 4K displays — though this will change — so demand is not high enough to justify these costs
- When you combine pixels, you get less noise and can shoot in lower light. That’s why your camera can make a decent night-time video without blurring, but it can’t shoot a decent still in that lighting.
What is possible is a sensor which is able to record video (at the desired 30fps or 60fps rate) and also pull off full-resolution stills at some lower frame rate, as long as the scene is bright enough. That frame rate might be something like 5 or even 10 fps as cameras get better. In addition, hardware compression would combine the stills and the video frames to eliminate the great redundancy, though only to a limited extent because our purpose is to save information for the future.
Thus, if we hand the software of the future an HD video along with 3 to 5 frames/second of 16 megapixel stills, I am comfortable it will be able to make a very decent 4K video from it most of the time, and often a decent 6K or 8K video. As noted, a lot of that can happen even without the stills; they will just improve the situation. Those situations where it can’t — fast changing objects — are also situations where video gets blurred and we are tolerant of lower resolution.
It’s a bit harder if you are already shooting 4K. To do this well, we might like a 38 megapixel still sensor, with 4 pixels for every pixel in the video. That’s the cutting edge in high-end consumer gear today, and will get easier to buy, but we now run into the limitations of our lenses. Most lenses can’t deliver 38 million sharp pixels — not even many high-end professional lenses can do that. So it might not deliver the complete 8K experience, but it will get a lot closer than you can from an “ordinary” 4K video.
If you haven’t seen 8K video, it’s amazing. Sharp has been showing their one-of-a-kind 8K video display at CES for a few years. It looks much more realistic than 3D videos of lower resolution. 8K video can subtend over 100 degrees of viewing angle at one pixel per minute of arc, which is about the resolution of the sensors in your eye. (Not quite, as your eye also does sub-pixel tricks!) At 60 degrees — which is more than any TV is set up to subtend — it’s the full resolution of your eyes, and provides an actual limit on what we’re likely to want in a display.
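The arc-minute arithmetic is easy to verify (8K is 7680 pixels wide, and foveal acuity is roughly one arc-minute, i.e. 60 pixels per degree):

```python
pixels_8k = 7680
degrees = pixels_8k / 60              # 60 arc-minutes per degree
print(degrees)                        # 128.0 -- "over 100 degrees" at one px/arc-min
# At a 60-degree field of view, 8K delivers about 2 pixels per arc-minute,
# leaving headroom even for the eye's sub-pixel tricks.
print(pixels_8k / (60 * 60))
```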
And we could be shooting video for that future display today, before the technology to shoot that video natively exists.
Submitted by brad on Thu, 2014-11-27 14:32.
Recently I tried the Facebook/Oculus Rift Crescent Bay prototype. It has more resolution (I will guess 1280 x 1600 per eye or similar) and runs at 90 frames/second. It also has better head tracking, so you can walk around a small space with some realism — but only a very small space. Still, it was much more impressive than the DK2 and a sign of where things are going. I could still see a faint screen door effect; they were annoyed that I could see it.
We still have a lot of resolution gain left to go. The human eye resolves about a minute of arc, which means about 5,000 pixels for a 90 degree field of view. Since we have some ability for sub-pixel resolution, it might be argued that 10,000 pixels of width are needed to reproduce the world. But that’s not that many Moore’s law generations from where we are today. The graphics rendering problem is harder, though with high frame rates, if you can track the eyes, you need only render full resolution where the fovea of the eye is. This actually gives a boost to onto-the-eye systems like a contact lens projector or the rumoured Magic Leap technology, which may project with lasers onto the retina, as they need to render far fewer pixels. (Get really clever, and realize the optic nerve only has about 600,000 neurons, and in theory you can get full real-world resolution with half a megapixel if you do it right.)
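A rough calculation shows why foveated rendering helps so much. Assuming a 90 degree square field at one pixel per arc-minute, full acuity only across a roughly 5 degree fovea, and one eighth the linear resolution everywhere else (all of these are assumed, illustrative numbers), the pixel budget collapses:

```python
full = (90 * 60) ** 2               # uniform full-acuity field: ~29 megapixels
fovea = (5 * 60) ** 2               # full acuity across a ~5 degree fovea
periphery = ((90 * 60) // 8) ** 2   # 1/8 linear resolution everywhere else
print(full, fovea + periphery)      # 29160000 545625 -- about half a megapixel
```

That half-megapixel figure lines up neatly with the ~600,000 neurons of the optic nerve.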
Walking around Rome, I realized something else — we are now digitizing our world, at least the popular outdoor spaces, at a very high resolution. That’s because millions of tourists are taking billions of pictures every day of everything from every angle, in every lighting. Software of the future will be able to produce very accurate 3D representations of all these spaces, both with real data and reasonably interpolated data. They will use our photographs today and the better photographs tomorrow to produce a highly accurate version of our world today.
This means that anybody in the future will be able to take a highly realistic walk around the early 21st century version of almost everything. Even many interiors will be captured in smaller numbers of photos. Only things that are normally covered or hidden will not be recorded, but in most cases it should be possible to figure out what was there. This will be trivial for fairly permanent things, like the ruins in Rome, but even possible for things that changed from day to day in our highly photographed world. A bit of AI will be able to turn the people in photos into 3-D animated models that can move within these VRs.
It will also be possible to extend this VR back into the past. The 20th century, before the advent of the digital camera, was not nearly so photographed, but it was still photographed quite a lot. For persistent things, the combination of modern (and future) recordings with older, less frequent and lower resolution recordings should still allow the creation of a fairly accurate model. The further back in time we go, the more interpolation and eventually artistic interpretation you will need, but very realistic seeming experiences will be possible. Even some of the 19th century should be doable, at least in some areas.
This is a good thing, because as I have written, the world’s tourist destinations are unable to bear the brunt of the rising middle class. As the Chinese, Indians and other nations get richer and begin to tour the world, their greater numbers will overcrowd those destinations even more than the waves of Americans, Germans and Japanese that already mobbed them in the 20th century. Indeed, with walking chairs (successors of the BigDog Robot) every spot will be accessible to everybody of any level of physical ability.
VR offers one answer to this. In VR, people will visit such places and get the views and the sounds — and perhaps even the smells. They will get a view captured at the perfect time in the perfect light, perhaps while the location is closed for digitization and thus empty of crowds. It might be, in many ways, a superior experience. That experience might satisfy people, though some might find themselves more driven to visit the real thing.
In the future, everybody will have had a chance to visit all the world’s great sites in VR while they are young. In fact, doing so might take no more than a few weekends, changing the nature of tourism greatly. This doesn’t alter the demand for the other half of tourism — true experience of the culture, eating the food, interacting with the locals and making friends. But so much commercial tourism — people being herded in tour groups to major sites and museums, then eating at tour-group restaurants — can be replaced.
I expect VR to reproduce the sights and sounds and a few other things. Special rooms could also reproduce winds and even some movement (for example, the feeling of being on a ship.) Right now, walking is harder to reproduce. With the OR Crescent Bay you could only walk 2-3 feet, but one could imagine warehouse size spaces or even outdoor stadia where large amounts of real walking might be possible if the simulated surface is also flat. Simulating walking over rough surfaces and stairs offers real challenges. I have tried systems where you walk inside a sphere but they don’t yet quite do it for me. I’ve also seen a system where you are held in place and move your feet in slippery socks on a smooth surface. Fun, but not quite there. Your body knows when it is staying in one place, at least for now. Touching other things in a realistic way would require a very involved robotic system — not impossible, but quite difficult.
Also interesting will be immersive augmented reality. There are a few approaches I know of that people are developing:
- With a VR headset, bring in the real world with cameras, modify it and present that view to the screens, so they are seeing the world through the headset. This provides a complete image, but the real world is reduced significantly in quality, at least for now, and latency must be extremely low.
- With a semi-transparent screen, show the augmentation with the real world behind it. This is very difficult outdoors, and you can’t really stop bright items from the background mixing with your augmentation. Focus depth is an issue here (and is with most other systems.) In some plans, the screens have LCDs that can go opaque to block the background where an augmentation is being placed.
- CastAR has you place retroreflective cloth in your environment, and it can present objects on that cloth. They do not blend with the existing reality, but replace it where the cloth is.
- Projecting into the eye with lasers from glasses, or on a contact lens can be brighter than the outside world, but again you can’t really paint over the bright objects in your environment.
Getting back to Rome, my goal would be to create an augmented reality that let you walk around ancient Rome, seeing the buildings as they were. The people around you would be converted to Romans, and the modern roads and buildings would be turned into areas you can’t enter (since we don’t want to see the cars, and turning them into fast chariots would look silly.) There have been attempts to create a virtual walk through ancient Rome, but being able to do it in the real location would be very cool.
Submitted by brad on Sat, 2012-09-22 17:10.
A follow-up thought about yesterday’s shuttle fly-by and panorama. I was musing, might this be perhaps the most photographed single thing in human history to date?
Here’s the reasoning. Today there are more cameras and more photographers than ever, and people use them all the time in a way that continues to grow. To be a candidate for a most-photographed event, you would need to be recent, and you would need to take place in front of a ton of people, ideally with notice. It seemed like just about everybody in Sacramento, the Bay Area and LA was out for this and holding up a phone or camera.
Of course, many objects are more photographed — like the Golden Gate Bridge the shuttle flew over — but I’m talking here about the event rather than the object. And this was an event that moved over the course of thousands of miles. Other candidates:
- The other shuttle fly-overs done over New York and Washington — also with large populations
- Total eclipses of the sun which go over highly populated areas. The 2009 eclipse went over Shanghai, Varanasi and many other hugely populated areas but was clouded out for many. Nobody has yet made a photo of an eclipse that looks like an eclipse, of course — I’ve seen them all, including many of the clever HDRs and overlays — but that doesn’t stop people from trying.
- The 1999 eclipse did go over a number of large European cities, but this was before the everybody-is-photographing era
- Most lunar eclipses are visible to as much as half the world, though they are hard to photograph with consumer camera gear, and only a fraction of people go out to watch and photograph them. Even so, one could easily be a winner.
Prior to the digital era, a possible winner might be the moon landing. Back in 1969, every family had a camera, though usage wasn’t nearly what it is today. However, I remember the TV giving lessons on how to photograph a TV screen. Everybody was shooting their TV for the launches and the walk on the moon. Terrible pictures (much like early camera phone pictures) but people took them to be a part of the event. I recall taking one myself though I have no idea where it is.
Of course there may be objective ways to measure this today, by tracking the number of photos on photo sharing and social sites, and extrapolating the winner. If the shuttle is the winner for now, it won’t last long. Photography is going to grow even more.
I should also note that remote photography — recording video of the broadcast, as we did for Apollo — is clearly much larger. For those giant events viewed by billions — World Cup, Olympics, Oscars etc. — huge numbers of people are recording them, at least temporarily.
Submitted by brad on Fri, 2012-09-21 18:28.
Today marked the last trip through the air for the space shuttle, as the Endeavour was carried to LA to be installed in a museum. The trip included fly-overs of the Golden Gate bridge and many other landmarks in SF and LA, and also a low pass over NASA Ames at Moffett Field, where I work at Singularity University. A special ceremony was done on the tarmac, and I went to get a panoramic photo. We all figured the plane would come along the airstrip, but they surprised us, having it fly a bit to the west so it suddenly appeared from behind the skeleton of Hangar One, the old dirigible hangar. That turned out to be bad for my photography, as I didn’t get much advance notice, and the shot of the crowd I had taken a few minutes before had everybody expectantly looking along the runway, not towards the west where the plane and shuttle appear in my photo.
However, it did make for a very dramatic arrival. So while different parts of this shot are at slightly different times, it does capture the scene of Moffett field and the crowd awaiting the shuttle, and its arrival. I do however have a nice hi-res photo for you to enjoy as well as the panoramic shot of the Endeavour shuttle fly-by.
Submitted by brad on Tue, 2012-03-20 10:05.
I’m back from our fun “Singularity Week” in Tel Aviv, where we did a 2 day and a 1 day Singularity University program. We judged a contest awarding two SU scholarships to Israelis, and I spoke to groups like Garage Geeks, Israeli Defcon and GizaVC’s monthly gathering, and even went into the west bank to address the Palestinian IT Society and announce a scholarship contest for SU.
Of course I did more photography, though the weather did not cooperate. However, you will see six new panoramas on my Israel Panorama Page and my Additional Israeli panoramas. My favourite is the shot of the Western Wall during a brief period of sun in a rainstorm.
In Ramallah, the telecom minister for the Palestinian Authority asked us, jokingly, “how can this technology end the occupation?” But I wanted to come up with a serious answer. Everybody who goes to the Middle East tries to come up with a solution or at least some sort of understanding. Israelis get a bit sick of it, annoyed that outsiders just don’t understand the incredible depth and nuance of the problem. Outsiders imagine the Israelis and Palestinians are so deep in their conflict that they are like fish who no longer see the water.
In spite of those warnings, here’s my humble proposal for how to use new media technology to help.
Take classrooms of Israelis and classrooms of Palestinians and give them a mandatory school assignment: to be paired with an online buddy from the “other side.” Students would be paired by a matching algorithm, considering things like their backgrounds, language skills, and the languages and subjects they want to learn. The other student, with whom they would interact over online media and video-conferencing (like Skype or Google Hangouts), would become a study partner, and the students would collaborate on projects suitable to them. They might also help one another learn a language, like English, Arabic or Hebrew. Students would be encouraged to add their counterpart to their social networking circles.
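As a toy illustration of the pairing step — all of the names, fields and weights below are invented, not a proposal for the real scoring — one could score compatibility on shared subjects and complementary languages, then match the highest-scoring pairs greedily:

```python
# Toy pairing sketch: fields and weights are invented for illustration.
def score(a, b):
    shared = len(set(a["subjects"]) & set(b["subjects"]))
    language = 1 if a["wants_to_learn"] in b["speaks"] else 0
    return 2 * shared + language

def pair_students(group1, group2):
    # Consider every cross-group pair, best scores first, and match greedily.
    candidates = sorted(
        ((score(a, b), i, j)
         for i, a in enumerate(group1) for j, b in enumerate(group2)),
        reverse=True)
    used1, used2, pairs = set(), set(), []
    for s, i, j in candidates:
        if i not in used1 and j not in used2:
            used1.add(i)
            used2.add(j)
            pairs.append((group1[i]["name"], group2[j]["name"]))
    return pairs

group1 = [
    {"name": "Noa", "subjects": ["math", "music"], "speaks": ["he", "en"], "wants_to_learn": "ar"},
    {"name": "Avi", "subjects": ["history"], "speaks": ["he"], "wants_to_learn": "en"},
]
group2 = [
    {"name": "Rania", "subjects": ["math"], "speaks": ["ar", "en"], "wants_to_learn": "he"},
    {"name": "Omar", "subjects": ["history", "music"], "speaks": ["ar"], "wants_to_learn": "he"},
]
print(pair_students(group1, group2))
```

A real deployment would want a proper assignment algorithm (greedy matching is not optimal) and far richer criteria, but the shape of the problem is this simple.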
Both students would also be challenged to write an essay attempting to see the world from the point of view of the other. They will not be asked to agree with it, but simply to be able to write from that point of view. And their counterpart must agree at the end that it mostly does reflect their point of view. Students would be graded on this.
It would be important not to have this be a “forced friendship.” The students would be told it was not demanded that they forget their preconceptions, nor that they agree with everything their counterpart says. In fact, they would be encouraged to avoid conflict, and not to immediately contradict statements they think are false. The goal is not to convince their counterpart of things but to understand, and to help them understand. In particular, projects should be set up where the students naturally work together, viewing the teachers as the common enemy.
At the end of the year, a meeting would be arranged. For example, west bank students would be thrilled at a chance to visit the beach or some amusement park. A meeting on the west bank border on neutral ground might make sense too, though parents would be paranoid about safety and many would veto trips by their children into the west bank.
Would this bring peace? Hardly on its own. But it would improve things if every student at least knew somebody from outside their world, and had tried to understand their viewpoint even without necessarily agreeing with it. And some of the relationships would last, and the social networks would grow. Soon each student would have at least one person in their network from outside their formerly insular world. This would start with some schools, but ideally it would be something for every student to do. And it could even be expanded to include online pen-pals from other countries. With some students it would fail, particularly older ones whose views are already set. Alas, for younger ones, finding a common language might be difficult. Few Israelis learn Arabic, more Palestinians learn Hebrew and all eventually want to learn English. Somebody has to provide computers and networking to the poorer students, but it seems the cost of this is small compared to the benefit.
Submitted by brad on Sun, 2011-12-18 14:27.
Earlier I wrote about desires for the next generation of DSLR camera and a number of readers wrote back that they wanted to be able to swap the sensor in their camera, most notably so they could put in a B&W sensor with no colour filter mask on it. This would give you better B&W photos and triple your light gathering ability, though for now only astronomers are keen enough on this to justify filterless cameras.
I’m not sure how easy it would be to make a sensor that could be swapped, due to a number of problems — dust, connectivity and more. In fact I wonder if an idea I wrote about earlier — lenses with integrated sensors — might have a better chance of being the future.
Here’s another step in that direction — a “foveal” digital camera that has tiny pixels in the middle of the frame and larger ones out at the edges. Such sensors have been built for a variety of purposes in the past, but might they have application for serious photography?
For example, the 5D Mark II I use has 22 million 6.4 micron pixels. Being that large, they are low noise compared to the smaller pixels found in P&S cameras. But the full frame requires very large, very heavy, very expensive lenses. Getting top quality over the large image circle is difficult and you pay a lot for it.
Imagine that this camera has another array, perhaps of around 16 million pixels of 1.6 micron size in the center. This allows it to shoot a 16MP picture in the small crop zone or a 22MP picture on the full frame. (It also allows it to shoot a huge 252 megapixel image that is sharp in the center but interpolated around the edges.) The central region would have transistors that could combine all the wells of a particular colour in the 4x4 array that maps to one large pixel. This is common in the video modes on DSLR cameras, and helps produce pixels that are much lower noise than the tiny pixels are on their own, but not as good as the 16x larger big pixels, though the green pixels, which make up half the area, would probably do decently well.
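Here is a minimal numpy sketch of the well-combining step, using illustrative dimensions (one colour plane only — a real chip would bin within each Bayer plane, in hardware):

```python
import numpy as np

def bin_wells(raw, factor=4):
    """Sum each factor x factor block of small wells into one large-pixel value."""
    h, w = raw.shape
    return raw.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

small = np.full((3264, 4896), 10.0)   # ~16 MP of 1.6 micron wells (one plane)
large = bin_wells(small)              # ~1 MP of combined "6.4 micron" pixels
print(large.shape, large[0, 0])       # (816, 1224) 160.0
```

Summing 16 wells improves signal relative to photon noise, but each small well contributes its own read noise — one reason the combined pixel still trails a native large pixel, as noted above.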
As a result, this camera would not be as good in low light, and the central region would be no better in low light than today’s quality P&S cameras. But that’s actually getting pretty good, and the results at higher light levels are excellent.
The win is that you would be able to use a 100mm f/2 lens with the field of view of a 400mm lens for a 16MP picture. It would not be quite as good as a real 400mm f/2.8L Canon lens, of course. But it could compare decently — and that 400mm lens is immense, heavy and costs $10,000 — far more than the camera body. On the other hand, a decent 100mm f/2.8 lens aimed at the smaller image circle would cost a few hundred dollars at most, and do a very good job. A professional wildlife or sports photographer might still seek the $10K lens, but a lot of photographers would be much happier to carry the small one, and not just for the saved cost. You would not get the very shallow depth of field of the 400mm f/2.8 — it would be about double with the small-sensor 100mm f/2 — but many would consider that a plus in this situation, not a minus.
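A back-of-envelope check of the crop factor, with the array geometry assumed from the 16 MP, 1.6 micron figures above:

```python
pitch_um = 1.6
px_w, px_h = 4899, 3266                    # ~16 MP at 3:2 (assumed layout)
width_mm = px_w * pitch_um / 1000          # width of the centre array
crop = 36 / width_mm                       # vs. a 36mm-wide full frame
print(round(width_mm, 1), round(crop, 1))  # 7.8 4.6
print(round(100 * crop))                   # 459 -- roughly the "400mm look"
```

So the centre crop works out to a bit over 4x with these assumed numbers; a 400mm-equivalent field of view is the right order.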
You could also use 3.2 or 2.1 micron pixels for less noise and less of a crop (or “focal length multiplier,” as it is sometimes incorrectly called).
One other benefit is that, if your lens can deliver it, and particularly when you have decent lighting, you would get superb resolution in the center of your full frame photos, as the smaller pixels are combined. You would get better colour accuracy, without as many Bayer interpolation artifacts, as you would truly sense each colour in every pixel, and much better contrast in general. You would be making use of the fact that your lens is sharper in the center. JPEG outputs would probably never include the 250 megapixel interpolated image, but the raw output could record all the pixels when it is not necessary to combine the wells to improve signal/noise.
Submitted by brad on Thu, 2011-12-15 18:10.
I have put up a new gallery of panoramic photos from my trip earlier this year to Botswana (with short stays in South Africa and Zimbabwe.) There are some interesting animal and scenic shots, and also some technically difficult shots such as Victoria Falls from a helicopter. (I also have some new shots of Niagara falls from a fixed wing plane which is even harder.)
In the case of the helicopter, which is still moving as it was just a regular tour helicopter, the challenge is to shoot very fast and still not make mistakes in coverage. I took several panos but only a few turned out. Victoria Falls can really only be viewed from the air — on the ground the viewing spots during high water season are in so much mist that it’s actually raining hard all around you, and in any event you can’t see the whole falls. One lesson is to try not to be greedy and shoot a 200mm pano. Stick to 50 to 100mm at most.
On this trip I took along a 100-400mm lens, and it was my first time shooting with such a long lens routinely. I knew intellectually about the much smaller depth of field at 400mm, but in spite of this I still screwed up a number of panoramas, since I normally set focus at one fixed distance for the whole pano. Stopping down 400mm only helps a little bit. Wildlife will not sit still for you, creating extra challenges. I already showed you this elephant shot but I am also quite fond of this sunset on the Okavango delta. While this shot may not appear to have wildlife, the sun is beaming through giant spiderwebs which are the work of “social spiders” which live in nests, all building the same web. I recommend zooming in on the scene in the center. I also have some nice regular photos of this which will be up later.
I am still a bit torn about the gallery of ordinary aspect ratio photos. I could put them up on my photo site easily enough, but I’ve noticed photos get a lot more commentary and possibly viewing when done on Google+/Picasa. This is a sign of a disturbing trend away from the distributed web, where people and companies had their own web sites and got pagerank and subscribers, to the centralized AOL style model of one big site (be it Facebook or Google Plus) which is attractive because of its social synergies.
Submitted by brad on Thu, 2011-11-17 23:03.
I shoot with the Canon 5D Mark II. While officially not a pro camera, the reality is that a large fraction of professional photographers use this camera rather than the EOS-1D cameras, which are faster but much bulkier and in some ways even inferior to the 5D. But it’s been out a long time now, and everybody is wondering when its successor will come and what features it will have.
Each increment in the DSLR world has been quite dramatic over the last decade. There’s always been a big increase in resolution with the new generation, but now at 22 megapixels there’s less call for that. While there are lenses that deliver more than 22 megapixels sharply, they are usually quite expensive, and while nobody would turn down 50mp for free, there just wouldn’t be nearly as much benefit from it as from the last doubling. Here’s a look at features that might come, or at least be wished for.
More pixels may not be important, but everybody wants better pixels.
- Low noise / higher ISO: The 5D2 astounded us with ISO 3200 shots that aren’t very noisy. Unlike megapixels, there is almost no limit to how high we would like ISO to go at low noise levels. Let’s hope we see 12,500 or more at low noise, plus even 50,000 noisy. Due to physics, smaller pixels have higher noise, so this is another reason not to increase the megapixel count.
- 3 colour: The value of full 3-colour samples at every pixel has been overstated in the past. The reason is that Bayer interpolation is actually quite good, and almost every photographer would rather have 18 million Bayer pixels than 6 million full RGB pixels. It’s not even a contest. As we start maxing out our megapixels to match our lenses, this is one way to get more out of a picture. But if it means smaller pixels, it causes noise. The Foveon approach, which stacks the 3 colour sensors at each pixel, would finally be OK here. But I don’t expect this to be very likely.
- Higher dynamic range: How about 16 bits per pixel, or even 24? HDR photography is cool but difficult. But nobody would turn down more range, if only for the ability to change exposure decisions after the fact and bring out those shadows or highlights. Automatic HDR in the camera would be nice, but it’s no substitute for true high-dynamic-range pixels.
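On the 3-colour point above: a toy sketch of why Bayer interpolation holds up so well on smooth regions, which make up most of a typical photo. The pattern layout and sample values here are invented for illustration; real demosaic algorithms are far more sophisticated.

```python
# Toy Bayer (RGGB) sample-and-interpolate on a uniform patch.
def bayer_color(x, y):
    """Which colour an RGGB mosaic senses at pixel (x, y)."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def demosaic_average(mosaic, x, y, color):
    """Recover a missing colour at (x, y) by averaging the nearest
    neighbours (including diagonals) that sensed that colour."""
    vals = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(mosaic) and 0 <= nx < len(mosaic[0]):
                if bayer_color(nx, ny) == color:
                    vals.append(mosaic[ny][nx])
    return sum(vals) / len(vals)

# A smooth reddish patch: the sensor reads 200 at red sites, 50 at
# green/blue sites. Interpolation recovers the missing colours exactly.
scene = {'R': 200, 'G': 50, 'B': 50}
mosaic = [[scene[bayer_color(x, y)] for x in range(4)] for y in range(4)]
# Pixel (1, 1) sensed blue; its red and green values come from neighbours.
print(demosaic_average(mosaic, 1, 1, 'R'))  # 200.0
print(demosaic_average(mosaic, 1, 1, 'G'))  # 50.0
```

It is only at fine detail near the pixel pitch that the interpolation guesses wrong, which is why artifacts show up on sharp edges and textures rather than in smooth areas.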
Video & Audio
Due to the high quality video in the 5D2, many professional videographers now use it. Last week Canon announced new high-end video cameras aimed at that market, so they may not focus on improvements in this area. If they do, people might like to see things like 60 frame-per-second video, the ability to focus while shooting, higher ISO, and 4K video.
Submitted by brad on Mon, 2011-11-07 22:23.
A little self-plug: I have an article introducing panoramic photographic technique in the November issue of Photo Technique, with a few panos in it. This is old world journalism, folks — you have to read it on paper, at least for now.
In the meantime, I’m working on upcoming galleries of photos from Botswana, Eastern Europe and Burning Man for you. I have already placed two of my Botswana photos into my gallery of favourite panoramas. This includes a lovely group of elephants in Savuti and a sunset on the Okavango delta that is one of my new favourites.
We decided to go to Harvey’s pan in Savuti one afternoon and lucked upon a large breeding herd of elephants just on their way there. I caught them in one of my first long lens panoramas. Long lens panos are fairly difficult due to the limited depth of field, but they get great detail on the baby elephant.
Much more to come!
Submitted by brad on Fri, 2011-08-12 16:40.
As I prepare for Burning Man 2011, I realized I had not put my gallery of regular sized photos up on the web.
Much earlier I announced my gallery of giant panoramas of 2010, which features my largest photos in a new pan-and-zoom fullscreen viewer, but I had neglected to put up the regular sized photos.
So enjoy: Gallery of photos of Burning Man 2010
I still need to select and caption 2007 and 2009 some day.
Submitted by brad on Mon, 2011-06-13 10:30.
This blog has been silent the last month because I’ve been on an amazing trip to Botswana and a few other places. There will be full reports and lots of pictures later, but today’s idea comes from experiments in shooting HD video using my Canon 5D Mark II. As many people know, while the 5D is an SLR designed for stills, it also shoots better HD video than all but the most expensive pro video cameras, so I did a bit of experimenting.
The internal mic in the camera is not very good, and picks up not just wind but every little noise on the camera, including the noises of the image stabilizer found in many longer lenses. I brought a higher quality mic that mounts on the camera, but it wasn’t always mounted because it gets a little in the way of both regular shooting and putting the camera away. When I used it, I got decent audio, but I also got audio of my companion and our guide rustling or shooting stills with their own cameras. To shoot a real video with audio I had to have everybody be silent. This is why much of the sound you hear in nature documentaries is actually added later, very often created by Foley artists. A few times I also forgot to turn on my external mic, which requires a small amount of power. That was just me being stupid — as the small battery lasts for 300 hours, I could have just left it on for the whole trip. (Another fault I found with the mic, the Sennheiser MKE 400, was that the foam wind sleeve kept coming off, and after a few times I finally lost it.)
Submitted by brad on Tue, 2011-02-08 13:36.
I shoot lots of large panoramas, and the arrival of various cheaper robotic mounts to shoot them, such as the Gigapan Epic Pro and the Merlin/Skywatcher (which I have), has resulted in a bit of a “mine’s bigger than yours” contest to take the biggest photo. Some would argue that the stitched version of the Sloan Digital Sky Survey, which has been rated at a trillion pixels, is the winner, but most of the competition has been on the ground.
Many of these photos have special web sites to display them, such as Paris 26 gigapixels; the rest are usually found at the Gigapan.org site, where you can even view the gigapans sorted by size to see which ones claim to be the largest.
Most of these big ones are stitched with AutopanoPro, which is the software I use, or the Gigapan stitcher. The largest I have done so far is smaller: my 1.4 gigapixel shot of Burning Man 2010, which you will find on my page of my biggest panoramas, most of which are in the 100mp to 500mp range.
The Paris one is pretty good, but some of the other contenders provide a misleading number, because as you zoom in, you find the panorama at its base is quite blurry. Some of these panoramas have even just been expanded with software interpolation, which is a complete cheat, and some have been shot at mixed focal lengths, where sections of the panorama are sharp but others are not. I myself have done this: for example, in my Gigapixel San Francisco from the end of the Golden Gate, I shot the city close up but shot the sky and some of the water at 1/4 the resolution, because there isn’t really any fine detail in the sky. I think this is partially acceptable, though having real landscape features not at full resolution should otherwise disqualify a panorama. However, the truth is that sections of sky perhaps should not count at all, since anybody can make their panorama larger by just including more sky all the way to the zenith if they choose to.
There is a difficult craft to making such large photos, and there are also aesthetic elements. To really count the pixels for the world’s largest photos, I think we should count “quality” pixels. As such, sky pixels are not generally quality pixels, and distant terrain lost in haze also does not provide quality pixels. The haze is not the technical fault of the photographer, but it is the artistic fault, at least if the goal is to provide a sharp photo to explore. You get rid of haze only through the hard work of being there at the right time, and in some cities you may never get a chance.
Some of the shots are done through less than ideal lenses, and many of them are done using tele-extenders. These extenders do get more detail, but the truth is a 2x tele-extender does not provide 4 times as many quality pixels. A common setup today is a 400mm lens with a 2x extender to get 800mm. Fairly expensive, but a lot cheaper than a quality 800mm lens. I think using that big expensive glass should count for more in the race to the biggest, even though some might view it as unfair. (Glass that big costs a great deal and also weighs a lot, making it harder to get a mount to hold it and to keep it stable.) One can get very long mirror “lens” setups that are inexpensive, but they don’t deliver the quality, and I don’t believe work done with them should score as high as work with higher quality lenses. (It may be the case that images from a long telescope, which tend to be poor, could be scaled down to match the quality of a shorter but more expensive lens, and this is how it should be done.)
Ideally we should seek an objective measure of this. I would propose:
- There should be a sufficient number of high contrast edges in the image — sharp edges where the intensity goes from bright to dark in the space of just 1 or 2 pixels. If there are none of these, the image must be shrunk until there are.
- The image can then be divided up into sections and the contrast range in each evaluated. If the segment is very low contrast, such as sky, it is not counted in the pixel count. Possibly each block will be given a score based on how sharp it is, so that background items which are hazy count for more than nothing, but not as much as good sharp sections.
- I believe that to win, a pano should not contain gross flaws. Examples of such flaws include stripes of brightness or shadow due to cloud movement, big stitching errors, and checkerboard patterns due to bad overlap or stitching software. In general that means manual exposure, rather than shots where the stitcher tries to fix mixed exposures, unless it does it undetectably.
Some will argue with the last one in particular, since for some the goal is just to get as many useful pixels as possible for browsing around. Gigapixel panoramas after all are only good for zooming around in with a digital viewer. No monitor can display them and sometimes even printing them 12 feet high won’t show all their detail, and people rarely do that. (Though you can see my above San Francisco picture as the back wall of a bar in SF.) Still, I believe it should be a minimum bar that when you look at the picture at more normal sizes, or print it out a few feet in size, it still looks like an interesting, if extremely sharp, picture.
Ideally an objective formula can be produced for how much you have to shrink what is present to get a baseline. It’s very rare that any such panorama would not contain a fair number of segments with high contrast edges and lines in them. For starters, one could just require that the picture be shrunk until you have a frame that just about anybody would agree is sharp, like an ordinary quality photo viewed 1:1. Ideally there would be lots of frames like that, all over the photo.
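As a sketch of what such a formula might look like in code (the block size and contrast threshold here are arbitrary choices for illustration, not a calibrated standard):

```python
# Sketch of a "quality pixel" count: split a grayscale image (a list of
# rows of 0-255 values) into blocks, and only count the pixels of blocks
# whose local contrast range clears a threshold.
def block_contrast(img, x0, y0, size):
    vals = [img[y][x]
            for y in range(y0, min(y0 + size, len(img)))
            for x in range(x0, min(x0 + size, len(img[0])))]
    return max(vals) - min(vals)

def quality_pixels(img, block=8, min_range=40):
    h, w = len(img), len(img[0])
    count = 0
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            if block_contrast(img, x0, y0, block) >= min_range:
                bh = min(block, h - y0)
                bw = min(block, w - x0)
                count += bh * bw
    return count

# A 16x16 test image: left half is flat "sky" (value 180), right half is
# a sharp black/white edge pattern. Only the right half's pixels count.
img = [[180] * 8 + [0 if x % 2 else 255 for x in range(8)]
       for _ in range(16)]
print(quality_pixels(img))  # 128
```

Sky blocks score near zero range and drop out; hazy distant terrain would land in between, so a graded score per block, rather than the binary cutoff above, is the obvious refinement.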
Under these criteria a number of the large shots on gigapan fall short. (Though not as short as you think. The gigapan.org zoom viewer lets you zoom in well past 1:1, so even sharp images are blurry when zoomed in fully. On my own site I set maximum zoom at 200%.)
These requirements are quite strict. Some of my own photos would have to be shrunk to meet these tests, but I believe the test should be hard.
Submitted by brad on Sun, 2010-11-14 16:47.
For many years I have had a popular article on what lenses to buy for a Canon DSLR. I shoot with Canon, but much of the advice is universal, so I am translating the article into Nikon.
If you shoot Nikon and are familiar with a variety of lenses for them, I would appreciate your comments. At the start of the article I indicate the main questions I would like people’s opinions on, such as moderately priced wide angle lenses, as well as regular zooms.
If you “got a Nikon camera and love to take photographs” please read the article on what lens to buy for your Nikon DSLR and leave comments here or send them by email to email@example.com. I’m also interested in lists of “what’s in your kit” today.
Submitted by brad on Tue, 2010-10-05 21:53.
I have put up a page of panoramas from Burning Man 2010. This page includes my largest yet, a 1.2 billion pixel image of the whole of Black Rock City, which you will find first on the page. I am particularly proud of it, and I hope you find it as amazing as I do.
There are many others, including a nice one of the Man while they dance before the burn with the whole circle of people, a hi-res of the temple and the temple burn, and more.
However, what’s really new is I have put in a Flash-based panorama zoom viewer. This application lets you see my photos for the first time at their full resolution, even the gigapixel ones. You can pan around, zoom in and see everything. For many of them, I strongly recommend you click the button (or use right-click menu) to enter fullscreen mode, especially if you have a big monitor as I do. There you can pan around with the arrow keys and zoom in and out with your mouse wheel. There are other controls (and when not in fullscreen mode you can also use shift/ctrl or +/- for zooming.) A help page has full details.
Go into the gigapixel shot and zoom around. You’ll be amazed what you find. I have also converted most of my super-size city photos of Black Rock City to the zoom viewer; they can be found at the page of Giant BRC photos, as well as many of my favourites from the various years. I’m also working at converting some of my other photos, including the gallery of my largest images which I built recently. It takes time to build and upload these so it will be some while before the big ones are all converted. I may not do the smaller ones.
If you don’t have flash, it displays the older 1100 pixel high image, and you can still get to that via a link. If you have flashblock, you will need to enable flash for my photo site because it will detect you have no flash player and display the old one.
Get out the big monitor and it will feel like you’re standing on a tower in Black Rock City with a pair of binoculars. The gigapixel image is also up on gigapan.
Submitted by brad on Wed, 2010-08-11 21:32.
Moraine Lake, in Banff National Park, is one of the world’s most beautiful mountain scenes. I’ve returned to Banff, Moraine Lake and Lake Louise many times, and in June, I took my new robotic panorama mount to take some very high resolution photos of it and other scenes.
Rather than filling my Alberta Panorama Gallery with all those pictures, I have created a special page with panoramas of just Moraine Lake and its more famous sister Lake Louise. While I like my new 400 megapixel shot the best, an earlier shot was selected by the respected German Ravensburger puzzle company for a panoramic jigsaw puzzle along with my shot of Burney Falls, CA.
It was a bit of work carrying the motorized mount, laptop computer, tripod and camera gear to the top of the Moraine, but the result is worth it. While my own printer is only 24” high, this picture has enough resolution to be done 6 feet high and still be tack sharp up close, so I’m hoping to find somebody who wants to do a wall with it.
So check out the new gallery of photos of Moraine Lake and Lake Louise. I’ve also added some other shots from that trip to the Alberta gallery and will be adding more shortly. When on the panorama page ask for the “Full Rez Slice” to see how much there is in the underlying image.
Submitted by brad on Wed, 2010-07-28 22:02.
I got a chance to see my 5th eclipse on July 11 — well sort of. In spite of many tools at our disposal, including a small cruise ship devoted to the eclipse, we saw only about 30 seconds of the possible 4 minutes due to clouds. But I still have a gallery of pictures.
Many people chose the Hao atoll for eclipse viewing because of its very long airstrip and 3 minute 30 second duration. Moving north would provide even more, either from water or the Amanu atoll. Weather reports kept changing, suggesting moving north was a bad idea, so our boat remained at the Hao dock until the morning of the eclipse. In spite of storm reports, it dawned effectively cloudless so we decided to stay put and set up all instruments and cameras. Seeing an eclipse on land is best in my view, ideally a place with trees and animals and water. And it’s really the only choice for good photography.
As the eclipse came, clouds started building, moving quickly in the brisk winds. The clouds may have been the result of eclipse-generated cooling, and they did increase as the eclipse came. However, having set up, we decided not to move. The clouds were fast and small, and it was clear that they would not block the whole eclipse, until a big cloud arrived just near totality and almost did. We did get 30 seconds of fairly clear skies, so the crowd of first-timers were just as awed as first-timers always are. Disappointment was only felt by those who had seen a few.
Later I realized a better strategy for an eclipse cruise interested in land observation. When the clouds thickened, we should have left all the gear on land with a crewman from the ship to watch it. The cameras were all computer controlled, and so they would take whatever images they would take — in theory. We, on the other hand, could have run onto the boat and had it sail to find a hole in the clouds. It would have found one — just 2 miles away at the airport, people gathered there saw the complete eclipse. For us it was just the luck of the draw on our observing spot. Mobility can change that luck. Photographs and being on land are great, but seeing the whole eclipse is better.
I said “in theory” above because one person’s computer did not start the photos properly, and he had to start them again by hand. In addition, while we forgot to use it, the photo program has an “emergency mode” for just such a contingency. This mode puts it into a quick series of shots at all major exposures, designed to be used in a brief hole in the clouds. In the panic we never thought to hit the panic button.
I was lucky last year in spite of my rush. I was fooled into thinking I could duplicate that luck. You have to learn to rehearse everything you will do during an eclipse. This also applied to my panoramas. I had brought a robotic panoramic mount controlled by bluetooth from my laptop. In spite of bringing two laptops, and doing test shots the day before, I could not get the bluetooth link going as the eclipse approached. I abandoned the robotic mount to do manual panos. I had been considering that anyway, since the robotic mount is slow and takes about 10 seconds between shots, limiting how much pano it could do. By hand I can do a shot every second or so. Of course the robot in theory takes none of my personal eclipse time, while doing the hand pano took away precious views, but taking 3 minutes means too much changing light and moving people.
Even so, a few things went wrong. I was doing a bracket, which in retrospect I really did not need. A friend loaned me a higher quality 24mm lens than the one I had, and this lens was also much faster (f/1.8) than mine. I had meant to go into manual mode, but at first I forgot, and in the darkness the camera tried to shoot at f/1.8 — meaning very shallow depth of field and poor focus on everything in the foreground. I then realized this and switched to manual mode for my full pano. This pano was shot while the eclipse was behind clouds. I had taken a shot a bit earlier when it was visible, and of course used that for that frame of the pano, but the different exposure caused some loss of quality. Modern pano software handles different exposure levels, but the best pano comes from having everything fixed.
More lessons learned. After the eclipse we relaxed and cruised the Atoll, swam, dove, surfed, bought black pearls and had a great time.
The next eclipse is really only visible in one reachable place: Cairns Australia in November of 2012. (There is an annular eclipse in early 2012 that passes over Redding and Reno and hits sunset at Lubbock, but an annular is just a big partial eclipse, not a total.)
Cairns and the Great Barrier Reef are astounding. I have a page about my prior trip to Australia and Cairns, and any trip there will be good even with a cloudy eclipse. Alas, a cloudy eclipse is a risk, because the sun will be quite low in the morning sky over the mountains, and worse, Nov 13 is right at the beginning of the wet season. If the wet starts then, it’s probably bad news. For many, the next eclipse will be the one that crosses the USA in 2017. However, there are other opportunities in Africa/2013 (for the most keen,) Svalbard/2015 and Indonesia/2016 before then.
I’ll have some panoramas in the future. Meanwhile check out the gallery. Of course I got better eclipse pictures last year.
Submitted by brad on Tue, 2010-02-16 19:02.
I recently went to the DLD conference in Germany, briefly to Davos during the World Economic Forum and then drove around the Alps for a few days, including a visit to an old friend in Grenoble. I have some panoramic galleries of the Alps in Winter up already.
Each trip brings some new observations and notes.
- For the first time, I got a rental car which had a USB port in it, as I’ve been wanting for years. The USB port was really part of the radio, and if you plugged a USB stick in, it would play the music on it, but for me its main use was a handy charging port without the need for a 12v adapter. As I’ve said before, let’s see this all the time, and let’s put them in a few places — up on the dashboard ledge to power a GPS, and for front and rear seats, and even the trunk. And have a plug so the computer can access the devices, or even data about the car.
- The huge network of tunnels in the alpine countries continues to amaze me, considering the staggering cost. Sadly, some seem to simply bypass towns that are pretty.
- I’ve had good luck on winter travel, but this trip reminded me why there are no crowds. The weather can curse you, and especially curse your photography, though the snow-covered landscapes are wonderful when you do get sun. Three trips to Lake Constance/Bodensee now, and never any good weather!
- Davos was a trip. While there was a lot of security, it was far easier than say, flying in the USA. I was surprised how many people I knew at Davos. I was able to get a hotel in a village about 20 minutes away.
On to Part Two
Submitted by brad on Sat, 2010-01-02 14:10.
I have the photo archives of a theatre company I was involved with for 12 years. It is coming upon its 50th anniversary. I have a high speed automatic scanner, so I am going to generate scans of many of the photos — that part is not too hard.
Even easier for modern groups in the digital age, where the photos are already digital and date-tagged.
But now I want members of the group to be able to rotate the photos, tag them with the names of people in them and other tags, group them into folders where needed, and add comments. I can’t do this on my own, it is a collaborative project.
Lots of photo sharing sites let other people add comments. Few sites let you add tags or let trusted other people do things like rotations. Flickr lets others draw annotations and add tags/people which would make it a likely choice, but they can’t rotate.
Facebook has an interesting set of features. It’s easy to tag photos with friends’ names, and they get notified of it and the photos appear on their page, which is both good and bad. (The need for the owner to approve is a burden here.) Tagging non-friends is annoying because when somebody adds a real friend tag you must delete the old one, and the old ones may be spelled differently. However, the real deal-breaker on Facebook is that the resolution is unacceptably small.
The recent killer feature I really want is face recognition, which makes tagging with people’s names vastly easier. Even the fact that it auto-draws boxes around the faces for you to tag is a win, even without the recognition feature. The algorithms are far from perfect but they speed up the task a great deal. As such, right now an obvious choice is Picasa and Picasa Web Albums. However, while PWA lets you allow others to upload photos to your albums and tag their own photos, they can’t tag yours.
There is also face recognition in iPhoto, but I am not a Mac user so I don’t know if that can meet this need.
So right now two choices seem to be Flickr (but I must do all rotates) or a newly created Picasa account to which the password is shared. That’s a bit of a kludge but it seems to be the only way to get shared face recognition tagging.
Facebook can be integrated with a face recognizer called “Polar Rose” which also works with the 23hq photo sharing site. However, Facebook’s resolution is way, way too small and you need to approve tags.
I have not tried all the photo sharing sites so I wonder if people know of one that can do what I want?
Submitted by brad on Fri, 2009-12-18 15:18.
I’m waiting for the right price point on a good >24” monitor with a narrow bezel to drop low enough that I can buy 4 or 5 of them to make a panoramic display wall without the gaps being too large.
However, another idea that I think would be very cool would be to exploit the gaps between the monitors to create a simulated set of windows in a wall looking out onto a scene. It’s been done before in lab experiments with single monitors, but not as a large panoramic installation or something long term from what I understand. The value in the multi display approach is that now the gap between displays is a feature rather than a problem, and viewers can see the whole picture by moving. (Video walls must edit out the seams from the picture, removing the wonderful seamlessness of a good panorama.) We restore the seamlessness in the temporal dimension.
To do this, it would be necessary to track the exact location of the eyes of the single viewer. This would only work for one person. From the position of the eyes (in all 3 dimensions) and the monitors the graphics card would then project the panoramic image on the monitors as though they were windows in a wall. As the viewer’s head moved, the image would move the other way. As the viewer approached the wall (to a point) the images would expand and move, and likewise shrink when moving away. Fortunately this sort of real time 3-D projection is just what modern GPUs are good at.
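The projection for one rectangular “window” can be sketched with similar triangles. This is a simplified flat model, with assumed geometry: the wall is the plane z = 0, the eye sits in front of it at z > 0, and the panorama is treated as a flat plane a fixed depth behind the wall.

```python
# Sketch of the window projection. A ray from the eye through a window
# corner on the wall plane (z = 0) hits the panorama plane (z = -depth)
# at a point found by similar triangles.
def project_through_window(eye, corner, depth):
    ex, ey, ez = eye
    cx, cy = corner              # window corner, on the wall plane z = 0
    t = (ez + depth) / ez        # scale factor from eye to panorama plane
    return (ex + (cx - ex) * t, ey + (cy - ey) * t)

def visible_region(eye, window, depth):
    """Panorama-plane rectangle seen through an axis-aligned window
    given as (left, bottom, right, top) on the wall."""
    l, b, r, t = window
    x0, y0 = project_through_window(eye, (l, b), depth)
    x1, y1 = project_through_window(eye, (r, t), depth)
    return (x0, y0, x1, y1)

# Eye 1m from the wall, centered on a 0.5m-wide window, panorama plane
# 1m behind the wall: the visible region is twice the window's size, and
# it slides the opposite way as the head moves.
print(visible_region((0.0, 0.0, 1.0), (-0.25, -0.25, 0.25, 0.25), 1.0))
print(visible_region((0.5, 0.0, 1.0), (-0.25, -0.25, 0.25, 0.25), 1.0))
```

In a real installation each monitor would get its own such rectangle, and the GPU would do the equivalent with an off-axis projection matrix every frame.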
The monitors could be close together, like window panes with bars between them, or further apart like independent windows. Now the size of the bezels is not important.
For extra credit, the panoramic scene could be shot in layers, so it has a foreground and background, and these could be moved independently. To do this it would be necessary to shoot the panorama from spots along a line and both isolate foreground and background (using parallax, focus and hand editing) and also merge the backgrounds from the shots so that the background pixels behind the foreground ones are combined from the left and right shots. This is known as “background subtraction” and there has been quite a lot of work in this area. I’m less certain over what range this would look good. You might want to shoot above and below to get as much of the hidden background as possible in that layer. Of course having several layers is even better.
The next challenge is to very quickly spot the viewer’s head. One easy approach that has been done, at least with single screens, is to give the viewer a special hat or glasses with easily identified coloured dots or LEDs. It would be much nicer if we could do face detection as quickly as possible to identify an unadorned person. Chips that do this for video cameras are becoming common; the key issue is whether the detection can be done with very low latency — I think 10 milliseconds (100 Hz) would be a likely goal. The use of cameras lets the system work for anybody who walks in the room, and quickly switch among people to give them turns. A camera on the wall plus one above would work easily; two cameras on the left and right sides of the wall should also be able to get position fairly quickly.
Even better would be doing it with one camera. With one camera, one can still get a distance to the subject (with less resolution) by examining changes in the size of features on the head or body. However, that only provides relative distance; for example, you can tell if the viewer got 20% closer but not where they started from. You would have to guess that distance, learn it from other cues (such as a known sized object like the hat), or even have the viewer begin the process by standing on a specific spot. This could also be a good way to initiate the process, especially for a group of people coming to view the illusion. Stand still in the spot for 5 seconds until it beeps or flashes, and then start moving around.
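The single-camera distance estimate is just the pinhole model. A sketch, where the focal length in pixels and the face width are assumed calibration numbers, not measured values:

```python
# Pinhole-model distance from apparent size. With a known real width W
# (say, a calibration hat) absolute distance is f * W / w; with an
# unknown width, only relative distance is available.
def distance_from_width(focal_px, real_width_m, apparent_width_px):
    return focal_px * real_width_m / apparent_width_px

def relative_distance(ref_distance_m, ref_width_px, apparent_width_px):
    """Distance now, given the face appeared ref_width_px wide when the
    viewer stood at a known reference distance."""
    return ref_distance_m * ref_width_px / apparent_width_px

# A 0.15m-wide face seen 100px wide by a 1000px-focal-length camera is
# 1.5m away; when it grows to 125px, the viewer has moved 20% closer.
print(distance_from_width(1000, 0.15, 100))  # 1.5
print(relative_distance(1.5, 100, 125))      # 1.2
```

The "stand on a specific spot" calibration step in the text is exactly what fixes the reference distance for the relative formula.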
If the face can be detected with high accuracy and quickly, a decent illusion should be possible. I was inspired by this clever simulated 3-D videoconferencing system which simulates 3-D in this way and watches the face of the viewer.
You need high resolution photos for this, as only a subset of the image appears in the “windows” at any given time, particularly when standing away from the windows. It could be possible to let the viewer get reasonably close to the “window” if you have a gigapan style panorama, though a physical barrier (even symbolic) to stop people from getting so close that the illusion breaks would be a good idea.