A minor local spot of interest here is the spillway for the Lake Berryessa reservoir. Unlike most spillways, this one drains from the top, through an open funnel inside the lake. It is called a "Morning Glory" or "Glory Hole" spillway. From time to time, the lake level gets above that spillway, sometimes far above, and it creates something that looks completely wrong, like a hole in the fabric of spacetime. So we went up to photograph it.
Many of you will have read of the tragic fire which destroyed the National Museum of Brazil. Many of the artifacts and documents in the museum were not photographed or backed up, and so are destroyed forever.
Canon has finally released a higher-end, full-frame mirrorless camera. Nikon also released theirs a few weeks ago. Canon had seriously botched their entry into APS-C (smaller-sensor) mirrorless with the M series. Nikon did a better job. Sony took ownership of the full-frame mirrorless space, causing many, including myself, to switch, even though their cameras were far from perfect.
If you read my article about computational photography you will know I am very interested in the Light L16 camera which uses 16 small cameras (with cell-phone level sensors and different focal length lenses) to produce an image they hope will rival high end cameras like DSLRs.
The plan is an excellent one. I purchased the L16 but must sadly report it is "not yet the camera of the future" though I feel the general idea points the way there.
The Eclipse of 2017 caused dire traffic warnings, even from myself. Since a total eclipse is the most amazing thing you will see, and one was coming to a rich country where almost everybody owns a car, and hundreds of millions live within a day's drive -- I wondered how we would not have horrendous traffic. (You can see my main Eclipse report and gallery here or see all my Eclipse articles.)
Also look out below for a new 4K video I made from 4 different video cameras running during the eclipse. I have started you 3 minutes in for the short-attention-span world, but you might also enjoy the 3 minutes leading up as the excitement builds. Even on an HD display, be sure to click through to YouTube to watch it full screen.
As described, the 4 cameras are two 4K cell phones facing forward and back, plus an HD video from a 1200mm superzoom camera and snippets of 4K video and stills from the main telescope and Sony A7rII.
The big places for predicted bad traffic were central Oregon, because it was the place with the best weather that was closest to everybody from Seattle to Los Angeles, and areas of South Carolina which were closest for the whole eastern seaboard. At a popular Eclipse site, they had a detailed analysis of potential traffic but in many cases, it was quite wrong.
The central Oregon spine around the tiny town of Madras did get really bad traffic, as in reports of 4 to 6 hours to get out. That was not unexpected, since the area does not have very many roads, and is close to Washington and relatively close to California. At the same time, a lot of traffic diverted to the Salem area, which got a nice clear sky forecast. It has an interstate and many other roads. Planning ahead, Madras was the best choice because the weather is much more unpredictable west of the Cascades. But once the forecast became clear, many people from Seattle, Portland and California should have shifted to the more populated areas with the larger roads.
I decided, since it was only 2 hours more driving to Weiser (on the Oregon/Idaho border) but much less traffic, to go to the Snake River valley. It was the right choice -- there was almost no traffic leaving Weiser. In fact, Weiser did not get overwhelmed with people as had been expected, disappointing the businesses. Many thought that a large fraction of Boise would have tried to get up to that area, but they didn't. We actually wandered a bit and ended up over the river in a school field in Annex, Oregon.
There was no problem finding space, even for free.
This is a pattern we've seen many times now -- dire predictions of terrible traffic, then almost nothing. It turns out the predictions work too well. The famous Carmageddon in Los Angeles never materialized -- even with a major link cut, traffic was lighter than normal.
This is, in turn, a tragedy. It seems a lot of people did not go see the eclipse because they were scared of bad traffic. What a great shame.
At my site I had 4 cameras recording video. I set up two cell phones, both able to do 4K, looking at our group from in front and behind. The one behind I put in portrait mode, almost capturing the sun, to show that view, while the one in front showed us looking at the eclipse and also the shadow approaching on the hills.
The camera industry is about to come crashing down thanks to the rise of computational photography.
Many have predicted this for some time, and even wondered why it hasn't happened. While many people take most of their photos with their cell phones, at this point, if you want to do serious photography, in spite of what it says on giant Apple billboards, you carry a dedicated camera, and the more you want from that camera, the bigger the lens on the front of it is.
That's because of some basic physics. No matter how big your sensor is, the bigger the lens, the more light that will come in for each pixel. That means less noise, more ability to get enough light in dark situations, faster shutter speeds for moving subjects and more.
For serious photographers, it also means making artistic use of what some might consider a defect of larger lenses -- only a narrow range of distances is in focus. "Shallow depth of field" lets photographers isolate and highlight their subjects, and give depth and dimensionality to photos that need it.
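That shallow-focus effect scales directly with the aperture. The standard thin-lens approximation puts total depth of field at roughly 2·N·c·u²/f², where N is the f-number, c the circle of confusion, u the subject distance and f the focal length. Here is a minimal Python sketch of that approximation (the function name and the 0.03 mm full-frame circle of confusion are my own illustrative choices):

```python
def depth_of_field_m(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Approximate total depth of field in metres using the standard
    thin-lens formula DoF ~ 2 * N * c * u^2 / f^2, valid when the
    subject distance is large compared to the focal length."""
    u_mm = subject_m * 1000.0
    dof_mm = 2.0 * f_number * coc_mm * u_mm**2 / focal_mm**2
    return dof_mm / 1000.0

# An 85mm portrait lens at 2m: only about 6cm is in focus at f/1.8,
# which is exactly the subject-isolating look described above.
```

Run the numbers for a fast lens wide open versus stopped down and the depth of field grows in direct proportion to the f-number.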
So why is it all about to change?
Traditional photography has always been about capturing a single frame. A frozen moment in time. The more light you gather, the better you can do that. But that's not the way the eye works. Our eyes are constantly scanning a dynamic scene in real time, assembling our image of the world in our brains. We combine information captured at different times to get more out of a scene than our eyes, as cameras, could extract in a single "frame" (if they had frames).
Computational photography adds smart digital algorithms not just to single frames, but to quickly shot sequences of them, or frames from multiple different lenses. It uses those to learn more about the image than any one frame or lens could pull out.
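The simplest version of this is burst stacking: average a quickly shot sequence of frames and the random sensor noise falls roughly as the square root of the frame count. A minimal numpy sketch (it assumes the frames are already aligned, which real cameras must solve first):

```python
import numpy as np

def stack_frames(frames):
    """Average an aligned burst of frames.  Independent sensor noise
    falls roughly as 1/sqrt(N), so a 16-frame stack behaves like a
    sensor gathering 16x the light -- the core trick behind burst-mode
    computational photography."""
    return np.mean(np.stack(frames, axis=0), axis=0)

# Simulate a noisy 16-shot burst of the same scene:
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, size=(64, 64))
burst = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(16)]
stacked = stack_frames(burst)

# The stacked image sits much closer to the true scene than any single frame.
single_err = np.abs(burst[0] - scene).mean()
stacked_err = np.abs(stacked - scene).mean()
```

Real implementations must also register the frames and reject moving subjects, but the noise-averaging principle is the same.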
I was just outside Weiser, Idaho, a small town on the Snake River, for the 2017 Eclipse, which was an excellent, if short, spectacle which reawakened U.S. interest in total eclipses. They are, as I wrote earlier, the most spectacular natural phenomenon you can see on the Earth, but due to their random pattern it's been a long time since one has covered so much of the world's richest country.
For me, it was my sixth total eclipse, but the first I could drive to. I began this journey in Mexico in 1991, with the super-eclipse of that year, which also was the last to visit the United States (it was visible on the big island of Hawai`i). Since then I have flown around the world to the Curacao area, to the Black Sea, to the Marshall Islands (more photos) and French Polynesia to see other total eclipses. And I will continue to do so, starting 2 years from now in Argentina.
See the gallery
I recommend before you read that you enjoy my Gallery of 2017 Eclipse Photos in HD resolution. When going through them I recommend you click the "i" button so you can read the descriptions; they do not show in the slide show.
Why it's impossible (today) to photograph
I did not photograph my first eclipse (nor should anybody) but every photographer, seeing such a spectacle, hopes to capture it. We can't, because in addition to being the most spectacular natural event, it's also the one with the greatest dynamic range. In one small field you have brilliant jets of fire coming off the sun, its hot inner atmosphere, its giant glowing outer atmosphere and a dimly lit dark sky in which you can see stars. And then there is the unlit side of the moon which appears to be the blackest thing you have ever seen. While you can capture all these light values with a big bracket, no display device can come close to showing that 24-stop range. Only the human eye and visual system can perceive it.
Some day, though, they will make reasonable display devices that can do this, but even then it will be tough. For the eclipse covers just a few degrees of sky, but in reality it's a full 360-degree experience, with eerie light and the temporary colours of twilight in every direction. Still, we try.
In the future, when there is a retinal resolution VR headset with 24 bits of HDR light level ability, we might be able to show people an eclipse without going to one. Though you should still go.
That's why these photographs are so different. Every exposure reveals a different aspect of the eclipse. Short exposures show the prominences and the "chromosphere" -- the inner atmosphere of the sun visible only at the start and end of the eclipse. Longer exposures reveal more of the giant corona. The fingers of the outer corona involve 2 or 4 second exposures! The most interesting parts happen at 2nd and 3rd contact (the start and end) and also have many aspects. About 1/60th of a second shows the amazing diamond ring by letting the tiny sliver of sun blow out the sensor to make the diamond, as it does to the eye.
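Merging such a bracket into a single linear radiance map is the standard HDR technique: divide each frame by its exposure time, then average each pixel using only the frames where it is neither crushed in shadow nor blown out. A simplified numpy sketch (it assumes linear sensor data; the function name and thresholds are my own):

```python
import numpy as np

def merge_bracket(images, exposure_times, lo=0.05, hi=0.95):
    """Merge an exposure bracket into one linear radiance map.
    Each frame is divided by its exposure time, then pixels are
    averaged using only the frames where that pixel is well exposed
    (values in (lo, hi)).  Short exposures supply the bright
    chromosphere; long exposures supply the faint outer corona."""
    images = [np.asarray(im, dtype=float) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        ok = (im > lo) & (im < hi)          # neither crushed nor clipped
        num += np.where(ok, im / t, 0.0)    # scale to radiance units
        den += ok
    return num / np.maximum(den, 1)
```

Real eclipse HDR stacks also need careful alignment (the sun drifts between 2-second exposures) and tone mapping, but this is the core of how a 24-stop scene gets compressed into one image.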
Time to rename the partial eclipse
One thing that saddens and frustrates me is that all of this is only visible in a band less than 100 miles wide where the eclipse is total. Outside that, for thousands of miles, one can see (with eye protection) a "partial eclipse." They both get called an eclipse but the difference is night and day. Yet I think the naming makes people not understand the difference. They think a "90% partial eclipse" is perhaps 90% as interesting as a total eclipse. Nothing could be more wrong. There are really three different things:
- The total eclipse, the most amazing thing you will ever see.
- The >98% partial eclipse (and annular eclipse), which is definitely an interesting event, but still just a tiny shadow of what a total eclipse is.
- The ordinary partial eclipse, which is a fun and educational curiosity.
I constantly meet people who think they saw "the eclipse" when, to me and all others who have seen one, only the total eclipse is the eclipse. While the 98% partial is interesting, nobody should ever see that, because if you are that close to the band of totality, you would be nuts not to make the effort to go that extra distance. In a total eclipse, you see all that the partial has to offer, and even a few partial effects not seen except at 99.9%.
As such, I propose we rename the partial eclipse, calling it something like a "grazing transit of the moon." An eclipse technically is a transit of the moon over the sun, but my main goal is to use a different term for the partial and total so that people don't get confused. To tell people in the partial zone "you saw a transit, hope it was interesting" while telling people in the total zone, "You saw a solar eclipse, wasn't that the most amazing thing you've ever seen?"
Automating the photography
This was the first eclipse I have ever driven to, and because of that, I went a bit overboard, able to bring all sorts of gear. I had to stop myself and scale back, but I still brought 2 telescopes, 4 cameras, one long lens, 5 tripods and more.
I have so much paper that I've been on a slow quest to scan things. So I have high speed scanners and other tools, but it remains a great deal of work to get it done, especially reliably enough that you would throw away the scanned papers. I have done around 10 posts on digitizing and gathered them under that tag.
Recently, I was asked by a friend who could not figure out what to do with the papers of a deceased parent. Scanning them on your own or in scanning shops is time consuming and expensive, so a new thought came to me.
Set up a scanning table by mounting a camera that shoots 4K video looking down on the table. I have tripods that have an arm that extends out but there are many ways to mount it. Light the table brightly, and bring your papers. Then start the 4K video and start slapping the pages down (or pulling them off) as fast as you can.
There is no software today that can turn that video into a well scanned document. But there will be. Truth is, we could write it today, but nobody has. If you scan this way, you're making the bet that somebody will. Even if nobody does, you can still go into the video and find any page and pull it out by hand; it will just be a lot of work, and you would only do this for single pages, not for whole documents. You are literally saving the document "for the future," because you are depending on future technology to easily extract it.
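The core of such software is not mysterious: find the runs of video frames where the page lies still (low inter-frame difference) and keep one frame per run. Here is a toy numpy sketch of that detection step, operating on grayscale frames you might pull from the video with a tool like OpenCV (the function name and thresholds are illustrative, not an existing product):

```python
import numpy as np

def extract_pages(frames, motion_thresh=0.02, min_stable=3):
    """Given a sequence of grayscale frames (2-D float arrays in [0,1]),
    return one representative frame per 'stable' run -- a run where the
    mean inter-frame difference stays below motion_thresh for at least
    min_stable consecutive frame pairs.  Each stable run corresponds to
    one page lying flat on the table between hand movements."""
    pages = []
    run = 0
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur - prev).mean()
        if diff < motion_thresh:
            run += 1
            if run == min_stable:   # run just became long enough: keep this page
                pages.append(cur)
        else:
            run = 0                 # hand moving -- reset
    return pages
```

A production version would add deskewing, cropping, and OCR on each kept frame, but the page-change detection above is the piece that turns an hour of video into a stack of page images.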
Perhaps by now you are sick of the dress that 3/4 of people see as "white and gold" and 1/4 of people see as "dark blue and black." If you haven't seen it, it's easy to find. What's amazing is to see how violent the arguments can get between people because the two ways we see it are so hugely different. "How can you see that as white????" people shout. They really shout.
There are a few explanations out there, but let me add my own:
For many decades, cameras have come with a machine screw socket (1/4"-20) in the bottom to mount them on a tripod. This is slow to use and easy to get loose, so most photographers prefer to use a quick-release plate system. You screw a plate on the camera, and your tripod head has a clamp to hold those plates. The plates are ideally custom made so they grip an edge on the camera to be sure they can't twist.
On Saturday I wrote about how we're now capturing the world so completely that people of the future will be able to wander around it in accurate VR. Let's go further and see how we might shoot the video resolutions of the future, today.
Recently I tried the Facebook/Oculus Rift Crescent Bay prototype. It has more resolution (I will guess 1280 x 1600 per eye or similar) and runs at 90 frames/second. It also has better head tracking, so you can walk around a small space with some realism -- but only a very small space. Still, it was much more impressive than the DK2 and a sign of where things are going. I could still see a faint screen door; they were annoyed that I could see it.
Today marked the last trip through the air for the space shuttle, as the Endeavour was carried to LA to be installed in a museum. The trip included fly-overs of the Golden Gate Bridge and many other landmarks in SF and LA, and also a low pass over NASA Ames at Moffett Field, where I work at Singularity University. A special ceremony was done on the tarmac, and I went to get a panoramic photo.
I'm back from our fun "Singularity Week" in Tel Aviv, where we did a 2-day and a 1-day Singularity University program. We judged a contest for two SU scholarships for Israelis, and I spoke to groups like Garage Geeks, Israeli Defcon and GizaVC's monthly gathering, and even went into the West Bank to address the Palestinian IT Society and announce a scholarship contest for SU.
Earlier I wrote about desires for the next generation of DSLR camera and a number of readers wrote back that they wanted to be able to swap the sensor in their camera, most notably so they could put in a B&W sensor with no colour filter mask on it. This would give you better B&W photos and triple your light gathering ability, though for now only astronomers are keen enough on this to justify filterless cameras.
I shoot with the Canon 5D Mark II. While officially not a pro camera, the reality is that a large fraction of professional photographers use this camera rather than the EOS-1D cameras, which are faster but much bulkier and in some ways even inferior to the 5D. But it's been out a long time now, and everybody is wondering when its successor will come and what features it will have.
Each increment in the DSLR world has been quite dramatic over the last decade. There's always been a big increase in resolution with the new generation, but now at 22 megapixels there's less call for that. While there are lenses that deliver more than 22 megapixels sharply, they are usually quite expensive, and while nobody would turn down 50mp for free, there just wouldn't be nearly as much benefit from it than the last doubling. Here's a look at features that might come, or at least be wished for.
More pixels may not be important, but everybody wants better pixels.
- Low noise / higher ISO: The 5D2 astounded us with ISO 3200 shots that aren't very noisy. Unlike megapixels, there is almost no limit to how high we would like ISO to go at low noise levels. Let's hope we see 12,500 or more at low noise, plus even 50,000 noisy. Due to physics, smaller pixels have higher noise, so this is another reason not to increase the megapixel count.
- 3 colour: The value of full 3-colour samples at every pixel has been overstated in the past. The reason is that Bayer interpolation is actually quite good, and almost every photographer would rather have 18 million Bayer pixels than 6 million full-RGB pixels. It's not even a contest. As we start maxing out our megapixels to match our lenses, this is one way to get more out of a picture. But if it means smaller pixels, it causes noise. The Foveon approach, which stacks the 3 colour samples, would be OK here -- finally. But I don't expect this to be very likely.
- Higher dynamic range: How about 16 bits per pixel, or even 24? HDR photography is cool but difficult. But nobody doesn't want more range, if only for the ability to change exposure decisions after the fact and bring out those shadows or highlights. Automatic HDR in the camera would be nice, but it's no substitute for true high-range pixels.
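The "smaller pixels, higher noise" physics above comes down to photon shot noise, which is Poisson: collect N photons and the best possible signal-to-noise ratio is √N. Photons collected scale with pixel area, so halving the pixel pitch quarters the light per pixel and halves the SNR. A quick sketch of that arithmetic (the photon-density number is illustrative):

```python
import math

def shot_noise_snr(photons_per_um2, pixel_pitch_um):
    """Photon shot noise follows Poisson statistics: collecting N
    photons gives SNR = sqrt(N).  Photons collected scale with pixel
    area (pitch squared), so halving the pitch quarters the light per
    pixel and halves the SNR -- the reason more megapixels on the same
    sensor means noisier pixels."""
    n = photons_per_um2 * pixel_pitch_um**2
    return math.sqrt(n)

# A full-frame pixel around 6.4 microns vs. a 3.2 micron pixel at the
# same illumination: the bigger pixel has exactly twice the SNR.
```

This floor is set by the light itself, not the electronics, which is why sensor makers can't engineer their way past it and why big pixels (and big lenses) keep winning in the dark.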
Video & Audio
Due to the high-quality video in the 5D2, many professional videographers now use it. Last week Canon announced new high-end video cameras aimed at that market, so they may not focus on improvements in this area. If they do, people might like to see things like 60 frame/second video, the ability to focus while shooting, higher ISO, and 4K video.