Total Eclipse at Hao, French Polynesia

I got a chance to see my 5th eclipse on July 11 — well sort of. In spite of many tools at our disposal, including a small cruise ship devoted to the eclipse, we saw only about 30 seconds of the possible 4 minutes due to clouds. But I still have a gallery of pictures.

Many people chose the Hao atoll for eclipse viewing because of its very long airstrip and its 3-minute-30-second duration of totality. Moving north would provide even more, either from the water or from the Amanu atoll. Weather reports kept changing, suggesting that moving north was a bad idea, so our boat remained at the Hao dock until the morning of the eclipse. In spite of storm reports, it dawned effectively cloudless, so we decided to stay put and set up all instruments and cameras. Seeing an eclipse on land is best in my view, ideally in a place with trees and animals and water. And it’s really the only choice for good photography.

As the eclipse approached, clouds started building, moving quickly in the brisk winds; they may have been the result of eclipse-generated cooling, and they did increase as totality neared. However, having set up, we decided not to move. The clouds were small and fast, and it seemed clear they would not block the whole eclipse, until a big cloud arrived just near totality and very nearly did. We still got about 30 seconds of fairly clear sky, so the crowd of first-timers was just as awed as first-timers always are. Disappointment was felt only by those who had seen a few.

Later I realized a better strategy for an eclipse cruise interested in land observation. When the clouds thickened, we should have left all the gear on land with a crewman from the ship to watch it. The cameras were all computer controlled, so they would take whatever images they would take, in theory. We, on the other hand, could have run back onto the boat and had it sail to find a hole in the clouds. It would have found one: just 2 miles away at the airport, the people gathered there saw the complete eclipse. For us it was just the luck of the draw at our observing spot, and mobility can change that luck. Photographs and being on land are great, but seeing the whole eclipse is better.

I said “in theory” above because one person’s computer did not start the photos properly, and he had to start them again by hand. In addition, while we forgot to use it, the photo program has an “emergency mode” for just such a contingency. This mode puts the camera into a quick series of shots at all the major exposures, designed to be used in a brief hole in the clouds. In the panic we never thought to hit the panic button.
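To illustrate the idea (this is not the actual program we used), here is a minimal sketch of what such an emergency sequence might look like: it simply fires one frame at each of a list of bracketed exposures as fast as the camera will allow. The trigger_shot() function and the exposure values are hypothetical placeholders for whatever camera-control interface the real software uses.

```python
# Hypothetical sketch of an "emergency mode" exposure run: fire one frame at
# each major bracketed exposure as quickly as possible, for use in a brief
# hole in the clouds.  trigger_shot() stands in for a real camera-control call.

# (shutter seconds, aperture, ISO) -- illustrative values for corona brackets
EMERGENCY_BRACKETS = [
    (1/1000, 8.0, 200),   # inner corona / prominences
    (1/250,  8.0, 200),
    (1/60,   8.0, 200),   # mid corona
    (1/15,   8.0, 200),
    (1/4,    8.0, 200),   # outer corona / earthshine
]

def trigger_shot(shutter_s, aperture, iso):
    """Placeholder: in real software this would command the tethered camera."""
    print(f"shoot: {shutter_s:.4f}s  f/{aperture}  ISO {iso}")

def emergency_sequence():
    """Run straight through every major exposure with no pauses between shots."""
    for shutter_s, aperture, iso in EMERGENCY_BRACKETS:
        trigger_shot(shutter_s, aperture, iso)

if __name__ == "__main__":
    emergency_sequence()
```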

I was lucky last year in spite of my rush, and I was fooled into thinking I could duplicate that luck. You have to learn to rehearse everything you will do during an eclipse. This also applied to my panoramas. I had brought a robotic panoramic mount controlled over Bluetooth from my laptop. In spite of bringing two laptops, and doing test shots the day before, I could not get the Bluetooth link going as the eclipse approached, so I abandoned the robotic mount and did manual panos. I had been considering that anyway, since the robotic mount is slow, taking about 10 seconds between shots, which limits how much pano it can do. By hand I can do a shot every second or so. In theory the robot takes none of my personal eclipse time, while doing the hand pano took away precious views, but a pano that takes 3 minutes means too much changing light and too many moving people.

Even so, a few things went wrong. I was doing a bracket, which in retrospect I really did not need. A friend loaned me a higher quality 24mm lens than the one I had, and this lens was also much faster (f/1.8) than mine. While I had planned to go into manual mode, at first I forgot, and in the darkness the camera tried to shoot at f/1.8, meaning very shallow depth of field and poor focus on everything in the foreground. I then realized this and switched to manual mode for my full pano. That pano was shot while the eclipse was behind clouds. I had taken a shot a bit earlier when it was visible and of course used that for that frame of the pano, but the different exposure causes some loss of quality. Modern pano software handles different exposure levels, but the best pano comes from having everything fixed.

More lessons learned. After the eclipse we relaxed and cruised the atoll, swam, dove, surfed, bought black pearls, and had a great time.

The next total eclipse is really only visible in one reachable place: Cairns, Australia, in November of 2012. (There is an annular eclipse in early 2012 that passes over Redding and Reno and hits sunset at Lubbock, but an annular is just a big partial eclipse, not a total one.)

Cairns and the Great Barrier Reef are astounding. I have a page about my prior trip to Australia and Cairns, and any trip there will be good even with a cloudy eclipse. Alas, a cloudy eclipse is a risk, because the sun will be quite low in the morning sky over the mountains, and worse, Nov 13 is right at the beginning of the wet season. If the wet starts then, it’s probably bad news. For many, the next eclipse will be the one that crosses the USA in 2017. However, there are other opportunities before then: Africa in 2013 (for the most keen), Svalbard in 2015 and Indonesia in 2016.

I’ll have some panoramas up in the future. Meanwhile, check out the gallery. Of course, I got better eclipse pictures last year.

Robocar challenge from Italy to China

Today marks the start of a remarkable robocar trek from Italy to China. The team from the VisLab Intercontinental Autonomous Challenge starts in Italy and will travel all the way to Shanghai in electric autonomous vehicles, crossing borders, handling rough terrain, and driving roads for which there are no maps, in areas where there is no high-accuracy GPS.

Doing this fully autonomously would be impossible today, so they are solving that problem with a lead car which drives mostly autonomously but sometimes has the humans take over, particularly in areas where there are no maps. This vehicle can be seen by the other vehicles and also transmits GPS waypoints to them, so they can follow those waypoints and use their sensors to fill in the rest. The other vehicles will also have humans aboard to correct them in case of error, and the amount of correction needed will be recorded. Some of the earliest robocar experiments in Germany used this approach, driving the highways with occasional human correction. (The DARPA grand challenges required empty vehicles on a closed course, and no human intervention, except the kill switch, was allowed.) This should be a tremendous challenge, with much learned along the way about what works and what doesn’t. Since the team is a computer vision lab, these cars appear to rely on vision a lot more than other robocars, which have gone LIDAR all the way. (There are LIDARs on the VisLab cars, but not as fancy as the 64-line Velodyne.)
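As a rough illustration of the leader-follower scheme (a sketch, not the VisLab software), here is a minimal Python example, assuming the lead car broadcasts GPS waypoints and each follower steers toward the next unreached one while its own sensors handle everything in between. The class names, the 3-metre "reached" radius and the simple heading-error steering law are all illustrative assumptions.

```python
# Minimal sketch of a follower vehicle tracking GPS waypoints broadcast by a
# lead car.  Everything here is illustrative; a real follower would fuse this
# with its own perception and log every human correction.

import math
from collections import deque

EARTH_RADIUS_M = 6371000.0

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Approximate a GPS fix as (x, y) metres east/north of a reference point."""
    x = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    return x, y

class WaypointFollower:
    def __init__(self, reach_radius_m=3.0):
        self.waypoints = deque()          # (x, y) points received from the lead car
        self.reach_radius = reach_radius_m

    def add_waypoint_from_leader(self, lat, lon, ref_lat, ref_lon):
        """Store a waypoint broadcast by the lead vehicle, in the local frame."""
        self.waypoints.append(gps_to_local(lat, lon, ref_lat, ref_lon))

    def steering_command(self, x, y, heading_rad):
        """Return a clamped steering angle (radians, +left) toward the next
        unreached waypoint, or None when there is nothing left to follow."""
        # Drop waypoints we have already reached.
        while self.waypoints:
            wx, wy = self.waypoints[0]
            if math.hypot(wx - x, wy - y) > self.reach_radius:
                break
            self.waypoints.popleft()
        if not self.waypoints:
            return None
        wx, wy = self.waypoints[0]
        bearing = math.atan2(wy - y, wx - x)
        # Heading error wrapped to [-pi, pi], used directly as a crude command.
        error = math.atan2(math.sin(bearing - heading_rad),
                           math.cos(bearing - heading_rad))
        return max(-0.5, min(0.5, error))

if __name__ == "__main__":
    # Example: one leader waypoint roughly 111 m north of the reference point.
    f = WaypointFollower()
    f.add_waypoint_from_leader(45.6460, 10.0000, ref_lat=45.6450, ref_lon=10.0000)
    print(f.steering_command(x=0.0, y=0.0, heading_rad=0.0))  # steer hard left
```

In this sketch the follower only chases waypoints; the interesting part of the real project is everything the sketch leaves out, namely using the on-board sensors to stay on the road and avoid obstacles between waypoints, with the human corrections recorded as described above.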

They are using electric cars to send a green message. While I do believe that the robocars of the future will indeed be electric, and that self-recharge is a crucial element of the value of robocars, I am not as fond of this decision. “One thing at a time” is the philosophy that makes sense here, so I think it’s better to start with proven, easy-to-refuel gasoline cars and get the autonomy working, then improve what’s underneath. But this is a minor quibble about an exciting project.

They have a live tracking tool (not up yet) and a blog you can follow.

More robocar news to come. Yesterday I had an interesting ride in Junior (Stanford’s DARPA Urban Challenge vehicle), and we trusted it enough to have Kathryn stand in the crosswalk while Junior drove up to it, then stopped and waited for her to walk out of it.

Can your computer be like your priest?

I’ve had a blogging hiatus of late because I was heavily involved last week with Singularity University, a new teaching institution about the future created by NASA, Google, Autodesk and various others. We’ve got 80 students, most from outside North America, here for the summer graduate program, and they are quite an interesting group.

On Friday, I gave a lecture to open the policy, law and ethics track, and I brought up one of the central questions: should we let our technology betray us? Now, our tech can betray us in a number of ways, but in this case I mean something more literal, such as our computer ratting us out to the police, or providing evidence that will be used against us in court. Right now this is happening a lot.

I put forward the following challenge: In history, certain service professions have been given a special status when it comes to being forced to betray us. Your lawyer, your doctor and your priest must keep most of what you tell them in confidence, and can’t be compelled to reveal it in court. We have given them this immunity because we feel their services are essential, and that people might be afraid to use them if they feared they could be betrayed.

Our computers are becoming essential too, and even more intimately entangled with our lives. We’re carrying our cell phone on our body all day long, with its GPS and microphone and camera, and we’re learning that it will tell the police our location if they ask. Soon we’ll have computers implanted in our bodies; will they also betray us?

So can we treat our personal computer like a priest or doctor? Sadly, while people we trust have been given this exemption, technology doesn’t seem to get it. And there may be a reason: people don’t seem as afraid to disclose incriminating data to their computers as they are to disclose it to other people. Right now, we know that people can blab, but we don’t seem to appreciate how much computers can blab. Once we do, we’ll become more afraid to trust our computers and other technology, which hurts their value.

Can the ethics that developed around the trusted professions move to our technology? That’s for the future to see.