
Definition of pixels for the world's biggest photos

I shoot lots of large panoramas, and the arrival of various cheaper robotic mounts to shoot them, such as the Gigapan Epic Pro and the Merlin/Skywatcher (which I have), has resulted in a bit of a “mine’s bigger than yours” contest to take the biggest photo. Some would argue that the stitched version of the Sloan Digital Sky Survey, which has been rated at a trillion pixels, is the winner, but most of the competition has been on the ground.

Many of these photos have special web sites to display them, such as Paris 26 gigapixels; the rest are usually found at the Gigapan.org site, where you can even view the gigapans sorted by size to see which ones claim to be the largest.

Most of these big ones are stitched with AutopanoPro, which is the software I use, or the Gigapan stitcher. The largest I have done so far is smaller: a 1.4 gigapixel shot of Burning Man 2010, which you will find on my page of my biggest panoramas, most of which are in the 100mp to 500mp range.

The Paris one is pretty good, but some of the other contenders provide a misleading number, because as you zoom in, you find the panorama at its base is quite blurry. Some of these panoramas have even been expanded with software interpolation, which is a complete cheat, and some have been shot at mixed focal lengths, where sections of the panorama are sharp but others are not. I have done this myself: in my Gigapixel San Francisco from the end of the Golden Gate, I shot the city close up but shot the sky and some of the water at 1/4 the resolution, because there isn’t really any fine detail in the sky. I think this is partially acceptable, though having real landscape features not at full resolution should otherwise disqualify a panorama. However, the truth is that sections of sky perhaps should not count at all, and anybody can make their panorama larger by just including more sky all the way to the zenith if they choose to.

There is a difficult craft to making such large photos, and there are also aesthetic elements. To really count the pixels for the world’s largest photos, I think we should count “quality” pixels. As such, sky pixels are not generally quality pixels, and distant terrain lost in haze also does not provide quality pixels. The haze is not the technical fault of the photographer, but it is the artistic fault, at least if the goal is to provide a sharp photo to explore. You get rid of haze only through the hard work of being there at the right time, and in some cities you may never get a chance.

Some of the shots are done through less than ideal lenses, and many of them are done using tele-extenders. These extenders do get more detail, but the truth is a 2x tele-extender does not provide 4 times as many quality pixels. A common lens today is a 400mm with a 2x extender to get 800mm. Fairly expensive, but a lot cheaper than a quality 800mm lens. I think using that big expensive glass should count for more in the race to the biggest, even though some might view it as unfair. (A lens that big costs a ton and also weighs a lot, making it harder to get a mount to hold it and to keep it stable.) One can get very long mirror “lens” setups that are inexpensive, but they don’t deliver the quality, and I don’t believe work done with them should score as high as work with higher quality lenses. (It may be the case that images from a long telescope, which tend to be poor, could be scaled down to match the quality of a shorter but more expensive lens, and this is how it should be done.)

Ideally we should seek an objective measure of this. I would propose:

  • There should be a sufficient number of high contrast edges in the image — sharp edges where the intensity goes from bright to dark in the space of just 1 or 2 pixels. If there are none of these, the image must be shrunk until there are.
  • The image can then be divided up into sections and the contrast range in each evaluated. If the segment is very low contrast, such as sky, it is not counted in the pixel count. Possibly each block will be given a score based on how sharp it is, so that background items which are hazy count for more than nothing, but not as much as good sharp sections.
  • I believe that, to win, a pano should not contain gross flaws. Examples of such flaws include stripes of brightness or shadow due to cloud movement, big stitching errors and checkerboard patterns due to bad overlap or stitching software. In general that means manual exposure, rather than shots where the stitcher tries to fix mixed exposures, unless it does so undetectably.

Some will argue with the last one in particular, since for some the goal is just to get as many useful pixels as possible for browsing around. Gigapixel panoramas after all are only good for zooming around in with a digital viewer. No monitor can display them and sometimes even printing them 12 feet high won’t show all their detail, and people rarely do that. (Though you can see my above San Francisco picture as the back wall of a bar in SF.) Still, I believe it should be a minimum bar that when you look at the picture at more normal sizes, or print it out a few feet in size, it still looks like an interesting, if extremely sharp, picture.

Ideally an objective formula can be produced for how much you have to shrink what is present to get a baseline. It’s very rare that any such panorama does not contain a fair number of segments with high contrast edges and lines in them. For starters, one could just put in the requirement that the picture be shrunk until you have a frame that just about anybody would agree is sharp, like an ordinary quality photo, when viewed 1:1. Ideally there would be lots of frames like that, all over the photo.
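
To make the block-scoring idea above concrete, here is a minimal sketch of one way it might be computed. It is only an illustration of the approach, not any standard: the block size, gradient threshold and full-credit level are arbitrary assumptions, and a real test would want a more careful sharpness measure.

    # Rough sketch of the "quality pixel" scoring idea described above.
    # Block size, threshold and scoring curve are illustrative assumptions.
    import numpy as np

    def block_sharpness(gray, block=256, edge_thresh=0.35):
        """Score each block 0..1 by the fraction of strong local gradients.

        gray: 2-D numpy array of luminance values scaled to 0..1.
        A block full of sky or haze has almost no strong gradients and
        scores near 0; a block with crisp 1-2 pixel edges scores high.
        """
        gy, gx = np.gradient(gray)
        grad = np.hypot(gx, gy)
        h, w = gray.shape
        scores = {}
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                tile = grad[y:y + block, x:x + block]
                scores[(y, x)] = float((tile > edge_thresh).mean())
        return scores

    def quality_pixels(gray, block=256, full_credit=0.02):
        """Count pixels weighted by block sharpness, capped at full credit."""
        scores = block_sharpness(gray, block)
        return sum(min(s / full_credit, 1.0) * block * block
                   for s in scores.values())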

Under these criteria a number of the large shots on gigapan fall short. (Though not as short as you think. The gigapan.org zoom viewer lets you zoom in well past 1:1, so even sharp images are blurry when zoomed in fully. On my own site I set maximum zoom at 200%.)

These requirements are quite strict. Some of my own photos would have to be shrunk to meet these tests, but I believe the test should be hard.

Blind man drives, sort of, with a robocar

A release from the National Federation of the Blind reports a blind person driving and avoiding obstacles on the Daytona speedway. They used a car from the TORC team at Virginia Tech, one of the competitors in the DARPA Grand Challenges. In effect, the blind driver replaced the “drive by wire” component of a robocar with a more intelligent and thinking human, also able to feel acceleration and make some judgements. As the laser and other sensors in the car detected obstacles and turns, the computer sent audio and vibratory signals to the driver to turn, speed up or slow down.

While this demo is pretty simple, it was part of a larger project the NFB has to encourage computer and robotic technologies to let the blind do what the sighted can do. In my robocar roadmap I outlined a number of bodies who might promote and lobby for robocar technology, in particular the blind, so it’s good to see that step underway. They did it as well in 2009 with a simpler dune buggy.

This car did not use the fancy and expensive 64-line Velodyne LIDAR sensor that has become the norm on most other working robocars. The Virginia Tech team (Victor Tango) was the only one of the 6 teams to complete the Urban Challenge not to use that LIDAR. The car shown isn’t nearly as covered with sensors as Victor Tango was, at least on visual inspection, indicating good improvements in their system.

Another pedal-powered monorail: Skyride

Last year I wrote about an interesting but simple pedal powered monorail/PRT system called Shweeb which had won a prize/investment from Google. Recent announcements show they are not alone in this concept. Scott Olson, the original developer of the Rollerblade, has founded a company called Skyride Technologies to build their own version of a pedal powered suspended monorail.

You will find much that is similar between the two concepts, though they were developed independently. I will have to give Skyride the nod on picking names, though. Skyride offers both pedaling and a rowing-machine style interface, the latter aimed both at the disabled and those seeking a different kind of workout.

At present, the Skyride car is also open to the air, which has both advantages and disadvantages when it comes to cooling, drag, and exposure to the elements. Skyride also does not seem to offer the “bumper” system in the wheel cartridge which Shweeb claims will allow vehicles to safely hit one another and then push one another in trains.

Both are confined to prototype tracks for now, though the Shweeb one is an amusement ride that is open to the public. Both have plans to solve the most important problem in turning this into a real transportation system for campuses or urban areas, namely a switch that lets the vehicle smoothly and safely change tracks. Switching has always been an issue in monorails — not that it can’t be solved, but it’s just a little harder than changing lanes in a car. Rail systems sometimes put the switching in the track (that’s what regular heavy rail does) but that’s not very practical if you are going to have very frequent small vehicles. You want in-vehicle switching but with no risk of derailing.

While this concept is interesting, and even more fun if they can prove it works and then add some automation, I am not sure it will ever become a really big space. Still, having 2 companies will no doubt spur a bit more innovation.

Working on Robocars at Google

As readers of this blog surely know, for several years I have been designing, writing and forecasting about the technology of self-driving “robocars.” I’m pleased to announce that I have recently become a consultant to the robot car team working at Google.

Of course all that work will be done under NDA, and so until such time as Google makes more public announcements, I won’t be writing about what they or I are doing. I am very impressed by the team and their accomplishments, and to learn more I will point you to my blog post about their announcement and the article I added to my web site shortly after that announcement. It also means I probably won’t blog in any detail about certain areas of technology, in some cases not commenting on the work of other teams because of conflict of interest. However, as much as I enjoy writing and reporting on this technology, I would rather be building it.

I have been delivering my philosophical message about robocars for years, but it should be clear that I am simply consulting on the project, not setting its policies or acting as a spokesman.

My primary interest at Google is robocars, but many of you also know my long history in online civil rights and privacy, an area in which Google is often involved in both positive and negative ways. Indeed, while I was chairman of the EFF I felt there could be a conflict in working for a company which the EFF frequently has to either praise or criticise. I will be recusing myself from any EFF board decisions about Google, naturally.

My phone should know when I start a trip

Every day I get into my car and drive somewhere. My mobile phone has a lot of useful apps for travel, including maps with traffic and a lot more. And I am usually calling them up.

I believe that my phone should notice when I am driving off from somewhere, or about to, and automatically do some things for me. Of course, it could notice this if it ran the GPS all the time, but that’s expensive from a power standpoint, so there are other ways to identify this:

  • If the car has bluetooth, the phone usually associates with the car. That’s a dead giveaway, and can at least be a clue to start looking at the GPS.
  • Most of my haunts have wireless, and the phone associates with the wireless at my house and all the places I work. So it can notice when it disassociates and again start checking the GPS. To get smart, it might even notice the MAC addresses of wireless networks it can’t see inside the house, but which it does see outside or along my usual routes.
  • Of course moving out to the car involves jostling and walking in certain directions (it has a compass.)

Once it thinks it might be in the car, it should go to a mode where my “in the car” apps are easy to get to, in particular the live map of the location with the traffic displayed, or the screen for the nav system. Android has a “car mode” that tries to make it easy to access these apps, and it should enter that mode.

It should also now track me for a while to figure out which way I am going. Depending on which way I head and the time of day, it can probably guess which of my common routes I am going to take. For regular commuters, this should be a no-brainer. This is where I want it to be really smart: instead of me having to call up the traffic, it should see that I am heading towards a given highway, and then check to see if there are traffic jams along my regular routes. If it sees one, then it should beep to signal that, and if I turn it on, I should see that traffic jam. This way if I don’t hear it beep, I can feel comfortable that there is light traffic along the route I am taking. (Or that if there is traffic, it’s not traffic I can avoid with alternate routes.)

This is the way I want location based apps to work. I don’t want to have to transmit my location constantly to the cloud, and have the cloud figure out what to do at any given location. That’s privacy invading and uses up power and bandwidth. Instead the phone should have a daemon that detects location “events” that have been programmed into it, and then triggers programs when those events occur. Events include entering and leaving my house or places I work, driving certain roads and so on.
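
A minimal sketch of what such an event daemon might look like, assuming invented event names and handler functions; the real work is in the cheap sensing (wifi and Bluetooth associations) that feeds it, with the GPS only sampled once a trigger fires.

    # Sketch of the on-phone "location event" daemon described above.
    # Event names, triggers and handlers are hypothetical illustrations.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Trigger:
        kind: str    # "wifi_lost", "bluetooth_seen", "geofence_exit", ...
        value: str   # SSID, MAC address, or place name

    @dataclass
    class EventDaemon:
        handlers: Dict[str, List[Callable[[Trigger], None]]] = field(default_factory=dict)

        def on(self, kind: str, handler: Callable[[Trigger], None]) -> None:
            self.handlers.setdefault(kind, []).append(handler)

        def observe(self, trigger: Trigger) -> None:
            # Called by cheap sensors (wifi/Bluetooth radios), not the GPS.
            for handler in self.handlers.get(trigger.kind, []):
                handler(trigger)

    def start_low_power_gps_sampling():
        print("hint: possible trip start, sample GPS occasionally")

    def enter_car_mode_and_check_traffic():
        print("car detected: switch to car mode, check traffic on likely route")

    daemon = EventDaemon()
    # Losing the home wifi is a hint that a trip may be starting:
    daemon.on("wifi_lost", lambda t: start_low_power_gps_sampling())
    # Associating with the car's Bluetooth is a near-certain trip start:
    daemon.on("bluetooth_seen", lambda t: enter_car_mode_and_check_traffic())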

And yes, for tools like shopkick, they can even be entering stores I have registered. And as I blogged at the very beginning of this blog many years ago, we can even have an event for when we enter a store with a bad reputation. The phone can download a database of places and wireless and Bluetooth MACs that should trigger events, and as such the network doesn’t need to know my exact location to make things happen. But most importantly, I don’t want to have to know to ask if there is something important near me, I want the right important things to tell me when I get near them.

TVs should be universal, not remote controls

Like me, you probably have a dozen “universal” remote controls gathered over the years. With each new device and remote you go through a process to try to figure out special codes to enter into the remote to train it to operate your other devices. And it’s never very good, except perhaps in the expensive remotes with screens and macros.

The first universal remotes had to do this because they were made after the TVs and other devices, and had to control old ones. But the idea’s been around for decades, and I think we have it backwards. It’s not the remote that should work with any TV, it’s the TV that should work with any remote. I’m not even sure in most cases we need to have the remote come with the TV, though I know they like designing special magic buttons and layouts for each new remote.

It would be trivial for any TV or other device that displays video to figure out exactly what sort of remote you are pointing at it, and then figure out what to do with all its buttons. Since these devices now all have USB plugs and internet connections, they can even get their data updated. With the TV in a remote setting mode (which you must of course reach by the few keys on the TV), a few buttons from any remote should let the TV figure out what it’s seeing. If it can’t figure out the difference, it can ask on the screen for specific buttons to be pushed until you see a picture of your remote on the screen and confirm.

If it can’t figure out the remote, it can still learn the codes from any device by remembering them. This would let it prompt you, “push the button you want to change the channel,” and you would push it and it would know. You could also tweak any remotes. But most people would see the very simple interface of “press these keys and we’ll figure out which remote you have.” This also makes it easy to have more than one device of the same type. In particular, it makes it easy to avoid having so many “modes,” where you have to tell the remote you want to control the TV now, then the satellite box, then the stereo, then the DVD player. Instead you just tell the TV “ignore the buttons I am about to press” (for example the volume buttons) and tell the stereo to obey them. Or program a button to do different things on different devices — not a macro where a smart remote sends all the codes needed to tell the TV and stereo to switch inputs while turning on the DVD player, but just each box responding in its own way.
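
As a rough illustration of how a TV might identify which remote it is seeing from just a few decoded presses: the code database and values below are invented, and a real TV would download tables of actual manufacturer codes over its network connection.

    # Sketch: identify which remote is in use from a few decoded IR codes.
    # REMOTE_DB entries are made up purely for illustration.
    REMOTE_DB = {
        "BrandA model 101": {0x10EF08F7, 0x10EF48B7, 0x10EF807F},
        "BrandB model 22":  {0x20DF10EF, 0x20DF40BF, 0x20DFC03F},
    }

    def candidates(observed_codes):
        """Return remotes whose code set contains everything seen so far."""
        seen = set(observed_codes)
        return [name for name, codes in REMOTE_DB.items() if seen <= codes]

    # After two or three presses the candidate list usually narrows to one;
    # if not, the TV can show pictures of the remaining remotes and ask.
    print(candidates([0x20DF10EF, 0x20DF40BF]))   # -> ['BrandB model 22']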

For outlying cases, you could tell the user to program their universal remote for some well established old device. Every universal remote there is can control a Sony TV, for example. That makes sure the TV will know a set of codes.

The TVs and other devices might as well recognize all the infrared keyboards out there while they are at it.

Of course, as TVs figure out how to do this, the remotes can change. They can become a bit more standardized, and instead of trying to figure everything out, they can be the dumb device and the AV equipment can be the smart device. It’s the AV equipment that has storage, a screen, audio and so much more.

You can also train devices to understand there are multiple remotes that belong to some people. For example, the adult remote can be different from the child’s remote, and only the adult remote can see the Playboy channel, and is kept private. The child’s remote can also be limited to a number of hours of TV as I first suggested six years ago at the birth of this blog.

You can even fix the annoying problem of most remote protocols — “on” and “off” are the same button. This makes it very hard to do things like macro control because you can’t be sure what that code can do. You can have a “turn everything off” button that really works (I presume some of the ones out there use hidden non-toggle codes when they can) or codes to do things like switch on the DVD if it’s not already on, switch video and audio inputs to it, and start playing — something many systems have tried to do but rarely done well.

There are a few things to tweak to make sure “IR blasters” work properly. (These are IR outputs found on DVRs which send commands to cable and satellite boxes to change their channel etc. They are a horrible kludge, and the best way to get rid of them is the new protocols that connect the devices over IP, or the new IP over HDMI 1.4, or failing that the badly-done Anynet.)

But the key point here is this: Remotes put the smarts in the wrong place.

Comparing electricity to a gallon of gasoline

The “burning” question for electric cars is how to compare them with gasoline. Last month I wrote about how wrong the EPA’s 99mpg number for the Nissan Leaf was, and I gave the 37mpg number you get from the Dept. of Energy’s methodology. More research shows the question is complex and messy.

So messy that the best solution is for electric cars to publish their efficiency in electric terms, which means a number like “watt-hours/mile.” The EPA measured the Leaf as about 330 watt-hours/mile (or .33 kwh/mile if you prefer.) For those who really prefer an mpg type number, so that higher is better, you would do miles/kwh.

Then you would get local power companies to publish local “kwh to gallon of gasoline” figures for the particular mix of power plants in that area. This also is not very easy, but it removes the local variation. The DoE or EPA could also come up with a national average kwh/gallon number, and car vendors could use that if they wanted, but frankly that national number is poor enough that most would not want to use it in the above-average states like California. In addition, the number in other countries is much better than in the USA.

The local mix varies a lot. Nationally it’s about 50% coal, 20% gas, 20% nuclear and 10% hydro with a smattering of other renewables. In some places, like Utah, New Mexico and many midwestern areas, it is 90% or more coal (which is bad.) In California, there is almost no coal — it’s mostly natural gas, with some nuclear, particularly in the south, and some hydro. In the Pacific Northwest, there is a dominance by hydro and electricity has far fewer emissions. (In TX, IL and NY, you can choose greener electricity providers which seems an obvious choice for the electric-car buyer.)

Understanding the local mix is a start, but there is more complexity. Let’s look at some of the different methods, starting with an executive summary for the 330 wh/mile Nissan Leaf and the national average grid (a worked version of the conversions follows the list):

  • Theoretical perfect conversion (EPA method): 99 mpg-e(perfect)
  • Heat energy formula (DoE national average): 37 mpg-e(heat)
  • Cost of electricity vs. gasoline (untaxed): 75 mpg-e($)
  • Pollution, notably PM2.5 particulates: Hard to calculate, could be very poor. Hydrocarbons and CO: very good.
  • Greenhouse Gas emissions, g CO2 equivalent: 60 mpg-e(CO2)
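
Here is a worked version of the conversions behind the first two lines of that summary. The constants are the approximate figures used in this post; I assume roughly 12.3 kwh from the plug per gallon for the DoE heat method (the post says “under 13”), and small differences in the measured wh/mile explain why the EPA’s published figure is 99 rather than the ~102 this rounding produces.

    # Worked version of the mpg-equivalent conversions above.
    # Constants are approximations taken from this post.
    WH_PER_MILE = 330.0          # EPA measurement of the Leaf, roughly
    KWH_PER_GALLON_HEAT = 33.7   # EPA theoretical heat content of gasoline
    KWH_PER_GALLON_PLUG = 12.3   # assumed kwh from the plug per gallon of
                                 # fuel burned in an average US plant (DoE)

    def mpg_equivalent(kwh_per_gallon, wh_per_mile=WH_PER_MILE):
        miles_per_kwh = 1000.0 / wh_per_mile
        return kwh_per_gallon * miles_per_kwh

    print(round(mpg_equivalent(KWH_PER_GALLON_HEAT)))  # ~102: "perfect" method
    print(round(mpg_equivalent(KWH_PER_GALLON_PLUG)))  # ~37: DoE heat method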

Designing a better, faster, secure, vastly cheaper airport with proto-robocars

Like just about everybody, I hate the way travel through airports has become. Airports get slower and bigger and more expensive, and for short-haul flights you can easily spend more time on the ground at airports than you do in the air. Security rules are a large part of the cause, but not all of it.

In this completely rewritten essay, I outline the design of a super-cheap airport with very few buildings, based on a fleet of proto-robocars. I call them proto models because these are cars we know how to build today, which navigate on prepared courses on pavement, in controlled situations and without civilian cars to worry about.

In this robocar airport, which I describe first in a narrative and then in detail, there are no terminal buildings or gates. Each plane just parks on the tarmac and robotic stairs and ramps move up and dock to all its doors. (Catering trucks, fuel trucks and luggage robots also arrive.) The passengers arrive in a perfect boarding order in robocars that dock at the ramps/steps to let them get on the plane through every entrance. Luggage is handled by different robots, and is checked and picked up not in carousels and check-in desks, but at curbs, parking lots, rental car centers and airport hotels.

The change is so dramatic that (even with security issues) people could arrive at airports for flights under 20 minutes before take-off, and get out even faster. Checked luggage would add time, but not much. I also believe you could build a high capacity airport for a tiny fraction of the cost of today’s modern multi-billion dollar edifices. I believe the overall experience would also be more pleasant and more productive for all.

This essay is a long one, but I am interested in feedback. What will work here, and what won’t? Would you love to fly through this airport or hate it? This is an airport designed not to give you a glorious building in which to wait but to get you through it without waiting most of the time.

The airport gets even better when real robocars, that can drive on the streets to the airport, come on the scene.

Give me your feedback on The Robocar Airport.

Key elements of the design are detailed in the full essay.

Where will 3-D cameras like Kinect lead?

This year, I bought Microsoft Kinect cameras for the nephews and niece. At first they will mostly play energetic X-box games with them but my hope is they will start to play with the things coming from the Kinect hacking community — the videos of the top hacks are quite interesting. At first, MS wanted to lock down the Kinect and threaten the open source developers who reverse engineered the protocol and released drivers. Now Microsoft has official open drivers.

This camera produces a VGA colour video image combined with a Z (depth) value for each pixel. This makes it trivial to isolate objects in the view (like people and their hands and faces), and splitting foreground from background is easy. The camera is $150 today (when even a simple one-line LIDAR cost a fortune not long ago) and no doubt cameras like it will be cheap $30 consumer items in a few years’ time. As I understand it, the Kinect works using a mixture of triangulation — the sensor being in a different place from the emitter — combined with structured light (sending out arrays of dots and seeing how they are bent by the objects they hit.) An earlier report that it used time-of-flight is disputed, and this approach implies it will get cheaper fast. Right now it doesn’t do close up or very distant, however. While projection takes power, meaning it won’t be available full time in mobile devices, it could still show up eventually in phones for short duration 3-D measurement.
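
As a rough illustration of the triangulation principle (not the Kinect’s actual calibration), a projected dot seen from a sensor offset from the emitter shifts by an amount inversely proportional to depth; the baseline and focal length below are assumed round numbers.

    # Illustration of depth from triangulation: Z = f * b / d.
    # Baseline and focal length are invented round numbers, not a real
    # device's calibration.
    def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
        return baseline_m * focal_px / disparity_px

    for d in (10.0, 20.0, 40.0):
        print(d, "px ->", round(depth_from_disparity(d), 2), "m")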

I agree with those that think that something big is coming from this. Obviously in games, but also perhaps in these other areas.

Gestural interfaces and the car

While people have already made “Minority Report” interfaces with the Kinect, studies show these are not very good for desktop computer use — your arms get tired and are not super accurate. They are good for places where your interaction with the computer will be short, or where using a keyboard is not practical.

One place that might make sense is in the car, at least before the robocar. Fiddling with the secondary controls in a car (such as the radio, phone, climate system or navigation) is always a pain and you’re really not supposed to look at your hands as you hunt for the buttons. But taking one hand off the wheel is OK. This can work as long as you don’t have to look at a screen for visual feedback, which is often the case with navigation systems. Feedback could come by audio or a heads up display. Speech is also popular here but it could be combined with gestures.

A gestural interface for the TV could also be nice — a remote control you can’t ever misplace. It would be easy to remember gestures for basic functions like volume and channel changes, and arrow keys (or mouse) in menus. More complex functions (like naming shows etc.) are best left to speech. Again speech and gestures should be combined in many cases, particularly when you have a risk that an accidental gesture or sound could issue a command you don’t like.

I also expect gestures to possibly control what I am calling the “4th screen” — namely an always-on wall display computer. (The first 3 screens are Computer, TV and mobile.) I expect most homes to eventually have a display that constantly shows useful information (as well as digital photos and TV) and you need a quick and unambiguous way to control it. Swiping is easy with gesture control so being able to just swipe between various screens (Time/weather, transit arrivals, traffic, pending emails, headlines) might be nice. Again in all cases the trick is not being fooled by accidental gestures while still making the gestures simple and easy.

In other areas of the car, things like assisted or automated parking, though not that hard to do today, become easier and cheaper.

Small scale robotics

I expect an explosion in hobby and home robotics based on these cameras. Forget about Roombas that bump into walls; finally cheap robots will be able to see. They may not identify what they see precisely, though the 3D will help, but they won’t miss objects and will have a much easier time doing things like picking them up or avoiding them. LIDARs have been common in expensive robots for some time, but having this capability cheap will generate new consumer applications.

Mobile

There will be some gestural controls for phones, particularly when they are used in cars. I expect things to be more limited here, with big apps to come in games. However, history shows that most of the new sensors added to mobile devices cause an explosion of innovation so there will be plenty not yet thought of. 3-D maps of areas (particularly when range is longer which requires power) can also be used as a means of very accurate position detection. The static objects of a space are often unique and let you figure out where you are to high precision — this is how the Google robocars drive.

Security & facial recognition

3-D will probably become the norm in the security camera business. It also helps with facial recognition in many ways (both by isolating the face and allowing its shape to play a role) and recognition of other things like gait, body shape and animals. Face recognition might become common at ATMs or security doors, and be used when logging onto a computer. It also makes “presence” detection reliable, allowing computers to see how and where people are in a room and even a bit of what they are doing, without having to do object recognition. (Though as the Kinect hacks demonstrate, these cameras help object recognition as well.)

Face recognition is still error-prone of course, so its security uses will be initially limited, but it will get better at telling people apart.

Virtual worlds & video calls

While some might view this as gaming, we should also see these cameras heavily used in augmented reality and virtual world applications. It makes it easy to insert virtual objects into a view of the physical world and have a good sense of what’s in front and what’s behind. In video calling, the ability to tell the person from the background allows better compression, as well as blanking of the background for privacy. Effectively you get a “green screen” without the need for a green screen.

You can also do cool 3-D effects by getting an easy and cheap measurement of where the viewer’s head is. Moving a 3-D viewpoint in a generated or semi-generated world as the viewer moves her head creates a fun 3-D effect without glasses and now it will be cheap. (It only works for one viewer, though.) Likewise in video calls you can drop the other party into a different background and have them move within it in 3-D.

With multiple cameras it is also possible to build a more complete 3-D model of an entire scene, with textures to paint on it. Any natural scene can suddenly become something you can fly around.

Amateur video production

Some of the above effects are already showing up on YouTube. Soon everybody will be able to do it. The Kinect’s firmware already does “skeleton” detection, to map out the position of the limbs of a person in the view of the camera. That’s good for games but also allows motion capture for animation on the cheap. It also allows interesting live effects, distorting the body or making light sabres glow. Expect people in their own homes to be making their own Avatar-like movies, at least on a smaller scale.

These cameras will become so popular we may need to start worrying about interference by their structured light. These are apps I thought of in just a few minutes. I am sure there will be tons more. If you have something cool to imagine, put it in the comments.

Happy Seasons to all! and a Merry New Year.

Drivers cost 1.7 million person-years every year in the USA, 3rd of all major causes

I’ve written frequently about how driving fatalities are the leading cause of death for people from age 5 to 45, and one of the leading overall causes of death. I write this because we hope that safe robocars, with a much lower accident rate, can eliminate much of this death.

Today I sought to calculate the toll in terms not of lives, but in years of life lost. Car accidents kill people young, while the biggest killers like heart disease/stroke, cancer and respiratory disease kill people when they are older. The CDC’s injury prevention dept. publishes a table of “Years of Potential Life Lost” which I have had it calculate for a lifespan of 80 years. (People who die after 80 are not counted as having lost years of life, though a more accurate accounting might involve judging the average expected further lifespan for each age cohort and counting that as the YPLL.)
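
For clarity, the YPLL-80 metric described here is just a sum of years lost before age 80; the ages below are made up purely to show the arithmetic.

    # Years of Potential Life Lost (YPLL) before age 80, as described above.
    # Ages are invented only to illustrate the calculation.
    def ypll(ages_at_death, horizon=80):
        return sum(max(0, horizon - age) for age in ages_at_death)

    # A 25-year-old crash victim loses 55 years; an 82-year-old counts as 0.
    print(ypll([25, 45, 82]))   # -> 55 + 35 + 0 = 90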

The core result of the table though is quite striking. Auto accidents jump to #3 on the list from #7, and the ratios become much smaller. While each year almost a million die from cardiovascular causes and 40,000 from cars, the ratio of total years lost is closer to 4 to 1 for both cardiovascular disease and cancer, and the other leading causes are left far behind. (The only ones to compete with the cars are suicides and accidental poisoning which is much worse than I expected.)

The lesson: Work on safe robocars is even more vital than we might have thought, if you use this metric. It also seems that those interested in saving years of life may want to address the problem of accidental poisoning. Perhaps smart packaging or cheap poison detection could have a very big effect. (Update: This number includes non-intentional drug overdoses and deaths due to side effects of prescription drugs.) For suicide, this may suggest that our current approaches to treating depression need serious work. (For example, there are drugs that have surprising effectiveness on depression such as ketamine which are largely unused because they have recreational uses at higher doses and are thus highly controlled.) And if you can cure cancer, you would be doing everybody a solid.

Note: Stillbirths are not counted here. I would have expected the Perinatal causes to rank higher due to the large number of years erased. If you only do it to 65, thus counting what might get called “productive years” the motor vehicle deaths take on a larger fraction of the pie. Productivity lost to long term disability is not counted here, though it is very common in non-fatal motor vehicle accidents. Traffic deaths are dropping though so the 2009 figures will be lower.

Banks: Give me two passwords

Passwords are in the news thanks to Gawker Media, who had their database of userids, emails and passwords hacked and published on the web. A big part of the fault is Gawker’s, which was saving user passwords (so it could email them) and thus was vulnerable. As I have written before, you should be very critical of any site that is able to email you your password if you forget it.

Some of the advice in the wake of this to users has been to not use the same password on multiple sites, and that’s not at all practical in today’s world. I have passwords for many hundreds of sites. Most of them are like gawker — accounts I was forced to create just to leave a comment on a message board. I use the same password for these “junk accounts.” It’s just not a big issue if somebody is able to leave a comment on a blog with my name, since my name was never verified in the first place. A different password for each site just isn’t something people can manage. There are password managers that try to solve this, creating different passwords for each site and remembering them, but these systems often have problems when roaming from computer to computer, or trying out new web browsers, or when sites change their login pages.

The long term solution is not passwords at all, it’s digital signatures (though those have all the problems listed above), and it’s not to even have logins at all, but instead to use authenticated actions, so we are neither creating accounts to do simple actions nor using a federated identity monopoly (like Facebook Connect). This is better than OpenID too.

How Robocars affect the City, plus Masdar & City of Apple

I decided to gather together all my thoughts on how robocars will affect urban design. There are many things that might happen, though nobody knows enough urban planning to figure out just what will happen. However, I felt it worthwhile to outline the forces that might be at work so that urban geographers can speculate on what they will mean. It is hard to make firm predictions. For example, does the ability for a short pleasant trip make people want a Manhattan where everybody can get anywhere in 10 minutes, or does the ability to work or relax during trips make people not care about the duration and lead to more sprawl? It can go either way, or both.

Read Robocar influence on the future of cities.

Masdar Video

In other notes, now that Masdar’s PRT is in limited operation, there are more videos of it. Here is a CNN Report with good shots of the cars moving around. As noted before, the system is massively scaled back, and runs at ground level, underneath elevated pedestrian streets. The cars are guided by magnets but there is LIDAR to look for pedestrians and obstacles.

City of Apple

The designer of Masdar, Foster + Partners, has been retained to design the new “City of Apple” which is going to spring up literally a 5 minute walk from my house. Apple has purchased the large Cupertino tract that was a major HP facility (and which also held Tandem, which HP eventually bought) and a few other companies. This is about a mile from Apple’s main HQ in Cupertino. Speculation about the plan includes a transportation system of some kind, possibly a PRT like in Masdar. However, strangely, there are talks of an underground tunnel between the buildings which makes almost no sense in this area, particularly since I can’t imagine it would be too hard to run elevated guideway along the side of interstate 280 or even on the very wide Stevens Creek Boulevard.

Sadly, aside from Apple, there’s not a lot for the system to visit if it’s to be more than intra-company transport. The Vallco mall and the Cupertino Village are popular, but Cupertino doesn’t really have a walkable downtown to speak of.

Of course if Apple wants to tear down all the HP buildings and put up a new massive complex, it will be hard to call that a green move. The energy and greenhouse gases involved in replacing buildings are huge. For transportation, robocars could just make use of the existing highway between the two campuses. It’s not even impossible to imagine Apple building its own exits and bridges on the interstate — much cheaper than an underground tunnel.

Building a house organizing robot with image search

There are many fields that people expect robotics to change in the consumer space. I write regularly about transportation, and many feel that robots to assist the elderly will be the other big field. The first successful consumer robot (outside of entertainment) was the Roomba, a house cleaning robot. So I’ve often wondered about how far we are from a robot that can tidy up the house. People got excited when a PR2 robot was programmed to fold towels.

This is a hard problem because it seems such a robot needs to do general object recognition and manipulation, something we’re pretty far from doing. Special purpose household chore robots, like the Roomba, might appear first. (A gutter cleaner is already on the market.)

Recently I was pondering what we might do with a robot that is able to pick up objects gently, but isn’t that good at recognizing them. Such a robot might not identify the objects, but it could photograph them, and put them in bins. The members of the household could then go to their computers and see a visual catalog of all the things that have been put away, and an indicator of where it was put. This would make it easy to find objects.

The catalog could trivially be sorted by when the items were put away, which might well make it easy to browse for something put away recently. But the fact that we can’t do general object recognition does not mean we can’t do a lot of useful things with photographs and sensor readings (including precise weight and other factors) beyond that. One could certainly search by colour, by general size and shape, and by weight and other characteristics like rigidity. The item could be photographed in a 360 view by being spun on a table or in the grasping arm, or with a rotating camera. It could also be laser-scanned or 3D photographed with new cheap 3D camera techniques.

When looking for a specific object, one could find it by drawing a sketch of the object — software is already able to find photos that are similar to a sketch. But more is possible. Typing in the name of what you’re looking for could bring up the results of a web image search on that string, and you could find a photo of a similar object, and then ask the object search engine to find photos of objects that are similar. While ideally the object was photographed from all angles, there are already many comparison algorithms that survive scaling and rotation to match up objects.
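
A crude sketch of how such an object search might rank results, using a tiny hand-made feature vector (dominant colour, size, weight); real image matching would use far richer descriptors, and all names and numbers here are invented.

    # Sketch: each put-away item gets a small feature vector and queries
    # are ranked by weighted distance.  Features and weights are invented.
    import math

    CATALOG = {
        # item id: (red, green, blue, longest dimension cm, weight g, bin)
        "item-001": (0.8, 0.1, 0.1, 15.0, 120.0, "bin 3"),
        "item-002": (0.2, 0.2, 0.9, 30.0, 450.0, "bin 7"),
    }

    def distance(a, b, weights=(1, 1, 1, 0.05, 0.002)):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(weights, a, b)))

    def search(query_features, top=5):
        ranked = sorted(CATALOG.items(),
                        key=lambda kv: distance(query_features, kv[1][:5]))
        return [(name, feats[5]) for name, feats in ranked[:top]]

    # "Where is my small red thing, about 15 cm, around 100 g?"
    print(search((0.8, 0.1, 0.1, 15.0, 100.0)))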

The result would be a fairly workable search engine for the objects of your life that were picked up by the robot. I suspect that you could quickly find your item and learn just exactly where it was.

Certain types of objects could be recognized by the robot, such as books, papers and magazines. For those, bar-codes could be read, or printing could be scanned with OCR. Books might be shelved at random in the library but be easily found. Papers might be hard to manipulate but could at least be stacked, possibly with small divider sheets inserted between them with numbers on them, so that you could look for the top page of any collected group of papers and be told, “it’s under divider 20 in the stack of papers.”

SARTRE "road train" update

The folks at the SARTRE road train project have issued an update one year into their 3 year project. This is an EU-initiated project to build convoy technology, where a professional lead driver in a truck or bus is followed by a convoy of closely packed cars which automatically follow based on radio communications (and other signals) with the lead. They have released a new video on their progress from Volvo.

I have written before about the issues involved in this project and many of them remain. It’s the easiest way to get a robocar on the highway, but comes with a particularly high risk if it fails — and failure in the earliest stages of robocar projects is very likely.

In the video, some interesting elements include:

  • The building of a simulator to test driver attitudes and reactions. Generally quite positive, in that people are happy to trust the driving to the system and the lead driver. This will change a bit in a real car, since a simulator can only do so much.
  • They imagine people eating, drinking, listening to music and reading while in the convoys, but they don’t talk about the elephant in the car: sleeping. People doing anything else can quickly take the controls if there is a problem, but sleepers may not. And there’s also that act that we metaphorically call “sleeping together.”
  • Their simulations depict cars leaving the convoy from the middle. However, in this situation it seems you can’t give them too much brake-accelerator control for the difficult task of changing lanes when you are just a few feet from the cars in front and back of you. You must maintain the speed of the train until you have fully left its lane, but that means you can’t do the usual task of changing speed as you enter your new lane. Exit from the trains will need some work. (There are suggestions in the comments that make sense.)
  • They expect to have to make legal changes to allow this. However, since it’s an EU initiated project, they have a leg-up on that. This might pave the way for more robocar-friendly laws in Europe.
  • While they plan to do a live test by 2012, they are much more cautious on predicting when the trains might be common on the roads.
  • They do speculate if a simple robocar function for “stop and go” traffic, which is able to follow the car in front of you at lower speeds, might come first. Indeed, this is pretty easy, and not much more than a smarter version of existing auto-follow cruise control with steering and lane-following added.
  • Their main pitch is environmental, as drafting should save a decent amount of fuel. However, I think most people will be interested in the time savings, and I’ll be interested in how the public accepts it.

Audi TT to Pikes Peak, Masdar PRT goes into action

Two bits of robocar news from last week. I had been following the progress of the Stanford/VW team that was building a robotic Audi TT to race to the top of Pikes Peak. They accomplished their run in September, but only now made the public announcement of it. You can find photos and videos with the press release or watch a video on youtube.

This project began with the team teaching the vehicle to “drift” — make controlled turns while wheels are skidding, something needed on the windy curves and dirt/gravel/pavement mix on the way up to Pikes Peak. Initial impressions were that they had the goal of being a competitor in the famous Pikes Peak Hill Climb — a time trial race to the top by human drivers, the fastest of whom have climbed it in 10 minutes, 3 seconds in major muscle cars. The best standard cars have done it in about 11.5 minutes, and Audi says a stock TT would take a bit under 17 minutes.

The autonomous Audi’s time of 27 minutes, with a top speed of 45mph, is thus a bit disappointing for those who were hoping for some real man vs. machine competition. The team leader, Burkhard Huhnke, downplayed this, saying that the goal was to come to a better understanding of computer controlled cornering and skidding, in order to make better driver assist systems for production vehicles. Indeed, that is a good goal and it is expected that robocar technologies will first appear as driver assist and safety features in production cars.

The actual run was also marred by tragedy when the helicopter filming it crashed.

Earlier, I spoke with James Gosling — more famous as the creator of the Java language — about his role in the project. Gosling knows languages and compilers very well, and he helped the team develop a compiler for the interpreted scripts they were writing in languages like Matlab. Gosling’s compiler was able to run the resulting code around 100x faster than the interpreter, allowing them to do a lot more with less hardware.

There is strong interest in man vs. machine robocar contests. Such contests, aside from setting a great bar for the robots, will demonstrate their abilities to the public and generate strong public interest. This turned out not to be such a contest, but someday a robot will race to the top of Pikes Peak in better than 10 minutes. It will have a bigger engine, and many more sensors than the Audi in this run, which mostly relied on augmented GPS (extra transmitters were put by the roadside for full accuracy.)

A future car will have a complete map in its head of where all road surfaces are, and their characteristics. It will know the physics of the car and the road better than any human driver. The main thing humans will be able to do is use their eyes to judge changing road conditions, but they don’t change very much, and computer vision or sensor systems to make such judgments don’t seem like an impossible project.

Masdar PRT in operation

In other news, the greatly-shrunk Masdar PRT system, built by 2getthere Inc. of the Netherlands, has entered production operation in Masdar, an experimental city project just outside Abu Dhabi. The project only has 2 stops for passengers (and 3 more for cargo) at this point. It runs at ground level, and pedestrians use an artificial level one floor up.

These pods have many robocar features. They use rubber tires and run on open, unmarked pavement, guiding themselves via odometry and sensing magnets embedded every 5 feet or so in the pavement. They also have laser sensors which see obstructions on the roadway and any pedestrians. They will stop for pedestrians, and even follow you if you walk ahead, maintaining a fixed distance. The system is not designed to mix with pedestrians, however, and the control software shuts down the relevant section of the track if passengers exit their vehicle outside a station.

The tracking is accurate enough that, as you can see, the tires have left black trails on the pavement by constantly running in the same place.

Photos and video can currently be found at the PRT Consulting site and this video shows it pulling out of a station. There is only one other video — I hope more will arrive soon.

The economy has scaled Masdar’s plans back greatly. The original plan called for a whole city done one floor up with a network of these proto-robocar PRT pods running underneath, and no traditional cars in the whole city.

Nissan Leaf EPA rating of "99mpg" is, sadly, a lie.

Nissan is touting that the EPA gave the new Leaf a mileage rating of 99mpg “gasoline equivalent”. What is not said in some stories (though Nissan admits it in the press release) is that this is based on the EPA rating a gallon of gasoline as equivalent to 33.7 kwh, and the EPA judging that the car only goes 73 miles on its 24kwh battery.

There is a huge problem with these numbers. A gallon of gasoline actually contains about 36kwh of heat energy if it were possible to convert it perfectly, so possibly the EPA is factoring in the roughly 7% loss of electrical distribution. But in reality it isn’t even remotely possible to convert fuel to electricity perfectly.

I have written an update on comparing gasoline and electricity with more details.

The Department of Energy, for example, offers a number which puts just under 13kwh as the energy equivalent of a gallon of gasoline. That’s how many kwh you get out of the plug if you burn coal, gas or oil with roughly the same energy as that gallon of gas. With the DoE’s number, the Leaf is getting a combined mileage of around 36 mpg-equivalent. That’s not a bad number, but there are many gasoline cars that do better than that. Even a Lexus hybrid does about as well. This is no minor error, it’s a massive one, and it’s highly unlikely that Nissan or the EPA are unaware of it. This gives the impression of an attempt to make the Leaf seem way, way better than it is, to promote electric cars. The problem with that is that when people learn the truth, they are going to be unhappy, and will be soured on electric cars, Nissan and the EPA.

Now I will agree that there is justifiable debate over the right way to do this calculation. The DoE works from its calculation of the average efficiency of power plants in the USA. People in areas with more efficient power will do better using electricity than those close to old coal plants (which are the big drag-down here.) The DoE also counts BTUs in nuclear plants (which provide about 20% of U.S. energy) as BTUs even though no fossil fuel is burned and no greenhouse gas is emitted. People must judge for themselves how “dirty” they think nuclear BTUs are, and how to value an electric car in areas where most of the electricity is nuclear. Even harder to judge are the 10% of US kwh that come from hydro. Hydro doesn’t even have BTUs or pollution, though it does come with environmental destruction. If you live in the Pacific Northwest or parts of Canada where most of the power is from hydro, you may judge the 99mpg number as more realistic, though in this case the concept of a gasoline equivalent is stretched pretty thin.

If you live in California, which burns almost no coal and gets most of its power from natural gas, and then nuclear, the real number isn’t as bad as the national average, but it’s still nowhere close to 99mpg. If you live in a place that is almost all-coal, like Utah or New Mexico, electric cars are not so great an idea — their only environmental advantage is that the fuel source is domestic rather than imported, and the coal is burned elsewhere, not right next to you.

There are other electric cars that are more efficient than the Leaf, but the big reality is that to really beat out the 50mpg gasoline hybrids you need to make your car lighter.

“But wait,” some people say. We can run our electric car on solar or renewables and all is wonderful! Don’t get me started on this. There are no solar electrons. Installing renewable generation can be a good idea, but you must tie it to the grid for it to work. Not tying solar or wind or other sources to the grid is highly wasteful, because the power is discarded any time the battery is not empty (or worse, not connected.) Grid tie makes the grid greener, and people who do that can feel good about it if they do it well, but it doesn’t make driving more than a tiny smidgen of a percent greener than it was.

Shame on Nissan and the EPA. I hope that at least, Nissan will only sell the car in places with electricity that is well above average in quality, and refuse to sell it in places where the power is mostly from coal.

Not that I don’t understand the motivation. Had the EPA rated the car with the DoE methodology number of 36mpg, it might well have killed the car at the starting gate. It’s an interesting moral question whether it’s right to lie to kickstart a technology which will become better with time. They could also have lobbied for a more reasonable but still generous mpg, perhaps derived from the best natural gas plants, which would have offered a number in the 50s. Not nearly as exciting, but not a car-killer, though the comparison to the Prius or Insight would not look so good.

It would have been best if they had just developed a new standard, like watt-hours/mile or miles/kwh, and left it to the press and local power utilities to publish local conversions between “kwh” and gallons. (Not the dealers, they can’t be trusted of course.) It actually would be quite handy if every power utility were to publish, for each zone, the local efficiency of the power grid in terms of BTU/kwh or greenhouse effect/kwh.

Update on Chevy Volt: The numbers for the Volt were released. As a plug-in Hybrid that can go 35 miles on its batteries and then has a gasoline engine, they rated it as 97mpg while on the battery (similar false number to the Leaf) and 37mpg while on gasoline. These numbers are actually roughly the same when using electricity at the grid national average.

Sad to say, but if you live in a place where the power comes from coal, the math seems to say you should remove most of the batteries and save the weight.

Needed: An international hand signal for "There's a problem with your car"

You’re driving down the road. You see another car on the road with you that has a problem. The lights are off and it’s dusk. There is something loose that may break off. There’s something left on the roof or the trunk is not closed — any number of things. How do you tell the driver that they need to stop and check? I’ve tried sometimes, and they mostly think you are some sort of crazy person, driving too close to them, waving at them, honking or shouting. Perhaps after a few people do it they figure it out.

We have a few signals. Oncoming cars flash lights on and off to warn you your lights are off. (Sometimes they are also warning of a speed trap.) High beams mean, “I want to pass and you’re impeding the lane,” and while many think that’s rude, it’s better than tailgating.

We need a signal for “There is a problem with your car, you should check it out.” This signal should be taught in driving schools, and even be on the driving test. A publicity campaign should educate existing drivers.

One proposal that might make sense is the SCUBA signal for “I have a problem.” This is holding your hand flat, palm down, and wiggling it side to side (ie. rotating your wrist.) Then you point to the source of the problem, like your regulator or whatever. (There are specific SCUBA signals for well known problems, like being low on air, nitrogen narcosis etc.)

For this signal you would waggle the hand and then point at the place on the other person’s car. To those untrained, the signal often means “dicey” or uncertain. Shaking of the head could also strengthen the signal.

Anybody have a better signal to propose?

Robocars vs. Deer and the flying bumper

Today, I was challenged with the question of how well robocars would deal with deer crossing the road. There are 1.5 million collisions with deer in the USA every year, resulting in 200 human deaths and of course many more dead deer. Many of the human injuries and crashes have come from trying to swerve to avoid the deer, and skidding instead during the panic.

At present there is no general purpose computer vision system that can just arbitrarily identify things — which is to say you can’t show it a camera view of anything and ask, “what is that?” CV is much better at looking for specific things, and a CV system that can determine if something is a deer is probably something we’re close to being able to make. However, I made a list of a number of the techniques that robots might use to do a better job of avoiding collisions with animals, and started investigating thoughts on one more, the “flying bumper,” which I will detail below.

Spotting and avoiding the deer

  • There are great techniques for spotting animal eyes using infrared light bouncing off the retinas. If you’ve seen a cheap flash photo with the “red eye” effect you know about this. An IR camera with a flash of IR light turns out to be great at spotting eyes and figuring out if they are looking at you, especially in darkness.
  • A large number of deer collisions take place at dusk or at night, both because deer move at these times and because humans see badly then. LIDAR works superbly in darkness, and can see 100m or more. On dry pavement, a car can come to a full stop from 80mph in about 100m if it reacts instantly (see the rough calculation after this list). The robocar won’t identify a deer on the road instantly, but it will do so quickly, and can thus brake to be quite slow by the time it travels 100m.
  • Google’s full-map technique means the robocar will already have a complete LIDAR map of the road and terrain — every fencepost, every bush, every tree — and of course, the road. If there’s something big in the LIDAR scan at the side of the road that was not there before, the robocar will know it. If it’s moving and more detailed analysis with a zoom camera is done, the mystery object at the side of the road can be identified quickly. (Radar will also be able to tell if it’s a parked or disabled vehicle.)
  • They are expensive today, but in time deep infrared cameras which show temperature will become cheap and appear in robocars. Useful for spotting pedestrians and tailpipes, they will also do a superb job on animals, even animals hiding behind bushes, particularly in the dark and cool times of deer mating season.
  • Having spotted the deer, the robocar will never panic, the way humans often do.
  • The robocar will know its physics well, and unlike the human, can probably plot a safe course around the deer that has no risk of skidding. If the ground is slick with leaves or rain, it will already have been going more slowly. The robocar can have a perfect understanding of the timings involved with swerving into the oncoming traffic lane if it is clear. The car can calculate the right speed (possibly even speeding up) where there will be room to safely swerve.
  • If the oncoming traffic lane is not clear, but the oncoming car is also a robocar, it can talk to that car both to warn it and to make sure both cars have safe room to swerve into the oncoming lane.
  • Areas with major deer problems put up laser sensors along the sides of the road, which detect if an animal crosses the beam and flash warning lights. A robocar could get data from such sensors to get more advance warning of animal risk areas.
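
As a quick back-of-envelope check of the stopping-distance figure mentioned in the list above, assuming an idealized constant deceleration of about 0.7g on dry pavement and zero reaction time (the deceleration value is an assumption; real cars and surfaces vary):

    # Back-of-envelope stopping distance: d = v^2 / (2 * a), zero reaction time.
    G = 9.81            # m/s^2
    MPH_TO_MS = 0.44704

    def stopping_distance_m(speed_mph, decel_g=0.7):
        v = speed_mph * MPH_TO_MS
        return v * v / (2 * decel_g * G)

    print(round(stopping_distance_m(80)))   # ~93 m, consistent with the ~100m figure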

Getting the deer to move

There might be some options to get the deer to get out of the way. Deer sometimes freeze, a “deer in the headlights.” A robocar, however, does not need to have visible headlights! It may have them on for the comfort of the passengers who want to see where they are going and would find it spooky driving in the dark guided by invisible laser light, but those comfort lights can be turned off or dimmed during the deer encounter, something a human driver can’t do. This might help the deer to move.

Robocar impact on traffic congestion and capacity

Many people wonder whether robocars will just suffer the curse of regular cars, namely traffic congestion. They are concerned that while robocars might solve many problems of the automobile, in many cities there just isn’t room for more roads. Can robocars address the problems of congestion and capacity? What about combined with ITS (Intelligent Transportation Systems) efforts to make roads smarter for human driven cars?

I think the answer is quite positive, for a number of different reasons. I have added a new Robocar essay:

Traffic Congestion and Capacity with Robocars

In short, a wide variety of factors (promotion of small, single passenger cars, ability to reverse streets during rush-hour, elimination of accidents and irrational congestion-fostering behaviour, shorter headways, metering of road usage and load balancing of roads and several others) could amount to a severalfold increase in the capacity of our roads, with minimal congestion. If you add the ability to do convoys, the increase can be 5 to 10 fold. (About 20-fold in theory.) The use of on-demand pooling into buses over congested sections allows a theoretical (though unlikely) 100-fold increase in highway capacity.

While these theoretical limits are unlikely, the important lesson is that once most of the cars on the roads are robotic, we have more than enough road capacity to handle our current needs and needs well into the future. In general, overcapacity causes building, so in time we’ll start to use it up — and have much larger cities, if we wish them — but unlike today’s roads, which accept cars until they collapse from congestion, advanced metering can assure that no road accepts more vehicles than it can handle without major risk of congestion collapse.

Even before most cars are robotic, various smart-road efforts will work to improve capacity and traffic flow. The appearance of robotic safety systems in human driven cars will also reduce accidents and congestion along the way. Free market economist Robin Hanson believes the ability of cities to grow much larger will be one of the biggest consequences of robocar capacity improvements.

Shoot Nikon? Please help review my article on choosing lenses for Nikon cameras

For many years I have had a popular article on what lenses to buy for a Canon DSLR. I shoot with Canon, but much of the advice is universal, so I am translating the article into Nikon.

If you shoot Nikon and are familiar with a variety of lenses for them, I would appreciate your comments. At the start of the article I indicate the main questions I would like people’s opinions on, such as moderately priced wide angle lenses, as well as regular zooms.

If you “got a Nikon camera and love to take photographs” please read the article on what lens to buy for your Nikon DSLR and leave comments here or send them by email to btm@templetons.com. I’m also interested in lists of “what’s in your kit” today.
