
Notes from Robodevelopers conference


I paid a few visits to the RoboDevelopers conference over the past few days. It was a modest-sized affair, one of the early attempts at a commercial robot development conference (such events have mostly been academic in the past). The show floor was modest, with just 3 short aisles, and the program was modest as well, but Robocars were an expanding theme.

Sebastian Thrun (of the Stanford "Stanley" and "Junior" Darpa Grand Challenge teams) gave the keynote. I've seen him talk before but his talk is getting better. Of course he knows just about everything in my essays without having to read them. He continues (as I do) to put a focus on the death toll from human driving, and is starting to add an energy plank to the platform.

While he and I believe Robocars are the near-term computer project with the greatest benefit, the next speaker, Maja Mataric of USC, argued that human-assistance robots will be even bigger. They are the other credible contender, though the focus is different. Robocars will save a million young people who would otherwise be killed by human driving. Assist robots will improve and prolong the lives of many millions more of the aged who would otherwise decline from ordinary decrepitude. (Of course, improved anti-aging drugs might change that.) Both are extremely worthy projects that are not getting enough attention.

Mataric said that while people in Robotics have been declaring "now is the hot time" for almost 50 years, she thinks this time she really means it. Paul Saffo, last weekend at Convergence 08, declared the same thing. He thinks the knee of the Robotics "S Curve" is truly upon us.

On the show floor, and demonstrated in a talk by Bruce Hall (of Velodyne Lidar and of Team DAD in the Darpa Grand Challenges), was Velodyne's 64-line high-resolution LIDAR. This sensor was on almost all the finishers in the Urban Challenge.

While it is very expensive today ($75,000), Hall believes that with an order for 10 million units it would cost only hundreds of dollars, without any great advances. With a bit of Moore's Law, it could be even less in short order.

Their LIDAR sees out to 120 meters. Hall says it could be tuned to reach almost 300 meters, though of course resolution gets low out there. But even 120 meters gives you the ability to stop (on dry road) from up to 80 mph. Of course you need a bit of time to examine a potential obstacle before you hit the brakes that hard, so the more range the better, but this sensor can deliver with today's technology.
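The arithmetic behind that claim is easy to check. A quick sketch, where the 0.8 g dry-road deceleration and 0.5 s perception latency are my own assumed figures, not numbers from the talk:

```python
# Back-of-the-envelope stopping distance versus LIDAR range.
# Assumptions (mine, not Hall's): 0.8 g braking on dry pavement,
# 0.5 s to confirm an obstacle before braking begins.

G = 9.81              # m/s^2
MPH_TO_MS = 0.44704   # metres per second in one mile per hour

def stopping_distance(speed_mph, decel_g=0.8, latency_s=0.5):
    """Distance travelled from first detection to a full stop."""
    v = speed_mph * MPH_TO_MS
    reaction = v * latency_s               # travel while deciding to brake
    braking = v**2 / (2 * decel_g * G)     # kinematics: v^2 = 2 a d
    return reaction + braking

print(round(stopping_distance(80)))   # ~99 m, inside the 120 m range
print(round(stopping_distance(100)))  # ~150 m, beyond it
```

Under these assumptions, 80 mph fits within 120 meters with some margin, while 100 mph would already need the longer-range tuning.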

The LIDAR uses a Class 1 (eye-safe) infrared laser, and Hall says it works in any amount of sunlight, and of course in darkness. He also says that having many units together on the road does not present a problem, and did not at the Urban Challenge when cars came together. It might require something fancier to avoid deliberate jamming or interference. I suspect the military will pay for that technology to be developed.

This LIDAR, at a lower cost, seems good enough for a Whistlecar today, combined perhaps with tele-valet remote operation. The LIDAR is good enough to drive at modest urban speeds (25 mph) and not hit anything that isn't trying to hit you. A tele-valet could get the whistlecar out of jams as it moves to drivers, filling stations and parking spots.

These forecasts of cheap, long-range LIDAR make me very optimistic about Whistlecars if we can get them approved for use in limited areas, notably parking lots, airports, gated communities and the like. We may be able to deploy this even sooner than some expect.


I've been thinking about this for decades myself...but not as formally as you and these other researchers.

The radar would also have to tell the difference among a tree, a dog, and a person.

I can't wait for these radars to improve enough for a bumper to bumper stream of cars to drive 100 mph through an intersection. It should be quite scary at first :-)


You get a resolution of about 1 inch, so this is not an ideal tool for identifying dogs, people and trees. Existing systems are pretty good at identifying other cars with it, both because of their shape and the fact that they are often moving; ditto for people and animals.

The one thing it does give you is very reliable assurance that there is nothing in your path or in other places you are interested in. If the LIDAR gives you a distant return, you can be pretty sure there is nothing closer.
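That guarantee has a simple form: the nearest return inside the vehicle's forward corridor bounds how far ahead is known to be clear. A toy sketch, using a made-up point format rather than Velodyne's actual output:

```python
def clear_distance(returns, path_half_width=1.5, max_range=120.0):
    """Nearest LIDAR return inside the car's forward corridor.

    `returns` is a list of (x, y) points in metres, x forward and
    y to the left -- an illustrative format, not the real sensor's.
    A distant (or absent) return means nothing is closer.
    """
    clear = max_range
    for x, y in returns:
        if x > 0 and abs(y) <= path_half_width:
            clear = min(clear, x)
    return clear

points = [(40.0, 0.2), (15.0, 5.0), (90.0, -1.0)]
print(clear_distance(points))  # 40.0 -- the 15 m return is off to the side
```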

Once it tells you where objects are, other systems (including cameras and higher-resolution, spot-based LIDAR) will be used to zoom in on the objects and identify what they are and what they are doing. If they are people, I expect cameras to zoom in on the face and determine where the person is looking; there are already systems that do this.

Right now the big problem camera-based systems face is lighting. They can work under some lighting conditions but not others. LIDAR provides its own infrared light, so it works in all lighting.

I expect a robocar to feature LIDAR, wide field stereo cameras in all directions, actual radar (including down at the bumpers for parking) and narrow field zoom cameras on fast servos that can look at objects and people to identify them and their intentions. And anything else we think of.

Perhaps the ID decision would be aided by other devices that provide the vehicle with other "senses".

Standard video and infrared cameras, along with computer vision software similar to this, might aid the decision-making process. Would a small Doppler radar unit be plausible, and would it provide data that a LIDAR does not?

I also think that as time goes by and more "smart" vehicles are on the road, better detailed maps will become available, mapping the locations of things such as trees, utility poles and mailboxes. I would think the robovehicle itself would play a key role in creating and maintaining these maps. A method to authenticate and validate these updates will have to be worked out.
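One way that authenticate-and-validate step could work is for each vehicle to sign the map updates it submits. A minimal sketch using an HMAC with a shared key; the key scheme and field names are invented for illustration (a real system would more likely use per-vehicle certificates):

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # illustrative stand-in for real credentials

def sign_update(update: dict) -> str:
    """Produce a tag proving the update came from a keyed vehicle."""
    payload = json.dumps(update, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_update(update: dict, signature: str) -> bool:
    """Reject updates that were forged or altered in transit."""
    return hmac.compare_digest(sign_update(update), signature)

update = {"object": "mailbox", "lat": 37.4419, "lon": -122.1430}
sig = sign_update(update)
print(verify_update(update, sig))                  # True
print(verify_update({**update, "lat": 0.0}, sig))  # False: tampered
```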

A P2P communication protocol between vehicles traveling along similar vectors could be employed to track and update the locations of mobile targets such as people and pets, and of other dynamic changes to the environment such as new road construction, traffic delays, and new or worsening potholes. Presumably these vehicles will be able to share data and the decisions they made, so each robocar will not need to make a decision that was already made by 30 other robocars that passed along the same route within the past 10 minutes.
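The "don't redecide what 30 cars already decided" idea amounts to a time-limited cache keyed by road location. A toy sketch, where the grid size, the 10-minute TTL and the message shape are all assumptions of mine:

```python
import time

class SharedObservations:
    """Toy peer cache: reuse a recent peer decision for a road segment."""

    def __init__(self, ttl_s=600, grid_m=10):
        self.ttl_s = ttl_s    # 10 minutes, matching the comment above
        self.grid_m = grid_m  # coarse location bucket, in metres
        self.cache = {}

    def _key(self, x_m, y_m):
        return (int(x_m // self.grid_m), int(y_m // self.grid_m))

    def report(self, x_m, y_m, decision, now=None):
        now = time.time() if now is None else now
        self.cache[self._key(x_m, y_m)] = (decision, now)

    def lookup(self, x_m, y_m, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(self._key(x_m, y_m))
        if entry and now - entry[1] <= self.ttl_s:
            return entry[0]   # a peer decided this recently
        return None           # stale or unseen: decide it yourself

obs = SharedObservations()
obs.report(105.0, 42.0, "pothole ahead, slow down", now=0)
print(obs.lookup(103.0, 44.0, now=300))  # still fresh at t = 5 min
print(obs.lookup(103.0, 44.0, now=900))  # None: older than 10 minutes
```

The explicit `now` parameter just makes the sketch testable; live code would use the wall clock.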

While privacy issues will have to be addressed, RFID tags for pets could aid in the ID decision and at the same time notify the owner that Fluffy has wandered out into the street.



While I am in favour of data connections between vehicles, I think it's pretty fundamental that operation, and certainly safety, must not depend on them. Vehicles must be able to operate well and totally safely without any communication with other cars, if for no other reason than that the communication might break down. Or other cars might lie.

What car-to-car connections can do is make things better. Allow cars to leave smaller gaps than humans can. Detect problems even sooner. Predict the road surface. Flow around people planning to make turns well in advance. Go a bit faster. Form convoys for drafting.

That's different from car-to-central-to-car communication, for things like mapping the locations of trees and potholes. That I expect to happen, and to some extent to be relied on, though again not as a necessity, since things change. So if you know where a mailbox is, it does help you in saying, "OK, that's the mailbox," but if you can't spot a person in time 100% of the time without that, you should not be on the road yet.
