Google's robocar explained in video

Since getting involved with Google's self-driving-car team, I've had to keep silent about its internals, but for those who are interested in the project, a recent presentation at the intelligent robotics conference in San Francisco is now up on YouTube. The talk is by Sebastian Thrun (overall project leader) and Chris Urmson, lead developer. Sebastian led the Stanley and Junior teams in the DARPA Grand Challenge, and Chris led CMU teams, including Boss, which won the Urban Challenge.

The talk begins in part one with the story of the grand challenges. If you read this blog, you probably know most of that story.

Part two (above) shows video that's been seen before in Sebastian's TED talk and my own talks, along with maps of some of the routes the car has driven. Then Chris presents some hard technical details about mapping and sensors.

Part three shows the never-before-revealed story of a different project called "Caddy": self-driving, self-delivering golf carts for use in campus transportation. The golf carts are an example of what I've dubbed a WhistleCar -- a car that delivers itself to you, and which you then drive yourself in any complex situations.

If you want to see what's inside the project, these videos are a must-watch, particularly part two (embedded above) and the start of part three.

There's lots of other robocar news after the Intelligent Transportation Systems conference, which I attended this week in Orlando, FL. The ITS community is paying only minimal attention to robocars, which is an error on their part, but a lot of the technology there will eventually affect how robocars develop -- though a surprising amount of it will become obsolete because it focuses on the problems caused by lots of human driving.

Comments

The ability to drive thousands of miles autonomously is awesome. But it seems that the car relies heavily on accurate prior knowledge of the road, signals, etc., and it relies heavily on the cooperation of other drivers. There doesn't seem to be much lookahead or dynamic situational driving.

One example is the left turn in Palo Alto. The car started to make the turn and then noticed the pedestrians. While waiting for the pedestrians to cross, it was holding up oncoming traffic. I would expect that truly robust robocars would have to have the competence to drive in unfamiliar and challenging situations FIRST, and then factor in maps and street views.

There's no way to gather diverse enough data to train machine learning algorithms to handle the range of situations a human driver can react to instinctively (even if sometimes sub-optimally). It's interesting that in these reports of thousands of miles driven autonomously, the result is that nothing so unusual was encountered as to require intervention by the human backup. What accounts for this?

The stop sign example showed that driving is partly about social cognition. Like, spotting the drunks and the crazies by abstract properties like swerving and aggressiveness. It's easy to drive in California. The roads and signals pretty much take care of you and most other drivers are cooperative. How does the car fare in Boston?

@Eric:
The video showed an example where the car swerved slightly to give space to a bicyclist.

However, I do agree with your point about unusual events. What would have happened above if there was a car approaching in an oncoming lane?

My example of an extraordinary event is to imagine a tree blocking the lane the robocar is in while a *person* is blocking the lane the robocar wants to swerve into to avoid a crash. Will it pick the tree's lane or the person's lane? What if the person is *near* the lane? What would it decide then?

I'm a programmer, and I know that I don't want to be involved in writing any software for a robocar. That would be too much pressure for me.

I can't wait for robocars. I've been dreaming and thinking about them since I was 10 (over 40 years ago), but don't think I'll be an early adopter.

Randy
