This weekend’s announcement that Google had logged 140,000 miles of driving in traffic with its prototype robocars got lots of press, but it’s not the only news of teams making progress. A team at TU Braunschweig in Germany has its own model, which has been driving on ordinary city streets with human oversight. You can watch a video of the car in action, though there is a lot of B-roll in that video, so seek ahead to 1:50, and particularly 3:20, for the inside view of the supervisor’s hands hovering just over the self-turning steering wheel. There is some information on Stadtpilot here, and we can see many similarities to the other projects, including the use of the Velodyne 64-line LIDAR on the roof, a typical array of sensors, and more use of detailed maps.
The team at Vislab in Parma has completed most of their intercontinental autonomous car journey from Italy to Shanghai, which I have been following. You can read their blog or watch video (sometimes live) of their trip. Much of the blog has ended up being not about the autonomous challenges, but simply the challenges of taking a fleet of very strange looking vehicles in a convoy across Eastern Europe and Asia. For example, they have trucks which can carry their robocars inside, and they once decided it was simpler to cross the border into Hungary this way. However, they left the country driving the vehicles, and the exit officials got very concerned that there was no record of the robocars coming into the country. I presume it wasn’t hard to convince them the team was not smuggling Hungarian robocars out.
The Vislab challenge is impressive but aimed at very different goals. Their focus is on more near-term “driver assist” technology. This makes sense, and is a definite stop on the roadmap to robocars. However, it means their trek is not as autonomous as you might imagine. The lead vehicle in each pair is usually human driven, in particular because there are no accurate maps of the route they are taking. The second vehicle is usually autonomous, and either follows the leader visually or by receiving sets of GPS waypoints from it.
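To illustrate the leader-follower mode, here is a minimal sketch (my own illustration, not Vislab’s code) of how a following vehicle might steer toward GPS waypoints received from the lead vehicle. The pose format and the lookahead distance are assumptions:

```python
import math

def follow_waypoints(pose, waypoints, lookahead=5.0):
    """Return a steering correction (radians) toward the first waypoint
    at least `lookahead` meters ahead of the follower.

    Hypothetical simplification: `pose` is (x, y, heading) in a local
    metric frame; `waypoints` are (x, y) points relayed by the leader.
    """
    x, y, heading = pose
    for wx, wy in waypoints:
        if math.hypot(wx - x, wy - y) >= lookahead:
            desired = math.atan2(wy - y, wx - x)
            # Steering error, wrapped to [-pi, pi]
            return (desired - heading + math.pi) % (2 * math.pi) - math.pi
    return 0.0  # no waypoint far enough ahead; hold course
```

A real follower would of course fuse this with its own obstacle detection rather than blindly chase the leader’s breadcrumbs.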
They plan to reach Shanghai for the close of the World’s Fair there.
Alberto Broggi on the Vislab team tells me that one of the big challenges in some areas — in particular Moscow — has been really erratic driving by locals. (That’s a lot, coming from Italians!) I have certainly seen how in some places in Eastern Europe and Asia the lines on the road (including the center one) are “just a suggestion.” Their software is not up to dealing with this, and they’ve gone wholly manual there.
The Vislab team has smaller 4-line LIDARs and has not used the larger Velodyne everybody else uses, though they agree it’s the best sensor available. They want to see what they can develop for driver assist that is at a lower price point, particularly what they can do with machine vision. Vision sensors are indeed cheap, and in fact they are becoming stock equipment on many cars for other purposes. “…given someone else is investing on this lidar, research-wise I feel some value in investing on another technology,” says Professor Broggi.
My prediction is that with Moore’s law and economies of scale, all sensors will get cheap over time, and that all robocars will want to have as many sensors as they can afford. That means that all efforts to gain more out of each type of sensor are worthwhile.
More Google Notes
A small part of the press on Google’s robocars was overshadowed by the question of why a company like Google is doing this. I think the answer lies in the extensive use of mapping and local data to improve the performance of their cars. Google has given its cars a really, really good sense of where they are, because they only drive streets that were scanned before in 3-D by similar vehicles. With a complete 3-D map of each street, knowing the location of every curb, tree and building by shape, the cars are able to figure out very accurately just where they are with nothing more than a loose position from GPS and the odometer. The GPS may only report position within a few meters, and may in fact not even be working in some locations, but the car knows where it is because that building is here, and that other building is over there, and the lane markers on this particular street are at X, Y and Z. In effect, the Google car is able to navigate the street because it has been there before and remembered every little detail of it — after taking out the moving things. While I don’t know how good the system is, such a technique should be fairly robust — even if a building is knocked down, there are lots of other buildings. Even in rural areas with a certain sameness, I imagine the dashed lane markings, phone poles, ditches, road edge shapes and other clues will never look quite the same over any short stretch. The GPS, odometer and inertial systems will always tell you roughly where you are, but the street memory will tell you exactly.
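The idea of snapping a loose GPS fix onto a remembered map can be sketched with a toy example. This is my own illustration, not Google’s method: the tiny landmark map and brute-force grid search stand in for the far denser 3-D scan matching a real system would do:

```python
import math

# Hypothetical prior map: coordinates of remembered fixed features
# (poles, curb corners, building edges) in a local metric frame.
MAP_LANDMARKS = [(2.0, 5.0), (8.0, 1.0), (4.0, 9.0)]

def refine_position(gps_guess, observed, search=2.0, step=0.5):
    """Grid-search around a coarse GPS fix for the position that best
    aligns currently observed landmarks with the prior map.

    `observed` holds landmark offsets relative to the vehicle, assumed
    already rotated into the map frame for simplicity.
    """
    gx, gy = gps_guess
    best, best_cost = (gx, gy), float("inf")
    steps = int(search / step)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            cx, cy = gx + i * step, gy + j * step
            cost = 0.0
            for ox, oy in observed:
                # World position each observation implies for this candidate,
                # scored against the nearest remembered landmark.
                wx, wy = cx + ox, cy + oy
                cost += min(math.hypot(wx - lx, wy - ly) ** 2
                            for lx, ly in MAP_LANDMARKS)
            if cost < best_cost:
                best, best_cost = (cx, cy), cost
    return best
```

Even with a GPS fix that is off by a meter or so, matching a handful of remembered features pins the vehicle down, which is the robustness the approach relies on.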
And this is something Google, particularly its StreetView team, is one of the world’s experts on. There are two broad problem spaces in autonomous driving. One is the tactics — figuring out what to do in real time based on changing conditions, avoiding obstacles and so on. The other is the navigation (what might be called the strategy) of knowing where you are and picking lanes and turns. They are not disjoint, of course, but Google has become heavily involved in navigation tools, and their breakthrough involves not just improving them, but making them so good that the sense of where you are helps the tactical driving system figure out what it’s going to do.
I’ve been advocating the development of what I call a trillion-mile test suite. Google has also demonstrated the value of precise mapping data, which is easy to get — just drive down a street a few times. With a proper test suite, your future robocar will have already driven — virtually — down every street in your region before it ever physically takes you down them.
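A crude sketch of what such a suite might look like: replay logged drives through a candidate driving system and flag every frame where it diverges from what a trusted driver (human or vetted software) actually did. All the names and the log format here are my own hypothetical choices:

```python
def replay_suite(logged_drives, planner):
    """Replay logged sensor frames through a candidate planner and
    collect the frames where its decision differs from the recording.

    Hypothetical harness: a real suite would score whole trajectories
    for safety margins rather than demand exact decision matches.
    """
    mismatches = []
    for drive in logged_drives:
        for frame in drive["frames"]:
            decision = planner(frame["sensors"])
            if decision != frame["recorded_decision"]:
                mismatches.append((drive["id"], frame["t"], decision))
    return mismatches
```

The point of the trillion miles is coverage: with enough logged streets and incidents, a software change can be judged against virtually every situation the fleet has ever seen before it touches a real road.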