The future of computer-driven cars and deliverbots
During a very busy September of travel, I let a number of important stories fall through the cracks. The volume of mainstream press articles on Robocars is immense. Most are rehashes of things you have already seen here, but if you want the fastest breaking news, there are now some sources that focus on that. Here I will report the important news with analysis.
Last week, I commented on the VW scandal and asked the question we have all wondered, "what the hell were they thinking?" Elements of an answer are starting to arise, and they are very believable and teach us interesting lessons, if true. That's because things like this are rarely fully to blame on a small group of very evil people, but are more often the result of a broad situation that pushed ordinary (but unethical) people well over the ethical line. This we must understand because frankly, it can happen to almost anybody.
The ingredients, in this model, are:
- A hard driving culture of expected high performance, and doing what others thought was difficult or impossible.
- Promising the company you will deliver a hotly needed product in that culture.
- Realizing too late that you can't deliver it.
- Panic, leading to cheating as the only solution in which you survive (at least for a while.)
There's no question that VW has a culture like that. Many successful companies do; some even attribute their excellence to it. Here's a quote from the 90s from VW's leader at the time, talking about his desire for a hot new car line, and what would happen if his team told him they could not deliver it:
"Then I will tell them they are all fired and I will bring in a new team," Piech, the grandson of Ferdinand Porsche, founder of both Porsche and Volkswagen, declared forcefully. "And if they tell me they can't do it, I will fire them, too."
Now we add a few more interesting ingredients, special to this case:
- European emissions standards and tests are terrible, and allowed diesel to grow very strong in Europe, and strong for VW in particular
- VW wanted to duplicate that success in the USA, which has much stronger emissions standards and tests
The team is asked to develop an engine that can deliver power and fuel economy for the US and other markets, and do it while meeting the emissions standards. The team (or its leader) says "yes," instead of saying, "That's really, really hard."
They get to work, and as has happened many times in many companies, they keep saying they are on track. Plans are made. Tons of new car models will depend on this engine. Massive marketing and production plans are made. Billions are bet.
And then it unravels
Not too many months before the ship date, it is reported, the team working on the engine -- it is not yet known precisely who -- finally comes to a realization. They can't deliver. They certainly can't deliver on time, and possibly they can never deliver within the cost budget they have been given.
Now we see the situation in which ordinary people might be pushed over the line. If they don't deliver, the company has few choices. They might be able to put in a much more expensive engine, with all the cost such a switch would entail, price the cars much higher than they hoped, and deliver them late. They could cancel all the many car models that were depending on this engine, costing billions. They could release a wimpy car that won't sell very well. In any of these cases, they are all fired, and their careers in the industry are probably over.
Or they can cheat and hope they won't get caught. They can be the heroes who delivered the magic engine, and collect bonuses and rewards. 95% of the time they don't get caught, and even if they are, the outcome is worse -- but not, in their minds, a lot worse than what they are already facing. So they pretend they built the magic engine, and program it to fake that on the tests.
Among the most common questions I have seen in articles in the mainstream press, near the top is, "Who is going to be liable in a crash?" Writers always ask it but never answer it. I have often given the joking answer by changing the question to "Who gets sued?" and saying, "In the USA, that's easy. Everybody will get sued."
Yesterday I attended the "Silicon Valley reinvents the wheel" conference by the Western Automotive Journalists which had a variety of talks and demonstrations of new car technology.
Now that robocars have hit the top of the "Gartner Hype Cycle" for 2015, everybody is really piling on, hoping to see what's good for their industry due to the robocar. And of course, there is a great deal of good, but not for several industries.
Let me break down some potential misconceptions if my predictions are true:
Most of you will have heard about the giant scandal where it has been revealed that Volkswagen put software in their cars to deliberately cheat on emissions tests in the USA and possibly other places. It's very bad for VW, but what does it mean for all robocar efforts?
You can read tons about the Volkswagen emissions violations but here's a short summary. All modern cars have computer controlled fuel and combustion systems, and these can be tuned for different levels of performance, fuel economy and emissions. (Of course, ignition in a diesel is not done by an electronic spark.) Cars have to pass emission tests, so most cars have to tune their systems in ways that reduce other things (like engine performance and fuel economy) in order to reduce their pollution. Most cars attempt to detect the style of driving going on, and tune the engine differently for the best results in that situation.
VW went far beyond that. Apparently their system was designed to detect when it was in an emissions test. In these tests, the car is on rollers in a garage, and it follows certain patterns. VW set their diesel cars to look for this, and tune the engine to produce emissions below the permitted numbers. When the car saw it was in more regular driving situations, it switched the tuning to modes that gave it better performance and better mileage but in some cases vastly worse pollution. A commonly reported number is that in some modes 40 times the California limit of Nitrogen Oxides could be emitted, and even over a wide range of driving it was as high as 20 times the California limit (about 5 times the European limit.) NOx are a major smog component and bad for your lungs.
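To make the mechanism concrete, here is a minimal, purely illustrative sketch of how a defeat device could work. This is not VW's actual code, which has not been published; the signals and thresholds are assumptions for illustration. The key observation is that on a dynamometer the drive wheels turn while the steering wheel never moves and no cornering forces are felt, a pattern a controller could watch for.

```python
# Hypothetical defeat-device logic (illustrative only -- not VW's real code).
# On test rollers, the car "drives" but the steering never moves and no
# lateral acceleration is measured. The signal names and thresholds below
# are invented for the sketch.

def looks_like_dyno_test(speed_kph, steering_angle_deg, lateral_accel_g):
    """Guess whether the car is on test rollers rather than a real road."""
    moving = speed_kph > 5
    wheel_never_turns = abs(steering_angle_deg) < 0.5
    no_cornering_forces = abs(lateral_accel_g) < 0.01
    return moving and wheel_never_turns and no_cornering_forces

def select_engine_map(speed_kph, steering_angle_deg, lateral_accel_g):
    """Pick an engine tuning map based on the detected situation."""
    if looks_like_dyno_test(speed_kph, steering_angle_deg, lateral_accel_g):
        return "low_NOx_map"      # clean tuning: passes the emissions test
    return "performance_map"      # better power/mileage, far higher NOx
```

The point of the sketch is how little it takes: legitimate adaptive tuning already switches engine maps based on driving style, so the cheat is just one more (illegitimate) branch in that logic.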
It has not been revealed just who at VW did this, and whether other car companies have done this as well. (All companies do variable tuning, and it's "normal" to have modestly higher emissions in real driving compared to the test, but this was beyond the pale.) The question everybody is asking is "What the hell were they thinking?"
That is indeed the question, because I think the central issue is why VW would do this. After all, having been caught, the cost is going to be immense, possibly even ruining one of the world's great brands. Obviously they did not really believe they would get caught.
Beyond that, they have seriously reduced the trust that customers and governments will place not just in VW, but in car makers in general, and in their software offerings in particular. VW will lose trust, but this will spread to all German carmakers and possibly all carmakers. This could result in reduced trust in the software in robocars.
What the hell were they thinking?
The motive is the key thing we want to understand. In the broad sense, it's likely they did it because they felt customers would like it, and that would lead to selling more cars. At a secondary level, it's possible that those involved felt they would gain prestige (and compensation) if they pulled off the wizard's trick of making a diesel car which was clean and also high performance, at a level that turns out to be impossible.
Much press has been made over Jonathan Petit's recent disclosure of an attack on some LIDAR systems used in robocars. I saw Petit's presentation on this in July, but he asked me for confidentiality until they released their paper in October. However, since he has decided to disclose it, there's been a lot of press, with truth and misconceptions.
There are many security aspects to robocars. By far the greatest concern would be compromise of the control computers by malicious software, and great efforts will be taken to prevent that. Many of those efforts will involve having the cars not talk to any untrusted sources of code or data which might be malicious. The car's sensors, however, must take in information from outside the vehicle, so they are another source of compromise.
There are ways to compromise many of the sensors on a robocar. GPS can be easily spoofed, and there are tools out there to do that now. (Fortunately real robocars will only use GPS as one clue to their location.) Radar is also very easy to spoof -- far easier than LIDAR, agrees Petit -- but their goal was to see if LIDAR is vulnerable.
The attack is a real one, but at the same time it's not, in spite of the press, a particularly frightening one. It may cause a well designed vehicle to believe there are "ghost" objects that don't actually exist, so that it might brake for something that's not there, or even swerve around it. It might also overwhelm the sensor, so that it feels the sensor has failed, and thus the car would go into a failure mode, stopping or pulling off the road. This is not a good thing, of course, and it has some safety consequences, but it's also a fairly unlikely attack. Essentially, there are far easier ways to do these things that don't involve the LIDAR, so it's not too likely anybody would want to mount such an attack.
Indeed, to do these attacks, you need to be physically present, near the target car, and you need a solid object that's already in front of the car, such as the back of a truck that it's following. (It is possible the road surface might work.) This is a higher bar than attacks which might be done remotely (such as computer intrusions) or via radio signals (such as with hypothetical vehicle-to-vehicle radio, should cars decide to use that tech.)
Here's how it works: LIDAR works by sending out a very short pulse of laser light, and then waiting for the light to reflect back. The pulse is a small dot, and the reflection is seen through a lens aimed tightly at the place the pulse was sent. The time it takes for the light to come back tells you how far away the target is, and the brightness tells you how reflective it is, like a black-and-white photo.
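The time-of-flight arithmetic above is simple enough to show directly: light travels out to the target and back, so the computed range is half the round-trip time multiplied by the speed of light.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_return_time(t_seconds):
    """Distance to target from the round-trip pulse time.
    The light travels out and back, so divide the total path by two."""
    return C * t_seconds / 2.0

# A return arriving 400 ns after the pulse means a target about 60 m away.
```

This also shows why the timing matters so much in what follows: at these scales, one nanosecond of timing error corresponds to about 15 cm of range.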
To fool a lidar, you must send another pulse that comes from or appears to come from the target spot, and it has to come in at just the right time, before (or on some, after) the real pulse from what's really in front of the LIDAR comes in.
The attack requires knowing the characteristics of the target LIDAR very well. You must know exactly when it is going to send its pulses before it sends them, and thus precisely (to the nanosecond) when a return reflection ("return") would arrive from a hypothetical object in front of the LIDAR. Many LIDARs are quite predictable. They scan a scene with a rotating drum, and you can see the pulses coming out, and know when they will be sent.
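Given that predictability, the attacker's timing problem reduces to a small calculation. This is a simplified sketch under the assumption of an attacker co-located with the real reflecting object (so their own one-way travel time roughly matches half the round trip they are imitating); the real attack has more engineering in it, but the core timing is this:

```python
C = 299_792_458.0  # speed of light, m/s

def fake_return_delay(ghost_distance_m):
    """How long after the (predicted) pulse emission the attacker's laser
    must fire so the LIDAR computes a ghost object at ghost_distance_m.
    A real object at that range would produce a round trip of 2*d/c, so
    the spoofed pulse must arrive at that moment. Simplified: assumes the
    attacker sits at the real reflecting surface in front of the LIDAR."""
    return 2.0 * ghost_distance_m / C

# To fake an object 20 m ahead, fire roughly 133 ns after the real pulse.
# The fake must also arrive BEFORE the genuine return from the real object
# behind it, which is why the ghost must be closer than the real surface.
```

This is also why the attack needs a real solid object (like the back of a truck) already in front of the car: the attacker needs a surface to fire from or bounce off, and a real return to pre-empt.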
Everybody has heard about Google's restructuring. In the restructuring, Google [x], which includes the self-driving car division, will be a subsidiary of the new Alphabet holding company, and no longer part of Google.
Having been a consultant on that team, I have some perspective to offer on how the restructuring might affect the companies that become Alphabet subsidiaries and leave the Google umbrella.
From small beginnings, this event has grown: over 800 people are here at the Ann Arbor AUVSI/TRB Automated Vehicles Symposium. Let's summarize some of the news.
Lots of PR about the new test track opening at University of Michigan. I have not been out to see it, but it certainly is a good idea to share one of these rather than have everybody build their own, as long as you don't want to test in secret.
I'm in the Detroit area for the annual TRB/AUVSI Automated Vehicle Symposium, which starts tomorrow. Today, those in Ann Arbor attended the opening of the new test track at the University of Michigan. Instead, I was at a small event with a lot of good folks in downtown Detroit, sponsored by SAFE which is looking to wean the USA off oil.
Much was discussed, but a particularly interesting idea was just how close we are getting to something I had put further in the future -- robocars that are cheaper than ordinary cars.
We know electric cars are getting better and likely to get popular even when driven by humans. Tesla, at its core, is a battery technology company as much as it's a car company, and it is sometimes joked that the $85,000 Tesla with a $40,000 battery is like buying a battery with a car wrapped around it. (It's also said that it's a computer with a car wrapped around it, but that's a better description of a robocar.) (Update: Since this article was written, the cost of the Tesla battery has dropped to closer to $20,000.)
At Singularity U, we're releasing a new video series answering questions about our future technology topics that come from Twitter. My segment is one of the first, and while regular readers of my blog will probably have seen me talk about most of these, here is the video:
The press were all a-twitter about a report from Reuters that there had been a near miss between Delphi's test car and one of Google's, though it was quickly denied that anything had happened.
The situation described, one car cutting off another, was a very unlikely one for several reasons:
A reader recently asked about the synergies between robocars and ultracapacitors/supercapacitors. It turns out they are not what you would expect, and it teaches some of the surprising lessons of robocars.
2 months mostly on the road, so here's a roundup of the "real" news stories in the field.
This weekend I went to Pomona, CA for the 2015 DARPA Robotics Challenge which had robots (mostly humanoid) compete at a variety of disaster response and assistance tasks. This contest, a successor of sorts to the original DARPA Grand Challenge which changed the world by giving us robocars, got a fair bit of press, but a lot of it was around this video showing various robots falling down when doing the course:
What you don't hear in this video are the cries of sympathy from the crowd of thousands watching -- akin to when a figure skater might fall down -- or the cheers as each robot would complete a simple task to get a point. These cheers and sympathies were not just for the human team members, but in an anthropomorphic way for the robots themselves. Most of the public reaction to this video included declarations that one need not be too afraid of our future robot overlords just yet. It's probably better to watch the DARPA official video which has a little audience reaction.
Don't be fooled as well by the lesser-known fact that there was a lot of remote human tele-operation involved in the running of the course.
What you also don't see in this video is just how very far the robots have come since the first round of trials in December 2013. During those trials the amount of remote human operation was very high, and there weren't a lot of great fall videos because the robots had tethers that would catch them if they fell. (These robots are heavy and many took serious damage when falling, so almost all testing is done with a crane, hoist or tether able to catch the robot during the many falls which do occur.)
We aren't yet anywhere close to having robots that could do tasks like these autonomously, so for now the research is in making robots that can do tasks with more and more autonomy with higher level decisions made by remote humans. The tasks in the contest were:
- Starting in a car, drive it down a simple course with a few turns and park it by a door.
- Get out of the car -- one of the harder tasks as it turns out, and one that demanded a more humanoid form
- Go to a door and open it
- Walk through the door into a room
- In the room, go up to a valve with a circular handle and turn it 360 degrees
- Pick up a power drill, and use it to cut a large enough hole in a sheet of drywall
- Perform a surprise task -- in this case, throwing a lever on day one, and on day two unplugging a power cord and plugging it into another socket
- Either walk over a field of cinder blocks, or roll through a field of light debris
- Climb a set of stairs
The robots have an hour to do all this, so they are often extremely slow, and yet, to the surprise of most, the audience -- a crowd of thousands, and thousands more online -- watched with fascination and cheering, even when robots would take a step once a minute, pause at a task for several minutes, or get into a problem and spend 10 minutes being fixed by humans as a penalty.
Some headlines (I've been on the road and will have more to say soon.)
Google announces it will put new generation buggies on city streets
Google has done over 2.7 million km of testing with their existing fleet, they announced. Now they will be putting their small "buggy" vehicles onto real streets in Mountain View. The cars will stick to slower streets, and are NEVs that only go 25 mph.
Earlier this week I was sent some advance research from the U of Michigan about car sickness rates for car passengers. I found the research of interest, but wish it had covered some questions I think are more important, such as how carsickness is changed by potentially new types of car seating, such as face to face or along the side.
Most of the robocar press this week has been about the Delphi drive from San Francisco to New York, which completed yesterday. Congratulations to the team. Few teams have tried such a long course over so many different roads. (While Google has over a million miles logged in their testing by now, they have not reported covering 3,500 miles of distinct roads; most of their testing is done around Google HQ.)