NHTSA, the federal car safety agency, has been talking about getting into the robocar game for a while, and now declares it wants more involvement, with two important details:
Unlike California, they are keen on making sure full robocars (able to run unmanned) are part of the regulations, and
Their regulations might supersede those of states like California.
In the next six months, the DoT will work with states and others on a unified policy. There are some other details here.
(California, by the way, will have hearings in the next couple of weeks on its regulations. I will be out of the state, unfortunately.)
On top of this there is a $4 billion (over 10 years) proposal in the new Obama budget to support and accelerate robocars and (sadly) connected cars.
Perhaps most heartening is a plan to offer reduced regulation for up to 2,500 early deployment vehicles — a way to get companies out there in the field without shackling them first. Public attitudes on robocars have pushed regulators to a rather radical approach, namely attempting to define regulations before a product is actually on the market, with California even thinking of banning unmanned cars before they arrive. In the normal history of car safety regulation, technologies are built and deployed by vendors and are usually on the road for decades before they get regulated, but people are so afraid of robots that this normal approach may not happen here.
GM Delays super-cruise again
There was a fair bit of excitement when Cadillac announced “super-cruise,” a product similar to what you see in the Tesla autopilot, for the 2014 model year, or so we thought. It was the first effort from a big car company at some level of self-driving, even if minimal. Since then, they’ve kept delaying it, while Mercedes, Tesla and others have released such products. Now they have said it won’t show until at least 2017. GM is quickly dropping in the ranks of active robocar companies, leaving the U.S. mantle to Tesla and Ford. Chrysler has never announced anything and even ran anti-self-driving-car ads in the Super Bowl a few years ago.
Tesla releases “summon” and hints at more
The latest Tesla firmware release offers a “summon” function, so you can tell your car to park itself and come back to you (with a range of 39 feet). The primary use is to have your car go park itself in the garage, or at a robotic charging station. This didn’t stop Elon Musk from promising we are not very far away from being able to summon the car from very far away.
The pace of news is getting fast. Even I’m having trouble keeping up, though it’s part of my job. This blog will continue to be a place not for all the news, but for the news that actually makes a difference, with analysis.
Here are some other items you might find of interest:
A new news web site from Continental, one of the Tier 1 suppliers building self-driving systems. This is general news, not directly about Continental.
Ford is testing in snow up in Michigan. Localizing on snow is not hard with LIDAR if there are lots of poles, signs and other objects which stick up above the snow. Driving a freshly covered road with no landmarks will be harder. Another issue is deciding what to do when other cars have chosen to “make a lane” in the wrong place, when you know where the lanes really are.
Chris Urmson reports two interesting statistics. The first is “simulated contacts” — times when a safety driver intervened, and the vehicle would have hit something without the intervention:
There were 13 [Simulated Contact] incidents in the DMV reporting period (though 2 involved traffic cones and 3 were caused by another driver’s reckless behavior). What we find encouraging is that 8 of these incidents took place in ~53,000 miles in ~3 months of 2014, but only 5 of them took place in ~370,000 miles in 11 months of 2015.
(There were 69 safety disengages, of which 13 were determined to be likely to cause a “contact.”)
The second is detected system anomalies:
There were 272 instances in which the software detected an anomaly somewhere in the system that could have had possible safety implications; in these cases it immediately handed control of the vehicle to our test driver. We’ve recently been driving ~5300 autonomous miles between these events, which is a nearly 7-fold improvement since the start of the reporting period, when we logged only ~785 autonomous miles between them. We’re pleased.
Let’s look at these and why they are different and how they compare to humans.
The “simulated contacts” are events which would have been accidents in an unsupervised or unmanned vehicle, which is serious. Google is now having one once every 74,000 miles, though Urmson suggests this rate may not keep going down as they test the vehicle in new and more challenging environments. It’s also noted that a few were not the fault of the system. Indeed, for the full set of 69 safety disengagements, the rate of those is actually going up, with 29 of them in the last 5 months reported.
How does that number compare with humans? Well, regular people in the USA have about 6 million accidents per year reported to the police, which means about once every 500,000 miles. But for some time, insurance companies have said the number is twice that, or once every 250,000 miles. Google’s own new research suggests even more accidents are taking place that go entirely unreported by anybody. For example, how often have you struck a curb, or even had a minor touch in a parking lot that nobody else knew about? Many people would admit to that, and altogether there are suggestions the human number for a “contact” could be as bad as one per 100,000 miles.
That would put the Google cars close to that level, though this is from driving in simple environments with no snow and easy California driving situations. In other words, there is still some distance to go, but at least one possible goal seems within striking distance. Google even reports going 230,000 miles from April to November of last year without a simulated contact, a (cherry-picked) stretch that nonetheless matches human levels.
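The rates in the last few paragraphs can be checked from the quoted figures alone. A quick sketch; all inputs are estimates taken from the text, not authoritative data:

```python
# Miles per incident implied by the figures quoted above.

# Google, 2015 portion of the reporting period: 5 simulated contacts in
# ~370,000 miles.
google_rate = 370_000 / 5    # ~74,000 miles per simulated contact

# Humans: ~6 million police-reported accidents per year over roughly
# 3 trillion vehicle-miles driven in the USA.
human_reported = 3_000_000_000_000 / 6_000_000    # ~500,000 miles
human_insured = human_reported / 2                # insurance industry estimate
human_all_contacts = 100_000                      # worst-case guess from the text

print(f"Google (2015): one per ~{google_rate:,.0f} miles")
print(f"Humans, police-reported: one per ~{human_reported:,.0f} miles")
```

The gap between the worst-case human number and the Google number is what makes "striking distance" a fair description.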
For the past while, when people have asked me, “What is the biggest obstacle to robocar deployment, is it technology or regulation?” I have given an unexpected answer — that it’s testing. I’ve said we have to figure out just how to test these vehicles so we can know when a safety goal has been met. We also have to figure out what the safety goal is.
Various suggestions have come out for the goal: Having a safety record to match humans. Matching good humans. Getting twice or even 10 times or even 100 times as good as humans. Those higher, stretch goals will become good targets one day, but for now the first question is how to get to the level of humans.
One problem is that the way humans have accidents is quite different from how robots probably will. Human accidents sometimes have a single cause (such as falling asleep at the wheel) but many arise because 2 or more things went wrong. Almost everybody I talk to will admit there has been a time when they were looking away from the road to adjust the radio or even play with their phone, looked up to see traffic slowing ahead of them, and hit the brakes just in time, narrowly avoiding an accident. Accidents often happen when luck like this runs out. Robotic accidents will probably mostly come from a single flaw or error. A robot doing anything unsafe, even for a moment, will be cause for alarm, and the source of the error will be fixed as quickly as possible.
This leads us to look at the other number — the safety anomalies. At first, this sounds more frightening. They range from 39 hardware issues and anomalies to 80 “software discrepancies” which may include rarer full-on “blue screen” style crashes (if the cars ran Windows, which they don’t). People often wonder how we can trust robocars when they know computers can be so unreliable. (The most common detected fault is a perception discrepancy, with 119. It is not said, but I will presume these will include strange sensor data or serious disagreement between different sensors.)
It’s important to note the hidden message. These “safety anomaly” interventions did not generally cause simulated contacts. With human beings, zoning out, taking your eyes off the road, texting or even briefly falling asleep does not always result in a crash, and nor will similar events for robocars. In the event of a detected anomaly, one presumes that independent (less capable) backup systems will immediately take over. Because they are less capable, they might cause an error, but that should be quite rare.
As such, the 5,300 miles between anomalies, while clearly in need of improvement, may also not be a bad number. Certainly many humans have such an “anomaly” that often (that’s about every six months of human driving). It depends how often such anomalies might lead to a crash, and what severity of crash it would be.
The report does not describe something more frightening — a problem with the system that it does not detect. This is the sort of issue that could lead to a dangerous “careen into oncoming traffic” style event in the worst case. The “unexpected motion” anomalies may be of this class. (As such an event would be a contact incident, we can conclude it’s very rare if it happens at all in the current cars.) (While I worked on Google’s car a few years ago, I have no inside data on the performance of the current generations of cars.)
I have particular concern with the new wave of projects hoping to drive with trained machine learning and neural networks. Unlike Google’s car and most others, the programmers of those vehicles have only a limited idea how the neural networks are operating. It’s harder to tell if they’re having an “anomaly,” though the usual things like hardware errors, processor faults and memory overflows are of course just as visible.
The other vendors
Google didn’t publish total disengagements, judging most of them to be inconsequential. Safety drivers are regularly disengaging for lots of reasons:
Taking a break, swapping drivers or returning to base
Moving to a road the car doesn’t handle or isn’t being tested on
Any suspicion of a risky situation
The latter is the most interesting. Drivers are told to take the wheel if anything dangerous is happening on the road, not just with the vehicle. This is the right approach — you don’t want to use the public as test subjects; you don’t want to say, “let’s leave the car auto-driving and see what it does with that crazy driver trying to hassle the car or that group of schoolchildren jaywalking.” Instead the approach is to play out the scenario in simulator and see if the car would have done the right thing.
Delphi reports 405 disengagements in 16,600 miles — but their breakdown suggests only a few were system problems. Delphi is testing on highway where disengagement rates are expected to be much lower.
Nissan reports 106 disengagements in 1485 miles, most in their early stages. For Oct-Nov their rate was 36 for 866 miles. They seem to be reporting the more serious ones, like Google.
Tesla reports zero disengagements, presumably because they would define what their vehicle does as not a truly autonomous mode.
VW’s report is a bit harder to read, but it suggests 5500 total miles and 85 disengagements.
If the number is the 100,000 mile or 250,000 mile figure we estimate for humans, that’s still pretty hard to test. You can’t just take every new software build and drive it for a million miles (about 25,000 hours) to see if it has fewer than 4 or even 10 accidents. You can and will test the car over billions of miles in simulator, encountering every strange situation ever seen or imagined. Before the car has its first accident, it will be unlike a human: it will probably perform flawlessly. If it doesn’t, that will be immediate cause for alarm back at HQ, and correction of the problem.
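The testing problem can be made concrete with a toy statistical model. Under a simple Poisson assumption (my illustration, not anyone's actual validation methodology), even demonstrating human-level safety takes a surprising number of incident-free miles:

```python
import math

# How many incident-free miles demonstrate a safety rate, under a simple
# Poisson model? (An illustrative sketch, not a real validation methodology.)

def miles_to_demonstrate(target_miles_per_incident, confidence=0.95):
    """Miles of zero-incident driving needed so that, if the true rate were
    worse than the target, we would have seen at least one incident with the
    given probability."""
    return target_miles_per_incident * math.log(1 / (1 - confidence))

print(round(miles_to_demonstrate(100_000)))   # ~300,000 miles for the worst-case human rate
print(round(miles_to_demonstrate(250_000)))   # ~750,000 miles for the insurance-claim rate
```

And that is for one software build; each significant change restarts the clock, which is why simulation has to carry most of the load.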
Makers of robocars will need to convince themselves, their lawyers and safety officers, their boards, the public and eventually even the government that they have met some reasonable safety goal.
Over time we will hopefully see even more detailed numbers on this. That is how we’ll answer this question.
This does turn out to be one advantage of the supervised autopilots, such as what Tesla has released. Because it can count on all the Tesla owners to be the fail-safe (or if you prefer, guinea pig) for its autopilot system, Tesla is able to quickly gather a lot of data about the safety record of its system over a lot of miles — far more than can be gathered if you have to run the testing operation with paid drivers or even your own unmanned cars. This ability to test could help the supervised autopilots get to good confidence numbers faster than expected. Indeed, though I have often written that I don’t feel there is a good evolutionary path from supervised robocars to unmanned ones, this approach could prove my prediction wrong. For if Tesla or some other car maker with lots of cars on the road is able to make an autopilot, and then observe that it never fails in several million miles, then they might have a legitimate claim on having something safe enough to run unmanned, at least on the classes of roads and situations the customers tested it on. Though a car that does 10 million perfect highway miles is still not ready to drive you door to door on urban streets, as Elon Musk claimed would happen soon with the Tesla yesterday.
Ford’s CEO talks like he gets it. Ford did not have too much to show — they announced they will be moving to Velodyne’s new lower cost 32-laser puck-sized LIDAR for their research, and boosting their research fleet to 30 vehicles. They plan for full-auto operation in limited regions fairly soon.
Ford is also making its own efforts in one-way car share (similar to Daimler Car2Go and BMW DriveNow) called GoDrive, which pushes Ford more firmly into the idea of selling rides rather than cars. The car companies are clearly coming to believe this sooner than I expected, and the reason is very clearly the success of Uber. (As I have said, it’s a mistake to think of Uber as competition for the taxi companies. Uber is competition for the car companies.)
Ford is also doing an interesting “car swap” product. While details are scant, it seems what the service will do is let you swap your Ford for somebody else’s different Ford. For example, if somebody has an F-150 or Transit Van that they know they won’t use the cargo features on some day or weekend, you drive over with your ordinary sedan and swap temporarily for their truck — presumably with a small amount of money flowing to the more popular vehicle. Useful idea.
The big announcement that didn’t happen was the much-rumoured alliance between Ford and Google. Ford did not overtly refute it but suggested they had enough partners at present. The alliance would be a good idea, but either the rumours were wrong, or they are waiting for another event (such as the upcoming Detroit Auto Show) to talk about it.
Faraday Future, where art thou?
The big disappointment of the event was the silly concept racecar shown by Faraday Future. Oh, sure, it’s a cool electric racecar, but it has absolutely nothing to do with everything we’ve heard about this company, namely that they are building a consumer electric car-on-demand service with autonomous delivery. Everybody wondered if they had booked the space and did not have their real demo ready on time. It stays secret for a while, it seems. Recent hires, such as Jan Becker, the former head of the autonomous lab for Bosch, suggest they are definitely going autonomous.
Mapping heats up
Google’s car drives by having super-detailed maps of all the roads, and that’s the correct approach. Google is unlikely to hand out its maps, so both Here/Navteq (now owned by a consortium of auto companies in Germany) and TomTom have efforts to produce similar maps to licence to non-Google robocar teams. They are taking fairly different approaches, which will be the subject of a future article.
One interesting edge is that these companies plan to partner with big automakers and not just give them map data but expect data in return. That means that each company will have a giant fleet of cars constantly scanning the road, and immediately reporting any differences between the map and the territory. With proper scale, they should get reports on changes to the road literally within minutes of them happening. The first car to encounter a change will still need to be able to handle it, possibly by pulling over and/or asking the human passenger to help, but this will be a very rare event.
MobilEye has announced a similar plan, and they are already the camera in a large fraction of advanced cars on the road today. MobilEye has a primary focus on vision, rather than LIDAR, but will have lots of sources of data. Tesla has also been uploading data from their cars, though it does not (as far as I know) make as extensive use of detailed maps, though it does rely on general maps.
Lyft announced a $500M investment from GM with $500M more, pushing them to a $5.4B valuation, which is both huge and just a tenth of Uber. This was combined with talk of a push to robocars. (GM will provide a car rental service to Lyft drivers to start, but the speculation is that whatever robocar GM gets involved in will show up at Lyft.)
With no details, Lyft’s announcement doesn’t really add anything to the robocar world that Uber doesn’t already add. It is GM’s participation that is more interesting, because it’s another car company showing it is not just giving lip service to the idea of selling rides rather than cars. (Mercedes and BMW have also started saying real things in this area.)
My initial expectations for the big car companies were much more bleak. I felt that their century-long histories of doing nothing but selling cars would impede them from switching models until it was too late. That might still happen, and will happen for some companies, but more might survive than expected. The story also contains some more pure PR comments about OnStar in the new Lyft rental cars. Lyft drivers are all linked in real time with their smartphones; OnStar is obsolete technology, named only to make it seem GM is adding something. GM is not a great robocar leader. They have been very slow even with their highway “super cruise” efforts, and the best they have done is partner with Rajkumar at CMU, only to find Uber more successful at working with CMU folks.
Sidecar and where are you going?
Also frightening is the news last week of the death of Sidecar. Sidecar was the third-place smartphone-hail company after Uber and Lyft, but so distant a third that it decided to shut down. Where Lyft can raise another billion, Sidecar could not get a dime. The CEO is a friend of mine, and I’ve been impressed that Sidecar was willing to innovate, even building a successful delivery business on top of the fact that you had to tell Sidecar where you were going. I think it’s important that users say where they are going. It allows much better planning of the use of robocar resources. If customers say where they are going, you can not only do some of the things Sidecar did (deliveries in the trunk the passenger doesn’t even know about, pricing set by drivers, directional goals set by drivers, etc.) but do more:
Send short-range cars (electric cars) for short trips
Send small (one or two person) cars when there is just one rider
Send cars not even capable of the highway if the trip doesn’t involve the highway
Pool riders far more efficiently, sometimes in vehicles designed for pooling which have 2-12 private “cabins.”
All of this is important to making transportation vastly more efficient, and in allowing a wide variety of vehicle designs, and a wide variety of power trains. It is only by knowing the destination that many of these benefits can be seen.
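The list above is essentially a matching problem: given the trip, send the cheapest vehicle that can actually do it. A minimal sketch, with hypothetical vehicle classes, costs and thresholds of my own invention:

```python
# Destination-aware dispatch: knowing riders, distance and route class up
# front lets the fleet pick the cheapest capable vehicle. All fleet data
# here is illustrative, not from any real service.

from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    seats: int
    range_miles: float
    highway_capable: bool
    cost_per_mile: float

FLEET = [
    Vehicle("city-pod", 1, 40, False, 0.10),   # small, short-range electric
    Vehicle("two-seater", 2, 90, True, 0.15),
    Vehicle("sedan", 4, 300, True, 0.25),
]

def pick_vehicle(riders, trip_miles, uses_highway):
    candidates = [
        v for v in FLEET
        if v.seats >= riders
        and v.range_miles >= trip_miles * 1.2    # 20% reserve margin
        and (v.highway_capable or not uses_highway)
    ]
    return min(candidates, key=lambda v: v.cost_per_mile, default=None)

print(pick_vehicle(1, 3, False).name)   # short city hop -> city-pod
print(pick_vehicle(1, 3, True).name)    # same trip via highway -> two-seater
print(pick_vehicle(3, 50, True).name)   # group trip -> sedan
```

Without the destination, the fleet must send a vehicle that can handle any trip, which means every ride pays for highway capability and full range it may never use.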
Uber lets you enter the destination but does not require it, and people do like having less to do when summoning a vehicle. (I always enter the destination when in places they don’t speak English, it’s a handy way to communicate with the driver.) The driver is not shown the destination until after they pick you up. This stops drivers from refusing rides going places they don’t want to go, which has its merits. It also has serious downsides for drivers, who sometimes at the end of their shift pick up a rider who wants to go 40 miles in the opposite direction of their home.
Even more frightening is what Sidecar’s death says about how much room there is for competitors in the robotaxi space. There are dozens of car makers competing for a new car customer, but San Francisco, the birthplace of Uber, Lyft and Sidecar, could not support 3 players in one of the world’s hottest investment spaces. Two unicorns, but nobody else.
When it comes to competition, the ride business is a strange one. For scheduled rides (which was most of the black car business before Uber) there are minimal economies of scale. A one-car limo “fleet” is still a viable business today, picking up customers for scheduled rides. They provide the same service as a 100 car limo-fleet, though they sometimes have to turn you down or redirect you to a partner.
For on-demand rides, there is a big economy of scale. I want a car now, so you have to have a lot of cars to be sure to have one near me. I will go with the service that can get to me soonest. While price and vehicle quality matter, they can be trumped by pickup time, within reason. Sidecar, being small, often failed in this area, including my attempt to use it on its last day on my way home from the airport.
Robocars offer up a middle ground. Because there is no driver who minds waiting, it will be common to summon a robocar longer in advance of when you want it. Once you know that “I’m leaving in around 20 minutes” you can summon, and the car can find somewhere to wait except in the most congested zones. Waiting time for a robotaxi can be very cheap, well under a dollar/hour, though during peak times, robotaxi owners will raise the price a little to avoid lost opportunity costs. (Finance costs will be under 20 cents/hour at 5% interest, and waiting space will range from free to probably 30 cents/hour in a competitive parking “spot market.”)
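The finance figure is easy to verify: at 5% interest, the hourly cost of the capital tied up in an idle car is tiny. The $35,000 vehicle price below is my assumed round number, not from the text:

```python
# Hourly cost of an idle robotaxi: interest on the capital plus parking.
# Vehicle price is an assumption; the 5% rate and ~30 cent parking figure
# come from the text above.

car_price = 35_000          # assumed vehicle cost, dollars
annual_rate = 0.05          # 5% interest
hours_per_year = 365 * 24

finance_cost_per_hour = car_price * annual_rate / hours_per_year
parking_cost_per_hour = 0.30   # upper end of the "spot market" guess

print(f"${finance_cost_per_hour:.2f}/hour finance cost")
print(f"${finance_cost_per_hour + parking_cost_per_hour:.2f}/hour total waiting cost")
```

At roughly 50 cents an hour, even an hour of pre-positioned waiting barely registers against the price of a ride.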
The more willing customers are to summon in advance, the more competitive a small player can be. They can offer you instant service when you actually are ready to leave, and that way they can compete on factors other than wait time. Small players can be your first choice, and they can subcontract your business to another company who has a car close by when you forget to summon in advance.
CES in Las Vegas
I’m off to CES Wednesday. This show, as before promises to have quite a lot of car announcements. Rumours suggest the potential Ford/Google announcement could happen there, along with updates from most major companies. There will also be too many “connected” car announcements because companies need to announce something, and it’s easy to come up with something in that space that sounds cool without the actual need that it be useful.
This morning already sees an announcement from Volvo and Ericsson about streaming video in cars. This is a strange one, a mix of something real — as cars become more like living rooms and offices they are going to want more and better bandwidth, including bandwidth reliable enough for video conferencing — but also something silly, in that watching movies and TV shows is, with a bit of buffering, a high-bandwidth application that’s easy to get right on an unreliable network. Though in truth, because wireless bandwidth on the highway is always going to be more expensive than wifi in the parking space, it really makes more sense to pre-load your likely video choices to win both ways on cost and quality. I have been fascinated watching the shift between semi-planned watching (DVD rental, Netflix DVD queue, DVR, prepaid series subscriptions, watchlists and old-school live TV) and totally ad-hoc streaming on demand. While I understand the attraction of ad-hoc streaming (even for what you planned far ahead to watch) it surprises me that people do it even at the expense of cost and quality. Of course, there are parallels to how we might summon cars!
Yahoo Autos is reporting rumours that Google and Ford will announce a partnership at CES. Google has always said it doesn’t want to build the cars, and Ford makes sense as a partner — big, but with only modest R&D efforts of its own, and frankly a brand that needs a jolt of excitement. That means it will be willing to work with Google as a partner which calls many of the shots, rather than just viewing them as a supplier, which gets to call few of them. Ford has the car-making skills, global presence and scale to take this to any level desired. Besides, if Google really wanted, it could buy Ford with cash it has on hand. :-)
This is combined with the announcement of what I predicted earlier in the year — that Alphabet will spin out the self-driving car project (known internally as “Chauffeur”) into its own corporate subsidiary.
While the big story of the week was the California regulations, here are some other items worthy of note, and non-note.
No, a whiz-kid hasn’t duplicated what the big labs did
There was a fair bit of press about the self-driving car efforts of George Hotz. Hotz modified an Acura ILX to do some basic self-driving. It’s a worthwhile project, and impressive for a solo operator, but the amount of press hype was so large that Tesla even issued a “correction,” which is pretty close to spot-on.
I don’t know Hotz, and know nothing about his effort beyond what’s in the story, but what is described is what were viewed as “solved problems” years ago by the major teams. What’s interesting about his effort is how much less work is required to do it today. The sensors are much cheaper, the computing is cheaper and smaller, the AI tools and other software tools are much better and more readily available, and the cars are easier to interface to.
In particular, Hotz gets to take advantage of two things not easy for early teams. Today’s cars are all controlled by digital signals on internal controller area network buses. Many cars are very close to “drive by wire” if you can use that bus. The problem is, most car vendors are very protective about their bus protocols, and also want to change them, so it’s hard to make a production system based on unsupported protocols learned via reverse engineering. Better to take the hard but certain route of pretending to be the sensors in the brake pedal and gas pedal, and wire on to the steering motor.
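To illustrate what relying on a reverse-engineered bus involves, here is a sketch of decoding a single hypothetical frame. The arbitration ID, byte layout and scale factor are all made up for illustration; real ones vary by vendor, go undocumented, and can change between model years, which is exactly why building a product on them is risky:

```python
import struct

# Decoding a (hypothetical) steering-angle frame learned by reverse
# engineering a car's CAN bus. The ID, layout and scaling below are
# invented for illustration only.

STEERING_ANGLE_ID = 0x25    # made-up arbitration ID
ANGLE_SCALE = 0.1           # made-up scale: raw counts -> degrees

def decode_steering(can_id, payload):
    """Return the steering angle in degrees, or None for other frames."""
    if can_id != STEERING_ANGLE_ID:
        return None
    # Assume a signed 16-bit big-endian angle in the first two bytes.
    raw, = struct.unpack_from(">h", payload, 0)
    return raw * ANGLE_SCALE

frame = bytes([0x00, 0x64, 0, 0, 0, 0, 0, 0])   # raw count of 100
print(decode_steering(0x25, frame))             # -> 10.0 (degrees)
```

Every such mapping has to be rediscovered per vehicle, with no guarantee the vendor won't change it — hence the appeal of the harder but certain route of spoofing pedal sensors and adding a steering motor.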
The other rising trend of interest is the surge of capability in convolutional neural networks and the “deep learning” algorithm. Google loves these tools and just open sourced the TensorFlow package to spread it out into the world. This is starting to affect the conclusions I wrote in my article several years ago on the question of whether cameras or lidar will be the primary sensor in a robocar. In that essay, I conclude that computer vision is still too uncertain a quantity to predict, while cheap lidar is a safe and easy prediction. Computer vision is improving faster than expected, though it’s still not there yet. It is this, I think, that gives Elon Musk the (still probably false) confidence to declare lidar as the wrong direction.
Many people with whom I have had conversations have felt that Tesla’s early autopilot release was reckless. And Elon Musk perhaps agrees, because he has noted that videos show customers clearly doing unsafe things with the autopilot. The Tesla autopilot handles most highway conditions, and in fact lulls people into thinking it handles them all. In reality it is a system that needs constant monitoring, like a good cruise control. Some of us have feared it’s a matter of when, not if, a Tesla on autopilot will have an incident.
One comment from Tesla has particularly concerned me. It is said the Teslas are improving every week based on learning from data gathered from all the Teslas out running in autopilot, perhaps a million miles a day. That is an impressive and useful resource, and Tesla has even said they would love for them to learn every day. Learning is good, but that rate of learning strongly suggests that no human quality assurance is being done on the results of the learning — the QA is being done by the customers. I fear that is not a safe approach at this stage of the technology.
Many more teams and entrants
Baidu has stepped up their efforts and now also will work on buses. They have also stepped up their partnership with BMW. Samsung has entered the fray as well as Kia (Hyundai announced big plans earlier this year.) Tata group has also announced plans but through the Tata Elxsi design division, not Tata Motors. (Mahindra earlier offered a prize for robocar development in India.)
The testing regulations did not bother too many, though I am upset that they effectively forbid the testing of delivery robots like the ones we are making at Starship, because the test vehicles must have a human safety driver with a physical steering system. Requiring that driver makes sense for passenger cars but is impossible for a robot the size of a breadbox.
Needing a driver
The draft operating rules effectively forbid Google’s current plan, making it illegal to operate a vehicle without a licenced and specially certified driver on board and ready to take control. Google’s research led them to feel that having a transition between human driver and software is dangerous, and that the right choice is a vehicle with no controls for humans. Most car companies, on the other hand, are attempting to build “co-pilot” or “autopilot” systems in which the human still plays a fundamental role.
The state proposes banning Google style vehicles for now, and drafting regulations on them in the future. Unfortunately, once something is banned, it is remarkably difficult to un-ban it. That’s because nobody wants to be the regulator or politician who un-bans something that later causes harm that can be blamed on them. And these vehicles will cause harm, just less harm than the people currently driving are doing.
The law forbids unmanned operation, and requires the driver/operator to be “monitoring the safe operation of the vehicle at all times and be capable of taking over immediate control.” This sounds like it certainly forbids sleeping, and might even forbid engrossing activities like reading, working or watching movies.
Drivers must not just have a licence, they must have a certificate showing they are trained in operation of a robocar. On the surface, that sounds reasonable, especially since the hand-off has dangers which training could reduce. But in practice, it could mean a number of unintended things:
Rental or even borrowing of such vehicles becomes impossible without a lot of preparation and some paperwork by the person trying it out.
Out of state renters may face a particular problem as they can’t have California licences. (Interstate law may, bizarrely, let them get by without the certificate while Californians would be subject to this rule.)
Car sharing or delivered car services (like my “whistlecar” concept or Mercedes Car2Come) become difficult unless sharers get the certificate.
The operator is responsible for all traffic violations, even though several companies have said they will take responsibility. They can take financial responsibility, but can’t help you with points on your licence or criminal liability, rare as that is. People will be reluctant to assume that responsibility for things that are the fault of the software in the car they use, as they have little ability to judge that software.
With no robotaxis or unmanned operation, a large fraction of the public benefits of robocars are blocked. All that’s left is the safety benefit for car owners. This is not a minor thing, but it’s a small part of the whole game (and active safety systems can attain a fair chunk of it in non-robocars).
The state says it will write regulations for proper robocars, able to run unmanned. But it doesn’t say when those will arrive, and unfortunately, any promises about that will be dubious and non-binding. The state was very late with these regulations — which is actually perfectly understandable, since not even vendors know the final form of the technology, and it may well be late again. Unfortunately, there are political incentives for delay, perhaps indeterminate delay.
This means vendors will be uncertain. They may know that someday they can operate in California, but they can’t plan for it. With other states and countries around the world chomping at the bit to get vendors to move their operations, it will be difficult for companies to choose California, even though today most of them have.
People already in California will continue their R&D in California, because it’s expensive to move such things, and Silicon Valley retains its attractions as the high-tech capital of the world. But they will start making plans for first operation outside California, in places that have an assured timetable.
It will be less likely that somebody would move operations to California because of the uncertainty. Why start a project here — which in spite of its advantages is also the most expensive place to operate — without knowing when you can deploy here? And people want to deploy close to home if they have the option.
It might be that the car companies, whose prime focus is on co-pilot or autopilot systems today, may not be bothered by this uncertainty. In fact, it’s good for their simpler early goals because it slows the competition down. But most of them have also announced plans for real self-driving robocars where you can act just like a passenger. Their teams all want to build them. They might enjoy a breather, but in the end, they don’t want these regulations either.
And yes, it means that delivery robots won’t be able to go on the roads, and must stick to the sidewalks. That’s the primary plan at Starship today, but not the forever plan.
California should, after receiving comment, alter these regulations. They should allow unmanned vehicles which meet appropriate functional safety goals to operate, and they should have a real calendar date when this is going to happen. If they don’t, they won’t be helping to protect Californians. They will take California from being the envy of the world as the place that has attracted robocar development from all around the planet to just another contender. And that won’t just cost jobs, it will delay the deployment in California of a technology that will save the lives of Californians.
I don’t want to pretend that deploying full robocars is without risk. Quite the reverse, people will be hurt. But people are already being hurt, and the strategy of taking no risk is the wrong one.
This summer, I started wondering what you might do to build a small farming robot to manage a home garden. I then discovered the interesting Farmbot project, which has been working on this for much longer, and has done much of what I thought might be useful. So I offer kudos to them, but thought it might be worth discussing some of the reasons why this is interesting, and a few new ideas.
The rough idea is to use robotics to manage a modest garden. It could be outside or in a greenhouse, or perhaps eventually a vertical farm on a wall. The simplest way to do this is to have a track and a gantry to allow the robot head to move to any spot in the rectangle and then do gardening tasks — tilling, planting seeds, watering, weed killing, weeding, analysis and even perhaps harvesting.
Why do people have gardens? Some do it because they enjoy the task, or at least some portions of the task. Those folks may not be interested in a farming robot, though they might like one which does the tiresome tasks like weeding.
Others garden to save money on produce, particularly specialty produce which is organic and where they know all about how it was grown. Initially, the robot might be too expensive to allow you to save money unless you ignore the cost of the robot itself.
Perhaps most interesting is the ability to get a supply of superior produce that’s already delivered to your house. The produce can be quite superior to agribusiness produce found in grocery stores, because many of those plants have been bred for things like shelf life, how well they pack and transport, how good they look on the shelf, yield, ripeness out of season and many other factors. The problem is, every time you breed for one of these, you breed out other things, including the most important — flavour. People with no love of gardening as a hobby will still pay well for food that tastes better.
To meet that last (and richest) market, you want a design that requires as little owner effort as possible. The owner would lay down the robot and pour in some soil, but ideally do very little else other than insert modules and possibly harvest.
The Farmbot today has a seed planting tool and a watering tool. Let’s look at other functions a farm robot might have:
Because the robot knows very precisely where it put each seed, anything not in those locations is a weed. Knowing this offers various weeding strategies, including the ability to tackle each weed within hours of its sprouting. Weed killers could be applied with such precision that stronger-than-usual formulations might be acceptable. Mechanical weed destruction and removal is also possible. The system would also know when it has failed and needs to summon a human. Bosch makes a weed-killing robot for larger farms.
Simple hyperspectral cameras might eventually lead to super understanding of how plants are doing, and near-perfect estimation of ripeness, as well as amounts of feed and water to apply, again with full precision.
Insect pests could be spotted immediately, and some of them dealt with. It is not even out of the question they could be burned with lasers, which of course is super cool.
Animal pests (stealing the food) could be detected and harassed with motion, lights, sound or even that bug-killing laser. The robot could be a superb scarecrow. Of course, netting could also be used on the garden since the human rarely has to access it.
The soil could be tilled by the robot. Analysis of the soil may make more sense to do remotely but it could be a service.
The system could tell you exactly when to pick every plant for perfection, or what the best plant to pick is when you want something. It might even be able to harvest certain plants with the right attachment and put them in a basket for you to collect.
The robot could anticipate frosts, requesting the humans to put a cover over the garden and even applying heat.
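The precision-weeding idea above (anything not at a known seed location is presumed a weed) can be sketched in a few lines. This is a minimal illustration with made-up function names and a made-up tolerance radius, not Farmbot's actual code:

```python
import math

def classify_sprouts(seed_positions, sprout_positions, tolerance_cm=3.0):
    """Flag any detected sprout not near a known seed location as a weed.

    seed_positions, sprout_positions: lists of (x, y) in cm on the bed.
    Returns (crops, weeds). Names and tolerance are illustrative only.
    """
    crops, weeds = [], []
    for sx, sy in sprout_positions:
        near_seed = any(math.hypot(sx - px, sy - py) <= tolerance_cm
                        for px, py in seed_positions)
        (crops if near_seed else weeds).append((sx, sy))
    return crops, weeds

seeds = [(10, 10), (10, 30), (10, 50)]
sprouts = [(10.5, 29.2), (25, 25), (10, 50)]
crops, weeds = classify_sprouts(seeds, sprouts)
print(weeds)  # the sprout at (25, 25) is not near any seed, so it is a weed
```

In practice the camera and localization errors set the tolerance, but the principle stands: the map of what was planted does most of the work of weed identification.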
For the non-gardening gardener, you would just order cartridges online with seeds, nutrients or weed killer, plug them in and let it run. Then eat whatever is at the peak of flavour. The app could also arrange trading with neighbours — everybody likes being generous to neighbours with home produce. (Farmbot is open source but of course could make money from this business quite well.)
Over time, mass manufacturing might make this cheaper and more flexible. For example, eventually a free-roaming design could be possible, which would be much easier to install and could handle much larger plots of land. (It would need to go back to base to refill on water and electricity.) Knowing the garden so well (because it planted it), it would know where to put its wheels. It doesn’t matter how slow it is, so long as it’s quiet.
Vertical farming might be interesting. With a vertical farm on the wall, the robot might simply hang in front of the wall without even needing tracks, though it could not apply much force in that case.
Robots might even make practical something that started off silly — indoor farming with LED light sources. The idea of taking even solar panel energy and using it to shine lights indoors is silly compared to having a garden outdoors or having skylights, but people have slowly been making it more reasonable, using high-efficiency purple LEDs (no energy wasted on the green light plants don’t want). Robots might be able to do even better, shining or concentrating light precisely on the leaves of plants so that little energy is spent lighting anything else. I have not done the math, but if anything can make this work, such precision might do the job.
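As a crude first pass at that math: here is a back-of-envelope where every figure is assumed, and the "targeting gain" from robotic precision is pure speculation. It at least shows which variable dominates:

```python
# Back-of-envelope chain for solar-powered indoor LED growing.
# All figures are assumptions for illustration, not measured values.
pv_efficiency = 0.20    # panel converts sunlight to electricity
led_efficiency = 0.60   # modern red/blue LED, electricity to photons
targeting_gain = 3.0    # hypothetical: robot concentrates light onto leaves only

light_per_sun_watt = pv_efficiency * led_efficiency * targeting_gain
print(f"{light_per_sun_watt:.2f} W of useful light per W of sunlight")
# Under these assumptions the chain still loses to direct sun (factor < 1),
# so the targeting gain is the lever that would have to carry the idea.
```

If precise targeting could waste almost nothing lighting soil, paths and air, the effective gain could in principle be larger, which is exactly the open question.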
Another road trip has meant fewer posts — this trip included being in Paris on the night of Nov 13 but fortunately taking a train out a couple of hours before the shooting began, and I am now in South Africa on the way to Budapest — but a few recent items merit some comment.
Almost every newspaper in the world reported the story of how a motorcycle cop pulled over one of Google’s 3rd generation test cars, the two-seaters, and many reports incorrectly claimed the car was given a ticket for going too slow, or that there was “no driver to ticket.” Today, Google’s cars always have a safety driver (with a steering wheel) who is responsible for the car in case it does something unexpected or enters an especially risky situation. So had there been a ticket to write, there would have been a driver in the car to receive it, just as there is if you get a ticket for speeding while using your cruise control.
Google’s prototype is what is known as a “Neighbourhood Electric Vehicle” or NEV. There are special NEV rules in place that make such vehicles much less subject to the complex web of regulations required for a general purpose vehicle. They need to be electric, must not travel on roads with a speed limit over 35mph and they must themselves not be capable of going more than 25mph. The Google car was doing 24mph when the officer asked the safety driver to pull over, so there was nothing to ticket. Of course, that does not mean an officer can’t get confused and need an explanation of the law — even they don’t know all of them.
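The NEV criteria described above are simple enough to state as a tiny check. This is purely illustrative (my own function name and structure, and obviously not legal advice):

```python
def qualifies_as_nev(is_electric, top_speed_capability_mph, road_speed_limit_mph):
    """Rough summary of the NEV rules quoted above: electric drive,
    a vehicle incapable of exceeding 25 mph, and operation restricted
    to roads with a posted limit of 35 mph or less."""
    return (is_electric
            and top_speed_capability_mph <= 25
            and road_speed_limit_mph <= 35)

# Google's prototype: electric, capped at 25 mph, on a 35 mph road.
print(qualifies_as_nev(True, 25, 35))   # True
print(qualifies_as_nev(True, 25, 45))   # False: the road is too fast
```

Note the distinction the officer may have missed: the 25 mph figure is a capability cap on the vehicle, so a car doing 24 mph is fully within the rules.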
The NEV regulations are great for testing, though there is indeed an issue around how the earliest robocars will probably want to go a little slow, because safety really is the top priority on all teams I know. As such, they may go as slowly as the law allows, and they may indeed annoy other drivers when doing that. This should be a temporary phase but could create problems while cars learn to go faster. I have suggested in the past that cars wanting to go slow might actually notice anybody coming up behind them and pull off the road, pausing briefly in driveways or other open spots, so that the drivers coming up behind never have to even brake. A well behaved unmanned vehicle might go slowly but not present a burden to hurried humans.
Ford may also avoid standby supervision
Recent reports suggest that Ford, like Google, may have concluded that there is not an evolutionary path from ADAS to full self driving, in particular, the so-called “level 3” which I call standby supervision, where a human driver can be called on with about 10 seconds notice (but not anything shorter) to resolve live driving problems or to take the wheel when the car enters a zone it can’t drive. This transition may just be too dangerous, Google has said, along with many others.
Cheaper LIDAR etc.
Noted without much comment — Quanergy, on whose advisory board I sit, has announced progress on its plans for an inexpensive solid state LIDAR, and plans to ship the first on schedule, in 2016. This sub-$1000 LIDAR keeps us on a path to even cheaper LIDAR, which should silence the people who keep saying they want to build robocars without LIDAR — I am looking at you, Elon Musk. Nobody will make their first full robocar less safe just to save a few hundred dollars.
Also related to Starship, another company I advise, is the arrival of not one but two somewhat similar startups building small delivery robots. “Dispatch Network” involves U.S. roboticists who participated in a China-based hardware accelerator and have a basic prototype, larger than the Starship robot. “Sidewalk,” a Lithuanian company, also has a prototype model and a deal with DHL to do joint research on last-mile robots.
I’m pleased to announce today the unveiling of a new self-driving vehicle company with which I am involved,
not building self-driving cars, but instead small delivery robots which are going to change the face of
retailing and last-mile delivery and logistics.
Starship Technologies comes out of Europe, created by two of the founders of Skype, Janus Friis and Ahti Heinla, who
is CEO. The mission is similar to the vision I laid out in 2007 for the Deliverbot —
the self-driving box that can get you anything in 30 minutes for under a dollar.
Starship is still in early stages, but will be conducting a pilot project next year in the UK, and another in the
USA shortly thereafter. Customers will be able to place online orders and have a robot come to their home immediately
or on their schedule.
Why is this possible well before full unmanned self-driving cars can go into public use? There are all sorts of reasons:
The boxes are not in a super hurry:
They will go slowly and cautiously
They don’t mind detours, and can take the safest rather than shortest route to you
It’s not a big deal if they have to pause if they encounter children or anything confusing or risky, or need to wait for a remote operator to solve a problem
They will travel on the sidewalks, rather than the roads (already legal in many places but work is needed in others)
They will be slow and light, so that if something goes seriously wrong and they hit you, they won’t injure you
They won’t hit you though, because they can come to a full stop in under a foot
You don’t need crumple zones, airbags or other passenger safety features for cargo, making them simple and inexpensive
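The "full stop in under a foot" claim is plausible with simple kinematics. Here is a hedged back-of-envelope, assuming a walking-pace robot and roughly 0.8 g of braking on good grip (both my assumptions, not Starship's numbers):

```python
def stopping_distance_m(speed_mps, decel_mps2):
    # Constant-deceleration kinematics: v^2 = 2*a*d  ->  d = v^2 / (2*a)
    return speed_mps ** 2 / (2 * decel_mps2)

G = 9.81
speed = 4 * 0.44704                      # ~4 mph walking pace, in m/s (assumed)
d = stopping_distance_m(speed, 0.8 * G)  # assume ~0.8 g braking for a light robot
print(round(d / 0.3048, 2))              # distance in feet, roughly 0.67
```

A small, light robot can realistically brake this hard without skidding or toppling, which a car at highway speed cannot, and that is much of why the safety story is so different.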
How big is the last mile? It’s huge. It’s not just what today’s delivery companies do. Most deliveries are
actually made by customers who run out to stores to get stuff. The Starship robot will bring you things in less
time than a round-trip shopping trip would take, for less money, and with vastly less energy, pollution, traffic
congestion and parking. It’s a win for the store, the customer and for society.
It’s a really big win for those with disabilities or difficulty moving. The elderly are going to be able to live
in their own homes with greater independence even if shopping has become such a chore they were contemplating a move.
The robots will eventually create an “internet of parcels” (I guess the
term “internet of things” is already in use) where physical goods can move
around cities with an ease surpassed only by data. Not only will you be
able to buy anything, you’ll be able to rent things on short notice too,
or borrow them from your neighbours. The sharing economy can be enabled
and the meaning of ownership may change.
The convenience of robot delivery will surprise people. A common question I get asked is
“What does the robot do if you’re not home when it delivers?” and my answer is, “why would
you want the robot to deliver when you’re not home?” Regular delivery runs on a driver’s
schedule; robotic delivery will run on yours. Robots don’t mind waiting either, so you
could ask a shop to put 5 pairs of shoes into a robot, and at home you could try them on and
put 4 pairs back in the robot and keep the ones you like.
I also expect interesting changes in prepared food. It will be possible to run a
“restaurant” inexpensively in a private kitchen and get ingredients and deliver
dishes quickly and cheaply with no wastage. A family might even order different
dishes from different locations to create a meal.
Nothing is 100% good for everybody — there will be disruption in the retail industry, and
retailers who exist primarily so you can go get things will have trouble competing
if they don’t embrace this model, but other retailers who are suffering from
competition from the online stores may find they can now dominate with fast local delivery.
Of course, there might be competition in the air. Our students at Singularity University
were among the first to work on drone delivery with the Matternet project
which is beginning a trial delivering for the post office on the steep slopes of Switzerland.
Both methods face legal challenges, and both have their advantages. Drones will be faster and
can cover unusual terrain, while ground robots can carry more with less energy and have an easier
time landing. :-) I suspect people will tolerate small robots on their sidewalks more than
drones with heavy packages over their heads, but we’ll see.
Note that I’m a special advisor to Starship on both technology and business, but this post is
written with my own voice, and doesn’t speak on behalf of the company.
In the buzz over the Tesla autopilot update, a lot of commentary has appeared comparing this Autopilot with Google’s car effort and other efforts and what I would call a “real” robocar — one that can operate unmanned or with a passenger paying no attention to the road. We’ve seen claims that “Tesla has beaten Google to the punch” and other similar errors. While the Tesla release is a worthwhile step forward, the two should not be confused as all that similar.
Tesla’s autopilot isn’t even particularly new. Several car makers have had similar products in their labs for several years, and some have released them to the public, at first in a “traffic jam assist” mode, but reportedly in full highway cruise mode outside the USA. The first companies to announce such products were Cadillac with “Super Cruise” and VW with its “Temporary Autopilot,” but both were delayed until much later.
Remarkably, Honda showed off a car doing this sort of basic autopilot (without lane change) ten years ago, sold only in the UK. They later decided to stop offering it, however.
That this was actually promoted as an active product ten years ago will give you some clue it’s very different from the bigger efforts.
These cruise products require constant human supervision. That goes back to cruise control itself. With regular cruise control, you could take your feet off the pedals, but might have to intervene fairly often either by using the speed adjust buttons or full control. Interventions could be several times a minute. Later, “Adaptive Cruise Control” arose which still required you to steer and fully supervise, but would only require intervention on the pedals rarely on the highway. A few times an hour might be acceptable.
The new autopilot systems allow you to take your hands off the wheel but demand full attention. Users report needing to intervene rarely on some highways, but frequently on other roads. Once again, the product is useful: if you only intervene once an hour, it might make your drive more relaxing.
Now look at what a car that drives without supervision has to do. Human drivers have an accident around every 2,500 to 6,000 hours, depending on what figures we believe. That’s a minor accident, and it comes after around 10 to 20 years of driving. A fatal accident takes place every 2,000,000 hours of driving — around 10,000 years for the typical driver. (It’s very good that it’s much more than a lifetime.)
If a full robocar needs an intervention, that means it’s going to have an accident, because there is nobody there to intervene. Just like humans, most of the errors that would cause an accident are minor. Running off the road. Fender benders. Not every mistake that could cause a crash or a fatality causes one. Indeed, humans make mistakes that might cause a fatality far more often than every 2,000,000 hours, because we “get away” with many of them.
Even so, the difference is staggering. A cruise autopilot like Tesla and the others have made is a workable product if you have to correct it a few times an hour. A full robocar product is only workable if you would need to correct it in decades or even lifetimes of driving. This is not a difference of degree, it is a difference of kind. It is why there is probably not an evolutionary path from the cruise/autopilot systems based on existing ADAS technologies to a real robocar. Doing many thousands of times better will not be done by incremental improvement. It almost surely requires a radically different approach, and probably very different sensors.
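The "difference of kind" can be made concrete with the figures above. Assuming a supervised system that needs a couple of corrections per hour (my assumption for illustration), the required improvement looks like this:

```python
def improvement_factor(hours_per_event, interventions_per_hour):
    """How many times fewer interventions a full robocar needs than a
    supervised cruise system, using the accident-rate figures quoted above."""
    return hours_per_event * interventions_per_hour

# A workable supervised autopilot might need ~2 corrections/hour (assumed).
for label, hours in [("minor accident (low)", 2_500),
                     ("minor accident (high)", 6_000),
                     ("fatal accident", 2_000_000)]:
    print(f"{label}: {improvement_factor(hours, 2):,}x fewer interventions needed")
```

Even against the minor-accident rate the gap is thousands-fold, and against the fatal rate it is millions-fold, which is why incremental polishing of an ADAS product cannot close it.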
To top it all off, a full robocar doesn’t just need to be this good, it needs a lot of other features and capabilities once you imagine it runs unmanned, with no human inside to help it at all.
The mistaken belief in an evolutionary path also explains why some people imagine robocars are many decades away. If you wanted evolutionary approaches to take you to 100,000x better, you would expect to wait a long time. When an entirely different approach is required, what you learn from the old approach doesn’t help you predict how the other approaches — including unknown ones — will do.
It does teach you something. By being on the road, Tesla will encounter all sorts of interesting situations they didn’t expect. They will use this data to train new generations of software that do better. They will learn things that help them make the revolutionary unmanned product they hope to build in the 2020s. This is a good thing. Google and others have also been out learning that, and soon more teams will.
Last night, one day early, I attended Stanford’s unveiling of their newest research vehicle for self-driving. In order to do experiments with drifting (where you let the rear wheels skid freely) they heavily modified an old Delorean.
They managed to get Jamie Hyneman of Mythbusters to host the event so there was a good crowd. He asked “Why a Delorean?” and instead of saying the obvious line:
“The way I see it, if you’re going to build a self-driving drifting car, why not do it with some style?”
They got into the actual technical reasons for it, even though they called the car Marty and were revealing it one day before “Back to the Future Day” — the day in the future that Marty travels to in the second movie, Oct 21, 2015.
Back to the present: this car, with rear wheel drive and a central engine mount, is not a great car to drive, so they removed the engine and replaced it with dual electric motors from Renovo. This creates a car able to drive the two rear wheels independently. That offers the software the ability to spin the wheels at different rates, and do things no human driver could ever do, including special types of drifting. They have already managed to get the car to turn tighter doughnuts (circles) than a human could.
Drifting is usually done for show — it rarely will help you in a race. Hyneman actually showed that in one of the episodes of his show. Stanford’s team wants to answer whether the robot’s ability to do inhuman driving might offer more “outs” in a dangerous situation, like trying to avoid a collision. Might a car twist its wheels (perhaps some day all of its wheels) and spin them at different speeds to make the car take a path that avoids an accident?
In effect, you would be trying to make a car that can drive like a Hollywood stunt car. In movies, stunt drivers often do fairly improbable and impossible moves with cars to avoid accidents. A classic Hollywood scene involves a car tilting up on two wheels to get through a tiny gap it could not drive through. (The Stanford team did not propose this, and it’s a pretty hard thing to do, but it’s one way to envision the general idea.)
Up to now, research on accident avoidance has been fairly low-key. After all, the main task is to drive safely in the lane you are supposed to drive in. That’s plenty of work and the result of almost all of the focus. Eventually, teams will focus on what to do when things go wrong, but for now the prime priority is to make sure things don’t go wrong. Someday, they may even focus on the infamous trolley problem.
Normally, drifting is a bad idea. It means a loss of control and a loss of power. Normally, the connection of the tires and the road is the sole tool you have to drive and control a car. You would only give it up if you absolutely have to. Perhaps, the research will show, there are times you might want to.
Generally, drift or not, robots should become very good at avoiding accidents. They will know the physics of the tires perfectly, and they will calculate without panic, and will be able to drive with full confidence missing things by very thin margins while staying safe. While a human could not navigate a space only a few inches wider than the car with confidence, a robot could. A robot will always use the optimal combination of steering and braking, which humans need a lot of training to do. (Your tires can give you braking force or steering force but you must reduce one to get more of the other, so often the best strategy is to brake first and then steer, though the human instinct is to do both.)
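The braking-versus-steering trade-off described above is commonly modeled as a "friction circle": the tire's total force is capped by grip, so longitudinal and lateral demands trade off against each other. A minimal sketch with assumed grip numbers (the model and values are illustrative, not from the Stanford work):

```python
import math

def max_lateral_accel(mu, g, braking_accel):
    """Friction-circle model: total tire acceleration is capped at mu*g,
    so braking (ax) and steering (ay) trade off: sqrt(ax^2 + ay^2) <= mu*g.
    Returns the lateral acceleration still available while braking."""
    cap = mu * g
    if braking_accel >= cap:
        return 0.0               # all grip spent on braking, no steering left
    return math.sqrt(cap ** 2 - braking_accel ** 2)

MU, G = 0.9, 9.81                # dry asphalt grip coefficient, assumed
for brake_fraction in (0.0, 0.5, 0.9, 1.0):
    ay = max_lateral_accel(MU, G, brake_fraction * MU * G)
    print(f"braking at {brake_fraction:.0%} of grip -> "
          f"{ay / G:.2f} g left for steering")
```

Because the curve is circular, braking at half of grip still leaves about 87% of the lateral force available, which is why "brake hard first, then release and steer" is often the optimal sequence a robot can compute and a human rarely executes well.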
The car is not super autonomous. It is meant to do test algorithms on private open spaces. It won’t be avoiding obstacles or plotting lanes on a highway. It will be testing how well a computer can get the most from the tires.
Tesla’s offering is not too different from what many other automakers have shown in what is sometimes called “highway cruise” — a combination of lanekeeping and adaptive cruise control, both of which have been around for a while. The vehicle reportedly insists you keep your hands touching the wheel, though some owners report you can take them off. Of particular note is the addition of lane-changing, which you can do by flicking the turn signal. You must check behind you: if you attempt this when the next lane is moving fast and somebody is coming up on you quickly, you could cause a real problem. The vehicle won’t do the lane change if its blind spot detector sees an adjacent car, but it won’t stop you from cutting off somebody who is not in that zone and gaining on you.
I am curious about the claim that the system will “overtake” another car — I have not tried it out yet, but I presume since it should not change lanes on its own, user input will be required to command driving around another car. While you can’t safely make a lane change into a lane where you see no cars, you actually can make the change after passing a car because you know where that car is and that nothing is going to rush at you in the lane.
The release of this and similar products will test a supposition I made earlier, that these products may not be as exciting as hoped. Worse, some drivers may find it a bit frightening to trust control to a vehicle knowing that from time to time they will need to grab the wheel. I have felt that myself when driving cars with adaptive cruise control, wondering whether the ACC has seen the car stopped up ahead of me and will stop for it, since I see it before the radar or camera system does.
On the other hand, many people have reported that even though they must supervise, highway cruise can make the trip more relaxing, just as basic cruise control does. The trick is to get your brain into putting focus on its new sole task — supervisor — so that the rest of your brain can relax. With cruise control, you are reasonably able to have one part of your brain worry about steering, and relax the part that was going to worry about speed. So this may happen here.
More of a concern are the people who will trust it too much. Many of us already do crazy things, texting or playing with things on our phones even when doing fully manual driving. It’s a given this will happen here. It will be safer to take your eyes from the road for a longer period than you should in a manual car, but people may push that too far. Yes, if you take your eyes off the road and the car ahead of you stops suddenly, the car will very probably brake for you — as will any car with forward collision avoidance. But not 100% of the time, and that’s the rub if you trust it to do so.
I may have more to say after taking a ride in one of these. I think the first really interesting product won’t be this, but a more full-auto traffic jam assist, which will drive for you in a traffic jam and allow you to take your attention off the road entirely to read or work on your phone or computer. At the low speeds of a traffic jam, boxed in by other cars, the driving problem is much simpler. You don’t even need to see the lanes, just follow the cars. If the car in front of you zooms ahead, the traffic jam is over, and the driver needs to manually speed the car up. Done at low speeds, that transition can be fairly safe, and in addition, if the driver does nothing and the car slows to a halt, that is not unsafe — just annoying — in a breaking-up traffic jam. The main remaining problem in traffic jams is what to do when the cars in front of you change lanes. Are they moving just to get into another lane, or are all the lanes shifting due to restriping (common in jams) or an obstacle? You need to get that right, and let the driver know about it, but you can’t buzz the driver every time somebody does a lane change or the product is not useful.
Several car vendors (and probably Tesla) have been working on this, and could release it quite soon, if they get the guts and legal approval to do so.
Tesla may get more attention for the way they delivered these new features, as an over the air software update. Tesla has now done this several times, and promised it would do this, but from the viewpoint of traditional car makers, it is incredibly radical. In the modern computerized car, like the phone, regular software updates are just part of the system. This is going to be mostly positive, but will create some issues when the time comes for a “recall” of some electronic function of a car. Today when that happens, the car company mails all the owners and says, “Please come in to your dealer to get new firmware for your ECU to fix the problem.” From that point, it is the owner’s responsibility to get the update done. Tesla can send a fix over the air, but that means it can’t pass responsibility on to the owners. Some day, a company is going to find a problem in their self-drive system that clearly should be fixed, but won’t have the fix ready to go for weeks. It will face the question of what to do in those intervening weeks. Will it be forced to turn off the system it now knows to have a flaw? Or can it tell owners, “We know we have this flaw, if you want to drive, it’s your responsibility now.”
During a very busy September of travel, I let a number of important stories fall through the cracks. The volume of mainstream press articles on Robocars is immense. Most are rehashes of things you have already seen here, but if you want the fastest breaking news, there are now some sources that focus on that. Here I will report the important news with analysis.
Earlier we learned that Google restructured itself and put the car project in the new Alphabet Holding company. Google also hired John Krafcik to lead the project. Krafcik is a car industry veteran from Hyundai, Ford and Truecar but what’s interesting is he’s been announced as “CEO,” which strongly implies that the project will be spun out as a subsidiary as I suspected, with freedom to be its own company. Chris Urmson has led the project since Sebastian Thrun moved on to Udacity, but the bulk of the work has been engineering, which Chris will continue to lead. This is a good move, one person probably should not do both. (Chris did do a great job on the recent 60 minutes, though.)
Google continues to state it does not wish to be a carmaker, and will work with existing carmakers.
Mercedes and BMW not selling cars
Perhaps the biggest news comes in announcements from both BMW and Mercedes that they plan to investigate selling rides instead of cars. They both own large car-sharing systems (DriveNow and Car2Go respectively) which rent cars one way by the minute, but while they are large for the industry, they are tiny portions of these companies. However, the idea that these companies, with a century of being about selling cars and nameplate luxury to consumers who drive away in them, can think seriously of being like Uber is a sign we’re in the 21st century. BMW and Mercedes are not idiots — they have always known this was a potential business plan. The hard part at a big company is having the guts and leadership to turn the company 90 degrees if it needs to be done.
Mercedes’ prototype bears the name “Car2Come” — a car that delivers itself to you, and then you drive it. They understand that name doesn’t really sound that great in the US market. :-) Longtime readers will recognize this as similar to what I called a whistlecar in 2007.
Apple clues keep showing up
Apple refuses to say anything, but little clues keep emerging, including records of Apple’s request to use an old military base converted into a robocar test track in Northern California, and of meetings with the DMV’s robocar staff. Other leaks strongly suggest that Apple’s project Titan is building an electric car (due in 2019) and that making it self-driving is on the table, but not the #1 priority.
After Uber raided some of the top people from CMU’s robotics labs (it should be noted that Chris Urmson and Sebastian Thrun also came out of that lab), Uber has been donating money back to fund more research inside the school, and also at the University of Arizona.
Uber remains one of the biggest game-changers out there. Aside from their money and unconventional thinking, and of course the world’s #1 brand in selling rides, Uber also has the easiest path to collecting vast volumes of driving data at low cost, and data are important.
Toyota and Honda make announcements
Japanese carmakers have been surprisingly far behind, except for Nissan, but now Honda has finally made some serious steps, getting permits for California, and Toyota has announced new projects and joined the popular 2020 goal, saying that Toyota cars will be driving the public around at the 2020 Olympics in Tokyo.
Some other companies have also joined the game, such as Citroën, which had a car drive to the recent ITS World Congress in Bordeaux.
In the Shuttle business, Navia is back as the Navya, and the new vehicle is more enclosed, as shown in this video. Many other private campus shuttle projects are heating up around the world, including Citymobil2. Easymile (also from France) is setting up a pilot project in the Bay Area and shuttle projects are underway in many labs and towns around the world. (Disclaimer: I am discussing involvement in one of them which I will talk about later.)
The news goes on
The volume of news stories shows why Gartner put robocars so high on their hype cycle. I have not covered a lot of other news, including:
New states, provinces and countries passing new laws or enabling testing. Even my own home province of Ontario.
The creation of new test tracks and facilities — these are useful but as news they are mostly PR
Last week, I commented on the VW scandal and asked the question we have all wondered, “what the hell were they thinking?” Elements of an answer are starting to emerge, and they are very believable and teach us interesting lessons, if true. That’s because things like this are rarely fully the fault of a small group of very evil people; more often they are the result of a broad situation that pushed ordinary (but unethical) people well over the ethical line. This we must understand, because frankly, it can happen to almost anybody.
The ingredients, in this model, are:
A hard driving culture of expected high performance, and doing what others thought was difficult or impossible.
Promising the company you will deliver a hotly needed product in that culture.
Realizing too late that you can’t deliver it.
Panic, leading to cheating as the only solution in which you survive (at least for a while.)
There’s no question that VW has a culture like that. Many successful companies do; some even attribute their excellence to it. Here’s a quote from the 90s from VW’s leader at the time, talking about his desire for a hot new car line, and what would happen if his team told him that they could not deliver it:
“Then I will tell them they are all fired and I will bring in a new team,” Piech, the grandson of Ferdinand Porsche, founder of both Porsche and Volkswagen, declared forcefully. “And if they tell me they can’t do it, I will fire them, too.”
Now we add a few more interesting ingredients, special to this case:
European emissions standards and tests are terrible, and allowed diesel to grow very strong in Europe, and strong for VW in particular
VW wanted to duplicate that success in the USA, which has much stronger emissions standards and tests
The team is asked to develop an engine that can deliver power and fuel economy for the US and other markets, and do it while meeting the emissions standards. The team (or its leader) says “yes,” instead of saying, “That’s really, really hard.”
They get to work, and as has happened many times in many companies, they keep saying they are on track. Plans are made. Tons of new car models will depend on this engine. Massive marketing and production plans are made. Billions are bet.
And then it unravels
Not too many months before ship date, it is reported, the team working on the engine — it is not yet known precisely who — finally comes to a realization. They can’t deliver. They certainly can’t deliver on time, and possibly they can never deliver within the cost budget they have been given.
Now we see the situation in which ordinary people might be pushed over the line. If they don’t deliver, the company has few choices. They might be able to put in a much more expensive engine, with all the cost such a switch would entail, and price their cars much higher than they hoped, delivering them late. They could cancel all the many car models which were depending on this engine, costing billions. They could release a wimpy car that won’t sell very well. In any of these cases, they are all fired, and their careers in the industry are probably over.
Or they can cheat and hope they won’t get caught. They can be the heroes who delivered the magic engine, and get bonuses and rewards. 95% of the time, they don’t get caught, and even if they are caught, the outcome is worse, but not, in their minds, a lot worse than what they are already facing. So they pretend they built the magic engine, and program it to fake that on the tests.
In Canada, there are 3 (and sometimes more) strong parties. This is true in much of the world; in fact the two-party USA is somewhat unusual. However, with “plurality” style elections, where the candidate with the most votes takes the seat even though they might have well under a majority, you can get a serious difference between the popular vote and the composition of the house. Americans see the same in their Electoral college and in gerrymandered districts.
The author, who wishes to defeat the incumbent Conservative party, proposes a way for the other two parties (Liberals and New Democrats) to join forces and avoid vote splitting. The Liberals and NDP are competitors, but have much more affinity for one another than they do for the Conservatives. They are both left-of-centre. This collaboration could be done at a national party level or at the grass roots level, though it would be much harder there.
Often in parliaments, you not only get splitting within the race for each seat, you get a house where no party has a majority. When that happens, one party — usually the largest — strikes a deal with another party to form a coalition that allows them to govern. Sometimes the coalition involves bitter enemies. They cooperate because the small party gets some concessions, and some of their agenda is passed into law, even though far more of the dominant party’s agenda gets passed. Otherwise, the small party knows it will get nothing.
Among the most common questions in mainstream press articles about robocars, near the top is, “Who is going to be liable in a crash?” Writers always ask it but never answer it. I have often given the joking answer by changing the question to “Who gets sued?” and saying, “In the USA, that’s easy: everybody will get sued.”
But in reality, in spite of all the writing that this is a hard and central question, the long term answer has always been obvious. If the software/hardware in a car is responsible for the crash (i.e. caused the vehicle to do something like depart its right-of-way) then it’s pretty obvious that the vendor of that car will be liable, or perhaps some proxy for the vendor like a taxi fleet operator.
The main reason this has sat as an open question for so long is that any lawyer will tell you never to admit in advance that you should be liable for something. From a lawyer’s standpoint, that can never do anything but come back to haunt you later. There’s no upside, and a big downside, so they tell clients not to do it.
It is not just a legal decision. After all, customers are not going to want to buy or even ride in cars if the rule is, “If this car crashes because of our bug, then you (or your insurance) will be liable, and demerit points or even criminal charges might go to you.” Early adopters might accept that but it’s not a workable long-term policy. If the question of points and rare criminal charges could be eliminated, we could see a workable system where the passenger is liable and has insurance to fully cover it, but deep down that’s a silly system; again something only for the early days.
Even if you could get such a system in place with passenger-insurance, the reality is the vendor would still get sued. Even a great policy and indemnification from the passenger would not prevent plaintiff’s lawyers from wanting to go after the deep pocketed vendor. They would look for the hope of negligence (or in their dreams, VW style fraud) to get juicy damages. Even if not directly liable, the vendor would pay more in legal costs for some cases than the cost of the accident.
As I have written before, in today’s world, car accident costs are not paid by individuals or even companies. If I’m liable, my insurance company pays, and every policyholder shares the cost in their premiums. If my car has a defect, the car company pays, but builds in a share of that cost or insurance against it into the price of every car — once again the public shares the cost. This will not change in the world of robocars, and fighting over liability is really just fighting over who the money will flow through, and who will get the burdens and benefits of control of the legal strategy.
The saner approach has the vendor responsible — at least while the car was driving itself — with the vendor self-insuring, or getting reinsurance or product liability insurance, to cover the cost. The cost we currently know very, very well — rooms of actuaries at every auto insurance company study it all day — and we can handle it. (We don’t know the cost of the early, special lawsuits, which will be unlike typical car crash cases.)
If the world is rational, the total number of accidents and their severity goes way down, and that cost goes way down with it. The world may not be rational, but ideally this new lower cost is built into the cost of the ride or the car, and we all pay less. Hooray.
For cars that people drive some of the time, traditional insurance will do the job, but it should be billed by the mile — called PAYD or Pay-as-you-drive.
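To make the PAYD idea concrete, here is a minimal sketch of per-mile billing. The function name and all the rates are made up for illustration; they are not any insurer's actual pricing model.

```python
# Minimal sketch of Pay-As-You-Drive (PAYD) billing: a small fixed monthly
# charge (theft, fire, parked-car risk) plus a per-mile charge covering crash
# risk, which scales with how much you actually drive. Rates are invented.

def payd_premium(miles_driven: float,
                 base_monthly: float = 10.0,    # hypothetical fixed charge
                 rate_per_mile: float = 0.06    # hypothetical per-mile rate
                 ) -> float:
    """Monthly premium under a per-mile policy (illustrative numbers only)."""
    return base_monthly + rate_per_mile * miles_driven

# A light-driving month vs. a heavy-driving month:
print(payd_premium(500))    # 10 + 0.06 * 500  = 40.0
print(payd_premium(1500))   # 10 + 0.06 * 1500 = 100.0
```

The point of the structure is that someone whose robocar does most of the family driving pays far less than a daily long-distance commuter, instead of both paying the same flat annual premium.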
Yesterday I attended the “Silicon Valley reinvents the wheel” conference by the Western Automotive Journalists, which had a variety of talks and demonstrations of new car technology.
Now that robocars have hit the top of the “Gartner Hype Cycle” for 2015, everybody is piling on, hoping to figure out what robocars mean for their industry. And of course, there is a great deal of good, though not for several industries.
Let me break down some potential misconceptions if my predictions are true:
The dashboard almost vanishes
The dashboard of the modern car is amazing and expensive. Multiple digital screens and lots of buttons and interfaces on the wheel, the central stack and beyond. Fancy navigation systems, audio and infotainment systems, mobile apps, phone integration, car information and much more can be found there. There have been experiments with gesture and speech based controls, concierge services and fancy experimental controls. There are video screens in the headrests for the rear passengers. In recent years, specialized offerings like Ford Sync, GM OnStar and many others have become differentiating factors in cars, and there’s a lot of money in that dashboard.
This started changing a bit as car companies came to accept the dominance of the mobile phone in the driver’s life. People stopped wanting things like navigation and music from their car. They had better versions of those things in their phone, and they knew how to use them and had customized them. This year we are seeing deployment of “Android Auto” and “Apple CarPlay,” which connect your phone to the car’s dashboard screen and let you see and control a very limited number of apps there. The car makers had to be dragged to this kicking and screaming, but frankly today’s offerings from both Google and Apple are fairly poor in this department. For example, you can only run the special approved and modified apps. If you like to navigate with Waze instead of Google Maps (even though Waze is a Google product) you can’t — your phone is locked out when it is running Android Auto.
All of this is still based on the idea that the driver must put all focus on driving. You can’t have a complex UI where you look at the screen for more than a glance, and you should not distract the driver.
The Mercedes F015 concept car shown here is one of the first automaker explorations of a car where it’s OK to distract the driver. On the doors and walls of this car, you can see large touchscreens with concept apps on them.
In contrast, Google’s new prototype 2-seater car barely has a dashboard at all. Their answer shows more wisdom, I think. Your phone and tablet are always going to be your preferred choice for mobile computer interactions. Access to the internet, music and entertainment will go through them. The phone is updated every 2 years, or even less, and will always have superior hardware and services, and more effort goes into its design and the design of its systems and apps than will go into a car system. But even if the car system is fantastic, 2 years later it will be behind the phone.
As such, the phone is even how you give commands to the car, such as what destination to drive to. And we all know it’s vastly easier to enter destinations on phones than in any car nav system we’ve ever seen.
The full-auto robocar becomes more like a living space than a car. More like your TV room or your office. It is those places where we might seek clues as to the interior of the car. My living spaces do not have touchscreens on all the walls, for example, so it’s not too likely my car will either — or if they become useful, both will have them.
In my office I do have a desk and a big screen, along with better user input devices (full keyboard and trackball) as well as more computing power. These things I will want in my car, but of course they should personalize to me, probably using my phone as a gateway to that information. In my TV room I do have a large screen, and that will probably show up in the car, but it probably won’t be a touchscreen any more than my TV is. I will control it from my phone.
Car makers agreed that they should not attempt to pioneer new forms of user interface, and that this is why experiments in gesture control and other novel UIs have not done well. Drivers don’t want to learn entirely new styles of computer interaction when sitting in a car. They want to use the forms they already know. New forms should be pioneered in the general computing world, and moved into the car.
With so much dependence on the phone, the car will need a small and simple tethered phone so that if your phone is not present for some reason, you can still do all the basic functions. Of course there will be power so running out of battery is not an issue, other than for unlocking and summoning the car.
We also have to realize that long ago, car dashboards had barely anything on them. Even emergency driving (when the self-drive system has failed, and you plug in handlebars or a joystick) hardly needs anything more than a speedometer, and frankly you can drive fine with other traffic even without that.
It should be noted that the phone is not a secure device. As such, it won’t send much to the self-driving system. It will receive status information from it, but only give very limited commands. Indeed, it is quite possible that your phone might control your car through the cloud, sending commands to a central server which then talks to the car. A more simplified interface must exist for cloud dead zones, with all its traffic highly scrutinized. Robocars will be able to drive without the cloud, but it will be the exception, not the rule, and it won’t be done for very long or far.
A lot of the event dealt with audio. Audio makers are making good strides in producing fantastic car audio, and doing it at high prices. This trend started when it became clear that the car was the primary place to listen to music for many people, though this has decreased with the rise of the smartphone and other good music players.
People will continue to want good audio systems in their cars, but they will be offered a new and cheaper option thanks to robocars, namely quality noise-canceling headphones with subwoofer seats. Today, drivers are not allowed to wear noise-isolating headphones because they must be able to hear traffic sounds and sirens. In the future, drivers might like to take themselves away from road noise to get good music. Aside from being cheaper, this solution allows all occupants of a car to have different music (or video soundtracks) if they wish.
Last weekend I tried out a new product called a SubPac, a small seat cushion which emits strong subwoofer bass directly into your body, providing a fair bit of the feeling of standing in front of giant dance club speakers. This unit is $380 today but will come down in price, and soon should be a modest extra cost in car seats. (I am also interested in backpack versions, which will eventually make silent disco more acceptable to those who crave mind-numbing bass.)
In the other direction, the freedom of design that robocars provide means that larger vehicles will become interesting spaces, and you might come to view the car parked in your driveway as your music listening room, or even home theatre and video game room, and invest money in that (if you own your robocar).
Results from a design competition at Academy of Art University of San Francisco were presented. The designs were interesting, in line with many other futuristic car designs I have seen, though larger and with more lounging. Sadly, the focus was on large vehicles. The reality, I think, is that most new vehicles will be small 1-2 person vehicles, and I would like to see more radical design ideas there. Around 80% of urban trips are solo, and we have a tremendous opportunity to design comfortable higher end solo vehicles which will also be very efficient in terms of both energy and road space occupancy. Here’s where we want to see adjustable comfortable chairs, fold out desks and screens and other techniques to give us the things from our offices and homes in a space just 1.5m wide.
The concept of infrastructure changes was touched on. Regular readers here will know I instead favour minimal infrastructure change and the idea of “virtual infrastructure,” where as much as possible is done at the software level. “Smart cars on stupid roads.” In spite of this, cities and agencies commonly ask what they should do to hasten the robocar, and while I love the sentiment, the reality is that the answers are minimal.
All of this takes us further towards the conclusion that some models of robocar in the future will be incredibly cheap. Particularly the “city cars” which never go on the highway and only carry 1-2 people. With simple electric drivetrains they will be easy to build and maintain, and eventually their battery cost will become very reasonable. The dashboard vanishes along with many other controls. The expensive sound system vanishes. The windshields need not be a large custom piece of curved glass — in fact they don’t even have to exist other than for passenger comfort. The parts count goes down significantly. The only additions are the sensors and computers, both on Moore’s law downward curves, and perhaps that nice large screen for working. We may also see a more advanced computer-controlled suspension to keep the ride smooth and comfortable.
Recently I did a road trip through Portugal. I always enjoy finding something new that they are doing in a country which has not yet spread to the rest of the world.
Along a number of Portuguese roads, you will see a sign marked “velocidade controlada” — speed control — and then a modest distance down the road will be a traffic light in the middle of nowhere. There is no cross street. This is an interesting alternative to the speed bump or other “traffic calming” systems.
At the sign, a radar gun measures your speed. If you are over the limit, then as you approach the light, it turns red. It turns red for you, for anybody behind you, and for the oncoming traffic.
The result is that people slow down to the limit at these signs, far more effectively than with any speed bump, and without the very annoying bump itself. Mostly this is done on faster roads than the quiet residential streets that get speed bumps, and of course traffic lights cost more than speed bumps, at least today.
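The control logic described above is very simple, which is part of its appeal. Here is a hedged sketch; the speed limit, the red-light hold time, and all the names are my assumptions, not the actual Portuguese specification.

```python
# Illustrative sketch of a "velocidade controlada" light: a radar reading at
# the sign decides whether the light a short distance ahead turns red for a
# while. Thresholds and timings are invented for illustration.

import time

SPEED_LIMIT_KMH = 50.0   # assumed posted limit
RED_DURATION_S = 20.0    # assumed penalty hold time

class Light:
    """The downstream light; red only when a speeder has been detected."""
    def __init__(self) -> None:
        self.red_until = 0.0
    def set_red(self, duration_s: float) -> None:
        self.red_until = time.monotonic() + duration_s
    def is_red(self) -> bool:
        return time.monotonic() < self.red_until

def on_radar_reading(speed_kmh: float, light: Light) -> None:
    """Called for each passing car: trip the light if it is over the limit."""
    if speed_kmh > SPEED_LIMIT_KMH:
        light.set_red(RED_DURATION_S)

light = Light()
on_radar_reading(63.0, light)   # a speeder passes the sign
print(light.is_red())           # the light ahead is now red for everyone
```

Note that a car at or under the limit never changes the light's state at all, which is why compliant drivers sail through without ever stopping.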
The social dynamic is interesting. Even though many of us are scofflaws when it comes to the speed limit, most are much more religious about a red light. Even a red light like this one where there is no physical danger to running it, just the fairly unlikely risk of a stronger ticket. Strangely, though speeding and running this light are both just violations of the law, I never saw anybody run one, and drivers who were total speed demons elsewhere quickly slowed down before these signs. (People know where they are, so they aren’t a general speed reducer, but rather more like a speed bump in cutting speed in one particular place.)
Added to this is the element of public shame. If you trigger this light, you stop everybody around you too. If you’re a sociopath, this won’t concern you, but for most there is a deep shame about it.
Today, as noted, a traffic light and radar gun are a moderately expensive thing. These lights are not nearly as expensive because they don’t require the complex intersection survey and programming of sometimes 20-30 real lights, but they still need a pole, and electricity, and weather hardened gear. In the future, I predict this sort of tech will get quite inexpensive, possibly cheaper than a speed bump. You could imagine making one with solar power and LEDs which only displayed the red light, not the green, and so needed no external power for it. They need not be on all the time — in fact if the batteries got low, they could just shut down until they recharged. The radar and communications link could also become quite cheap.
Of course, I would like to see this combined with more reasonable speed limits. I have pointed out before that the French Autoroute approach of a realistic limit of 130km/h that everybody obeys and where you really get tickets if you exceed it is much better than the US approach of a 65mph limit that 90% of drivers disregard. This system is much better than the speed bump. Speed bumps hurt cars and impede emergency vehicles. Emergency vehicles can blow through these. These could even vary their speed based on conditions and time of day.
Robocars of course would know where all these are and never trigger one, even if the occupants have commanded the vehicle to exceed the limit. But this is mostly a technology for human drivers. It is halfway along the path to “virtual infrastructure,” which is how roads and traffic control will work in the future, when every car, human driven or not, uses maps and data delivered over phones to know the road, rather than signs and lights.
Most of you will have heard about the giant scandal in which it was revealed that Volkswagen put software in their cars to deliberately cheat on emissions tests in the USA and possibly other places. It’s very bad for VW, but what does it mean for all robocar efforts?
You can read tons about the Volkswagen emissions violations but here’s a short summary. All modern cars have computer controlled fuel and combustion systems, and these can be tuned for different levels of performance, fuel economy and emissions. (Of course, ignition in a diesel is not done by an electronic spark.) Cars have to pass emission tests, so most cars have to tune their systems in ways that reduce other things (like engine performance and fuel economy) in order to reduce their pollution. Most cars attempt to detect the style of driving going on, and tune the engine differently for the best results in that situation.
VW went far beyond that. Apparently their system was designed to detect when it was in an emissions test. In these tests, the car is on rollers in a garage, and it follows certain patterns. VW set their diesel cars to look for this, and tune the engine to produce emissions below the permitted numbers. When the car saw it was in more regular driving situations, it switched the tuning to modes that gave it better performance and better mileage but in some cases vastly worse pollution. A commonly reported number is that in some modes 40 times the California limit of Nitrogen Oxides could be emitted, and even over a wide range of driving it was as high as 20 times the California limit (about 5 times the European limit.) NOx are a major smog component and bad for your lungs.
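To make the mechanism concrete, here is an illustrative sketch of how software could recognize a dyno test, based only on the public reporting above: on the rollers the drive wheels turn while the steering wheel never moves, and the drive cycle runs for a long, scripted stretch. None of this is VW's actual code; the function names and thresholds are invented.

```python
# Hypothetical "defeat device" logic for illustration only: detect the test
# condition, then pick an engine tuning map accordingly. Thresholds invented.

def looks_like_dyno_test(wheel_speed_kmh: float,
                         steering_angle_deg: float,
                         seconds_without_steering: float) -> bool:
    """Heuristic: the car is 'driving' but nobody has touched the wheel
    for a long time, as happens on a test dynamometer's rollers."""
    return (wheel_speed_kmh > 0
            and abs(steering_angle_deg) < 1.0
            and seconds_without_steering > 120)

def choose_engine_map(on_dyno: bool) -> str:
    # Clean, low-NOx tuning only when being tested; the better-performing,
    # dirtier tuning the rest of the time. This switch is the cheat.
    return "low_emissions" if on_dyno else "performance"

print(choose_engine_map(looks_like_dyno_test(50.0, 0.2, 300.0)))  # low_emissions
print(choose_engine_map(looks_like_dyno_test(50.0, 15.0, 5.0)))   # performance
```

The point of the sketch is how little code the cheat requires: the entire scandal hinges on one conditional choosing between two tuning maps.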
It has not been revealed just who at VW did this, and whether other car companies have done this as well. (All companies do variable tuning, and it’s “normal” to have modestly higher emissions in real driving compared to the test, but this was beyond the pale.) The question everybody is asking is “What the hell were they thinking?”
That is indeed the question, because I think the central issue is why VW would do this. After all, having been caught, the cost is going to be immense, possibly even ruining one of the world’s great brands. Obviously they did not really believe that they might get caught.
Beyond that, they have seriously reduced the trust that customers and governments will place not just in VW, but in car makers in general, and in their software offerings in particular. VW will lose trust, but this will spread to all German carmakers and possibly all carmakers. This could result in reduced trust in the software in robocars.
What the hell were they thinking?
The motive is the key thing we want to understand. In the broad sense, it’s likely they did it because they felt customers would like it, and that would lead to selling more cars. At a secondary level, it’s possible that those involved felt they would gain prestige (and compensation) if they pulled off the wizard’s trick of making a diesel car which was clean and also high performance, at a level that turns out to be impossible.
Much press has been made over Jonathan Petit’s recent disclosure of an attack on some LIDAR systems used in robocars. I saw Petit’s presentation on this in July, but he asked me for confidentiality until they released their paper in October. However, since he has decided to disclose it, there’s been a lot of press, with truth and misconceptions.
There are many security aspects to robocars. By far the greatest concern would be compromise of the control computers by malicious software, and great efforts will be taken to prevent that. Many of those efforts will involve having the cars not talk to any untrusted sources of code or data which might be malicious. The car’s sensors, however, must take in information from outside the vehicle, so they are another source of compromise.
There are ways to compromise many of the sensors on a robocar. GPS can be easily spoofed, and there are tools out there to do that now. (Fortunately real robocars will only use GPS as one clue to their location.) Radar is also very easy to spoof — far easier than LIDAR, agrees Petit — but their goal was to see if LIDAR is vulnerable.
The attack is a real one, but at the same time it’s not, in spite of the press, a particularly frightening one. It may cause a well designed vehicle to believe there are “ghost” objects that don’t actually exist, so that it might brake for something that’s not there, or even swerve around it. It might also overwhelm the sensor, so that it feels the sensor has failed, and thus the car would go into a failure mode, stopping or pulling off the road. This is not a good thing, of course, and it has some safety consequences, but it’s also a fairly unlikely attack. Essentially, there are far easier ways to do these things that don’t involve the LIDAR, so it’s not too likely anybody would want to mount such an attack.
Indeed, to do these attacks, you need to be physically present, near the target car, and you need a solid object that’s already in front of the car, such as the back of a truck that it’s following. (It is possible the road surface might work.) This is a higher bar than attacks which might be done remotely (such as computer intrusions) or via radio signals (such as with hypothetical vehicle-to-vehicle radio, should cars decide to use that tech.)
Here’s how it works: LIDAR works by sending out a very short pulse of laser light, and then waiting for the light to reflect back. The pulse is a small dot, and the reflection is seen through a lens aimed tightly at the place the pulse was sent. The time it takes for the light to come back tells you how far away the target is, and the brightness tells you how reflective it is, like a black-and-white photo.
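The range computation just described is simple time-of-flight arithmetic. A minimal sketch (the function name and the 200 ns example are mine, for illustration; this is not code from the paper):

```python
# What the LIDAR computes: a pulse's round-trip time, multiplied by the speed
# of light and halved (the light travels out and back), gives the distance.

C = 299_792_458.0  # speed of light in m/s

def range_from_return(round_trip_s: float) -> float:
    """Distance to the target in metres, from the pulse's round-trip time."""
    return C * round_trip_s / 2.0

# A return arriving 200 nanoseconds after the pulse means a target ~30 m away:
print(round(range_from_return(200e-9), 2))  # 29.98
```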
To fool a lidar, you must send another pulse that comes from, or appears to come from, the target spot, and it has to come in at just the right time, before (or, on some units, after) the real pulse from what’s really in front of the LIDAR comes in.
The attack requires knowing the characteristics of the target LIDAR very well. You must know exactly when it is going to send its pulses before it sends them, and thus precisely (to the nanosecond) when a return reflection (“return”) would arrive from a hypothetical object in front of the LIDAR. Many LIDARs are quite predictable. They scan a scene with a rotating drum, and you can see the pulses coming out, and know when they will be sent.
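The nanosecond timing requirement falls directly out of the same time-of-flight arithmetic. A hedged sketch of the attacker's calculation; the function name and the 10 m example are assumptions for illustration, not from Petit's paper:

```python
# The attacker's timing problem: to fake an object at a chosen distance, the
# spoofed return must hit the detector a precise number of nanoseconds after
# the (predicted) emission of the real pulse.

C = 299_792_458.0  # speed of light in m/s

def spoof_arrival_delay_s(ghost_distance_m: float) -> float:
    """How long after the LIDAR emits its pulse the fake return must arrive
    to simulate an object at ghost_distance_m."""
    return 2.0 * ghost_distance_m / C

# Faking an object just 10 m ahead leaves only ~67 ns of timing margin,
# which is why the attacker must predict the pulse times so precisely.
print(round(spoof_arrival_delay_s(10.0) * 1e9, 1))  # 66.7
```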