The future of computer-driven cars and deliverbots
The AUVSI/TRB "Automated Vehicles Summit" kicked off this morning with a report from JD Power on consumer attitudes. I am very skeptical of all such surveys. They seem about as useful as a 2005 survey asking what people would do with the iPhone, two years before it came out. Such a survey would surely have found that almost nobody planned to get one or would use it in the ways people actually do.
The Tempe police released a detailed report on their investigation of Uber's fatality. I am on the road and have not had time to read it, but the big point, reported by much of the press, was that the safety driver was, according to logs from her phone accounts, watching the show "The Voice" via Hulu on her phone shortly before the incident.
Yesterday I examined some of the details released by the NTSB about the Uber fatality. Now I want to dig deeper with speculation as to the why. Of course, speculation is risky, though I can claim a pretty good track record. When I outlined various possible causes of the incident just after it happened, I put four at the top. I figured that at most one might be true, but it turned out that two were (misclassification as a bicycle, and the car wanting to stop but being unable to actuate the brakes), though I did not suspect Uber had deliberately blocked the car from doing hard stops. So I'll try my luck at speculating again.
Most of the press covered a research report from UBS Securities claiming Waymo is now worth $75B to Google because it is poised to dominate the robotaxi business. The report also claimed that business would be worth $1.2 trillion by 2030, with an additional $472 billion for "in-car monetization." (Total Google revenue was $110 billion in 2017.)
In the wake of Tesla's incident, many more questions are being asked about the concept of testing prototype robocars on public roads supervised by "safety drivers." Is the public being put at risk for corporate benefit? Are you a guinea pig in somebody's experiment against your will? Is it safe enough? Is there another way?
A crash today involving a Waymo van is getting attention because it came in the same area just a short time after the Uber fatality, but Waymo will not be assigned fault -- the driver of the car that hit the Waymo van veered out of his lane into oncoming traffic because of somebody else encroaching on the intersection. There were only minor injuries, but the crash involved higher energy than Waymo's prior crashes.
As teams around the world attempt to build safe robocar systems, one key asset has stood out as a big differentiator -- experience. For a company to be willing to certify its vehicle as safe, it needs experience with all the strange circumstances the vehicle might encounter driving the roads.
The primary purpose of the city is transportation. Sure, we share infrastructure like sewers and power lines, but the real reason we live in dense cities is so we can have a short travel time to the things in our lives, be they jobs, friends, shopping or anything else.
Sometimes that trip is a walking one, and indeed only the dense city allows walking trips to be short and also interesting. The rest of the trips involve some technology, from the bicycle to the car to the train. All that is about to change.
The NHTSA/SAE "levels" of robocars are not just incorrect. I now believe they are contributing to an attitude towards "level 2" autopilots that plays a small but real role in the recent Tesla fatalities.
Last week, buried in the news of the Uber fatality, a Tesla Model X had a fatal crash, plowing into the ramp divider on the flyover carpool exit from Highway 101 to Highway 85 in the heart of Silicon Valley. Literally just a few hundred feet from Microsoft and Google buildings, close to many other SV companies, and just a few miles from Tesla HQ. I take this ramp frequently, as does almost everybody else in the valley. The driver was an Apple programmer on his way to work.
How does a robocar see and avoid hitting a pedestrian? There are a lot of different ways. Some are very common, some are used only by certain teams. To understand what the Uber car was supposed to do, it can help to look at them. I write this without specific knowledge of what techniques Uber uses.
In particular, I want to examine what could go wrong at any of these points, and what is not likely to go wrong.
The usual pipeline looks something like this:
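To make the stages concrete, here is a minimal, hypothetical sketch of the flow such a pipeline usually follows (sense, segment, classify, predict, plan). Every function name, data format, and threshold here is illustrative only, not any team's actual implementation, and the real versions of each stage are vastly more complex.

```python
# Hypothetical perception-to-planning pipeline sketch. A failure at any
# stage (missed detection, misclassification, bad prediction, or a
# planner that never triggers the brakes) can ripple through to a crash.

def segment(points):
    """Group raw sensor returns into object clusters.
    Stubbed here: the frame already arrives pre-clustered."""
    return points

def classify(cluster):
    """Label a cluster. A misclassification at this stage
    (e.g. pedestrian vs. bicycle) corrupts every later stage."""
    return {"label": cluster["hint"],
            "distance_m": cluster["distance_m"],
            "closing_mps": cluster["closing_mps"]}

def predict_ttc(obj):
    """Predict time to collision in seconds, assuming the closing
    speed stays constant. Infinite if the object is not closing."""
    if obj["closing_mps"] <= 0:
        return float("inf")
    return obj["distance_m"] / obj["closing_mps"]

def plan(objects, brake_threshold_s=2.0):
    """Decide whether to brake. Real planners can also steer, slow,
    or yield; this just marks where actuation gets triggered."""
    if any(predict_ttc(o) < brake_threshold_s for o in objects):
        return "BRAKE"
    return "CONTINUE"

def pipeline(sensor_frame):
    clusters = segment(sensor_frame)
    objects = [classify(c) for c in clusters]
    return plan(objects)

# A pedestrian 20 m ahead, closing at 15 m/s: TTC is about 1.3 s,
# under the 2 s threshold, so the planner commands a brake.
frame = [{"hint": "pedestrian", "distance_m": 20.0, "closing_mps": 15.0}]
print(pipeline(frame))  # prints BRAKE
```

The point of laying it out this way is that the stages are sequential: the planner can only act on what classification and prediction hand it, so an error upstream is invisible downstream.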
Lost in all my coverage of the Uber event is a much more positive story from San Francisco, where police issued a ticket to the safety driver of a Cruise test vehicle for getting too close to a pedestrian.
Uber has reached an undisclosed settlement in the fatal incident with the victim's husband and daughter. This matches my prediction of Uber's likely best course of action, since it will shut down much of the public discussion and avoid dragging all sorts of details out into the open in a lengthy trial. The settlement comes with an agreement for silence, as you might expect.
Yesterday we saw the state of Arizona kick Uber's robocar program out of the state. Arizona worked hard to provide very light regulation and attracted many teams to the state, but now it has understandable fear of political blowback. Here I discuss what governments might do about this, and what standards the courts, the public, or governments might demand.
The governor of Arizona has told Uber to "get an Uber" and stop testing in the state, with no instructions on how to come back.
Unlike the early positive statements from Tempe police, this letter is harsh and to the point. It's even more bad news for Uber, and the bad news is not over. Uber has not released any log data that makes it look better; the longer it takes to do so, the more it seems the data don't tell a good story for the company.