The NHTSA/SAE "levels" of robocars are not just incorrect. I now believe they are contributing to an attitude towards "level 2" autopilots that plays a small but real role in the recent Tesla fatalities.
The future of computer-driven cars and deliverbots
Last week, buried in the news of the Uber fatality, a Tesla model X had a fatality, plowing into the ramp divider on the flyover carpool exit from Highway 101 to Highway 85 in the heart of Silicon Valley. Literally just a few hundred feet from Microsoft and Google buildings, close to many other SV companies, and just a few miles from Tesla HQ. I take this ramp frequently, as does almost everybody else in the valley. The driver was an Apple programmer, on his way to work.
How does a robocar see and avoid hitting a pedestrian? There are a lot of different ways. Some are very common, some are used only by certain teams. To understand what the Uber car was supposed to do, it can help to look at them. I write this without specific knowledge of what techniques Uber uses.
In particular, I want to examine what could go wrong at any of these points, and what is not likely to go wrong.
The usual pipeline looks something like this:
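To make the stages concrete, here is a minimal toy sketch of such a pipeline. Everything in it is hypothetical: the stage names, the `Track` structure, and the thresholds are illustrative stand-ins, not the actual design of Uber's (or anyone's) system, and real systems use far more sophisticated sensor fusion, classifiers, and trackers.

```python
from dataclasses import dataclass, replace

@dataclass
class Track:
    x: float; y: float      # position in metres, vehicle frame (x = ahead)
    vx: float; vy: float    # estimated velocity (here, per-frame delta)
    label: str = "unknown"

def detect(frame):
    """Stage 1: segment raw sensor returns into candidate objects."""
    return [Track(x=o["x"], y=o["y"], vx=0.0, vy=0.0) for o in frame]

def classify(tracks):
    """Stage 2: label each object. Toy stand-in: call everything a pedestrian."""
    return [replace(t, label="pedestrian") for t in tracks]

def track_motion(tracks, prev):
    """Stage 3: match against the prior frame to estimate velocity."""
    out = []
    for t in tracks:
        p = min(prev, key=lambda q: abs(q.x - t.x) + abs(q.y - t.y), default=None)
        # Toy assumption: frames are one second apart, so the position
        # delta doubles as a velocity estimate in m/s.
        out.append(replace(t, vx=t.x - p.x, vy=t.y - p.y) if p else t)
    return out

def plan(tracks, horizon=3.0):
    """Stage 4: predict each object forward; brake if its path crosses ours."""
    for t in tracks:
        fx, fy = t.x + t.vx * horizon, t.y + t.vy * horizon
        if abs(fy) < 1.5 and 0 < fx < 30:   # inside our lane and ahead of us
            return "BRAKE"
    return "PROCEED"
```

For example, a pedestrian 20 m ahead and 5 m to the side, walking toward the lane at about 1.5 m/s, would be projected into the vehicle's path within the three-second horizon and trigger a brake decision; the same pedestrian standing still off to the side would not. The point of the sketch is that a failure at any one stage (missed detection, wrong label, lost track, bad prediction) propagates to the planner.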
Lost in all my coverage of the Uber event is a much more positive story from San Francisco, where Police issued a ticket to the safety driver of a Cruise test vehicle for getting too close to a pedestrian.
Uber has reached an undisclosed settlement in the fatal incident with the victim's husband and daughter. This matches my prediction of Uber's likely best course of action, since it will shut down much of the public discussion and avoid dragging all sorts of details out into the open in a lengthy trial. The settlement comes with an agreement for silence, as you might expect.
Yesterday we saw the state of Arizona kick Uber's robocar program out of the state. Arizona worked hard to provide very light regulation and attracted many teams to the state, but now it has understandable fear of political blowback. Here I discuss what the government might do about this and what standards the courts, the public, or the government might demand.
The governor of Arizona has told Uber to "get an Uber" and stop testing in the state, with no instructions on how to come back.
Unlike the early positive statements from Tempe police, this letter is harsh and to the point. It's even more bad news for Uber, and the bad news is not over. Uber has not released any log data that makes it look better; the longer it takes to do so, the more it seems that the data don't tell a good story for the company.
In the wake of the Uber fatality, I'm seeing lots of questions. Let's consider the issues of crosswalks and interventions by safety drivers.
The importance of the crosswalk
Crosswalks actually are important to robocars, in spite of the fact that cars still should stop for a pedestrian outside of a crosswalk.
Today I'm going to examine how you attain safety in a robocar, and outline a contradiction in the things that went wrong for Uber and their victim. Each thing that went wrong is both important and worthy of discussion, but at the same time unimportant. Almost every thing that went wrong is something we want to prevent going wrong, but it's also something we must expect will go wrong sometimes, and plan for.
Major Update: Release of the NTSB full report includes several damning new findings
It's just been reported that one of Uber's test self-driving cars struck a woman in Tempe, Arizona during the night. She died in the hospital. There are not a lot of facts at present, so any of the early reports might be contradicted later.
One of the biggest milestones of the robocar world has gotten just a little coverage. Waymo, which last year removed the safety driver from behind the wheel of their cars in Phoenix, still had a supervisor sitting in the back with a kill switch. That supervisor is now gone and the car comes to pick up passengers entirely unmanned.
Earlier this week, I wrote about making a subway for robotic vans which just has tunnels and ramps to the surface, rather than the vastly more expensive system of giant stations we use for today's underground transit. It offers the chance to save immense amounts of money because stations are expensive to build and maintain.
I have written a few times about the unusual nature of robocar accidents. Recently I was discussing this with a former student who is doing some research on the area. As a first step, she began looking at lists of all the reasons that humans cause accidents. (The majority of them, on police reports, are simply that one car was not in its proper right-of-way, which doesn't reveal a lot.)
This led me, though, to the following declaration, which goes against most early intuitions.
A hiker online asked me about when we might see a robotic "pack mule" to make long hikes easier. The big problem is energy (and noise) since right now the walking robots that exist use a lot of energy to travel, and most hikes involve some terrain you can't do on wheels.
He hoped for solar charging, but most hikers like to hike under cover away from the burning sun. The robot probably wants to be electric since nobody wants a loud engine on a pack robot on the trail. That's a problem.
San Francisco is building its new Central Subway -- an underground light rail line. Ground was broken in 2010 but due to delays it will not open until 2021. This line will finally make the Caltrain commuter rail (which otherwise dumps passengers into an industrial zone far from where most of them wish to go) more useful, and offer travel not slowed by SF's terrible central district congestion.