Interviews with me on Reason.tv and SmartDrivingCars podcast
A couple of non-text interviews this week.
Reason TV did a segment on regulation for robocars in which I'm interviewed. Naturally, Reason likes my spin that we don't need heavy regulation at this time, though by putting me up against John Simpson from the shadowy "Consumer Watchdog" group (who does not seem to have met a regulation he doesn't like), the segment makes it seem like I'm calling for complete laissez-faire. Instead, my position is that there's already a lot of regulation (it's already illegal to drive unsafely, and certainly to hit anything), so we don't need to rush new rules until we know which things companies can't be trusted not to cheat on. Let's learn how the tech works first.
What is key is that we actually have a chance to greatly simplify driving regulation (the "rules of the road"), because most rules of the road were written for untrustworthy human drivers with unreliable judgment. If a robocar makes a mistake that would result in a ticket or incident, the teams will fix that mistake and it won't happen again in any car. That's very different from people. We put up "no left turn" signs in places where a left turn is perfectly safe for people 99% of the time, and 99.9999% safe for a robot with perfect judgment of physics. Another key point, not noted in the interview, is that with robocars you can get reps from every team putting a car on the road into a room together to work out problems. You don't need to pass specific laws; you just work interactively with developers to find good solutions.
Alain Kornhauser of Princeton is one of the few who has been working on and writing about robocars (or Smart Driving Cars) since the start. He does a regular e-mail blast of commentary on news stories, and has now started a podcast. I'm a guest on this morning's episode. One new issue discussed is the ironic crash of a Navya shuttle in Las Vegas two hours after it went into operation, which I noted yesterday. We also talk about the role of the Waymo supervisor, now in the back seat of their Phoenix pilot. (My earlier post did not make that clear.) This supervisor does not have a wheel, just buttons to make the vehicle stop or pull over.
The Navya crash has since generated lots of debate over the shuttle's failure to back up. People seem to be forgetting that this is still an early-revision product. It does not have a lot of experience with extreme and accident situations, and teams don't want their cars driving in ways they have not had time to extensively test.
I liken this to humans and deer. If you see a deer on the highway, the universal advice is "don't swerve, hit the deer," because most deaths in encounters with animals come from swerving, not from hitting the animal. Swerving is not something we're good at or do very often, so while it seems the ideal choice, it is much riskier. The same applies to early robocars. They won't attempt complex evasive action because they're not rated for it: it's not that they could not do it, but that it contains too many unknowns for a device programmed to be paranoid about safety.