Accident autopilot

An MIT team has been working on a car that is "hard to crash." Called the intelligent co-pilot, it is not a self-driving car, but rather a collection of safety systems designed to detect whether you are about to hit something and to try to avoid it. To some extent, it actually wrests control from the driver.

When I first puzzled over the roadmap to robocars, I proposed this might be one of the intermediate steps. In particular, I imagined a car where, in a dangerous situation, the safest thing to do is to let go of the wheel and let the car get you to a safe state. This car goes further, actually resisting you if you try to drive it off the road or towards an obstacle.
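To make the idea concrete, here is a minimal sketch of what such blended-authority steering might look like. This is not the MIT system; the caller is assumed to supply a time-to-collision estimate and a safe avoidance angle from hypothetical perception and planning modules, and the two-second threshold is an arbitrary illustration.

    # A minimal sketch of blended-authority steering, not MIT's actual
    # controller. The time-to-collision estimate and safe avoidance
    # angle are assumed to come from hypothetical perception/planning
    # modules; the threshold is illustrative.

    def blended_steering(driver_angle, safe_angle, ttc_seconds,
                         intervene_below=2.0):
        """Blend the driver's steering input with an avoidance command.

        Above the time-to-collision threshold, the driver has full
        control. Below it, the controller's authority ramps up
        linearly, so the car resists -- rather than abruptly
        overrides -- steering aimed at an obstacle.
        """
        if ttc_seconds >= intervene_below:
            return driver_angle          # driver retains full control
        authority = 1.0 - ttc_seconds / intervene_below
        return (1.0 - authority) * driver_angle + authority * safe_angle

At a 0.5-second time to collision, for instance, the blend would be 75% avoidance command and only 25% driver input, which is the "resisting" behavior rather than a hard takeover.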

This is a controversial step, and the MIT team understands the reasons. First of all, from a legal liability standpoint, vendors are afraid of overriding the human. If a person is in control of a vehicle and makes a mistake, they are liable. If a machine takes over and saves the day, that's great; but if the machine takes over and there is an accident -- one the human could have avoided -- there could be high risks for the maker of the machine as well as the occupant. That's why, in most designs, the system is set up so that the human has the opportunity for control at all times.

Actually, it's even worse. A number of car makers are building freeway autopilots which still require attention from the driver in case the lane markers disappear or other problems ensue. Some of them implement this by requiring the driver to touch the wheel every so often to show they are alert. The system beeps if the driver does not touch the wheel, and it will even disengage if the driver waits too long after the beep. Consider what these companies have interpreted the liability system to require: that the right course of action, when the system is driving and the driver has her hands off the wheel, is to disengage and let the vehicle wander freely and possibly careen off the road! Of course, they don't want the vehicle to do that, but they want to make it clear to the driver that they can't depend on the system and can't decide to type a long e-mail while it is running.
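In pseudocode terms, the attention check amounts to a simple watchdog timer. The sketch below illustrates that logic; it is not any vendor's actual implementation, and the beep and disengage callbacks along with the timing constants are hypothetical.

    # A minimal sketch of the touch-the-wheel watchdog described above.
    # The beep/disengage callbacks and the timing values are
    # hypothetical, not any car maker's real API.

    import time

    TOUCH_INTERVAL = 10.0   # seconds allowed between wheel touches
    BEEP_GRACE = 5.0        # seconds after the beep before disengaging

    class AttentionWatchdog:
        def __init__(self, beep, disengage):
            self.beep = beep
            self.disengage = disengage
            self.last_touch = time.monotonic()
            self.beeped = False

        def on_wheel_touch(self):
            # Any touch resets the timer and silences a pending warning.
            self.last_touch = time.monotonic()
            self.beeped = False

        def tick(self):
            # Called periodically by the autopilot's control loop.
            idle = time.monotonic() - self.last_touch
            if not self.beeped and idle > TOUCH_INTERVAL:
                self.beep()
                self.beeped = True
            elif self.beeped and idle > TOUCH_INTERVAL + BEEP_GRACE:
                self.disengage()

Note that the final branch encodes exactly the policy questioned above: the fallback for an inattentive driver is to hand back the wheel, not to bring the car to a safe state.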

And this relates to the final problem: human accommodation. When a system makes people safer, they compensate by being more reckless. For example, anti-lock brakes are great and prevent wheel lock-up on slippery roads -- but they make drivers feel they have invincible brakes, and studies show they drive more aggressively as a result. Only a safe robocar avoids this problem; its decisions will always be based on a colder analysis of the situation.

A hard-to-crash car is still a very good idea. Before a full robocar is available, it can make a lot of sense, particularly for aging people and teens with new licenses. But it may never come to market due to liability concerns.

Comments

How is this different from currently advertised cars that notice an obstruction and brake for you, when there is a malfunction that causes an accident or fails to prevent one the system is designed to stop?

The difference is that this car will correct your steering. Stopping for an obstacle that the driver is not braking for, after sounding an alarm, is one thing -- essentially, it's never a bad thing to do -- but swerving carries much more risk and is much harder to figure out.
