Tesla, Audi and other recent announcements
Some recent announcements have caused a lot of press stir, and I have not written much about them, both because of my busy travel schedule and because there is less news here than we might imagine.
Tesla is certainly an important company to watch. As the first successful start-up car company in the USA in decades, they are showing they know how to do things differently, taking advantage of the fact that they don't have a baked-in knowledge of "how a car company works" the way other companies do. Tesla's announcements of plans for more self-driving are important. Unfortunately, the announcements around the new dual-motor Model S involve offerings quite similar to what can already be found in cars from Mercedes, Audi and a few others: advanced ADAS, and the combination of lane-keeping and adaptive cruise control to provide a hands-off cruise control where you must still keep your eyes on the road.
One notable feature demonstrated by Tesla is automatic lane change, which you trigger by hitting the turn signal. That's a good interface, but it must be made clear to people that they still have the duty to check that it's safe to change lanes. It's not that easy for a robocar's sensors, especially the limited sensor package in the Tesla, to see a car coming up fast behind you in the next lane. On some highways relative speeds can get pretty high. You're not likely to be hit by such cars, but in some cases that's because they will probably brake for you, not because you did a fully safe lane change.
Much more interesting are Elon Musk's predictions of a real self-driving car in 5 to 6 years. He means one where you can read a book, or even, as he suggests, go to sleep. Going to sleep is one of the greatest challenges, almost as hard as operating unmanned or carrying a drunk or disabled person. You won't likely do that just with cameras -- but 5 to 6 years is a good amount of time for a company like Tesla.
Another unusual thing about Tesla is that while they are talking about robocars a lot, they have also built one of the finest driver's cars ever made. The Model S is great fun to drive, and has what I sometimes call a "telepathic" interface -- the motors have so much torque that you can almost think about where you want to go and the vehicle makes it happen. (Other examples of telepathic interfaces include touch-typing and a stickshift.) In some ways it is the last car that people might want to automate. But it's also a luxury vehicle, and that makes self-driving desirable too.
Audi Racing
Another recent announcement creating buzz is Audi's self-driving race car on a test track in Germany. Audi has done racing demos several times now. These demos are both important and unimportant. It definitely makes sense to study how to control a car in extreme, high-performance situations. Understanding the physics of the tires so fully that you can compete in racing will teach lessons useful in danger situations (like accidents) or certain types of bad weather.
At the same time, real-world driving is not like racing, and nobody is going to be doing race-like driving on ordinary streets in their robocar. 99.9999% of driving consists of "staying in your lane" and a few other basic maneuvers, so racing is fun and sexy but not actually very high on the priority list. (Not that teams don't deserve to spend some of their time on a bit of fun and glory.) The real work of building robocars involves putting them through every road situation you can, both real and, in some cases, simulated on a track or in a computer.
Google first showed its system to many people by having it race figure-8s on the roof parking lot at the TED conference. The car followed a course through a group of cones at a pretty decent speed and wowed the crowd with the tight turns. What most of the crowd didn't know was that the cones were largely there for show. The car was guiding itself from its map of all the other physical things in the parking lot -- line markers, pavement defects and more -- and it can localize itself just fine from those. The cones just showed the public that it really was following the planned course. At the same time, making a car do that is something that was accomplished decades ago, and is used routinely to run "dummy cars" on car company test tracks.
A real demo turns out to be very boring, because that's how being driven should be. I'm not saying it's bad in any way to work on racing problems. The only error would be forgetting that the real-world driving problems are higher priority, and that success in them is less dramatic but more impressive in the technical sense.
This doesn't mean we won't see more impressive demos soon. Many people have shown off automatic braking. Eventually we will see demos of how vehicles respond in danger situations -- accidents, pedestrians crossing into the road and the like. A tiny part of driving but naturally one we care about. And we will want them to understand the physics of what the tires and vehicle are capable of so that they perform well, but not so they can find the most efficient driving line on the track.
There was some debate about holding a new self-driving car contest like the DARPA grand challenges, and a popular idea was man vs. machine, including racing. That would have been exciting. We asked ourselves whether a robot might have an advantage because it would have no fear of dying. (It might have some "fear" of smashing its owner's very expensive car.) It turns out this happens on the racetrack fairly often with new drivers who try to get an edge by driving as if they have no fear and will win every game of chicken. When this happens, the other drivers get together to teach that new driver a lesson: a lesson about cooperation and reciprocity in passing and drafting. So the robots would need to be programmed with that as well, or their owners would find a lot of expensive crashes and few victories.
Comments
Lunatic Esex
Tue, 2014-10-21 16:36
Perspective
"danger situations — accidents, pedestrians crossing into the road and the like. A tiny part of driving"
"Tiny" only in terms of time spent involved in them. HUGE in terms of importance.
Diminishing these things is like saying "the code works great, but it's missing a UI." So... effectively useless to 99% of the world's population.
As for a man vs. machine racing competition, it'd be interesting like computer vs. grand master chess tournaments, but similarly lacking in relevance to the real world.
So what do Elon Musk and Tesla say, with regard to his "5 to 6 years" prediction, about the major challenges to autonomous cars: driving in snow and poor weather; humans directing traffic around schools, construction areas, event parking, and signal failures; and the constant everyday changes to road layouts, signs and signaling? Is he hiding a magic wand somewhere?
brad
Tue, 2014-10-21 19:15
No, tiny
This is, of course, a mathematical analysis rather than an emotional one, but no, the amount of time spent in accident situations is tiny.
For example, human drivers in the USA have an accident about every 250,000 miles, and a fatality every 130 million miles. (Back-of-envelope math: 60% of fatalities are single-vehicle and 40% are multi-vehicle, and I did not put that into this math, but the important thing is that most accidents you are in will be caused by you, and on average about 80% of the fatal accidents you are in will be caused by you.) Of course, if you are the fatality, you are unlikely to be in another accident.
Only about 30% of reported accidents are single-vehicle, but there is a huge bias here. The vast bulk of single-vehicle accidents are not reported to police, and often not even to insurance companies, while single-vehicle fatalities are all reported to police. For example, the roughly 2 million deer strikes are reported only to insurance companies. I am going to guess that a much larger percentage of all accidents are actually single-vehicle.
I am thus going to guess that the average Joe causes an accident every 300K miles and is hit every 1.5M miles -- but I would be very curious if there are real numbers on this.
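To make those rates concrete, here is a quick back-of-envelope sketch that turns the per-mile figures above into how often a typical driver would hit each event. The 12,000 miles per year of driving is my own added assumption for a rough US average; the other numbers are the ones quoted above.

```python
# Back-of-envelope: how often a typical driver experiences each event,
# using the rates quoted above plus an assumed ~12,000 miles driven per year.
MILES_PER_YEAR = 12_000  # my added assumption, roughly a US average

miles_per_event = {
    "any accident (reported or not)": 250_000,
    "accident you cause (guess)": 300_000,
    "being hit by someone else (guess)": 1_500_000,
    "fatality": 130_000_000,
}

for event, miles in miles_per_event.items():
    years = miles / MILES_PER_YEAR
    print(f"{event}: roughly once every {years:,.0f} years of driving")
```

At those rates, a typical driver causes an accident perhaps once every couple of decades of driving, which is the sense in which accident situations are a tiny fraction of driving time.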
If you drop the rate of accidents caused by the vehicle to 1 in a million miles, that's great, even if you do nothing about their severity. If you make the severity worse, of course, you need to fix that.
You might also, if you are very good, avoid or reduce the severity of accidents where you are the one hit. That's also good, but not as much return as you get from simply reducing the frequency of caused accidents at the same severity, at least up to a point. Once you have driven down the cost of the accidents you cause, you start working harder on the ones where you are hit.
I will also note that a very large volume of accidents are "dings", and a huge fraction are in parking lots. I actually hope to see robocars get those numbers down super-low very fast. Of course, bugs that make a vehicle run amok will increase severity, and giving the vehicle better accident-handling ability won't help at all there.
Lun Esex
Wed, 2014-10-22 16:04
Completely missing my point
I already said they were "tiny" in terms of time spent involved in them.
An autonomous car that gets in fewer accidents but *deals with danger situations worse* than a human driver is going to get loads of press not for how many fewer accidents it gets in, but for how much worse it is at handling danger situations than human drivers are.
This is huge for the public perception of the safety of autonomous cars.
Of course, the code for what to do once in a danger situation is going to be different from the code for avoiding them in the first place, which I get the impression has been focused on almost exclusively so far.
Just due to the vast number of collective vehicle miles driven, it is inevitable that there will be not only many danger situations but also many accidents involving autonomous cars. It should be obvious that those accidents, however much rarer they are than accidents not involving autonomous cars, are going to get more attention than all the accidents that are avoided in the first place.
Plus, those many minor accidents caused by human drivers that currently go unreported are likely to get disproportionate attention when an autonomous car is involved in them, regardless of whether a human is actually at fault or not. It's not unlikely that, simply due to disproportionate reporting, autonomous cars will be branded as more prone to these kinds of accidents than human drivers, even if they're actually less frequently involved in them per vehicle or per mile driven.
Then people will be inclined to try to lay the blame on the computer-driven car in any accident where both a human-driven and an autonomous car are involved. Data logs can be pulled from the autonomous car to try to show it was not at fault, but the data might be inconclusive. Judges and juries are likely to be biased towards the human driver, just due to human nature.
The code for an autonomous car actually dealing with a danger situation is like the vehicle escape system on a manned rocket: You really hope you don't have to use it, but it'd better be especially good at what it does if you do.
And it's an absolute inevitability that autonomous cars *are* going to be involved in danger situations and accidents, and their code for dealing with them *is* going to have to be used.
brad
Wed, 2014-10-22 18:23
Bad press, I agree
But I am looking at two different things.
a) The reality -- what strategy gives the greatest reduction in accident consequences
b) The perception -- how scared the public is of it.
I realize these are not the same thing. And I realize you definitely have to deal with (b). Never said otherwise.
What I said was that we also want to keep our focus on (a) as much as we can, though in practice probably not as much as we would like.
Consider a hypothetical: a car which gets into an accident once every 300M miles, but when it does, causes a horrendous death. Human drivers, over those same 300M miles, will have not just a little more than one fatality, but also about 1,200 minor accidents and a few score injuries, some of them major.
Now from an objective standpoint, the first car is much better for society, very much the one you want to ride. Even though it didn't do anything about mitigating the one accident it gets in every 300M miles.
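For what it's worth, here is a rough sketch of that comparison in code. The event counts are the ones quoted above ("a few score" injuries taken as roughly 40); the per-event cost weights are purely illustrative assumptions of mine, not figures from the post.

```python
# Rough cost comparison for the hypothetical above: a robocar that causes one
# horrendous fatal crash per 300M miles vs. human drivers over the same
# 300M miles. Event counts are taken from the comment; the per-event cost
# weights are illustrative assumptions only.
COST_PER_FATALITY = 9_000_000  # assumption, on the order of a "value of statistical life"
COST_PER_INJURY = 100_000      # assumption
COST_PER_DING = 3_000          # assumption

robocar = {"fatalities": 1, "injuries": 0, "dings": 0}
humans = {"fatalities": 1.2, "injuries": 40, "dings": 1200}  # counts as quoted above

def total_cost(counts):
    return (counts["fatalities"] * COST_PER_FATALITY
            + counts["injuries"] * COST_PER_INJURY
            + counts["dings"] * COST_PER_DING)

print(f"Robocar harm per 300M miles: ~${total_cost(robocar):,.0f}")
print(f"Human harm per 300M miles:   ~${total_cost(humans):,.0f}")
```

Even with the single horrendous death counted at full cost, the human baseline comes out considerably worse under these assumed weights, which is the point of the hypothetical.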
So the question becomes, how do you make the public act on that objective valuation? I am not saying it's easy.
Xed
Sat, 2014-10-25 10:52
Perception correction
I am keen to see some autonomous car development effort spent on racing. The reason I think it is important (though probably not more important than street driving) is that I frequently meet people who dismiss autonomous cars because they do not trust them. They simply feel they are better drivers than a computer could ever be. If playing chess poorly occasionally caused horrific violent deaths, one could see the merit of using a competition to convince people that if the grand master loses to a computer, you will likely lose too. With racing, I feel it could help demonstrate that, no, you do not drive better than a computer. Already F1 has had to get quite philosophical about what "winning" really means with regard to dominant cars with computer-controlled active suspension and other computer aids. To some extent, if computer-controlled cars were able to consistently crush the best human drivers on a race track, the question wouldn't be whether we could make cars good enough for normal roads, but whether we could make the driving environment more like a race track (controlled conditions, etc.). In any case, clear superiority in the simplified domain of racing couldn't hurt the reputation of autonomous cars.
Teppy
Sat, 2014-11-29 18:06
Telepresence Driving
Is anyone pursuing this? Seems that having some guy in a call center (driving center?) in India would be a good bridge for the next 5-6 years until we have autonomous self-driving cars.