Automated Vehicles Symposium Days 1 and 2
From small beginnings, this event has grown: over 800 people are here in Ann Arbor for the AUVSI/TRB Automated Vehicles Symposium. Let's summarize some of the news.
Test Track
Lots of PR about the new test track opening at the University of Michigan. I have not been out to see it, but it certainly is a good idea to share one of these facilities rather than have everybody build their own, as long as you don't want to test in secret.
NHTSA
Mark Rosekind, the NHTSA administrator, gave a pretty good talk for an official, though he continued the DoT's bizarre promotion of V2V/DSRC. He said they were even open to sharing the DSRC spectrum with other users. (Those other users have been champing at the bit to get more unlicensed spectrum opened up, and this band, which remains unused, is a prime target; the DoT realizes it probably can't protect it.) Questions, however, clarified that he wants to demand evidence that the spectrum can be shared without interfering with the ability of cars to get a clear signal for safety purposes. Leaving aside the fact that the safety applications are not significant, this may signal a different approach -- they may plan to demand this evidence, and when they don't get it -- because of course there will be interference -- use that as grounds to fight to retain the spectrum.
I say there will be interference because the genius of the unlicensed bands (like the 2.4GHz band where your 802.11b and Bluetooth work) was the idea that if you faced interference, it was your problem to fix, not the transmitter's, as long as the transmitter stayed low power. A regime where nobody may interfere with you would be a very different sort of band, one that could only be used a long distance from any road -- i.e. nowhere that anybody lives.
Manufacturers
The most disappointing session for everybody was the vendors' session, particularly the report from GM. In the past, GM has shown real results from their work; this time we got a recap of old material. The other reports were better, but only a little. Perhaps it is a sign that the field is getting big, and people are no longer treating it like a research discipline where you share with your colleagues.
Ethics
Chris Gerdes' report on a Stanford ethics conference was good in that it went well past the ridiculous trolley-problem question (what if the machine has to choose between harming two different humans?), which has become the bane of anybody who talks about robocars. You can see my answer if you haven't by now.
Their focus was on more real problems, like when you illegally cross the double yellow line to get around a stalled car, or what you do if a child runs into the street chasing a ball. I am not sure I liked Gerdes' proposal -- that the systems compute a moral calculus, putting weights on various outcomes and following a formula. I don't think that's a good thing to ask the programmers to do.
If we really do have a lot of this to worry about, I think this is a place where policymakers could actually do something useful. They could set up a board of some sort. A vendor or programmer who has an ethical problem to program would put it to the board, get a ruling, and program in that ruling, secure in the knowledge that they would not be blamed, legally, for following it.
The programmers would know how to properly frame the questions, but they could also refine them. They would frame them differently than lay people would imagine, because they know things lay people don't. For example:
My vehicle encounters a child (99% confidence) who darts out from behind a parked van, and it is not possible to stop in time before hitting the child. I have an X% confidence (say 95%) that the oncoming lane is clear and a Y% confidence (say 90%) that the sidewalk is clear, though driving there would mean climbing a curb, which may injure my passenger. While on the sidewalk, I am operating outside my programming, so my risk of causing harm increases 100-fold. What should I do?
Let the board figure it out, let them understand the percentages, and even come back with a formula for what to do based on X, Y and the other numbers. Then the programmer can implement it and refine it.
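To make that concrete, here is a minimal sketch, in Python, of what a board-issued formula might look like once a programmer encodes it. Every weight and threshold here is invented for illustration; this is not any real ruling or any vendor's code.

```python
def board_ruling(child_conf, x_oncoming_clear, y_sidewalk_clear):
    """Return the action with the lowest expected cost under the board's
    (hypothetical) published weights. All inputs are probabilities in [0, 1]."""
    # Invented weights, standing in for whatever a real board would publish.
    COST_HIT_CHILD = 1000.0
    COST_HEAD_ON_CRASH = 800.0
    COST_CURB_INJURY = 5.0
    OFFROAD_RISK_FACTOR = 100.0  # the 100-fold risk increase off the roadway

    options = {
        "brake_in_lane":
            child_conf * COST_HIT_CHILD,
        "cross_into_oncoming_lane":
            (1.0 - x_oncoming_clear) * COST_HEAD_ON_CRASH,
        "mount_sidewalk":
            (1.0 - y_sidewalk_clear) * COST_HIT_CHILD * OFFROAD_RISK_FACTOR
            + COST_CURB_INJURY,
    }
    return min(options, key=options.get)

# The scenario from the text: 99% child, X = 95%, Y = 90%.
print(board_ruling(0.99, 0.95, 0.90))  # -> "cross_into_oncoming_lane"
```

The point is not these particular numbers, but that a ruling expressed this way is something a programmer can implement, test, and show to a regulator.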
Investment
For the first time, there was a panel about investment in the technology, with one car company, two VCs and a car-oriented family fund (Porsche). Lots more interest in the space, but still a reluctance to get involved in hardware, because it costs a lot, is uncertain, and takes a long time to generate a return.
Afternoon breakouts
I largely missed these. Many were just filled with more talks. I have suggested to conference organizers a rule that the breakout sessions be no more than 40% prepared talks, and the rest interactive discussion.
Wednesday starts with Chris Urmson of Google
Chris' talk was perhaps the most anticipated one. (Disclaimer -- I used to work for Chris on the Google team.) It had similarities to a number of his other recent talks at TED and ITS America, with lots of good video examples of the car's perception system in operation. Chris also addressed this week's hot topic in the press, namely the large number of times Google's car fleet has been hit by other drivers in accidents that are clearly the fault of the other driver.
While some (including me) have speculated this might be because the car is unusual and distracting, Google's analysis of the accidents strongly suggests that we have seriously underestimated how common small fender-bender accidents are. There are 6 million reported accidents in the US every year, and common estimates from insurers and researchers suggested the real number might include another 6 million unreported ones. It's now clear, based on Google's experience, that the number of small accidents that go unreported is much higher.
Google thinks that is good news in several ways: it tells us just how distracted and how bad human drivers are, and it shows that their car is doing even better than was first thought. The task of outperforming humans on safety may be easier than expected.
The anti-Urmson
Adriano Alessandrini has always been a provocative and controversial character at these events. His report on Citymobil2 (a self-driving shuttle bus that has run in several cities with real passengers) was deliberately framed as a contrast to Google's approach. Google is building a car meant to drive existing roads, a very complex task. Alessandrini believes the right approach is to make the vehicle much simpler, and only run it on certified safe infrastructure (not mixed with cars) and at very low speeds. As much as I disagree with almost everything he says, he does have a point when it comes to the value of simplicity. His vehicles are serving real passengers, something few others can claim.
Public perception
We got to see a number of study results. Frankly, I have always been skeptical of the studies that report what the public thinks of future self-driving cars and how much they want them. In reality, only a tiny fraction of the 800 people at this conference, supposed experts in the field, have a really solid concept of what these future vehicles will look like; none of us truly knows the final form. So I am not sure how you can ask the general public what they think of them.
Of greater interest are reports on what people think of today's advanced features. For example, blind-spot warning is much more popular than I realized, and it is changing what people value in cars and which cars they will buy.
Security
On Tuesday afternoon I attended a very interesting security session. I will write more about this later, particularly about a great paper on spoofing robocar sensors (I will await its author's first publication of the paper), but in general I feel there is a lot of work to be done here.
In another post I will sum up a new expression of my thoughts here, which I will describe as "Connected and Automated: pick only one." While most of the field seems to be raving about the value of connectivity, and that debate has some merit, I feel that if the value of connectivity (other than to the car's HQ) is not particularly high, it does not justify the security risk that comes with it. As such, if you have a vehicle that can drive itself, that system should not be "on the internet" as it were, connecting to other cars or to various infrastructure services. It should only talk to its maker (probably over a verified and encrypted tunnel on top of the cellular data network), and it should frankly be a little scared even of talking to its maker.
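As a rough illustration of that "talk only to your maker" rule, here is a minimal Python sketch of such a verified, encrypted tunnel using certificate pinning: the car will only accept a server that proves itself against the maker's own certificate authority, shipped in the firmware. The hostname and file path are hypothetical, and this is not any carmaker's actual design.

```python
import socket
import ssl

MAKER_HOST = "telemetry.example-carmaker.com"  # hypothetical endpoint
MAKER_PORT = 443
PINNED_CA = "/etc/car/maker_ca.pem"  # the maker's own CA, shipped in the firmware

def connect_to_maker():
    # Trust ONLY the maker's CA, not the public web PKI, so a compromised
    # commercial certificate authority cannot impersonate headquarters.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verifies cert + hostname
    ctx.load_verify_locations(PINNED_CA)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    raw = socket.create_connection((MAKER_HOST, MAKER_PORT), timeout=10)
    return ctx.wrap_socket(raw, server_hostname=MAKER_HOST)

# Everything arriving on this socket should still be parsed defensively --
# the car should be "a little scared even of talking to its maker."
```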
I proposed this to the NHTSA administrator, and as the agency is a huge backer of V2V, he could not give me an answer -- he mostly wanted to talk about the perception of security rather than the security itself -- but I think it's an important question to be discussed.
Since many people don't accept this, there are efforts to increase security. First, people are working to put in the security that always should have been in cars (they have almost none at present). Second, there are efforts at more serious security, with the lessons of the internet's failures fresh in our minds. Efforts at provably correct algorithms are improving, and while nobody thinks you could build a provably correct self-driving system, there is some hope that the systems which parse inputs from outside could be made provably secure, and that they could be compartmentalized from other systems so that a compromise of one system would have a hard time reaching the driving system, where real damage could be done.
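Here is a toy sketch of that compartmentalization idea: the code that parses untrusted outside input runs in its own process, and the driving side accepts only a small, type- and range-checked message from it. A real system would add OS-level sandboxing on top of this; the message field and bounds are invented for illustration.

```python
import json
from multiprocessing import Process, Queue

def parse_untrusted(raw_bytes, out_queue):
    """Runs in its own process: if this parser is compromised or crashes,
    the driving process is not directly exposed."""
    try:
        msg = json.loads(raw_bytes)
        # Forward only whitelisted, type- and range-checked fields.
        advisory = float(msg["speed_advisory"])
        if 0.0 <= advisory <= 130.0:  # sanity bounds in km/h (invented)
            out_queue.put({"speed_advisory": advisory})
    except (ValueError, KeyError, TypeError):
        pass  # malformed input is dropped, never propagated

def handle_outside_message(raw_bytes):
    q = Queue()
    p = Process(target=parse_untrusted, args=(raw_bytes, q))
    p.start()
    p.join(timeout=1.0)  # a hung or looping parser cannot stall driving
    if p.is_alive():
        p.terminate()
    return q.get() if not q.empty() else None

if __name__ == "__main__":
    print(handle_outside_message(b'{"speed_advisory": 88.0}'))
```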
There were calls for standards, which I oppose -- we are way too early in this game to know how to write the standards. Standards at best encode the conventional wisdom of three years ago, and make it hard to go beyond it. That is not what we need now.
Nonetheless, there is research going on to make this more secure, if it is to be done.
Comments
Frank Ch. Eigler
Sun, 2015-07-26 13:54
trolley ethics board
FWIW, it would be nice to know why you believe that an ethical question - trading off lives/risk - could be reliably formalized. The existence of a government board that trades immunity for following preapproved ethical decisions seems far-out too. Is there an area of analogous human law?
If the robocar is placed into a situation where others' errors cause a crash, genuine liability won't accrue to the robocar anyway, so (from your scenario) hitting the child would be "safe" from a legal point of view.
brad
Mon, 2015-07-27 17:46
Formalizing
I am skeptical of Chris Gerdes' approach to formalizing it, but in the end, if we get to a situation where the programmers must put in some sort of ethical logic, it will be formalized, and so a board can decide it.
Truth be known, I don't really see the example I give as a real one. Far more real is what Chris talked about:
Of course, any human would drive around, crossing the line, and no cop would ticket you, so here's where the board could offer an answer to the programmers.
DensityDuck
Thu, 2015-10-01 16:34
What the double yellow is *for*
The purpose of the double-yellow line is to establish a virtual barrier in the center of the roadway. The reason this virtual barrier is present is the assumption that a human driver might not be able to see an oncoming vehicle in time to avoid a collision, and so it is necessary to prevent opposing-direction traffic in the same lane. A robocar with the appropriate vision system *would* be able to detect an oncoming vehicle in time to avoid a collision, and so it would be acceptable for a robocar to cross the double yellow line in circumstances where the roadway was blocked.
Of course, now you have to make sure the robocar doesn't confuse a line of stopped traffic for a completely-blocked roadway.
brad
Fri, 2015-10-02 17:07
A virtual barrier?
It's an interesting question whether that's true. There are a lot of rules on the road of the form, "We prohibit this, because we can't trust people to do it without reducing safety or blocking others." A great example is lights with a protected left arrow and no full green allowing "turn when clear." Most lights in the world have just the green, and you turn left when clear, but for some reason once it is decided to give a protected turn, suddenly many lights put up a red arrow, and you can't turn even on a completely empty road. In fact, all traffic lights without sensors will put you in situations where you are stopped at the red light, the road is completely clear, but you can't go.
A robocar might well be trusted to be reliable in that situation, so rationally the rules might be relaxed. Ditto for double yellows. Police will not ticket you for crossing the double yellow (unless they have another reason) if it turns out it was necessary to cross it.
A robocar would probably wait a while before doing this, and probably would not cross it without input from a human if there are several cars in front of it.
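For what it's worth, the behavior I describe could be sketched as a simple decision rule (all thresholds here are invented for illustration, not anybody's real logic):

```python
def should_cross_double_yellow(seconds_stopped, cars_queued_ahead,
                               oncoming_lane_clear):
    WAIT_BEFORE_CROSSING = 30.0  # seconds; invented threshold
    MAX_QUEUE_FOR_AUTONOMY = 1   # several cars ahead suggests traffic, not a blockage

    if seconds_stopped < WAIT_BEFORE_CROSSING:
        return "wait"        # the obstruction may clear on its own
    if cars_queued_ahead > MAX_QUEUE_FOR_AUTONOMY:
        return "ask_human"   # defer to a person rather than guess
    if not oncoming_lane_clear:
        return "wait"
    return "cross"
```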
Steve
Fri, 2015-07-31 00:58
Thanks for updates
Thanks for the update, Brad. You give a more honest perspective than many of the stories that filter out through the media.
Lately I have been having some online discussions with people involved in the long-term planning of city roads and infrastructure. Some of these people are highly qualified yet are totally switched off to any future introduction of automated vehicles. On the question of alleviating future congestion, the response to my mention of driverless cars was "Why should we even consider automatic vehicles a solution?" (from a professor of sustainability, no less). Is it just me and the (at times backward) country I live in (Australia), or is this dismissive view fairly widespread? There seems to be a large body of people who see public transport as an environmental solution and are instinctively opposed to anything that could favour any car usage.
DensityDuck
Thu, 2015-10-01 16:43
Autonomous vehicles enable sprawl
Self-driving cars allow you to do other things during a commute, making longer commutes more bearable. Instead of having to drive for two hours, you basically sit in a closet for two hours; you can nap, read, talk on the phone, use a computer, etc. Which means that autonomous vehicles are a direct threat to the urban planners' density-filled dreams.