Uber robocar hits and kills pedestrian in Arizona

Update: Did the woman cross 3.5 lanes of road before being hit?

It's just been reported that one of Uber's test self-driving cars struck a woman in Tempe, Arizona during the night. She died in the hospital. There are not a lot of facts at present, so any of these things might be contradicted later.

Police say 49-year-old Elaine Herzberg was crossing the road (not using the crosswalk). She was walking her bicycle and came out from the median, which is signed to tell pedestrians not to cross there and to go to the main crosswalk at the light. According to the police reports, she was in the dark and could not be seen by the safety driver until the car hit her. (The LIDAR should have seen her somewhat before that, but the car did not react, which is odd.)

What should happen very soon is that Uber will know just what happened. The vehicle was in autonomous mode. They'll have a full 3-D recreation of the incident and are almost surely working to understand why their vehicle did not stop in time. There was a safety driver in the vehicle, whose job is to use human senses and judgment to intervene and hit the brakes if the car does not properly react to a cyclist, pedestrian or any other risk on the road. The safety driver says she did not see the victim.

Below is the location of the crash on Street View.

It is unclear why the car's sensors would not have detected her, even coming from the dark. Because this is a zone prohibited to pedestrians (in spite of the paved path!), Uber's car would not be expecting one to come from there, but it should have detected her the moment she set foot on the road. According to the safety driver, the impact came almost immediately after that.

There is the potential that because a pedestrian walking a bike is an unusual (though certainly not unknown) user of the road, their system may not be as well tested as it could be on that kind of LIDAR profile.

We also might learn -- speculation again -- that the car could have swerved around her, but did not. While several companies have shown demos of cars that swerve into another lane, this is not a well solved problem. The road in question does have more than two lanes. Swerving is problematic because it can make things worse unless you are sure it's safe to do.

Other notes:

  • Uber has shut down testing until it addresses the problem
  • The impact was hard enough to damage the fender and grille of the SUV. The car was traveling at 40 mph, and the limit is 35 mph.
  • There are some trees on the median which would allow the pedestrian to appear "out of the blue."
  • There are some unusually placed footpaths in the area, including a big "X" in the median between the two directions. These might create an "unmarked crosswalk" under the law, though not one you would expect people to use.
  • The LIDAR on top of the Uber sees just as well at night (in fact very slightly better), so the darkness would not be an issue for the vehicle. From appearances, the area is also brightly lit by streetlights, which helps the cameras in the car. Radar would see her bicycle but probably report it as a stationary object, since it is not moving towards or away from the Uber (see the sketch after this list).
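
On that last radar point, the issue is radial (Doppler) velocity: a target crossing perpendicular to the car's path has almost no range rate, so a simple tracker can lump it in with roadside clutter. Below is a minimal sketch of that effect; the threshold and function names are hypothetical, not anything from Uber's actual stack.

```python
# Hypothetical sketch: why a crossing pedestrian can look "stationary" to radar.
# Automotive radar measures radial (range-rate) velocity directly; a target moving
# perpendicular to the beam shows ~0 m/s even while walking at a normal pace.

STATIONARY_THRESHOLD_MPS = 0.5  # assumed clutter threshold, not Uber's actual value

def classify_radar_return(radial_velocity_mps: float) -> str:
    """Toy classifier: anything below the threshold is lumped with roadside clutter."""
    if abs(radial_velocity_mps) < STATIONARY_THRESHOLD_MPS:
        return "stationary"  # signs, parked cars, and a pedestrian crossing laterally
    return "moving"

# A pedestrian walking a bike straight across the road moves ~1.4 m/s laterally,
# but nearly 0 m/s toward or away from the radar:
print(classify_radar_return(0.05))   # -> "stationary"
print(classify_radar_return(-12.0))  # -> "moving" (e.g. oncoming traffic)
```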

According to police and friends, the victim was a homeless person. (However, her Facebook profile does not suggest this.) This opens up a rather sad irony. Since the car had the right of way over a person crossing at a sign forbidding pedestrian crossing, Uber will probably not be liable under the traffic code. Uber could be liable in a wrongful death suit, but as you might guess, that's a lot less likely for a homeless person. Some have no known family, or are estranged from their families, making it less likely that a family member would sue, or could claim significant damages. As a result, the irony is that in this first death, Uber might have no financial or vehicle code liability at all.
Market and public opinion consequences are another matter. However, indications are she was in contact with family, but was recently incarcerated, so we'll see what that means.

Sensor and simulation speculation

Our best information is that she was walking the bicycle across the road, not at a crosswalk, at night. My first impression is that this is a very odd thing to be doing, but it does raise some questions.

First, while not entirely unknown, a person walking a bike across the road outside a crosswalk is unusual compared to what you normally see. As such, Uber's perception system may not be as capable of identifying and modeling that profile. It is something which may not have been tested as much in simulator.

The oddest thing to me is that the impact seems to have taken place on the right side of the grille and in the right lane, even though police say the victim came from the west (left). Debris is visible in the main right lane (not the right-turn lane which is just opening up there). According to reports, the car did not attempt to brake. This means the woman somehow crossed 3 lanes (two left-turn lanes and the main left lane) plus the left front of the vehicle before being hit. That does not make sense, in that the sensors should definitely have detected her well in advance of impact.
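
A back-of-envelope check shows why this is so puzzling. All the numbers below are assumptions (typical lane width, an ordinary walking pace, a common long-range LIDAR figure), not measurements from the scene:

```python
# Rough arithmetic on the "crossed 3.5 lanes" puzzle. All inputs are assumptions.
lane_width_m  = 3.5          # assumed arterial lane width
lanes_crossed = 3.5          # two left-turn lanes + left lane + half the right lane
walking_speed = 1.4          # m/s, ordinary walking pace while pushing a bike
car_speed     = 40 * 0.447   # 40 mph in m/s, about 17.9 m/s
lidar_range_m = 100          # conservative range for a roof-mounted long-range LIDAR

crossing_time_s  = lane_width_m * lanes_crossed / walking_speed  # ~8.8 s in the roadway
car_travel_m     = car_speed * crossing_time_s                   # ~156 m covered in that time
braking_needed_m = car_speed**2 / (2 * 7.0)                      # ~23 m at ~0.7 g braking

print(f"Time exposed in the roadway: {crossing_time_s:.1f} s")
print(f"Distance the car covered in that time: {car_travel_m:.0f} m")
print(f"Stopping distance at 40 mph: {braking_needed_m:.0f} m (LIDAR range ~{lidar_range_m} m)")
```

Under these assumptions she would have been in the roadway for close to nine seconds, during which the car approached from roughly 150 m away -- far more room than needed to brake, or even just to slow gently.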

Note that the most advanced teams do very extensive testing in simulator of all the variations they can think of for situations like this. So this should be something the systems have seen before, at least virtually.
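
To illustrate what "testing all the variations" means in practice, here is a purely hypothetical sketch of the kind of parameter sweep a simulation team might run; none of the names or values come from any real simulator or from Uber.

```python
# Hypothetical scenario sweep: a pedestrian pushing a bike across the road, varied
# over speed, crossing angle, lighting, and where they enter relative to the car.
import itertools

walking_speeds  = [0.8, 1.4, 2.0]       # m/s
crossing_angles = [60, 90, 120]         # degrees relative to the car's path
lighting        = ["day", "dusk", "night_streetlit", "night_dark"]
entry_offsets_m = [20, 40, 60, 80]      # distance ahead of the car when entering the road

scenarios = [
    {"speed": s, "angle": a, "light": l, "entry_offset_m": d}
    for s, a, l, d in itertools.product(walking_speeds, crossing_angles, lighting, entry_offsets_m)
]

print(f"{len(scenarios)} variations of one base scenario")  # 3 * 3 * 4 * 4 = 144
# Each variation would be run through the full perception and planning stack in
# simulation, and any run where the car fails to brake in time gets flagged.
```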

Update

New reports suggest the woman came from the median in the shadows and was not visible to the safety driver. The police will probably assign no fault to Uber. It is also suggested the woman may indeed have been homeless. A few items in this article have been updated in light of these new facts.

Comments

This report says the woman, who is named, was walking a bicycle.
http://abcnews.go.com/US/driving-uber-car-kills-arizona-bicyclist-police/story?id=53853861

I don't know AZ law, but in California crossing outside the crosswalk is legal much of the time, so be careful calling it jaywalking.
http://www.christian-attorney.net/jaywalking_california.html

https://azbikelaw.org/jaywalking-in-arizona/

It says you either can't cross at all, or must yield to cars. It looks like in the Tempe CBD (not sure if this is that, it's quite close to the CBD) you can't cross at all. However, there are some footpaths very close to the area which might create what is called an "unmarked crosswalk" where she was.

Brad, I think speculating on the person being homeless and possibly jaywalking is pretty disgusting. I am disappointed in you.

Jaywalking is what the police are reported to have said. Homelessness is, as I said, purely hypothetical but possible, and it bears on how Uber would fare in a lawsuit, so it is topical. I am certainly not asserting that she was homeless. Rather, I am pointing out the sad irony that in the case of a homeless jaywalker, Uber might suffer zero liability.

Article needs a little editing on the crosswalking terms. Not to be too pedantic here but just want this all to be clear. An unmarked crosswalk is still a crosswalk; you mention at one point in the article that it's not clear whether they were at an unmarked crosswalk, but then also state they were not using a crosswalk. If they were using an unmarked crosswalk, they were indeed using a crosswalk.

However, all that aside, the X in the median has a sign specifically stating no pedestrians and to use the marked crosswalk at the next intersection. IOW, it's completely safe to say that the person was jaywalking and was not using a (marked or unmarked) crosswalk. The only two spots you can safely (and legally) cross the road there are right after the lake where there's a road underpass that you can walk via, and the light up ahead next to Marquee Theatre. The entire space between that has no safe or legal crosswalks.

To get more hairy though, right at the place your map is showing in street view and where the cops and bicycle were, there are some interrupting bricks on the sidewalk. These directly mirror those across the street at a bus stop. Even more, there are distinct and direct cuts across the median with no vegetation, making it look like a seamless area able to be crossed. While there is no particular indication that this is a safe crosswalk, it was also probably designed with the idea that pedestrians would cross in mind. From an aerial view it looks completely obvious this is the case, but of course it's impossible to know for sure if it was designed for that. Hard to know the impact of that design decision too, especially this far down the line. All in all though, I don't think this particular fact matters for an SDC; if it's perceivable by humans as a plausible path across the road, we of course want SDCs to perceive that same possibility to be able to assign proper probabilities of someone crossing there.

I would have thought there would have been very little traffic at that time of morning, giving the Uber vehicle plenty of opportunity to give a wide berth - perhaps driving into the oncoming lane, or slowing right down. This is what you would expect any driverless system to do if faced with uncertainty about an object ahead.

There are at least a couple of possibilities why this did not happen: one is sensor/detection confusion similar to the Tesla accident. The other is perhaps more aggressive driving algorithms. I was surprised at how quickly the Uber driverless vehicle involved in a previous accident entered an intersection with blind spots from other vehicles, with the outcome being the Uber vehicle lying on its side (fortunately no one was injured, and the Uber did have right of way).

Your guess of 4 AM is incorrect. The ABC15 story says it was closer to 10 PM.

Additionally, the light rail in that area is still running around then. Perhaps she could have just been trying to reach it? I've ridden the light rail countless times through that area, and there are generally many people dangerously and illegally crossing there as well to get to it.

Crosswalk, jaywalk, right, wrong, AM, PM -- that's all extraneous BS. There's only one fact
of any relevance: cars shouldn't hit pedestrians. Full stop.

Self-driving vehicles are not ready for public roads and do not belong there. You want to
make a self-driving car? Then build your own test tracks/towns, populate them with (live)
test subjects -- all on your own dime. Then prove your car is safer. But not while using
the public roadways for your own selfish ends.

How many more people will have to die (or be injured) in order to satisfy the tech fantasy
of a few "bros" who -- boo hoo -- don't like to drive in traffic?

How many more people will have to die (or be injured) in order to satisfy the desire to drive of people who cannot drive?
From the point of view of safety, a large share of the people currently driving should be banned from holding a license. Cars should be like a war tank, surrounded by foam mattresses, with a system that prevents speeds over 30 km/h in towns and over 80 on highways. A GPS/wristwatch system would stop the car after the same driver has been driving for 2 hours, and prevent him from driving any car for the following 20 minutes. People would be allowed to drive on public streets only after 5 years of daily training.

A person found driving under the effects of drugs (even medicine) or alcohol, or caught smoking, eating, talking with passengers, or with a mobile phone in the cabin, must be banned from driving forever. A driver who makes any mistake, even if nothing happened, must be re-trained for one year or even banned forever. Other rules and tests would be necessary, but let's stop here.

I am not sure that, even with these controls, more than maybe 10% of the people currently driving would keep driving, but for sure they would still cause more casualties than 2,000,000,000 robocars running worldwide.
Besides that, robocars will allow any person from 2 to 120 years old, independent of any disability, to use a robocar. We are just trying to make robocars as safe as they can be. Actually, any robocar driving in the street is safer than any human-driven car, taking into account the possibilities of making a mistake.

I live near the Google robotics "hive". Nearly every day I intersect with and/or am passed by one of their robots. Never had a close call, but texting drivers, rushing, daydreaming drivers -- they've hit me (3 times). The total number of dead people will be way less with autonomous cars; we need them sooner, not later. Airplanes, new ships and rockets have killed scores of people, in cabin and out, as they were developed. Rather, we need government to do just two things:
1) Order the autonomous driving community to adopt an open simulation standard such as http://vladlen.info/publications/carla-open-urban-driving-simulator/ This should be populated with realistic avatars acting in all known human ways plus other randomness baked in, in all weather and lighting (plus randomness), and in all categories: urban, suburban, country, mountain, rolling hills, freeway, etc., in all localities to be driven. It would be populated by a trust formed from all entrants and others. And then, if you want to drive an environment, you must pass 1M hours of drive time (it can be accelerated and parallelized) before you are allowed out in that environment.
2) Mandate that the industry adopt a standard data representation for driving (a sensor-fused, tilted graphics map for example) so that cars can share hazards and incidents, construction sites can show how to get through them, and traffic lights can broadcast where other vehicles and pedestrians are before the car gets there. (A rough sketch of what such a shared record might look like follows.)
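
On that second point, here is a minimal hypothetical sketch of what one record in such a shared hazard format might contain. The fields, names and values are invented for illustration; no such standard exists today.

```python
# Hypothetical shared hazard record, as proposed above. All fields are invented.
from dataclasses import dataclass

@dataclass
class HazardReport:
    lat: float          # WGS84 position of the hazard
    lon: float
    kind: str           # e.g. "construction", "stalled_vehicle", "pedestrian_crossing"
    heading_deg: float  # direction of travel of the hazard, if it is moving
    speed_mps: float    # 0.0 for static hazards
    reported_at: float  # unix timestamp
    source: str         # which vehicle or piece of infrastructure reported it

# A traffic light or another car could broadcast records like this one
# (the coordinates and source name are arbitrary examples):
report = HazardReport(lat=33.4366, lon=-111.9430, kind="pedestrian_crossing",
                      heading_deg=90.0, speed_mps=1.4, reported_at=1521400000.0,
                      source="intersection_unit_42")
print(report)
```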

But, let's get on with it!

I have called for that for a while. I would not have the government mandate one particular one though! But it could mandate that they have something meeting certain minimums.

How about gamifying it? Let people write software agents to interact with the simulator and offer a bounty for anyone who can cause the car to make a mistake. Or it could be interactive.

http://robocars.com/simulator.html is my now ancient plan which includes things like this.

The "big "X" in the median between the two directions." looks like it is generated by Google

Putting things in perspective, as Brad suggested: even modest improvements in robocars' driving would indeed be very beneficial out in the streets.
News outlets read: 1 person killed by an Uber self-driving car. What they should also add is: 37,461 people killed by human drivers in 2016.

Brad, you wrote, " If this was a person walking with a bicycle, they would be classed as a pedestrian (and jaywalker)".

Living in a hilly area in California, I often see bicyclists walking their bike up a steep hill. They are not jaywalking. On a hilly road with no sidewalk, a bicyclist who has to walk for a stretch of steep uphill has no other option than to walk at the side of the road. I actually drove past someone doing just that this afternoon... a youngster probably riding home from school.

I realize that my comment doesn't apply to the specifics of the fatal collision in Tempe (flat road, someone trying to cross the road, etc). I'm just reacting to your quote.

Another situation is a bicyclist with a flat tire. Not a common occurrence but it happens. I assume it's legal for the bicyclist to walk their bicycle along the side of the road.

The video and LIDAR data of course will reveal if the bicyclist was crossing the traffic lane or merely proceeding in the direction of traffic whilst staying on the rightmost part of the roadway.

A person walking on a road with no sidewalk is a proper pedestrian. Here, however, we have the woman crossing the road from the median, having gone through the "Pedestrians use main crosswalk" sign and thus very much a jaywalker because of that.

If you zoom in on the side of the road where there is the red dirt and the blue sign, you will see the track of a bike's tires once you get off the brick, perpendicular to the road, heading away toward (or coming from) a path worn in the grass. Is it possible to guess that the biker was speeding toward the road crosswise and rode right across the road without checking to see if a car was coming? Either: got going too fast and forgot to look, or didn't see the car before trying to cross traffic; brakes on the bike failed; the biker meant to turn right into the bike lane but went across the road instead, unable to steer properly because of bags on the handlebars; unable to brake properly due to bags on the handlebars; intended to bike quickly into the bike lane without noticing that the car was changing lanes to turn right; was unable to steer because of riding too quickly over the curb; misjudged the speed of the car; forgot which way to look on a one-way street, etc.

Why do we instantly assume the car made a mistake? Perhaps a collision and cyclist death was literally unavoidable and would have been unavoidable no matter who was driving, if the bike literally drove right into the car from the side of the road in the dark? I've seen bikers and I know bikers and there really is no reason to assume the bike was going slowly across the road and several reasons to assume it was going quickly across the road, perhaps even without a thorough look, and every reason to assume it was going across the road where nothing is supposed to be crossing the road. May have zero to do with this being a self driving car.

Maybe the woman was crossing at some speed and the robocar thought it had time to pass or that she would stop, but at the last moment the woman decided to speed up to cross, guessing she had enough time or that the car would brake a bit, and the robocar didn't get it. A typical accident pattern.

I would be amazed if the car is not programmed to immediately brake, and brake hard, if it sees a pedestrian attempting to cross the road in a location like this.

If pedestrians crossing the street in the "wrong" place are very frequent, that may be the case. Braking continuously could be a bad solution. And guessing pedestrian intentions could be tricky for an AI. If the pedestrian looks at you, you can infer they will stop, but if they don't, it's better to stop the car. Can Uber's cameras see the pedestrian's face and act accordingly?

Yes, on slower streets, there is a dance between jaywalking pedestrians and cars. We've all done it. We want the car to go by us fast so we can go behind it. The car wants to stop to be sure it doesn't hit you. You make eye contact, you do gestures. But not on a 45 mph road. You don't do the dance there. If you are a pedestrian, you cross only when it's so clear you won't meet the car at all. And the car, if it sees you, will definitely slow.
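
As a toy illustration of that speed-dependent behavior, here is a tiny sketch; the thresholds and names are invented, not anything Uber or anyone else actually uses.

```python
# Toy sketch of the speed-dependent "dance" described above. Thresholds are invented.
def respond_to_crossing_pedestrian(road_speed_mph: float, eye_contact: bool) -> str:
    if road_speed_mph >= 45:
        # On a fast arterial there is no negotiation: brake as soon as a
        # pedestrian is detected entering the roadway.
        return "brake_hard"
    if eye_contact:
        # On a slow street, mutual awareness lets the car ease off and proceed.
        return "slow_and_proceed"
    return "stop_and_yield"

print(respond_to_crossing_pedestrian(45, eye_contact=False))  # -> brake_hard
print(respond_to_crossing_pedestrian(25, eye_contact=True))   # -> slow_and_proceed
```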

Now that there's been a deadly car crash involving a self-driving car, we can calculate a fatality rate for self-driving cars.

I don't know of any published statistics on the total number of miles logged by fully autonomous vehicles in the U.S., but based on published statements by some of the companies involved, it seems that a good guess is somewhere between 10M and 25M total miles across the entire industry. That would make the fatality rate of autonomous vehicles somewhere in the range of 4-10 fatalities per hundred million vehicle miles. Of course there are enormous error bars on this calculation, given the single fatality so far.
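
The arithmetic behind that range, for anyone who wants to check it (recall that the 10M-25M mileage figure is a guess, not a published statistic):

```python
# Quick check of the estimate above. The mileage range is a guess, not data.
fatalities = 1
low_miles, high_miles = 10e6, 25e6

rate_high = fatalities / low_miles  * 100e6  # 10 per 100M vehicle miles
rate_low  = fatalities / high_miles * 100e6  # 4 per 100M vehicle miles
human_rate = 1.15                            # roughly 1.1-1.2 per 100M miles in recent years

print(f"Implied robocar rate: {rate_low:.0f}-{rate_high:.0f} per 100M vehicle miles")
print(f"Human-driven rate: ~{human_rate} per 100M vehicle miles")
```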

To put this in context, over the past decade the overall fatality rate in the U.S. has generally ranged between 1.1 and 1.2 fatalities per hundred million vehicle miles.

It's possible that my guess about the total miles driven by self-driving cars is much too low, but I can't find any better data. If my guess is in the right ballpark, though, it may be time to back off claims that autonomous vehicles are at present safer than human-driven vehicles. They have the potential to get there, but right now the data seems to show that there's a long way to go.

No, the whole industry is probably at no more than about 6M miles (almost all of them Waymo), I would guess. However, there is not really an "industry rate." The cars are at very different levels, using different technologies. Some day perhaps there will be an industry rate. There are only rates for individual cars, and really for individual software releases of individual cars. As I said, this Uber software revision will probably never run again. This is one of the things that makes it very hard to calculate the rate.

If you want to argue that autonomous vehicles in general are safer than human-driven cars, you can't pick and choose which fatalities you're going to count--otherwise you're making a No True Scotsman argument. Yes, there's a lot of different vehicles and platforms. But there's also a lot of different kinds of human-driven vehicles and yet we include them all when calculating our highway fatality rate--including drunk drivers, professional long-haul truckers, school bus drivers, etc.

I would believe you that we should look at the different platforms separately if we were at the point where Waymo had logged, say, 150M miles with no fatalities, while some other platform had multiple deaths in a similar amount of driving. But we're not there yet, and the fact remains that if autonomous vehicles really are safer than cars driven by people then it's quite improbable that there would have been the first robocar death after only 6M miles.

Put a slightly different way, autonomous vehicles have not yet driven nearly enough miles to prove the common claim that they are (or will be) safer. The industry probably needs to be over a quarter billion miles in order to actually prove this. But the fact that we have a fatality already is very troubling.
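
One way to see where that quarter-billion figure comes from is the standard "rule of three": with zero fatalities observed over N miles, the approximate 95% upper confidence bound on the fatality rate is 3/N. A quick sketch, using the human benchmark of about 1.2 fatalities per 100M miles:

```python
# "Rule of three": zero fatalities in N miles gives a ~95% upper confidence bound
# of about 3/N on the fatality rate. Miles needed to show parity with humans:
human_rate_per_mile = 1.2 / 100e6
miles_needed = 3 / human_rate_per_mile
print(f"{miles_needed / 1e6:.0f} million fatality-free miles")  # -> 250 million
```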

This is not a No-true-Scotsman fallacy. It is perfectly possible for Waymo's cars to be at a sufficient safety level, while Tesla or comma.ai or Uber are not. In fact, I think that is arguably the case today, not just something possible.

No human is safe, of course, and drunk humans are so dangerous that we make that illegal. But there are things that all humans have in common which are not true for machines.

The idea of the robocar is not inherently safe, and hopefully nobody wise ever said it was. It offers the potential for great safety, but only after a lot of work. The only way to do that work is to drive out there on the roads when your vehicle is not safe enough, and to have humans oversee it in order to correct its mistakes. That is how everybody got to where they are now. That is also how student drivers get to the basic level, with a driving instructor who has an emergency brake pedal. Even a human is not judged to be safe enough until they get training and pass a test. A pretty meager test of course.

It is correct that one can't prove the claim they are safer at this point. Again, nobody who knows the field is making that claim. The only proper claim is that evidence suggests they can be safer, after extensive development. That evidence is actually only a little bit about them, and more about all the mistakes that humans make that machines will not. And the evidence that machines in other environments can be highly reliable. Proof that machines actually are that safe on the roads is what people are trying to build now. Nobody has figured out a way to build that without taking some risks on the road. Just as you can't train teens without taking some risks on the road. Test tracks are not even remotely near enough, nor are simulators, at least for now.

The Uber death will raise questions (possibly) about whether the human oversight approach with the safety driver is enough. Or whether there is a better one. Uber was running with just one safety driver. Google always used two in its early days, though now they have reached zero.

Or it may turn out to match what the police chief said after viewing the video, that the pedestrian came out of nowhere, and nobody could have stopped. If that's true all we have from this is a story of great misfortune. I don't have the figures, but I presume they are available on how many pedestrian injuries are blamed on the pedestrian. It does happen, but infrequently. Did the victim and Uber just have bad luck at the wrong time? We'll learn that over time.
