It certainly looks bad for Uber

Major Update: The release of the full NTSB report includes several damning new findings

Update: Analysis of why most of what went wrong is terrible but also to be expected.

The Tempe police released the poor-quality video from the Uber: what looks like a dash-cam video, along with a video of the safety driver. Both videos show things that suggest serious problems at Uber, absent further explanation.

You can watch the video here if you have not seen it. It's disturbing, though the actual impact is removed. It will make you angry. It made me angry.

Above I have included a brightened frame from 3 seconds into the video. It is the first frame in which the white running shoes of the victim are visible in the dashcam video. They only appear then because she is previously in darkness, crossing at a poorly lit spot, and the headlamps finally illuminate her. Impact occurs at about 4.4 seconds (if the time on the video is right).

She is crossing, we now see, at exactly the spot where two storm drains are set into the curb. It is opposite the paved path in the median, which is marked with signs telling pedestrians not to cross at this location. She is walking at a moderate pace.

The road is empty of other cars. Here are the big issues:

  1. On this empty road, the LIDAR is very capable of detecting her. If it was operating, there is no way that it did not detect her 3 to 4 seconds before the impact, if not earlier. She would have come into range just over 5 seconds before impact.
  2. On the dash-cam style video, we only see her 1.5 seconds before impact. However, the human eye and quality cameras have a much better dynamic range than this video, and should also have been able to see her even before 5 seconds. From just the dash-cam video, no human could brake in time with just 1.5 seconds of warning. The best humans react in just under a second; many take 1.5 to 2.5 seconds.
  3. The human safety driver did not see her because she was not looking at the road. She seems to spend most of the time before the accident looking down to her right, in a style that suggests looking at a phone.
  4. While a basic radar which filters out objects that are not moving towards the car would not necessarily see her, a more advanced radar should also have detected her and her bicycle (though triggered no braking) as soon as she entered the lane to the left, probably at least 4 seconds before impact. Braking could then have been triggered 2 seconds before impact, in theory enough time.

To be clear, while the car had the right-of-way and the victim was clearly unwise to cross there, especially without checking regularly in the direction of traffic, this is a situation where any properly operating robocar following "good practices," let alone "best practices," should have avoided the accident regardless of pedestrian error. That would not be true if the pedestrian were crossing the other way, moving immediately into the right lane from the right sidewalk. In that case no technique could have avoided the event.

LIDAR

This is not a complex situation. This is the sort of thing that the LIDAR sees, and it sees very well at night. The Uber car, at 40mph, is getting into the upper range of speeds at which it is safe to drive non-freeway roads with just the LIDAR -- what I called the valley of danger four years ago -- and Uber knows it. 40mph is about as fast as you should go, but you can do it. (Even so, some cars like to go a bit slower approaching legal crosswalks, marked or not.) Using the LIDAR, their perception system should have had a pretty good impression of her by 50m (2.7 seconds) and applied the brakes hard. The stopping distance is 25m or less with hard braking. (A more typical strategy would be to slow, get a better appraisal, and then continue braking so as to stop 2-3m before her, to avoid jarring any passengers.)
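
For those who want to check these numbers, here is a minimal back-of-the-envelope sketch (Python; the 0.8g deceleration and half-second perception delay are my assumed figures, not Uber's):

```python
# Back-of-the-envelope: detection range vs. stopping distance at 40 mph.
# Assumed values: ~0.8 g hard braking, ~0.5 s of perception/decision delay.

MPH_TO_MS = 0.44704
G = 9.81

def stopping_distance_m(speed_mph, decel_g=0.8, reaction_s=0.5):
    """Metres travelled from first detection to a full stop."""
    v = speed_mph * MPH_TO_MS               # speed in m/s
    reaction = v * reaction_s               # distance covered before brakes bite
    braking = v ** 2 / (2 * decel_g * G)    # classic v^2 / 2a
    return reaction + braking

v = 40 * MPH_TO_MS                          # ~17.9 m/s
print(f"40 mph = {v:.1f} m/s")
print(f"Stop from 40 mph: ~{stopping_distance_m(40):.0f} m")  # ~29 m
print(f"Detection at 50 m gives {50 / v:.1f} s of warning")   # ~2.8 s
```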

Uber needs to say why this did not happen. I have seen one report -- just a rumour from somebody who spoke to an unnamed insider -- that the LIDAR was off in order to test operations using just camera and radar. While that might partly explain what happened, it is hard to excuse. Even if you want to do such tests -- many teams are trying to build vehicles with no LIDAR -- the LIDAR should remain on as a backup, triggering braking in exactly this sort of situation when the other systems have failed for some reason, or at least triggering a warning to the safety driver. It would be highly unwise to just turn it off.

In fact, I have to say that this sort of impact would have been handled by the fairly primitive ADAS "Forward Collision Warning" systems found on a large number of cars. Not the most basic radar-only ones that don't detect horizontally moving objects, but any of the slightly more sophisticated ones on the market. The unit that comes standard in the Volvo XC90 promises to reduce speed by 50km/h if a bicycle crosses your path. The built-in systems that come with these cars are typically disabled in robocar operation.

You may wonder, if there is LIDAR data, why have the police not examined it? While it is possible they have, they may not be equipped to. Police can readily examine videos. Understanding point clouds is difficult without Uber's own suite of tools, which they may not have yet offered to police, though they will need to if legal proceedings take place. Remember that because the victim crossed directly at a "don't cross" location, the car had the right of way, which is all the police are usually concerned with. Police may be concerned over Arizona law requirements to brake even for jaywalkers. However, the police may only consider a human's abilities here, not the superhuman vision of LIDAR or radar. From the dashcam video, it seems that there was very little time to react, and no fault for a human driver hitting somebody who "came out of nowhere." The police may not have a good way to evaluate the vastly superior dynamic range of human vision compared to the camera.

Waymo's cars, and a few others, use long-range LIDARs able to see 200m or more. Such a LIDAR would have detected the victim as soon as she became visible on the median, though most systems do not react to pedestrians off the road. As soon as she set foot on the road, that would be a flag to such a system. One lesson from this accident might well be to map "illegal, but likely" crossings and exercise extra caution around them. In many countries, people even cross freeways routinely, though this is very dangerous because nobody can react in time at such speeds.

There is a dark irony that this longer-range LIDAR is what the Waymo vs. Uber lawsuit was about, though I doubt that Uber would have had its own long-range LIDAR in production by now even if they had not been troubled by that lawsuit.

It should be noted that because the victim is wearing a black shirt, some of the numbers on the LIDAR range may be reduced, but not by a great deal. One would need to know the reflectivity of the cloth. If it was less than 10% (very black) it's an issue, though she was not fully covered: she has blue jeans, bright hair and a red bike.

It should also be noted that while driving without a LIDAR's help if you have one is unwise, many teams, most famously Tesla, are developing cars with no LIDAR at all. It is not a problem to drive a prototype which is not ready for deployment yet; that is what everybody is doing, and it's why they have safety drivers and other safety backups. However, those drivers must be diligent and the cars operated only where they are qualified to operate.

Cameras and HDR

The Uber reportedly has a large array of cameras. Good. That usually means the system has been designed to do "high dynamic range" (HDR) vision for driving at night. That's because what we see here is common -- uneven lighting due to the headlamps and streetlamps. This means either 2 or more cameras with different exposure levels, or one camera constantly switching exposure level to capture both lit and unlit objects.
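
To illustrate the idea (a toy sketch only, not Uber's pipeline; the saturation thresholds are arbitrary assumptions), fusing a short and a long exposure might look like this:

```python
import numpy as np

def fuse_exposures(short_exp, long_exp):
    """Naive two-exposure HDR fusion.

    short_exp, long_exp: float arrays in [0, 1] of the same scene shot
    at different exposure levels. Bright, headlamp-lit pixels that blow
    out in the long exposure are taken from the short one; dark pixels
    come from the long one.
    """
    # Weight each pixel by how far it is from over/under exposure.
    w_long = 1.0 - np.clip((long_exp - 0.8) / 0.2, 0.0, 1.0)    # distrust blown highlights
    w_short = 1.0 - np.clip((0.2 - short_exp) / 0.2, 0.0, 1.0)  # distrust crushed shadows
    total = w_long + w_short + 1e-6
    return (w_long * long_exp + w_short * short_exp) / total

# A dark pedestrian region is invisible in the short exposure but
# recoverable from the long one; a lit region is the reverse.
short = np.array([[0.01, 0.95]])   # shadows crushed, highlights OK
long_ = np.array([[0.30, 1.00]])   # shadows visible, highlights blown
print(fuse_exposures(short, long_))  # ~[[0.29, 0.95]]
```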

A vision system based on HDR should also have easily seen her and triggered the stop.

Another option, not used on most cars today, is a thermal "night vision" camera. I have written about these a few times and I experimented with them while at Google back in 2011. Then (and even now) they are quite expensive, and must be mounted outside the glass and kept clean, so teams have not been eager to use them. Such a camera would have seen this pedestrian trivially, even if all the lights were off (headlights, streetlamps etc.) (LIDAR also works in complete darkness.) I have not heard of Uber using such night-vision cameras.

Note that the streetlamps are actually not that far from her crossing point, so I think she should have been reasonably illuminated even for non-HDR cameras or the human eye, but I would need to go to the site to make a full determination of that.

Once you have a properly exposed image from a camera, several vision techniques are used to spot obstacles within it. The simplest one is the use of "stereo," which requires 2 cameras, anywhere from 8 inches to 4 feet apart in most cars. That can identify the distance to objects, though it is much better when they are close. It would not detect a pedestrian 200 feet away but can, with a wide baseline and high resolution, see 150 feet.
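
A rough sketch of the geometry (the focal length, baseline and matching error below are assumed values) shows why stereo is much better up close:

```python
# Depth from stereo disparity: Z = f * B / d (f in pixels, B in metres).

def stereo_depth_m(disparity_px, focal_px=1400.0, baseline_m=0.5):
    """Distance to a point given its disparity between the two cameras."""
    return focal_px * baseline_m / disparity_px

def depth_error_m(range_m, focal_px=1400.0, baseline_m=0.5, match_err_px=0.5):
    """Approximate range error caused by a half-pixel matching error."""
    d = focal_px * baseline_m / range_m      # disparity at this range
    return stereo_depth_m(d - match_err_px) - range_m

for r in (10, 25, 50):   # metres; ~45 m is about the 150 feet quoted above
    d = 1400.0 * 0.5 / r
    print(f"{r:3d} m: disparity {d:5.1f} px, error ~{depth_error_m(r):.2f} m")
```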

The second method is detecting motion. There is the parallax motion of close objects against the background when they are not directly in front of you, and there is also motion against the background when the objects themselves are moving, as a pedestrian crossing the road is.

Finally, the area of most research is the use of computer vision, usually powered by new machine learning techniques, to recognize objects from their appearance, as humans do. That's not perfect yet but it's getting pretty good. It can, in theory, see quite far away if the camera has high resolution at the distance in question.

Radar

Radar could have helped here, but the most basic forms of radar would not help because a pedestrian slowly crossing the street returns a Doppler signature similar to a stationary object -- i.e. just like all the signs, poles, trees and other fixed objects. Because radar resolution is low, many radars just ignore all stationary (meaning not moving towards or away from the car) objects. More advanced radars with better resolution would see her, but their resolution is typically only enough to know what lane the stationary target is in. Radar-based cars generally don't respond to a stationary object in the next lane, because as a driver you also don't slow because a car is stopped in a different lane, when your lane is clear. Once she entered the Uber's lane, the radar should have reported a potential stationary object in the lane, which should have been a signal to brake. It's not quite as easy as I lay out here, unfortunately. Even these good radars have limited vertical resolution and so are often not enough on their own.
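
A sketch of why a basic Doppler filter drops her (the speeds and threshold are assumed, not from any production radar):

```python
import math

def radial_velocity(target_speed_ms, crossing_angle_deg):
    """Component of target speed along the radar's line of sight."""
    return target_speed_ms * math.cos(math.radians(crossing_angle_deg))

ped = 0.9  # ~2 mph walking pace, in m/s
print(radial_velocity(ped, 90))   # crossing perpendicular to travel: ~0 m/s

# A common basic filter: after subtracting the car's own motion from the
# raw Doppler, ignore anything whose radial speed is near zero -- which
# drops her along with the signs, poles and trees.
def keep_target(radial_v_ms, threshold_ms=0.5):
    return abs(radial_v_ms) > threshold_ms

print(keep_target(radial_velocity(ped, 90)))  # False: she is filtered out
```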

My guess is she is only squarely in the lane about 1.5 seconds before impact, which, with decision-making time, may not be enough. You need to start super-hard braking 1.4 seconds before impact at 40mph.

The safety driver

Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired. The real debate will be over Uber's policies on hiring, training and monitoring safety drivers, and the entire industry's policies.

Uber was operating this car with only one safety driver. Almost all other teams run with two. Typically the right-seat one is monitoring the software while the left-seat one monitors the road. However, the right-seat "software operator" (to use Google's term) is also a second pair of eyes on the road fairly frequently.

Human beings aren't perfect. They will glance away from the road, though it is not possible to justify the length of time this safety driver was not looking. We will be asking questions about how to manage safety drivers. It is possible to install "gaze tracking" systems which can beep if a driver looks away from the road for too long a time. We all do it, though, and get away without accidents almost all the time.

We may see applicants for this job tested for reaction times and ability to remain attentive through a long grinding day, or see them given more breaks during the day. If phone use was an issue, it may be necessary to lock up phones during operations.

It is very likely that the safety driver's mistakes will pass on to Uber through the legal principles of vicarious liability. There is even criminal vicarious liability in extreme cases.

The software

Passengers who have ridden in Uber's vehicles get to look at a display where the software shows its perception output, i.e. a view of the world where it identifies its environment and the things in it. They report that the display has generally operated as expected, detecting pedestrians, including along the very road in question, where they have seen the vehicle slow or brake for pedestrians entering the road outside crosswalks. Something about that failed.

It is also worth considering that the police report suggested that no braking took place at all. Even detecting and hard braking at one second might have reduced the impact speed enough to make it non-fatal. I have not done the math, but braking and swerving, even from 1 second out, might have been able to avoid the impact on the woman. (As I explain in earlier posts, most cars are reluctant to swerve, because you can't depend on doing that and it can make the situation worse.)
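
A rough version of that math (the 0.8g braking figure is my assumption, not from the report):

```python
# How much speed does one second of hard braking shed?
MPH_TO_MS = 0.44704
v0 = 40 * MPH_TO_MS                   # ~17.9 m/s
v_impact = v0 - 0.8 * 9.81 * 1.0      # after 1 s of ~0.8 g braking
print(f"Impact at ~{v_impact / MPH_TO_MS:.0f} mph instead of 40")  # ~22 mph
```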

The Arizona code states that at all times, drivers must, "Exercise due care to avoid colliding with any pedestrian on any roadway."

Important counterpoints

I've seen some calls to relegate prototype robocars to test tracks and simulation. This is where they all start, but to reach safety you can do only 0.1% of your testing there. There is truly no alternative yet known for developing and proving these cars other than operation on real roads with other road users, exposing them to some risk. The discussion is how much risk they can be exposed to and how it can be mitigated.

It's also important to realize that these cars are prototypes, and they are expected to fail in a variety of ways, and that is why they have safety drivers performing oversight.

This accident will make us ask just how much risk is allowed, and also examine how well the safety driver system works and how it can be improved. We are shocked that Uber was operating a car that did not detect a pedestrian in the middle of the road, and shocked that the safety driver failed at her job to take over when that happens. But we must understand that prototype vehicles are expected to fail in different ways. I don't think a vehicle should have failed in so simple a way as this, but most of these cars in testing still get software faults fairly frequently, and the safety drivers take over safely. The answer is not to demand perfection from the cars, or we can never prototype them. Sadly, we also can't demand perfection from human safety drivers. But we can demand better than this.

Whither Uber?

This will set Uber's efforts back considerably, and that may very well be the best thing, if it is the case that Uber has been reckless. It will also reduce public trust in other teams, even though they might be properly diligent. It may even sink Uber's efforts completely, but as I have written, Uber is the one company that can afford to fail at developing a car. Even if they give up now, they can still buy other people's cars, and maintain their brand as a provider of rides, which is the only brand they have now.

I suspect it may be a long time -- perhaps years -- before Uber can restart giving rides to the public in their self-driving cars. It may also slow down the plans of Waymo, Cruise and others to do that this year and next.

At this point, it does seem as though a wrongful death lawsuit might emerge from the family of the victim. The fame of such a case will cause pro bono representation to appear, and the deep pockets of Uber will certainly be attractive. I recommend Uber immediately offer a settlement the courts would consider generous.

And tell us more about what really happened. And, if it's as surmised, get their act together. The hard truth is that if Uber's vehicle is unable to detect a pedestrian like this in time to stop, Uber has no business testing at 40mph on a road like this. Certainly not with an inattentive solo safety driver.

This article is a follow-on to an initial analysis and a second one.

Comments

So if it turns out that the lidar was off, could one make the case that EM is at fault? :/

Re IR cameras: it wouldn't be too expensive to have them even with outside-the-glass installations. They would be no more expensive than that lidar (unless that has gotten quite cheap in the last couple of years).

Also, don't the visible-light cameras face the same sort of cleanliness issues?

Many teams put regular cameras at the rear-view mirror, where the wipers clear the windshield. Yes, thermal cameras are cheaper than LIDARs. They still have all the problems of cameras in terms of the reliability of computer vision, but they are better at night. But not perfect. Only now are they getting to be available at reasonable prices and resolutions.

Thermal cameras are expensive? They can be found on a variety of luxury cars as an option, from Mercedes to Lexus to Cadillac to BMW. Perhaps these are not sufficient for AV purposes though.

They were expensive when I researched them at the resolutions you would want. Today, new models are coming out under $200, but since they can't be mounted behind the windshield, there are other costs. I believe they will become an important sensor, particularly for night operations.

IR resolution comparable to visible cameras is still prohibitively expensive, but is that really needed? The cheap (320x256, 640x..?) IR cameras will easily out-resolve lidar.

FOV is problematic with IR, of course. Then you're talking about multi-camera setups, and that adds a bunch of complexity if not cost. It's been done, though.

What's the intended production location of visible cameras in these systems? I see that they are on the roof now. Is the not-behind-the-windscreen concern a bit of a red herring?

Behind the windshield is easiest and gets wiper blades. For an array of cameras for stereo and other directions, rooftop is best until you get to fully design the vehicle yourself. It's not a red herring, just a factor. As noted, the biggest barriers to thermal have been cost, resolution, the only-partial utility, and the general problem that computer vision is not good enough -- and having a better camera doesn't really solve that.

I am just pasting in this YouTube video here, in case anyone wants to see one of these things in operation. (Please, people opposed to IR, and people who love IR, don't jump on me, I have no position on this technology, just posting a video!) Note that it is from 2012. Seems to be of some use, yes?

https://www.youtube.com/watch?v=YSZ2BFumHYE

FYI, the driver of the vehicle was a he, not a she.

Driver is definitely a she

Driver was born a he and did prison time as a man, Rafael. Then decided to identify as a woman and changed name to Rafaela.

The driver of the car was an AI. I'm gonna go with "it". There was a passenger behind the steering wheel who was supposed to be a backup driver, but it's pretty clear they were not driving at the time of the accident.

I'm curious to learn more about your comments about LIDAR. It is well known that LIDAR does not work well in the dark. LIDAR works in the visible or near-visible (infrared) light spectrum, and it can't magically see things that don't reflect light. You seem fairly confident asserting the range of other LIDAR systems, but it doesn't seem like you understand the limitations of these systems in different environments.

LIDAR is self-illuminating. It's not dependent on other sources of light (or illumination per se), and at night its own laser is less disturbed by ambient light, so it can see much better.

It is not much better, it's just a little better. But it's certainly not worse.

LiDAR is an active device. The signal it senses is the signal it sends. It does not rely in any way on ambient lighting. If anything, the LiDAR works better in the dark, since ambient lighting is 'noise' and degrades the signal-to-noise ratio if not filtered out.

I have been an optical engineer for 30 years, designing LIDAR systems, and your "well-known" idea is completely uninformed. LIDAR doesn't care if it's dark or not; that has no effect on its performance. LIDAR is based on detecting an ultra-narrow band of light frequency reflected from the LASER beam (the "L" in LIDAR) that it emits - and not even sun glints (usually) put out enough light in that incredibly narrow band to fool a LIDAR system.

Seems like Brad's characterization is reasonable: there's still a ton of solar at 905nm, so the background is gonna be higher during the day, right?

I built a system at 1550nm. We use a 10nm bandpass, but solar was still a massive problem for us. :(

Network Rail and other administrations have been using LIDAR as a scanning system to detect obstructions on at-grade road-rail crossings. Substantial effort was required to get a robust and reliable system using static and precise coverage of a defined fixed location, especially for the variety of animals, people and objects which needed to be detected. The reliability of a single (?) beacon and beam mounted on a moving platform, scanning a continuously changing target zone, is not, I suspect, a detail which can be box-ticked as reliable to the required level -- and ultimately it will require a second beam and a parity check?

Uber can argue that the driver was distracted, fire the driver, and agree to use Lidar and always have 2 people in the car -- one driving, the other watching the Lidar/camera images on a laptop. Uber's lawyers can also blame the pedestrian for not crossing at a safe place. I don't think this necessarily sinks Uber's self-driving car efforts, given how many fatalities are caused by distracted drivers and distracted pedestrians and how lenient the penalties are for some.
Distracted Driving laws article

That is a horrible solution. Even if the driver had been looking, she was crossing outside a crosswalk and therefore the driver would have no reason to expect a pedestrian to walk straight out in front of him (which, btw, is exactly what happened, but nobody is blaming the person who LITERALLY just walked out in front of a car...). Coming out of the dark like that, she'd probably have been hit by just about anybody. And 2 operators? Seriously? I don't want my rides home to double in price, thank you very much... Let's just tell it like it was. If you wander around in the street in the dark, there's a good chance you're gonna get hit. Doesn't matter if the car is driving or being driven.

No one, neither a human nor a robot, is ENTITLED to run over another human just because they walk in the road in the dark. What is wrong with your ethics? But you see it all the time IRL -- "oh, a cyclist is slowing me; the only reasonable punishment is death".

I note that, nearly 40 years ago, I took my first written driver's test in Arizona and one (multiple choice) question was:

"What should you do if a person crosses the road in front of you, not in a cross walk?" One wrong choice (honest to God!) was "Hit them and teach them a lesson."

1) Uber monitor should have been paying attention
2) Pedestrian should have been paying attention
3) The last poster's comments have nothing to do with ethics or cyclists in general, you sound like an idiot

What's wrong with you? People and animals cross all the time where they are not supposed to cross. The only good thing about this could be that it slows down automation for many years, as it should. The drivers are better than machines, and they need kind

I agree strongly with the first part of your comment. I disagree VEHEMENTLY with the last part. Human drivers kill thousands of people every single year largely by idiot mistakes (you make them, I make them, we all make them). The sooner cars are able to drive themselves, the better. Just don't let Uber play the game, they're clearly not interested in doing it safely.

The person with the bike was walking from the left curb to the right curb at about 2mph. She had crossed a full lane and most of a second one. With 30-foot lanes, that means she had been in the road for at least ten seconds. The car was moving at 40mph. When the person stepped into the road, the car would have then been 600 feet away, or LITERALLY MORE THAN A TENTH OF A MILE. LITERALLY.

Oh, and the street view clearly shows an array of bright street lights lighting the scene. Contrary to the underexposed video Uber supplied.

15-foot lanes, I mean.

Your math is off. A 15-foot lane takes a 2mph person 5 seconds to cross. Halfway through the next lane (which is where she was when hit, 1.5 seconds after we first see her) is 7.5 seconds, so she first appears 6 seconds after she stepped onto the roadway (illegally, and without checking for oncoming headlights in the dark).

The car was going 38mph, which travels 334 feet in 6 seconds.

Your all-caps sentence is attention-getting, but irrelevant; what is relevant is HOW FAST CAN A 38MPH CAR STOP?

(What is also relevant is that the car was actually 3mph over the speed limit. Bad Uber. That's going to be brought up in court.)
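
For anyone who wants to rerun that arithmetic (15-foot lanes and a 2mph walking pace, as assumed above):

```python
# Timeline check: 15 ft lanes, 2 mph walking pace, 38 mph car.
walk_fps = 2 * 5280 / 3600     # ~2.93 ft/s
print(15 / walk_fps)           # one lane: ~5.1 s
print(1.5 * 15 / walk_fps)     # halfway into lane two (impact point): ~7.7 s
car_fps = 38 * 5280 / 3600     # ~55.7 ft/s
print(6 * car_fps)             # ~334 ft covered in the ~6 s before she is visible
```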

What the video also shows is that the pedestrian wore mostly dark clothing, which is hard to see in the dark. That argues against a person doing better. There are comments about the human eye having greater dynamic range than the camera. True. However, in the dark, a darkly clothed person is still hard to see. There's also the assumption that it was only the distraction of letting the car do the work that caused the lack of response. Humans don't respond instantly, and ordinary distraction like talking or texting on the phone, simple inattention, or perhaps a couple of drinks could easily have taken the critical seconds off the response time.

Fear of not having control of our cars is what I'm seeing in most of these comments. I certainly wouldn't bet against the self-driving car on a statistical basis. What the Google self-driving car tests show is that the overwhelming majority of accidents result from human failure to maintain attention.

Also remember that she was not only clothed almost invisibly for walking in the dark, she was jaywalking on a major thoroughfare. It's flat out stupid to not look before walking across traffic that might not stop, particularly at night. As sad as her death is, it was in part her own fault.

Nobody would doubt she was foolish to cross and not to look. The concern is that, even so, the car should have stopped, and failing that, the safety driver should have stopped it. Whether the lighting is too dark is actually not entirely relevant. The car must be able to operate in that level of lighting, and its sensors do indeed have that ability.

What about a nice little kitten or a cute puppy in this situation? Ha, teach the innocent animal a lesson?

Don't think it looks that bad for Uber.

I'm sure they have blame to take here, but I also don't think it is anywhere near fully their fault; maybe not even 50% their fault.

Honestly, if someone is crossing at an unmarked spot, on a poorly lit road, in the dark, they should be taking full responsibility for their own life; not pushing a bike across the road staring down at the pavement. I also didn't notice any lights on her bike, and she definitely wasn't wearing any type of reflective vest.

I feel like anybody, driving anything, would more than likely have struck her, unfortunately. It reminds me of how people hit deer all the time. They leap out of the shadows, and into your lane, and there is nothing anyone can do.

It's very sad this happened, but I don't see how anyone can make this out like it was some big obvious thing that could have been so easily avoided.

You are good at noticing. Did you notice the street lights? Did you notice the law?

Did you even read the article? We are not talking about a human driver hitting a pedestrian. Perhaps even a human driver would have noticed the pedestrian earlier and maybe swerved because, as the article mentions, our eyes see a little better than what the dashcam shows. Even if it was just a human driver who made a mistake, then although bad, that happens; accidents happen every day. However, we are not talking about a human driver. We are talking about a system which is operating in 100+ (I don't know the number) cars simultaneously, and if this system does not even recognise a walking pedestrian within 50m (it easily should have been able to, from all the sensors and cameras it has) and does not even register to brake or decelerate, that is a HUGE concern for Uber's self-driving cars and the safety of self-driving cars in general. Of course this is a big deal. This is the first time a death has happened because of a self-driving system. The whole point of building a self-driving car is to reduce accidents, among other things. If that system doesn't work and the people who built that system aren't to blame, then who is?

If Uber is testing vision-only, is that at all because of the Waymo lawsuit? Perhaps this accident needs to be added to the "Overall effects of the lawsuit" section of your previous post.

That vehicle is equipped with the very common, but expensive, Velodyne 64.

My employer provides radar based automatic emergency braking. I believe it should have prevented this fatality. http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html

Wow, the video where smoke is used is seriously impressive. There's no way a human would brake that fast.

I could have stopped that fast, and have recently. This is the thing that will happen if we get hit with an EMP. That car will not just shut off and kill only one person, but many. With an EMP, electronic safety features are no longer relevant.

If there is an EMP, no car with an electronic engine management system is going to work; the whole car will no longer be relevant.

Mechanical or electromechanical failsafes are a thing, you know. They've been around for a while. A much more likely case than an EMP is the car's onboard computer crashing, which should of course not lead to an accident.

How did you determine if the car had the right-of-way?

She crossed at a place that is not a crosswalk. In fact, since it looks like it could be a crosswalk, it is marked with signs telling people not to cross, and to use the crosswalk at the traffic light down the road. Arizona law gives the car the right of way in this situation.

Talk about crosswalks seems like a distraction. The presence or absence of a crosswalk should have no difference to the behavior of the car's control system. If the car could not see the person, it certainly could not see a marked crosswalk.

While, as I have said, a capable car should have been able to stop in this situation, one would not be able to do so if the pedestrian came from the right.

Humans, and robocars, treat crosswalks differently and are required to by law. For example, you pay attention to a pedestrian standing just off the road who appears to be about to enter a crosswalk. You slow for them and track them. You can't slow for every pedestrian who is just off the road outside of a crosswalk; traffic would never flow if we all did that. That's why the law requires that people crossing outside crosswalks (if they can cross at all) are responsible for looking both ways (remember that?) and making sure no cars are coming. The driver's duty, if the pedestrian does not look properly, is just to do what they can to avoid hitting them; but if they do that and still hit them, it's the pedestrian's fault.

Not so in a crosswalk. There, a smart pedestrian still looks both ways and crosses with care, but technically they can just stroll, and it's the driver's duty to watch out for them.

Try that in Texas and you're likely to end up in a hospital (if not a morgue).

There's an old saying among sailors (a risk averse bunch), "The right of way is given, not taken." While the law may give right of way to the car we can assume the pedestrian did not. It would be ideal if in addition to identifying pedestrians and vehicles, self driving cars could understand and acknowledge when they have been given the right of way.

If the LiDAR was switched off as part of a test, then, as in any form of engineering, alternate solutions should have been in place. This could include using your best drivers, having two of them, and perhaps restricting testing to daylight hours. It would also be good to know how effectively Uber checks driver videos to make sure drivers are always concentrating on the road ahead as best as is humanly possible.

The sensors should have provided defense in depth, yet it seems the car did not even brake! Uber has often been considered a bit of a corporate cowboy in their approach to business. I certainly hope that culture has not in any way influenced their attitude to safety and risk assessments. For the sake of others in the industry, I hope they are kept off the road until the problems have been fully sorted - in depth.

Public confidence is going to take a big hit from this accident. This made me recall an article I read that proposed copying the system used to test drugs and applying it to driverless cars before they are allowed on the road (The Conversation: /before-hitting-the-road-self-driving-cars-should-have-to-pass-a-driving-test-90364). I didn't think much of it at the time, but perhaps spending several weeks being independently tested in deliberately challenging situations, both day and night, could give the public a bit more confidence? The purpose of the test could be to transparently show the public that the vehicles are as safe as claimed. Such a test would only cover very limited situations, but after the Uber accident it now looks more worthwhile.

To get approval to go on the road, the AI needs:
- Time in the simulator
- closed track examination for road rules 7 days of 24 hours.
- the pedestrians will be the members of the board
- the cyclists will be the development team
- the sales & marketing team will be wearing black during the night shifts.

>Radar-based cars generally don't respond to a stationary object in the next lane, because as a driver you also don't slow because a car is stopped in a different lane, when your lane is clear.

If you spot a child on a bike in the next lane you may wish to slow down, children are not always the best judge of a situation. I do not know if a robocar is able to discern children and adjust reactions accordingly but perhaps it should.

Arizona requires it specifically.

28-794. Drivers to exercise due care
Notwithstanding the provisions of this chapter every driver of a vehicle shall: [...]
3. Exercise proper precaution on observing a child or a confused or incapacitated person on a roadway.

Caution is always good -- robocars are effectively always at a basic minimum level of caution which is superior to a human's -- always looking in all directions, modelling all objects and predicting their trajectories. They will exercise special caution (slowing down) when seeing things like a pedestrian about to enter a crosswalk, but can't do that just because one is standing on a sidewalk mid-block.

Brad, thanks so much for the overall analysis. It’s really good. I do have a comment, though: we have evidence right in front of us that robocars are *not* “effectively always at a basic minimum level of caution.” The failure mode is certainly different from a human’s but the outcome is the same. It might be time to retire this particular kind of rhetoric around robocars. (I take the point that the ideal is that they are always paying attention. But we’re no longer talking in ideals and abstractions with these vehicles.)

We have evidence that this specific case ONE robocar was not "effectively always at a basic minimum level of caution." It might be time to retire this unfortunate habit of conflating a single failure with the entire class.

Seems to me that the broad assumption is the 'correct' (conservative) and most appropriate initial response from an engineering POV.

Assume the worst until you can identify the cause.

I agree that individual teams may not always live up to the ideal. I expect that Uber will pay dearly for not having done so. That is how the system works. Though we don't yet know anything about why Uber's system failed here. We know that it should have been able to work properly, and we have some idea about what working properly should be. It might be for a stupid reason, it might be for a complex and rare reason.

But cars will fail. That's why there is the safety driver system. When teams first get started, they drive vehicles that disengage every 2000 feet. That's how it all begins.

It's a major consideration among pilots, who also call it "autopilot complacency", and there's lots of academic research on Google Scholar, including proposed fixes, but I've not seen it discussed in the context of self-driving cars. However, I'm not following them nearly as closely as you.

Any useful work that you know about?

--dave collier-brown

I've actually seen a lot of discussion of this problem (especially after the Tesla crash), though the problem is a little different in cars. The required reaction time in a car is much shorter than in an airplane, and in a lot of scenarios there is not enough time for a human to take over to avert a crash when the automation fails.

Obviously people are going to wear whatever clothes they want, but this seems a worst-case situation for Lidar, that the victim was wearing black. (I totally agree that the Lidar should have been on for safety even during restrictive tests.)
This video makes it clear that if she were riding the bike, going 1.5x-3x the speed she was walking, wheel reflectors would have been useless. However, this is the extreme edge case where she had a bike but wasn't riding it. So what would the lidar have made of wheel reflectors? Probably just mistaken a few bright points for glass in the road or some other such noise.

My thought on why the pedestrian was in the wrong place at the wrong time is that the AV was doing the speed limit. You can see another car well ahead. The woman probably waited for a bunch of normal (speeding) traffic to go by and started crossing in the break. And then came the straggling (from the last stoplight) Uber car, correctly doing the speed limit, and it surprised her. That's my best guess at the psychology here.

If there's one thing that this video seems to make clear it's that humans driving cars or mobile phones have to go.

Anybody have the link to the report claiming the LIDAR was turned off at the time of the accident?

That section of road/median really needs to be rethought - why is there what looks like a bike/pedestrian path right up to the curb, but then a "don't cross here" sign with an arrow to a crosswalk that requires leaving the path? It seems designed to make people want to cross right where they put the "don't cross" sign. That seems like exceedingly poor design and was likely a factor in this crash.

The sign is posted so as to be visible to people on the right side of the street who may intend to cross to the median. Only the back is visible to people already on the median where the person was. So anyone making the argument that "it was dark" in defense of Uber also needs to make the same argument about reading the back side of a sign in the dark.

"The Tempe police released the poor quality video from the Uber. What looks like a dash-cam video along with a video of the safety driver. Both videos show things that suggest serious problems from Uber, absent further explanation.”

It is unfortunate that information is dripping out like this. It fuels speculation even though the sensors on the car probably make this one of the best documented traffic fatalities, ever. Why not just release it?

This may still have happened if it was just a regular car and human driver, because she was hard to see till the last minute -- an alert driver probably would have been able to hit the brakes a titch before collision, but not much better.

But that is no excuse for Uber. None. This is horrible negligence on their part on so many levels.

The woman who was killed was in the road for a long time before the car got there. She was an obvious person, walking in front of her bike; she was not erratic; she was walking across the road and had made it more than halfway across when hit. The only issue was that it was dark, which matters to the human eye but not to AV tech.

Why I think Uber utterly failed here:
- Current tech available in higher-end regular cars would have caught this, and "seen" the woman, and at least done some braking before hitting her, and yet a test vehicle with multiple systems didn't have this back-up tech and their new tech failed miserably -- something is really amiss. They didn't show proper caution about whatever new tech they were using. If there is tech that can intervene easily when your tech doesn't work, and you don't have it on a test car, why???? You can still get the freaking data, right? You will say, oops, the object detection system caught this when it should have been caught by the "new tech", back to the drawing board. Not having available tech as back-up in a test car is just plain negligent.

- Speed. The car is a test car and yet it was going over the speed limit on a surface street where pedestrians may appear. WTF??! Can I say that again: test car, going over the speed limit, surface street, WTF?!?! I've been following street design arguments for maybe a whole 6 months of my life on the side, and even I know hitting a pedestrian at 40 mph is nearly 10 times more lethal than hitting them going 20 mph. Why oh why would you want a test car to go any faster than absolutely necessary? At some point, a car going too slow will be a danger in traffic, but there is no remotely decent reason to go any faster than that, certainly not faster than the speed limit on a surface street. Why wasn't the car going 30 mph, 5 mph below the speed limit, instead of going faster than the speed limit? It is a test car. Why, why? Stupid, negligent.

- Safety driver set-up issues. Uber spent something like a billion dollars in a few months trying to get traction and data on Uber Pool, so Uber has a fire hose of money flowing to them. How in the heck didn't they have better ways to keep their drivers alert? I doubt it would have helped much in this situation, but I can't believe they aren't trying to do better. Again, these are a limited number of test vehicles; why not two drivers, like others have done? Why not require the driver to keep their hands up near the steering wheel, sort of like Tesla's Autopilot does? The lack of these basic extra ways of adding safety seems extremely short-sighted and needlessly dangerous. Being better about safety drivers would cost them in Phoenix, what, like $5 million?

- A huge reason for cities to allow these AVs on their roads is to improve safety. If the AV companies can't be trusted to have available working tech on their TEST cars, and these cars can't out-perform humans in situations like this, and we don't insist on it, it is just squandering future lives for no good reason -- because these cars can do so much better if their designers aren't being reckless idiots.

I don't want AVs if they can't be way, way safer in net than human-driven cars. It is Uber's and the other AV developers' responsibility to deliver on this, and their failure to have very basic, reasonable back-up safety measures on a test car is ridiculous. I get that some time in the future even very safe AVs might fail in a way most humans wouldn't, but in net the AVs could be a huge improvement over humans, so that statistically they would still be a safety win. But just because these AVs outperform human-driven ones by, say, 30 percent does not mean we should allow them to pass up reasonable, economical ways to make their AVs even safer. And this isn't mass production of a proven tech; this is a freaking test vehicle on our public streets. If you can't be careful in this situation and make it as safe as reasonably possible, what will you do later when you own all the cars in a city and they keep killing people needlessly? Just say, well, they are safer than humans? That's not good enough. Planes are a super safe way to travel many miles, but when one crashes, we don't shrug and say, oh well, it's still way safer than driving. Nope. We make sure the airplane designer/manufacturer and maintenance crews didn't fail.

On this stretch of road, just south of the accident site, Google Street View shows a 45 MPH speed limit sign. People keep saying 35 mph was the speed limit, but I think it's really 45. Brad, do you know for sure?

I have seen the sign on streetview so I agree it is 45mph unless the limit changed since streetview was done. Police said 35mph but I think they may be mistaken. Even saying 35mph, the police have made no indication of speeding as a factor in the accident.

I am alarmed by a tendency to blame the humans in this story. This is especially evident on Fox News and, perhaps, the Tempe police (http://www.fox10phoenix.com/news/arizona-news/tempe-police-woman-struck-killed-by-self-driving-uber and http://www.foxnews.com/us/2018/03/20/operator-self-driving-uber-vehicle-that-killed-arizona-pedestrian-was-felon-report-says.html) but to some extent present even in discussions here.

I want to be careful here because some blame, and perhaps even criminal charges, are warranted. The pedestrian WAS crossing where she shouldn't have been. The driver WAS (apparently) inattentive. But the implication we should NOT draw is that 'if the humans would just behave appropriately, the technology would work fine.'

In both cases, those are known, common, human failings. Humans do those things. To ask humans NOT to do them is useless. They WILL do them. The designers of this system SHOULD know and account for those failings.

If a designer puts a chip in a system (for various good reasons) but knows the chip is prone to heat failure in conditions the system is likely to encounter, and then the system fails in hot conditions, the problem is not "chip failure" or "chip error"-- it's a design failure and designer error. Asking the chip to, "please don't fail" and punishing it when it does is a bad strategy.

Putting humans, such as the Uber safety driver, in conditions where they are supposed to be eternally but almost always passively vigilant, ready to step in at a moment's notice to save the day from an automation failure, is just a bad idea. It's even worse if we're dealing with a single human operator. The human WILL get bored, distracted, sleepy, out-of-the-loop and, eventually, will simply lose driving skills. We've known that in aviation for about half a century. TSA knows that for baggage screeners, which is why they rotate jobs on very short cycles.

This pattern of "in case of emergency, break glass" and dump it on the human operator is just a bad idea, and shouldn't be the default for "automated", "driverless" cars.

(At least not without a LOT more attention to how to ensure and maintain human vigilance.)

I would go a step further, and say there isn't even clear evidence that the supervising human was inattentive at the time of the accident. My understanding on the Uber setup is that there is an iPad right down where the supervising human keeps glancing during the video.

If the human was monitoring this display and the road at the same time, as they may well be trained and expected to do, then it would be further evidence of Uber's negligence in designing this system, expecting more from the supervisor than is possible.

Google, I've read, separates the role of computer monitor and backup driver, requiring two people in the car for exactly this reason.

Uber's drivers are told, according to reports, to use that tablet when the vehicle is not moving. It's not a well designed procedure. Waymo, for example, has drivers log reports in audio. If Uber allows or requires the solo safety driver to look at the tablet, that would be a serious procedural error on their part.

The team developing a self-driving car is motivated to learn as rapidly as possible how the car responds in a variety of situations and to demonstrate that it can do its job without intervention by a human. The role of the "safety driver" is in partial conflict with these objectives. If she intervenes when the car might not have caused an accident the team learns less about what the car would have done and the team gets a negative mark on their safety record. I expect that adding factors like these to the judgment required of the "safety driver" also adds to the reaction time we can expect from that "driver". It doesn't help if the "safety driver' doesn't have a usable indication whether the car has noticed a danger that the "driver" sees, to indicate whether the car might in the end do the right thing.

It's not enough to ask whether it's reasonable to expect a passive understudy to pay constant attention. We have to think about whether the things we ask the "safety driver" to do achieve an acceptable level of risk avoidance.

What teams like Waymo do is after an intervention, they turn the data into a simulator scenario, and they find out what would have happened with no intervention, and if it's bad, they fix it.

And on most teams the software operator does indeed have indications if the vehicle has seen the problem, but it's not necessary to do this because of the above plan.

So the team does not learn any less. Simulations are getting very good at short-range modeling.

That lady was a homeless and she ain't right in the head. That's why she was crossing in a non designated pedestrian area. That's why she's now dead. But the liberals all cry manslaughter and make small issues a big deal.

"That lady was a homeless and she ain't right in the head. That's why she was crossing in a non designated pedestrian area. That's why she's now dead. "

The reason she was in the road may be due to poor judgement. The reason she is dead is that she was struck by a car not being driven properly. If you cannot stop your car quickly enough to avoid an object on the road, you are traveling too fast. If it is too foggy, rainy, snowy, or dark to see far enough ahead of you to stop in time, then you are traveling too fast for the conditions.

If you cannot avoid hitting a ball, a dog, or a child slowly moving from one lane of an otherwise empty road into the lane you are driving in, then you are driving recklessly. The mental state of the ball, dog, or child does not enter into it.

If there was a failure in some or many components of the AV system, then this incident might be an "accident," but if the systems as designed or operated are incapable of stopping in this type of situation and they were operated anyway, then this is the type of situation that often results in manslaughter charges and wrongful death lawsuits with significant penalties.

It is possible that the penalties might be reduced slightly because of the behaviour of the victim, but it seems very unlikely that any court would find that this was "reasonable" behaviour of the car.

The car was traveling too fast for the conditions, as demonstrated by the fact that the car struck the pedestrian, no matter why this pedestrian was on the road at the time.

I know that the ability of the regular radar front ends in cars like this to "see" objects that are either stationary or moving horizontally with respect to the direction of car movement is limited. For that reason the radar system is programmed to ignore such objects, and the car instead relies on Lidar. Would increasing the frequency of radar operation (shorter wavelength) to 94 GHz, or pushing it to 140-150 GHz, not be a better option? Radar operates just as well at night and in bad weather conditions, and as such could do without Lidar. Also, I would have thought that the radar cross-section of a person is greater than that of, say, a road sign. Who knows, but maybe higher output power or a lower noise figure at low IF frequency would be sufficient to do the trick. It is sad to see someone killed while such an interesting technology is in the making.

The main thing you need for more radar resolution is bandwidth. Those upper bands often have more bandwidth. More radar resolution would make radar more useful for these situations. You get radar returns from her and the bike (though not really strong ones, as people and cloth are less reflective than metal) but you can't be sure what they are coming from. You know how far away they are, and you know they are not moving towards you, but only roughly where they are. It could be a sign on the side of the road, or the road itself, or a Botts' dot (not present on this road from what I can see).
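
To make "bandwidth buys resolution" concrete: a radar's range resolution is c/2B. A quick sketch (the bandwidth figures are assumed typical values):

```python
# Range resolution of a radar: dR = c / (2 * B).
C = 3.0e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    return C / (2 * bandwidth_hz)

for b_ghz in (0.5, 1.0, 4.0):   # assumed typical automotive bandwidths
    print(f"{b_ghz} GHz -> {100 * range_resolution_m(b_ghz * 1e9):.1f} cm")
```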

If this vehicle was using two types of radars, i.e. mid-range at 24 GHz with a wide aperture (60-90 deg) and long-range at 77 GHz with a narrow aperture (15-30 deg), then the 24 GHz one would certainly struggle with the resolution to spot a sideways-moving object. I would have thought (and have seen) that 77 GHz systems, even with a bandwidth of 0.5 GHz, were able to detect people reliably. My take on this is that with no Lidar, the sideways movement of the person was not picked up by the mid-range system, and when the person walked into the long-range beam (77 GHz) it was too late for the car to apply the brakes and stop within a sufficient distance. This is of course speculation, as I don't know what frequencies, ranges and beam angles were at work.

If you hold your mouse pointer over where the left lane line intersects the bottom of the frame you can see that the car holds the line very well until right at the end. And then it actually turns into the victim, to the right. I'm wondering if it was already starting its right turn sequence oblivious to the woman the whole time.

Also, I timed the safety driver looking at the road for approximately 4s and looking down (at a phone?) for 9s. Ouch.

Do we have any idea if the safety driver was looking down to monitor systems (which would be a major test design flaw on the part of Uber) or had become complacent and was looking at a phone or some thing else non-vehicle related?

We don't know; at least nobody has said. But no, it would normally not be expected for a safety driver to take her eyes off the road for an extended period for anything.

"From the dashcam video, it seems that there was very little time to react, and no fault for a human driver hitting somebody who "came out of nowhere." "

Brad, I respectfully disagree. I think you are missing that a sane driver would have probably noticed two things.

First, that the high beams were not on (to warn pedestrians via indirect lighting).

Second, that the low beams were mis-adjusted. No sane driver would drive with the lights pointed down like that for more than a few miles.

See https://www.facebook.com/notes/jason-arthur-taylor/will-uber-death-of-elaine-herzberg-from-mis-adjusted-headlights-allow-ai-to-kill/10155674944874091/?pnref=story for more details.

I do agree the headlights are quite low. People say that is a fault of the Volvo. High beams are not on because there is another car further ahead in traffic, and you may not turn on high beams in that situation, I believe.

Anyway, I do think there was time to react, and the video misleads you if you think it shows that. You may have misunderstood what I was writing.

I'm glad we sort of agree about reaction times, but you are missing the reaction time to take over, which is itself huge -- like seconds in this case. It is like asking a buddy if they want to play a computer game. Once they are ready to play, their reaction times go down, but it will take a while for your buddy to answer whether he wants to play.

Fault of the Volvo???? Are you kidding me? Where are you getting this? DOT requires 0.4-0.5 degrees down. All cars have adjustable lights. This is ~10-20 times off DOT regulations. The car is illegal. Period.

This is how you adjust the headlights in this car. https://itstillruns.com/adjust-headlights-volvo-xc90-7385280.html And it is just same as in any other car.

Unfortunately, Lidar is not the answer here. Lidar is a very low-resolution data source compared to cameras, so it is not usually used for primary object identification. Once cameras see an object, a bunch of Lidar points hitting that object can give you the distance to it, and that distance can help you figure out where one object stops and another begins, but you cannot distinguish a pedestrian from a tree just from Lidar. Especially not in a second or two.

Plus, Lidar is noisy, due to timing uncertainty. So you always need to filter and average Lidar data before use. If it is looking at bicycle wheel spokes from the side with very low resolution, it might classify the few reflections it got as noise and thus filter out the whole object.

The cameras on these cars are typically not in different exposures, instead they are in different fields of view - wide angle, narrow forward angle, narrow turn angle, ...

Basically, I'd say that the way these systems are currently made, it is not expected that they'd perform better than humans when driving badly lit roads at night. That is just not in the design criteria. And it would be pretty expensive to accommodate this one rather narrow additional requirement.

All sensors have advantages and disadvantages. LIDAR's segmentation of the world into 3D is 100%, which is why people love it. It does not give enough resolution for good identification of objects smaller than cars. That's why people use all the sensors that make sense -- most people, anyway. It will not see bike spokes, that is certainly true, but it should get a few returns from the frame and wheels and bags; but the bike is not important, the ped is more than enough to see.

Actually, people use cameras in all sorts of configurations. Tri-focal (3 different fields of view) is indeed a popular strategy. But HDR is also getting wide use. You can do HDR without multiple cameras. For example, you can shoot at 60fps when you can only handle 30fps, and boost ISO on alternate frames, or use a variety of other tricks. When you do this, of course, the two frames are shot from slightly different positions, which has other issues. But at night, with some objects lit by headlights, streetlamps and ambient light, you want to have HDR -- or thermal cameras.
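
A sketch of that alternating-exposure trick (illustrative only; real pipelines are more involved):

```python
# Pair alternating short/long exposures from a 60 fps stream into a
# 30 fps stream of (short, long) frame pairs ready for HDR fusion.
def pair_alternating(frames):
    """frames: list of (image, tag) with tag 'short' or 'long',
    in capture order."""
    pairs = []
    for a, b in zip(frames[0::2], frames[1::2]):
        short = a if a[1] == 'short' else b
        long_ = b if b[1] == 'long' else a
        pairs.append((short[0], long_[0]))
    return pairs

print(pair_alternating([('f0', 'short'), ('f1', 'long'),
                        ('f2', 'short'), ('f3', 'long')]))
# [('f0', 'f1'), ('f2', 'f3')]
```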

Perhaps Uber didn't know about this technology until now.

They are highly aware of all these technologies.

They're aware of it. In fact, they were famously accused of stealing it.

Uber's entire self-drive program revolves around cutting corners in questionable ways (like "forgetting" to apply for licenses), but people won't believe it until Uber cars cause a few more fatalities, while competing cars don't.
