It certainly looks bad for Uber
Major Update: The full NTSB report has been released, and it includes several damning new findings.
Update: Analysis of why most of what went wrong is terrible but also to be expected.
The Tempe police have released the poor-quality video from the Uber: what looks like a dash-cam video, along with a video of the safety driver. Both videos show things that suggest serious problems at Uber, absent further explanation.
You can watch the video here if you have not seen it. It's disturbing, though the actual impact is removed. It will make you angry. It made me angry.
Above I have included a brightened frame from 3 seconds into the video. It is the first frame in which the white running shoes of the victim are visible in the dashcam video. They only appear then because she has been in darkness until that point, crossing at a poorly lit spot, and the headlamps finally illuminate her. Impact occurs at about 4.4 seconds (if the time on the video is right).
She is crossing, we now see, at exactly this spot where two storm drains are found in the curb. It is opposite the paved path in the median which is marked by the signs telling pedestrians not to cross at this location. She is walking at a moderate pace.
The road is empty of other cars. Here are the big issues:
- On this empty road, the LIDAR is very capable of detecting her. If it was operating, there is no way that it did not detect her 3 to 4 seconds before the impact, if not earlier. She would have come into range just over 5 seconds before impact.
- On the dash-cam style video, we only see her 1.5 seconds before impact. However, the human eye and quality cameras have a much better dynamic range than this video, and should also have been able to see her even before 5 seconds. From just the dash-cam video, no human could brake in time with only 1.5 seconds of warning. The best humans react in just under a second; many take 1.5 to 2.5 seconds.
- The human safety driver did not see her because she was not looking at the road. She seems to spend most of the time before the accident looking down to her right, in a style that suggests looking at a phone.
- While a basic radar which filters out objects that are not moving towards the car would not necessarily see her, a more advanced radar should also have detected her and her bicycle (though triggered no braking) as soon as she entered the lane to the left, probably at least 4 seconds before impact. Braking could then have been triggered 2 seconds before impact, in theory enough time. (A rough timeline of these detection windows is sketched just below.)
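To make those windows concrete, here is a back-of-envelope timeline in Python. The 40 mph speed and the video timings come from the list above; the ~90 m of usable LIDAR range on a target like this and the 0.65 g hard-braking figure are my own assumptions, not Uber's numbers.

```python
# Rough timeline of the detection windows discussed above (all values approximate).
MPH_TO_MS = 0.44704
speed = 40 * MPH_TO_MS            # ~17.9 m/s
hard_brake = 0.65 * 9.81          # assumed hard braking, m/s^2

lidar_range_m = 90.0              # assumed usable LIDAR range on this target
lane_entry_s = 4.0                # she enters the lane to the left (radar case)
dashcam_sighting_s = 1.5          # she appears on the released dashcam video

stop_dist_m = speed**2 / (2 * hard_brake)      # ~25 m
stop_time_s = speed / hard_brake               # ~2.8 s of braking

print(f"LIDAR first return : ~{lidar_range_m / speed:.1f} s before impact")
print(f"Radar (lane entry) : ~{lane_entry_s:.1f} s before impact")
print(f"Dashcam sighting   : ~{dashcam_sighting_s:.1f} s before impact")
print(f"Hard stop needs    : ~{stop_dist_m:.0f} m (~{stop_time_s:.1f} s) plus decision time")
```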
To be clear, while the car had the right-of-way and the victim was clearly unwise to cross there, especially without checking regularly in the direction of traffic, this is a situation where any properly operating robocar following "good practices," let alone "best practices," should have avoided the accident regardless of pedestrian error. That would not be true if the pedestrian were crossing the other way, moving immediately into the right lane from the right sidewalk. In that case no technique could have avoided the event.
LIDAR
This is not a complex situation. This is the sort of thing that the LIDAR sees, and it sees very well at night. The Uber car, at 40mph, is getting into the upper range of speeds at which it is safe to drive non-freeway with just the LIDAR -- what I called the "valley of danger" four years ago -- and Uber knows it. 40mph is about as fast as you should go, but you can do it. (Even so, some cars like to go a bit slower approaching legal crosswalks, marked or not.) Using the LIDAR, their perception system should have had a pretty good impression of her by 50m (2.7 seconds) and applied the brakes hard. The stopping distance is 25m or less with hard braking. (A more typical strategy would be to slow, get a better appraisal, and then continue braking so as to stop 2-3m before her, to avoid jarring any passengers.)
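For readers who want to check that arithmetic, here is a minimal sketch. The 50 m detection distance is the figure above; the deceleration values and the half-second perception-and-decision delay are assumptions.

```python
# Does a detection at ~50 m (about 2.7 s at 40 mph) leave room to stop?
MPH_TO_MS = 0.44704
v = 40 * MPH_TO_MS                      # ~17.9 m/s

def stop_margin(detect_dist_m, reaction_s, decel_ms2):
    """Metres left between the stopped car and the pedestrian (negative = impact)."""
    travelled_while_deciding = v * reaction_s
    braking_distance = v**2 / (2 * decel_ms2)
    return detect_dist_m - travelled_while_deciding - braking_distance

print(f"hard braking (0.8 g):     {stop_margin(50, 0.5, 0.8 * 9.81):+.1f} m to spare")
print(f"moderate braking (0.5 g): {stop_margin(50, 0.5, 0.5 * 9.81):+.1f} m to spare")
```

Either profile stops short of her, which is part of why the apparent absence of any braking is so hard to explain.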
Uber needs to say why this did not happen. I have seen one report -- just a rumour from somebody who spoke to an unnamed insider -- that the LIDAR was off in order to test operations using just camera and radar. While that might partly explain what happened, it is hard to excuse. Even if you want to do such tests -- many teams are trying to build vehicles with no LIDAR -- the LIDAR should remain on as a backup, triggering braking in exactly this sort of situation when the other systems have failed for some reason, or at least triggering a warning to the safety driver. It would be highly unwise to just turn it off.
In fact, I have to say that this sort of impact would have been handled by the fairly primitive ADAS "Forward Collision Warning" systems found on a large number of cars. Not the most basic radar-only ones that don't detect horizontally moving objects, but any of the slightly more sophisticated ones on the market. The unit that comes standard in the Volvo XC90 promises to reduce velocity by 50km/h if a bicycle crosses your path. The built-in systems that come with these cars are typically disabled in robocar operation.
You may wonder, if there is LIDAR data, why have the police not examined it? While it is possible they have, they may not be equipped to. Police can readily examine videos. Understanding point clouds is difficult without Uber's own suite of tools, which they may not have yet offered to police, though they will need to if legal proceedings take place. Remember that because the victim crossed directly at a "don't cross" location, the car had the right of way, which is all the police are usually concerned with. Police may be concerned over Arizona law requirements to brake even for jaywalkers. However, the police may only consider a human's abilities here, not the superhuman vision of LIDAR or radar. From the dashcam video, it seems that there was very little time to react, and no fault for a human driver hitting somebody who "came out of nowhere." The police may not have a good way to evaluate the vastly superior dynamic range of human vision compared to the camera.
Waymo's cars, and a few others, use long-range LIDARs able to see 200m or more. Such a LIDAR would have detected the victim as soon as she became visible on the median, though most systems do not react to pedestrians off the road. As soon as she set foot on the road, that would be a flag to such a system. One lesson from this accident might well be to map "illegal, but likely crossings" and exercise extra caution around them. In many countries, people even cross freeways routinely, though this is very dangerous because nobody can react in time at such speeds.
There is a dark irony that this longer range LIDAR is what the Waymo vs. Uber lawsuit was about. Though I doubt that Uber would have had its own long range LIDAR in production by now if they had not been troubled by that lawsuit.
It should be noted that because the victim is wearing a black shirt, some of the LIDAR range numbers may be reduced, but not by a great deal. One would need to know the reflectivity of the cloth. If it was less than 10% (very black) it's an issue, though she was not fully covered: she has blue jeans, bright hair and a red bike.
It should also be noted that while driving without a LIDAR's help if you have one is unwise, many teams, most famously Tesla, are developing cars with no LIDAR at all. It is not a problem to drive a prototype which is not ready for deployment yet; that is what everybody is doing, and it's why they have safety drivers and other safety backups. However, those drivers must be diligent and the cars operated only where they are qualified to operate.
Cameras and HDR
The Uber reportedly has a large array of cameras. Good. That usually means that the system has been designed to do "high dynamic range" vision for driving at night. That's because what we see here is common -- uneven lighting due to the headlamps and streetlamps. This means either 2 or more cameras with different exposure levels, or one camera constantly switching exposure level to capture both lit and unlit objects.
A vision system based on HDR should also have easily seen her and triggered the stop.
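As a rough illustration of what such a pipeline does, here is a sketch using OpenCV's exposure-fusion tools. The file names are placeholders; a real car would run this on synchronized camera streams, not saved frames.

```python
# Fuse a short and a long exposure so both the headlamp-lit road and the
# shadowed median remain usable in a single image.
import cv2
import numpy as np

short_exp = cv2.imread("frame_short_exposure.png")   # keeps lit areas from blowing out
long_exp = cv2.imread("frame_long_exposure.png")     # pulls detail out of the shadows

fused = cv2.createMergeMertens().process([short_exp, long_exp])   # float image, roughly 0..1
cv2.imwrite("frame_fused.png", np.clip(fused * 255, 0, 255).astype("uint8"))
```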
Another option, not used on most cars today, is a thermal "night vision" camera. I have written about these a few times and I experimented with them while at Google back in 2011. Then (and even now) they are quite expensive, and must be mounted outside the glass and kept clean, so teams have not been eager to use them. Such a camera would have seen this pedestrian trivially, even if all the lights were off (headlights, streetlamps etc.) (LIDAR also works in complete darkness.) I have not heard of Uber using such night-vision cameras.
Note that the streetlamps are actually not that far from her crossing point, so I think she should have been reasonably illuminated even for non-HDR cameras or the human eye, but I would need to go to the site to make a full determination of that.
Once you have a properly exposed image from a camera, several vision techniques are used to spot obstacles within it. The simplest one is the use of "stereo," which requires 2 cameras, anywhere from 8 inches to 4 feet apart in most cars. That can identify the distance to objects, though it is much better when they are close. It would not detect a pedestrian 200 feet away but can, if the baseline is wide and the cameras high-resolution, see 150 feet.
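The quadratic fall-off in stereo ranging accuracy follows from the standard relation Z = f·B/d (depth from focal length, baseline and disparity). The focal length and sub-pixel matching error below are assumed values, chosen only for illustration.

```python
# Stereo depth and its uncertainty at a few distances.
f_px = 1400.0        # assumed focal length in pixels
baseline_m = 0.5     # cameras roughly 20 inches apart
match_err_px = 0.25  # assumed sub-pixel disparity matching error

def disparity_px(z_m):
    return f_px * baseline_m / z_m

def depth_error_m(z_m):
    # dZ ~ Z^2 * err / (f * B): the error grows with the square of the distance
    return z_m**2 * match_err_px / (f_px * baseline_m)

for z in (15, 45, 60):   # roughly 50 ft, 150 ft, 200 ft
    print(f"{z:2d} m: disparity {disparity_px(z):5.1f} px, uncertainty +/- {depth_error_m(z):.2f} m")
```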
The second method is detecting motion. There is the motion of close objects against the background if they are not directly in front of you. There is also the motion against the background when the objects are moving, as a pedestrian crossing the road is.
Finally, the area of most research is the use of computer vision, usually powered by new machine learning techniques, to recognize objects from their appearance, as humans do. That's not perfect yet but it's getting pretty good. It can, in theory, see quite far away if the camera has enough resolution at the distance in question.
Radar
Radar could have helped here, but the most basic forms of radar would not, because a pedestrian slowly crossing the street returns a Doppler signature similar to a stationary object -- i.e. just like all the signs, poles, trees and other fixed objects. Because radar resolution is low, many radars just ignore all stationary (meaning not moving towards or away from the car) objects. More advanced radars with better resolution would see her, but their resolution is typically only enough to know what lane the stationary target is in. Radar-based cars generally don't respond to a stationary object in the next lane, because as a driver you also don't slow down just because a car is stopped in a different lane when your lane is clear. Once she entered the Uber's lane, the radar should have reported a potential stationary object in the lane, which should have been a signal to brake. It's not quite as easy as I lay out here, unfortunately. Even these good radars have limited vertical resolution and so are often not enough on their own.
My guess is she is only squarely in the lane about 1.5 seconds before impact, which, with decision-making time, may not be enough. You need to start very hard braking about 1.4 seconds before impact at 40mph.
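Here is a toy sketch of the two radar behaviours described above: the primitive filter that ignores anything not closing on the car, and a lane-gated version that keeps stationary returns but only reacts to one sitting in the ego lane. The fields and thresholds are illustrative, not any vendor's interface.

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float
    radial_speed_ms: float   # + closing, - receding; near zero for a crossing pedestrian
    lane_offset_m: float     # lateral offset of the target from the ego lane centre

def basic_filter(targets, min_radial=0.5):
    """Primitive behaviour: drop everything not moving towards or away from the car."""
    return [t for t in targets if abs(t.radial_speed_ms) >= min_radial]

def lane_gated(targets, half_lane_m=1.8):
    """Keep stationary returns too, but only flag the ones inside our lane."""
    return [t for t in targets if abs(t.lane_offset_m) <= half_lane_m]

pedestrian = RadarTarget(range_m=27.0, radial_speed_ms=0.1, lane_offset_m=0.4)
print(basic_filter([pedestrian]))   # [] -- looks like a sign or pole, ignored
print(lane_gated([pedestrian]))     # kept: a stationary object in our lane, a cue to brake
```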
The safety driver
Clearly there is a problem with the safety driver. She is not doing her job. She may face legal problems. She will certainly be fired. The real debate will be over Uber's policies on hiring, training and monitoring safety drivers, and the entire industry's policies.
Uber was operating this car with only one safety driver. Almost all other teams run with two. Typically the right-seat one is monitoring the software while the left-seat one monitors the road. However, the right-seat "software operator" (to use Google's term) is also a second pair of eyes on the road fairly frequently.
Human beings aren't perfect. They will glance away from the road, though it is not possible to justify the length of time this safety driver was not looking. We will be asking questions about how to manage safety drivers. It is possible to install "gaze tracking" systems which can beep if a driver looks away from the road for too long a time. We all do it, though, and get away without accidents almost all the time.
We may see applicants for this job tested for reaction times and ability to remain attentive through a long grinding day, or see them given more breaks during the day. If phone use was an issue, it may be necessary to lock up phones during operations.
It is very likely that the safety driver's mistakes will pass on to Uber through the legal principles of vicarious liability. There is even criminal vicarious liability in extreme cases.
The software
Passengers who have ridden in Uber's vehicles get to look at a display where the software shows its perception output, ie. a view of the world where it identifies its environment and things in it. They report that the display has generally operated as expected detecting pedestrians, including along the very road in question, where they have seen the vehicle slow or brake for pedestrians entering the road outside crosswalks. Something about that failed.
It is also worth considering that the police report suggested that no braking took place at all. Even detecting and hard braking at one second might have reduced the impact speed enough to make it non-fatal. I have not done the math, but braking and swerving, even from 1 second out, might have been able to avoid the impact on the woman. (As I explain in earlier posts, most cars are reluctant to swerve, because you can't depend on doing that and it can make the situation worse.)
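For a rough sense of how much late braking helps, here is the kinematics. The 0.8 g deceleration and the 0.2 s brake-actuation lag are assumptions; the 40 mph speed is the figure used throughout this post.

```python
# Impact speed if hard braking starts t seconds before the car would reach her.
MPH_TO_MS = 0.44704
v0 = 40 * MPH_TO_MS                      # ~17.9 m/s

def impact_speed_mph(t_before_s, decel=0.8 * 9.81, actuation_lag_s=0.2):
    dist_to_pedestrian = v0 * t_before_s
    braking_dist = max(0.0, dist_to_pedestrian - v0 * actuation_lag_s)
    v_sq = max(0.0, v0**2 - 2 * decel * braking_dist)
    return (v_sq ** 0.5) / MPH_TO_MS

for t in (0.0, 0.5, 1.0, 1.5):
    print(f"braking {t:.1f} s out -> impact at ~{impact_speed_mph(t):.0f} mph")
```

Even half a second of hard braking takes a meaningful amount of energy out of the collision, and in this rough model anything beyond about 1.4 seconds avoids it entirely.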
The Arizona code states that at all times, drivers must, "Exercise due care to avoid colliding with any pedestrian on any roadway."
Important counterpoints
I've seen some calls to relegate prototype robocars to test tracks and simulation. This is where they all start, but only a tiny fraction -- perhaps 0.1% -- of the testing needed to reach safety can be done there. There is truly no known alternative for developing and proving these cars other than operating them on real roads with other road users, exposing those road users to some risk. The discussion is how much risk they can be exposed to and how it can be mitigated.
It's also important to realize that these cars are prototypes, and they are expected to fail in a variety of ways, and that is why they have safety drivers performing oversight.
This accident will make us ask just how much risk is allowed, and also to examine how well the safety driver system works and how it can be improved. We are shocked that Uber was operating a car that did not detect a pedestrian in the middle of the road, and shocked that the safety driver failed at her job to take over if that happens. But we must understand that the prototype vehicles are expected to fail in different ways. I don't think a vehicle should have failed in so simple a way as this, but most of these cars in testing still get software faults on a fairly frequent basis, and the safety drivers take over safely. The answer is not to demand perfection from the cars or we can never prototype them. Sadly, we also can't demand perfection from human safety drivers either. But we can demand better than this.
Whither Uber?
This will set Uber's efforts back considerably, and that may very well be the best thing, if it is the case that Uber has been reckless. It will also reduce public trust in other teams, even though they might be properly diligent. It may even sink Uber's efforts completely, but as I have written, Uber is the one company that can afford to fail at developing a car. Even if they give up now, they can still buy other people's cars, and maintain their brand as a provider of rides, which is the only brand they have now.
I suspect it may be a long time -- perhaps years -- before Uber can restart giving rides to the public in their self-driving cars. It may also slow down the plans of Waymo, Cruise and others to do that this year and next.
At this point, it does seem as though a wrongful death lawsuit might emerge from the family of the victim. The fame for the lawyer will cause pro bono representation to appear, and the deep pockets of Uber will certainly be attractive. I recommend Uber immediately offer a settlement the courts would consider generous.
And that they tell us more about what really happened and, if it is as surmised, get their act together. The hard truth is that if Uber's vehicle is unable to detect a pedestrian like this in time to stop, Uber has no business testing at 40mph on a road like this. Certainly not with an inattentive solo safety driver.
This article is a follow on to an initial analysis and a second one.
Comments
suha esen
Thu, 2018-03-22 15:52
don't try to sugarcoat the irresponsibility in testing sdcars
Look at the aerospace industry for safety and testing, there's much to learn and they're not risking lives... no company has a luxury making tests risking innocent peoples lives that stupidly...
and "and shocked that the safety driver failed at her job to take over if that happens." attention and control is bound to each other, if you take control from driver there is no way he/she can stay focused long to respond sudden events.. to expect such a high grade attention from passive driver is bullshit... lots of data can be found about attention/focus of pilots/operators/drivers and there is a field called "human factors engineering"...
Becca
Thu, 2018-03-22 15:55
Reaction times for safety drivers
I wonder how common it is for a safety driver to do something effective in these sorts of tests. It seems like the driver would see potential hazards all the time just like when driving, but normally the safety driver would not need to react because the car would react. So even an alert safety driver might not react quickly enough here, because he or she would be used to not reacting and would first need to realize that the car didn't respond to the pedestrian. It seems to me like a lot of training would be needed to do this job well, especially in a situation where the car is almost always doing the right thing. I'm not saying this driver was doing a great job, but I question whether putting an average driver in the front seat as backup for the robo driver is as effective as we would like.
art
Thu, 2018-03-22 17:42
Well ok. Maybe we should blame the driver?
I'm wondering if maybe in a sense this is all the fault of the driver. That is, is what we are seeing here in reality the total capability of the Uber sensor system? Is it actually incapable of distinguishing a person on the roadway 50 (or 20 or 2) feet ahead and the only reason we haven't seen this happen before is because every other time a similar situation presented itself the safety driver was not distracted and took control? And the only thing out of the ordinary here was the driver playing Words With Friends.
I would dismiss this as a fringe conspiracy theory except that Uber has a somewhat irresponsible documented track record of just blasting ahead, faking it, and then attempting to clean things up in the wake. How sure are we that their cars actually do what they say they do?
brad
Thu, 2018-03-22 22:20
Intervention
Teams generally treat any situation where a safety driver intervention was necessary, rather than just cautionary, as a major event -- effectively as bad as an accident. So no, that should generally not be happening very often. A typical technique after an intervention is to play back the data and put it in a simulator to see what the car would have done without the intervention. If the answer is "contact," that's considered very important. Note that Waymo, while it has not published, probably has its miles-between-contacts number up over 250,000 miles, if not better.
dr2chase
Thu, 2018-03-22 20:04
Reaction time numbers
Where do those come from? I've measured, on 60fps video, 0.6s reaction time in a 55 year old man (me).
That was time between seeing a car and initiating a swerve on a bicycle (the front basket moves distinctly to the left).
I don't think I'm superhuman.
Does "reaction time" include time to move foot to brake pedal? If so, mighty poor design if you ask me.
Video here, since otherwise people are guaranteed to claim this is BS: https://www.youtube.com/watch?v=toIGD7SrpYo
(You can use the "." key to advance one frame at a time; there are 60 per second, I just checked.)
I collect a lot of video, it's amazing how much better eyeballs are than a camera, especially at night.
brad
Thu, 2018-03-22 22:18
You are good
You are not superhuman, but you are good. Note that it may make sense -- I don't know if Uber does it -- for recruiters for the safety driver job to test reaction times of applicants in various situations. But yes, that includes moving your foot onto the brake. (Not off the gas, your foot is not on the gas.)
Generally 0.7 seconds is considered "good" and many people are far longer. It depends on the situation and how alert the driver is.
Note that with LIDAR systems the reaction is only a little bit better, 0.4 to 0.5 seconds. The refresh rate of the Velodyne is 100ms, so it can, in theory take 200ms until you've had 2 sweeps over an object, though you would like more to be sure, and then you must analyse and decide. After you decide, actuation of the brake fluid pump is of course superhumanly fast.
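As a rough illustration, that 0.4 to 0.5 seconds might break down as follows; only the 100ms sweep period is stated above, and the other component times are illustrative assumptions.

```python
# Illustrative latency budget for a LIDAR-triggered emergency brake.
budget_ms = {
    "two LIDAR sweeps over the object": 200,   # 10 Hz spin -> 100 ms per sweep
    "perception and decision": 150,
    "brake actuation (pump pressure)": 100,
}
for stage, ms in budget_ms.items():
    print(f"{stage:35s} {ms:4d} ms")
print(f"{'total':35s} {sum(budget_ms.values()):4d} ms")
```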
suha esen
Fri, 2018-03-23 00:58
software crashes???
"but most of these cars in testing still get software crashes which shut them down entirely on a fairly frequent basis, and the safety drivers take over safely" wait this must be a joke... you people should get serious and start thinking of self driving car as a "life dependent system", not the game console on you living room... you can't put fucking windows on a self driving car's control and risk peoples lives with sudden bluescreens... can you think of any control related software on a Boeing jet crashes on flight? they do whatever it takes, even designing hardware and software through to its smallest components with highest specifications, and they succeed... that crappy approach of programmers makes me sick, be an engineer first... yes it costs, developing extensive test systems, designing reliable hardware within the specs of aerospace industry, even more reliable software is possible with the right engineers... If it costs, it costs, you will not finance it with adclicks at the end, people always pay more for safety... Don't go such cheap on testing as relying on traffic rules applies for humans will save your company's ass at the end, it's just immoral...
brad
Fri, 2018-03-23 11:59
Shut down entirely
I overstate when I say shut down entirely. I mean that the systems declare themselves non-functional. Hard "hang" crashes are very rare but not impossible, especially in the prototypes. Indeed, good designs feature more than one computer system of independent design and hardware, and complete failure of the main system does not affect the other and it is able to continue driving. However, in such situations, the safety driver is always asked to quickly take control.
Jeremy A
Fri, 2018-03-23 02:18
Visibility is better than Uber seems to want to admit
I came across this video shot by someone who commutes through the route that the accident happened on (link). It seems pretty clear that visibility is what you would expect for a metro area at night, and that a relatively un-distracted driver would at least be able to see the pedestrian in the road. Personally, I believe that I have encountered similar situations with either an obstacle or an animal in the road and I've had to suddenly brake hard to avoid a collision.
Gina Hansson
Fri, 2018-03-23 03:55
“Wither”
You probably meant “whither”.
Higgy
Fri, 2018-03-23 05:50
Avoidance
I haven't noticed any discussion of avoidance. Even if there was not enough time to brake, there appeared to be sufficient time to swerve into the empty lane.
Obviously the system did not see the victim - but only discussing the possibility of braking ignores what would have been a normal driver reaction to swerve.
There would be a strong probability that this would have avoided the victim.
Bob
Fri, 2018-03-23 06:57
Obvious blind spot due to poor lighting
Leaving aside the issue of the performance of the vehicle for the moment, the video makes it clear that there is a poorly lit section of the road and that is exactly where the pedestrian was crossing, only to emerge from the poorly lit area a moment before the crash. It is possible that even a human could not have seen her under those conditions and that likely only the most adept driver may have been able to maneuver quickly enough so as to avoid the crash with so little time to spare, given how the pedestrian emerged from the dark. So between the sidewalks that lead directly to the street at a non-crosswalk location (completely idiotic - put a crosswalk where the people are crossing!) and the poor/uneven lighting, the design factors appear to be overwhelming contributors (and evil contributors at that) to the accident. If a sidewalk conveniently but wrongly leads pedestrians to the road, a barrier of some kind is called for. Imagine if this sidewalk led to a cliff or the edge of a volcano or ravine - surely there would be design features that would protect the hapless pedestrian beyond a sign. Failing to have a barrier is negligent and incompetent on somebody's part. That sidewalks on BOTH sides of this road wrongly lead pedestrians at this location is the utmost of evil and incompetence. Who is the mayor and whose city council district is this in? If I lived there I'd be paying a visit to these elected officials and it would not be a pleasant conversation for either of them.
Walt French
Tue, 2018-03-27 11:57
Poorly-lit for dash-cams but LIDAR brings its own illumination
The cops here are not expected to be engineers so I'll excuse their providing cover for the lack of adequate engineering.
(Their office is ALSO not responsible for determining who's at fault in a collision, and there's LESS excuse for knowingly going beyond their authority.)
LIDAR supplies its own illuminating pulses; when there's strong ambient illumination it works LESS WELL—it's harder to distinguish its own timed lighting from ambient. Brad explicitly notes that LIDAR should not have failed.
I'll also bet you a nickel that Arizona's official who OK'd the Uber experiment will not say he gave them a blank check to turn off safety systems at will, to operate vehicles that are purposely less safe than they could be. Brad's pointed the way to what I expect will soon be a very nasty situation for Uber: either their engineering was badly incompetent, or their testing protocol was unsafe by design. Neither points the way to a future that trusts Uber to send 2-ton hunks of steel careering around our streets.
Giacomo
Fri, 2018-03-23 08:35
Passenger's behaviour was part of the test.
Craig
Fri, 2018-03-23 09:12
The safety driver is clearly
The safety driver is clearly inattentive, but realistically, I think just about anyone in that position will be inattentive at times. When you're actually driving a car, you are actively engaged in its operation, and even then, people sometimes get distracted momentarily. A safety driver, on the other hand, is basically a passenger. They're supposed to pay attention, but really they're just sitting there passively watching most of the time, and no matter who you are, I think you'll eventually get bored after hours or days or weeks of riding around in the car and seeing it behave perfectly. You'll start to trust the car, your attention will start to wander, and nothing (at first!) will go wrong, so your attention will wander more, and if/when something finally does happen, you won't be able to prevent it. The same thing probably happened to that guy who got killed by trusting his Tesla's autopilot mode. So while the safety driver is technically at fault, I suspect any of the people judging him/her would have sooner or later made the same mistake if they had the same job. If you think you wouldn't, I think you're a fool. Having two safety drivers would be better, but there are no perfect solutions here.
Technology is never perfect. If there are eventually production self-driving cars, there will be incidents where people are killed because the technology doesn't behave as it should have, and people won't prevent these incidents because they won't be watching. Why would I use a self-driving car if I still have to watch everything it does and take responsibility for overriding it when necessary? The automation isn't really bringing me any benefit in that case. I might as well just keep driving myself.
David Emery
Fri, 2018-03-23 10:55
Volvo has 'pedestrian collision avoidance'
At least in the new XC-40, and I think some version of this is in their older XC-60 platform. It would be important, if not critical, to see if Volvo's own package could have detected and avoided this collision. Sharing this kind of data should be mandatory across self-driving car vendors/developers.
brad
Fri, 2018-03-23 11:45
Volvo system
Self-drive teams usually disable any built-in safety systems on their car. The new system is supposed to be superior. There is merit to the argument that the warning functions of the built-in systems should still be on to alert the safety driver.
echo
Fri, 2018-03-23 13:29
The social good
I have been reading up on the history of car accidents. When I first heard of the accident and read Brad's commentary I wasn't angry, I was upset. I feel really sorry for the poor homeless woman killed and, if reports are true, sympathetic toward the female driver, who according to comments was a transgender woman working on a corporate offender rehabilitation program. I dislike Uber's cavalier business practices but in the interests of justice prefer to withhold comment until a full and impartial investigation is held.
It is very easy to lose ourselves in talk about business practices and systems. I would hope that the potential for autonomous vehicles does not lose the human side of discussion and that the opportunities autonomous vehicles offer for enfranchising many often not very well off citizens living under difficult circumstances are not forgotten.
Carlos
Fri, 2018-03-23 15:04
What could an attentive safety driver have done?
How was the safety driver role supposed to work in this situation? I imagine they aren't supposed to take the controls until they've given the self-driving car a chance to act autonomously; by the time the safety driver decides that the car needs help, it's awfully late.
Safety drivers are great for situations like police directing traffic by hand. But in emergencies, I don't see how there could be time for two decision-making processes in series, especially when the backup system is slower to react than the primary one.
brad
Fri, 2018-03-23 16:02
A fully attentive safety driver
A fully attentive driver would, most people seem to conclude from looking at other video of the scene, have seen the pedestrian well in advance of the accident and taken the controls to brake in plenty of time.
If visibility were really poor, an attentive safety driver seeing her only 2 seconds before (she shows up on video 1.4 seconds before) would still have hit the brakes and swerved left at the last second to miss the pedestrian altogether. That's because a highly attentive driver would be keeping aware of other traffic on the road, and would know there was nobody in the lane to the left and that it was safe to swerve in this fashion.
You might suspect an attentive driver might even swerve into another car, all trolley-problem style, because they are less likely to kill the occupants of that car than the pedestrian. However, a robot would probably not do that, since the penalty for deliberately hitting somebody -- even to save another -- is vastly higher than the penalty for accidentally hitting somebody who was not supposed to be there. A human driver might get off for a spur of the moment decision (or even claim it to be an accident) but for a robot it would be a pre-meditated decision.
Carlos
Fri, 2018-03-23 16:19
I guess what I'm struggling
I guess what I'm struggling with is that if I were the safety driver, I would hesitate until I were convinced that the car really wasn't going to brake by itself. By then, we'd be right on top of the poor pedestrian.
I assume a major part of safety driving training is learning not to instinctively take the controls and learning to trust the car. Surely that's going to slow the safety driver's reaction time when they need to act.
Dan
Fri, 2018-03-23 16:06
last view of safety driver is looking left
In the video, the last view of the safety driver, with the shocked facial expression and dropped jaw, shows her looking to her left. But the collision was on the right side of the car. So it appears that there may have been a few seconds between seeing the pedestrian on the left prior to entering the traffic lane, and the collision on the right.
Dan
Fri, 2018-03-23 16:40
near misses
In the study of accidents (vehicles, aircraft, military, etc.), for every fatal accident there are usually multiple non-fatal injury accidents, and for each of those, there are usually multiple non-injury accidents, and for each of those, multiple near misses. It's instructive to study the near misses, because there are enough of them to study statistically. How often do SD-Uber's safety drivers take over? What are the primary reasons statistically for taking over? How often do SD-Ubers have to slam on the brakes because a hazard was detected late? And what do the statistics show to be the primary reasons for that? How often do SD-Ubers run red lights or stop signs, or fail to stay in their lane? Have near-misses been reduced to a satisfactory level, giving confidence that there will be very few more serious incidents?
brad
Fri, 2018-03-23 17:50
Good question
Self-driving car teams, at least good ones, treat near misses as very serious incidents, essentially just as serious as an impact, for just this reason. This is why Uber's failure here is baffling. This isn't a complex or difficult situation. Uber's cars have been out logging lots of miles in situations just like this, and they do spot and track the pedestrians under normal conditions. So we need to learn what failed here.
But that is odd. A broad failure would have had many near misses before this and been fixed. So this is some other sort of error.
Chen Du
Sat, 2018-03-24 00:09
request permission to quote
Hi Brad! I came to a similar conclusion before being sent a link to your explanation article. I write for a chinese tech blog named PingWest, more specifically its wechat channel which is currently non-profit. I want to request a permission to quote sentences from your article. They will be credited to you and/or Brad Ideas. Thanks!
brad
Sat, 2018-03-24 01:12
Permission to quote is not generally needed
But thanks for thinking of me. Unless you are quoting most of the article. I am familiar with PingWest
jason
Sat, 2018-03-24 04:53
Can these LIDARs have a blind
Can these LIDARs have a blind spot and the pedestrian perchance walked under the blind spot of coverage? It would be a poor design if this were the case.
brad
Sat, 2018-03-24 12:35
No
This LIDAR does not have any blind spots, except areas very close to the car where the car body itself blocks the view from the perch above the roof. They also don't see that far above the level of the road.
Albin Foro
Sat, 2018-03-24 07:16
Questions about lidar itself
Doesn't take much googling to learn that there are other and potentially better technologies for low light or fog. Elon Musk likes ordinary cameras better, and apparently big rig truckers are looking to automate with old-fashioned radar: less detail, but longer range, and using solid state instead of less durable mechanical components. Uber seems to be using its own "stolen" and half-baked version of Waymo, which itself may have left Uber's version in the rear view - so here we are.
Reynolds Cameron
Sat, 2018-03-24 11:36
Never unavoidable
“That would not be true if the pedestrian were crossing the other way, moving immediately into the right lane from the right sidewalk. In that case no technique could have avoided the event.”
100% false. Defensive driving involves actively scanning the periphery of the vehicle's path of travel to detect potential threats and dangers. There is no excuse for any driver, autonomous or human, to drive more than 15mph if there is a child present, or a pedestrian facing the roadway. The expectation must always be that the pedestrian might trip or dash into the roadway. Slow down and change lanes to provide at least 6’ clearance between the vehicle and pedestrian/bicyclist. Doing otherwise amounts to gross negligent homicide.
brad
Sat, 2018-03-24 12:41
Not in reality
Defensive driving is taught in driving school but is not the law, and is almost never practiced by human drivers in the way you describe. The law says a pedestrian attempting to cross the road outside the crosswalk has to yield to cars. We often stand on the sidewalk, waiting for a break in traffic, and cars almost never, in my experience, slow down. You could decide to hold the robocars to that much higher standard, and that would cause the public to become very frustrated with them.
The law has decided right of way here, even though it means higher risk for reckless pedestrians who cross without carefully waiting for a long gap in traffic. It is a decision necessary to create good traffic flow. The decision can be argued or changed, but that's how it is. Some day we may hold the robots to a higher standard than the people, but if they are going to get the R&D necessary to let them reach safer levels than humans -- which we all want -- it would be folly to try to require that today.
James Salsman
Sat, 2018-03-24 12:28
Tangental: what happens if a robocar is cited for speeding?
If a cop pulls an autonomous car (let's say for the sake of discussion without a human backup safety driver) over and writes it a ticket for speeding, who pays it: the owner of the car, the software configuration manager(s), programmers, or someone else? If "points" are deducted, from whose license?
Albin Foro
Sat, 2018-03-24 12:47
Robocars will not speed, squeeze thru signals, be normal
Because of legal liability issues, autonomous cars are going to be obsessively scrupulous about all road rules, and that's why I doubt they will ever mix with human drivers. Human drivers will love sharing the road with robo big rigs and delivery trucks that are predictable and safe and won't double-park cities or tailgate on the highway. But unleashing millions of robocars will be like clogging the roads with millions of old grannies, infuriating the ordinary driver who pushes speed limits, squeezes traffic signals, and has "normal" expectations for other human traffic violations to suit road conditions. My guess is cities that go autonomous will completely ban human driving, and roads that permit humans to drive will ban the robots.
brad
Sat, 2018-03-24 12:57
Pull over
Well, generally I don't think any unmanned vehicles will speed. Vehicles with humans in them sometimes have speed controls, similar to cruise control, which allow speeding at the human's order. Police could pull one of those over, and possibly ticket the human who set the speed.
Normally there would be no desire to "pull over" an unmanned vehicle. What do you want to do, talk to it? If an unmanned vehicle is actually acting in a dangerous way, police probably would be able to pull it over the usual way -- get behind it with siren and flashers -- or more easily, just call in to HQ and provide the licence plate, so HQ can then contact the operator and tell them to take the vehicle out of service immediately. If the vehicle is not dangerous, I think they would just record what it did and file a report which would get back to the company operating it.
Which would cause an immediate major panic at the company, since of course they will test their vehicles extensively to make sure this can't happen. Because they are not perfect, it might still happen, but ideally in the range of a few times a year for the entire fleet. Once a problem is identified, if the officer is in the right, the problem will be fixed and not happen again.
Almost all proposed legal regimes would put any fines or penalties on the operator of the vehicle, that is to say the fleet operator, or vendor.
For a car to do something that deserves "points" seems so rare that it may be simplest to deal with it as a special case. Frankly, if an entire model of car were to get points more than once I would be shocked. Since the only suitable punishment for points for a fleet would be to disable the fleet, it's such a huge consequence -- a business destroying one -- that there will be huge efforts to prevent it. That's less true for a small operator doing development, but such vehicles do not operate unmanned.
Would society do this? Shut down General Motors or Waymo as a company, if their cars did a few dangerous things? The dangerous things are very bad, but it probably makes sense to have a serious punishment available -- just nothing so big as that.
James Salsman
Sat, 2018-03-24 15:17
NYT: Uber’s Self-Driving Cars Struggling Before Arizona Crash
https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html
Average miles before human intervention required:
Google/Waymo: 5600
Cruise: 1200
Uber: 13
Dan
Sat, 2018-03-24 18:15
We already have risk-based
We already have risk-based engineering methods for safety-critical systems, such as IEC 61508, and the automotive-specific ISO 26262. In functional safety it is necessary to define the risks and specific countermeasures required to operate at a tolerable level of risk. I’m wondering whether the systems being developed for robocars are done using these methods - and if not, why not? You don’t get to operate an oil refinery, develop an industrial robot, or write safety-related software in cars without functional safety these days - so are Google, Waymo, Uber & co. held to the same standard? Or have they been getting a pass due to the Silicon Valley distortion field? This death was a completely avoidable tragedy, and if it turns out that lax engineering of safety controls was to blame, then there should be jail time for the persons responsible at Uber.
Side note: human supervision can be a reasonable safety control, but should NEVER be the primary control in a situation that could result in a fatality. Especially as these supervisors have a perverse incentive not to act (companies are using number of human interventions as a performance metric), the human supervisor is only partially responsible for this incident IMO.
brad
Sat, 2018-03-24 19:36
These are used
Especially in auto industry efforts. I do not think they are always the wisest idea. Standards only encode conventional wisdom and existing practice. The job of all teams is to invent entirely new ways to be safe, different from and better than those of the old world, but sometimes in violation of them.
FroMars
Sun, 2018-03-25 07:21
Taking the video for granted
Many of the comments here take the low-res tweeted video “released” by the police for granted. These are the comments that talk about how the cyclist popped out of the shadows in a poorly lit area and/or calculate potential time available for potential human reaction based on first appearance of a foot. Some have pointed out that a human might have had better vision and some have pointed out (rightly, I think) that local residents have thoroughly debunked the darkness of the video by filming their own on the same street.
Let’s set aside the dubious darkness. Let’s also set aside some dodgy black blotching over the driver: watch the video frame-by-frame, and there is a black spot that looks pretty intelligent for a digital artifact... it is not part of the long “shadow” across the road, and it slowly recedes... revealing first the legs, and then shrinking to something approximating the size of her pitch black top—why pitch black? How can that be? We see all of the details of her jeans. Was the pitch black smudge put there to enhance the surprise/jump-out-of-the-bushes element?
Even if you give Uber the benefit of the doubt on all of the above, here is where it all falls apart for me. If you run the video backwards and follow the path of the backward movement of the cyclist into the darkness, you can see where her head passes in front of one of the lighted buildings (it’s the third oblong from the top on the upper left, the vertical one; zooming in helps but you can clearly see the head pass in front of it). Running the video forward again you can see how this occurs well before her feet first appear. Run backward again until a few moments after her head passes in front of the light and... whoops... what’s this? There is a bicycle-shaped blob, with wheels slightly protruding below that “shadow” crossing the road... like it has been redacted. In fact, once you see this, it is hard to unsee the possibility that someone manually redacted the cyclist completely until the feet started to appear, and even then continued to redact in order to cover up the top of her clothes, which may have been way more reflective. Seriously... we can see her jeans plain as day but she managed to order a Vanta Black sweater?
Add to this fact that: 1) the driver looks to the left at the end of the video, and 2) we don’t know if the video actually stops at the moment of impact...it could have been stopped earlier to create the impression that limited reaction time was available ...and I am willing to bet that the bicyclist was indeed visible long before we are led to believe by the video.
It may just be that we are all being royally manipulated. If so, this doesn’t invalidate most of the good points being made in the blog post and the comments...just the ones that rely on the video to suggest that reaction time or darkness or clothing contribute in some way (vs failure of lidar or jaywalking) to allocation of blame.
The counter argument to all of the above is... I am just a random moron with an iPad... why am I so lucky to have figured this out when no one else has? I don’t have an answer for that, especially because I have been professionally trained to be skeptical of such “lucky” situations. Someone with better video equipment, please prove me wrong!
brad
Sun, 2018-03-25 12:25
Hard to credit
I see no evidence for a conspiracy theory at present. In particular
Mark Andrew Wood
Sun, 2018-03-25 09:23
Victim ignored headlights; computer NEVER detected victim
1: the vehicle's sensor detection sensitivity at 300?150?50 feet is not really the issue ... From 50,40,30,20,10 feet, the vehicle *never* braked = never comprehended her existence
This is a logic-parse failure ...
Case:
If person then dodge
If person-on-bike then dodge
If dog then dodge
But nothing for person-walking-bike
It's like PWB fell through into an unhandled exception
The LIDAR routine may have contained the PWB case-handler, but the radar+camera-only logic path clearly did not
Separately, 38mph is not fast ... The pedestrian could theoretically see those headlights from a mile away, a half mile, a quarter mile, 100 yards, etc ... This was really really bad pedestrian-ing
brad
Sun, 2018-03-25 12:14
That's not generally how lidar and others work
With LIDAR, radar and several of the vision approaches, it is not required or expected to classify an obstacle. "Unknown obstacle" will still be detected, and should be slowed for. With image classification, you might miss something you don't recognize, but you would still see it if you are doing stereo, motion parallax, and motion detection. Also person walking bike is less common, but far from unknown as a thing.
Jordan
Sun, 2018-03-25 10:02
Reflectivity and Lidar Range
Brad - the question of Lidar's range, and how far out it should have been able to detect the pedestrian, is an important part of the conversation around the accident. I've read a few sources (e.g. Lidar for Self-Driving Cars and Under the Hood of Luminar's Long-Reach Lidar) that say that it is only 30-40 m for low reflectivity objects (10%), which would be no farther than the headlights/camera. Are there tables or other sources somewhere on the internet about reflectivity of various types of objects, including humans? My understanding is that it is about more than the color of the object, and includes things such as the material, thickness, "shininess", etc. Thanks.
brad
Sun, 2018-03-25 12:21
Reflectivity
No, a LIDAR like the Luminar most certainly sees a great deal further than 40m on low-reflectivity objects. In fact, it is common to judge the "real" range based on about 10% reflectivity, which is fairly black. The victim in this case also had on blue jeans, bright hair and skin, a red bicycle and it had white bags on it though they were mostly behind her.
Jordan
Sun, 2018-03-25 15:00
Reflectivity
Brad - I understand the difference b/w Luminar's long-range lidar and the more "conventional" Velodyne lidar the Uber car had. The 30-40m range in the article on Luminar refers to 905nm lidar, and is used to explain the significance of Luminar's 1550nm lidar. That said, I could have been more specific about what type of lidar I was referring to.
The question still exists however. Even if the only part of her in low-reflectivity clothing was her shirt, that still significantly reduces the size and changes the shape of the object she would appear as to the object classification neural nets. The smaller object comprising a head floating above a combination of legs and bike might need more passes of the lidar to be seen, or might not be recognized at all. Etc. I'm not enough of an expert on lidar or object recognition/CNN but it doesn't seem crazy to imagine that at the edges of the sensors' range capabilities that this could make just enough of a difference to screw things up. Especially in a system like Uber's where everything wasn't exactly tip-top in the first place.
I do have a question about this line: "In fact, it is common to judge the "real" range based on about 10% reflectivity, which is fairly black." Are you saying that "real" ranges are underestimates of the range for anything but dark black? What about other factors of reflectivity? Clearly color is only one factor.
brad
Sun, 2018-03-25 15:23
The neural nets
Well, I can't tell you how well the LIDAR would perform on her without having her clothing and the units in question, but one thing to note is that LIDAR detects objects fine without any use of neural nets. That doesn't mean many people don't use neural nets to help them classify their LIDAR targets (and even radar targets these days) but LIDAR objects can be detected at a much more basic level, where you don't know what is there, but you know it is something, and you generally stop for "something" unless something else tells you "don't stop." For example, a rare example of what not to stop for would be birds, though most small birds barely even show to the LIDAR. But when they do, then a vision system might tell you, "While you see something on the road and want to stop, we can confirm it's just birds so don't stop." However in the case of "we see something on the road and don't know what it is" the answer is to stop just to be safe. There are a few other things you don't stop for -- blowing plastic bags, clouds of exhaust, very shallow debris (which does not show on LIDAR very well) but until you get very smart at figuring out what not to stop for, you stop for anything you don't understand. Certainly anything as large as a pair of legs or a bicycle.
I do not have the specs with me but I do believe the Velodyne should be able to see a typical black shirt at 40m. Velodyne has itself examined things and issued a press release that they believe they would have readily detected her.
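To make the "stop for anything you don't understand" policy above concrete, here is a toy sketch; the category names and the ignore list are purely illustrative, not any team's actual logic.

```python
from typing import Optional

# The short list of things a system might learn it is safe not to stop for.
IGNORABLE = {"bird", "plastic bag", "exhaust cloud", "shallow debris"}

def should_brake(lidar_sees_object: bool, vision_label: Optional[str]) -> bool:
    """Brake for any LIDAR return unless vision confidently labels it ignorable."""
    if not lidar_sees_object:
        return False
    if vision_label in IGNORABLE:
        return False
    return True   # a None label means "unknown obstacle", which is a reason to stop

print(should_brake(True, None))           # True  -- unclassified blob, stop anyway
print(should_brake(True, "plastic bag"))  # False -- vision overrides the default
```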
Dana Farricker
Sun, 2018-03-25 14:01
"You can watch the videos
"You can watch the videos here if you have not seen it."
So there's NO proofreading at this site for basic grammar, then?
Andrew
Mon, 2018-03-26 02:37
Concept of safety drivers is fundamentally flawed
Whether or not the automation did as intended, or whether the person should not have been crossing at that location, or any other of the myriad factors that have been highlighted in this case: they all ignore the basic premise that the concept of having safety drivers to 'step in' when required is fundamentally flawed. In response to the earlier post about 'automation complacency'; yes, the topic has been discussed at length in the psychological literature. It is often cited as a reason for incidents/near misses, e.g. in process control of chemical plants/refineries etc., where the operator's role has changed from one of control to one of monitoring and responding to alarms. The seminal paper by Lisanne Bainbridge (1983) discusses the topic of the 'Ironies of Automation'; when the computer goes wrong, the person is left trying to work out what is happening as they are 'out of the loop' and lack any sufficient situational awareness to be able to respond. It has probably been most studied in the aviation industry with autopilots. There's a reason pilots receive hours and hours of training and refresher training to respond to dangerous scenarios; I doubt the same will ever be feasible with cars though. However, the problem of having a safety driver is that most of the time they have nothing to do, so their mental workload is minimal. A paper by Young and Stanton (2002) discusses the concept of 'malleable attentional resources', such that people's maximum workload capacity is actually temporarily reduced by periods of low workload. Essentially, without systems to keep a driver occupied with the task (and not distracted by mobile phones etc.!), the use of a safety driver cannot and should not be relied upon to intervene in such circumstances. It is my view that level 3 automation for self driving cars should be avoided entirely until the technology has advanced sufficiently to level 4.
brad
Mon, 2018-03-26 11:43
Not flawed
The concept of safety drivers is far from flawed. It is only flawed if your standard is complete perfection. That is a very wrong standard, and would mean that you never get robocars. The standard is to match humans in testing, and to eventually surpass their safety level.
At other companies like Waymo, the combination of system plus safety drivers is so far surpassing the human safety level. I think they can claim that with their methods, it is not flawed, and is working very well. The only accident Waymo had blame in, fortunately just a fender bender for them, was one where the safety driver was alert but made the same mistake the system did about whether a bus would yield or not. Of course, they will still have accidents, but again, perfection is so not the goal, the way it is in aircraft and nuclear plants.
Dan
Mon, 2018-03-26 13:09
Not flawed
Perfection is not the goal in extreme situations, such as when a hazard appears too quickly to respond, or when another vehicle drives erratically, or in hazardous weather conditions.
But I think perfection _is_ expected when a pedestrian calmly walks, day or night, in clear weather across multiple lanes of traffic, and is not obstructed by any other traffic.
If Uber can't certify that its automatics have no difficulty detecting and avoiding such pedestrians, they probably shouldn't be certified for use on public roads.
Walt French
Tue, 2018-03-27 11:47
“Right of Way”
I've had the pleasure of going to traffic school a couple of times and vividly remember being told,
“You NEVER ‘have’ right of way. You often have explicit responsibility to yield right of way. As a driver, you are ALWAYS responsible for protecting others' safety; the Fundamental Law is to drive safely, to only proceed when it's safe.”
Otherwise, excellent article, excellent points about a system that failed pretty badly versus what it should have been able to do.
Anonymous
Wed, 2018-03-28 03:02
I can fully agree to this
I can fully agree to this article, especially with Uber not caring enough about safety until something goes wrong badly. Another thing which got me thinking: Not sure if I overlooked something here, but why the heck are there signs not to cross the street as a pedestrian when the middle "island" between the highways offers a well developed walkway. Is that a common thing?