On crosswalks and safety driver interventions for robocars
In the wake of the Uber fatality, I'm seeing lots of questions. Let's consider the issues of crosswalks and interventions by safety drivers.
The importance of the crosswalk
Crosswalks actually are important to robocars, even though cars should still stop for a pedestrian outside of a crosswalk.
At a crosswalk (marked or implicit) pedestrians have the right of way. They can, and do, just step out into the crosswalk and have a legal right to expect traffic will stop. Of course, if you are rational, you still watch the traffic and make sure it's really stopping before you go too far.
There are actually quite a few different "classes" of road space that exist, and pedestrians act very differently at them, and cars act differently because of that:
- Crosswalks with a crossing guard
- Crosswalks at intersections with walk/don't-walk signals or traffic lights, like Shibuya Crossing
- General marked crosswalks without signals or lights (at intersections or mid-block)
- Unmarked but official crosswalks that the law says implicitly exist at all intersections
- Non-crosswalks in places where it is still legal to cross, usually yielding right-of-way to the cars
- Non-crosswalks where it is illegal for the pedestrian to cross ("jaywalking")
- Non-crosswalks explicitly signed "do not cross here," which may nonetheless be known as places of regular crossing
- Roads with a physical barrier (fence or wall) blocking pedestrian access
- Limited access freeways (with different customs in different countries.)
- All of these in areas (except freeways, one hopes) where children are commonly present, particularly near schools at certain times
- All of these in different cities and countries, or with different speed limits.
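The classes above can be thought of as a rough hierarchy of pedestrian right-of-way, each dialing the car's caution up or down. As an illustration only, here is a minimal sketch in Python; the zone names, caution numbers, and the children-present multiplier are all my own invented placeholders, not anyone's actual system:

```python
from enum import IntEnum

class CrossingZone(IntEnum):
    """Illustrative classes of road space, roughly ordered from
    strongest to weakest pedestrian right-of-way (hypothetical names)."""
    GUARDED_CROSSWALK = 0
    SIGNALIZED_CROSSWALK = 1
    MARKED_CROSSWALK = 2
    UNMARKED_LEGAL_CROSSWALK = 3
    LEGAL_NON_CROSSWALK = 4
    JAYWALK_ZONE = 5
    SIGNED_NO_CROSSING = 6
    PHYSICAL_BARRIER = 7
    LIMITED_ACCESS_FREEWAY = 8

def caution_level(zone: CrossingZone, children_present: bool = False) -> float:
    """Map a zone to a nominal caution weight (made-up numbers).
    Higher means the car slows earlier for a pedestrian near the road."""
    base = {
        CrossingZone.GUARDED_CROSSWALK: 1.0,
        CrossingZone.SIGNALIZED_CROSSWALK: 0.9,
        CrossingZone.MARKED_CROSSWALK: 0.8,
        CrossingZone.UNMARKED_LEGAL_CROSSWALK: 0.7,
        CrossingZone.LEGAL_NON_CROSSWALK: 0.4,
        CrossingZone.JAYWALK_ZONE: 0.3,
        CrossingZone.SIGNED_NO_CROSSING: 0.3,
        CrossingZone.PHYSICAL_BARRIER: 0.1,
        CrossingZone.LIMITED_ACCESS_FREEWAY: 0.1,
    }[zone]
    # Areas where children are commonly present raise caution everywhere.
    return min(1.0, base * 1.5) if children_present else base
```

The point of the sketch is only that the same pedestrian, standing in the same pose, should produce different planning behaviour depending on which class of road space they are standing in.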
As you can see, there are a lot of variations, and I probably don't have to tell you how you, as a pedestrian, behave differently crossing at each of these, and how human drivers also act differently, if for no other reason than that the pedestrians act differently. Even though it is drummed into us as children to "Stop, look both ways and cross with care," it is not that simple.
There's no avoiding that robocars are going to treat these differently. If a person is standing on the corner at a marked crosswalk, everybody knows cars should slow, ready for the person to walk out into the crosswalk and claim their right-of-way. If cars slowed every time somebody was standing on the sidewalk or even the side of the road away from crosswalks, we would have much slower and more congested roads. We don't want that, which is why the rules of right-of-way have been encoded.
Unlike humans, robocars are always "fully alert" in that they're always looking in all directions at everything. However, how they let the things they perceive affect their plan does vary, as it does for humans, based on these situations. In our traffic rules, we've come to accept that if a car is driving in the right lane and somebody is foolish enough to just step out onto the street (especially from behind a parked van) that we assign no guilt to the driver for what happens. We don't ask them to drive with any special caution on such streets, other than in setting the speed limit lower as we generally do in pedestrian-heavy areas.
Many robocars are actually too cautious because their perception systems don't read human intent from facial expressions and body language as well as humans think we can. They don't even necessarily understand what is a human. Once I was taking a guest on a ride in an early-model Google robocar, and we came to a crosswalk. Right at the corner of the sidewalk -- the entrance to the crosswalk -- Google itself had placed a stand-up rectangular sign directing traffic to parking. The resolution of LIDAR is not very high, so the car saw a 5-foot-tall rectangular object poised but unmoving at the entrance to the crosswalk. It stopped, and the safety driver had to take over to get the car through that crosswalk. Today, the car is probably more sophisticated and can easily tell a sign from a poised person, but at the time stopping was the right decision. To coexist with other traffic, however, robocars can't be that unaware or that overcautious.
That's one reason teams are driving their cars for millions of miles, to learn about all these real-life situations. In my neighbourhood, some neighbours like to put out a cardboard cutout of a child poised to run across the street right on the road (in the unmarked area where cars usually park.) It's meant to remind drivers to think about children crossing roads and get them to slow down. Robocars don't need reminding, though until they learn to recognize that sign they might decide to stop.
While pedestrians do cross the road anywhere, they know to be more cautious and to yield when not at a crosswalk. When you do this, you usually wait for a break in traffic. On a fast road, you definitely wait for a break in traffic. On slower roads you sometimes trustingly walk among slow traffic, making eye contact and judging what the cars will do. This even results in an odd dance -- you step out into the road as a car approaches. Your plan is that the car will zoom by, and you will then quickly and safely go behind it and cross before the next car gets to your zone. But then the car slows for you. It even stops, and the driver waves at you to cross in front. The driver wants you where they can see you. You want the car already gone. You get mad, because the driver's deference has actually slowed you both down.
When a robocar is cruising the road, it attempts to know where there is a crosswalk and where there isn't, not simply because of the law, but because of the way pedestrians act differently. A pedestrian standing at the entrance of a crosswalk is a signal to slow and see what the pedestrian is going to do. A pedestrian walking briskly towards that crosswalk is also such a signal. But you can't slow down for every pedestrian along the side of the road in the middle of the block. You dial up different levels of caution around pedestrian activity in the two different places.
Everybody knows that pedestrians ignore the laws about jaywalking. Most people I know do it. But only a reckless pedestrian completely ignores it, and strolls into the street not looking, expecting to own it. Unless you want every car on the road to slow to a crawl any time a pedestrian is standing at the side of the road in the middle of the block, the two zones are going to be treated differently, and there will be higher risk for those crossing outside of crosswalks.
As noted, machines are nowhere near as good as humans at reading human body language, facial expressions and intent. People are working on that, but it is some time away. Generally, the view is not that machine reading of human intent must match human ability before we let cars on the road. Rather, the systems of customs we have for crossing the road should suffice when combined with additional, but not excessive, caution by the robots. The debate gets more controversial in the presence of children, who don't know the full rules of the road, no matter how much we remind them to stop and look both ways before crossing. The same goes, to a lesser degree, for drunk people.
There are different opinions on this. For example, Adriano Alessandrini of CityMobil2 takes the (I think very extreme) view that a car should always go so slowly that it could stop if a bicycle appeared from behind a building and attempted to cross its path, which effectively limits vehicles to 5mph outside of closed zones or very wide open zones -- not very good for mobility.
Some notes on interventions
The world is learning about the principles behind the use of safety drivers. Testing prototype cars with safety drivers has been the standard R&D path since Google's early stealth project in 2009. Nobody has yet come up with a better way. It mirrors how humans learn to drive -- first with a driving instructor who has an emergency brake pedal -- and later on the road. Teens have terrible driving records, and kill themselves and others more often than older drivers, and if pure safety were our goal, we might not wish to allow them on the road. But letting them drive is the only way we know to turn them into the better drivers we all become, and it also gives them essential mobility. Indeed, with the arrival of Uber-style services and robotaxis, mobility will be available in other ways, which might change this equation.
A common policy is to have two safety drivers, one with a focus on the road, another with a focus on the software. The second "software operator" still looks at the road from time to time, and provides a second pair of eyes. If they see something, they will yell out. They could be given their own brake to press, but so far that has not been necessary. In addition, having a buddy in the car reduces monotony. The main driver can even ask the software operator to watch the road during those moments when the driver's eyes must wander, even briefly, from it.
Safety drivers at most companies (though I am most familiar with Google/Waymo) are given extra driver training and need clean driving records. They are trained to be vigilant and ready to take the controls, and they also practice doing so. Being a really good safety driver is hard. You want to constantly be watching not just what's ahead, but what's in the other lanes in case you will suddenly need to swerve. A good driver of any kind, it is said, never needs to check her blindspot because she already knows if somebody is there.
They are instructed to be conservative, and intervene if they ever suspect that the software is not acting properly or that a situation is too dangerous to allow the software to handle the car. Sometimes it's obvious -- I've been in cars that suddenly veered to the left, or where the wheel started obviously wobbling -- and sometimes it's subtle. The goal is to not put the public at any more risk than necessary. That's not no risk, but it's hoped to be no more risk than having student drivers or teen drivers on the road offers.
Unfortunately, there is the ideal and there is the reality. It's hard to stay that vigilant. That's one reason having two drivers can help. There may also be subtle pressures on the drivers which have unintended consequences.
For example, since California requires all testers to report disengagements, an incentive has emerged to have fewer of them. Companies might, entirely without intending to, communicate that incentive down to drivers. Certainly there will be pressure to avoid unnecessary disengagements. I am sure nobody wants to be the driver with twice as many false-alarm disengagements as the others. This, in time, may make drivers less cautious. Perversely, while the reporting of disengagements was intended to create transparency and assure public safety, it might do the reverse.
A large fraction of disengagements come from doubt not on the part of the drivers, but of the software itself. Modern software is chock full of code -- usually more than half of it -- checking that everything is working right. If not, the software "throws an exception" and the problem is logged and ideally dealt with. If it can't be dealt with, these exceptions will cause the system to ask the safety driver to take over immediately. Later testing will reveal whether the flaw was a serious one that might have led to a safety incident, or just a warning of something to look into. Most of the time, fortunately, it's the latter.
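The self-checking pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not any company's actual code; the check names, thresholds, and frame fields are all assumptions of mine:

```python
class SelfCheckError(Exception):
    """Raised when an internal consistency check fails (illustrative)."""

def check_sensor_frame(frame: dict) -> None:
    # In real systems, checking code like this often outnumbers the
    # "real" driving logic. These particular checks are made up.
    if frame.get("lidar_points", 0) == 0:
        raise SelfCheckError("no LIDAR returns this frame")
    if frame.get("timestamp_gap_ms", 0) > 150:
        raise SelfCheckError("sensor data is stale")

def process_frame(frame: dict, log: list) -> str:
    """Return the system's decision for one sensor frame."""
    try:
        check_sensor_frame(frame)
        return "DRIVE"                 # everything checks out
    except SelfCheckError as err:
        log.append(str(err))           # the problem is logged...
        return "REQUEST_TAKEOVER"      # ...and the safety driver is asked
                                       # to take over immediately
```

The key property is the one the article relies on: a sensor failure is loud. The checks fire, the event is logged for later triage, and the handoff to the human happens immediately rather than the system silently driving on bad data.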
This is why many of us don't think it's likely that the LIDAR or other key sensors on the Uber car had failed. Sensor failure is obvious, and it seems unlikely that Uber's systems would not immediately detect it and trigger a disengagement. The car was not operating right and, worse, it was unaware that it wasn't operating right. Worse still, the safety driver was not paying attention and was also unaware. A bunch of things had to go wrong that should not have.
One, two or zero safety drivers
The more safety drivers the better. Most teams start with two; it's not easy to fit more than two in most cars. Everybody is trying to get to zero, however, and so far only Waymo has had the confidence to do that.
At this stage, it seems like a good policy would be:
- Have two drivers with brand-new software revisions, or for operations in areas and situations where the vehicle is not well verified.
- After a vehicle has matured to a low intervention rate in an operating domain, drop down to one safety driver in order to make testing more efficient.
- After testing confirms sufficiently reliable operation, a move to zero drivers is possible.
- All vehicles will likely have a data link back to an operations HQ, where humans act not as safety drivers, but offer strategic advice to help the vehicle solve complex problems while it is stopped or driving slowly.
Cargo robots which have no seat for a human will never have a safety driver. At Starship, our policy for early operations was to have a handler who follows each robot on foot. The handler has a "kill switch." At these low speeds, however, the risk of harm to pedestrians, even from an impact, is extremely low. For heavier robots, the harm may be greater.
On-street cargo robots may be served either by a chase vehicle that follows them, or by a high-reliability, very-low-latency data network that lets a remote driver perform the safety-driver functions.
Coming up: Government regulation and minimum standards