There are many elements of this letter which would also apply to Tesla and other automakers which have built supervised autopilot functions.
Of particular interest is the paragraph which says: “it is insufficient to assert, as you do, that the product does not remove any of the driver’s responsibilities” and “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.” That must be very scary for Tesla.
I noted before that the new NHTSA regulations appear to forbid the use of “black box” neural network approaches to the car’s path planning and decision making, and I wondered whether this made the approach being taken by Comma, NVIDIA and many other labs and players illegal. This letter suggests that it may.
We now have a taste of the new regulatory regime, and it seems that had it existed before, systems like Tesla’s autopilot, Mercedes Traffic Jam Assist, and Cruise’s original aftermarket autopilot would never have been able to get off the ground.
George Hotz of comma declares “Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn’t worth it. The comma one is cancelled. comma.ai will be exploring other products and markets. Hello from Shenzhen, China.”
To be clear, comma is a tiny company taking a radical approach, so it is not a given that what NHTSA has applied to them would have been or will be unanswerable by the big guys. Because Tesla’s autopilot is not a pure machine learning system, they can answer many of the questions in the NHTSA letter that comma can’t. They can do much more extensive testing than a tiny startup can. But even so, a letter like this sends a huge chill through the industry.
It should also be noted that in Comma’s photos the box replaced the rear-view mirror, and NHTSA had reason to ask about that.
George’s declaration that he’s in Shenzhen gives us the first sign of the new regulatory regime pushing innovation away from the United States and California. I presume the regulators will say, “We only want to scare away dangerous innovation,” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. All of it is trying to take the car, the 2nd most dangerous legal consumer product, and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.
I sometimes ask, “Why do we let 16-year-olds drive?” They are clearly a major danger to themselves and others. Driver testing is grossly inadequate. They are not adults, so they don’t have the legal rights of adults. We let them drive because they are going to start out dangerous and then get better. It is the only practical way for them to get better, and we all went through it. Today’s early companies are teenagers. They are going to take risks. But this is the fastest and only practical way to let them get better and save millions of lives.
“…some drivers will use your product in a manner that exceeds its intended purpose”
This sentence, though in the cover letter and not the actual legal demand, looks at the question asked so much after the Tesla fatal crash. The question which caused Consumer Reports to ask Tesla to turn off the feature. The question which caused MobilEye, they say, to sever their relationship with Tesla.
The paradox of the autopilot is this: the better it gets, the more likely it is to make drivers over-depend on it. The more likely they will get complacent and look away from the road. And thus, the more likely you will see a horrible crash like the Tesla fatality. How do you deal with a system which adds more danger the better you make it? Customers don’t want annoying countermeasures. This may be another reason that “Level 2,” as I wrote yesterday, is not really a meaningful thing.
NHTSA has put a line in the sand. It is no longer going to be enough to say that drivers are told to still pay attention.
Comma is not the only company trying to build a system with pure neural networks making the actual steering decisions (known as “path planning”). NVIDIA’s teams have been actively working on this, as have several others. They plan to submit comments to NHTSA on this element of the regulations, arguing that it should not forbid this approach until we know it to be dangerous.
It’s no secret that I’ve been a critic of the NHTSA “levels” as a taxonomy for types of robocars since the start. Recent changes in their use call for some new analysis, which concludes that only one of the levels is actually interesting, and even it tells only part of the story. As such, they have become even less useful as a taxonomy. Levels 2 and 3 are unsafe, and Level 5 is remote future technology. Level 4 is the only interesting one, and thus there is no real taxonomy.
Unfortunately, they have just been encoded into law, which is very much the wrong direction.
NHTSA and SAE both created similar sets of levels, so similar that NHTSA declared it would simply defer to the SAE’s system. Nothing wrong with that, but it does not address the core flaws. Better, their regulations declared that the levels are just part of the story, and put extra emphasis on what they called the “operating domain”: namely what locations, road types and road conditions the vehicle operates in.
The levels focus entirely on the question of how much human supervision a vehicle needs. This is an important issue, but the levels treated it like the only issue, and it may not even be the most important. My other main criticism was that the levels, by being numbered, imply a progression for the technology. That progression is far from certain and in fact almost certainly wrong. SAE updated its levels to say that they are not intended to imply a progression, but as long as they are numbers this is how people read them.
Today I will go further. All but level 4 are uninteresting. Some may never exist, or exist only temporarily. They will be at best footnotes of history, not core elements of a taxonomy.
Level 4 is what I would call a vehicle capable of “unmanned” operation — driving with nobody inside. This enables most of the interesting applications of robocars.
Here’s why the other levels are less interesting:
Levels 0 and 1 — Manual or ADAS-improved
Levels 0 and 1 refer to existing technology. We don’t really need new terms for our old cars.
Level 2 is perhaps best described as a more advanced version of level 1, and that transition has already taken place.
Level 2 — Supervised Autopilot
Supervised autopilots are real. This is what Tesla sells, and many others have similar offerings. They get used in one of two ways. The first is the intended way, with full-time supervision. This is little more than a more advanced cruise control, and may not even be as relaxing.
The second way is what we’ve seen happen with Tesla — a car that needs supervision, but is so good at driving that supervisors get complacent and stop supervising. They want a full self-driving car but don’t have it, so they pretend they do. Many are now saying that this makes the idea of supervised autopilot too dangerous to deploy. The better you make it, the more likely it can lull people into bad activity.
Level 3 — Standby driver

This level is really a variation of Level 4, but the vehicle needs the ability to call upon a driver who is not paying attention and get them to take control with 10 to 60 seconds of advance warning. Many people don’t think this can be done safely. When Google experimented with it in 2013, they concluded it was not safe, and decided to take the steering wheel entirely out of their experimental vehicles.
Even if Level 3 is a real thing, it will be short lived as people seek an unmanned capable vehicle. And Level 4 vehicles will offer controls for special use, even if they don’t permit a transition while moving.
Level 5 — Drive absolutely everywhere
SAE, unlike NHTSA’s first proposal, did want to make it clear that an unmanned capable (Level 4) vehicle would only operate in certain places or situations. So they added level 5 to make it clear that level 4 was limited in domain. That’s good, but the reality is that a vehicle that can truly drive everywhere is not on anybody’s plan. It probably requires AI that matches human beings.
Consider this situation in which I’ve been driven. In the African bush on a game safari, we spot a leopard crossing the road. So the guide drives the car off-road (on private land) running over young trees, over rocks, down into wet and dry streambeds to follow the leopard. Great fun, but this is unlikely to be an ability there is ever market demand to develop. Likewise, there are lots of small off-road tracks that are used by only one person. There is no economic incentive for a company to solve this problem any time soon.
Someday we might see cars that can do these things under the high-level control of a human, but they are not going to do them on their own, unmanned. As such, SAE level 5 is academic, and serves only to remind us that level 4 does not mean everywhere.
Levels vs. Cul-de-sacs
The levels are not a progression. I will contend in fact that even to the extent that levels 2, 3/4 and 5 exist, they are quite probably entirely different technologies.
Level 2 is being done with ADAS technologies. They are designed to have a driver in the loop. Their designs in many cases do not have a path to the reliability level needed for unmanned operation, which is orders of magnitude higher. It is not just a difference of degree, it is one of kind.
Level 3 is related to level 4, in particular because a level 3 car is expected to be able to handle non-response from its driver, and safely stop or pull off the road. It can be viewed as a sucky version of a level 4 system. (It’s also not that different — see below.)
Level 5, as indicated, probably requires technologies that are more like artificial general intelligence than they are like a driving system.
As such the levels are not levels. There is no path between any of the levels and the one above it, except in the case of 3/4.
This leaves Level 4 as the only one worth working on long term, the only one worth talking about. The others are just there to create a contrast. NHTSA realizes this and gave the name ODD (Operational Design Domain) to the real area of research, namely what roads and situations the vehicles can handle.
The distinction between 4 and 3 is also not as big as you might expect. Google removed the steering wheel from their prototype to set a high bar for themselves, but they actually left one in for use in testing and development. In reality, even the future’s unmanned cars will feature some way in which a human can control them, for use during breakdowns, special situations, and moving the cars outside of their service areas (operational domains.) Even if the transition from autodrive to human drive is unsafe at speed, it will still be safe if the car pulls over and activates the controls for a licensed driver.
As such, the only distinction of a “level 3” car is it hopes to be able to do that transition while moving, on short but not urgent notice. A pretty minor distinction to be a core element of a taxonomy.
If Level 4 is the only interesting one, my recommendation is to drop the levels from our taxonomy, and focus the taxonomy instead on the classes of roads and conditions the vehicle can handle. It can be a given that outside of those operating domains, other forms of operation might be used, but that does not bear much on the actual problem.
I say we just identify a vehicle capable of unmanned or unsupervised operation as a self-driving car or robocar, and then get to work on the real taxonomy of problems.
I had hoped I was done ranting about our obsession with what robocars will do in no-win “who do I hit?” situations, but this week, even Barack Obama in his interview with Wired opined on the issue, prompted by my friend Joi Ito from the MIT Media Lab. (The Media Lab recently ran a misleading exercise asking people to pretend they were a self-driving car deciding who to run over.)
Almost never do I give a robocar talk without somebody asking about this. Two nights ago, I attended another speaker’s talk and he got the question as his 2nd one. He looked at his watch and declared he had won a bet with himself about how quickly somebody would ask. It has become the #1 question in the mind of the public, and even Presidents.
It is not hard to understand why. Life or death issues are morbidly attractive to us, and the issue of machines making life or death decisions is doubly fascinating. It’s been the subject of academic debates and fiction for decades, and now it appears to be a real question. For those who love these sorts of issues, and even those who don’t, the pull is inescapable.
At the same time, even the biggest fan of these questions, stepping back a bit, would agree they are of only modest importance. They might not agree with the very low priority that I assign, but I don’t think anybody feels they are anywhere close to the #1 question out there. As such we must realize we are very poor at judging the importance of these problems. So each person who has not already done so needs to look at how much importance they assign, and put an automatic discount on this. This is hard to do. We are really terrible at statistics sometimes, and dealing with probabilities of risk. We worry much more about the risks of a terrorist attack on a plane flight than we do about the drive to the airport, but that’s entirely wrong. This is one of those situations, and while people are free to judge risks incorrectly, academics and regulators must not.
Academics call this the Law of triviality. A real world example is terrorism. The risk of that is very small, but we make immense efforts to prevent it and far smaller efforts to fight much larger risks.
These situations are quite rare, and we need data about how rare they are
In order to judge the importance of these risks, it would be great if we had real data. All traffic fatalities are documented in fairly good detail, as are many accidents. A worthwhile academic project would be to figure out just how frequent these incidents are. I suspect they are extremely infrequent, especially ones involving fatality. Right now fatalities happen about once every 2 million hours of driving, and the majority of those are single-car fatalities (with fatigue and alcohol among leading causes.) I have yet to read a report of a fatality or serious injury that involved a driver having no escape, but the ability to choose what they hit with different choices leading to injuries for different people. I am not saying they don’t exist, but first examinations suggest they are quite rare. Probably hundreds of billions of miles, if not more, between them.
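To make that rarity concrete, here is a back-of-envelope sketch. The 2 million hours per fatality figure comes from the text above; the average speed and the fatalities-per-trolley-situation ratio are my own illustrative assumptions, chosen only to show how the "hundreds of billions of miles" estimate falls out of the arithmetic:

```python
# Back-of-envelope estimate of how rare "choose who to hit" situations are.
# The per-fatality figure is from the text; the rest are assumptions.

HOURS_PER_FATALITY = 2_000_000   # ~one fatality per 2 million hours of driving
AVG_SPEED_MPH = 30               # assumed average speed, mixing city and highway

miles_per_fatality = HOURS_PER_FATALITY * AVG_SPEED_MPH
print(f"Miles per fatality: ~{miles_per_fatality:,}")  # ~60 million

# Guess: one genuine trolley-style dilemma per 5,000 fatalities.
FATALITIES_PER_TROLLEY_CASE = 5_000
miles_per_trolley_case = miles_per_fatality * FATALITIES_PER_TROLLEY_CASE
print(f"Miles per trolley situation: ~{miles_per_trolley_case:,}")  # ~300 billion
```

Even if the guessed ratio is off by an order of magnitude in either direction, the result stays in the range of tens of billions to trillions of miles between such situations.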
Those who want to claim they are important have the duty to show that they are more common than these intuitions suggest. Frankly, I think if there were accidents where the driver made a deliberate decision to run down one person to save another, or to hurt themselves to save another, this would be a fairly big human interest news story. Our fascination with this question demands it. Just how many lives would be really saved if cars made the “right” decision about who to hit in the tiny handful of accidents where they must hit somebody?
In addition, there are two broad classes of situations. In one, the accident is the fault of another party or cause, and in the other, it is the fault of the driver making the “who to hit” decision. In the former case, the law puts no blame on you for who you hit if forced into the situation by another driver. In the latter case, we have the unusual situation that a car is somehow out of control or making a major mistake and yet still has the ability to steer to hit the “right” target.
These situations will be much rarer for robocars
Unlike humans, robocars will drive conservatively and be designed to avoid failures. For example, in the MIT study, the scenario was often a car whose brakes had failed. That won’t happen to robocars — ever. I really mean never. Robocar designs now all commonly feature two redundant braking systems, because they can’t rely on a human pumping the hydraulics manually or pulling an emergency brake. In addition, every time they apply the brakes, they will be testing them, and at the first sign of any problem they will go in for repair. The same is true of the two redundant steering systems. Complete failure should be ridiculously unlikely.
The cars will not suddenly come upon a crosswalk full of people with no time to stop — they know where the crosswalks are and they won’t drive so fast as to not be able to stop for one. They will be also constantly measuring traction and road conditions to assure they don’t drive too fast for the road. They won’t go around blind corners at high speeds. They will have maps showing all known bottlenecks and construction zones. Ideally new construction zones will only get created after a worker has logged the zone on their mobile phone and the updates are pushed out to cars going that way, but if for some reason the workers don’t do that, the first car to encounter the anomaly will make sure all other cars know.
This does not mean the cars will be perfect, but they won’t be hitting people because they were reckless or had predictable mechanical failures. Their failures will be more strange, and also make it less likely the vehicle will have the ability to choose who to hit.
To be fair, robocars also introduce one other big difference. Humans can argue that they don’t have time to think through what they might do in a split-second accident decision. That’s why when they do hit things, we call them accidents. They clearly didn’t intend the result. Robocars do have the time to think about it, and their programmers, if demanded to by the law, have the time to think about it. Trolley problems demand the car be programmed to hit something deliberately. The impact will not be an accident, even if the cause was. This puts a much higher standard on the actions of the robocar. One could even argue it’s an unfair standard, which will delay deployment if we need to wait for it.
In spite of what people describe in scenarios, these cars won’t leave their right of way
It is often imagined an ethical robocar might veer into the oncoming lane or onto the sidewalk to hit a lesser target instead of a more vulnerable one in its path. That’s not impossible, but it’s pretty unlikely. For one, that’s super-duper illegal. I don’t see a company, unless forced to do so, programming a car to ever deliberately leave its right of way in order to hit somebody. It doesn’t matter if you save 3 school buses full of kids, deliberately killing anybody standing on the sidewalk sounds like a company-ruining move.
For one thing, developers just won’t put that much energy into making their car drive well on the sidewalk or in oncoming traffic. They should not put their energies there! This means the cars will not be well tested or designed when doing this. Humans are general thinkers, we can handle driving on the grass even though we have had little practice. Robots don’t quite work that way, even ones designed with machine learning.
This limits most of the situations to ones where you have a choice of targets within your right-of-way. And changing lanes is always more risky than staying in your lane, especially if there is something else in the lane you want to change to. Swerving if the other lane is clear makes sense, but swerving into an occupied lane is once again something that is going to be uncharted territory for the car.
By and large the law already has an answer
The vehicle code is quite detailed about who has right-of-way. In almost every accident, somebody didn’t have it and is the one at fault under the law. The first instinct for most programmers will be to have their car follow the law and stick to their ROW. To deliberately leave your ROW is a very risky move as outlined above. You might get criticized for running over jaywalkers when you could have veered onto the sidewalk, but the former won’t be punished by the law and the latter can be. If people don’t like the law, they should change the law.
The lesson of the Trolley problem is “you probably should not try to solve trolley problems.”
Ethicists point out correctly that Trolley problems may be academic exercises, but are worth investigating for what they teach. That’s true in the classroom. But look at what they teach! From a pure “save the most people” utilitarian standpoint, the answer is easy — switch the car onto the track to kill one in order to save 5. But most people don’t pick that answer, particularly in the “big man” version where you can push a big man standing with you on a bridge onto the tracks to stop the trolley and save the 5. The problem teaches us we feel much better about leaving things as they are than in overtly deciding to kill a bystander. What the academic exercise teaches us is that in the real world, we should not foist this problem on the developers.
If it’s rare and a no-win situation, do you have to solve it?
Trolley problems are philosophy class exercises to help academics discuss ethical and moral problems. They aren’t guides to real life. In the classic “trolley problem” we forget that none of it happens unless a truly evil person has tied people to a railway track. In reality, many would argue that the actors in a trolley problem are absolved of moral responsibility because the true blame is on the setting and its architect, not them. In philosophy class, we can still debate which situation is more or less moral, but they are all evil. These are “no win” situations, and in fact one of the purposes of the problems is they often describe situations where there is no clear right answer. All answers are wrong, and people disagree about which is most wrong.
If a situation is rare, and it takes effort to figure out which is the less wrong answer, and things will still be wrong after you do this even if you do it well, does it make sense to demand an answer at all? To individuals involved, yes, but not to society. The hard truth is that with 1.2 million auto fatalities a year — a number we all want to see go down greatly — it doesn’t matter that much to society whether, in a scenario that happens once every few years, you kill 2 people or 3 while arguing which choice was more moral. That’s because answering the question, and implementing the answer, have a cost.
Every life matters, but we regularly make decisions like this. We find things that are bad and rare, and we decide that below a certain risk threshold, we will not try to solve them unless the cost is truly zero. And here the cost is very far from zero. Because these are no-win situations and each choice is wrong, each choice comes with risk. You may work hard to pick the “right” choice and end up having others declare it wrong — all to make a very tiny improvement in safety.
At a minimum each solution will involve thought and programming, as well as emotional strain for those involved. It will involve legal review and, under the new regulations, certification processes and documentation. All things that go into the decision must be recorded and justified. All of this is untrod legal ground, making it even harder. In addition, no real scenario will match the hypothetical situations exactly, so the software must apply to a range of situations and still do the intended thing (let alone the right thing) as the situation varies. This is not minor.
Nobody wants to solve it
In spite of the fascination these problems hold, coming up with “solutions” to these no-win situations is the last thing developers want to do. In articles about these problems, we almost always see the statement, “Who should decide who the car will hit?” The answer is nobody wants to decide. The answer is almost surely wrong in the view of some. Nobody is going to get much satisfaction or any kudos for doing a good job, whatever that is. Combined with the rarity of these events compared to the many other problems on the table, solving ethical issues is very, very, very low on the priority list for most teams. Because developers and vendors don’t want to solve these questions and take the blame for those solutions, it makes more sense to ask policymakers to solve what needs to be solved. As Christophe von Hugo of Mercedes put it, “99% of our engineering work is to prevent these situations from happening at all.”
The cost of solving may be much higher than people estimate
People grossly underestimate how hard some of these problems will be to solve. Many of the situations I have seen proposed actually demand that cars develop entirely new capabilities that they don’t need except to solve these problems. In these cases, we are talking about serious cost, and delays to deployment if it is judged necessary to solve these problems. Since robocars are planned as a life-saving technology, each day of delay has serious consequences. Real people will be hurt because of these delays aimed at making a better decision in rare hypothetical situations.
Let’s consider some of the things I have seen:
Many situations involve counting the occupants of other cars, or counting pedestrians. Robocars don’t otherwise have to do this, nor can they easily do it. Today it doesn’t matter if there are 2 or 3 pedestrians — the only rule is not to hit any number of pedestrians. With low resolution LIDAR or radar, such counts are very difficult. Counts inside vehicles are even harder.
One scenario considers evaluating motorcyclists based on whether they are wearing helmets. I think this one is ridiculous, but if people take it seriously it is indeed serious. This is almost impossible to discern from a LIDAR image and can be challenging even with computer vision.
Some scenarios involve driving off cliffs or onto sidewalks or otherwise off the road. Most cars make heavy use of maps to drive, but they have no reason to make maps of off-road areas at the level of detail that goes into the roads.
More extreme scenarios compare things like children vs. adults, or school-buses vs. regular ones. Today’s robocars have no reason to tell these apart. And how do you tell a dwarf adult from a child? Full handling of these moral valuations requires human level perception in some cases.
Some suggestions have asked cars to compare levels of injury. Cars might be asked to judge the difference between a fatal impact and one that just breaks a leg.
These are just a few examples. A large fraction of the hypothetical situations I have seen demand some capability of the cars that they don’t have or don’t need to have just to drive safely.
The problem of course is there are those who say that one must not put cars on the road until the ethical dilemmas have been addressed. Not everybody says this but it’s a very common sentiment, and now the new regulations demand at least some evaluation of it. No matter how much the regulations might claim they are voluntary, this is a false claim, and not just because some states are already talking about making them more mandatory.
Once a duty of care has been suggested, especially by the government, you ignore it at your peril. Once you know the government — all the way to the President — wants you to solve something, then you must be afraid you will be asked “why didn’t you solve that one?” You have to come up with an answer to that, even with voluntary compliance.
The math on this is worth understanding. Robocars will be deployed slowly into society, but that doesn’t matter for this calculation. If robocars are rare, they can prevent only a small number of accidents, but they will also encounter a correspondingly small number of trolley problems. What matters is how many trolley situations there are per fatality, and how many people you could save with better handling of those problems. If you get one trolley problem for every 1,000 or 10,000 fatalities, and robocars have half the fatality rate, the math very clearly says you should not accept any delay to work on these problems.
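That calculation can be sketched directly. Every number here is a hypothetical input (a rough US fatality count, an assumed robocar share of driving, the halved fatality rate and the one-per-1,000 trolley ratio from the text), so the point is the ratio between the two results, not the absolute figures:

```python
# Sketch of the deployment-delay math: lives saved by deploying robocars
# versus lives at stake in the trolley situations they would encounter.
# All inputs are hypothetical assumptions for illustration.

def lives_saved_per_year(annual_fatalities, robocar_share, fatality_reduction):
    """Fatalities avoided each year because robocars crash less."""
    return annual_fatalities * robocar_share * fatality_reduction

def trolley_lives_per_year(annual_fatalities, robocar_share,
                           fatalities_per_trolley, lives_per_case=1):
    """Upper bound on lives at stake in trolley cases robocars face."""
    cases = annual_fatalities * robocar_share / fatalities_per_trolley
    return cases * lives_per_case

ANNUAL_FATALITIES = 35_000   # rough US figure
ROBOCAR_SHARE = 0.10         # assume robocars do 10% of the driving

saved = lives_saved_per_year(ANNUAL_FATALITIES, ROBOCAR_SHARE, 0.5)
at_stake = trolley_lives_per_year(ANNUAL_FATALITIES, ROBOCAR_SHARE, 1_000)
print(f"saved per year: ~{saved:.0f}, trolley lives at stake: ~{at_stake:.1f}")
```

Under these assumptions deployment saves on the order of 1,750 lives a year while all trolley cases combined put only a handful of lives in play, so even a short delay to “solve ethics first” costs far more lives than perfect trolley answers could ever save.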
The court of public opinion
The real courts may or may not punish vendors for picking the wrong solution (or the default solution of staying in your lane) in no-win situations. Chances are there will be a greater fear of the court of public opinion. There is reason to fear the public would not react well if a vehicle could have produced an obviously better outcome, particularly if the bad outcome involves children or highly vulnerable road users rather than adults and at-fault or protected road users.
Because of this I think that many companies will still try to solve some of these problems even if the law puts no duty on them. Those companies can evaluate the risk on their own and decide how best to mitigate it. That should be their decision.
For a long time, many people felt any robocar fatality would cause an uproar in the public eye. To everybody’s surprise, the first Tesla autopilot deaths resulted in Tesla stock rising for 2 months, even with 3 different agencies doing investigations. While the reality of the Tesla system is that the drivers bear much more responsibility than they would with a full robocar, the public isn’t very clear on that point, so the lack of reaction is astonishing. I suspect companies will discount this risk somewhat after this event.
This is a version 2 feature, not a version 1 feature
As noted, while humans make split-second “gut” decisions and we call the results accidents, robocars are much more intentional. If we demand they solve these problems, we ask something of them and their programmers that we don’t ask of human drivers. We want robocars to drive more safely than humans, but we also must accept that the first robocars to be deployed will only be a little better. The goal is to start saving lives and to get better and better at it as time goes by. We must consider the ethics of making the problem even harder on day one. Robocars will be superhuman in many ways, but primarily at doing the things humans do, only better. In the future, we should demand these cars meet an even higher standard than we put on people. But not today: The dawn of this technology is the wrong time to also demand entirely new capabilities for rare situations.
Performing to the best moral standards in rare situations is not something that belongs on the feature list for the first cars. Solving trolley situations well is in the “how do we make this perfect?” problem set, not the “how do we make this great?” set. It is important to remember how the perfect can be the enemy of the good and to distinguish between the two. Yes, it means accepting there is a small chance that somebody could be hurt or die, but people are already being killed, in large numbers, by the human drivers we aim to replace.
So let’s solve trolley problems, but do it after we get the cars out on the road both saving lives and teaching us how to improve them further.
What about the fascination?
The over-fascination with this problem is a real thing even if the problem isn’t. Surveys have shown one interesting result: when you ask people what a car should do for the good of society, they want it to sacrifice its passenger to save multiple pedestrians, especially children. On the other hand, if you ask people whether they would buy a car that did that, far fewer say yes. As long as the problem is rare, there is no actual “good of society” priority; the real “good of society” comes from getting this technology deployed and driving safely as quickly as possible. Mercedes recently announced a much simpler strategy which does what people actually want, and got criticism for it. Their strategy is reasonable — they want to save the party they can be most sure of saving, namely the passengers. They note that they have very little reliable information on what will happen in other cars or who is in them, so they should focus not on a guess of what would save the most people, but on what will surely save the people they know about.
What should we do?
I make the following concrete recommendations:
We should do research to determine how frequent these problems are, how many have “obvious” answers and thus learn just how many fatalities and injuries might be prevented by better handling of these situations.
We should remove all expectation on first generation vehicles that they put any effort into solving the rare ones, which may well be all of them.
It should be made clear there is no duty of care to go to extraordinary lengths (including building new perception capabilities) to deal with sufficiently rare problems.
Due to the public over-fascination, vendors may decide to declare their approaches in order to satisfy the public. Simple approaches should be encouraged, and in the early years of this technology, almost no answer should be “wrong.”
As the technology matures, and new perception abilities come online, more discussion of these questions can be warranted. This belongs in car 2.0, not car 1.0.
More focus at all levels should go into the real everyday ethical issues of robocars, such as roads where getting around requires regularly violating the law (speeding, aggression etc.) in the way all human users already do.
People writing about these problems should emphasize how rare they are, and when doing artificial scenarios, recount how artificial they are. Because of the public’s fears and poor risk analysis, it is inappropriate to feed on those fears rather than be realistic.
In this section, the rules remind vendors they still need to meet the same standards as regular cars do. We are not ready to start removing heavy passive safety systems just because the vehicles get in fewer crashes. In the future we might want to change that, as those systems can be 1/3 of the weight of a vehicle.
They also note that different seating configurations (like rear facing seats) need to protect occupants as well. It’s already the case that rear facing seats will likely be better in forward collisions. Face-to-face seating may present some challenges in this environment, as it is less clear how to deploy the airbags. Taxis in London often feature face-to-face seating, though that is less common in the USA. Will this be possible under these regulations?
The rules also call for unmanned vehicles to absorb energy like existing vehicles. I don’t know if this is a requirement on unusual vehicle design for regular cars or not. (If it were, it would have prohibited SUVs with their high bodies that can cause a bad impact with a low-body sports-car.)
Consumer Education and Training
This seems like another mild goal, but we don’t want a world where you can’t ride in a taxi unless you are certified as having taken a training course, especially one where there is very little for you to do. These rules are written more for people buying a car (for whom training can make sense) than for those just planning to be a passenger.
Registration and Certification
This section imagines capability labels for cars. It’s pretty silly and not very practical. Is a car going to have a sticker saying “This car can drive itself on Elm St. south of Pine, or on highway 101 except in Gilroy?” There should be another way, not labels, that this is communicated, especially because it will change all the time.
Post-Crash Behavior
This set is fairly reasonable — it requires a process describing what you do to a vehicle after a crash before it goes back into service.
Federal, State and Local Laws
This section calls for a detailed plan on how to assure compliance with all the laws. Interestingly, it also asks for a plan on how the vehicle will violate laws that human drivers sometimes violate. This is one of the areas where regulatory effort is necessary, because strictly speaking, cars are not allowed to violate the law — doing things like crossing the double-yellow line to pass a car blocking your path.
These regulations require a plan about how the vehicle keeps logs around any incident (while following privacy rules). This is something everybody already does — in fact they keep logs of everything for now — since they want to debug any problems they encounter. NHTSA wants the logs to be available to NHTSA for crash investigation.
NHTSA also wants recordings of positive events (the system avoided a problem.)
Most interesting is a requirement for a data sharing plan. NHTSA wants companies to share their logs with their competitors in the event of incidents and important non-incidents, like near misses or detection of difficult objects.
This is perhaps the most interesting element of the plan, but it has seen some resistance from vendors. And it is indeed something that might not happen at scale without regulation. Many teams will consider their set of test data to be part of their crown jewels. Such test data is only gathered by spending many millions of dollars to send drivers out on the roads, or by convincing customers or others to voluntarily supervise while their cars gather test data, as Tesla has done. A large part of the head-start that leaders have in this field is the amount of different road situations they have been able to expose their vehicles to.
Recordings of mundane driving activity are less exciting and will be easier to gather. Real world incidents are rare and gold for testing. The sharing is not as golden, because each vehicle will have different sensors, located in different places, so it will not be easy to adapt logs from one vehicle directly to another. While a vehicle system can play its own raw logs back directly to see how it performs in the same situation, other vehicles won’t readily do that.
Instead this offers the ability to build something that all vendors want and need, and the world needs, which is a high quality simulator where cars can be tested against real world recordings and entirely synthetic events. The data sharing requirement will allow the input of all these situations into the simulator, so every car can test how it would have performed. This simulation will mostly be at the “post perception level” where the car has (roughly) identified all the things on the road and is figuring out what to do with them, but some simulation could be done at lower levels.
These data logs and simulator scenarios will create what is known as a regression test suite. You test your car in all the situations, and every time you modify the software, you test that your modifications didn’t break something that used to work. It’s an essential tool.
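A regression suite of this kind can be sketched in a few lines of Python. Everything here is hypothetical (the `Scenario` shape, the toy planner, the brake-or-not outcome); real suites replay full post-perception logs against the complete driving stack, but the shape of the loop is the same:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    """A shared post-perception recording: tracked objects plus the expected outcome."""
    name: str
    objects: List[dict]   # e.g. {"kind": "pedestrian", "x": 12.0} (x = meters ahead)
    must_brake: bool      # did correct behavior in the original incident involve braking?

def run_regression(plan: Callable[[List[dict]], str], suite: List[Scenario]) -> List[str]:
    """Replay every scenario against the planner; return the names of failures."""
    failures = []
    for s in suite:
        braked = plan(s.objects) == "brake"
        if braked != s.must_brake:
            failures.append(s.name)
    return failures

# A deliberately naive planner: brake if anything is within 15 m ahead.
def toy_planner(objects):
    return "brake" if any(o["x"] < 15.0 for o in objects) else "continue"

suite = [
    Scenario("stalled_car_close", [{"kind": "car", "x": 10.0}], must_brake=True),
    Scenario("sign_far_away",     [{"kind": "sign", "x": 80.0}], must_brake=False),
]
print(run_regression(toy_planner, suite))   # an empty list means no regressions
```

Every time the software changes, the whole suite is rerun; any scenario that flips from passing to failing is a regression caught before it reaches the road.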
In the history of software, there have been shared public test suites (often sourced from academia) and private ones that are closely guarded. For some time, I have proposed that it might be very useful if there were a public and open source simulator environment which all teams could contribute scenarios to, but I always expected most contributions would come from academics and the open source community. Without this rule, the teams with the most test miles under their belts might be less willing to contribute.
Such a simulator would help all teams and level the playing field. It would allow small innovators to even build and test prototype ideas entirely in simulator, with very low cost and zero risk compared to building it in physical hardware.
This is a great example of where NHTSA could use its money rather than its regulatory power to improve safety, by funding the development of such test tools. In fact, if done open source, the agencies and academic institutions of the world could fund a global one. (This would face opposition from companies hoping to sell test tools, but there will still be openings for proprietary test tools.)
The requirement for user choice is an interesting one, and it conflicts with the logging requirements. People are wary of technology that will betray them in court. Of course, as long as the car is not a hybrid car that mixes human driving with self-driving, and the passenger is not liable in an accident, there should be minimal risk to the passenger from accidents being recorded.
The rules require that personal information be scrubbed from any published data. This is a good idea but history shows it is remarkably hard to do properly.
The recent Federal Automated Vehicles Policy is long. (My same-day analysis is here and the whole series is being released.) At 116 pages (to be fair, less than half is policy declarations and the rest is plans for the future and associated materials) it is much larger than many of us were expecting.
The policy was introduced with a letter attributed to President Obama, where he wrote:
There are always those who argue that government should stay out of free enterprise entirely, but I think most Americans would agree we still need rules to keep our air and water clean, and our food and medicine safe. That’s the general principle here. What’s more, the quickest way to slam the brakes on innovation is for the public to lose confidence in the safety of new technologies.
Both government and industry have a responsibility to make sure that doesn’t happen. And make no mistake: If a self-driving car isn’t safe, we have the authority to pull it off the road. We won’t hesitate to protect the American public’s safety.
This leads into an unprecedented effort to write regulations for a technology that barely exists and has not been deployed beyond the testing stage. The history of automotive regulation has been the opposite, and so this is a major change. The key question is what justifies such a big change, and the cost that will come with it.
Make no mistake, the cost will be real. The cost of regulations is rarely known in advance but it is rarely small. Regulations slow all players down and make them more cautious — indeed it is sometimes their goal to cause that caution. Regulations result in projects needing “compliance departments” and the establishment of procedures and legal teams to assure they are complied with. In almost all cases, regulations punish small companies and startups more than they punish big players. In some cases, big players even welcome regulation, both because it slows down competitors and innovators, and because they usually also have skilled governmental affairs teams and lobbying teams which are able to subtly bend the regulations to match their needs.
This need not even be nefarious, though it often is. Companies that can devote a large team to dealing with regulations, those who can always send staff to meetings and negotiations and public comment sessions will naturally do better than those which can’t.
The US has had a history of regulating after the fact. Of being the place where “if it’s not been forbidden, it’s permitted.” This is what has allowed many of the most advanced robocar projects to flourish in the USA.
The attitude has been that industry (and startups) should lead and innovate. Only if the companies start doing something wrong or harmful, and market forces won’t stop them from being that way, is it time for the regulators to step in and make the errant companies do better. This approach has worked far better than the idea that regulators would attempt to understand a product or technology before it is deployed, imagine how it might go wrong, and make rules to keep the companies in line before any of them have shown evidence of crossing a line.
In spite of all I have written here, the robocar industry is still young. There are startups yet to be born which will develop new ideas yet to be imagined that change how everybody thinks about robocars and transportation. These innovative teams will develop new concepts of what it means to be safe and how to make things safe. Their ideas will be obvious only well after the fact.
Regulations and standards don’t deal well with that. They can only encode conventional wisdom. “Best practices” are really “the best we knew before the innovators came.” Innovators don’t ignore the old wisdom willy-nilly, they often ignore it or supersede it quite deliberately.
Some players — notably the big ones — have lauded these regulations. Big players, like car companies, Google, Uber and others have a reason to prefer regulations over a wild west landscape. Big companies like certainty. They need to know that if they build a product, that it will be legal to sell it. They can handle the cost of complex regulations, as long as they know they can build it.
The long awaited list of recommendations and potential regulations for Robocars has just been released by NHTSA, the federal agency that regulates car safety and safety issues in car manufacture. Normally, NHTSA does not regulate car technology before it is released into the market, but the agency, while it says it is wary of slowing down this safety-increasing technology, has decided to do the unprecedented — and at a whopping 115 pages.
Broadly, this is very much the wrong direction. Nobody — not Google, Uber, Ford, GM or certainly NHTSA — knows the precise form these cars will have when deployed. Almost surely something will change from our existing knowledge today. They know this, but still wish to move forward. Some of the larger players have pushed for regulation. Big companies like certainty. They want to know what the rules will be before they invest. Startups thrive better in the chaos, making up the rules as they go along.
NHTSA hopes to define “best practices” but the best anybody can do in 2016 is lay down existing practices and conventional wisdom. The entirely new methods of providing safety that are yet to be invented won’t be in such a definition.
The document is very detailed, so it will generate several blog posts of analysis. Here I present just initial reactions. Those reactions are broadly negative. This document is too detailed by an order of magnitude. Its regulations begin today, but fortunately they are also accepting public comment. The scope of the document is so large, however, that it seems extremely unlikely that they would scale back this document to the level it should be at. As such, the progress of robocar development in the USA may be seriously negatively affected.
Vehicle performance guidelines
The first part of the regulations is a proposed 15 point safety standard. It must be certified (by the vendor) that the car meets these standards. NHTSA wants the power, according to an Op-Ed by no less than President Obama, to be able to pull cars from the road that don’t meet these safety promises.
Data Recording and Sharing
Human Machine Interface
Consumer Education and Training
Registration and Certification
Federal, State and Local Laws
Operational Design Domain
Object and Event Detection and Response
Fall Back (Minimal Risk Condition)
Ethical Considerations
As you might guess, the most disturbing is the last one. As I have written many times, the issue of ethical “trolley problems” where cars must decide between killing one person or another is a philosophy class tool, not a guide to real world situations. Developers should spend as close to zero effort on these problems as possible, since they are not common enough to warrant special attention, and would receive none were it not for our morbid fascination with machines making life or death decisions in hypothetical situations. Let the policymakers answer these questions if they want to; programmers and vendors don’t need to.
For the past couple of years, this has been a game that’s kept people entertained and ethicists employed. The idea that government regulations might demand solutions to these problems before these cars can go on the road is appalling. If these regulations are written this way, we will delay saving lots of real lives in the interest of debating which highly hypothetical lives will be saved or harmed in ridiculously rare situations.
NHTSA’s rules demand that ethical decisions be “made consciously and intentionally.” Algorithms must be “transparent” and based on input from regulators, drivers, passengers and road users. While the section makes mention of machine learning techniques, it seems in the same breath to forbid them.
Most of the other rules are more innocuous. Of course all vendors will know and have little trouble listing what roads their car works on, and they will have extensive testing data on the car’s perception system and how it handles every sort of failure. However, the requirement to keep the government constantly updated will be burdensome. Some vehicles will be adding streets to their route map literally every day.
While I have been a professional privacy advocate, and I do care about just how the privacy of car users is protected, I am frankly not that concerned during the pilot project phase about how well this is done. I do want a good regime — and even the ability to do anonymous taxi — so it’s perhaps not too bad to think about these things now, but I suspect these regulations will be fairly meaningless unless written in consultation with independent privacy advocates. The hard reality is that during the test phase, even a privacy advocate has to admit that the cars will need to make very extensive recordings of everything they can, so that any problems encountered can be studied and fixed and placed into the test suite.
50 state laws
NHTSA’s plan has been partially endorsed by the Self-Driving Coalition for Safer Streets (whose members include big players Ford, Google, Volvo, Uber and Lyft). They like the fact that it has guidance for states on how to write their regulations, fearing that regulations may differ too much from state to state. I have written that having 50 sets of rules may not be that bad an idea because jurisdictional competition can allow legal innovation, and having software load new parameters as you drive over a border is not that hard.
In this document NHTSA asks the states to yield to the DOT on regulating robocar operation and performance. States should stick to registering cars, rules of the road, safety inspections and insurance. States will regulate human drivers as before, but the feds will regulate computer drivers.
States will still regulate testing, in theory, but the test cars must comply with the federal regulations.
A large part of the document just lists the legal justifications for NHTSA to regulate in this fashion and is primarily for policy wonks. Section 4, however, lists new authorities NHTSA is going to seek in order to do more regulation.
Some of the authorities they may seek include:
Pre-market safety assurance: Defining testing tools and methods to be used before selling
Pre-market approval authority: Vendors would need approval from NHTSA before selling, rather than self-certifying compliance with the regulations
Hybrid approaches of pre-market approval and self-certification
Cease and desist authority: The ability to demand cars be taken off the road
Exemption authority: An ability to grant rule exemptions for testing
Post-sale authority to regulate software changes
Other quick notes:
NHTSA has abandoned their levels in favour of the SAE’s. The SAE’s were almost identical of course, with the addition of a “level 5” which is meaningless because it requires a vehicle that can drive literally everywhere, and there is not really a commercial reason to make a car at present that can do that.
NHTSA is now pushing the acronym “HAV” (highly automated vehicle) as yet another contender in the large sea of names people use for this technology. (Self-driving car, driverless car, autonomous vehicle, automated vehicle, robocar etc.)
Some people have wondered about my forecast in the spreadsheet on Robotaxi economics about the very low parking costs I have predicted. I wrote about most of the reasons for this in my 2007 essay on Robocar Parking but let me expand and add some modern notes here.
The Glut of Parking
Today, researchers estimate there are between 3 and 8 parking spots for every car in the USA. The number 8 includes lots of barely used parking (all the shoulders of all the rural roads, for example) but the value of 3 is not unreasonable. Almost all working cars have a spot at their home base, and a spot at their common destination (the workplace.) There are then lots of other places (streets, retail lots, etc.) to find that 3rd spot. It’s probably an underestimate.
We can’t use all of these at once, but we’re going to get a great deal more efficient at it. Today, people must park within a short walk of their destination. Nobody wants to park a mile away. Parking lots, however, need to be sized for peak demand. Shopping malls are surrounded by parking that is only ever used during the Christmas shopping season. Robocars will “load balance” so that if one lot is full, a spot in an empty lot too far away is just fine.
Small size and Valet Density
When robocars need to park, they’ll do it like the best parking valets you’ve ever seen. They don’t even need to leave space for the valet to open the door to get out. (The best ones get close by getting out the window!) Because the cars can move in concert, a car at the back can get out almost as quickly as one at the front. No fancy communications network is needed; all you need is a simple rule that if you boxed somebody in, and they turn on their lights and move an inch towards you, you move an inch yourself (and so on with those who boxed you in) to clear a path. Already, you’ve got 1.5x to 2x the density of an ordinary lot.
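The inch-at-a-time rule above can be illustrated with a toy simulation. This is a 1-D sketch under assumed dimensions (a 12-foot robotaxi, cars packed nose-to-tail in a single file); a real lot is 2-D and cars would move concurrently, but the cascade needs no network, only the local rule:

```python
CAR_LENGTH_IN = 144   # a 12-foot robotaxi, measured in inches

def clear_path(positions_in, leaver):
    """positions_in: bumper positions in inches, index 0 nearest the exit,
    packed nose-to-tail. The car at index `leaver` signals to leave; each
    car boxing it in creeps forward one inch at a time, every move
    triggering the car ahead of it, until a full car-length gap opens.
    Returns inches moved per car: no coordination network required."""
    moves = [0] * len(positions_in)
    for _ in range(CAR_LENGTH_IN):        # open one car length, inch by inch
        for i in range(leaver):           # the cascade of blockers
            positions_in[i] -= 1
            moves[i] += 1
    return moves

row = [0, 144, 288, 432]                  # four packed cars; exit ahead of car 0
print(clear_path(row, leaver=3))          # each of the three blockers moves 144 inches
```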
I forecast that many robotaxis will be small, meant for 1-2 people. A car like that, 4’ by 12’ would occupy under 50 square feet of space. Today’s parking lots tend to allocate about 300 square feet per car. With these small cars you’re talking 4 to 6 times as many cars in the same space. You do need some spare space for moving around, but less than humans need.
When we’re talking about robotaxis, we’re talking about sharing. Much of the time robotaxis won’t park at all, they would be off to pick up their next passenger. A smaller fraction of them would be waiting/parked at any given time. My conservative prediction is that one robotaxi could replace 4 cars (some estimate up to 10 but they’re overdoing it.) So at a rough guess we replace 1,000 cars, 900 of which are parked, with 250 cars, only 150 of which are parked at slow times. (Almost none are parked during the busy times.)
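A quick check of the arithmetic above, using the rough figures from the text:

```python
# Rough check of the parking-glut arithmetic (all figures from the text).
cars_today = 1000
parked_today = 900

replacement_ratio = 4                        # one robotaxi replaces ~4 private cars
robotaxis = cars_today // replacement_ratio  # 250 robotaxis
parked_robotaxis = 150                       # parked at slow times, per the text

space_per_human_car = 300                    # sq ft per stall in a conventional lot
space_per_robotaxi = 50                      # small 4' x 12' vehicle, valet-packed

before = parked_today * space_per_human_car       # 270,000 sq ft of parking
after = parked_robotaxis * space_per_robotaxi     #   7,500 sq ft of standing room
print(before // after)                            # footprint shrinks roughly 36x
```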
Many more spaces available for use
Robocars don’t park, they “stand.” Which means we can let them wait all sorts of places we don’t let you park. In front of hydrants. In front of driveways. In driveways. A car in front of a hydrant should be gone at the first notification of a fire or sound of a siren. A car in front of your driveway should be gone the minute your garage opens or, if your phone signals your approach, before you get close to your house. Ideally, you won’t even know it was there. You can also explicitly rent out your driveway space for money if you wish it. (You could rent your garage too, but the rate might be so low you will prefer to use it to add a new room to your house unless you still own a car.)
In addition, at off-peak times (when less road capacity is needed) robocars can double park or triple park along the sides of roads. (Human cars would need to use only the curb spots, but the moment they put on their turn signal, a hole can clear through the robocars to let them out.)
So if we consider just these numbers — only 1/6 of the time spent parking and either 4 times the density in parking lots or 2-3 times the volume of non-lot parking (due to the 2 spots per car and loads of extra spots) we’re talking about a huge, massive, whopping glut of parking. Such a large glut that in time, a lot of this parking space very likely will be converted to other uses, slowly reducing the glut.
Ability to move in response to demand
To add to this glut, robocars can be the best parking customers you could ever imagine. If you own a parking lot, you might have sold the space at the back or top of your lot to the robocars — they will park in the unpopular more remote sections for a discount. The human driver customers will prefer those spots by the entrance. As your lot fills up, you can ask the robocars to leave, or pay more. If a high paying human driver appears at the entrance, you can tell the robocars you want their space, and off they can go to make room. Or they can look around on the market and discover they should just pay you more to keep the space. The lot owner is always making the most they can.
If robocars are electric, they should also be excellent visitors, making little noise and emitting no soot to dirty your walls. They will leave a tiny amount of rubber and that’s about it.
The “spot” market
All of this will be driven by what I give the ironic name of the “spot” market in parking. Such markets are already being built by start-ups for human drivers. In this market, space in lots would be offered and bid for like any other market. Durations will be negotiated, too. Cars could evaluate potential waiting places based on price and the time it will take to get there and park, as well as the time to get to their likely next pickup. A privately owned car might drive a few miles to a super cheap lot to wait 7 hours, but when it’s closer to quitting time, pay a premium (in competition with many others of course) to be close to their master. read more »
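A car shopping this “spot” market might score each candidate waiting place with a simple cost function like the following. All rates, names and numbers here are illustrative assumptions, not any real market’s API; the point is only that price, deadhead distance and distance to the next pickup can be folded into one comparable number:

```python
def spot_cost(hourly_price, hours_waiting, miles_to_spot, miles_to_next_pickup,
              cost_per_mile=0.30, delay_penalty_per_mile=0.50):
    """Total cost of a candidate waiting spot: the parking fee, the empty
    drive to reach it, and a penalty for being far from the likely next
    pickup. All rates are illustrative assumptions."""
    parking = hourly_price * hours_waiting
    deadhead = miles_to_spot * cost_per_mile
    repositioning = miles_to_next_pickup * (cost_per_mile + delay_penalty_per_mile)
    return parking + deadhead + repositioning

# A cheap lot a few miles out vs. a pricey spot near the next pickup:
far_cheap = spot_cost(0.25, 7, 3.0, 3.0)   # cheap hourly rate, 3 miles away
near_dear = spot_cost(1.50, 7, 0.2, 0.2)   # expensive, but right by the pickup
print(far_cheap < near_dear)               # for a 7-hour wait, the cheap lot wins
```

The same function explains the end-of-day behavior in the text: as `hours_waiting` shrinks and the next pickup becomes certain, the repositioning term dominates and the car willingly pays the premium to wait close by.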
Tesla’s spat with MobilEye reached a new pitch this week, and Tesla announced a new release of their autopilot and new plans. As reported here earlier, MobilEye announced during the summer that they would not be supplying the new and better versions of their EyeQ system to Tesla. Since that system was and is central to the operation of the Tesla autopilot, they may have been surprised that MBLY stock took a big hit after that announcement (though it recovered for a while and is now back down) and TSLA did not.
Tesla’s own efforts represent a threat to MobilEye from the growing revolution in neural network pattern matchers. Computer vision is going through a big revolution. MobilEye is a big player in that revolution, because their ASICs do both standard machine vision functions and can do neural networks. An ASIC will beat a general purpose processor when it comes to cost, speed and power, but only if the ASIC’s abilities were designed to solve those particular problems. Since it takes years to bring an ASIC to production, you have to aim right. MobilEye aimed pretty well, but at the same time lots of research out there is trying to aim even better, or do things with more general purpose chips like GPUs. Soon we will see ASICs aimed directly at neural network computations. To solve the problem with neural networks, you need the computing horsepower, and you need well designed deep network architectures, and you need the right training data and lots of it. Tesla and ME both are gaining lots of training data. Many companies, including Nvidia, Intel and others are working on the hardware for neural networks. Most people would point to Google as the company with the best skills in architecting the networks, though there are many doing interesting work there. (Google’s DeepMind built the tools that beat humans at the seemingly impossible game of Go, for example.) It’s definitely a competitive race.
While Tesla works on their vision systems, they also announced a plan to make much more use of radar. That’s an interesting plan. Radar has been the poor 3rd-class sensor of the robocar, after LIDAR and vision. Everybody uses it — you would be crazy not to unless you need to be very low cost. Radar sees further than the other systems, and it tells you immediately how fast any radar target you see is moving relative to you. It sees through fog and other weather, and it can even see under and around big cars in front of you as it bounces off the road and other objects. It’s really good at licence plates as well.
What radar doesn’t have is high resolution. Today’s automotive radars have gotten good enough to tell you what lane an object like another car is in, but they are not designed to have any vertical resolution — you will get radar returns from a stalled car ahead of you on the road and a sign above that lane, and not be sure of the difference. You need your car to avoid a stalled car in your lane, but you can’t have a car that hits the brakes every time it sees a road sign or bridge!
Real world radar is messy. Your antennas send out and receive from a very broad cone with potential signals from other directions and from side lobes. Reflections are coming from vehicles and road users but also from the ground, hills, trees, fences, signs, bushes and bridges. It’s work to get reliable information from it. Early automotive radars found the best solution was to use the doppler speed information, and discard all returns from anything that wasn’t moving towards or away from you — including stalled cars and cross traffic.
One thing that can help (imperfectly) is a map. You can know where the bridges and signs are so you don’t brake for them. Now you can brake for the stalled cars and the cross traffic the Tesla failed to see. You still have an issue with a stalled car under a bridge or sign, but you’re doing a lot better.
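The map-assisted filtering described here can be sketched as follows. This is a 1-D illustration with hypothetical data structures (ranges along the road ahead, ground-relative speeds after ego-motion compensation); real systems match full 3-D positions and fuse many returns over time:

```python
def filter_radar_returns(returns, overhead_map, position_tolerance_m=5.0):
    """Drop stationary radar returns that line up with a known overhead
    structure (sign gantry, bridge) on the map. `returns` carry a range in
    meters and a ground-relative speed; `overhead_map` lists the ranges of
    mapped overhead objects ahead on this stretch of road."""
    kept = []
    for r in returns:
        near_overhead = any(abs(r["range_m"] - o) < position_tolerance_m
                            for o in overhead_map)
        # A stationary return near a mapped sign or bridge is presumed to be
        # that structure; any other stationary return may be a stalled car.
        if not (r["speed_mps"] == 0.0 and near_overhead):
            kept.append(r)
    return kept

overhead_map = [120.0]                      # a sign gantry 120 m ahead
returns = [
    {"range_m": 119.0, "speed_mps": 0.0},   # the sign itself: filtered out
    {"range_m": 60.0,  "speed_mps": 0.0},   # a stalled car: kept, so we can brake
]
print(filter_radar_returns(returns, overhead_map))  # only the stalled car survives
```

The residual weakness the text notes is visible in the code: a stalled car sitting directly under the mapped gantry would also be discarded, which is why better vertical resolution still matters.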
There’s a lot of room for improvement in radar, and I will presume — Tesla has not said — that Tesla plans to work on this. The automotive radars everybody buys (from companies like Bosch) were made for the ADAS market — adaptive cruise control, emergency braking etc. It is possible to design new radars with more resolution (particularly in the vertical) and with other approaches, such as splitting the transmitter and receiver to produce a synthetic larger aperture. You can move into different bands with more available bandwidth, which improves resolution in general. You can play more software tricks, and most particularly, you can learn by examining not just single radar returns, but rather the pattern of returns over time. (After all, humans don’t navigate from still frames; we depend on our visual system’s deep evolved ability to use motion and other clues to understand the world.)
The neural networks are making strides here. For example, while pedestrians produce basic radar returns, it turns out that their walking stride has a particular pattern of changes that can be identified by neural networks. People are doing research now on how examining the moving and dynamic pattern of radar returns can help you get more resolution and also identify shapes and motion patterns of objects and figure out what they are.
I will also speculate that it might be possible to return to a successor of the “sweeped” radars of old, the ones we are used to seeing in old war movies. Modern car radars don’t scan like that, but I have to wonder if with new techniques, like phased arrays to steer virtual beams (already the norm in military radar) and modern high speed electronics, that we might produce radars that get a better sense of where their target is. We’re also getting better at sensor fusion — identifying a radar target in an image or LIDAR return to help learn more about it.
The one best way to improve radar resolution would be to use more bandwidth. There have been experiments in using ultrawideband signals in the very high frequencies which may offer promise. As the name suggests, UWB uses a very wide band, and it distributes its energy over that very wide band, which means it doesn’t put too much energy into any one band, and has less chance of interfering in those bands. It’s also possible that the FCC, seeing the tremendous public value that reliable robocars offer, might consider opening up more spectrum for use in radar applications using modern techniques, and thus increase the resolution.
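The reason bandwidth is the lever here is the standard radar relation: range resolution is roughly c / (2B), so doubling the bandwidth halves the smallest separable range difference. A quick check with illustrative bandwidths (the specific figures are assumptions, not any particular product’s specs):

```python
C = 3.0e8   # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Classic radar range resolution: delta_R = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

# A typical 77 GHz automotive radar might use about 1 GHz of bandwidth;
# a UWB design could spread its (low-power) signal over several GHz.
print(range_resolution_m(1e9))   # 0.15 m
print(range_resolution_m(4e9))   # 0.0375 m
```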
In other words, Tesla is wise to work on getting more from radar. With the loss of all MobilEye’s vision tools, they will have to work hard to duplicate and surpass that. For now, Tesla is committed to using parts that are for sale for existing production cars, costing hundreds of dollars. That has taken LIDAR “off their radar” even though almost all research teams depend on LIDAR and expect LIDAR to be cheap in a couple of years. (Including the LIDAR from Quanergy, a company I advise.)
To do this, they are working with only some specific car models, namely some Honda vehicles that already have advanced ADAS in them. Using the car’s internal bus, they can talk to the sensors in these cars (in particular the radar, since the Comma One has a camera) and also send control signals to actuate the steering, brakes and throttle. Then their neural networks can take the sensor information, and output the steering and speed commands to keep you in the lane. (Details are scant so I don’t know if the Comma One box uses its own camera or depends on access to the car’s.)
When I rode in Comma’s prototype it certainly wasn’t up to the level of the Tesla autopilot or some others, but it has been several months so I can’t judge it now. Like the Tesla autopilot, the Comma will not be safe enough to drive the car on its own, and you will need to supervise and be ready to intervene at any time. If you get complacent, as some Tesla drivers have, you could get injured or killed. I have yet to learn what measures Comma will take to make sure people keep their eyes on the road.
Generally, I feel that autopilots are not very exciting products when you have to watch them all the time — as you do — and that bolt-on products are not particularly exciting either. Cruise's initial plan (after they abandoned valet parking) was a bolt-on autopilot, but they soon switched to trying to build a real vehicle, and that got them the huge $700M sale to General Motors.
But for Comma, there is a worthwhile angle. Users of this bolt-on box will be helping to provide training data to improve their systems. In fact they will be paying for the privilege of testing the system and training it, something that companies like Google did the old-fashioned way, paying a staff of professionals to drive the cars and gather data. For a tiny, young startup it's a worthwhile approach.
The vision of many of us for robocars is a world of less private car ownership and more use of robotaxis — on demand ride service in a robocar. That’s what companies like Uber clearly are pushing for, and probably Google, but several of the big car companies including Mercedes, Ford and BMW among others have also said they want to get there — in the case of Ford, without first making private robocars for their traditional customers.
In this world, what does it cost to operate these cars? How much might competitive services charge for rides? How much money will they make? What factors, including price, will they compete on, and how will that alter the landscape?
Here are some basic models of cost. I compare a low-cost 1-2 person robotaxi, a higher-end 1-2 person robotaxi, a 4-person traditional sedan robotaxi and the costs of ownership for a private car, the Toyota Prius 2, as calculated by Edmunds. An important difference is that the taxis are forecast to drive 50,000 miles/year (as taxis do) and wear out fully in 5 years. The private car is forecast to drive 15,000 miles/year (higher than the average for new cars, which is 12,000) and to have many years and miles of life left in it. As such the taxis are fully depreciated in this 5 year timeline, and the private car only partly.
Some numbers are speculative. I am predicting that the robotaxis will have an insurance cost well below today’s cars, which cost about 6 cents/mile for liability insurance. The taxis will actually be self-insured, meaning this is the expected cost of any incidents. In the early days, this will not be true — the taxis will be safer, but the incidents will cost more until things settle down. As such the insurance prices are for the future. This is a model of an early maturing market where the volume of robotaxis is fairly high (they are made in the low millions) and the safety record is well established. It’s a world where battery prices and reliability have improved. It’s a world where there is still a parking glut, before most surplus parking is converted to other purposes.
Fuel is electric for the taxis, gasoline/hybrid for the Prius. The light vehicle is very efficient.
Maintenance is also speculative. Today’s cars spend about 6 cents/mile, including 1 cent/mile for the tires. Electric cars are expected to have lower maintenance costs, but the totals here are higher because the car is going 250,000 miles not 75,000 miles like the Prius. With this high level of maintenance and such smooth driving, I forecast low repair cost.
Parking is cheaper for the taxis for several reasons. First, they can freely move around looking for the cheapest place to wait, which will often be free city parking, or the cheapest advertised parking on the auction “spot” market. They do not need to park right where the passenger is going, as the private car does. They will park valet style, and so the small cars will use less space and pay less too. Parking may actually be much cheaper than this, even free in many cases. Of course, many private car owners do not pay for parking overtly, so this varies a lot from city to city.
The Prius has one of the lowest costs of ownership of any regular car (take out the parking and it’s only 38 cents/mile) but its price is massively undercut by the electric robotaxi, especially my estimates for the half-width electric city car. (I have not even included the tax credits that apply to electric cars today.) For the taxis I add 15% vacant miles to come up with the final cost.
The price of the Prius is the retail cost (on which you must also pay tax), but a taxi fleet operator would pay a wholesale, or even manufacturer's, cost. Of course, they now have the costs of running a fleet of self-driving cars. That includes all the virtual stuff (software, maps and apps) along with web sites and all the other staff of a big service company, ranging from lawyers to marketing departments. This is hard to estimate, because if the company gets big this cost will not scale with miles, and even so, it will not add many cents per mile. The costs of the Prius for fuel, repair, maintenance and the rest are also all retail. The taxi operator wants a margin, and a big margin at first, though with competition this margin would settle to that of other service businesses.
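The cost comparison above can be sketched as a simple per-mile model. The structure (full depreciation over a 5-year, 50,000 miles/year service life, plus 15% vacant miles loaded onto paid miles) follows the article; all the per-mile defaults and the $25,000 vehicle price below are illustrative assumptions, not fleet data:

```python
def taxi_cost_per_paid_mile(vehicle_cost, miles_per_year=50_000, life_years=5,
                            fuel=0.03, maintenance=0.06, insurance=0.02,
                            parking=0.02, vacant_fraction=0.15):
    """Rough dollars per paid mile for a robotaxi.

    The taxi is assumed to wear out fully over its life (full depreciation),
    and empty repositioning miles are loaded onto the paid miles. All the
    per-mile defaults are illustrative assumptions.
    """
    lifetime_miles = miles_per_year * life_years
    depreciation = vehicle_cost / lifetime_miles
    per_mile = depreciation + fuel + maintenance + insurance + parking
    return per_mile * (1 + vacant_fraction)

# A hypothetical $25,000 robotaxi: roughly 26 cents per paid mile.
print(round(taxi_cost_per_paid_mile(25_000), 4))
```

Note how dominant depreciation is: at these assumptions, 10 of the 23 cents of raw per-mile cost is simply the vehicle wearing out, which is why cheaper, smaller vehicles undercut the Prius so sharply.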
The past period has seen some very big robocar news. Real news, not the constant "X is partnering with Y" press releases that sometimes fill the airwaves.
Uber has made a deal to purchase Otto, a self-driving truck company I wrote about earlier, founded by several friends of mine from Google. The rumoured terms of the deal are astronomical — possibly 1% of Uber's highly valued stock (which means almost $700M) plus other performance rewards. I have no other information yet on the terms, but it's safe to say Otto was just getting started with ambitious goals and would not have sold for less than an impressive amount. For a company only 6 months old, the rumoured terms surpass even the amazing valuation stories of Cruise and Zoox.
While Otto has been working on self-driving technology for trucks, any such technology can also move into cars. Uber already has an active lab in Pittsburgh, but up to now has not been involved in long haul trucking. (It does do local deliveries in some places.) There are many startups out there calling themselves the “Uber for Trucks” and Otto has revealed it was also working on shipping management platform tools, so this will strike some fear into those startups. Because of my friendship with Otto’s team, I will do more commentary when more details become public.
In other Uber news, Uber has announced it will offer randomly assigned rides in its self-driving vehicles in Pittsburgh. If your ride request is picked at random (and happens to be in the right place), Uber will send one of its own cars to drive you, and will make the ride free, to boot. Of course, there will be an Uber safety driver in the vehicle, monitoring it and ready to take over in any problematic or complex situation. So the rides are a gimmick to some extent, but if they were not free, it would be a sign of another way to get customers to pay for the cost of testing and verifying self-driving cars. The free rides, however, will probably cause more people to take Uber rides hoping they will win the lottery and get not simply a free ride but a self-driving one.
GM announced a similar program for Lyft — but not until next year.
Ford also goes all-in, but with a later date
Ford has announced it wants to commit to making unmanned-capable taxi vehicles, the same thing Uber, Google, Cruise/GM, Zoox and most non-car companies want to make. For many years I have outlined the difference between the usual car company approaches, which are evolutionary and involve taking cars and improving their computers, and the approaches of the non-car companies, which bypass all legacy thinking (mostly around ADAS) to go directly to the final target. I call that "taking a computer and putting wheels on it." It's a big and bold move for Ford to switch to the other camp, and a good sign for them. They have said they will have a fleet of such vehicles as soon as 2021.
At the recent AUVSI/TRB conference in San Francisco, there was much talk of upcoming regulation, particularly from NHTSA. Secretary of Transportation Foxx and his NHTSA staff spoke with just vague hints about what might come in the proposals due this fall. Generally, they said good things, namely that they are wary of slowing down the development of the technology. But they said things that suggest other directions.
Secretary Foxx began by agreeing that the history of regulating automotive driving systems has been quite different. Regulations have typically been written years or decades after technologies have been deployed. And the written regulations have tended to involve standards with which the vendors self-certify compliance. What this means is that there is no government test center which confirms a car complies with the rules in the safety standards. Instead, the vendor certifies that it is following the rules. If they certify falsely, that can get them in trouble later with regulators and, more importantly, in lawsuits. It's by far the best approach unless the vendors have shown that they can't be trusted in spite of the fear of those consequences.
But Foxx said that they were going to go against that history and consider “pre-market regulation.” Regular readers will know I think that’s an unwise idea, and so do many regulators, who admit that we don’t know enough about the final form of the technology to regulate yet.
Fortunately it was also suggested that NHTSA’s new documents would be more in the form of “guidance” for states. Many states ask NHTSA to help them write self-driving car regulations. Which gets us to a statement that was echoed by several speakers to justify federal regulation, “Nobody wants 50 different regulations” on these cars.
At first, that seems obvious. I mean, who would want it to be that complex? Clearly it’s simpler to have to deal with only one set of regulations. But while that’s true, it doesn’t mean it’s the best idea. They are overestimating the work involved in dealing with different regulations, and underestimating the value of having the ability for states to experiment with new ideas in regulation, and the value of having states compete on who can write the best regulations.
If regulations differed so much between states as to require different hardware, that makes a stronger case. But most probably we are talking about rules that affect the software. That can be annoying, but it’s just annoying. A car can switch what rules it follows in software when it crosses a border with no trouble. It already has to, just because of the different rules of the road found in every state, and indeed every city and even every street! Having a few different policies state by state is no big addition.
Jurisdictional competition is a good thing, though, particularly with emerging technologies. Let some states do it wrong, and others do it better, at least at the start. Let them compete to bring the technology first to their region, and to invent new ideas on how to regulate something the world has never seen. Over time these regulations can be normalized. By the time people are making tens of millions of robocars, that normalization will make more sense. But most vendors only plan to deploy in a few states to begin with, anyway. If a state feels its regulations are making it harder for the cars to spread to its cities, it can copy the rules of the other state it likes best.
The competition assures any mistake is localized — and probably eventually fixed. If California follows through with banning unmanned operation, as they have proposed, Texas has said it won’t.
I noted that if the hardware has to change, that's more of an issue. It's still not that much of an issue, because cars that operate as taxi services will probably never leave their base state. Most of them will have limited operational zones, and except in cities that straddle state borders, they won't even leave town, let alone leave the state. Some day, the cars might do interstate trips, but even then you can solve this by having one car drive you to the border and then transfer to a car for the other state. Annoying, but only slightly, and not a deal-breaker for the service. A car you own and take on road trips is a different story.
The one way having different state regulations would be a burden would be if there were 50 different complex certification processes to go through. Today, the federal government regulates how cars are made and the safety standards for that. The states regulate how cars operate on the roads. Robocars do blur that line, because how they are made controls how they drive.
For now, I still believe the tort system — even though it differs in all 50 states — is the best approach to regulation. It already has all developers highly paranoid about safety. When the day comes for certification, a unified process could make sense, but that day is still very far away. But for the regulations of just how these cars will operate, it might make sense to keep that with the states, even though it’s now part of the design of the car rather than the intentions of a human driver.
In time, unified regulations will indeed be desired by all, once we’ve had the time to figure out what the right regulations should be. But today? It’s too soon. Innovation requires variety.
Today, Robin Chase wrote an article wondering if robocars will improve or ruin our cities and asked for my comment on it. It’s a long article, and I have lots of comment, since I have been considering these issues for a while. On this site, I spend most of my time on the potential positive future, though I have written various articles on downsides and there are yet more to write about.
Robin’s question has been a popular one of late, in part a reaction by urban planners who are finally starting to think more deeply on the topic, and reacting to the utopian visions sometimes presented. I am guilty of such visions, though not as guilty as some. We are all seduced in part by the excitement of what’s possible in a world where most or all cars are robocars — a world that is not coming for several decades, if in our lifetimes at all. It’s very fair to look at the topic from both sides, and no technology does nothing but good.
When I first met Robin, she was, like most people, a robocar skeptic. She’s done pioneering work in new transportation ideas, but the pace of improvement has surprised even the optimists. I agree with many of the potential negative directions that she and others paint; in fact I’ve described them myself. Nonetheless my core position is that we can and probably will get tremendous good out of this. While I want city planners to understand these trends, I think it’s too early for them to actually attempt to guide them. Even the developers of the technology don’t quite know the final form it will take when it starts taking over the transport world in the 2020s. Long-term planning is simply impossible at this stage — it must be done not with the knowledge of 2016 but with the knowledge of 2023. That approach — the norm in the high-tech world, where we expect the world to constantly change underneath us — is anathema to governments and planners. When Marc Andreessen said that software was eating the world, he was telling the world that it will need to start learning the rules of innovation that come from the high-tech, internet and computer worlds.
Instead, today’s knowledge can at least guide planners in what not to do. Not to put big investments in things likely to become obsolete. Not to be too clever in thinking they understand the “smart city” of 2025. They need to be like the builders of the internet, who made the infrastructure as simple and stupid as they could, moving innovation away from the infrastructure and into the edges where it could flourish in a way that astounded humanity.
We will get more congestion at the start. Not because of empty vehicles cruising around — most research suggests those will be around 15% of miles, and then only after everybody switches. We’ll get more congestion from several factors:
The early cars, especially the big car company offerings, will make traffic jams more tolerable. As such, people will not work as hard to avoid them.
Car travel will become much better and much cheaper; far more people will be able to use it, so they’ll travel more miles in cars than they do today.
For some, longer commutes will be more tolerable so they will live further from work. That won’t increase congestion in the central areas (they would still have driven those roads if they lived closer to work) but will increase it in the more remote places.
The tolerance for longer commutes may increase “sprawl.”
The good news is that the era of the ubiquitous smartphone brings us the potential for a traffic “miracle” — the ability to entirely eliminate traffic congestion. I first made that remarkable claim in 2008 in my article on congestion. I have a new article in the works which expands on this and makes it easier to understand. The plan is a rare one for me, because the city is heavily involved, but mostly in virtual infrastructure rather than physical. Virtual infrastructure needs to be the new buzzword of the city planner, because only virtual infrastructure is flexible enough to adapt to a changing world.
While this and other plans to eliminate congestion won’t actually arise very quickly, the reason is not technological but political. So the rise in congestion for the reasons cited above has a silver lining: it will push the public to be more accepting of entirely new ways of managing traffic.
The other way we can attack congestion is through the potential to make vastly superior group transit. Today’s transit sucks. It uses more energy than cars, provides slow and limited service from station to station (not door to door) in limited areas. When it does work efficiently, at rush hour, people travel standing, packed like sardines. People hate it so much that they spend over $8,000/year on vastly more expensive car ownership, the 2nd largest expense in most households. Robocars offer the potential for very appealing group transit which takes people efficiently from door to door in luxury vans on their schedule and along fast routes. Truly appealing transit might greatly increase ridership at congested times.
Robin suggests her Prius could drive around for $1.50/hour rather than park, and that this will make things worse. Perhaps if people make the same mistake it could, but when you look at it, you realize it costs closer to $20/hour to have a car drive around, and the fuel is just part of that. (Most auto web sites rate the Prius as costing 50 cents/mile, and at 25 mph that’s only $12.50 per hour, but in reality urban miles tend to cost more than highway miles, so I prefer hourly rates. The Prius is rare, though, in that it uses less fuel on city miles.) Certainly no rational actor would do this. In addition, as more cars are shared, parking will become plentiful, particularly since a car no longer needs to park right where it dropped you off, but can instead request price bids on the “spot market” and find space going spare not too far away, which will certainly be available for well under $1.50/hour.
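The per-hour arithmetic in that paragraph is simple enough to sketch; the 50 cents/mile and 25 mph figures come from the text, and everything here is a back-of-envelope conversion, not a measured fleet cost:

```python
def hourly_cost(cost_per_mile: float, speed_mph: float) -> float:
    """Convert a per-mile operating cost into dollars per hour of driving."""
    return cost_per_mile * speed_mph

# The article's figures: ~50 cents/mile at 25 mph of urban cruising.
print(hourly_cost(0.50, 25))  # 12.5 -- an order of magnitude above $1.50/hour
```

Even at this conservative per-mile figure, cruising costs roughly eight times the $1.50/hour parking it would avoid, which is why no rational operator would choose it.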
Fewer people will drive for a living. At the same time, there are more bank tellers today than in 1970; they just don’t cash your cheques or hand out withdrawals much any more. This topic deserves a great deal more verbiage, of course, but the kicker is this: these professional drivers are killing several thousand Americans every year while doing their jobs. Only doctors kill more. While the economic disruption is not an illusion, there is no way you can justify artificially preserving a job that is killing so many people. It’s a bit like arguing everybody should smoke so that tobacco farmers don’t lose their jobs.
Shared Cars & Parking
This will be huge, at least the part about sharing rides. Sharing cars for solo rides does not reduce miles driven or the number of cars made, but it does vastly reduce the amount of parking needed. Sharing rides reduces everything. I go much further in my vision to bring ride sharing to the level of dynamically allocated self-driving vans which replace today’s mass transit with something much more desired by the public and much more efficient at the same time.
I do hope the city parking lots are mostly turned into parks. The privately owned lots will get other uses, though downtown multi-floor garages are a bit harder to convert.
It’s true that a major move to electric cars might require more electric capacity. Though they will charge mostly at night when power is cheap (though not solar.) One thing that many people don’t realize we won’t need is charging infrastructure. The great thing about robocars is they go where the energy is. The robocar will drive to the transformer substation which is packed with charging points — you don’t need to put charging stations in parking lots or houses.
However, at least today, electric cars are not cheaper than gasoline ones. The electricity is dirt cheap — under 3 cents/mile. The problem is at today’s battery prices, the battery depreciation is 20 to 40 cents per mile, much more than gasoline. Fortunately, there are optimistic signs about cheaper batteries and longer lasting batteries which could fix this.
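A rough sketch of where a 20-40 cents/mile battery figure can come from: pack cost spread over the total miles the pack can deliver before wearing out. The pack size, price per kWh, cycle life and range below are hypothetical round numbers for illustration, not figures from any manufacturer:

```python
def battery_wear_per_mile(pack_kwh, dollars_per_kwh, full_cycles, miles_per_charge):
    """Dollars per mile of battery depreciation: pack cost divided by the
    total miles deliverable over the pack's cycle life. All inputs here
    are hypothetical round numbers."""
    pack_cost = pack_kwh * dollars_per_kwh
    lifetime_miles = full_cycles * miles_per_charge
    return pack_cost / lifetime_miles

# 60 kWh at $400/kWh, 500 full cycles, 200 miles per charge:
print(battery_wear_per_mile(60, 400, 500, 200))  # 0.24 dollars/mile
```

The formula also shows why cheaper and longer-lasting batteries fix the problem from both directions: halving the price per kWh or doubling the cycle life each halves the per-mile wear.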
But as robocars shrink — especially to one-person vehicles for solo riders — they will become much cheaper than today’s cars, and also much more efficient. More efficient than the cars, and also than all US transit systems. At a cost of around 30 cents/mile, car transportation will be available to billions more than can afford it today, and certainly to almost all Americans. That has its congestion downsides.
What Should Cities Do?
As noted above, it’s more about what they should not do. I am rebuilding my recommendations here, but my current list includes this:
Avoid regulation until you know what players can’t be trusted to do, and then fix only that
No more light rail or other single-use right-of-way. Stick to plain, bare pavement which can handle everything.
Create “transfer points” for carpools, robotaxi and robovan services to quickly — really quickly — transfer passengers between vehicles. These are useful for robocars, smartphone carpooling and even today’s transit.
Don’t require new buildings to put in tons of parking if they don’t want to
Make as much of your infrastructure virtual as you can. Encourage lots of data networks in the town, with the newest (5G and later) protocols in 2020.
If installing dedicated ROW for transit, make sure it can be converted to use by robocars in the future so the capacity isn’t wasted most of the time. If making tunnels, make sure stations are “offline” so that other vehicles can pass stopped vehicles, and make ramps for access by approved vehicles from the street.
At the recent AUVSI/TRB symposium, a popular research topic was platooning for robocars and trucks. Platooning is perhaps the oldest practical proposal when it comes to car automation because you can have the lead vehicle driven by a human, even a specially trained one, and thus resolve all the problems that come from road situations too complex for software to easily handle.
Early experiments indicated fuel savings, though relatively modest ones. At practical distances, you can see about 10% savings for following vehicles and 5% for the lead vehicle. Unfortunately, a few big negatives showed up. It’s hard to arrange platoons, errors can become catastrophic multi-car pile-ups, other drivers keep inserting themselves into the gap unless it’s dangerously small, and a surprising deal-breaker comes from the stone chips thrown up by lead vehicles, which destroy the finish — and in some cases the radiator or windshield — of following cars. Platoons can also create congestion and highway-exit problems, the way existing convoys of trucks sometimes do.
One local company named Peloton is making progress with a very simple platooning problem. They platoon two (and only two) trucks on rural highways. The trucks find one another over the regular data networks, and when they get close they establish a local radio connection (using the DSRC protocol that many mistakenly hope will be the standard for vehicle-to-vehicle communications.) Both drivers keep driving, but the rear driver goes feet-off-the-pedals, like a cruise control. The system keeps the vehicles at a fixed distance to save fuel. The trucks don’t mind the stone chips too much. Some day, the rear driver might be allowed to go in the back and sleep, which would allow cargo to move 22 hours/day at a lower cost, probably similar to the cost of today’s team driving (about 50 cents/mile) but with two loads instead of one.
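Using the rough figures above (about 5% saving for the leader and 10% for each follower), the fleet-average fuel saving of a platoon can be sketched; the percentages are the article's approximations, not measured results:

```python
def platoon_fuel_saving(lead_saving=0.05, follower_saving=0.10, n_trucks=2):
    """Fleet-average fractional fuel saving of an n-truck platoon:
    one leader plus (n_trucks - 1) followers, savings averaged over all."""
    total = lead_saving + follower_saving * (n_trucks - 1)
    return total / n_trucks

# Two trucks: (5% + 10%) / 2 = 7.5% average saving.
print(platoon_fuel_saving())
```

Longer platoons help only modestly: even with many followers, the average saving can never exceed the 10% each follower gets, which is why fuel alone rarely justifies the arrangement.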
Trucks are an easy win, but I also saw a lot of proposals for car platoons. Car platoons are meant to save fuel, but also to increase road capacity. But after looking at all the research, a stronger realization came to me. If you have robocars, why would you platoon when you can carpool? To carpool, you need to find two cars that are going to share a long segment of trip together. Once you have found that, however, you get far more savings in fuel and road usage if the cars can quickly pause together and the passengers from one transfer into the other. Then the empty car can go and move other commuters. This presumes, of course, that the cars are like almost all cars out there today, with many empty seats. When a group of passengers comes to where their paths diverge, the vehicle would stop at a transfer point and some passengers would move into waiting robotaxis to take them the rest of the way.
All of this is not as convenient as platooning, which in theory can happen without slowing down and finding a transfer point. This is one reason that the carpool transfer stations I wrote about last month could be a very useful thing. Such stations would add only 1-2 minutes of delay, and that’s well worth it if you consider that compared to platooning, this carpooling means a vastly greater fuel saving (almost 50%) and a much greater increase in road capacity, with none of the other downsides of platooning.
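The "vastly greater saving" claim is easy to see with a toy calculation: over a shared segment, merging two cars' passengers into one car halves the vehicle-miles, while platooning leaves both cars on the road. The 20-mile segment is an arbitrary example:

```python
def segment_vehicle_miles(segment_miles: float, cars: int, merged: bool) -> float:
    """Vehicle-miles driven over a shared segment: `cars` cars either
    merge into one (carpool) or each drive the segment (platoon)."""
    return segment_miles * (1 if merged else cars)

platooned = segment_vehicle_miles(20, cars=2, merged=False)  # both cars drive: 40
carpooled = segment_vehicle_miles(20, cars=2, merged=True)   # one car drives: 20
print(1 - carpooled / platooned)  # 0.5 -> the ~50% saving over the shared segment
```

A 50% cut in vehicle-miles dwarfs the single-digit fuel savings of platooning, which is why a 1-2 minute transfer penalty can be well worth paying.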
If you’re thinking ahead, however, you will connect this idea to my proposed plan for the future of group transit. The real win is to have the computers of the transport service providers notice the common routes of passengers early, before they even get into a vehicle, and thus pool them together with minimal need to stop and switch cars.
A number of folks have imagined designing cars that can physically couple, which would produce very efficient platoons and not add a delay. The problem (aside from the difficulties in doing this safely) is that this requires a physical standard, and physical standards are much harder to get working than software ones. It requires you find a platooning partner who has the same hardware you do, rather than software platooning, which can work with any style of car. Automated matching and carpooling makes no requirements on the individual robocars and their design, which gives it the best path to success.
It is possible (though a bit frightening) to imagine a special bus which could dock to robocars to allow transfer of passengers at speed. Some of you may have seen that a Chinese company has actually built the formerly hypothetical straddling bus (really a train) that has cars drive under it. If you were assured a perfectly smooth road one could imagine a docking extension which could surround a car door of a perfectly synced robocar and allow transfer. I suspect that’s all pretty far in the future.
Beyond the carpool
In a robocar world, we should see a move to having vehicles with fewer empty seats. This happens if more people use single-person vehicles for their solo trips, and as carpooling and other technologies make sure that the 4-seater vehicles end up with more people. Indeed, if the carpooling works, that happens naturally. At that point one might say, “now’s the time to platoon.” There is merit to that, but it comes later, rather than sooner. At this later date we can be more comfortable with the safety, and have a greater density of vehicles, making it more likely to find other vehicles ready to platoon. Of course, we’ll also have more vans and buses on the road which can combine even larger groups, if you find groups with a lot of journey in common. Platooning is practical even for a few miles, while carpooling tends to need a longer shared journey to make the switch worthwhile.
At that point in the technology, you can do much more serious platoons, with larger groups of cars, and distances which are short enough for even greater benefit, and short enough to strongly discourage people trying to insert themselves in the middle of the platoon.
So platoons will come and give us even more road capacity. Carpooling, though, is already happening, with 50% of Uber requests in San Francisco being done in UberPool mode. It is the more likely early answer.
Today I want to look at some implications of Tesla’s Master Plan Part Deux which caused some buzz this week. (There was other news of course, including the AUVSI/TRB meeting which I attended and will report on shortly, forecast dates from Volvo, BMW and others, hints from Baidu, Faraday Future and Apple, and more.)
In Musk’s blog post he lays out these elements of Tesla’s plan:
Integrating generation and storage (with SolarCity and the PowerWall and your car.)
Expand into trucks and minibuses
More autonomy in Tesla cars
Hiring out your Tesla as a robotaxi when not using it
Except for the first one, all of these are ideas I have covered extensively here. It is good to see an automaker start work in these directions. As such, while I mostly agree with what Tesla is saying, there are a few issues to discuss.
Electric (self-driving) minibus and Trucks
In my article earlier this year on the future of transit I laid out why transit should mostly be done with smaller (van sized) vehicles, taking ad-hoc trips on dynamic paths, rather than the big-vehicle, fixed-route, fixed-schedule approach taken today. The automation is what makes this happen (especially when you add the ability of single person robocars to do first and last miles.) Making the bus electric can make it greener, though making it run full almost all the time is far more important for that.
The same is true for trucks, but both trucks and buses have huge power needs, which present problems for making them electric. Electric’s biggest problem here is the long recharge time, which puts your valuable asset out of service. For trucks, the big win of a robotruck is that it can drive 24 hours/day; you don’t want to take that away by making it electric. This means you want to look into things like battery swap, or perhaps more simply tractor swap. In that case, a truck would pull in to a charging station and disconnect from its trailer, and another tractor that had just recharged would grab on and keep it going.
The cell phone ride-hail apps like Uber and Lyft are now reporting great success with actual ride-sharing, under the names UberPool, Lyft Line and Lyft Carpool. In addition, a whole new raft of apps to enable semi-planned and planned carpooling is emerging and making changes.
The most remarkable number I have seen has Uber stating that 50% of rides in San Francisco are now UberPool. With UberPool, the system tries to find people with overlapping ride segments and quotes you a flat price for your ride. When you get in, there may already be somebody there, or your car may travel a short distance out of your way to pick up or drop somebody off. It’s particularly good for airports, but is also working in cities. The prices are often extremely good. During a surge it might be a much more affordable alternative.
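To make the overlapping-segment idea concrete, here is a toy sketch of the kind of matching such a system might do. Everything here is a hypothetical illustration — trips are modeled as 1-D segments and the matching is a simple greedy pass; Uber’s real matcher is of course far more sophisticated and not public.

```python
# Toy ride-pool matcher: pair riders whose route segments overlap
# enough that sharing the car saves total driving.
# (Illustrative sketch only -- not any real pooling algorithm.)

def overlap(a, b):
    """Length of overlap between two 1-D route segments (start, end)."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def match_pools(requests, min_shared=0.5):
    """Greedily pair requests that share at least min_shared of the
    shorter rider's trip. requests: dict name -> (start, end)."""
    names = list(requests)
    pairs, unmatched, used = [], [], set()
    for i, a in enumerate(names):
        if a in used:
            continue
        best = None
        for b in names[i + 1:]:
            if b in used:
                continue
            shared = overlap(requests[a], requests[b])
            shorter = min(requests[a][1] - requests[a][0],
                          requests[b][1] - requests[b][0])
            if shared >= min_shared * shorter and (best is None or shared > best[1]):
                best = (b, shared)
        if best:
            pairs.append((a, best[0]))
            used.update({a, best[0]})
        else:
            unmatched.append(a)
    return pairs, unmatched
```

With requests like `{"alice": (0, 10), "bob": (2, 12), "carol": (50, 60)}`, alice and bob (8 units of shared route) get paired while carol rides alone — the same logic, scaled up with real road networks and timing constraints, is what lets half of San Francisco’s rides be pooled.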
It’s often been observed that as you watch any road, you see a huge volume of empty seats go down it. Even partially filling all those empty seats would make our roads vastly more efficient and higher capacity, as well as greener. Indeed, the entire volume of most transit systems could probably be easily absorbed, and a great deal more, if those empty seats were filled.
The strongest approach to date has been the hope that carpool lanes would encourage people to carpool. Sadly, this doesn’t happen very much. Estimates suggest that only 10% of the cars in the carpool lane are “induced” carpools — the rest are people like couples who already would have gone together. As such, many carpool lanes actually increase congestion rather than reducing it, because they create few induced carpools and take away road capacity. That’s why many cities are switching to HOT lanes where solo drivers can pay to get access to excess carpool lane capacity, or allowing electric/PHEV vehicles into the carpool lane.
Most carpool apps today have a focus on people who are employees of the same company. Companies have had tools to organize carpools for ages, and this works modestly well, but typically the carpools are semi-permanent — the same group rides in together each day, sometimes trading off who drives. The companies provide incentives like cash and special parking.
The new generation of carpool apps (outside Uber) tends to focus on people at the same company, and as such these apps mostly work with big companies. There they can add the magic of dynamic carpooling, which means allowing people to be flexible about when they come and go, and matching them up with different cars of other employees. This makes sense as an early business for many reasons:
People can inherently trust their co-workers
Co-workers naturally share the same workplace, so you only have to find one who lives within a reasonable distance
Companies will subsidize the carpooling for many reasons, including saving on parking.
The subsidies can often include a very important one, the guaranteed ride back. Some of these apps say that when you want to leave, if they can’t find a carpool going near your house, they will provide alternate transportation, such as transit tickets or a Taxi/Uber style ride. This gives people the confidence to carpool in with one dynamically assigned group, knowing they will never be stuck at the office with no way home. Independent carpool services can also offer such a guarantee by adding a cost to every ride, but it’s easier for a company to do it. In fact, companies will often pay for the cost of the apps that do this, so that all the employees see is the car operating cost being shared among the poolers.
What has not happened much today is the potential of the multi-leg carpool, where you ride in one car for part of the trip, and another car (or another mode) for another part. Of course changing cars or modes is annoying compared to door-to-door transportation, though it’s the norm for transit riders.
Today, most carpool apps will have the driver go slightly off their route — often off the highway — to pick up a rider or return one home. (Normally the morning destination is a commercial building, usually the same building.)
A multi-leg service has some similarities to the concepts of multi-leg robocar transit I outlined previously. In one vision, the actual carpool sticks to highways and arterial roads, and never deviates from the expected route of the driver or any of the poolers. Poolers get to the carpool by using some other means — including a private Uber style ride — and then join it for the highway portion. If they are not going to the same place as other poolers, they can also use such a ride at the other end, though having two transfers reduces the appeal a fair bit.
This “last mile” leg can be something like Uber, or transit, or a bicycle (including one-way bicycle systems) or a “kiss and ride” drop-off by a spouse, or even another carpool. The difference is to make it dynamic, with live tracking of all parties involved, to reduce waits at the transfer points to very short times. (With robocars and vans, the waits will be measured in seconds, but human drivers won’t be that reliable.)
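The coordination behind a short-wait transfer is simple in principle: everyone’s live ETA is known, so the rendezvous time is just the latest arrival plus a small buffer. A minimal sketch, with all numbers illustrative:

```python
# Sketch: pick a rendezvous time at a transfer point from live ETAs,
# so no party waits long. ETAs are in seconds until arrival.
def rendezvous(etas, buffer_s=30):
    """etas: dict party -> seconds until arrival at the transfer point.
    Returns the agreed meeting time (seconds from now) and each
    party's expected wait at the point."""
    meet = max(etas.values()) + buffer_s
    return meet, {party: meet - eta for party, eta in etas.items()}
```

For example, with the carpool 300 seconds out and the feeder Uber 240 seconds out, everyone is told to meet at t=330; the feeder rider waits 90 seconds, the carpool 30. With human drivers the buffer must absorb their unreliability; with robocars it can shrink toward zero.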
In spite of the inconvenience of having to do a transfer, if the wait is short, it’s better than the downsides of the driver or other poolers having to go far off the highway to handle a fellow pooler, and there can even be financial incentives to make things smooth.
Transfer points on arterials
The main barrier in the way of a truly frictionless transfer is the absence of good and easy places to do the transfer in many locations. This might be something that highway planners should consider in building or modifying future roads. The benefits can happen today, well before robocars, so it can get on the radar of the planners today. When the robocar transit arrives, tremendous benefits are possible.
Today, there is something a bit like this. In many cities, there are bus lines that run on highways. In some cases, bus stops have been built embedded in the highway, allowing the bus to stop without fully leaving the highway. A common example can be found at intersections which have a private on-ramp/off-ramp lane that keeps merging traffic from interfering with through traffic. Sometimes these are just off to the side of the regular highway, but in all cases the bus pulls off the highway and then into the bus stop. Riders have some safe path from the non-highway world, including bus stops on regular streets and arterials.
In the fast-transfer world, you want something like this, though you don’t necessarily need a path to other roads. A rider brought in an Uber can be dropped off there, and in interchanges with a private collector lane, the car that drops the rider off can easily get back onto the regular road in the opposite direction.
In the map is an intersection that already has all the ingredients needed for carpool transfer points — collector lanes, long ramps and lots of spare space. Most intersections are not as adaptable as this one, but new and reconstructed intersections can be adapted in much less space. In addition, transfer points may be possible in the center median, if there is room, under bridges, reached by installing a staircase down from the bridge. (If there is no elevator, the disabled can be brought to the transfer point via a longer route that goes on the highway.) This is a common layout for transit lines which run down the median.
A full cloverleaf is better for the placement of transfer points, though there are other places they can go in other intersection designs. (It’s become popular of late to replace full cloverleaf intersections with the parclo design that comes from my home town of Toronto. This change is mostly done to avoid the complex merges and tight turns of a full cloverleaf, though robocars can handle the full clover just fine.) You can easily put some transfer points in a parclo; it just costs the stopping carpool an extra minute or two.
Transfer points are dirt cheap infrastructure, pretty much identical to bus stops, though ideally they would use angled parking so vehicles can come and go without blocking others. You do want space for a van or even a bus to come when you have found a super-carpool synergy, as will probably be the case at the peak of rush hour. Of course, if the volume of poolers grows very high, it justifies making larger transfer points and more of them. For super peak times, it’s OK to use transfer points that are just off the highways (where parking lots to do this are plentiful) because with high volume, pools are making just one stop to pick up passengers and can handle a small detour.
Transfer with parking
Of course, today the easiest way to do these carpools is with “carpool lots” not too far from the highway — places with spare parking which allow carpool riders to drive to the lot to meet their carpool driver. Indeed, carpoolers should be those who own cars, because the first goal is to take a car off the road that would otherwise have driven, and the second goal is to fill the empty seat with somebody who would otherwise have been on transit.
It can be difficult to get lots of parking convenient to the highway. One carpool lot I use has room for only about 50 cars. Nice that it’s there, but it takes no more than 50 cars off the road. At scale, one could imagine it being worthwhile to have shuttles from parking lots to on-highway transfer points, though nobody likes having to do 3 or 4 legs for a trip unless it’s zero wait time. If robocars were not coming, one could imagine designing future highways with transfer points connected to parking lots. The people of the past did not imagine robocars or cell phone coordination of carpooling.
It’s not surprising there is huge debate about the fatal Tesla autopilot crash revealed to us last week. The big surprise to me is actually that Tesla and MobilEye stock seem entirely unaffected. For many years, one of the most common refrains I would hear in discussions about robocars was, “This is all great, but the first fatality and it’s all over.” I never believed it would all be over, but I didn’t expect it to cause barely a blip.
There have been lots of blips in the press and online, of course, but most of the coverage rests on some pretty wrong assumptions. Tesla’s autopilot is a distant cousin of a real robocar, which explains why the fatality is no big deal for the field, but the press coverage shows that people don’t know that.
Tesla’s autopilot is really a fancy cruise control. It combines several key features from the ADAS (Advanced Driver Assistance Systems) world, such as adaptive cruise control, lane-keeping and forward collision avoidance, among others. All these features have been in cars for years, and they are also combined in similar products in other cars, both commercial offerings and demonstrated prototypes. In fact, Honda promoted such a function over 10 years ago!
Tesla’s autopilot primarily uses the MobilEye EyeQ3 camera, combined with radars and some ultrasonic sensors. It doesn’t have a lidar (the gold standard in robocar sensors) and it doesn’t use a map to help it understand the road and environment.
Most importantly, it is far from complete. There is tons of stuff it’s not able to handle. Some of those things it can’t do are known, some are unknown. Because of this, it is designed to only work under constant supervision by a driver. Tesla drivers get this explained in detail in their manual and when they turn on the autopilot.
ADAS cars are declared not to be self-driving cars in many state laws
This is nothing new — lots of cars have lots of features to help drive (including the components used like cruise controls, each available on their own) which are not good enough to drive the car, and only are supposed to augment an alert driver, not replace one. Because car companies have been selling things like this for years, when the first robocar laws were drafted, they made sure there was a carve-out in the laws so that their systems would not be subject to the robocar regulations companies like Google wanted.
The Florida law, similar to other laws, says:
The term [Autonomous Vehicle] excludes a motor vehicle enabled with active safety systems or driver assistance systems, including, without limitation, a system to provide electronic blind spot
assistance, crash avoidance, emergency braking, parking
assistance, adaptive cruise control, lane keep assistance, lane
departure warning, or traffic jam and queuing assistant, unless
any such system alone or in combination with other systems
enables the vehicle on which the technology is installed to
drive without the active control or monitoring by a human
The Tesla’s failure to see the truck was not surprising
There’s been a lot of writing (and I did some of it) about the particulars of the failure of Tesla’s technology, and what might be done to fix it. That’s an interesting topic, but it misses a very key point. Tesla’s system did not fail. It operated within its design parameters, and according to the way Tesla describes it in its manuals and warnings. The Tesla system, not being a robocar system, has tons of stuff it does not properly detect. A truck crossing the road is just one of those things. It’s also poor on stopped vehicles and many other situations.
Tesla could (and in time, will) fix the system’s problem with cross traffic. (MobilEye itself has that planned for its EyeQ4 chip coming out in 2018, and freely admits that the EyeQ3 Tesla uses does not detect cross traffic well.) But fixing that problem would not change what the system is, and not change the need for constant monitoring that Tesla has always declared it to have.
Today at Starship, we announced our first pilot projects for robotic delivery, which will begin operating this summer. We’ll be working with the London food delivery startup Pronto, the German parcel company Hermes, the Metro Group of retailers, and Just Eat restaurant food delivery to trial on-your-schedule delivery of packages, groceries and meals to people’s homes.
(It’s a nice break from Tesla news — and besides, our little robots weigh so little and move so slowly that even if something went horribly wrong and they hit you, injury is quite unlikely.)
Hermes, which does traditional package delivery, is very interested in what I think is one of the core values of robot delivery — namely delivery on the recipient’s schedule. Today, delivery is done on the schedule of delivery trucks, and you may or may not be home when it arrives. With a personal delivery robot, it will only come when you’re home, reducing the risk of theft and lost packages. Robots don’t mind waiting for you.
The last mile is a huge part of the logistics world. Starship robots will get you packages with less cost, energy, time, traffic, congestion and emissions than going to the store to get them yourself. They use a combination of autonomous driving and human control centers able to remotely fix any problems the robots can’t figure out. Robots don’t mind pausing if they have a problem, and our robots can stop in under 30cm. As we progress, operation will reach near full autonomy and super low cost.
Executive Summary: A rundown of different approaches for validation of self-driving and driver assist systems, and a recommendation to Tesla and others to have countermeasures to detect drivers not watching the road, and permanently disable their Autopilot if they show a pattern of inattention.
The recent fatality of a man who was allowing his car to be driven by the Tesla “autopilot” system has ignited debate on whether it was appropriate for Tesla to allow their system to be used as it was.
Tesla’s autopilot is a driver assist system, and Tesla tells customers it must always be supervised by an alert driver ready to take the controls at any time. The autopilot is not a working self-driving car system; it’s not rated for all sorts of driving conditions, and there are huge numbers of situations it is not designed to handle and can’t handle. Tesla knows that, but the public, press and Tesla customers forget it, and there are many Tesla users who are treating the autopilot like a real self-driving car system and who are not paying attention to the road — and Tesla is aware of that as well. The press made this mistake too, regularly writing fanciful stories about how Tesla was ahead of Google and other teams.
Brown, the driver killed in the crash, was very likely one of those people, and if so, he paid for it with his life. In spite of all the warnings Tesla may give about the system, some users do get a sense of false security. There is debate over whether that means driver assist systems are a bad idea.
There have been partial self-driving systems that require supervision since the arrival of cruise control. Adaptive cruise control is even better, and other car companies have released autopilot-like systems which combine adaptive cruise control with lane-keeping and forward collision avoidance, which hits the brakes if you’re about to rear-end another car. Since 2014, Mercedes has sold a “traffic jam assist” much like the Tesla autopilot, though in the USA it only runs at low speeds. You can even go back to a Honda demo in 2005 of an autopilot-like system.
With cruise control, you might relax a bit, but you know you have to pay attention. You’re steering, and for a long time even the adaptive cruise controls did not slow down for stopped cars.
The problem with Tesla’s autopilot is that it was more comprehensive and better performing than earlier systems, and even though it had tons of things it could not handle, people started to trust it with their lives.
Tesla’s plan can be viewed in several ways. One view is that Tesla was using customers as “beta testers,” as guinea pigs for a primitive self-drive system which is not production ready, and that this is too much of a risk.
Another is that Tesla built (and tested) a superior driver assist system with known and warned limitations, and customers should have listened to those warnings.
Neither is quite right. While Tesla has been clear about the latter stance, with the knowledge that people will over-trust it, we must face the fact that it is not only the daring drivers who are putting themselves at risk; it’s also others on the road who are put at risk by the over-trusting drivers — or perhaps by Tesla. What if the errant car had not gone under a truck, but instead hit another car, or even plowed into a pedestrian when it careened off the road after the crash?
At the same time, Tesla’s early deployment approach is a powerful tool for the development and quality assurance of self-drive systems. I have written before about how testing is the big unsolved problem in self-driving cars. Companies like Google have spent many millions to use a staff of paid drivers to test their cars for 1.6 million miles. This is massively expensive and time consuming, and even Google’s money can’t easily generate the billions of miles of testing that some feel might be needed. Human drivers will have about 12 fatalities in a billion miles, and we want our self-driving cars to do much better. Just how we’ll get enough verification and testing done to bring this technology to the world is not a solved problem.
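A back-of-envelope calculation makes the scale problem concrete. This sketch is my own, using only the 12-fatalities-per-billion-miles figure above and standard Poisson reasoning: if a fleet logs M fatality-free miles, the probability of seeing zero fatalities at the human rate is exp(-rate × M), and we need that to fall below 5% before we can claim the fleet beats humans.

```python
import math

# How many fatality-free miles must a fleet log before it can claim,
# at 95% confidence, a fatality rate below the human rate?
human_rate = 12 / 1e9          # ~12 fatalities per billion miles
confidence = 0.95

# Zero fatalities in M miles has probability exp(-rate * M) under the
# human rate; require that probability to be below (1 - confidence).
miles_needed = -math.log(1 - confidence) / human_rate
print(f"~{miles_needed / 1e6:.0f} million fatality-free miles")  # ~250 million
```

And that only demonstrates parity. To show a rate ten times better than humans, you need roughly ten times the miles — about 2.5 billion — which is where the “billions of miles” intuition comes from, and why Google’s 1.6 million paid-driver miles, however careful, cannot settle the statistical question.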
A Tesla blog post describes the first fatality involving a self-drive system. A Tesla was driving on autopilot down a divided highway. A truck made a left turn and crossed the Tesla’s lanes. A white truck body against a bright sky is not something the MobilEye camera system in the Tesla perceives well, and it is not designed for cross traffic.
The truck trailer was also high, so when the Tesla did not stop, it went “under” it, so that the windshield was the first part of the Tesla to hit the truck body, with fatal consequences for the “driver.” Tesla notes that the autopilot system has driven 130 million miles, while human drivers in the USA have a fatality about every 94 million miles (though it’s a longer interval on the highway). The Tesla is a “supervised” system where the driver is required to agree they are monitoring the system and will take control in the event of any problem, but this driver, a major Tesla fan named Joshua Brown, did not hit the brakes. As such, the fault for this accident will presumably reside with Brown, or perhaps the truck driver — the accident report claims the truck did fail to yield to oncoming traffic, but as yet the driver has not been cited for this. (Tesla also notes that had the front of the car hit the truck, the crumple zones and other safety systems would probably have saved the driver — hitting a high target is the worst-case situation.)
Any commentary here is preliminary until more facts are established, but here are my initial impressions:
There has been much speculation over whether Tesla was taking too much risk by releasing the autopilot so early, and this crash will boost that debate.
In particular, a core issue is that the autopilot works too well, and I have seen reports from many Tesla drivers of them trusting it far more than they should. The autopilot is fine if used as Tesla directs, but the better it gets, the more it encourages people to over-trust it.
Both Tesla stock and MobilEye stock were up today, with a bit of a downturn after hours. The market may not have absorbed this yet. The MobilEye chip is the vision sensor the Tesla uses to power the autopilot, and the failure to detect the truck in this situation is a not-unexpected result for that sensor.
For years, I have frequently heard it said that “the first fatality with this technology will end it all, or set the industry back many years.” My estimation is that this will not happen.
One report suggests the truck was making a left turn, which is a more expected situation, though if a truck turned with oncoming traffic it would be at fault.
Another report suggests that “friends” claim that the driver often used his laptop while driving, and some sources claim that a Harry Potter movie was playing in the car. (A portable DVD player was found in the wreckage.)
Tesla’s claim of 130M miles is a bit misleading, because most of those miles were actually supervised by humans. So that’s like reporting the record of student drivers with a driving instructor always there to take over. And indeed there are reports of many, many people taking over for the Tesla autopilot, as Tesla says they should. So at best Tesla can claim that the supervised autopilot has a similar record to human drivers, i.e. it is no better than the humans on their own. Though one incident does not a driving record make.
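A quick sketch shows just how little one incident proves. The chi-square quantiles used for the exact Poisson 95% interval on a single observed event are standard table values; everything else is simple arithmetic on the figures above.

```python
# One fatality in 130M miles is a sample of one. The exact Poisson 95%
# confidence interval for a single observed event spans about 0.0253 to
# 5.572 events (standard chi-square table values); scaled to the
# exposure, the plausible rates cover a huge range.
tesla_miles = 130e6
human_interval = 94e6            # one US fatality per ~94M miles
per_billion = 1e9

tesla_rate = 1 / tesla_miles * per_billion    # point estimate
human_rate = 1 / human_interval * per_billion

lo = 0.0253 / tesla_miles * per_billion
hi = 5.572 / tesla_miles * per_billion

print(f"Autopilot point estimate: {tesla_rate:.1f} per billion miles")
print(f"95% interval: {lo:.2f} to {hi:.1f} (human rate: {human_rate:.1f})")
# The human rate sits comfortably inside the interval, so this data
# cannot distinguish the supervised autopilot from ordinary driving.
```

The point estimate (about 7.7 per billion miles) looks better than the human figure (about 10.6), but the interval runs from roughly 0.2 to 43, so no conclusion in either direction is warranted.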
Whatever we judge about this, the ability of ordinary users to test systems — if they are well informed and understand what they are doing — is a useful one that will advance the field and give us better and safer cars, faster. Just how to do this may require more discussion, but the idea of doing it is worthwhile.
MobilEye issued a statement reminding people that their system is not designed to do well on cross traffic at present, though their 2018 product will be. It is also worth noting that the camera they use sees only red and gray intensity; it does not see all the colours, making the white truck against the bright sky even harder for it. The sun was not a factor; it was high in the sky.
The truck driver claims the Tesla changed lanes before hitting him, an odd thing to happen with the autopilot, particularly if the driver was not paying attention. The lack of braking suggests the driver was not paying attention.
Camera vs. LIDAR, and maps
I have often written about the big question of cameras vs. LIDAR. Elon Musk is famously on record as being against LIDAR, when almost all robocar projects in the world rely on LIDAR. Current LIDARs are too expensive for production automobiles, but many companies, including Quanergy (where I am an advisor) are promising very low cost LIDARs for future generations of vehicles.
Here there is a clear situation where LIDAR would have detected the truck. A white truck against the sky would be no issue at all for a self-driving capable LIDAR; it would see it very well. In fact, a big white target like that would be detected beyond the normal range of a typical LIDAR. That range is an issue here — most LIDARs would only detect other cars about 100m out, but a big white truck would be detected a fair bit further, perhaps even 200m. 100m is not quite far enough to stop in time for an obstacle like this at highway speeds; however, such a car would brake to make the impact vastly less severe, and a clever car might even have had time to swerve or aim for the wheels of the truck rather than slide underneath the body.
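A rough stopping-distance check backs this up. The numbers here are my own assumptions, not measured values: 120 km/h of highway speed, half a second of sensing-and-decision latency, and 0.6 g of braking; real systems will vary.

```python
# Sketch: can a car stop within its LIDAR detection range at highway speed?
def stopping_distance(speed_kmh, latency_s=0.5, decel_g=0.6):
    """Distance covered during system latency, plus braking distance."""
    v = speed_kmh / 3.6                       # convert to m/s
    return v * latency_s + v ** 2 / (2 * decel_g * 9.81)

need = stopping_distance(120)                 # ~111 m with these assumptions
for detection_range in (100, 200):
    verdict = "can stop in time" if need < detection_range else "cannot fully stop"
    print(f"need ~{need:.0f} m vs {detection_range} m range: {verdict}")
```

With these assumptions the car needs about 111m, so a 100m detection range falls just short while 200m leaves plenty of margin — and even in the 100m case, braking over the available distance scrubs off most of the speed before impact, which is the point about making the collision vastly less severe.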
Another sensor that is problematic here is radar. Radar would have seen this truck no problem, but since it was perpendicular to the travel of the car, it would not be moving away from or towards the car, and thus have the doppler speed signature of a stopped object. Radar is great because it tracks the speed of obstacles, but because there are so many stationary objects, most radars have to just disregard such signals — they can’t tell a stalled vehicle from a sign, bridge or berm. To help with that, a map of where all the fixed radar reflection sources are located can help. If you get a sudden bright radar return from a truck or car somewhere that the map says a big object is not known to be, that’s an immediate sign of trouble. (At the same time, it means that you don’t easily detect a stalled vehicle next to a bridge or sign.)
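The radar-plus-map idea can be sketched simply: keep only the stationary returns that the map of known fixed reflectors cannot explain. This is a toy illustration — the coordinates, thresholds and map format are all invented for the example, not drawn from any real automotive radar stack.

```python
# Sketch: flag radar returns that look stationary (zero doppler) but
# are NOT near any mapped fixed reflector (sign, bridge, berm).
# A surprise stationary return on open road is a likely stalled
# vehicle or crossing truck. All positions/thresholds illustrative.

KNOWN_STATIC = [(120.0, 0.0), (250.0, 4.0)]   # mapped reflectors, (x, y) in m

def is_near_mapped(pos, radius=5.0):
    """True if pos is within radius of a mapped fixed reflector."""
    return any((pos[0] - mx) ** 2 + (pos[1] - my) ** 2 <= radius ** 2
               for mx, my in KNOWN_STATIC)

def flag_returns(returns, speed_eps=0.5):
    """returns: list of (x, y, doppler_mps). Flag stationary returns
    the map cannot explain."""
    alerts = []
    for x, y, doppler in returns:
        if abs(doppler) < speed_eps and not is_near_mapped((x, y)):
            alerts.append((x, y))
    return alerts
```

Given three returns — one stationary at a mapped sign, one stationary in open road, one with strong doppler — only the unexplained stationary one is flagged. The flip side, as noted above, is that a stalled car parked right next to a mapped sign would be swallowed by the map and missed.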
One solution to this is longer range LIDAR or higher resolution radar. Google has said it has developed longer range LIDAR. It is likely in this case that even regular range LIDAR, or radar and a good map, might have noticed the truck.