Robocars

New NHTSA Robocar regulations are a major, but positive, reversal

NHTSA released their latest draft robocar regulations just a week after the U.S. House passed a new regulatory regime and the Senate started working on its own. The proposed regulations preempt state regulation of vehicle design, and allow companies to apply for high volume exemptions from the standards that exist for human-driven cars.

It’s clear that the new approach will be quite different from the Obama-era one, much more hands-off. There are not a lot of things to like about the Trump administration but this could be one of them. The prior regulations reached 116 pages with much detail, though they were mostly listed as “voluntary.” I wrote a long critique of those regulations in a four-part series which can be found in my NHTSA tag. They seem to have paid attention to that commentary and the similar commentary of others.

At 26 pages, the new report is much more modest, and actually says very little. Indeed, I could sum it up as follows:

  • Do the stuff you’re already doing
  • Pay attention to where and when your car can drive and document that
  • Document your processes internally and for the public
  • Go to the existing standards bodies (SAE, ISO etc.) for guidance
  • Create a standard data format for your incident logs
  • Don’t forget all the work on crash avoidance, survival and post-crash safety in modern cars that we worked very hard on
  • Plan for how states and the feds will work together on regulating this

Goals vs. Approaches

The document does a better job of understanding the difference between goals — public goods that it is the government’s role to promote — and approaches to those goals, which should be entirely the province of industry.

The new document is much more explicit that the 12 “safety design elements” are voluntary. I continue to believe that there is a risk they may not be truly voluntary, as there will be great pressure to conform with them, and possible increased liability for those who don’t, but the new document tries to avoid that, and its requests are much milder.

The document reflects the important realization that developers in this space will be creating new paths to safety and establishing new and different concepts of best practices. Existing standards have value, but they can at best encode conventional wisdom, and robocars will not be created using conventional wisdom. The new document instead mostly recommends that the existing standards be considered, which is a reasonable plan.

A lightweight regulatory philosophy

My own analysis is guided by a lightweight regulatory approach which has been the norm until now. The government’s role is to determine important public goals and interests, and to use regulations and enforcement when, and only when, it becomes clear that industry can’t be trusted to meet these goals on its own.

In particular, the government should very rarely regulate how something should be done, and focus instead on what needs to happen as the end result, and why. In the past, all automotive safety technologies were developed by vendors and deployed, sometimes for decades, before they were regulated. When they were regulated, it was more along the lines of “All cars should now have anti-lock brakes.” Only with the more mature technologies have the regulations had to go into detail on how to build them.

Worthwhile public goals include safety, of course, and the promotion of innovation. We want to encourage both competition and cooperation in the right places. We want to protect consumer rights and privacy. (The prior regulations proposed a mandatory sharing of incident data which is watered down greatly in these new regulations.)

NTSB Tesla Crash report (New NHTSA regs to come)

The NTSB (National Transportation Safety Board) has released a preliminary report on the fatal Tesla crash with the full report expected later this week. The report is much less favourable to autopilots than their earlier evaluation.

(This is a giant news day for Robocars. Today NHTSA also released their new draft robocar regulations which appear to be much simpler than the earlier 116 page document that I was very critical of last year. It’s a busy day, so I will be posting a more detailed evaluation of the new regulations — and the proposed new robocar laws from the House — later in the week.)

The earlier NTSB report indicated that though the autopilot had its flaws, overall the system was working. That is to say, the combined system — drivers using the autopilot properly together with those misusing it — was overall safer than drivers with no autopilot. The new report makes it clear that this does not excuse the autopilot being so easy to abuse. (By abuse, I mean ignoring the warnings and treating it like a robocar, letting it drive you without actively monitoring the road, ready to take control.)

While the report mostly faults the truck driver for turning at the wrong time, it blames Tesla for not doing a good enough job to assure that the driver is not abusing the autopilot. Tesla makes you touch the wheel every so often, but NTSB notes that it is possible to touch the wheel without actually looking at the road. NTSB also is concerned that the autopilot can operate in this fashion even on roads it was not designed for. They note that Tesla has improved some of these things since the accident.

This means that “touch the wheel” systems will probably not be considered acceptable in the future, and there will have to be some means of assuring the driver is really paying attention. Some vendors have decided to put in cameras that watch the driver, or in particular the driver’s eyes, to check for attention. After the Tesla accident, I proposed a system which tested driver attention from time to time and punished drivers who were not paying attention, which could do the job without adding new hardware.

It also seems that autopilot cars will need to have maps of what roads they work on and which they don’t, and limit features based on the type of road you’re on.

Planning for hurricanes and other disasters with robocars

How will robocars fare in a disaster, like Harvey in Houston, Irma, or the tsunamis in Japan or Indonesia, or a big earthquake, or a fire, or 9/11, or a war?

These are very complex questions, and certainly most teams developing cars have not spent a lot of time on solutions to them at present. Indeed, I expect these issues will not be solved until after the first significant pilot projects are deployed, because as long as robocars are a small fraction of the car population, they will not have that much effect on how things go. Some people who have given up car ownership for robocars — not that many in the early days — will possibly find themselves hunting for transportation the way other people who don’t own cars do today.

It’s a different story when, perhaps a decade from now, we get significant numbers of people who don’t own cars and rely on robocar transportation. That means people who don’t have any cars, not the larger number of people who have dropped from 2 cars to 1 thanks to robocar services.

I addressed a few of these questions before regarding tsunamis and earthquakes.

A few key questions should be addressed:

  1. How will the car fleets deal with massively increased demand during evacuations and flight during an emergency?
  2. How will the cars deal with shutdown and overload of the mobile data networks, if it happens?
  3. How will cars deal with things like floods, storms, earthquakes and more which block roads or make travel unsafe on certain roads?

Most of these issues revolve around fleets. Privately owned robocars will tend to have steering wheels and be usable as regular cars, and so only improve the situation. If they encounter unsafe roads, they will ask their passengers for guidance, or full driving. (However, in a few decades, their passengers may no longer be very capable at driving but the car will handle the hard parts and leave them just to provide video-game style directions.)

Increased demand

An immediately positive thing is the potential ability for private robocars to, once they have taken their owners to safety, drive back into the evacuation zone as temporary fleet cars, and fetch other people, starting with those selected by the car’s owner, but also members of the public needing assistance. This should dramatically increase the ability of the car fleet to get people moved.

Nonetheless, it is often noted that in a robocar taxi world, there don’t need to be nearly as many cars in a city as we have today. With ideal efficiency, there would be exactly enough seats to handle the annual peak, but not many more. We might drop to just 1/4 of the cars, and we might also have many of them be only 1 or 2 seater cars. There will be far fewer SUVs, pickup trucks, minivans and other large cars, because we don’t really need nearly as many as we have today.

Talk Thursday in Silicon Valley: Everything you know on Robocars is wrong

For those in Silicon Valley, I will be giving a talk at the monthly autonomous vehicle enthusiast meetup. Some time ago I did my general talk, but this one will get into the meat on some of the big myths and issues. With luck we’ll get some good debate going.

You can register on the Meetup site. There is a nominal charge to stop people from grabbing a slot if they don’t really plan to come. The event will probably sell out, but fear not: there are usually no-shows anyway, so get on the waitlist if you want to come.

Many different approaches to Robocar Mapping

Almost all robocars use maps to drive. Not the basic maps you find in your phone navigation app, but more detailed maps that help them understand where they are on the road, and where they should go. These maps will include full details of all lane geometries, positions and meaning of all road signs and traffic signals, and also details like the texture of the road or the 3-D shape of objects around it. They may also include potholes, parking spaces and more.

The maps perform two functions. By holding a representation of the road texture or surrounding 3D objects, they let the car figure out exactly where it is on the map without much use of GPS. A car scans the world around it, and looks in the maps to find a location that matches that scan. GPS and other tools help it not have to search the whole world, making this quick and easy.

Google, for example, uses a 2D map of the texture of the road as seen by LIDAR. (The use of LIDAR means the image is the same night and day.) In this map you see the location of things like curbs and lane markers but also all the defects in those lane markers and the road surface itself. Every crack and repair is visible. Just as you, a human being, will know where you are by recognizing things around you, a robocar does the same thing.

Some providers measure things about the 3D world around them. By noting where poles, signs, trees, curbs, buildings and more are, you can also figure out where you are. Road texture is very accurate but fails if the road is covered with fresh snow. (3D objects also change shape in heavy snow.)

Once you find out where you are (the problem called “localization”) you want a map to tell you where the lanes are so you can drive them. That’s a more traditional computer map, though much more detailed than the typical navigation app map.
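The localization step can be sketched in toy form. Everything below (the grid of texture values, the scan, the scoring function) is hypothetical and hugely simplified, but it shows the core idea: search positions near a rough GPS guess for the map window that best matches the car’s current scan.

```python
# Toy sketch of map-based localization. The "map" is a tiny grid of
# road-texture intensities (a lane marker of 9s, a crack of 7, etc.).
# The car's sensor "scan" is a small patch of the same texture; we search
# offsets near a rough GPS guess for the best-matching map window.

TEXTURE_MAP = [
    [0, 0, 9, 0, 0, 0],
    [0, 0, 9, 7, 0, 0],
    [0, 0, 9, 0, 0, 0],
    [5, 0, 9, 0, 0, 0],
]

def match_score(scan, top, left):
    """Sum of squared differences between the scan and a map window."""
    return sum(
        (TEXTURE_MAP[top + r][left + c] - v) ** 2
        for r, row in enumerate(scan)
        for c, v in enumerate(row)
    )

def localize(scan, gps_guess, radius=1):
    """Return the (row, col) near the GPS guess where the scan fits best."""
    rows, cols = len(scan), len(scan[0])
    g_r, g_c = gps_guess
    candidates = [
        (top, left)
        for top in range(max(0, g_r - radius), g_r + radius + 1)
        for left in range(max(0, g_c - radius), g_c + radius + 1)
        if top + rows <= len(TEXTURE_MAP) and left + cols <= len(TEXTURE_MAP[0])
    ]
    return min(candidates, key=lambda pos: match_score(scan, *pos))

# The car "sees" the lane marker plus the crack beside it, and recovers
# its true position even though the GPS guess was a cell off:
scan = [[9, 0], [9, 7]]
print(localize(scan, gps_guess=(1, 2)))  # → (0, 2)
```

Real systems do this with millions of LIDAR samples and sub-centimeter precision, but the structure is the same: a prior from GPS narrows the search, and the texture match pins down the exact pose.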

Some teams hope to get a car to drive without a map. That is possible for simpler tasks like following a road edge or a lane. There you just look for a generic idea of what lane markings or road edges should look like, find them and figure out what the lanes look like and how to stay in the one you want to drive in. This is a way to get a car up and running fast. It is what humans do, most of the time.

Driving without a map means making a map

Most teams aim to do more than drive without a map, because software good enough to drive without a map is also software good enough to make a map. To drive without a map you must understand the geometry of the road and where you are on it. You must understand even more, like what to do at intersections or off-ramps.

Creating maps is effectively the act of saying, “I will remember what previous cars that drove this road learned about it, and make use of that the next time a car drives it.”

Put this way it seems crazy not to build and use maps, even with the challenges listed below. Perhaps some day the technology will be so good that it can’t be helped by remembering, but that is not this day.

The big advantages of the map

There are many strong advantages of having the map:

  • Human beings can review the maps built by software, and correct errors. You don’t need software that understands everything. You can drive a tricky road that software can’t figure out. (You want to keep this to a minimum to control costs and delays, but you don’t want to give it up entirely.)
  • Even if software does all the map building, you can do it using arbitrary amounts of data and computer power in cloud servers. To drive without a map you must process the data in real time with low computing resources.
  • You can take advantage of multiple scans of the road from different lanes and vantage points. You can spot things that moved.
  • You can make use of data from other sources such as the cities and road authorities themselves.
  • You can cooperate with other players — even competitors — to make everybody’s understanding of the road better.

One intermediate goal might be to have cars that can drive with only a navigation map, but use more detailed maps in “problem” areas. This is pretty similar, except in database size, to automatic map generation with human input only on the problem areas. If your non-map driving is trustworthy, such that it knows not to try problem areas, you could follow the lower cost approach of “don’t map it until somebody’s car pulled over because it could not handle an area.”

Levels of maps

There are two or three components of the maps people are building, in order to perform the functions above. At the most basic level is something not too far above the navigation maps found in phones. That’s a vector map, except with lane level detail. Such maps know how many lanes there are, and usually what lanes connect to what lanes. For example, they will indicate that to turn right, you can use either of the right two lanes at some intersections.

No, you don't need to drive a billion miles to test a robocar

Earlier I noted that Nidhi Kalra of RAND spoke at the AVS about RAND’s research suggesting that purely road testing robocars is an almost impossible task, because it would take hundreds of millions to a billion miles of driving to prove that a robocar is 10% better than human drivers.

(If the car is 10x better than humans, it doesn’t take that long, but that’s not where the first cars will be.)

This study has often been cited as saying that it’s next to impossible to test robocars. The authors don’t say that — their claim is that road testing will not be enough, and will take too long to really work — but commenters and press have taken it further to the belief that we’ll never be able to test.

The mistake is that while it could take a billion miles to prove a vehicle is 10% safer than human drivers, that is not the goal. Rather, the goal is to decide that it’s unlikely it is much worse than that number. It may seem like “better than X” and “not worse than X” are the same thing, but they are not. The difference is where you give the benefit of the doubt.
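The arithmetic behind this is worth sketching. Under a simple Poisson model, and taking the commonly cited US ballpark of roughly one fatal crash per 100 million miles, the zero-failure bound below shows why a “not much worse than human” claim takes on the order of hundreds of millions of miles. The figures and the model are my own rough simplification, not RAND’s exact methodology.

```python
import math

# Rough sketch of the statistics under a simple Poisson model.
# The rate is a commonly cited US ballpark figure, used here only
# for illustration.
HUMAN_FATAL_RATE = 1 / 100_000_000  # fatal crashes per mile

def miles_needed(target_rate, confidence=0.95):
    """Miles that must be driven with ZERO fatal crashes before we can
    claim, at the given confidence, that the true rate is below
    target_rate. From the Poisson zero-count probability
    P(0 events) = exp(-rate * m), we need
    exp(-target_rate * m) <= 1 - confidence."""
    return -math.log(1 - confidence) / target_rate

# "No worse than the average human" takes roughly 300 million
# fatality-free miles:
print(f"{miles_needed(HUMAN_FATAL_RATE):.3g}")  # ≈ 3e+08
# "At least 10% better" pushes the requirement higher still:
print(f"{miles_needed(0.9 * HUMAN_FATAL_RATE):.3g}")
```

And this is the optimistic case: every fatal crash that does occur during the test resets the benefit of the doubt and pushes the required mileage up further, which is why affirmatively proving “10% better” (rather than bounding “not much worse”) balloons toward the billion-mile figures in the study.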

Consider how we deal with new drivers. We give them a very basic test and hand them a licence. We presume, because they are human teens, that they will have a safety record similar to other human teens. Such a record is worse than the level for experienced drivers, and in fact one could argue it’s not at all safe enough, but we know of no way to turn people into experienced drivers without going through the risky phase.

If a human driver starts showing evidence of poor skills or judgment — lots of tickets, and in particular multiple accidents — we pull their licence. It actually takes a really bad record for that to happen. By my calculations, the average human takes around 20 years to have an accident that gets reported to insurance, and 40-50 years to have one that gets reported to police. (Most people never have an injury accident, and a large fraction never have any reported or claimed accident.)

Federal regulations pass next hurdle

Today’s news is preliminary, but a U.S. House committee panel approved a bill that suggests sweeping change in the US regulatory approach to robocars.

Today, all cars sold must comply with the Federal Motor Vehicle Safety Standards (FMVSS). This is a huge set of standards, and it’s full of things written with human-driven cars in mind; making a radically different vehicle, like the Zoox, or the Waymo Firefly, or a delivery robot, is simply not going to happen under those standards. There is a provision where NHTSA can offer exemptions, but it’s in small volumes, mostly for prototype and testing vehicles. The new rules would allow a vendor to get an exemption to make 100,000 vehicles per year, which should be enough for the early years of robocar deployment.

Secondly, these and other new regulations would preempt state regulations. Most players (except some states) have pushed for this. Many states don’t want the burden of regulating robocar design, since they don’t have the resources to do so, and most vendors don’t want what they call a “patchwork” of 50 regulations in the USA. My take is different. I agree the cost of a patchwork is not to be ignored, but the benefits of having jurisdictional competition may be much greater. When California proposed to ban vehicles like the Google Firefly, Texas immediately said, “Come to Texas, we won’t get in your way.” That pushed California to rethink. Having one regulation is good — but it has to be the right regulation, and we’re much too early in the game to know what the right regulation is.

This is just a committee in the House, and there is lots more distance to go, including the Senate and all the other usual hurdles. Whatever people think about how much regulation there should be, everybody has known that the FMVSS needs a difficult and complex revision to work in the world of robocars, and a temporary exemption can be a solution to that.

Uncovered: NHTSA Levels of 1900 (Satire)

I have recently managed to dig up some old documents from the earliest days of car regulation. Here is a report from NHTSA on the state of affairs near the turn of the 20th century.

National Horse Trail Safety Administration (NHTSA)

Regulation of new Horse-Auto-mobile Vehicles (HAV), sometimes known as “Horseless carriages.”

In recent years, we’ve seen much excitement about the idea of carriages and coaches with the addition of “motors” which can propel the carriage without relying entirely on the normal use of horses or other beasts of burden. These “Horseless carriages,” sometimes also known as “auto mobiles,” are generating major excitement, and prototypes have been generated by men such as Karl Benz and Armand Peugeot, along with the Duryea brothers, Ransom Olds and others in the USA. The potential for these carriages has resulted in many safety questions, and many have asked if and how NHTSA will regulate safety of these carriages when they are common.

Previously, NHTSA released a set of 4, and later 5 levels to classify and lay out the future progression of this technology.

Levels of Motorized Carriages

Level 0

Level zero is just the existing rider on horseback.

Level 1

Level one is the traditional horse drawn carriage or coach, as has been used for many years.

Level 2

A level 2 carriage has a motor to assist the horses. The motor may do the work while the horses trot alongside, but at any time the horses may need to take over on short notice.

Level 3

In a level 3 carriage, sometimes the horses will provide the power, but it is allowed to switch over entirely to the “motor,” with the horses stepping onto a platform to avoid working them. If the carriage approaches an area it can’t handle, or the motor has problems, the horses should be ready, with about 10-20 seconds notice, to step back on the ground and start pulling. In some systems the horse(s) can be in a hoist which can raise or lower them from the trail.

Level 4

A Level 4 carriage is one which can be pulled entirely by a motor in certain types of terrain or types of weather — an operating domain — but may need a horse at other times. There is no need for a sudden switch to the horses, which should be pulled in a trailer so they can be hitched up for travel outside the operating domain.

Level 5

The recently added fifth level is much further in the future, and involves a “horseless” carriage that can be auto mobile in all situations, with no need for any horse at all. (It should carry a horse for off-road use or to handle breakdowns, but this is voluntary.)

News and commentary from AUVSI/TRB Automated Vehicle Symposium 2017

In San Francisco, I’m just back from the annual Automated Vehicle Symposium, co-hosted by the AUVSI (a commercial unmanned vehicle organization) and the Transportation Research Board, a government/academic research organization. It’s an odd mix of business and research, but also the oldest self-driving car conference. I’ve been at every one, from the tiny one with perhaps 100-200 people to this one with 1,400 that fills a large ballroom.

Toyota Research VC Fund

Tuesday morning did not offer too many surprises. The first was an announcement by Toyota Research Institute of a $100M venture fund. Toyota committed $1B to this group a couple of years ago, but surprisingly Gill Pratt (who ran the DARPA Robotics Challenge for humanoid-like robots) has been a man of somewhat mixed views, with less optimistic forecasts.

What will be different about this VC fund is the use of DARPA-like “calls.” The fund will declare, “Toyota would really like to see startups solving problem X,” and then startups will apply, and a couple will be funded. It will be interesting to see how that pans out.

Nissan’s control room is close to live

At CES, Nissan showed off their plan to have a remote control room to help robocars get out of sticky situations they can’t understand like unusual construction zones or police directing traffic. Here, they showed it as further along and suggested it will go into operation soon.

This idea has been around for a while (Nissan based it on some NASA research) and at Starship, it has always been our plan for our delivery robots. Others are building such centers as well. The key question is how often robocars need to use the human assistance, and how you make sure that unmanned vehicles stay in regions where they can get a data connection through which to get help. As long as interventions are rare, the cost is quite reasonable for a larger fleet.

This answers the question that Rod Brooks (of Rethink Robotics and iRobot) recently asked, pondering how robocars will handle his street in Cambridge, where strange things, like trucks blocking the road to make deliveries, are frequently found.

It’s a pretty good bet that almost all our urban spaces will have data connectivity in the 2020s. If any street doesn’t have solid data, and has frequent bizarre problems of any type, yet is really important for traversal by unmanned vehicles — an unlikely trifecta — it’s quite reasonable for vehicle operators to install local connectivity (with wifi, for example) on that street if they can’t wait for the mobile data companies to do it. Otherwise, don’t go down such streets in empty cars unless you are doing a pickup/drop-off on the street.

Switching Cities

Karl Iagnemma of nuTonomy told the story of moving their cars from Singapore, where driving is very regulated and done on the left, to Boston where it is chaotic and done on the right.

Can we test robocars the way we tested regular cars?

I’ve written a few times that perhaps the biggest unsolved problem in robocars is how to know we have made them safe enough. While most people think of that in terms of government certification, the truth is that the teams building the cars are very focused on this, and know more about it than any regulator, but they still don’t know enough. The challenge is going to be convincing your board of directors that the car is safe enough to release, for if it is not, it could ruin the company that releases it, at least if it’s a big company with a reputation.

We don’t even have a good definition of what “safe enough” is, though most people roughly take that as “a safety record superior to the average human.” Some think it should be much more; few think it should be less. Tesla, now with the backing of the NTSB, has noted that their autopilot system — combined with a mix of mostly attentive and some inattentive humans — may have a record superior to the average human, for example, even though with the inattentive humans it is worse.

Last week I attended a conference in Stuttgart devoted to robocar safety testing, part of a larger auto show including an auto testing show. It was interesting to see the main auto testing show — scores of expensive and specialized machines and tools that subject cars to wear and tear, slamming doors thousands of times, baking the surfaces, rattling and vibrating everything. And testing the electronics, too.

In Europe, the focus of testing is very strongly on making sure you are compliant with standards and regulations. That’s true in the USA but not quite as much. It was in Europe some time ago that I learned the word “homologation” which names this process.

There is a lot to be learned from the previous regimes of testing. They have built a lot of tools and learned techniques. But robocars are different beasts, and will fail in different ways. They will definitely not fail the way human drivers do, where usually small things are always going wrong, and an accident happens when 2 or 3 things go wrong at once. The conference included a lot of people working on simulation, which I have been promoting for many years. The one good thing in the NHTSA regulations — the open public database of all incidents — may vanish in the new rules, and it would have made for a great simulator. The companies making the simulators (and the academic world) would have put every incident into a shared simulator so every new car could test itself in every known problem situation.

Still, we will see lots of simulators full of scenarios, and also ways to parameterize them. That means that instead of just testing how a car behaves if somebody cuts it off, you test what it does if it gets cut off with a gap of 1cm, or 10cm, or 1m, or 2m, and by different types of vehicles, and by two at once, and so on. The nice thing about computers is you can test just about every variation you can think of, and test it in every road situation and every type of weather, at least if your simulator is good enough.
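To make the parameterization idea concrete, here is a small sketch. One logical scenario (another vehicle cutting in ahead) fans out into a grid of concrete test cases. The parameter names and values are invented for illustration; real scenario description formats are far richer.

```python
from itertools import product

# One logical scenario ("a vehicle cuts in ahead of the test car")
# expanded into a grid of concrete variations. The parameters and
# values here are made up for illustration.
def cut_in_scenarios():
    gaps_m = [0.01, 0.1, 1.0, 2.0]            # gap at moment of cut-in
    cutters = ["car", "truck", "motorcycle"]  # type of cutting vehicle
    weather = ["clear", "rain", "snow", "fog"]
    ego_speeds_kmh = [30, 60, 100]
    for gap, cutter, wx, speed in product(gaps_m, cutters, weather,
                                          ego_speeds_kmh):
        yield {"gap_m": gap, "cutter": cutter,
               "weather": wx, "ego_speed_kmh": speed}

scenarios = list(cut_in_scenarios())
print(len(scenarios))  # 4 * 3 * 4 * 3 = 144 test cases from one scenario
```

Each generated dictionary would seed one simulator run, and sweeping even a handful of parameters multiplies one scenario into hundreds of runs, which is exactly the kind of exhaustiveness that is cheap in simulation and impossible on a test track.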

Yoav Hollander, who I met when he came as a student to the program at Singularity U, wrote a report on the approaches to testing he saw at the conference that contains useful insights, particularly on this question of new and old thinking, and what regulations drive vs. liability and fear of the public. He puts it well — traditional and certification oriented testing has a focus on assuring you don’t have “expected bugs” but is poor at finding unexpected ones. Other testing is about finding unexpected bugs. Expected bugs are of the “we’ve seen this sort of thing before, we want to be sure you don’t suffer from it” kind. Unexpected bugs are “something goes wrong that we didn’t know to look for.”

Avoiding old thinking

I believe that we are far from done on the robocar safety question. I think there are startups who have not yet been founded who, in the future, will come up with new techniques both for promoting safety and testing it that nobody has yet thought of. As such, I strongly advise against thinking that we know very much about how to do it yet.

A classic example of things going wrong is the movement towards “explainable AI.” Here, people are concerned that we don’t really know how “black box” neural network tools make the decisions they do. Car regulations in Europe are moving towards banning software that can’t be explained in cars. In the USA, the draft NHTSA regulations also suggest the same thing, though not as strongly.

We may find ourselves in a situation where we build two systems for robocars, one explainable and the other not. We put them through the best testing we can, both in simulator and, most importantly, in the real world. We find the explainable system has a “safety incident” every 100,000 miles, and the unexplainable system has an incident every 150,000 miles. To me it seems obvious that it would be insane to make a law that demands the former system which, when deployed, will hurt more people. We’ll know why it hurt them. We might be better at fixing the problems, but we also might not — with the unexplainable system we’ll be able to make sure that particular error does not happen again, but we won’t be sure that others very close to it are eliminated.

Testing in sim is a challenge here. In theory, every car should get no errors in sim, because any error found in sim will be fixed or judged as not really an error or so rare as to be unworthy of fixing. Even trained machine learning systems will be retrained until they get no errors in sim. The only way to do this sort of testing in sim will be to have teams generate brand new scenarios in sim that the cars have never seen, and see how they do. We will do this, but it’s hard. Particularly because as the sims get better, there will be fewer and fewer real world situations they don’t contain. At best, the test suite will offer some new highly unusual situations, which may not be the best way to really judge the quality of the cars. In addition, teams will be willing to pay simulator companies well for new and dangerous scenarios in sim for their testing — more than the government agencies will pay for such scenarios. And of course, once a new scenario displays a problem, every customer will fix it and it will become much less valuable. Eventually, as government regulations become more prevalent, homologation companies will charge to test your compliance rate on their test suites, but again, they will need to generate a new suite every time since everybody will want the data to fix any failure. This is not like emissions testing, where they tell you that you went over the emissions limit, and it’s worth testing the same thing again.

The testing was interesting, but my other main focus was on the connected car and security sessions. More on that to come.

New more laissez-faire robocar rules may arise

While very few details have come out, Reuters reports that new proposed congressional bills on self-driving cars will reverse many of the provisions I critiqued in the NHTSA regulations last year.

One big change is a reversal of the new idea of pre-market regulation. Today, new car technologies are not regulated before they are deployed, but NHTSA proposed giving itself the power to regulate technologies even before they exist. Currently most car technologies like adaptive cruise control, autopilots, forward collision avoidance, lanekeeping and the like remain unregulated after a decade or more of deployment with few, if any, problems.

This is important because the old doctrine of “We don’t regulate until we see a problem the industry won’t fix on its own” is a much better one for innovation, and the speed of innovation is key in deciding which countries and companies lead this technology. The opposite approach of “we try to imagine what might go wrong and ban it ahead of time” may seem safer, but it’s definitely an impediment to innovation and may actually result in far more deaths through the delay of life-saving technologies.

Harder to judge is the preemption of state rules. While states are also attempting to pre-regulate, having a laboratory of 50 different competing states can also be good for innovation on the legal side. There is not one answer, and while it’s more complex to deal with 50 sets of regulations instead of one, it’s not that much more complex.

One of the few interesting and good ideas in the NHTSA regs may also vanish. NHTSA wanted all vendors to make available all sensor logs from all incidents. As I predicted, companies pushed back on this — their testing logs and the resulting test suites are very important competitive assets. The company with the best test suite is the furthest on the path to the safety needed for deployment. On the other hand, sharing this data would let everybody get further on that path, faster.

There has been lots of other news during the long road-trip I am on in Europe. This includes more entrants in the race, the retirement of Google’s 3rd generation “koala” car, lots more at Uber and more. Plus I will report from the Autonomous Car Testing and Development conference in Stuttgart starting Tuesday.

The DSRC/V2V/Connected Car Emperor has no clothes

Plans are underway to ask for a legal mandate to install radio communications devices in all new cars, starting around 2020. These radios would do “vehicle to vehicle” (v2v) and vehicle to infrastructure communication using a wifi-derived protocol called DSRC.

These plans began long ago, when all of us wondered, “wouldn’t it be cool if computers in cars could talk to other cars?” It seemed like it should be cool, but in fact, after decades of trying, very few useful applications have actually shown up. However, that has not stopped fans of the idea. They had almost given up when robocars came along. As the hype built over robocars, they realized that they might have an application there, and that this application could make their solution finally find its problem. Since then, there have been many declarations that V2V communication is important or even essential for robocars, and that what this is all really about is the “connected car.” Whole conferences and industry groups push heavily on the connected car concept.

Of course robocars will be connected, but barely. They will want updates to maps and on road conditions and events — the same things you see if you run programs like Waze. When parked, they will also want updates to their software and more detailed map data. But no sane designer plans to have them depend on real time connectivity. It might provide useful information, but it often won’t, and you need to depend on things you built and tested that work 100% of the time. Everything else is just a little gravy.

I have written many comments on the issues with v2v and related technologies. Recently, the DSRC fans got a proposal in place for the government to mandate that all new cars come with DSRC radios which will, among other things, constantly broadcast their position and what they are doing. The government will mandate a decade-old radio technology that is already obsolete, which probably will never work, and which certainly won’t work as well as other technologies that are arriving without government help in mobile phones and data networks.

There was a comment period. I wrote up a commentary, and have expanded it into an essay on:

Why the V2V “emperor” has no clothes.

Connected Autonomous Vehicles — Pick 2.

Waymo starts pilot in Phoenix, Apple gets more real and other news

Waymo (Google) has announced a pilot project in Phoenix offering a full ride service, with daily use, in their new minivans. Members of the public can sign up — the link is sure to be overwhelmed with applicants, but it has videos and more details — and some families are already participating. There’s also a Waymo Blog post. I was in Phoenix this morning as it turns out, but to tell real estate developers about robocars, not for this.

There are several things notable about Waymo’s pilot:

  1. They are attempting to cover a large area — they claim twice the size of San Francisco, or 90 square miles. That’s a lot. It’s enough to cover the vast majority of trips for some pilot users. In other words, this is the first pilot which can test what it’s like to offer a “car replacement.”
  2. They are pitching to families, which means even moving children, including those not of driving age. The mother in the video expects to use it to send some children to activities. While I am sure there will be safety drivers watching over things, trusting children to the vehicles is a big milestone. Google’s safety record (with safety drivers) suggests this is actually a very safe choice for the parents, but there is emotion over trusting children to robots (other than the ones that go up and down shafts in buildings).
  3. In the videos, they are acting like there are no safety drivers, but there surely are, for legal reasons as well as safety.
  4. They are using the Pacifica minivans. The Firefly bubble cars are too slow for anything but neighbourhood operation. The minivans feature motorized doors, a feature which, though minor and commonplace, meets the image of what you want from a self-driving car.

Apple is in the game

Because of some departures from Apple’s car team, there has been much speculation recently that they had given up. In fact, last week they applied for self-driving car test plates for California. I never thought they had left the game.

Luminar unstealths their 1.5 micron LIDAR

Luminar, a Bay Area startup, has revealed details on their new LIDAR. Unlike all other commercial offerings, this is a LIDAR using 1.5 micron infrared light. They hope to sell it for $1,000.

1.5 micron LIDAR has some very special benefits. The lens of your eye does not focus infrared light at this wavelength. Ordinary light, including the 0.9 micron infrared light of the lasers in most commercial LIDARs, is focused to a point by the lens. That limits the amount of power you can put in the laser beam, because you must not create any risk to people’s eyes.

Because of this, you can put a lot more power into the 1.5 micron laser beam. That, in turn, means you can see further, and collect more points. You can easily get out to 250 meters, while regular lidars are limited to about 100m and are petering out there.

Why doesn’t everybody use 1.5 micron? The problem is silicon sensors don’t react to this type of light. Silicon is the basis of all mass market electronics. To detect 1.5 micron light, you need different materials, which are not themselves that hard to find, but they are not available cheap and off the shelf. So far, this makes units like this harder to build and more expensive. If Luminar can do this, it will be valuable.

Why do you need to see 250m? Well, you don’t for city driving, though it’s nice. For highway driving, you can get by with 100m as well, and you use radar to help you perceive, at very low resolution, what’s going on beyond that. Still, there are things that radar can’t tell you. Rare things, but still important. So you need a sensor that sees further to spot things like stalled cars under bridges. Radar sees those, but can’t tell them from the bridge.
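The value of range beyond 100m can be motivated with simple stopping-distance arithmetic. A minimal sketch, where the speed, reaction time and deceleration figures are my own illustrative assumptions rather than numbers from the post:

```python
# Distance needed to stop = distance covered during reaction time
# plus braking distance v^2 / (2a). All figures are illustrative.

def stopping_distance_m(speed_mps, reaction_s=1.0, decel_mps2=4.0):
    """Total stopping distance for a given speed, reaction time, and
    a comfortable (not panic) deceleration."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

highway_speed = 31.0  # roughly 70 mph, in metres per second
print(round(stopping_distance_m(highway_speed)))  # about 151 m
```

At a comfortable deceleration from highway speed, the stopping distance already exceeds the roughly 100m reach of regular lidars, which is part of why a 250m sensor matters for spotting that stalled car in time.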

To this point, Google has been the only company to say they have a long range LIDAR, but it has not been for sale. And as we all know, there is a famous lawsuit underway accusing Uber/Otto of copying Google’s LIDAR designs.

The Luminar point clouds are impressive. This will be a company to watch. (In the interests of disclosure, I am an advisor to Quanergy, another LIDAR startup.)

No, Detroit is not winning the robocar race.

A new report from Navigant Research includes the chart shown below, ranking various teams on the race to robocar deployment. It’s causing lots of press headlines about how Ford is the top company and companies like Google and Uber are far behind.

I elected not to buy the $3800 report, but based on the summary I believe their conclusions are ill-founded, to say the least.

This ordering smacks of old-world car industry thinking. Saying that Ford and GM are ahead of Waymo/Google is like saying that Foxconn is ahead of Google or Apple in the smartphone market. Foxconn makes the iPhone of course, and makes lots of money at a modest profit margin of a few percent. Apple and Google don’t make their phones, they design them and the software platform.

Ford and GM might feel good reading this report, but they should not. I do actually like Ford’s plan quite a bit — especially their declaration that they will not sell their robocar to end-users. I also like Daimler’s declaration that they want to have a taxi style service called “Car2Come” after their Car2Go one-way on-demand car rental service. (Americans giggle at the name, Germans are never bothered by such things. :-)

GM does not belong high on the list, other than for its partnership with Lyft. They were wise to acquire a company like Cruise — and I know the folks at Cruise, this is not a criticism of them — but it’s not enough to catapult you to the front of the list.

A similar article came out a few months ago, declaring that Silicon Valley was sure to lose to Detroit, because Detroit knows how to make cars, and Silicon Valley doesn’t. The report went further and declared that Google was falling behind because they had said they did not plan to make a car. The author had mistakenly thought Google had plans to make a car — Google never said anything like that — and so decided that the announcement that they would not make one was a big retreat on their part.

Companies like Google, Apple and Uber have never stated they wished to make cars, or felt they were any good at it. If they want to make cars, they have the cash to go buy a car company, but there is no need to do that. There are a couple of dozen companies around the world who are already very good at making cars, and if you come to them with an order for 100,000 cars to your specification, they will jump to say “yes, sir!” Some of the companies, the big leaders like Toyota or BMW, might well refuse that order, not wanting to be the supplier for a threat to their existence. But it won’t help them. Somebody will be that supplier. If not a German, Japanese or U.S. company, then a Korean company, or failing that a Chinese company. In fact, Foxconn has said it is interested in making cars, and Apple is designing them, so the Apple-Foxconn relationship may be far more than a metaphor for this situation.

When you summon an Uber, you don’t care what nameplate is on the car. When you summon UberSelect, you don’t care if it’s a Lexus, or Mercedes or BMW. Uber is your brand, and you aren’t buying the car for 15 years; you are buying it for 15 minutes. Brand plays a completely different role.

Companies like Waymo, Apple, Uber, Zoox and others would be foolish to manufacture cars, unless they want a car so radically different that nobody knows how to make it. (Then, they might decide to be the first to figure it out.) The car manufacturers would be foolish to turn down the giant purchase order, or partnerships with whoever has the best technology.

The winner of the transportation game of the future will be the company that thinks outside the car. That doesn’t mean the big car companies can’t do that. It’s just harder for them to do.

The chart’s not entirely wrong. Honda is pretty far behind — but PSA is even further behind. BMW, Daimler and Ford are among the best of the car companies, but Tesla and Volvo deserve higher ranking. Hyundai is not ahead of Toyota, and Tesla, while not ahead of Waymo, is in a pretty good place. Bosch is a surprising absentee from the list. FCA should be on it, just very low on the chart, along with the smaller Japanese vendors and many Chinese vendors.

But be clear. Making the car is essential, but it’s also old and a commodity. The value will lie in those building the self-driving software systems and sensors, and those putting services together around the technologies. The big automakers’ advantages — nameplate and reputation, reliability, manufacturing skill and capacity, retail channel experience — are all less valuable or commoditized. They have to act fast to move to the new business models that will matter in the future. Of course, one plan is to own the important components I name above, and several companies like BMW, Daimler, Ford, Nissan and Volvo are trying to do that. But they’re behind Waymo by a fair distance.

Flying cars, electrogliding and noise

The recently released national noise map makes it strikingly clear just how much air travel contributes to the noise pollution in our lives. In my previous discussion of flying cars I expressed the feeling that the noise of flying cars is one of their greatest challenges. While we would all love a flying car (really a VTOL helicopter) that takes off from our back yards, we will not tolerate our neighbour having one if there is regular buzzing and distraction overhead and in the next yard.

Helicopters are also not energy efficient, so real efforts for flying cars are fixed wing, using electric multirotors to provide vertical take-off but converting in some way to fixed wing flight, usually powered by those same motors in a different orientation. If batteries continue their path of getting cheaper, and more importantly lighter, this is possible.

Fixed wing planes can be decently efficient — particularly when they travel as the crow flies — though they can have trouble competing with lightweight electric ground vehicles. Almost all aircraft today fly much faster than their optimum efficiency speed. There are a lot of reasons for this. One is the fact that maintenance is charged by the hour, not the mile. Another is that planes need powerful engines to take off, and people are in a hurry and want to use that powerful engine to fly fast once they get up there.

Typical powered planes have a glide ratio (which is a good measure of their aerodynamic efficiency) of around 10:1 to 14:1. That means for every foot they drop, they go forward 10 to 14 feet. Gliders, more properly known as “sailplanes,” commonly reach a 50:1 glide ratio today, and some go even higher. Sailplane pilots can use that efficiency to enter slowly rising columns of air found over hot spots on the ground and “soar” around in a circle to gain altitude, staying up for hours. Silent flying is great fun, though the tight turns to rise in a thermal can cause nausea. Efficient sailplanes are also light and can have fairly bumpy rides. (Note as well that the extra weight of energy storage and motors and the drag of propellers mean a lower glide ratio.)

It is the silent flight that is interesting. An autonomous high efficiency aircraft, equipped with redundant electric motors and power systems, need not run its engines a lot of the time. While you would never want to be constantly starting and stopping piston powered aircraft engines, electric engines can start and stop and change speed very quickly. The motors provide tremendous torque for fast response times. It would be insane to regularly land your piston powered aircraft without power, figuring you can just turn on the engine “if you need it.” It might not be that crazy to do it in an electric aircraft when you can get the engine up and operating in a fraction of a second with high reliability, and you have multiple systems, so even the rare failures can be tolerated.

Both passengers and people on the ground would greatly appreciate planes that were silent most of the time, including when landing at short airstrips. It could make the difference for acceptance.

For a more radical idea, consider my more futuristic proposal of airports that grab and stop planes with robotic platforms on cables. Such a system would even allow for mostly silent takeoff in electric aircraft.

Making efficient aircraft VTOL is a challenge. They tend to have large wingspans and are not so suitable for backyards, even if they can hover. But the option for redundant multirotor systems makes possible something else — aircraft wings that unfold in the air. There are “flying cars” with folding wings which fold the wings up so the car can get on the road, but unfolding in the air is one of those things that is insane for today’s aircraft designs. A VTOL multirotor could rise up, unfold its wings, and if they don’t unfold properly, it can descend (noisily) on the VTOL system, either to where it took off from, or to a nearby large area if the wings unfolded but not perfectly. An in-flight failure of the folding system could again be saved (uncomfortably but safely) by the VTOL system.

We don’t yet know how to make powered vertical takeoff or landing quiet enough. But we might make the rest of the flight fairly silent, and make the noisy part fairly brief; the neighbours tolerate the occasional leaf blower, after all. A combination of robocars that take you the first and last kilometer, and aircraft that confine their noise to brief bursts in places where it won’t cause annoyance, might be a practical alternative.

Planes that fly silently would not fit well with today’s air traffic control regimens, which allocate ranges of altitude to planes. A plane with a 50:1 ratio could travel nearly 10 miles while losing 1,000 feet of altitude, then climb back up on power for another silent pass. But constant changing of altitude would freak out ATC. A computerized ATC for autonomous planes could enable entirely different regimens of keeping planes apart that would allow this, and it would also allow long slow glides all the way to the runway.
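The altitude-for-distance trade is simple to check with a quick sketch (round numbers only):

```python
# Horizontal distance gained per unit of altitude lost, at a given glide ratio.
FT_PER_MILE = 5280

def glide_miles(glide_ratio, altitude_loss_ft):
    """Miles travelled forward while descending altitude_loss_ft."""
    return glide_ratio * altitude_loss_ft / FT_PER_MILE

# A 50:1 sailplane trading away 1,000 feet of altitude:
print(round(glide_miles(50, 1000), 1))  # about 9.5 miles, roughly the 10 cited
```

The same arithmetic shows why a typical 12:1 powered plane gets only a couple of miles from the same altitude, and must run its engine nearly continuously.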

LIDAR (lasers) and cameras together -- but which is more important?

Recently we’ve seen a series of startups arise hoping to make robocars with just computer vision, along with radar. That includes recently unstealthed AutoX, the off-again, on-again efforts of comma.ai and at the non-startup end, the dedication of Tesla to not use LIDAR because it wants to sell cars today, before LIDARs can be bought at automotive quantities and prices.

Their optimism is based on the huge progress being made in the use of machine learning, most notably convolutional neural networks, at solving the problems of computer vision. Milestones are dropping quickly in AI and particularly pattern matching and computer vision. (The CNNs can also be applied to radar and LIDAR data.)

There are reasons pushing some teams this way. First of all, the big boys, including Google, have already made tons of progress with LIDAR. The right niche for a startup can be the place that the big boys are ignoring. It might not work, but if it does, the payoff is huge. I fully understand the VCs investing in companies of this sort; that’s how VCs work. There is also the cost, and for Tesla and some others, the non-availability of LIDAR. The highest capability LIDARs today come from Velodyne, but they are expensive and in short supply — they can’t make them fast enough to keep up with the demand just from research teams!

Note, for more detailed analysis on this, read my article on cameras vs. lasers.

For the three key technologies, these trends seem assured:

  1. LIDAR will improve price/performance, eventually costing just hundreds of dollars for high resolution units, and less for low-res units.
  2. Computer vision will improve until it reaches the needed levels of reliability, and the high-end processors for it will drop in cost and electrical power requirements.
  3. Radar will drop in cost to tens of dollars, and software to analyse radar returns will improve.

In addition, there are some more speculative technologies whose trends are harder to predict, such as long-range LWIR LIDAR, new types of radar, and even a claimed lidar alternative that treats the photons like radio waves.

These trends are very likely. As a result, the likely winner continues to be a combination of all these technologies, and the question becomes which combination.

LIDAR’s problem is that it’s low resolution, medium in range and expensive today. Computer Vision (CV)’s problem is that it’s insufficiently reliable, depends on external lighting and needs expensive computers today. Radar’s problem is super low resolution.

Option one — high-end LIDAR with computer vision assist

High end LIDARs, like the 32 and 64 laser units favoured by the vast majority of teams, are extremely reliable at detecting potential obstacles on the road. They never fail (within their range) to differentiate something on the road from the background. But they often can’t tell you just what it is, especially at a distance. It won’t know a car from a pickup truck, or 2 pedestrians from 3. It won’t read facial expressions or body language. It can read signs but only when they are close. It can’t see colours, such as traffic signals.

The fusion of the depth map of LIDAR with the scene understanding of neural net based vision systems is powerful. The LIDAR can pull the pedestrian image away from the background, making it much easier for the computer vision to reliably figure out what it is. The CV is not 100% reliable, but it doesn’t have to be; ideally, it just improves the result. LIDAR alone is good enough if you take the very simple approach of “If there’s something in the way, don’t hit it.” But that’s a pretty primitive result that makes you brake too much for things you should not brake for.
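That division of labour can be sketched in a few lines. Everything here (function names, the confidence threshold) is a hypothetical illustration, not any team’s actual pipeline:

```python
# LIDAR proposes obstacles (high recall); vision refines the label only
# when it is confident. Names and threshold are illustrative.

def fuse(lidar_segments, vision_classifier, min_confidence=0.9):
    labelled = []
    for segment in lidar_segments:
        label = "unknown_obstacle"  # safe default: something is there
        guess, confidence = vision_classifier(segment)
        if confidence >= min_confidence:
            # A confident vision answer refines the label; an unconfident
            # one leaves the conservative LIDAR default in place.
            label = guess
        labelled.append((segment, label))
    return labelled
```

The point of the structure is that a vision failure degrades the answer back to “unknown obstacle, don’t hit it” rather than to a missed detection.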

Consider a bird on the road, or a blowing trash bag. It’s a lot harder for the LIDAR system to reliably identify those things. On the other hand, the vision systems will do a very good job at recognizing the birds. A vision system that makes errors 1 time in every 10,000 is not adequate for driving. That’s too high an error rate when you encounter thousands of obstacles every hour. But missing 1 bird out of 10,000 means that you brake unnecessarily for a bird perhaps once every year or two, which is quite acceptable.
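The asymmetry can be made concrete with back-of-envelope numbers; the encounter counts below are my own assumptions, chosen only to match the paragraph’s “thousands per hour” and “once every year or two”:

```python
ERROR_RATE = 1 / 10_000  # the hypothetical 1-in-10,000 vision error rate

# Missing real obstacles: thousands of encounters per hour makes even a
# tiny error rate intolerable.
obstacles_per_hour = 2_000
missed_per_hour = obstacles_per_hour * ERROR_RATE
print(missed_per_hour)  # 0.2 misses per hour, roughly one every 5 hours

# Misreading birds and trash bags: the same rate applied to rare
# encounters just means a needless brake once in a long while.
birds_per_year = 10_000
needless_brakes_per_year = birds_per_year * ERROR_RATE
print(needless_brakes_per_year)  # about one unnecessary brake per year
```

The same error rate is catastrophic in one role and harmless in the other, which is why the tasks get split between the sensors the way they do.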

Option two — lower end LIDAR with more dependence on vision

Low end lidars, with just 4 or so scanning planes, cost a lot less. Today’s LIDAR designs basically need to have an independent laser, lens and sensor for each plane, and so the more planes, the more cost. But that’s not enough to identify a lot of objects, and will be pretty deficient on things low to the ground or high up, or very small objects.

The interesting question is, can the flaws of current computer vision systems be made up for by a lower-end, lower-cost LIDAR? Those flaws, of course, include not always discerning things in their field. They also include needing illumination at night. This is a particular issue when you want a 360 degree view — one can project headlights forward and see as far as they see, but you can’t project headlights backward or to the side without distracting drivers.

It’s possible one could use infrared headlights in the other directions (or forward, for that matter). After all, the LIDAR sends out infrared laser beams. There are eye safety limits (your iris does not contract and you don’t blink in response to IR light) but the heat output is also not very high.

Once again, the low end lidar will eliminate most of the highly feared false negatives (when the sensor doesn’t see something that’s there) but may generate more false positives (ghosts that make the vehicle brake for nothing.) False negatives are almost entirely unacceptable. False positives can be tolerated but if there are too many, the system does not satisfy the customer.

This option is cheaper but still demands computer vision even better than we have today. But not much better, which makes it interesting.

Other options

Tesla has said they are researching what they can do with radar to supplement cameras. Radar is good for obstacles in front of you, especially moving ones. Better radar is coming that does better with stationary objects and pulls out more resolution. Advanced tricks (including with neural networks) can look at radar signals over time to identify things like walking pedestrians.

Radar sees cars very well (especially licence plates) but is not great on pedestrians. On the other hand, for close objects like pedestrians, stereo vision can help the computer vision systems a lot. You mostly need long range for higher speeds, such as the highways, where vehicles are your only concern.

Who wins?

Cost will eventually be a driver of robocar choices, but not today. Today, safety is the only driver. Get it safe, before your competitors do, at almost any cost. Later, make it cheap. That’s why most teams have chosen the use of higher end LIDAR and are supplementing it with vision.

There is an easy mistake to make, though, and sometimes the press and perhaps some teams are making it. It’s “easy” on the grand scale to make a car that can do basic driving and have a nice demo. You can do it with just LIDAR or just vision. The hard part is the last 1%, which takes 99% of the time, if not more. Google had a car drive 1,000 miles of different roads and 100,000 total miles in the first 2 years of their project back in 2010, and even in 2017, with by far the largest and most skilled team, they do not feel their car is ready. It gets easier every day, as tech advances, to get the demo working, but that should not be mistaken for the real success that is required.

California New Regs, Intel buys MobilEye, Waymo sues Uber

California has published updated draft regulations for robocars whose most notable new feature is rules for testing and operating unmanned cars, including cars which have no steering wheel, such as Google, Navya, Zoox and others have designed.

This is a big step forward from earlier plans which would have banned testing and deploying those vehicles. Not that they are ready to deploy, but once you ban something it’s harder to un-ban it.

One type of vehicle whose coverage is unclear is small unmanned delivery robots, like the ones we’re working on at Starship. Small, light, low speed, inherently unmanned and running mostly on the sidewalks, they are not at all a fit for these regulations and presumably would not be covered by them — that should be made more explicit.

Another large part of the regulations cover revoking permits and the bureaucracy around that. You can bet that this is because of the dust-up between the DMV and Uber/Otto a few months ago, where Uber declared that they didn’t need permits (probably technically true) but the DMV found it not at all in the spirit of the rules and revoked the licence plates on the cars. The DMV wants to be ready to fight those who challenge its authority.

Intel buys MobilEye

Intel has paid over $15B to buy Jerusalem-based MobilEye. MobilEye builds ASIC-based camera/computer vision systems to do ADAS and has been steadily enhancing them to work as a self-driving sensor. They’ve done so well that the stock market already got very excited and pushed them up to near this rich valuation — the stock traded close to this level for a while, but fell after ME said it would no longer sell their chips to Tesla. (Tesla’s first autopilot depended heavily on the MobilEye, and while ME’s contract with Tesla explicitly stated it did not detect things like cross-traffic, that failure to detect played a role in the famous Tesla autopilot fatal crash.)

In a surprising and wise move, Intel is going to move its other self-driving efforts to Israel and let MobilEye run them, rather than gobble them up and swallow/destroy them. ME is a smart company, fairly nimble, though it has too much focus on making low-cost sensors in a world where safety at high cost is better than less safety at low cost. (Disclaimer: I own some MBLY and made a nice profit on it in this sale.)

MobilEye has been the leader in doing ADAS functions with just cameras and cameras+radar. Several other startups are attempting this, and of course so is Tesla in their independent effort. However, LIDAR continues to get cheaper (with many companies, including Quanergy, whom I advise, working hard on that). The question may be shifting from “will it be cameras or lasers?” to “will it be fancy vision systems with low-end LIDAR, or will it be high-end LIDAR with more limited vision systems?” In fact, that question deserves another post.

Waymo and Uber Lawsuit

I am not going to comment a great deal on this lawsuit, because I am close with both sides, and have NDAs with both Otto and formerly with Google/Waymo. There are lots of press reports on the lawsuit, filed by Waymo accusing Anthony Levandowski (who co-founded Otto and helped found the car team at Google) of stealing a vast trove of Google’s documents and designs. This fairly detailed Bloomberg report has a lot of information, including reports that at an internal meeting, Anthony told his colleagues that any downloading he did was simply to allow work from home.

The size of the lawsuit is staggering. Since Otto sold for 1% of Uber stock (worth over $750M) the dollar values are huge, particularly if, as Google alleges, they can demonstrate Uber encouraged wrongdoing. At the same time, if Google doesn’t prove their allegations, Otto and Anthony could file for what might be the largest libel lawsuit in history, since Google published their accusations not just in court filings, but in their blog.

One reason that might not happen is that Uber is seeking to force arbitration. Like almost all contracts these days, the contracts here included clauses forcing disputes to go to arbitrators, not courts. That will mean that the resolution and other data remain secret.

It’s very serious for both sides. Some have said it’s mission critical for Uber, though I have disputed that, pointing out that even if Uber fails to develop good self-drive technology, they remain free to buy it from other people. That’s something the other players can’t do — even Lyft which has bound itself up with GM for now.

At the same time, Uber should fear something else. Uber is nothing, a $0 company, without iPhone and Android. (There is a Windows mobile app but it’s very low penetration.) Uber could push all drivers to iPhone, but if they ever found themselves unable to use Android for customers, they would lose more than they can afford.

I am not suggesting Google would go as far as to pull or block the Uber app on Android if it got into a battle. Aside from being unethical that might well violate antitrust regulations. But don’t underestimate the risk of betting half your business on a platform controlled by a company you go to war with. There are tricks I can think of (but am not yet publishing here) which Google could do which would not be seen as unfair or anti-competitive but which could potentially ruin Uber. Uber and Google will both have to be cautious in any serious battle.

In other Uber news, leaked reports say their intervention rate is still quite high. Intervention figures can be hard to interpret. Drivers are told to intervene at the smell of trouble, so the rate of grabbing the wheel can be much higher than the rate of actual problems. These leaks suggest, however, a fairly high rate of actual problems. This should remind people that while it’s pretty easy for a skilled team to get a car on the road and doing basic driving in a short time, there is a reason that Google’s very smart team has been at it 9 years and is still not ready to ship. The last 1% of the work takes 99% of the time.

Electrify Caltrain? Or could robocars do it for less than 1.5 billion?

Caltrain is the commuter rail line of the San Francisco peninsula. It’s not particularly good, and California is the land of the car commuter, but a plan was underway to convert it from diesel to electric. This made news this week as the California Republican house members announced they want to put a stop to both this project, and the much larger California High Speed Rail that hopes to open in 2030. For various reasons they may be right about the high speed rail but stop the electric trains? Electric trains are much better than diesel; they are cleaner and faster and quieter. But one number stands out in the plan.

Electrifying the 51 miles of track, along with some related improvements, is forecast to cost over $1.5 billion, around $30 million per mile.

So I started to ask: what other technology could we buy with $1.5 billion plus a private right-of-way through the most populated areas of Silicon Valley and the peninsula? Caltrain carries about 60,000 passengers/weekday (30,000 each way). That’s about $50,000 per rider. In particular, what about a robotic transit line, using self-driving cars, vans and buses?

Paving over the tracks is relatively inexpensive. In fact, if we didn’t have buses, you could get by with fairly meager pavement, since no heavy vehicles would travel the line. You could leave the rails intact in the pavement, though that makes the paving job harder. You want pavement because you want stations to become “offline” — vehicles depart the main route when they stop so that express vehicles can pass them by. That’s possible with rail, but in spite of rail’s virtues, there are other reasons to go to tires.

Fortunately, due to the addition of express trains many years ago, some stations already are 4 tracks wide, making it easy to convert stations to an express route with space by the side for vehicles to stop and let passengers on/off. Many other stations have parking lots or other land next to them allowing reasonably easy conversion. A few stations would present some issues.

Making robocars for a dedicated track is easy; we could have built that decades ago. In fact, with their much shorter stopping distance they could be safer than trains on rails. Perhaps we had to wait until today to convince people that one could get the same safety off of rails. Another thing that only arrived recently was the presence of smartphones in the hands of almost all the passengers, and low-cost computing to make kiosks for the rest. That’s because the key to a robotic transit line would be coordination around the desires of passengers. A robotic transit line would know just who was going from station A to station J, and attempt to allocate a vehicle just for them. This vehicle would stop only at those two stations, providing a nonstop trip for most passengers. The lack of stops is also more energy efficient, but the real win is that it’s more pleasant and faster. With private ROW, it can easily beat a private car on the highways, especially at rush hour.

Another big energy win is sizing the vehicles to the load. If there are only 8 passengers going from B to K, then a van is the right choice, not a bus. This is particularly true off-peak, where vast amounts of energy are wasted moving big trains with just a few people. Caltrain’s last train to San Francisco never has more than 100 people on it. Smaller vehicles also allow for more frequent service in an efficient manner, and late night service as well — except freight uses these particular rails at night. (Most commuter trains shut down well before midnight.) Knowing you can get back is a big factor in whether you take a transit line at night.
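The grouping and vehicle-sizing logic of the last two paragraphs can be sketched in a few lines. This is only an illustration: the station names, the nine-seat/forty-seat split and the dispatch policy are my own assumptions, not a real scheduler.

```python
# Sketch only: pool ride requests by (origin, destination) so each group
# gets a nonstop vehicle, sized to the load. Station names and seat
# counts are illustrative assumptions, not a real dispatch design.
from collections import defaultdict

def group_trips(requests, van_seats=9):
    """requests: iterable of (rider_id, origin, destination) tuples."""
    by_pair = defaultdict(list)
    for rider, origin, dest in requests:
        by_pair[(origin, dest)].append(rider)

    dispatches = []
    for (origin, dest), riders in by_pair.items():
        # Size the vehicle to the load: a 9-seat van off-peak,
        # a bus only when the group is too big for a van.
        vehicle = "van" if len(riders) <= van_seats else "bus"
        dispatches.append((origin, dest, vehicle, riders))
    return dispatches

demo = [("r1", "A", "J"), ("r2", "A", "J"), ("r3", "B", "K")]
for origin, dest, vehicle, riders in group_trips(demo):
    print(f"{vehicle} nonstop {origin}->{dest}, {len(riders)} riders")
```

A real system would also merge small groups into two-hop trips and balance the fleet, but the core idea is just this bucketing by origin-destination pair.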

An over-done service with a 40-passenger bus every 2 seconds would move 72,000 people (but really 30,000) in one hour in one direction, versus Caltrain’s 30,000 in a day. So of course we would not build that, and there would only be a few buses, mainly for rush hour. Even a fleet of just 4,000 9-passenger minivans (3 rows of 3) could move around 16,000 per hour (but really 8,000) in each direction. Even at $50,000 per van, we’ve spent only $200M of our $1.5B, though vans that cheap might wear out too fast, so we could pay more and give them a much longer lifetime.
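The arithmetic behind those figures is easy to check. The 2-second headway and vehicle sizes come from the text; the 45 mph average speed and the resulting round-trip cycle time are my own illustrative assumptions.

```python
# Back-of-envelope capacity check for the robotic transit line.
# Headway and vehicle sizes are from the article; the average speed
# (and therefore cycle time) is an assumed, illustrative number.

def seats_per_hour(headway_s, seats):
    """Theoretical seat throughput for one lane at a fixed headway."""
    return 3600 / headway_s * seats

bus_limit = seats_per_hour(2, 40)        # 40-seat bus every 2 s: 72,000/hour

line_miles = 51                          # length of the Caltrain corridor
avg_mph = 45                             # assumed average, including stops
cycle_hours = 2 * line_miles / avg_mph   # round trip takes ~2.3 hours
van_limit = 4000 * 9 / cycle_hours       # 4,000 nine-seat vans: ~16,000/hour

print(f"buses: {bus_limit:,.0f} seats/hour; vans: ~{van_limit:,.0f}/hour")
```

The halving in the "(but really …)" figures comes from the at-grade crossing gaps discussed below; the raw numbers above are the no-crossing ceiling.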

Energy

These vans and cars could be electric. This could be done entirely with batteries and a very impressive battery-swap system, or you could have short sections of track which are electrified, with overhead wires or even third rails. The electric lines would be used to recharge batteries and supercapacitors, and would only be present on parts of the track. Unlike old third-rail technology, which requires full grade separation, there are new techniques to build safe third rails that only energize a track segment after getting a positive digital signal from the vehicle. This is much cheaper than overhead wires. Inductive charging is also possible, but it makes pavement construction and maintenance much more expensive.

Other alternatives would be things like natural gas (which is cheap and much cleaner than liquid fuels, though it still emits CO2) because it can be refilled quickly. Hydrogen fuel cell vehicles could also work here — hydrogen can be refilled quickly and can be zero emissions. Regular fossil fuel is also an option for peak times. For example, the rush hour buses might make more sense running on CNG or even gasoline. The lack of starts and stops can make this pretty efficient.

Stations, anywhere

In such a system, you can also add new “stations” anywhere the ROW is wide enough for a side-lane and a small platform. You don’t need the 100m long platform able to hold a big train, just some pavement big enough to load a van. You can add a new station for extremely low cost. Of course, with more stations, it’s harder to group people for nonstop trips, and more people would need to take two-hop trips — a small van or car that takes them from a mini-station to a major station, where they join a larger group heading to their true destination.

Of course, if you were designing this from scratch, you would make the ROW with a shoulder everywhere that allowed vehicles to pull off the main track at any point to pick up a passenger and there would barely be “stations” — they would be closer to bus stops.

Getting off the track

Caltrain’s station in San Francisco is quite far from most of the destinations people want to go to. It’s one of the big reasons people don’t ride it. Vans on tires, however, have the option of keeping going once they get to the station. Employers could sponsor vehicles that arrive at the station and keep driving to their office tower. Vans could also continue to BART or more directly to underground Muni, long before the planned subway is ready. Likewise on the peninsula, vans and buses would travel from stations to corporate HQ. Google, Yahoo, Apple and many other companies already run transit fleets to bring employees in — you can bet that, given the option, they would gladly have those vans drive the old rail line at express speeds. On day one, they could have a driver who only drives the section back and forth between the station and the corporate office. In the not too distant future, the van or bus would of course drive itself. It’s not even out of the question that one of the passengers in a van, after having taken a special driving test, could drive that last mile, though you may need to ensure somebody drives it back.

At-grade crossings

I noted above that capacity would be slightly less than half of full. That’s because Caltrain has 40 at-grade crossings on the peninsula. The robotic vehicles would coordinate their trips to travel in bunches, leaving gaps in which the cross-street’s light can be turned green. If any car were detected trying to run the red, a signal could be sent to all the robotic vans to slow or even brake hard. Unlike trains, they could brake in a reasonable distance if somebody stalled on the old track. You would also detect people attempting to drive or walk on the path. Today’s cameras and cheap LIDARs can make that affordable. The biggest problem is that the gaps must appear in both directions (more on that in the comments).
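One way to create those gaps is to hold departures so that vehicles arrive at each crossing in platoons, leaving a recurring window for cross traffic. Here is a toy sketch of that timing rule; the 60-second cycle and 20-second cross-street green are invented numbers, and a real scheduler would have to synchronize windows across all 40 crossings and both directions.

```python
# Toy crossing scheduler: robotic vehicles may only pass during the
# platoon window of each signal cycle; the remainder is the cross-street
# green. The cycle and window lengths here are invented for illustration.

CYCLE = 60.0   # seconds per full signal cycle at a crossing
GREEN = 20.0   # cross-street green: robotic vehicles must stay clear

def platoon_time(requested):
    """Delay an arrival, if needed, into the platoon window
    (the first CYCLE - GREEN seconds of each cycle)."""
    offset = requested % CYCLE
    if offset < CYCLE - GREEN:
        return requested                 # already inside the window
    return requested + (CYCLE - offset)  # hold until the next cycle starts

print(platoon_time(10.0))   # inside the window: unchanged
print(platoon_time(45.0))   # during cross green: held to the next cycle
```

Note that this costs each held vehicle at most GREEN seconds, which is how the line trades a bit of latency for the "slightly less than half" capacity figure.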

Over time, there is also the option in some places to build special crossings. Because the vans and cars would all be fairly low, much less expensive underpasses could be created under some of the roads for use only by the smaller vehicles. Larger vehicles would still need to bunch themselves together to leave gaps for the cross-traffic. One could also create overpasses rated only for lightweight vehicles at much lower cost, though those would still need to be high enough for trucks to go underneath. In addition, while cars can handle much, much steeper grades than trains, too much up and down at 100mph could get disconcerting. And yes, in time, they would go 100mph or even faster. And in time, some would even draft one another to both increase capacity and save energy — creating virtual trains where there used to be physical ones.

And then, obsolete

This robotic transit line would be much better than the train. But it would also be obsolete in just a couple of decades! As the rest of the world moves to more robocars, the transit line would switch to being just another path for the robocars. It would be superior, because it would allow only robocars and never have traffic congestion. You would have to pay extra to use it at rush hour, but many vehicles would, and large vehicles would get preference. The stations would largely vanish as all vehicles are able to go door to door. Most of the infrastructure would get re-used after the transit line shuts down.

It might seem crazy to build such a system if it will be obsolete in a short time, but it’s even crazier to spend billions shoring up a 19th-century train.

What about the first law?

I’ve often said the first law of robocars is you don’t change the infrastructure. In particular, I am in general against ideas like this which create special roads just for robocars, because it’s essential that we not imagine robocars are only good on special roads. It’s only when huge amounts of money are already earmarked for infrastructure that this makes sense. Now we are well on the way to making general robocars good for ordinary streets. As such, special cars only for the former rail line run less risk of making people believe that robocars are only safe on dedicated paths. In fact, the funded development would almost surely lead to vehicles that work off the path as well, and allow high volume manufacturing of robotic transit vehicles for the future.

Could this actually happen?

I do fear that our urban and transit planners are unlikely to be so forward-looking as to abandon a decades-old plan for a centuries-old technology overnight. But the advantages are huge:

  • It should be cheaper
  • Many companies could do it, and many would want to, to fund development of other technology
  • It would almost surely be technology from the Bay Area, not foreign technology, though vehicle manufacturing would come from outside
  • They could also get money for the existing rolling stock and steel in the rails to fund this
  • The service level would be vastly better. Wait times of mere minutes. Non-stop service. Higher speeds.
  • The energy use would be far lower and greener, especially if electric, CNG or hydrogen vehicles are used

The main downside is risk. This doesn’t exist yet. If you pave the roadbed with the rails still embedded in it, you would not need to shut down the rail line at first. In fact, you could keep it running as long as there were places where the vans could drive around trains that are slowing or stopping in the stations. Otherwise, you do need to switch over one day.

California publishes robocar "intervention" reports -- Google/Waymo so far ahead it's ridiculous

California published its summary of all the reports submitted by vendors testing robocars in the state. You can read the individual reports — and they are interesting — but several other outlets have created summaries of the reports, calculating things like the number of interventions per mile.

On these numbers, Google’s lead is extreme. Of over 600,000 autonomous miles driven by the various teams, Google/Waymo was 97% of them — in other words, 30 times as much as everybody else put together. Beyond that, their miles between disengagements (around 5,000 — a 4x improvement over 2015) is one or two orders of magnitude better than the others’, and in fact most of the others have so few miles that you can’t even produce a meaningful number. Only Cruise, Nissan and Delphi can claim enough miles to really tell.
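The metric those summaries compute is simply miles divided by disengagements, with a floor below which the ratio means little. A sketch with illustrative numbers, rounded to match the article’s rough figures rather than the exact DMV filings:

```python
# Miles-per-disengagement, as the report summaries compute it.
# The figures below are rounded illustrations of the article's numbers
# (600,000+ Waymo miles, ~5,000 miles per disengagement), NOT the
# exact 2016 California DMV filings.

reports = {
    "Waymo/Google": {"miles": 600_000, "disengagements": 120},
    "SmallVendor":  {"miles": 550,     "disengagements": 25},
}

MIN_MILES = 5_000  # below this, the ratio is statistically meaningless

for name, r in reports.items():
    if r["miles"] < MIN_MILES:
        print(f"{name}: too few miles for a meaningful rate")
        continue
    rate = r["miles"] / r["disengagements"]
    print(f"{name}: {rate:,.0f} miles per disengagement")
```

As the bullets below explain, even this simple ratio is not comparable across vendors, since each interprets the reporting rules differently.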

Tesla is a notable entry. In 2015 they reported driving zero miles, and in 2016 they did report a very small number of miles with tons of disengagements from software failures (one every 3 miles). That’s because Tesla’s autopilot is not a robocar system, so miles driven by it are not counted. Tesla’s numbers must come from small-scale tests of a more experimental vehicle. This is very much not in line with Tesla’s claim that it will release full autonomy features for their cars fairly soon, and that the cars already have all the hardware needed for that to happen.

Unfortunately you can’t easily compare these numbers:

  • Some companies are doing most of their testing on test tracks, and they do not need to report what happens there.
  • Companies have taken different interpretations of what needs to be reported. Most of Cruise’s disengagements are listed as “planned,” though in theory those should not appear in these reports; meanwhile, the unplanned ones that should be there are not listed.
  • Delphi lists real causes and Nissan is very detailed as well. Others are less so.
  • Many teams test outside California, or even do most of their testing there. Waymo/Google actually tests a bunch outside California, making their numbers even bigger.
  • Cars drive all sorts of different roads. Urban streets with pedestrians are much harder than highway miles. The reports do list something about conditions but it takes a lot to compare apples to apples. (Apple is not one of the companies filing a report, BTW.)

One complication is that safety drivers are typically told to disengage if they have any doubts. What counts as “doubt,” and how to respond to it, thus varies from driver to driver and company to company.

Google has said their approach is to test any disengagement in simulator, to find out what probably would have happened if the driver had not disengaged. If there would have been a “contact” (accident), then Google considers that a real incident, and those are much rarer than the disengagement counts reported here. Many of the disengagements come when software detects faults in software or sensors. There, we do indeed have a problem, but as with human beings who zone out, not all such failures will cause accidents or even safety issues. You want to get rid of all of them, to be sure, but if you are trying to compare the safety of the systems to humans, it’s not easy to do.

It’s hard to figure out a good way to get comparable numbers from all teams. The new federal guidelines, while mostly terrible, contain an interesting rule that teams must provide their sensor logs for any incident. This will allow independent parties to compare incidents in a meaningful way, and possibly even run them all in simulator at some level.

It would be worthwhile for every team to be required to report incidents that would have caused accidents. That requires a good simulator, however, and it’s hard for the law to demand this of everybody.
