Recently we’ve seen a series of startups arise hoping to make robocars with just computer vision, along with radar. That includes the recently unstealthed AutoX, the off-again, on-again efforts of comma.ai, and at the non-startup end, the dedication of Tesla to not using LIDAR, because it wants to sell cars today, before LIDARs can be bought in automotive quantities and at automotive prices.
Their optimism is based on the huge progress being made in the use of machine learning, most notably convolutional neural networks, at solving the problems of computer vision. Milestones are dropping quickly in AI and particularly pattern matching and computer vision. (The CNNs can also be applied to radar and LIDAR data.)
There are reasons pushing some teams this way. First of all, the big boys, including Google, have already made tons of progress with LIDAR. The right niche for a startup can be the place the big boys are ignoring. It might not work, but if it does, the payoff is huge. I fully understand the VCs investing in companies of this sort; that’s how VCs work. There is also the cost, and for Tesla and some others, the non-availability of LIDAR. The highest capability LIDARs today come from Velodyne, but they are expensive and in short supply — they can’t make them fast enough to keep up with the demand just from research teams!
For the three key technologies, these trends seem assured:
LIDAR will improve price/performance, eventually costing just hundreds of dollars for high resolution units, and less for low-res units.
Computer vision will improve until it reaches the needed levels of reliability, and the high-end processors for it will drop in cost and electrical power requirements.
Radar will drop in cost to tens of dollars, and software to analyse radar returns will improve.
In addition, there are some more speculative technologies whose trends are harder to predict, such as long-range LWIR LIDAR, new types of radar, and even a claimed lidar alternative that treats the photons like radio waves.
These trends are very likely. As a result, the likely winner continues to be a combination of all these technologies, and the question becomes which combination.
LIDAR’s problem is that it’s low resolution, medium in range and expensive today. Computer Vision (CV)’s problem is that it’s insufficiently reliable, depends on external lighting and needs expensive computers today. Radar’s problem is super low resolution.
Option one — high-end LIDAR with computer vision assist
High end LIDARs, like the 32 and 64 laser units favoured by the vast majority of teams, are extremely reliable at detecting potential obstacles on the road. They never fail (within their range) to differentiate something on the road from the background. But they often can’t tell you just what it is, especially at a distance. It won’t know a car from a pickup truck, or 2 pedestrians from 3. It won’t read facial expressions or body language. It can read signs but only when they are close. It can’t see colours, such as traffic signals.
The fusion of the depth map of LIDAR with the scene understanding of neural net based vision systems is powerful. The LIDAR can pull the pedestrian image away from the background, making it much easier for the computer vision to reliably figure out what it is. The CV is not 100% reliable, but it doesn’t have to be; ideally, it just improves the result. LIDAR alone is good enough if you take the very simple approach of “If there’s something in the way, don’t hit it.” But that’s a pretty primitive approach that makes the car brake too much for things it should not brake for.
Consider a bird on the road, or a blowing trash bag. It’s a lot harder for the LIDAR system to reliably identify those things. On the other hand, the vision systems will do a very good job of recognizing the birds. A vision system that makes an error 1 time in every 10,000 is not adequate for driving. That’s too high an error rate when you encounter thousands of obstacles every hour. But missing 1 bird in 10,000 means that you brake unnecessarily for a bird perhaps once every year or two, which is quite acceptable.
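The arithmetic behind that acceptability argument is worth making concrete. A quick sketch; the per-hour and per-year encounter counts below are my own illustrative assumptions, not figures from the text:

```python
# Why a 1-in-10,000 error rate is fatal for obstacle detection
# but fine for bird classification. Encounter rates are assumed.
error_rate = 1 / 10_000

# Misclassifying real obstacles: with thousands encountered per hour,
# errors accumulate within hours of driving -- unacceptable.
obstacles_per_hour = 2_000            # assumption
errors_per_hour = obstacles_per_hour * error_rate
print(f"{errors_per_hour:.2f} obstacle errors per hour")   # 0.20

# Missing a bird merely causes one needless braking event.
birds_per_year = 5_000                # assumption
false_brakes_per_year = birds_per_year * error_rate
print(f"{false_brakes_per_year:.2f} unnecessary brakes per year")  # 0.50
```

The same error rate yields an error every five driving hours in one role, and one every two years in the other.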
Option two — lower end LIDAR with more dependence on vision
Low end lidars, with just 4 or so scanning planes, cost a lot less. Today’s LIDAR designs basically need to have an independent laser, lens and sensor for each plane, and so the more planes, the more cost. But that’s not enough to identify a lot of objects, and will be pretty deficient on things low to the ground or high up, or very small objects.
The interesting question is: can the flaws of current computer vision systems be made up for by a lower-end, lower cost LIDAR? Those flaws, of course, include not always discerning things in their field of view. They also include needing illumination at night. This is a particular issue when you want a 360 degree view — one can project headlights forward and see as far as they illuminate, but you can’t project headlights backward or to the side without distracting other drivers.
It’s possible one could use infrared headlights in the other directions (or forward, for that matter). After all, the LIDAR sends out infrared laser beams. There are eye safety limits (your iris does not contract and you don’t blink in response to IR light), but the heat output is also not very high.
Once again, the low end lidar will eliminate most of the highly feared false negatives (when the sensor doesn’t see something that’s there) but may generate more false positives (ghosts that make the vehicle brake for nothing.) False negatives are almost entirely unacceptable. False positives can be tolerated but if there are too many, the system does not satisfy the customer.
This option is cheaper but still demands computer vision even better than we have today. But not much better, which makes it interesting.
Tesla has said they are researching what they can do with radar to supplement cameras. Radar is good for obstacles in front of you, especially moving ones. Better radar is coming that does better with stationary objects and pulls out more resolution. Advanced tricks (including with neural networks) can look at radar signals over time to identify things like walking pedestrians.
Radar sees cars very well (especially licence plates) but is not great on pedestrians. On the other hand, for close objects like pedestrians, stereo vision can help the computer vision systems a lot. You mostly need long range for higher speeds, such as the highways, where vehicles are your only concern.
Cost will eventually be a driver of robocar choices, but not today. Today, safety is the only driver. Get it safe, before your competitors do, at almost any cost. Later make it cheap. That’s why most teams have chosen to use higher end LIDAR and are supplementing it with vision.
There is an easy mistake to make, though, and sometimes the press and perhaps some teams are making it. It’s “easy” on the grand scale to make a car that can do basic driving and have a nice demo. You can do it with just LIDAR or just vision. The hard part is the last 1%, which takes 99% of the time, if not more. Google had a car drive 1,000 miles of different roads and 100,000 total miles in the first 2 years of their project back in 2010, and even in 2017, with by far the largest and most skilled team, they do not feel their car is ready. It gets easier every day, as tech advances, to get the demo working, but that should not be mistaken for the real success that is required.
California has published updated draft regulations for robocars whose most notable new feature is rules for testing and operating unmanned cars, including cars which have no steering wheel, such as Google, Navya, Zoox and others have designed.
This is a big step forward from earlier plans, which would have banned testing and deploying those vehicles. Not that they are ready to deploy, but once you ban something it’s harder to un-ban it.
One type of vehicle whose coverage is unclear is small unmanned delivery robots, like the ones we’re working on at Starship. Small, light, low speed, inherently unmanned and running mostly on the sidewalks, they are not at all a fit for these regulations and presumably would not be covered by them — that should be made more explicit.
Another large part of the regulations cover revoking permits and the bureaucracy around that. You can bet that this is because of the dust-up between the DMV and Uber/Otto a few months ago, where Uber declared that they didn’t need permits (probably technically true) but the DMV found it not at all in the spirit of the rules and revoked the licence plates on the cars. The DMV wants to be ready to fight those who challenge its authority.
Intel buys MobilEye
Intel has paid over $15B to buy Jerusalem based MobilEye. MobilEye builds ASIC-based camera/computer vision systems to do ADAS and has been steadily enhancing them to work as a self-driving sensor. They’ve done so well the stock market already got very excited and pushed them up to near this rich valuation — the stock traded at close to this for a while, but fell after ME said it would no longer sell their chips to Tesla. (Tesla’s first autopilot depended heavily on the MobilEye, and while ME’s contract with Tesla explicitly stated it did not detect things like cross-traffic, that failure to detect played a role in the famous Tesla autopilot fatal crash.)
In a surprising and wise move, Intel is going to move its other self-driving efforts to Israel and let MobilEye run them, rather than gobble them up and swallow/destroy them. ME is a smart company, fairly nimble, though it has too much focus on making low-cost sensors in a world where safety at high cost is better than less safety at low cost. (Disclaimer: I own some MBLY and made a nice profit on it in this sale.)
MobilEye has been the leader in doing ADAS functions with just cameras and cameras+radar. Several other startups are attempting this, and of course so is Tesla in their independent effort. However, LIDAR continues to get cheaper (with many companies, including Quanergy, whom I advise, working hard on that.) The question may be shifting from “will it be cameras or lasers?” to “will it be fancy vision systems with low-end LIDAR, or will it be high-end LIDAR with more limited vision systems?” In fact, that question deserves another post.
Waymo and Uber Lawsuit
I am not going to comment a great deal on this lawsuit, because I am close with both sides, and have NDAs with both Otto and formerly with Google/Waymo. There are lots of press reports on the lawsuit, filed by Waymo accusing Anthony Levandowski (who co-founded Otto and helped found the car team at Google) of stealing a vast trove of Google’s documents and designs. This fairly detailed Bloomberg report has a lot of information, including reports that at an internal meeting, Anthony told his colleagues that any downloading he did was simply to allow work from home.
The size of the lawsuit is staggering. Since Otto sold for 1% of Uber stock (worth over $750M) the dollar values are huge, particularly if, as Google alleges, they can demonstrate Uber encouraged wrongdoing. At the same time, if Google doesn’t prove their allegations, Otto and Anthony could file for what might be the largest libel lawsuit in history, since Google published their accusations not just in court filings, but in their blog.
One reason that might not happen is that Uber is seeking to force arbitration. Like almost all contracts these days, the contracts here included clauses forcing disputes to go to arbitrators, not courts. That will mean that the resolution and other data remain secret.
At the same time, Uber should fear something else. Uber is nothing, a $0 company, without iPhone and Android. (There is a Windows mobile app but it has very low penetration.) Uber could push all drivers to iPhone, but if they ever found themselves unable to use Android for customers, they would lose more than they can afford.
I am not suggesting Google would go as far as to pull or block the Uber app on Android if it got into a battle. Aside from being unethical that might well violate antitrust regulations. But don’t underestimate the risk of betting half your business on a platform controlled by a company you go to war with. There are tricks I can think of (but am not yet publishing here) which Google could do which would not be seen as unfair or anti-competitive but which could potentially ruin Uber. Uber and Google will both have to be cautious in any serious battle.
In other Uber news, leaked reports say their intervention rate is still quite high. Intervention figures can be hard to interpret. Drivers are told to intervene at the smell of trouble, so the rate of grabbing the wheel can be much higher than the rate of actual problems. These leaks suggest, however, a fairly high rate of actual problems. This should remind people that while it’s pretty easy for a skilled team to get a car on the road and doing basic driving in a short time, there is a reason that Google’s very smart team has been at it 9 years and is still not ready to ship. The last 1% of the work takes 99% of the time.
Caltrain is the commuter rail line of the San Francisco peninsula. It’s not particularly good, and California is the land of the car commuter, but a plan was underway to convert it from diesel to electric. This made news this week as the California Republican house members announced they want to put a stop to both this project, and the much larger California High Speed Rail that hopes to open in 2030. For various reasons they may be right about the high speed rail but stop the electric trains? Electric trains are much better than diesel; they are cleaner and faster and quieter. But one number stands out in the plan.
To electrify the 51 miles of track and make some other related improvements is forecast to cost over 1.5 billion dollars, around $30M per mile.
So I started to ask, what other technology could we buy with $1.5 billion plus a private right-of-way through the most populated areas of silicon valley and the peninsula? Caltrain carries about 60,000 passengers/weekday (30,000 each way.) That’s about $50,000 per rider. In particular, what about a robotic transit line, using self-driving cars, vans and buses?
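The figures in this paragraph reduce to two divisions; a quick check using only the numbers given above:

```python
# Cost figures for the Caltrain electrification, from the article's numbers.
total_cost = 1.5e9                  # $1.5 billion project cost
track_miles = 51
daily_round_trip_riders = 30_000    # 60,000 boardings/weekday, each way

cost_per_mile = total_cost / track_miles
cost_per_rider = total_cost / daily_round_trip_riders
print(f"${cost_per_mile / 1e6:.0f}M per mile")     # ~$29M
print(f"${cost_per_rider:,.0f} per daily rider")   # $50,000
```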
Paving over the tracks is relatively inexpensive. In fact, if we didn’t have buses, you could get by with fairly meager pavement since no heavy vehicles would travel the line. You could leave the rails intact in the pavement, though that makes the paving job harder. You want pavement because you want stations to become “offline” — vehicles depart the main route when they stop so that express vehicles can pass them by. That’s possible with rail, but in spite of the virtues of rail, there are other reasons to go to tires.
Fortunately, due to the addition of express trains many years ago, some stations already are 4 tracks wide, making it easy to convert stations to an express route with space by the side for vehicles to stop and let passengers on/off. Many other stations have parking lots or other land next to them allowing reasonably easy conversion. A few stations would present some issues.
Making robocars for a dedicated track is easy; we could have built that decades ago. In fact, with their much shorter stopping distance they could be safer than trains on rails. Perhaps we had to wait until today to convince people that one could get the same safety off of rails. Another thing that only arrived recently was the presence of smartphones in the hands of almost all the passengers, and low cost computing to make kiosks for the rest. That’s because the key to a robotic transit line would be coordination on the desires of passengers. A robotic transit line would know just who was going from station A to station J, and attempt to allocate a vehicle just for them. This vehicle would stop only at those two stations, providing a nonstop trip for most passengers. The lack of stops is also more energy efficient, but the real win is that it’s more pleasant and faster. With private ROW, it can easily beat a private car on the highways, especially at rush hour.
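The allocation step described here — group riders by their (origin, destination) pair and send right-sized vehicles nonstop between the two stations — can be sketched very simply. The vehicle sizes and the greedy fill rule below are my own illustrative assumptions:

```python
from collections import defaultdict

# Illustrative fleet: 4-seat car, 9-seat van, 40-seat bus
VEHICLE_SIZES = [4, 9, 40]

def allocate_nonstop_vehicles(requests):
    """Group riders by (origin, destination), then greedily dispatch
    the largest vehicle that can be filled, giving most riders a
    nonstop trip in a right-sized vehicle."""
    groups = defaultdict(int)
    for od_pair in requests:
        groups[od_pair] += 1

    dispatch = []
    for (origin, dest), n in groups.items():
        remaining = n
        while remaining > 0:
            # largest vehicle we can fill; else the smallest available
            fits = [s for s in VEHICLE_SIZES if s <= remaining]
            size = max(fits) if fits else VEHICLE_SIZES[0]
            dispatch.append((origin, dest, min(size, remaining)))
            remaining -= size
    return dispatch

# 11 riders A->J get a full van plus a car; 3 riders B->K share a car
trips = [("A", "J")] * 11 + [("B", "K")] * 3
print(allocate_nonstop_vehicles(trips))
# [('A', 'J', 9), ('A', 'J', 2), ('B', 'K', 3)]
```

A real dispatcher would also consider departure-time windows and two-hop trips, but the core grouping is this simple.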
Another big energy win is sizing the vehicles to the load. If there are only 8 passengers going from B to K, then a van is the right choice, not a bus. This is particularly true off-peak, where vast amounts of energy are wasted moving big trains with just a few people. Caltrain’s last train to San Francisco never has more than 100 people on it. Smaller vehicles also allow for more frequent service in an efficient manner, and late night service as well — except freight uses these particular rails at night. (Most commuter trains shut down well before midnight.) Knowing you can get back is a big factor in whether you take a transit line at night.
An over-done service with a 40 passenger bus every 2 seconds would move 72,000 people (but really 30,000, given the gaps needed at level crossings) in one hour in one direction, compared to Caltrain’s 30,000 in a day. So of course we would not build that; there would only be a few buses, mainly for rush hour. Even a fleet of just 4,000 9-passenger minivans (3 rows of 3) could move around 16,000 per hour (but really 8,000) in each direction. Even at $50,000 per van, we’ve spent only $200M of our $1.5B, though they might wear out too fast at that price, so we could bump the price and give them a much longer lifetime.
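These capacity figures fall out of simple headway arithmetic. A sketch, with the crossing-gap penalty modeled as a usable fraction of the headway slots:

```python
# Throughput of a single lane at a fixed headway between vehicles.
def hourly_capacity(seats, headway_seconds, usable_fraction=1.0):
    """People moved per hour in one direction.
    usable_fraction < 1 models slots given up as gaps for
    cross-traffic at the at-grade crossings."""
    vehicles_per_hour = 3600 / headway_seconds
    return vehicles_per_hour * seats * usable_fraction

# 40-seat bus every 2 seconds: 72,000/hour raw
print(hourly_capacity(40, 2))        # 72000.0
# 9-seat vans at the same headway: ~16,000/hour raw
print(hourly_capacity(9, 2))         # 16200.0
# With roughly half the slots left as crossing gaps
print(hourly_capacity(40, 2, 0.5))   # 36000.0
```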
These vans and cars could be electric. This could be done entirely with batteries and a very impressive battery swap system, or you could have short sections of track which are electrified — with overhead rails or even third rails. The electric lines would be used to recharge batteries and supercapacitors, and would only be present on parts of the track. Unlike old 3rd rail technology, which requires full grade separation, there are new techniques to build safe 3rd rails that only provide current in a track segment after getting a positive digital signal from the vehicle. This is much cheaper than overhead wires. Inductive charging is also possible but makes pavement construction and maintenance much more expensive.
Other alternatives would be things like natural gas (which is cheap and much cleaner than liquid fuels, though still emits CO2) because it can be refilled quickly. Or hydrogen fuel cell vehicles could work here — hydrogen can be refilled quickly and can be zero emissions. Regular fossil fuel is also an option for peak times. For example the rush hour buses might make more sense running on CNG or even gasoline. The lack of starts and stops can make this pretty efficient.
In such a system, you can also add new “stations” anywhere the ROW is wide enough for a side-lane and a small platform. You don’t need the 100m long platform able to hold a big train, just some pavement big enough to load a van. You can add a new station for extremely low cost. Of course, with more stations, it’s harder to group people for nonstop trips, and more people would need to take two-hop trips — a small van or car that takes them from a mini-station to a major station, where they join a larger group heading to their true destination.
Of course, if you were designing this from scratch, you would make the ROW with a shoulder everywhere that allowed vehicles to pull off the main track at any point to pick up a passenger and there would barely be “stations” — they would be closer to bus stops.
Getting off the track
Caltrain’s station in San Francisco is quite far from most of the destinations people want to go to. It’s one of the big reasons people don’t ride it. Vans on tires, however, have the option of keeping going once they get to the station. Employers could sponsor vehicles that arrive at the station and keep driving to their office tower. Vans could also continue to BART or more directly to underground Muni, long before the planned subway is ready. Likewise on the peninsula, vans and buses would travel from stations to corporate HQ. Google, Yahoo, Apple and many other companies already run transit fleets to bring employees in — you can bet that given the option they would gladly have those vans drive the old rail line at express speeds. On day one, they could have a driver who only drives the section back and forth between the station and the corporate office. In the not too distant future, the van or bus would of course drive itself. It’s not even out of the question that one of the passengers in a van, after having taken a special driving test, could drive that last mile, though you may need to assure somebody drives it back.
I noted above that capacity would be slightly less than half of full. That’s because Caltrain has 40 at-grade crossings on the peninsula. The robotic vehicles would coordinate their trips to travel in bunches, leaving gaps where the cross-street’s light can be turned green. If any car was detected trying to run the red, the signal could be broadcast to allow all the robotic vans to slow or even brake hard. Unlike trains, they could brake in reasonable amounts of time if somebody stalls on the old track. You would also detect people attempting to drive on the path or walk on it. Today’s cameras and cheap LIDARs can make that affordable. The biggest problem is the gaps must appear in both directions (more on that in the comments.)
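One way to picture the coordination: each vehicle snaps its arrival at a level crossing to the next window in which the track has right of way, and the cross street gets the rest of the signal cycle. The cycle and window lengths here are illustrative assumptions:

```python
# Bunching sketch: vehicles cross only during the track-green window,
# leaving the remainder of each cycle as a gap for cross-traffic.
CYCLE_S = 60             # assumed signal cycle at the crossing
TRACK_GREEN_S = 40       # assumed window in which vans may cross

def next_crossing_slot(arrival_s):
    """Earliest time >= arrival_s that falls inside a track-green window."""
    offset = arrival_s % CYCLE_S
    if offset < TRACK_GREEN_S:
        return arrival_s                    # already inside a window
    return arrival_s + (CYCLE_S - offset)   # hold back for the next bunch

# A van arriving 45 s into the cycle waits 15 s to join the next bunch
print(next_crossing_slot(45))   # 60
print(next_crossing_slot(10))   # 10
```

With a 40-second window per 60-second cycle per direction, and the gaps forced to line up in both directions, usable capacity lands near the "slightly less than half" figure above.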
Over time, there is also the option in some places to build special crossings. Because the vans and cars would all be not very high, much less expensive underpasses could be created under some of the roads for use only by the smaller vehicles. Larger vehicles would still need to bunch themselves together to leave gaps for the cross-traffic. One could also create overpasses rated only for lightweight vehicles at much lower cost, though those would still need to be high enough for trucks to go underneath. In addition, while cars can handle much, much steeper grades than trains, it could get disconcerting to handle too much up and down at 100mph. And yes, in time, they would go 100mph or even faster. And in time, some would even draft one another to both increase capacity and save energy — creating virtual trains where there used to be physical ones.
And then, obsolete
This robotic transit line would be much better than the train. But it would also be obsolete in just a couple of decades! As the rest of the world moves to more robocars, the transit line would switch to being just another path for the robocars. It would be superior, because it would allow only robocars and never have traffic congestion. You would have to pay extra to use it at rush hour, but many vehicles would, and large vehicles would get preference. The stations would largely vanish as all vehicles are able to go door to door. Most of the infrastructure would get re-used after the transit line shuts down.
It might seem crazy to build such a system if it will be obsolete in a short time, but it’s even crazier to spend billions shoring up a 19th century train technology.
What about the first law?
I’ve often said the first law of robocars is you don’t change the infrastructure. In particular, I am in general against ideas like this which create special roads just for robocars, because it’s essential that we not imagine robocars are only good on special roads. It’s only when huge amounts of money are already earmarked for infrastructure that this makes sense. Now we are well on the way to making general robocars good for ordinary streets. As such, special cars only for the former rail line run less risk of making people believe that robocars are only safe on dedicated paths. In fact, the funded development would almost surely lead to vehicles that work off the path as well, and allow high volume manufacturing of robotic transit vehicles for the future.
Could this actually happen?
I do fear that our urban and transit planners are unlikely to be so forward looking as to abandon a decades old plan for a centuries old technology overnight. But the advantages are huge:
It should be cheaper
Many companies could do it, and many would want to, to fund development of other technology
It would almost surely be technology from the Bay Area, not foreign technology, though vehicle manufacturing would come from outside
They could also get money for the existing rolling stock and steel in the rails to fund this
The service level would be vastly better. Wait times of mere minutes. Non-stop service. Higher speeds.
The energy use would be far lower and greener, especially if electric, CNG or hydrogen vehicles are used
The main downside is risk: this doesn’t exist yet. If you pave the road and retain the rails embedded in it, you would not need to shut down the rail line at first. In fact, you could keep it running as long as there were places where the vans could drive around trains that are slowing or stopping in the stations. Otherwise you do need to switch over one day.
On these numbers, Google’s lead is extreme. Of over 600,000 autonomous miles driven by the various teams, Google/Waymo was 97% of them — in other words 30 times as much as everybody else put together. Beyond that, their rate of miles between disengagements (around 5,000 — a 4x improvement over 2015) is one or two orders of magnitude better than the others, and in fact for most of the others, they have so few miles that you can’t even produce a meaningful number. Only Cruise, Nissan and Delphi can claim enough miles to really tell.
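To make the ratios explicit, here are the reported figures reduced to the comparisons in the paragraph:

```python
# Google/Waymo's share of the ~600,000 autonomous miles in the
# 2016 California DMV disengagement reports.
total_autonomous_miles = 600_000
waymo_share = 0.97

waymo_miles = total_autonomous_miles * waymo_share       # ~582,000
everyone_else_miles = total_autonomous_miles - waymo_miles  # ~18,000
print(f"Waymo drove about {waymo_miles / everyone_else_miles:.0f}x "
      "everyone else combined")

# ~5,000 miles per disengagement, a 4x improvement over 2015
miles_per_disengagement_2016 = 5_000
miles_per_disengagement_2015 = miles_per_disengagement_2016 / 4
print(f"implying roughly {miles_per_disengagement_2015:.0f} "
      "miles per disengagement in 2015")
```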
Tesla is a notable entry. In 2015 they reported driving zero miles, and in 2016 they reported a very small number of miles with tons of disengagements from software failures (one every 3 miles). That’s because Tesla’s autopilot is not a robocar system, and so miles driven by it are not counted. Tesla’s numbers must come from small scale tests of a more experimental vehicle. This is very much not in line with Tesla’s claim that it will release full autonomy features for their cars fairly soon, and that the cars already have all the hardware needed for that to happen.
Unfortunately you can’t easily compare these numbers:
Some companies are doing most of their testing on test tracks, and they do not need to report what happens there.
Companies have taken different interpretations of what needs to be reported. Most of Cruise’s disengagements are listed as “planned” but in theory those should not be listed in these reports. But they also don’t list the unplanned ones which should be there.
Delphi lists real causes and Nissan is very detailed as well. Others are less so.
Many teams test outside California, or even do most of their testing elsewhere. Waymo/Google actually tests a great deal outside California, making their numbers even bigger.
Cars drive all sorts of different roads. Urban streets with pedestrians are much harder than highway miles. The reports do list something about conditions but it takes a lot to compare apples to apples. (Apple is not one of the companies filing a report, BTW.)
One complication is that typically safety drivers are told to disengage if they have any doubts. It thus varies from driver to driver and company to company what “doubts” are and how to deal with them.
Google has said their approach is to test any disengagement in simulator, to find out what probably would have happened if the driver had not disengaged. If there would have been a “contact” (accident) then Google considers that a real incident, and those are more rare than is reported here. Many of the disengagements are when software detects faults with software or sensors. There, we do indeed have a problem, but like human beings who zone out, not all such failures will cause accidents or even safety issues. You want to get rid of all of them, to be sure, but if you are trying to compare the safety of the systems to humans, it’s not easy to do.
It’s hard to figure out a good way to get comparable numbers from all teams. The new federal guidelines, while mostly terrible, contain an interesting rule that teams must provide their sensor logs for any incident. This will allow independent parties to compare incidents in a meaningful way, and possibly even run them all in simulator at some level.
It would be worthwhile for every team to be required to report incidents that would have caused accidents. That requires a good simulator, however, and it’s hard for the law to demand this of everybody.
I generally pay very little attention when a company issues a press release about an “alliance.” It’s usually not much more than a press release unless there are details on what will actually be built.
The recent announcement that Uber plans to buy some self-driving cars from Daimler/Mercedes is mostly just such an announcement — a future intent, when Mercedes actually builds a full self-driving car, that Uber will buy some. This, in spite of the fact that Uber has its own active self-driving system in development, and that it paid stock worth $760M to purchase freshly-minted startup Otto to accelerate that.
This shows a special advantage that Uber has over other players here. Their own project is very active, but unlike others, it doesn’t cripple Uber if it fails. Uber’s business is selling rides, and it will continue to be. If Uber can’t do it with its own cars, it can buy somebody else’s. Uber does not have the intention to make cars (neither does Google and that’s probably true of most other non-car companies.) There are many companies who will make cars to order for you. But if Google’s self-drive software (and hardware) project fails, they are left with very little. If Uber’s fails, they are still very much in business, but not as much in control of the underlying vehicles. As long as there are multiple suppliers for Uber to choose from, they are good.
One nightmare for the car companies is the reduction in value of their brands. If you summon “UberSelect” (the luxury Uber) you don’t care if it is a Lexus or Mercedes that shows up. As long as it’s a decent luxury car, you are good, because you are not buying the car, you are using it for 20 minutes. Uber is the brand you are trusting — and car companies fear that. I presume one thing that Daimler wants from this announcement is to remind people that they are a leader and may well be the supplier of cars to companies like Uber. But will they be in charge of the relationship? I doubt it.
Lyft should have the same advantage — but it took a $500M investment from GM which strongly pressures it to use whatever solution GM creates. Of course, if GM’s project fails, Lyft still has the freedom to use another, including Mercedes.
A lawsuit from Tesla against former Tesla autopilot team leader Sterling Anderson and former head of Google Chauffeur (now Waymo) Chris Urmson reveals little, other than the two have a company which will get a lot of attention in the space. But that’s enough. Google’s project is the most advanced one in the world. I was there and worked for Chris in its early days. Tesla’s is not necessarily the most advanced technologically — it has no LIDAR development — but it’s way ahead of others in terms of getting out there and deploying to gain experience, which has given it a headstart, especially in camera/radar based systems. The leaders of the two projects together will cause a stir in the auto business.
Earlier I posted my gallery of CES gadgets, and included a photo of the eHang 184 from China, a “personal drone” able, in theory, to carry a person up to 100kg.
Whether the eHang is real or not, some version of the personal automated flying vehicle is coming, and it’s not that far away. When I talk about robocars, I am often asked “what about flying cars?” and there will indeed be competition between them. There are a variety of factors that will affect that competition, and many other social effects not yet much discussed.
The VTOL Multirotor
There are two visions of the flying car. The most common is VTOL — vertical takeoff and landing — something that may have no wheels at all because it’s more a helicopter than a car or airplane. The recent revolution in automation and stability for multirotor helicopters — better known as drones — is making people wonder when we’ll get one able to carry a person. Multirotors almost exclusively use electric motors because you must adjust speed very quickly to get stability and control. You also want the redundancy of multiple motors and power systems, so you can lose a rotor or a battery and still fly.
This creates a problem because electric batteries are heavy. It takes a lot of power to fly this way. Carrying more batteries means more weight — and thus more power needed to carry the batteries. There are diminishing returns, and you can’t get much speed, power or range before the batteries are dead. OK in a 3 kilo drone, not OK in a 150 kilo one.
Lots of people are experimenting with combining multirotor for takeoff and landing, and traditional “fixed wing” (standard airplane) designs to travel any distance. This is a great deal more efficient, but even so, still a challenge to do with batteries for long distance flight. Other ideas include using liquid fuels in some way. Those include just using a regular liquid fuel motor to run a generator (not very efficient) or combining direct drive of a master propeller with fine-control electric drive of smaller propellers for the dynamic control needed.
Another interesting option is the autogyro, which looks like a helicopter but needs a small runway for takeoff.
The traditional aircraft
Some “flying car” efforts have made airplanes whose wings fold up so they can drive on the road. These have never “taken off” — they usually end up a compromise that is not a very good car or a very good plane. They need airports but you can keep driving from the airport. They are not, for now, autonomous.
Some want to fly most of their miles, and drive just short distances. Some other designs are mostly for driving, but have an ability to “short hop” via parasailing or autogyro flying when desired.
“It is insufficient to assert, as you do, that the product does not remove any of the driver’s responsibilities” and “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.”
The ODI report finds that Tesla properly considered driver distraction risks in its design of the product. It goes even further, noting that after the introduction of Tesla autopilot, Teslas (including those driven by people monitoring the autopilot properly, those who were distracted, and those who drove with it off) still had a decently lower accident rate per mile than Teslas before autopilot. In other words, while the autopilot without supervision is not good enough to drive on its own, the autopilot even with the occasionally lapsed supervision that is known to happen, combined with improved AEB and other ADAS functions, is still overall a safer system than not having the autopilot at all.
This will provide powerful support for companies developing autopilot style systems, and companies designing robocars who wish to use customer supervised driving as a means to build up test miles and verification data. They are not putting their customers at risk as long as they do it as well as Tesla. This is interesting (and the report notes that evaluation of autopilot distraction is not a settled question) because it seems probable that people using the autopilot and ignoring the road to do e-Mail or watch movies are not safer than regular drivers. But the overall collection of distracted and watchful drivers is still a win.
This might change as companies introduce technologies which watch drivers and keep them out of the more dangerous inattentive style of use. As the autopilots get better, inattention will become more and more tempting, after all.
Tesla stock did not seem to be moved by this report. But it was also not moved by the accident or other investigations — it actually went on a broadly upward course for 2 months following announcement of the fatality.
The ODI’s job is to judge if a vehicle is defective. That is different from saying it’s not perfect. Perfection is not expected, especially from ADAS and similar systems. The discussion about the finer points of whether drivers might over-trust the system is not firmly settled here. That can still be true without the car being defective and failing to perform as designed, or being designed negligently.
Recently we’ve seen two essays by people I highly respect in the field of AI and robotics. Their points are worthy of reading, but in spite of my respect, I have some differences of course.
The first essay comes from Andrew Ng, head of AI (and thus the self-driving car project) at Baidu. You will find few who can compete with Andrew when it comes to expertise on AI. (Update: This essay is not recent, but I only came upon it recently.)
In Wired he writes that Self-Driving Cars Won’t Work Until We Change Our Roads—And Attitudes. And the media have read this essay as pushing much harder for changing the roads than he actually does. I have declared it to be the “first law of robocars” that you don’t change the infrastructure. You improve your car to match the world you are given, you don’t ask the world to change to help your cars. There are several reasons I promote this rule:
As soon as you depend on a change in the world in order to drive safely, you have vastly limited where you can deploy. You declare that your technology will be, for a very long time, a limited area technology.
You have to depend on, and wait for others to change the world or their attitudes. It’s beyond your control.
When it comes to cities and infrastructure, the pace of change is glacial. When it comes to human behaviour, it can be even worse.
While it may seem that the change to infrastructure is clearer and easier to plan, the reality is almost assuredly the opposite. That’s because the clever teams of developers, armed with the constantly improving technologies driven by Moore’s law, have the ability to solve problems in a way that is much faster than our linear intuitions suggest. Consider measuring traffic by installing tons of sensors, vs. just getting everybody to download Waze. Before Waze, the sensor approach seemed clear, if expensive. But it was wrong.
As noted, Andrew Ng does not actually suggest that much change to the infrastructure. He talks about:
Having road construction crews log changes to the road before they do them
Giving police and others who direct traffic a more reliable way to communicate their commands to cars
Better painting of lane markers
More reliable ways to learn the state of traffic lights
Tools to help humans understand the actions and plans of robocars
The first proposal is one I have also made, because it’s very doable, thanks to computer technology. All it requires at first blush is a smartphone app in the hands of construction crews. Before starting a project, they would know that just as important as laying out cones and signs is opening the app and declaring the start of a project. The phone has a GPS and can offer a selection of precise road locations and log it. Of course, the projects should be logged even before they begin, but because that’s imperfect, smartphone logging is good enough. You could improve this by sticking old smartphones in all the road construction machines (old phones are cheap and there are only so many machines) so that any time a machine stops on a road for very long, it sends a message to a control center. Even emergency construction gets detected this way.
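As a sketch of how simple that logging record could be, here is what a crew’s phone app might send to a road-data feed. All field names and the record format are my invention for illustration, not any real system:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ConstructionEvent:
    """One log entry from a road crew's phone: who, where, and project status."""
    crew_id: str
    lat: float        # from the phone's GPS
    lon: float
    status: str       # "planned", "started", or "finished"
    timestamp: float  # seconds since epoch

def log_event(crew_id, lat, lon, status, clock=time.time):
    """Build the JSON record the app would send to a central road-data feed."""
    event = ConstructionEvent(crew_id, round(lat, 6), round(lon, 6), status, clock())
    return json.dumps(asdict(event))
```

A car’s map service could then flag any road segment with a recent “started” event that has no matching “finished” one.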
Even with all that, cars still need to detect changes to the road (that’s easy with good maps) and cones and machines. Which they can do.
I think the redirection problem is more difficult. Many people redirect traffic, even civilians. However, I would be interested to see Ng’s prediction on how hard it is to get neural network based recognizers to understand all the common gestures. Considering that computers are now getting better at reading sign languages, which are much more complex, I am optimistic here. But in any event, there is another solution for the cases where the system can’t understand the advice, namely calling in an operator in a remote control center, which is what Nissan plans to do, and what we do at Starship. Unmanned cars, with no human to help, will just avoid data dead zones. If somehow they get to them, there can be other solutions, which are imperfect but fine when the problem is very rare, such as a way for the traffic manager to speak to the car (after all, spoken language understanding is now close to a solved problem for limited vocabulary problems.)
Here I disagree with Andrew. His statement may be a result of efforts to drive on roads without maps, even though Baidu has good map expertise. Google’s car has a map of the texture of the road. It knows where the cracks and jagged lane markers are. The car actually likes degrading lane markers. It’s perfectly painted straight and smooth roads which confuse it (though only slightly, and not enough to cause a problem.) So no, I think that better line painting is not on the must-do list.
He’s right, seeing lights can be challenging, though the better cars are getting good at it. The simple algorithm is “you don’t go if you don’t confirm green.” That means you don’t run a red but you could block traffic. If that’s very rare it’s OK. We can consider infrastructure to solve that, though I’m wary. Fortunately, if the city is controlling its lights with a central computer, you don’t have to alter the traffic light itself (which is hard,) you can just query the city, in those rare cases, for when the light will be changing. I think that problem will be solved, but I also think it may well be solved just by better cameras. Good robocars know exactly where all the lights are, and they know where they are, and thus they know exactly what pixels in a video image are from the light, even if the sun is behind it. (Good robocars also know where the sun is and will avoid stopping in a place where there is no light they can see without the sun right behind it.)
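The geometry behind “knowing exactly what pixels are from the light” is just a projection from the map into the camera. A minimal sketch using a pinhole camera model — the intrinsics and poses here are illustrative values, not any real car’s calibration:

```python
import numpy as np

def project_to_pixel(light_world, cam_pos, cam_rot, fx, fy, cx, cy):
    """Project a mapped 3-D traffic light into image coordinates.

    light_world: light position in the world frame (from the map)
    cam_pos, cam_rot: camera position and world-to-camera rotation (from localization)
    fx, fy, cx, cy: pinhole camera intrinsics
    Returns (u, v) pixel coordinates, or None if the light is behind the camera.
    """
    p_cam = np.asarray(cam_rot) @ (np.asarray(light_world, float) - np.asarray(cam_pos, float))
    if p_cam[2] <= 0:  # behind the image plane; no valid projection
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return (float(u), float(v))
```

A light mapped 10m straight ahead of the camera lands at the image center, so the classifier only has to examine a small patch there rather than hunt the whole frame for round glowing things.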
Working with people
How cars interact with people is one of Andrew Ng’s points and the central point of Rodney Brooks’ essay Unexpected Consequences of Self Driving Cars. Already many of the car companies have had fun experimenting with that, putting displays on the outside of cars of various sorts. While cars don’t have the body language and eye contact of human drivers, I don’t predict a problem we can’t solve with good effort.
Brooks’ credentials are also superb, as founder of iRobot (Roomba) and Rethink Robotics (Baxter) as well as many accomplishments as an MIT professor. His essay delves into one of the key questions I have wondered about for some time — how to deal with a world where things do not follow the rules, and where there are lots of implicit and changing rules and interactions. Google discovered the first instance of this when their car got stuck at a 4 way stop by being polite. They had to program the car to assert its right to go in order to handle the stop. Likewise, you need to speed to be a good citizen on many of our roads today.
His key points are as follows:
There is a well worked out dance between pedestrians and cars, that varies greatly among different road types, with give and take, and it’s not suitable for machines yet.
People want to know a driver has seen them before stepping near or certainly in front of a vehicle.
People jaywalk, and even expect cars to stop for them when they do on some streets.
In snowy places, people walk on the street when the sidewalk is not shoveled.
Foot traffic can be so heavy that timid cars can’t ever get out of sidestreets or driveways. Nice pedestrians often let them out. They will hand signal their willingness to yield or use body language.
Sometimes people just stand at the corner or edge of the road, and you can’t tell if they are standing there or getting ready to cross.
People setting cars to circle rather than park
People might jump out of their car to do something, leaving it in the middle of the street blocking traffic, where today they would be unwilling to double park.
People might abuse parking spots by having a car “hold” them for quick service when they want to leave an event.
Cars will grab early spots to pick up children at schools.
Brooks starts with one common mistake — he has bought into the “levels” defined by SAE, even claiming them to be well accepted. In fact, many people don’t accept them, especially the most advanced developers, and I outlined recently why there is only one level, namely unmanned operation, and so the levels are useless as a taxonomy. Instead the real taxonomy in the early days will be the difference between mobility on demand services (robotaxi) and self-drive enabled high end luxury cars. Many of his problems involve privately owned cars and selfish behaviour by their owners. Many of those behaviours don’t make sense in a world with robotaxis. I think it’s very likely that the robotaxis come first, and come in large numbers first, while some imagine it’s the other way around.
Brooks is right that there will be unintended consequences, and the technology will be put to uses nobody thought of. People will be greedy and antisocial; that can be assured. Fortunately, however, people will work out solutions, in advance, to anything you can think of or notice just by walking down the street or thinking about issues for a few days. The experienced developers have been thinking about these problems for decades now, and cars like Google’s have logged 300 human lifetimes of driving, and that number keeps increasing. They note every unusual situation they encounter on every road they can try to drive, and they put it into the simulator if it’s important. They’ve already seen more situations than any one human will encounter on those roads, though they certainly haven’t driven all the types of road in the world. But they will, before they certify as safe for deployment on such roads.
As I noted, only the “level 4” situation is real. Level 5 is an aspirational science-fiction goal, and the others are unsafe. Key to the improved thinking on “levels” is that it is no longer the amount of human supervision needed that makes the difference; it is the types of roads and situations you can handle. All these vehicles will only handle a subset of roads, and that is what everybody plans. If there is a road that is too hard, they just won’t drive it. Fortunately, there are lots of road subsets out there that are very, very useful and make economic sense. For a while, many companies planned only to do highways, which are the simplest road subset of all, except for the speed. A small subset, but everybody agrees it’s valuable.
So the short answer is, solutions will be found to these problems if the roads they occur on are commercially necessary. If they are not necessary, the solutions will be delayed until they can be found, though that’s probably not too long.
As noted above, many people do expect systems to be developed to allow dialogue between robocars and pedestrians or other humans. One useful tool is gaze detection — just as a cheap flash camera causes “red eye” in photos, machines shining infrared light can easily tell if you are looking at them. Eye contact in that direction is detectable. There have been various experiments in sending information in the reverse direction. Some cars have lasers that can paint lines on the road. Others can display text. Some have an LED ribbon surrounding them that shows all the objects and people tracked by the car, so people can understand that they are being perceived. You can also flash a light back directly at people to return their eye contact — I see you and I see that you saw me.
Over time, we’ll develop styles of communication, and they will get standardized. It’s not essential to do that on day one; you just stay on the simpler roads until you know you can handle the others. Private cars will pause and pop out a steering wheel. Services like Uber will send you a human driver in the early days if the car is going somewhere the systems can’t drive, or they might even let you drive part of it. Such incrementalism is the only way it can ever work.
People taking advantage of timidity of robocars
I believe there are solutions to some of the problems laid out. One I have considered is pedestrians and others who take advantage of the naturally conservative and timid nature of a robocar. If people feel they can safely cut off or jaywalk in front of robocars, they will. And the unmanned cars will mostly just accept that, though only about 10% of all cars should be unmanned at any given time. The cars with passengers are another story. Those passengers will be bothered if they are cut off, or forced to brake quickly. They will spill their coffee. And they will fight back.
Citizen based strong traffic code enforcement
Every time you jump in front of such a car, it will of course have saved the video and other sensor data. It’s always doing that. But the passenger might tell the car, “Please save that recent encounter. E-mail it to the police.” The police will do little with it at first, but in time, especially since there are rich people in these cars, they will throw a face recognizer and licence plate recognizer on the system that gets the videos. They will notice that one person keeps jaywalking right in front of the cars and annoying the passengers. Or the guy who keeps cutting off the cars as though they are not there because they always brake. They will have video of him doing it 40 times, or 100. And at that point, they will do something. The worst offender will get identified and get an e-mail from the police: “We have 50 videos of you doing this. Here are 50 tickets.” Then the next, and the next, until nobody wants to get to the top of the list.
This might actually create pressure the other way — a street that belongs only to the cars and excludes the non-car user. A traffic code that is enforced to the letter because every person inconvenienced has an ability to file a complaint trivially. We don’t want that either, but we can control that balance.
I actually look forward to fixing one of the dynamics of jaywalking that doesn’t work. Often, a pedestrian wants to jaywalk while a car is approaching. They want to have the car pass at full speed and then walk behind it — everybody is more comfortable behind a car than in front of one. But the driver gets paranoid and stops, and eventually you cross uncomfortably in front, annoyed both at that and at having stopped somebody you didn’t intend to stop. I suspect robocars will be able to handle this dynamic better, predicting when people might actually be on a path to enter their lane, but not slowing down for stopped pedestrians (adults at least), trusting them to manage their crossing. Children are a different matter.
People being selfish with robocars
Brooks wonders about people doing selfish things with their robocars. Here, he mostly talks about privately owned robocars, since most of what he describes would not or could not happen with a robotaxi. There will be some private cars so we want to think about this.
A very common supposition I see here and elsewhere is the idea of a car that circles rather than parking. Today, operating a car is about $20/hour so that’s already completely irrational, and even when robocar operation drops to $8/hour or less, parking is going to be ridiculously cheap and plentiful so that’s not too likely. There could be competition for spots in very busy areas (schools, arenas etc.) which don’t have much space for pick-up and drop-off, and that’s another area where a bit of traffic code could go a long way. Allow facilities to make a rule: “No car may enter unless its passenger is waiting at the pick-up spot” with authority to ticket and evict any car that does otherwise. Over time, such locations will adjust their pick-up spots to the robocar world and become more like Singapore’s airport, which provides amazing taxi throughput with no cab lines by making it all happen in parallel. Of course, cars would wait outside the zone but robocars can easily double and triple park without blocking the cars they sit in the path of. Robocars waiting for passengers at busy locations will be able to purchase waiting spaces for less than the cost of circling, and then serve their customers or owners. If necessary, market prices can be put on the prized close waiting spaces to solve any problems of scarcity.
So when can it happen?
Robocars will come to different places at different times. They will handle different classes of streets at different times. They will handle different types of interactions with pedestrians and other road users at different times. Where you live will dictate when you can use it and how you can use it. Vendors will push at the most lucrative routes to start, then work down. There will be many problems that are difficult at first, and the result will be the early cars just don’t go on those sorts of streets or into those sorts of situations. Human driving, either by the customer or something like an Uber driver, will fill in the gaps.
Long before then, teams will have encountered or thought of just about any situation you’ve seen, and any situation you’ve likely thought of in a short amount of time. They will have programmed every variation of that situation they can imagine into their simulators to see what their car does. They will use this to grow the network of roads the cars handle every day. Even if at the start, it is not a network of use to you, it won’t be too long before it becomes that, at first for some of your rides, and eventually for most or all.
CES has become the big event for major car makers to show off robocar technology. Most of the north hall, and a giant and valuable parking lot next to it, were devoted to car technology and self-driving demos.
Gallery of CES comments
Earlier I posted about many of the pre-CES announcements and it turns out there were not too many extra events during the show. I went to visit many of the booths and demos and prepared some photo galleries. The first is my gallery on cars. In this gallery, each picture has a caption so you need to page through them to see the actual commentary at the bottom under the photo. Just 3 of many of the photos are in this post.
To the left you see BMW’s concept car, which starts to express the idea of an ultimate non-driving machine. Inside you see that the back seat has a bookshelf in it. Chances are you will just use your eReader, but this expresses an important message — that the car of the future will be more like a living, playing or working space than a transportation space.
The main announcement during the show was from Nissan, which outlined their plans and revealed some concept cars you will see in the gallery. The primary demo they showed involved integration of some technology worked on by Nissan’s Silicon Valley lab leader, Maarten Sierhuis in his prior role at NASA. Nissan is located close to NASA Ames (I myself work at Singularity University on the NASA grounds) and did testing there.
Their demo showed an ability to ask a remote control center to assist a car with a situation it doesn’t understand. When the car sees something it can’t handle, it stops or pulls over, and people in the remote call center can draw a path on their console to tell the car where to go instead. For example, an operator can draw how to get around an obstacle, take a detour, or obey somebody directing traffic. If the same problem happens again, and it is approved, the next car can use the same path if it remains clear.
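The path-reuse idea can be sketched as a simple keyed cache with an expiry, so stale detours aren’t blindly reused. The structure and the expiry rule here are my own assumptions for illustration, not Nissan’s actual design:

```python
import time

class PathCache:
    """Remote-assist path reuse: an operator-approved detour at a given
    location can be offered to later cars until it expires."""

    def __init__(self, ttl_seconds=600):
        self.ttl = ttl_seconds
        self.paths = {}  # location key -> (waypoints, stored_at)

    def store(self, location_key, waypoints, now=None):
        """Record an operator-drawn path for this location."""
        self.paths[location_key] = (waypoints, now if now is not None else time.time())

    def lookup(self, location_key, now=None):
        """Return the cached path if still fresh, else None (and forget it)."""
        entry = self.paths.get(location_key)
        if entry is None:
            return None
        waypoints, stored_at = entry
        if (now if now is not None else time.time()) - stored_at > self.ttl:
            del self.paths[location_key]
            return None
        return waypoints
```

Each new car arriving at the same spot would first consult the cache; only on a miss (or after expiry) does it ring the control center again, keeping operator workload low.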
I have seen this technology a number of places before, including of course the Mars rovers, and we use something like it at Starship Technologies for our delivery robots. This is the first deployment by a major automaker.
Nissan also committed to deployment in early 2020 as they have before — but now it’s closer.
You can also see Nissan’s more unusual concepts, with tiny sensor pods instead of side-view mirrors, and steering wheels that fold up.
Several startups were present. One is AIMotive, from Hungary. They gave me a demo ride in their test car. They are building a complete software suite, primarily using cameras and radar but also able to use LIDAR. They are working to sell it to automotive OEMs and already work with Volvo on DriveMe. The system uses neural networks for perception, but more traditional coding for path planning and other functions. It wasn’t too fond of Las Vegas roads, because the lane markers are not painted there — lanes are divided only with Botts’ dots. But it was still able to drive by finding the edge of the road. They claim they now have 120 engineers working on self-driving systems in Hungary.
You may have seen a lot of press around a dashcam video of a car accident in the Netherlands. It shows a Tesla on Autopilot hitting the brakes around 1.4 seconds before a red car crashes hard into a black SUV that isn’t visible from the viewpoint of the dashcam. Many press have reported that the Tesla predicted that the two cars would hit, and because of the imminent accident, it hit the brakes to protect its occupants. (To be clear, the articles were not saying the Tesla predicted a collision of its own that braking averted; they are saying it predicted the dramatic crash shown in the video.)
The accident is brutal but apparently nobody was hurt.
The press speculation is incorrect. It got some fuel because Elon Musk himself retweeted the report linked to, but Tesla has in fact confirmed the alternate and more probable story, which does not involve any prediction of the future accident. In fact, the red car plays little to no role in what took place.
Tesla’s autopilot uses radar as a key sensor. One great thing about radar is that it tells you how fast every radar target is going, as well as how far away it is. Radar for cars doesn’t tell you very accurately where the target is (roughly it can tell you what lane a target is in.) Radar beams bounce off many things, including the road. That means a radar beam can bounce off the road under a car that is in front of you, and then hit a car in front of it, even if you can’t see the car. Because the radar tells you “I see something in your lane 40m ahead going 20mph and something else 30m ahead going 60mph” you know it’s two different things.
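A toy illustration of why those two returns register as two objects: each return carries both range and speed, and a return at 40m going 20mph can’t be the same object as one at 30m going 60mph. The gating values below are made up, and real radar trackers are far more sophisticated than this sketch:

```python
def distinct_targets(returns, range_gate=5.0, speed_gate=2.0):
    """Group raw radar returns into objects.

    returns: list of (range_m, speed_mph) tuples.
    Two returns belong to the same object only if both their range and
    their speed fall within the gates; otherwise they are separate targets.
    """
    clusters = []
    for rng, speed in sorted(returns):
        for cluster in clusters:
            last_rng, last_speed = cluster[-1]
            if abs(last_rng - rng) < range_gate and abs(last_speed - speed) < speed_gate:
                cluster.append((rng, speed))
                break
        else:
            clusters.append([(rng, speed)])  # no existing cluster matched
    return clusters
```

With the returns from the example, `distinct_targets([(30.0, 60.0), (40.0, 20.0)])` yields two clusters — the car ahead, and the slower car hidden beyond it.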
Thursday night I am heading off to CES, and it’s become the main show it seems for announcing robocar news. There’s already a bunch.
BMW says it will deploy a fleet of 40 cars in late 2017
Bumping up the timetables, BMW has declared it will have a fleet of 40 self-driving 7 Series cars, using BMW’s technology combined with MobilEye and Intel. Intel has recently been making a push to catch up to Nvidia as a chip supplier to automakers for self-driving. It’s not quite clear what the cars will do, but they will be trying lots of different roads. So far BMW has mostly been developing its own tech. More interesting has been their announcement of plans to sell rides via their DriveNow service. This was spoken of a year ago but not much more has been said.
Intel also bought 15% of HERE, the mapping company formerly known as Navteq and later owned by Nokia. Last year, the German automakers banded together to buy HERE from Nokia, and the focus has been on “HD” self-driving maps.
Hyundai, Delphi show off cars
There are demo cars out there from Delphi and a Hyundai Ioniq. Delphi’s car has been working for a while (it’s an Audi SUV) but recently they have also added a bunch of MobilEye sensors to it. Reports about the car are good, and they hope to have it ready by 2019, showing up in 2020 or 2021 cars on dealer lots.
Toyota sticks to concepts
Toyota’s main announcement is the Concept-i meant to show off some UI design ideas. It’s cute but still very much a car, though with all the silly hallmarks of a concept — hidden wheels, strangely opening doors and more.
Quanergy announces manufacturing plans for $250 solid state LIDAR
Quanergy (Note: I am on their advisory board) announced it will begin manufacturing automotive-grade $250 solid-state LIDARs this year. Perhaps this will stop all the constant articles about how LIDAR is super-expensive and means that robocars must be super-expensive too. The first model is only a taste of what’s to come in the next couple of years as well.
New Ford Model has sleeker design
Ford has become the US carmaker to watch (in addition to Tesla) with their announcement last year that they don’t plan to sell their robocars, only use them to offer ride service in fleets. They are the first and only carmaker to say this is their exclusive plan. Just prior to CES, Ford showed off a new test model featuring smaller Velodyne pucks and a more deliberate design.
I have personally never understood the desire to design robocars to “look like regular cars.” I strongly believe that, just like the Prius, riders in the early robocars will want them to look distinctive, so they can show off that they are in a car of the future. Ford’s car, based on the Fusion hybrid, is a nice compromise — clearly a robocar with its sensors, but also one of sleek and deliberate design.
Nvidia keeps its push
Nvidia has a new test car they have called BB8. (Do they have to licence that name?) It looks fairly basic, and they show a demo of it taking somebody for a ride with voice control, handling a lot of environments. It’s notable that at the end, the driver has to take over to get to the destination, so it doesn’t have everything, nor would we expect it. NVIDIA is pushing their multi-GPU board as the answer to how to get a lot of computing power to run neural networks in the car.
Announcements are due tomorrow from Nissan and probably others. I’ll report Friday from the show floor. See you there.
The California DMV got serious in their battle with Uber and revoked the car registrations for Uber’s test vehicles. Uber had declined to register the cars for autonomous testing, using an exemption in that law which I described earlier. The DMV decided to go the next step and pull the more basic licence plate every car has to have if based in California. Uber announced it would take the cars to another state.
While I’m friends with the Uber team, I have not discussed this matter with them, so I can only speculate why it came to this. As noted, Uber was complying with the letter of the law but not the spirit, which the DMV didn’t like. At the same time, the DMV kept pointing out that registering was really not that hard or expensive, so they can’t figure out why Uber stuck to its guns. (Of course, Uber has a long history of doing that when it comes to cities trying to impose old-world taxi regulations on them.)
The DMV is right, it’s not hard to register. But with that registration comes other burdens, in particular filing regular public reports on distance traveled, interventions and any accidents. Companies doing breakthrough R&D don’t usually work under such regimes, and I am speculating this might have been one of Uber’s big issues. We’ve all seen the tremendous amount of press that Google has gotten over accidents which were clearly not the fault of their system. The question is whether the public’s right to know (or the government’s) about risks to public safety supersedes the developer’s desires to keep their research projects proprietary and secret.
It’s clear that we would not want a developer going out on the roads and having above-average numbers of accidents and keeping it hidden. And it may also be true that we can’t trust the developers to judge the cause of fault, because they could have a bias. (Though on most of the teams I have seen, the bias has been a safety paranoid one, not the other way around.)
Certainly when we let teens start to drive, we don’t have them make a public report of any accidents they have. The police and DMV know, and people who get too many tickets or accidents get demerits and lose licences when it is clear they are a danger to the public. Perhaps a reasonable compromise would have been that all developers report all problems to the DMV, but that those results are not made public immediately. They would be revealed eventually, and immediately if it was determined the system was at fault.
Uber must be somewhat jealous of Tesla. Tesla registered several cars under the DMV system, and last I saw, they sent in their reports saying their cars had driven zero miles. That’s because they are making use of the same exemption that Uber wanted to make use of, and saying the cars do not currently qualify as autonomous under the law.
As you can see, the van still has Waymo’s custom 360 degree LIDAR dome on top, and two sensors at the back top corners, plus other forward sensors. The back sensors I would guess to be rear radar — which lets you make lane changes safely. We also see three apparent small LIDARs, one on the front bumper, and the other two on the sides near the windshield pillars with what may be side-view radars.
A bumper LIDAR makes sure you can see what’s right in front of the bumper, an area that the rooftop LIDAR might not see. That’s important for low speed operations and parking, or situations where there might be something surprising right up close. I am reminded of reports from the Navya team that when they deployed their shuttles, teens would try to lie down in front of the shuttle to find out if it would stop for them. Teens will be teens, so you may need a sensor for that.
Side radar is important for cross traffic when trying to do things like making turns at stop signs onto high-speed streets. Google also has longer range LIDAR to help with that.
The minivan is of course the opposite end of the spectrum from the 2-passenger no-steering-wheel 3rd generation prototype. That car tested many ideas for low speed urban taxi operations, and the new vehicle seems aimed at highway travel and group travel (with six or more seats.) One thing people particularly like is that like most minivans these days, it has an automatic sliding door. Somehow that conveys the idea of a robotic taxi even more when it opens the door for you! The step-in-step-out convenience of the minivan does indeed give people a better understanding of the world of frictionless transportation that is coming.
Update: Also announced yesterday was a partnership between Honda and Waymo. It says they will be putting the Waymo self-driving system into Honda cars. While the details in the release are scant, this actually could be a much bigger announcement than the minivans, in which Chrysler’s participation is quite minimal. Waymo has put out the spec for the modified minivan, and Chrysler builds it to their spec, then Waymo installs the tech. A Waymo vehicle sourced from Chrysler. The Honda release suggests something much bigger — a Honda vehicle sourced from, or partnering with Waymo.
There has not been as much press about this Honda announcement but it may be the biggest one.
NPRM for DSRC and V2V
The DoT has finally released their proposed rules requiring all new cars (starting between 2020 and 2022) to come equipped with vehicle-to-vehicle radio units, speaking the DSRC protocol and blabbing their location everywhere they go. Regular readers will know that I think this is a pretty silly idea, even a dangerous one from the standpoint of privacy and security, and that most developers of self-driving cars, rather than saying this is a vital step, describe it as “something we would use if it gets out there, but certainly not essential for our vehicles.”
For a few months, Uber has been testing their self-driving prototypes in Pittsburgh, giving rides to willing customers with a safety driver (or two) in the front seat monitoring the drive and ready to take over.
When Uber came to do this in San Francisco, starting this week, it was a good step to study new territory and new customers, but the real wrinkle was they decided not to get autonomous vehicle test permits from the California DMV. Google/Waymo and most others have such permits. Tesla has such permits but claims it never uses them.
I played an advisory role for Google when the Nevada law was drafted, and this followed into the California law. One of the provisions in both laws is that they specifically exempt cars that are unable to drive without a human supervisor. This provision showed up, not because of the efforts of Google or other self-drive teams, but because the big automakers wanted to make sure that these new self-driving laws did not constrain the only things they were making at the time — advanced ADAS and “autopilot” cars which are effectively extra-fancy cruise controls that combine lanekeeping functions with adaptive cruise control for speed. Many car makers offered products like that going back a decade, and they wanted to make sure that whatever crazy companies like Google wanted in their self-driving laws, it would not pertain to them.
The law says:
“…excluding vehicles equipped with one or more systems that enhance safety or provide driver assistance but are not capable of driving or operating the vehicle without the active physical control or monitoring of a natural person.”
Now Uber (whose team is managed by my friend Anthony Levandowski who played a role in the creation of those state laws while he was at Google) wants to make use of these carve-outs to do their pilot project. As long as their car is tweaked so that it can’t drive without human monitoring, it would seem to fit under that exemption. (I don’t know, but would presume they might do some minor modifications so the system can’t drive without the driver weight sensor activated, or a button held down or similar to prove the driver is monitoring.)
The DMV looks at it another way. Since their testing regulations say you can’t test without human safety drivers monitoring and ready to take over, it was never the intent of the law to effectively exempt everything. You can’t test a car without human monitoring under the regulations, but cars that need monitoring are exempt. The key is calling the system a driver assistance system rather than a driving system.
The DMV is right about the spirit. Uber may be right about the letter. Of course, Uber has a long history of not being all that diligent in complying with the law, and then getting the law to improve, but this time, I think they are within the letter. At least for a while.
Velodyne reports success in research into solid state LIDAR. Velodyne has owned the market for self-driving car LIDAR for years, as they are the only producers of a high-end model. Their models are mechanical and very expensive, so other companies have been pushing the lower cost end of the market, including Quanergy (Where I am an advisor) which has also had solid state LIDAR for some time, and appears closer to production.
These and others verify something that most in the industry have expected for some time — LIDAR is going to get cheap soon. Companies like Tesla, which have avoided LIDAR because you can’t get a decently priced unit in production quantities, have effectively bet that cameras will get good before LIDAR gets cheap. The reality is that most early cars will simply use both cheap LIDAR and improving neural network based vision at the same time.
Google’s car project (known as “Chauffeur”) really kickstarted the entire robocar revolution, and Google has put in more work, for longer, than anybody. The car was also the first project of what became Google “X” (or just “X” today under Alphabet). Inside X, a lab devoted to big audacious “moonshot” projects that affect the physical world as well as the digital, they have promoted the idea that projects should eventually “graduate,” moving from being research to real commercial efforts.
Alphabet has announced that the project will be its own subsidiary company with the new name “Waymo.” The name is not the news, though; what’s important is the move away from being a unit of a mega-company like Google or Alphabet. The freedoms to act that come with being a start-up (though a fairly large and well funded one) are greater than units in large corporations have. Contrast what Uber was able to do, skirting and even violating the law until it got the law changed, with what big corporations need to do.
Google also released information about how in 2015 they took Steve Mahan — the blind man who was also the first non-employee to try out a car for running errands — for the first non-employee (blind or otherwise) fully self-driving ride on public streets, in a vehicle with no steering wheel and no backup safety driver in the vehicle. (This may be an effort to counter the large amount of press about public ride offerings by Nutonomy in Singapore and Uber in Pittsburgh, as well as truck deliveries by Uber/Otto in 2016.)
It took Google/Alphabet 6 years to let somebody ride on public streets in part because it is a big company. It’s an interesting contrast with how Otto, after just a few months of existence, did a demonstration video of a truck driving a Nevada highway with nobody behind the wheel (but Otto employees inside and around it.) That’s the sort of radical step that startups can take.
Waymo has declared their next goal is to “let people use our vehicles to do everyday things like run errands, commute to work, or get safely home after a night on the town.” This is the brass ring, a “Mobility on Demand” service able to pick people up (ie. run unmanned) and even carry a drunk person.
The last point is important. To carry a drunk is a particular challenge. In terms of improving road safety it’s one of the most worthwhile things we could do with self-driving cars, since drunks have so many of the accidents. To carry a drunk, you can’t let the human take control even if they want to. Unlike unmanned operation, you must travel at the speed impatient humans demand, and you must protect the precious cargo. To make things worse, in some legal jurisdictions, they still want to consider the person inside the car the “driver,” which could mean that since the “driver” is impaired, operation is illegal.
Waymo as leader
The importance of this project is hard to overstate. While most car companies had small backburner projects related to self-driving going back many years, and a number of worthwhile research milestones were conquered in the 90s and even earlier, the Google/Waymo project, which sprang from the Darpa Grand Challenge, energized everybody. Tiny projects at car companies all got internal funding because car companies couldn’t tolerate the press and the world thinking and writing that the true future of the car was coming from a non-car company, a search engine company. Now the car companies have divisions with thousands of engineers, and it’s thanks to Google. The Google/Waymo team was accomplishing tasks 5 years ago that most projects are only now just getting to, especially in non-highway driving. They were rejecting avenues (like driving with a human on standby ready to take the wheel on short notice) in 2013 that many projects are still trying to figure out.
Indeed, even in 2010, when I first joined the project and it had just over a dozen people, it had already accomplished tasks that most projects, even the Tesla Autopilot that some people think is in the lead, have yet to accomplish.
Robocars are broadly going to be a huge boon for many people with disabilities, especially disabilities which make it difficult to drive or those that make it hard to get in and out of vehicles. Existing disability regulations and policies were written without robocars in mind, and there are probably some improvements that need to be made.
While I was at Google, I helped slightly with the project to show the first non-employee getting to use the car to run errands. The subject we selected was 95% blind, and of course he can’t drive, and even using transit is a burden. It was obvious to him immediately how life-changing the technology will be.
Some background on disabled transport
There are two rough policy approaches to making things more accessible. One requires that we make everything accessible. The other uses special accommodations for the disabled.
Making everything accessible is broadly preferred by advocates. Wheelchair ramps on all public buildings etc. Doing less than this runs a risk of “separate but equal” which quickly becomes separate and inferior. It’s also hugely expensive, and while that cost is borne by people like building owners and society, there is not unlimited budget, and there are arguments that there may be more efficient ways to spend the resources that are available. There are also lots of very different disabilities, and you need very different methods to deal with impairments in sight, mobility, hearing, cognition and the rest.
Over 50 million people in the USA have some sort of disability, so this is no minor matter.
In transportation, there is a general goal to make public transit accessible. To supplement that, or where that is not done, there are the paratransit rules. Paratransit offers people who meet certain tests an alternate ride (usually in a door to door van) for themselves and a helper for no more than twice the cost of a regular bus ticket. That sounds great until you learn you also have to schedule it a day in advance, and have a one-hour pickup window (which the disabled hate) and it’s hugely expensive, with an average cost per ride of over $30, which cities hate. (In the worst towns, it is $60/ride.) In some cities it approaches half the transit budget. Some cities, looking at that huge cost, let some disabled customers just use taxis for short trips, which provide much better service and cost much less. (Though to avoid over-use they put limitations on this.)
There are Americans with Disabilities Act rules for taxis. Regular sedan taxis are not directly regulated, though there can be no discrimination against disabled customers who are capable of riding in a sedan. Any new van of up to 8 seats has to be accessible, which often means things like wheelchair lifts. In addition, once a taxi fleet has accessible vans, it has to offer “equivalent service” levels. This might mean that if it has 200 sedans, it can’t buy just one van, because there would be much longer wait times to get that van. To get around this, a lot of companies use a loophole and purchase only used vans. The law only covers the use of new vans. Companies like Uber and Lyft don’t own vehicles at all, and so are not governed in the same way by fleet requirements, though they do offer accessible vehicle services in some cities.
When Uber and similar companies move to offering robotaxi service with vehicles they own, these laws would apply to them. Unlike some companies, the used van loophole will also be difficult since most robotaxis will be custom built new.
New Types of Vehicles
Robotaxi service offers the promise of a vehicle on demand, and it offers the potential of a vehicle well fitted to the trip. Mostly I talk about things like the ability to use a small and inexpensive one person vehicle for solo urban trips (which are 80% of trips, so this is a big deal) but it also means sending an SUV when 3 people want to go skiing, or a pickup-truck for a work run, or a van designed for socializing when a group of people want to travel together.
It also offers the ability to create vehicles just for people with certain disabilities. One example I find quite interesting is the Kenguru — a small, single person vehicle which is hollow, and allows a user in a wheelchair to just roll in the back and steer it with hand controls. For wheelchair users with working arms, this is hugely superior to designs that require you to get out of your chair into a car seat, or which involve the time delays of using a wheelchair lift. Especially with nobody to assist. Roll-in, roll-out can match the convenience of the able-bodied. The current Kenguru is steered by the rider, but a self-driving vehicle like this could handle even those in power chairs, and offer a fold-down bench for an able-bodied companion.
Being computerized, these vehicles will also offer accessible user interfaces. Indeed, they may mostly rely on the user’s phone, which will already be customized to their needs.
Custom-designed to meet particular disabilities, these vehicles will both serve the disabled better and frankly be not that useful for others. As such, regimes that require adapting all vehicles to handle both types of customers may have the right spirit, but provide inferior service.
Another key benefit of robotaxi service for the disabled will be the low price. Reduced job prospects drive many with disabilities into poverty. Service that is naturally low in price will be enabling.
Equivalent service or Separate but Superior
Providing “equivalent” service is difficult with traditional taxis, particularly for smaller fleets. Robotaxis, which don’t mind waiting around because no human driver is waiting, make this easier to do. The service level of a robotaxi service is based on the density of currently unused vehicles in your area. Increase fleet size with the same demand, and service level goes up. As long as fleet size is not way overblown, so that vehicles still wear out by the mile rather than by the year, increasing fleet size is not nearly as expensive as it is for regular cars or human driven taxis.
This means you can, fairly readily, offer equivalent or even superior service at a pretty reasonable cost. As long as disabled-designed vehicles are made in decent quantities to keep their costs low, the cost should be close to the cost of regular vehicles. In the public interest, regular vehicle customers might subsidize the slightly higher cost of these lower volume vehicles.
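The claim that service level tracks the density of idle vehicles can be made concrete with a back-of-the-envelope model. This is a toy sketch under stated assumptions (idle vehicles scattered roughly uniformly, straight-line travel at a fixed speed); all numbers are illustrative:

```python
import math

def expected_wait_minutes(idle_vehicles, area_km2, speed_kmh=30):
    """Rough wait estimate: with idle vehicles scattered uniformly at
    random, the mean distance to the nearest one is ~ 1/(2*sqrt(density))."""
    density = idle_vehicles / area_km2           # idle vehicles per km^2
    mean_distance_km = 1 / (2 * math.sqrt(density))
    return mean_distance_km / speed_kmh * 60     # straight-line travel time

# Expected wait for a 100 km^2 service area at various idle-fleet sizes.
for idle in (50, 100, 200):
    print(idle, round(expected_wait_minutes(idle, area_km2=100), 2))
```

The square root is the interesting part: doubling the idle fleet cuts the expected wait by only about 30%, not half, which is why modestly oversized fleets can deliver equivalent or superior service without runaway cost.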
With increased fleets, service levels would generally be superior to the regular fleets, but not always. The law generally allows this, but the disabled community will need to understand a few unequal things that probably will happen:
Slightly more advance notice of rides will often make it possible to provide service at lower cost. Regular vehicles will naturally be present on every block. Accessible vehicles might be present with less density during high-use times, but the ability to reposition lets even slight advance notice do a lot.
For those in groups, it may not be easy to carry a person in a wheelchair along with several non-wheelchair passengers. This might mean the wheelchair passenger goes in their own vehicle (with videoconference link.) This is not as good, but is much more cost effective than requiring every van to have a wheelchair lift.
To increase service levels, it is likely competing companies would cooperate on serving the disabled, and pool fleets. Until the disabled become a profitable market rather than one done to meet goals of public good, companies will prefer to work together. As such if you call for an Uber, you might often get a Lyft or other small fleet car.
Low cost disabled transport may mean that accessible public transit and paratransit slowly fade. Public transit which has its own tracks will continue to be accessible as it offers a speed advantage which may not be met on the roads, but otherwise it may be much cheaper to offer private robotaxis than to make all transit accessible. This would mean a group of people might not be able to ride transit together if it’s not accessible.
Small electric vehicles may be allowed to enter buildings, dropping passengers right at elevator lobbies or other destinations.
The biggest trade-off will be the loss of social group experiences. There certainly will be buses and vans with lifts which allow groups of mixed-ability passengers to travel together, but it is unlikely these would be so common as to offer the same service level as ordinary vans. With advance notice of just 10 minutes, they could probably be available.
I’ve seen many enraged notes from friends on how United Airlines will now charge for putting a bag in the overhead bin. While they aren’t actually doing this, my reaction is not outrage, but actually something quite positive. And yours should be too, even when other airlines follow suit, as they will.
I fly too much on United. I have had their 1K status for several years, this year I logged over 200,000 miles, so I know all the things to dislike about the airline. Why is it good for them to do this?
Strictly speaking, what they are doing is creating a new fare class, which is extra discount, and it includes no bin space and no assigned seat before departure. They claim the new class will cost less than existing fares, and you can still buy the regular economy fare which comes with bin space and a seat assignment. Naturally, we can suspect they will soon raise the price. The other reason people can complain is that when you comparison shop, you tend to look for the cheapest price, and it’s annoying when the products are not similar. (To fix this, shopping sites will need to start having options so you can ask for a comparison of what you really want to buy.)
The reason it’s good is that it means it’s more likely that I will get bin space when I show up late, and more likely I will get a tolerable seat when I book late. Airlines that give those things to all passengers, even the ones who don’t care that much about them, do not serve their more frequent flyers well. If I have to pay for seat assignment and bin space, it’s great, because I truly need them and will now have a better chance of getting them. Of course, as a super-elite, I won’t have to pay directly, I pay by all the other money I have given the airline, which is even better for me.
I need bin space because I am a photographer who carries a lot of cameras and lenses. Even if I check a bag, I still bring along a big carry-on, and everything in it is too fragile to go in the hold. If they tell me they need to gate check it, I will either talk them out of it, or if that ever fails to work I may take another flight. Of course, elite flyers board first, so we don’t have a bin space problem, but sometimes we need to get to a flight late, or have a short connection, and then we can find ourselves with no bin space today.
I won’t take a middle seat because I’m big. My fault or not, it’s the way it is. Sometimes I need to book last minute, or change flights or even go standby. This can mean a flight with nothing but middle seats. If it’s a flight of any duration, this is also just not an option anybody wants. Since in today’s system, everybody gets a seat based on when they bought, the guy with the discount ticket who bought 3 months ago has the aisle, and the elite flyer who paid a lot more for their ticket (possibly even downgraded from business class due to changes) is in the middle seat. Not the way you want to serve your better customers. (Since the airline will assign seats on day of flight, it will only help this moderately.)
But the point is the same — I would rather pay for what I really need than have it come by default and end up not being available to me because a lot of people didn’t actually want it that much. People who don’t need a big carry-on. People who are small and can tolerate a middle seat easily and would rather do that than pay money. An airline that charges for these things is the airline I want. In fact, I would even be OK if they charged a bit more for aisles and less for windows and middles, even on the day of the flight. And yes, elites sometimes solve all these problems with a business class upgrade, but on the big popular routes, that is far from certain. United has gotten too good at filling its planes, and other airlines are also getting good.
The overhead bag problem is partly a result of the charges for checked bags. Those do me no good (though again, elites don’t pay them.) There is no shortage of hold space, so charging for bags is just pure money for the airline, and that’s why they all started doing it. The problem, of course, is it makes people carry bigger carry-on bags, not for the reason that I or other frequent flyers do, but because they want to avoid the bag charge. I would be very pleased if they made sure the overhead charge is larger than the checked bag charge, or if they gave you the choice — either an overhead space or a bag in the hold, but not both.
There is another good reason for this — bigger overhead bags from those doing it simply to avoid charges slow down security lines. Leave the overhead bins for those who truly need them, because they have lots of fragiles, or because they value their time more than money and don’t want the delays of bag checking. (I continue to show up for flights quite late, another reason I don’t want to check a bag and be forced to meet the deadlines for that. But I notice I am almost always alone — everybody else listens to the crazy advice about showing up 60, 90 or even 120 minutes before flights. I’m glad everybody else listens; but in reality this has not caused me to miss flights, so I will continue to not listen. And if you fly enough, that time makes a big difference.)
In the end, all airlines face the problem that on full planes, there is not enough room for everybody to put a big bag in the overhead bins. So the only question is who it will be that gets the space? Today, it’s “who boarded first?” which is tolerable to many (until you have a late connection or other factors make you on time but later than others.) United now wants to make it “those who didn’t give up the space for a discount,” which seems pretty fair to me.
I am curious as to just how they will enforce this. I know some airlines tag cabin baggage, does this actually work? Passengers not using the overhead bin also do not stand in the aisle loading it, though they do often stand there pulling things out of the bag they will be putting under the seat. One way to enforce would be to have the no-bin folks board last, though it causes a problem when people together have different boarding groups. Some airlines, I think, give you tags for overhead bags and under-seat bags.
So while I don’t usually like how United does it, this one’s an exception. (Their new business class redesign also looks good, if long overdue.)
I believe we have the potential to eliminate a major fraction of traffic congestion in the near future,
using technology that exists today which will be cheap in the future. The method has
been outlined by myself and others in the past, but here I offer an alternate way to
explain it which may help crystallize it in people’s minds.
Today many people drive almost all the time guided by their smartphone, using navigation
apps like Google Maps, Apple Maps or Waze (now owned by Google.) Many have come to
drive as though they were a robot under the command of the app, trusting and obeying it
at every turn. Tools like these apps are even causing controversy, because in the hunt
for the quickest trip, they are often finding creative routes that bypass congested
major roads for local streets that used to be lightly used.
Put simply, the answer to traffic congestion might be, “What if you, by law, had to
obey your navigation app at rush hour?” To be more specific, what if the cities and towns that own
the streets handed out reservations for routes on those streets to you via those apps, and
your navigation app directed you
down them? And what if the cities made sure there were never more cars put on a piece of road
than it had capacity to handle? (The city would not literally run Waze; it would hand out route reservations to the app, which would still provide the user interface and remain a private company.)
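The scheme above can be sketched as a small allocator: the city caps how many cars may enter each road segment in each time slot, and a navigation app requests a trip, getting either the requested start or the next one at which every segment along the route has spare capacity. This is a hypothetical illustration, not any real system’s API:

```python
from collections import defaultdict

class RoadReservations:
    """Toy allocator for the scheme above: the city never grants more
    entries to a road segment per time slot than it can handle."""

    def __init__(self, capacity_per_slot):
        self.capacity = capacity_per_slot    # segment name -> max cars per slot
        self.booked = defaultdict(int)       # (segment, slot) -> cars granted

    def request(self, route, start_slot):
        """Reserve one slot per segment, pushing the whole trip later
        until every segment along the route has spare capacity."""
        slot = start_slot
        while True:
            plan = [(seg, slot + i) for i, seg in enumerate(route)]
            if all(self.booked[key] < self.capacity[key[0]] for key in plan):
                for key in plan:
                    self.booked[key] += 1
                return [s for _, s in plan]  # granted entry slot per segment
            slot += 1  # road full at this time: offer a later departure

city = RoadReservations({"main_st": 2, "bridge": 1})
print(city.request(["main_st", "bridge"], start_slot=0))  # -> [0, 1]
print(city.request(["main_st", "bridge"], start_slot=0))  # bridge full -> [1, 2]
```

Note that in this model a request is never refused outright, only delayed, which is the metering idea in miniature: everyone gets on the road, just not all at once.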
The value is huge. Estimates suggest congestion costs around 160 billion dollars per year in the USA, including 3 billion gallons of fuel and 42 hours of time for
every driver. Roughly quadruple that for the world.
Road metering actually works
This approach would exploit one principle in road management that’s been most effective
in reducing congestion, namely road metering. The majority of traffic congestion is caused,
no surprise, by excess traffic — more cars trying to use a stretch of road than it has the capacity
to handle. There are other things that cause congestion — accidents, gridlock and
irrational driver behaviour, but even these only cause traffic jams when the road is near
or over capacity.
Today, in many cities, highway metering is keeping the highways flowing far better than they
used to. When highways stall, the metering lights stop cars from entering the freeway as
fast as they want. You get frustrated waiting at the metering light but the reward is you
eventually get on a freeway that’s not as badly overloaded.
Another type of metering is called congestion pricing. Pioneered in Singapore, these
systems place a toll on driving in the most congested areas, typically the downtown cores
at rush hour. They are also used in London, Milan, Stockholm and some smaller towns, but have never caught on in many
other areas for political reasons. Congestion charging can easily be viewed as allocating
the roads to the rich when they were paid for by everybody’s taxes.
A third successful metering system is the High-occupancy toll lane. HOT lanes take
carpool lanes that are being underutilized, and let drivers pay a market-based price to use them
solo. The price is set to bring in just enough solo drivers to avoid wasting the spare
capacity of the lane without overloading it. Taking those solo drivers out of the other
lanes improves their flow as well. While not every city will admit it, carpool lanes themselves
have not been a success. 90% of the carpools in them are families or others who would have
carpooled anyway. The 10% “induced” carpools are great, but if the carpool lane only runs at
50% capacity, it ends up causing more congestion than it saves. HOT is a metering system
that fixes that problem.
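The HOT-lane principle of setting the price just high enough to absorb spare capacity without overloading the lane amounts to a feedback loop on measured flow. A minimal proportional-control sketch, with purely illustrative numbers (real systems use more sophisticated pricing):

```python
def update_toll(toll, measured_flow, target_flow,
                min_toll=0.5, max_toll=15.0, gain=0.01):
    """Nudge the toll up when the lane is over its target flow, down when
    spare capacity is being wasted; clamp to a sane price range."""
    toll += gain * (measured_flow - target_flow)
    return max(min_toll, min(max_toll, toll))

toll = 2.0
for flow in (1700, 1650, 1500, 1400):   # measured vehicles/hour in the lane
    toll = update_toll(toll, flow, target_flow=1500)
    print(round(toll, 2))
```

Raising the price sheds solo drivers back to the general lanes; lowering it pulls them in, so the lane hovers near its target throughput instead of sitting half empty or jamming.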
There have been few postings this month since I took the time to enjoy a holiday in New Zealand around speaking at the SingularityU New Zealand summit in Christchurch. The night before the summit, we enjoyed a 7.8 earthquake not so far from Christchurch, whose downtown was over 2/3 demolished after quakes in 2010 and 2011. On the 11th floor of the hotel, it was a disturbing nailbiter of swaying back and forth for over 2 minutes — but of course swaying is what the building is supposed to do; that means it’s working. The shocks were rolling, not violent, and in fact we got more violent jolts from aftershocks a week later when we went to Picton.
While driving around that region, we encountered this classic earthquake scene on the road:
There were many like this, and in fact the main highway of the South Island was destroyed long-term not too far away, cutting off several towns. A scene like this makes you wonder just what a robocar would do in such situations. I already answered this question in a blog post on how to handle a tsunami. Fortunately there was only a mild tsunami for this quake. A tsunami will result in a warning in the rich world, and the car will know the elevation map of the roads and know how to get to high ground. In some places, like Japan, there is also an advanced earthquake warning system that tells you quakes are coming well before they hit you, since electronic signals travel much faster than seismic waves. With such a system, robocars should receive a warning and come to a stop unless they need to evacuate a tsunami zone. Without such a warning, we still could imagine the road cracking and collapsing in front of you, as might have happened on this road. Of course the cones and signs that warned me days later would not be present.
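The size of that warning window is easy to estimate: the alert travels essentially instantly, while the damaging shaking propagates at only a few kilometers per second. A rough calculation, where the S-wave speed and the five-second detection delay are illustrative assumptions:

```python
def warning_seconds(distance_km, s_wave_kms=3.5, processing_delay_s=5.0):
    """Rough early-warning window: the alert is effectively instant
    (electromagnetic), while damaging S-waves travel a few km/s.
    The detection/processing delay here is an assumed value."""
    return distance_km / s_wave_kms - processing_delay_s

# A car 100 km from the epicenter gets roughly 24 seconds to pull over.
print(round(warning_seconds(100), 1))  # prints 23.6
```

Close to the epicenter the window shrinks to nothing, which is why the fallback behavior (detecting damage ahead, as trained in simulation) still matters.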
The answer again lies in the fact that pictures like mine will be used to create situations like this in simulators, and all car developers will be able to test their systems with simulated quake damage to make sure they do the right thing. I’ve spoken since 2010 on the value of a shared simulator environment and I think if government agencies like NHTSA want to really help development, providing funding and tools for such an environment would be a good step. NHTSA’s proposal that all developers share their logs of all incidents would clearly make such a simulator better, but there is pushback because of the proprietary value of those logs. When it comes to strange situations like earthquakes, I doubt there would be much pushback on having an open and shared simulator environment.
New Zealand’s government is taking a very welcoming approach to robocars. They are not regulating for a while, and have invited developers to come and test. They have even said it’s OK to test unmanned vehicles under some fairly simple rules. NZ does not have any auto industry, and of course it’s quite remote, but we’ll see if they can attract developers to come test. Their roads feature something you don’t see much in the USA — tons and tons of one-lane bridges and other one-lane stretches of highway. Turns out that robocars, with a little bit of communication, can make very superhumanly efficient use of one-lane two-way roads, and it might be worth exploring.
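To see why communicating cars beat humans on a one-lane bridge, consider a simple batching scheduler: cars in one direction cross nose-to-tail, then the bridge clears, then the other side goes. The batch size, crossing times, and greedy alternation policy below are all invented placeholders; real coordination would also weigh how long each side has been waiting:

```python
from collections import deque

def schedule_bridge(eastbound, westbound, crossing_s=8, headway_s=2, batch=5):
    """Grant crossing start-times on a one-lane two-way bridge.

    Cars in the same direction follow with a short headway; a direction
    switch must wait for the last car to fully clear the bridge.
    """
    queues = {"E": deque(eastbound), "W": deque(westbound)}
    t, schedule = 0.0, []
    direction = "E" if eastbound else "W"
    while queues["E"] or queues["W"]:
        q = queues[direction]
        if not q:
            direction = "W" if direction == "E" else "E"
            continue
        # Release up to `batch` cars nose-to-tail in this direction.
        for _ in range(min(batch, len(q))):
            schedule.append((q.popleft(), direction, t))
            t += headway_s
        # The last car must clear before the other side may start.
        t += crossing_s - headway_s
        direction = "W" if direction == "E" else "E"
    return schedule
```

Humans on a one-lane bridge negotiate by sight, one or two cars at a time; even this toy scheduler wastes only one bridge-clearing delay per direction switch, which is the superhuman efficiency referred to above.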
The automotive industry has had a long history of valuing the tinkerer. All the big car companies had their beginnings with small tinkerers and inventors. Some even died in the very machines they were inventing. These beginnings have allowed people to do all sorts of playing around in their garages with new car ideas, without government oversight, in spite of the risk to themselves and even others on the road. If a mechanic wants to charge you for working on your car, they must be licenced, but you are free to work on it yourself with no licence, and even build experimental cars. You just can’t sell them. And even those rights have been eroded.
Clearly far fewer people will have the inclination to build an autopilot using the comma.ai tools by themselves. But it won’t be that hard to do, and they can make it easier with time, too. One could even imagine a car which already had the necessary hardware, so that you only needed to download software to make it happen.
In recent times, there has been a strong effort to prevent people from tinkering with their cars, even in software. One common area of controversy has been around engine tuning. Engine tuning is regulated by the EPA to keep emissions low. Car vendors have to show they have done this — and they can’t program their car to give good emissions only on the test while getting better performance off the test, as VW did. But owners have been known to want to make such modifications. Now we will see modifications that affect not just emissions but safety. Car companies don’t want to be responsible if you modify the code in your car and there is an accident involving both their code and yours. As such, they will try to secure their car systems so you can’t change them, and the government may help them or even insist on it. When you add computer security risks to the mix — who can certify the modified car can’t be taken over and used as a weapon? — it will get even more fun.
I will also point out that I suspect that comma’s approach would not know what to do about the collapsed road, because it would never have been trained in that situation. It might, however, simply sound an alert and kick out, not being able to find the lane any more.
Regular readers will have seen my strong critique of the NHTSA rules. The other major news during my break was the pushback from major players in the public comment on the regulations. In some ways the regulations didn’t do enough to give vendors the certainty they need to make their plans. At the same time, they were criticised for not giving enough flexibility to vendors. In addition, as expected, vendors resist giving up their proprietary data in the proposed forced sharing. I predict continued ambivalence on the regulations. Big players actually like having lots of regulations, because big players know how to deal with that and small players don’t.
There are many elements of this letter which would also apply to Tesla and other automakers which have built supervised autopilot functions.
Of particular interest is the paragraph which says: “it is insufficient to assert, as you do, that the product does not remove any of the driver’s responsibilities” and “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.” That must be very scary for Tesla.
I noted before that the new NHTSA regulations appear to forbid the use of “black box” neural network approaches to the car’s path planning and decision making. I wondered if this made illegal the approach being taken by Comma, NVIDIA and many other labs and players. This letter suggests that it may.
We now have a taste of the new regulatory regime, and it seems that had it existed before, systems like Tesla’s autopilot, Mercedes Traffic Jam Assist, and Cruise’s original aftermarket autopilot would never have been able to get off the ground.
George Hotz of comma declares “Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn’t worth it. The comma one is cancelled. comma.ai will be exploring other products and markets. Hello from Shenzhen, China.”
To be clear, comma is a tiny company taking a radical approach, so it is not a given that what NHTSA has applied to them would have been or will be unanswerable by the big guys. Because Tesla’s autopilot is not a pure machine learning system, they can answer many of the questions in the NHTSA letter that comma can’t. They can do much more extensive testing than a tiny startup can. But even so, a letter like this sends a huge chill through the industry.
It should also be noted that in Comma’s photos the box replaced the rear-view mirror, and NHTSA had reason to ask about that.
George’s declaration that he’s in Shenzhen gives us the first sign of the new regulatory regime pushing innovation away from the United States and California. I will presume the regulators will say, “We only want to scare away dangerous innovation,” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. All of it is trying to take the car — the second most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.
I sometimes ask, “Why do we let 16 year olds drive?” They are clearly a major danger to themselves and others. Driver testing is grossly inadequate. They are not adults so they don’t have the legal rights of adults. We let them drive because they are going to start out dangerous and then get better. It is the only practical way for them to get better, and we all went through it. Today’s early companies are teenagers. They are going to take risks. But this is the fastest and only practical way to let them get better and save millions.
“…some drivers will use your product in a manner that exceeds its intended purpose”
This sentence, though in the cover letter and not the actual legal demand, looks at the question asked so much after the Tesla fatal crash. The question which caused Consumer Reports to ask Tesla to turn off the feature. The question which caused MobilEye, they say, to sever their relationship with Tesla.
The paradox of the autopilot is this: The better it gets, the more likely it is to make drivers over-depend on it. The more likely they will get complacent and look away from the road. And thus, the more likely you will see a horrible crash like the Tesla fatality. How do you deal with a system which adds more danger the better you make it? Customers don’t want annoying countermeasures. This may be another reason that “Level 2,” as I wrote yesterday, is not really a meaningful thing.
NHTSA has put a line in the sand. It is no longer going to be enough to say that drivers are told to still pay attention.
Comma is not the only company trying to build a system with pure neural networks doing the actual steering decisions (known as “path planning”). NVIDIA’s teams have been actively working on this, as have several others. They plan to make comments to NHTSA on these elements of the regulations, arguing that they should not forbid this approach until we know it to be dangerous.
It’s no secret that I’ve been a critic of the NHTSA “levels” as a taxonomy for types of robocars since the start. Recent changes in their use call for some new analysis, which concludes that only one of the levels is actually interesting, and even it tells only part of the story. As such, they have become even less useful as a taxonomy. Levels 2 and 3 are unsafe, and Level 5 is remote future technology. Level 4 is the only interesting one, and thus there is no real taxonomy.
Unfortunately, they have just been encoded into law, which is very much the wrong direction.
NHTSA and SAE both created similar sets of levels, so similar that NHTSA declared it would just defer to the SAE’s system. Nothing wrong with that, but the core flaws are not addressed by this. More usefully, their regulations declared that the levels were just part of the story, and put extra emphasis on what they called the “operating domain” — namely what locations, road types and road conditions the vehicle operates in.
The levels focus entirely on the question of how much human supervision a vehicle needs. This is an important issue, but the levels treated it like the only issue, and it may not even be the most important. My other main criticism was that the levels, by being numbered, imply a progression for the technology. That progression is far from certain and in fact almost certainly wrong. SAE updated its levels to say that they are not intended to imply a progression, but as long as they are numbers this is how people read them.
Today I will go further. All but level 4 are uninteresting. Some may never exist, or exist only temporarily. They will be at best footnotes of history, not core elements of a taxonomy.
Level 4 is what I would call a vehicle capable of “unmanned” operation — driving with nobody inside. This enables most of the interesting applications of robocars.
Here’s why the other levels are less interesting:
Levels 0 and 1 — Manual or ADAS-improved
Levels 0 and 1 refer to existing technology. We don’t really need new terms for our old cars.
Level 1 is perhaps best described as a more advanced version of level 0, and that transition has already taken place.
Level 2 — Supervised Autopilot
Supervised autopilots are real. This is what Tesla sells, and many others have similar offerings. They work in one of two ways. The first is the intended way, with full-time supervision. This is little more than a more advanced cruise control, and may not even be as relaxing.
The second way is what we’ve seen happen with Tesla — a car that needs supervision, but is so good at driving that supervisors get complacent and stop supervising. They want a full self-driving car but don’t have it, so they pretend they do. Many are now saying that this makes the idea of supervised autopilot too dangerous to deploy. The better you make it, the more likely it can lull people into bad activity.
Level 3 — Standby driver
This level is really a variation of Level 4, but the vehicle needs the ability to call upon a driver who is not paying attention and get them to take control with 10 to 60 seconds of advance warning. Many people don’t think this can be done safely. When Google experimented with it in 2013, they concluded it was not safe, and decided to take the steering wheel entirely out of their experimental vehicles.
Even if Level 3 is a real thing, it will be short lived as people seek an unmanned capable vehicle. And Level 4 vehicles will offer controls for special use, even if they don’t permit a transition while moving.
Level 5 — Drive absolutely everywhere
SAE, unlike NHTSA’s first proposal, did want to make it clear that an unmanned capable (Level 4) vehicle would only operate in certain places or situations. So they added level 5 to make it clear that level 4 was limited in domain. That’s good, but the reality is that a vehicle that can truly drive everywhere is not on anybody’s plan. It probably requires AI that matches human beings.
Consider this situation, in which I’ve been driven. In the African bush on a game safari, we spot a leopard crossing the road. So the guide drives the car off-road (on private land), running over young trees, over rocks, down into wet and dry streambeds to follow the leopard. Great fun, but this is an ability there is unlikely ever to be market demand to develop. Likewise, there are lots of small off-road tracks that are used by only one person. There is no economic incentive for a company to solve this problem any time soon.
Someday we might see cars that can do these things under the high-level control of a human, but they are not going to do them on their own, unmanned. As such, SAE level 5 is academic, and serves only to remind us that level 4 does not mean everywhere.
Levels vs. Cul-de-sacs
The levels are not a progression. I will contend in fact that even to the extent that levels 2, 3/4 and 5 exist, they are quite probably entirely different technologies.
Level 2 is being done with ADAS technologies. They are designed to have a driver in the loop. Their designs in many cases do not have a path to the reliability level needed for unmanned operation, which is orders of magnitude higher. It is not just a difference of degree, it is one of kind.
Level 3 is related to level 4, in particular because a level 3 car is expected to be able to handle non-response from its driver, and safely stop or pull off the road. It can be viewed as a sucky version of a level 4 system. (It’s also not that different — see below.)
Level 5, as indicated, probably requires technologies that are more like artificial general intelligence than they are like a driving system.
As such the levels are not levels. There is no path between any of the levels and the one above it, except in the case of 3/4.
This leaves Level 4 as the only one worth working on long term, and the only one worth talking about. The others are just there to create a contrast. NHTSA realizes this and gave the name ODD (Operational Design Domain) to the real area of research, namely what roads and situations the vehicles can handle.
The distinction between 4 and 3 is also not as big as you might expect. Google removed the steering wheel from their prototype to set a high bar for themselves, but they actually left one in for use in testing and development. In reality, even the future’s unmanned cars will feature some way in which a human can control them, for use during breakdowns, special situations, and moving the cars outside of their service areas (operational domains.) Even if the transition from autodrive to human drive is unsafe at speed, it will still be safe if the car pulls over and activates the controls for a licenced driver.
As such, the only distinction of a “level 3” car is it hopes to be able to do that transition while moving, on short but not urgent notice. A pretty minor distinction to be a core element of a taxonomy.
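That minor distinction can be sketched as a tiny state machine: a "level 3" car hands over while moving if the standby driver responds within the warning window, and otherwise falls back to the level 4 behavior of pulling over before releasing the controls. The 30-second default window and the state names below are invented placeholders, not figures from any regulation:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    HANDOFF_REQUESTED = auto()
    PULLING_OVER = auto()
    STOPPED = auto()
    MANUAL = auto()

class HandoffController:
    """Toy sketch of the level 3 vs. level 4 handoff distinction."""

    def __init__(self, warning_window_s=30):
        self.mode = Mode.AUTONOMOUS
        self.window = warning_window_s
        self.elapsed = 0.0

    def request_handoff(self):
        """The car hits the edge of its operating domain and alerts the driver."""
        if self.mode is Mode.AUTONOMOUS:
            self.mode = Mode.HANDOFF_REQUESTED
            self.elapsed = 0.0

    def driver_takes_wheel(self):
        # Taking over while moving is the "level 3" transition; taking over
        # after the car has stopped is how a "level 4" car yields control.
        if self.mode in (Mode.HANDOFF_REQUESTED, Mode.STOPPED):
            self.mode = Mode.MANUAL

    def tick(self, dt_s):
        """Advance time; a non-responsive driver triggers the safe fallback."""
        if self.mode is Mode.HANDOFF_REQUESTED:
            self.elapsed += dt_s
            if self.elapsed >= self.window:
                self.mode = Mode.PULLING_OVER
        elif self.mode is Mode.PULLING_OVER:
            self.mode = Mode.STOPPED  # pretend the pull-over completes
```

Seen this way, "level 3" is just one extra transition arrow out of an otherwise level 4 design, which is the point: it is too small a difference to anchor a taxonomy.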
If Level 4 is the only interesting one, my recommendation is to drop the levels from our taxonomy, and focus the taxonomy instead on the classes of roads and conditions the vehicle can handle. It can be a given that outside of those operating domains, other forms of operation might be used, but that does not bear much on the actual problem.
I say we just identify a vehicle capable of unmanned or unsupervised operation as a self-driving car or robocar, and then get to work on the real taxonomy of problems.