The testing regulations did not bother too many people, though I am upset that they effectively forbid the testing of delivery robots like the ones we are making at Starship, because the test vehicles must have a human safety driver with a physical steering system. Requiring that driver makes sense for passenger cars but is impossible for a robot the size of a breadbox.
Needing a driver
The draft operating rules effectively forbid Google’s current plan, making it illegal to operate a vehicle without a licenced and specially certified driver on board and ready to take control. Google’s research led them to feel that having a transition between human driver and software is dangerous, and that the right choice is a vehicle with no controls for humans. Most car companies, on the other hand, are attempting to build “co-pilot” or “autopilot” systems in which the human still plays a fundamental role.
The state proposes banning Google style vehicles for now, and drafting regulations on them in the future. Unfortunately, once something is banned, it is remarkably difficult to un-ban it. That’s because nobody wants to be the regulator or politician who un-bans something that later causes harm that can be blamed on them. And these vehicles will cause harm, just less harm than the people currently driving are doing.
The law forbids unmanned operation, and requires the driver/operator to be “monitoring the safe operation of the vehicle at all times and be capable of taking over immediate control.” This sounds like it certainly forbids sleeping, and might even forbid engrossing activities like reading, working or watching movies.
Drivers must not just have a licence, they must have a certificate showing they are trained in operation of a robocar. On the surface, that sounds reasonable, especially since the hand-off has dangers which training could reduce. But in practice, it could mean a number of unintended things:
Rental or even borrowing of such vehicles becomes impossible without a lot of preparation and some paperwork by the person trying it out.
Out of state renters may face a particular problem as they can’t have California licences. (Interstate law may, bizarrely, let them get by without the certificate while Californians would be subject to this rule.)
Car sharing or delivered car services (like my “whistlecar” concept or Mercedes Car2Come) become difficult unless sharers get the certificate.
The operator is responsible for all traffic violations, even though several companies have said they will take responsibility. They can take financial responsibility, but can’t help you with points on your licence or criminal liability, rare as that is. People will be reluctant to assume that responsibility for things that are the fault of the software in the car they use, as they have little ability to judge that software.
With no robotaxis or unmanned operation, a large fraction of the public benefits of robocars are blocked. All that’s left is the safety benefit for car owners. This is not a minor thing, but it’s a small part of the whole game (and active safety systems can attain a fair chunk of it in non-robocars.)
The state says it will write regulations for proper robocars, able to run unmanned. But it doesn’t say when those will arrive, and unfortunately, any promises about that will be dubious and non-binding. The state was very late with these regulations — which is actually perfectly understandable, since not even vendors know the final form of the technology, and it may well be late again. Unfortunately, there are political incentives for delay, perhaps indeterminate delay.
This means vendors will be uncertain. They may know that someday they can operate in California, but they can’t plan for it. With other states and countries around the world chomping at the bit to get vendors to move their operations, it will be difficult for companies to choose California, even though today most of them have.
People already in California will continue their R&D in California, because it’s expensive to move such things, and Silicon Valley retains its attractions as the high-tech capital of the world. But they will start making plans for first operation outside California, in places that have an assured timetable.
It will be less likely that somebody would move operations to California because of the uncertainty. Why start a project here — which in spite of its advantages is also the most expensive place to operate — without knowing when you can deploy here? And people want to deploy close to home if they have the option.
It might be that the car companies, whose prime focus is on co-pilot or autopilot systems today, may not be bothered by this uncertainty. In fact, it’s good for their simpler early goals because it slows the competition down. But most of them have also announced plans for real self-driving robocars where you can act just like a passenger. Their teams all want to build them. They might enjoy a breather, but in the end, they don’t want these regulations either.
And yes, it means that delivery robots won’t be able to go on the roads, and must stick to the sidewalks. That’s the primary plan at Starship today, but not the forever plan.
California should, after receiving comment, alter these regulations. They should allow unmanned vehicles which meet appropriate functional safety goals to operate, and they should have a real calendar date when this is going to happen. If they don’t, they won’t be helping to protect Californians. They will take California from being the envy of the world as the place that has attracted robocar development from all around the planet to just another contender. And that won’t just cost jobs, it will delay the deployment in California of a technology that will save the lives of Californians.
I don’t want to pretend that deploying full robocars is without risk. Quite the reverse, people will be hurt. But people are already being hurt, and the strategy of taking no risk is the wrong one.
Another road trip has meant fewer posts — this trip included being in Paris on the night of Nov 13 but fortunately taking a train out a couple of hours before the shooting began, and I am now in South Africa on the way to Budapest — but a few recent items merit some comment.
Almost every newspaper in the world reported the story of how a motorcycle cop pulled over one of Google’s 3rd generation test cars (the 2 seaters), and many incorrectly reported that the car was given a ticket for going too slow, or that there was “no driver to ticket.” Today, Google’s cars always have a safety driver (who has a steering wheel) who is responsible for the car in case it does something unexpected or enters an especially risky situation. So had there been a ticket to write, there would have been a driver in the car, just as there is if you get a ticket for speeding while using your cruise control.
Google’s prototype is what is known as a “Neighbourhood Electric Vehicle” or NEV. There are special NEV rules in place that make such vehicles much less subject to the complex web of regulations required for a general purpose vehicle. They need to be electric, must not travel on roads with a speed limit over 35mph and they must themselves not be capable of going more than 25mph. The Google car was doing 24mph when the officer asked the safety driver to pull over, so there was nothing to ticket. Of course, that does not mean an officer can’t get confused and need an explanation of the law — even they don’t know all of them.
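The NEV criteria are concrete enough to sanity-check in a few lines. A minimal Python sketch (the function name and structure are my own illustration; the threshold values come from the rules as described above):

```python
def nev_operation_legal(is_electric, road_limit_mph, top_speed_mph, current_speed_mph):
    """Check the NEV criteria described above: electric drive, a top
    speed of no more than 25 mph, and roads posted at 35 mph or less."""
    return (is_electric
            and top_speed_mph <= 25
            and road_limit_mph <= 35
            and current_speed_mph <= 25)

# The Google prototype doing 24 mph: nothing to ticket.
print(nev_operation_legal(True, 35, 25, 24))   # True
# The same vehicle on a 45 mph road would be outside the NEV rules.
print(nev_operation_legal(True, 45, 25, 24))   # False
```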
The NEV regulations are great for testing, though there is indeed an issue around how the earliest robocars will probably want to go a little slow, because safety really is the top priority on all teams I know. As such, they may go as slowly as the law allows, and they may indeed annoy other drivers when doing that. This should be a temporary phase but could create problems while cars learn to go faster. I have suggested in the past that cars wanting to go slow might actually notice anybody coming up behind them and pull off the road, pausing briefly in driveways or other open spots, so that the drivers coming up behind never have to even brake. A well behaved unmanned vehicle might go slowly but not present a burden to hurried humans.
Ford may also avoid standby supervision
Recent reports suggest that Ford, like Google, may have concluded that there is not an evolutionary path from ADAS to full self driving, in particular, the so-called “level 3” which I call standby supervision, where a human driver can be called on with about 10 seconds notice (but not anything shorter) to resolve live driving problems or to take the wheel when the car enters a zone it can’t drive. This transition may just be too dangerous, Google has said, along with many others.
Cheaper LIDAR etc.
Noted without much comment — Quanergy, on whose advisory board I sit, has announced progress on its plans for an inexpensive solid state LIDAR, and plans to ship the first on schedule, in 2016. This sub-$1000 LIDAR keeps us on a path to even cheaper LIDAR, which should quiet all the people who keep saying they want to build robocars without LIDAR — I am looking at you, Elon Musk. Nobody will make their first full robocar less safe just to save a few hundred dollars.
Also related to Starship (another company I advise) is the arrival of not one but two somewhat similar startups building small delivery robots. “Dispatch Network” involves U.S. roboticists who participated in a China-based hardware accelerator and have a basic prototype, larger than the Starship robot. “Sidewalk,” a Lithuanian company, also has a prototype model and a deal with DHL to do joint research on last mile robots.
I’m pleased to announce today the unveiling of a new self-driving vehicle company with which I am involved,
not building self-driving cars, but instead small delivery robots which are going to change the face of
retailing and last-mile delivery and logistics.
Starship Technologies comes out of Europe, created by two of the founders of Skype, Janus Friis and Ahti Heinla, who
is CEO. The mission is similar to the vision I laid out in 2007 for the Deliverbot —
the self-driving box that can get you anything in 30 minutes for under a dollar.
Starship is still in early stages, but will be conducting a pilot project next year in the UK, and another in the
USA shortly thereafter. Customers will be able to place online orders and have a robot come to their home immediately
or on their schedule.
Why is this possible well before full unmanned self-driving cars can go into public use? There are all sorts of reasons:
The boxes are not in a super hurry:
They will go slowly and cautiously
They don’t mind detours, and can take the safest rather than shortest route to you
It’s not a big deal if they have to pause if they encounter children or anything confusing or risky, or need to wait for a remote operator to solve a problem
They will travel on the sidewalks, rather than the roads (already legal in many places but work is needed in others)
They will be slow and light, so that if something goes seriously wrong and they hit you, they won’t injure you
They won’t hit you though, because they can come to a full stop in under a foot
You don’t need crumple zones, airbags or other passenger safety features for cargo, making them simple and inexpensive
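That stopping claim is easy to sanity-check with basic kinematics (d = v²/2a). A sketch, assuming a walking-pace 6 km/h robot and a modest half-g of braking; both numbers are my own illustrative assumptions, not Starship specifications:

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh, decel_g):
    """Distance to stop at constant deceleration: d = v^2 / (2a)."""
    v = speed_kmh / 3.6               # km/h -> m/s
    return v ** 2 / (2 * decel_g * G)

d = stopping_distance_m(6, 0.5)       # walking pace, half-g of braking
print(round(d, 2), "m")               # about 0.28 m, under a foot (0.3048 m)
```

Even at these conservative assumptions the robot stops in well under a foot, and a lighter, slower robot does even better.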
How big is the last mile? It’s huge. It’s not just what today’s delivery companies do. Most deliveries are
actually made by customers who run out to stores to get stuff. The Starship robot will bring you things in less
time than a round-trip shopping trip would take, for less money, and with vastly less energy, pollution, traffic
congestion and parking. It’s a win for the store, the customer and for society.
It’s a really big win for those with disabilities or difficulty moving. The elderly are going to be able to live
in their own homes with greater independence, even if shopping has become such a chore that they were contemplating a move.
The robots will eventually create an “internet of parcels” (I guess the
term “internet of things” is already in use) where physical goods can move
around cities with an ease surpassed only by data. Not only will you be
able to buy anything, you’ll be able to rent things on short notice too,
or borrow them from your neighbours. The sharing economy can be enabled
and the meaning of ownership may change.
The convenience of robot delivery will surprise people. A common question I get asked is
“What does the robot do if you’re not home when it delivers?” and my answer is, “why would
you want the robot to deliver when you’re not home?” Regular delivery runs on a driver’s
schedule; robotic delivery will run on yours. Robots don’t mind waiting either, so you
could ask a shop to put 5 pairs of shoes into a robot, and at home you could try them on,
put 4 pairs back in the robot and keep the pair you like.
I also expect interesting changes in prepared food. It will be possible to run a
“restaurant” inexpensively in a private kitchen and get ingredients and deliver
dishes quickly and cheaply with no wastage. A family might even order different
dishes from different locations to create a meal.
Nothing is 100% good for everybody — there will be disruption in the retail industry, and
retailers who exist primarily so you can go get things will have trouble competing
if they don’t embrace this model, but other retailers who are suffering from
competition from the online stores may find they can now dominate with fast local delivery.
Of course, there might be competition in the air. Our students at Singularity University
were among the first to work on drone delivery with the Matternet project
which is beginning a trial delivering for the post office on the steep slopes of Switzerland.
Both methods face legal challenges, and both have their advantages. Drones will be faster and
can cover unusual terrain, while ground robots can carry more with less energy and have an easier
time landing. :-) I suspect people will tolerate small robots on their sidewalks more than
drones with heavy packages over their heads, but we’ll see.
Note that I’m a special advisor to Starship on both technology and business, but this post is
written with my own voice, and doesn’t speak on behalf of the company.
In the buzz over the Tesla autopilot update, a lot of commentary has appeared comparing this Autopilot with Google’s car effort and other efforts and what I would call a “real” robocar — one that can operate unmanned or with a passenger paying no attention to the road. We’ve seen claims that “Tesla has beaten Google to the punch” and other similar errors. While the Tesla release is a worthwhile step forward, the two should not be confused as all that similar.
Tesla’s autopilot isn’t even particularly new. Several car makers have had similar products in their labs for several years, and some have released them to the public, at first in a “traffic jam assist” mode, but reportedly in full highway cruise mode outside the USA. The first companies to announce such products were Cadillac with the “Super Cruise” and VW with its “Temporary Autopilot,” but both were delayed until much later.
Remarkably, Honda showed off a car ten years ago doing this sort of basic autopilot (without lane change) and sold it only in the UK. They decided to stop doing that, however.
That this was actually promoted as an active product ten years ago will give you some clue it’s very different from the bigger efforts.
These cruise products require constant human supervision. That goes back to cruise control itself. With regular cruise control, you could take your feet off the pedals, but might have to intervene fairly often, either by using the speed adjust buttons or by taking full control. Interventions could be several times a minute. Later, “Adaptive Cruise Control” arose, which still required you to steer and fully supervise, but would only require intervention on the pedals rarely on the highway. A few times an hour might be acceptable.
The new autopilot systems allow you to take your hands off the wheel but demand full attention. Users report needing to intervene rarely on some highways, but frequently on other roads. Once again, the product is useful: if you only need to intervene once an hour, it can still make your drive more relaxing.
Now look at what a car that drives without supervision has to do. Human drivers have an accident around every 2,500 to 6,000 hours, depending on what figures we believe. That’s a minor accident, and it’s after around 10 to 20 years of driving. A fatality accident takes place every 2,000,000 hours of driving — around 10,000 years for the typical driver. (It’s very good that it’s much more than a lifetime.)
If a full robocar needs an intervention, that means it’s going to have an accident, because there is nobody there to intervene. As with humans, most of the errors that would cause an accident are minor: running off the road, fender benders. Not every mistake that could cause a crash or a fatality actually causes one. Indeed, humans make mistakes that might cause a fatality far more often than every 2,000,000 hours, because we “get away” with many of them.
Even so, the difference is staggering. A cruise autopilot like Tesla and the others have made is a workable product if you have to correct it a few times an hour. A full robocar product is only workable if you would need to correct it in decades or even lifetimes of driving. This is not a difference of degree, it is a difference of kind. It is why there is probably not an evolutionary path from the cruise/autopilot systems based on existing ADAS technologies to a real robocar. Doing many thousands of times better will not be done by incremental improvement. It almost surely requires a radically different approach, and probably very different sensors.
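The gap can be put in rough numbers using the figures above. A back-of-envelope sketch (the two-corrections-per-hour figure for a cruise autopilot is my own illustrative assumption):

```python
# A cruise autopilot is workable if corrected a couple of times per hour.
cruise_hours_per_intervention = 0.5            # ~2 corrections/hour (illustrative)

# A full robocar's uncorrected error is an accident, so it must at least
# match the human minor-accident rate quoted above.
human_hours_per_minor_accident = (2500, 6000)

low, high = (h / cruise_hours_per_intervention
             for h in human_hours_per_minor_accident)
print(f"needs to be roughly {low:,.0f}x to {high:,.0f}x better")
# roughly 5,000x to 12,000x, before even considering fatality rates
```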
To top it all off, a full robocar doesn’t just need to be this good, it needs a lot of other features and capabilities once you imagine it runs unmanned, with no human inside to help it at all.
The mistaken belief in an evolutionary path also explains why some people imagine robocars are many decades away. If you wanted evolutionary approaches to take you to 100,000x better, you would expect to wait a long time. When an entirely different approach is required, what you learn from the old approach doesn’t help you predict how the other approaches — including unknown ones — will do.
It does teach you something. By being on the road, Tesla will encounter all sorts of interesting situations they didn’t expect. They will use this data to train new generations of software that do better. They will learn things that help them make the revolutionary unmanned product they hope to build in the 2020s. This is a good thing. Google and others have also been out learning that, and soon more teams will.
Last night, one day early, I attended Stanford’s unveiling of their newest research vehicle for self-driving. In order to do experiments with drifting (where you let the rear wheels skid freely), they heavily modified an old DeLorean.
They managed to get Jamie Hyneman of Mythbusters to host the event, so there was a good crowd. He asked “Why a DeLorean?” and instead of saying the obvious line:
“The way I see it, if you’re going to build a self-driving drifting car, why not do it with some style?”
They got into the actual technical reasons for it, even though they called the car Marty and were revealing it one day before “Back to the Future Day” — Oct 21, 2015, the day in the future to which Marty travels in the 2nd movie.
Back to the present: this car, with rear wheel drive and a central engine mount, is not a great car to drive, so they removed the engine and replaced it with dual electric motors from Renovo. This creates a car able to drive the two rear wheels independently, which lets the software spin the wheels at different rates and do things that no human driver could ever do, including special types of drifting. They have already managed to get the car to turn tighter doughnuts (circles) than a human could.
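For an ordinary (non-drifting) turn, the wheel-speed split that independent motors must produce falls out of simple geometry. A sketch, with illustrative numbers for speed, turn radius and track width (not Marty’s actual parameters):

```python
def rear_wheel_speeds(v_center_mps, turn_radius_m, track_width_m):
    """Wheel speeds for a turn of a given radius: both wheels share the
    yaw rate omega = v / R, but ride on circles of different radii."""
    omega = v_center_mps / turn_radius_m            # yaw rate, rad/s
    inner = omega * (turn_radius_m - track_width_m / 2)
    outer = omega * (turn_radius_m + track_width_m / 2)
    return inner, outer

# 5 m/s through a 10 m radius turn with a 1.5 m track width
inner, outer = rear_wheel_speeds(5.0, 10.0, 1.5)
print(inner, outer)   # 4.625 5.375
```

Independent motors can go far beyond this, spinning the wheels at ratios (or in directions) no mechanical differential would allow, which is what enables the tight doughnuts and the drifting experiments.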
Drifting is usually done for show — it rarely will help you in a race. Hyneman actually showed that in one of the episodes of his show. Stanford’s team wants to answer whether the robot’s ability to do inhuman driving might offer more “outs” in a dangerous situation, like trying to avoid a collision. Might a car twist its wheels (perhaps some day all of its wheels) and spin them at different speeds to make the car take a path which could avoid an accident?
In effect, you would be trying to make a car that can drive like a Hollywood stunt car. In movies, stunt drivers often do fairly improbable and impossible moves with cars to avoid accidents. A classic Hollywood scene involves a car tilting up on two wheels to get through a tiny gap it could not drive through. (The Stanford team did not propose this, and it’s a pretty hard thing to do, but it’s one way to envision the general idea.)
Up to now, research on accident avoidance has been fairly low-key. After all, the main task is to drive safely in the lane you are supposed to drive in. That’s plenty of work, and it gets almost all of the focus. Eventually, teams will focus on what to do when things go wrong, but for now the prime priority is to make sure things don’t go wrong. Someday, they may even focus on the infamous trolley problem.
Normally, drifting is a bad idea. It means a loss of control and a loss of power. Normally, the connection of the tires and the road is the sole tool you have to drive and control a car. You would only give it up if you absolutely have to. Perhaps, the research will show, there are times you might want to.
Generally, drift or not, robots should become very good at avoiding accidents. They will know the physics of the tires perfectly, and they will calculate without panic, and will be able to drive with full confidence missing things by very thin margins while staying safe. While a human could not navigate a space only a few inches wider than the car with confidence, a robot could. A robot will always use the optimal combination of steering and braking, which humans need a lot of training to do. (Your tires can give you braking force or steering force but you must reduce one to get more of the other, so often the best strategy is to brake first and then steer, though the human instinct is to do both.)
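The tire tradeoff in that parenthetical is often drawn as the “friction circle”: the vector sum of braking and cornering force cannot exceed the total grip, roughly μ·m·g. A minimal sketch of the constraint (the μ and mass values are illustrative):

```python
import math

def max_cornering_force(total_grip_n, braking_force_n):
    """Friction circle: sqrt(F_brake^2 + F_corner^2) <= total grip, so
    the cornering force available shrinks as braking force grows."""
    if braking_force_n >= total_grip_n:
        return 0.0   # all grip spent on braking; no steering authority left
    return math.sqrt(total_grip_n ** 2 - braking_force_n ** 2)

grip = 0.9 * 1500 * 9.81          # mu * m * g for an illustrative car, in N
print(max_cornering_force(grip, 0.0) / grip)         # 1.0: full cornering force
print(max_cornering_force(grip, 0.8 * grip) / grip)  # ~0.6: heavy braking leaves
                                                     # only 60% of grip to steer
```

This is why the brake-first-then-steer strategy works: shedding speed early frees up grip for the steering that follows.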
The car is not super autonomous. It is meant to do test algorithms on private open spaces. It won’t be avoiding obstacles or plotting lanes on a highway. It will be testing how well a computer can get the most from the tires.
Tesla’s offering is not too different from what many other automakers have shown in what is sometimes called “highway cruise” — a combination of lanekeeping and adaptive cruise control, both of which have been around for a while. The vehicle reportedly insists you keep your hands touching the wheel, but some owners report you can take them off. Of particular note is the addition of lane-changing, which you can do by flicking the turn signal. You must check behind you: if you attempt this when the next lane is moving fast and somebody is coming up on you quickly, you could cause a real problem. The vehicle won’t do the lane change if its blind spot detector sees an adjacent car, but it won’t stop you from cutting off somebody who is not in that zone and gaining on you.
I am curious about the claim that the system will “overtake” another car — I have not tried it out yet, but I presume since it should not change lanes on its own, user input will be required to command driving around another car. While you can’t safely make a lane change into a lane where you see no cars, you actually can make the change after passing a car because you know where that car is and that nothing is going to rush at you in the lane.
The release of this and similar products will test a supposition I made earlier, that these products may not be as exciting as hoped. Worse, some drivers may find it a bit frightening to trust control to a vehicle knowing that from time to time they will need to grab the wheel. I have felt that myself driving in cars with adaptive cruise control, wondering whether the ACC has seen the car stopped up ahead of me and will stop for it, since I see it before the radar or camera system does.
On the other hand, many people have reported that even though they must supervise, highway cruise can make the trip more relaxing, just as basic cruise control does. The trick is to get your brain into putting focus on its new sole task — supervisor — so that the rest of your brain can relax. With cruise control, you are reasonably able to have one part of your brain worry about steering, and relax the part that was going to worry about speed. So this may happen here.
More of a concern are the people who will trust it too much. Many of us already do crazy things, texting or playing with things on our phones when we are doing fully manual driving. It’s a given this will happen here. It will be safer to take your eyes from the road for a longer period than you should in a manual car, but people may magnify that. Yes, if you take your eyes off the road and the car ahead of you stops suddenly, the car will very probably brake for you — as will any car with forward collision avoidance. But not 100% of the time, and that’s the rub if you trust it to do so every time.
I may have more to say after taking a ride in one of these. I think the first really interesting product won’t be this, but a more full-auto traffic jam assist that will drive for you in a traffic jam and allow you to take your attention off the road entirely to read or work on your phone or computer. At the low speeds of a traffic jam, boxed in by other cars, the driving problem is much simpler. You don’t even need to see the lanes, just follow the cars. If the car in front of you zooms ahead, the traffic jam is over, and the driver needs to manually speed the car up. Done at low speeds, that transition can be fairly safe, and in addition, if the driver does nothing and the car slows to a halt, that is not unsafe — just annoying — in a breaking-up traffic jam.

The main remaining problem in traffic jams is what to do when the cars in front of you change lanes. Are they moving just to get into another lane, or is it because all the lanes are shifting due to restriping (common in jams) or an obstacle? You need to get that right, and let the driver know about it, but you can’t buzz the driver every time somebody does a lane change or the product is not useful.
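The “just follow the cars” core of a traffic jam assist can be sketched as a toy gap-keeping controller. This is purely illustrative (the gain, gap target and speed threshold are my own choices, not any vendor’s design):

```python
def jam_assist_speed(gap_m, lead_speed_mps,
                     target_gap_m=5.0, max_speed_mps=8.0, gain=0.5):
    """Toy gap-keeper: track the lead car's speed plus a correction
    proportional to the gap error. Returns None when the lead car pulls
    away beyond jam speeds, meaning the driver must take over."""
    if lead_speed_mps > max_speed_mps:
        return None                      # the jam is over; hand back control
    desired = lead_speed_mps + gain * (gap_m - target_gap_m)
    return max(0.0, min(desired, max_speed_mps))

print(jam_assist_speed(5.0, 3.0))    # 3.0 -- holding the target gap
print(jam_assist_speed(2.0, 3.0))    # 1.5 -- too close, ease off
print(jam_assist_speed(5.0, 20.0))   # None -- jam over, driver takes over
```

Note what the toy leaves out: the hard part described above is not the speed control but deciding why the lead car moved, which is a perception and prediction problem, not a control problem.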
Several car vendors (and probably Tesla) have been working on this, and could release it quite soon, if they get the guts and legal approval to do so.
Tesla may get more attention for the way they delivered these new features: as an over the air software update. Tesla has now done this several times, and promised it would do this, but from the viewpoint of traditional car makers, it is incredibly radical. In the modern computerized car, as with the phone, regular software updates are just part of the system. This is going to be mostly positive, but will create some issues when the time comes for a “recall” of some electronic function of a car. Today when that happens, the car company mails all the owners and says, “Please come in to your dealer to get new firmware for your ECU to fix the problem.” From that point, it is the owner’s responsibility to get the update done. Tesla can send a fix over the air, but that means it can’t pass responsibility on to the owners. Some day, a company is going to find a problem in their self-drive system that clearly should be fixed, but won’t have the fix ready to go for weeks. It will face the question of what to do in those intervening weeks. Will it be forced to turn off the system it now knows to have a flaw? Or can it tell owners, “We know we have this flaw; if you want to drive, it’s your responsibility now.”
During a very busy September of travel, I let a number of important stories fall through the cracks. The volume of mainstream press articles on Robocars is immense. Most are rehashes of things you have already seen here, but if you want the fastest breaking news, there are now some sources that focus on that. Here I will report the important news with analysis.
Earlier we learned that Google restructured itself and put the car project in the new Alphabet Holding company. Google also hired John Krafcik to lead the project. Krafcik is a car industry veteran from Hyundai, Ford and Truecar but what’s interesting is he’s been announced as “CEO,” which strongly implies that the project will be spun out as a subsidiary as I suspected, with freedom to be its own company. Chris Urmson has led the project since Sebastian Thrun moved on to Udacity, but the bulk of the work has been engineering, which Chris will continue to lead. This is a good move, one person probably should not do both. (Chris did do a great job on the recent 60 minutes, though.)
Google continues to state it does not wish to be a carmaker, and will work with existing carmakers.
Mercedes and BMW not selling cars
Perhaps the biggest news comes in announcements from both BMW and Mercedes that they plan to investigate selling rides instead of cars. They both own large car-sharing systems (DriveNow and Car2Go respectively) which rent cars one way by the minute, but while they are large for the industry, they are tiny portions of these companies. However, the idea that these companies, with a century of being about selling cars and nameplate luxury to consumers who drive away in them, can think seriously of being like Uber is a sign we’re in the 21st century. BMW and Mercedes are not idiots — they have always known this was a potential business plan. The hard part at a big company is having the guts and leadership to turn the company 90 degrees if it needs to be done.
The Mercedes prototype has the name “Car2Come” — a car that delivers itself to you and you drive it. They understand that name doesn’t really sound that great in the US market. :-) Longtime readers will recognize this as similar to what I called a whistlecar in 2007.
Apple clues keep showing up
Apple refuses to say anything, but little clues keep emerging, including records of Apple’s request to use an old military base converted into a robocar test track in Northern California, and also talking to the DMV’s crew about robocars. Other leaks suggest with certainty that Apple project Titan is building an electric car (due in 2019) and that making it self-driving is on the table, but not the #1 priority.
After Uber raided some of the top people from CMU’s robotics labs (it should be noted that Chris Urmson and Sebastian Thrun also came out of that lab) they have been donating money back to fund more research inside the school, and also at the University of Arizona.
Uber remains one of the biggest game-changers out there. Aside from their money and unconventional thinking, and of course the world’s #1 brand in selling rides, Uber also has the easiest path to collecting vast volumes of driving data at low cost, and data are important.
Toyota and Honda make announcements
The Japanese have been surprisingly behind, except for Nissan, but now Honda has finally made some serious steps, getting permits for California, and Toyota has announced new projects and joined the popular goal of 2020, saying that Toyota cars will be driving the public around at the 2020 Olympics in Tokyo.
Some other companies have also joined the game, such as Citroën, which had a car drive to the recent ITS World Congress in Bordeaux.
In the Shuttle business, Navia is back as the Navya, and the new vehicle is more enclosed, as shown in this video. Many other private campus shuttle projects are heating up around the world, including Citymobil2. Easymile (also from France) is setting up a pilot project in the Bay Area and shuttle projects are underway in many labs and towns around the world. (Disclaimer: I am discussing involvement in one of them which I will talk about later.)
The news goes on
The volume of news stories shows why Gartner put robocars so high on their hype cycle. I have not covered a lot of other news, including:
New states, provinces and countries passing new laws or enabling testing. Even my own home province of Ontario.
The creation of new test tracks and facilities — these are useful but as news they are mostly PR
Last week, I commented on the VW scandal and asked the question we have all wondered, “what the hell were they thinking?” Elements of an answer are starting to arise, and they are very believable and teach us interesting lessons, if true. That’s because things like this are rarely fully the fault of a small group of very evil people, but are more often the result of a broad situation that pushed ordinary (but unethical) people well over the ethical line. This we must understand because frankly, it can happen to almost anybody.
The ingredients, in this model, are:
A hard driving culture of expected high performance, and doing what others thought was difficult or impossible.
Promising the company you will deliver a hotly needed product in that culture.
Realizing too late that you can’t deliver it.
Panic, leading to cheating as the only solution in which you survive (at least for a while.)
There’s no question that VW has a culture like that. Many successful companies do; some even attribute their excellence to it. Here’s a quote from the 90s from VW’s leader at the time, talking about his desire for a hot new car line, and what would happen if his team told him that they could not deliver it:
“Then I will tell them they are all fired and I will bring in a new team,” Piech, the grandson of Ferdinand Porsche, founder of both Porsche and Volkswagen, declared forcefully. “And if they tell me they can’t do it, I will fire them, too.”
Now we add a few more interesting ingredients, special to this case:
European emissions standards and tests are terrible, and allowed diesel to grow very strong in Europe, and strong for VW in particular
VW wanted to duplicate that success in the USA, which has much stronger emissions standards and tests
The team is asked to develop an engine that can deliver power and fuel economy for the US and other markets, and do it while meeting the emissions standards. The team (or its leader) says “yes,” instead of saying, “That’s really, really hard.”
They get to work, and as has happened many times in many companies, they keep saying they are on track. Plans are made. Tons of new car models will depend on this engine. Massive marketing and production plans are made. Billions are bet.
And then it unravels
Not too many months before ship date, it is reported, the team working on the engine — it is not yet known precisely who — finally comes to a realization. They can’t deliver. They certainly can’t deliver on time, and possibly they can never deliver at the price budget they have been given.
Now we see the situation in which ordinary people might be pushed over the line. If they don’t deliver, the company has few choices. They might be able to put in a much more expensive engine, with all the cost such a switch would entail, and price their cars much higher than they hoped, delivering them late. They could cancel all the many car models which were depending on this engine, costing billions. They could release a wimpy car that won’t sell very well. In any of these cases, they are all fired, and their careers in the industry are probably over.
Or they can cheat and hope they won’t get caught. They can be the heroes who delivered the magic engine, and get bonuses and rewards. 95% of the time they don’t get caught, and even if they are caught, the outcome is worse, but in their minds not a lot worse than what they are already facing. So they pretend they built the magic engine, and program it to fake that on the tests.
Among the most common questions I have seen in articles in the mainstream press, near the top is, “Who is going to be liable in a crash?” Writers always ask it but never answer it. I have often given the joking answer by changing the question to “Who gets sued?” and saying, “In the USA, that’s easy. Everybody will get sued.”
But in reality, in spite of all the writing that this is a hard and central question, the long term answer has always been obvious. If the software/hardware in a car is responsible for the crash (i.e., caused the vehicle to do something like depart its right-of-way) then it’s pretty obvious that the vendor of that car will be liable, or perhaps some proxy for the vendor like a taxi fleet operator.
The main reason this has sat as an open question for so long is that any lawyer will tell you never to admit in advance that you should be liable for something. From a lawyer’s standpoint, that can never do anything but come back to haunt you later. There’s no upside, and a big downside, so they tell clients not to do it.
It is not just a legal decision. After all, customers are not going to want to buy or even ride in cars if the rule is, “If this car crashes because of our bug, then you (or your insurance) will be liable, and demerit points or even criminal charges might go to you.” Early adopters might accept that but it’s not a workable long-term policy. If the question of points and rare criminal charges could be eliminated, we could see a workable system where the passenger is liable and has insurance to fully cover it, but deep down that’s a silly system; again something only for the early days.
Even if you could get such a system in place with passenger-insurance, the reality is the vendor would still get sued. Even a great policy and indemnification from the passenger would not prevent plaintiff’s lawyers from wanting to go after the deep pocketed vendor. They would look for the hope of negligence (or in their dreams, VW style fraud) to get juicy damages. Even if not directly liable, the vendor would pay more in legal costs for some cases than the cost of the accident.
As I have written before, in today’s world, car accident costs are not paid by individuals or even companies. If I’m liable, my insurance company pays, and every policyholder shares the cost in their premiums. If my car has a defect, the car company pays, but builds in a share of that cost or insurance against it into the price of every car — once again the public shares the cost. This will not change in the world of robocars, and fighting over liability is really just fighting over who the money will flow through, and who will get the burdens and benefits of control of the legal strategy.
The saner approach has the vendor responsible — at least while the car was driving itself — with the vendor self-insuring or buying reinsurance or product liability insurance to cover the cost. The cost we currently know very, very well — rooms of actuaries at every auto insurance company study it all day — and we can handle it. (We don’t know the cost of the early, special lawsuits, which will be unlike typical car crash cases.)
If the world is rational, the total number of accidents and their severity goes way down, and that cost goes way down with it. The world may not be rational, but ideally this new lower cost is built into the cost of the ride or the car, and we all pay less. Hooray.
For cars that people drive some of the time, traditional insurance will do the job, but it should be billed by the mile — called PAYD or Pay-as-you-drive.
Yesterday I attended the “Silicon Valley reinvents the wheel” conference by the Western Automotive Journalists which had a variety of talks and demonstrations of new car technology.
Now that robocars have hit the top of the “Gartner Hype Cycle” for 2015, everybody is really piling on, hoping to see what the robocar will do for their industry. And of course, it will do a great deal of good, though not for every industry.
Let me break down some potential misconceptions if my predictions are true:
The dashboard almost vanishes
The dashboard of the modern car is amazing and expensive. Multiple digital screens and lots of buttons and interfaces on the wheel, the central stack and beyond. Fancy navigation systems, audio and infotainment systems, mobile apps, phone integration, car information and much more can be found there. There have been experiments with gesture and speech based controls, concierge services and fancy experimental controls. There are video screens in the headrests for the rear passengers. In recent years, specialized offerings like Ford Sync, GM OnStar and many others have become differentiating factors in cars, and there’s a lot of money in that dashboard.
This started changing a bit as car companies came to accept the dominance of the mobile phone in the driver’s life. People stopped wanting things like navigation and music from their car. They had better versions of those things in their phone, and they knew how to use them and had customized them. This year we are seeing deployment of “Android Auto” and “Apple CarPlay,” which connect your phone to the car’s dashboard screen, and let you see and control a very limited number of apps on the screen. The car makers had to be dragged to this kicking and screaming, but frankly today’s offerings from both Google and Apple are fairly poor in this department. For example, you can only run the special approved and modified apps. If you like to navigate with Waze instead of Google Maps (even though Waze is a Google product) you can’t — your phone is locked out when it is running Android Auto.
All of this is still based on the idea that the driver must put all focus on driving. You can’t have a complex UI where the driver looks at the screen for more than a glance, and you should not distract the driver.
The Mercedes F015 concept car shown here is one of the first automaker explorations of a car where it’s OK to distract the driver. On the doors and walls of this car, you can see large touchscreens with concept apps on them.
In contrast, Google’s new prototype 2-seater car barely has a dashboard at all. Their answer shows more wisdom, I think. Your phone and tablet are always going to be your preferred choice for mobile computer interactions. Access to the internet, music and entertainment will go through them. The phone is replaced every 2 years or even more often, and will always have superior hardware and services, and more effort goes into its design and the design of its systems and apps than will go into a car system. But even if the car system is fantastic, 2 years later it will be behind the phone.
As such, the phone is even how you give commands to the car, such as what destination to drive to. And we all know it’s vastly easier to enter destinations on phones than in any car nav system we’ve ever seen.
The full-auto robocar becomes more like a living space than a car. More like your TV room or your office. It is those places where we might seek clues as to the interior of the car. My living spaces do not have touchscreens on all the walls, for example, so it’s not too likely my car will either — or if they become useful, both will have them.
In my office I do have a desk and a big screen, along with better user input devices (full keyboard and trackball) as well as more computing power. These things I will want in my car, but of course they should personalize to me, probably using my phone as a gateway to that information. In my TV room I do have a large screen, and that will probably show up in the car, but it probably won’t be a touchscreen any more than my TV is. I will control it from my phone.
Car makers agreed that they should not attempt to pioneer new forms of user interface, and that this is why experiments in gesture control and other novel UIs have not done well. Drivers don’t want to learn entirely new styles of computer interaction when sitting in a car. They want to use the forms they already know. New forms should be pioneered in the general computing world, and moved into the car.
With so much dependence on the phone, the car will need a small and simple tethered phone so that if your phone is not present for some reason, you can still do all the basic functions. Of course there will be power so running out of battery is not an issue, other than for unlocking and summoning the car.
We also have to realize that long ago, car dashboards had barely anything on them. Even emergency driving (when the self-drive system has failed, and you plug in handlebars or a joystick) hardly needs anything more than a speedometer, and frankly you can drive fine with other traffic even without that.
It should be noted that the phone is not a secure device. As such, it won’t send much to the self-driving system. It will receive status information from it, but only give very limited commands. Indeed, it is quite possible that your phone might control your car through the cloud, sending commands to a central server which then talks to the car. A more simplified interface must exist for cloud dead zones, with all its traffic highly scrutinized. Robocars will be able to drive without the cloud, but it will be the exception, not the rule, and it won’t be done for very long or far.
A lot of the event dealt with audio. Audio makers are making good strides in producing fantastic car audio, and doing it at high prices. This trend started when it became clear that the car was the primary place to listen to music for many people, though this has decreased with the rise of the smartphone and other good music players.
People will continue to want good audio systems in their cars, but they will be offered a new, and cheaper option thanks to robocars, namely quality noise-canceling headphones with subwoofer seats. Today, drivers are not allowed to wear noise isolating headphones because they should respond to traffic sounds and sirens. In the future, drivers might like to take themselves away from road noise to get good music. Aside from being cheaper, this solution allows all occupants of a car to have different music (or video soundtrack) if they wish to.
Last weekend I tried out a new product called a SubPac, which is a small seat cushion that emits strong subwoofer bass directly into your body, providing a fair bit of the feeling of standing in front of giant dance club speakers. This unit is $380 today but will come down in price and soon should be a modest extra cost in car seats. (I am also interested in backpack versions, which will eventually make silent disco more acceptable to those who crave mind-numbing bass.)
In the other direction, the freedom of design that robocars provide means that larger vehicles will become interesting spaces, and you might come to view the car parked in your driveway as your music listening room or even home theatre and video game room, and invest the money in that (if you own your robocar).
Results from a design competition at the Academy of Art University in San Francisco were presented. The designs were interesting, in line with many other futuristic car designs I have seen, though larger and with more lounging. Sadly, the focus was on large vehicles. The reality, I think, is that most new vehicles will be small 1-2 person vehicles, and I would like to see more radical design ideas there. Around 80% of urban trips are solo, and we have a tremendous opportunity to design comfortable higher end solo vehicles which will also be very efficient in terms of both energy and road space occupancy. Here’s where we want to see adjustable comfortable chairs, fold-out desks and screens and other techniques to give us the things from our offices and homes in a space just 1.5m wide.
The concept of infrastructure changes was touched on. Regular readers here will know I instead favour minimal infrastructure change and the idea of “virtual infrastructure,” where as much as possible is done at the software level. “Smart cars on stupid roads.” In spite of this, cities and agencies commonly ask what they should do to hasten the robocar; I love the sentiment, but the reality is that the answers are minimal.
All of this takes us further towards the conclusion that some models of robocar in the future will be incredibly cheap. Particularly the “city cars” which never go on the highway and only carry 1-2 people. With simple electric drivetrains they will be easy to build and maintain, and eventually their battery cost will become very reasonable. The dashboard vanishes along with many other controls. The expensive sound system vanishes. The windshields need not be a large custom piece of curved glass — in fact they don’t even have to exist other than for passenger comfort. The parts count goes down significantly. The only additions are the sensors and computers, both on Moore’s law downward curves, and perhaps that nice large screen for working. We may also see a more advanced computer-controlled suspension to keep the ride smooth and comfortable.
Recently I did a road trip through Portugal. I always enjoy finding something new that they are doing in a country which has not yet spread to the rest of the world.
Along a number of Portuguese roads, you will see a sign marked “velocidade controlada” — speed control — and then a modest distance down the road will be a traffic light in the middle of nowhere. There is no cross street. This is an interesting alternative to the speed bump or other “traffic calming” systems.
At the sign a radar gun measures your speed. If you are over the limit, then as you approach the light, it turns red. It turns red for you, and also anybody behind you and the oncoming traffic.
The result is people slow down for these signs to the limit. Far more effectively than any speed bump, and without the very annoying bump. Mostly this is done on faster roads than the quiet residential streets that have speed bumps, and of course traffic lights cost more than speed bumps, at least today.
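The control logic behind such a light is trivially simple, which is part of why the approach is promising. A minimal sketch of the idea (function names and the hold-time parameter are my own illustration, not the actual Portuguese system):

```python
# Hypothetical sketch of a "velocidade controlada" controller: the
# roadside radar measures each approaching vehicle's speed; if it
# exceeds the posted limit, the light ahead turns red for everyone.

def light_state(measured_speed_kmh: float, limit_kmh: float) -> str:
    """Return the light state after one radar measurement."""
    if measured_speed_kmh > limit_kmh:
        return "red"    # stop the speeder (and everyone behind them)
    return "green"      # compliant traffic flows through

print(light_state(72.0, 50.0))  # prints "red"   (speeder triggers it)
print(light_state(48.0, 50.0))  # prints "green" (compliant driver)
```

In a real deployment the red would presumably be held for some seconds and might hysteresis against borderline readings, but the core trigger really is just this one comparison.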
The social dynamic is interesting. Even though many of us are scofflaws when it comes to the speed limit, most are much more religious about a red light. Even a red light like this one, where there is no physical danger to running it, just the fairly unlikely risk of a stronger ticket. Strangely, though speeding and running this light are both just violations of the law, I never saw anybody run one, and drivers who were total speed demons elsewhere quickly slowed down before these signs. (People know where they are, so they aren’t a general speed reducer, but rather more like a speed bump in cutting speed in one particular place.)
Added to this is the element of public shame. If you trigger this light, you stop everybody around you too. If you’re a sociopath, this won’t concern you, but for most there is a deep shame about it.
Today, as noted, a traffic light and radar gun are a moderately expensive thing. These lights are not nearly as expensive because they don’t require the complex intersection survey and programming of sometimes 20-30 real lights, but they still need a pole, and electricity, and weather hardened gear. In the future, I predict this sort of tech will get quite inexpensive, possibly cheaper than a speed bump. You could imagine making one with solar power and LEDs which only displayed the red light, not the green, and so needed no external power for it. They need not be on all the time — in fact if the batteries got low, they could just shut down until they recharged. The radar and communications link could also become quite cheap.
Of course, I would like to see this combined with more reasonable speed limits. I have pointed out before that the French Autoroute approach of a realistic limit of 130km/h that everybody obeys and where you really get tickets if you exceed it is much better than the US approach of a 65mph limit that 90% of drivers disregard. This system is much better than the speed bump. Speed bumps hurt cars and impede emergency vehicles. Emergency vehicles can blow through these. These could even vary their speed based on conditions and time of day.
Robocars of course would know where all these are and never trigger one, even if the occupants have commanded the vehicle to exceed the limit. But this is mostly a technology for human drivers. It is halfway along the path to “virtual infrastructure,” which is how roads and traffic control will work in the future, when every car, human driven or not, uses maps and data delivered over phone networks to know the road, rather than signs and lights.
Most of you would have heard about the giant scandal where it has been revealed that Volkswagen put software in their cars to deliberately cheat on emissions tests in the USA and possibly other places. It’s very bad for VW, but what does it mean for all robocar efforts?
You can read tons about the Volkswagen emissions violations but here’s a short summary. All modern cars have computer controlled fuel and combustion systems, and these can be tuned for different levels of performance, fuel economy and emissions. (Of course, ignition in a diesel is not done by an electronic spark.) Cars have to pass emission tests, so most cars have to tune their systems in ways that reduce other things (like engine performance and fuel economy) in order to reduce their pollution. Most cars attempt to detect the style of driving going on, and tune the engine differently for the best results in that situation.
VW went far beyond that. Apparently their system was designed to detect when it was in an emissions test. In these tests, the car is on rollers in a garage, and it follows certain patterns. VW set their diesel cars to look for this, and tune the engine to produce emissions below the permitted numbers. When the car saw it was in more regular driving situations, it switched the tuning to modes that gave it better performance and better mileage but in some cases vastly worse pollution. A commonly reported number is that in some modes 40 times the California limit of Nitrogen Oxides could be emitted, and even over a wide range of driving it was as high as 20 times the California limit (about 5 times the European limit.) NOx are a major smog component and bad for your lungs.
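Reports describe the defeat device as recognizing the distinctive signature of a dynamometer test: the drive wheels spin through a fixed, well-known speed cycle while the car itself never steers or accelerates laterally. Conceptually the logic is something like this sketch (entirely my illustration of the reported behaviour, not VW’s actual code):

```python
# Illustrative only (not VW's actual logic): on a dyno the wheels turn
# but the steering wheel never moves and there is no lateral motion,
# so a crude check on those signals can flag "we are being tested."

def looks_like_dyno_test(steering_moved: bool, lateral_accel_g: float) -> bool:
    """Crude heuristic: wheels spinning but no steering or lateral motion."""
    return (not steering_moved) and abs(lateral_accel_g) < 0.01

def choose_engine_map(on_dyno: bool) -> str:
    """Pick the engine tuning: clean for the test, dirty for the road."""
    return "low_NOx_test_map" if on_dyno else "high_performance_map"

print(choose_engine_map(looks_like_dyno_test(False, 0.0)))  # prints "low_NOx_test_map"
```

The point of the sketch is how little code the cheat requires: the hard part was the engineering decision to ship it, not the implementation.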
It has not been revealed just who at VW did this, and whether other car companies have done this as well. (All companies do variable tuning, and it’s “normal” to have modestly higher emissions in real driving compared to the test, but this was beyond the pale.) The question everybody is asking is “What the hell were they thinking?”
That is indeed the question, because I think the central issue is why VW would do this. After all, having been caught, the cost is going to be immense, possibly even ruining one of the world’s great brands. Obviously they did not really believe that they might get caught.
Beyond that, they have seriously reduced the trust that customers and governments will place not just in VW, but in car makers in general, and in their software offerings in particular. VW will lose trust, but this will spread to all German carmakers and possibly all carmakers. This could result in reduced trust in the software in robocars.
What the hell were they thinking?
The motive is the key thing we want to understand. In the broad sense, it’s likely they did it because they felt customers would like it, and that would lead to selling more cars. At a secondary level, it’s possible that those involved felt they would gain prestige (and compensation) if they pulled off the wizard’s trick of making a diesel car which was clean and also high performance, at a level that turns out to be impossible.
Much press has been made over Jonathan Petit’s recent disclosure of an attack on some LIDAR systems used in robocars. I saw Petit’s presentation on this in July, but he asked me for confidentiality until they released their paper in October. However, since he has decided to disclose it, there’s been a lot of press, with truth and misconceptions.
There are many security aspects to robocars. By far the greatest concern would be compromise of the control computers by malicious software, and great efforts will be taken to prevent that. Many of those efforts will involve having the cars not talk to any untrusted sources of code or data which might be malicious. The car’s sensors, however, must take in information from outside the vehicle, so they are another source of compromise.
There are ways to compromise many of the sensors on a robocar. GPS can be easily spoofed, and there are tools out there to do that now. (Fortunately real robocars will only use GPS as one clue to their location.) Radar is also very easy to spoof — far easier than LIDAR, agrees Petit — but their goal was to see if LIDAR is vulnerable.
The attack is a real one, but at the same time it’s not, in spite of the press, a particularly frightening one. It may cause a well designed vehicle to believe there are “ghost” objects that don’t actually exist, so that it might brake for something that’s not there, or even swerve around it. It might also overwhelm the sensor, so that it feels the sensor has failed, and thus the car would go into a failure mode, stopping or pulling off the road. This is not a good thing, of course, and it has some safety consequences, but it’s also a fairly unlikely attack. Essentially, there are far easier ways to do these things that don’t involve the LIDAR, so it’s not too likely anybody would want to mount such an attack.
Indeed, to do these attacks, you need to be physically present, near the target car, and you need a solid object that’s already in front of the car, such as the back of a truck that it’s following. (It is possible the road surface might work.) This is a higher bar than attacks which might be done remotely (such as computer intrusions) or via radio signals (such as with hypothetical vehicle-to-vehicle radio, should cars decide to use that tech.)
Here’s how it works: LIDAR works by sending out a very short pulse of laser light, and then waiting for the light to reflect back. The pulse is a small dot, and the reflection is seen through a lens aimed tightly at the place the pulse was sent. The time it takes for the light to come back tells you how far away the target is, and the brightness tells you how reflective it is, like a black-and-white photo.
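The underlying arithmetic is simple time-of-flight: range is the speed of light times half the round-trip time, and an attacker who wants to fake an object at a given range must land a pulse at exactly the corresponding delay. A minimal sketch (the function names and example numbers are my own illustration, not from Petit’s paper):

```python
# Time-of-flight: a LIDAR pulse travels to the target and back at the
# speed of light, so range = c * round_trip_time / 2.

C = 299_792_458.0  # speed of light, m/s

def range_from_return(round_trip_s: float) -> float:
    """Range implied by a return arriving round_trip_s after emission."""
    return C * round_trip_s / 2.0

def spoof_delay_for_ghost(ghost_range_m: float) -> float:
    """To fake an object at ghost_range_m, the attacker's pulse must
    arrive this long after the LIDAR fired its own pulse."""
    return 2.0 * ghost_range_m / C

# A real wall about 30 m away returns in roughly 200 nanoseconds:
print(range_from_return(200e-9))    # ≈ 30 m
# Faking an object at 10 m requires a pulse about 67 ns after emission:
print(spoof_delay_for_ghost(10.0))  # ≈ 6.67e-8 s
```

The nanosecond-scale numbers make concrete why the attacker needs to predict the LIDAR’s firing schedule so precisely.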
To fool a LIDAR, you must send another pulse that comes from, or appears to come from, the target spot, and it has to come in at just the right time, before (or on some units, after) the real pulse from what’s really in front of the LIDAR comes in.
The attack requires knowing the characteristics of the target LIDAR very well. You must know exactly when it is going to send its pulses before it sends them, and thus precisely (to the nanosecond) when a return reflection (“return”) would arrive from a hypothetical object in front of the LIDAR. Many LIDARs are quite predictable. They scan a scene with a rotating drum, and you can see the pulses coming out, and know when they will be sent.
Jean-Louis Gassée, while a respected computer entrepreneur, wrote a critical post on robocars recently which matches a very common pattern of critical articles:
The pattern is as follows:
The author has been hearing about robocars for a while, and is interested
While out driving, or sometimes just while thinking, they encounter a situation which seems challenging
They can’t figure out what a robocar would do in that situation
They conclude that the technology must therefore be very far in the future.
His scenario is the very narrow road, so narrow that it really should be one-way but it isn’t. In most of the road, two cars can’t pass one another. Humans resolve this through various human dynamics, discussion and experience.
In most of these examples, the situation is not one that is new to robocar developers. They’ve been thinking about all the problems they might encounter in driving for over a decade in many cases. It’s extremely rare for a newcomer to come up with a scenario they have not thought of. In addition, developers are putting cars on the road, with over a million miles in Google’s case, to find the situations that they didn’t think of just by thinking and driving themselves. It is not impossible for novices to come up with something new — in fact a fresh eye can often be very valuable — but the fresh eyes should check to see what prior thinking may exist.
Some of the problems are indeed hard, and developers have put them later on the roadmap. They will not release their cars to operate on roads where the unsolved situations may occur. If snow is hard, the first cars will be released in places where it does not snow, or they will not drive on their own if it’s snowing. In the meantime, the problems will be solved, in a priority order based on how often they happen and how important they are.
The “two cars meet” situation involves very rare roads in the USA, so it’s not a high priority problem there, but it would not be a surprise problem. That’s because current plans have cars only drive with a map of the road they are driving. No map, they don’t drive the road.
That means they know the road well, and exactly how wide it is at every spot, and what its rules are (one-way vs. two-way and so on.) They will know their own width and the width of oncoming vehicles accurately. If they can’t safely drive a road, they won’t drive it. If it’s a rare road, the cost of that will be accepted. Driving every road everywhere is a nice dream, but not necessary to have a highly useful product. While Google’s ideal prototype is planned to be released for urban situations without a wheel, cars that need to go places where they can’t drive will continue to offer wheels or other interfaces (joysticks, tablet apps) that let a human guide them to get through problems.
The two-cars meeting problem is interesting because it’s actually one where the cars can far outperform humans. It’s also one of the rare times that communication between cars turns out to be useful. (Typically car to server to server to car, not direct v2v, but that’s another matter.)
The reason is that super narrow roads, including country roads and urban back-alleys have occasional wide-spots and turn outs where people can pass. They have to, to be two-way. And these will all be on the map. Cars on such a road would desire traffic data about other cars on the road. They will be able to make predictions about when they might encounter another car coming the other way. Most interestingly, one or both of the cars can adjust their speed so that they will encounter one another precisely at one of the wider spots where passing can take place.
In fact, if they do this well, they might drive a one-lane road at a nice fast speed, barely slowing down in these wider passing zones, in part because by knowing the width of the vehicles they will be able to confidently pass quite closely. If a robocar is meeting a human driven car, it would leave some slop, picking the right passing zone, arriving early in case the other car is faster than expected, waiting if it is slower.
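The coordination math is elementary. If car A starts at one end of the narrow stretch and car B at the other, driving toward each other, they naturally meet at a point proportional to their speeds; to meet instead at a mapped wide spot, car A just scales its speed so both cars arrive there together. A sketch under those assumptions (all names and numbers are my own illustration):

```python
# Hypothetical sketch: car A starts at position 0, car B at road_len,
# driving toward each other on a one-lane road. Pick the mapped
# passing zone nearest the natural meeting point, then compute the
# speed car A should hold so the two cars meet exactly there.

def meeting_point(v_a: float, v_b: float, road_len: float) -> float:
    """Where the cars meet if neither adjusts speed (meters from A)."""
    return road_len * v_a / (v_a + v_b)

def speed_to_meet_at(zone: float, v_b: float, road_len: float) -> float:
    """Speed car A must hold so both arrive at `zone` together:
    A covers `zone` while B covers `road_len - zone`, in equal time."""
    return v_b * zone / (road_len - zone)

zones = [150.0, 420.0, 780.0]            # mapped wide spots, m from A
v_a, v_b, road_len = 12.0, 10.0, 1000.0  # m/s, m/s, m

natural = meeting_point(v_a, v_b, road_len)       # ≈ 545 m from A
zone = min(zones, key=lambda z: abs(z - natural))  # nearest wide spot
print(zone, speed_to_meet_at(zone, v_b, road_len))
```

A real planner would also bound the speed change and add slack for uncertainty about the other car, but the core idea is just this proportion.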
This remarkable ability would allow us to build low-traffic roads and alleys which are mostly only one lane wide, but which could carry traffic fairly quickly and safely in both directions. Gassée’s problem is far from a problem — it’s actually a great opportunity to vastly decrease the cost and land requirements of road construction. I wrote about this a couple of years ago, in fact.
Even without communication, a robocar would do pretty well here. Its map would tell it, should it encounter another vehicle on the road it can’t pass, just where the closest passing spot is. It could back up if need be, or if the other car should back up, it could nudge in that direction, or even display instructions to a human driver on a screen. It would be able to do this far better than humans could because of its accurate measurements and driving ability. Generally, any human car should defer to the robocar’s superior knowledge and superior ability to manage a close pass-by. The car would figure it out the moment it sensed the other car, and immediately adjust speed to meet at a passing point, or possibly to back up. Unlike humans, they will be able to drive in reverse at high speed if they have 360 degree sensors.
Human drivers could actually play a role in this. Those running a mobile app like WAZE could know about other cars running the app, or robocars. The app could give them advice to speed up or slow down to encounter the other car at a wide spot. Of course, if there are cars not using the app, they would just fall back to the old fashioned human approach. One could imagine a sign at the entry to a narrow road saying, “We recommend running the XYZ app for a smoother trip down this road.”
Not all the problems that people put forward were as easily resolved as this one, so I am not calling for people to “shut up and let the experts get to work.” There are many problems yet to be solved. Most of them can be solved by punting, because you don’t need to drive everywhere. Though Google has shown that having a steering wheel that can be grabbed while moving is a bad idea, I do expect most cars to have some form of control that can be activated when a car is stopped. If a road needs the human touch, it will be available.
Everybody has heard about Google’s restructuring. In the restructuring, Google [x], which includes the self-driving car division, will be a subsidiary of the new Alphabet holding company, and no longer part of Google.
Having been a consultant on that team, I have some perspective to offer on how the restructuring might affect the companies that become Alphabet subsidiaries and leave the Google umbrella.
The biggest positive is that Google has become a large corporation, and as a large corporation, it suffers from many of the problems that large companies have. Google is perhaps one of the most unusual large companies, so it suffers most of them less, but it is not immune. As small subsidiaries of Alphabet, the various companies will be able to escape this and act a bit more like startups. They won’t get to be entirely like startups — they will have a rich sugar daddy and not have to raise money in the venture funding world, and it’s yet to be seen if they will get any resources from their cousins at Google. Even so, this change can’t be overstated. There are just ways of thinking at big companies that seem entirely rational when looked at up close, but which doom so many projects inside big companies.
Here let me list some of the factors that will be positives:
While Alphabet has said nothing about the structure of the Google [X] companies, it seems likely that they will be able to give options and equity to their employees; options that might have a big upside. Google stock options have lost the big upside. Due to the structure, however, the equity packages will probably be smaller, with nobody getting the large chunks founders get — and nobody taking the risks founders do.
It will be easier, of course, for Alphabet to sell off these subsidiaries, or even take them public or do other unusual things normally not done with corporate divisions. (It’s not impossible with corporate divisions, but it’s rare and it rarely is a bonanza for the staff.)
The subsidiaries will be freed from the matrix management of large companies. They will get their own legal departments, be able to set their own benefit structures and culture to some extent. Don’t underestimate the value of not having to work within a corporate legal or HR department when you’re trying to be a startup.
The companies can take risks that Google can’t take. For example, consider Uber, which simply violated local laws in some areas to kickstart ride service. It’s much harder for a division of a large company to even try a stunt like that. For Uber, it worked — but it doesn’t always work.
The companies can also do things that would otherwise tarnish the Google brand. Huge as it is, the public has a natural distrust of Google, particularly on issues like privacy. While I think all robocar companies should work hard to protect privacy, being inside Google invites a whole new level of scrutiny and comes with established principles to live up to. In the case of making robocars, they might one day injure somebody, and that is a scary thing for the big brands. If you live in fear of that all the time, you won’t win the race, either.
The CEOs of the new companies should have a lot more autonomy than they had before.
They still will have access to vast financial resources. If the new car company needs ten billion dollars to build a fleet of 400,000 taxis, or even needs to buy an existing car company, it’s not out of the question.
Being inside Google conveys a certain arrogance to people because it’s one of the world’s leading companies in many different ways. But sometimes it’s good not to be so cocky.
Out of fear, there are companies that won’t do business with Google. I once asked the folks at WAZE if we might get their data on accidents. I was told, “The one company we would be afraid to sell our data to is Google.” Of course, Google got WAZE’s data, but at a much higher price!
People will finally stop wondering if they are building a car just to show advertising to you while you ride.
Of course there are some negatives:
Google brings with it vast, vast resources, not just in money. Google is also the world’s #1 mapping company, and in fact many of the early members of the Chauffeur team were people who worked on Maps and Street View. Google’s world-leading computing resources also are useful for the big data projects and simulation a robocar team has to do.
There is also a giant talent pool at Google, though of course in all big companies, poaching top employees from within the company comes with risks of internal strife. The ability to even borrow top-notch people and resources is immensely valuable.
Google has fantastic benefits that are hard to duplicate in a small company. One suspects Alphabet’s subsidiaries will probably mirror a lot of Google’s policies, but there is a limit to what they can do. A Google badge does not just get you dozens of restaurants and a large commuter bus system; it gets you things like a great series of internal talks from technical and world leaders and many other events. Google spends a lot on keeping its people happy. A lot.
Google has a fun company culture, with lots of cool people. People make a lot of friendships with very smart friends there, even outside their groups.
Inside Google there is always the opportunity to switch to different projects, many of which are grand and sure to affect a lot of people, without getting a new job.
The projects at Google [x] have the personal interest of Larry Page and Sergey Brin. That’s been very useful to them within Google, but it also threatens the necessary independence of the CEOs of the new subsidiaries, who will still report to Alphabet. It remains to be seen if the founders can be sufficiently hands off.
Google is perhaps the world’s top brand. It is able to get things done. When you call companies and say, “I’m calling from the Google car team,” they return your calls right away and jump at the chance to talk with you. Doors are opened that are closed to most startups. (Admittedly, the car project is now so famous that it might keep opening those doors even outside Google.)
Google’s power has allowed it to also do things like get laws made and changed around robocars; in fact this kickstarted the legal changes around the world. A small company will have a harder time.
Google’s power gives it a strange upper hand in negotiations with other players like big car companies. Big car companies are very used to being in charge of any talks they have with partners and suppliers.
I have no inside information on this deal — this is all based on lots of observation of public information about Google and non-confidential impressions from having been there. Some of this could be wrong. Alphabet might have Google re-sell some of its perks like the bus system to the other companies. It will certainly lend a hand where it makes a lot of sense. There is a fine line, though — the more “help” you give, the more “perfectly reasonable” conditions the help comes with and soon you’re like a division again.
There have been no specific announcements about Chauffeur either. Will Google [x] be a subsidiary with Astro Teller as CEO, including the car? Will the car have its own company? Will Chris Urmson be CEO if so? Or will [x] continue as the research lab of Alphabet, while other “graduated” portions of it go off into their own companies? Specific mention was made of “Wing” which is doing drones, but not of other [x] projects. More news will surely come.
Overall, I think this is a strong decision. If Google were to fail in the race to robocars, I always felt that failure would come from one of two fronts — either the mistakes that big companies make because they are big, or the special hubris of Google and its #1 position. Now these two dangers are dimmed.
From small beginnings, over 800 people are here at the Ann Arbor AUVSI/TRB Automated Vehicles symposium. Let’s summarize some of the news.
Lots of PR about the new test track opening at University of Michigan. I have not been out to see it, but it certainly is a good idea to share one of these rather than have everybody build their own, as long as you don’t want to test in secret.
Mark Rosekind, the NHTSA administrator, gave a pretty good talk for an official, though he continued the DoT’s bizarre promotion of V2V/DSRC. He said that they were even open to sharing the DSRC spectrum with other users (the other users have been champing at the bit to get more unlicenced spectrum opened up, and this band, which remains unused, is a prime target, and the DoT realizes it probably can’t protect it). Questions, however, clarified that he wants to demand evidence that the spectrum can be shared without interfering with the ability of cars to get a clear signal for safety purposes. Leaving aside the fact that the safety applications are not significant, this may presage a different approach: they may plan to demand this evidence, and when they don’t get it — because of course there will be interference — they will then use that as grounds to fight to retain the spectrum.
I say there will be interference because the genius of the unlicenced bands (like the 2.4 GHz band where your 802.11b and Bluetooth work) was the idea that if you faced interference, it was your problem to fix, not the transmitter’s, as long as the transmitter stayed low power. A regime where you don’t interfere would be a very different band, one that could only be used a long distance from any road — i.e. nowhere that anybody lives.
The most disappointing session for everybody was the vendors’ session, particularly the report from GM. In the past, GM has shown real work. Instead we got a recap of ancient material. The other reports were better, but only a little. Perhaps it is a sign that the field is getting big, and people are no longer treating it like a research discipline where you share with your colleagues.
Chris Gerdes’ report on a Stanford ethics conference was good in that it went well past the ridiculous trolley problem question (what if the machine has to choose between harming two different humans) which has become the bane of anybody who talks about robocars. You can see my answer if you haven’t by now.
Their focus was on more real problems, like when you illegally cross the double yellow line to get around a stalled car, or what you do if a child runs into the street chasing a ball. I am not sure I liked Gerdes’ proposal — that the systems compute a moral calculus, putting weights on various outcomes and following a formula. I don’t think that’s a good thing to ask the programmers to do.
If we really do have a lot of this to worry about, I think this is a place where policymakers could actually do something useful. They could set up a board of some sort. A vendor/programmer who has an ethical problem to program would put it to the board, and get a ruling, and program in that ruling, safe in the knowledge that they would not be blamed, legally, for following it.
The programmers would know how to properly frame the questions, but they could also refine them. They would frame them differently than lay people would imagine, because they would know things. For example:
My vehicle encounters a child (99% confidence) who darts out from behind a parked van, and it is not possible to stop in time before hitting the child. I have an X% confidence (say 95%) that the oncoming lane is clear and a Y% confidence (90%) that the sidewalk is clear, though driving there would mean climbing a curb, which may injure my passenger. While on the sidewalk, I am operating outside my programming, so my risk of danger increases 100-fold. What should I do?
Let the board figure it out, and let them understand the percentages, and even come back with a formula on what to do based on X, Y and other numbers. Then the programmer can implement it and refine it.
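Purely as an illustration of what such a ruling might look like once implemented — every weight below is invented, and choosing those weights is exactly the job I am suggesting belongs to the board, not the programmer:

```python
def board_ruling(p_child, p_oncoming_clear, p_sidewalk_clear):
    """Score each maneuver by expected harm, using hypothetical
    weights a board might hand down, and pick the lowest."""
    HARM_CHILD = 1000.0        # invented harm weights, not real values
    HARM_HEAD_ON = 800.0
    HARM_PEDESTRIAN = 900.0
    HARM_CURB = 5.0            # possible passenger injury from the curb
    OFF_ROAD_RISK = 0.1        # nominal driving risk, up 100-fold off-road

    expected_harm = {
        "brake_in_lane": p_child * HARM_CHILD,
        "swerve_oncoming": (1 - p_oncoming_clear) * HARM_HEAD_ON,
        "swerve_sidewalk": (1 - p_sidewalk_clear) * HARM_PEDESTRIAN
                           + HARM_CURB + OFF_ROAD_RISK,
    }
    return min(expected_harm, key=expected_harm.get)

# With the confidences from the example (99% child, 95% oncoming lane
# clear, 90% sidewalk clear), this formula picks the oncoming lane.
choice = board_ruling(0.99, 0.95, 0.90)
```

The point is that once the board fixes the weights, the programmer’s job becomes straightforward and defensible.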
For the first time, there was a panel about investment in the technology, with one car company, two VCs and a car oriented family fund (Porsche.) Lots more interest in the space, but still a reluctance to get involved in hardware, because it costs a lot, is uncertain, and takes a long time to generate a return.
I largely missed the breakout sessions, many of which were just filled with more talks. I have suggested to conference organizers a rule that the breakout sessions be no more than 40% prepared talks, and the rest interactive discussion.
Wednesday starts with Chris Urmson of Google
Chris’ talk was perhaps the most anticipated one. (Disclaimer — I used to work for Chris on the Google team.) It has similarities to a number of his other recent talks at TED and ITS America, with lots of good video examples of the car’s perception system in operation. Chris also addressed this week’s hot topic in the press, namely the large number of times Google’s car fleet is being hit by other drivers in accidents that are clearly the fault of the other driver.
While some (including me) have speculated this might be because the car is unusual and distracting, Google’s analysis of the accidents strongly suggests that the frequency of small fender-bender accidents has been seriously underestimated. There are 6 million reported accidents in the US every year, and common suggestions from insurers and researchers put the real number at perhaps another 6 million unreported ones. It’s now clear, based on Google’s experience, that the number of small accidents that go unreported is much higher.
Google thinks that is good news in several ways. First, it tells us just how distracted human drivers are, and how bad they are, and it shows that their car is doing even better than was first thought. The task of outperforming humans on safety may be easier than expected.
Adriano Alessandrini has always been an evocative and controversial character at these events. His report on Citymobil2 (a self-driving shuttle bus that has run in several cities with real passengers) was deliberately framed as a contrast to Google’s approach. Google is building a car meant to drive existing roads, a very complex task. Alessandrini believes the right approach is to make the vehicle much simpler, and only run it on certified safe infrastructure (not mixed with cars) and at very low speeds. As much as I disagree with almost everything he says, he does have a point when it comes to the value of simplicity. His vehicles are serving real passengers, something few others can claim.
We got to see a number of study results. Frankly, I have always been skeptical of the studies that report what the public thinks of future self-driving cars and how much they want them. In reality, only a tiny fraction of the 800 people at the conference, supposed experts in the field, probably have a really solid concept of what these future vehicles will look like. None of us truly know the final form. So I am not sure how you can ask the general public what they think of them.
Of greater interest are reports on what people think of today’s advanced features. For example, blindspot warning is much more popular than I realized, and is changing the value of cars and what cars people will buy.
For Tuesday afternoon I attended a very interesting security session. I will write more about this later, particularly about a great paper on spoofing robocar sensors (I will await first publication of the paper by its author) but in general I feel there is a lot of work to be done here.
In another post I will sum up a new expression of my thoughts here, which I will describe as “Connected and Automated: Pick only one.” While most of the field seems to be raving about the values of connectivity, and that debate has some merit, I feel that if the value of connectivity (other than to the car’s HQ) is not particularly high, it does not justify the security risk that comes from it. As such, if you have a vehicle that can drive itself, that system should not be “on the internet” as it were, connecting to other cars or to various infrastructure services. It should only talk to its maker (probably over a verified and encrypted tunnel on top of the cellular data network) and it should frankly be a little scared even of talking to its maker.
I proposed this to the NHTSA administrator, and as a huge backer of V2V, he could not give me an answer — he mostly wanted to talk about the perception of security rather than the security itself — but I think it’s an important question to be discussed.
Since many people don’t accept this, there are efforts to increase security. First of all, people are working to put in the security that always should have been in cars (they have almost none at present). Secondly, there are efforts at more serious security, with the lessons of the internet’s failures fresh in our minds. Efforts at provably correct algorithms are improving, and while nobody thinks you could build a provably correct self-driving system, there is some hope that the systems which parse inputs from outside could be made provably secure, and they could be compartmentalized from other systems so that compromise of one system would have a hard time reaching the driving system, where real danger could be done.
There were calls for standards, which I oppose — we are way too early in this game to know how to write the standards. Standards at best encode the conventional wisdom of 3 years ago, and make it hard to go beyond it. Not what we need now.
Nonetheless, there is research going on to make this more secure, if it is to be done.
I’m in the Detroit area for the annual TRB/AUVSI Automated Vehicle Symposium, which starts tomorrow. Today, those in Ann Arbor attended the opening of the new test track at the University of Michigan. Instead, I was at a small event with a lot of good folks in downtown Detroit, sponsored by SAFE which is looking to wean the USA off oil.
Much was discussed, but a particularly interesting idea was just how close we are getting to something I had put further in the future — robocars that are cheaper than ordinary cars.
Most public discussion of robocars has depicted them as costing much more than regular cars. That’s because the cars built to date have been standard cars modified by placing expensive computers and sensors on them. Many cars use the $75,000 Velodyne Lidar and the similarly priced Applanix IMU/GPS, and most forecasts and polls have imagined the first self-driving cars as essentially a Mercedes with $10,000 added to the price tag to make it self driving. After all, that’s how things like Adaptive Cruise Control and the like are sold.
Google is showing us an interesting vision with their 3rd generation buggy-style car. That car has no steering wheel, brakes or gas pedal, and it is electric and small. It’s a car aimed at “Mobility on Demand.”
When people have asked me “how much extra will these cars cost,” my usual answer has been that while the cars might cost more, they will be available for use by the mile, where they can cost less per mile than owning a car does today — ie. that overall it will be cheaper. That’s in part because of the savings from sharing, and having vehicles go more miles in their lifetime. More miles in the life of a car at the same cost means a lower cost per mile, even if the car costs a little more.
The sensors cost money, but that cost is already in serious decline. We’re just a few years away from $250 Lidars and even cheaper radar. Cameras are already cheap, and there are super cheap IMUs and GPSs already getting near the quality we need. Computers of course get cheaper every year.
This means we are not too far from the day when the cost of the sensors is less than the money saved by what you take out of the car. After all, having a steering wheel, gas and brakes costs money. Side mirrors cost money (ever had to replace one?). That fancy dashboard with all its displays and controls costs a lot of money, but almost everything it does in a robocar can be done by your tablet.
That said, you need a few extra things in your robocar. You need two steering motors and two braking systems. You need some more short range sensors and a cell phone radio. But there’s even more you can save, especially with time.
Because mobility on demand means you can make cars that are never used for anything but short urban trips (the majority of trips, as it turns out) you can save a lot more money on those cars. These cars need not be large or fast. They don’t need acceleration. They won’t ever go on the highway so they don’t need to be safe at 60mph. Electric drive, as we discussed earlier, is great for these cars, and electric cars have far fewer parts than gasoline ones. Today, their batteries are too expensive, but everything else in the car is cheaper, so if you solve the battery cost using the methods I outlined Saturday we’re saving serious money. And small one or two person cars are inherently cheaper to boot.
Of course, you need to make highway cars, and long-range 4WD SUVs to take people skiing. But these only need be a fraction of the cars, and people who use a mix of cars will see a big saving.
For a long time, we’ve talked about some day also removing many of the expensive safety systems from cars. When the roads become filled with robocars, you can start talking about having so few accidents that you don’t need all the safety systems, or the 1/3 of vehicle weight that is attributable to passive safety. That day is still far away, though cars like the Edison2 Very Light Car have done amazing things even while meeting today’s crash tests. Companies like Zoox and other startups have for a while pushed visions of completely redesigned cars, some of them at lower cost. But this seems like it might become true sooner rather than later.
Evacuation in a hurricane
One participant asked how, if we only had 1/9th as many cars (as some people forecast, I suspect it’s closer to 1/4) we would evacuate sections of Florida or similar places when a hurricane is coming. I think the answer is a very positive one — simply enforce car pooling / ride sharing in the evacuation. While there is not a lot I think policymakers should do at this time, some simple mandates could help a lot in this arena. While people would not be able to haul as much personal property, it is very likely there would be more than enough seats available in robocars to evacuate a large population quickly if you fill all the seats in cars going out. Further, those cars can go back in to get more people if need be.
Filling those seats would actually get everybody out faster, because there would be far less traffic congestion and the roads would carry far more people per hour. In fact, that’s such a good idea it could even be implemented today. When there’s an evacuation, require everyone to use an app to register when they are almost ready to leave. If you have spare seats, you could not leave (within reason) until you picked up neighbours and filled the seats. With super-carpooling, everybody would get out very fast on much less congested roads. Those crossing the checkpoint on the way out with empty seats would be photographed and ticketed unless the app allowed them to leave like that, or the app records that it tried to reach the server and failed, or there were other mitigating circumstances. (This is all hours before the storm, of course, before there is panic, when people will do whatever they can.) Some storms might be so bad the cars are at risk. In that case, if the road capacity is enough, people could move out all the cars too, to protect them. But in most cases, it’s the people that are the priority.
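The checkpoint rule sketched above amounts to a few lines of logic. A toy version (the names and the exact rule are my own illustration):

```python
def may_depart(seats, passengers, neighbors_waiting, server_reached=True):
    """A car may pass the checkpoint with empty seats only if no
    registered neighbours are still waiting for a ride, or the app
    could not reach the server (a mitigating circumstance)."""
    if not server_reached:
        return True
    empty_seats = seats - passengers
    return empty_seats == 0 or neighbors_waiting == 0

may_depart(4, 2, 3)   # half-empty car, neighbours still waiting: ticketed
may_depart(4, 4, 3)   # full car: free to go
```

The real system would need identity, fraud and privacy thought through, but the core mandate is that simple.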
We know electric cars are getting better and likely to get popular even when driven by humans. Tesla, at its core, is a battery technology company as much as it’s a car company, and it is sometimes joked that the $85,000 Tesla with a $40,000 battery is like buying a battery with a car wrapped around it. (It’s also said that it’s a computer with a car wrapped around it, but that’s a better description of a robocar.)
Tesla did a lot of work on building cooling systems for standard cylindrical lithium-ion cells and was able to make a high-performance vehicle. The Model S also by default charges to only 80% of capacity, because battery life is hurt by charging all the way to full. In fact, charging to 3.92 volts (about 60% of capacity) is the sweet spot. Some of the other things that reduce battery life include:
Discharging too close to empty
Getting too warm while discharging
Getting too warm while charging, and in particular causing thermal expansion which creates physical damage
Even ordinary warmth, where the vehicle is stored for long periods, particularly at high charge, is dangerous. The closer to freezing the better, and even above 25 degrees centigrade causes some loss.
The important, but little-reported, statistic for a battery is the total watt-hours you will be able to get out of it during its usable lifetime. This tells you the lifetime of the battery in miles, and the cost tells you the cost per mile. How important is this? If the Tesla $40,000 battery lasts you 150,000 miles and sells for $10,000 when done, the straight-line cost per mile is 20 cents/mile — more than the cost of gasoline in most cars, and much more than the 3 cent/mile or less cost of electricity.
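Spelling out that arithmetic (figures are from the example above; the function is just a convenience):

```python
def straight_line_cost_per_mile(pack_price, salvage_value, lifetime_miles):
    """Straight-line battery cost per mile: purchase price minus
    resale value, spread over the miles the pack delivers."""
    return (pack_price - salvage_value) / lifetime_miles

# $40,000 pack, $10,000 resale when done, 150,000 miles -> $0.20/mile
cost = straight_line_cost_per_mile(40_000, 10_000, 150_000)
```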
Humans will drive as humans want to drive, and it’s hard to change that. They will accelerate for both fun and to get ahead of other cars. They will take mixes of short trips and long trips. They don’t know how long their trips are and demand a flexible vehicle always ready for anything.
Electric robotaxis change that game. They will drive predictably, rarely ever demanding quick acceleration. A driver likes zippy fun; a passenger wants a gentle ride. They can go even further, and set their driving pattern based on the temperature of their batteries. Are we making the batteries too warm? Then “cool off,” literally. This applies both to fast starts and also to slowing down. Regenerative braking conserves energy and increases range, but doing it too hard heats the batteries. Start slowing down sooner, especially if you have data on what traffic lights and traffic are doing; it can make a big difference.
Robotaxis can always use the sweet spot of the battery charge duty cycle.
You will rarely be sent a robotaxi that, in order to get you, needs to dig deep into its maximum range.
Often demand is predictable, so if need be, vehicles can be charged above 60% only when such demand is expected or is arising.
While robotaxis will prefer to charge at night when power is cheapest, they can charge any time to get back up to the optimal level.
As I’ve noted before, battery swap doesn’t work well for humans, but robots don’t mind making an appointment or driving out of their way for a swap. This makes it easy to use batteries only in the sweet spot, and to charge them only at night on cheap power.
If battery swap is not an option, there are many options to supplement range during peak demand. Vehicles can go to depots to pick up trunk batteries, battery trailers, or even slot-in units with small motorcycle engines and liquid fuel tanks. If this is cheaper than the alternatives, it’s an option.
When it gets hot, robotaxis can seek out the shade, or even places with cooling, to keep the batteries from being too warm.
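The tactics above could be combined into a toy charging policy like this one. All the thresholds and hours are invented; a real fleet would tune them against battery data:

```python
def charging_plan(soc, hour, peak_expected, battery_temp_c):
    """Return (target state of charge, charge right now?).
    Stay near the ~60% sweet spot, top up past it only ahead of a
    demand peak, prefer cheap night power, and never charge a warm
    battery.  All thresholds here are illustrative."""
    SWEET_SPOT = 0.60
    target = 0.90 if peak_expected else SWEET_SPOT
    if battery_temp_c > 35:               # too warm: cool off first
        return target, False
    cheap_power = hour < 6 or hour >= 23  # off-peak electricity
    urgent = soc < 0.25                   # don't run the pack near empty
    return target, soc < target and (cheap_power or urgent or peak_expected)

charging_plan(0.50, hour=2, peak_expected=False, battery_temp_c=20)
# -> (0.6, True): below the sweet spot, on cheap night power, so charge
```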
Robotaxis don’t mind the loss of range all that much
As a battery ages, its capacity drops. Humans hate that — having bought a car with a 100 mile range they won’t accept it can now only do 60. For a human, that means time to replace the battery. For a robotaxi, that just means you have a shorter range, and you don’t get sent on long range trips. Or you may decide that while before, you only charged to 60% to get maximum battery life, now you charge more, knowing it will eat the remaining life, but getting the most out of the battery.
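Dispatch along those lines is easy to sketch: send each trip to the weakest battery that can still do it comfortably, keeping the fresh packs free for the long hauls. (The names and the 30% margin are my invention.)

```python
def assign_vehicle(trip_miles, fleet, margin=1.3):
    """fleet maps vehicle id -> current usable range in miles.
    Among vehicles whose range covers the trip with a safety
    margin, pick the one with the least range, saving fresher
    packs for longer trips.  Returns None if none qualifies."""
    needed = trip_miles * margin
    able = {vid: rng for vid, rng in fleet.items() if rng >= needed}
    return min(able, key=able.get) if able else None

fleet = {"old_pack": 60, "mid_pack": 90, "new_pack": 140}
assign_vehicle(10, fleet)   # short hop goes to the degraded car
assign_vehicle(80, fleet)   # long trip needs the freshest pack
```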
Of course, as the range drops, now you run into another problem. You’re carrying around the extra weight of battery for half the range, and it’s costing you energy to do that, especially in an ultralight car where the battery is the biggest component of the weight. (This also enters into the math of whether it makes sense to charge only to 60%.) Eventually the time comes that the battery is not practical. This is the time to sell it. Tesla and others are working to produce a home and grid storage market for used car batteries. In those applications, the weight doesn’t matter, just the cost for the remaining lifetime watt-hours. You care about the capacity, but you pay a market price for it.
Eventually, even this is not practical and you scrap to recycle the materials.
Typical predictions for lithium-ion run from 500 to 1,000 cycles. Tesla’s techniques seem to be beating that. With robotaxis, who knows just how many lifetime kWh we’ll be able to get out of these batteries, or perhaps even other chemistries. It turns out that human drivers like a chemistry that holds its capacity as long as possible and then falls off a cliff. Slow decline is harder to sell — but slow-decline chemistries, like lithium iron phosphate and others, could make more sense for the robots that don’t care.
It’s often suggested that electric cars could be used as grid storage. The problem is, with car batteries today, it costs around 15 cents to put a kWh into a battery and get it out. That means to be grid storage, you need the spot price on the grid to be the price you bought at, plus 15 cents, plus a margin to make it worth the trouble. Night power can get as low as 6 cents, so this does happen, but not as much as one might hope. The problem is that the grid’s peak demand is around 4 to 7pm, which is also a peak time for driving. That’s the last time most car owners will want to drain off their batteries to make a bit of money on the power. You will only do that if you know you won’t be using the car. For a robotaxi fleet, that might be the case. Of course, you will sell power to the grid only at a rate that does not harm your battery or warm it up too much.
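That arbitrage test is just an inequality. A sketch (the 15-cent round-trip figure is from the text; the 2-cent margin is my placeholder):

```python
def worth_selling(spot_price, purchase_price,
                  round_trip_cost=0.15, margin=0.02):
    """Selling a kWh back to the grid only pays if the spot price
    covers what you paid for the power, plus the round-trip battery
    cost, plus a margin for the trouble (all prices in $/kWh)."""
    return spot_price >= purchase_price + round_trip_cost + margin

worth_selling(0.20, 0.06)   # ordinary 20-cent peak: not worth it
worth_selling(0.30, 0.06)   # a super-peak spike: sell
```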
When the grid gets to a super peak, the price can really spike to attractive numbers. That’s because building extra power plant capacity just for those rare days is expensive, and so almost any price is better. Here we could talk about cars as storage, when we know their batteries are not going to be used. That’s even more true of batteries sitting in a battery swap facility.
The situation described, one car cutting off another, was a very unlikely one for several reasons:
All these cars are operated by trained safety drivers who are expected to be vigilant and take control at any sign of trouble.
In particular, special moves like a lane change would get extra vigilance. If something unusual happened (such as 2 cars going for the same spot) the safety drivers would be watching in advance, tracking what the car was doing, and pull back if the car’s own displays were not telling them it was going to do the right thing.
The safety drivers are not perfect of course, but an autonomous lane change is a rare event and one that most people are still just testing, so they would be very unlikely to miss that the car was going to cut somebody else off.
Of course, situations will arise when two cars try to change into the same spot at the same time, and robocars will probably be fairly timid in these situations. The most likely outcome if two robocars tried to take the same spot would be that both would back off and return to their original lanes, and it will probably stay that way until being so timid is no longer a workable strategy.
Robocars won’t be the lane-changing demons that some people (myself included, sometimes) are. Many human drivers constantly try to find the fastest lane, and we weave, often finding that the lane we move into seems to become the slowest. Part of that is our psychology.
Robocars won’t do this as much because their passengers will be occupied doing other things, and in most cases will not be in a super hurry. Those passengers will prefer a stable ride where they can get work done to a weaving ride with extra starts and stops. If we’re in a big hurry, we might ask the car to work extra hard to make the fastest trip, but this will be the exception.
When we do want that, the robocar will actually have a very good model of just how fast each lane is moving. It won’t be fooled the way we are by lanes that merely seem faster when in fact neither lane is winning by much. If the cars read licence plates to identify other vehicles, they will get excellent appraisals of what’s going on. If one lane is truly faster, they will find it. On the other hand, they will be worse at the standard game of chicken needed to change lanes in heavy traffic, where you depend on the car you are moving in front of to slow down. They will know the physics, though, and if a lane change is needed, they will warn the passengers of high acceleration and slot precisely into a smaller gap than you might be able to manage.
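The lane-speed model described above could work roughly like this: re-identify each car by its plate across sightings, estimate its speed from its first and last observed position, and average per lane. This is a hypothetical sketch; the data format and function names are illustrative, not from any real system:

```python
# Hypothetical sketch: estimate average speed per lane from repeated
# licence-plate sightings. Data and names are illustrative only.
from collections import defaultdict

def lane_speeds(sightings):
    """sightings: list of (plate, lane, position_m, time_s) tuples.
    Returns {lane: average speed in m/s}, estimating each car's speed
    from its first and last sighting."""
    first, last = {}, {}
    for plate, lane, pos, t in sightings:
        if plate not in first:
            first[plate] = (lane, pos, t)
        last[plate] = (lane, pos, t)
    per_lane = defaultdict(list)
    for plate, (lane, p0, t0) in first.items():
        _, p1, t1 = last[plate]
        if t1 > t0:  # need at least two sightings to estimate speed
            per_lane[lane].append((p1 - p0) / (t1 - t0))
    return {lane: sum(v) / len(v) for lane, v in per_lane.items()}

# Two cars observed twice each over 2 seconds:
obs = [("ABC1", 0, 0, 0), ("ABC1", 0, 30, 2),
       ("XYZ9", 1, 0, 0), ("XYZ9", 1, 20, 2)]
print(lane_speeds(obs))  # {0: 15.0, 1: 10.0} -> lane 0 is truly faster
```

Averaging over many tracked cars is what keeps this estimate from being fooled by one lane briefly surging, the way a human watching a couple of nearby cars is.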
In other news, Google has sent two cars to Austin, Texas to expand their testing ground. I don’t have a particular insight on why they selected Austin — I know that many towns and states regularly contact Google in the hope they might bring some cars to their area, though Texas has no modified laws yet.
I’ve written a few times about the work of Vislab in Parma, Italy. They focus on self-driving with machine vision, and did a famous cross-continent trek from Italy to Shanghai a few years ago, using a lead car to map the way and a following car self-driving, mostly with vision.
This lab was spun out of its university but has now been [acquired by Ambarella], a company that specializes in video compression chips. One can see why Ambarella would want a computer vision lab, but it seems this might spell the end of their self-driving efforts, unless the work is spun out.
A new paper is out in Nature Climate Change on the potential for robocars to reduce emissions, inspired by some of my research in this area. Sadly, it’s behind a paywall, but the author will give a talk at Nissan’s lab in Silicon Valley on July 15th at our local self-driving car meetup.