Robocars

Matternet launches drone delivery platform

I often speak about deliverbots — the potential for ground-based delivery robots. There is also excitement about drone (UAV/quadcopter) based delivery. We’ve seen many proposed projects, including Amazon Prime Air, and much debate. Many years ago I was also perhaps the first to propose that drones could deliver a defibrillator anywhere, and there are a few projects underway to do this.

Some of my students in the Singularity University Graduate Studies Program in 2011 really caught the bug, and their team project turned into Matternet — a company focused on drone delivery in parts of the world without reliable road infrastructure. Example applications include moving lightweight items like medicines and test samples between remote clinics, and eventually much more.

I’m pleased to say they just announced moving to a production phase called Matternet One. Feel free to check it out.

When it comes to ground robots and autonomous flying vehicles, there are a number of different trade-offs:

  • Drones will be much faster, and have an easier time getting roughly to a location. It’s a much easier problem to solve. No traffic, and travel mostly as the crow flies.
  • Deliverbots will be able to handle much heavier and larger cargo, consuming a lot less energy in most cases. Though drones able to move 40kg are already out there.
  • Regulations stand in the way of both vehicles, but current proposed FAA regulations would completely prohibit the drones, at least for now.
  • Landing a drone in a random place is very hard. Some drone plans avoid that by lowering the cargo on a tether and releasing the tether.
  • Driving to a doorway or even gate is not super easy either, though.
  • Heavy drones falling on people or property are an issue that scares people, but they are also scared of robots on roads and sidewalks.
  • Drones probably cost more but can do more deliveries per hour.
  • Drones don’t have good systems in place to avoid collisions with other drones. Deliverbots won’t go that fast and so can stop quickly for obstacles seen with short range sensors.
  • Deliverbots have to not hit cars or pedestrians. Really not hit them.
  • Deliverbots might be subject to piracy (people stealing them) and drones may have people shoot at them.
  • Drones may be noisy (this is yet to be seen) particularly if they have heavier cargo.
  • Drones can go where there are no roads or paths. For ground robots, you need legs like the BigDog.
  • Winds and rain will cause problems for drones. Deliverbots will be more robust against these, but may have trouble on snow and ice.

In the long run, I think we’ll see drones for urgent, light cargo and deliverbots for the rest, along with real trucks for the few large and heavy things we need.

Delphi's cross-country trip and a raft of Robocar News

I’ve been on the road, and there has been a ton of news in the last 4 weeks. In fact, below is just a small subset of the now constant stream of news items and articles that appear about robocars.

Delphi has made waves by undertaking a road trip from San Francisco to New York in their test car, which is equipped with an impressive array of sensors. The trip is now underway, and on their page you can see lots of videos of the vehicle along the trek.

The Delphi vehicle is one of the most sensor-laden vehicles out there, and that’s good. In spite of all those who make the rather odd claim that they want to build robocars with fewer sensors, Moore’s Law and other principles teach us that the right procedure is to throw everything you can at the problem today, because those sensors will be cheap when it comes time to actually ship. Particularly for those who say they won’t ship for a decade.

At the same time, the Delphi test is mostly of highway driving, with very minimal urban street driving according to Kristen Kinley at Delphi. They are attempting off-map driving, which is possible on highways due to their much simpler environment. Like all testing projects these days, there are safety drivers in the cars ready to intervene at the first sign of a problem.

Delphi is doing a small amount of DSRC vehicle to infrastructure testing as well, though this is only done in Mountain View where they used some specially installed roadside radio infrastructure equipment.

Delphi is doing the right thing here — getting lots of miles and different roads under their belt. This is Google’s giant advantage today. Based on Google’s announcements, they have more than a million miles of testing in the can, and that makes a big difference.

Hype and reality of Tesla’s autopilot announcement

Tesla has announced they will do an over-the-air upgrade of car software in a few months to add autopilot functionality to existing models that have sufficient sensors. This autopilot is the “supervised” class of self-driving that I warned may end up viewed as boring. The press have treated this as something immense, but as far as I can tell, this is similar to products built by Mercedes, BMW, Audi and several other companies, and even sold in the market (at least for traffic jams) for a couple of years now.

The other products have shied away from doing full highway speed in commercial products, though rumours exist of it being available in commercial cars in Europe. What is special about Tesla’s offering is that it will be the first car sold in the US to do this at highway speed, and they may offer supervised lane change as well. It’s also interesting that since they have been planning this for a while, it will come as a software upgrade to people who bought their technology package earlier.

UK project budget rises to £100 million

What started as a £10 million prize in the UK has grown to over £100 million in grants in the latest UK budget. While government research labs will not provide us with the final solutions, this money will probably create some very useful tools and results for the private players to exploit.

MobilEye releases their EyeQ4 chip

MobilEye from Jerusalem is probably the leader in automotive machine vision, and their new generation chip has been launched, but won’t show up in cars for a few years. It’s an ASIC packed with dedicated hardware and processor cores aimed at doing machine vision efficiently. My personal judgement is that this is not sufficient for robocar driving, but MobilEye wants to prove me wrong. (The EyeQ4 chip does have software to do sensor fusion with LIDAR and radar, so they don’t want to prove me entirely wrong.) Even if not good enough on their own, MobilEye chips offer a good alternate path for redundancy.

Chris Urmson gives a TED talk about the Google Car

Talks by Google’s team are rare — the project is unusual in trying to play down its publicity. I was not at TED, but reports from there suggest Chris did not reveal a great deal new, other than repeating his goal of having the cars be in practical service before his son turns 16. Of course, humans will be driving for a long time after robocars start becoming common on the roads, but it is true that we will eventually see teens who would have gotten a licence never get around to getting one. (Teens are already waiting longer to get their licences so this is not a hard prediction.)

The war between DSRC and more WiFi is heating up

Two years ago, the FCC warned that since automakers had not really figured out much good to do with the DSRC spectrum at 5.9GHz, it was time to repurpose it for unlicensed use, like more WiFi.

There is now a bill being proposed to force this.

Uber price in LA approaches robocar cheap

I was recently considering the price of UberX in Los Angeles. It’s gotten disturbingly low:

  • Flag drop: $0
  • Per minute: 18 cents
  • Per mile: 90 cents

This is not a very good deal for the driver. After Uber’s 20% cut, that’s 72 cents/mile. According to AAA, a typical car costs about 60 cents/mile to operate, not including parking. (Some cars are a bit cheaper, including the Prius favoured by UberX drivers.) In any event, the UberX driver is not making much money on their car.

The 18 cents/minute works out to $10.80 per hour, which drops to only $8.64/hour after Uber’s 20% cut — and that’s only for time spent actually driving. Not that much above minimum wage. And I’m not counting the time spent waiting and driving to and from rides, nor those unpaid miles, which makes the $0 flag drop all the more painful for the driver. There is a $1 “safe rides fee” that Uber pockets (they are being sued over that.) And there is a $4 minimum, which will hit you on rides of up to about 2.5 miles.

So Uber drivers aren’t getting paid that well — not big news — but a bigger thing is the comparison of this with private car ownership.

As noted, private car ownership is typically around 60 cents/mile. The Uber ride, then, is only 50% more per mile. You pay the driver a low rate to drive you, but in return, you get that back as free time in which you can work, or socialize on your phone, or relax and read or watch movies. For a large number of people who value their time much more than $10/hour, it’s a no-brainer win.

The average car trip for urbanites is 8.5 miles — though that of course is biased up by long road trips that would never be done in something like Uber. I will make a guess and drop urban trips to 6.

The Uber and private car costs do have some complications:

  • That Safe Rides Fee adds $1/trip, or about 16 cents/mile on a 6 mile trip
  • The minimum fee is a minor penalty from 2 to 2.5 miles, and a serious penalty on 1 mile trips
  • Uber has surge pricing some of the time that can double or even triple this price
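
For those who want to check the arithmetic, here is a rough sketch in Python of the comparison, splitting the fare the way I do above: a per-mile part (compare it to owning a car) and a per-minute part (the roughly $10.80/hour you pay the driver in exchange for free time). The 6 mile trip and the 20 mph average speed are my assumptions, and surge pricing is ignored.

    # Back-of-the-envelope UberX vs. private car comparison, using the LA rates above.
    # The trip length and average speed are illustrative assumptions; surge is ignored.
    PER_MILE, PER_MINUTE = 0.90, 0.18        # UberX LA pricing quoted above
    SAFE_RIDES_FEE, MINIMUM_FARE = 1.00, 4.00
    OWN_CAR_PER_MILE = 0.60                  # AAA's typical cost to operate your own car

    miles, avg_mph = 6.0, 20.0               # assumed typical urban trip
    minutes = miles / avg_mph * 60.0

    mileage_part = miles * PER_MILE          # $5.40 vs. $3.60 in your own car (the "50% more")
    time_part = minutes * PER_MINUTE         # $3.24: the chauffeur cost of 18 minutes of free time
    fare = max(mileage_part + time_part, MINIMUM_FARE) + SAFE_RIDES_FEE

    print(f"Mileage: ${mileage_part:.2f}  Time: ${time_part:.2f}  Total with fee: ${fare:.2f}")
    print(f"Effective ${fare / miles:.2f}/mile vs. ${OWN_CAR_PER_MILE:.2f}/mile for your own car")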

As UberX prices drop this much, we should start seeing people deliberately dropping cars for Uber, just as I have predicted for robocars. I forecast robotaxi service can be available for even less: 60 cents/mile with no cost for a driver and minimal flag drop or minimum fees. In other words, beating the cost of private car ownership and offering free time while riding. UberX is not as good as this, but for people of a certain income level who value their own time, it should already be beating the private car.

We should definitely see 2-car families dropping down to 1 car plus digital rides. The longer trips can be well handled by services like Zipcar or, even better, Car2Go or DriveNow, which are one-way.

The surge pricing is a barrier. One easy solution would be for a company like Uber to make an offer: “If you ride more than 4,000 miles/year with us, then no surge pricing for you.” Or whatever deal of that sort can make economic sense. Sort of frequent rider loyalty miles. (Surprised none of the companies have thought about loyalty programs yet.)

Another option that might make sense in car replacement is an electric scooter for trips under 2 miles, UberX like service for 2 to 30 miles, and car rental/carshare for trips over 30 miles.

If we don’t start seeing this happen, it might tell us that robocars may have a larger hurdle in getting people to give up a car for them than predicted. On the other hand, some people will actually much prefer the silence of a robocar to having to interact with a human driver — sometimes you are not in the mood for it. In addition, Americans at least are not quite used to the idea of having a driver all the time. Even billionaires I know don’t have a personal chauffeur, in spite of the obvious utility of it for people whose time is that valuable. On the other hand, having a robocar will not seem so ostentatious.

Issues in regulating robocars, and the case for a light hand

All over the world, people (and governments) are debating about regulations for robocars. First for testing, and then for operation. It mostly began when Google encouraged the state of Nevada to write regulations, but now it’s in full force. The topic is so hot that there is a danger that regulations might be drafted long before the shape of the first commercial deployments of the technology take place.

As such I have prepared a new special article on the issues around regulating robocars. The article concludes that in spite of a frequent claim that we want to regulate and standardize even before the technology has been out in the market for a while, this is in fact both a highly unusual approach, and possibly even a dangerous approach.

Read:

Regulating Robocar Safety: An examination of the issues around regulating robocar safety and the case for a very light touch

Keep Calm and Carry Passengers -- UK robocar projects level up

The government-backed robocar projects in the UK are going full steam, with this press release from the UK government to accompany the unveiling of the prototype Lutz pod which should ply the streets of Milton Keynes and Greenwich.

This comes along with the enactment of laws enabling testing of vehicles (with safety drivers) and discussion of changes to the UK vehicle code.

The new pod follows a similar path to other fully-autonomous prototypes, reminding me of the EN-V from GM, the MIT City Car and the Google buggy prototype. It’s electric, meant for “last mile” and will lose its steering wheel once testing is over.

I also note they talk eagerly about the Meridian shuttle being tested in Greenwich, even though that’s a French vehicle.

When it comes to changes to the vehicle code, I think it’s premature. Even without looking at the proposed changes, I would say that we don’t know enough to work out what changes are needed, even though we all might be full of ideas.

One proposal is to remove the ban on tailgating to allow convoys. A reasonable enough thing, except people are not going to build convoys for quite some time, I think. The Volvo/SARTRE experiment found a number of problems with the idea, and you don’t want to do your first deployments with something that could crash 10 cars if it goes wrong instead of one. You do that later, once you feel very confident in your tech.

Another proposal called for changing how cyclists are treated. The law in the UK (and some other places) demands cyclists be given the full berth of a car, and in practice nobody ever does that, and if they did do it, it would mean they just followed along at bicycle speed, impeding traffic. One of those classic cases, like speed limits in the USA, where the law only works if nobody follows it. (Though cyclists would say that they should just get the full lane like the law says.)

We will need to fix these areas of the vehicle codes, but we should fix them only after we see a problem, unless it’s clear that the vehicles can’t be deployed without the change. Give the developers the chance to fix the problem on their own first. If you fix the law before you know what the vehicles will be like, you may ensconce old thinking into the law and have a hard time getting it out.

It is interesting to see Governments adapt so quickly to a disruptive technology. It’s quite probable that our hype is a bit too good and will come back to bite us. I predicted this sort of jurisdictional competition as governments realize they have a chance to make their regions become players in the new automotive industry, but they are embracing some things faster than I expected.

Is Apple building a robocar? Maybe, maybe not

There is great buzz about some sensor-laden vehicles being driven around the USA which have been discovered to be owned by Apple Computer. The vehicles have cameras and LIDARs and GPS antennas, and many are wondering: is this an Apple self-driving car? See also speculation from Cult of Mac.

Here’s a video of the vehicle driving around the East Bay (50 miles from Cupertino) but they have also been seen in New York.

We don’t see the front of the vehicle, but it sure has plenty of sensors. On the front and back you see two Velodyne 32E LIDARs. These are 32-plane LIDARs that cost about $30K. You see two GPS antennas and what appear to be cameras in all directions. You don’t see the front in these pictures, which is where the most interesting sensors will be.

So is this a robocar, or is this a fancy mapping car? Rumours about Apple working on a car have been swirling for a while, but one thing to contradict that has been the absence of sightings of cars like this. You can’t have an active program without testing on the roads. There are ways to hide LIDARs (and Apple is super secretive so they might) and even cameras to a degree, but this vehicle hides little.

Most curious are the Velodynes. They are tilted down significantly. The 32E unit sees from about 10 degrees up to 30 degrees down. Tilting them this much means you don’t see out horizontally, which is not at all what you want if this is for a self-driving car. These LIDARs are densely scanning the road close around the car, and higher things in the opposite direction. The rear LIDAR will be seeing out horizontally, but it’s placed just where you wouldn’t place it to see what’s in front of you. A GPS antenna is blocking the direct forward view, so if the goal of the rear LIDAR is to see ahead, it makes no sense.
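
As a rough sanity check on that geometry, here is a small sketch in Python. The 20 degree tilt and the 1.8 m mounting height are my guesses from the photos, not measured figures.

    import math

    # Velodyne HDL-32E vertical field of view, roughly +10 to -30 degrees as noted above.
    FOV_TOP_DEG, FOV_BOTTOM_DEG = 10.0, -30.0

    # Hypothetical mounting: 1.8 m above the road, tilted 20 degrees nose-down (my guesses).
    sensor_height_m, tilt_down_deg = 1.8, 20.0

    top = FOV_TOP_DEG - tilt_down_deg        # highest beam after tilting
    bottom = FOV_BOTTOM_DEG - tilt_down_deg  # lowest beam after tilting
    print(f"Beams now span {top:+.0f} to {bottom:+.0f} degrees relative to horizontal")

    # With the top beam below horizontal, the farthest ground point it can reach:
    if top < 0:
        reach_m = sensor_height_m / math.tan(math.radians(-top))
        print(f"Top beam hits the road about {reach_m:.1f} m out; nothing at horizon level is seen")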

We don’t see the front, so there might be another LIDAR up there, along with radars (often hidden in the grille) and these would be pretty important for any research car.

For mapping, these strange angles and blind spots are not an issue. You are trying to build a 3D and visible light scan of the world. What you don’t see from one point you get from another. For street mapping, what’s directly in front and behind are generally road and not interesting, but what’s to the side is really interesting.

Also on the car is an accurate encoder on the wheel to give improved odometry. Both robocars and mapping cars are interested in precise position information.

Arguments this is a robocar:

  • The Velodynes are expensive, high end and more than you need for mapping, though if cost is no object, they are a decent choice.
  • Apple knows it’s being watched, and might try to make their robocar look like a mapping car
  • There are other sensors we can’t see

Arguments it’s a mapping car:

  • As noted, the Velodynes are tilted in a way that really suggests mapping. (Ford uses tilted ones but paired with horizontal ones.)
  • The cameras are aimed at the corners, not forward as you would want
  • They are driving in remote locations, which eventually you want to do, but initially you are more likely to get to the first stage close to home. Google has not done serious testing outside the Bay Area in spite of their large project.
  • The lack of streetview is a major advantage Google has over Apple, so it is not surprising they might make their own.

I can’t make a firm conclusion, but this leans toward it being a mapping car. Seeing the front (which I am sure will happen soon) will tell us more.

Another option is it could be a mapping car building advanced maps for a different, secret, self-driving car.

Uber and Google are not breaking up quite yet

After yesterday’s story about Uber and CMU, a lot of speculation has flown that Uber will now be at odds with Google, both in building robocars and in providing network taxi service, since another rumour said Google plans to launch an Uber-like “ride share” service.

Since then, the Uber blog post and this interview with Uber folks tell a slightly different story. Uber is funding a research center at CMU, and giving lots of grants to academics. Details are not fully available, but typically this means being at an early research stage. With these research labs, academics are keen to publish all they do, so little gets done in secret. In many cases the sponsor gets a licence to the technology but it’s often not exclusive. If Uber wanted to build their own car, chances are they would do it in a more private lab.

Rumours that David Drummond would resign from the Uber board also have not panned out. Google has invested hugely in Uber (already for good return at the present valuation) and Google Maps offers you an Uber if you ask it for directions somewhere — it’s actually one of the easier interfaces for ordering one.

Rumours around Google’s efforts suggest that Big G has been testing a “ride share” app with employees and plans to launch it. Google has denied that, and says it loves Uber and Lyft. Further news revealed the rumours were about an internal carpooling system, not involving the self-driving cars. I could imagine confusion because Uber and others call themselves “ride sharing,” which is a bit of a fabrication to not look like a taxi, while a carpooling app would be real ride sharing. (UberPool is real ride sharing.) Google, which has a terrible undersupply of parking, is very keen on getting employees to ride its bus system and to carpool.

That said, Google has talked about the same thing I talk about — the true goal of robocar technology being the creation of a mobility on demand taxi service, like Uber but at a much lower cost. Google has not said that they would provide that themselves, or who they would partner with if they did it. Most people have presumed it might be Uber but I don’t think that’s at all assured.

At the same time, Uber has assured its drivers they are not going away for the foreseeable future. I suspect that’s an equivocation, and just means that we can’t see very far in the future right now!

Will robocars use V2V at all?

I commonly see statements from connected car advocates that vehicle to vehicle (V2V) and vehicle to infrastructure communications are an important, even essential technology for robocar development. Readers of this blog will know I disagree strongly, and while I think I2V will be important (done primarily over the existing mobile data network) I suspect that V2V is only barely useful, with minimal value cases that have a hard time justifying its cost.

Of late, though, my forecast for V2V grows even more dismal, because I wonder if robocars will implement V2V with human-driven cars at all, even if it becomes common for ordinary cars to have the technology because of a legal mandate.

The problem is security. A robocar is a very dangerous machine. Compromised, it can cause a lot of damage, even death. As such, security will have a very strong focus in development. You don’t want anybody breaking into the computer systems of your car or anybody else’s. You really don’t want it.

One clear fact that people in security know — a very large fraction of computer security breaches caused by software faults have come from programs that receive input data from external sources, in particular when you will accept data from anybody. Internet tools are the biggest culprits, and there is a long history of buffer overflows, injection attacks and other trouble that has fallen on tools which will accept a message from just anyone. Servers (which openly accept messages from outside) are at the greatest risk, but even client tools like web browsers run into trouble because they go to vast numbers of different web sites, and it’s not hard to trick people into visiting a random web site.

We work very hard to remove these vulnerabilities, because when you’re writing a web tool, you have no choice. You must accept input from random strangers. Holes still get found, and we pay the price.

The simplest strategy to improve your chances is to go deaf. Don’t receive inputs from outside at all. You can’t do that in most products, but if you can close off a channel without impeding functionality it’s a good approach. Generally you will do the following to be more secure:

  1. Be a client, which means you make communications requests, you do not receive them.
  2. You only connect to places you trust. You avoid allowing yourself to be directed to connect to other things.
  3. You use digital signatures and encryption to assure that you really are talking to your trusted server.

This doesn’t protect you perfectly. Your home server can be compromised — it often will be running in an environment not as locked down as this. In fact, if it becomes your relay for messages from outside, as it must, it has a vector for attack. Still, the extra layer adds some security.
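
To make the three principles above concrete, here is a minimal sketch in Python of the “deaf” client pattern. The hostname, port and certificate path are hypothetical, and a real automotive stack would be far more involved; this just shows outbound-only connections to a single pinned, trusted server.

    import socket
    import ssl

    # Hypothetical relay operated by the car's maker; nothing here listens for inbound traffic.
    TRUSTED_HOST = "relay.example-carmaker.com"
    TRUSTED_PORT = 443
    PINNED_CA_FILE = "/etc/car/trusted_relay_ca.pem"   # the only CA we will believe

    def fetch_messages():
        # Principle 1: be a client -- we initiate the connection, we never accept one.
        ctx = ssl.create_default_context(cafile=PINNED_CA_FILE)
        ctx.check_hostname = True
        ctx.verify_mode = ssl.CERT_REQUIRED            # Principle 3: encryption, pinned trust root
        with socket.create_connection((TRUSTED_HOST, TRUSTED_PORT), timeout=10) as raw:
            # Principle 2: connect only to the one server we trust, never where others direct us.
            with ctx.wrap_socket(raw, server_hostname=TRUSTED_HOST) as tls:
                tls.sendall(b"GET /v1/messages HTTP/1.1\r\n"
                            b"Host: relay.example-carmaker.com\r\n"
                            b"Connection: close\r\n\r\n")
                reply = b""
                while True:
                    chunk = tls.recv(4096)
                    if not chunk:
                        break
                    reply += chunk
        return reply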

Uber to research robocars?

Rumours reported in TechCrunch suggest Uber is opening a robocar lab in Pittsburgh and hiring up to 50 CMU folks to staff it.

Update: On the Uber blog we now see it’s more funding of research labs at CMU, on many topics

That’s a major step, if true. People have often pointed out how well Uber is poised to make use of robocar technology to bring computer-summoned taxi service to the next level. If Uber did not exist, I would surely be building it to get that advantage. Many have assumed that since Google is a major investment partner in Uber, they would partner on this technology, but this suggests otherwise.

I write about Uber a lot here not just because of interest in what they do today, but because it teaches us a lot about how people will view robocars in the future. Uber’s interface is very similar to what you might see for a robocar service, and the experience is fairly similar, just much more expensive. UberX is $1.30/mile plus 26 cents/minute with a $2.20 flag drop. The Black service is $3.75/mile and 65 cents/minute with an $8 flag drop. I expect robocar taxi service to be cheaper than 50 cents/mile with minimal per-minute charges. The flag drop is not yet easy to calculate. What richer people do with Uber teaches us what the whole public will do with robocars.

Uber lets you say where you are going but doesn’t demand it. That’s one thing I suspect will be different with your robotaxi, because it’s really nice if they can send you a vehicle chosen for the trip you have in mind, i.e. a small, efficient car without much range for short, single-person trips. Robotaxi services will offer you the ability to not say your destination — but they will probably charge more for it, and that means most people will be willing to say their destination.

Uber does not hide their desire to get rid of all their drivers, which sounds like a strange strategy, but the truth is that cab driving is not something most people view as a career. It’s a quick source of money with no special skills, something people do until something better comes along, or in the gaps in their day to make extra cash. Unlike people losing jobs to robots on a factory line, nobody is particularly upset at the idea.

UMich team works on perception and localization using cameras

Some new results from the NGV Team at the University of Michigan describe different approaches for perception (detecting obstacles on the road) and localization (figuring out precisely where you are). Ford helped fund some of the research, so they issued press releases about it and got some media stories. Here’s a look at what they propose.

Many hope to be able to solve robotics (and thus car) problems with just cameras. While LIDAR is going to become cheap, it is not yet, and cameras are much cheaper. I outline many of the trade-offs between the systems in my article on cameras vs lasers. Everybody hopes for a research breakthrough or computer vision breakthrough to make vision systems reliable enough for safe operation.

The Michigan lab’s approach is a special machine vision one. They map the road in advance in 3D and visible light by using a mapping car equipped with lots of expensive LIDAR and other sensors. They build a 3D representation of the road similar to what you need for a video game engine, and from that, with the use of GPUs, they can indeed create a 2D image of what a camera should see from any given point.

The car goes out into the world and its actual camera delivers a 2D frame of what it sees. Their system then compares that with generated 2D images of what the camera should see until it finds the closest match. Effectively, it’s like you looking out a window and then going into a video game and wandering around looking for a place that looks like what you see out that window, and then you know where the window is.

Of course it is not “wandering,” and they develop efficient search algorithms to quickly find the location that looks most like the real world image. We’ve all seen video game images, and know they only approximate the real world, so nothing will be an exact match, but if the system is good enough, there will be a “most similar” match that also corresponds with what other sensors, like your GPS and your odometer/dead reckoning system, tell you about where you probably are.
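
Here is a toy sketch in Python of that search, not the Michigan team’s actual code. It assumes you already have a render_map_view() function (standing in for their GPU renderer of the prior 3D map) and uses plain normalized cross-correlation as the similarity score, where the real system would likely use something more robust.

    import numpy as np

    def similarity(a, b):
        """Normalized cross-correlation between two grayscale images of the same size."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    def localize(camera_image, render_map_view, prior_pose, radius_m=2.0, step_m=0.25):
        """Search poses near the GPS/odometry prior for the rendered view that best matches
        the live camera image. prior_pose is (x, y, heading) in map coordinates."""
        x0, y0, h0 = prior_pose
        best_pose, best_score = prior_pose, -np.inf
        for dx in np.arange(-radius_m, radius_m + step_m, step_m):
            for dy in np.arange(-radius_m, radius_m + step_m, step_m):
                for dh in np.radians([-2.0, -1.0, 0.0, 1.0, 2.0]):   # small heading corrections
                    pose = (x0 + dx, y0 + dy, h0 + dh)
                    score = similarity(camera_image, render_map_view(pose))
                    if score > best_score:
                        best_pose, best_score = pose, score
        return best_pose, best_score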

Localization with cameras has been done before, and this is a new approach taking advantage of new generations of GPUs, so it’s interesting. The big challenge is simulating the lighting, because the real world is full of different lighting, high dynamic range, and shadows. The human system has no problem understanding a stripe on the road as it moves through the shadow of a tree, but computer systems have a pretty tough time with that. Sun shadows can be mapped well with GPUs, but shadows from things like the moving limbs of trees are not possible to simulate, nor are the shadows of other vehicles and road users. At night, light and shadows come from car headlights and urban lights. The team is optimistic about how well they will handle these problems.

The much larger challenge is object perception. Once you have a simulation of what the camera should see, you can notice when there are things present that are not in the prediction — like another car or pedestrian, or a new road sign. (Right now their system mostly is looking at the ground.) Once you identify the new region, you can attempt to classify it using computer vision techniques, and also by watching it move against the expected background.
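
A sketch of that differencing step, again just illustrative: compare the live frame with the rendered prediction and flag the regions that were not expected, which can then go on to a classifier or tracker.

    import numpy as np

    def unexpected_regions(camera_image, predicted_image, threshold=0.25):
        """Boolean mask of pixels where the live camera frame differs strongly from the
        view predicted from the prior map; those regions are candidate obstacles."""
        cam = camera_image.astype(float) / 255.0
        pred = predicted_image.astype(float) / 255.0
        return np.abs(cam - pred) > threshold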

This is where it gets challenging, because the bar is very high. To be used for driving it must effectively always work. Even if you miss 1 pedestrian in a million you have a real problem, because there are billions of pedestrians encountered by a billion drivers every day. This is why people love LIDAR — if something (other than a mirror or sheet of glass) sufficiently large is sufficiently close to you, you’re going to get laser returns from it, and not from what’s behind it. It has the reliability number that is needed. The challenge of vision systems is to meet that reliability goal.

This work is interesting because it does a lot without relying on AI “computer vision” techniques. It is not trying to look at a picture and recognize a person. Humans are able to look at 2D pictures with bizarre lighting and still tell you not just what the things in the picture are, but often how far away they are and what they are doing. While we can be fooled in a 2D image, once you have a moving dynamic world, humans are generally reliable enough at spotting other things on the road. (Though of course, with 1.2 million dead each year, and probably 50 million or more accidents, the majority because somebody was “not looking,” we are far from perfect.)

Some day, computer vision will be as good at recognizing and understanding the world as people are — and in fact surpass us. There are fields (like identifying traffic signs from photos) where they already surpass us. For those not willing to wait until that day, new techniques in perception that don’t require full object understanding are always interesting.

I should also point out that while lowering cost is of course a worthwhile goal, it is a false goal at this time. Today, maximal safety is the overriding goal, and as such, nobody will actually release a vehicle to consumers without LIDAR just to save the estimated 2017 cost of LIDAR, which will be sub-$500. Only later, when cameras get so good they completely replace LIDAR safety capabilities for less money, would people release such a system to save cost. On the other hand, improving cameras to be used together with LIDAR is a real goal: superior safety, not lower cost.

Might the first, supervised robocars be... well... boring?

Let me confess a secret fear. I suspect that the first “autopilot” functions on cars are going to be a bit boring.

I’m talking about offerings like traffic jam assist from Mercedes, Super Cruise from Cadillac and others: the faster highway assist versions which combine ADAS functions like lane-keeping and adaptive cruise control to keep the car in its lane and a fixed distance from the car in front of you. It’s what Tesla has promoted and what scrappy startup “Cruise” plans to offer as a retrofit later this year. This is, in NHTSA’s flawed “levels” document, what could be called supervised level 2.

Some of them also offer lane change, if you approve the safety of the change.

All these products will drive your car, slow or fast, on highways, but they require your supervision. They may fail to find the lane in certain circumstances, because the markers are badly painted, or confusing, or just missing, or the light is wrong. When they do, they’ll kick out and insist you drive. They’ll really insist, and you are expected to be behind the wheel, watching and grabbing it quickly — ideally even noticing the failure before the system does.

Some will kick out quite rarely. Others will do it several times during a typical commute. But the makers will insist you be vigilant, not just to cover their butts legally, but because in many situations you really do need to be vigilant.

Testing shows that operators of these cars get pretty confident, especially if they are not kicking out very often. They do things they are told not to do. Pick up things to read. Do e-mails and texts. This is no surprise — people are texting even now when the car isn’t driving for them at all.

To reduce that, most companies are planning what they call “countermeasures” to make sure you are paying attention to the road. Some of them make you touch the wheel every 8 to 10 seconds. Some will have a camera watching your eyes that sounds an alarm if you look away from the road for too long. If you don’t keep alert, and ignore the alarms, the cars will either come to a stop in the middle of the freeway, or perhaps even just steer wildly and run off the road. Some vendors are talking about how to get the car to pull off safely to the side of the road.
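
For illustration, here is a toy sketch in Python of that kind of escalating countermeasure logic. The timing thresholds are made up for the example; they are not any vendor’s actual parameters.

    WHEEL_TOUCH_TIMEOUT_S = 10.0   # must touch the wheel at least this often
    EYES_OFF_ROAD_LIMIT_S = 2.0    # alarm if the gaze camera sees you look away this long
    ALARM_GRACE_S = 5.0            # time to respond to the alarm before escalating

    def countermeasure_state(now, last_wheel_touch, eyes_off_since, alarm_since):
        """Return 'ok', 'alarm' or 'escalate'. Times are seconds on a shared clock;
        eyes_off_since and alarm_since are None when the driver is attentive / no alarm is active."""
        inattentive = (now - last_wheel_touch > WHEEL_TOUCH_TIMEOUT_S or
                       (eyes_off_since is not None and now - eyes_off_since > EYES_OFF_ROAD_LIMIT_S))
        if not inattentive:
            return "ok"
        if alarm_since is None or now - alarm_since < ALARM_GRACE_S:
            return "alarm"
        return "escalate"   # vendors differ here: stop in the lane, or try to pull to the shoulder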

There is debate about whether all this will work, whether the countermeasures or other techniques will assure safety. But let’s leave that aside for a moment, and assume it works, and people stay safe.

I’m now asking the harder question: is this a worthwhile product? I’ve touted it as a milestone — a first product put out to customers. That Mercedes offered traffic jam assist in the 2014 S-Class and others followed with that and freeway autopilots is something I tell people in my talks to make it clear this is not just science fiction ideas and cute prototypes. Real, commercial development is underway.

That’s all true, and I would like these products. What I fear, though, is whether it will be that much more useful or relaxing than adaptive cruise control (ACC). You probably don’t have ACC in your car. Uptake on it is quite low — as an individual add-on, usually costing $1,000 to $2,000, only 1-2% of car buyers get it. It’s much more commonly purchased as part of a “technology package” for more money, and it’s not clear what the driving force behind the purchase is.

Highway and traffic jam autopilot is just a “pleasant” feature, as is ACC. It makes driving a bit more relaxing, once you trust it. But it doesn’t change the world, not at all.

I admit to not having this in my car yet. I’ve sat in the driver’s seat of Google’s car some number of times, but there I’ve been on duty to watch it carefully. I got special driver training to assure I had the skills to deal with problem situations. It’s very interesting, but not relaxing. Some folks who have commuted long term in such cars have reported it to be relaxing.

A Step to greater things?

If highway autopilot is just a luxury feature, and doesn’t change the world, is it a stepping stone to something that does? From a standpoint of marketing, and customer and public reaction, it is. From a technical standpoint, I am not so sure.

Robocar Parking

In my earlier article on robocar challenges I gave very brief coverage to the issue of parking. Challenged on that, I thought it was time to expand.

The word “parking” means many things, and the many classes of parking problems have varying difficulties.

The taxi doesn’t park

One of the simplest solutions to parking involves robotaxi service. Such vehicles don’t really park, at least not where they dropped you off. They drop you off and go to their next customer. If they don’t have another ride, they can deliberately go to a place where they know they can easily park to wait. They don’t need to tackle a parking space that’s challenging at all.

Simple non-crowded lots

Parking in basic parking lots — typical open ground lots that are not close to full — is a pretty easy problem. So easy in fact, that we’ve seen a number of demonstrations, ranging back to Junior 3 and Audi Piloted Parking. Cars in the showroom now will identify parking spots for you (and tell you if you fit.) They have done basic parallel parking (with you on the brakes) for several years, and are starting to now even do it with you out of the car (but watching from a distance.) At CES VW showed the special case of parking in your own garage or driveway, where you show the car where it’s going to go.

The early demos required empty parking lots with no pedestrians, and even no other moving cars, but today reasonably well-behaved other cars should not be a big problem. That’s the thing about non-crowded lots: People are not hunting or competing for spaces. The robocars actually would be very happy to seek out the large empty sections at the back of most parking lots because you aren’t going to be walking out that far, the car is going to come get you.

The biggest issue is the question of pedestrians who can appear out from behind a minivan. The answer to this is simply that vehicles that are parking can and do go slow, and slow automatically gives you a big safety boost. At parking lot speed, you really can stop very quickly if a pedestrian appears out of nowhere. The car, after all, is not in a hurry, and can slow itself when close to minivans, or if it has noticed pedestrians who are moving near it and have disappeared behind vehicles. Out at the back of a parking lot, nobody cares if you go 5 km/h, or even right down the center of the lane to assure there are no surprises.
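
To put rough numbers on that, here is a quick calculation in Python. The braking rate and the sensing/actuation delay are round assumptions for illustration.

    # Rough stopping distances at parking-lot speeds: reaction distance plus braking distance.
    def stopping_distance_m(speed_kmh, decel_mps2=5.0, latency_s=0.3):
        v = speed_kmh / 3.6                      # convert km/h to m/s
        return v * latency_s + v * v / (2 * decel_mps2)

    for speed in (5, 10, 30):
        print(f"{speed:2d} km/h -> stops in roughly {stopping_distance_m(speed):.1f} m")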

To the right we see a picture of Junior 3 entering a parking lot, hunting for a space and taking it — in 2009.

Mapping

Mapping is still desirable for parking lots. This is particularly true because parking lots, not being public roads, set up their own sets of rules and put up signs meant only for humans. They may direct traffic to be one-way in certain areas in nonstandard ways. They may have gates where you have to pay or insert tickets. Parking spots will be marked reserved for certain cars (electric vehicle, expectant mother, wheelchair, employee of the month, CEO, customers of company X) with signs meant for humans.

It’s not necessarily super hard to map a parking lot, just time consuming to encode all these rules. Unlike roads, which everybody drives, any given parking lot likely only serves the people who live, work or shop next to it — you will never park in 95% of the lots in your city, though you will drive most of its main roads. Somebody has to pay for the cost of that mapping — either because lots of people want to use the lot, or because the owner of the lot wants to encourage robocars. Fortunately, with the robocars doing things like using the least popular spots, or even valet parking as described below, there is a strong incentive for the owner of a lot to get it mapped and keep it mapped. Only lots that never fill up would have no incentive, and those lots can often be parked in without a map.

While you want trained mappers to confirm the geometry of a parking lot, coding in the signs and special rules is a task easily left to the parking lot owner. If the lot manager forgets to tag the CEO’s space as reserved, nobody is hurt (except the lot manager when the CEO arrives.)
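
As a hypothetical sketch of what that division of labour could look like in data: the surveyed geometry comes from the mapping team, while the restriction tags are a simple layer the lot owner can edit. The field names here are invented for illustration.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class ParkingSpot:
        spot_id: str
        corners: List[Tuple[float, float]]       # surveyed outline in lot coordinates (from mappers)
        reserved_for: Optional[str] = None       # owner-edited tag: "EV", "wheelchair", "CEO", ...

    @dataclass
    class ParkingLot:
        lot_id: str
        one_way_lanes: List[List[Tuple[float, float]]] = field(default_factory=list)  # directed polylines
        gates: List[Tuple[float, float]] = field(default_factory=list)                # pay/ticket gates
        spots: List[ParkingSpot] = field(default_factory=list)

        def usable_spots(self, vehicle_tags=()):
            """Spots this vehicle may take, given the tags it qualifies for (e.g. 'EV')."""
            return [s for s in self.spots
                    if s.reserved_for is None or s.reserved_for in vehicle_tags]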

Robocar parking mistakes are easy to fix. Robocars can put a phone number or URL on the back where you can go to complain about a robocar that is parked badly or blocking things. As long as that doesn’t happen too often, the cost of the support desk is manageable. The folks at the support desk can look out with the robot’s sensors and tell it to move. It’s not like finding a human driven car blocking something, where you have to find the owner. In a minute, the robocar will be gone.

More crowded lots

The challenge of parking lots, in spite of the low speeds, is that they don’t have well defined rules of the road. People ignore the arrows on the ground. They pause and wait for cars to exit. In really crowded lots, cars follow people who are leaving at walking speed, hoping to get dibs on their spot. They wait, blocking traffic, for a spot they claim as theirs. People fight for spots and steal spots. People park badly and cross over the lines.

As far as I know, nobody has tried to solve this challenge, and so it remains unsolved. It is one of the few problems in robocars that actually deserves the label of “AI,” though some think all driving is AI.

Even so, on the grand scheme of things, my intuition is that this is not one of the grand unsolved challenges of AI. Parking lots don’t have legalized rules of the road, but they do have rules and principles, and we all learn them the more we park. Creating a system that can do well with these rules using various AI tools seems like a doable challenge when the time comes. My intuition is that it’s a lot easier than winning on Jeopardy. This system will be able to take advantage of a couple of special abilities of the robocars:

  • They will be able to park and exit spots quickly and efficiently. They won’t be like the people you always see who do a 5 point turn to exit their parking spot when you (but not they) can see they still have 5 feet of room behind them.
  • In general, they will be superb parkers, centering themselves as well as possible inside spots
  • They don’t need room to open their doors, so they can park right next to walls and pillars.
  • Yes, they could also park right next to badly parked cars which have encroached into other spaces and thus made a space no human can use. There is a risk of course that the bad parker, who finds they can’t get in one side, might retaliate. (I’ve had a guy rip my mirror off in revenge.) In this case, though, they will have a photo of the licence plate and a sensor record of the revenge taking place!
  • In the event of problems or deadlock, they are open to the idea of just giving up and parking somewhere farther away that is easier to park in. Unlike humans they could drive as quickly in reverse as forward to back out of situations.

In spite of all this, the cars will want to avoid the full parking lots where the chaos happens. If there is another lot not far away, they will just go there, and require a couple minutes more advance notice from their master when summoned to pick them up. If there is nowhere nearby to park, the car will tell its passenger that she has to do the parking.

Robo-valet zones

Even in the most crowded lots, there is the potential to easily create zones of the parking lot that are marked:

“Robot Valet Parking only. All other cars may be blocked in or towed. No pedestrians.”

In the car’s map, it will indicate what server is handling the robo-valet section, though it is possible to have it work without any communication at all.

In the most basic version, the car would ask permission to enter the lot. The database might even assign it a spot, but generally it would just enter and take any spot. By “any spot”, I mean any piece of pavement, ignoring the lines on the ground. At first the cars would choose spots that let them have an unblocked path to leave. As soon as too many cars arrive to do that, they would switch to a more dense, valet pattern that blocks in some cars (the ones who said they were leaving latest.) It would report where it parked to the database, as well as how to send it a message, and when it expects to leave.

Other cars would arrive. Eventually one would block in your car. If the database has given them a way to communicate (probably over the internet, though if they had V2V they could use that) they might discuss who plans to leave first, and the cars would adjust themselves to put the cars that will leave sooner at the front. This is strongly in the interests of the cars. If you plan to be there a while, you want to go to the back so you don’t have to keep moving to let cars behind you out. But it still works, just not as well, if the cars just take any available spot.

When it’s time to leave, the cars could try to send a message over the data networks to the cars in front of them, but a simpler approach might be to just nudge slightly forward — a few cm will do it. This will cause the car in the direction of the nudge to notice, and it too would nudge forward, and so on, and so on until the front car moves out, and then all the cars in that row can move out, including your car, which leaves the lot. Then the other cars can move in to fill the spot. If they have a database which maps the cars in that section, they could try to be clever in how they re-fill the empty column to minimize movement.
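
Here is a toy sketch in Python of that cascade for a single valet-packed row, purely for illustration. As noted, a system with a database of the row could be cleverer about how it re-fills.

    # One row of valet-packed robocars; index 0 is the open end of the row.
    def leave_row(row, leaver):
        """The leaver nudges the cars ahead of it, they pull out, it departs, the cars
        behind advance, and the cars that pulled out re-park at the back of the row."""
        idx = row.index(leaver)
        blockers = row[:idx]          # cars between the leaver and the open end
        behind = row[idx + 1:]        # cars that stay in the row
        return behind + blockers      # new order after everyone closes up

    row = ["A", "B", "C", "D", "E"]
    print(leave_row(row, "C"))        # ['D', 'E', 'A', 'B']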

There are even faster algorithms if you leave a few empty spaces. Robocars have the ability to move in concert to “move the space” and put it next to a car that wants to exit. It’s more efficient, but not needed.

The database becomes more useful if a human driver ignores the signs and tries to park in the lot. That’s because the database is the simplest way of spotting a vehicle that’s not supposed to be there. As a first step, the cars in the lot could start flashing their lights and honking their horns at the interloper, or even speak human-language messages out of a speaker. “Hey, this is the robot valet lot, you are blocking me in! We’re calling a tow truck to come remove you if you don’t leave.” Some idiots may still try, and the robots could arrange so that almost all of them can still get out, and if not, they might call that tow truck.

The robo-valet section can be at the back of the parking lot, or the top of a structure — those places the humans park in last. The owner of the lot has a huge incentive to do this, since they can make much more efficient use of their land with the tight valet-dense parking. All the owner has to do is register the lot section in a database — a database that a company like Google would probably be happy to offer for free to benefit their cars.

Human valets could also park cars in this area. They would just need to use an app on their smartphone that tells them where to park and allows them to register that they did it. The robots will want the human-parked cars to park at the back, because they will move out of the way when it’s time for the human parked car to be driven back out.

The main requirements for this parking area would be that it be reachable from the outside without going through a zone of chaos, and that it then be possible to also reach the pickup/dropoff point for passengers without the risk of getting stuck in chaos. Larger lots tend to have entrance lanes without spots on them that serve this purpose.

Pedestrians will still enter the lot, in spite of the sign. Just go extra slow if they are there, and perhaps talk to them and ask them to leave. While you won’t actually present a danger to them at your low speed, they probably will heed the advice of 3000lb robots. Perhaps tell them they have 15 seconds to put down their weapon.

Robotic sign?

To get really clever, the sign marking the border of the Robo-Valet area might itself be on a small robot. Thus, when the robo-valet area gets full, the sign can move to expand the area if space is available. You could expand even into areas occupied by human-parked cars — just know that they are there and don’t block them in — or move out of their way when needed. Eventually they leave and only robocars enter.

When the demand goes down, the sign can easily move to shrink the valet area.

Detroit Auto Show and more news

Robocar news continues after CES with announcements from the Detroit Auto Show (and a tiny amount from the TRB meeting.)

Google doesn’t talk a lot about their car, so an address by Chris Urmson at the Detroit Auto Show generated a lot of press. Notable statements from Chris included:

  • A timeline of 2 to 5 years for deployment of a vehicle
  • Public disclosure that Roush of Michigan acted as contract manufacturer to build the new “buggy” models — an open secret since May
  • A list of other partners involved in building the car, such as Continental, LG (batteries), Bosch and others.
  • A restatement that Google does not plan to become a car manufacturer, and feels working with Detroit is the best course to make cars
  • A statement that Chris does not believe regulation will be a major barrier to getting the vehicles out, and they work regularly to keep NHTSA informed
  • A few more details about Google’s own LIDAR, indicating that units are the size of coffee cups. (You will note the new image of the buggy car does not have a Velodyne on the roof.)
  • More indication that things like driving in snow are not in the pipeline for the first vehicles

Almost all of this has been said before, though the date forecasts are moved back a bit. That doesn’t surprise me. As Google-watchers know, Google began by doing extensive, mostly highway based testing of modified hybrid cars, and declared last May that they were uncomfortable with the safety issues of doing a handoff to a human driver, and also that they have been doing a lot more on non-highway driving. This culminated with the unveiling of the small custom built buggy with no steering wheel. The shift in direction (though the Lexus cars are still out there) will expand the work that needs to be done.

Car company announcements out of the Detroit show were minor. The press got all excited when one GM executive said they “would be open to working with Google.” While I don’t think it was actually an official declaration, Google has said many times they have talked to all major car companies, so there would be no reason for GM to go out to the press to say they want to talk to Google. Much PR over nothing, I suspect.

Ford, on the other hand, actually backtracked and declared “we won’t be first” when it comes to this technology. I understand their trepidation. Being first does not mean being the winner in this game. But neither does being 2nd — there will be a time after which the game is lost.

There were concept vehicles displayed by Johnson Controls (a newcomer) and even a Chinese company which put a fish tank in the rear of the car. You could turn the driver’s seat around and watch your fish. Whaa?

In general, car makers were pushing their dates towards 2025. For some, that was a push back from 2020, for others a push forward from 2030, as both of those numbers have been common in predictions. I guess now that it’s 2015, 2020 is just too realistic a number to make an uncertain prediction about.

Earlier, Boston Consulting Group released a report suggesting robocars would be a $42B market in 2025 — the car companies had better get on it. With the global ground transportation market in the range of $7 trillion in my guesstimate, that’s a drop in the bucket, but also a huge number.

News from the Transportation Research Board annual meeting has been sparse. The combined conference of the TRB and AUVSI on self-driving cars in the summer has been the go-to conference of late, and other things usually happen at the big meeting. Released research suggested 10% of vehicles could be robocars in 2035 — a number I don’t think is nearly aggressive enough.

There was also tons of press over the agreement between NASA Ames and Nissan’s Sunnyvale research lab to collaborate. Again, not a big surprise, since they are next door to one another, and Martin Sierhuis, the director of the research lab, made his career over at NASA. (Note of disclosure: I am good friends with Martin, and Singularity U is based at the NASA Research Park.)

Day 3 of CES -- BMW and robots

Day 3 at CES started with a visit to BMW’s demo. They were mostly test driving new cars like the i3 and M series cars, but for a demo, they made the i3 deliver itself along a planned corridor. It was a mostly stock i3 electric car with ultrasonic sensors — and the traffic jam assist disabled. When one test driver dropped off the car, they scanned it, and then a BMW staffer at the other end of a walled course used a watch interface to summon that car. It drove empty along the line waiting for test drives, and then a staffer got in to finish the drive to the parking spot where the test driver would actually get in, unfortunately.

Also on display were BMW’s collision avoidance systems in a much more equipped research car with LIDARs, Radar etc. This car has some nice collision avoidance. It has obstacle detection — the demo was to deliberately drive into an obstacle, but the vehicle hits the brakes for you. More gently than the Volvo I did this in a couple of years ago.

More novel is detection of objects you might hit from the side or back in low speed operations. If it looks like you might sideswipe or back into a parking column or another car, the vehicle hits the brakes on you (harder) to stop it from happening.

Insurers will like this — low speed collisions in parking lots are getting to be a much larger fraction of insurance claims. The high speed crashes get all the attention, but a lot of the payout is in low speed.

I concluded with a visit to my favourite section of CES — Eureka Park, where companies get small lower cost booths, with a focus on new technology. Also in the Sands were robotics, 3D printing, health, wearables and more — never enough time to see it all.

I have added 12 more photos to my gallery, with captions — check the last part out for notes on cool products I saw, from self-tightening belts and regenerating roller skates to phone-charging camping pots.

CES Day 2 Gallery and notes

After a short Day 1 at CES, a fuller Day 2 was full of the usual equipment — cameras, TVs, audio and the like — plus visits to several car booths.

I’ve expanded my gallery of notable things with captions, covering cars and other technology.

Lots of people were making demonstrations of traffic jam assist — simple self-driving at low speeds among other cars. All the demos were of a supervised traffic jam assist. This style of product (as well as supervised highway cruising) is the first thing that car companies are delivering (though they are also delivering various parking assist and valet parking systems.)

This makes sense as it’s an easy problem to solve. So easy, in fact, that many of them now admit they are working on making a real traffic jam assist, which will drive the jam for you while you do e-mail or read a book. This is a readily solvable problem today — you really just have to follow the other cars, and you are going slow enough that, short of a catastrophic error like going full throttle, you aren’t going to hurt people no matter what you do, at least on a highway where there are no pedestrians or cyclists. As such, a full auto traffic jam assist should be the first product we see from car companies.
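
As a toy sketch of how little is involved at jam speeds, here is a simple follow-the-leader controller in Python. The gains and gap targets are made-up illustration values, not anything a vendor ships, and a real system layers much more on top (lane keeping, cut-in handling, fault monitoring).

    TARGET_TIME_GAP_S = 1.5          # desired following gap in seconds of travel
    MIN_GAP_M = 2.0                  # standstill gap to keep
    KP_GAP, KP_SPEED = 0.4, 0.8      # proportional gains on gap error and relative speed
    MAX_ACCEL, MAX_BRAKE = 1.0, -3.0 # m/s^2 limits appropriate to jam speeds

    def follow_accel(own_speed, lead_speed, gap_m):
        """Commanded acceleration (m/s^2) to hold a time gap behind the car ahead."""
        desired_gap = MIN_GAP_M + TARGET_TIME_GAP_S * own_speed
        gap_error = gap_m - desired_gap        # positive: too far back, close up
        rel_speed = lead_speed - own_speed     # positive: lead is pulling away
        accel = KP_GAP * gap_error + KP_SPEED * rel_speed
        return max(MAX_BRAKE, min(MAX_ACCEL, accel))

    # Crawling at 3 m/s with the lead car at 2 m/s and a 5 m gap: gentle braking.
    print(follow_accel(3.0, 2.0, 5.0))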

None of them will say when they might do this. The barrier is not so much technological as corporate — concern about liability and image. It’s a shame, because frankly the supervised cruise and traffic jam assist products are just in the “pleasant extra feature” category. They may help you relax a bit (if you trust them) as cruise control does, but they give you little else. A “read a book” level system would give people back time, and signal the true dawn of robocars. It would probably sell for lots more money, too.

The most impressive car is Delphi’s, a collaboration with folks out of CMU. The Delphi car, a modified Audi SUV, has no fewer than six 4-plane LIDARs and an even larger number of radars. It helps if you make the radars, as otherwise this is an expensive bill of materials. With all the radars, the vehicle can look left and right, and back left and back right, as well as forward, which is what you need for dealing with intersections where cross traffic doesn’t stop, and for changing lanes at high speed.

As a refresher: Radar gives you great information, including speed on moving objects, and sucks on stationary ones. It goes very far and sees through all weather. It has terrible resolution. LIDAR has more resolution but does not see as far, and does not directly give you speed. Together they do great stuff.
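
A toy sketch of that complementary pairing in Python: take each object’s position from the LIDAR, and attach the radar’s directly measured radial speed when a radar return lies close enough. The nearest-neighbour matching and the distance threshold are simplifying assumptions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LidarObject:
        x: float   # metres ahead
        y: float   # metres to the left

    @dataclass
    class RadarReturn:
        x: float
        y: float
        range_rate: float   # radial speed in m/s, negative means closing on us

    def fuse(lidar_objs, radar_returns, match_radius_m=2.0):
        fused = []
        for obj in lidar_objs:
            speed: Optional[float] = None
            best = match_radius_m
            for r in radar_returns:
                d = ((obj.x - r.x) ** 2 + (obj.y - r.y) ** 2) ** 0.5
                if d < best:
                    best, speed = d, r.range_rate
            fused.append({"x": obj.x, "y": obj.y, "range_rate": speed})  # None: no radar return nearby
        return fused

    print(fuse([LidarObject(20.0, 1.0)], [RadarReturn(19.2, 1.4, -3.5)]))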

For notes and photos, browse the gallery

CES Day 1 -- Mercedes concept

A reasonable volume of robocar related stuff here at CES. I just had a few hours today, and went to see the much touted Mercedes F015 “Luxury in Motion.” This is a concept and not a planned vehicle, but it draws together a variety of ideas — most of which we’ve seen before — with some new explorations.

The vehicle has a long-wheelbase design to allow a very large passenger compartment, which features just 4 bucket seats, the front two of which can rotate to create face-to-face seating. (They can also rotate to make it easier to get into the car.) We’ve seen a number of face-to-face concepts and designs, and I’ve been interested from the start in the idea of making car travel more social and better for both families and co-workers. As a plus, rear-facing seats, though less comfortable for some fraction of the population, are going to be safer in a front-end collision.

The vehicle features a bevy of giant touchscreens. We see a lot of this in concept cars, but I will note that we don’t have giant touchscreens at our desks or in our homes. I suspect passengers in robocars will prefer the tablets they already have, though looking down at a tablet can sometimes generate motion sickness.

The interior has an odd mix of carpet and hardwood, perhaps trying to be more like a living room.

More interesting, though not on display, are the vehicle’s systems for communicating with pedestrians and other road users. These include LEDs that can indicate if the car is self-driving (boring, and something I pushed to have removed from the Nevada law), but more interesting are indicators that tell pedestrians the vehicle has seen them. One feature, which is only likely to work at night, laser-projects a crosswalk in front of the vehicle when it stops, to tell a pedestrian it sees them and is expecting them to cross in front. It can also display LED words at the back for other cars (something that is, I think, illegal in some jurisdictions).

Also interesting has been the press reaction. Wired thinks it’s bonkers and not designed very well. The bonkers part is because the writer thinks it de-emphasizes driving too much. Of course, those of that stripe are quite upset at Google’s car with no controls. Other writers have liked the design, and find it quite superior to Google’s non-threatening design, suggesting the Google design is for regulators and the Mercedes design is for customers. Google plans to get approval for their car and operate it, while Mercedes is just using the F015 as a concept.

I have a gallery of several pictures of the car which I will add to during the week. In the gallery you will also see:

Audi Piloted Driving prototype

Audi drove one of their cars from the Bay Area to CES, letting press take 100-mile stints. It also helped them learn about different conditions. One prototype is in the booth; I will go out to see the real car outdoors tomorrow.

TRW

TRW showed off their technology with a transparent model indicating where they had placed an array of radars for 360-degree radar and camera coverage. No LIDAR, but they will probably add one eventually. Radar’s resolution is low, but they believe that by fusing the radar and camera views they can get very good perception of the road.

Others

There are more for me to see tomorrow. Ford showed more of their ADAS systems and also their Focus, which has four of the 32-plane Velodyne LIDARs on it. Toyota showed only a hydrogen fuel cell car. Valeo has some interesting demos I will want to see — they have promised a good traffic jam assist. While they have not said so, I think the most interesting car-company robocar function will be a traffic jam assist which does not require supervision — i.e. you can read. While no car company is ready to have the driver out of the loop at high speeds, doing it at traffic jam speeds is much easier, because mainly you just have to follow the other cars, and you stop self-driving if the jam opens up. Several companies are working on a product like this and I suspect it will be the first real robocar product to reach the market that is actually practical. The “super cruise” products which drive while you watch are pleasant, but not much more world-changing than adaptive cruise control. When the car can give people time back, even if it’s only the traffic jam time, then something interesting starts happening.

Robocars driving when the map is wrong

Yesterday’s note on Here’s maps brought up the question of the wisdom of map-based driving. While I addressed this a bit earlier, let me add some more detail.

A common first intuition is that because people are able to drive just fine on a road they have never seen before, this is how robots will do it too. They are bothered that present designs instead create a super-detailed map of the road by having human-driven cars scan the road with sensors in advance. After all, the geometry of the road can change due to construction; what happens then?

They hope for a car that, like a human, can build its model of the road in real time while driving the road for the first time. That would be nice, of course, and gives you a car that can drive most roads right away, without needing to map them. But it’s a much harder problem to solve, and unlikely to ever be solved perfectly. Car companies are building very simple systems which can follow the lines on a freeway under human supervision without need for a map. But real city streets are a different story.

The first thing to realize is that any system which could build the correct model as you drive is a system that could build a map with no human oversight, so the situations are related. But building a map in advance is always going to have several very large advantages:

  1. You build the map not from just one scan of the road, but from several, made in different lanes and directions. As a result, you get 3-D scans of everything from different angles, and can build a superior model of the world.
  2. Using multiple scans lets you learn about things that are stationary at any given moment but move from one day to the next, like parked cars (see the sketch after this list).
  3. You can process the data using a cloud supercomputer in as much time, memory and data storage as you want. Your computer is effectively thousands of times more capable.
  4. Humans can review the map built by the software if there’s anything it is uncertain about (or even if there is nothing) at their leisure.
  5. Humans can also test the result of the automatic and guided mapping to assure accuracy with one extra drive down the road.
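
Here is a minimal sketch of what advantage 2 buys you — hypothetical code, not how Google, Here or anyone else actually builds maps. With several scans of the same spot, anything that isn’t present in most of them can be dropped as transient (a parked car, a pedestrian), and only the persistent geometry goes into the map.

```python
# Hypothetical multi-pass map filter: keep only the points that show up
# in most scans of the same location. This illustrates the idea, not any
# mapping company's actual pipeline.

from collections import Counter

def build_static_map(scans: list[set[tuple[int, int]]],
                     min_fraction: float = 0.6) -> set[tuple[int, int]]:
    """scans: occupied grid cells seen on each drive past a location.
    Returns the cells occupied in at least min_fraction of the scans."""
    counts = Counter(cell for scan in scans for cell in scan)
    needed = min_fraction * len(scans)
    return {cell for cell, n in counts.items() if n >= needed}

curb = {(0, 0), (1, 0), (2, 0)}          # present on every drive
parked_car = {(1, 1), (2, 1)}            # present on only one drive
scans = [curb | parked_car, set(curb), set(curb)]

print(build_static_map(scans))  # only the curb cells survive
```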

In turn, there are disadvantages:

  1. At times, such as during construction, the road will have changed from when it was mapped.
  2. This process costs effort, and so the vehicle either does not drive off the map, or only handles a more limited set of simpler roads off the map.

The advantages are so great that even if you did have a system which could handle itself without a map, it is still always going to be able to do better with a map. Even with a great independent system you would want to make an effort to map the most popular roads and the most complex roads, up to the limit of your budget. The cost is an issue, but the cost of mapping roads is nothing compared to the cost of building or maintaining them. It’s a few times driving down the road, and some medium-skilled labour.

The road has changed

Let’s get to the big issue — the map is wrong, usually because construction has changed it.

First of all, we must understand that the sensors always disagree with the map, because the sensors are showing all the other cars and pedestrians etc. Any car has to be able to perceive these and drive so as not to hit them. If a traffic cone, “road closed” sign or flagman appears in the road, a car is not going to just plow into them because they are not on the map! The car already knows where not to go, the question is where it should go when the lanes have changed.

Even vehicles not rated to drive arbitrary roads without a map can probably still do basic navigation and stay within their lane markers off the map. For the 10,000 miles of driving you do in a year, you need a car that does that 99.99999% of the time (for which you want a map), but it may be acceptable to have a car that’s only 99.9% able to do that for the occasional mile of restriped road. Indeed, when there are other, human-driven cars on the road, a very good strategy is just to follow them — follow one in front, and watch the cars to the side. If the car has a clear path following new lane markers or other cars, it can do so.

Google, for example, has shown videos of their vehicle detecting traffic cones and changing lanes to obey the cones. That’s today — it is only going to get better at this.

But not all the time. There will be times when the lanes are unclear (sometimes the old lanes are still visible, or the new ones are not well marked). On the other hand, if there are no other cars to follow, there are also no other cars to hit, and no other traffic to block.

Still, there will be times when the car is not sure of where to go, and will need help. Of course, if there is a passenger in the car, as there would be most of the time, that passenger can help. They don’t need to be a licensed driver; they just need to be somebody who can point on the screen and tell the car which of the possible paths it is considering is the right one. Or they could guide it with something like a joystick — not physically driving, but just guiding the car as to where to go and where to turn.

If the car is empty, and has a network connection, it can send a picture, 3-D scan and low-res video to a remote help station, where a person can draw a path for the car to go for its next 100 meters, and keep doing that. Not steering the car but helping it solve the problem of “where is my lane?” The car will be cautious and stop or pull over for any situation where it is not sure of where to go, and the human just helps it get over that, and confirms where it is safe to go.
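
As an illustration of what that exchange might look like, here are two hypothetical message shapes — the car sends a compact snapshot, and gets back a short list of waypoints to follow at a crawl. The field names are invented; no deployed teleoperation protocol is implied.

```python
# Hypothetical remote-assistance messages. The fields and shapes are
# invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class HelpRequest:
    vehicle_id: str
    position: tuple[float, float]          # lat, lon where the car stopped
    reason: str                            # e.g. "lane markings ambiguous"
    snapshot_urls: list[str] = field(default_factory=list)  # low-res images / 3-D scan

@dataclass
class HelpResponse:
    waypoints: list[tuple[float, float]]   # path for roughly the next 100 m
    max_speed_mps: float                   # operator-imposed crawl speed
    expires_s: float                       # guidance is only valid briefly

req = HelpRequest("taxi-042", (37.40, -122.08), "lane markings ambiguous")
resp = HelpResponse(waypoints=[(37.4001, -122.0801), (37.4003, -122.0803)],
                    max_speed_mps=3.0, expires_s=30.0)
print(req.reason, "->", len(resp.waypoints), "waypoints at", resp.max_speed_mps, "m/s")
```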

If the car is unmanned and has no network connection of any kind, and can’t figure out the road, then it will pull over, or worst case, stop and wait for a human to come and help. Is that acceptable? Turns out it probably is, due to one big factor:

This only applies to the first car to encounter an unplanned, unreported construction zone

We all drive through construction zones every day. But it’s much rarer that we are the first car to drive through a construction zone as it is being set up. And most of the procedures I describe above are only for the first connected car to encounter a surprise change to the road. In other words, it’s not going to happen very often. Once a car encounters a surprise change to the road, it will report the problem with the map, and immediately all other cars will know about the zone.

If that first car is able to navigate the new zone, it will be scanning it with sensors, and uploading that data, where a crew can quickly build a corrected map. Within a few minutes, the map and the road will no longer differ. And that first car will be able to navigate the new zone 99.999% of the time — either because it has a human on board, remote human help or it’s a simple enough change that the car is able to drive it with an incorrect map.

In addition, the construction zone has to be a surprise. That means that, in spite of regulations, the construction crews did not log plans for it in the appropriate databases. Today that happens fairly often, but over time it’s going to happen less. In fact, there are plans to have transponders on construction equipment and even traffic cones that make it impossible to create a new construction zone without it showing up in the databases. Setting up a road change has a lot of strongly enforced safety rules, and I predict we’ll see “Get out your smartphone and make sure the zone is in the database before you create it” as one of them, especially since that’s so easy to do.

(You have probably also seen that tools like Waze, driven by ordinary human driver smartphones, are already mapping all the construction zones when they pop up.)

If a complex zone is present and unmapped, unmanned cars just won’t route through there until the map is updated. The more important the zone, the more quickly it will get updated. If need be, a mapping worker will go out in a car before work even begins. If a plan was filed, we’ll also know the plan for the zone, and whether cars can handle it with an old map or not.

Most of the time, though, a human passenger will be there to guide the car through the zone. Not to steer — there may not be a steering wheel — but to guide. The car will go slowly and stay safe.

Once a car is through, it will send the scans up to the mapping center, and all future cars will have a map to guide them until the crew changes the road again without logging it. I believe that doing so should be made against safety regulations, and be quite rare.

So look at those numbers. I hope it’s reasonable to expect that 99% of construction zones will be logged in road authority databases before they begin. Of the 1% that aren’t, there will be a first robocar to encounter the zone. 90% of the time that car will have a passenger able to help. For the 10% that are unmanned, I predict a data network will be available 99% of the time. (Some would argue 100% of the time, because unmanned cars will just not go where there is no data connection, and we may also get new data services like Google’s Loon or Facebook’s drone program to assure coverage everywhere.)

So now we are looking at one construction zone in 100,000 where there was no warning, there is no human, and there is no data. But we’ve rated our car as able to handle off-map driving 99.9% of the time. For the other 0.1%, it decides it can’t see a clear path, and pulls over. When it doesn’t report back in on the other side of the data dead zone, a service vehicle is dispatched to fix the problem.

So now in one in 100,000,000 construction zones, we have a car deciding to pull over. Perhaps for half of those, it can’t figure out how to pull over, and it stops in the lane. Not great — but this is one in 200 million construction zones. In other words, it happens with much less frequency than accidents or stalled cars. And there is even a solution. If a construction worker flashes an ID card at the car’s camera when it’s in a confused state, the car can then follow that worker to a place to stop. In fact, since the confused state is so rare, there is probably not even a need for an ID card. Just walk up, make a “follow me” gesture and walk the car where it needs to go.
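
The chain of assumptions multiplies out as follows; here is the same back-of-envelope arithmetic as a tiny script, with each assumption as an adjustable parameter (the numbers are my estimates from the text, not measured data):

```python
# The article's back-of-envelope arithmetic, with each assumption as a
# parameter. These are estimates, not measurements.

def stuck_in_lane_rate(p_unlogged=0.01,       # zones that never made the database
                       p_unmanned=0.10,       # first encountering car has no passenger
                       p_no_network=0.01,     # no data connection available
                       p_cant_drive_it=0.001, # car can't handle the changed road itself
                       p_cant_pull_over=0.5): # of those, can't even pull over
    """Fraction of construction zones where a car ends up stopped in the lane."""
    return (p_unlogged * p_unmanned * p_no_network *
            p_cant_drive_it * p_cant_pull_over)

rate = stuck_in_lane_rate()
print(f"about 1 in {1/rate:,.0f} construction zones")  # about 1 in 200,000,000
```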

Tweak these numbers as you like. Perhaps you think there will be far more construction zones not logged in databases. Perhaps you think the car’s ability to drive a changed zone will only be 50%. Perhaps you think there will still be lots of unmanned cars running in wireless dead zones in 2020. Even so the number of cars that stop and give up will still be far fewer than the number of cars that block roads today due to accidents and mechanical problems. In other words, no big whoop.

It’s important to realize that unmanned cars are not in a hurry. They can avoid zones they are not comfortable with. If they can’t get through at all, the taxi company sending the car can just send another from a different direction in almost all cases.

It’s also important to realize that cars in an uncertain situation are also not in a big hurry. They will slow until they can be sure they are safe and able to handle the road. Slow, it turns out, is easy. Slow and heavy traffic (ie. a traffic jam) is actually also very easy — you don’t even need to see the lines on the road to handle that one; you usually can’t.

Once again, this is only for the first car to encounter the surprise zone. Much more common will be a car that is the first to encounter a planned zone. This car will always have a competent passenger, because the service will not direct an unmanned car into an unknown construction zone where there is no data. This passenger will get plenty of warning, and their car may well pull over so there is no transition from full-auto to semi-auto while the car is moving. Then this person will guide the car through the zone at reduced speed — probably just with a joystick, though possibly there will be handlebars that can pop out or plug in if true semi-manual driving is needed.

New road signs

Road signs are a different problem. Already there are very decent systems for recognizing road signs captured by the camera — systems that actually do better at it than human beings. But sometimes there are road signs with text, and the system may recognize them, but not understand them. Here again we may call upon human beings, either in the vehicle, or available via a data connection. Once again, this is only for the first unmanned car to encounter the new road sign.

I will propose something stronger, though. I believe there should be a government mandated database of all road signs. Further, I believe the law should say that no road sign has legal effect until it is entered in the database. Ie. if you put up a sign with a new speed limit, it is not a violation of the limit to ignore the sign until the sign is in the database. At least not for robots. Once again, all this needs is that the crews putting in the signs have smartphones so they can plonk the sign on the map and enter what it is.
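
A sketch of what such a database entry might hold — invented fields, not any existing government schema: location, sign type, the value a robot needs, and an effective date before which, under this proposal, the sign would have no legal force.

```python
# Hypothetical road-sign registry entry. The schema is invented to
# illustrate the proposal; no actual government database looks like this.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SignRecord:
    sign_id: str
    lat: float
    lon: float
    sign_type: str            # e.g. "speed_limit", "no_left_turn"
    value: str                # e.g. "80 km/h", or free text for unusual signs
    effective_from: datetime  # under the proposal, no legal effect before this

def legally_in_force(rec: SignRecord, now: datetime) -> bool:
    """A robot would only have to obey signs already entered and in force."""
    return now >= rec.effective_from

rec = SignRecord("sign-123", 37.4, -122.1, "speed_limit", "80 km/h",
                 datetime(2015, 6, 1, tzinfo=timezone.utc))
print(legally_in_force(rec, datetime(2015, 7, 1, tzinfo=timezone.utc)))  # True
```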

We may never need this, though, because the ability of computers to read signs is getting very good. It may be faster to just make it even better than to wait for a law that mandates the database. With a 3-D map, you will never miss a brand new sign, but you might get confused by a changed sign — you will know it changed but may need to ask for help to understand it if it is non-standard. There are already laws that standardize road signs, but only to a limited extent. Even so, the number of sign styles in any given country is still a very manageable number.

Random road events

Sometimes driving geometry changes not due to construction, but due to accidents and the environment. Trees get knocked down. Roads flood. Power lines may fall. The trees will be readily seen, and for the first car to come to a fallen tree, the procedure will be similar, though in a low traffic area the vehicles will be programmed to go around them, as they are for stalled cars and slow moving vehicles. Flooding and power lines are more challenging because they are harder to see. Flooding, of course, does not happen by surprise. That there is flooding in a region will be well known so cars will be on the lookout for it. Human guides will again be key.

A plane is not a bird

Aircraft do not fly by flapping their wings, and robocars will not see the world as people do nor drive as they do. When they have accurate maps, it gives us much more confidence in their safety, particularly the ability to pick the right path reliably at speed. But they have a number of tools open to them for driving a road that doesn’t match the map precisely without needing to have the ability to drive unmapped roads 99.999999% of the time. That’s a human level ability and they don’t need it.

Cars in the UK, China, LA, CES and Here : Robocar News Update

I see new articles on robocars in the press every day now, though most don’t say a lot new. Here, however, are some of the recent meaningful stories from the last month or two while I’ve been on the road. There are other sites, like the LinkedIn self-driving car group and others, if you want to see all the stories.

Winners chosen in UK competition

Four cities in the UK have been chosen for testing and development of robocars under the £10 million funding contest. As expected, Milton Keynes was chosen along with Coventry, as well as Greenwich and Bristol. The BBC has more.

Chinese competition has another round

Many don’t know it, but China has been running its own “DARPA Grand Challenge” style race for 6 years now. The entrants are mostly academic, and not super far along, but the rest of the world stopped having contests long ago, much to its detriment. I was recently in Beijing giving a talk about robocars for guests of Baidu — my venue was none other than the Forbidden City — and the Chinese energy is very high. Many, however, thought that an announcement that Baidu would provide map data for BMW car research meant that Baidu was doing a project the way Google is. It isn’t, at least for now.

LA Mayor wants the cars

I’ve seen lots of calls from cities and regions that robocars come there first. In the fall, the mayor of Los Angeles made such a call. What makes this interesting is that LA is indeed a good early target city, with nice wide and simple roads, lots of freeways, and relatively well-behaved drivers compared to the rest of the world. And it’s in California, which is where a lot of the best development is happening, although that’s all in the SF Bay Area.

Concept designs for CES and beyond

More interesting concept cars are arising, as designers realize what they can do when freed from having a forward-facing driver’s seat with all the controls, and as electric drivetrains allow flexibility in where the drivetrain goes. Our friends at the design firm IDEO came up with some concepts that are probably not realistic but illustrate worthwhile principles. In particular, their vision of the delivery robot is quite at odds with mine. I see delivery robots as being very small, just suitcase-sized boxes on wheels, except for the few built for very large cargo like furniture and industrial deliveries. Delivery robots will come to you on your schedule, not on the delivery company’s schedule. There will be larger robots with compartments that can serve a group of people who live together, but there is a limit to how many you can serve and still deliver at exactly the time people expect.

Everybody is also interested to see what Daimler will unveil at the Consumer Electronics Show. They showed off an interior with face-to-face seating and everybody wearing a VR headset, and have been testing a car under wraps.

It’s interesting to think about the VR headset. A lot of people would get sick if jostled in a car while wearing a VR headset. However, it might be possible to have the VR headset deliberately bounce the environment it’s showing you, so that it looks like you’re riding a car in that environment that’s bumping just the way you are. Or even walking.

Here (Nokia/Navteq) builds a big library of HD maps

Robocars work better if they get a really detailed map of their environment to drive with. Google’s project is heavily based on maps, and they have mapped out all the roads they test near Google HQ. Nokia’s “Here” division has decided to enter this field in a big way. Nokia calls its product “HD Maps,” which is a good name because you want to make it clear that these are quite unlike the navigation maps we are used to from Google, Here and other companies. These maps track every lane and path a car could take on the road, but also every lane marker, every curb, every tree — anything that might be seen by the cameras and 3D sensors.
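
To give a sense of what “every lane and every curb” means in practice, here is a toy representation of one HD-map element. This is my own illustration; Here’s actual format (or anyone else’s) is far richer and certainly looks different.

```python
# Toy HD-map lane record, illustrating the kind of detail involved versus
# the road-level polylines of an ordinary navigation map. Not a real format.

from dataclasses import dataclass, field

@dataclass
class LaneSegment:
    lane_id: str
    centerline: list[tuple[float, float, float]]     # x, y, z points, ~cm accuracy
    left_marking: str                                 # e.g. "dashed_white"
    right_marking: str                                # e.g. "solid_yellow"
    speed_limit_mps: float
    successors: list[str] = field(default_factory=list)  # lanes you can enter next

lane = LaneSegment(
    lane_id="I-80_E_lane2_seg041",
    centerline=[(0.0, 0.0, 0.0), (25.0, 0.1, 0.0), (50.0, 0.3, 0.1)],
    left_marking="dashed_white",
    right_marking="solid_white",
    speed_limit_mps=29.0,
    successors=["I-80_E_lane2_seg042"],
)
print(len(lane.centerline), "centerline points in", lane.lane_id)
```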

Nokia makes the remarkable claim to have produced 1.2 million miles of HD Maps in 30 countries in the last 15 months. That’s remarkable because Google declared that one of their unsolved problems was the cost of producing maps, and they were working to bring that cost down. Either Nokia/Here has made great strides in reducing that cost, or their HD Maps are not quite at the level of accuracy and detail that might be needed.

Nonetheless, the cost of the mapping will come down. In fact, many people express surprise when they learn that the cars rely so heavily on maps, as they expect a vehicle that, like a human being, can easily drive on a road it has never seen before, with no map. Humans can do that, but a car that could do that is also a car that could build the sort of map we’re talking about, in real time. Making the map ahead of time has several advantages, and is easier to do than doing it in real time. Perhaps some day that real-time map builder (what roboticists call simultaneous localization and mapping, or SLAM) will arrive, but for now, pre-mapping is the way to go.

510 Systems story told (sort of.)

There was recently press about the kept-quiet acquisition by Google of 510 Systems. I was at Google at the time, and it involves friends of mine, so I will have to say there are some significant errors in the story, but it’s interesting to see it come out. It wasn’t really that secret. What Anthony did with PriBot was hardly secret — he was on multiple TV shows for his work — and that he was at Google working at first on Streetview and later on the car was also far from secret. But it wasn’t announced so nobody picked up on it.

Uber's legal battles and robocars

Uber is spreading fast, and running into protests from the industries it threatens, and in many places, the law has responded and banned, fined or restricted the service. I’m curious what its battles might teach us about the future battles of robocars.

Taxi service has a history of very heavy regulation, including government control of fares, and quota/monopolies on the number of cabs. Often these regulations apply mostly to “official taxis” which are the only vehicles allowed to pick up somebody hailing a cab on the street, but they can also apply to “car services” which you phone for a pick-up. In addition, there’s lots of regulation at airports, including requirements to pay extra fees or get a special licence to pick people up, or even drop them off at the airport.

Why we have Taxi regulation and monopolies

The heavy regulation had a few justifications:

  • When hailing a cab, you can’t do competitive shopping very easily. You take the first cab to come along. As such there is not a traditional market.
  • Cab oversupply can cause congestion.
  • Cab oversupply can drive the cost of a taxi so low the drivers don’t make a living wage.
  • We want to assure public safety for the passengers, and driving safety for the drivers.
  • A means, in some places, to raise tax revenue, especially taxing tourists.

Most of these needs are eliminated when you summon from an app on your phone. You can choose from several competing companies, and even among their drivers, with no market failure. Cabs don’t cruise looking for fares so they won’t cause much congestion. Drivers and companies can have reputations and safety records that you can look up, as well as safety certifications. The only remaining public interest is the question of a living wage.

Taxi regulations sometimes get stranger. In New York (the world’s #1 taxi city) you must have one of the 12,000 “medallions” to operate a taxi. These medallions over time grew to cost well north of $1 million each, and were owned by cab companies and rich investors. Ordinary cabbies just rented the medallions by the hour. To avoid this, San Francisco made rules insisting a large fraction of the cabs be owned by their drivers, and that no contractual relationship could exist between the driver and any taxi company.

This created the situation which led to Uber. In San Francisco, the “no contract” rule meant if you phoned a dispatcher for a cab, they had no legal power to make it happen. They could just pass along your desire to the cabbie. If the driver saw somebody else with their arm up on the way to get you, well, a bird in the hand is worth two in the bush, and 50% of the time you called for a cab, nobody showed up!

Uber came into that situation using limos, and if you summoned one you were sure to get one, even if it was more expensive than a cab. Today, that reliability is only part of Uber’s value around the world, but crazy regulations prompted its birth.

The legal battles (mostly for Uber)

I’m going to call all these services (Uber, Lyft, Sidecar and to some extent Hail-O) “Online Ride” services.

The many business models for cars

When I talk about robocars, I often get quite opposite reactions:

  • Americans, in particular, will never give up car ownership! You can pry the bent steering wheel from my cold, dead hands.
  • I can’t see why anybody would own a car if there were fast robotaxi service!
  • Surely human drivers will be banned from the roads before too long.

I predict neither extreme will be true. I predict the market will offer all options to the public, and several options will be very popular. I am not even sure which will be the most popular.

  1. Many people will stick to buying and driving classic, manually driven cars. The newer versions of these cars will have fancy ADAS systems that make them much harder to crash, and their accident levels will be lower.
  2. Many will buy a robocar for their near-exclusive use. It will park near where it drops them off and always be ready. It will keep their stuff in the trunk.
  3. People who live and work in an area with robotaxi service will give up car ownership, and hire for all their needs, using a wide variety of vehicles.
  4. Some people will purchase a robocar mostly for their use, but will hire it out when they know they are not likely to use it, allowing them to own a better car. They will make rarer use of robotaxi services to cover specialty trips or those times when they hired it out and ended up needing it. Their stuff will stay in a special locker in the car.

In addition, people will mix these models. Families that own 2 or more cars will switch to owning fewer cars and hiring for extra use and special uses. For example, if you own a 2 person car, you would summon a larger taxi when 3 or more are together. In particular, parents may find that they don’t want to buy a car for their teen-ager, but would rather just subsidize their robotaxi travel. Parents will want to do this and get logs of where their children travel, and of course teens will resist that, causing a conflict.
