Robocars

UMich team works on perception and localization using cameras

Some new results from the NGV Team at the University of Michigan describe different approaches for perception (detecting obstacles on the road) and localization (figuring out precisely where you are). Ford helped fund some of the research, so they issued press releases about it and got some media stories. Here’s a look at what they propose.

Many hope to be able to solve robotics (and thus car) problems with just cameras. While LIDAR is going to become cheap, it is not yet, and cameras are much cheaper. I outline many of the trade-offs between the systems in my article on cameras vs lasers. Everybody hopes for a research or computer-vision breakthrough to make vision systems reliable enough for safe operation.

The Michigan lab’s approach is a specialized machine-vision one. They map the road in advance in 3D and visible light by using a mapping car equipped with lots of expensive LIDAR and other sensors. They build a 3D representation of the road similar to what you need for a video game engine, and from that, with the use of GPUs, they can indeed create a 2D image of what a camera should see from any given point.

The car goes out into the world and its actual camera delivers a 2D frame of what it sees. Their system then compares that with generated 2D images of what the camera should see until it finds the closest match. Effectively, it’s like you looking out a window and then going into a video game and wandering around looking for a place that looks like what you see out that window, and then you know where the window is.

Of course it is not “wandering,” and they develop efficient search algorithms to quickly find the location that looks most like the real world image. We’ve all seen video games images, and know they only approximate the real world, so nothing will be an exact match, but if the system is good enough, there will be a “most similar” match that also corresponds with what other sensors, like your GPS and your odometer/dead reckoning system, tell you about where you probably are.
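To make the search concrete, here is a minimal sketch in Python of the kind of matching loop involved, assuming a hypothetical render_view() function backed by the prior 3D map (the GPU-rendered view described above). The sum-of-squared-differences metric and the pose list are stand-ins; a real system would use a more lighting-tolerant comparison and a smarter optimizer.

```python
import numpy as np

def image_distance(a, b):
    """Sum of squared pixel differences; a stand-in for a more robust,
    illumination-tolerant metric (e.g. normalized mutual information)."""
    return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def localize(camera_frame, candidate_poses, render_view):
    """Return the candidate pose whose synthetic view best matches the live camera frame.

    candidate_poses -- poses near the GPS / dead-reckoning estimate, e.g. (x, y, heading)
    render_view     -- hypothetical function that renders the prior 3D map from a pose
    """
    best_pose, best_score = None, float("inf")
    for pose in candidate_poses:
        synthetic = render_view(pose)            # the 2D image the camera *should* see
        score = image_distance(camera_frame, synthetic)
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose
```

In practice the candidate poses would be a small neighbourhood around the last known position, refined iteratively rather than searched exhaustively.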

Localization with cameras has been done before, and this is a new approach taking advantage of new generations of GPUs, so it’s interesting. The big challenge is simulating the lighting, because the real world is full of different lighting, high dynamic range, and shadows. The human system has no problem understanding a stripe on the road as it moves through the shadow of a tree, but computer systems have a pretty tough time with that. Sun shadows can be mapped well with GPUs, but shadows from things like the moving limbs of trees are not possible to simulate, nor are the shadows of other vehicles and road users. At night, light and shadows come from car headlights and urban lights. The team is optimistic about how well they will handle these problems.

The much larger challenge is object perception. Once you have a simulation of what the camera should see, you can notice when there are things present that are not in the prediction — like another car or pedestrian, or a new road sign. (Right now their system is mostly looking at the ground.) Once you identify the new region, you can attempt to classify it using computer vision techniques, and also by watching it move against the expected background.
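As a toy illustration of that idea (assuming the predicted frame has already been rendered and aligned with the live image), flagging the unexpected regions can start as a simple difference; a real system needs far more robustness to lighting than a per-pixel threshold.

```python
import numpy as np

def unexpected_regions(camera_frame, predicted_frame, threshold=40):
    """Mark pixels where the live camera differs strongly from the map-based prediction.

    Both inputs are greyscale arrays of the same shape. The threshold is an
    illustrative value, not a tuned number; the flagged pixels would then be
    grouped into regions and classified or tracked against the expected background.
    """
    diff = np.abs(camera_frame.astype(np.int16) - predicted_frame.astype(np.int16))
    return diff > threshold
```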

This is where it gets challenging, because the bar is very high. To be used for driving it must effectively always work. Even if you miss 1 pedestrian in a million you have a real problem, because there are billions of pedestrians encountered by a billion drivers every day. This is why people love LIDAR — if something (other than a mirror or sheet of glass) sufficiently large is sufficiently close to you, you’re going to get laser returns from it, and not from what’s behind it. It has the reliability number that is needed. The challenge of vision systems is to meet that reliability goal.

This work is interesting because it does a lot without relying on AI “computer vision” techniques. It is not trying to look at a picture and recognize a person. Humans are able to look at 2D pictures with bizarre lighting and still tell you not just what the things in the picture are, but often how far away they are and what they are doing. While we can be fooled in a 2D image, once you have a moving dynamic world, humans are generally reliable enough at spotting other things on the road. (Though of course, with 1.2 million dead each year, and probably 50 million or more accidents, the majority because somebody was “not looking,” we are far from perfect.)

Some day, computer vision will be as good at recognizing and understanding the world as people are — and in fact surpass us. There are fields (like identifying traffic signs from photos) where they already surpass us. For those not willing to wait until that day, new techniques in perception that don’t require full object understanding are always interesting.

I should also point out that while lowering cost is of course a worthwhile goal, it is a false goal at this time. Today, maximal safety is the overriding goal, and as such, nobody will actually release a vehicle to consumers without LIDAR just to save the estimated 2017 cost of LIDAR, which will be sub-$500. Only later, when cameras get so good that they can completely replace LIDAR’s safety capabilities for less money, would people release such a system to save cost. On the other hand, improving cameras so they can be used together with LIDAR is a real goal: superior safety, not lower cost.

Might the first, supervised robocars be... well... boring?

Let me confess a secret fear. I suspect that the first “autopilot” functions on cars are going to be a bit boring.

I’m talking about offerings like traffic jam assist from Mercedes, super cruise from Cadillac and others: the faster highway assist versions which combine ADAS functions like lane-keeping and adaptive cruise control to keep the car in its lane and a fixed distance from the car in front of you. This is what Tesla has promoted and what scrappy startup “Cruise” plans to offer as a retrofit later this year. In NHTSA’s flawed “levels” document, this is what could be called supervision type 2.

Some of them also offer lane change, if you approve the safety of the change.

All these products will drive your car, slow or fast on highways, but they require your supervision. They may fail to find the lane in certain circumstances, because the markers are badly painted, or confusing, or just missing, or the light is wrong. When they do, they’ll kick out and insist you drive. They’ll really insist, and you are expected to be behind the wheel, watching and grabbing it quickly — ideally even noticing the failure before the system does.

Some will kick out quite rarely. Others will do it several times during a typical commute. But the makers will insist you be vigilant, not just to cover their butts legally, but because in many situations you really do need to be vigilant.

Testing shows that operators of these cars get pretty confident, especially if they are not kicking out very often. They do things they are told not to do. Pick up things to read. Do e-mails and texts. This is no surprise — people are texting even now when the car isn’t driving for them at all.

To reduce that, most companies are planning what they call “countermeasures” to make sure you are paying attention to the road. Some of them make you touch the wheel every 8 to 10 seconds. Some will have a camera watching your eyes that sounds an alarm if you look away from the road for too long. If you don’t keep alert, and ignore the alarms, the cars will either come to a stop in the middle of the freeway, or perhaps even just steer wildly and run off the road. Some vendors are talking about how to get the car to pull off safely to the side of the road.
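As a rough sketch of how such a countermeasure policy might be wired together (the car and driver interfaces here are hypothetical, not any vendor's API), the logic is essentially a timer that nags and then escalates.

```python
import time

WHEEL_TOUCH_LIMIT = 10.0   # seconds allowed without a wheel touch or eyes on road (illustrative)
ALARM_GRACE = 5.0          # seconds of alarm before the car takes action (illustrative)

def countermeasure_loop(car, driver):
    """Toy countermeasure policy: nag the driver, then escalate if ignored."""
    last_attentive = time.time()
    alarm_started = None
    while car.autopilot_engaged():
        if driver.touched_wheel() or driver.eyes_on_road():
            last_attentive = time.time()
            alarm_started = None
            car.silence_alarm()
        elif time.time() - last_attentive > WHEEL_TOUCH_LIMIT:
            if alarm_started is None:
                alarm_started = time.time()
                car.sound_alarm()
            elif time.time() - alarm_started > ALARM_GRACE:
                car.pull_over_or_stop()   # the safer of the escalation options discussed above
                break
        time.sleep(0.1)
```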

There is debate about whether all this will work, whether the countermeasures or other techniques will assure safety. But let’s leave that aside for a moment, and assume it works, and people stay safe.

I’m now asking the harder question, is this a worthwhile product? I’ve touted it as a milestone — a first product put out to customers. That Mercedes offered traffic jam assist in the 2014 S-Class and others followed with that and freeway autopilots is something I tell people in my talks to make it clear this is not just science fiction ideas and cute prototypes. Real, commercial development is underway.

That’s all true, and I would like these products. What I fear, though, is that they won’t be much more useful or relaxing than adaptive cruise control (ACC). You probably don’t have ACC in your car. Uptake on it is quite low — as an individual add-on, usually costing $1,000 to $2,000, only 1-2% of car buyers get it. It’s much more commonly purchased as part of a “technology package” for more money, and it’s not clear what the driving force behind the purchase is.

Highway and traffic jam autopilot is just a “pleasant” feature, as is ACC. It makes driving a bit more relaxing, once you trust it. But it doesn’t change the world, not at all.

I admit to not having this in my car yet. I’ve sat in the driver’s seat of Google’s car some number of times, but there I’ve been on duty to watch it carefully. I got special driver training to assure I had the skills to deal with problem situations. It’s very interesting, but not relaxing. Some folks who have commuted long term in such cars have reported it to be relaxing.

A step to greater things?

If highway autopilot is just a luxury feature, and doesn’t change the world, is it a stepping stone to something that does? From a standpoint of marketing, and customer and public reaction, it is. From a technical standpoint, I am not so sure.

Robocar Parking

In my earlier article on robocar challenges I gave very brief coverage to the issue of parking. Challenged on that, I thought it was time to expand.

The word “parking” means many things, and the many classes of parking problems have varying difficulties.

The taxi doesn’t park

One of the simplest solutions to parking involves robotaxi service. Such vehicles don’t really park, at least not where they dropped you off. They drop you off and go to their next customer. If they don’t have another ride, they can deliberately go to a place where they know they can easily park to wait. They don’t need to tackle a parking space that’s challenging at all.

Simple non-crowded lots

Parking in basic parking lots — typical open ground lots that are not close to full — is a pretty easy problem. So easy, in fact, that we’ve seen a number of demonstrations, ranging back to Junior 3 and Audi Piloted Parking. Cars in the showroom now will identify parking spots for you (and tell you if you fit.) They have done basic parallel parking (with you on the brakes) for several years, and are now even starting to do it with you out of the car (but watching from a distance.) At CES, VW showed the special case of parking in your own garage or driveway, where you show the car where it’s going to go.

The early demos required empty parking lots with no pedestrians, and even no other moving cars, but today reasonably well-behaved other cars should not be a big problem. That’s the thing about non-crowded lots: people are not hunting or competing for spaces. The robocars actually would be very happy to seek out the large empty sections at the back of most parking lots, because you aren’t going to be walking out that far; the car is going to come get you.

The biggest issue is the question of pedestrians who can appear out from behind a minivan. The answer to this is simply that vehicles that are parking can and do go slow, and slow automatically gives you a big safety boost. At parking lot speed, you really can stop very quickly if a pedestrian appears out of nowhere. The car, after all, is not in a hurry, and can slow itself when close to minivans, or if it has noticed pedestrians who are moving near it and have disappeared behind vehicles. Out at the back of a parking lot, nobody cares if you go 5 km/h, or even right down the center of the lane to assure there are no surprises.
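A quick back-of-envelope calculation shows how much safety low speed buys. The deceleration and reaction-time figures below are illustrative assumptions, not measured values.

```python
def stopping_distance_m(speed_kmh, decel_mps2=6.0, reaction_s=0.2):
    """Braking distance plus a short robot-scale reaction delay (assumed values)."""
    v = speed_kmh / 3.6                       # km/h -> m/s
    return v * reaction_s + v * v / (2.0 * decel_mps2)

for kmh in (5, 15, 50):
    print(f"{kmh:>2} km/h -> {stopping_distance_m(kmh):.2f} m")
# Roughly: under half a metre at 5 km/h, a couple of metres at 15 km/h,
# but nearly 20 metres at 50 km/h.
```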

To the right we see a picture of Junior 3 entering a parking lot, hunting for a space and taking it — in 2009.

Mapping

Mapping is still desirable for parking lots. This is particularly true because parking lots, not being public roads, set up their own sets of rules and put up signs meant only for humans. They may direct traffic to be one-way in certain areas in nonstandard ways. They may have gates where you have to pay or insert tickets. Parking spots will be marked reserved for certain cars (electric vehicle, expectant mother, wheelchair, employee of the month, CEO, customers of company X) with signs meant for humans.

It’s not necessarily super hard to map a parking lot, just time-consuming to encode all these rules. Unlike roads, which everybody drives, any given parking lot likely only serves the people who live, work or shop next to it — you will never park in 95% of the lots in your city, though you will drive most of its main roads. Somebody has to pay for the cost of that mapping — either because lots of people want to use the lot, or because the owner of the lot wants to encourage robocars. Fortunately, with the robocars doing things like using the least popular spots, or even valet parking as described below, there is a strong incentive for the owner of a lot to get it mapped and keep it mapped. Only lots that never fill up would have no incentive, and those lots can often be parked in without a map.

While you want trained mappers to confirm the geometry of a parking lot, coding in the signs and special rules is a task easily left to the parking lot owner. If the lot manager forgets to tag the CEO’s space as reserved, nobody is hurt (except the lot manager when the CEO arrives.)

Robocar parking mistakes are easy to fix. Robocars can put a phone number or URL on the back where you can go to complain about a robocar that is parked badly or blocking things. As long as that doesn’t happen too often, the cost of the support desk is manageable. The folks at the support desk can look out with the robot’s sensors and tell it to move. It’s not like finding a human driven car blocking something, where you have to find the owner. In a minute, the robocar will be gone.

More crowded lots

The challenge of parking lots, in spite of the low speeds, is that they don’t have well defined rules of the road. People ignore the arrows on the ground. They pause and wait for cars to exit. In really crowded lots, cars follow people who are leaving at walking speed, hoping to get dibs on their spot. They wait, blocking traffic, for a spot they claim as theirs. People fight for spots and steal spots. People park badly and cross over the lines.

As far as I know, nobody has tried to solve this challenge, and so it remains unsolved. It is one of the few problems in robocars that actually deserves the label of “AI,” though some think all driving is AI.

Even so, on the grand scheme of things, my intuition is that this is not one of the grand unsolved challenges of AI. Parking lots don’t have legalized rules of the road, but they do have rules and principles, and we all learn them the more we park. Creating a system that can do well with these rules using various AI tools seems like a doable challenge when the time comes. My intuition is that it’s a lot easier than winning on Jeopardy. This system will be able to take advantage of a couple of special abilities of the robocars:

  • They will be able to park and exit spots quickly and efficiently. They won’t be like the people you always see who do a 5 point turn to exit their parking spot when you (but not they) can see they still have 5 feet of room behind them.
  • In general, they will be superb parkers, centering themselves as well as possible inside spots.
  • They don’t need room to open their doors, so they can park right next to walls and pillars.
  • Yes, they could also park right next to badly parked cars which have encroached into other spaces and thus made a space no human can use. There is a risk of course that the bad parker, who finds they can’t get in one side, might retaliate. (I’ve had a guy rip my mirror off in revenge.) In this case, though, they will have a photo of the licence plate and a sensor record of the revenge taking place!
  • In the event of problems or deadlock, they are open to the idea of just giving up and parking somewhere farther away that is easier to park in. Unlike humans they could drive as quickly in reverse as forward to back out of situations.

In spite of all this, the cars will want to avoid the full parking lots where the chaos happens. If there is another lot not far away, they will just go there, and require a couple minutes more advance notice from their master when summoned to pick them up. If there is nowhere nearby to park, the car will tell its passenger that she has to do the parking.

Robo-valet zones

Even in the most crowded lots, there is the potential to easily create zones of the parking lot that are marked:

“Robot Valet Parking only. All other cars may be blocked in or towed. No pedestrians.”

In the car’s map, it will indicate what server is handling the robo-valet section, though it is possible to have it work without any communication at all.

In the most basic version the car would ask permission to enter the lot. The database might even assign it a spot, but generally it would just enter and take any spot. By “any spot”, I mean any piece of pavement, ignoring the lines on the ground. At first the cars would choose spots that let them have an unblocked path to leave. As soon as too many cars arrive to do that, they would switch to a more dense, valet pattern that blocks in some cars (the ones who said they were leaving latest.) It would report where it parked to the database, as well as how to send it a message, and when it expects to leave.

Other cars would arrive. Eventually one would block in your car. If the database has given them a way to communicate (probably over the internet, though if they had V2V they could use that) they might discuss who plans to leave first, and the cars would adjust themselves to put the cars that will leave sooner at the front. This is strongly in the interests of the cars. If you plan to be there a while, you want to go to the back so you don’t have to keep moving to let cars behind you out. But it still works, just not as well, if the cars just take any available spot.

When it’s time to leave, the cars could try to send a message over the data networks to the cars in front of them, but a simpler approach might be to just nudge slightly forward — a few cm will do it. This will cause the car in the direction of the nudge to notice, and it too would nudge forward, and so on, and so on until the front car moves out, and then all the cars in that row can move out, including your car, which leaves the lot. Then the other cars can move in to fill the spot. If they have a database which maps the cars in that section, they could try to be clever in how they re-fill the empty column to minimize movement.
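Here is a toy model of that nudge cascade, just to show how little coordination it needs. The row layout and the move descriptions are purely illustrative.

```python
def clear_row_for(row, leaver_index):
    """Simulate the nudge cascade in one valet row.

    `row` lists car IDs from the back of the row (index 0) to the exit at the end.
    The car at `leaver_index` wants out; each car ahead of it notices the nudge
    in turn, the blocking column pulls out, the leaver drives away, and the
    blockers re-park to fill the gap.
    """
    moves = []
    for i in range(leaver_index + 1, len(row)):          # nudge propagates toward the exit
        moves.append((row[i], "nudge forward a few cm"))
    for i in range(len(row) - 1, leaver_index, -1):      # front car out first, then the rest
        moves.append((row[i], "pull out of the row"))
    moves.append((row[leaver_index], "drive away"))
    moves.extend((row[i], "re-park to fill the gap")
                 for i in range(leaver_index + 1, len(row)))
    return moves

# Example: car "B" wants to leave a row of four.
for car, action in clear_row_for(["A", "B", "C", "D"], 1):
    print(car, "->", action)
```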

There are even faster algorithms if you leave a few empty spaces. Robocars have the ability to move in concert to “move the space” and put it next to a car that wants to exit. It’s more efficient, but not needed.

The database becomes more useful if a human driver ignores the signs and tries to park in the lot. That’s because the database is the simplest way of spotting a vehicle that’s not supposed to be there. As a first step, the cars in the lot could start flashing their lights and honking their horns at the interloper, or even speak human-language messages over a speaker. “Hey, this is the robot valet lot, you are blocking me in! We’re calling a tow truck to come remove you if you don’t leave.” Some idiots may still try, and the robots could arrange themselves so that almost all of them can still get out, and if not, they might call that tow truck.

The robo-valet section can be at the back of the parking lot, or the top of a structure — those places the humans park in last. The owner of the lot has a huge incentive to do this, since they can make much more efficient use of their land with the tight valet-dense parking. All the owner has to do is register the lot section in a database — a database that a company like Google would probably be happy to offer for free to benefit their cars.

Human valets could also park cars in this area. They would just need to use an app on their smartphone that tells them where to park and allows them to register that they did it. The robots will want the human-parked cars to park at the back, because the robots can move out of the way when it’s time for the human-parked car to be driven back out.

The main requirements for this parking area would be that it be reachable from the outside without going through a zone of chaos, and that it then be possible to also reach the pickup/dropoff point for passengers without the risk of getting stuck in chaos. Larger lots tend to have entrance lanes without spots on them that serve this purpose.

Pedestrians will still enter the lot, in spite of the sign. Just go extra slow if they are there, and perhaps talk to them and ask them to leave. While you won’t actually present a danger to them at your low speed, they probably will heed the advice of 3000lb robots. Perhaps tell them they have 15 seconds to put down their weapon.

Robotic sign?

To get really clever, the sign marking the border of the Robo-Valet area might itself be on a small robot. Thus, when the robo-valet area gets full, the sign can move to expand the area if space is available. You could expand even into areas occupied by human-parked cars — just know that they are there and don’t block them in — or move out of their way when needed. Eventually they leave and only robocars enter.

When the demand goes down, the sign can easily move to shrink the valet area.

Detroit Auto Show and more news

Robocar news continues after CES with announcements from the Detroit Auto Show (and a tiny amount from the TRB meeting.)

Google doesn’t talk a lot about their car, so an address by Chris Urmson at the Detroit Auto Show generated a lot of press. Notable statements from Chris included:

  • A timeline of 2 to 5 years for deployment of a vehicle
  • Public disclosure that Roush of Michigan acted as contract manufacturer to build the new “buggy” models — an open secret since May
  • A list of other partners involved in building the car, such as Continental, LG (batteries), Bosch and others.
  • A restatement that Google does not plan to become a car manufacturer, and feels working with Detroit is the best course to make cars
  • A statement that Chris does not believe regulation will be a major barrier to getting the vehicles out, and they work regularly to keep NHTSA informed
  • A few more details about Google’s own LIDAR, indicating that units are the size of coffee cups. (You will note the new image of the buggy car does not have a Velodyne on the roof.)
  • More indication that things like driving in snow are not in the pipeline for the first vehicles

Almost all of this has been said before, though the date forecasts are moved back a bit. That doesn’t surprise me. As Google-watchers know, Google began by doing extensive, mostly highway based testing of modified hybrid cars, and declared last May that they were uncomfortable with the safety issues of doing a handoff to a human driver, and also that they have been doing a lot more on non-highway driving. This culminated with the unveiling of the small custom built buggy with no steering wheel. The shift in direction (though the Lexus cars are still out there) will expand the work that needs to be done.

Car company announcements out of the Detroit show were minor. The press got all excited when one GM executive said they “would be open to working with Google.” While I don’t think it was actually an official declaration, Google has said many times they have talked to all major car companies, so there would be no reason for GM to go out to the press to say they want to talk to Google. Much PR over nothing, I suspect.

Ford, on the other hand, actually backtracked and declared “we won’t be first” when it comes to this technology. I understand their trepidation. Being first does not mean being the winner in this game. But neither does being 2nd — there will be a time after which the game is lost.

There were concept vehicles displayed by Johnson Controls (a newcomer) and even a Chinese company which put a fish tank in the rear of the car. You could turn the driver’s seat around and watch your fish. Whaa?

In general, car makers were pushing their dates towards 2025. For some, that was a push back from 2020, for others a push forward from 2030, as both of those numbers have been common in predictions. I guess now that it’s 2015, 2020 is just too real a number to make an uncertain prediction about.

Earlier, Boston Consulting Group released a report suggesting robocars would be a $42B market in 2025 — the car companies had better get on it. With the global ground transportation market in the range of $7 trillion in my guesstimate, that’s a drop in the bucket, but also a huge number.

News from the Transportation Research Board annual meeting has been sparse. The combined conference of the TRB and AUVSI on self-driving cars in the summer has been the go-to conference of late, and other things usually happen at the big meeting. Released research suggested 10% of vehicles could be robocars in 2035 — a number I don’t think is nearly aggressive enough.

There was also tons of press over the agreement between NASA Ames and Nissan’s Sunnyvale research lab to collaborate. Again, not a big surprise, since they are next door to one another, and Martin Sierhuis, the director of the research lab, made his career over at NASA. (Note of disclosure: I am good friends with Martin, and Singularity U is based at the NASA Research Park.)

Day 3 of CES -- BMW and robots

Day 3 at CES started with a visit to BMW’s demo. They were mostly offering test drives of new cars like the i3 and M series cars, but for a demo, they made the i3 deliver itself along a planned corridor. It was a mostly stock i3 electric car with ultrasonic sensors — and the traffic jam assist disabled. When one test driver dropped off the car, they scanned it, and then a BMW staffer at the other end of a walled course used a watch interface to summon that car. It drove empty along the line waiting for test drives, and then, unfortunately, a staffer got in to finish the drive to the parking spot where the test driver would actually get in.

Also on display were BMW’s collision avoidance systems in a much better-equipped research car with LIDARs, radar etc. This car has some nice collision avoidance. It has obstacle detection — the demo was to deliberately drive into an obstacle, and the vehicle hits the brakes for you, more gently than the Volvo I did this in a couple of years ago.

More novel is detection of objects you might hit from the side or back in low speed operations. If it looks like you might sideswipe or back into a parking column or another car, the vehicle hits the brakes on you (harder) to stop it from happening.

Insurers will like this — low speed collisions in parking lots are getting to be a much larger fraction of insurance claims. The high speed crashes get all the attention, but a lot of the payout is in low speed.

I concluded with a visit to my favourite section of CES — Eureka Park, where companies get small lower cost booths, with a focus on new technology. Also in the Sands were robotics, 3D printing, health, wearables and more — never enough time to see it all.

I have added 12 more photos to my gallery, with captions — check the last part out for notes on cool products I saw, from self-tightening belts and regenerating roller skates to phone-charging camping pots.

CES Day 2 Gallery and notes

After a short Day 1 at CES, a fuller day covered the usual equipment (cameras, TVs, audio and the like) and visits to several car booths.

I’ve expanded my captioned gallery of notable things with cars and other technology.

Lots of people were making demonstrations of traffic jam assist — simple self-driving at low speeds among other cars. All the demos were of a supervised traffic jam assist. This style of product (as well as supervised highway cruising) is the first thing that car companies are delivering (though they are also delivering various parking assist and valet parking systems.)

This makes sense as it’s an easy problem to solve. So easy, in fact, that many of them now admit they are working on making a real traffic jam assist, which will drive the jam for you while you do e-mail or read a book. This is a readily solvable problem today — you really just have to follow the other cars, and you are going slow enough that, short of a catastrophic error like going full throttle, you aren’t going to hurt people no matter what you do, at least on a highway where there are no pedestrians or cyclists. As such, a full auto traffic jam assist should be the first product we see from car companies.
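To give a sense of how simple the core of a jam-speed follower can be, here is a minimal constant-time-gap controller. The gains and limits are illustrative; a real product layers sensor fusion, smoothing and hard emergency braking on top of something like this.

```python
def follow_speed(lead_distance_m, lead_speed_mps, own_speed_mps,
                 time_gap_s=1.5, max_speed_mps=8.0):
    """Target speed that holds a fixed time gap to the car ahead (toy jam-speed policy)."""
    desired_gap = max(2.0, time_gap_s * own_speed_mps)   # never plan to get closer than 2 m
    gap_error = lead_distance_m - desired_gap
    target = lead_speed_mps + 0.5 * gap_error            # gently close or open the gap
    return max(0.0, min(target, max_speed_mps))

# Example: lead car 10 m ahead doing 3 m/s while we do 2 m/s.
print(f"{follow_speed(10.0, 3.0, 2.0):.2f} m/s")
```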

None of them will say when they might do this. The barrier is not so much technological as corporate — concern about liability and image. It’s a shame, because frankly the supervised cruise and traffic jam assist products are just in the “pleasant extra feature” category. They may help you relax a bit (if you trust them) as cruise control does, but they give you little else. A “read a book” level system would give people back time, and signal the true dawn of robocars. It would probably sell for lots more money, too.

The most impressive car is Delphi’s, a collaboration with folks out of CMU. The Delphi car, a modified Audi SUV, has no fewer than 6 4-plane LIDARs and an even larger number of radars. It helps if you make the radars, as otherwise this is an expensive bill of materials. With all the radars, the vehicle can look left and right, and back left and back right, as well as forward, which is what you need for dealing with intersections where cross traffic doesn’t stop, and for changing lanes at high speed.

As a refresher: Radar gives you great information, including speed on moving objects, and sucks on stationary ones. It goes very far and sees through all weather. It has terrible resolution. LIDAR has more resolution but does not see as far, and does not directly give you speed. Together they do great stuff.

For notes and photos, browse the gallery

CES Day 1 -- Mercedes concept

A reasonable volume of robocar related stuff here at CES. I just had a few hours today, and went to see the much touted Mercedes F015 “Luxury in Motion.” This is a concept and not a planned vehicle, but it draws together a variety of ideas — most of which we’ve seen before — with some new explorations.

The vehicle has a long wheelbase design to allow it to have a very large passenger compartment, which features just 4 bucket seats, the front two of which can rotate to create face-to-face seating. (In addition, they can rotate to make it easier to get into the car.) We’ve seen a number of face-to-face concepts and designs, and I’ve been interested from the start in the idea of making car travel more social and better for both families and co-workers. As a plus, rear-facing seats, though less comfortable for some fraction of the population, are going to be safer in a front-end collision.

The vehicle features a bevy of giant touchscreens. We see a lot of this, but I actually will note that we don’t have this at our desks or in our homes. I suspect passengers in robocars will prefer the tablets they already have, though there is the issue that looking down at a tablet generates motion sickness sometimes.

The interior has an odd mix of carpet and hardwood, perhaps trying to be more like a living room.

More interesting, though not on display, are the vehicle’s systems for communicating with pedestrians and other road users. These include LEDs that can indicate if the car is self-driving (boring, and something I pushed to have removed from the Nevada law), but more interesting are indicators that help to tell pedestrians the vehicle has seen them. One feature, which is only likely to work at night, laser-projects a crosswalk in front of the vehicle when it stops, to tell a pedestrian it sees them and is expecting them to cross in front. It can also display LED words at the back for other cars (something that is, I think, illegal in some jurisdictions).

Also interesting has been the press reaction. Wired thinks it’s bonkers and not designed very well. The bonkers part is because the writer thinks it de-emphasizes driving too much. Of course, those of that stripe are quite upset at Google’s car with no controls. Other writers have liked the design, and find it quite superior to Google’s non-threatening design, suggesting the Google design is for regulators and the Mercedes design is for customers. Google plans to get approval for their car and operate it, while Mercedes is just using the F015 as a concept.

I have a gallery of several pictures of the car which I will add to during the week. In the gallery you will also see:

Audi Piloted Driving prototype

Audi drove one of their cars from the Bay Area to CES, letting press take 100-mile stints. It also helped them learn things about different conditions. One prototype is in the booth; I will go out to see the real car outdoors tomorrow.

TRW

TRW was showing off their technology with a transparent model showing where they had put an array of radars to make 360 degree radar and camera coverage. No LIDAR, but they will probably get one eventually. Radar’s resolution is low, but they believe that by fusing the radar and the camera views they can get very good perception of the road.

Others

There are more for me to see tomorrow. Ford showed more of their ADAS systems and also their Focus, which has 4 of the 32-plane Velodyne LIDARs on it. Toyota showed only a hydrogen fuel cell car. Valeo has some interesting demos I will want to see — they have promised a good traffic jam assist. While they have not said so, I think the most interesting car company robocar function will be a traffic jam assist which does not require supervision — ie. you can read. While no car company is ready to have the driver out of the loop at high speeds, doing it at traffic jam speeds is much easier, because mainly you just have to follow the other cars, and you stop self-driving if the jam opens up. Several companies are working on a product like this and I suspect it will be the first real robocar product to reach the market that is actually practical. The “super cruise” products which drive while you watch are pleasant, but not much more world-changing than adaptive cruise control. When the car can give people time back, even if it’s only the traffic jam time, then something interesting starts happening.

Robocars driving when the map is wrong

Yesterday’s note on Here’s maps brought up the question of the wisdom of map-based driving. While I addressed this a bit earlier let me add a bit more detail.

A common first intuition is that because people are able to drive just fine on a road they have never seen before, this is how robots will do it. They are bothered that present designs instead create a super-detailed map of the road by having human-driven cars scan the road with sensors in advance. After all, the geometry of the road can change due to construction; what happens then?

They hope for a car that, like a human, can build its model of the road in real time while driving the road for the first time. That would be nice, of course, and gives you a car that can drive most roads right away, without needing to map them. But it’s a much harder problem to solve, and unlikely to ever be solved perfectly. Car companies are building very simple systems which can follow the lines on a freeway under human supervision without need for a map. But real city streets are a different story.

The first thing to realize is that any system which could build the correct model as you drive is a system that could build a map with no human oversight, so the situations are related. But building a map in advance is always going to have several very large advantages:

  1. You build the map from not just one scan of the road, but several, and done in different lanes and directions. As a result, you get 3-D scans of everything from different angles, and can build a superior model of the world.
  2. Using multiple scans lets you learn about things that are stationary but move one day to the next, like parked cars.
  3. You can process the data using a cloud supercomputer in as much time, memory and data storage as you want. Your computer is effectively thousands of times more capable.
  4. Humans can review the map built by the software if there’s anything it is uncertain about (or even if there is nothing) at their leisure.
  5. Humans can also test the result of the automatic and guided mapping to assure accuracy with one extra drive down the road.

In turn, there are disadvantages:

  1. At times, such as construction, the road will have changed from when it was mapped
  2. This process costs effort, and so the vehicle either does not drive off the map, or only handles a more limited set of simpler roads off the map.

The advantages are so great that even if you did have a system which could handle itself without a map, it is still always going to be able to do better with a map. Even with a great independent system you would want to make an effort to map the most popular roads and the most complex roads, up to the limit of your budget. The cost is an issue, but the cost of mapping roads is nothing compared to the cost of building or maintaining them. It’s a few times driving down the road, and some medium-skilled labour.

The road has changed

Let’s get to the big issue — the map is wrong, usually because construction has changed it.

First of all, we must understand that the sensors always disagree with the map, because the sensors are showing all the other cars and pedestrians etc. Any car has to be able to perceive these and drive so as not to hit them. If a traffic cone, “road closed” sign or flagman appears in the road, a car is not going to just plow into them because they are not on the map! The car already knows where not to go, the question is where it should go when the lanes have changed.

Even vehicles not rated to drive any road without a map can probably still do basic navigation and stay within their lane markers without a map. For the 10,000 miles of driving you do in a year, you need a car that does that 99.99999% of the time (for which you want a map) but it may be acceptable to have a car that’s only 99.9% able to do that for the occasional mile of restriped road. Indeed, when there are other, human-driven cars on the road, a very good strategy is just to follow them — follow one in front, and watch cars to the side. If the car has a clear path following new lane markers or other cars, it can do so.

Google, for example, has shown videos of their vehicle detecting traffic cones and changing lanes to obey the cones. That’s today — it is only going to get better at this.

But not all the time. There will be times when the lanes are unclear (sometimes the old lanes are still visible or the new ones are not well marked.) If there are no other cars to follow, there are also no other cars to hit, and no other traffic to block.

Still, there will be times when the car is not sure of where to go, and will need help. Of course, if there is a passenger in the car, as there would be most of the time, that passenger can help. They don’t need to be a licenced driver, they just need to be somebody who can point on the screen and tell the car which of the possible paths it is considering is the right one. Or guide it with something like a joystick — not physically driving but just guiding the car as to where to go, where to turn.

If the car is empty, and has a network connection, it can send a picture, 3-D scan and low-res video to a remote help station, where a person can draw a path for the car to go for its next 100 meters, and keep doing that. Not steering the car but helping it solve the problem of “where is my lane?” The car will be cautious and stop or pull over for any situation where it is not sure of where to go, and the human just helps it get over that, and confirms where it is safe to go.
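A sketch of what that exchange might contain, with entirely hypothetical message structures (the real interface would obviously carry much more, and be authenticated):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class HelpRequest:
    """What a confused, empty car might upload to a remote help station."""
    vehicle_id: str
    position: Tuple[float, float]                       # lat, lon from its own localization
    snapshot_jpeg: bytes                                # low-res forward image
    lidar_scan: bytes                                   # compressed 3-D scan of the scene
    candidate_paths: List[List[Tuple[float, float]]]    # the paths the car is considering

@dataclass
class HelpResponse:
    """The operator does not steer; they approve or sketch the next stretch of path."""
    approved_path: List[Tuple[float, float]]            # waypoints for roughly the next 100 m
    max_speed_mps: float = 3.0                          # keep it slow through the zone
```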

If the car is unmanned and has no network connection of any kind, and can’t figure out the road, then it will pull over, or worst case, stop and wait for a human to come and help. Is that acceptable? Turns out it probably is, due to one big factor:

This only applies to the first car to encounter an unplanned, unreported construction zone

We all drive construction zones every day. But it’s much more rare that we are the first car to drive the construction zone as they are setting it up. And most of the rules I describe above are only for the first connected car to encounter a surprise change to the road. In other words, it’s not going to happen very often. Once a car encounters a surprise change to the road, it will report the problem with the map. Immediately all other cars will know about the zone.

If that first car is able to navigate the new zone, it will be scanning it with sensors, and uploading that data, where a crew can quickly build a corrected map. Within a few minutes, the map and the road will no longer differ. And that first car will be able to navigate the new zone 99.999% of the time — either because it has a human on board, remote human help or it’s a simple enough change that the car is able to drive it with an incorrect map.

In addition, the construction zone has to be a surprise. That means that, in spite of regulations, the construction crews did not log plans for it in the appropriate databases. Today that happens fairly often, but over time it’s going to happen less. In fact, there are plans to have transponders on construction equipment and even traffic cones that make it impossible to create a new construction zone without it showing up in the databases. Setting up a road change has a lot of strongly enforced safety rules, and I predict we’ll see “Get out your smartphone and make sure the zone is in the database before you create it” as one of them, especially since that’s so easy to do.

(You have probably also seen that tools like Waze, driven by ordinary human driver smartphones, are already mapping all the construction zones when they pop up.)

If a complex zone is present and unmapped, unmanned cars just won’t route through there until the map is updated. The more important the zone, the more quickly it will get updated. If need be, a mapping worker will go out in a car before work even begins. If a plan was filed, we’ll also know the plan for the zone, and whether cars can handle it with an old map or not.

Most of the time, though, a human passenger will be there to guide the car through the zone. Not to steer — there may not be a steering wheel — but to guide. The car will go slowly and stay safe.

Once a car is through, it will send the scans up to the mapping center, and all future cars will have a map to guide them until the crew changes the road again without logging it. I believe that doing so should be made against safety regulations, and be quite rare.

So look at those numbers. I hope it’s reasonable to expect that 99% of construction zones will be logged in road authority databases before they begin. Of the 1% that aren’t, there will be a first robocar to encounter the zone. 90% of the time that car will have a passenger able to help. For the 10% that are unmanned, I predict a data network will be available 99% of the time. (Some would argue 100% of the time, because unmanned cars will just not go where there is not a data connection, and we may also get new data services like Google’s Loon or Facebook’s drone program to assure coverage everywhere.)

So now we are looking at one construction zone in 100,000 where there was no warning, there is no human, and there is no data. But we’ve rated our car as able to handle off-map driving 99.9% of the time. For the other 0.1%, it decides it can’t see a clear path, and pulls over. When it doesn’t report back in on the other side of the data dead zone, a service vehicle is dispatched and fixes the problem.

So now in one in 100,000,000 construction zones, we have a car deciding to pull over. Perhaps for half of those, it can’t figure out how to pull over, and it stops in the lane. Not great — but this is one in 200 million construction zones. In other words, it happens with much less frequency than accidents or stalled cars. And there is even a solution. If a construction worker flashes an ID card at the car’s camera when it’s in a confused state, the car can then follow that worker to a place to stop. In fact, since the confused state is so rare, there is probably not even a need for an ID card. Just walk up, make a “follow me” gesture and walk the car where it needs to go.

Tweak these numbers as you like. Perhaps you think there will be far more construction zones not logged in databases. Perhaps you think the car’s ability to drive a changed zone will only be 50%. Perhaps you think there will still be lots of unmanned cars running in wireless dead zones in 2020. Even so the number of cars that stop and give up will still be far fewer than the number of cars that block roads today due to accidents and mechanical problems. In other words, no big whoop.
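Chaining the rough estimates above into one number, and making them easy to tweak as invited:

```python
def stranded_rate(unlogged=0.01, unmanned=0.10, no_data=0.01,
                  cannot_drive_it=0.001, cannot_pull_over=0.5):
    """Fraction of construction zones where a robocar ends up stopped in a lane:
    an unlogged zone AND an empty car AND no data link AND the car can't drive
    the change AND it can't even pull over. All inputs are the guesses from the text."""
    return unlogged * unmanned * no_data * cannot_drive_it * cannot_pull_over

rate = stranded_rate()
print(f"about 1 in {1 / rate:,.0f} construction zones")   # ~1 in 200,000,000
```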

It’s important to realize that unmanned cars are not in a hurry. They can avoid zones they are not comfortable with. If they can’t get through at all, the taxi company sending the car can just send another from a different direction in almost all cases.

It’s also important to realize that cars in an uncertain situation are also not in a big hurry. They will slow until they can be sure they are safe and able to handle the road. Slow, it turns out, is easy. Slow and heavy traffic (ie. a traffic jam) is actually also very easy — you don’t even need to see the lines on the road to handle that one; you usually can’t.

Once again this is only for the first car to encounter the surprise zone. Much more common will be a car that is the first to encounter a planned zone. This car will always have a competent passenger, because the service will not direct an unmanned car into an unknown construction zone where there is no data. This passenger will get plenty of warning, and their car may well pull over so there is no transition from full-auto to semi-auto while the car is moving. Then this person will guide the car through the zone at reduced speed. Probably just with a joystick, though possibly there will be handlebars that can pop out or plug in if true semi-manual driving is needed.

New road signs

Road signs are a different problem. Already there are very decent systems for recognizing road signs captured by the camera — systems that actually do better at it than human beings. But sometimes there are road signs with text, and the system may recognize them, but not understand them. Here again we may call upon human beings, either in the vehicle, or available via a data connection. Once again, this is only for the first unmanned car to encounter the new road sign.

I will propose something stronger, though. I believe there should be a government mandated database of all road signs. Further, I believe the law should say that no road sign has legal effect until it is entered in the database. Ie. if you put up a sign with a new speed limit, it is not a violation of the limit to ignore the sign until the sign is in the database. At least not for robots. Once again, all this needs is that the crews putting in the signs have smartphones so they can plonk the sign on the map and enter what it is.
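A sketch of how minimal such a database could be. The record layout and the crew’s phone-app call are assumptions for illustration, not any existing system:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RegisteredSign:
    sign_type: str        # e.g. "speed_limit"
    value: str            # e.g. "45"
    lat: float
    lon: float
    registered_at: str    # timestamp the installation crew logged it

SIGN_DB: List[RegisteredSign] = []    # toy in-memory stand-in for the government database

def register_sign(sign: RegisteredSign) -> None:
    """What the installation crew's phone app would do when the sign goes up."""
    SIGN_DB.append(sign)

def signs_near(lat: float, lon: float, radius_deg: float = 0.001) -> List[RegisteredSign]:
    """Signs a robocar must obey near a point; unregistered signs have no legal effect."""
    return [s for s in SIGN_DB
            if abs(s.lat - lat) < radius_deg and abs(s.lon - lon) < radius_deg]
```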

We may never need this, though, because the ability of computers to read signs is getting very good. It may be faster to just make it even better than to wait for a law that mandates the database. With a 3-D map, you will never miss a brand new sign, but you might get confused by a changed sign — you will know it changed but may need to ask for help to understand it if it is non-standard. There are already laws that standardize road signs, but only to a limited extent. Even so, the number of sign styles in any given country is still a very manageable number.

Random road events

Sometimes driving geometry changes not due to construction, but due to accidents and the environment. Trees get knocked down. Roads flood. Power lines may fall. The trees will be readily seen, and for the first car to come to a fallen tree, the procedure will be similar, though in a low traffic area the vehicles will be programmed to go around them, as they are for stalled cars and slow moving vehicles. Flooding and power lines are more challenging because they are harder to see. Flooding, of course, does not happen by surprise. That there is flooding in a region will be well known so cars will be on the lookout for it. Human guides will again be key.

A plane is not a bird

Aircraft do not fly by flapping their wings, and robocars will not see the world as people do nor drive as they do. When they have accurate maps, it gives us much more confidence in their safety, particularly the ability to pick the right path reliably at speed. But they have a number of tools open to them for driving a road that doesn’t match the map precisely without needing to have the ability to drive unmapped roads 99.999999% of the time. That’s a human level ability and they don’t need it.

Cars in the UK, China, LA, CES and Here: Robocar News Update

I see new articles on robocars in the press every day now, though most don’t say a lot new. Here, however, are some of the recent meaningful stories from the last month or two while I’ve been on the road. There are other sites, like the LinkedIn self-driving car group and others, if you want to see all the stories.

Winners chosen in UK competition

Four cities in the UK have been chosen for testing and development of robocars through the £10 million funding contest. As expected, Milton Keynes was chosen along with Coventry, and also Greenwich and Bristol. The BBC has more.

Chinese competition has another round

Many don’t know it, but China has been running its own “DARPA Grand Challenge” style race for 6 years now. The entrants are mostly academic, and not super far along, but the rest of the world stopped having contests long ago, much to its detriment. I was recently in Beijing giving a talk about robocars for guests of Baidu — my venue was none other than the Forbidden City — and the Chinese energy is very high. Many, however, thought that an announcement that Baidu would provide map data for BMW car research meant that Baidu was doing a project the way Google is. It isn’t, at least for now.

LA Mayor wants the cars

I’ve seen lots of calls from cities and regions that robocars come there first. In the fall, the mayor of Los Angeles made such a call. What makes this interesting is that LA is indeed a good early target city, with nice wide and simple roads, lots of freeways, and relatively well-behaved drivers compared to the rest of the world. And it’s in California, which is where a lot of the best development is happening, although that’s all in the SF Bay Area.

Concept designs for CES and beyond

More interesting concept cars are arising, as designers realize what they can do when freed of having a driver’s seat that faces forward and has all the controls, and as electric drivetrains allow you to move around where the drivetrain goes. Our friends at the design firm IDEO came up with some concepts that are probably not realistic but illustrate worthwhile principles. In particular, their vision of the delivery robot is quite at odds with mine. I see delivery robots as being very small, just suitcase-sized boxes on wheels, except for the few that are built for very large cargo like furniture and industrial deliveries. Delivery robots will come to you on your schedule, not on the delivery company’s schedule. There will be larger robots with compartments that can serve a group of people who live together, but there is a limit to how many you can serve and still deliver at exactly the right time that people expect.

Everybody is also interested to see what Daimler will unveil at the Consumer Electronics Show. They showed off an interior with face-to-face seating and everybody wearing a VR headset, and have been testing a car under wraps.

It’s interesting to think about the VR headset. A lot of people would get sick if jostled in a car while wearing a VR headset. However, it might be possible to have the VR headset deliberately bounce the environment it’s showing you, so that it looks like you’re riding a car in that environment that’s bumping just the way you are. Or even walking.

Here (Nokia/Navteq) builds a big library of HD maps

Robocars work better if they get a really detailed map of their environment to drive with. Google’s project is heavily based on maps, and they have mapped out all the roads they test near Google HQ. Nokia’s “Here” division has decided to enter this in a big way. Nokia calls its projects “HD Maps,” which is a good name because you want to make it clear that these are quite unlike the navigation maps we are used to from Google, Here and other companies. These maps track every lane and path a car could take on the road, but also every lane marker, every curb, every tree — anything that might be seen by the cameras and 3D sensors.
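To give a feel for what “HD” means here, a minimal sketch of what one map tile might contain. The field names are illustrative, not Here’s actual format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]    # x, y, z in a local map frame

@dataclass
class Lane:
    centerline: List[Point3D]
    left_marker: List[Point3D]
    right_marker: List[Point3D]
    speed_limit_kmh: float

@dataclass
class HDMapTile:
    """One tile of an HD map: drivable paths plus everything the sensors might see."""
    tile_id: str
    lanes: List[Lane] = field(default_factory=list)
    curbs: List[List[Point3D]] = field(default_factory=list)
    static_objects: List[Point3D] = field(default_factory=list)   # trees, poles, signs
```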

Nokia makes the remarkable claim to have produced 1.2 million miles of HD Maps in 30 countries in the last 15 months. That’s remarkable because Google declared that one of their unsolved problems was the cost of producing maps, and they were working to bring that cost down. Either Nokia/Here has made great strides in reducing that cost, or their HD Maps are not quite at the level of accuracy and detail that might be needed.

Nonetheless, the cost of the mapping will come down. In fact, many people express surprise when they learn that the cars rely so heavily on maps, as they expect a vehicle that, like a human being, can easily drive on a road they’ve never seen before, with no map. Humans can do that, but a car that could do that is also a car that could build the sort of map we’re talking about, in real time. Making the map ahead of time has several advantages, and is easier to do than doing it in real time. Perhaps some day that real-time map builder (what roboticists call Simultaneous localization and mapping) will arise, but for now, pre-mapping is the way to go.

510 Systems story told (sort of.)

There was recently press about the kept-quiet acquisition by Google of 510 Systems. I was at Google at the time, and it involves friends of mine, so I will have to say there are some significant errors in the story, but it’s interesting to see it come out. It wasn’t really that secret. What Anthony did with PriBot was hardly secret — he was on multiple TV shows for his work — and that he was at Google working at first on Streetview and later on the car was also far from secret. But it wasn’t announced so nobody picked up on it.

Uber's legal battles and robocars

Uber is spreading fast, and running into protests from the industries it threatens, and in many places, the law has responded and banned, fined or restricted the service. I’m curious what its battles might teach us about the future battles of robocars.

Taxi service has a history of very heavy regulation, including government control of fares, and quota/monopolies on the number of cabs. Often these regulations apply mostly to “official taxis” which are the only vehicles allowed to pick up somebody hailing a cab on the street, but they can also apply to “car services” which you phone for a pick-up. In addition, there’s lots of regulation at airports, including requirements to pay extra fees or get a special licence to pick people up, or even drop them off at the airport.

Why we have Taxi regulation and monopolies

The heavy regulation had a few justifications:

  • When hailing a cab, you can’t do competitive shopping very easily. You take the first cab to come along. As such there is not a traditional market.
  • Cab oversupply can cause congestion
  • Cab oversupply can drive the cost of a taxi so low the drivers don’t make a living wage.
  • We want to assure public safety for the passengers, and driving safety for the drivers.

Most of these needs are eliminated when you summon from an app on your phone. You can choose from several competing companies, and even among their drivers, with no market failure. Cabs don’t cruise looking for fares so they won’t cause much congestion. Drivers and companies can have reputations and safety records that you can look up, as well as safety certifications. The only remaining public interest is the question of a living wage.

Taxi regulations sometimes get stranger. In New York (the world’s #1 taxi city) you must have one of the 12,000 “medallions” to operate a taxi. These medallions over time grew to cost well north of $1 million each, and were owned by cab companies and rich investors. Ordinary cabbies just rented the medallions by the hour. To avoid this, San Francisco made rules insisting a large fraction of the cabs be owned by their drivers, and that no contractual relationship could exist between the driver and any taxi company.

This created the situation which led to Uber. In San Francisco, the “no contract” rule meant if you phoned a dispatcher for a cab, they had no legal power to make it happen. They could just pass along your desire to the cabbie. If the driver saw somebody else with their arm up on the way to get you, well, a bird in the hand is worth two in the bush, and 50% of the time you called for a cab, nobody showed up!

Uber came into that situation using limos, and if you summoned one you were sure to get one, even if it was more expensive than a cab. Today, that reliability is only part of Uber’s value around the world, but crazy regulations prompted its birth.

The legal battles (mostly for Uber)

I’m going to call all these services (Uber, Lyft, Sidecar and to some extent Hail-O) “Online Ride” services.

The many business models for cars

When I talk about robocars, I often get quite opposite reactions:

  • Americans, in particular, will never give up car ownership! You can pry the bent steering wheel from my cold, dead hands.
  • I can’t see why anybody would own a car if there were fast robotaxi service!
  • Surely human drivers will be banned from the roads before too long.

I predict neither extreme will be true. I predict the market will offer all options to the public, and several options will be very popular. I am not even sure which will be the most popular.

  1. Many people will stick to buying and driving classic, manually driven cars. The newer versions of these cars will have fancy ADAS systems that make them much harder to crash, and their accident levels will be lower.
  2. Many will buy a robocar for their near-exclusive use. It will park near where it drops them off and always be ready. It will keep their stuff in the trunk.
  3. People who live and work in an area with robotaxi service will give up car ownership, and hire for all their needs, using a wide variety of vehicles.
  4. Some people will purchase a robocar mostly for their use, but will hire it out when they know they are not likely to use it, allowing them to own a better car. They will make rarer use of robotaxi services to cover specialty trips or those times when they hired it out and ended up needing it. Their stuff will stay in a special locker in the car.

In addition, people will mix these models. Families that own 2 or more cars will switch to owning fewer cars and hiring for extra use and special uses. For example, if you own a 2 person car, you would summon a larger taxi when 3 or more are together. In particular, parents may find that they don’t want to buy a car for their teen-ager, but would rather just subsidize their robotaxi travel. Parents will want to do this and get logs of where their children travel, and of course teens will resist that, causing a conflict.

Are today's challenges of making robocars dealbreakers?

There’s been a lot of press recently about an article in Slate by Lee Gomes which paints a pessimistic picture of the future of robocars, and particularly Google’s project. The Slate article is a follow-on to a similar article in MIT Tech Review.

Gomes and others seem to feel that they and the public were led to believe that current projects were almost finished and ready to be delivered any day, and they are disappointed to learn that these vehicles are still research projects and prototypes. In a classic expression of the Gartner Hype Cycle, there are now predictions that the technology is very far away.

Both predictions are probably wrong. Fully functional robocars that can drive almost everywhere are not coming this decade, but nor are they many decades away. But more to the point, less-functional robocars are probably coming this decade — much sooner than these articles expect, and those vehicles will be much more useful and commercially viable than people may expect.

There are many challenges facing developers, and those challenges will keep them busy refining products for a long time to come. Most of those challenges either already have a path to solution, or constrain a future vehicle only in modest ways that still allow it to be viable. Some of the problems are in the “unsolved” class. It is harder to predict when those solutions will come, of course, but at the same time one should remember that many of the systems in today’s research vehicles were in this class just a few years ago. Tackling hard problems is just what these teams are good at doing. This doesn’t guarantee success, but neither should you bet against it.

And very few of the problems seem to be in the “unsolvable without human-smart AI” class, at least none that bar highly useful operation.

Gomes’ articles have been the major trigger of press, so I will go over those issues in detail here first. Later, I will produce an article that covers even more challenges than are listed here, and what people hope to do about them. Still, the critiques are written almost as though they expected Google and others, rather than making announcements like “Look at the new milestone we are pleased to have accomplished,” to instead say, “Let’s tell you all the things we haven’t done yet.”

Gomes begins by comparing the car to the Apple Newton, but forgets that 9 years after the Newton fizzled we had the success of the Palm Pilot, and 10 years after that Apple came back with the world-changing iPhone. Today, the pace of change is much faster than in the 80s.

Here are the primary concerns raised:

Maps are too important, and too costly

Google’s car, and others, rely on a clever technique that revolutionized the DARPA challenges. Each road is driven manually a few times, and the scans are then processed to build a super-detailed “ultramap” of all the static features of the road. This is a big win because big server computers get to process the scans in as much time as they need, and see everything from different angles. Then humans can review and correct the maps and they can be tested. That’s hard to beat, and you will always drive better if you have such a map than if you don’t.

Any car that could drive without a map would effectively be a car that’s able to make an adequate map automatically. As things get closer to that, making maps will become cheaper and cheaper.

Naturally, if the road differs from the map, due to construction or other changes, the vehicle has to notice this. That turns out to be fairly easy. Harder is assuring it can drive safely in this situation. That’s still a much easier problem than being able to drive safely everywhere without a map, and in the worst case, the problem of the changed road can be “solved” by just the ability to come to a safe stop. You don’t want to do that super often, but it remains the fail-safe out. If there is a human in the car, they can guide the vehicle in this. Even if the vehicle can’t figure out where to go to be safe, the human can. Even a remote human able to look at transmitted pictures can help the car with that — not live steering, but strategic guidance.

This problem only happens to the first car to encounter the surprise construction. If that car is still able to navigate (perhaps with human help,) the map can be quickly rebuilt, and if the car had to stop, all unmanned cars can learn to avoid the zone. They are unmanned, and thus probably not in a hurry.
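
As a rough illustration of the detection step, here is a small hypothetical Python sketch (again my own, with an invented threshold, not any team’s code). It measures how much of the live scan the prior map fails to explain, and switches to a cautious fallback when that fraction gets too high.

```python
# Hypothetical sketch of map-vs-reality change detection; the threshold is invented.
import numpy as np

MISMATCH_THRESHOLD = 0.25  # fraction of unexplained scan points that triggers caution

def mismatch_fraction(prior_map, scan_points_map_frame):
    """Fraction of live scan points landing on cells the map says are empty."""
    px = np.round(scan_points_map_frame[:, 0]).astype(int)
    py = np.round(scan_points_map_frame[:, 1]).astype(int)
    inside = (px >= 0) & (px < prior_map.shape[1]) & (py >= 0) & (py < prior_map.shape[0])
    hits = prior_map[py[inside], px[inside]]
    return 1.0 - float(hits.mean()) if hits.size else 1.0

def choose_behaviour(prior_map, scan_points_map_frame):
    """Keep driving on the map while it still matches; otherwise fall back."""
    if mismatch_fraction(prior_map, scan_points_map_frame) > MISMATCH_THRESHOLD:
        return "slow down, plan a safe stop, and flag the area for re-mapping"
    return "continue normal map-based driving"
```

Noticing the change this way is the easy part; as described above, the harder problem is driving well, or stopping safely, once you know the map is stale.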

The cost of maps

In the interests of safety, a lot of work is put into today’s maps. It’s a cost that somebody like Google or Mercedes can afford if they need to (after all, Google has already scanned every road in many countries multiple times), but it would be high for smaller players.

Live public test in Singapore

In late August, I visited Singapore to give an address at a special conference announcing a government-sponsored collaboration involving their Ministry of Transport, the Land Transport Authority and A-STAR, the government-funded national R&D centre. I got a chance to meet the minister and sit down with officials to talk about their plans, and 6 months earlier I had the chance to visit A-STAR and also the car project at the National University of Singapore. At the conference, there were demos of vehicles, including one from Singapore Technologies, which primarily does military contracting.

Things are moving fast there, and this week, the NUS team announced they will be doing a live public demo of their autonomous golf carts and they have made much progress. They will be running the carts over a course with 10 stops in the Singapore Chinese and Japanese Gardens. The public will be able to book rides online, and then come and summon and direct the vehicles with their phones. The vehicles will have a touch tablet where the steering wheel will go. Rides will be free. Earlier, they demonstrated not just detecting pedestrians but driving around them (if they stay still) but I don’t know if this project includes that.

This is not the first such public demo - the CityMobil2 demonstration in Sardinia ran in August, on a stretch of beachfront road blocked to cars but open to bicycles, service vehicles and pedestrians. That project limited itself to unacceptably slow speeds and offered only a linear route.

The Singapore project will also mix with pedestrians, but the area is closed to cars and bicycles. There will be two safety officers on bicycles riding behind the golf carts, able to shut them down if any problem presents, and speed will also be limited.

Singapore is interesting because they have a long history of transportation innovation, and good reason for it. As a city-state, it’s almost all urban, and transportation is a real problem. That’s why congestion charging was first developed in Singapore, along with other innovations. Every vehicle in Singapore has a transponder, and they use them not just for congestion tolling, but to pay for parking seamlessly in almost all parking lots and a few other tricks.

In spite of this history of innovation, Singapore is also trending conservative — this might dampen truly fast innovation, but this joint project is a good start, though I advised them that, in my view, private projects will be able to move faster than public-sector ones.

The NUS project is a collaboration with MIT, involving professor Emilio Frazzoli. Their press release has more details, including maps showing the route is non-linear but the speed is slow.

Tesla, Audi and other recent announcements

Some recent announcements have caused lots of press stir, and I have not written much about them, partly because of my busy travel schedule, but also because there is less news than we might imagine.

Tesla is certainly an important company to watch. As the first successful start-up car company in the USA in many decades, they are showing they know how to do things differently, taking advantage of the fact that they don’t have a baked-in knowledge of “how a car company works” the way other companies do. Tesla’s announcements of plans for more self-driving are important. Unfortunately, the announcements around the new dual-motor Model S involve offerings quite similar to what can already be found in cars from Mercedes, Audi and a few others: namely, advanced ADAS and the combination of lane-keeping and adaptive cruise control to provide a hands-off cruise control where you must still keep your eyes on the road.

One notable feature demonstrated by Tesla is automatic lane change, which you trigger by hitting a turn signal. That’s a good interface, but it must be made clear to people that they still have the duty to check that it’s safe to change lanes. It’s not that easy for a robocar’s sensors, especially the limited sensor package in the Tesla, to see a car coming up fast behind you in the next lane. On some highways relative speeds can get pretty high. You’re not likely to be hit by such cars, but in some cases that’s because they will probably brake for you, not because you did a fully safe lane change.
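
To see why high closing speeds from behind are the hard case, here is a deliberately simplified, hypothetical gap check in Python (not Tesla’s actual logic; the timing and margin numbers are invented). A fast-approaching car can erase a seemingly comfortable gap before the lane change even completes.

```python
# Hypothetical, simplified lane-change gap check; all numbers are invented.
def lane_change_is_safe(rear_gap_m, rear_closing_speed_mps,
                        front_gap_m, front_closing_speed_mps,
                        maneuver_time_s=4.0, min_gap_m=10.0):
    """Accept the lane change only if both gaps stay above min_gap_m throughout."""
    rear_gap_after = rear_gap_m - rear_closing_speed_mps * maneuver_time_s
    front_gap_after = front_gap_m - front_closing_speed_mps * maneuver_time_s
    return rear_gap_after > min_gap_m and front_gap_after > min_gap_m

# A car 40 m back in the target lane, closing at 15 m/s (about 54 km/h faster),
# fails the check even though the current gap looks comfortable:
print(lane_change_is_safe(rear_gap_m=40, rear_closing_speed_mps=15,
                          front_gap_m=60, front_closing_speed_mps=0))  # False
```

A 40 m gap behind sounds generous, but at a 15 m/s closing speed it disappears in under 3 seconds, which is exactly the measurement a limited rear-facing sensor suite can struggle to make reliably.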

Much more interesting are Elon Musk’s predictions of a real self-driving car in 5 to 6 years. He means one where you can read a book, or even, as he suggests, go to sleep. Going to sleep is one of the greatest challenges, almost as hard as operating unmanned or carrying a drunk or disabled person. You won’t likely do that just with cameras — but 5 to 6 years is a good amount of time for a company like Tesla.

Another unusual thing about Tesla is that while they are talking about robocars a lot, they have also built one of the finest driver’s cars ever made. The Model S is great fun to drive, and at times has what I call a “telepathic” interface — the motors have so much torque that you can almost think about where you want to go and the vehicle makes it happen. (Other examples of telepathic interfaces include touch-typing and a stickshift.) In some ways it is the last car that people might want to automate. But it’s also a luxury vehicle, and that makes self-driving desirable too.

Audi Racing

Another recent announcement creating buzz is Audi’s self-driving race car on a test track in Germany. Audi has done racing demos several times now. They are both important and unimportant. It definitely makes sense to study how to control a car in extreme, high-performance situations. Understanding the physics of the tires so fully that you can compete in racing will teach lessons of use in danger situations (like accidents) or certain types of bad weather.

At the same time, real-world driving is not like racing, and nobody is going to be doing race-like driving on ordinary streets in their robocar. 99.9999% of driving consists of “staying in your lane” and some other basic maneuvers, so racing is fun and sexy but not actually very high on the priority list. (Not that teams don’t deserve to spend some of their time on a bit of fun and glory.) The real work of building robocars involves putting them through all the road situations you can, both real and, in some cases, simulated on a track or in a computer.

Google first showed its system to many people by having it race figure-8s on the roof parking lot at the TED conference. The car followed a course through a group of cones at pretty decent speed and wowed the crowd with the tight turns. What most of the crowd didn’t know was that the cones were largely there for show. The car was guiding itself from its map of all the other physical things in the parking lot — lane markers, pavement defects and more. The car is able to localize itself fine from those things. The cones just showed the public that it really was following the planned course. At the same time, making a car do that is something that was accomplished decades ago, and is used routinely to run “dummy cars” on car company test tracks.

A real demo turns out to be very boring, because that’s how being driven should be. I’m not saying it’s bad in any way to work on racing problems. The only error would be forgetting that the real-world driving problems are higher priority and success in them is less dramatic but more impressive in the technical sense.

This doesn’t mean we won’t see more impressive demos soon. Many people have shown off automatic braking. Eventually we will see demos of how vehicles respond in danger situations — accidents, pedestrians crossing into the road and the like. A tiny part of driving but naturally one we care about. And we will want them to understand the physics of what the tires and vehicle are capable of so that they perform well, but not so they can find the most efficient driving line on the track.

There was some debate about having a new self-driving car contest like the DARPA grand challenges, and a popular idea was man vs. machine, including racing. That would have been exciting. We asked ourselves whether a robot might have an advantage because it would have no fear of dying. (It might have some “fear” of smashing its owner’s very expensive car.) It turns out this happens on the racetrack fairly often with new drivers who try to get an edge by driving as though they have no fear and will win all games of chicken. When this happens, the other drivers get together to teach that new driver a lesson about cooperation and reciprocation in passing and drafting. So the robots would need to be programmed with that as well, or their owners would see a lot of expensive crashes and few victories.

Robocar Retirement

Here’s an interview with me in the latest Wall Street Journal on the subject of robocars and seniors.

This has always been a tricky question. Seniors are not early adopters, so the normal instinct would be to expect them to fear a new technology as dramatic as this one. Look at the market for simplified cell phones aimed at seniors who can’t imagine why they would want a smartphone. Not all are like this, but enough are to raise the question.

Sometimes this barrier is broken. Pictures of grandchildren in e-mail brought grandparents online, as did video calls with them. Necessity overcomes the fear of change.

As people get older, they start losing driving ability. They die more often in accidents, eventually surpassing the rates of reckless teens, because they are more fragile and because they make mistakes that cause other people to hit them. Many seniors report trouble with vision at night, and so they stop driving after dark. In some cases, they get their licences taken away by the state — though the AARP and others fight this so it’s rare — or their kids take away their keys when things get really dangerous. And the kids become a taxi service for their parents.

The boomer generation, which took over the suburbs and exurbs, has nice houses with minimal transit. Some find themselves leaving those homes because they can’t drive any more and will become shut-ins if they don’t do something.

The robocar offers answers to many of these problems. Safe transportation for those with disabilities. (Eventually even mild dementia.) Inexpensive taxi transportation anywhere, including those low-transit suburbs. And a chance to video chat with the grandchildren while on the way.

It’s no surprise that retirement communities are discussed as an early deployment zone for robocars. In those communities, you have a controlled street environment — often with heavy use of NEVs/golf carts already. You have people losing the ability to drive who have limited mobility needs. If they can get to basic shopping and a few other locations (including transit hubs to travel further) they can do pretty well.

Until the robocar came along, we were all doomed to lose the freedom cars gave us. This is no longer going to happen.

Talking soon on robocars and insurance

I’ve been on the road a lot, talking in places like Singapore, Shenzhen and Hong Kong, and visiting Indonesia, which is an eye-opener of driving chaos. In a bit over 10 hours I will speak at Swiss Re’s conference on robocars and insurance in Zurich. While the start will be my standard talk, in the latter section we will have some new discussion of liability and insurance.

A live stream of the event should be available at http://swissre.adobeconnect.com/theautonomouscar/ I talk at 8:45am Central European Summer Time.

A lot of news while I’ve been on the road — driving permits in California, new projects and the Singapore effort whose announcement I attended. And lots of non-news that got people very excited, like the “revelation” that Google’s car doesn’t drive in snow (nobody thought it could) or on all roads (nobody even suggested this) or that it was forced to add a steering wheel for testing (this was always planned; Google participated in the hearings that wrote those laws.) And lots of car company announcements from the ITS world congress (a conference that 2 years ago barely acknowledged the presence of self-driving cars.)

More to come later.

Short Big Think video piece on Privacy vs. Security

There’s another video presentation by me that I did while visiting Big Think in NYC.

This one is on The NSA, Snowden and the “tradeoff” of Privacy and Security.

Earlier, I did a 10 minute piece on Robocars for Big Think that won’t be news to regular readers here but was reasonably popular.

The Neighbourhood Elevator and a new vision of urban density

I’ve been musing more on the future of the city under the robocar, and many visions suggest we’ll have more sprawl. Earlier I have written visions of Robocar Oriented Development and outlined all the factors urban planners should look at.

In the essay linked below, I introduce the concept of a medium density urban neighbourhood that acts like a higher density space thanks to robocars functioning like the elevators in the high-rises of high density development.

Read The Neighbourhood Elevator and 21st century urban density at robocars.com.

Robocar News: UK Legalization, MobilEye IPO, Baidu, new Lidar, Nissan pullback, FBI Weapons, Navia, CityMobil2

A whole raft of recent robocar news.

UK to modify laws for full testing, large grants for R&D

The UK announced that robocar testing will be legalized in January, similar to actions by many US states, but making it the first major country to do so. Of particular interest is the promise that fully autonomous vehicles, like Google’s no-steering-wheel vehicle, will have regulations governing their testing. Because the US states that wrote regulations did so before seeing Google’s vehicle, their laws still have open questions about how to test faster versions of it.

Combined with this are large research grant programs, on top of the £10M prize to be awarded to a city for a testing project, and the planned project in Milton Keynes.

Jerusalem’s MobilEye going public in largest Israeli IPO

The leader in automated driver assist using cameras is Jerusalem’s MobilEye. This week they’re going public at a valuation near $5B, raising over $600 million. MobilEye makes custom ASICs full of machine vision processing tools, and uses those to make camera systems that recognize things on the road. They have announced and demonstrated their own basic supervised self-driving car with this. Their camera, which is cheaper than the radar used in most fancy ADAS systems (but also works with radar for better results), is found in many high-end vehicles. They are a supplier to Tesla, and it is suggested that MobilEye will play a serious role in Tesla’s own self-driving plans.

As I have written, I don’t believe cameras are even close to sufficient for a fully autonomous vehicle which can run unmanned, though they can be a good complement to radar and especially LIDAR. LIDAR prices will soon drop to the low thousands of dollars, and people taking the risk of deploying the first robocars would be unwise not to use LIDAR to improve their safety just to save early adopters a few thousand dollars.

Chinese search engine Baidu has robocar (and bicycle) project

Baidu is the big boy in Chinese search — sadly a big beneficiary of Google’s wise and moral decision not to be collaborators on massive internet censorship in China — and now it’s emulating Google in a big way by opening its own self-driving car project.

Various stories suggest a vehicle which involves regular handoff between a driver and the car’s systems, something Google decided was too risky. Not many other details are known.

Also rumoured is a project with bicycles. It’s unknown whether that’s something like the “bikebot” concept I wrote about 6 years ago, where a small robot would clamp onto a bike and use its wheels to deliver the bicycle on demand.

Why another search engine company? Well, one reason Google was able to work quickly is that it is the world’s #1 mapping company, and mapping plays a large role in the design of robocars. Baidu says it is their expertise in big data and AI that’s driving them to do this.

Velodyne has a new LIDAR

The Velodyne 64-plane LIDAR, which is seen spinning on top of Google’s cars and most of the other serious research cars, is made in small volumes and costs a great deal of money — $75,000. David Hall, who runs Velodyne, has regularly said that in volume it would cost well under $1,000, but we’re not there yet. He has released a new LIDAR with just 16 planes. The price, while not finalized, will be much higher than $1K but much lower than $75K (or even the $30K for the 32-plane version found on Ford’s test vehicle and some others.)

As a disclaimer, I should note I have joined the advisory board of Quanergy, which is making 8-plane LIDARs at a much lower price than these units.

Nissan goes back and forth on dates

Conflicting reports have come from Nissan on their dates for deployment. At first, it seemed they had predicted fairly autonomous cars by 2020. A later announcement by CEO Carlos Ghosn suggested it might be even earlier. But new reports suggest the product will be less far along, and will need more human supervision to operate.

FBI gets all scaremongering

Many years ago, I wrote about the danger that autonomous robots could be loaded with explosives and sent to an address to wreak havoc. That is a concern, but what I wrote was that the greater danger could be the fear of that phenomenon. After all, car accidents kill more people every month in the USA than died at the World Trade Center 13 years ago, and far surpass war and terrorism as forms of violent death and injury in most nations for most of modern history. Nonetheless, an internal FBI document, released through a leak, has them pushing this idea along with the more bizarre idea that such cars would let criminals multitask more and not have to drive their own getaway cars.

The two cultures of robocars

I have many more comments pending on my observations from the recent AUVSI/TRB Automated Vehicles Symposium, but for today I would like to put forward an observation I made about two broad schools of thought on the path of the technology and the timeline for adoption. I will call these the aggressive and conservative schools. The aggressive school is represented by Google, Induct (and its successors) and many academic teams; the conservative school includes car companies, most urban planners and various others.

The conservative (automotive) view sees this technology as a set of wheels that has a computer.

The aggressive (digital) school sees this as a computer that has a set of wheels.

The conservative view sees this as an automotive technology, and most of them are very used to thinking about automotive technology. For the aggressive school, where I belong, this is a computer technology, and will be developed — and change the world — at the much faster pace that computer technologies do.

Neither school is probably entirely right, of course. It won’t go as gung-ho as a smartphone, suddenly in every pocket within a few years of release, being discarded when just 2 years old even though it still performs exactly as designed. Nor will it advance at the speed of automotive technology, a world where electric cars are finally getting some traction a century after being introduced.

The conservative school embraces the 4 NHTSA Levels or 5 SAE levels of technology, and expects these levels to be a path of progress. Car companies are starting to sell “level 2” and working on “level 3” and declaring level 4 or 5 to be far in the future. Google is going directly to SAE level 4.

The two cultures do agree that the curve of deployment is not nearly-instant like a smartphone. It will take some time until robocars are a significant fraction of the cars on the road. What they disagree on is how quickly that has a big effect on society. In sessions I attended, the feeling that the early 2020s would see only a modest fraction of cars being self-driving meant, to the conservatives, that robocars would not have much effect on the world.

In one session, it was asked how many people had cars with adaptive cruise control (ACC). Very few hands went up, and this is no surprise — the uptake of ACC is quite low, and almost all of it is part of a “technology package” on the cars that offer it. This led people to believe that if ACC, now over a decade old, could barely get deployed, we should not expect rapid deployment of more complete self-driving. And this may indeed be a warning for those selling super-cruise style products which combine ACC and lane-keeping under driver supervision, which is the level 2 most car companies are working on.

To counter this, I asked a room how many had ridden in Uber or its competitors. Almost every hand went up this time — again no surprise. In spite of the fact that Uber’s cars represent an insignificant fraction of the deployed car fleet. In the aggressive view, robocars are more a service than a product, and as we can see, a robocar-like service can start affecting everybody with very low deployment and only a limited service area.

This dichotomy is somewhat reflected in the difference between SAE’s Level 4 and NHTSA’s. SAE Level 4 means full driving (including unmanned) but in a limited service area or under other limited parameters. This is what Google has said they will make, and it is what you see planned for services in campuses and retirement communities. This is where it begins, and it grows one region at a time. NHTSA’s levels falsely convey the idea that you slowly move to fully automated mode and then immediately do it over a wide service area. Real cars will vary in what level of supervision they need — that is, which level they are at — over different times, streets and speeds, existing at all the levels at different moments.

Follow the conservative model and you can say that society will not see much change until 2030 — some even talk about 2040. I believe that is an error.

Another correlated difference of opinion lies around infrastructure. Those in the aggressive computer-based camp wish to avoid the need to change the physical infrastructure. Instead of making the roads smart, make the individual cars smart. The more automotive camp has also often spoken of physical changes as being more important, and also believes there is strong value in putting digital “vehicle to vehicle” radios in even non-robocars. The computer camp is much more fond of “virtual infrastructure” like the detailed ultra-maps used by Google and many other projects.

It would be unfair to claim that the two schools are fully stratified. There are researchers who bridge the camps. There are people who see both sides very well. There are “computer” folks working at car companies, and car industry folks on the aggressive teams.

The two approaches will also clash when it comes to deciding how to measure the safety of the products and how they should be regulated, which will be a much larger battle. More on that later.
