A recent Ford article on how they clean bug splatter off their LIDAR prompted me to write about how you do redundancy in robocars to deal with the inevitable failure of sensors and other components. It's a software way of thinking, not a "make the hardware more reliable" approach.
The future of computer-driven cars and deliverbots
As I've written earlier, Tesla has the ability to load special "search" neural networks into its cars to hunt for things it wants to gather training data on. In this article on Forbes, I hypothesize the day when there's an Amber Alert, and police ask Tesla to load networks to search for the car and people involved, and it quickly works. And then police get a taste for this, not just in the USA but in China and other places. Where does it lead, and can we stop it?
Some customer satisfaction scores leaked from Waymo and were posted by "The Information." The story depicts the 70% 5-star rating as very much a glass-half-empty story, worrying about the problem rides. I think that's actually a very impressive score, and a sign of great things to come, which I detail in the new Forbes site story at:
Another Tesla car crash, allegedly on Autopilot, teaches us something about how well (or not well) Tesla is doing with its claimed ability to use its fleet of cars to quickly learn to identify unusual obstacles and situations. Here, a Tesla on Autopilot crashes into a tow truck sticking out into the right lane (injuring the Tesla driver). The driver says it was on Autopilot but that he was distracted for a few seconds.
At the Automated Vehicle Summit, and in many other places, one of the watchwords is "sharing." Everything is going to be great because robocar technology enables "sharing." Yet people use the word to mean two different things -- taxi hailing and riding in groups -- and they don't really understand the real consequences of both.
I'm back from the AUVSI/TRB "Automated Vehicles Summit," this year in Orlando, Florida.
The opening session, kicked off by Chris Urmson of Aurora, was about current approaches to safety. In the various presentations, I noticed an evolution in thinking about safety, which I describe in this Forbes site article. We're moving away from incidents and miles and functional safety toward operational safety and risk management.
It's not a big surprise, but Cruise has announced they will not meet their goal of deploying in 2019. Cruise says deploying in San Francisco is 40x harder than a place like Phoenix where Waymo is deploying, but that once they solve this harder problem, they will be the leader.
Is that the right strategy? I examine this in a new Forbes site article:
People are working hard to get robocars to handle public streets, but they also need to handle private parking lots for parking, pick-up and drop-off. Private lots have all sorts of strange rules, so a system is needed to make it easy to map them and make those maps and rules available to cars. I outline such a system in a new Forbes site article found here:
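To make the idea concrete, here is a minimal sketch of what a machine-readable lot map with attached rules might look like. This is purely my own illustration under assumed field names (`LotZone`, `LotMap`, the `kind` values, and so on); the article's actual proposed system may differ.

```python
# Hypothetical sketch of a machine-readable private-lot map with rules.
# All class names, fields, and values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LotZone:
    """One zone inside a private lot, with rules a visiting robocar must obey."""
    zone_id: str
    kind: str                    # e.g. "parking", "pickup", "dropoff", "no_entry"
    max_speed_kph: float = 10.0
    hours: str = "24/7"          # when the zone's rules apply
    permit_required: bool = False

@dataclass
class LotMap:
    """A private lot's map plus rules, publishable to visiting cars."""
    lot_id: str
    owner_contact: str
    zones: list = field(default_factory=list)

    def zones_of_kind(self, kind: str):
        """Return every zone of the given kind, e.g. all pickup areas."""
        return [z for z in self.zones if z.kind == kind]

# Example: a small mall lot with a pickup area and restricted staff parking.
mall = LotMap("mall-27", "ops@example-mall.com", [
    LotZone("A1", "pickup", max_speed_kph=8.0),
    LotZone("B2", "parking", permit_required=True, hours="08:00-22:00"),
])
print([z.zone_id for z in mall.zones_of_kind("pickup")])  # ['A1']
```

The point of a shared schema like this is that a lot owner fills it in once, and any car company's fleet can fetch it and obey the rules without custom negotiation per lot.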
GM's "Cruise" robocar unit is often cited as #2 behind Waymo. Some recent leaks of their internal progress metrics paint a dim picture: they aren't nearly as far along as they hoped, which does not bode well for the planned 2019 launch. In fact, the numbers show them an order of magnitude behind where Google/Waymo was back in 2015.
You've seen the hype and battles over 5G. You may also have seen claims that one of the most important reasons we need 5G is communication with robocars. While more bandwidth and lower latency are never bad things, it's a mistake to presume the cars are going to depend on them, or that getting 5G is some sort of blocking factor.
I explain the (fairly low) bandwidth needs of cars in a new Forbes.com article:
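As a rough back-of-envelope illustration of why the needs are modest, consider the arithmetic below. Every number is my own assumption for the sketch (map patch size, telemetry and traffic-feed rates), not a figure from the article.

```python
# Hypothetical back-of-envelope estimate of a robocar's routine data needs.
# All numbers are illustrative assumptions, not measured figures.
map_update_mb_per_day = 20.0      # assumed incremental HD-map patches per day
telemetry_kbps = 8.0              # assumed position/status uplink
traffic_data_kbps = 16.0          # assumed live congestion/incident downlink

seconds_per_day = 24 * 3600
avg_kbps = (map_update_mb_per_day * 8 * 1000 / seconds_per_day
            + telemetry_kbps + traffic_data_kbps)
print(f"Average sustained need: {avg_kbps:.1f} kbit/s")
# → Average sustained need: 25.9 kbit/s
```

Even if every assumed number were off by 10x, the result sits comfortably within what 4G/LTE already delivers, which is the intuition behind the "fairly low" claim above.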