Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Today I want to look at some implications of Tesla’s Master Plan Part Deux which caused some buzz this week. (There was other news of course, including the AUVSI/TRB meeting which I attended and will report on shortly, forecast dates from Volvo, BMW and others, hints from Baidu, Faraday Future and Apple, and more.)
In Musk’s blog post, he lays out these elements of Tesla’s plan:
Integrate generation and storage (with SolarCity, the PowerWall and your car)
Expand into trucks and minibuses
Add more autonomy to Tesla cars
Hire out your Tesla as a robotaxi when you’re not using it
Except for the first one, all of these are ideas I have covered extensively here. It is good to see an automaker start work in these directions. As such, while I will mostly agree with what Tesla is saying, there are a few issues to discuss.
Electric (self-driving) minibuses and trucks
In my article earlier this year on the future of transit I laid out why transit should mostly be done with smaller (van sized) vehicles, taking ad-hoc trips on dynamic paths, rather than the big-vehicle, fixed-route, fixed-schedule approach taken today. The automation is what makes this happen (especially when you add the ability of single person robocars to do first and last miles.) Making the bus electric can make it greener, though making it run full almost all the time is far more important for that.
The same is true for trucks, but both trucks and buses have huge power needs, which presents problems for making them electric. Electric’s biggest problem here is the long recharge time, which puts your valuable asset out of service. For trucks, the big win of having a robotruck is that it can drive 24 hours a day; you don’t want to take that away by making it electric. This means you want to look into things like battery swap, or perhaps more simply tractor swap. In that case, a truck would pull in to a charging station and disconnect from its trailer, and another tractor that had just recharged would grab on and keep it going.
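To put rough numbers on why recharge downtime matters, here is a back-of-envelope sketch. All figures (speeds, range, charge and swap times) are my own illustrative assumptions, not Tesla’s:

```python
# Rough daily-utilization comparison for a robotruck (illustrative numbers only).

SPEED_MPH = 55               # assumed average highway speed

def daily_miles(driving_hours):
    return driving_hours * SPEED_MPH

# Diesel robotruck: short fueling stops, call it 23 driving hours per day.
diesel = daily_miles(23)

# Electric robotruck, assumed 300-mile range and a 1-hour fast charge:
# every leg of driving costs an hour of downtime.
range_miles = 300
charge_hours = 1.0
leg_hours = range_miles / SPEED_MPH
electric_fraction = leg_hours / (leg_hours + charge_hours)
electric = daily_miles(24 * electric_fraction)

# Tractor swap: the trailer keeps moving; only minutes lost per swap.
swap_minutes = 10
swap_fraction = leg_hours / (leg_hours + swap_minutes / 60)
swapped = daily_miles(24 * swap_fraction)

print(f"diesel:       {diesel:.0f} mi/day")
print(f"electric:     {electric:.0f} mi/day")
print(f"tractor swap: {swapped:.0f} mi/day")
```

Under these assumptions, plain recharging costs the truck roughly 150 miles a day, while tractor swap actually beats the diesel baseline, since a swap can be faster than a fuel stop.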
The cell phone ride-hail apps like Uber and Lyft are now reporting great success with actual ride-sharing, under the names UberPool, LyftLines and Lyft Carpool. In addition, a whole new raft of apps to enable semi-planned and planned carpooling is coming out and making changes.
The most remarkable number I have seen has Uber stating that 50% of rides in San Francisco are now UberPool. With UberPool, the system tries to find people with overlapping ride segments and quotes you a flat price for your ride. When you get in, there may already be somebody there, or your car may travel a small bit out of your way to pick up or drop somebody off. It’s particularly good for airports, but is also working in cities. The prices are often extremely good. During a surge it might be a much more affordable alternative.
It’s often been observed that as you watch any road, you see a huge volume of empty seats go down it. Even partially filling all those empty seats would make our roads vastly more efficient and higher capacity, as well as greener. Indeed, the entire volume of most transit systems could probably be easily absorbed, and a great deal more, if those empty seats were filled.
The strongest approach to date has been the hope that carpool lanes would encourage people to carpool. Sadly, this doesn’t happen very much. Estimates suggest that only 10% of the cars in the carpool lane are “induced” carpools — the rest are people like couples who already would have gone together. As such, many carpool lanes actually increase congestion rather than reducing it, because they create few induced carpools and take away road capacity. That’s why many cities are switching to HOT lanes where solo drivers can pay to get access to excess carpool lane capacity, or allowing electric/PHEV vehicles into the carpool lane.
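A toy calculation shows why a carpool lane with few induced carpools can make congestion worse. The traffic volumes below are invented for illustration; only the 10% induced figure comes from the estimates above:

```python
# Toy model: does converting a general lane to a carpool lane help?
# All traffic numbers here are illustrative assumptions.

lanes = 4
cars = 8000                 # cars/hour on this stretch
carpool_share = 0.12        # fraction of cars already carrying 2+ people
induced_rate = 0.10         # only ~10% of carpool-lane users are *induced* carpools

# Before: all cars spread over all lanes.
before_per_lane = cars / lanes

# After: one lane reserved for carpools. Each induced carpool merges two
# solo drivers into one car, removing one car from the road.
carpool_cars = cars * carpool_share
induced = carpool_cars * induced_rate
cars_after = cars - induced
general_after = (cars_after - carpool_cars) / (lanes - 1)

print(f"general-lane load before: {before_per_lane:.0f} cars/lane/hour")
print(f"general-lane load after:  {general_after:.0f} cars/lane/hour")
```

With these numbers, the handful of removed cars is nowhere near enough to compensate for losing a quarter of the road’s capacity, so the general lanes get noticeably more crowded.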
Most carpool apps today have a focus on people who are employees of the same company. Companies have had tools to organize carpools for ages, and this works modestly well, but typically the carpools are semi-permanent — the same group rides in together each day, sometimes trading off who drives. The companies provide incentives like cash and special parking.
The new generation of carpool apps (outside Uber) tend to focus on people at the same company, and as such they mostly work with big companies. There they can add the magic of dynamic carpooling, which means allowing people to be flexible about when they come and go, and matching them up with different cars of other employees. This makes sense as an early business for many reasons:
People can inherently trust their co-workers
Co-workers naturally share the same workplace, so you only have to find one who lives within a reasonable distance
Companies will subsidize the carpooling for many reasons, including saving on parking.
The subsidies can often include a very important one, the guaranteed ride back. Some of these apps say that when you want to leave, if they can’t find a carpool going near your house, they will provide alternate transportation, such as transit tickets or a Taxi/Uber style ride. This gives people the confidence to carpool in with one dynamically assigned group, knowing they will never be stuck at the office with no way home. Independent carpool services can also offer such a guarantee by adding a cost to every ride, but it’s easier for a company to do it. In fact, companies will often pay for the cost of the apps that do this, so that all the employees see is the car operating cost being shared among the poolers.
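The matching logic behind such apps can be sketched simply. This is a hypothetical matcher, not any real app’s algorithm; the detour and wait thresholds are invented for illustration:

```python
# Sketch of dynamic evening carpool matching with a guaranteed ride home.
# Data shapes and thresholds are hypothetical, not any real app's API.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    home: tuple          # (x, y) in km, toy coordinates
    leave_after: float   # earliest departure, hours since midnight

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def match_pool(rider, drivers, max_detour_km=2.0, max_wait_h=0.5):
    """Find a driver leaving soon whose home is near the rider's home."""
    candidates = [
        d for d in drivers
        if dist(d.home, rider.home) <= max_detour_km
        and 0 <= d.leave_after - rider.leave_after <= max_wait_h
    ]
    if candidates:
        return min(candidates, key=lambda d: dist(d.home, rider.home))
    return None  # no match: the guarantee kicks in -> transit ticket or taxi

rider = Person("alice", (1.0, 1.0), 17.5)
drivers = [Person("bob", (1.5, 1.2), 17.75), Person("carol", (8.0, 8.0), 17.6)]
match = match_pool(rider, drivers)
print(match.name if match else "guaranteed ride back: taxi/transit")
```

The `None` branch is where the guaranteed-ride-back subsidy lives: the app never strands a rider, it just falls back to a paid alternative.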
What has not happened much today is the potential of the multi-leg carpool, where you ride in one car for part of the trip, and another car (or another mode) for another part. Of course changing cars or modes is annoying compared to door-to-door transportation, though it’s the norm for transit riders.
Today, most carpool apps will have the driver go slightly off their route — often off the highway — to pick up a rider or return one home. (Normally the morning destination is a commercial building, usually the same building.)
A multi-leg service has some similarities to the concepts of multi-leg robocar transit I outlined previously. In one vision, the actual carpool sticks to highways and arterial roads, and never deviates from the expected route of the driver or any of the poolers. Poolers get to the carpool by using some other means — including a private Uber style ride — and then join it for the highway portion. If they are not going to the same place as other poolers, they can also use such a ride at the other end, though having two transfers reduces the appeal a fair bit.
This “last mile” leg can be something like Uber, or transit, or a bicycle (including one-way bicycle systems) or a “kiss and ride” drop-off by a spouse, or even another carpool. The difference is to make it dynamic, with live tracking of all parties involved, to reduce waits at the transfer points to very short times. (With robocars and vans, the waits will be measured in seconds, but human drivers won’t be that reliable.)
In spite of the inconvenience of having to do a transfer, if the wait is short, it’s better than the downsides of the driver or other poolers having to go far off the highway to handle a fellow pooler, and there can even be financial incentives to make things smooth.
Transfer points on arterials
The main barrier in the way of a truly frictionless transfer is the absence of good and easy places to do the transfer in many locations. This might be something that highway planners should consider in building or modifying future roads. The benefits can happen today, well before robocars, so it can get on the radar of the planners today. When the robocar transit arrives, tremendous benefits are possible.
Today, there is something a bit like this. In many cities, there are bus lines that run on highways. In some cases, bus stops have been built embedded in the highway, allowing the bus to stop without fully leaving the highway. A common example can be found at interchanges which have a private on-ramp/off-ramp lane that keeps merging traffic from interfering with primary traffic. Sometimes these are just off to the side of the regular highway, but in all cases the bus pulls off the highway and then into the bus stop. Riders have some safe path to the non-highway world, including bus stops on regular streets and arterials.
In the fast-transfer world, you want something like this, though you don’t necessarily need a path to other roads. A rider brought in an Uber can be dropped off there, and in interchanges with a private collector lane, the car that drops the rider off can easily get back onto the regular road in the opposite direction.
Consider an intersection that already has all the ingredients needed for carpool transfer points: collector lanes, long ramps and lots of spare space. Most intersections are not as adaptable, but new and reconstructed intersections can be adapted in much less space. In addition, transfer points may be possible in the center median, if there is room, under bridges, via the installation of a staircase from the bridge. (If there is no elevator, the disabled can be brought to the transfer point through a longer route that goes on the highway.) This is a common layout for transit lines which run down the median.
A full cloverleaf is better for the placement of transfer points, though there are other places they can go in other intersection designs. (It’s become popular of late to replace full cloverleaf intersections with the parclo design that comes from my home town of Toronto. This change is mostly done to avoid the complex merge and tight turns of a full cloverleaf, though robocars can handle the full clover just fine.) You can easily put some transfer points in a parclo; you just add an extra minute or two spent by the stopping carpool.
Transfer points are dirt cheap infrastructure, pretty much identical to bus stops, though ideally they would use angled parking so vehicles can come and go without blocking others. You do want space for a van or even a bus to come when you have found a super-carpool synergy, as will probably be the case at the peak of rush hour. Of course, if the volume of poolers grows very high, it justifies making larger transfer points and more of them. For super peak times, it’s OK to use transfer points that are just off the highways (where parking lots to do this are plentiful) because with high volume, pools are making just one stop to pick up passengers and can handle a small detour.
Transfer with parking
Of course, today the easiest way to do these carpools is with “carpool lots” not too far from the highway — places with spare parking which allow carpool riders to drive to the lot to meet their carpool driver. Indeed, carpoolers should be people who own cars, because the first goal is to take a car off the road that would otherwise have driven, and the second goal is to fill the empty seat with somebody who would otherwise have been on transit.
It can be difficult to get lots of parking convenient to the highway. One carpool lot I use has room for only about 50 cars. Nice that it’s there, but it takes no more than 50 cars off the road. At scale, one could imagine it being worthwhile to have shuttles from parking lots to on-highway transfer points, though nobody likes having to do 3 or 4 legs for a trip unless there is zero wait time. If robocars were not coming, one could imagine designing future highways with transfer points connected to parking lots. The people of the past did not imagine robocars or cell phone coordination of carpooling.
It’s not surprising there is huge debate about the fatal Tesla autopilot crash revealed to us last week. The big surprise to me is actually that Tesla and MobilEye stock seem entirely unaffected. For many years, one of the most common refrains I would hear in discussions about robocars was, “This is all great, but the first fatality and it’s all over.” I never believed it would all be over, but I didn’t expect there would barely be a blip.
There have been lots of blips in the press and online, of course, but most of the coverage rests on some pretty wrong assumptions. Tesla’s autopilot is a distant cousin of a real robocar, which explains why the fatality is no big deal for the field, but the coverage shows that people don’t know that.
Tesla’s autopilot is really a fancy cruise control. It combines several key features from the ADAS (Advanced Driver Assistance Systems) world, such as adaptive cruise control, lane-keeping and forward collision avoidance, among others. All these features have been in cars for years, and they are also combined in similar products in other cars, both commercial offerings and demonstrated prototypes. In fact, Honda promoted such a function over 10 years ago!
Tesla’s autopilot primarily uses the MobilEye EyeQ3 camera, combined with radars and some ultrasonic sensors. It doesn’t have a lidar (the gold standard in robocar sensors) and it doesn’t use a map to help it understand the road and environment.
Most importantly, it is far from complete. There are tons of things it is not able to handle. Some of those things are known, some are unknown. Because of this, it is designed to work only under constant supervision by a driver. Tesla drivers have this explained in detail in their manual and when they turn on the autopilot.
ADAS cars are declared not to be self-driving cars in many state laws
This is nothing new — lots of cars have lots of features to help drive (including components like cruise control, each available on its own) which are not good enough to drive the car, and are only supposed to augment an alert driver, not replace one. Because car companies have been selling things like this for years, when the first robocar laws were drafted, they made sure there was a carve-out in the laws so that their systems would not be subject to the robocar regulations companies like Google wanted.
The Florida law, similar to other laws, says:
The term [Autonomous Vehicle] excludes a motor vehicle enabled with active safety systems or driver assistance systems, including, without limitation, a system to provide electronic blind spot assistance, crash avoidance, emergency braking, parking assistance, adaptive cruise control, lane keep assistance, lane departure warning, or traffic jam and queuing assistant, unless any such system alone or in combination with other systems enables the vehicle on which the technology is installed to drive without the active control or monitoring by a human
The Tesla’s failure to see the truck was not surprising
There’s been a lot of writing (and I did some of it) about the particulars of the failure of Tesla’s technology, and what might be done to fix it. That’s an interesting topic, but it misses a very key point. Tesla’s system did not fail. It operated within its design parameters, and according to the way Tesla describes it in its manuals and warnings. The Tesla system, not being a robocar system, has tons of stuff it does not properly detect. A truck crossing the road is just one of those things. It’s also poor on stopped vehicles and many other situations.
Tesla could (and in time, will) fix the system’s problem with cross traffic. (MobilEye itself has that planned for its EyeQ4 chip coming out in 2018, and freely admits that the EyeQ3 Tesla uses does not detect cross traffic well.) But fixing that problem would not change what the system is, and not change the need for constant monitoring that Tesla has always declared it to have.
Today at Starship, we announced our first pilot projects for robotic delivery, which will begin operating this summer. We’ll be working with London food delivery startup Pronto, German parcel company Hermes and the Metro Group of retailers, plus restaurant food delivery service Just Eat, to trial on-your-schedule delivery of packages, groceries and meals to people’s homes.
(It’s a nice break from Tesla news — and besides, our little robots weigh so little and move so slowly that even if something went horribly wrong and they hit you, injury is quite unlikely.)
Hermes, which does traditional package delivery, is very interested in what I think is one of the core values of robot delivery — namely delivery on the recipient’s schedule. Today, delivery is done on the schedule of delivery trucks, and you may or may not be home when it arrives. With a personal delivery robot, it will only come when you’re home, reducing the risk of theft and lost packages. Robots don’t mind waiting for you.
The last mile is a huge part of the logistics world. Starship robots will get you packages with less cost, energy, time, traffic, congestion and emissions than going to the store to get them yourself. They use a combination of autonomous driving and human control centers able to remotely fix any problems the robots can’t figure out. Robots don’t mind pausing if they have a problem, and our robots can stop in under 30cm. As we progress, operation will reach near full autonomy and super low cost.
Executive Summary: A rundown of different approaches for validation of self-driving and driver assist systems, and a recommendation to Tesla and others to have countermeasures to detect drivers not watching the road, and permanently disable their Autopilot if they show a pattern of inattention.
The recent fatality of a man who was allowing his car to be driven by the Tesla “autopilot” system has ignited debate on whether it was appropriate for Tesla to allow their system to be used as it was.
Tesla’s autopilot is a driver assist system, and Tesla tells customers it must always be supervised by an alert driver ready to take the controls at any time. The autopilot is not a working self-driving car system; it’s not rated for all sorts of driving conditions, and there are huge numbers of situations that it is not designed to handle and can’t handle. Tesla knows that, but the public, press and Tesla customers forget it, and there are many Tesla users who treat the autopilot like a real self-driving car system and do not pay attention to the road. Tesla is aware of that as well. The press made this mistake too, regularly writing fanciful stories about how Tesla was ahead of Google and other teams.
Brown, the driver killed in the crash, was very likely one of those people, and if so, he paid for it with his life. In spite of all the warnings Tesla may give about the system, some users do get a false sense of security. There is debate over whether that means driver assist systems are a bad idea.
There have been partial self-driving systems that require supervision since the arrival of cruise control. Adaptive cruise control is even better, and other car companies have released autopilot-like systems which combine adaptive cruise control with lane-keeping and forward collision avoidance, which hits the brakes if you’re about to rear-end another car. Mercedes has sold a “traffic jam assist” like the Tesla autopilot since 2014, though in the USA it runs only at low speeds. You can even go back to a Honda demo in 2005 of an autopilot-like system.
With cruise control, you might relax a bit, but you know you have to pay attention. You’re steering, and for a long time even adaptive cruise controls did not slow down for stopped cars. The problem with Tesla’s autopilot is that it is more comprehensive and better performing than earlier systems, and even though it has tons of things it cannot handle, people started to trust it with their lives.
Tesla’s plan can be viewed in several ways. One view is that Tesla was using customers as “beta testers,” as guinea pigs for a primitive self-drive system which is not production ready, and that this is too much of a risk.

Another is that Tesla built (and tested) a superior driver assist system with known and warned limitations, and customers should have listened to those warnings.
Neither is quite right. While Tesla has been clear about the latter stance, knowing that people will over-trust the system, we must face the fact that it is not only the daring drivers who put themselves at risk; others on the road are also put at risk by the over-trusting drivers, or perhaps by Tesla. What if the errant car had not gone under a truck, but instead hit another car, or even plowed into a pedestrian when it careened off the road after the crash?
At the same time, Tesla’s early deployment approach is a powerful tool for the development and quality assurance of self-drive systems. I have written before about how testing is the big unsolved problem in self-driving cars. Companies like Google have spent many millions on a staff of paid drivers to test their cars for 1.6 million miles. This is massively expensive and time consuming, and even Google’s money can’t easily generate the billions of miles of testing that some feel might be needed. Human drivers have about 12 fatalities per billion miles, and we want our self-driving cars to do much better. Just how we’ll get enough verification and testing done to bring this technology to the world is not a solved problem.
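To see why the numbers get so big, here is a simple Poisson sketch. The 12-fatalities-per-billion-miles human rate is the rough figure from above; the rest is standard statistics:

```python
# How many fatality-free miles are needed to show a self-drive system
# beats a given fatality rate? Simple Poisson zero-event argument.
import math

HUMAN_RATE = 12 / 1e9    # fatalities per mile (rough US figure)

def miles_for_confidence(target_rate, confidence=0.95):
    """Miles of fatality-free driving needed so that, if the true rate were
    target_rate, observing zero fatalities would be this unlikely."""
    return -math.log(1 - confidence) / target_rate

# Just matching humans, with zero fatalities observed:
print(f"{miles_for_confidence(HUMAN_RATE) / 1e6:.0f}M miles")
# Demonstrating 10x better than human:
print(f"{miles_for_confidence(HUMAN_RATE / 10) / 1e9:.1f}B miles")
```

Under these assumptions, it takes roughly a quarter-billion fatality-free miles just to match the human record at 95% confidence, and billions of miles to demonstrate a large improvement.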
A Tesla blog post describes the first fatality involving a self drive system. A Tesla was driving on autopilot down a divided highway. A truck made a left turn and crossed the Tesla’s lanes. A white truck body against a bright sky is not something the MobilEye camera system in the Tesla perceives well, and it is not designed for cross traffic.
The truck trailer was also high, so when the Tesla did not stop, it went “under” it, and the windshield was the first part of the Tesla to hit the truck body, with fatal consequences for the “driver.” Tesla notes that the autopilot system has driven 130 million miles, while human drivers in the USA have a fatality about every 94 million miles (though the interval is longer on the highway). The Tesla is a “supervised” system, where the driver must agree they are monitoring the system and will take control in the event of any problem, but this driver, a major Tesla fan named Joshua Brown, did not hit the brakes. As such, the fault for this accident will presumably reside with Brown, or perhaps the truck driver; the accident report says the truck did fail to yield to oncoming traffic, but as yet the driver has not been cited for this. (Tesla also notes that had the front of the car hit the truck, the crumple zones and other safety systems would probably have saved the driver; hitting a high target is the worst-case situation.)
Any commentary here is preliminary until more facts are established, but here are my initial impressions:
There has been much speculation about whether Tesla took too much risk by releasing the autopilot so early, and this crash will intensify that debate.
In particular, a core issue is that the autopilot works too well, and I have seen reports from many Tesla drivers of them trusting it far more than they should. The autopilot is fine if used as Tesla directs, but the better it gets, the more it encourages people to over-trust it.
Both Tesla stock and MobilEye stock were up today, with a bit of a downturn after hours. The market may not have absorbed this yet. MobilEye makes the vision sensor the Tesla uses to power the autopilot, and the failure to detect the truck in this situation is a not-unexpected result for that sensor.
For years, I have frequently heard it said that “the first fatality with this technology will end it all, or set the industry back many years.” My estimation is that this will not happen.
One report suggests the truck was making a left turn, which is a more expected situation, though a truck that turns across oncoming traffic would be at fault.
Another report suggests that “friends” claim that the driver often used his laptop while driving, and some sources claim that a Harry Potter movie was playing in the car. (A portable DVD player was found in the wreckage.)
Tesla’s claim of 130M miles is a bit misleading, because most of those miles were actually supervised by humans. That’s like reporting the record of student drivers with a driving instructor always there to take over. And indeed there are reports of many, many people taking over for the Tesla autopilot, as Tesla says they should. So at best Tesla can claim that the supervised autopilot has a record similar to human drivers, i.e. is no better than the humans on their own. Though one incident does not a driving record make.
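The “one incident” point can be made quantitative. With a single fatality observed, the exact Poisson 95% confidence interval on the true rate spans more than two orders of magnitude (the interval constants below are standard statistical table values for one observed event):

```python
# Why "130M miles, one fatality" proves little: with a single Poisson event,
# the 95% confidence interval on the underlying rate is enormous.
# The bounds [0.0253, 5.572] are the standard exact Poisson 95% CI for k=1.

miles = 130e6

lo_events, hi_events = 0.0253, 5.572
rate_low = lo_events / miles     # fatalities per mile, lower bound
rate_high = hi_events / miles    # upper bound

human_rate = 1 / 94e6
print(f"Tesla 95% CI: one fatality per {1/rate_high/1e6:.0f}M "
      f"to {1/rate_low/1e6:.0f}M miles")
print("Human rate:   one fatality per 94M miles")
```

The human rate sits comfortably inside that interval, so this single crash cannot tell us whether the supervised autopilot is better or worse than unaided human driving.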
Whatever we judge about this, the ability of ordinary users to test systems, if they are well informed and understand what they are doing is a useful one that will advance the field and give us better and safer cars, faster. Just how to do this may require more discussion, but the idea of doing it is worthwhile.
MobilEye issued a statement reminding people that their system is not designed to handle cross traffic at present, though their 2018 product will. It is also worth noting that the camera they use sees only red and gray intensity; it does not see all the colours, giving it an even harder time with the white truck and bright sky. The sun was not a factor; it was high in the sky.
The truck driver claims the Tesla changed lanes before hitting him, an odd thing for the autopilot to do, particularly if the driver was not paying attention. The lack of braking suggests the driver was not watching the road.
Camera vs. Lidar, and maps.
I have often written about the big question of cameras vs. LIDAR. Elon Musk is famously on record as being against LIDAR, when almost all robocar projects in the world rely on LIDAR. Current LIDARs are too expensive for production automobiles, but many companies, including Quanergy (where I am an advisor) are promising very low cost LIDARs for future generations of vehicles.
Here there is a clear situation where LIDAR would have detected the truck. A white truck against the sky would be no issue at all for a self-driving capable LIDAR; it would see it very well. In fact, a big white target like that would be detected beyond the normal range of a typical LIDAR. That range is an issue here: most LIDARs would only detect other cars about 100m out, but a big white truck would be detected a fair bit further, perhaps even 200m. 100m is not quite far enough to stop in time for an obstacle like this at highway speeds; however, such a car would brake enough to make the impact vastly less severe, and a clever car might even have had time to swerve, or to aim for the wheels of the truck rather than slide underneath the body.
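Some back-of-envelope physics on that range question. The braking figure here is an assumption: 0.35g is a conservative, wet-road deceleration (dry emergency braking can approach 0.7g or more), plus a half-second system reaction delay:

```python
# Stopping distance vs. sensor range (illustrative physics, not any
# specific vehicle). Assumes constant deceleration after a reaction delay.

def stopping_distance_m(speed_mph, decel_g=0.35, reaction_s=0.5):
    v = speed_mph * 0.44704                      # convert to m/s
    return v * reaction_s + v**2 / (2 * decel_g * 9.81)

for mph in (55, 65, 75):
    d = stopping_distance_m(mph)
    status = "within" if d < 100 else "exceeds"
    print(f"{mph} mph: {d:.0f} m needed ({status} a 100 m LIDAR range)")
```

At 65 mph with conservative braking, the car needs well over 100m, which is why detecting a big truck at 200m matters; with hard dry-road braking the same stop fits inside 100m, which is why a 100m-range LIDAR would still have scrubbed off most of the speed.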
Another sensor that is problematic here is radar. Radar would have seen this truck, no problem, but since the truck was perpendicular to the travel of the car, it was not moving away from or towards the car, and thus had the doppler speed signature of a stopped object. Radar is great because it tracks the speed of obstacles, but because there are so many stationary objects, most radars have to simply disregard such signals; they can’t tell a stalled vehicle from a sign, bridge or berm. To help with that, a map of where all the fixed radar reflection sources are located can help. If you get a sudden bright radar return from a truck or car somewhere the map says a big object is not known to be, that’s an immediate sign of trouble. (At the same time, it means you don’t easily detect a stalled vehicle next to a bridge or sign.)
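The map-assisted filtering idea can be sketched as follows. The map format, thresholds and return strings here are all hypothetical, just to show the logic:

```python
# Sketch of map-assisted radar filtering: a strong, zero-doppler return is
# alarming only if no fixed reflector (bridge, sign, berm) is mapped nearby.
# Map format and thresholds are hypothetical.

fixed_reflectors = {(120, 0), (180, -3)}   # mapped (range_m, lateral_m) of known statics

def classify_return(range_m, lateral_m, doppler_mps, strength):
    if abs(doppler_mps) > 1.0:
        return "moving object: track it"
    if strength < 0.5:
        return "weak static return: ignore"
    # Strong return with ~zero doppler: is a fixed object mapped near here?
    near_mapped = any(abs(range_m - r) < 5 and abs(lateral_m - l) < 2
                      for r, l in fixed_reflectors)
    if near_mapped:
        return "mapped fixed object: ignore"
    return "ALERT: unmapped stopped object"

print(classify_return(120, 0, 0.0, 0.9))   # the mapped bridge: ignored
print(classify_return(90, 1, 0.1, 0.9))    # crossing truck: strong, ~zero doppler, unmapped
```

The crossing truck looks exactly like a bridge to a plain doppler filter; only the map lets the system tell the two apart, and as noted, a stalled car parked right under a mapped bridge would still slip through.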
One solution to this is longer range LIDAR or higher resolution radar. Google has said it has developed longer range LIDAR. It is likely in this case that even regular range LIDAR, or radar and a good map, might have noticed the truck.
With Mobility on Demand, you don’t buy a car, you buy rides. That’s certainly Uber’s plan, and is a plan that makes sense for Google, Apple and other no-car companies. But even Daimler, with Car2Go/Car2Come, BMW with DriveNow and GM with Lyft plan to sell you a ride rather than a car, because it’s the more lucrative thing to do.
So what does that car of the future look like? There is no one answer, because in this world, the car that is sent to pick you up is tailored to your trip. The more people traveling, the bigger the car is. If your trip does not involve a highway, it may not be a car capable of the highway. If your trip is up to a mountain cabin, it’s more like an SUV, but you never use an SUV to go get a bottle of milk the way we do today. If it’s for a cruise to the beach on a sunny day, the roof may have been removed at the depot. If it’s for an overnight trip to a country home, it may be just beds.
I outlined many of these changes in this article on design changes in cars but today I will focus on the incredibly cheap and simple design of what should become the most common vehicle made, namely the car designed for a short urban trip by one person. That’s 80% of trips and around 45% of miles, so this should be a large fraction of the fleet. I predict a lot of these cars will be made every year — more than all the cars made today, even though they are used as taxis and shared among many passengers.
What does it look like?
A car for 1-2 people will be small. It will probably be around 1.5m wide, narrow enough that you can fit two in a lane, and able to park very efficiently when it has to wait. If it’s for just one person, it won’t be very long either. For two people, there will be a “face to face” configuration which is longer and a “tandem” configuration which is a bit shorter. The 2-person vehicles aren’t a lot bigger or heavier than the 1-person ones, so they might be the most common cars, since you can serve a solo rider fairly efficiently with one, even if not perfectly efficiently.
A car that is so narrow can’t corner very fast; a wide stance is much more stable. There are a few solutions to that, including combinations of these:
The wheels bank independently, allowing the vehicle to lean like a motorcycle when in corners. This is the best solution, but it costs some money.
Alternately it’s a two-wheeler, which is also able to lean, but needs other tricks, like the LIT Motors C-1, to stay upright.
It’s electric, and has all the batteries in the floor, giving it a very low center of gravity. (One extreme example of this is the Tango, which uses lead batteries deliberately to give it that stability.)
It never goes on fast roads, so it never needs to corner very fast, and its precision robot driving assures it never corners so fast as to become unstable, and it plans its route accordingly.
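The low center of gravity option is more powerful than it might seem. A rigid (non-leaning) vehicle tips when lateral acceleration exceeds g times half the track width over the CG height, so dropping the CG buys back most of what the narrow track loses. The dimensions below are illustrative guesses, not real vehicle specs:

```python
# Static rollover threshold: a rigid vehicle tips when lateral acceleration
# exceeds g * (half track width) / (CG height). Illustrative numbers only.
import math

G = 9.81

def max_corner_speed_kmh(track_m, cg_height_m, radius_m):
    a_max = G * (track_m / 2) / cg_height_m   # lateral accel at tipover
    return math.sqrt(a_max * radius_m) * 3.6

# Ordinary car: ~1.6 m track, ~0.55 m CG height.
# Narrow pod: ~1.1 m track, but floor batteries give a ~0.35 m CG.
for name, track, cg in [("ordinary car", 1.6, 0.55),
                        ("narrow pod", 1.1, 0.35)]:
    v = max_corner_speed_kmh(track, cg, radius_m=30)
    print(f"{name}: tips above ~{v:.0f} km/h on a 30 m radius corner")
```

With these guesses, the narrow pod with floor batteries actually matches or slightly beats an ordinary car’s tipover threshold, which is the point of putting the batteries down low.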
Not super aerodynamic
The car already has a big win on aerodynamic drag just by being half-width. The non-highway version probably gives back a bit of that, because you don’t need to worry as much about drag if you are not going fast. Drag force goes up with the square of velocity, so a 30mph car has 1/4 the drag of a 60mph car, and 1/8th the drag of a full-width car at 60mph. The highway car needs to be shaped as close to a teardrop as possible, but the city car can get away with being a bit taller for more comfortable seating and entry/exit.
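The drag arithmetic is easy to check with the standard drag equation. The drag coefficient and frontal areas below are generic illustrative numbers, not any real vehicle’s:

```python
# Aerodynamic drag: force scales with frontal area and the square of speed.
# Illustrative numbers (generic Cd, sea-level air density).

def drag_force_n(area_m2, speed_mps, cd=0.30, rho=1.225):
    return 0.5 * rho * cd * area_m2 * speed_mps**2

full_width_60 = drag_force_n(2.2, 26.8)   # full-width car at ~60 mph
half_width_60 = drag_force_n(1.1, 26.8)   # half-width car, same speed
half_width_30 = drag_force_n(1.1, 13.4)   # half-width city car at ~30 mph

print(f"half width at 60 mph: {half_width_60 / full_width_60:.2f}x the drag")
print(f"half width at 30 mph: {half_width_30 / full_width_60:.3f}x the drag")
```

Halving the frontal area halves the drag, and halving the speed quarters it, giving the half-width 30mph city car one eighth the drag force of a full-width highway car.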
Political debate is going overboard these days. I travel overseas all the time and if I reveal I live in the USA, you can’t stop people from asking about Trump. It’s getting frustrating and boring. But to avoid contentious topics, let’s talk about guns!
As a Canadian, I’ve seen how the gun rules in Canada work. It’s the culture most similar to the USA in the world, with tons of rifle ownership, but almost no handguns and comparatively few handgun deaths. So I don’t doubt that something can be done. On the other hand, I am also a strong supporter of the Bill of Rights, and even though I don’t like the 2nd amendment, I can’t disregard it or pretend that it’s weakened very much by the Militia clause. And for the foreseeable future, the second amendment is not going to be repealed. Without repealing it, you can’t do a lot. In spite of all the bad press, the AR-15 was easily redesigned to comply with the “assault weapon ban” that was temporarily in effect, and unless they can figure out how to ban the semi-automatic hunting rifle under the 2nd amendment, it’s not likely much can happen here.
Here’s a much more radical proposal. Modify the second amendment to reserve the power to regulate firearms to the states. In other words, make it a states rights issue more than a weapons issue. The new amendment would empower a state’s constitution to supersede the 2nd amendment. If the state does not include such a rule in its constitution, the original 2nd amendment would still apply there. Each state would have to follow its own constitutional procedures to declare new rules, explicitly declaring them as replacing the 2nd amendment.
This is not really ideal, of course. There would be a patchwork of laws. Many guns would end up being illegal in some states and legal in others. And of course, it would not be that hard to smuggle such guns into a state that bans them, for use there in criminal activity. As such, there would still be a fair bit of gun crime using guns supposedly banned in a location. Still, I think it could cause a significant reduction in gun crime.
It’s also the only thing I can think of that has a chance of passing. Many of those who champion gun rights also champion states’ rights. While clearly some states would move to restrict gun ownership, gun proponents could not only keep their state unrestricted, but they could actually reduce gun restrictions if they wished to, even removing any effect of the Militia clause.
Illegal weapons would still be present, but these changes would reduce the culture of gun ownership and gun use. In Canada, many of us have rifles, including semi-automatic hunting rifles. I was taught to shoot as a child, as were most kids I knew. But handguns are almost unknown. It seems to make a difference — it seems people are less likely to end up shooting somebody in a domestic dispute if they have to handle a physically large weapon. It seems the guns are less likely to be used in anger. It seems that the less the weapons are around, the less they are used, not by criminals, but by ordinary people who get angry. Canada had 172 firearm homicides compared to the USA’s 8,800, with just a handful caused by handguns. Canadians have 10 million guns (and a million handguns allowed only for police and guards or for use on a gun range.)
With this change, some states would have the power to make themselves a bit more like Canada if their democratic will is that way.
Clinton and Trump
Now that the parties have their candidates (I bet wrong on Trump) one thing I have been disappointed to see is Clinton (and Obama) ripping into Trump, calling him out on his lies and crazy statements. Do they imagine the electorate that listens to them doesn’t know about these things?
I would have loved to see Clinton make a decision to not mention the word Trump for the rest of the election. If she absolutely has to, get surrogates like Bill Clinton and President Obama to do it, but ideally not even them. Run a clean campaign, with all the focus on why she would be a good President. The reality is that there is tons of coverage of the negatives of what Trump says. Getting into a mud battle with him is the wrong decision. He likes mud, is already covered in it, and is better at it.
While a lot of press attributed the idea to him, Musk is actually restating almost exactly the well known thesis of Nick Bostrom on this topic, which has spawned much debate (some of which can be seen at the site linked.) The short precis of the thesis is as follows:
If you accept that the eventual progression of our work in creating digital (or “simulated”) worlds is to make ones that match our reality, then you probably accept that once we can do this, we will do it a whole lot, and that eventually there will be very large numbers of created digital worlds, many based on our own. If that’s true, then the probability that any particular world (including this one, of course) is the original one is vanishingly small.
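The counting step can be made concrete with a toy calculation, a simplified version of Bostrom’s argument that assumes every world hosts a similar number of observers:

```python
# Toy version of the counting argument: if the single "root" world
# eventually runs N indistinguishable digital worlds, and each hosts
# similar numbers of observers, a randomly chosen observer's chance
# of being in the root is 1 / (N + 1).
def p_root(num_synthetic_worlds):
    return 1.0 / (num_synthetic_worlds + 1)

for n in (0, 9, 999, 10**6):
    print(n, p_root(n))  # shrinks toward zero as worlds multiply
```

The point is only that the probability of being in the original world falls as fast as the number of created worlds grows; the equal-observer assumption is doing a lot of hidden work.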
Like many, I find the argument interesting, though not quite so compelling, as it contains some logical fallacies. For one, even in the “root” universe, the argument is equally compelling, but also clearly false.
I also oppose the term “simulation.” For far too many, “simulated” means “not real” or “less real.” This world is clearly “real” even if it is synthetic and based on computation. If you accept the truth of “I think therefore I am,” then you are thinking, not engaging in a simulation of thinking. (Just as AlphaGo doesn’t simulate playing Go, it plays Go.)
Better terms include “computational” and “synthetic,” or other synonyms like “digital,” “emulated,” or “artificial.”
Leaving aside the debate over the merits of the argument, let’s assume it’s true for the moment. The biggest consequence of synthetic is that it means created. As in, “there is a creator/god” in the sense of a being who created this universe and who is in some limited way omnipotent over it and in another limited way omniscient about it. I say a limited way, because this “god” is perhaps a programmer named Martha who has a few hundred digital Earths running in her dorm room. A being perhaps (but not surely) exactly like us in her world, but with the potential ability to observe and change anything about this one.
That is a theistic view, though quite unlike typical theist doctrines. (It bears a small and bizarre similarity to Mormon theology which teaches that our god was once an ordinary being on another world who was rewarded with his own new world to be god of.)
From what we can observe, Martha doesn’t interfere overtly with this world. As such, the first conclusion is that even if you believe in this, it should not change very much about how you live your life. If you have no shot at interaction with the “parent” universe, and there is always the chance this whole thesis is false, you should go about being you as though you felt you lived in the root or “first” universe — what you might incorrectly call the “real” one.
There are some changes that are justified if you believe this, though. They are grand philosophical changes, but some apply to Elon Musk himself.
You see, Elon has made it his prime life goal to get humanity off the Earth. To stop us from being a “one planet species” which would be wiped out if something catastrophic happens here. History shows that bad things have happened naturally (like asteroid strikes) and more bad things could happen due to the works of humanity, like killer diseases or nuclear winters. As such, Elon’s goal of getting a self-supporting colony on Mars is a grand one, well worthy of being a prime life-goal for a world-shaker.
But it’s taken down a peg if you accept the synthetic world hypothesis. Now, you conclude it’s very likely that this is very much not the only cradle of humanity. That there are probably millions or billions of them. That even this one quite probably has backups taken every so often, so that even if we wipe ourselves out, all can be preserved and even restarted, if Martha wants to.
We don’t know anything about Martha’s motives, other than that she appears not to interfere noticeably. Martha might not even be remotely human, though once again, the probability is (at least from our viewpoint) that beings would create more synthetic worlds like their own than entirely different experiments. But if you believe in Martha, then you believe we are not alone, and that alters goals about the future of humanity.
If you want to get more extreme, there is also an issue with Mars. While again, we have no information on Martha’s goals for this project, it seems likely, unless resources are truly free, that most synthetic worlds will be just the surface of the Earth, just the interesting part in question. Running an entire galaxy or an entire universe is many orders of magnitude more costly. Sure, you might run some of them, but if you can run a trillion Earths for the cost of a couple of galaxies, that’s gotta bend things a bit.
As such, the rest of the universe truly is “simulated” in that it’s just being computed with barely enough resources to make the few photons which reach us be realistic. (Or it’s just a playback of an earlier run.) Many fans of this theory like that it explains Fermi’s famous paradox — no aliens have visited because there are not any — in this universe.
It’s hard to imagine, unless computation is totally free, that there would not be any “optimization” of the computation. Now, at the extreme, this would mean the parts of your house that nobody is looking at would be computed at a lower resolution, and that indeed, if a tree fell in the forest and nobody was there to hear it, it truly would not make a sound in a full way. That’s very philosophically spooky, but less spooky is the idea that until we went to it, Mars the planet would not even be “booted up” into our universe. When probes arrived, it might have been fully started, but more likely only where the probes went — the rest would just be a recorded copy of the original Mars, presuming there is such a thing in Martha’s world, as the whole sky would be.
As such, Mars would become a place that only “fully exists” (which is to say, is being computed at full resolution) because we went there. Somewhat less satisfying.
Still worth going?
People imagining the idea of a synthetic, computed Earth do like to speculate about the motives of its creation. If Martha is just like us, then they probably have rules and ethics about doing this. There are huge ethical questions about all the suffering and evil that comes with creating a universe. One rule I’ve imagined is that the creator really has some duties to the people inside. Those might include having a heaven of some sorts, or even letting people graduate up to the parent universe and gaining rights there. The most impressive might even get to chat with Martha, though she only has time for a few. Perhaps somebody who does something truly great, like taking humanity off-planet, gets some reward for it. We can suppose this because we might do something like that if we were making these computed places. But we really have no evidence for any of that. Some would argue there is almost nothing ethical about creating a world with so much misery and keeping the inhabitants in the dark about the reality to boot. At least by our standards — not theirs.
Is there a root?
One popular theme is to suggest that Martha’s universe is also synthetic, and there is another creator above her. I describe this by saying, “It’s turtles all the way up.” Nobody can truly be sure they are in the root of the tree.
This is particularly interesting if you speculate that the rules of our universe, when we finally learn them in depth, will show that computation lies at the bottom of everything. This has often been speculated, and most of the quests for a unified “theory-o’-everything” tend to try to express the rules in simpler and simpler mathematics. People care about that because for now, this theory is based on the idea that computation is being used to simulate the physics of a “real” universe, one made of particles and forces. We are only able to see the particles and forces, and so might conclude we aren’t digital. Particularly when emulating the activity of subatomic particles is today very expensive computationally. It makes implementing a synthetic universe at the deep level seem impossible to us. If there are deeper rules that are computational, then you can also postulate that the “root” universe could also be computational. In fact, you sort of need that, because it’s hard to figure out how to get the resources needed to have worlds within worlds if you have to implement particles based on computation done with particles which are based on computation and so on. You quickly run out. If, on the other hand, you are in a universe of computation and you create sub-worlds, you can just give those sub-worlds access to the computation substrate of your own, and it scales a lot better.
We like to believe our universe is made of particles which are physical and bounce off one another and follow analog rules. But we don’t know that’s true. The rules of our universe are a mystery to us. We don’t know where they came from, and we can’t even declare that whatever they are, any parent or root universe might not run on the same rules or a variant of them.
So should you believe this is a synthetic, computational universe — or simulation if you insist? Well, you can, but unless you are leading a mission to Mars it is not greatly productive. When the time comes — as it will — that we make our own small digital worlds that match our own for reality, doubt will of course increase, but as long as Martha remains hands-off, live your life as you always would have. One of the more spooky ideas in this theory is Last Thursdayism — the idea that there is no way to tell this world wasn’t forked from a backup last Thursday, that all of your memories before then happened to a predecessor. Perhaps that’s true, but again it doesn’t alter how you should spend your days. Indeed, it is not my goal to convince Elon to abandon his quest for Mars at all; that’s worthy even if it doesn’t help save humanity.
When I give talks on robocars, the most common question, asked almost all the time, is the one known as the “trolley problem” question: “What will the car do if it has to choose between killing one person or another?” or related dilemmas. I have written frequently about how this is a very low priority question in reality, far more interesting to philosophy classes than it is important in practice. It is a super-rare event, and there are much more important everyday ethical questions that self-driving car developers must solve long before they tackle this one.
In spite of this, the question persists in the public mind. We are fascinated and afraid of the idea of machines making life or death decisions. The tiny number of humans faced with such dilemmas don’t have a detailed ethical debate in their minds; they can only go with their “gut” or very simple and quick reasoning. We are troubled because machines make no distinction between instant reactions and carefully pondered ones. The one time in billions of miles(*) that a machine faces such a question it would presumably make a calculated decision based on its programming. That’s foreign to our nature, and indeed not a task desired by programmers or vendors of robocars.
There have been calls to come up with “ethical calculus” algorithms and put them in the cars. As a programmer, I could imagine coding such an algorithm, but I certainly would not want to, nor would I want to be held accountable for what it does, because by definition, it’s going to do something bad. The programmer’s job is to make driving safer. On their own, I think most builders of robocars would try to punt the decision elsewhere if possible. The simplest way to punt the decision is to program the car to follow the law, which generally means to stay in its right-of-way. Yes, that means running over 3 toddlers who ran into the road instead of veering onto the sidewalk to run over Hitler. Staying in our lane is what the law says to do, and you are not punished for doing it. The law strongly forbids going onto the sidewalk or another lane to deliberately hit something, no matter who you might be saving.
We might not like the law, but we do have the ability to change it.
Thus I propose the following: Driving regulators should create a special panel which can rule on driving ethics questions. If a robocar developer sees a question which requires some sort of ethical calculation whose answer is unclear, they can submit that question to the panel. The panel can deliberate and provide an answer. If the developer conforms to the ruling, they are absolved of responsibility. They did the right thing.
The panel would of course have people with technical skill on it, to make sure rulings are reasonable and can be implemented. Petitioners could also appeal rulings that would impede development, though they would probably suggest answers and describe their difficulty to the panel in any petition.
The panel would not simply be presented with questions like, “How do you choose between hitting 2 adults or one child?” It might make more sense to propose formulae for evaluating multiple different situations. In the end, it would need to be reduced to something you can do with code.
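Purely as a hypothetical illustration of what “reduced to something you can do with code” could mean, a ruling might end up as a lookup from situations the car can actually classify to sanctioned actions, with the stay-in-your-lane default described above. Every situation name and action here is invented for the sketch:

```python
# Hypothetical sketch: a panel ruling reduced to a lookup table.
# Situation labels and actions are invented for illustration; the
# default matches the legal behavior the article describes
# (stay in your right-of-way and brake).
PANEL_RULINGS = {
    "obstacle_in_lane_no_escape": "brake_in_lane",
    "obstacle_in_lane_clear_shoulder": "brake_and_move_to_shoulder",
}

def sanctioned_action(situation: str) -> str:
    return PANEL_RULINGS.get(situation, "stay_in_lane_and_brake")

print(sanctioned_action("obstacle_in_lane_no_escape"))
print(sanctioned_action("unclassified_dilemma"))  # falls back to the legal default
```

A developer conforming to such a table would have a concrete record of having done what the panel ruled, which is the point of the absolution proposal.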
Very important to the rulings would be an understanding of how certain requirements could slow down robocar development or raise costs. For example, a ruling that a car must make a decision based on the number of pedestrians it might hit demands it be able to count pedestrians. Today’s robocars may often be unsure whether a blob is 2 or 3 pedestrians, and nobody cares, because generally the result is the same — you don’t want to hit any number of pedestrians. Likewise, a requirement to know the age of people on the road demands a great deal more of the car’s perception system than anybody would normally develop, particularly if you imagine asking it to tell a dwarf adult from a child.
Reports from Tesla suggest they are gathering huge amounts of driving data from logs in their cars — 780 million miles of driving, and as much as 100 million miles in autopilot mode. This contrasts with the 1.6 million miles of test operations at Google. Huge numbers, but what do they mean now, and in the future?
As I’ve written before, testing is one of the biggest remaining challenges in robocar development — how do you prove to yourself and then to others that you’ve reached the desired safety goals? Tons of miles are a very important component to that. If car companies are able to get their customer to do the testing for them, that can be a big advantage. (As I wrote last week, another group which can get others to do testing are companies like Uber and even operators of large commercial and taxi fleets.) Lots of miles mean lots of testing, lots of learning, and lots of data.
Does Tesla’s quick acquisition of so many miles mean they have lapped Google? The short answer is no, but it suggests a significant threat since Google is, for now, limited to testing with its small fleet and team of professional testing drivers.
Tesla is collecting vastly less data from its cars than Google does. Orders of magnitude less. First of all, the Tesla has a lot fewer sensors and no LIDAR, and to the best of my knowledge from various sources I have spoken to, Tesla is only collecting a fraction of what their sensors gather. To collect all that they gather would be a huge data volume, not one you would send over the cell network, and even over the wifi at home it would be very noticeable. Instead, reports suggest Tesla is gathering only data on incidents and road features the car did not expect or did not handle well. However, nothing stops them in the future from logging more, though they might want to get approval from owners to use all that bandwidth.
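A back-of-envelope calculation shows why full sensor logs can’t go over the cell network. The camera count and bitrates here are illustrative assumptions, not Tesla’s actual sensor configuration:

```python
# Back-of-envelope: why raw sensor logs can't go over the cell network.
# Camera count and bitrates are illustrative assumptions, not any
# vendor's actual configuration.
cameras = 8
bits_per_camera = 10e6   # ~10 Mbit/s of compressed video per camera
other_sensors = 1e6      # radar, ultrasonics, CAN telemetry, bits/s
total_bps = cameras * bits_per_camera + other_sensors

gb_per_hour = total_bps * 3600 / 8 / 1e9
print(gb_per_hour)  # tens of gigabytes for every hour of driving
```

Even at a fraction of these rates, a fleet driving millions of miles per day would generate volumes that only incident-triggered snippets could realistically upload.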
Tesla wants to make a car for people to buy today. As such, it has no LIDAR, because a car today, and even the autopilot, can be done without LIDAR. Tomorrow’s LIDARs will be cheap, but today’s production LIDARs for cars are simple and/or expensive. So while the real production door-to-door self-driving car almost certainly uses LIDAR, Tesla is unable and unwilling to test and develop with it. (Of course, they can also argue that in a few years, neural networks will be good enough to eliminate the need for LIDAR. That’s not impossible, but it’s a risky bet. The first cars must be built in a safety-obsessed way, and you’re not going to release a car less safe than you could have made it just to save what will eventually be only a few hundred dollars of cost.)
As noted, Google has been doing its driving with professional safety drivers, who also record a lot of data from the human perspective that ordinary drivers never will. That isn’t 100 times better, but it’s pretty important.
Tesla is also taking a risk, and this has shown up in a few crashes. Their customers are beta testing a product that’s not yet fully safe. In fact, it was a pretty bold move to do this, and it’s less likely that the big car companies would have turned their customers into beta testers — at least not until forced by Tesla.
If they do, then the big automakers have even more customers than Tesla, and they can rack up even more miles of testing and data gathering.
When it comes to training neural networks, ordinary drivers can provide a lot of useful data. That’s why Comma.ai, which I wrote about earlier, is even asking volunteers to put a smartphone on their dash facing out to gather more training data. At present, this app does not do much, but it will not be hard to make one that offers things like forward collision warning and lane departure warning for free, paid for by the data it gathers.
Watch me Sunday night on Dateline NBC: On Assignment
On Sunday, June 5, at 7pm (Eastern and Pacific) the news show Dateline: NBC will do a segment on self driving cars featuring Sebastian Thrun, Jay Leno and myself. I sat down for several hours with Harry Smith, but who knows how much actual airtime that turns into. Here is the promo for the episode and another more specific one.
Executive summary: Can our emotional fear of machines and the call for premature regulation be mollified by a temporary increase in liability which takes the place of specific regulations to keep people safe?
So far, most new automotive technologies, especially ones that control driving such as autopilot, forward collision avoidance, lanekeeping, anti-lock brakes, stability control and adaptive cruise control, have not been covered by specific regulations. They were developed and released by vendors, sold for years or decades, and when (and if) they got specific regulations, those often took the form of “Electronic stability control is so useful, we will now require all cars to have it.” It’s worked reasonably well.
That there are no specific regulations for these things does not mean they are unregulated. There are rafts of general safety regulations on cars, and the biggest deterrent to the deployment of unsafe technology is the liability system, and the huge cost of recalls. As a result, while there are exceptions, most carmakers are safety paranoid to a rather high degree just because of liability. At the same time they are free to experiment and develop new technologies. Specific regulations tend to come into play when it becomes clear that automakers are doing something dangerous, and that they won’t stop doing it because of the liability. In part this is because today, it’s easy to assign blame for accidents to drivers, and often harder to assign it to a manufacturing defect, or to a deliberate design decision.
The exceptions, like GM’s famous ignition switch problem, arise because of the huge cost of doing a recall for a defect that will have rare effects. Companies are afraid of having to replace parts in every car they made when they know they will fail — even fatally — just one time in a million. The one person killed or injured does not feel like one in a million, and our system pushes the car maker (and thus all customers) to bear that cost.
Robocars change some of this equation. First of all, in robocar accidents, the maker of the car (or driving system) is going to be liable by default. Nobody else really makes sense, and indeed some companies, like Volvo, Mercedes and Google, have already accepted that. Some governments are talking about declaring it but frankly it could never be any other way. Making the owner or passenger liable is technically possible, but do you want to ride in an Uber where you have to pay if it crashes for reasons having nothing to do with you?
Due to this, the fear of liability is even stronger for robocar makers.
Robocar failures will almost all be software issues. As such, once fixed, they can be deployed for free. The logistics of the “recall” will cost nothing. GM would have had no reason not to send out a software update once they found a problem like the faulty ignition switch; they would be crazy not to. Instead, there is the difficult question of what to do between the time a problem is discovered and a fix has been declared safe to deploy. Shutting down the whole fleet is not a workable answer; it would kill deployment of robocars if several times a year they all stopped working.
In spite of all this history and the prospect of it getting even better, a number of people — including government regulators — think they need to start writing robocar safety regulations today, rather than 10-20 years after the cars are on the road as has been traditional. This desire is well-meaning and understandable, but it’s actually dangerous, because it will significantly slow down the deployment of safety technologies which will save many lives by making the world’s 2nd most dangerous consumer product safer. Regulations and standards generally codify existing practice and conventional wisdom. They are very bad ideas with emerging technologies, where developers are coming up with entirely new ways to do things, and entirely new ways to be safe. The last thing you want is to tell vendors they must apply old-world thinking when they can come up with much better thinking.
Sadly, there are groups who love old-world thinking, namely the established players. Big companies start out hating regulation but eventually come to crave it, because it writes the way they already do and understand things into the law. This stops upstarts from figuring out how to do it better, and established players love that.
The fear of machines is strong, so it may be that something else needs to be done to satisfy all desires: The desire of the public to feel the government is working to keep these scary new robots from being unsafe, and the need for unconstrained innovation. I don’t desire to satisfy the need to protect old ways of doing things.
One option would be to propose a temporary rule: for accidents caused by robocar systems, the liability, if the system is at fault, shall be double what it would be if a similar accident were caused by driver error. (Punitive damages for willful negligence would not be governed by this rule.) We know the cost of accidents caused by humans. We all pay for it with our insurance premiums, at an average rate of about 6 cents/mile. This rule would double that cost, pushing vendors to make their systems at least twice as safe as the average human in order to match that insurance cost.
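The incentive works out simply in numbers, using the 6 cents/mile figure above; the safety factor is the only free variable:

```python
# The doubled-liability rule in numbers, using the article's figure
# of about 6 cents/mile for the cost of human-caused accidents.
HUMAN_COST_PER_MILE = 0.06   # dollars
LIABILITY_MULTIPLIER = 2.0

def robocar_cost_per_mile(safety_factor):
    """Expected liability cost per mile if the system has
    `safety_factor` times fewer at-fault accidents than the
    average human driver."""
    return HUMAN_COST_PER_MILE * LIABILITY_MULTIPLIER / safety_factor

print(robocar_cost_per_mile(1.0))  # as safe as a human: pays double
print(robocar_cost_per_mile(2.0))  # twice as safe: matches human insurance cost
```

A vendor that reaches twice the average human’s safety pays the same expected liability per mile as human drivers do today, which is exactly the bar the rule is meant to set.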
Victims of these accidents (including hapless passengers in the vehicles) would now be doubly compensated. Sometimes no compensation is enough, but for better or worse, we have settled on values, and doubling them is not a bad deal. Creators of systems would have a higher bar to reach, and the public would know it.
While doubling the cost is a high price, I think most system creators would accept this as part of the risk of a bold new venture. You expect those to cost extra as they get started. You invest to make the system sustainable.
Over time, the liability multiplier would reduce, and the rule would go away entirely. I suspect that might take about a decade. The multiplier does present a barrier to entry for small players, and we don’t want something like that around for too long.
Here is the first report of a real Tesla autopilot crash. To be fair to Tesla, their owner warnings specify fairly clearly that the autopilot could crash in just this situation — there is a stalled car partly in the lane, and the car in front of you swerves around it, revealing it with little time for you or the autopilot to react.
The deeper issue is the way the improving quality of the Tesla Autopilot and systems like it is lulling drivers into a false sense of security. I have heard reports of people who now trust the Tesla system enough to work while being driven, and indeed, most people will get away with this. And as people get away with it more and more, we will see more drivers like this one, not really prepared to react. This is one of the reasons Google decided not to make a system that ever requires driver takeover. As the system gets better, does it get more dangerous?
Some technical notes:
This is one of the things LIDAR is much more reliable at seeing than cameras. Of course, whether you can swerve once the LIDAR sees it is another matter.
On the other hand, this is where radar fails. I mean the stalled car is clear on radar, but it’s stationary, so you can’t tell it from the road or guardrail which are also stationary.
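The radar problem can be sketched roughly as follows. Automotive radar measures relative (Doppler) velocity; adding the ego car’s speed gives a target’s ground speed, and returns with ground speed near zero are indistinguishable from guardrails, so trackers commonly discard them. The threshold and logic here are illustrative, not any vendor’s actual tracker:

```python
# Sketch of the stationary-target problem with automotive radar.
# A return carries range and relative (Doppler) velocity; adding
# the ego car's speed gives the target's ground speed. Targets
# with ground speed near zero look like guardrails and overpasses,
# so trackers typically filter them out. Threshold is illustrative.

def ground_speed(relative_speed_mps, ego_speed_mps):
    return relative_speed_mps + ego_speed_mps

def keep_target(relative_speed_mps, ego_speed_mps, threshold=1.0):
    return abs(ground_speed(relative_speed_mps, ego_speed_mps)) > threshold

ego = 30.0  # ego car at 30 m/s
print(keep_target(-25.0, ego))  # slow-moving lead car is kept
print(keep_target(-30.0, ego))  # stalled car filtered like a guardrail
```

This is why a stalled car revealed at the last second is such a hard case for radar-plus-camera systems, and an easier one for LIDAR.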
This is one of the classic V2V value propositions, but it’s not a good one. You don’t need 10ms latency to have a stalled car tell you it is stalled. Far better that car report to a server that it’s stalled and for everybody coming down that road to learn it, whether they have line of sight radio to the stall, or V2V at all. Waze already reports this just with human manual reporting and that’s a really primitive way to do it.
Declaration of Amsterdam
Last month, various EU officials gathered in Amsterdam and signed the Declaration of Amsterdam which outlines a plan for normalizing EU laws around self-driving cars. The meeting also included a truck automation demo in the Netherlands and a self-drive transit shuttle demonstration. It’s a fairly bland document, more an expression of the times, and it sadly spends a lot of time on the red herring of “connected” vehicles and V2V/V2I, which governments seem to love, and self-driving car developers care very little about.
Let’s hope the regulatory touch is light. The reality is that even the people building these vehicles can’t make firm pronouncements on their final form or development needs, so governments certainly can’t do that, and we must be careful of attempts to “help” that hinder. We already have a number of examples of that happening in draft and real regulations, and we’ve barely gotten started. For now, government statements should be limited to, “let’s get out of the way until people start figuring out how this will actually work, unless we see somebody doing something demonstrably dangerous that can’t be stopped except through regulations.” Sadly, too many regulators and commentators imagine it should be, “let’s use our limited current knowledge to imagine what might go wrong and write rules to ban it before it happens.”
Speech from the Throne
It was a sign of the times when Her Majesty the Queen, giving the speech from the throne in the UK parliament, laid out some elements of self-driving car plans. The Queen drove jeeps during her military days and still routinely drives herself at her country estates; otherwise she would be among the set of people most used to never driving.
The UK has 4 pilot projects in planning. Milton Keynes is underway, and later this year, a variation of the Ultra PRT pods in use at T5 of Heathrow airport — they run on private tracks to the car park — will go out on the open road in Greenwich. They are already signing up people for rides.
Car companies thinking differently
In deciding which car companies are going to survive the transition to robocars, one thing I look for is willingness to stop thinking like a traditional car company which makes cars and sells them to customers. Most car company CEOs have said they don’t plan to keep thinking that way, but what they do is more important than what they say.
Uber has announced the official start of self-driving tests in Pittsburgh. Uber has been running their lab for over a year, and had various vehicles out there mapping and gathering data, but their new vehicle is sleeker and loaded with sensors - more than on Google’s cars or most of the other research cars I have seen. You can see several lidars on the roof and bumpers, and a seriously big array of cameras and other sensors.
In addition, it was recently announced that the GM-Lyft-Cruise combination will be offering rides in 2017 in a self-driving Chevy Bolt. Of course, there will be a safety driver in the car supervising it, so it won’t be an empty taxi coming to pick you up, but it’s a nice step.
These two announcements bring attention to two of the most important companies in the space, even though their technical efforts are much less mature than Google’s or Daimler’s. That’s because of one key forecast that I have emphasized from the start:
A large fraction of the automotive industry is going to switch to be about selling rides, not selling cars
As we all know, Uber has already become the #1 brand in the world in selling rides in just a few years. It’s a very important position to hold. Lyft is #2, and other companies like Didi own China (Didi just got a $1B investment from Apple.)
As the owner of the ride brand, Uber has a lot of control. The brand of the car that drives you is less important and interchangeable. But that’s not the only advantage these ride companies have:
Ride companies have huge volumes of drivers on the road all day. They can be used as a resource for mapping, testing and verifying self-driving systems. Companies like Google had to pay staff and buy cars to do that.
Ride companies can combine human driven ride service with robotic taxis, to take you from anywhere to anywhere any time. It just costs more if you want to travel where the robots can’t.
Uber and Lyft can fail in their research programs and still win. They just have to find somebody else to sell them the cars. Of course, that does mean a power trade — it’s very nice to own the magic sauce that makes it all work, but the ride companies are among the few who could have another provider and still have a lot of control.
At the same time, Lyft is now bound to probably work with GM, and Didi possibly with Apple, which leaves Uber with more flexibility among these.
The ride companies are already doing big experiments in real ride-sharing, i.e. multiple independent passengers in the same car. Today, using UberPool is popular and can save significant money. A more interesting question arises when robotic taxi service is available for 30 cents/mile. I don’t think people will share their ride just to cut the price from $1.50 to $1. Saving 50 cents does not move the needle for most people of even moderate income levels.
How will ride companies compete?
An important social question is how many ride companies can compete in one market. Right now Uber has established a lot of dominance. In San Francisco, birthplace of Uber, Lyft and Sidecar, Sidecar shut its doors after struggling to compete with the other two. Is there room for only a few companies? If so, that’s bad, because competition is good for the public.
The first intuition is that fleet size is a big competitive advantage because you can offer faster pickup times and more choices of vehicle. Customers will care a lot about how long they have to wait for a ride. That will vary of course based on random positions of vehicles, and also how good the predictive positioning is in the fleet management system.
At the same time, it is possible to have a successful limo company today with just one limo. You only do scheduled rides (or ad-hoc rides booked via networks like UberBlack) but you have a business. It is not the size of your fleet that fully governs your wait time, but rather the ratio of the size of your fleet to the number of customers you have. Lyft has a smaller fleet but also fewer users, so I find it can often match or beat Uber on wait time, though neither wins all the time. There is a natural balance here — the better your fleet-size to user-base ratio, the shorter your wait times, but shorter waits bring you more customers, which erodes the advantage.
Today sees the un-stealthing of a new company called Otto, which plans to build self-driving systems for long-haul trucks. The company has been formed by a skilled team, including former members of Google’s car team and people I know well. You can see their opening blog post.
My entire focus on this blog, and the focus of most people in this space, has been on cars, particularly cars capable of unmanned operation and door-to-door service. Most of those not working on that have focused on highway cars and autopilots. The highway is a much simpler environment, and so much easier to engineer for, but it operates at higher speeds, so the cost of accidents is worse.
That goes double for trucks, which are fast, big and heavy. At the same time, 99% of truck driving is actually very straightforward — stay in a highway lane, usually the slow one, with no fancy moving about.
Some companies have done exploration of truck automation. Daimler/Freightliner has been testing trucks in Nevada. Volvo (trucks and cars together) has done truck and platooning experiments, notably the Sartre project some years ago. A recent group of European researchers did a truck demonstration in the Netherlands, leading up to the Declaration of Amsterdam which got government ministers to declare a plan to modify regulations to make self-driving systems legal in Europe. Local company Peloton has gone after the more tractable problem of two-truck platoons with a driver in each truck, aimed primarily at fuel savings and some safety increases.
While trucks are big and thus riskier to automate, they are also risky for humans to drive. Even though truck drivers are professionals who drive all day, around 4,000 people are still killed every year in the USA in truck accidents. More than half of those are truck drivers, but a large number of ordinary road users are also killed. Done well, self-driving trucks will reduce this toll. Just as with cars, companies will not release these systems until they believe they can match and beat the safety record of human drivers.
Self-driving trucks don’t change the way we move, but they will have a big economic effect on trucking. Driver pay accounts for about 25-35% of the cost of truck operation, but in fact early self-driving won’t take away jobs because there is a serious shortage of truck drivers in the market — companies can’t hire enough of them at the wages they currently pay. It is claimed that there are 50,000 job openings unfilled at the present time. Truck driving is grueling work, sometimes mind-numbing, and it takes the long haul driver away from home and family for over a week on every long-haul run. It’s not very exciting work, and it involves long days (11 hours is the legal limit) and a lot of eating and sleeping in truck stops or the cabin of the truck.
Average pay is about 36 cents/mile for a solo trucker on a common route. Alternatively, loads that need to move fast are driven by a team of two. They split 50 cents/mile between them, but can drive 22 hours/day — one driver sleeps in the back while the other takes the wheel. You make less per mile per driver, but you are also paid for the miles covered while you are sleeping or relaxing.
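A rough back-of-the-envelope calculation shows why team driving pays despite the lower per-mile split. This sketch assumes a 50mph average highway speed (my assumption, not an industry figure):

```python
# Rough daily-pay comparison of solo vs. team long-haul driving.
# AVG_MPH is an assumed average speed; real figures vary by route.

AVG_MPH = 50

# Solo: 36 cents/mile, up to the 11-hour legal daily driving limit.
solo_daily = 0.36 * AVG_MPH * 11                     # about $198/day

# Team: 50 cents/mile split two ways, but the truck rolls 22 hours/day,
# so each driver is also paid for the miles covered while resting.
team_daily_per_driver = (0.50 / 2) * AVG_MPH * 22    # about $275/day

print(f"solo ${solo_daily:.0f}/day vs team ${team_daily_per_driver:.0f}/day per driver")
```

Each team driver earns less per mile but more per day, which is the trade the text describes.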
A likely first course is trucks that keep their solo driver who drives up to 11 hours — probably less — and have the software drive the rest. Nonstop team driving speed with just one person. Indeed, that person might be an owner-operator who is paying for the system as a businessperson, rather than a person losing a job to automation. The human would drive the more complex parts of the route (including heavy traffic) while the system can easily handle the long nights and sparse heartland interstate roads.
The economics get interesting when you can do things that are expensive for human drivers and teams. Aside from operating 22 or more hours/day at a lower cost, certain routes will become practical that were not economic with human drivers, opening up new routes and business models.
Computer driven trucks will drive more regularly than humans, effectively driving in “hypermile” style as much as they can. That should save fuel. In addition, while I would not do it at first, the platooning experimented with by Peloton and Sartre does result in fuel savings. Also interesting is the ability to convert trucks to natural gas, which is domestic and burns cleaner (though it still emits CO2.) Automated trucks on fixed routes might be more willing to make this conversion.
There is strong potential for the robotruck to reduce the damage to roads (and thus the cost of maintaining them, which is immense and seriously in arrears). That’s because heavy trucks and big buses cause almost all the road wear today. A surprising rule of thumb is that road damage goes up with the 4th power of the weight per axle. As such, an 80,000lb truck with 34,000lb on each of two sets of 2 axles and 12,000lb on the front axle does around 2,000 times the road damage of a typical car!
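That rule of thumb is easy to check with a quick calculation. This sketch assumes the standard US limits (12,000lb steer axle, 34,000lb per tandem group) and a 6,000lb car as the baseline; a lighter car makes the ratio even larger:

```python
# 4th-power rule of thumb: road damage per axle scales roughly with
# (axle load)^4. Baseline is an assumed 6,000 lb car, 3,000 lb per axle.

def axle_damage(load_lb, reference_lb=3000):
    """Relative damage of one axle versus the reference axle load."""
    return (load_lb / reference_lb) ** 4

# Fully loaded US semi: 12,000 lb steer axle plus two tandem groups
# of 34,000 lb each, i.e. 17,000 lb on each of 4 axles.
truck = axle_damage(12000) + 4 * axle_damage(17000)
car = 2 * axle_damage(3000)  # two axles at the reference load

print(round(truck / car))  # prints 2190, i.e. around 2,000 times the damage
```

The exact number moves around with the assumed car weight, but the conclusion is robust: trucks dominate road wear by three to four orders of magnitude per vehicle.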
A week ago, a rather strange event took place. No, I’m not talking about just the Transit of Mercury in front of the sun on May 9, but an odd result of it.
That morning I was staying at the Westin Waterfront in Boston. I like astrophotography, and have shot several transits. I am particularly proud of my gallery of the 2004 Transit of Venus which is unusual because I shot it in a hazy sunrise where it was a naked eye event, so I have photos of the sun with a lake and birds. Indeed, since the prior transit of Venus was in 1882, we may have been among the first alive to deliberately see it as a naked eye event.
I did not have my top lenses with me, but I decided to photograph it anyway with my small Sony 210mm zoom and a welding glass I had brought along. I shot the transit, holding the welding glass over the lens, with everything mounted on my super-light “3 Legged Thing” portable tripod. Not wanting to leave the lens pointed at the sun when I removed the glass, I pulled the drape shut, looked at photos and then tilted the camera away. Then I went off to my meetings in Boston.
At 10am I got a frantic call from the organizer of the Exponential Manufacturing conference I would be speaking at the next day. “You need to talk to the FBI!” he declared. Did they want my advice on privacy and security? “No,” he said, “They saw you taking photos of the federal building with a tripod from your hotel window and want to talk to you.” (Note: It probably wasn’t the FBI, that was just a first impression. The detectives would not name who had reported it.)
Of course, I had no idea there was any federal building out the window and I did not take any photos of the buildings. In fact, I’m not quite sure what the federal facility is, though I presume it’s at the Barnes Building at 495 Summer St. — they never told me. Anybody know what’s there? Google maps shows a credit union and a military recruiting office, and there was suggestion of a Navy facility. Amusingly the web page for the recruiting center features a (small) photo of the building.
There is nothing there to justify having a surveillance crew constantly looking into the hotel rooms of guests and going nuts when they see a camera on a mini-tripod.
I talked to hotel security. Turns out they had gone into my room! Sadly, though police can’t enter your room without a warrant, hotel staff usually can. Two Boston detectives were put on the case. After talking to hotel security, I thought it was over, but no, the next day after my talk, I had the detectives waiting for me in the hotel.
First of all, I was concerned the hotel had given them my name. The hotel insisted the Boston innkeeper statutes require they do this. In reality, such statutes were found facially unconstitutional last year by the Supreme Court in City of Los Angeles v. Patel. In a facial challenge, the law is declared inherently invalid regardless of the specific facts of a case. The Boston police don’t believe this ruling applies to their law yet. So now my name is in police records over photographing the sun. Yes, when they met me, they realized I was just an astro-nerd and not a terrorist casing out the sun for an attack. (General conclusion, it’s too bright, so do it at night.)
To scare me, and to justify their actions, they said the unnamed complainers (probably not FBI) had been “unsure if it was just a camera” (ie. pretending it might be a gun) even though it looks nothing like it. And when I closed the drape — they were watching me live — they imagined it was because I had seen them and was hiding.
Mostly I laugh but the other part of me asks, “what the hell has gone wrong with this country?” Feds peering into our hotel rooms? Being afraid of a cheap lens (on an expensive camera, admittedly) on an ultralight tripod? Getting a police record for taking a photo out your hotel window, not even of the nondescript building that I would have no idea is a federal building? Having to demonstrate to not one, but two detectives that you’re just a harmless nerd? Not good. (They did Google me but did not clue in that I was on the board of the organization suing the NSA and other intelligence groups over the illegal mass wiretapping going on.)
Above you will find my evil picture of the sun — not that bad for a $150 lens, actually — and a picture of my room when I returned to it, with the camera pointing up and into the room. Yes, I took a picture of the buildings after all this, though I did not take one in the morning. That’s Mercury in the lower left corner of the solar disk. The dark area in the middle is a sunspot, another good location for an attack.
(BTW, I see many duplicate comments pointing to the story of the Economics professor pulled from a plane for doing some diffEQs on paper in the plane seat on his way to a conference. I think the whole nerd world saw that story already.)
My recent efforts in consulting and speaking have led to a lot more travel — which is great sometimes, but also often a drain. I’ve been staying in so many hotels that I thought it worth enumerating some of the things I think every hotel room should have, and what I often find missing.
Most of these things are fairly inexpensive to do, though a few have higher costs. The cheaper ones I would hope can be just included, I realize some might incur extra charges or a slightly more expensive room, or perhaps they can be offered as a perk to loyalty program members.
Desk space for all occupants
Most rooms have a workspace for only one person, even if it’s a double room. The modern couple both have computers, and both need a place to work, ideally not crammed together. The same is true when two co-workers share a room. And in a perfect room, both desk spaces share the other attributes of a good desk, namely:
The surface is not glass. More than half the desks in hotel rooms are glass, which doesn’t work well with optical mice. Sure, you can put down some paper, but this seems kinda silly.
Of course, 2 or even 3 power outlets, on the desk or the wall above it. Ideally the “universal” kind that accept most of the world’s plugs. (Sure, I bring adapters, but this is always handy.) Don’t make me crawl under the desk to plug things in, or have to unplug something else.
To my horror, Marriott has been building some new hotels with no desk space at all. Some person (I would say some idiot) decided that since millennials use fewer laptops and just want to sit on a couch with their tablet, it was better to sacrifice the desk. Those hotels had better have folding desks you can borrow, in fact all hotels could do that to fix the desk space shortage, particularly if rooms are small. Another option would be a leaf that folds down from the wall.
Surfaces/racks for luggage and other things for everybody.
Many rooms are very lacking in table or surface space beyond the desk. Almost every hotel room comes with only one luggage holder, where a couple might find themselves with 3 or in rare cases 4 bags. I doubt these folding luggage holders are that expensive, but if you can’t put more than one in every room, then watch people as they check in, note how many bags they have, and automatically send up some extra holders to their room. At the very least make it easy for them to ask. These things are under $30 at quantity one. Get more!
Bathrooms need surface space, too. Too often I’ve seen sinks with nowhere to put your toiletries and freedom bag. In fact, I want space everywhere to unpack the things I want to access.
Power by the bed (and other places)
Sure, I get that older hotel rooms did not load up with power outlets, and modern ones do. But aside from the desk, most people want power by the bed now, for their phone charger if nothing else. If you just have one plug by the bed, put a 3-way splitter (global plug, of course) on that plug so that people can plug things in without unplugging the light or clock. And ideally up high, so I don’t have to crawl behind things to get at it.
A little more controversial is the idea of offering USB charging power. Today, we all carry chargers, but the hope is that if charging becomes commonplace, then like the travel hair dryer people used to carry and no longer do, we might be able to depend on finding a charger. Problem is, charging standards are many and change frequently — we now have USB regular (useless) and fast-charge, along with Qualcomm quick-charge and USB C. More will come. On top of this, strictly you should not plug your device into a random USB port which might try to take it over. You can get what’s called a “USB Condom” to block the data lines, but those might interfere with the negotiation phase of smarter power standards. A wireless “Qi” charging plate could be a useful thing.
As a couple, we have had up to 8 things charging at the same time, when you include phones, cameras, external batteries, headphones, tablets and other devices. So I bring a 5-way USB fast charger and rely on laptops or other chargers to go the distance.
Let me access the HDTV as a monitor, or give me a monitor.
Some rooms block you from any access to the TV. Some have a VGA or HDMI port built into a console on the desk. The latter is great, but usually the TV is mounted in a way that makes it not very useful as a computer monitor for working; it’s primarily useful for watching video. I pretty much never watch video in a hotel room, so given the choice I would put the monitor by the desk, and it should be 1080p or better — in fact 4K should be the norm for any new installation. If you don’t have one, have one I can call down for, even at a modest fee.
A recent news story from Utah, describing a Tesla which entered self-park (“summon”) mode and drove itself into the back of a flatbed truck, raises some interesting issues.
Tesla says that the owner of the vehicle initiated auto-Summon, which requires pressing the gear selector stalk twice and then shifting into park, then leaving the vehicle. After that the car goes into its self-park mode in 3 seconds, and the driver is supposed to be watching because the feature is a beta.
The owner says he never activated the self-park, and if somehow he did by accident, he was standing by the car for 20 seconds showing it off to a stranger, and as such he claims he is absolutely certain the car did not begin moving 3 seconds after he got out. Tesla says the logs say otherwise.
Generally, one believes log files over human memory, though these stories are surprisingly at odds. When doing Summon, the Tesla is flashing its hazard lights and moving, so it’s not exactly subtle. And it’s not supposed to work unless the keyfob is close to the car. No doubt there will be back and forth on just what happened.
However, there are some things that are less disputed:
Unless the owner is out and out lying, there is a problem which allowed an owner to activate the auto-summon feature by accident, and to do so when not close to the car. (When you activate it the hazards start blinking and it shows auto-park on the screen.)
The car should not have hit the metal bars on the back of the flatbed. However, Tesla warns that the feature may not detect thin or hanging objects. These bars are quite low, but stick off the end of the truck by a large amount. Clearly the obstacle detection is indeed very “beta” if it could not see them. Apparently auto-park is done using the ultrasonic sensors, not the camera. Bumper-based ultrasound is not enough.
This also adds some fuel to the ongoing debate about maps. The car was in a place where there would be no reason to initiate Tesla’s self-park, which is designed for it to drive straight into narrow parking spaces. In this case, it is not necessary to have a map of all the spaces a car might self-park, but even a fairly coarse and inaccurate map could allow the car to say, “This seems like an odd place to use the self-park feature, are you sure?” And pretty much all parallel parking spaces on the side of the road qualify as a place you would not use this particular self-park function.
So is the owner lying? Was he playing with auto-summon and screwed up? (You have to screw up royally as it drives quite slowly and any touch on the door handles or the fob will stop it.) The problem is that he claims that the car did it while he was not present, which is not supposed to happen, and if he was present, why did he not stop it?
If you had asked me recently which big car company was the furthest behind when it came to robocars, one likely answer would be Fiat-Chrysler. In fact, famously, Chrysler ran ads several years ago during the Super Bowl making fun of self-driving cars and Google in particular:
Now Google has announced a minor partnership with Chrysler under which Chrysler will build 100 custom versions of its hybrid minivans for Google’s experiments. Minivans are a good choice for taxis, with their spacious seating and electric sliding doors — if you want a vehicle to come pick you up, it probably should have something like this.
This is a pretty minor partnership, something closer to a purchase order than a partnership, but it will be touted as a great deal more. My own feeling is it’s unlikely a major automaker will truly partner with a big non-auto player like Google, Uber, Baidu or Apple. Everybody is very concerned about who will own the customer and the brand, and who will be the “Foxconn,” and the big tech companies have no great reason to yield on that (because they are big) and the big car companies are unlikely to yield, either. Instead, they will acquire or do deals they control with smaller companies (like GM’s purchase of Cruise and its partnership with Lyft.)
Still, what may change this is an automaker (like FCA) getting desperate. GM got desperate and spent billions. FCA may do the same. Other companies with little underway (like Honda, Peugeot, Mazda, Subaru, Suzuki) may also panic, or hope that the Tier 1 suppliers (Bosch, Delphi, Conti) will save them.
Google custom-designed a car for their 3rd generation prototype, with 2 seats, no controls and an electric NEV power train. This has taught them a lot, but I bet it has also taught them that designing a car from scratch is an expensive proposition before you are ready to make many thousands of them.
I have often written on the challenge facing existing automakers in the world of robocars. They need to learn to completely switch their way of thinking in a world of mobility on demand, and not all of them will do so. But they face serious challenges even if they are among the lucky ones who fully “get” the robocar revolution, change their DNA and make products to compete with Google and the rest of the non-car companies.
Unfortunately for the car companies, their biggest assets — their brands, their experience, their quality and their car manufacturing capacity — are no longer as valuable as they were.
Their brands are not valuable
Today if you summon a car with a company like Uber, you don’t care about what brand of car it is, as long as it’s decent. Even with the “luxury” variants of Uber, you don’t care which type of luxury car shows up, as long as it meets certain standards. For companies who have most of their value in their nameplate, this is nightmare #1. The taxi service (Uber or otherwise) becomes the brand that is seen and valued by the customer.
When you are buying a car for 5 years at the dealership, you care a lot about the brand, both for what it means, and for what it says about you when you show up driving it. When you buy a car by the ride, you don’t care a lot about the brand, because you are only going to use it for a short time.
Their brands might be tarnished
There will be accidents in Robocars, unfortunately. Those accidents will cost money, but they will also cause problems in public image. The problem is, “Mercedes runs over grandmother” is a headline that will make people less likely to buy any type of Mercedes. As such, Mercedes has plans to market self-driving car service under their Car2Go brand. You may not even know that Car2Go is Daimler, and they might like it that way. “Google car runs over grandmother” is bad news for the Google car project, but is not going to make anybody stop doing web searches with Google. (Except the grandmother…)
The non-car companies don’t have a car brand to tarnish, but they do have famous brands. They can use those brands to attract customers without the same risk. Big car companies have famous brands but may be afraid to use them.
They might just be the contract manufacturer
Companies like Uber, Google, Apple and others don’t plan to manufacture cars. Why would they? There is tons of car manufacturing capacity out there. They can just go to carmakers and say, “here’s a purchase order for 100,000 cars — built to our spec with our logo on them.” It will be very hard to turn down such an order. Still, some companies will be too proud to do this, or too unwilling to sign their own suicide note.
If they don’t accept the order, somebody else will. If nobody in the west does, somebody in China will. China is the world’s #1 car manufacturing country, but the cars are rarely exported to the west. They would love to change that.
A likely model for this is the relationship of Apple and Foxconn. Foxconn makes your iPhone, but many don’t know that. Foxconn makes good money, but Apple makes much more, designing the product and owning the customer. The car companies don’t want to be Foxconn in the world of the future, but the alternative may be to be much smaller.
(BTW, Foxconn has said it is interested in making cars.)
First-rate quality might not be that important
Chinese manufacturers don’t have the quality of the current leaders, but they may not need to. Just as Apple taught Foxconn how to make good iPhones, the tech companies could teach their partners here, but they don’t have to. That’s because a less reliable robocar is not the same sort of problem an unreliable personal car is. Sure, it should not break down while you are riding in it, but even then the company can send a replacement to pick you up in just a few minutes. If it breaks down otherwise, it just goes out of service. This costs the fleet manager money, but they saved a lot of money with the lower quality manufacturer. When cars can move on demand to serve customers, breakdowns are not the same sort of problem. When your own car breaks down it’s a nightmare, and you will pay a lot to avoid that. For a fleet, it’s just a cost. All cars are down for maintenance some of the time. Cheaper cars will be down more, but if they are cheap enough, it still saves money.
Customer perception of quality is still important. The vehicle must maintain the level of comfort and interior quality the customer has paid for. Safety related failures are of course much less tolerable.
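The fleet cost argument can be sketched with a toy model. All the numbers below are illustrative assumptions, not industry figures: amortize the purchase price over the days the car is actually in service.

```python
# Toy fleet economics: capital cost per in-service day is the purchase
# price spread over the lifetime days the car is actually available.
# All numbers are illustrative assumptions.

def cost_per_service_day(price, lifetime_days, uptime):
    """Capital cost per day the car is actually earning revenue."""
    return price / (lifetime_days * uptime)

premium = cost_per_service_day(price=40000, lifetime_days=1500, uptime=0.95)
budget = cost_per_service_day(price=25000, lifetime_days=1500, uptime=0.85)

# The cheaper car is down more often, yet still costs less per day.
print(f"premium ${premium:.2f}/day, budget ${budget:.2f}/day")
```

Under these assumed numbers the budget car is down 10% more of the time yet costs roughly 30% less per in-service day, which is the trade-off the paragraph describes.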
New car designs will be radically different
The robocar of the future will look quite different from the cars of the past. Existing car companies can handle this, but they lose some of the advantage that comes from decades of experience. The future robocars are probably electric and much simpler, with hundreds of moving parts rather than tens of thousands. It’s a new world, and experience with the old may actually be a disadvantage. Only Nissan and Tesla have lots of electric car experience today, though GM is building its own. Electric platforms are much simpler and ripe for creativity from new players.