brad's blog

Car NAS for semi-offsite backup

Everybody should have off-site backup of their files. For most people, the biggest threat is fire, but here in California, the most likely disaster you will encounter is an earthquake. Only a small fraction of houses will ever burn down, but everybody will experience the big earthquake that is sure to come in the next few decades. Fortunately, only a modest number of houses will collapse, but many computers will be knocked off desks or have things fall on them.

To deal with this, I’ve been keeping a copy of my data in my car — encrypted, of course. I park in my driveway, so nothing will fall on the car in a quake, and only a very large fire would risk spreading to the car, though that is certainly possible.

The two other options are network backup and truly remote backup. Network backup is great, but doesn’t work for people who have many terabytes of storage. I came back from my latest trip with 300GB of new photos, and that would take a very long time to upload if I wanted network storage. In addition, many TB of network storage is somewhat expensive. Truly remote storage is great, but the logistics of visiting it regularly, bringing back disks for update and then taking them back again are too much for household and small business backup. In fact, even being diligent about going down to the car to get out the disk and update it is difficult.

A possible answer — a wireless backup box stored in the car. Today, there are many low-cost Linux-based NAS boxes, and they mostly run on 12 volts. So you could easily make a box that goes into the car, plugs into power (many cars now have 12V jacks in the trunk or other access to that power), wakes up every so often to see if it is on the home wifi, and triggers a backup sync, ideally at night.
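To make that concrete, here is a rough sketch of the nightly job such a box might run, assuming a Linux-based NAS with the usual wireless tools and rsync installed. The SSID, source machine and destination path are placeholders, and a real box would also want logging, retries and a check that the encrypted volume is mounted:

```python
#!/usr/bin/env python3
"""Nightly sync job for a car-trunk backup NAS (illustrative sketch only)."""
import subprocess
import sys

HOME_SSID = "my-home-wifi"              # placeholder: your home network's SSID
SOURCE = "backup@house.local:/data/"    # placeholder: the machine in the house
DEST = "/mnt/encrypted-backup/"         # placeholder: encrypted volume in the car


def current_ssid() -> str:
    """Return the wifi network the box is currently on, or '' if none."""
    try:
        out = subprocess.run(["iwgetid", "-r"], capture_output=True,
                             text=True, check=True)
        return out.stdout.strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return ""


def main() -> int:
    if current_ssid() != HOME_SSID:
        return 0   # parked somewhere else, or out of range: go back to sleep
    # Pull new and changed files; --partial lets a sync interrupted when the
    # car drives away resume the next time it is back in the driveway.
    result = subprocess.run(["rsync", "-a", "--partial", "--delete",
                             SOURCE, DEST])
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```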

The terrible power of computer espionage in our world of shame

I have some dark secrets. Some I am not proud of, some that are fine by me but I know would be better kept private. So do you. So does everybody. And the more complex your life, the more “big” things you have done in the world, the bigger your mistakes and other secrets are. It is true for all of us. This is one of the reasons the world needs privacy to work.

The 2016 US election hack makes clear the big challenge. In a world where everybody has secret flaws, the person who can point the flashlight at their enemies, and not themselves or their friends, has a truly powerful weapon. Now that we conduct our entire lives on computers, those who can penetrate them can learn those secrets.

We’re not good at being intellectual about this. When one house has a big pile of dirty laundry in front, we know intellectually that all the other houses almost surely have a similar pile in the basement. But the smell of the exposed one is clear, and it’s bad, and we can’t keep our minds on that fact. So we can be manipulated, even though we know we are being manipulated.

In this election, we got to see exposed various flaws at the Democratic National Committee. The flaws were real (though on the scale of such things, not overwhelming.) Our gut reaction, though, is to feel, “it doesn’t matter how we learned this, it’s still bad and not to be ignored.” We feel this even though we know the information was gathered illegally, then disclosed to manipulate us. That’s because generally we do and should love whistleblowers. They are usually brave heroes. But when we learn that the whistleblower revealed the secrets not for the public good, not to expose a wrong, but instead cherry-picked what to expose in order to manipulate us, we must do something else we are normally taught is wrong and “shoot the messenger.”

The legal system figured this out long ago. It has detailed rules about how evidence can be collected and used. If those rules are violated, the system attempts to disregard the evidence in its deliberations. Everything that came from the improper evidence is to be unseen, disregarded. People we know for certain are murderers and rapists are set free because there was something untoward about how we learned it.

The public is incapable of the logical dispassion demanded in the courts. If this can never be fixed, we are in for trouble. There will always be secrets. And now there will always be people with the tools to get at all but the most highly protected ones and selectively disclose them.

Some people believe we can get used to a more fully transparent world, and have no secrets. If we can do that, this weapon is diminished. They hope that if we all see how many secrets others have, we won’t be so ashamed of ours. I am highly doubtful. People will keep secrets. The powerful will be better at protecting them, but the even more powerful will be better at extracting them. The secrets will not be just shameful things but actually illegal things. We live in a world of so many laws that we are all breaking them regularly.

I am not sure I see a way out. This is not simply about Clinton. While everybody is bothered by fake news, this is news which is true, but not the whole truth, and thus still misleading.

In the past I have written about extending the concept of “privilege” to information on our computers. Perhaps this form of invasion of privacy could be viewed the same way socially. That breaking into your computer to disclose your secrets would be like beating up somebody’s priest or lawyer to extract those secrets. If a news story started with, “we bugged his lawyer’s office and heard him confess this crime to his lawyer” we might still be bothered but see it in a different light, and be more bothered by those using the information.

Uber's battle in San Francisco

For a few months, Uber has been testing their self-driving prototypes in Pittsburgh, giving rides to willing customers with a safety driver (or two) in the front seat monitoring the drive and ready to take over.

When Uber came to do this in San Francisco, starting this week, it was a good step to study new territory and new customers, but the real wrinkle was they decided not to get autonomous vehicle test permits from the California DMV. Google/Waymo and most others have such permits. Tesla has such permits but claims it never uses them.

I played an advisory role for Google when the Nevada law was drafted, and this followed into the California law. One of the provisions in both laws is that they specifically exempt cars that are unable to drive without a human supervisor. This provision showed up, not because of the efforts of Google or other self-drive teams, but because the big automakers wanted to make sure that these new self-driving laws did not constrain the only things they were making at the time — advanced ADAS and “autopilot” cars which are effectively extra-fancy cruise controls that combine lanekeeping functions with adaptive cruise control for speed. Many car makers offered products like that going back a decade, and they wanted to make sure that whatever crazy companies like Google wanted in their self-driving laws, it would not pertain to them.

The law says:

“…excluding vehicles equipped with one or more systems that enhance safety or provide driver assistance but are not capable of driving or operating the vehicle without the active physical control or monitoring of a natural person.”

Now Uber (whose team is managed by my friend Anthony Levandowski who played a role in the creation of those state laws while he was at Google) wants to make use of these carve-outs to do their pilot project. As long as their car is tweaked so that it can’t drive without human monitoring, it would seem to fit under that exemption. (I don’t know, but would presume they might do some minor modifications so the system can’t drive without the driver weight sensor activated, or a button held down or similar to prove the driver is monitoring.)

The DMV looks at it another way. Their testing regulations say you can’t test without human safety drivers monitoring and ready to take over, so in their view it was never the intent of the law to effectively exempt everything. That is the paradox: you can’t test a car without human monitoring under the regulations, yet cars that need human monitoring are exempt from the law. The key, for Uber, is calling the system a driver assistance system rather than a driving system.

The DMV is right about the spirit. Uber may be right about the letter. Of course, Uber has a long history of not being all that diligent in complying with the law, and then getting the law to improve, but this time, I think they are within the letter. At least for a while.

Other News

Velodyne reports success in research into solid-state LIDAR. Velodyne has owned the market for self-driving car LIDAR for years, as they are the only producer of a high-end model. Their models are mechanical and very expensive, so other companies have been pushing the lower-cost end of the market, including Quanergy (where I am an advisor), which has also had solid-state LIDAR for some time, and appears closer to production.

These and others verify something that most in the industry have expected for some time — LIDAR is going to get cheap soon. Companies like Tesla, which have avoided LIDAR because you can’t get a decently priced unit in production quantities, have effectively bet that cameras will get good before LIDAR gets cheap. The reality is that most early cars will simply use both cheap LIDAR and improving neural network based vision at the same time.

Google car is now Waymo

Google’s car project (known as “Chauffeur”) really kickstarted the entire robocar revolution, and Google has put in more work, for longer, than anybody. The car was also the first project of what became Google “X” (or just “X” today under Alphabet), a lab devoted to big audacious “moonshot” projects that affect the physical world as well as the digital. Inside X, they have promoted the idea that projects should eventually “graduate,” moving from research to real commercial efforts.

Alphabet has announced that the project will be its own subsidiary company with the new name “Waymo.” The name is not the news, though; what’s important is the move away from being a unit of a mega-company like Google or Alphabet. The freedoms to act that come with being a start-up (though a fairly large and well funded one) are greater than units in large corporations have. Contrast what Uber was able to do, skirting and even violating the law until it got the law changed, with what big corporations need to do.

Google also released information about how in 2015 they took Steve Mahan — the blind man who was also the first non-employee to try out a car for running errands — for the first non-employee (blind or otherwise) fully self-driving ride on public streets, in a vehicle with no steering wheel and no backup safety driver in the vehicle. (This may be an effort to counter the large amount of press about public ride offerings by Nutonomy in Singapore and Uber in Pittsburgh, as well as truck deliveries by Uber/Otto in 2016.)

It took Google/Alphabet 6 years to let somebody ride on public streets in part because it is a big company. It’s an interesting contrast with how Otto, after just a few months of life, made a demonstration video of a truck driving a Nevada highway with nobody behind the wheel (but Otto employees inside and around it.) That’s the sort of radical step that startups take.

Waymo has declared their next goal is to “let people use our vehicles to do everyday things like run errands, commute to work, or get safely home after a night on the town.” This is the brass ring, a “Mobility on Demand” service able to pick people up (ie. run unmanned) and even carry a drunk person.

The last point is important. To carry a drunk is a particular challenge. In terms of improving road safety it’s one of the most worthwhile things we could do with self-driving cars, since drunks have so many of the accidents. To carry a drunk, you can’t let the human take control even if they want to. Unlike unmanned operation, you must travel at the speed impatient humans demand, and you must protect the precious cargo. To make things worse, in some legal jurisdictions, they still want to consider the person inside the car the “driver,” which could mean that since the “driver” is impaired, operation is illegal.

Waymo as leader

The importance of this project is hard to overstate. While most car companies had small back-burner projects related to self-driving going back many years, and a number of worthwhile research milestones were conquered in the 90s and even earlier, the Google/Waymo project, which sprang from the DARPA Grand Challenge, energized everybody. Tiny projects at car companies all got internal funding because car companies couldn’t tolerate the press and the world thinking and writing that the true future of the car was coming from a non-car company, a search engine company. Now the car companies have divisions with thousands of engineers, and it’s thanks to Google. The Google/Waymo team was accomplishing tasks 5 years ago that most projects are only now just getting to, especially in non-highway driving. They were rejecting avenues (like driving with a human on standby ready to take the wheel on short notice) in 2013 that many projects are still trying to figure out.

Indeed, even in 2010, when I first joined the project and it had just over a dozen people, it had already accomplished complex tasks that most projects, even the Tesla autopilot that some people think is in the lead, have yet to accomplish.

Let’s see where Waymo goes.

Therapy session for somebody with real family issues

On the lighter side, the other day I was daydreaming how a conversation about her family might go with a famous character… You’ll probably guess who fairly early in, but it’s pretty strange to read it like this:

Therapist: So, I’m told you have had some serious issues with your family? I’m here to help.

Patient: You might say that.

T: Did something painful happen recently?

P: My son murdered his father, my ex.

T: Your son murdered his father! Is he in prison?

P: Not going to happen, he’s too highly placed.

T: Why did he do it?

P: It’s a long story. And a bit of a pattern.

T: Others in your family have done this?

P: You might say that. There are bad stories about everybody in my family.

T: Surely you had a good relationship with your mother?

P: I never met my mother. She died just as I was born.

T: How terrible. Death in childbirth is so rare in the modern era.

P: She didn’t die in childbirth. I am told my father choked her.

T: Your father! So he went to jail?

What disability rules are right for robotaxis?

Robocars are broadly going to be a huge boon for many people with disabilities, especially disabilities which make it difficult to drive or those that make it hard to get in and out of vehicles. Existing disability regulations and policies were written without robocars in mind, and there are probably some improvements that need to be made.

While I was at Google, I helped slightly with the project to show the first non-employee getting to use the car to run errands. The subject we selected was 95% blind, and of course he can’t drive, and even using transit is a burden. It was obvious to him immediately how life-changing the technology will be.

Some background on disabled transport

There are two rough policy approaches to making things more accessible. One requires that we make everything accessible. The other uses special accommodations for the disabled.

Making everything accessible is broadly preferred by advocates. Wheelchair ramps on all public buildings, etc. Doing less than this runs a risk of “separate but equal,” which quickly becomes separate and inferior. It’s also hugely expensive, and while that cost is borne by people like building owners and society, there is not unlimited budget, and there are arguments that there may be more efficient ways to spend the resources that are available. There are also lots of very different disabilities, and you need very different methods to deal with impairments in sight, mobility, hearing, cognition and the rest.

Over 50 million people in the USA have some sort of disability, so this is no minor matter.

In transportation, there is a general goal to make public transit accessible. To supplement that, or where that is not done, there are the paratransit rules. Paratransit offers people who meet certain tests an alternate ride (usually in a door to door van) for themselves and a helper for no more than twice the cost of a regular bus ticket. That sounds great until you learn you also have to schedule it a day in advance, and have a one-hour pickup window (which the disabled hate) and it’s hugely expensive, with an average cost per ride of over $30, which cities hate. (In the worst towns, it is $60/ride.) In some cities it approaches half the transit budget. Some cities, looking at that huge cost, let some disabled customers just use taxis for short trips, which provide much better service and cost much less. (Though to avoid over-use they put limitations on this.)

There are Americans with Disabilities Act rules for taxis. Regular sedan taxis are not directly regulated, though there can be no discrimination against disabled customers who are capable of riding in a sedan. Any new van of up to 8 seats has to be accessible, which often means things like wheelchair lifts. In addition, once a taxi fleet has accessible vans, it has to offer “equivalent service” levels. This might mean that if it has 200 sedans, it can’t buy just one van, because there would be much longer wait times to get that van. To get around this, a lot of companies use a loophole and purchase only used vans; the law only covers the use of new vans. Companies like Uber and Lyft don’t own vehicles at all, and so are not governed in the same way by fleet requirements, though they do offer accessible vehicle services in some cities.

When Uber and similar companies move to offering robotaxi service with vehicles they own, these laws would apply to them. The used-van loophole will also be difficult for them, since most robotaxis will be custom-built new.

New Types of Vehicles

Robotaxi service offers the promise of a vehicle on demand, and it offers the potential of a vehicle well fitted to the trip. Mostly I talk about things like the ability to use a small and inexpensive one person vehicle for solo urban trips (which are 80% of trips, so this is a big deal) but it also means sending an SUV when 3 people want to go skiing, or a pickup-truck for a work run, or a van designed for socializing when a group of people want to travel together.

It also offers the ability to create vehicles just for people with certain disabilities. One example I find quite interesting is the Kenguru — a small, single-person vehicle which is hollow, and allows a user in a wheelchair to just roll in the back and steer it with hand controls. For wheelchair users with working arms, this is hugely superior to designs that require you to get out of your chair into a car seat, or which involve the time delays of using a wheelchair lift. Especially with nobody to assist. Roll-in, roll-out can match the convenience of the able-bodied. The current Kenguru must be steered by its occupant, but a self-driving vehicle like this could handle even those in power chairs, and offer a fold-down bench for an able-bodied companion.

Being computerized, these vehicles will also offer accessible user interfaces. Indeed, they may mostly rely on the user’s phone, which will already be customized to their needs.

Custom-designed to meet particular disabilities, these vehicles will both serve the disabled better and frankly be not that useful for others. As such, regimes that require adapting all vehicles to handle both types of customers may have the right spirit, but provide inferior service.

Another key benefit of robotaxi service for the disabled will be the low price. Reduced job prospects drive many with disabilities into poverty. Service that is naturally low in price will be enabling.

Equivalent service or Separate but Superior

Providing “equivalent” service is difficult with traditional taxis, particularly for smaller fleets. Robotaxis, which don’t mind waiting around because no human driver is waiting, make this easier to do. The service level of a robotaxi service is based on the density of currently unused vehicles in your area. Increase fleet size with the same demand, and service level goes up. As long as fleet size is not way overblown, so that vehicles still wear out by the mile rather than by the year, increasing fleet size is not nearly as expensive as it is for regular cars or human driven taxis.
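A crude way to see why: if idle vehicles are scattered roughly at random, the typical distance to the nearest one shrinks with the square root of their density, so a bigger fleet means shorter pickup waits, with diminishing returns. A small illustrative calculation (the densities and speed are invented numbers, not data):

```python
import math

def expected_wait_minutes(idle_per_km2: float, avg_speed_kmh: float = 30.0) -> float:
    """For idle vehicles scattered roughly at random, the mean distance to the
    nearest one is about 0.5 / sqrt(density); divide by speed for a rough
    pickup wait. Purely illustrative."""
    mean_km = 0.5 / math.sqrt(idle_per_km2)
    return 60.0 * mean_km / avg_speed_kmh

for density in (1, 2, 4, 8):   # idle vehicles per square km (made-up values)
    print(f"{density} idle/km^2 -> ~{expected_wait_minutes(density):.1f} min wait")
```

Note that doubling the idle fleet cuts the typical wait by only about 30%, which is also why a thinner special-purpose fleet benefits so much from even a few minutes of advance notice.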

This means you can, fairly readily, offer equivalent or even superior service at a pretty reasonable cost. As long as disabled-designed vehicles are made in decent quantities to keep their costs low, the cost should be close to the cost of regular vehicles. In the public interest, regular vehicle customers might subsidize the slightly higher cost of these lower volume vehicles.

With increased fleets, service levels would generally be superior to the regular fleets, but not always. The law generally allows this, but the disabled community will need to understand a few unequal things that probably will happen:

  • Slightly more advance notice of rides will often make it possible to provide service at lower cost. Regular vehicles will naturally be present on every block. Accessible vehicles might be present at lower density during high-use times, but the ability to reposition lets even slight advance notice do a lot.
  • For those in groups, it may not be easy to carry a person in a wheelchair along with several non-wheelchair passengers. This might mean the wheelchair passenger goes in their own vehicle (with videoconference link.) This is not as good, but is much more cost effective than requiring every van to have a wheelchair lift.
  • To increase service levels, it is likely competing companies would cooperate on serving the disabled, and pool fleets. Until the disabled become a profitable market, rather than one served to meet public-good goals, companies will prefer to work together. As such, if you call for an Uber, you might often get a Lyft or another fleet’s car.
  • Low cost disabled transport may mean that accessible public transit and paratransit slowly fade. Public transit which has its own tracks will continue to be accessible as it offers a speed advantage which may not be met on the roads, but otherwise it may be much cheaper to offer private robotaxis than to make all transit accessible. This would mean a group of people might not be able to ride transit together if it’s not accessible.
  • Small electric vehicles may be allowed to enter buildings, dropping passengers right at elevator lobbies or other destinations.

The biggest trade-off will be the loss of social group experiences. There certainly will be buses and vans with lifts which allow groups of mixed-ability passengers to travel together, but it is unlikely these would be so common as to offer the same service level as ordinary vans. With advance notice of just 10 minutes, they could probably be available.

Thank you, United, for finally charging for the overhead bin

I’ve seen many enraged notes from friends on how United Airlines will now charge for putting a bag in the overhead bin. While they aren’t actually doing this, my reaction is not outrage, but actually something quite positive. And yours should be too, even when other airlines follow suit, as they will.

I fly too much on United. I have had their 1K status for several years, and this year I logged over 200,000 miles, so I know all the things to dislike about the airline. Why is it good for them to do this?

Strictly speaking, what they are doing is creating a new fare class, which is extra discounted, and it includes no bin space and no assigned seat before departure. They claim the new class will cost less than existing fares, and you can still buy the regular economy fare, which comes with bin space and a seat assignment. Naturally, we can suspect they will soon raise the price. The other reason people complain is that when you comparison shop, you tend to look for the cheapest price, and it’s annoying when the products are not similar. (To fix this, shopping sites will need to start having options so you can ask for a comparison of what you really want to buy.)

The reason it’s good is that it means it’s more likely that I will get bin space when I show up late, and more likely I will get a tolerable seat when I book late. Airlines that give those things to all passengers, even the ones who don’t care that much about them, do not serve their more frequent flyers well. If I have to pay for seat assignment and bin space, that’s fine, because I truly need them and will now have a better chance of getting them. Of course, as a super-elite, I won’t have to pay directly; I pay through all the other money I have given the airline, which is even better for me.

I need bin space because I am a photographer who carries a lot of cameras and lenses. Even if I check a bag, I still bring along a big carry-on, and everything in it is too fragile to go in the hold. If they tell me they need to gate check it, I will either talk them out of it, or if that ever fails to work I may take another flight. Of course, elite flyers board first, so we don’t usually have a bin space problem, but sometimes we need to get to a flight late, or have a short connection, and then we can find ourselves with no bin space under today’s system.

I won’t take a middle seat because I’m big. My fault or not, it’s the way it is. Sometimes I need to book last minute, or change flights or even go standby. This can mean a flight with nothing but middle seats. If it’s a flight of any duration, this is also just not an option anybody wants. Since in today’s system, everybody gets a seat based on when they bought, the guy with the discount ticket who bought 3 months ago has the aisle, and the elite flyer who paid a lot more for their ticket (possibly even downgraded from business class due to changes) is in the middle seat. Not the way you want to serve your better customers. (Since the airline will assign seats on day of flight, it will only help this moderately.)

But the point is the same — I would rather pay for what I really need than have it come by default and end up not being available to me because a lot of people didn’t actually want it that much. People who don’t need a big carry-on. People who are small and can tolerate a middle seat easily and would rather do that than pay money. An airline that charges for these things is the airline I want. In fact, I would even be OK if they charged a bit more for aisles and less for windows and middles, even on the day of the flight. And yes, elites sometimes solve all these problems with a business class upgrade, but on the big popular routes, that is far from certain. United has gotten too good at filling its planes, and other airlines are also getting good.

The overhead bag problem is partly a result of the charges for checked bags. Those do me no good (though again, elites don’t pay them.) There is no shortage of hold space, so charging for bags is just pure money for the airline, and that’s why they all started doing it. The problem, of course, is that it makes people carry bigger carry-on bags, not for the reason that I or other frequent flyers do, but because they want to avoid the bag charge. I would be very pleased if they made sure the overhead charge is larger than the checked bag charge, or if they gave you the choice — either an overhead space or a bag in the hold, but not both.

There is another good reason for this — bigger overhead bags from those doing it simply to avoid charges slow down security lines. Leave the overhead bins for those who truly need them, because they have lots of fragiles, or because they value their time more than money and don’t want the delays of bag checking. (I continue to show up for flights quite late, another reason I don’t want to check a bag and be forced to meet the deadlines for that. But I notice I am almost always alone — everybody else listens to the crazy advice about showing up 60, 90 or even 120 minutes before flights. I’m glad everybody else listens; but in reality this has not caused me to miss flights, so I will continue to not listen. And if you fly enough, that time makes a big difference.)

In the end, all airlines face the problem that on full planes, there is not enough room for everybody to put a big bag in the overhead bins. So the only question is: who gets the space? Today, it’s “who boarded first?”, which is tolerable to many (until a late connection or other factors make you on time but later than others.) United now wants to make it “those who didn’t give up the space for a discount,” which seems pretty fair to me.

I am curious as to just how they will enforce this. I know some airlines tag cabin baggage — does this actually work? Passengers not using the overhead bin also do not stand in the aisle loading it, though they do often stand there pulling things out of the bag they will be putting under the seat. One way to enforce it would be to have the no-bin folks board last, though that causes a problem when people travelling together have different boarding groups. Some airlines, I think, give you tags for overhead bags and under-seat bags.

So while I don’t usually like what United does, this one’s an exception. (Their new business class redesign also looks good, if long overdue.)

App stores need offline interfaces

Here’s the situation: You’re in a place with no bandwidth or limited bandwidth. It’s exactly the place where you need to download an app, because the good apps, at least, can do more things locally and not make as much use of the network. But you can’t get to the app store. The archetype of this situation is being on a plane with wifi and video offerings over that wifi. You get on board, you connect, and it tells you that you needed to download the app before you took off and got disconnected.

There’s an obvious answer. The app stores should allow segments of themselves to be cached offline. This means that the app market app (such as iTunes or Google Play) should allow you to use a cached version of the store, as long as everything is signed and not too old. Then the plane’s server could keep copies of things like the airline app or video playing app in the cache, along with games and entertainment they want to make available to you. Mostly free stuff, though you could also allow payment with cached transactions (with a bit of trust) if need be.

Same experience for the user. They could go to the app store, search for and find the airline app, and download and install it, all without a network connection. Only if they tried to get a non-cached app would they get told they were offline.
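Since no such offline mode exists in iTunes or Google Play today, what follows is only a sketch of the client-side check such a cached store could run before installing with no network: trust a catalogue whose signature has already been verified against the store’s pinned key (the verification itself is elided here), require that the catalogue isn’t too stale, and confirm the cached package matches the hash the store signed. The field names are invented for the sketch:

```python
import hashlib
import json
import time
from typing import Optional

MAX_CATALOGUE_AGE = 14 * 24 * 3600   # arbitrary: refuse caches older than two weeks


def can_install_offline(catalogue_json: str, package_bytes: bytes,
                        app_id: str, now: Optional[float] = None) -> bool:
    """Return True if the cached app may be installed without a connection.
    Assumes catalogue_json's signature was already checked against the
    app store's pinned public key."""
    catalogue = json.loads(catalogue_json)
    now = time.time() if now is None else now
    if now - catalogue["issued_at"] > MAX_CATALOGUE_AGE:
        return False                      # stale cache: require a real connection
    entry = catalogue["apps"].get(app_id)
    if entry is None:
        return False                      # app is not in the offline cache
    return hashlib.sha256(package_bytes).hexdigest() == entry["sha256"]
```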

As I wander the world, I get reminded all the time how we get a bit spoiled in our land of fast wifi and LTE phone data. You even get to understand why Google started de-ranking pages that don’t support mobile well in their mobile search results. Even as we move to having internet from drones, balloons or satellites everywhere we go, until we have gigabits everywhere, we need to design for lower connectivity environments.

Of course, the airlines could, on Android, offer you an APK file that you can manually install, but you have to check boxes and take security risks to do so, because the certification systems are centralized.

What if the city ran Waze and you had to obey it? Could this cure congestion?

I believe we have the potential to eliminate a major fraction of traffic congestion in the near future, using technology that exists today which will be cheap in the future. The method has been outlined by myself and others in the past, but here I offer an alternate way to explain it which may help crystallize it in people’s minds.

Today many people drive almost all the time guided by their smartphone, using navigation apps like Google Maps, Apple Maps or Waze (now owned by Google.) Many have come to drive as though they were a robot under the command of the app, trusting and obeying it at every turn. Tools like these apps are even causing controversy, because in the hunt for the quickest trip, they are often finding creative routes that bypass congested major roads for local streets that used to be lightly used.

Put simply, the answer to traffic congestion might be, “What if you, by law, had to obey your navigation app at rush hour?” To be more specific, what if the cities and towns that own the streets handed out reservations for routes on those streets to you via those apps, and your navigation app directed you down them? And what if the cities made sure there were never more cars put on a piece of road than it had capacity to handle? (The city would not literally run Waze, it would hand out route reservations to it, and Waze would still do the UI and be a private company.)
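As a sketch of the mechanics (the segment IDs, five-minute slots and capacities below are invented for illustration), the city’s side could be little more than a ledger that refuses a route whenever any segment in the requested time slots is already at capacity, and the navigation app would then offer the driver the best route it can actually reserve:

```python
from collections import defaultdict


class RouteReservations:
    """Toy model of a city granting rush-hour route reservations."""

    def __init__(self, capacity: dict):
        self.capacity = capacity            # segment id -> vehicles allowed per slot
        self.booked = defaultdict(int)      # (segment id, slot) -> vehicles granted

    def request(self, route: list, start_slot: int) -> bool:
        """Grant the route only if no segment would exceed its capacity.
        Crudely assumes the trip advances one segment per time slot."""
        legs = [(seg, start_slot + i) for i, seg in enumerate(route)]
        if any(self.booked[(seg, t)] >= self.capacity[seg] for seg, t in legs):
            return False                    # full somewhere: try another slot or route
        for leg in legs:
            self.booked[leg] += 1
        return True


# A two-segment road that holds 2 cars per slot lets two trips in,
# and tells the third to pick a different time or path.
city = RouteReservations({"bridge": 2, "main-st": 2})
print([city.request(["bridge", "main-st"], start_slot=0) for _ in range(3)])
# -> [True, True, False]
```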

The value is huge. Estimates suggest congestion costs around 160 billion dollars per year in the USA, including 3 billion gallons of fuel and 42 hours of time for every driver. Roughly quadruple that for the world.

Road metering actually works

This approach would exploit one principle in road management that’s been most effective in reducing congestion, namely road metering. The majority of traffic congestion is caused, no surprise, by excess traffic — more cars trying to use a stretch of road than it has the capacity to handle. There are other things that cause congestion — accidents, gridlock and irrational driver behaviour, but even these only cause traffic jams when the road is near or over capacity.

Today, in many cities, highway metering is keeping the highways flowing far better than they used to. When highways stall, the metering lights stop cars from entering the freeway as fast as they want. You get frustrated waiting at the metering light but the reward is you eventually get on a freeway that’s not as badly overloaded.
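The control idea behind those lights is simple feedback, roughly in the spirit of occupancy-based controllers such as ALINEA: measure how congested the mainline is just downstream of the ramp, then release more cars when it is flowing freely and fewer as it approaches capacity. A sketch, where the target, gain and limits are illustrative numbers rather than calibrated values:

```python
def next_metering_rate(current_rate: float, measured_occupancy: float,
                       target_occupancy: float = 0.18, gain: float = 2000.0,
                       min_rate: float = 240.0, max_rate: float = 1800.0) -> float:
    """One control step for a ramp meter, in vehicles released per hour.
    measured_occupancy is the fraction of time the downstream detector is
    covered by a car (0..1). If the mainline is busier than the target,
    release fewer cars; if it is emptier, release more."""
    rate = current_rate + gain * (target_occupancy - measured_occupancy)
    return max(min_rate, min(max_rate, rate))


# Mainline getting heavy (occupancy 0.25 vs. a 0.18 target): slow the ramp down.
print(next_metering_rate(900.0, 0.25))   # -> 760.0
```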

Another type of metering is called congestion pricing. Pioneered in Singapore, these systems place a toll on driving in the most congested areas, typically the downtown cores at rush hour. They are also used in London, Milan, Stockholm and some smaller towns, but have never caught on in many other areas for political reasons. Congestion charging can easily be viewed as allocating the roads to the rich when they were paid for by everybody’s taxes.

A third successful metering system is the High-occupancy toll lane. HOT lanes take carpool lanes that are being underutilized, and let drivers pay a market-based price to use them solo. The price is set to bring in just enough solo drivers to avoid wasting the spare capacity of the lane without overloading it. Taking those solo drivers out of the other lanes improves their flow as well. While not every city will admit it, carpool lanes themselves have not been a success. 90% of the carpools in them are families or others who would have carpooled anyway. The 10% “induced” carpools are great, but if the carpool lane only runs at 50% capacity, it ends up causing more congestion than it saves. HOT is a metering system that fixes that problem.

The Electoral College: Good, bad or Trump trumper, and how to abolish it if you want

Many are writing about the Electoral college. Can it still prevent Trump’s election, and should it be abolished?

Like almost everybody, I have much to say about the US election results. The core will come later — including an article I was preparing long before the election but whose conclusions don’t change much because of the result, since Trump getting 46.4% is not (outside of the result) any more surprising than Trump getting 44% like we expected. But for now, since I have written about the college before, let me consider the debate around it.

By now, most people are aware that the President is not elected on Nov 8th, but rather by the electors around Dec 19. The electors are chosen by their states, based on the popular vote. In almost all states, all the electors come from the party that won the state’s popular vote in a “winner takes all” system, but in a couple of small states they are distributed. In about half the states, the electors are bound by law to vote for the candidate who won the popular vote in that state. In the other states they are party loyalists but technically free. Some “faithless” electors have voted differently, but it’s very rare.

I’m rather saddened by the call by many Democrats to push for electors to be faithless, as well as calls at this exact time to abolish the college. There are arguments to abolish the college, but the calls today are ridiculously partisan, and thus foolish. I suspect that very few of those shouting to abolish the college would be shouting that if Trump had won the popular vote and lost the college (which was less likely but still possible.) In one of Trump’s clever moves, he declared that he would not trust the final results (if he lost) and this tricked his opponents into getting very critical of the audacity of saying such a thing. This makes it much harder for Democrats to now declare the results are wrong and should be reversed.

The college approach — where the people don’t directly choose their leader — is not that uncommon in the world. In my country, and in most of the British parliamentary democracies, we are quite used to it. In fact, the Prime Minister’s name doesn’t even appear on our ballots, even as a fiction, the way the President’s does in the USA. We elect MPs, voting for them mostly (but not entirely) on party lines, and the parties have told us in advance who they will name as PM. (They can replace their leader afterwards if they want, but by convention, not rule, another election happens not long after.)

In these systems it’s quite likely that a party will win a majority of seats without winning the popular vote. In fact, it happens a lot of the time. That’s because in the rest of the world there are more than 2 parties, and no party wins a majority of the popular vote. But it’s also possible for the party that came 2nd in the popular vote to form the government, sometimes with a majority, and sometimes in an alliance.

Origins of the college

When the college was created, the framers were not expecting popular votes at all. They didn’t think that the common people (by which they meant wealthy white males) would be that good at selecting the President. In the days before mass media allowed every voter to actually see the candidates, one can understand this. The system technically just lets each state pick its electors, and they thought the governor or state house would do it.

Later, states started having popular votes (again only of land-owning white males) to pick the electors. They did revise the rules of the college (the 12th Amendment) but they kept it because they were federalists, strong advocates of states’ rights. They really didn’t imagine the public picking the President directly.

Comma One goes Open Source, Robocars in New Zealand Earthquakes and more

There have been few postings this month since I took the time to enjoy a holiday in New Zealand around speaking at the SingularityU New Zealand summit in Christchurch. The night before the summit, we enjoyed a 7.8 earthquake not so far from Christchurch, whose downtown was over 2/3 demolished after quakes in 2010 and 2011. On the 11th floor of the hotel, it was a disturbing nailbiter of swaying back and forth for over 2 minutes — but of course swaying is what the building is supposed to do; that means it’s working. The shocks were rolling, not violent, and in fact we got more violent jolts from aftershocks a week later when we went to Picton.

While driving around that region, we encountered this classic earthquake scene on the road:

There were many like this, and in fact the main highway of the South Island was destroyed long-term not too far away, cutting off several towns. A scene like this makes you wonder just what a robocar would do in such situations. I already answered this question in a blog post on how to handle a tsunami. Fortunately there was only a mild tsunami for this quake. A tsunami will result in a warning in the rich world, and the car will know the elevation map of the roads and know how to get to high ground. In some places, like Japan, there is also an advanced earthquake warning system that tells you quakes are coming well before they hit you, since electronic signals travel much faster than seismic waves. With such a system, robocars should receive a warning and come to a stop unless they need to evacuate a tsunami zone. Without such a warning, we could still imagine the road cracking and collapsing in front of you, as might have happened on this road. Of course the cones and signs that warned me days later would not be present.
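For the tsunami case, the core of the decision is easy to sketch, assuming the car has an elevation map of reachable points. The data format and the 30-metre threshold below are invented for illustration, and a real system would route over the road graph rather than by straight-line distance:

```python
import math


def pick_evacuation_target(car: dict, points: list, safe_elevation_m: float = 30.0):
    """Return the closest known point above the safe elevation, or None.
    `car` and each entry of `points` are dicts with x, y (km) and elevation_m.
    This sketches only the choice of destination, not the driving."""
    def distance(a, b):
        return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

    high_ground = [p for p in points if p["elevation_m"] >= safe_elevation_m]
    return min(high_ground, key=lambda p: distance(car, p)) if high_ground else None
```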

The answer again lies in the fact that pictures like mine will be used to recreate situations like this in simulators, and all car developers will be able to test their systems with simulated quake damage to make sure they do the right thing. I’ve spoken since 2010 on the value of a shared simulator environment, and I think that if government agencies like NHTSA want to really help development, providing funding and tools for such an environment would be a good step. NHTSA’s proposal that all developers share their logs of all incidents would clearly make such a simulator better, but there is pushback because of the proprietary value of those logs. When it comes to strange situations like earthquakes, I doubt there would be much pushback on having an open and shared simulator environment.

New Zealand’s government is taking a very welcoming approach to robocars. They are not regulating for a while, and have invited developers to come and test. They have even said it’s OK to test unmanned vehicles under some fairly simple rules. NZ does not have any auto industry, and of course it’s quite remote, but we’ll see if they can attract developers to come test. Their roads feature something you don’t see much in the USA — tons and tons of one-lane bridges and other one-lane stretches of highway. It turns out that robocars, with a little bit of communication, can make superhumanly efficient use of one-lane two-way roads, and it might be worth exploring.

Open Source Comma One box

Speaking of open, Comma.ai, which previously had declared they were giving up on their neural network autopilot due to NHTSA threats, today announced they have open-sourced their software, along with hardware designs and case designs. NHTSA did not want them making an autopilot, and said they could not simply rely on the fact that drivers were told they must be diligent. It will be very interesting to see how NHTSA reacts to the release of open designs that anybody can then install on their car.

The automotive industry has had a long history of valuing the tinkerer. All the big car companies had their beginnings with small tinkerers and inventors. Some even died in the very machines they were inventing. These beginnings have allowed people to do all sorts of playing around in their garages with new car ideas, without government oversight, in spite of the risk to themselves and even others on the road. If a mechanic wants to charge you for working on your car, they must be licensed, but you are free to work on it yourself with no licence, and even build experimental cars. You just can’t sell them. And even those rights have been eroded.

Clearly far fewer people will have the inclination to build an autopilot using the comma.ai tools by themselves. But it won’t be that hard to do, and they can make it easier with time, too. One could even imagine a car which already had the necessary hardware, so that you only needed to download software to make it happen.

In recent times, there has been a strong effort to prevent people from tinkering with their cars, even in software. One common area of controversy has been engine tuning. Engine tuning is regulated by the EPA to keep emissions low. Car vendors have to show they have done this — and they can’t program their car to give good emissions only on the test while getting better performance off the test, as VW did. But owners have been known to want to make such modifications. Now we will see modifications that affect not just emissions but safety. Car companies don’t want to be responsible if you modify the code in your car and there is an accident involving both their code and yours. As such, they will try to secure their car systems so you can’t change them, and the government may help them or even insist on it. When you add computer security risks to the mix — who can certify that the modified car can’t be taken over and used as a weapon? — it will get even more fun.

I will also point out that I suspect that comma’s approach would not know what to do about the collapsed road, because it would never have been trained in that situation. It might, however, simply sound an alert and kick out, not being able to find the lane any more.

Regulatory pushback

Regular readers will have seen my strong critique of the NHTSA rules. The other major news during my break was the pushback from major players in the public comments on the regulations. In some ways the regulations didn’t do enough to give vendors the certainty they need to make their plans. At the same time, they were criticised for not giving enough flexibility to vendors. In addition, as expected, the players resist giving up their proprietary data in the proposed forced sharing. I predict continued ambivalence on the regulations. Big players actually like having lots of regulations, because big players know how to deal with that and small players don’t.

How will robotaxi services compete in the future?

Right now Uber, Lyft and traditional taxis are competing. But in the robocar world of the future, when large fleets of cars operate as taxis and replace car ownership for many, how will they compete with one another? Will there be a monopoly in each town, or just a couple of companies? Can we have dozens? Does the biggest fleet win?

I have a new major article on the subject. I also welcome comments on other ways these services might find a competitive edge.

Read Competition in the Robotaxi world

If you built "Westworld" (or other robot sex) it would probably be with VR

HBO released a new version of “Westworld” based on the old movie about a robot-based western theme park. The show hasn’t excited me yet — it repeats many of the old tropes on robots/AI becoming aware — but I’m interested in the same thing the original talked about — simulated experiences for entertainment.

The new show misses what’s changed since the original. I think it’s more likely they will build a world like this with a combination of VR, AI and specialty remotely controlled actuators rather than with independent self-contained robots.

One can understand the appeal of presenting the simulation in a mostly real environment. But the advantages of the VR experience are many. In particular, with the top-quality, retinal resolution light-field VR we hope to see in the future, the big advantage is you don’t need to make the physical things look real. You will have synthetic bodies, but they only have to feel right, and only just where you touch them. They don’t have to look right. In particular, they can have cables coming out of them connecting them to external computing and power. You don’t see the cables, nor the other manipulators that are keeping the cables out of your way (even briefly unplugging them) as you and they move.

This is important to get data to the devices — they are not robots as their control logic is elsewhere, though we will call them robots — but even more important for power. Perhaps the most science fictional thing about most TV robots is that they can run for days on internal power. That’s actually very hard.

The VR has to be much better than we have today, but it’s not as much of a leap as the robots in the show. It needs to be at full retinal resolution (though only in the spot your eyes are looking) and it needs to be able to simulate the “light field” which means making the light from different distances converge correctly so you focus your eyes at those distances. It has to be lightweight enough that you forget you have it on. It has to have an amazing frame-rate and accuracy, and we are years from that. It would be nice if it were also untethered, but the option is also open for a tether which is suspended from the ceiling and constantly moved by manipulators so you never feel its weight or encounter it with your arms. (That might include short disconnections.) However, a tracking laser combined with wireless power could also do the trick to give us full bandwidth and full power without weight.

It’s probably not possible to let you touch the area around your eyes and not feel a headset, but add a little SF magic and it might be reduced to feeling like a pair of glasses.

The advantages of this are huge:

  • You don’t have to make anything look realistic, you just need to be able to render that in VR.
  • You don’t even have to build things that nobody will touch, or go to, including most backgrounds and scenery.
  • You don’t even need to keep rooms around, if you can quickly have machines put in the props when needed before a player enters the room.
  • In many cases, instead of some physical objects, a very fast manipulator might be able to quickly place in your way textures and surfaces you are about to touch. For example, imagine if, instead of a wall, a machine with a few squares of wall surface quickly holds one out anywhere you’re about to touch. Instead of a door there is just a robot arm holding a handle that moves as you push and turn it.
  • Proven tricks in VR can get people to turn around without realizing it, letting you create vast virtual spaces in small physical ones. The spaces will be designed to match what the technology can do, of course.
  • You will also control the audio and cancel sounds, so your behind-the-scenes manipulations don’t need to be fully silent.
  • You do it all with central computers, you don’t try to fit it all inside a robot.
  • You can change it all up any time.

In some cases, you need the player to “play along” and remember not to do things that would break the illusion. Don’t try to run into that wall or swing from that light fixture. Most people would play along.

For a lot more money, you might some day be able to do something more like Westworld. That has its advantages too:

  • Of course, the player is not wearing any gear, which will improve the reality of the experience. They can touch their faces and ears.
  • Superb rendering and matching are not needed, nor is the light field or anything else. You just need your robots to get past the uncanny valley.
  • You can use real settings (like a remote landscape for a western) though you may have a few anachronisms. (Planes flying overhead, houses in the distance.)
  • The same transmitted power and laser tricks could work for the robots, but transmitting enough power to power a horse is a great deal more than enough to power a headset. All this must be kept fully hidden.

The latter experience will be made too, but it will be more static and cost a lot more money.

Yes, there will be sex

Warning: We’re going to get a bit squicky here for some folks.

Westworld is on HBO, so of course there is sex, though mostly just a more advanced vision of the classic sex robot idea. I think that VR will change sex much sooner. In fact, there is already a small VR porn industry, and even some primitive haptic devices which tie into what’s going on in the porn. I have not tried them but do not imagine them to be very sophisticated as yet, but that will change. Indeed, it will change to the point where porn of this sort becomes a substitute for prostitution, with some strong advantages over the real thing (including, of course, the questions of legality and exploitation of humans.)

Comma.ai cancels comma-one add-on box after threats from NHTSA

Comma.ai, the brash startup attempting to make a self-driving system entirely from a neural network, has announced it will cancel the “comma one” add-on box it had planned to sell to owners of certain Honda vehicles. The box stuck on the rear-view mirror and used the car’s own bus commands to provide an autopilot similar to those offered by car makers, with lane-keeping and adaptive cruise control.

Of particular importance is the letter from NHTSA to comma.ai which I suggest you read. This letter creates several big issues:

  1. There are many elements of this letter which would also apply to Tesla and other automakers which have built supervised autopilot functions.
  2. Of particular interest is the paragraph which says: “it is insufficient to assert, as you do, that the product does not remove any of the driver’s responsibilities” and “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.” That must be very scary for Tesla.
  3. I noted before that the new NHTSA regulations appear to forbid the use of “black box” neural network approaches to the car’s path planning and decision making. I wondered if this made illegal the approach being taken by Comma, NVIDIA and many other labs and players. This letter may suggest it does.
  4. We now have a taste of the new regulatory regime, and it seems that had it existed before, systems like Tesla’s autopilot, Mercedes Traffic Jam Assist, and Cruise’s original aftermarket autopilot would never have been able to get off the ground.
  5. George Hotz of comma declares “Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn’t worth it. The comma one is cancelled. comma.ai will be exploring other products and markets. Hello from Shenzhen, China.”

To be clear, comma is a tiny company taking a radical approach, so it is not a given that what NHTSA has applied to them would have been or will be unanswerable by the big guys. Because Tesla’s autopilot is not a pure machine learning system, they can answer many of the questions in the NHTSA letter that comma can’t. They can do much more extensive testing than a tiny startup can. But even so, a letter like this sends a huge chill through the industry.

It should also be noted that in Comma’s photos the box replaced the rear-view mirror, and NHTSA had reason to ask about that.

George’s declaration that he’s in Shenzhen gives us the first sign of the new regulatory regime pushing innovation away from the United States and California. I will presume the regulators will say, “We only want to scare away dangerous innovation,” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It’s all about trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.

I sometimes ask, “Why do we let 16 year olds drive?” They are clearly a major danger to themselves and others. Driver testing is grossly inadequate. They are not adults so they don’t have the legal rights of adults. We let them drive because they are going to start out dangerous and then get better. It is the only practical way for them to get better, and we all went through it. Today’s early companies are teenagers. They are going to take risks. But this is the fastest and only practical way to let them get better and save millions.

“…some drivers will use your product in a manner that exceeds its intended purpose”

This sentence, though in the cover letter and not the actual legal demand, looks at the question asked so much after the Tesla fatal crash. The question which caused Consumer Reports to ask Tesla to turn off the feature. The question which caused Mobileye, they say, to sever their relationship with Tesla.

The paradox of the autopilot is this: The better it gets, the more likely it is to make drivers over-depend on it. The more likely they will get complacent and look away from the road. And thus, the more likely you will see a horrible crash like the Tesla fatality. How do you deal with a system which adds more danger the better you make it? Customers don’t want annoying countermeasures. This may be another reason that “Level 2,” as I wrote yesterday, is not really a meaningful thing.

NHTSA has put a line in the sand. It is no longer going to be enough to say that drivers are told to still pay attention.

Black box

Comma is not the only company trying to build a system with pure neural networks making the actual steering decisions (known as “path planning”.) NVIDIA’s teams have been actively working on this, as have several others. They plan to submit comments to NHTSA about this element of the regulations, arguing that it should not forbid this approach until we know it to be dangerous.  read more »

Of the SAE's robocar "levels" only level 4 will be meaningful, and only partly

It’s no secret that I’ve been a critic of the NHTSA “levels” as a taxonomy for types of Robocars since the start. Recent changes in their use call for some new analysis, which concludes that only one of the levels is actually interesting, and that it tells only part of the story at that. As such, they have become even less useful as a taxonomy. Levels 2 and 3 are unsafe, and Level 5 is remote future technology. Level 4 is the only interesting one, and a taxonomy with only one meaningful level is no taxonomy at all.

Unfortunately, they have just been encoded into law, which is very much the wrong direction.

NHTSA and SAE both created a similar set of levels, and they were so similar that NHTSA declared they would just defer to the SAE’s system. Nothing wrong with that, but it does not address the core flaws. Better, their regulations declared that the levels were just part of the story, and they put extra emphasis on what they called the “operating domain” — namely the locations, road types and road conditions in which the vehicle operates.

The levels focus entirely on the question of how much human supervision a vehicle needs. This is an important issue, but the levels treated it like the only issue, and it may not even be the most important. My other main criticism was that the levels, by being numbered, imply a progression for the technology. That progression is far from certain and in fact almost certainly wrong. SAE updated its levels to say that they are not intended to imply a progression, but as long as they are numbers this is how people read them.

Today I will go further. All but level 4 are uninteresting. Some may never exist, or exist only temporarily. They will be at best footnotes of history, not core elements of a taxonomy.

Level 4 is what I would call a vehicle capable of “unmanned” operation — driving with nobody inside. This enables most of the interesting applications of robocars.

Here’s why the other levels are less interesting:

Levels 0 and 1 — Manual or ADAS-improved

Levels 0 and 1 refer to existing technology. We don’t really need new terms for our old cars. Level 2 is perhaps best described as a more advanced version of level 1, and that transition has already taken place.

Level 2 — Supervised Autopilot

Supervised autopilots are real. This is what Tesla sells, and many others have similar offerings. They work in one of two ways. The first is the intended way, with full-time supervision. This is little more than a more advanced cruise control, and may not even be as relaxing.

The second way is what we’ve seen happen with Tesla — a car that needs supervision, but is so good at driving that supervisors get complacent and stop supervising. They want a full self-driving car but don’t have it, so they pretend they do. Many are now saying that this makes the idea of supervised autopilot too dangerous to deploy. The better you make it, the more likely it can lull people into bad activity.

Update: One day after I wrote this, it was revealed that NHTSA shut down comma.ai’s efforts to build an aftermarket autopilot citing these concerns, among others.

Level 3 — Standby driver

This level is really a variation of Level 4, but the vehicle needs the ability to call upon a driver who is not paying attention and get them to take control with 10 to 60 seconds of advance warning. Many people don’t think this can be done safely. When Google experimented with it in 2013, they concluded it was not safe, and decided to take the steering wheel entirely out of their experimental vehicles.

Even if Level 3 is a real thing, it will be short lived as people seek an unmanned capable vehicle. And Level 4 vehicles will offer controls for special use, even if they don’t permit a transition while moving.

Level 5 — Drive absolutely everywhere

SAE, unlike NHTSA’s first proposal, did want to make it clear that an unmanned capable (Level 4) vehicle would only operate in certain places or situations. So they added level 5 to make it clear that level 4 was limited in domain. That’s good, but the reality is that a vehicle that can truly drive everywhere is not on anybody’s plan. It probably requires AI that matches human beings.

Consider this situation in which I’ve been driven. In the African bush on a game safari, we spot a leopard crossing the road. So the guide drives the car off-road (on private land) running over young trees, over rocks, down into wet and dry streambeds to follow the leopard. Great fun, but this is unlikely to be an ability there is ever market demand to develop. Likewise, there are lots of small off-road tracks that are used by only one person. There is no economic incentive for a company to solve this problem any time soon.

Someday we might see cars that can do these things under the high-level control of a human, but they are not going to do them on their own, unmanned. As such, SAE level 5 is academic, and serves only to remind us that level 4 does not mean everywhere.

Levels vs. Cul-de-sacs

The levels are not a progression. I will contend in fact that even to the extent that levels 2, 3/4 and 5 exist, they are quite probably entirely different technologies.

Level 2 is being done with ADAS technologies. They are designed to have a driver in the loop. Their designs in many cases do not have a path to the reliability level needed for unmanned operation, which is orders of magnitude higher. It is not just a difference of degree, it is one of kind.

Level 3 is related to level 4, in particular because a level 3 car is expected to be able to handle non-response from its driver, and safely stop or pull off the road. It can be viewed as a sucky version of a level 4 system. (It’s also not that different — see below.)

Level 5, as indicated, probably requires technologies that are more like artificial general intelligence than they are like a driving system.

As such the levels are not levels. There is no path between any of the levels and the one above it, except in the case of 3/4.

Level 4

This leaves Level 4 as the only one worth working on long term, the only one worth talking about. The others are just there to create a contrast. NHTSA realizes this and gave the name ODD (Operational Design Domain) to the real area of research, namely what roads and situations the vehicles can handle.

The distinction between 4 and 3 is also not as big as you might expect. Google removed the steering wheel from their prototype to set a high bar for themselves, but they actually left one in for use in testing and development. In reality, even the future’s unmanned cars will feature some way for a human to control them, for use during breakdowns, special situations, and moving the cars outside of their service areas (operational domains). Even if the transition from autodrive to human drive is unsafe at speed, it will still be safe if the car pulls over and activates the controls for a licensed driver.

As such, the only distinction of a “level 3” car is it hopes to be able to do that transition while moving, on short but not urgent notice. A pretty minor distinction to be a core element of a taxonomy.

If Level 4 is the only interesting one, my recommendation is to drop the levels from our taxonomy, and focus the taxonomy instead on the classes of roads and conditions the vehicle can handle. It can be a given that outside of those operating domains, other forms of operation might be used, but that does not bear much on the actual problem.

I say we just identify a vehicle capable of unmanned or unsupervised operation as a self-driving car or robocar, and then get to work on the real taxonomy of problems.

Our routers need to remove the "internet" from the "internet of things" to stop DDOS

I frequently say that there is no “internet of things.” That’s a marketing phrase for now. You can’t go buy a “thing” and plug it into the “internet of things.” IoT is still interesting because underneath the name is a real revolution from the way that computing, sensing and communications are getting cheaper, smaller and using less power. New communications protocols are also doing interesting things.

We learned a lesson on Friday though, about why using the word “internet” is its own mistake. The internet — one of the world’s greatest inventions — was created as a network of networks where anything could talk to anything, and it was useful for this to happen. Later, for various reasons, we moved to putting most devices behind NATs and firewalls to diminish this vision, but the core idea remains.

Attackers on Friday made use of a growing collection of low cost IoT devices with poor security to mount a DDOS attack on Dyn’s domain name servers, shutting off name lookup for some big sites. While not the only source of the attack, a lot of attention has gone to certain Chinese brands of IP based security cameras and baby monitors. To make them easy to use, they are designed with very poor security, and as a result they can be hijacked and put into botnets to do DDOS — recruiting a million vulnerable computers to all overload some internet site or service at once.

Most applications for small embedded systems — the old and less catchy name of the “internet of things” — aren’t at all in line with the internet concept. They have no need or desire to be able to talk to the whole world the way your phone, laptop or web server do. They only need to talk to other local devices, and sometimes to cloud servers from their vendor. We are going to see billions of these devices connected to our networks in the coming years, perhaps hundreds of billions. They are going to be designed by thousands of vendors. They are going to be cheap and not that well made. They are not going to be secure, and little we can do will change that. Even efforts to make punishments for vendors of insecure devices won’t change that.

So here’s an alternative: a long-term plan for our routers and gateways to take the internet out of IoT.

Our routers should understand that two different classes of devices will connect to them. The regular devices, like phones and laptops, should connect to the internet as we expect today. There should also be a way to know that a connecting device does not need regular internet access, and not to give it. One way to do that is for the devices themselves to convey how much access they need when they first connect. One proposal for this is my friend Eliot Lear’s MUD proposal. Unfortunately, we can’t count on devices to do this. We must limit stupid devices and old devices too.  read more »
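
To make the two-class idea concrete, here is a minimal sketch of what a router might do once it knows which class a device belongs to. This is not the MUD specification or any real router’s firmware; the device fields, the local prefix and the rule format are all hypothetical illustrations.

```python
# Sketch: "things" get local network and (at most) their own vendor cloud;
# general devices keep ordinary internet access. Everything here is made up.

from dataclasses import dataclass

@dataclass
class Device:
    mac: str
    declared_class: str               # "general" or "thing" (e.g. from a MUD-style hint)
    vendor_cloud: str | None = None   # the only remote host a "thing" may reach

def firewall_rules(dev: Device) -> list[str]:
    """Return human-readable rules the router would install for this device."""
    if dev.declared_class == "general":
        return [f"allow {dev.mac} -> any"]           # phones, laptops: normal internet
    rules = [f"allow {dev.mac} -> 192.168.0.0/16"]   # local network only
    if dev.vendor_cloud:
        rules.append(f"allow {dev.mac} -> {dev.vendor_cloud}:443")  # its own cloud service
    rules.append(f"deny  {dev.mac} -> any")          # and nothing else
    return rules

camera = Device(mac="aa:bb:cc:dd:ee:ff", declared_class="thing",
                vendor_cloud="cloud.example-camera-vendor.com")
laptop = Device(mac="11:22:33:44:55:66", declared_class="general")

for d in (camera, laptop):
    print("\n".join(firewall_rules(d)))
```

A camera locked down this way can still stream to its owner’s app via its vendor cloud, but it cannot be recruited to flood an arbitrary internet target.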

Vendors push back on California Robocar regulations - plus Tesla and Apple news

California Hearings

Wednesday, California held hearings on the latest draft of their regulations. The new regulations heavily incorporate the new NHTSA guidelines released last month, and now incorporate language on the testing and deployment of unmanned vehicles.

The earlier regulations caused consternation because they correctly identified that nobody had sufficient understanding of unmanned vehicle operations to write regulations, but incorrectly proceeded to forbid those vehicles until later. Once you ban something, it’s very hard to un-ban it. The new approach does not ban the vehicles, but instead attempts to write regulations for them that are premature.

Comments from developers of the vehicles reflected the sentiment that all the regulations are premature. California worked together with NHTSA on their regulations, and incorporated them. In particular, while NHTSA’s regulations lay out a 15 point list of functional domains that creators of vehicles should certify, the federal regulations technically declare this certification to be optional. A vendor submitting a report can explicitly state that it declines to certify most of the items.

California suggests that this certification might be mandatory here. For all my criticism of NHTSA’s plan, they do understand that it is still far too early to be writing detailed rules for vehicles that don’t yet exist, and they left avenues for change and disagreement within their regulations. The avenues are not great — I suspect vendors will feel that truly treating the regulations as voluntary is done at their peril — but at least they exist.

Several vendors also pointed out the serious problems with traditional regulatory timelines and the speed of development of computer technologies. The California regulations may require that a car be tested for a year before it is deployed. On the surface that sounds normal by old standards, but the reality of development is very different. Pretty much all the vendors I know are producing new builds of their vehicle software and testing them out on the roads the next day — with trained safety drivers behind the wheel. The software goes through extensive “regression testing,” running through every tricky situation the team has encountered anywhere, as well as simulated situations, but the safety driver is there to deal with any problem not found with that testing.

Vendors won’t release into production cars with only one night of testing, but neither can they wait a year. This is particularly true because in the early days of this technology, new problems will be found during deployment, and you want to get the fixes out on the road as quickly as is safe to do. An arbitrary timeline makes no sense.

This is just the start of the problems. While one may argue that it was always going to be hard for startups and tinkerers to develop these cars, these regulations (and the federal ones) put more nails in the coffin of the small innovator. The amount of bureaucracy, the size of the insurance bonds and many other factors will make it hard for teams the size of the DARPA challenge teams — who kickstarted this technology and made it real — to actually play in the game. The auto industry has a long history of allowing tinkerers to innovate, even at the cost of relaxing the safety requirements applied to them. We may end up with a world where only the big players can play at all, and we know that is generally not good for the pace of innovation.

Delivery Robots

The new regulations allowing unmanned vehicles might seem to open doors for delivery robots like the ones we’re working on at Starship. Unfortunately they seem aimed primarily at large vehicles. Since California rules define the sidewalk as part of the street, these regulations might end up demanding that a small, slow, light delivery robot still comply with the bulky Federal Motor Vehicle Safety Standards (which are meant for passenger cars), which is impossible without major exceptions being made. (More reading is needed to tell if this is truly how it will play out.)

Tesla says all future cars will have full sensor suite

Tesla has declared that all their future cars, including the lower cost Model 3, will include the full suite of radars, cameras and other sensors needed for self driving. That’s good news, though the Tesla sensor suite, lacking LIDAR, is not currently sufficient for a full self-driving car. Tesla is making a bet of sorts that by the time this comes into play, cameras and radars will be sufficient to make an acceptably safe system. If not, they will have to stick with autopilot functions on those cars. Since there is strong evidence that LIDAR will be inexpensive in a couple of years, I don’t believe anybody should plan to deploy their first (and riskiest) robocars without every sensor that’s at all affordable. Why make it less safe than you could, just to save a few hundred dollars?

Today, Tesla can’t do that because no production low cost LIDAR is available. Most other teams are betting it will be. In the future, when cost becomes a bigger issue, vendors will decide to eliminate sensors based on cost.

Apple might have changed their plans

Apple hasn’t said anything official about their rumoured car project. All we know has come from leaks and from looking at who has been hired or who has departed. (I do know one secret thing about the Apple car — it will only work if you have a new iPhone.) Many rumours came out this week that Apple may have cancelled plans to actually make an Apple Car, and instead will take an approach more like Google — building the software and self-driving systems and letting others worry about car manufacture. That is a good strategy, so Apple is hardly out of the game, but it does mean it’s less likely the world will see a car with the particular Apple flair and marketing genius.

The relationship between powerful self-drive system developers (like Apple, Google and Uber) and car manufacturers will be an interesting one. Car makers are used to being in charge, owning the process and owning the customer. So are these hi-tech companies. But many companies will do “contract manufacturing” in auto. If Apple shows up with a purchase order for 100,000 cars to be built to their spec, there are many companies who will take the order, even if the high end Daimlers and Toyotas of the world won’t. So just as Apple doesn’t build the iPhone and gets Foxconn to do it, the fact that Apple will stick to the software systems doesn’t mean their design will not appear in a car.

Here is a summary of Apple car rumours.

Most voting is about the next election, not this one.

When people vote, what do they think it will accomplish? How does this affect how they vote, and how should it?

My apologies for more of this in a season when our social media are overwhelmed with politics, but in a lot of the postings I see about voting plans, I see different implicit views on just what the purpose of voting is. The main focus will be on the vote for US President.

The vast majority of people will vote in non-contested states. The logic is different in the “swing” states where all the campaign attention is.

In a non-contested state, there is essentially zero chance your vote will affect the result of the election. If you’re voting thinking you are exerting your small power to have a say in who wins, you are deluding yourself. Your vote does one, and only one thing — it changes the popular vote totals that are published and looked at by some people. You will change the total for the nation, your state, and some will even look at the totals in your region.

For minor party candidates, having a higher vote total — in particular reaching 5% — can also make a giant difference by giving access to federal campaign funding, which can make a serious difference in the funding level for those parties.

Voters should ask themselves, whose popular vote total do they want to increase? Some logic suggests that it makes more sense to vote for a minor party that you support. Not because they will win, but because you will create a larger proportionate increase in their total. One more vote for a Republican or Democrat will be barely noticed. One more vote for a minor party will also on its own make no difference, but proportionately it may be 10 times or more greater.
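
A pair of invented round totals shows the proportional point; neither number is a real election result.

```python
# One extra vote, measured against the size of the total it joins.

major_party_total = 60_000_000   # a major party's national popular vote (made up)
minor_party_total = 500_000      # a small party's national total (made up)

print(f"{100 / major_party_total:.6f}% increase")  # 0.000002% -- invisible
print(f"{100 / minor_party_total:.6f}% increase")  # 0.000200% -- 120 times larger, proportionally
```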

It’s for the next election, not this one

You don’t increase the popular vote totals to affect this election. You do it to affect the next one. Supporting a party makes other supporters realize they are not alone. It makes them just a bit more likely to join the cause, if they believe in it. Most voters don’t understand this “next election” principle, and so while a minor party remains too small to win or affect the election, they are less likely to support it.

This is how most movements go from being small to being large. When a protest movement is small, people are afraid to show their support. When they see a real crowd march in the square, they are now more likely to join the crowd and to let the world see how much support there really is.

As such, the particular platform planks and candidate quirks are almost entirely irrelevant for the non-swing voter. When you’re voting for the next election, you are really supporting only the party and its broad platform, or a basic overall impression of a candidate. I often see voters say, “I could not vote for a candidate who supports X” but they do not realize that is not what they are doing.

The minor parties are particularly bad at this. Most of them like to pretend they are just like major parties. They nominate candidates based on what they say or stand for. They create detailed party platforms. This is an error. A detailed platform is only a reason for people to vote against you. Detailed platforms are only for candidates who might actually have a shot at implementing their platform. Minor party candidates take it as gospel that they should never admit that they can’t win, even though any rational person knows it quite clearly. The reality is that you can know you can’t win the current election, but can more reasonably hope you can step higher and get within range of winning in a future election. Only when this happens should you act like a major party. You almost never see minor candidates say the truth: “Vote for me, not because you can make me win — you can’t — but to show and build support for the ideas of our party.”

I personally would much rather vote for somebody who said the truth like that, but perhaps I am unusual.

As I’ve said earlier, under this philosophy I recommend people in non-swing states consider minor parties that they want to boost. While it is commonly said that voting for a minor party is “throwing away your vote,” I believe it’s more likely that voting for a major party is actually throwing away the vote. The major party vote will not move any needles, nor wake anybody up. Because the minor party can’t win, you can vote for it simply to signal that there is support for its core ideas. This is something a voter should consider even when they still prefer the major party. Most minor parties have bizarre and fringe policies that most voters would not support. Because they can’t win, this is not important. Should they ever get bigger, they will moderate those policies, or they will never make the jump to serious contender. Yesterday the John Oliver show did a funny skewering of minor party candidates, but it entirely misses this point.

In addition, as minor political movements gain strength, they get noticed by the major parties. If the Greens got 10% of the vote, you can bet the Democrats would take notice, and try to court those voters. They don’t want the Greens to get so large that they become a potential “spoiler” in the swing states, so they will become slightly Green to prevent that. Once again, how you vote today affects the election of the future.

Polls are good too

Of course, even better is to express these desires in the polls. What you say in polls can affect this election, but primarily polls encourage other people who think like you to come out of the woodwork and express that view. Polls are stage one in the process of gaining critical mass — they lead to actual votes, which lead to more polls and so on. Of course, you only want to express support for a party in a poll if you really want this to happen. You should not lie, but you should not be afraid to show what you really support because somebody convinces you it’s wasted.

What if everybody voted this way?

Some people have said to me, “If everybody voted for minority views, the vote might actually become real!” Everybody remembers the 2000 Florida election, where the Greens split the Democratic vote and that eventually resulted in the second President Bush. But that was a swing state, and people knew it would be close.

The truth is, the idea that you are voting for the next election is not widely accepted at present. Perhaps in the future it will be strong enough to change a state from non-contested to swing. But not today.

It’s also true that if you live in a truly non-swing state, like California, it is essentially impossible that your vote will make a difference. If it ever got to the point where California was 50-50 on a choice like Clinton-Trump, then Trump would already have won long ago in the other states. Solid safe states can’t be the deciding state. (Rare events, like having the Republican candidate be a California governor, can turn a safe state into a swing state, but not by surprise.) The only way the truly safe states can ever swing is in an election that’s already settled. The polls will tell you that long in advance.

Can this really work in the USA?

The biggest counter-argument to this approach I have seen is the suggestion that the USA is different, that the two party system is so entrenched that anything else is a waste of time.

In the rest of the world, 3rd parties are very common. They are often real players in elections; frequently no party gets a majority, so coalitions must be formed, in which a large party agrees to carry out some of the agenda of a smaller party in exchange for its support. Parties begin small and grow, as described above. Parties like the Greens are now a powerful minority force in Europe. Some countries, like Iceland, have never had a majority party.

The USA has been two-party for a long time, and the two powerful parties tend to make the rules so as to keep it that way. The above federal funding rule is just one example. In Presidential elections, the system requires a majority in the electoral college. A serious 3rd candidate could simply mean the election is sent to the House of Representatives (which is now long term Republican due to gerrymandering.)

There are some approaches that could give minority political opinion more influence in the USA. The best would be to move states away from plurality voting to multi-candidate methods such as Approval voting or a Condorcet method. There are no rules against a state doing that for any of its elections. States don’t because the two parties like keeping it a two-party system. Efforts are underway in the states that have ballot propositions (bypassing the two parties) to make such changes.
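
As a minimal illustration of how a multi-candidate method differs from plurality, here is a toy approval-voting tally. The ballots and party names are invented; real Condorcet methods are more involved, but the shift away from “pick exactly one” is the same.

```python
# Approval voting: each voter marks every candidate they find acceptable,
# and the candidate approved by the most voters wins.

from collections import Counter

ballots = [
    {"Democrat", "Green"},           # this voter approves of both
    {"Democrat"},
    {"Republican", "Libertarian"},
    {"Green", "Democrat"},
    {"Republican"},
]

tally = Counter(candidate for ballot in ballots for candidate in ballot)
winner, votes = tally.most_common(1)[0]
print(tally)           # approval counts per candidate
print(winner, votes)   # the most broadly acceptable candidate wins
```

Under a system like this, approving a minor party costs the voter nothing, so its true level of support becomes visible without any “spoiler” risk.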

What about major parties

This view also can affect your vote for major parties. For example, even though you know your vote in California will make no difference, you may want to make a tiny contribution to public and party perception of how much one party beat another. You may want to support the idea of a landslide or a “mandate.” You might also go the other way, and vote to punish your preferred party (for not listening to you or picking the wrong nominee) by voting for the other major party so that they don’t think they have a mandate. Sanders supporters who hated Clinton would be foolish to vote for Trump in a swing state, but in the safe states they could send this message if they desired. (It should be noted that this does run a very tiny risk of causing the popular vote to not match the college, which doesn’t stop your candidate from winning but sends a very strong message of dissatisfaction, and causes some lessening of support for the legitimacy of the process.)

What about in a swing state

This logic applies much less in swing states. There, your vote might change the state, and there is a very small chance it could swing the election. It is worth pointing out that this has never actually happened in a Presidential election — there has never been one where a single vote made the difference — but unlike in the non-contested states, there is still a chance of it happening. There, you will certainly vote for a major party if you want one, and you might even think twice about doing so even if you love a minor party, since your desire to pick the lesser of the two evils may exceed your desire to show support for your real values. Here, it is possible for minor parties to split the ballot and, in the view of the major parties, “spoil” the vote. This point is valid; the main error is in people applying this advice outside the swing states.

It is an interesting exercise to calculate just how much effect a single vote has even in a swing state. Again, the probability that a single state makes the difference in the election is already low in most elections, and the probability that this state’s result is within a single vote is also extremely low. On the other hand, if it does happen, then it happens for every voter in the state who voted for the winner — they all made the difference equally.

What is it worth to be able to make your candidate become President? In 2012 it was estimated that donors put in $2.6B, and that was not for a guarantee. For an ordinary individual, one could do research to figure out what it’s truly worth to each voter by trying to ask how much money they would take to accept the other candidate. That will vary from race to race and person to person, but for most people, it doesn’t make a huge difference in their lives who is President. They might feel they will make a bit more money with one, be a bit happier, get more things they care about done, but it’s not worth millions to anybody but business people who think it will majorly affect their business. Throwing out ballpark numbers, let’s assume it’s worth $100,000 to a given individual — and I think that’s actually very high, and of course I know it’s not just about money.

The problem is that the odds of the vote actually making the difference are low. Even a close race usually has a margin of thousands of votes, so the odds of a win-by-one are perhaps 1 in 10,000, and the odds that your state will be the decider are also small. After all, only a few elections have ever been decided by one close state, though Florida in 2000 is one of them and it’s in recent memory. If you judge your state has a 1 in 100 chance of being the decider, this back of the envelope calculation values your vote at about ten cents — a one in 1,000,000 chance of something worth $100K.
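
Multiplied out, with every input being the rough ballpark guess from the paragraph above:

```python
# Rough expected value of one vote's chance of changing who wins.

value_of_winning = 100_000      # what the outcome is worth to one voter, in dollars (guess)
p_win_by_one     = 1 / 10_000   # chance the state's margin is a single vote (guess)
p_state_decides  = 1 / 100      # chance your state is the one that decides it (guess)

expected_value = value_of_winning * p_win_by_one * p_state_decides
print(f"${expected_value:.2f}")  # $0.10
```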

One might argue that bumping the popular vote total is worth more. Unlike changing the result (which almost never happens) your vote always changes the popular vote totals, no matter which election or state you vote in. So while the value of that is small, the fact that it always happens bumps its expected value. Would adding 100,000 votes to the Green total in California be worth $100K to the Greens there? I would say it would be far more, suggesting a value much more than $1 per vote.

This may explain why voter turnout is so low.

Yikes - even Barack Obama wants to solve robocar "Trolley Problems" now

I had hoped I was done ranting about our obsession with what robocars will do in no-win “who do I hit?” situations, but this week, even Barack Obama in his interview with Wired opined on the issue, prompted by my friend Joi Ito from the MIT Media Lab. (The Media Lab recently ran a misleading exercise asking people to pretend they were a self-driving car deciding who to run over.)

I’ve written about the trouble with these problems and even proposed a solution but it seems there is still lots of need to revisit this. Let’s examine why this problem is definitely not important enough to merit the attention of the President or his regulators, and how it might even make the world more dangerous.

We are completely fascinated by this problem

Almost never do I give a robocar talk without somebody asking about this. Two nights ago, I attended another speaker’s talk and he got the question as his 2nd one. He looked at his watch and declared he had won a bet with himself about how quickly somebody would ask. It has become the #1 question in the mind of the public, and even Presidents.

It is not hard to understand why. Life or death issues are morbidly attractive to us, and the issue of machines making life or death decisions is doubly fascinating. It’s been the subject of academic debates and fiction for decades, and now it appears to be a real question. For those who love these sorts of issues, and even those who don’t, the pull is inescapable.

At the same time, even the biggest fan of these questions, stepping back a bit, would agree they are of only modest importance. They might not agree with the very low priority that I assign, but I don’t think anybody feels they are anywhere close to the #1 question out there. As such we must realize we are very poor at judging the importance of these problems. So each person who has not already done so needs to look at how much importance they assign, and put an automatic discount on this. This is hard to do. We are really terrible at statistics sometimes, and dealing with probabilities of risk. We worry much more about the risks of a terrorist attack on a plane flight than we do about the drive to the airport, but that’s entirely wrong. This is one of those situations, and while people are free to judge risks incorrectly, academics and regulators must not.

Academics call this the Law of triviality. A real world example is terrorism. The risk of that is very small, but we make immense efforts to prevent it and far smaller efforts to fight much larger risks.

These situations are quite rare, and we need data about how rare they are

In order to judge the importance of these risks, it would be great if we had real data. All traffic fatalities are documented in fairly good detail, as are many accidents. A worthwhile academic project would be to figure out just how frequent these incidents are. I suspect they are extremely infrequent, especially ones involving a fatality. Right now fatalities happen about every 2 million hours of driving, and the majority of those are single car fatalities (with fatigue and alcohol among the leading causes.) I have yet to read a report of a fatality or serious injury in which a driver had no escape but did have the ability to choose what to hit, with different choices leading to injuries for different people. I am not saying they don’t exist, but first examination suggests they are quite rare — probably hundreds of billions of miles, if not more, between them.

Those who want to claim they are important have the duty to show that they are more common than these intuitions suggest. Frankly, I think if there were accidents where the driver made a deliberate decision to run down one person to save another, or to hurt themselves to save another, this would be a fairly big human interest news story. Our fascination with this question demands it. Just how many lives would be really saved if cars made the “right” decision about who to hit in the tiny handful of accidents where they must hit somebody?

In addition, there are two broad classes of situations. In one, the accident is the fault of another party or cause, and in the other, it is the fault of the driver making the “who to hit” decision. In the former case, the law puts no blame on you for who you hit if forced into the situation by another driver. In the latter case, we have the unusual situation that a car is somehow out of control or making a major mistake and yet still has the ability to steer to hit the “right” target.

These situations will be much rarer for robocars

Unlike humans, robocars will drive conservatively and be designed to avoid failures. For example, in the MIT study, the scenario was often a car whose brakes had failed. That won’t happen to robocars — ever. I really mean never. Robocar designs now all commonly feature two redundant braking systems, because they can’t rely on a human pumping the hydraulics manually or pulling an emergency brake. In addition, every time they apply the brakes, they will be testing them, and at the first sign of any problem they will go in for repair. The same is true of the two redundant steering systems. Complete failure should be ridiculously unlikely.

The cars will not suddenly come upon a crosswalk full of people with no time to stop — they know where the crosswalks are and they won’t drive so fast that they can’t stop for one. They will also be constantly measuring traction and road conditions to ensure they don’t drive too fast for the road. They won’t go around blind corners at high speeds. They will have maps showing all known bottlenecks and construction zones. Ideally new construction zones will only get created after a worker has logged the zone on their mobile phone and the update has been pushed out to cars going that way, but if for some reason the workers don’t do that, the first car to encounter the anomaly will make sure all the other cars know.

This does not mean the cars will be perfect, but they won’t be hitting people because they were reckless or had predictable mechanical failures. Their failures will be more strange, and also make it less likely the vehicle will have the ability to choose who to hit.

To be fair, robocars also introduce one other big difference. Humans can argue that they don’t have time to think through what they might do in a split-second accident decision. That’s why when they do hit things, we call them accidents. They clearly didn’t intend the result. Robocars do have the time to think about it, and their programmers, if demanded to by the law, have the time to think about it. Trolley problems demand the car be programmed to hit something deliberately. The impact will not be an accident, even if the cause was. This puts a much higher standard on the actions of the robocar. One could even argue it’s an unfair standard, which will delay deployment if we need to wait for it.

In spite of what people describe in scenarios, these cars won’t leave their right of way

It is often imagined an ethical robocar might veer into the oncoming lane or onto the sidewalk to hit a lesser target instead of a more vulnerable one in its path. That’s not impossible, but it’s pretty unlikely. For one, that’s super-duper illegal. I don’t see a company, unless forced to do so, programming a car to ever deliberately leave its right of way in order to hit somebody. It doesn’t matter if you save 3 school buses full of kids, deliberately killing anybody standing on the sidewalk sounds like a company-ruining move.

For one thing, developers just won’t put that much energy into making their car drive well on the sidewalk or in oncoming traffic. They should not put their energies there! This means the cars will not be well tested or designed when doing this. Humans are general thinkers, we can handle driving on the grass even though we have had little practice. Robots don’t quite work that way, even ones designed with machine learning.

This limits most of the situations to ones where you have a choice of targets within your right-of-way. And changing lanes is always more risky than staying in your lane, especially if there is something else in the lane you want to change to. Swerving if the other lane is clear makes sense, but swerving into an occupied lane is once again something that is going to be uncharted territory for the car.

By and large the law already has an answer

The vehicle code is quite detailed about who has right-of-way. In almost every accident, somebody didn’t have it and is the one at fault under the law. The first instinct for most programmers will be to have their car follow the law and stick to their ROW. To deliberately leave your ROW is a very risky move as outlined above. You might get criticized for running over jaywalkers when you could have veered onto the sidewalk, but the former won’t be punished by the law and the latter can be. If people don’t like the law, they should change the law.

The lesson of the Trolley problem is “you probably should not try to solve trolley problems.”

Ethicists point out correctly that Trolley problems may be academic exercises, but are worth investigating for what they teach. That’s true in the classroom. But look at what they teach! From a pure “save the most people” utilitarian standpoint, the answer is easy — switch the car onto the track to kill one in order to save 5. But most people don’t pick that answer, particularly in the “big man” version where you can push a big man standing with you on a bridge onto the tracks to stop the trolley and save the 5. The problem teaches us we feel much better about leaving things as they are than in overtly deciding to kill a bystander. What the academic exercise teaches us is that in the real world, we should not foist this problem on the developers.

If it’s rare and a no-win situation, do you have to solve it?

Trolley problems are philosophy class exercises to help academics discuss ethical and moral problems. They aren’t guides to real life. In the classic “trolley problem” we forget that none of it happens unless a truly evil person has tied people to a railway track. In reality, many would argue that the actors in a trolley problem are absolved of moral responsibility because the true blame is on the setting and its architect, not them. In philosophy class, we can still debate which situation is more or less moral, but they are all evil. These are “no win” situations, and in fact one of the purposes of the problems is they often describe situations where there is no clear right answer. All answers are wrong, and people disagree about which is most wrong.

If a situation is rare, and it takes effort to figure out which is the less wrong answer, and things will still be wrong after you do this even if you do it well, does it make sense to demand an answer at all? To individuals involved, yes, but not to society. The hard truth is that with 1.2 million auto fatalities a year — a number we all want to see go down greatly — it doesn’t matter that much to society whether, in a scenario that happens once every few years, you kill 2 people or 3 while arguing which choice was more moral. That’s because answering the question, and implementing the answer, have a cost.

Every life matters, but we regularly make decisions like this. We find things that are bad and rare, and we decide that below a certain risk threshold, we will not try to solve them unless the cost is truly zero. And here the cost is very far from zero. Because these are no-win situations and each choice is wrong, each choice comes with risk. You may work hard to pick the “right” choice and end up having others declare it wrong — all to make a very tiny improvement in safety.

At a minimum, each solution will involve thought and programming, as well as emotional strain for those involved. It will involve legal review and, under the new regulations, certification processes and documentation. All things that go into the decision must be recorded and justified. All of this is untrod legal ground, making it even harder. In addition, no real scenario will match the hypothetical situations exactly, so the software must apply to a range of situations and still do the intended thing (let alone the right thing) as the situation varies. This is not minor.

Nobody wants to solve it

In spite of the fascination these problems hold, coming up with “solutions” to these no-win situations is the last thing developers want to do. In articles about these problems, we almost always see the question, “Who should decide who the car will hit?” The answer is that nobody wants to decide. The answer is almost surely wrong in the view of some. Nobody is going to get much satisfaction or any kudos for doing a good job, whatever that is. Combined with the rarity of these events compared to the many other problems on the table, solving ethical issues is very, very, very low on the priority list for most teams. Because developers and vendors don’t want to solve these questions and take the blame for those solutions, it makes more sense to ask policymakers to solve what needs to be solved. As Christophe von Hugo of Mercedes put it, “99% of our engineering work is to prevent these situations from happening at all.”

The cost of solving may be much higher than people estimate

People grossly underestimate how hard some of these problems will be to solve. Many of the situations I have seen proposed actually demand that cars develop entirely new capabilities that they don’t need except to solve these problems. In these cases, we are talking about serious cost, and delays to deployment if it is judged necessary to solve these problems. Since robocars are planned as a life-saving technology, each day of delay has serious consequences. Real people will be hurt because of these delays aimed at making a better decision in rare hypothetical situations.

Let’s consider some of the things I have seen:

  • Many situations involve counting the occupants of other cars, or counting pedestrians. Robocars don’t otherwise have to do this, nor can they easily do it. Today it doesn’t matter if there are 2 or 3 pedestrians — the only rule is not to hit any number of pedestrians. With low resolution LIDAR or radar, such counts are very difficult. Counts inside vehicles are even harder.
  • One scenario considers evaluating motorcyclists based on whether they are wearing helmets. I think this one is ridiculous, but if people take it seriously it is indeed serious. This is almost impossible to discern from a LIDAR image and can be challenging even with computer vision.
  • Some scenarios involve driving off cliffs or onto sidewalks or otherwise off the road. Most cars make heavy use of maps to drive, but they have no reason to make maps of off-road areas at the level of detail that goes into the roads.
  • More extreme scenarios compare things like children vs. adults, or school-buses vs. regular ones. Today’s robocars have no reason to tell these apart. And how do you tell a dwarf adult from a child? Full handling of these moral valuations requires human level perception in some cases.
  • Some suggestions have asked cars to compare levels of injury. Cars might be asked to judge the difference between a fatal impact and one that just breaks a leg.

These are just a few examples. A large fraction of the hypothetical situations I have seen demand some capability of the cars that they don’t have or don’t need to have just to drive safely.

The problem of course is that there are those who say one must not put cars on the road until the ethical dilemmas have been addressed. Not everybody says this, but it’s a very common sentiment, and the new regulations now demand at least some evaluation of it. However much the regulations may claim to be voluntary, in practice they are not — and not just because some states are already talking about making them mandatory.

Once a duty of care has been suggested, especially by the government, you ignore it at your peril. Once you know the government — all the way to the President — wants you to solve something, then you must be afraid you will be asked “why didn’t you solve that one?” You have to come up with an answer to that, even with voluntary compliance.

The math on this is worth understanding. Robocars will be deployed slowly into society, but that doesn’t matter for this calculation. If robocars are rare, they can prevent only a small number of accidents, but they will also encounter a correspondingly small number of trolley problems. What matters is how many trolley situations there are per fatality, and how many people you could save with better handling of those situations. If you get one trolley problem for every 1,000 or 10,000 fatalities, and robocars have half the fatality rate of human drivers, the math very clearly says you should not accept any delay in deployment to work on these problems.
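
A toy version of that calculation, using the worldwide fatality figure cited earlier in this post. The deployment share, the fatality reduction, the trolley-problem frequency, and the assumption that perfect handling saves one life per trolley situation are all invented for illustration.

```python
# Rough numbers only; every parameter below is a guess made for illustration.

human_fatalities_per_year = 1_200_000   # worldwide traffic deaths (figure used earlier)
robocar_share_of_driving  = 0.10        # suppose robocars do 10% of all driving
fatality_reduction        = 0.5         # suppose robocars halve the fatality rate

lives_saved_per_year = (human_fatalities_per_year * robocar_share_of_driving
                        * fatality_reduction)

trolley_problems_per_fatality = 1 / 10_000   # one genuine "choose who to hit" per 10,000 fatalities
robocar_fatalities_per_year = (human_fatalities_per_year * robocar_share_of_driving
                               * (1 - fatality_reduction))
# Assume perfect trolley-problem handling saves at most one life per such situation.
lives_saved_by_perfect_trolley_code = robocar_fatalities_per_year * trolley_problems_per_fatality

print(f"Lives saved per year just by deploying:      {lives_saved_per_year:,.0f}")            # 60,000
print(f"Extra lives saved by perfect trolley ethics: {lives_saved_by_perfect_trolley_code:,.1f}")  # 6.0
```

With numbers anything like these, even a few weeks of delayed deployment costs far more lives than perfect trolley-problem handling could ever recover.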

The court of public opinion

The real courts may or may not punish vendors for picking the wrong solution (or the default solution of staying in your lane) in no-win situations. Chances are there will be a greater fear of the court of public opinion. There is reason to fear the public would not react well if a vehicle could have produced an obviously better outcome, particularly if the bad outcome involves children or highly vulnerable road users rather than adults and at-fault or protected road users.

Because of this I think that many companies will still try to solve some of these problems even if the law puts no duty on them. Those companies can evaluate the risk on their own and decide how best to mitigate it. That should be their decision.

For a long time, many people felt any robocar fatality would cause an uproar in the public eye. To everybody’s surprise, the first Tesla autopilot deaths resulted in Tesla stock rising for 2 months, even with 3 different agencies doing investigations. While the reality of the Tesla is that the driver bears much more responsibility than they would with a full robocar, the public isn’t very clear on that point, so the lack of reaction is astonishing. I suspect companies will discount this risk somewhat after this event.

This is a version 2 feature, not a version 1 feature

As noted, while humans make split-second “gut” decisions and we call the results accidents, robocars are much more intentional. If we demand they solve these problems, we ask something of them and their programmers that we don’t ask of human drivers. We want robocars to drive more safely than humans, but we also must accept that the first robocars to be deployed will only be a little better. The goal is to start saving lives and to get better and better at it as time goes by. We must consider the ethics of making the problem even harder on day one. Robocars will be superhuman in many ways, but primarily at doing the things humans do, only better. In the future, we should demand these cars meet an even higher standard than we put on people. But not today: The dawn of this technology is the wrong time to also demand entirely new capabilities for rare situations.

Performing to the best moral standards in rare situations is not something that belongs on the feature list for the first cars. Solving trolley situations well is in the “how do we make this perfect?” problem set, not the “how do we make this great?” set. It is important to remember how the perfect can be the enemy of the good, and to distinguish between the two. Yes, it means accepting there is a small chance that somebody could be hurt or die, but people are already being killed, in large numbers, by the human drivers we aim to replace.

So let’s solve trolley problems, but do it after we get the cars out on the road both saving lives and teaching us how to improve them further.

What about the fascination?

The over-fascination with this problem is a real thing even if the problem isn’t. Surveys have shown one interesting result: when you ask people what a car should do for the good of society, they say it should sacrifice its passenger to save multiple pedestrians, especially children. On the other hand, when you ask whether they would buy a car that did that, far fewer say yes. As long as the problem is rare, there is no actual “good of society” priority; the real “good of society” comes from getting this technology deployed and driving safely as quickly as possible. Mercedes recently announced a much simpler strategy which does what people actually want, and got criticism for it. Their strategy is reasonable — they want to save the party they can be most sure of saving, namely the passengers. They note that they have very little reliable information on what will happen in other cars or who is in them, so they should focus not on a guess of what would save the most people, but on what will surely save the people they know about.

What should we do?

I make the following concrete recommendations:

  1. We should do research to determine how frequent these problems are, how many have “obvious” answers and thus learn just how many fatalities and injuries might be prevented by better handling of these situations.
  2. We should remove all expectation on first generation vehicles that they put any effort into solving the rare ones, which may well be all of them.
  3. It should be made clear there is no duty of care to go to extraordinary lengths (including building new perception capabilities) to deal with sufficiently rare problems.
  4. Due to the public over-fascination, vendors may decide to declare their approaches to satisfy the public. Simple approaches should be encouraged, and in the early years of this technology, almost no answer should be “wrong.”
  5. For non-rare problems, governments should set up a system where developers/vendors can ask for rulings on the right behaviour from the policymakers, and limit the duty of care to following those rulings.
  6. As the technology matures, and new perception abilities come online, more discussion of these questions can be warranted. This belongs in car 2.0, not car 1.0.
  7. More focus at all levels should go into the real everyday ethical issues of robocars, such as roads where getting around requires regularly violating the law (speeding, aggression etc.) in the way all human users already do.
  8. People writing about these problems should emphasize how rare they are, and when doing artificial scenarios, recount how artificial they are. Because of the public’s fears and poor risk analysis, it is inappropriate to feed on those fears rather than be realistic.

The social networks could hold great political power due to GOTV. Should they?

The social networks have access (or more to the point can give their users access) to an unprecedented trove of information on political views and activities. Could this make a radical difference in affecting who actually shows up to vote, and thus decide the outcome of elections?

I’ve written before about how the biggest factor in US elections is the power of GOTV - Get Out the Vote. US Electoral turnout is so low — about 60% in Presidential elections and 40% in off-year — that the winner is determined by which side is able to convince more of their weak supporters to actually show up and vote. All those political ads you see are not going to make a Democrat vote Republican or vice versa, they are going to scare a weak supporter to actually show up. It’s much cheaper, in terms of votes per dollar (or volunteer hour) to bring in these weak supporters than it is to swing a swing voter.

The US voter turnout numbers are among the worst in the wealthy world. Much of this is blamed on the fact that the US, unlike most other countries, has voter registration — effectively two-step voting. Voter registration was originally implemented in the USA as a form of vote suppression, and it has stuck with the country ever since. In almost all other countries, some agency is responsible for preparing a list of citizens and giving it to each polling place. There are people working to change that, but for now it’s the reality. Registration is about 75%, and Presidential voting about 60%. (Turnout of registered voters is around 80%.)

Scary negative ads are one thing, but one of the most powerful GOTV forces is social pressure. Republicans used this well under Karl Rove, working to make social groups like churches create peer pressure to vote. But let’s look at the sort of data sites like Facebook have or could have access to:

  • They can calculate a reasonably accurate estimate of your political leaning with modern AI tools and access to your status updates (where people talk politics) and your friend network, along with the usual geographic and demographic data
  • They can measure the strength of your political convictions through your updates
  • They can bring in the voter registration databases (which are public in most states, with political use allowed on the data. Commercial use is forbidden in a portion of states but this would not be commercial.)
  • In many cases, the voter registration data also reveals if you voted in prior elections
  • Your status updates and geographical check-ins and postings will reveal voting activity. Some sites (like Google) that have mobile apps with location sensing can detect visits to polling places. Of course, for the social site to aggregate and use this data for its own purposes would be a gross violation of many important privacy principles. But social networks don’t actually do (too many) things; instead they provide tools for their users to do things. As such, while Facebook should not attempt to detect and use political data about its users, it could give tools to its users that let them select subsets of their friends, based only on information that those friends overtly shared. On Facebook, you can enter the query, “My friends who like Donald Trump” and it will show you that list. They could also let you ask “My Friends who match me politically” if they wanted to provide that capability.

Now imagine more complex queries aimed specifically at GOTV, such as: “My friends who match me politically but are not scored as likely to vote” or “My friends who match me politically and are not registered to vote.” Possibly adding “Sorted by the closeness of our connection” which is something they already score.  read more »
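
To make the idea concrete, here is a made-up sketch of that kind of friend filtering. Nothing in it is a real Facebook API or query language; the fields, scores and names are hypothetical, and it only uses information a friend has overtly shared.

```python
# Sketch of a GOTV-style friend query: politically matching friends who are
# not likely voters, sorted by how close the connection is.

from dataclasses import dataclass

@dataclass
class Friend:
    name: str
    shared_party: str | None   # party the friend has publicly "liked", if any
    likely_voter: bool          # e.g. inferred from public registration records
    closeness: float            # the network's existing connection-strength score

friends = [
    Friend("Alice", "Green", likely_voter=False, closeness=0.9),
    Friend("Bob",   "Green", likely_voter=True,  closeness=0.4),
    Friend("Carol", None,    likely_voter=False, closeness=0.7),
]

my_party = "Green"

# "My friends who match me politically but are not scored as likely to vote,
#  sorted by the closeness of our connection."
targets = sorted(
    (f for f in friends if f.shared_party == my_party and not f.likely_voter),
    key=lambda f: f.closeness,
    reverse=True,
)
print([f.name for f in targets])   # -> ['Alice']
```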
