brad's blog

What if the city ran Waze and you had to obey it? Could this cure congestion?

I believe we have the potential to eliminate a major fraction of traffic congestion in the near future, using technology that exists today and will be cheap in the future. The method has been outlined by me and others in the past, but here I offer an alternate way to explain it that may help crystallize it in people’s minds.

Today many people drive almost all the time guided by their smartphone, using navigation apps like Google Maps, Apple Maps or Waze (now owned by Google.) Many have come to drive as though they were robots under the command of the app, trusting and obeying it at every turn. These apps are even causing controversy, because in the hunt for the quickest trip, they often find creative routes that bypass congested major roads for local streets that used to be lightly used.

Put simply, the answer to traffic congestion might be, “What if you, by law, had to obey your navigation app at rush hour?” To be more specific, what if the cities and towns that own the streets handed out reservations for routes on those streets to you via those apps, and your navigation app directed you down them? And what if the cities made sure there were never more cars put on a piece of road than it had capacity to handle? (The city would not literally run Waze; it would hand out route reservations to Waze, which would remain a private company and continue to provide the user interface.)

The value is huge. Estimates put the cost of congestion in the USA at around $160B per year, plus about 42 hours of lost time per driver per year, worth roughly another $160B. Roughly quadruple that for the world.
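What would the city’s side of such a reservation system look like? Here is a minimal sketch, in Python, of the core idea: a route is granted only if every road segment along it has spare capacity in the requested time slot. Everything here is illustrative (the five-minute slots, the class and method names are my own); a real system would also need pricing, fairness rules and handling of no-shows.

```python
from collections import defaultdict

class RouteReservations:
    """Hypothetical city-side reservation service: never grant more cars on a
    segment, in a given time slot, than that segment can handle."""

    def __init__(self, capacity_per_slot):
        # capacity_per_slot: {segment_id: max vehicles per 5-minute slot}
        self.capacity = capacity_per_slot
        self.booked = defaultdict(int)   # (segment_id, slot) -> vehicles already granted

    def request(self, route):
        """route: [(segment_id, slot), ...] as proposed by the navigation app.
        Grants the whole route or rejects it, never overbooking a segment."""
        if any(self.booked[(seg, slot)] >= self.capacity[seg] for seg, slot in route):
            return False    # the app should propose another route or a later slot
        for seg, slot in route:
            self.booked[(seg, slot)] += 1
        return True
```

The navigation app (Waze, Google Maps and so on) would keep proposing candidate routes and departure slots until one is granted, then hold the driver to it.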

Road metering works

This approach would exploit the one principle of road management that has been most effective at reducing congestion, namely road metering. The majority of traffic congestion is caused, no surprise, by excess traffic — more cars trying to use a stretch of road than it has the capacity to handle. Other things cause congestion too (accidents, gridlock and irrational driver behaviour), but even these only produce traffic jams when the road is near or over capacity.

Today, in many cities, highway metering is keeping the highways flowing far better than they used to. When highways stall, the metering lights stop cars from entering the freeway as fast as they want. You get frustrated waiting at the metering light but the reward is you eventually get on a freeway that’s not as badly overloaded.

Another type of metering is called congestion pricing. Pioneered in Singapore, these systems place a toll on driving in the most congested areas, typically the downtown cores at rush hour. They are also used in London, Milan, Stockholm and some smaller towns, but have never caught on in many other areas for political reasons. Congestion charging can easily be viewed as allocating the roads to the rich when they were paid for by everybody’s taxes.

A third successful metering system is the High-occupancy toll lane. HOT lanes take carpool lanes that are being underutilized, and let drivers pay a market-based price to use them solo. The price is set to bring in just enough solo drivers to avoid wasting the spare capacity of the lane without overloading it. Taking those solo drivers out of the other lanes improves their flow as well. While not every city will admit it, carpool lanes themselves have not been a success. 90% of the carpools in them are families or others who would have carpooled anyway. The 10% “induced” carpools are great, but if the carpool lane only runs at 50% capacity, it ends up causing more congestion than it saves. HOT is a metering system that fixes that problem.
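The HOT-lane price-setting described above is essentially a feedback loop: raise the toll when the lane fills past its target flow, lower it when capacity is going unused. A rough sketch of that loop (the gain and toll bounds are made-up numbers, not from any real deployment):

```python
def update_toll(current_toll, measured_flow, target_flow,
                gain=0.05, min_toll=0.50, max_toll=15.00):
    """Simple proportional controller for a HOT lane toll (illustrative only).
    measured_flow and target_flow are vehicles per hour entering the lane."""
    error = (measured_flow - target_flow) / target_flow
    new_toll = current_toll * (1.0 + gain * error)   # too busy -> price up, too empty -> price down
    return max(min_toll, min(max_toll, new_toll))
```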

The Electoral College: Good, bad or Trump trumper, and how to abolish it if you want

Many are writing about the Electoral college. Can it still prevent Trump’s election, and should it be abolished?

Like almost everybody, I have much to say about the US election results. The core will come later — including an article I was preparing long before the election but whose conclusions don’t change much because of the result, since Trump getting 46.4% is not (outside of the result) any more surprising than Trump getting 44% like we expected. But for now, since I have written about the college before, let me consider the debate around it.

By now, most people are aware that the President is not elected Nov 8th, but rather by the electors around Dec 19. The electors are chosen by their states, based on the popular vote. In almost all states, all the electors come from the party that won the state’s popular vote (“winner takes all”), but a couple of small states split theirs by congressional district. In about half the states, the electors are bound by law to vote for the candidate who won the popular vote in that state. In other states they are party loyalists but technically free. Some “faithless” electors have voted differently, but it’s very rare.

I’m rather saddened by the call by many Democrats to push for electors to be faithless, as well as calls at this exact time to abolish the college. There are arguments to abolish the college, but the calls today are ridiculously partisan, and thus foolish. I suspect that very few of those shouting to abolish the college would be shouting that if Trump had won the popular vote and lost the college (which was less likely but still possible.) In one of Trump’s clever moves, he declared that he would not trust the final results (if he lost), and this tricked his opponents into loudly condemning the audacity of saying such a thing. This makes it much harder for Democrats to now declare the results are wrong and should be reversed.

The college approach — where the people don’t directly choose their leader — is not that uncommon in the world. In my country, and in most of the British parliamentary democracies, we are quite used to it. In fact, the Prime Minister’s name doesn’t even appear on our ballots, not even as the legal fiction under which the President’s name appears on ballots in the USA. We elect MPs, voting for them mostly (but not entirely) on party lines, and the parties have told us in advance who they will name as PM. (They can replace their leader afterward if they want, but by convention, not rule, another election happens not long after.)

In these systems it’s quite likely that a party will win a majority of seats without winning the popular vote. In fact, it happens a lot of the time. That’s because in the rest of the world there are more than 2 parties, and often no party wins a majority of the popular vote. But it’s also possible for the party that came 2nd in the popular vote to form the government, sometimes with a majority, and sometimes in an alliance.

Origins of the college

When the college was created, the framers were not expecting popular votes at all. They didn’t think that the common people (by which they meant wealthy white males) would be that good at selecting the President. In the days before mass media allowed every voter to actually see the candidates, one can understand this. The system technically just lets each state pick its electors, and they thought the governor or state house would do it.

Later, states started having popular votes (again only of land-owning white males) to pick the electors. They did revise the rules of the college (the 12th Amendment) but they kept it because they were federalists, strong advocates of states’ rights. They really didn’t imagine the public picking the President directly.

Comma One goes Open Source, Robocars in New Zealand Earthquakes and more

There have been few postings this month since I took the time to enjoy a holiday in New Zealand around speaking at the SingularityU New Zealand summit in Christchurch. The night before the summit, we enjoyed a 7.8 earthquake not so far from Christchurch, whose downtown was over 2/3 demolished after quakes in 2010 and 2011. On the 11th floor of the hotel, it was a disturbing nailbiter of swaying back and forth for over 2 minutes — but of course swaying is what the building is supposed to do; that means it’s working. The shocks were rolling, not violent, and in fact we got more violent jolts from aftershocks a week later when we went to Picton.

While driving around that region, we encountered this classic earthquake scene on the road:

There were many like this, and in fact the main highway of the South Island was destroyed long-term not too far away, cutting off several towns. A scene like this makes you wonder just what a robocar would do in such situations. I already answered this question in a blog post on how to handle a tsunami. Fortunately there was only a mild tsunami for this quake. A tsunami will result in a warning in the rich world, and the car will know the elevation map of the roads and know how to get to high ground. In some places, like Japan, there is also an advanced earthquake warning system that tells you quakes are coming well before they hit you, since electronic signals travel much faster than seismic waves. With such a system, robocars should receive a warning and come to a stop unless they need to evacuate a tsunami zone. Without such a warning, we still could imagine the road cracking and collapsing in front of you as might have happened on this road. Of course the cones and signs that warned me days later would not be present.
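Getting to high ground, as mentioned above, is at its core a shortest-path search over a road graph annotated with elevations. A minimal sketch of that idea follows; it assumes the car already carries such a graph, and it ignores everything a real system would also weigh (travel time, reported road damage, how much warning lead time remains). The function and parameter names are my own, purely for illustration.

```python
import heapq

def route_to_high_ground(graph, elevation, start, safe_elevation_m=30.0):
    """graph: {node: [(neighbor, distance_m), ...]}; elevation: {node: metres above sea level}.
    Returns the shortest path from start to the nearest node at or above
    safe_elevation_m, or None if no such node is reachable."""
    dist = {start: 0.0}
    prev = {}
    visited = set()
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if elevation[node] >= safe_elevation_m:
            path = [node]                 # reconstruct the path back to the start
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        for neighbor, length in graph[node]:
            nd = d + length
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    return None                           # no reachable high ground on the map
```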

The answer again lies in the fact that pictures like mine will be used to create situations like this in a simulator, and all car developers will be able to test their systems with simulated quake damage to make sure they do the right thing. I’ve spoken since 2010 on the value of a shared simulator environment, and I think if government agencies like NHTSA want to really help development, providing funding and tools for such an environment would be a good step. NHTSA’s proposal that all developers share their logs of all incidents would clearly make such a simulator better, but there is pushback because of the proprietary value of those logs. When it comes to strange situations like earthquakes, I doubt there would be much pushback on having an open and shared simulator environment.

New Zealand’s government is taking a very welcoming approach to robocars. They are holding off on regulation for now, and have invited developers to come and test. They have even said it’s OK to test unmanned vehicles under some fairly simple rules. NZ does not have any auto industry, and of course it’s quite remote, but we’ll see if they can attract developers to come test. Their roads feature something you don’t see much in the USA — tons and tons of one-lane bridges and other one-lane stretches of highway. It turns out that robocars, with a little bit of communication, can make superhumanly efficient use of one-lane two-way roads, and it might be worth exploring.

Open Source Comma One box

Speaking of open, Comma.ai, which previously declared it was giving up on its neural network autopilot due to NHTSA threats, today announced it has open sourced its software, along with hardware designs and case designs. NHTSA did not want them making an autopilot, and said they could not simply rely on the fact that drivers were told they must be diligent. It will be very interesting to see how NHTSA reacts to the release of open designs that anybody can then install on their car.

The automotive industry has had a long history of valuing the tinkerer. All the big car companies had their beginnings with small tinkerers and inventors. Some even died in the very machines they were inventing. These beginnings have allowed people to do all sorts of playing around in their garages with new car ideas, without government oversight, in spite of the risk to themselves and even others on the road. If a mechanic wants to charge you for working on your car, they must be licenced, but you are free to work on it yourself with no licence, and even build experimental cars. You just can’t sell them. And even those rights have been eroded.

Clearly far fewer people will have the inclination to build an autopilot using the comma.ai tools by themselves. But it won’t be that hard to do, and they can make it easier with time, too. One could even imagine a car which already had the necessary hardware, so that you only needed to download software to make it happen.

In recent times, there has been a strong effort to prevent people from tinkering with their cars, even in software. One common area of controversy has been around engine tuning. Engine tuning is regulated by the EPA to keep emissions low. Car vendors have to show they have done this — and they can’t program their car to give good emissions only on the test while getting better performance off the test as VW did. But owners have been known to want to make such modifications. Now we will see modifications that affect not just emissions but safety. Car companies don’t want to be responsible if you modify the code in your car and there is an accident involving both their code and yours. As such, they will try to secure their car systems so you can’t change them, and the government may help them or even insist on it. When you add computer security risks to the mix — who can certify the modified car can’t be taken over and used as a weapon? — it will get even more fun.

I will also point out that I suspect that comma’s approach would not know what to do about the collapsed road, because it would never have been trained in that situation. It might, however, simply sound an alert and disengage once it can no longer find the lane.

Regulatory pushback

Regular readers will have seen my strong critique of the NHTSA rules. The other major news during my break was the pushback from major players in the public comment on the regulations. In some ways the regulations didn’t do enough to give vendors the certainty they need to make their plans. At the same time, they were criticised for not giving enough flexibility to vendors. In addition, as expected, vendors resist giving up their proprietary data in the proposed forced sharing. I predict continued ambivalence on the regulations. Big players actually like having lots of regulations, because big players know how to deal with that and small players don’t.

How will robotaxi services compete in the future?

Right now Uber, Lyft and traditional taxis are competing. But in the robocar world of the future, when large fleets of cars operate as taxis and replace car ownership for many, how will they compete with one another? Will there be a monopoly in each town, or just a couple of companies? Can we have dozens? Does the biggest fleet win?

I have a new major article on the subject. I also welcome comments on other ways these services might find a competitive edge.

Read Competition in the Robotaxi world

If you built "Westworld" (or other robot sex) it would probably be with VR

HBO released a new version of “Westworld” based on the old movie about a robot-based western theme park. The show hasn’t excited me yet — it repeats many of the old tropes on robots/AI becoming aware — but I’m interested in the same thing the original explored: simulated experiences for entertainment.

The new show misses what’s changed since the original. I think it’s more likely they will build a world like this with a combination of VR, AI and specialized remotely controlled actuators rather than with independent self-contained robots.

One can understand the appeal of presenting the simulation in a mostly real environment. But the advantages of the VR experience are many. In particular, with the top-quality, retinal resolution light-field VR we hope to see in the future, the big advantage is you don’t need to make the physical things look real. You will have synthetic bodies, but they only have to feel right, and only just where you touch them. They don’t have to look right. In particular, they can have cables coming out of them connecting them to external computing and power. You don’t see the cables, nor the other manipulators that are keeping the cables out of your way (even briefly unplugging them) as you and they move.

This is important to get data to the devices — they are not robots as their control logic is elsewhere, though we will call them robots — but even more important for power. Perhaps the most science fictional thing about most TV robots is that they can run for days on internal power. That’s actually very hard.

The VR has to be much better than we have today, but it’s not as much of a leap as the robots in the show. It needs to be at full retinal resolution (though only in the spot your eyes are looking) and it needs to be able to simulate the “light field” which means making the light from different distances converge correctly so you focus your eyes at those distances. It has to be lightweight enough that you forget you have it on. It has to have an amazing frame-rate and accuracy, and we are years from that. It would be nice if it were also untethered, but the option is also open for a tether which is suspended from the ceiling and constantly moved by manipulators so you never feel its weight or encounter it with your arms. (That might include short disconnections.) However, a tracking laser combined with wireless power could also do the trick to give us full bandwidth and full power without weight.

It’s probably not possible to let you touch the area around your eyes and not feel a headset, but add a little SF magic and it might be reduced to feeling like a pair of glasses.

The advantages of this are huge:

  • You don’t have to make anything look realistic; you just need to be able to render it in VR.
  • You don’t even have to build the things that nobody will touch or approach, including most backgrounds and scenery.
  • You don’t even need to keep rooms around, if you can quickly have machines put in the props when needed before a player enters the room.
  • In many cases, instead of some physical objects, a very fast manipulator might be able to quickly place in your way textures and surfaces you are about to touch. For example, imagine if, instead of a wall, a machine with a few squares of wall surface quickly holds one out anywhere you’re about to touch. Instead of a door there is just a robot arm holding a handle that moves as you push and turn it.
  • Proven tricks in VR can get people to turn around without realizing it, letting you create vast virtual spaces in small physical ones. The spaces will be designed to match what the technology can do, of course.
  • You will also control the audio and cancel sounds, so your behind-the-scenes manipulations don’t need to be fully silent.
  • You do it all with central computers, you don’t try to fit it all inside a robot.
  • You can change it all up any time.

In some cases, you need the player to “play along” and remember not to do things that would break the illusion. Don’t try to run into that wall or swing from that light fixture. Most people would play along.

For a lot more money, you might some day be able to do something more like Westworld. That has its advantages too:

  • Of course, the player is not wearing any gear, which will improve the reality of the experience. They can touch their faces and ears.
  • Superb rendering and matching are not needed, nor the light field or anything else. You just need your robots to get past the uncanny valley.
  • You can use real settings (like a remote landscape for a western) though you may have a few anachronisms. (Planes flying overhead, houses in the distance.)
  • The same transmitted power and laser tricks could work for the robots, but transmitting enough power to power a horse is a great deal more than enough to power a headset. All this must be kept fully hidden.

The latter experience will be made too, but it will be more static and cost a lot more money.

Yes, there will be sex

Warning: We’re going to get a bit squicky here for some folks.

Westworld is on HBO, so of course there is sex, though mostly just a more advanced vision of the classic sex robot idea. I think that VR will change sex much sooner. In fact, there is already a small VR porn industry, and even some primitive haptic devices which tie into what’s going on in the porn. I have not tried them, and I do not imagine they are very sophisticated as yet, but that will change. Indeed, it will change to the point where porn of this sort becomes a substitute for prostitution, with some strong advantages over the real thing (including, of course, the questions of legality and exploitation of humans.)

Comma.ai cancels comma-one add-on box after threats from NHTSA

Comma.ai, the brash startup attempting to make a self-driving system entirely from a neural network, has announced it will cancel the “comma one” add-on box it had planned to sell to owners of certain Honda vehicles. The box mounted on the rear-view mirror and used the car’s own bus commands to provide an autopilot similar to those offered by car makers, with lane-keeping and adaptive cruise control.

Of particular importance is the letter from NHTSA to comma.ai which I suggest you read. This letter creates several big issues:

  1. There are many elements of this letter which would also apply to Tesla and other automakers which have built supervised autopilot functions.
  2. Of particular interest is the paragraph which says: “it is insufficient to assert, as you do, that the product does not remove any of the driver’s responsibilities” and “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.” That must be very scary for Tesla.
  3. I noted before that the new NHTSA regulations appear to forbid the use of “black box” neural network approaches to the car’s path planning and decision making. I wondered if this made the approach being taken by Comma, NVIDIA and many other labs and players illegal. This letter suggests it may.
  4. We now have a taste of the new regulatory regime, and it seems that had it existed before, systems like Tesla’s autopilot, Mercedes Traffic Jam Assist, and Cruise’s original aftermarket autopilot would never have been able to get off the ground.
  5. George Hotz of comma declares “Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn’t worth it. The comma one is cancelled. comma.ai will be exploring other products and markets. Hello from Shenzhen, China.”

To be clear, comma is a tiny company taking a radical approach, so it is not a given that what NHTSA has applied to them would have been or will be unanswerable by the big guys. Because Tesla’s autopilot is not a pure machine learning system, they can answer many of the questions in the NHTSA letter that comma can’t. They can do much more extensive testing than a tiny startup can. But even so a letter like this sends a huge chill through the industry.

It should also be noted that in Comma’s photos the box replaced the rear-view mirror, and NHTSA had reason to ask about that.

George’s declaration that he’s in Shenzhen gives us the first sign of the new regulatory regime pushing innovation away from the United States and California. I will presume the regulators will say, “We only want to scare away dangerous innovation” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It is all about taking the car — the 2nd most dangerous legal consumer product — and making it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.

I sometimes ask, “Why do we let 16 year olds drive?” They are clearly a major danger to themselves and others. Driver testing is grossly inadequate. They are not adults so they don’t have the legal rights of adults. We let them drive because they are going to start out dangerous and then get better. It is the only practical way for them to get better, and we all went through it. Today’s early companies are teenagers. They are going to take risks. But this is the fastest and only practical way to let them get better and save millions of lives.

“…some drivers will use your product in a manner that exceeds its intended purpose”

This sentence, though in the cover letter and not the actual legal demand, speaks to the question asked so often after the Tesla fatal crash. The question which caused Consumer Reports to ask Tesla to turn off the feature. The question which caused MobilEye, they say, to sever their relationship with Tesla.

The paradox of the autopilot is this: The better it gets, the more likely it is to make drivers over-depend on it. The more likely they will get complacent and look away from the road. And thus, the more likely you will see a horrible crash like the Tesla fatality. How do you deal with a system which adds more danger the better you make it? Customers don’t want annoying countermeasures. This may be another reason that “Level 2,” as I wrote yesterday, is not really a meaningful thing.

NHTSA has put a line in the sand. It is no longer going to be enough to say that drivers are told to still pay attention.

Black box

Comma is not the only company trying to build a system with pure neural networks doing the actual steering decisions (known as “path planning”.) NVIDIA’s teams have been actively working on this, as have several others. They plan to submit comments to NHTSA about this element of the regulations, which should not forbid this approach until we know it to be dangerous.

Of the SAE's robocar "levels" only level 4 will be meaningful, and only partly

It’s no secret that I’ve been a critic of the NHTSA “levels” as a taxonomy for types of Robocars since the start. Recent changes in their use call for some new analysis, which concludes that only one of the levels is actually interesting, and even it only tells part of the story. As such, they have become even less useful as a taxonomy. Levels 2 and 3 are unsafe, and Level 5 is remote future technology. Level 4 is the only interesting one, and so there is effectively no taxonomy.

Unfortunately, they have just been encoded into law, which is very much the wrong direction.

NHTSA and SAE both created a similar set of levels, and they were so similar that NHTSA declared they would just defer to the SAE’s system. Nothing wrong with that, but the core flaws are not addressed by this. Far better, their regulations declared that the levels were just part of the story, and they put extra emphasis on what they called the “operating domain” — namely what locations, road types and road conditions the vehicle operates in.

The levels focus entirely on the question of how much human supervision a vehicle needs. This is an important issue, but the levels treated it like the only issue, and it may not even be the most important. My other main criticism was that the levels, by being numbered, imply a progression for the technology. That progression is far from certain and in fact almost certainly wrong. SAE updated its levels to say that they are not intended to imply a progression, but as long as they are numbers this is how people read them.

Today I will go further. All but level 4 are uninteresting. Some may never exist, or exist only temporarily. They will be at best footnotes of history, not core elements of a taxonomy.

Level 4 is what I would call a vehicle capable of “unmanned” operation — driving with nobody inside. This enables most of the interesting applications of robocars.

Here’s why the other levels are less interesting:

Levels 0 and 1 — Manual or ADAS-improved

Levels 0 and 1 refer to existing technology. We don’t really need new terms for our old cars. Level 2 is perhaps best described as a more advanced version of level 1, and that transition has already taken place.

Level 2 — Supervised Autopilot

Supervised autopilots are real. This is what Tesla sells, and many others have similar offerings. They get used in one of two ways. The first is the intended way, with full-time supervision. Used that way, it is little more than a more advanced cruise control, and may not even be as relaxing.

The second way is what we’ve seen happen with Tesla — a car that needs supervision, but is so good at driving that supervisors get complacent and stop supervising. They want a full self-driving car but don’t have it, so they pretend they do. Many are now saying that this makes the idea of supervised autopilot too dangerous to deploy. The better you make it, the more likely it can lull people into bad activity.

Update: One day after I wrote this, it was revealed that NHTSA shut down comma.ai’s efforts to build an aftermarket autopilot citing these concerns, among others.

Level 3 — Standby driver

This level is really a variation of Level 4, but the vehicle needs the ability to call upon a driver who is not paying attention and get them to take control with 10 to 60 seconds of advance warning. Many people don’t think this can be done safely. When Google experimented with it in 2013, they concluded it was not safe, and decided to take the steering wheel entirely out of their experimental vehicles.

Even if Level 3 becomes a real thing, it will be short-lived once people see unmanned-capable vehicles. And Level 4 vehicles will still offer controls for special use, just not a transition while moving.

Level 5 — Drive absolutely everywhere

SAE, unlike NHTSA’s first proposal, did want to make it clear that an unmanned capable (Level 4) vehicle would only operate in certain places or situations. So they added level 5 to make it clear that level 4 was limited in domain. That’s good, but the reality is that a vehicle that can truly drive everywhere is not on anybody’s plan. It probably requires AI that matches human beings.

Consider this situation in which I’ve been driven. In the African bush on a game safari, we spot a leopard crossing the road. So the guide drives the car off-road (on private land), running over young trees, over rocks, down into wet and dry streambeds to follow the leopard. Great fun, but this is an ability for which there is unlikely ever to be market demand. Likewise, there are lots of small off-road tracks that are used by only one person. There is no economic incentive for a company to solve this problem any time soon.

Someday we might see cars that can do these things under the high-level control of a human, but they are not going to do them on their own, unmanned. As such SAE level 5 is academic, and serves only to remind us that level 4 does not mean everywhere.

Levels vs. Cul-de-sacs

The levels are not a progression. I will contend in fact that even to the extent that levels 2, 3/4 and 5 exist, they are quite probably entirely different technologies.

Level 2 is being done with ADAS technologies. They are designed to have a driver in the loop. Their designs in many cases do not have a path to the reliability level needed for unmanned operation, which is orders of magnitude higher. It is not just a difference of degree, it is one of kind.

Level 3 is related to level 4, in particular because a level 3 car is expected to be able to handle non-response from its driver, and safely stop or pull off the road. It can be viewed as a sucky version of a level 4 system. (It’s also not that different — see below.)

Level 5, as indicated, probably requires technologies that are more like artificial general intelligence than they are like a driving system.

As such the levels are not levels. There is no path between any of the levels and the one above it, except in the case of 3/4.

Level 4

This leaves Level 4 as the only one worth working on long term, the only one worth talking about. The others are just there to create a contrast. NHTSA realizes this and gave the name ODD (Operational Design Domain) to refer to the real area of research, namely what roads and situations the vehicles can handle.

The distinction between 4 and 3 is also not as big as you might expect. Google removed the steering wheel from their prototype to set a high bar for themselves, but they actually left one in for use in testing and development. In reality, even the future’s unmanned cars will feature some way in which a human can control them, for use during breakdowns, special situations, and moving the cars outside of their service areas (operational domains.) Even if the transition from autodrive to human drive is unsafe at speed, it will still be safe if the car pulls over and activates the controls for a licenced driver.

As such, the only distinction of a “level 3” car is it hopes to be able to do that transition while moving, on short but not urgent notice. A pretty minor distinction to be a core element of a taxonomy.

If Level 4 is the only interesting one, my recommendation is to drop the levels from our taxonomy, and focus the taxonomy instead on the classes of roads and conditions the vehicle can handle. It can be a given that outside of those operating domains, other forms of operation might be used, but that does not bear much on the actual problem.

I say we just identify a vehicle capable of unmanned or unsupervised operation as a self-driving car or robocar, and then get to work on the real taxonomy of problems.

Our routers need to remove the "internet" from the "internet of things" to stop DDOS

I frequently say that there is no “internet of things.” That’s a marketing phrase for now. You can’t go buy a “thing” and plug it into the “internet of things.” IoT is still interesting because underneath the name is a real revolution from the way that computing, sensing and communications are getting cheaper, smaller and using less power. New communications protocols are also doing interesting things.

We learned a lesson on Friday though, about why using the word “internet” is its own mistake. The internet — one of the world’s greatest inventions — was created as a network of networks where anything could talk to anything, and it was useful for this to happen. Later, for various reasons, we moved to putting most devices behind NATs and firewalls to diminish this vision, but the core idea remains.

Attackers on Friday made use of a growing collection of low cost IoT devices with low security to mount a DDOS attack on Dyn’s domain name servers, shutting off name lookup for some big sites. While not the only source of the attack, a lot of attention has come to certain Chinese brands of IP based security cameras and baby monitors. To make them easy to use, they are designed with very poor security, and as a result they can be hijacked and put into botnets to do DDOS — recruiting a million vulnerable computers to all overload some internet site or service at once.

Most applications for small embedded systems — the old and less catchy name of the “internet of things” — aren’t at all in line with the internet concept. They have no need or desire to be able to talk to the whole world the way your phone, laptop or web server do. They only need to talk to other local devices, and sometimes to cloud servers from their vendor. We are going to see billions of these devices connected to our networks in the coming years, perhaps hundreds of billions. They are going to be designed by thousands of vendors. They are going to be cheap and not that well made. They are not going to be secure, and little we can do will change that. Even efforts to make punishments for vendors of insecure devices won’t change that.

So here’s an alternative: a long-term plan for our routers and gateways to take the internet out of IoT.

Our routers should understand that two different classes of devices will connect to them. The regular devices, like phones and laptops, should connect to the internet as we expect today. There should also be a way to know that a connecting device does not want regular internet access, and not to give it. One way to do that is for the devices to know about this, and to convey how much access they need when they first connect. One proposal for this is my friend Eliot Lear’s MUD proposal. Unfortunately, we can’t count on devices to do this. We must limit stupid devices and old devices too.
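To make the idea concrete, here is a rough sketch of the kind of policy a router could apply when a device joins the network. The declaration format shown is a simplified, hypothetical stand-in (it is not the actual MUD schema), and the rule syntax, host name and function name are mine, for illustration only. The key point is the default: unknown or legacy devices get local-only access, and the burden is on the device to ask for more.

```python
def firewall_rules(device_mac, declaration=None, lan="192.168.1.0/24"):
    """Return allow/deny rules for a newly connected device.

    declaration: optional dict the device (or its vendor profile) supplies,
    e.g. {"class": "iot", "cloud_hosts": ["updates.example-vendor.com"]}.
    Devices that declare nothing are treated as "things" and confined."""
    rules = [f"allow from {device_mac} to {lan}"]             # local traffic is always fine
    if declaration is None or declaration.get("class") == "iot":
        # Thing-class device: only the vendor endpoints it declared, nothing else.
        for host in (declaration or {}).get("cloud_hosts", []):
            rules.append(f"allow from {device_mac} to {host} port 443")
        rules.append(f"deny from {device_mac} to any")        # no general internet, no botnet duty
    else:
        rules.append(f"allow from {device_mac} to any")       # phones and laptops keep the full internet
    return rules
```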

Vendors push back on California Robocar regulations - plus Tesla and Apple news

California Hearings

Wednesday, California held hearings on the latest draft of their regulations. The new regulations heavily incorporate the new NHTSA guidelines released last month, and now incorporate language on the testing and deployment of unmanned vehicles.

The earlier regulations caused consternation because they correctly identified that nobody had sufficient understanding of unmanned vehicle operations to write regulations, but incorrectly proceeded to forbid those vehicles until later. Once you ban something, it’s very hard to un-ban it. The new approach does not ban the vehicles, but attempts instead to write regulations for them that are premature.

Comments from developers of the vehicles reflected the sentiment that all the regulations are premature. California worked together with NHTSA on their regulations, and incorporated them. In particular, while NHTSA’s regulations lay out a 15-point list of functional domains that creators of vehicles should certify, the federal regulations technically declare this certification to be optional. A vendor submitting a report can explicitly state they decline to certify most of the items.

California suggests that this certification might be mandatory here. For all my criticism of NHTSA’s plan, they do have an understanding that it is still far too early to be writing detailed rules for vehicles that don’t yet exist, and left these avenues for change and disagreement within their regulations. The avenues are not great — I feel that vendors will be concerned that truly treating the regulations as voluntary will be done at their peril — but at least they exist.

Several vendors also pointed out the serious problems with traditional regulatory timelines and the speed of development of computer technologies. The California regulations may require that a car be tested for a year before it is deployed. On the surface that sounds normal by old standards, but the reality of development is very different. Pretty much all the vendors I know are producing new builds of their vehicle software and testing them out on the roads the next day — with trained safety drivers behind the wheel. The software goes through extensive “regression testing,” running through every tricky situation the team has encountered anywhere, as well as simulated situations, but the safety driver is there to deal with any problem not found with that testing.

Vendors won’t release into production cars with only one night of testing, but neither can they wait a year. This is particularly true because in the early days of this technology, new problems will be found during deployment, and you want to get the fixes out on the road as quickly as is safe to do. An arbitrary timeline makes no sense.

This is just the start of the problems. While one may argue that it was always going to be hard for startups and tinkerers to develop these cars, these regulations (and the federal ones) put more nails in the coffin of the small innovator. The amount of bureaucracy, the size of the insurance bonds and many other factors will make it hard for teams the size of the DARPA challenge teams, the teams that kickstarted this technology and made it real, to actually play in the game. The auto industry has a long history of allowing tinkerers to innovate, even at the cost of relaxing safety requirements applied to them. We may end up with a world where only the big players can play at all, and we know that this is generally not good at all for the pace of innovation.

Delivery Robots

The new regulations allowing unmanned vehicles might seem to open doors for delivery robots like we’re working on at Starship. Unfortunately they seem aimed primarily at large vehicles. Since California rules define the sidewalk as part of the street, these regulations might end up demanding that a small, slow, light delivery robot still comply with the bulky Federal Motor Vehicle Safety Standards (which are meant for passenger cars), which would be impossible without major exceptions being made. (More reading is needed to tell if this is truly how this will play out.)

Tesla says all future cars will have full sensor suite

Tesla has declared that all their future cars, including the lower cost Model 3, will include the full suite of radars, cameras and other sensors needed for self driving. That’s good news, though the Tesla sensor suite, lacking LIDAR, is not currently sufficient for a full self-driving car. Tesla is making a bet of sorts that by the time this comes into play, cameras and radars will be sufficient to make an acceptably safe system. If not, they will have to stick with autopilot function on those cars. Since there is strong evidence that LIDAR will be inexpensive in a couple of years, I don’t believe anybody should plan to deploy their first (and riskiest) robocars without every sensor that’s at all affordable. Why make it less safe than you could just to save a few hundred dollars?

Today, Tesla can’t do that because no production low cost LIDAR is available. Most other teams are betting it will be. In the future, when cost becomes a bigger issue, vendors will decide to eliminate sensors based on cost.

Apple might have changed their plans

Apple hasn’t said anything official about their rumoured car project. All we know has come from leaks and from looking at who has been hired or who has departed. (I do know one secret thing about the Apple car — it will only work if you have a new iPhone.) Many rumours came out this week that Apple may have cancelled plans to actually make an Apple Car, and instead will take an approach more like Google — building the software and self-driving systems and letting others worry about car manufacture. That is a good strategy, so Apple is hardly out of the game, but it does mean it’s less likely the world will see a car with the particular Apple flair and marketing genius.

The relationship between powerful self-drive system developers (like Apple, Google and Uber) and car manufacturers will be an interesting one. Car makers are used to being in charge, owning the process and owning the customer. So are these hi-tech companies. But many companies will do “contract manufacturing” in auto. If Apple shows up with a purchase order for 100,000 cars to be built to their spec, there are many companies who will take the order, even if the high end Daimlers and Toyotas of the world won’t. So just as Apple doesn’t build the iPhone and gets Foxconn to do it, the fact that Apple will stick to the software systems doesn’t mean their design will not appear in a car.

Here is a summary of Apple car rumours.

Most voting is about the next election, not this one.

When people vote, what do they think it will accomplish? How does this affect how they vote, and how should it?

My apologies for more of this in a season when our social media are overwhelmed with politics, but in a lot of the postings I see about voting plans, I see different implicit views on just what the purpose of voting is. The main focus will be on the vote for US President.

The vast majority of people will vote in non-contested states. The logic is different in the “swing” states where all the campaign attention is.

In a non-contested state, there is essentially zero chance your vote will affect the result of the election. If you’re voting thinking you are exerting your small power to have a say in who wins, you are deluding yourself. Your vote does one, and only one thing — it changes the popular vote totals that are published and looked at by some people. You will change the total for the nation, your state, and some will even look at the totals in your region.

For minor party candidates, having a higher vote total — in particular reaching 5% — can also make a giant difference by giving access to federal campaign funding, which can make a serious difference in the funding level for those parties.

Voters should ask themselves, whose popular vote total do they want to increase? Some logic suggests that it makes more sense to vote for a minor party that you support. Not because they will win, but because you will create a larger proportionate increase in their total. One more vote for a Republican or Democrat will be barely noticed. One more vote for a minor party will also on its own make no difference, but proportionately it may be 10 times or more greater.
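To put hypothetical numbers on that proportionality argument: suppose the major party you lean toward gets five million votes in your state and a minor party gets fifty thousand. Your single extra ballot changes their totals by

\[
\frac{1}{5{,}000{,}000} = 0.00002\% \qquad \text{versus} \qquad \frac{1}{50{,}000} = 0.002\%,
\]

a relative bump one hundred times larger for the minor party.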

It’s for the next election, not this one

You don’t increase the popular vote totals to affect this election. You do it to affect the next one. Supporting a party makes other supporters realize they are not alone. It makes them just a bit more likely to join the cause, if they believe in it. Most voters don’t understand this “next election” principle, and so while a minor party remains too small to win or affect the election, they are less likely to support it.

This is how most movements go from being small to being large. When a protest movement is small, people are afraid to show their support. When they see a real crowd march in the square, they are now more likely to join the crowd and to let the world see how much support there really is.

As such, the particular platform planks and candidate quirks are almost entirely irrelevant for the non-swing voter. When you’re voting for the next election, you are really supporting only the party and its broad platform, or a basic overall impression of a candidate. I often see voters say, “I could not vote for a candidate who supports X” but they do not realize that is not what they are doing.

The minor parties are particularly bad at this. Most of them like to pretend they are just like major parties. They nominate candidates based on what they say or stand for. They create detailed party platforms. This is an error. A detailed platform is only a reason for people to vote against you. Detailed platforms are only for candidates who might actually have a shot at implementing their platform. Minor party candidates take it as gospel that they should never admit that they can’t win, even though any rational person knows it quite clearly. The reality is that you can know you can’t win the current election, but can more reasonably hope you can step higher and get within range of winning in a future election. Only when this happens should you act like a major party. You almost never see minor candidates say the truth: “Vote for me, not because you can make me win — you can’t — but to show and build support for the ideas of our party.”

I personally would much rather vote for somebody who said the truth like that, but perhaps I am unusual.

As I’ve said earlier, under this philosophy I recommend people in non-swing states consider minor parties that they want to boost. While it is commonly said that voting for a minor party is “throwing away your vote,” I believe it’s more likely that voting for a major party is actually throwing away the vote. The major party vote will not move any needles, nor wake anybody up to the existence of the minor parties. Because the minor party can’t win, you can vote for it simply to signal that there is support for its core ideas. This is something a voter should consider even when they still prefer the major party more. Most minor parties have bizarre and fringe policies that most voters would not support. Because they can’t win, this is not important. Should they ever get bigger, they will moderate those policies, or they will never make the jump to serious contender. Yesterday the John Oliver show did a funny skewering of minor party candidates but it entirely misses this point.

In addition, as minor political movements gain strength, they get noticed by the major parties. If the Greens got 10% of the vote, you can bet the Democrats would take notice, and try to court those voters. They don’t want the Greens to get so large that they become a potential “spoiler” in the swing states, so they will become slightly Green to prevent that. Once again, how you vote today affects the election of the future.

Polls are good too

Of course, even better is to express these desires in the polls. What you say in polls can affect this election, but primarily polls encourage other people who think like you to come out of the woodwork and express that view. Polls are stage one in the process of gaining critical mass — they lead to actual votes, which lead to more polls and so on. Of course, you only want to express support for a party in a poll if you really want this to happen. You should not lie, but you should not be afraid to show what you really support because somebody convinces you it’s wasted.

What if everybody voted this way?

Some people have said to me, “If everybody voted for minority views the vote might actually become real!” We all remember the 2000 Florida election, where the Greens split the Democratic vote and that resulted eventually in President Bush the 2nd. That was a swing state. People knew that would be close.

The truth is, the idea that you are voting for the next election is not widely accepted at present. Perhaps in the future it will be strong enough to change a state from non-contested to swing. But not today.

It’s also true that if you live in a really non-swing state, like California, it is effectively impossible that your vote will make a difference. The truth is, if it ever got to the point where California was 50-50 about a choice like Clinton-Trump, then Trump already won long ago in the other states. Solid safe states can’t be the deciding state. (Rare events, like having the Republican candidate be a California governor, can turn a safe state into a swing state, but not by surprise.) The only way the truly safe states can ever swing is in an election that’s already settled. The polls will tell you things long in advance.

Can this really work in the USA?

The biggest counter-argument to this approach I have seen is the suggestion that the USA is different, that the two party system is so entrenched that anything else is a waste of time.

In the rest of the world, 3rd parties are very common. They often are players in elections and often no party gets a majority and so coalitions must be formed, where the large party agrees to do some of the agenda of the smaller party to get their support in the coalition. Parties begin small and grow, as described above. Parties like the Greens are now a powerful minority force in Europe. Some countries, like Iceland, have never had a majority party.

The USA has been two-party for a long time, and the two powerful parties tend to make the rules so as to keep it that way. The above federal funding rule is just one example. In Presidential elections, the system requires a majority in the electoral college. A serious 3rd candidate could simply mean the election is sent to the House of Representatives (which is now long term Republican due to gerrymandering.)

There are some approaches that could cause minority political opinion to be able to do more in the USA. The best would be to move states away from plurality methods to multi-candidate voting such as Approval voting or a Condorcet method. There are no rules against a state doing that for any of its elections. They don’t because the two parties like keeping it as two parties. Efforts are underway in the states that have ballot propositions (bypassing the two parties) to make such changes.
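Approval voting, for instance, is simple enough to tally in a few lines: each voter marks every candidate they approve of, and the candidate with the most approvals wins. A minimal sketch (the function name and example ballots are mine, purely for illustration):

```python
from collections import Counter

def approval_winner(ballots):
    """ballots: iterable of sets (or lists) of approved candidate names."""
    tally = Counter()
    for ballot in ballots:
        tally.update(set(ballot))    # each candidate counted at most once per ballot
    return tally.most_common(1)[0][0] if tally else None

# e.g. approval_winner([{"A", "B"}, {"B"}, {"B", "C"}]) returns "B"
```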

What about major parties

This view also can affect your vote for major parties. For example, even though you know your vote in California will make no difference, you may want to make a tiny contribution to public and party perception of how much one party beat another. You may want to support the idea of a landslide or a “mandate.” You might also go the other way, and vote to punish your preferred party (for not listening to you or picking the wrong nominee) by voting for the other major party so that they don’t think they have a mandate. Sanders supporters who hated Clinton would be foolish to vote for Trump in a swing state, but in the safe states they could send this message if they desired. (It should be noted that this does run a very tiny risk of causing the popular vote to not match the college, which doesn’t stop your candidate from winning but sends a very strong message of dissatisfaction, and causes some lessening of support for the legitimacy of the process.)

What about in a swing state

This logic applies much less in swing states. There, your vote might change the state, and there is a very low chance it could swing the election. Now it is worth pointing out that this has never happened in a Presidential election. There’s never been an election where one vote made a difference. Unlike the non-contested states, there is still a chance of this happening. There, you will certainly vote for a major party if you want one, and you might even think twice about doing so even if you love a minor party, since your desire to pick the lesser of the two evils may exceed your desire to show support for your real values. Here, it is possible for minor parties to split the ballot, and in the view of the major parties, “spoil” the vote. This point is valid; the main error is in people applying this advice outside the swing states.

It is an interesting exercise to calculate just how much effect a single vote has even in a swing state. Again, the probability that a single state makes the difference in the election is already low in most elections, and the probability that this state’s result is within a single vote is also extremely low. On the other hand, if it does happen, then it happens for every voter in the state who voted for the winner — they all made the difference equally.

What is it worth to be able to make your candidate become President? In 2012 it was estimated that donors put in $2.6B, and that was not for a guarantee. For an ordinary individual, one could do research to figure out what it’s truly worth to each voter by trying to ask how much money they would take to accept the other candidate. That will vary from race to race and person to person, but for most people, it doesn’t make a huge difference in their lives who is President. They might feel they will make a bit more money with one, be a bit happier, get more things they care about done, but it’s not worth millions to anybody but business people who think it will majorly affect their business. Throwing out ballpark numbers, let’s assume it’s worth $100,000 to a given individual — and I think that’s actually very high, and of course I know it’s not just about money.

The problem is that the odds of the vote actually making the difference are low. Even a close race usually has a margin of thousands of votes, so the odds of a win-by-one are perhaps 1 in 10,000, and the odds that your state will be the decider are also small. After all, only a few elections have ever been decided by one close state, though Florida of 2000 is one of them and it’s in recent memory. If you judge your state has a 1 in 100 chance of being the decider, then this back of envelope calculation values your vote at around ten cents: a one in 1,000,000 chance of something worth $100K.
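Written out as an expected value (all three inputs are the rough guesses above, not measurements):

\[
E[\text{one vote}] \approx \underbrace{\frac{1}{100}}_{\text{state decides}} \times \underbrace{\frac{1}{10{,}000}}_{\text{won by one vote}} \times \$100{,}000 \approx \$0.10 .
\]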

One might argue that bumping the popular vote total is worth more. Unlike changing the result (which almost never happens) your vote always changes the popular vote totals, no matter which election or state you vote in. So while the value of that is small, the fact that it always happens bumps its expected value. Would adding 100,000 votes to the Green total in California be worth $100K to the Greens there? I would say it would be far more, suggesting a value much more than $1 per vote.

This may explain why voter turnout is so low.

Yikes - even Barack Obama wants to solve robocar "Trolley Problems" now

I had hoped I was done ranting about our obsession with what robocars will do in no-win “who do I hit?” situations, but this week, even Barack Obama in his interview with Wired opined on the issue, prompted by my friend Joi Ito from the MIT Media Lab. (The Media Lab recently ran a misleading exercise asking people to pretend they were a self-driving car deciding who to run over.)

I’ve written about the trouble with these problems and even proposed a solution but it seems there is still lots of need to revisit this. Let’s examine why this problem is definitely not important enough to merit the attention of the President or his regulators, and how it might even make the world more dangerous.

We are completely fascinated by this problem

Almost never do I give a robocar talk without somebody asking about this. Two nights ago, I attended another speaker’s talk and he got the question as his 2nd one. He looked at his watch and declared he had won a bet with himself about how quickly somebody would ask. It has become the #1 question in the mind of the public, and even Presidents.

It is not hard to understand why. Life or death issues are morbidly attractive to us, and the issue of machines making life or death decisions is doubly fascinating. It’s been the subject of academic debates and fiction for decades, and now it appears to be a real question. For those who love these sorts of issues, and even those who don’t, the pull is inescapable.

At the same time, even the biggest fan of these questions, stepping back a bit, would agree they are of only modest importance. They might not agree with the very low priority that I assign, but I don’t think anybody feels they are anywhere close to the #1 question out there. As such we must realize we are very poor at judging the importance of these problems. So each person who has not already done so needs to look at how much importance they assign, and put an automatic discount on this. This is hard to do. We are really terrible at statistics sometimes, and dealing with probabilities of risk. We worry much more about the risks of a terrorist attack on a plane flight than we do about the drive to the airport, but that’s entirely wrong. This is one of those situations, and while people are free to judge risks incorrectly, academics and regulators must not.

Academics call this the Law of triviality. A real world example is terrorism. The risk of that is very small, but we make immense efforts to prevent it and far smaller efforts to fight much larger risks.

These situations are quite rare, and we need data about how rare they are

In order to judge the importance of these risks, it would be great if we had real data. All traffic fatalities are documented in fairly good detail, as are many accidents. A worthwhile academic project would be to figure out just how frequent these incidents are. I suspect they are extremely infrequent, especially ones involving a fatality. Right now fatalities happen about every 2 million hours of driving, and the majority of those are single car fatalities (with fatigue and alcohol among leading causes.) I have yet to read a report of a fatality or serious injury that involved a driver having no escape, but the ability to choose what they hit with different choices leading to injuries for different people. I am not saying they don’t exist, but first examinations suggest they are quite rare. Probably hundreds of billions of miles, if not more, between them.

Those who want to claim they are important have the duty to show that they are more common than these intuitions suggest. Frankly, I think if there were accidents where the driver made a deliberate decision to run down one person to save another, or to hurt themselves to save another, this would be a fairly big human interest news story. Our fascination with this question demands it. Just how many lives would be really saved if cars made the “right” decision about who to hit in the tiny handful of accidents where they must hit somebody?

In addition, there are two broad classes of situations. In one, the accident is the fault of another party or cause, and in the other, it is the fault of the driver making the “who to hit” decision. In the former case, the law puts no blame on you for who you hit if forced into the situation by another driver. In the latter case, we have the unusual situation that a car is somehow out of control or making a major mistake and yet still has the ability to steer to hit the “right” target.

These situations will be much rarer for robocars

Unlike humans, robocars will drive conservatively and be designed to avoid failures. For example, in the MIT study, the scenario was often a car whose brakes had failed. That won’t happen to robocars — ever. I really mean never. Robocar designs now all commonly feature two redundant braking systems, because they can’t rely on a human pumping the hydraulics manually or pulling an emergency brake. In addition, every time they apply the brakes, they will be testing them, and at the first sign of any problem they will go in for repair. The same is true of the two redundant steering systems. Complete failure should be ridiculously unlikely.

The cars will not suddenly come upon a crosswalk full of people with no time to stop — they know where the crosswalks are and they won’t drive so fast as to not be able to stop for one. They will also be constantly measuring traction and road conditions to assure they don’t drive too fast for the road. They won’t go around blind corners at high speeds. They will have maps showing all known bottlenecks and construction zones. Ideally new construction zones will only get created after a worker has logged the zone on their mobile phone and the updates are pushed out to cars going that way, but if for some reason the workers don’t do that, the first car to encounter the anomaly will make sure all other cars know.
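
As a rough sketch of how that update flow could look in software (the service, message format and names here are my own invention, not any vendor's actual system):

    # Hypothetical sketch of construction-zone reports propagating to a fleet.
    # The map service, message format and names are invented for illustration;
    # they are not any vendor's actual protocol.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ZoneReport:
        road_segment: str   # identifier for the affected stretch of road
        kind: str           # e.g. "construction" or "lane closure"
        reported_by: str    # "worker_app" or "first_car_to_encounter"

    class FleetMapService:
        def __init__(self):
            self.active_zones = {}   # road_segment -> ZoneReport

        def report(self, zone):
            # Whether a road worker logs the zone or the first car detects it,
            # the update lands in the same shared map layer.
            self.active_zones[zone.road_segment] = zone

        def updates_for_route(self, route):
            # Cars about to drive these segments pull the relevant warnings.
            return [self.active_zones[s] for s in route if s in self.active_zones]

    service = FleetMapService()
    service.report(ZoneReport("elm_st_100_block", "construction", "first_car_to_encounter"))
    print(service.updates_for_route(["pine_ave", "elm_st_100_block"]))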

This does not mean the cars will be perfect, but they won’t be hitting people because they were reckless or had predictable mechanical failures. Their failures will be more strange, and also make it less likely the vehicle will have the ability to choose who to hit.

To be fair, robocars also introduce one other big difference. Humans can argue that they don’t have time to think through what they might do in a split-second accident decision. That’s why when they do hit things, we call them accidents. They clearly didn’t intend the result. Robocars do have the time to think about it, and their programmers, if demanded to by the law, have the time to think about it. Trolley problems demand the car be programmed to hit something deliberately. The impact will not be an accident, even if the cause was. This puts a much higher standard on the actions of the robocar. One could even argue it’s an unfair standard, which will delay deployment if we need to wait for it.

In spite of what people describe in scenarios, these cars won’t leave their right of way

It is often imagined an ethical robocar might veer into the oncoming lane or onto the sidewalk to hit a lesser target instead of a more vulnerable one in its path. That’s not impossible, but it’s pretty unlikely. For one, that’s super-duper illegal. I don’t see a company, unless forced to do so, programming a car to ever deliberately leave its right of way in order to hit somebody. It doesn’t matter if you save 3 school buses full of kids, deliberately killing anybody standing on the sidewalk sounds like a company-ruining move.

For one thing, developers just won’t put that much energy into making their car drive well on the sidewalk or in oncoming traffic. They should not put their energies there! This means the cars will not be well tested or designed when doing this. Humans are general thinkers, we can handle driving on the grass even though we have had little practice. Robots don’t quite work that way, even ones designed with machine learning.

This limits most of the situations to ones where you have a choice of targets within your right-of-way. And changing lanes is always more risky than staying in your lane, especially if there is something else in the lane you want to change to. Swerving if the other lane is clear makes sense, but swerving into an occupied lane is once again something that is going to be uncharted territory for the car.

By and large the law already has an answer

The vehicle code is quite detailed about who has right-of-way. In almost every accident, somebody didn’t have it and is the one at fault under the law. The first instinct for most programmers will be to have their car follow the law and stick to their ROW. To deliberately leave your ROW is a very risky move as outlined above. You might get criticized for running over jaywalkers when you could have veered onto the sidewalk, but the former won’t be punished by the law and the latter can be. If people don’t like the law, they should change the law.

The lesson of the Trolley problem is “you probably should not try to solve trolley problems.”

Ethicists point out correctly that Trolley problems may be academic exercises, but are worth investigating for what they teach. That’s true in the classroom. But look at what they teach! From a pure “save the most people” utilitarian standpoint, the answer is easy — switch the car onto the track to kill one in order to save 5. But most people don’t pick that answer, particularly in the “big man” version where you can push a big man standing with you on a bridge onto the tracks to stop the trolley and save the 5. The problem teaches us we feel much better about leaving things as they are than in overtly deciding to kill a bystander. What the academic exercise teaches us is that in the real world, we should not foist this problem on the developers.

If it’s rare and a no-win situation, do you have to solve it?

Trolley problems are philosophy class exercises to help academics discuss ethical and moral problems. They aren’t guides to real life. In the classic “trolley problem” we forget that none of it happens unless a truly evil person has tied people to a railway track. In reality, many would argue that the actors in a trolley problem are absolved of moral responsibility because the true blame is on the setting and its architect, not them. In philosophy class, we can still debate which situation is more or less moral, but they are all evil. These are “no win” situations, and in fact one of the purposes of the problems is they often describe situations where there is no clear right answer. All answers are wrong, and people disagree about which is most wrong.

If a situation is rare, and it takes effort to figure out which is the less wrong answer, and things will still be wrong after you do this even if you do it well, does it make sense to demand an answer at all? To individuals involved, yes, but not to society. The hard truth is that with 1.2 million auto fatalities a year — a number we all want to see go down greatly — it doesn’t matter that much to society whether, in a scenario that happens once every few years, you kill 2 people or 3 while arguing which choice was more moral. That’s because answering the question, and implementing the answer, have a cost.

Every life matters, but we regularly make decisions like this. We find things that are bad and rare, and we decide that below a certain risk threshold, we will not try to solve them unless the cost is truly zero. And here the cost is very far from zero. Because these are no-win situations and each choice is wrong, each choice comes with risk. You may work hard to pick the “right” choice and end up having others declare it wrong — all to make a very tiny improvement in safety.

At a minimum each solution will involve thought and programming, as well as emotional strain for those involved. It will involve legal review and, under the new regulations, certification processes and documentation. All things that go into the decision must be recorded and justified. All of this is untrod legal ground, making it even harder. In addition, no real scenario will match the hypothetical situations exactly, so the software must apply to a range of situations and still do the intended thing (let alone the right thing) as the situation varies. This is not minor.

Nobody wants to solve it

In spite of the fascination these problems hold, coming up with “solutions” to these no-win situations is the last thing developers want to do. In articles about these problems, we almost always see the statement, “Who should decide who the car will hit?” The answer is that nobody wants to decide. Any answer is almost surely wrong in the view of some. Nobody is going to get much satisfaction or any kudos for doing a good job, whatever that is. Combined with the rarity of these events compared to the many other problems on the table, solving ethical issues is very, very, very low on the priority list for most teams. Because developers and vendors don’t want to solve these questions and take the blame for those solutions, it makes more sense to ask policymakers to solve what needs to be solved. As Christophe von Hugo of Mercedes put it, “99% of our engineering work is to prevent these situations from happening at all.”

The cost of solving may be much higher than people estimate

People grossly underestimate how hard some of these problems will be to solve. Many of the situations I have seen proposed actually demand that cars develop entirely new capabilities that they don’t need except to solve these problems. In these cases, we are talking about serious cost, and delays to deployment if it is judged necessary to solve these problems. Since robocars are planned as a life-saving technology, each day of delay has serious consequences. Real people will be hurt because of these delays aimed at making a better decision in rare hypothetical situations.

Let’s consider some of the things I have seen:

  • Many situations involve counting the occupants of other cars, or counting pedestrians. Robocars don’t otherwise have to do this, nor can they easily do it. Today it doesn’t matter if there are 2 or 3 pedestrians — the only rule is not to hit any number of pedestrians. With low resolution LIDAR or radar, such counts are very difficult. Counts inside vehicles are even harder.
  • One scenario considers evaluating motorcyclists based on whether they are wearing helmets. I think this one is ridiculous, but if people take it seriously it is indeed serious. This is almost impossible to discern from a LIDAR image and can be challenging even with computer vision.
  • Some scenarios involve driving off cliffs or onto sidewalks or otherwise off the road. Most cars make heavy use of maps to drive, but they have no reason to make maps of off-road areas at the level of detail that goes into the roads.
  • More extreme scenarios compare things like children vs. adults, or school-buses vs. regular ones. Today’s robocars have no reason to tell these apart. And how do you tell a dwarf adult from a child? Full handling of these moral valuations requires human level perception in some cases.
  • Some suggestions have asked cars to compare levels of injury. Cars might be asked to judge the difference between a fatal impact and one that just breaks a leg.

These are just a few examples. A large fraction of the hypothetical situations I have seen demand some capability of the cars that they don’t have or don’t need to have just to drive safely.

The problem of course is that there are those who say one must not put cars on the road until the ethical dilemmas have been addressed. Not everybody says this, but it’s a very common sentiment, and now the new regulations demand at least some evaluation of it. However much the regulations claim to be voluntary, that claim rings hollow, and not just because some states are already talking about making them mandatory.

Once a duty of care has been suggested, especially by the government, you ignore it at your peril. Once you know the government — all the way to the President — wants you to solve something, then you must be afraid you will be asked “why didn’t you solve that one?” You have to come up with an answer to that, even with voluntary compliance.

The math on this is worth understanding. Robocars will be deployed slowly into society but that doesn’t matter for this calculation. If robocars are rare, they can prevent only a smaller number of accidents, but they will also encounter a correspondingly smaller number of trolley problems. What matters is how many trolley situations there are per fatality, and how many people you can save with better handling of those problems. If you get one trolley problem for every 1,000 or 10,000 fatalities, and robocars are having half the fatalities, the math very clearly says you should not accept any delay to work on these problems.
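
To see the shape of that math, here is a rough illustration in Python; every number in it is an assumption for the sake of argument, not data:

    # Rough illustration of the delay trade-off described above.
    # Every number here is an assumption for the sake of argument, not data.
    annual_road_deaths = 1_200_000                 # worldwide, per the figure in the text
    robocar_fatality_reduction = 0.5               # suppose robocars halve the fatality rate
    trolley_situations_per_fatality = 1 / 10_000   # assumed rarity of genuine dilemmas

    lives_saved_per_year_of_deployment = annual_road_deaths * robocar_fatality_reduction
    dilemmas_per_year = annual_road_deaths * trolley_situations_per_fatality

    print(f"Lives saved per year of deployment: {lives_saved_per_year_of_deployment:,.0f}")
    print(f"Trolley-style dilemmas per year:    {dilemmas_per_year:,.0f}")
    # Even if a "better" answer saved one extra life in every single dilemma,
    # that is about 120 lives a year, against roughly 600,000 lost to each year of delay.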

The court of public opinion

The real courts may or may not punish vendors for picking the wrong solution (or the default solution of staying in your lane) in no-win situations. Chances are there will be a greater fear of the court of public opinion. There is reason to fear the public would not react well if a vehicle could have produced an obviously better outcome, particularly if the bad outcome involves children or highly vulnerable road users vs. adults and at-fault or protected road users.

Because of this I think that many companies will still try to solve some of these problems even if the law puts no duty on them. Those companies can evaluate the risk on their own and decide how best to mitigate it. That should be their decision.

For a long time, many people felt any robocar fatality would cause uproar in the public eye. To everybody’s surprise, the first Tesla autopilot deaths resulted in Tesla stock rising for 2 months, even with 3 different agencies doing investigations. While the reality with the Tesla is that the driver bears much more responsibility than they would with a full robocar, the public isn’t very clear on that point, so the lack of reaction is astonishing. I suspect companies will discount this risk somewhat after this event.

This is a version 2 feature, not a version 1 feature

As noted, while humans make split-second “gut” decisions and we call the results accidents, robocars are much more intentional. If we demand they solve these problems, we ask something of them and their programmers that we don’t ask of human drivers. We want robocars to drive more safely than humans, but we also must accept that the first robocars to be deployed will only be a little better. The goal is to start saving lives and to get better and better at it as time goes by. We must consider the ethics of making the problem even harder on day one. Robocars will be superhuman in many ways, but primarily at doing the things humans do, only better. In the future, we should demand these cars meet an even higher standard than we put on people. But not today: The dawn of this technology is the wrong time to also demand entirely new capabilities for rare situations.

Performing to the best moral standards in rare situations is not something that belongs on the feature list for the first cars. Solving trolley situations well is in the “how do we make this perfect?” problem set, not the “how do we make this great?” set. It is important to remember how the perfect can be the enemy of the good and to distinguish between the two. Yes, it means accepting there is a low chance that somebody could be hurt or die, but people are already being killed, in large numbers, by the human drivers we aim to replace.

So let’s solve trolley problems, but do it after we get the cars out on the road both saving lives and teaching us how to improve them further.

What about the fascination?

The over-fascination with this problem is a real thing even if the problem isn’t. Surveys have shown one interesting result: when you ask people what a car should do for the good of society, they say it should sacrifice its passenger to save multiple pedestrians, especially children. On the other hand, when you ask whether they would buy a car that did that, far fewer say yes. As long as the problem is rare, there is no actual “good of society” priority; the real “good of society” comes from getting this technology deployed and driving safely as quickly as possible. Mercedes recently announced a much simpler strategy which does what people actually want, and got criticism for it. Their strategy is reasonable — they want to save the party they can be most sure of saving, namely the passengers. They note that they have very little reliable information on what will happen in other cars or who is in them, so they should focus not on a guess of what would save the most people, but what will surely save the people they know about.

What should we do?

I make the following concrete recommendations:

  1. We should do research to determine how frequent these problems are, how many have “obvious” answers and thus learn just how many fatalities and injuries might be prevented by better handling of these situations.
  2. We should remove all expectation on first generation vehicles that they put any effort into solving the rare ones, which may well be all of them.
  3. It should be made clear there is no duty of care to go to extraordinary lengths (including building new perception capabilities) to deal with sufficiently rare problems.
  4. Due to the public over-fascination, vendors may decide to declare their approaches to satisfy the public. Simple approaches should be encouraged, and in the early years of this technology, almost no answer should be “wrong.”
  5. For non-rare problems, governments should set up a system where developers/vendors can ask for rulings on the right behaviour from the policymakers, and limit the duty of care to following those rulings.
  6. As the technology matures, and new perception abilities come online, more discussion of these questions can be warranted. This belongs in car 2.0, not car 1.0.
  7. More focus at all levels should go into the real everyday ethical issues of robocars, such as roads where getting around requires regularly violating the law (speeding, aggression etc.) in the way all human users already do.
  8. People writing about these problems should emphasize how rare they are, and when doing artificial scenarios, recount how artificial they are. Because of the public’s fears and poor risk analysis, it is inappropriate to feed on those fears rather than be realistic.

The social networks could hold great political power due to GOTV. Should they?

The social networks have access (or more to the point can give their users access) to an unprecedented trove of information on political views and activities. Could this make a radical difference in affecting who actually shows up to vote, and thus decide the outcome of elections?

I’ve written before about how the biggest factor in US elections is the power of GOTV - Get Out the Vote. US electoral turnout is so low — about 60% in Presidential elections and 40% in off-year elections — that the winner is determined by which side is able to convince more of its weak supporters to actually show up and vote. All those political ads you see are not going to make a Democrat vote Republican or vice versa; they are going to scare a weak supporter into actually showing up. It’s much cheaper, in terms of votes per dollar (or volunteer hour), to bring in these weak supporters than it is to swing a swing voter.

The US voter turnout numbers are among the worst in the wealthy world. Much of this is blamed on the fact the US, unlike most other countries, has voter registration; effectively 2 step voting. Voter registration was originally implemented in the USA as a form of vote suppression, and it’s stuck with the country ever since. In almost all other countries, some agency is responsible for preparing a list of citizens and giving it to each polling place. There are people working to change that, but for now it’s the reality. Registration is about 75%, Presidential voting about 60%. (Turnout of registered voters is around 80%)

Scary negative ads are one thing, but one of the most powerful GOTV forces is social pressure. Republicans used this well under Karl Rove, working to make social groups like churches create peer pressure to vote. But let’s look at the sort of data sites like Facebook have or could have access to:

  • They can calculate a reasonably accurate estimate of your political leaning with modern AI tools and access to your status updates (where people talk politics) and your friend network, along with the usual geographic and demographic data
  • They can measure the strength of your political convictions through your updates
  • They can bring in the voter registration databases (which are public in most states, with political use allowed on the data. Commercial use is forbidden in a portion of states but this would not be commercial.)
  • In many cases, the voter registration data also reveals if you voted in prior elections
  • Your status updates and geographical check-ins and postings will reveal voting activity. Some sites (like Google) that have mobile apps with location sensing can detect visits to polling places.

Of course, for the social site to aggregate and use this data for its own purposes would be a gross violation of many important privacy principles. But social networks don’t actually do (too many) things; instead they provide tools for their users to do things. As such, while Facebook should not attempt to detect and use political data about its users, it could give tools to its users that let them select subsets of their friends, based only on information that those friends overtly shared. On Facebook, you can enter the query, “My friends who like Donald Trump” and it will show you that list. They could also let you ask “My Friends who match me politically” if they wanted to provide that capability.

Now imagine more complex queries aimed specifically at GOTV, such as: “My friends who match me politically but are not scored as likely to vote” or “My friends who match me politically and are not registered to vote.” Possibly adding “Sorted by the closeness of our connection” which is something they already score.  read more »
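
To make the mechanics concrete, here is a toy sketch of what such a query could compute. The field names, scores and data are all invented; nothing here is a real social network API:

    # Hypothetical sketch of a GOTV friend-filtering query.
    # Field names, scores and the data itself are invented for illustration;
    # no real social network API is shown here.
    from dataclasses import dataclass

    @dataclass
    class Friend:
        name: str
        political_match: float   # 0..1, similarity to the user's stated leaning
        likely_to_vote: float    # 0..1, estimated from registration and history they shared
        closeness: float         # 0..1, strength of the connection

    friends = [
        Friend("Alice", political_match=0.9, likely_to_vote=0.2, closeness=0.8),
        Friend("Bob",   political_match=0.8, likely_to_vote=0.9, closeness=0.5),
        Friend("Carol", political_match=0.3, likely_to_vote=0.1, closeness=0.9),
    ]

    # "My friends who match me politically but are not scored as likely to vote,
    #  sorted by the closeness of our connection"
    gotv_targets = sorted(
        (f for f in friends if f.political_match > 0.7 and f.likely_to_vote < 0.5),
        key=lambda f: f.closeness,
        reverse=True,
    )
    print([f.name for f in gotv_targets])   # ['Alice']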

NHTSA Regulations part 4: Crashes, Training, Certification, State Law, Operation, Validation and Autopilots

After my initial reactions and Overall Analysis, here is a point by point consideration of the second set of elements from NHTSA’s 15 point certification list for robocars. See my series for other articles or the first half of the list.

Crashworthiness

In this section, they remind vendors they still need to meet the same standards as regular cars do. We are not ready to start removing heavy passive safety systems just because the vehicles get in fewer crashes. In the future we might want to change that, as those systems can be 1/3 of the weight of a vehicle.

They also note that different seating configurations (like rear facing seats) need to protect occupants as well. It’s already the case that rear facing seats will likely be better in forward collisions. Face-to-face seating may present some challenges in this environment, as it is less clear how to deploy the airbags. Taxis in London often feature face-to-face seating, though that is less common in the USA. Will this be possible under these regulations?

The rules also call for unmanned vehicles to absorb energy like existing vehicles. I don’t know if this is a requirement on unusual vehicle design for regular cars or not. (If it were, it would have prohibited SUVs with their high bodies that can cause a bad impact with a low-body sports-car.)

Consumer Education and Training

This seems like another mild goal, but we don’t want a world where you can’t ride in a taxi unless you are certified as having taken a training course, especially for a vehicle in which you have very little to do. These rules are written more for people buying a car (for whom training can make sense) than for those just planning to be a passenger.

Registration and Certification

This section imagines labels for drivers. It’s pretty silly and not very practical. Is a car going to have a sticker saying “This car can drive itself on Elm St. south of Pine, or on highway 101 except in Gilroy?” There should be another way, not labels, that this is communicated, especially because it will change all the time.

Post-Crash Behavior

This set is fairly reasonable — it requires a process describing what you do to a vehicle after a crash before it goes back into service.

Federal, State and Local Laws

This section calls for a detailed plan on how to assure compliance with all the laws. Interestingly, it also asks for a plan on how the vehicle will violate laws that human drivers sometimes violate. This is one of the areas where regulatory effort is necessary, because, strictly speaking, cars are not allowed to violate the law — doing things like crossing the double-yellow line to pass a car blocking your path.  read more »

NHTSA Regulations part 3: Data Sharing, Privacy, Safety, Security and HMI

After my initial reactions and Overall Analysis, here is a point by point consideration of the elements from NHTSA’s 15 point certification list for robocars. See also the second half and the whole series.

Let’s dig in:

Data Recording and Sharing

These regulations require a plan about how the vehicle keeps logs around any incident (while following privacy rules.) This is something everybody already does — in fact they keep logs of everything for now — since they want to debug any problems they encounter. NHTSA wants the logs to be available to NHTSA for crash investigation.

NHTSA also wants recordings of positive events (the system avoided a problem.)

Most interesting is a requirement for a data sharing plan. NHTSA wants companies to share their logs with their competitors in the event of incidents and important non-incidents, like near misses or detection of difficult objects.

This is perhaps the most interesting element of the plan, but it has seen some resistance from vendors. And it is indeed something that might not happen at scale without regulation. Many teams will consider their set of test data to be part of their crown jewels. Such test data is only gathered by spending many millions of dollars to send drivers out on the roads, or by convincing customers or others to voluntarily supervise while their cars gather test data, as Tesla has done. A large part of the head-start that leaders have in this field is the amount of different road situations they have been able to expose their vehicles to. Recordings of mundane driving activity are less exciting and will be easier to gather. Real world incidents are rare and gold for testing. The sharing is not as golden, because each vehicle will have different sensors, located in different places, so it will not be easy to adapt logs from one vehicle directly to another. While a vehicle system can play its own raw logs back directly to see how it performs in the same situation, other vehicles won’t readily do that.

Instead this offers the ability to build something that all vendors want and need, and the world needs, which is a high quality simulator where cars can be tested against real world recordings and entirely synthetic events. The data sharing requirement will allow the input of all these situations into the simulator, so every car can test how it would have performed. This simulation will mostly be at the “post perception level” where the car has (roughly) identified all the things on the road and is figuring out what to do with them, but some simulation could be done at lower levels.

These data logs and simulator scenarios will create what is known as a regression test suite. You test your car in all the situations, and every time you modify the software, you test that your modifications didn’t break something that used to work. It’s an essential tool.
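
In code terms, a regression suite built from shared scenarios might look, very roughly, like this; the scenario format and planner interface are invented for illustration, and real simulators are vastly richer:

    # Minimal sketch of a post-perception regression test harness.
    # The scenario format and planner interface are invented for illustration;
    # real simulators are far richer than this.
    from dataclasses import dataclass, field

    @dataclass
    class Scenario:
        name: str
        tracked_objects: list                     # post-perception: the road users around the car
        acceptable_outcomes: set = field(default_factory=lambda: {"stopped", "yielded"})

    def run_regression(planner, scenarios):
        """Replay every recorded scenario against the planner; report regressions."""
        failures = []
        for sc in scenarios:
            outcome = planner(sc.tracked_objects)
            if outcome not in sc.acceptable_outcomes:
                failures.append((sc.name, outcome))
        return failures

    # A trivial stand-in planner that brakes whenever anything is tracked ahead.
    def cautious_planner(tracked_objects):
        return "stopped" if tracked_objects else "proceeded"

    suite = [
        Scenario("stalled_car_in_lane", tracked_objects=["stalled_car"]),
        Scenario("empty_road", tracked_objects=[], acceptable_outcomes={"proceeded"}),
    ]
    print(run_regression(cautious_planner, suite))   # [] means nothing regressed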

In the history of software, there have been shared public test suites (often sourced from academia) and private ones that are closely guarded. For some time, I have proposed that it might be very useful if there were a public and open source simulator environment which all teams could contribute scenarios to, but I always expected most contributions would come from academics and the open source community. Without this rule, the teams with the most test miles under their belts might be less willing to contribute.

Such a simulator would help all teams and level the playing field. It would allow small innovators to even build and test prototype ideas entirely in simulator, with very low cost and zero risk compared to building it in physical hardware.

This is a great example of where NHTSA could use its money rather than its regulatory power to improve safety, by funding the development of such test tools. In fact, if done open source, the agencies and academic institutions of the world could fund a global one. (This would face opposition from companies hoping to sell test tools, but there will still be openings for proprietary test tools.)

Privacy

This section demands a privacy policy. I’m not against that, though of course the history of privacy policies is not a great one. They mostly involve people clicking “I agree” to things they don’t read. More important is the requirement that vendors be thinking about privacy.

The requirement for user choice is an interesting one, and it conflicts with the logging requirements. People are wary of technology that will betray them in court. Of course, as long as the car is not a hybrid car that mixes human driving with self-driving, and the passenger is not liable in an accident, there should be minimal risk to the passenger from accidents being recorded.

The rules require that personal information be scrubbed from any published data. This is a good idea but history shows it is remarkably hard to do properly.  read more »

Detailed analysis of NHTSA robocar regulations: Overview

The recent Federal Automated Vehicles Policy is long. (My same-day analysis is here and the whole series is being released.) At 116 pages (to be fair, less than half is policy declarations and the rest is plans for the future and associated materials) it is much larger than many of us were expecting.

The policy was introduced with a letter attributed to President Obama, where he wrote:

There are always those who argue that government should stay out of free enterprise entirely, but I think most Americans would agree we still need rules to keep our air and water clean, and our food and medicine safe. That’s the general principle here. What’s more, the quickest way to slam the brakes on innovation is for the public to lose confidence in the safety of new technologies. Both government and industry have a responsibility to make sure that doesn’t happen. And make no mistake: If a self-driving car isn’t safe, we have the authority to pull it off the road. We won’t hesitate to protect the American public’s safety.

This leads in to an unprecedented effort to write regulations for a technology that barely exists and has not been deployed beyond the testing stage. The history of automotive regulation has been the opposite, and so this is a major change. The key question is what justifies such a big change, and the cost that will come with it.

Make no mistake, the cost will be real. The cost of regulations is rarely known in advance but it is rarely small. Regulations slow all players down and make them more cautious — indeed it is sometimes their goal to cause that caution. Regulations result in projects needing “compliance departments” and the establishment of procedures and legal teams to assure they are complied with. In almost all cases, regulations punish small companies and startups more than they punish big players. In some cases, big players even welcome regulation, both because it slows down competitors and innovators, and because they usually also have skilled governmental affairs teams and lobbying teams which are able to subtly bend the regulations to match their needs.

This need not even be nefarious, though it often is. Companies that can devote a large team to dealing with regulations, those who can always send staff to meetings and negotiations and public comment sessions will naturally do better than those which can’t.

The US has had a history of regulating after the fact. Of being the place where “if it’s not been forbidden, it’s permitted.” This is what has allowed many of the most advanced robocar projects to flourish in the USA.

The attitude has been that industry (and startups) should lead and innovate. Only if the companies start doing something wrong or harmful, and market forces won’t stop them from being that way, is it time for the regulators to step in and make the errant companies do better. This approach has worked far better than the idea that regulators would attempt to understand a product or technology before it is deployed, imagine how it might go wrong, and make rules to keep the companies in line before any of them have shown evidence of crossing a line.

In spite of all I have written here, the robocar industry is still young. There are startups yet to be born which will develop new ideas yet to be imagined that change how everybody thinks about robocars and transportation. These innovative teams will develop new concepts of what it means to be safe and how to make things safe. Their ideas will be obvious only well after the fact.

Regulations and standards don’t deal well with that. They can only encode conventional wisdom. “Best practices” are really “the best we knew before the innovators came.” Innovators don’t discard the old wisdom willy-nilly; when they ignore or supersede it, they do so quite deliberately.

What’s good?

Some players — notably the big ones — have lauded these regulations. Big players, like car companies, Google, Uber and others have a reason to prefer regulations over a wild west landscape. Big companies like certainty. They need to know that if they build a product, that it will be legal to sell it. They can handle the cost of complex regulations, as long as they know they can build it.  read more »

Critique of NHTSA's newly released regulations

The long-awaited list of recommendations and potential regulations for Robocars has just been released by NHTSA, the federal agency that regulates car safety and safety issues in car manufacture. Normally, NHTSA does not regulate car technology before it is released into the market, and the agency, while it says it is wary of slowing down this safety-increasing technology, has decided to do the unprecedented — and at a whopping 116 pages.

Broadly, this is very much the wrong direction. Nobody — not Google, Uber, Ford, GM or certainly NHTSA — knows the precise form these cars will have when deployed. Almost surely something will change from our existing knowledge today. They know this, but still wish to move ahead. Some of the larger players have pushed for regulation. Big companies like certainty. They want to know what the rules will be before they invest. Startups thrive better in the chaos, making up the rules as they go along.

NHTSA hopes to define “best practices” but the best anybody can do in 2016 is lay down existing practices and conventional wisdom. The entirely new methods of providing safety that are yet to be invented won’t be in such a definition.

The document is very detailed, so it will generate several blog posts of analysis. Here I present just initial reactions. Those reactions are broadly negative. This document is too detailed by an order of magnitude. Its regulations begin today, but fortunately they are also accepting public comment. The scope of the document is so large, however, that it seems extremely unlikely that they would scale back this document to the level it should be at. As such, the progress of robocar development in the USA may be seriously negatively affected.

Vehicle performance guidelines

The first part of the regulations is a proposed 15 point safety standard. It must be certified (by the vendor) that the car meets these standards. NHTSA wants the power, according to an Op-Ed by no less than President Obama, to be able to pull cars from the road that don’t meet these safety promises.

  • Data Recording and Sharing
  • Privacy
  • System Safety
  • Vehicle Cybersecurity
  • Human Machine Interface
  • Crashworthiness
  • Consumer Education and Training
  • Registration and Certification
  • Post-Crash Behavior
  • Federal, State and Local Laws
  • Operational Design Domain
  • Object and Event Detection and Response
  • Fall Back (Minimal Risk Condition)
  • Validation Methods
  • Ethical Considerations

As you might guess, the most disturbing is the last one. As I have written many times, ethical “trolley problems,” where cars must decide between killing one person or another, are a philosophy class tool, not a guide to real world situations. Developers should spend as close to zero effort on these problems as possible; they are not common enough to warrant special attention, and get it only because of our morbid fascination with machines making life or death decisions in hypothetical situations. Let the policymakers answer these questions if they want to; programmers and vendors don’t.

For the past couple of years, this has been a game that’s kept people entertained and ethicists employed. The idea that government regulations might demand solutions to these problems before these cars can go on the road is appalling. If these regulations are written this way, we will delay saving lots of real lives in the interest of debating which highly hypothetical lives will be saved or harmed in ridiculously rare situations.

NHTSA’s rules demand that ethical decisions be “made consciously and intentionally.” Algorithms must be “transparent” and based on input from regulators, drivers, passengers and road users. While the section makes mention of machine learning techniques, it seems in the same breath to forbid them.

Most of the other rules are more innocuous. Of course all vendors will know and have little trouble listing what roads their car works on, and they will have extensive testing data on the car’s perception system and how it handles every sort of failure. However, the requirement to keep the government constantly updated will be burdensome. Some vehicles will be adding streets to their route map literally every day.

While I have been a professional privacy advocate, and I do care about just how the privacy of car users is protected, I am frankly not that concerned during the pilot project phase about how well this is done. I do want a good regime — and even the ability to do anonymous taxi — so it’s perhaps not too bad to think about these things now, but I suspect these regulations will be fairly meaningless unless written in consultation with independent privacy advocates. The hard reality is that during the test phase, even a privacy advocate has to admit that the cars will need to make very extensive recordings of everything they can, so that any problems encountered can be studied and fixed and placed into the test suite.

50 state laws

NHTSA’s plan has been partially endorsed by the self-driving coalition for safer streets (whose members include big players Ford, Google, Volvo, Uber and Lyft.) They like the fact that it has guidance for states on how to write their regulations, fearing that regulations may differ too much state to state. I have written that having 50 sets of rules may not be that bad an idea because jurisdictional competition can allow legal innovation and having software load new parameters as you drive over a border is not that hard.

In this document NHTSA asks the states to yield to the DOT on regulating robocar operation and performance. States should stick to registering cars, rules of the road, safety inspections and insurance. States will regulate human drivers as before, but the feds will regulate computer drivers.

States will still regulate testing, in theory, but the test cars must comply with the federal regulations.

New Authorities

A large part of the document just lists the legal justifications for NHTSA to regulate in this fashion and is primarily for policy wonks. Section 4, however, lists new authorities NHTSA is going to seek in order to do more regulation.

Some of the authorities they may seek include:

  • Pre-market safety assurance: Defining testing tools and methods to be used before selling
  • Pre-market approval authority: Vendors would need approval from NHTSA before selling, rather than self-certifying compliance with the regulations
  • Hybrid approaches of pre-market approval and self-certification
  • Cease and desist authority: The ability to demand cars be taken off the road
  • Exemption authority: An ability to grant rule exemptions for testing
  • Post-sale authority to regulate software changes
  • Much more

Other quick notes:

  • NHTSA has abandoned their levels in favour of the SAE’s. The SAE’s were almost identical of course, with the addition of a “level 5” which is meaningless because it requires a vehicle that can drive literally everywhere, and there is not really a commercial reason to make a car at present that can do that.
  • NHTSA is now pushing the acronym “HAV” (highly automated vehicle) as yet another contender in the large sea of names people use for this technology. (Self-driving car, driverless car, autonomous vehicle, automated vehicle, robocar etc.)

This was my preliminary report. More analysis can be found under the NHTSA tag.

The incredible Cheapness of Being Parked

Some people have wondered about my forecast in the spreadsheet on Robotaxi economics about the very low parking costs I have predicted. I wrote about most of the reasons for this in my 2007 essay on Robocar Parking but let me expand and add some modern notes here.

The Glut of Parking

Today, researchers estimate there are between 3 and 8 parking spots for every car in the USA. The number 8 includes lots of barely used parking (all the shoulders of all the rural roads, for example) but the value of 3 is not unreasonable. Almost all working cars have a spot at their home base, and a spot at their common destination (the workplace.) There are then lots of other places (streets, retail lots, etc.) to find that 3rd spot. It’s probably an underestimate.

We can’t use all of these at once, but we’re going to get a great deal more efficient at it. Today, people must park within a short walk of their destination. Nobody wants to park a mile away. Parking lots, however, need to be sized for peak demand. Shopping malls are surrounded by parking that is only ever used during the Christmas shopping season. Robocars will “load balance” so that if one lot is full, a spot in an empty lot too far away is just fine.

Small size and Valet Density

When robocars need to park, they’ll do it like the best parking valets you’ve ever seen. They don’t even need to leave space for the valet to open the door to get out. (The best ones get close by getting out the window!) Because the cars can move in concert, a car at the back can get out almost as quickly as one at the front. No fancy communications network is needed; all you need is a simple rule that if you boxed somebody in, and they turn on their lights and move an inch towards you, you move an inch yourself (and so on with those who boxed you in) to clear a path. Already, you’ve got 1.5x to 2x the density of an ordinary lot.
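
Here is a toy version of that chain rule, purely to illustrate the idea; nothing about it resembles a real vehicle coordination protocol:

    # Toy illustration of the "move an inch to clear a path" chain rule.
    # Purely illustrative; not a real vehicle coordination protocol.

    def clear_path(leaving_row):
        """Cars are packed nose-to-tail in rows 0..N, with row 0 at the exit aisle.
        The car in leaving_row signals; the request ripples forward through every
        car boxing it in, then they roll out front-first to open a path."""
        blockers = list(range(leaving_row - 1, -1, -1))   # order the request propagates
        steps = [f"car in row {r} sees the car behind it edge forward and passes it on"
                 for r in blockers]
        steps += [f"car in row {r} rolls into the aisle to open a gap"
                  for r in sorted(blockers)]
        steps.append(f"car in row {leaving_row} drives out")
        return steps

    for step in clear_path(leaving_row=3):
        print(step)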

I forecast that many robotaxis will be small, meant for 1-2 people. A car like that, 4’ by 12’ would occupy under 50 square feet of space. Today’s parking lots tend to allocate about 300 square feet per car. With these small cars you’re talking 4 to 6 times as many cars in the same space. You do need some spare space for moving around, but less than humans need.

When we’re talking about robotaxis, we’re talking about sharing. Much of the time robotaxis won’t park at all, they would be off to pick up their next passenger. A smaller fraction of them would be waiting/parked at any given time. My conservative prediction is that one robotaxi could replace 4 cars (some estimate up to 10 but they’re overdoing it.) So at a rough guess we replace 1,000 cars, 900 of which are parked, with 250 cars, only 150 of which are parked at slow times. (Almost none are parked during the busy times.)
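
Putting those rough numbers together (all of them the guesses from the last few paragraphs, not measurements):

    # Rough parking-demand arithmetic using the guesses in the text.
    private_cars = 1000
    private_cars_parked = 900            # ~90% of private cars sit parked at any given time

    robotaxis = private_cars // 4        # one robotaxi replacing ~4 cars (the conservative guess)
    robotaxis_parked_offpeak = 150       # most are out serving riders; almost none park at peak

    space_per_car_today = 300            # square feet, a typical lot allocation
    space_per_small_robotaxi = 50        # a 4' x 12' two-seater, packed valet-style

    today_sqft = private_cars_parked * space_per_car_today
    future_sqft = robotaxis_parked_offpeak * space_per_small_robotaxi
    print(f"Fleet size:      {private_cars} private cars -> {robotaxis} robotaxis")
    print(f"Parked vehicles: {private_cars_parked} -> {robotaxis_parked_offpeak}")
    print(f"Space needed:    {today_sqft:,} sq ft -> {future_sqft:,} sq ft")
    # 270,000 sq ft down to 7,500 sq ft for this slice of the fleet, under these assumptions.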

Many more spaces available for use

Robocars don’t park, they “stand.” Which means we can let them wait all sorts of places we don’t let you park. In front of hydrants. In front of driveways. In driveways. A car in front of a hydrant should be gone at the first notification of a fire or sound of a siren. A car in front of your driveway should be gone the minute your garage opens or, if your phone signals your approach, before you get close to your house. Ideally, you won’t even know it was there. You can also explicitly rent out your driveway space for money if you wish it. (You could rent your garage too, but the rate might be so low you will prefer to use it to add a new room to your house unless you still own a car.)

In addition, at off-peak times (when less road capacity is needed) robocars can double park or triple park along the sides of roads. (Human cars would need to use only the curb spots, but the moment they put on their turn signal, a hole can clear through the robocars to let them out.)

So if we consider just these numbers (only 1/6 as many parked vehicles, and either 4 times the density in parking lots or 2-3 times the volume of non-lot parking, due to the 2 spots per car and loads of extra spots), we’re talking about a huge, massive, whopping glut of parking. Such a large glut that, in time, a lot of this parking space will very likely be converted to other uses, slowly reducing the glut.

Ability to move in response to demand

To add to this glut, robocars can be the best parking customers you could ever imagine. If you own a parking lot, you might have sold the space at the back or top of your lot to the robocars — they will park in the unpopular more remote sections for a discount. The human driver customers will prefer those spots by the entrance. As your lot fills up, you can ask the robocars to leave, or pay more. If a high paying human driver appears at the entrance, you can tell the robocars you want their space, and off they can go to make room. Or they can look around on the market and discover they should just pay you more to keep the space. The lot owner is always making the most they can.

If robocars are electric, they should also be excellent visitors, making little noise and emitting no soot to dirty your walls. They will leave a tiny amount of rubber and that’s about it.

The “spot” market

All of this will be driven by what I give the ironic name of the “spot” market in parking. Such markets are already being built by start-ups for human drivers. In this market, space in lots would be offered and bid for like any other market. Durations will be negotiated, too. Cars could evaluate potential waiting places based on price and the time it will take to get there and park, as well as the time to get to their likely next pickup. A privately owned car might drive a few miles to a super cheap lot to wait 7 hours, but when it’s closer to quitting time, pay a premium (in competition with many others of course) to be close to their master.  read more »
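
Here is one way a car might weigh such offers, with completely made-up prices and costs:

    # Illustrative sketch of how a robotaxi might pick a waiting spot on a "spot" market.
    # Prices, travel times and the scoring are all made up for illustration.

    def total_cost(offer, hours_to_wait, cost_per_deadhead_minute=0.10):
        """Price of the space plus the cost of driving to it and on to the next pickup."""
        parking = offer["price_per_hour"] * hours_to_wait
        deadhead = (offer["minutes_to_reach"] + offer["minutes_to_next_pickup"]) * cost_per_deadhead_minute
        return parking + deadhead

    offers = [
        {"name": "downtown garage", "price_per_hour": 2.00, "minutes_to_reach": 3, "minutes_to_next_pickup": 2},
        {"name": "remote lot", "price_per_hour": 0.25, "minutes_to_reach": 15, "minutes_to_next_pickup": 14},
    ]

    hours = 7
    for offer in offers:
        print(f"{offer['name']}: ${total_cost(offer, hours):.2f}")
    best = min(offers, key=lambda o: total_cost(o, hours))
    print("Chosen:", best["name"])

With these made-up numbers, the car picks the cheap remote lot for a 7 hour wait; raise the cost of deadhead driving or shorten the wait and the downtown garage wins instead.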

Tesla Radar, MobilEye fight and the Comma One $1,000 add-on-box

Tesla’s spat with MobilEye reached a new pitch this week, and Tesla announced a new release of their autopilot and new plans. As reported here earlier, MobilEye announced during the summer that they would not be supplying the new and better versions of their EyeQ system to Tesla. Since that system was and is central to the operation of the Tesla autopilot, they may have been surprised that MBLY stock took a big hit after that announcement (though it recovered for a while and is now back down) and TSLA did not.

Statements and documents now show a nastier battle, with MobilEye intimating they were worried about Tesla using their tool in an unsafe way, invoking all the debate about the fatality and other crashes allegedly caused by people who are lulled into not bothering to supervise the autopilot. Tesla says that instead they have been developing their own advanced vision tools, and that MobilEye was afraid of that and told Tesla that if they wanted more EyeQ chips, they would need to halt the competing project and commit to ME. That’s a nasty spat.

Tesla’s own efforts represent a threat to MobilEye from the growing revolution in neural network pattern matchers. Computer vision is going through a big revolution. MobilEye is a big player in that revolution, because their ASICs do both standard machine vision functions and can do neural networks. An ASIC will beat a general purpose processor when it comes to cost, speed and power, but only if the ASIC’s abilities were designed to solve those particular problems. Since it takes years to bring an ASIC to production, you have to aim right. MobilEye aimed pretty well, but at the same time lots of research out there is trying to aim even better, or do things with more general purpose chips like GPUs. Soon we will see ASICs aimed directly at neural network computations. To solve the problem with neural networks, you need the computing horsepower, and you need well designed deep network architectures, and you need the right training data and lots of it. Tesla and ME both are gaining lots of training data. Many companies, including Nvidia, Intel and others are working on the hardware for neural networks. Most people would point to Google as the company with the best skills in architecting the networks, though there are many doing interesting work there. (Google’s DeepMind built the tools that beat humans at the seemingly impossible game of Go, for example.) It’s definitely a competitive race.

Using Radar

While Tesla works on their vision systems, they also announced a plan to make much more use of radar. That’s an interesting plan. Radar has been the poor 3rd-class sensor of the robocar, after LIDAR and vision. Everybody uses it — you would be crazy not to unless you need to be very low cost. Radar sees further than the other systems, and it tells you immediately how fast any radar target you see is moving relative to you. It sees through fog and other weather, and it can even see under and around big cars in front of you as it bounces off the road and other objects. It’s really good at licence plates as well.

What radar doesn’t have is high resolution. Today’s automotive radars have gotten good enough to tell you what lane an object like another car is in, but they are not designed to have any vertical resolution — you will get radar returns from a stalled car ahead of you on the road and a sign above that lane, and not be sure of the difference. You need your car to avoid a stalled car in your lane, but you can’t have a car that hits the brakes every time it sees a road sign or bridge!

Real world radar is messy. Your antennas send out and receive from a very broad cone with potential signals from other directions and from side lobes. Reflections are coming from vehicles and road users but also from the ground, hills, trees, fences, signs, bushes and bridges. It’s work to get reliable information from it. Early automotive radars found the best solution was to use the doppler speed information, and discard all returns from anything that wasn’t moving towards or away from you — including stalled cars and cross traffic.

One thing that can help (imperfectly) is a map. You can know where the bridges and signs are so you don’t brake for them. Now you can brake for the stalled cars and the cross traffic the Tesla failed to see. You still have an issue with a stalled car under a bridge or sign, but you’re doing a lot better.
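
Here is a stripped-down sketch of those two filtering strategies, with an invented return format and map; real radar pipelines are far more involved:

    # Stripped-down illustration of the two radar filtering strategies described above.
    # The return format, sign conventions and map are invented for illustration;
    # real radar pipelines are far richer than this.

    ego_speed_mps = 30.0
    overhead_structures = {("highway_101", 1520)}   # known bridge/sign locations: (road, metre mark)

    def ground_speed(r):
        # doppler_mps is the radial closing rate (negative = approaching us);
        # adding our own speed recovers the target's speed over the ground.
        return ego_speed_mps + r["doppler_mps"]

    def classic_filter(returns):
        """Old ACC-style approach: keep only targets that are themselves moving,
        which discards signs and bridges but also stalled cars and cross traffic."""
        return [r for r in returns if abs(ground_speed(r)) > 0.5]

    def map_aware_filter(returns, road):
        """With a map: keep stationary targets too, unless that spot is a known
        overhead sign or bridge."""
        return [r for r in returns if (road, r["position_m"]) not in overhead_structures]

    returns = [
        {"label": "overhead sign", "position_m": 1520, "doppler_mps": -30.0},
        {"label": "stalled car",   "position_m": 1380, "doppler_mps": -30.0},
        {"label": "lead vehicle",  "position_m": 1300, "doppler_mps": -5.0},
    ]

    print([r["label"] for r in classic_filter(returns)])                   # ['lead vehicle']
    print([r["label"] for r in map_aware_filter(returns, "highway_101")])  # sign dropped, stalled car kept

The map-aware filter keeps the stalled car while still ignoring the overhead sign, which is exactly the improvement described above, and it still stumbles when a stalled car sits under a bridge.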

There’s a lot of room for improvement in radar, and I will presume — Tesla has not said — that Tesla plans to work on this. The automotive radars everybody buys (from companies like Bosch) were made for the ADAS market — adaptive cruise control, emergency braking etc. It is possible to design new radars with more resolution (particularly in the vertical) and other approaches. You can also try for more resolution, particularly by splitting the transmitter and receiver to produce a synthetic larger aperture. You can go into different bands and get more bandwidth and get more resolution in general. You can play more software tricks, and most particularly, you can learn by examining not just single radar returns, but rather the pattern of returns over time. (After all, humans don’t navigate from still frames, we depend on our visual system’s deep evolved ability to use motion and other clues to understand the world.)

The neural networks are making strides here. For example, while pedestrians produce basic radar returns, it turns out that their walking stride has a particular pattern of changes that can be identified by neural networks. People are doing research now on how examining the moving and dynamic pattern of radar returns can help you get more resolution and also identify shapes and motion patterns of objects and figure out what they are.

I will also speculate that it might be possible to return to a successor of the “swept” radars of old, the ones we are used to seeing in old war movies. Modern car radars don’t scan like that, but I have to wonder whether, with new techniques like phased arrays to steer virtual beams (already the norm in military radar) and modern high speed electronics, we might produce radars that get a better sense of where their target is. We’re also getting better at sensor fusion — identifying a radar target in an image or LIDAR return to help learn more about it.

The single best way to improve radar resolution would be to use more bandwidth. There have been promising experiments with ultrawideband signals at very high frequencies. As the name suggests, UWB spreads its energy over a very wide band, which means it puts little energy into any one part of the spectrum and so has less chance of interfering with other users of those bands. It's also possible that the FCC, seeing the tremendous public value that reliable robocars offer, might consider opening up more spectrum for radar applications using modern techniques, and thus increase the resolution available.
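The reason bandwidth matters so much is the textbook relation between a radar's bandwidth B and its range resolution ΔR, the smallest separation at which two objects can be told apart in range:

```latex
\Delta R = \frac{c}{2B}
\qquad\text{e.g.}\quad B = 1\ \text{GHz} \;\Rightarrow\; \Delta R = \frac{3\times 10^{8}\ \text{m/s}}{2\times 10^{9}\ \text{Hz}} = 0.15\ \text{m}
```

Double the bandwidth and you halve ΔR, which is why wider spectrum allocations are so attractive.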

In other words, Tesla is wise to work on getting more from radar. With the loss of all MobilEye’s vision tools, they will have to work hard to duplicate and surpass that. For now, Tesla is committed to using parts that are for sale for existing production cars, costing hundreds of dollars. That has taken LIDAR “off their radar” even though almost all research teams depend on LIDAR and expect LIDAR to be cheap in a couple of years. (Including the LIDAR from Quanergy, a company I advise.)

Comma announces a $1,000 autopilot box

I wrote earlier about comma.ai and their efforts to drive with just vision, radar and neural networks. They now plan to offer a box for $1,000 that gives you some basic autopilot functionality as an add-on.

To do this, they are working with only some specific car models, namely some Honda vehicles that already have advanced ADAS in them. Using the car’s internal bus, they can talk to the sensors in these cars (in particular the radar, since the Comma One has a camera) and also send control signals to actuate the steering, brakes and throttle. Then their neural networks can take the sensor information, and output the steering and speed commands to keep you in the lane. (Details are scant so I don’t know if the Comma One box uses its own camera or depends on access to the car’s.)
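For readers curious what talking to the car's bus looks like in practice, here is a rough sketch using the python-can library. The message IDs and byte layouts below are invented placeholders; the real Honda definitions are proprietary and Comma has not published theirs.

```python
# Rough sketch of bus access with the python-can library. The arbitration IDs
# and data layouts are invented placeholders, not Honda's or Comma's actual ones.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

HYPOTHETICAL_RADAR_ID = 0x123      # placeholder: lead-vehicle track from the car's radar
HYPOTHETICAL_STEER_CMD_ID = 0x456  # placeholder: steering torque command

def read_lead_vehicle():
    """Wait for one (hypothetical) radar track frame and decode distance and speed."""
    while True:
        msg = bus.recv(timeout=1.0)
        if msg is not None and msg.arbitration_id == HYPOTHETICAL_RADAR_ID:
            distance_m = int.from_bytes(msg.data[0:2], "big") * 0.01
            rel_speed_mps = int.from_bytes(msg.data[2:4], "big", signed=True) * 0.01
            return distance_m, rel_speed_mps

def send_steering_torque(torque_units):
    """Send one (hypothetical) steering command frame."""
    payload = int(torque_units).to_bytes(2, "big", signed=True) + bytes(6)
    bus.send(can.Message(arbitration_id=HYPOTHETICAL_STEER_CMD_ID,
                         data=payload, is_extended_id=False))
```

The real engineering is in knowing (or reverse-engineering) what those frames mean, and in the safety logic that decides what to send.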

When I rode in Comma’s prototype it certainly wasn’t up to the level of the Tesla autopilot or some others, but it has been several months so I can’t judge it now. Like the Tesla autopilot, the Comma will not be safe enough to drive the car on its own, and you will need to supervise and be ready to intervene at any time. If you get complacent, as some Tesla drivers have, you could get injured or killed. I have yet to learn what measures Comma will take to make sure people keep their eyes on the road.

Generally, I feel that autopilots are not very exciting products when you have to watch them all the time — as you do — and that bolt-on products are not particularly exciting either. Cruise's initial plan (after they abandoned valet parking) was a bolt-on autopilot, but they soon switched to trying to build a real vehicle, and that got them the huge $700M sale to General Motors.

But for Comma, there is a worthwhile angle. Users of this bolt-on box will be helping to provide training data to improve their systems. In fact, they will be paying for the privilege of testing the system and training it, something that companies like Google did the old fashioned way, by paying a staff of professional drivers to take the cars out and gather data. For a tiny, young startup it's a worthwhile approach.

Robotaxi Economics

The vision many of us have for robocars is a world of less private car ownership and more use of robotaxis — on-demand ride service in a robocar. That's what companies like Uber are clearly pushing for, and probably Google, and several of the big car companies, including Mercedes, Ford and BMW, have also said they want to get there — in the case of Ford, without first making private robocars for their traditional customers.

In this world, what does it cost to operate these cars? How much might competitive services charge for rides? How much money will they make? What factors, including price, will they compete on, and how will that alter the landscape?

Here are some basic models of cost. I compare a low-cost 1-2 person robotaxi, a higher-end 1-2 person robotaxi, a 4-person traditional sedan robotaxi and the costs of ownership for a private car, the Toyota Prius 2, as calculated by Edmunds. An important difference is that the taxis are forecast to drive 50,000 miles/year (as taxis do) and wear out fully in 5 years. The private car is forecast to drive 15,000 miles/year (higher than the average for new cars, which is 12,000) and to have many years and miles of life left in it. As such the taxis are fully depreciated in this 5 year timeline, and the private car only partly.

Some numbers are speculative. I am predicting that the robotaxis will have an insurance cost well below today’s cars, which cost about 6 cents/mile for liability insurance. The taxis will actually be self-insured, meaning this is the expected cost of any incidents. In the early days, this will not be true — the taxis will be safer, but the incidents will cost more until things settle down. As such the insurance prices are for the future. This is a model of an early maturing market where the volume of robotaxis is fairly high (they are made in the low millions) and the safety record is well established. It’s a world where battery prices and reliability have improved. It’s a world where there is still a parking glut, before most surplus parking is converted to other purposes.

Fuel is electric for the taxis, gasoline/hybrid for the Prius. The light vehicle is very efficient.

Maintenance is also speculative. Today’s cars spend about 6 cents/mile, including 1 cent/mile for the tires. Electric cars are expected to have lower maintenance costs, but the totals here are higher because the car is going 250,000 miles not 75,000 miles like the Prius. With this high level of maintenance and such smooth driving, I forecast low repair cost.

Parking is cheaper for the taxis for several reasons. First, they can freely move around looking for the cheapest place to wait, which will often be free city parking, or the cheapest advertised parking on the auction “spot” market. They do not need to park right where the passenger is going, as the private car does. They will park valet style, and so the small cars will use less space and pay less too. Parking may actually be much cheaper than this, even free in many cases. Of course, many private car owners do not pay for parking overtly, so this varies a lot from city to city.

(You can view the spreadsheet directly on Google docs and download it to your own tool to play around with the model. Adjust my assumptions and report your own price estimates.)

The Prius has one of the lowest costs of ownership of any regular car (take out the parking and it's only 38 cents/mile), but it is massively undercut by the electric robotaxi, especially in my estimates for the half-width electric city car. (I have not even included the tax credits that apply to electric cars today.) For the taxis I add 15% vacant miles to come up with the final cost.
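For those who would rather poke at code than a spreadsheet, here is a toy version of this kind of per-mile model in Python. Every number below is an illustrative placeholder rather than a figure from my spreadsheet; the structure (depreciation spread over lifetime miles, plus per-mile costs, inflated for vacant miles) is the point.

```python
# Toy per-mile cost model: robotaxi vs. privately owned car.
# All component numbers are illustrative placeholders, not the spreadsheet's values.

def per_mile_cost(vehicle_cost, lifetime_miles, insurance, fuel,
                  maintenance, parking, vacant_fraction=0.0):
    """All arguments except vehicle_cost and lifetime_miles are dollars per mile.
    vacant_fraction inflates the total to cover empty repositioning miles."""
    depreciation = vehicle_cost / lifetime_miles
    base = depreciation + insurance + fuel + maintenance + parking
    return base * (1.0 + vacant_fraction)

# Small 1-2 person electric robotaxi, fully worn out over 5 years x 50,000 miles.
taxi = per_mile_cost(vehicle_cost=20_000, lifetime_miles=250_000,
                     insurance=0.02, fuel=0.02, maintenance=0.05,
                     parking=0.02, vacant_fraction=0.15)

# Privately owned hybrid doing 15,000 miles/year, all costs at retail.
private = per_mile_cost(vehicle_cost=25_000, lifetime_miles=200_000,
                        insurance=0.06, fuel=0.07, maintenance=0.06,
                        parking=0.05)

print(f"robotaxi: ${taxi:.2f}/mile    private car: ${private:.2f}/mile")
```

Swap in your own numbers; the interesting question is how low the taxi figure can go before the operator's overhead and margin are added on top.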

The price of the Prius is the retail cost (on which you must also pay tax), but a taxi fleet operator would pay a wholesale, or even manufacturer's, cost. Of course, the operator now has the costs of running a fleet of self-driving cars. That includes all the virtual stuff (software, maps and apps), the web sites, and all the other staff of a big service company, from lawyers to marketing departments. This is hard to estimate, because if the company gets big this overhead will not scale with miles, and even then it should not add many cents per mile. The costs of the Prius for fuel, repair, maintenance and the rest are also all retail. The taxi operator wants a margin, and a big margin at first, though with competition this margin would settle to that of other service businesses.  read more »

Museums in ruins and old buildings will take on new life with Augmented Reality

We’re on the cusp of a new wave of virtual reality and augmented reality technology. The most exciting is probably the Magic Leap. I have yet to look through it, but friends who have describe it as hard to tell from actual physical objects in your environment. The Hololens (which I have looked through) is not that good, and has a very limited field of view, but it already shows good potential.

It’s becoming easier and easier to create VR versions of both fictional and real environments. Every historical documentary show seems to include a nice model reconstructing what something used to look like, and this is going to get better and better with time.

This will be an interesting solution for many of the world’s museums and historical sites. A few years from now, every visit to a ruin or historical building won’t just include a boring and slow audioguide, but some AR glasses to allow you to see a model of what the building was really like in its glory. Not just a building — it should be possible to walk around ancient Rome or other towns and do this as well.

Now with VR you’ll be able to do that in your own home if you like, but you won’t be able to walk very far in that space. (There are tricks that let you fool people into thinking they walked further but they are just not the same as walking in the real space with the real geometry.) They will also be able to populate the space with recordings or animations of people in period costumes doing period things.

This is good news for historical museums. Many of them have very few actual interesting artifacts to see, so they end up being just placards, photos, videos and other multimedia presentations. These are things I could easily see on the museum's web site; their only virtue is that I am reading the text and looking at the pictures in the greatly changed remains of where it all happened. These days, I tend to skip museums that have become little more than multimedia. But going to see the virtual recreation will be a different story, I predict.

Soon it will be time for museum and tourist organizations to start considering which spaces will be good for this. You don't need to restore or rebuild that old castle, as long as it's safe to walk around. You just need to instrument it with tracking sensors for the AR gear, and build and refine the models. Over time, the resolution of the AR glasses will approach that of the eye, and the realism of the models will improve too. In time, many will feel they got an experience very close to going back in time and seeing the place as it was.

Well, not quite as it was. It will be full of tourists from the future, including yourself. AR keeps them visible, which is good because you don't want to bump into them. A more advanced system will cover the tourists in period clothing, or even replace their faces. You would probably light the space somewhat dimly to ensure the AR can cover up what it needs to cover up, while still leaving you a good enough view of the floor that you don't trip.

Of course, if you cover everything up with the AR, you could just do this in a warehouse, and that will happen too. You would need to reproduce the staircases of the recreated building, but could possibly get away with producing very little else. As long as the other visitors don't walk through walls, the walls don't have to be there. This might be popular (since it requires no travel), but many of us still feel an attraction to the idea that we're standing in the actual old place, not in our hometown. And the museums would also have rooms with real-world artifacts to examine, if they have them.
