Bloomberg (or another moderate) could have walked away with the Presidency due to Trump

Michael Bloomberg, a contender for an independent run for US President, has announced he will not run, though for a reason that just might be completely wrong. As a famous moderate (having been in both the Republican and Democratic parties) he might just have had a very rare shot at being the first independent ever to win.

Here’s why, and what would have to happen:

  1. Donald Trump would have to win the Republican nomination. (I suspect he won’t, but it’s certainly possible.)
  2. The independent would have to win enough electoral votes to prevent either the Republican or the Democrat from getting 270.

If nobody has a majority of the electoral college, the House picks the President from the top 3 electoral-vote winners, with each state's delegation casting a single vote and a majority of 26 states needed to win. The House is Republican, so it seems pretty unlikely it would pick any likely Democratic Party nominee, and the Democrats would know this. Once they did know this, the Democrats would have little choice but to vote for the moderate, since they certainly would not vote for Trump.

Now all it takes is a fairly small number of Republicans to bolt from Trump. Normally they would not betray their own party's official nominee, but in this case the party establishment hates Trump, and I think that some of them would take the opportunity to knock him out and vote for the moderate. If 30 or more, spread across the right state delegations, join the Democrats in voting for the moderate, he or she becomes President.

It would be different for the Vice President, who is chosen by the Senate. Trump would probably pick a mainstream Republican to mollify the party establishment, and that person would win the Senate vote easily.

To be clear, the independent can win here even if all they do is make a small showing, just strong enough to split off some electors from both other candidates. Winning one big state could be enough, for example, if it was won from the candidate who would otherwise have won.

Google's crash is a very positive sign

Newly released reports reveal that one of Google's Gen-2 vehicles (the Lexus) had a fender-bender with a bus, with some responsibility assigned to the system. This is the first crash of this type; all other impacts have been reported as fairly clearly the fault of the other driver.

This crash ties into an upcoming article I will be writing about driving in places where everybody violates the rules. I just landed from a trip to India, which is one of the strongest examples of this sort of road system, far more chaotic than California, but it got me thinking a bit more about the problems.

Google is thinking about them too. Google reports it recently started experimenting with new behaviours, in this case when making a right turn on a red light off a major street where the right lane is extra wide. In that situation it has become common behaviour for cars to effectively create two lanes out of one, with a straight-through group on the left, and right-turners hugging the curb. The vehicle code would have there be only one lane, and the first person not turning would block everybody turning right, who would find it quite annoying. (In India, the lane markers are barely suggestions, and drivers, in vehicles of every width you can imagine, dynamically form their own patterns as needed.)

As such, Google wanted their car to be a good citizen and hug the right curb when doing a right turn. So it did, but found the way blocked by sandbags on a storm drain. So it had to "merge" back with the traffic in the left side of the lane. It did this as a bus was coming up on the left, and it made the assumption, as many would make, that the bus would yield and slow a bit to let it in. The bus did not, and the Google car hit it, but at very low speed. The Google car could probably have solved this with faster reflexes and a better read of the bus' intent, and probably will in time, but more interesting is the question of what you expect of other drivers. The law doesn't imagine this split lane or this "merge," and of course the law doesn't require people to slow down to let you in.

But driving in so many cities requires constantly expecting the other guy to slow down and let you in. (In places like Indonesia, the rules actually give the right-of-way to the guy who cuts you off, because you can see him and he can’t easily see you, so it’s your job to slow. Of course, robocars see in 360 degrees, so no car has a better view of the situation.)

While some people like to imagine that the important ethical questions for robocars revolve around choosing whom to kill in an accident, that's actually an extremely rare event. The real ethical issues revolve around how to drive when driving involves routinely breaking the law — not once in 100 lifetimes, but once every minute. Or once every second, as is the case in India. To solve this problem, we must come up with a resolution, and we must eventually get the law to accept it the same way it accepts it for all the humans out there, who are almost never ticketed for these infractions.

So why is this a good thing? Because Google is starting to work on problems like these, and you need to solve these problems to drive even in orderly places like California. And yes, you are going to have some mistakes, and some dings, on the way there, and that's a good thing, not a bad thing. Mistakes in negotiating who yields to whom are very unlikely to involve injury, as long as you don't involve things smaller than cars (such as pedestrians). Robocars will need to not always yield in a game of chicken, or they can't survive on the roads.

In this case, Google says it learned that big vehicles are much less likely to yield. In addition, it sounds like the vehicle’s confusion over the sandbags probably made the bus driver decide the vehicle was stuck. It’s still unclear to me why the car wasn’t able to abort its merge when it saw the bus was not going to yield, since the description has the car sideswiping the bus, not the other way around.
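How might that yield-or-abort logic be structured? Here is a minimal sketch in Python, emphatically not Google's actual code: the yield priors and the perception helpers are invented, but it shows the shape of the problem, namely predict, commit, keep re-checking, and abort if the other vehicle isn't yielding.

    # Minimal sketch only -- not Google's logic. All helpers and numbers
    # are hypothetical. The structure is the point: predict whether the
    # other vehicle will yield, start the merge, re-check, abort if needed.
    YIELD_PRIOR = {"car": 0.8, "bus": 0.4, "truck": 0.4}   # assumed priors

    def attempt_merge(other, time_to_conflict):
        p_yield = YIELD_PRIOR.get(other.kind, 0.5)
        if p_yield < 0.5 or time_to_conflict < 1.0:
            return "wait"                  # don't start a merge we can't win
        while time_to_conflict > 0.5:      # re-evaluate at 10 Hz while nosing in
            if other.is_decelerating():
                return "complete_merge"    # they yielded; proceed
            time_to_conflict -= 0.1
        return "abort"                     # no yield detected; pull back

Note that with a prior like the one above, a bus never even triggers the merge attempt, which is exactly the "big vehicles yield less" lesson Google says it learned.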

Nobody wants accidents — and some will play this accident as more than it is — but neither do we want so much caution that we never learn these lessons.

It's also a good reminder that even Google, though it is the clear leader in the space, still has lots of work to do. A lot of people I talk to imagine that the tech problems have all been solved and all that's left is getting legal and public acceptance. There is great progress being made, but nobody should expect these cars to be perfect today. That's why they run with safety drivers, and did even before the law demanded it. This time the safety driver also decided the bus would yield and so let the car try its merge. But expect more of this as time goes forward. Their current record is not as good as a human's, though I would be curious what the accident rate is for student drivers overseen by a driving instructor, which is roughly parallel to the safety driver approach. This is Google's first caused accident in around 1.5 million miles.

It’s worth noting that sometimes humans solve this problem by making eye contact, to know if the other car has seen you. Turns out that robots can do that as well, because the human eye flashes brightly in the red and infrared when looking directly at you — the “red eye” effect of small flash cameras. And there are ways that cars could signal to other drivers, “I see you too” but in reality any robocar should always be seeing all other parties on the road, and this would just be a comfort signal. A little harder to read would be gestures which show intent, like nodding, or waving. These can be seen, though not as easily with LIDAR. It’s better not to need them.

Uber, Lyft and crew should replace public transit at night

I have a big article forthcoming on the future of public transit. I believe that with the robocar (and van) it moves from being scheduled, route-based mass transit to on-demand, ad-hoc route medium and small vehicle transit. That’s in part because of the disturbingly poor economics of current mass transit, especially in the USA. We can do much better.

However, long before that day, there is something else that could be done. Many mass transit systems shut down at night. Demand is low, and that creates a big burden for the “night people” of the world, who are left with taxis and occasional carpooling, or more limited night bus service.

I think transit agencies should make a deal with companies like Uber and Lyft to operate their carpool services (UberPool and Lyft Line) during transit closure hours, and subsidize the rides to bring their price down to, or closer to, that of a transit ticket. This could also be done at other seriously off-peak times, like weekends and holidays.

Already the typical transit ticket in the USA is heavily subsidized; the real cost of providing a transit ride is much higher. In the transit-heavy cities, fares pay about 50-60% of operating cost, but in some cities it's only 15-20%. The US national average is around 33%. And that's just operating cost; it does not include the capital costs in many cases. One thing that pushes the number the wrong way is operation during off-peak hours on lightly loaded vehicles. So while the average ride may cost $6 to provide, it can be more at night. Already the mobile-summoned carpools are close to that price. (With promotions, they have actually gotten below it, though they also subsidize rides to build the business.)
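To see why the math can work, here is a back-of-envelope comparison using the figures above; the numbers are rounded and the carpool price is a guess.

    # Back-of-envelope with the figures from the text; carpool price assumed.
    avg_cost_per_ride = 6.00        # average cost to provide a transit ride
    fare = 2.00                     # typical subsidized ticket (~33% recovery)
    existing_subsidy = avg_cost_per_ride - fare     # $4 already spent per ride

    carpool_price = 5.00            # hypothetical night UberPool/Lyft Line fare
    topup = carpool_price - fare    # $3 brings the rider's price to a ticket
    print(existing_subsidy, topup)  # 4.0 3.0 -- the top-up can cost less

And at night, when a lightly loaded bus costs well over $6 a ride to run, the gap widens further in the carpool's favour.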

There are some big issues. First, not everybody has a smartphone, a data plan or even a phone. You need a method for those without them to summon a ride. You could start with an 800 number so any phone (or the few remaining payphones) could summon a ride. You could also make mini-kiosks by building a protective case and putting a surplus tablet at every subway stop and many bus stops.

Another issue is that these services, particularly the carpool versions, depend on not having anonymous riders. People feel much safer about carpooling with strangers if those strangers can be identified should there be a problem. Transit riding is anonymous, and should be. The solutions to this are challenging. On top of all this, riding in a mobile-hail car is never paid for with cash, and the drivers are not going to accept cash. At the least, this means you would need to provide tickets that people buy (from machines at stations, or in advance) which the driver can scan with their phone. So no just deciding to take a ride with cash. Transit cards are another issue, though there is no requirement that they work, because at least at first this service is meant for hours when the transit was not even running, so it's OK if it's an extra cost.

Finally, there is the issue that this is too good. A ride in a private car vs. a late-night transit bus, for the price of a bus? People will over-use it, and that would of course get the taxis angry, though there is no reason they could not participate, as they are all going to support mobile-app hailing. But the subsidy may be too expensive if people over-use it.

One solution to that is to only allow it to take you between transit stops. Even that's "too good" in that it may be faster than the transit, and much faster if the trip involved changes, especially changes during limited service times. You could get extreme and only allow it between limited sets of stops, or require 2 rides (for the same price) to simulate having to change lines. This also makes carpooling much easier, as the drivers would mostly end up cruising close to the transit lines. If they do it in vans it could be quite efficient, in fact.

We probably don’t need to go that far in limiting it, but we could. You could tune the ease and quality of the service so the demand is what you expect, and the subsidy affordable. And the ride companies could actually use this as a way to gain extra revenue. They could offer you a door to door ride with a subsidy for the portion that would have been along the transit line. For example, today you can take Uber to the subway station, ride the subway for $2 and then take Uber from the end station to your destination, and that can be cheaper than just taking the Uber directly. This ride could be offered at some subsidized price and keep up the volume. The taxi companies can either get into the 21st century and play, or not compete.

Aside from improving transit service (by making it 24 hours), this also lets us experiment with the future world of ad-hoc, demand-based public transportation, for when we get to the future where the vans drive themselves. More on that to come.

Fears confirmed on failure of fix to Hugo awards

Last year, I wrote a few posts on the attack on science fiction's Hugo awards, concluding in the end that only human defence can counter human attack. A large fraction of the SF community felt that one could design an algorithm to reduce the effect of collusion, which in 2015 dominated the nomination system. (It probably will dominate it again in 2016.) The proposed system, known as "E Pluribus Hugo," attempts to defeat collusion (or "slates") by giving each entry on a nomination ballot less weight when that ballot is doing very well and getting several of its choices onto the final ballot. More details can be found on the blog where the proposal was worked out.
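For readers who want the mechanics, here is a compact sketch of the count as I understand the proposal; the linked blog has the authoritative rules, and ties are handled more carefully there.

    # Sketch of "E Pluribus Hugo" as I understand it. Each ballot contributes
    # 1 point, divided equally among its works still in contention. Then,
    # repeatedly, the two works with the fewest points face off, and the one
    # appearing on fewer ballots is eliminated, until only finalists remain.
    def eph(ballots, finalists=5):
        alive = {w for ballot in ballots for w in ballot}
        while len(alive) > finalists:
            points, appearances = {}, {}
            for ballot in ballots:
                live = [w for w in ballot if w in alive]
                for w in live:
                    points[w] = points.get(w, 0) + 1.0 / len(live)
                    appearances[w] = appearances.get(w, 0) + 1
            weakest_two = sorted(alive, key=lambda w: points[w])[:2]
            alive.remove(min(weakest_two, key=lambda w: appearances[w]))
        return alive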

The proposal passed the first round of approval, but does not come into effect unless it is ratified at the 2016 meeting, and then it applies to the 2017 nominations. As such, the 2016 awards will be as vulnerable to the slates as before; indeed, there are vastly more slate nominators this year, presuming all those who joined last year to support the slates continue to do so.

Recently, my colleague Bruce Schneier was given the opportunity to run the new system on the nomination data from 2015. The final results of that test are not yet published, but a summary was reported today in File 770, and the results are very poor. This is, sadly, what I predicted when I did my own modelling. In my models, I considered some simple strategies a clever slate might apply, but it turns out these strategies may have been naturally present in the 2015 nominations, and as predicted, the "EPH" system only marginally improved the results. The slates still massively dominated the final ballots, though they no longer swept all 5 slots. I consider the slates taking 3 or 4 slots, with only 1 or 2 non-slate nominees making the cut, to be a failure almost as bad as the sweeps that did happen. In fact, I consider even one nomination gained through collusion to be a failure, though there are obviously degrees of failure. As I predicted, a slate of the size seen in the final Hugo results of 2015 should be able to obtain 3 or 4 of the 5 slots in most cases. The new test suggests they could do this even with the much smaller slate group they had in the 2015 nominations.

Another proposal — that there be only 4 nominations on each nominating ballot but 6 nominees on the final ballot — improves this. If the slates can take only 3, then this means 3 non-slate nominees probably make the ballot.

An alternative - Make Room, Make Room!

First, let me say I am not a fan of algorithmic fixes to this problem. Changing the rules — which takes 2 years — can only “fight the last war.” You can create a defence against slates, but it may not work against modifications of the slate approach, or other attacks not yet invented.

Nonetheless, it is possible to improve the algorithmic approach to attain the real goal, which is to restore the award as closely as possible to what it was when people nominated independently. To allow the voters to see the top 5 "natural" nominees, and award the best one the Hugo award, if one is worthy.

The approach is as follows: When slate voting is present, automatically increase the number of nominees so that 5 non-slate candidates are also on the ballot along with the slates.

To do this, you need a formula which estimates if a winning candidate is probably present due to slate voting. The formula does not have to be simple, and it is OK if it occasionally identifies a non-slate candidate as being from a slate.

  1. Calculate the top 5 nominees by the traditional “approval” style ballot.
  2. If 2 or more pass the “slate test” which tries to measure if they appear disproportionately together on too many ballots, then increase the number of nominees until 5 entries do not meet the slate condition.

As a result, if there is a slate of 5, you may see the total pool of nominees increased to 10. If there are no slates, there would be only 5 nominees. (Ties for last place, as always, could increase the number slightly.)
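In code, the expansion rule is simple. This sketch assumes some slate test `is_slate` is supplied; candidates for that test are discussed under "What formula?" below.

    # Sketch of the expansion rule above. `ranked` is the list of works in
    # descending order of nomination count; `is_slate` is whatever
    # co-occurrence test gets chosen (see "What formula?" below).
    def expanded_nominees(ranked, is_slate, base=5):
        nominees, natural = [], 0
        for work in ranked:
            nominees.append(work)
            if not is_slate(work):
                natural += 1
            if natural >= base and len(nominees) >= base:
                break
        return nominees

With no slates present this returns exactly 5 nominees; with a slate of 5 at the top it keeps going until 10, just as described above.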

Let’s consider the advantages of this approach:

  • While ideally it’s simple, the slate test formula does not need to be understood by the typical voter or nominator. All they need to know is that the nominees listed are the top nominees.
  • Likewise, there is no strategy in nominating. Your ballot is not reduced in strength if it has multiple winners. It’s pure approval.
  • If a candidate is falsely identified as passing the slate test — for example a lot of Doctor Who fans all nominate the same episodes — the worst thing that happens is we get a few extra nominees we should not have gotten. Not ideal, but pretty tame as a failure mode.
  • Likewise, for those promoting slates, they can’t claim their nominations are denied to them by a cabal or conspiracy.
  • All the nominees who would have been nominated in the absence of slate efforts get nominated; nobody’s work is displaced.
  • Fans can decide for themselves how they want to consider the larger pool of nominees. Based on 2015's final results (with many "No Awards") it appears fans wish to judge some works as being on the ballot unfairly and discount them. Fans who wish it would have the option of deciding for themselves which nominees are important, and acting as though those were all that was on the ballot.
  • If it is effective, it gives the slates so little that many of them are likely to just give up. It will be much harder to convince large numbers of supporters to spend money to become members of conventions just so a few writers can get ignored Hugo nominations with asterisks beside them.

It has a few downsides, and a vulnerability.

  • The increase in the number of nominees (only while under slate attack) will frustrate some, particularly those who feel a duty to read all works before voting.
  • All the slate candidates get on the ballot, along with all the natural ones. The first is annoying, but it’s hardly a downside compared to having some of the natural ones not make it. A variant could block any work that fits the slate test but scored below 5th, but that introduces a slight (and probably un-needed) bit of bias.
  • You need a bigger area for nominees at the ceremony, and a bigger party, if they want to show up and be sneered at. The meaning of “Hugo Nominee” is diminished (but not as much as it’s been diminished by recent events.)
  • As an algorithmic approach it is still vulnerable to some attacks (one detailed below) as well as new attacks not yet thought of.
  • In particular, if slates are fully coordinated and can distribute their strength, it is necessary to combine this with an EPH style algorithm or they can put 10 or more slate candidates on the ballot.

All algorithmic approaches are vulnerable to a difficult but possible attack by slates. If the slate knows its strength and knows the likely range of the top "natural" nominees, it can in theory choose a number of slots it can safely win, name only that many choices, and divide them up among supporters. Instead of having 240 people cast ballots with the same 3 choices, they can have 3 groups of 80 cast ballots for one choice only. No simple algorithm can detect that or respond to it, including this one. This is a more difficult attack than the current slates can carry off, as they are not that unified. However, if you raise the bar, they may rise to it as well.

All algorithmic approaches are also vulnerable to a less ambitious colluding group, that simply wants to get one work on the ballot by acting together. That can be done with a small group, and no algorithm can stop it. This displaces a natural candidate and wins a nomination, but probably not the award. Scientologists were accused of doing this for L. Ron Hubbard’s work in the past.

What formula?

The best way to work out the formula would be through study of real data with and without slates. One candidate would be to take all nominees present on more than 5% of ballots, and pairwise compare them to find out what fraction of the time each pair is found together on ballots, then flag pairs which are together a great deal more than chance would predict. How much more would be learned from analysis of real data. Of course, the slates will know the formula, so it must be difficult to defeat even by those who know it. As noted, false positives are not a serious problem if they are uncommon. False negatives are worse, but still better than the alternatives.
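As a sketch, that pairwise test might look like this. The 5% floor comes from the text above; the "5x more than chance" threshold is a placeholder that real data would calibrate.

    # Candidate slate test: among works on more than 5% of ballots, flag
    # pairs appearing together far more often than independence predicts.
    # The ratio threshold is a placeholder to be tuned on real data.
    from itertools import combinations

    def suspicious_pairs(ballots, min_share=0.05, ratio=5.0):
        n = float(len(ballots))
        on = {}                               # work -> set of ballot indices
        for i, ballot in enumerate(ballots):
            for w in ballot:
                on.setdefault(w, set()).add(i)
        common = [w for w in on if len(on[w]) / n > min_share]
        flagged = []
        for a, b in combinations(common, 2):
            together = len(on[a] & on[b]) / n
            expected = (len(on[a]) / n) * (len(on[b]) / n)
            if together > ratio * expected:
                flagged.append((a, b))
        return flagged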

So what else?

At the core is the idea of providing voters with information on who the natural nominees would have been, and allowing them to use the STV voting system of the final ballot to enact their will. This was done in 2015, but simply to give No Award in many of the categories — it was necessary to destroy the award in order to save it.

As such, I believe there is a reason why every other system (including the WSFS site selection) uses a democratic process, such as write-ins, to deal with problems in nominations. Democratic approaches use human judgment, and as such they are a response not just to slates, but to any attack.

As such, I believe a better system is to publish a longer list of nominees — 10 or more — but to publish them sorted according to how many nominations they got. This allows voters to decide what they think the “real top 5” was and to vote on that if they desire. Because a slate can’t act in secret, this is robust against slates and even against the “slate of one” described above. Revealing the sort order is a slight compromise, but a far lesser one than accepting that most natural nominees are pushed off the ballot.

The advantages of this approach:

  • It is not simply a defence against slates, it is a defence against any effort to corrupt the nominations, as long as it is detected and fans believe it.
  • It requires no algorithms or judgment by officials. It is entirely democratic.
  • It is completely fair to all comers, even the slate members.

The downsides are:

  • As above, there are a lot more nominees, so the meaning of being a nominee changes
  • Some fans will feel bound to read/examine more than 5 nominees, which produces extra work on their part
  • The extra information (sorting order) was never revealed before, and may have subtle effects on voting strategy. So far, this appears to be pretty minor, but it’s untested. With STV voting, there is about as little strategy as can be. Some voters might be very slightly more likely to rank a work that sorted low in first place, to bump its chances, but really, they should not do that unless they truly want it to win — in which case it is always right to rank it first.
  • It may need to add EPH style counting if slates get a high level of coordination.

Human judgment

Another surprisingly strong approach would be simply to add a rule saying, "The Hugo Administrators should increase the number of nominees in any category if their considered analysis leaves them convinced that some nominees made the final ballot through means other than the nominations of fans acting independently, adding one slot for each work judged to fail that test, but adding no more than 6 slots." This has tended to be less popular, in spite of its simplicity and flexibility (it even deals with single-candidate campaigns), because some fans have an intense aversion to any use of human judgment by the Hugo administrators.

Advantages:

  • Very simple (for voters at least)
  • Very robust against any attempt to corrupt the nominations that the admins can detect. So robust that it makes it not worth trying to corrupt the nominations, since that often costs money.
  • Does not require constant changes to the WSFS constitution to adapt to new strategies, nor give new strategies a 2 year “free shot” before the rules change.
  • If administrators act incorrectly, the worst they do is just briefly increase the number of nominees in some categories.
  • If there are no people trying to corrupt the system in a way admins can see, we get the original system we had before, in all its glory and flaws.
  • The admins get access to data which can’t be released to the public to make their evaluations, so they can be smarter about it.

Disadvantages:

  • Clearly a burden for the administrators to do a good job and act fairly
  • People will criticise and second guess. It may be a good idea to have a post-event release of any methodology so people learn what to do and not do.
  • There is the risk of admins acting improperly. This is already present of course, but traditionally they have wanted to exercise very little judgment.

Will bed-bound seniors experience the world through VR telepresence robots?

I’ve written before about my experiences inhabiting a telepresence robot. I did it again this weekend to attend a reunion, with a different robot that’s still in prototype form.

I've become interested in the merger of virtual reality and telepresence. The goal would be to have VR headsets and telepresence robots able to transmit video to fill them. That's a tall order. On the robot you would have an array of cameras able to produce a wide field of view — perhaps an entire hemisphere, or of course the full sphere. You want it in high resolution, so this is actually a lot of camera.

The lowest bandwidth approach would be to send just the field of view of the VR glasses in high resolution, or just a small amount more. You would send the rest of the hemisphere in very low resolution. If the user turned their head, you would need to send a signal to the remote to change the viewing box that gets high resolution. As a result, if you turned your head, you would see the new field, but very blurry, and after some amount of time — the round trip time plus the latency of the video codec — you would start seeing your view sharper. Reports on doing this say it’s pretty disconcerting, but more research is needed.
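A sketch of that control flow, with invented names and latency numbers, just to show where the blurry interval comes from:

    # Illustrative only: re-aim the high-resolution window when the head
    # turns; until the new sharp stream arrives (one round trip plus codec
    # delay), the viewer sees the low-res background layer. All names and
    # numbers are invented for this sketch.
    RTT = 0.08            # network round trip to the robot, seconds
    CODEC_DELAY = 0.05    # encode + decode latency, seconds

    def on_head_turn(yaw_deg, pitch_deg, remote):
        remote.send({"hires_center": (yaw_deg, pitch_deg),
                     "hires_fov_deg": 110})   # a bit wider than the display
        return RTT + CODEC_DELAY              # seconds of blur the user endures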

At the next level, you could send a larger region in high-def, at the cost of bandwidth. Then short movements of the head would still be good quality, particularly the most likely movements, which would be side to side movements of the head. It might be more acceptable if looking up or down is blurry, but looking left and right is not.

And of course, you could send the whole hemisphere, allowing most head motions but requiring a great deal of bandwidth. At least by today’s standards — in the future such bandwidth will be readily available.

If you want to look behind you, you could just have cameras capturing the full sphere, and that would be best, but it's probably acceptable to have servos move the camera, and also not to send the rear information. It takes time to turn your head, and that's time to send signals to adjust the remote parameters or camera.

Still, all of this is more bandwidth than most people can get today, especially if we want lifelike resolution — 4K per eye or probably even greater. Hundreds of megabits. There are fiber operators selling such bandwidth, and Google fiber sells it cheap. It does not need to be symmetrical for most applications — more on that later.
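Where do "hundreds of megabits" come from? A rough estimate, with every number an assumption:

    # Rough estimate; every number here is an assumption.
    width, height = 3840, 2160      # "4K" per eye
    eyes, fps = 2, 90               # stereo, VR-grade frame rate
    bits_per_pixel = 0.1            # optimistic H.265-class compression
    mbps = width * height * eyes * fps * bits_per_pixel / 1e6
    print(round(mbps))              # ~149 Mbit/s, before any headroom

Add headroom for bursts when the view moves, or less aggressive compression, and you are comfortably into the hundreds of megabits.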

Surrogates, etc.

At this point, you might be thinking of the not-very-exciting Bruce Willis movie "Surrogates," where everybody just lay in bed all day controlling surrogate robots that were better-looking versions of themselves. Those robot bodies passed on not just VR but touch and smell and taste — the works — via a neural interface. That's science fiction, but a subset could be possible today.

Local robots

One place you can easily get that bandwidth is within a single building, or perhaps even a town. Within a short distance it is possible to get very low latency; in a neighbourhood you can get millisecond latency from the network. Getting low latency from the video codec means using less compression, but that is attainable if you have lots of spare megabits to burst when the view moves, which you do.

So who would want to operate a VR robot that's not that far from them? The disabled, and in particular the bedridden, which includes many seniors at the end of their lives. Such seniors might be trapped in bed, but if they can sit up and turn their heads, they could get a quality VR experience of the home they live in with their family, or the nursing home they move to. With the right data pipes, they could also be in a nursing home but get a quality VR experience of being in the homes of nearby family. They could have multiple robots in houses with stairs, to easily "move" from floor to floor.

What’s interesting is we could build this today, and soon we can build it pretty well.

What do others see?

One problem with using VR headsets for telepresence is that a camera pointed at you sees you wearing a giant headset. That's of limited use. Highly desired would be software that, using cameras inside the headset looking at the eyes, and a good captured model of the face, digitally removes the headset in a way that doesn't look creepy. I believe such software is possible today with the right effort. It's needed if people want VR-based conferencing with real faces.

One alternative is to instead present an avatar, that doesn’t look fully real, but which offers all the expression of the operator. This is also doable, and Philip Rosedale’s “High Fidelity” business is aimed at just that. In particular, many seniors might be quite pleased at having an avatar that looks like a younger version of themselves, or even just a cleaned up version of their present age.

Another alternative is to use fairly small and light AR glasses. These could be small enough that you don't mind seeing the other person wearing them, and you are able to see the direction of their eyes, at most behind a tinted screen. That would provide less of a sense of being there, but also might provide a more comfortable experience.

For those who can't sit up, experiments are needed to see if a system can be made that isn't nausea-inducing, as I suspect wearing VR that shifts your head angle will be. Anybody tried that?

Of course, the bedridden will also be able to use VR for virtual-space meetings with family and friends, just as the rest of the world will use them — though those meetings still have the headset problems above. You don't need a robot in that case. But the robot gives you control of what happens on the other end. You can move around the real world, and it makes a big difference.

Such systems might include some basic haptic feedback, allowing things like handshakes or basic feelings of touch, or even a hug. Corny as it sounds, people do interpret being squeezed by an actuator with emotion if it’s triggered by somebody on the other side. You could build the robot to accept a hug (arms around the screen) and activate compressed air pumps to squeeze the operator — this is also readily doable today.

Barring medical advances, many of us may sadly expect to spend some of our last months or years bedridden, or housebound in a wheelchair. Perhaps we will adopt something like this, or even something grander. And of course, even the able-bodied will be keen to see what can be done with VR telepresence.

Deadlines approaching for Singularity U summer program and accelerator

The highlight and founding program of Singularity University, where I am chair of computing, is our summer program, now known as the Global Solutions Program. 80 students come from all over the world (only a tiny minority will be from the USA) to learn about the hottest rapidly changing technologies, and then join together with others to kickstart projects that have the potential to use those technologies to solve the world’s biggest problems.

This year is the 2nd year of a Google scholarship program, which means the program is free for those who are accepted. About 50 slots go to those scholarships; the other 30 go to winners of national competitions to attend. You can apply both ways. That means you can expect a class of great rising and already-risen stars. I don't like to exaggerate, but almost everybody who goes through it finds it life-changing.

If you are at a point where you are ready to do something new and big, and you want to understand how technology that keeps changing faster and faster works and how it can change the world and your world, look into it.

Learn about it and apply.

Also closing on Feb 19 is our accelerator program for existing or nascent startups. Accepted startups get $100K in seed funding, office space at NASA Research Park, and more through our network. You can read about it or apply.

Car and Driver evaluates autopilots, and other news.

In a recent article, Car and Driver magazine compares 4 highway autopilot systems: those from Tesla, Mercedes, BMW and Infiniti. They test on a variety of roads, and spoiler: the Tesla wins by a good margin in several categories.

It’s a pretty interesting comparison, and a nicely detailed article. They drove a variety of roads, though the reality is that none of these autopilots are much use off the highway, and they are not intended to be as yet. Each system will perform differently on different roads. People report a much better score for the Tesla on Highway 280, which is the highway closest to Tesla HQ.

Still, it should wake up people who want to compare Google's report of needing an intervention to prevent an accident every 70,000 miles (or 5,300 miles between software anomalies) with needing intervention every 2 miles on the Tesla and twice a mile on the Infiniti, on average.

Other News notes:

  • Google is expanding testing to Kirkland, Washington — hoping for some heavy rain, among other things.
  • The California DMV hearings were contentious. You can hear a brief radio call-in debate with myself and one of the few people in favour of the regulations at KPCC’s “AirTalk”. Google threatened that if the regs are passed as written, they will plan to first deploy outside of California, and they probably mean it.
  • A small autonomous shuttle bus is doing test runs in the Netherlands, joining several other projects of this sort.
  • Porsche has come out against self-driving. Who would have thought it?
  • Baidu and Jaguar/Landrover are both upping their game. While you probably won’t automate off-road vehicles any time soon, having one that takes you to the countryside where you take the wheel can be a nice idea.
  • In Greenwich, the self-driving shuttle pilot there will use vehicles based on the Ultra PRT pods from Heathrow. Ultra's pods have always been wheeled cars, but they needed a dedicated track. Today, they can be modified not to need one.
  • Steve Zadesky, supposedly the lead of Apple’s unconfirmed project Titan, has left Apple. Rumours suggest a culture issue. Hmm.
  • The Isle of Man is tiny but is its own country (a self-governing Crown dependency) — they are giving serious consideration to being a robocar pilot location. Last year I had some talks with one of the Channel Islands on the same topic. There are advantages to having your own country.

Low clearance underpasses for small robocars

I recently read a report of a plan for a new type of intersection being developed in Malaysia, and I felt it had some interesting applications for robocars.

The idea behind the intersection is that you have a traditional intersection, but dig, in one or both directions, a special underpass which is both shallow and narrow. One would typically imagine this underpass as being 2 vehicles wide in the center of the road, but other options are possible. The underpass might be very shallow, perhaps just 4 to 5 feet high.

The underpass is available only to vehicles which fit, which is to say ordinary-height passenger cars, or even just ordinary-height half-width vehicles. Big vehicles such as SUVs, vans and trucks would not use the underpass, and would instead use the at-grade intersection, where you would have traffic signals or stop signs.

Why is this such a good idea? It’s vastly cheaper to make such an underpass. Because it’s so shallow, it is cheap to dig and shore up the walls. You can start the downramp much closer to the intersection because you don’t need to go so far down. It’s a tiny fraction of the cost of a regular overpass or underpass which requires lots of space to go up and down, and must be high enough for big trucks to pass underneath. Not so here, as trucks never go under it.

The downramp could begin a very short distance from the intersection, or it could begin further out to allow for a longer tunnel, such space now dedicated to the left turn lanes. (Or the right turn lanes if the tunnels are on the outside rather than center of the road.)

The center has the advantage of only digging one tunnel for both directions and preserving that space for the left-turn lane. The downside is you have this physical tunnel entrance, with protective bollards, in the middle of a road, which may present some risk — though there are many places with tunnel entrances in the middle of roads, just full-sized ones. Indeed we have intersections like this in full-sized form, including on Geary St. in San Francisco. The alternative on the edges requires two trenches and puts the obstacles to the side, mixing straight-through underpass traffic with right-turning traffic.

Cars small enough to use the tunnels would get a transponder to signal their ability, possibly to raise a gate. In addition, a camera system would detect any too-large vehicle trying to enter the tunnel and do whatever it can to stop it. In the end, a too-large vehicle would end up hitting soft barriers if it failed to stop or divert. (Most parking lots today have hanging barriers to let vehicles know they won’t fit.)

Now the small, light vehicles, such as the one-person robocars, could bypass the traffic lights if they are red. They might get an “express” lane that is just for them which goes through these underpasses so it’s a smooth ride all along the road, other than the ups and downs.

Robocars would have a better time knowing where they fit and letting the intersection know they fit. More to the point, their ability to drive “on rails” would allow a wider robocar to go down a narrower tunnel, keeping a tiny margin that a human driver could never handle. Human driven vehicles would need to be narrower if they used these tunnels.

This would strongly encourage the use of small, lower-height vehicles, which are also very energy efficient. Really strongly — who would want to drive in a big SUV that has to stop at traffic lights when you can go nonstop in a small pod? Of course, you probably still use the light if making a turn. This in turn would cause a drop in vehicle size and congestion, and increase overall road capacity beyond what we get from having no stopping for a large fraction of vehicles.

If you want to get extreme, you could even have just a one lane tunnel if it’s all robocars. The simplest approach would be to have the express lane (with tunnels) only go in the commute direction during rush hour. Off peak, the robocars could pace their trips in pulses so that they alternate what direction they move through the underpass. On a north-south road, you could imagine during the red lights having 15 cars northbound, then 15 cars southbound back and forth until the light is green and you allocate the tunnel to the most popular direction. Humans could not obey this easily but robots could.
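A toy version of that pulse schedule, with the pulse size of 15 taken from the example above:

    # Toy pulse schedule for a single-lane tunnel shared by both directions
    # while the surface light is red. Purely illustrative.
    def tunnel_pulses(northbound, southbound, pulse=15):
        while northbound or southbound:
            for direction, queue in (("N", northbound), ("S", southbound)):
                group, queue[:] = queue[:pulse], queue[pulse:]
                if group:
                    yield direction, group

    # e.g.: list(tunnel_pulses(list(range(40)), list(range(20))))

The point is only that a central computer can coordinate such pulses trivially, while humans never could.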

This works best when one of the intersecting roads is bigger than the other, since it's harder to have both routes get an underpass. You could have one take a deeper underpass — at 10' deep under a 5' deep one, it's still not nearly as deep as a full road underpass. Or with all robocars, you could have the robots alternate through the underground intersection at full speed under computer control. People have built computer models of this "reservation" style intersection for many years, but they never could solve the problem that not every car in a surface intersection is a trustable robocar, and as such, you can never build an intersection like this above ground. If all the cars in the tunnels are robocars, an underground intersection could easily allow traffic to flow on both routes, in both directions, with proper timing. Since you would not see the other vehicles coming, it might not even be as scary.

I think these underpasses would pay for themselves in the increase in road efficiency they would generate, but if not, you could also require a toll to use them. I think a lot of people would pay a modest toll to have no red lights on their trip. Since all you need do is dig a shallow trench, shore up the walls, and cover it with metal plates or similar, it’s a completely different scale of problem from a real underpass. Without too much money, every major road could become a non-stop robocar road.

You can, of course, create more capacity by building full elevated guideways only for use by small, light vehicles. These are again, much cheaper to build than full roads that can handle heavy trucks, and they take up only pillar space so they can be run down the center of many roads. They still need to be up high enough for big vehicles to go under them. Aside from the cost, the big issue is how they change the built environment, blocking out the sun and putting vehicles running in front of the 2nd or 3rd floor of buildings and houses. This is like a PRT plan but you only need to build these in the most congested zones.

There aren't a lot of details on these plans, but I read about them in Reason's Surface Transportation newsletter.

Wanted: A better method for multi-leg flight booking

I’m doing a lot of flying these days for international speaking and consulting, and I try whenever possible to have 2 or more clients when I fly overseas, since the trips and time-changes can be draining.

By far my favourite flight search tool is Google flight search. That’s because it’s an order of magnitude faster than most of the other tools, and while it lacks some features I would like, once you have speed, there is no substitute for it. I also like routehappy when I am being particular about seats, though it doesn’t cover all airlines which makes it useless for primary search.

To save money, however, what I really need is a tool that can get smart about the various arcane prices airlines put on flights, which can vary tremendously. In particular, there are the situations where airlines have decided not to simply sell one-way fares at around half the price of return trips. This is almost universally true between the USA and Europe and on some domestic routes, and less true on travel involving Asia. It is quite common for one-way trips to cost the same as round trips, and sometimes, bizarrely, even more. In the case of some KLM flights, I have found a one-way costing double the price of a round trip. The Dutch know this and commonly book returns on KLM and don't fly the return leg. There are stories of airlines punishing people who do that, but they are rare. (The airlines are much more upset about "hidden city" booking, where people notice a flight to X connecting through Y is much cheaper than the direct flight to Y, so they book to X and just walk off the plane there.)

Throwing away the return leg doesn't stop the trip from costing as much as a return. Your goal is to pay a fairer price, and that usually means making sure that you fly all your flights (or certainly your transatlantic flights) ticketed by the same airline. That works some of the time, but not always. The best airline to fly out may be a terrible airline to fly back on. You may have to take a flight with a painful time and routing one way to get the schedule you need the other way. Of course, this is the supposed purpose of the pricing — to make you buy both directions from the same airline, but it's often a false victory; I suspect it loses for the airline almost as much as it wins, and it pisses off customers.

Trying all the permutations

Airlines have tons of hidden fare rules that jack up or seriously reduce fares involving certain cities. If you are going to these cities, you want to use them.

If we consider a complex trip that goes A -> B -> C -> D -> E -> A (4 stops) you can put that into most of the flight search engines as a “multi city” trip. You’ll sometimes get back a great answer, but usually you get back a ridiculous one. That’s because the engine just shops that out to all the airlines, which means you only get airlines that sell all 5 routes. And if the itinerary is far flung, there may be no airlines that sell them all at a good price, or with a good routing. (Of course, rarely does any one airline fly all the routes, but they all have tons of partners they can build tickets from.)

So it turns out the best way to fly this trip means combining one-ways (where they are fairly priced) and open jaws. I have found, for example, that you can often save a huge amount of money by buying something like “A->B, D-E” from one airline and “B->C, E-A” from another and “C->D” one way from a third. Bizarrely, adding the right extra legs to certain itineraries triggers serious price drops. This is particularly true when you involve cities with lots of competition (like New York) or inherently low prices (like India.)

So what I want is a flight search engine that will try all the combinations. There are engines that will check if sets of one-ways will do the trick (Kayak calls it a "hacker fare") but that's not enough. Price all 5 legs together, then the sets of 4 with a single one-way, then the sets of 3 with the different sets of 2, and so on. You want to combine the price search with a flight quality search too, so that you fly on shorter, better flights.
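As a sketch, here is the search over consecutive groupings; a real version would also try open-jaw groupings of non-adjacent legs (like the "A->B, D->E" ticket above) and weigh flight quality, not just price. The `price()` function stands in for a fare-quoting call.

    # Try every split of the legs into consecutive groups, price each group
    # as one ticket, keep the cheapest. `price(group)` is a stand-in for a
    # real fare-quoting call. There are 2**(n-1) splits: 16 for 5 legs.
    def cheapest_split(legs, price):
        if not legs:
            return 0.0, []
        best_cost, best_split = float("inf"), []
        for i in range(1, len(legs) + 1):        # first ticket = legs[:i]
            rest_cost, rest_split = cheapest_split(legs[i:], price)
            total = price(legs[:i]) + rest_cost
            if total < best_cost:
                best_cost, best_split = total, [legs[:i]] + rest_split
        return best_cost, best_split

    # cheapest_split(["A-B", "B-C", "C-D", "D-E", "E-A"], my_quoter)

Even with open jaws and quality scores added, the search space is tiny by computer standards; the hard part is getting honest fare quotes quickly.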

When I do this as a human, I do it with some knowledge of the geography. For example, if you have a short leg which is only flown nonstop by one airline, it’s pretty obvious you want to price that out independently from the other flights, because if your ticket comes from an airline that doesn’t partner with the nonstop airline, they will put you on a ridiculous connection instead of a cheap one-hour flight.

In addition, there is another advantage to breaking up a flight into smaller groupings. It gives you more ability to change the flights or even to skip them. In many cases, to avoid people playing tricks, airlines will cancel the rest of an itinerary if you don’t show up for an early leg, often with no refund. Once, when a change in plans put me in Copenhagen instead of Bergen, Norway the night before my planned flight from Bergen back to San Francisco (via Copenhagen), SAS insisted I fly to Bergen just so I could turn around and get on the flight back to Copenhagen for my connection.

Round the world

This gets worse when you do a multi-leg trip, and worse still, a "round the world" trip involving Asia, Europe and the Americas. In the latter case, sometimes your best course is the special around-the-world tickets offered by the 3 big alliances. These tickets cost around $10,000 in business class, around $4K in coach. For certain types of trips they are the clear winning choice. They are flexible — you can book them as little as 3 days in advance, and you can change your flights, even the cities, for free or low cost. They are refundable with a small penalty! You can add side trips for personal travel at little to no extra cost, and you can go to obscure airports that are expensive to fly to for the same price. They have a small number of downsides:

  • They can cost more than many directly booked trips. If your client is paying, it may not be fair to charge them $10K for something you could book for $7K. Though you can always eat the extra cost if you are doing side-trips as it can easily be worth it.
  • You are limited to one alliance only, though most of them have several airlines to fly you on the route.
  • They fetch from a more limited inventory if flying in business class, so quite often, particularly if booking late or changing your plans, you may see the flight you want is not available in the class you paid for.
  • Of course, they have their RTW restrictions — you must cross each ocean exactly once, along with a few others. Usually not a problem, but sometimes.

So if you ever see that your complex trip is adding up to a high cost, look into these. OneWorld also has some subset trips that don’t require a Pacific crossing.

Smart travel agents

While a computer should be able to do all this, perhaps there are still members of the dying profession of travel agents who can do a decent job on this. Let me know if you know of some. In the past, there were ticket consolidators, who buy up buckets of tickets and then have the power to sell them at reasonable one-way prices. This can be good, though sometimes it means being a 2nd class passenger, not getting loyalty miles and not being able to deal directly with the airline for service.

Robotic landing pad gets more serious

In 2010, I proposed the idea of planes with no landing gear which land on robotic platforms. The spring loaded platforms are pulled by cables and so can accelerate and turn with multiple gees, so that almost no matter what the plane does, it can’t miss the platform, and it can even hit hard with safety.

Today I learned there is a European research project called Gabriel with very similar ideas. In their plan, the plane has landing pillars which insert into the platform, rather than wheels. This requires retractable pillars but not the weight of the wheels. The platform runs on a maglev track but can tilt and rotate slightly to match the plane as it lands or takes off.

Overall I still prefer my plan — and I have added some refinements in the intervening years.

  • I am not quite sure of the value of maglev, which is quite expensive. Cables can provide high acceleration quite well.
  • The pillars still need a complex mechanism (which can fail) though they make a very solid connection — if you can place them just right.
  • Their platform tilts up — this may mean it can provide power longer which could be useful. It also allows easier release of pillars.
  • My approach allowed, in theory, the ability to land in any direction, eliminating crosswinds. Gabriel uses a linear track.
  • I don’t think there is much need for communications between the aircraft and the platform. Can’t see much the platform can’t figure out — it can easily track the aircraft with its cameras and position itself. There are a few things that could be communicated, but why not have it work fine even if the communications are out — which could happen.
  • My goal was to have a super short runway, taking off and landing with high acceleration.
  • My aim was to handle small aircraft; Gabriel seems aimed at larger ones. Admittedly larger ones may be more tolerant of landing only at prepared airports.

One refinement I have added involves the hard question of what to do if you lose power at takeoff. This is the scariest thing in flying, and you must be able to recover. You could have a longer takeoff runway, so that there is enough space to slow down again if the aircraft loses power just before being released.

An alternative, as suggested by Gregg Maryniak, is to have a "catch" airfield downrange from the main airfield. In this case, if you lost power, the system could keep accelerating you and even release you, with enough speed that you can climb over the intervening space and then glide to a landing on an emergency catch platform — which would grab you no matter what, and let you land hard. The intervening land could be farmland or any sort of land use willing to be at the end of an airport, but it need not be an airstrip. The downside of this is you must take off along a vector which lets you get, with no power, to the catch robot, so you may have to deal with crosswinds. You could have more than one catch robot, allowing different takeoff vectors, but it's still vastly less land than a typical airport would require, with most of the land finding other uses. Indeed it might be possible to have a small set of catch robots arrayed around the takeoff airstrip and allow takeoff in almost any direction.

The emergency catch robots, being only for emergencies, might stop you faster than an ordinary landing, and thus require less land. For example, if you can take 20 m/s/s of deceleration (about 2g), you can stop from 40 m/s in just 40 meters, meaning the emergency catch strip could be very small, an insignificant amount of land. At such a small size, it's easy to imagine an array of pads around the main takeoff zone. Admittedly it's a hard landing, but it would be a rare exception. Better be belted in on takeoff, with everything stowed in the back.
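The arithmetic behind that 40-meter figure is just the standard kinematics:

    # d = v**2 / (2*a): stopping from 40 m/s at 20 m/s/s (about 2g)
    v, a = 40.0, 20.0
    print(v ** 2 / (2 * a))   # 40.0 meters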

The Gabriel project seems concluded for now, but it will be interesting to see if anything develops further.

The Electric Car may be entering its "cell phone" period

I've been electric car shopping, but one thing has stood out as a big concern. Many electric cars are depreciating fast, and it may get even faster. I think part of this is due to the fact that electric cars are a bit more like electronic devices than they are like cars. Electric cars will see major innovation in the next few years, as well as rapid improvement in the price/performance of their batteries. This spells doom for their resale value. It's akin to cell phones — your 2-year-old cell phone still functions perfectly, but you dispose of it for a new one because of the pace of innovation. Electric cars are not at that pace, but they are skirting the phenomenon.

When it comes to robocars, I remind people that the computer will be the most important part of the car, not the engine or other features. And the computer and software are on the Moore's Law curve, like your phone. The battery system is not like this, but digital features are becoming more and more important parts of every car.

The most obvious cause of the big depreciation is not related to the cars themselves. There is a $7,500 federal tax rebate on a new electric car, so the moment you drive it off the lot, its blue book value drops an additional $7,500. In addition, some states offer credits of up to $5,000, and unless you take the car out of state, that amount will also drop off the value. This is the primary culprit for the huge depreciation numbers, but there is more.

Perversely, people with higher incomes don’t get California’s $2,500 credit, so for them, buying used is a very wise idea, because somebody else got the credit, and it’s reflected in the price of the car. Of course, if you are rich enough, you may tolerate paying $2,500 more than everybody else for the new car. In fact, if not for the sales tax, it would be a good strategy to get somebody else to buy a car for you and get the credit, then buy it from them. Or take over a lease (getting to that…)

There are rumours that vendors might even be trying to subsidize against this depreciation to avoid a collapse in the price of their cars. After all, such low used car value discourages confidence in the car (and steals away buyers of new cars.) Rumours suggest Nissan has been known to offer incentives to get people to keep their lease-returns rather than take them back, and there are stories of even Teslas getting low prices at auction, though in the retail market they have actually done pretty well.

The Leaf is the most popular electric car, and only it and the Tesla are real market cars from big players. The other cars are all “compliance” cars, made by companies who must meet quotas of green vehicles. The 2015 Leaf has a cited range around 80 miles, and users report a real range on the highway closer to 60 miles. For me, that means a car that can’t take me to San Francisco and back. The Leaf would handle a large fraction of my trips around Silicon Valley, but not being able to go to SF is a major detriment in this town. So I decided not to get a 2015 Leaf.

Better cars keep getting pre-announced

That decision was magnified when Nissan announced the 2016 Leaf would be able to do 107 miles. Technically, that's enough for the San Francisco trip, though in reality it's just on the edge. Any charging would allow the trip, including a 5-minute ("gas pump" level) stop at a DC fast charger (if nobody else is using it). So I was waiting for that car to come out when…

They announced the Chevy Bolt, a $30K car (after rebate) with a 200 mile range. Finally, a reasonably priced car with enough range. And then rumours circulated of a similar range in the 2017 Leaf — it needs one if it is to compete, and so does every other car. Who will buy a 100 mile 2016 car when a 200 mile 2017 car for not much more is being promoted?

Of course, in a year, something even more appealing than the Bolt will be announced. While the Bolt’s range is enough for 99% of my drives (leaving out only Lake Tahoe and road tripping) there is still much that can improve — other parts of the car, the electronics, and of course the battery pack getting even cheaper at that range.

Every year, cars get a little bit better, but we’re in for a period of about 5 years in electric cars where each new year is a lot better, and that’s trouble for people trying to sell them if the customers figure that out. A cell phone is cheap enough to throw out after 2 years. A car is not. To top it off, in a few years the robocar features will start getting more serious (starting with the first no-supervision traffic jam assist) and so other parts of the car will also be on the Moore’s Law curve.

The battery is probably not on that curve, but it’s on a good one. The Bolt’s 200 mile range is a result of an expected reduction of battery cost from $500/kWh a couple of years ago to $200/kWh by 2020, and that’s without any breakthroughs or new chemistry. (It is speculated the Bolt’s battery cost will already beat that $200 number.) Breakthroughs — which sometimes come when enough money is pushing the process — could easily do much more.
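Here is a rough sketch of what that cost curve means for a 200 mile pack; the 0.3 kWh/mile efficiency is my assumption, roughly in line with cars of this class:

```python
# Rough pack-cost arithmetic for a 200-mile electric car.
# Assumed efficiency; real cars range roughly 0.25 to 0.35 kWh/mile.
kwh_per_mile = 0.3
range_miles = 200
pack_kwh = range_miles * kwh_per_mile   # = 60 kWh

for cost_per_kwh in (500, 200):         # recent cost vs. projected 2020 cost
    print(f"At ${cost_per_kwh}/kWh, the pack alone costs ${pack_kwh * cost_per_kwh:,.0f}")
```

At $500/kWh the pack would eat the whole $30K price; at $200/kWh it drops to $12,000, which is what makes a reasonably priced 200 mile car possible.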

Robocar answer

Robocars have an answer to this rapid depreciation. If they are used as taxis, they can survive. The typical New York taxi drives 62,000 miles each year and wears out in 5 years. Personal cars take 19 years to wear out, going around 200,000 miles. Robotaxis will also wear out and be scrapped after just 5 years, so it is less of a burden that they become technologically obsolete at 4 years old. (We may also design these vehicles to make it easy to give them hardware upgrades so their electronics can keep pace.)
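A quick sketch of the arithmetic, using the figures above:

```python
# Why a robotaxi outruns technology obsolescence: it wears out in miles,
# not years. Figures are from the text.
taxi_miles_per_year = 62000       # typical New York taxi
personal_lifetime_miles = 200000  # typical personal car, over 19 years

years_to_wear_out = personal_lifetime_miles / taxi_miles_per_year
print(f"At taxi usage, a 200,000-mile life is spent in {years_to_wear_out:.1f} years")
print(f"A personal car averages only {personal_lifetime_miles / 19:,.0f} miles/year")
```

So a heavily used vehicle is scrapped for wear around the same time it becomes obsolete, while a lightly used one is obsolete with most of its mechanical life still ahead of it.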

Personal robocars have it harder. Your 4 year old personal vehicle is going to look like crap compared to the new ones. It will get software updates to match them (which is vital) but without hardware updates it will, like an old iPhone, no longer even be able to handle the software updates. If you buy a personal robocar, get one where it’s easy to swap out the hardware, and expect to pay the cost of this.

Wear and tear of electric cars

The battery is the lifeblood of the electric car. No matter how new the rest is, a reduced range is a deal-killer for most buyers. Indeed, some predictions say the rest of the power train should wear out more slowly than that of a traditional car, so the depreciation is unfair in some ways.

Battery swap is an option on some electric cars, but that’s a big cost on top of what you planned to pay. Older battery packs will still work, but deliver less range. Owners will salivate for new packs that are cheaper, lighter, fresher and possibly even higher capacity than what they have. That’s all good, but if you buy an electric car with a pack only good for 4 years at today’s prices, you’ve lost all the economies the electric car hopes to give you. Of course, robocars and especially robotaxis can manage their batteries for much longer life.

It might make sense to buy a 2012 Leaf for $8,000 and pay $5K to add a battery pack to it that’s brand-new, giving you a car close to matching a new one in certain ways.

With all this, why look at electric cars today? For me, my electricity bill would actually go down due to metering differences, and of course my gasoline bill would drop too. And they are zippy and fun to drive and quite green with California’s (relatively) green energy grid. And because of this depreciation, used ones are a major bargain. The buyers of new cars (and the federal government) took the hit on a new electric, but you can pick up a 2012 Leaf for $8,000. That’s because all those 2012 units are coming off their leases, and people want them a lot less with those fancier models out there. (In addition, it is known the 2012 had some battery life issues fixed in 2013.)

A lot of people are leasing electric cars. Leasing has one financial advantage (you pay sales tax only on the depreciation you take, rather than on the whole car) and is otherwise a bad idea unless you’re sure the vendor has set the residual value of the car too high. With electric cars, you take so much of the depreciation that the tax advantage is not so great. Even so, many electric owners lease: the $2500 tax credit in California can often cover the downpayment, making it easy to come up with the money, and owners are, with good reason, willing to let the vendor take the risk on battery decay and mega-depreciation. Vendors are not idiots, though, and so their residual values are low, but perhaps not low enough. Of course, if you know better cars are coming and are sure you only want the car for 2 years, leasing can ease your legwork.
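A small sketch of that tax advantage; the tax rate, price and residual are illustrative assumptions:

```python
# Sketch of the lease sales-tax advantage. All numbers are illustrative
# assumptions, not a quote for any real lease.
sales_tax = 0.09             # assumed combined sales tax rate
car_price = 30000
residual_after_lease = 12000 # residual value the lessor sets

tax_if_bought = car_price * sales_tax
tax_if_leased = (car_price - residual_after_lease) * sales_tax  # tax on depreciation only
print(f"Sales tax buying outright: ${tax_if_bought:,.0f}")
print(f"Sales tax over the lease:  ${tax_if_leased:,.0f}")
```

With an electric car the residual is set low, so the taxed depreciation is most of the price and the advantage shrinks.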

On the other hand, you can sometimes take over the lease of another electric car owner, letting them suffer the “due at signing” downpayment (which often exceeds all the monthly payments on a short lease) and giving you a car for a very short time, which might be a wise choice with all the new vehicles coming down the pipe.

Maintaining Privacy in the Robotaxi

While I’ve been in love for a long time with the idea of mobility-on-demand and the robocar taxi, I continue to have some privacy concerns. The first is simply over the idea that a service company gets a map of all your travels. Of course, your cell phone company, and companies like Google with their Location History (Warning, don’t click or you will be freaked out if you didn’t know about this) know this already, as does the NSA and probably all the other spy agencies in the world. That doesn’t make it much better to add more trackers. The online ride companies like Uber are tracking you too.

It will be sad to lose the anonymous taxi we used to have, where you hailed a cab and paid in cash and no record was made (until cabs got tracklogs and video) of your travels. In my article on Robocars and Privacy written many years ago I outlined some plans for anonymous taxi service and I continue to push this idea.

In the article, I outline the concern that a taxi company will want to be able to photograph the vehicle when you’re not in it, to confirm you haven’t dirtied or damaged the interior, and also to check if you left something in the vehicle by accident. People will be less comfortable with a camera that can be turned on at any time, and LEDs that inform you a camera is on can’t really be trusted, so we want a physical shutter.

This led me to a simple solution: the physical shutter on the camera could be the switch by which you signal the start and end of a ride. The ride can’t begin until you close the shutter, and it doesn’t close out until you open it again. You want the shutter lever on the outside of the car by the main passenger door, so that you can operate it while not in the car, and the camera never photographs you if you are trying to use an anonymous taxi. A connected lever inside could allow people who are not trying to be anonymous (but rather just private on their journey) to both control the shutter and signal the car to go or conclude the ride.
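Here is a minimal sketch of that interlock as a state machine; the class and method names are invented for illustration:

```python
# Minimal sketch of the shutter-as-ride-switch interlock described above.
# Names are invented for illustration, not any real taxi API.
class ShutterRideSwitch:
    def __init__(self):
        self.shutter_open = True    # camera uncovered; no ride in progress
        self.ride_active = False

    def close_shutter(self):
        """Passenger covers the camera; only now may the ride begin."""
        # The "before" interior photo would fire just before this.
        self.shutter_open = False
        self.ride_active = True

    def open_shutter(self):
        """Passenger uncovers the camera; this is what ends the ride."""
        if self.ride_active:
            self.ride_active = False
            self.shutter_open = True
            # The "after" photo (with flash) fires here to spot dirt or
            # lost items; the passenger should already be outside.

    def can_drive(self) -> bool:
        # The car refuses to move while the camera could be watching.
        return self.ride_active and not self.shutter_open
```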

You might not want to be inside when it takes the photo anyway, because a bright flash would be advisable, one that for a millisecond is brighter than the sunlight coming into the car. That way the images will be under the same light, night or day, making it easy to compare before and after images to detect dirt or lost items.

If you leave the car without opening the shutter, it would honk at you, or ding on your phone to remind you to come back and open it.

Cars will likely have some other cameras too, for video conferencing. I expect video conferences to be popular in robocars, and while your own phone can do that for you, a camera with stabilization in it could be a useful idea. Here, we could use a physical shutter, though this time with a remote actuator that makes noise, so you can easily see if it’s open. Even more simply, the video camera and monitor might not connect to anything in the car, but rather only connect to your phone via a car dock. (The connection must be wired, unfortunately.) If the camera is not connected you can be reasonably confident it’s not spying on you.

Of course, a truly malicious operator could have hidden cameras, or a secret connection to the video conference camera, but there’s not too much you can do about that. What we want protection from are attackers breaking into the car’s system, and vendors who change their mind about your privacy. We also want to put a stake in the ground that routine surveillance of passengers is not acceptable.

Federal government involvement

NHTSA, the federal car safety agency, has been talking about getting into the robocar game for a while, and now declares it wants more involvement, with two important details:

  • Unlike California, they are keen on making sure full robocars (able to run unmanned) are part of the regulations, and
  • Their regulations might supersede those of states like California.

In the next six months, the DoT will work with states and others on a unified policy. There are some other details here.

(California, by the way will have hearings in the next couple of weeks on their regulations. I will be out of the state, unfortunately.)

On top of this there is a $4 billion (over 10 years) proposal in the new Obama budget to support and accelerate robocars and (sadly) connected cars.

Perhaps most heartening is a plan to offer reduced regulation for up to 2,500 early deployment vehicles — a way to get companies out there in the field without shackling them first. Public attitudes on robocars have pushed regulators to a rather radical approach to regulation, namely attempting to define regulations before a product is actually on the market, with California even thinking of banning unmanned cars before they arrive. In the normal history of car safety regulation, technologies are built and deployed by vendors and are usually on the road for decades before they get regulated, but people are so afraid of robots that this normal approach may not happen here.

GM delays Super Cruise again

There was a fair bit of excitement when Cadillac announced “Super Cruise,” a product similar to what you see in the Tesla autopilot, for the 2014 model year, or so we thought. It was the first effort from a big car company at some level of self-driving, even if minimal. Since then, they’ve kept delaying it, while Mercedes, Tesla and others have released such products. Now they have said it won’t show until at least 2017. GM is quickly dropping in the ranks of active robocar companies, leaving the U.S. mantle to Tesla and Ford. Chrysler has never announced anything, and even ran anti-self-driving-car ads in the Super Bowl a few years ago.

Tesla releases “summon” and hints at more

The latest Tesla firmware release offers a “summon” function, so you can train your car to park and come back to you (with a range of 39 feet.) Primary use is to have your car go park itself in the garage, or at a robotic charging station. This didn’t stop Elon Musk from promising we are not very far away from being able to summon the car from very far away.

They have also detailed that those sorts of functions, and other autonomy, will require more sensors than they put in the Model S, and that this sensor suite is a few years away, perhaps in time for the Model 3.

But wait, there’s more…

The pace of news is getting fast. I’m having trouble keeping up with everything, even though it’s part of my job. This blog will continue to be a place not for all the news, but for the news that actually makes a difference, with analysis.

Here are some other items you might find of interest:

Google releases detailed intervention rates — and the real unsolved problem of robocars

Hot on the heels of my CES Report is the release of the latest article from Chris Urmson on The View from the Front Seat of the Google Car. Chris heads engineering on the project (and until recently led the entire project.)

Chris reports two interesting statistics. The first is “simulated contacts” — times when a safety driver intervened, and the vehicle would have hit something without the intervention:

There were 13 [Simulated Contact] incidents in the DMV reporting period (though 2 involved traffic cones and 3 were caused by another driver’s reckless behavior). What we find encouraging is that 8 of these incidents took place in ~53,000 miles in ~3 months of 2014, but only 5 of them took place in ~370,000 miles in 11 months of 2015. (There were 69 safety disengages, of which 13 were determined to be likely to cause a “contact.”)

The second is detected system anomalies:

There were 272 instances in which the software detected an anomaly somewhere in the system that could have had possible safety implications; in these cases it immediately handed control of the vehicle to our test driver. We’ve recently been driving ~5300 autonomous miles between these events, which is a nearly 7-fold improvement since the start of the reporting period, when we logged only ~785 autonomous miles between them. We’re pleased.

Let’s look at these and why they are different and how they compare to humans.

The “simulated contacts” are events which would have been accidents in an unsupervised or unmanned vehicle, which is serious. Google is now seeing one every 74,000 miles, though Urmson suggests this rate may not keep going down as they test the vehicle in new and more challenging environments. It’s also noted that a few were not the fault of the system. Indeed, for the full set of 69 safety disengagements, the rate of those is actually going up, with 29 of them in the last 5 months reported.
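The rates are easy to re-derive from the numbers Urmson gives:

```python
# Re-deriving the simulated-contact rates from the reported numbers.
contacts_2014, miles_2014 = 8, 53_000     # ~3 months of 2014
contacts_2015, miles_2015 = 5, 370_000    # 11 months of 2015

print(f"2014: one simulated contact per {miles_2014 / contacts_2014:,.0f} miles")
print(f"2015: one simulated contact per {miles_2015 / contacts_2015:,.0f} miles")
```

That works out to roughly one per 6,600 miles in 2014 and one per 74,000 miles in 2015, about an 11-fold improvement.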

How does that number compare with humans? Well, regular people in the USA have about 6 million accidents per year reported to the police, which means about once every 500,000 miles. But for some time, insurance companies have said the number is twice that, or once every 250,000 miles. Google’s own new research suggests even more accidents are taking place that go entirely unreported by anybody. For example, how often have you struck a curb, or even had a minor touch in a parking lot that nobody else knew about? Many people would admit to that, and altogether there are suggestions the human number for a “contact” could be as bad as one per 100,000 miles.
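The human baseline is also worth sanity-checking against total miles driven:

```python
# Sanity check on the human baseline: ~6 million police-reported
# accidents per year against roughly 3 trillion US vehicle-miles per
# year (the approximate figure for recent years).
reported_accidents = 6e6
us_vehicle_miles = 3e12

print(f"One reported accident per {us_vehicle_miles / reported_accidents:,.0f} miles")
# Insurers see about twice the police-reported rate (1 per ~250,000 miles),
# and unreported scrapes may push the true rate to 1 per ~100,000 miles.
```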

Which would put the Google cars at close to that level, though this is from driving in simple environments with no snow and easy California driving situations. In other words, there is still some distance to go, but at least one possible goal seems in striking distance. Google even reports going 230,000 miles from April to November of last year without a simulated contact, a (cherry-picked) stretch that nonetheless matches human levels.

For the past while, when people have asked me, “What is the biggest obstacle to robocar deployment, is it technology or regulation?” I have given an unexpected answer — that it’s testing. I’ve said we have to figure out just how to test these vehicles so we can know when a safety goal has been met. We also have to figure out what the safety goal is.

Various suggestions have come out for the goal: Having a safety record to match humans. Matching good humans. Getting twice or even 10 times or even 100 times as good as humans. Those higher, stretch goals will become good targets one day, but for now the first question is how to get to the level of humans.

One problem is that the way humans have accidents is quite different from how robots probably will. Human accidents sometimes have a single cause (such as falling asleep at the wheel) but many arise because 2 or more things went wrong. Almost everybody I talk to will admit there has been a time when they were looking away from the road to adjust the radio or even play with their phone, and they looked up to see traffic slowing ahead of them, and hit the brakes just in time, narrowly avoiding an accident. Accidents often happen when luck like this runs out. Robotic accidents will probably mostly come from a single flaw or error. A robot doing anything unsafe, even for a moment, will be cause for alarm, and the source of the error will be fixed as quickly as possible.

Safety anomalies

This leads us to look at the other number — the safety anomalies. At first, this sounds more frightening. They range from 39 hardware issues and anomalies to 80 “software discrepancies” which may include rarer full-on “blue screen” style crashes (if the cars ran Windows, which they don’t). People often wonder how we can trust robocars when they know computers can be so unreliable. (The most common detected fault is a perception discrepancy, with 119. It is not said, but I will presume these will include strange sensor data or serious disagreement between different sensors.)

It’s important to note the hidden message. These “safety anomaly” interventions did not generally cause simulated contacts. With human beings, zoning out, taking your eyes off the road, texting, or even briefly falling asleep does not always result in a crash, and nor will similar events for robocars. In the event of a detected anomaly, one presumes that independent (less capable) backup systems will immediately take over. Because they are less capable, they might cause an error, but that should be quite rare.

As such, the 5300 miles between anomalies, while clearly in need of improvement, may also not be a bad number. Certainly many humans have such an “anomaly” that often (that’s about every 6 months of human driving.) It depends how often such anomalies might lead to a crash, and what severity of crash it would be.

The report does not describe something more frightening — a problem with the system that it does not detect. This is the sort of issue that could lead to a dangerous “careen into oncoming traffic” style event in the worst case. The “unexpected motion” anomalies may be of this class. (Since such an event would be a contact incident, we can conclude it’s very rare, if it happens at all, in the modern car.) (While I worked on Google’s car a few years ago, I have no inside data on the performance of the current generations of cars.)

I have particular concern with the new wave of projects hoping to drive with trained machine learning and neural networks. Unlike Google’s car and most others, the programmers of those vehicles have only a limited idea how the neural networks are operating. It’s harder to tell if they’re having an “anomaly,” though the usual things like hardware errors, processor faults and memory overflows are of course just as visible.

The other vendors

Google didn’t publish total disengagements, judging most of them to be inconsequential. Safety drivers are regularly disengaging for lots of reasons:

  • Taking a break, swapping drivers or returning to base
  • Moving to a road the car doesn’t handle or isn’t being tested on
  • Any suspicion of a risky situation

The last is the most interesting. Drivers are told to take the wheel if anything dangerous is happening on the road, not just with the vehicle. This is the right approach — you don’t want to use the public as test subjects, you don’t want to say, “let’s leave the car auto-driving and see what it does with that crazy driver trying to hassle the car or that group of schoolchildren jaywalking.” Instead the approach is to play out the scenario in simulator and see if the car would have done the right thing.

Delphi reports 405 disengagements in 16,600 miles — but their breakdown suggests only a few were system problems. Delphi is testing on highway where disengagement rates are expected to be much lower.

Nissan reports 106 disengagements in 1485 miles, most in their early stages. For Oct-Nov their rate was 36 for 866 miles. They seem to be reporting the more serious ones, like Google.

Tesla reports zero disengagements, presumably because they would define what their vehicle does as not a truly autonomous mode.

VW’s report is a bit harder to read, but it suggests 5500 total miles and 85 disengagements.

Google’s lead continues to be overwhelming. That shows up very clearly in the nice charts that the Washington Post made from these numbers.

How safe do we have to be?

If the number is the 100,000 mile or 250,000 mile number we estimate for humans, that’s still pretty hard to test. You can’t just take every new software build and drive it for a million miles (about 25,000 hours) to see if it has fewer than 4 or even 10 accidents. You can and will test the car over billions of miles in simulator, encountering every strange situation ever seen or imagined. Before its first accident, the car will be unlike a human: it will probably perform flawlessly. If it doesn’t, that will be immediate cause for alarm back at HQ, and correction of the problem.
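To make the testing burden concrete, here is a sketch using the standard “rule of three” for zero-failure testing; the 40 mph average test speed is my assumption:

```python
# How many flawless miles must one build drive to claim, with ~95%
# confidence, that its accident rate beats a target? With zero events
# observed, the "rule of three" bounds the true rate at under 3/miles.
CONFIDENCE_FACTOR = 3.0   # ~ -ln(0.05), for 95% confidence

for miles_per_accident in (100_000, 250_000, 500_000):
    miles_needed = CONFIDENCE_FACTOR * miles_per_accident
    hours = miles_needed / 40     # assumed 40 mph average test speed
    print(f"To beat 1 accident per {miles_per_accident:,} miles: "
          f"{miles_needed:,.0f} flawless miles (~{hours:,.0f} hours)")
```

Even the easiest human-level target demands hundreds of thousands of flawless miles per build, which is why simulation has to carry most of the load.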

Makers of robocars will need to convince themselves, their lawyers and safety officers, their boards, the public and eventually even the government that they have met some reasonable safety goal.

Over time we will hopefully see even more detailed numbers on this. That is how we’ll answer this question.

This does turn out to be one advantage of the supervised autopilots, such as what Tesla has released. Because it can count on all the Tesla owners to be the fail-safe (or if you prefer, guinea pig) for the autopilot system, Tesla is able to quickly gather a lot of data about the safety record of its system over a lot of miles, far more than can be gathered if you have to run the testing operation with paid drivers or even your own unmanned cars. This ability to test could help the supervised autopilots get to good confidence numbers faster than expected.

Indeed, though I have often written that I don’t see a good evolutionary path from supervised robocars to unmanned ones, this approach could prove my prediction wrong. If Tesla or some other car maker with lots of cars on the road is able to make an autopilot, and then observe that it never fails in several million miles, they might have a legitimate claim to something safe enough to run unmanned, at least on the classes of roads and situations the customers tested it on. Though a car that does 10 million perfect highway miles is still not ready to bring itself to you, door to door, on urban streets, as Elon Musk claimed yesterday would happen soon with the Tesla.

CES 2016 Robocar News

I’m back from CES 2016 with a raft of news, starting with robocars. Some news was reported before the show but almost everybody had something to say — even if it was only to have something to say!

I have many more photos with coverage in my CES 2016 Photo Gallery.

Ford makes strong commitment

Ford’s CEO talks like he gets it. Ford did not have too much to show — they announced they will be moving to Velodyne’s new lower cost 32-laser puck-sized LIDAR for their research, and boosting their research fleet to 30 vehicles. They plan for full-auto operation in limited regions fairly soon.

Ford is also making its own efforts in one-way car share (similar to Daimler Car2Go and BMW DriveNow) called GoDrive, which pushes Ford more firmly into the idea of selling rides rather than cars. The car companies are clearly coming around to this sooner than I expected, and the reason is very clearly the success of Uber. (As I have said, it’s a mistake to think of Uber as competition for the taxi companies. Uber is competition for the car companies.)

Ford is also doing an interesting “car swap” product. While details are scant, it seems what the service will do is let you swap your Ford for somebody else’s different Ford. For example, if somebody has an F-150 or Transit Van that they know they won’t use the cargo features on some day or weekend, you drive over with your ordinary sedan and swap temporarily for their truck — presumably with a small amount of money flowing to the more popular vehicle. Useful idea.

The big announcement that didn’t happen was the much-rumoured alliance between Ford and Google. Ford did not overtly deny it, but suggested they had enough partners at present. The alliance would be a good idea, but either the rumours were wrong, or they are waiting for another event (such as the upcoming Detroit Auto Show) to talk about it.

Faraday Future, where art thou?

The big disappointment of the event was the silly concept racecar shown by Faraday Future. Oh, sure, it’s a cool electric racecar, but it has absolutely nothing to do with everything we’ve heard about this company, namely that they are building a consumer electric car-on-demand service with autonomous delivery. Everybody wondered if they had booked the space and did not have their real demo ready on time. It stays secret for a while, it seems. Recent hires, such as Jan Becker, the former head of the autonomous lab for Bosch, suggest they are definitely going autonomous.

Mapping heats up

Google’s car drives by having super-detailed maps of all the roads, and that’s the correct approach. Google is unlikely to hand out its maps, so both Here/Navteq (now owned by a consortium of auto companies in Germany) and TomTom have efforts to produce similar maps to license to non-Google robocar teams. They are taking fairly different approaches, which will be the subject of a future article.

One interesting edge is that these companies plan to partner with big automakers and not just give them map data but expect data in return. That means that each company will have a giant fleet of cars constantly scanning the road, and immediately reporting any differences between the map and the territory. With proper scale, they should get reports on changes to the road literally within minutes of them happening. The first car to encounter a change will still need to be able to handle it, possibly by pulling over and/or asking the human passenger to help, but this will be a very rare event.

Mobileye has announced a similar plan, and they are already the camera in a large fraction of advanced cars on the road today. Mobileye has a primary focus on vision, rather than LIDAR, but will have lots of sources of data. Tesla has also been uploading data from their cars; as far as I know it does not make as extensive use of detailed maps, though it does rely on general maps.  read more »

Lyft and GM, Sidecar, the nature of competition and CES

Lyft announced a $500M investment from GM with $500M more, pushing them to a $5.4B valuation, which is both huge and just a tenth of Uber. This was combined with talk of a push to robocars. (GM will provide a car rental service to Lyft drivers to start, but the speculation is that whatever robocar GM gets involved in will show up at Lyft.)

With no details, Lyft’s announcement doesn’t really add anything to the robocar world that Uber doesn’t already add. It is GM’s participation that is more interesting, because it’s another car company showing it is not just giving lip service to the idea of selling rides rather than cars. (Mercedes and BMW have also started saying real things in this area.)

My initial expectations for the big car companies were much bleaker. I felt that their century-long histories of doing nothing but selling cars would impede them from switching models until it was too late. That might still happen, and will happen for some companies, but more might survive than I expected. The story also contains some more pure PR comments about OnStar in the new Lyft rental cars. Lyft drivers are all linked in real time with their smartphones; OnStar is obsolete technology, named only to make it seem GM is adding something. GM is not a great robocar leader. They have been very slow even with their highway “Super Cruise” efforts, and the best they have done is partner with Rajkumar at CMU, only to find Uber more successful at working with CMU folks.

Sidecar and where are you going?

Also frightening is the news last week of the death of Sidecar. Sidecar was the 3rd place smartphone-hail company after Uber and Lyft, but so distant a third that it decided to shut down. Where Lyft can raise another billion, Sidecar could not get a dime. The CEO is a friend of mine, and I’ve been impressed that Sidecar was willing to innovate, even building a successful delivery business on top of the fact that you had to tell Sidecar where you were going. I think it’s important that users say where they are going, because it allows much better planning of the use of robocar resources. If customers say where they are going, you can not only do some of the things Sidecar did (deliveries in the trunk the passenger doesn’t even know about, pricing set by drivers, directional goals set by drivers, etc.), you can do more:

  • Send short-range cars (electric cars) for short trips
  • Send small (one or two person) cars when there is just one rider
  • Send cars not even capable of the highway if the trip doesn’t involve the highway
  • Pool riders far more efficiently, sometimes in vehicles designed for pooling which have 2-12 private “cabins.”

All of this is important to making transportation vastly more efficient, and to allowing a wide variety of vehicle designs and power trains. It is only by knowing the destination that many of these benefits can be realized, as the dispatch sketch below illustrates.
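A toy sketch of destination-aware dispatch; the vehicle classes and thresholds are invented for illustration:

```python
# Toy destination-aware dispatcher. Vehicle classes and thresholds are
# invented for illustration, not any real fleet's rules.
def pick_vehicle(trip_miles: float, riders: int, uses_highway: bool) -> str:
    if riders <= 2 and trip_miles < 25 and not uses_highway:
        return "small city EV"            # short range, not highway-capable
    if riders <= 2 and trip_miles < 60:
        return "two-person electric"      # short range, highway-capable
    if riders > 4:
        return "pooling van with private cabins"
    return "full-size sedan"

# None of these choices can be made if the rider only says "pick me up."
print(pick_vehicle(trip_miles=5, riders=1, uses_highway=False))   # small city EV
print(pick_vehicle(trip_miles=80, riders=2, uses_highway=True))   # full-size sedan
```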

Uber lets you enter the destination but does not require it, and people do like having less to do when summoning a vehicle. (I always enter the destination in places where they don’t speak English; it’s a handy way to communicate with the driver.) The driver is not shown the destination until after they pick you up. This stops drivers from refusing rides going places they don’t want to go, which has its merits. It also has serious downsides for drivers, who sometimes at the end of their shift pick up a rider who wants to go 40 miles in the opposite direction of their home.

Even more frightening is what Sidecar’s death says about how much room there is for competitors in the robotaxi space. There are dozens of car makers competing for a new car customer, but San Francisco, the birthplace of Uber, Lyft and Sidecar, could not support 3 players in one of the world’s hottest investment spaces. Two unicorns, but nobody else.

When it comes to competition, the ride business is a strange one. For scheduled rides (which was most of the black car business before Uber) there are minimal economies of scale. A one-car limo “fleet” is still a viable business today, picking up customers for scheduled rides. They provide the same service as a 100 car limo-fleet, though they sometimes have to turn you down or redirect you to a partner.

For on-demand rides, there is a big economy of scale. I want a car now, so you have to have a lot of cars to be sure to have one near me. I will go with the service that can get to me soonest. While price and vehicle quality matter, they can be trumped by pickup time, within reason. Sidecar, being small, often failed in this area, including my attempt to use it on its last day on my way home from the airport.

Robocars offer up a middle ground. Because there is no driver who minds waiting, it will be common to summon a robocar well in advance of when you want it. Once you know “I’m leaving in around 20 minutes” you can summon, and the car can find somewhere to wait in all but the most congested zones. Waiting time for a robotaxi can be very cheap, well under a dollar/hour, though during peak times robotaxi owners will raise the price a little to cover lost opportunity costs. (Finance costs will be under 20 cents/hour at 5% interest, and waiting space will range from free to probably 30 cents/hour in a competitive parking “spot market.”)
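A quick sketch of where that finance number comes from; the vehicle cost is my assumption:

```python
# Where the ~20 cents/hour finance cost of a waiting robotaxi comes from.
# Vehicle cost is an assumption; the 5% interest rate is from the text.
vehicle_cost = 35000            # assumed robotaxi capital cost
interest_rate = 0.05            # per year
hours_per_year = 365 * 24       # 8760

finance_per_hour = vehicle_cost * interest_rate / hours_per_year
print(f"Finance cost while idle: ${finance_per_hour:.2f}/hour")
# Add parking (free to ~$0.30/hour) and waiting stays well under $1/hour.
```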

The more willing customers are to summon in advance, the more competitive a small player can be. They can offer you instant service when you actually are ready to leave, and that way they can compete on factors other than wait time. Small players can be your first choice, and they can subcontract your business to another company who has a car close by when you forget to summon in advance.

CES in Las Vegas

I’m off to CES Wednesday. This show, as before, promises quite a lot of car announcements. Rumours suggest the potential Ford/Google announcement could happen there, along with updates from most major companies. There will also be too many “connected” car announcements, because companies need to announce something, and it’s easy to come up with something in that space that sounds cool without it actually needing to be useful.

This morning already sees an announcement from Volvo and Ericsson about streaming video in cars. This is a strange one, a mix of something real — as cars become more like living rooms and offices they are going to want more and better bandwidth, including bandwidth reliable enough for video conferencing — but also something silly, in that watching movies and TV shows is, with a bit of buffering, a high-bandwidth application that’s easy to get right even on an unreliable network. In truth, because wireless bandwidth on the highway is always going to be more expensive than wifi in the parking space, it makes more sense to pre-load your likely video choices and win both ways, on cost and quality.

I have been fascinated watching the shift between semi-planned watching (DVD rental, Netflix DVD queue, DVR, prepaid series subscriptions, watchlists and old-school live TV) and totally ad-hoc streaming on demand. While I understand the attraction of ad-hoc streaming (even for what you planned far ahead to watch), it surprises me that people do it even at the expense of cost and quality. Of course, there are parallels to how we might summon cars!