What does the VW Scandal mean for Robocars?

Most of you will have heard about the giant scandal: Volkswagen has been revealed to have put software in its cars to deliberately cheat on emissions tests in the USA and possibly other places. It’s very bad for VW, but what does it mean for all robocar efforts?

You can read tons about the Volkswagen emissions violations, but here’s a short summary. All modern cars have computer-controlled fuel and combustion systems, and these can be tuned for different levels of performance, fuel economy and emissions. (Of course, ignition in a diesel is not done by an electronic spark.) Cars have to pass emissions tests, so most cars have to tune their systems in ways that reduce other things (like engine performance and fuel economy) in order to reduce their pollution. Most cars attempt to detect the style of driving going on, and tune the engine differently for the best results in that situation.

VW went far beyond that. Apparently their system was designed to detect when it was in an emissions test. In these tests, the car is on rollers in a garage, and it follows certain patterns. VW set their diesel cars to look for this, and to tune the engine to produce emissions below the permitted numbers. When the car saw it was in more regular driving situations, it switched the tuning to modes that gave it better performance and better mileage, but in some cases vastly worse pollution. A commonly reported number is that in some modes 40 times the California limit for nitrogen oxides (NOx) could be emitted, and even over a wide range of driving it was as high as 20 times the California limit (about 5 times the European limit). NOx is a major smog component and bad for your lungs.
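
To make the mechanism concrete, here is a minimal sketch of what defeat-device logic of this general shape could look like. It is purely illustrative: the signal names, thresholds and map names are all invented, and none of this is based on VW’s actual code.

```python
# Hypothetical sketch of dyno-detection logic. All signals and
# thresholds are invented for illustration; this is NOT VW's code.

def looks_like_emissions_test(speed_kmh: float, steering_angle_deg: float,
                              rear_wheel_speed_kmh: float) -> bool:
    """Guess whether the car is on a test dynamometer: the driven
    wheels spin while nobody steers and (for a front-wheel-drive car
    on a two-wheel dyno) the rear wheels stay still."""
    wheels_spinning = speed_kmh > 10.0
    nobody_steering = abs(steering_angle_deg) < 1.0
    rear_wheels_static = rear_wheel_speed_kmh < 1.0
    return wheels_spinning and nobody_steering and rear_wheels_static

def choose_engine_map(on_dyno: bool) -> str:
    # The alleged cheat: a clean map for the test, a dirty one on the road.
    return "low_nox_test_map" if on_dyno else "performance_map"

print(choose_engine_map(looks_like_emissions_test(50.0, 0.2, 0.0)))
```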

It has not been revealed just who at VW did this, or whether other car companies have done this as well. (All companies do variable tuning, and it’s “normal” to have modestly higher emissions in real driving compared to the test, but this was beyond the pale.) The question everybody is asking is “What the hell were they thinking?”

That is indeed the question, because I think the central issue is why VW would do this. After all, having been caught, the cost is going to be immense, possibly even ruining one of the world’s great brands. Obviously they did not really believe that they might get caught.

Beyond that, they have seriously reduced the trust that customers and governments will place not just in VW, but in car makers in general, and in their software offerings in particular. VW will lose trust, but this will spread to all German carmakers and possibly all carmakers. This could result in reduced trust in the software in robocars.

What the hell were they thinking?

The motive is the key thing we want to understand. In the broad sense, it’s likely they did it because they felt customers would like it, and that would lead to selling more cars. At a secondary level, it’s possible that those involved felt they would gain prestige (and compensation) if they pulled off the wizard’s trick of making a diesel car which was clean and also high performance, at a level that turns out to be impossible.

Tricking LIDARS and robocars

Much has been made in the press of Jonathan Petit’s recent disclosure of an attack on some LIDAR systems used in robocars. I saw Petit’s presentation on this in July, but he asked me for confidentiality until they released their paper in October. However, since he has decided to disclose it, there’s been a lot of press, with truth and misconceptions.

There are many security aspects to robocars. By far the greatest concern would be compromise of the control computers by malicious software, and great efforts will be taken to prevent that. Many of those efforts will involve having the cars not talk to any untrusted sources of code or data which might be malicious. The car’s sensors, however, must take in information from outside the vehicle, so they are another source of compromise.

There are ways to compromise many of the sensors on a robocar. GPS can be easily spoofed, and there are tools out there to do that now. (Fortunately real robocars will only use GPS as one clue to their location.) Radar is also very easy to spoof — far easier than LIDAR, agrees Petit — but their goal was to see if LIDAR is vulnerable.

The attack is a real one, but at the same time it’s not, in spite of the press, a particularly frightening one. It may cause a well-designed vehicle to believe there are “ghost” objects that don’t actually exist, so that it might brake for something that’s not there, or even swerve around it. It might also overwhelm the sensor, so that the car concludes the sensor has failed and goes into a failure mode, stopping or pulling off the road. This is not a good thing, of course, and it has some safety consequences, but it’s also a fairly unlikely attack. Essentially, there are far easier ways to do these things that don’t involve the LIDAR, so it’s not too likely anybody would want to mount such an attack.

Indeed, to do these attacks, you need to be physically present, near the target car, and you need a solid object that’s already in front of the car, such as the back of a truck that it’s following. (It is possible the road surface might work.) This is a higher bar than attacks which might be done remotely (such as computer intrusions) or via radio signals (such as with hypothetical vehicle-to-vehicle radio, should cars decide to use that tech.)

Here’s how it works: LIDAR works by sending out a very short pulse of laser light, and then waiting for the light to reflect back. The pulse is a small dot, and the reflection is seen through a lens aimed tightly at the place the pulse was sent. The time it takes for the light to come back tells you how far away the target is, and the brightness tells you how reflective it is, like a black-and-white photo.
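
The ranging arithmetic itself is simple. Here is a minimal sketch (the function name is mine, purely for illustration):

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_return(round_trip_seconds: float) -> float:
    """Range to the target: the pulse travels out and back, so halve it."""
    return C * round_trip_seconds / 2.0

# A return arriving 200 ns after the pulse fires means a target ~30 m away.
print(distance_from_return(200e-9))  # ~29.98
```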

To fool a LIDAR, you must send another pulse that comes from, or appears to come from, the target spot, and it has to come in at just the right time, before (or on some units, after) the real return from what’s really in front of the LIDAR comes in.

The attack requires knowing the characteristics of the target LIDAR very well. You must know exactly when it is going to send its pulses before it sends them, and thus precisely (to the nanosecond) when a return reflection (“return”) would arrive from a hypothetical object in front of the LIDAR. Many LIDARs are quite predictable. They scan a scene with a rotating drum, and you can see the pulses coming out, and know when they will be sent.
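
Here is a sketch of the timing arithmetic an attacker would need, assuming the firing schedule really can be predicted; the names are again illustrative:

```python
C = 299_792_458.0  # speed of light in m/s

def ghost_pulse_delay(fake_distance_m: float) -> float:
    """Delay after the LIDAR fires at which to transmit a spoof pulse so
    it arrives exactly when a real return from fake_distance_m would.
    (Simplified: this ignores the attacker's own distance to the sensor,
    whose one-way travel time would also have to be subtracted.)"""
    return 2.0 * fake_distance_m / C

# Faking an object 15 m ahead means firing ~100 ns after the real pulse,
# which is why nanosecond-level prediction of the unit is required.
print(ghost_pulse_delay(15.0) * 1e9)  # ~100 ns
```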

Meeting on a narrow road

Jean-Louis Gassée, though a respected computer entrepreneur, recently wrote a critical post on robocars which matches a very common pattern of critical articles:

The pattern is as follows:

  • The author has been hearing about robocars for a while, and is interested
  • While out driving, or sometimes just while thinking, they encounter a situation which seems challenging
  • They can’t figure out what a robocar would do in that situation
  • They conclude that the technology must therefore be very far in the future.

His scenario is the very narrow road, so narrow that it really should be one-way, but it isn’t. Along most of the road, two cars can’t pass one another. Humans resolve this through various human dynamics, discussion and experience.

In most of these examples, the situation is not new to robocar developers. In many cases they’ve been thinking for over a decade about the problems they might encounter in driving. It’s extremely rare for a newcomer to come up with a scenario they have not thought of. In addition, developers are putting cars on the road, with over a million miles in Google’s case, to find the situations that they didn’t think of just by thinking and driving themselves. It is not impossible for novices to come up with something new — in fact a fresh eye can often be very valuable — but fresh eyes should check to see what prior thinking may exist.

Some of the problems are indeed hard, and developers have put them later on the roadmap. They will not release their cars to operate on roads where the unsolved situations may occur. If snow is hard, the first cars will be released in places where it does not snow, or they will not drive on their own if it’s snowing. In the meantime, the problems will be solved, in a priority order based on how often they happen and how important they are.

The “two cars meet” situation arises only on quite rare roads in the USA, so it’s not a high-priority problem there, but it would not be a surprise problem. That’s because current plans have cars drive only with a map of the road they are driving. No map, they don’t drive the road.

That means they know the road well, and exactly how wide it is at every spot, and what its rules are (one-way vs. two-way and so on). They will know their own width and the width of oncoming vehicles accurately. If they can’t safely drive a road, they won’t drive it. If it’s a rare road, the cost of that will be accepted. Driving every road everywhere is a nice dream, but not necessary to have a highly useful product. While Google’s ideal prototype is planned to be released for urban situations without a wheel, cars that need to go places where the software can’t drive will continue to offer wheels or other interfaces (joysticks, tablet apps) that let a human guide them through problems.

The two-cars meeting problem is interesting because it’s actually one where the cars can far outperform humans. It’s also one of the rare times that communication between cars turns out to be useful. (Typically car to server to server to car, not direct v2v, but that’s another matter.)

The reason is that super narrow roads, including country roads and urban back-alleys have occasional wide-spots and turn outs where people can pass. They have to, to be two-way. And these will all be on the map. Cars on such a road would desire traffic data about other cars on the road. They will be able to make predictions about when they might encounter another car coming the other way. Most interestingly, one or both of the cars can adjust their speed so that they will encounter one another precisely at one of the wider spots where passing can take place.

In fact, if they do this well, they might drive a one-lane road at a nice fast speed, barely slowing down in these wider passing zones, in part because by knowing the width of the vehicles they will be able to confidently pass quite closely. If a robocar is meeting a human-driven car, it would leave some slop, picking the right passing zone, arriving early in case the other car is faster than expected, waiting if it is slower.
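
As a toy sketch of that planning arithmetic, under simple assumptions (constant speeds, positions measured as distance along the road, the oncoming car’s position and speed known from shared traffic data), one might compute:

```python
# Toy sketch: pick a mapped passing zone and the speed that makes the
# two cars meet there. All names and limits are invented for illustration.

def speed_to_meet_at(zone_pos, my_pos, other_pos, other_speed):
    """Speed I must hold so both cars reach zone_pos at the same time.
    The other car approaches from the far end at other_speed (m/s)."""
    other_time = (other_pos - zone_pos) / other_speed
    return (zone_pos - my_pos) / other_time

def best_passing_plan(passing_zones, my_pos, other_pos, other_speed,
                      v_min=2.0, v_max=15.0):
    """Choose a zone reachable at a legal, comfortable speed,
    preferring the fastest feasible plan. None means stop or back up."""
    plans = []
    for zone in passing_zones:
        if my_pos < zone < other_pos:
            v = speed_to_meet_at(zone, my_pos, other_pos, other_speed)
            if v_min <= v <= v_max:
                plans.append((v, zone))
    return max(plans) if plans else None

# A 1000 m road with turnouts at 250 m and 700 m; oncoming car at 10 m/s.
print(best_passing_plan([250.0, 700.0], 0.0, 1000.0, 10.0))
# -> (3.33..., 250.0): hold about 3.3 m/s and meet at the 250 m turnout.
```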

This remarkable ability would allow us to build low-traffic roads and alleys which are mostly only one lane wide, but which could carry traffic fairly quickly and safely in both directions. Gassée’s problem is far from a problem — it’s actually a great opportunity to vastly decrease the cost and land requirements of road construction. I wrote about this a couple of years ago, in fact.

Even without communication, a robocar would do pretty well here. Its map would tell it, should it encounter another vehicle on the road that it can’t pass, just where the closest passing spot is. It could back up if need be, or if the other car should back up, it could nudge in that direction, or even display instructions to a human driver on a screen. It would be able to do this far better than humans could because of its accurate measurements and driving ability. Generally, any human car should defer to the robocar’s superior knowledge and superior ability to manage a close pass-by. The car would figure it out the moment it sensed the other car, and immediately adjust speed to meet at a passing point, or possibly to back up. Unlike humans, they will be able to drive in reverse at high speed if they have 360-degree sensors.

Human drivers could actually play a role in this. Those running a mobile app like WAZE could know about other cars running the app, or robocars. The app could give them advice to speed up or slow down to encounter the other car at a wide spot. Of course, if there are cars not using the app, they would just fall back to the old fashioned human approach. One could imagine a sign at the entry to a narrow road saying, “We recommend running the XYZ app for a smoother trip down this road.”

Not all these problems that people put forward are as easily resolved as this one, so I am not calling for people to “shut up and let the experts get to work.” There are many problems yet to be solved. Most of them can be solved by punting, because you don’t need to drive everywhere. Though Google has shown that having a steering wheel that can be grabbed while moving is a bad idea, I do expect most cars to have some form of control that can be activated when a car is stopped. If a road needs the human touch, it will be available.

To fix human attack on the Hugo awards, you need humans

I wrote earlier on the drama that ensued when a group of SF writers led a campaign to warp the nomination process by getting a small but sufficiently large group of supporters to collude on nominating a slate of candidates. The way the process works, with the nomination being a sampling process where a thousand nominators choose from thousands of works, it takes only 100-200 people working together to completely take over the process, and in some cases, they did — to much uproar.

In the aftermath, there was much debate about what to do about it. Changes to the rules are in the works, but due to a deliberate ratification process, they mostly can’t take effect until the 2017 award.

One popular proposal, called E Pluribus Hugo, appeals, at least initially, to the nerdy mathematician in many of us. Game theory tries to design voting systems that resist attack. This is such a proposal, which works to diminish the effect that slate collusion can have, so that a slate of 5 might get fewer than 5 (perhaps just 1 or 2) onto the ballot. It is complex, but aims to make it possible for people to largely nominate the same way as before. My fear is that it modestly increases the reward for “strategic” voting. With strategic voting, you are not colluding, but you deliberately leave choices you like off your ballot to improve the chances of other choices you like more.

Singularity University Closing Ceremony Thursday Evening in San Jose

After a hard 10 weeks, our Singularity University Graduate class for 2015 will have its closing ceremony this Thursday night. If you are in the Bay Area, consider coming down to join luminaries of the SFBA accelerating-technology community at San Jose’s California Theatre and see presentations from the 5 top student teams, as well as tables and posters from all 24 of them. With 80 students from 40 countries, it’s an eclectic and amazing group.

You can get event info and tickets here.

Past student teams have revolutionized car sharing, mined metals from waste, developed radical new medical diagnostic devices, built infrastructure for drone delivery and done the first manufacturing off the Earth by launching a 3D printer onto the space station. And there have been over 100 more.

Google Alphabet: Is it good for robocars?

Everybody has heard about Google’s restructuring. In the restructuring, Google [x], which includes the self-driving car division, will be a subsidiary of the new Alphabet holding company, and no longer part of Google.

Having been a consultant on that team, I have some perspective to offer on how the restructuring might affect the companies that become Alphabet subsidiaries and leave the Google umbrella.

The biggest positive is that Google has become a large corporation, and as a large corporation, it suffers from many of the problems that large companies have. Google is perhaps one of the most unusual large companies, so it suffers most of them less, but it is not immune. As small subsidiaries of Alphabet, the various companies will be able to escape this and act a bit more like startups. They won’t get to be entirely like startups — they will have a rich sugar daddy and not have to raise money in the venture funding world, and it’s yet to be seen if they will get any resources from their cousin at Google. Even so, this change can’t be overstated. There are just ways of thinking at big companies that seem entirely rational when looked at up close, but which doom so many projects inside big companies.

Here let me list some of the factors that will be positives:

  • While Alphabet has said nothing about the structure of the Google [X] companies, it seems likely that they will be able to give options and equity to their employees; options that might have a big upside. Google stock options have lost the big upside. Due to the structure, however, the equity packages will probably be smaller, with nobody getting the large chunks founders get — and nobody taking the risks founders do.
  • It will be easier, of course, for Alphabet to sell off these subsidiaries, or even take them public or do other unusual things normally not done with corporate divisions. (It’s not impossible with corporate divisions, but it’s rare and it rarely is a bonanza for the staff.)
  • The subsidiaries will be freed from the matrix management of large companies. They will get their own legal departments, be able to set their own benefit structures and culture to some extent. Don’t underestimate the value of not having to work within a corporate legal or HR department when you’re trying to be a startup.
  • The companies can take risks that Google can’t take. For example, consider Uber, which simply violated local laws in some area to kickstart ride service. It’s much harder for a division of a large company to even try a stunt like that. For Uber, it worked — but it doesn’t always work.
  • The companies can also do things that would otherwise tarnish the Google brand. Huge as it is, the public has a natural distrust of Google, particularly on issues like privacy. While I think all robocar companies should work hard to protect privacy, being inside Google creates a whole new level of scrutiny and established principles. In the case of making robocars, they might one day injure somebody, and that is a scary thing for the big brands. If you live in fear of that all the time, you won’t win the race, either.
  • The CEOs of the new companies should have a lot more autonomy than they had before.
  • They still will have access to vast financial resources. If the new car company needs ten billion dollars to build a fleet of 400,000 taxis, or even needs to buy an existing car company, it’s not out of the question.
  • Being inside Google conveys a certain arrogance to people because it’s one of the world’s leading companies in many different ways. But sometimes it’s good not to be so cocky.
  • Out of fear, there are companies that won’t do business with Google. I once asked the folks at WAZE if we might get their data on accidents. I was told, “The one company we would be afraid to sell our data to is Google.” Of course, Google got WAZE’s data, but at a much higher price!
  • People will finally stop wondering if they are building a car just to show advertising to you while you ride.

Of course there are some negatives, mostly the things a company gives up by leaving Google:

  • Google brings with it vast, vast resources, not just in money. Google is also the world’s #1 mapping company, and in fact many of the early members of the Chauffeur team were people who worked on maps and streetview. Google’s world-leading computing resources also are useful for the big data projects and simulation a robocar team has to do.
  • There is also a giant talent pool at Google, though of course in all big companies, poaching top employees from within the company comes with risks of internal strife. The ability to even borrow top-notch people and resources is immensely valuable.
  • Google has fantastic benefits that are hard to duplicate in a small company. One suspects Alphabet’s subsidiaries will probably mirror a lot of Google’s policies, but there is a limit to what they can do. A Google badge does not just get you dozens of restaurants and a large commuter bus system, it gets you things like a great series of internal talks from technical and world leaders and many other events. Google spends a lot on keeping its people happy. A lot.
  • Google has a fun company culture, with lots of cool people. People make a lot of friendships with very smart friends there, even outside their groups.
  • Inside Google there is always the opportunity to switch to different projects, many of which are grand and sure to affect a lot of people, without getting a new job.
  • The projects at Google [x] have the personal interest of Larry Page and Sergey Brin. That’s been very useful to them within Google, but it also threatens the necessary independence of the CEOs of the new subsidiaries, who will still report to Alphabet. It remains to be seen if the founders can be sufficiently hands off.
  • Google is perhaps the world’s top brand. It is able to get things done. When you call companies and say, “I’m calling from the Google car team” they return your calls right away and jump at the chance to talk with you. Doors are opened that are closed to most startups. (Admittedly, the project at Google is now so famous that it might overcome this.)
  • Google’s power has allowed it to also do things like get laws made and changed around robocars; in fact this kickstarted the legal changes around the world. A small company will have a harder time.
  • Google’s power gives it a strange upper hand in negotiations with other players like big car companies. Big car companies are very used to being in charge of any talks they have with partners and suppliers.

I have no inside information on this deal — this is all based on lots of observation of public information about Google and non-confidential impressions from having been there. Some of this could be wrong. Alphabet might have Google re-sell some of its perks, like the bus system, to the other companies. It will certainly lend a hand where it makes a lot of sense. There is a fine line, though — the more “help” you give, the more “perfectly reasonable” conditions the help comes with, and soon you’re like a division again.

There have been no specific announcements about Chauffeur either. Will Google [x] be a subsidiary with Astro Teller as CEO, including the car? Will the car have its own company? Will Chris Urmson be CEO if so? Or will [x] continue as the research lab of Alphabet, while other “graduated” portions of it go off into their own companies? Specific mention was made of “Wing” which is doing drones, but not of other [x] projects. More news will surely come.

Overall, I think this is a strong decision. If Google were to fail in the race to robocars, I always felt that failure would come from one of two fronts — either the mistakes that big companies make because they are big, or the special hubris of Google and its #1 position. Now these two dangers are dimmed.

Automated Vehicles Symposium Days 1 and 2

From small beginnings, over 800 people are here at the Ann Arbor AUVSI/TRB Automated Vehicles symposium. Let’s summarize some of the news.

Test Track

Lots of PR about the new test track opening at the University of Michigan. I have not been out to see it, but it certainly is a good idea to share one of these rather than have everybody build their own, as long as you don’t want to test in secret.


Mark Rosekind, the NHTSA administrator, gave a pretty good talk for an official, though he continued the DoT’s bizarre promotion of V2V/DSRC. He said that they were even open to sharing the DSRC spectrum with other users (the other users have been chomping at the bit to get more unlicensed spectrum opened up, and this band, which remains unused, is a prime target; the DoT realizes it probably can’t protect it). Questions, however, clarified that he wants to demand evidence that the spectrum can be shared without interfering with the ability of cars to get a clear signal for safety purposes. Leaving aside the fact that the safety applications are not significant, this may signal a different approach — they may plan to demand this evidence, and when they don’t get it — because of course there will be interference — they will then use that as grounds to fight to retain the spectrum.

I say there will be interference because the genius of the unlicensed bands (like the 2.4GHz band where your 802.11b and Bluetooth devices work) was the idea that if you faced interference, it was your problem to fix, not the transmitter’s, as long as the transmitter stayed low power. A regime where you don’t interfere would be a very different band, one that could only be used a long distance from any road — i.e. nowhere that anybody lives.


The most disappointing session for everybody was the vendors’ session, particularly the report from GM. In the past GM has shown real results from their work. Instead we got a recap of ancient material. The other reports were better, but only a little. Perhaps it is a sign that the field is getting big, and people are no longer treating it like a research discipline where you share with your colleagues.


Chris Gerdes’ report on a Stanford ethics conference was good in that it went well past the ridiculous trolley problem question (what if the machine has to choose between harming two different humans) which has become the bane of anybody who talks about robocars. You can see my answer if you haven’t by now.

Their focus was on more real problems, like when you illegally cross the double yellow line to get around a stalled car, or what you do if a child runs into the street chasing a ball. I am not sure I liked Gerdes’ proposal — that the systems compute a moral calculus, putting weights on various outcomes and following a formula. I don’t think that’s a good thing to ask the programmers to do.

If we really do have a lot of this to worry about, I think this is a place where policymakers could actually do something useful. They could set up a board of some sort. A vendor/programmer who has an ethical problem to program would put it to the board, get a ruling, and program in that ruling with the knowledge that they would not be blamed, legally, for following it.

The programmers would know how to properly frame the questions, but they could also refine them. They would frame them differently than lay people would imagine, because they would know things. For example:

My vehicle encounters a child (99% confidence) who darts out from behind a parked van, and it is not possible to stop in time before hitting the child. I have an X% confidence (say 95%) that the oncoming lane is clear and a Y% confidence (90%) that the sidewalk is clear, though driving there would mean climbing a curb, which may injure my passenger. While on the sidewalk, I am operating outside my programming, so my risk of danger increases 100-fold. What should I do?

Let the board figure it out, and let them understand the percentages, and even come back with a formula on what to do based on X, Y and other numbers. Then the programmer can implement it and refine it.
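
To illustrate what “program in that ruling” might look like, here is a toy sketch in which the board-approved formula is just a function of those confidence numbers. The harm weights and the structure are entirely invented; a real board would surely produce something different.

```python
# Hypothetical encoding of a board-issued ruling as a fixed formula.
# Every weight below is invented for illustration, not a real policy.

def board_ruling(p_child, p_oncoming_clear, p_sidewalk_clear,
                 curb_injury_risk, offroad_risk_multiplier=100.0):
    """Return the action with the lowest expected harm, using harm
    weights the (hypothetical) board has signed off on."""
    HIT_CHILD, PASSENGER_INJURY, HEAD_ON = 1.0, 0.05, 0.8
    expected_harm = {
        "brake_straight": p_child * HIT_CHILD,
        "swerve_oncoming": (1.0 - p_oncoming_clear) * HEAD_ON,
        "swerve_sidewalk": (1.0 - p_sidewalk_clear) * HIT_CHILD
                           + curb_injury_risk
                           + 0.001 * offroad_risk_multiplier * PASSENGER_INJURY,
    }
    return min(expected_harm, key=expected_harm.get)

# The scenario above: child 99%, oncoming clear 95%, sidewalk clear 90%.
print(board_ruling(0.99, 0.95, 0.90, curb_injury_risk=0.02))
```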


For the first time, there was a panel about investment in the technology, with one car company, two VCs and a car-oriented family fund (Porsche). Lots more interest in the space, but still a reluctance to get involved in hardware, because it costs a lot, is uncertain, and takes a long time to generate a return.

Afternoon breakouts

I largely missed these. Many were just filled with more talks. I have suggested to conference organizers a rule that the breakout sessions be no more than 40% prepared talks, and the rest interactive discussion.

Wednesday starts with Chris Urmson of Google

Chris’ talk was perhaps the most anticipated one. (Disclaimer — I used to work for Chris on the Google team.) It had similarities to a number of his other recent talks at TED and ITS America, with lots of good video examples of the car’s perception system in operation. Chris also addressed this week’s hot topic in the press, namely the large number of times Google’s car fleet has been hit by other drivers in accidents that are clearly the fault of the other driver.

While some (including me) have speculated this might be because the car is unusual and distracting, Google’s analysis of the accidents strongly suggests that the frequency of small fender-bender accidents has been seriously underestimated. There are 6 million reported accidents in the US every year, and estimates from insurers and researchers suggested the real number might include another 6 million unreported ones. It’s now clear, based on Google’s experience, that the number of small accidents that go unreported is much higher.

Google thinks that is good news in several ways. First, it tells us just how distracted human drivers are, and how bad they are, and it shows that their car is doing even better than was first thought. The task of outperforming humans on safety may be easier than expected.

The anti-Urmson

Adriano Alessandrini has always been a provocative and controversial character at these events. His report on CityMobil2 (a self-driving shuttle bus that has run in several cities with real passengers) was deliberately presented as a contrast to Google’s approach. Google is building a car meant to drive existing roads, a very complex task. Alessandrini believes the right approach is to make the vehicle much simpler, and only run it on certified safe infrastructure (not mixed with cars) and at very low speeds. As much as I disagree with almost everything he says, he does have a point when it comes to the value of simplicity. His vehicles are serving real passengers, something few others can claim.

Public perception

We got to see a number of study results. Frankly, I have always been skeptical of the studies that report what the public thinks of future self-driving cars and how much they want them. In reality, only a tiny fraction of the 800 people at the conference, supposed experts in the field, probably have a really solid concept of what these future vehicles will look like. None of us truly know the final form. So I am not sure how you can ask the general public what they think of them.

Of greater interest are reports on what people think of today’s advanced features. For example, blind-spot warning is much more popular than I realized, and is changing the value of cars and what cars people will buy.


For Tuesday afternoon I attended a very interesting security session. I will write more about this later, particularly about a great paper on spoofing robocar sensors (I will await first publication of the paper by its author) but in general I feel there is a lot of work to be done here.

In another post I will sum up a new expression of my thoughts here, which I will describe as “Connected and Automated: Pick only one.” While most of the field seems to be raving about the values of connectivity, and that debate has some merit, I feel that if the value of connectivity (other than to the car’s HQ) is not particularly high, it does not justify the security risk that comes from it. As such, if you have a vehicle that can drive itself, that system should not be “on the internet” as it were, connecting to other cars or to various infrastructure services. It should only talk to its maker (probably over a verified and encrypted tunnel on top of the cellular data network) and it should frankly be a little scared even of talking to its maker.

I proposed this to the NHTSA administrator, and as a huge backer of V2V he could not give me an answer — he mostly wanted to talk about the perception of security rather than the security itself — but I think it’s an important question to be discussed.

Since many people don’t accept this, there are efforts to increase security. First of all, people are working to put in the security that always should have been in cars (they have almost none at present). Secondly, there are efforts at more serious security, with the lessons of the internet’s failures fresh in our minds. Efforts at provably correct algorithms are improving, and while nobody thinks you could build a provably correct self-driving system, there is some hope that the systems which parse inputs from outside could be made provably secure, and compartmentalized from other systems so that a compromise of one system would have a hard time reaching the driving system, where real danger could be done.

There were calls for standards, which I oppose — we are way too early in this game to know how to write the standards. Standards at best encode the conventional wisdom of 3 years ago, and make it hard to go beyond it. Not what we need now.

Nonetheless there is research going on to make this more secure, if it is to be done.

Automated Vehicles Symposium Day 0: When do robocars become cheaper than standard cars?

I’m in the Detroit area for the annual TRB/AUVSI Automated Vehicle Symposium, which starts tomorrow. Today, those in Ann Arbor attended the opening of the new test track at the University of Michigan. Instead, I was at a small event with a lot of good folks in downtown Detroit, sponsored by SAFE, which is looking to wean the USA off oil.

Much was discussed, but a particularly interesting idea was just how close we are getting to something I had put further in the future — robocars that are cheaper than ordinary cars.

Most public discussion of robocars has depicted them as costing much more than regular cars. That’s because the cars built to date have been standard cars modified by placing expensive computers and sensors on them. Many cars use the $75,000 Velodyne LIDAR and the similarly priced Applanix IMU/GPS, and most forecasts and polls have imagined the first self-driving cars as essentially a Mercedes with $10,000 added to the price tag to make it self-driving. After all, that’s how things like Adaptive Cruise Control and the like are sold.

Google is showing us an interesting vision with their 3rd-generation buggy-style car. That car has no steering wheel, brake pedal or gas pedal, and it is electric and small. It’s a car aimed at “Mobility on Demand.”

When people have asked me “how much extra will these cars cost,” my usual answer has been that while the cars might cost more, they will be available for use by the mile, where they can cost less per mile than owning a car does today — i.e. overall it will be cheaper. That’s in part because of the savings from sharing, and having vehicles go more miles in their lifetime. More miles in the life of a car at the same cost means a lower cost per mile, even if the car costs a little more.

The sensors cost money, but that cost is already in serious decline. We’re just a few years away from $250 LIDARs and even cheaper radar. Cameras are already cheap, and there are super cheap IMUs and GPS units already getting near the quality we need. Computers of course get cheaper every year.

This means we are not too far from the point where the cost of the sensors is less than the money saved by what you take out of the car. After all, having a steering wheel, gas and brakes costs money. Side mirrors cost money (ever had to replace one?). That fancy dashboard with all its displays and controls costs a lot of money, but almost everything it does in a robocar can be done by your tablet.

That said, you need a few extra things in your robocar. You need two steering motors and two braking systems. You need some more short-range sensors and a cellular radio. But there’s even more you can save, especially with time.

Because mobility on demand means you can make cars that are never used for anything but short urban trips (the majority of trips, as it turns out), you can save a lot more money on those cars. These cars need not be large or fast. They don’t need acceleration. They won’t ever go on the highway, so they don’t need to be safe at 60mph. Electric drive, as we discussed earlier, is great for these cars, and electric cars have far fewer parts than gasoline ones. Today, their batteries are too expensive, but everything else in the car is cheaper, so if you solve the battery cost using the methods I outlined on Saturday, we’re saving serious money. And small one- or two-person cars are inherently cheaper to boot.

Of course, you need to make highway cars, and long-range 4WD SUVs to take people skiing. But these only need be a fraction of the cars, and people who use a mix of cars will see a big saving.

For a long time, we’ve talked about some day also removing many of the expensive safety systems from cars. When the roads become filled with robocars, you can start talking about having so few accidents that you don’t need all the safety systems, or the 1/3 of vehicle weight that is attributable to passive safety. That day is still far away, though cars like the Edison2 Very Light Car have done amazing things even while meeting today’s crash tests. Companies like Zoox and other startups have for a while pushed visions of completely redesigned cars, some of them at lower cost. But this seems like it might come true sooner rather than later.

Evacuation in a hurricane

One participant asked how, if we only had 1/9th as many cars (as some people forecast, I suspect it’s closer to 1/4) we would evacuate sections of Florida or similar places when a hurricane is coming. I think the answer is a very positive one — simply enforce car pooling / ride sharing in the evacuation. While there is not a lot I think policymakers should do at this time, some simple mandates could help a lot in this arena. While people would not be able to haul as much personal property, it is very likely there would be more than enough seats available in robocars to evacuate a large population quickly if you fill all the seats in cars going out. Further, those cars can go back in to get more people if need be.

Filling those seats would actually get everybody out faster, because there would be far less traffic congestion and the roads would carry far more people per hour. In fact, that’s such a good idea it could even be implemented today. When there’s an evacuation, require everyone to use an app to register when they are almost ready to leave. If you have spare seats, you could not leave (within reason) until you picked up neighbours and filled the seats. With super-carpooling, everybody would get out very fast on much less congested roads. Those crossing the checkpoint on the way out with empty seats would be photographed and ticketed, unless the app allowed them to leave like that, or the app recorded that it tried to reach the server and failed, or there were other mitigating circumstances. (This is all hours before the storm, of course, before there is panic, when people will do whatever they can.) Some storms might be so bad the cars are at risk. In that case, if the road capacity is enough, people could move out all the cars too, to protect them. But in most cases, it’s the people that are the priority.
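
Here is a toy sketch of what the checkpoint rule might reduce to; the names and thresholds are all invented:

```python
# Invented sketch of the evacuation departure-gating rule described above.

def may_depart(spare_seats: int, waiting_neighbours: int,
               server_reachable: bool, minutes_waited: float,
               max_wait_minutes: float = 20.0) -> bool:
    """Leave with empty seats only if there is nobody left to pick up,
    you have waited a reasonable time, or the server could not be
    reached (a recorded mitigating circumstance)."""
    if not server_reachable:
        return True
    if spare_seats == 0 or waiting_neighbours == 0:
        return True
    return minutes_waited >= max_wait_minutes

print(may_depart(spare_seats=3, waiting_neighbours=5,
                 server_reachable=True, minutes_waited=5.0))  # False
```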

More tomorrow as the conference gets underway.

Will Robocars vastly increase battery life?

We know electric cars are getting better and likely to get popular even when driven by humans. Tesla, at its core, is a battery technology company as much as it’s a car company, and it is sometimes joked that the $85,000 Tesla with a $40,000 battery is like buying a battery with a car wrapped around it. (It’s also said that it’s a computer with a car wrapped around it, but that’s a better description of a robocar.)

Tesla did a lot of work on building cooling systems for standard cylindrical lithium-ion cells and was able to make a high-performance vehicle. The Model S also by default charges to only 80% of capacity, because battery life is hurt by charging all the way to full. In fact, charging to 3.92 volts (about 60% of capacity) is the sweet spot. Some of the other things that reduce battery life include:

  • Discharging too close to empty
  • Getting too warm while discharging
  • Getting too warm while charging, and in particular causing thermal expansion which creates physical damage
  • Even ordinary warmth is harmful when the vehicle is stored for long periods, particularly at high charge. The closer to freezing the better, and even storage above 25 degrees centigrade causes some loss.

The important, but little-reported, statistic for a battery is the total watt-hours you will be able to get out of it during its usable lifetime. This tells you the lifetime of the battery in miles, and the cost tells you the cost per mile. How important is this? If the Tesla $40,000 battery lasts you 150,000 miles and sells for $10,000 when done, the straight-line cost is 20 cents/mile — more than the cost of gasoline in most cars, and much more than the 3 cents/mile or less cost of electricity.
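
That straight-line arithmetic, as a trivial sketch for anyone who wants to plug in their own numbers:

```python
def battery_cost_per_mile(purchase_price, resale_price, lifetime_miles):
    """Straight-line cost per mile over the battery's usable life."""
    return (purchase_price - resale_price) / lifetime_miles

# The example from the text: $40,000 battery, $10,000 residual, 150,000 miles.
print(battery_cost_per_mile(40_000, 10_000, 150_000))  # 0.2 -> 20 cents/mile
```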

Humans will drive as humans want to drive, and it’s hard to change that. They will accelerate for both fun and to get ahead of other cars. They will take mixes of short trips and long trips. They don’t know how long their trips are and demand a flexible vehicle always ready for anything.

Electric robotaxis change that game. They will drive predictably, rarely demanding quick acceleration. A driver likes zippy fun; a passenger wants a gentle ride. They can go even further, and set their driving pattern based on the temperature of their batteries. Are we making the batteries too warm? Then “cool off,” literally. This applies both to fast starts and also to slowing down. Regenerative braking conserves energy and increases range, but doing it too hard heats the batteries. Starting to slow down sooner — especially if you have data on what traffic lights and traffic are doing — can make a big difference.
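
Here is a toy sketch of what temperature-aware driving limits could look like; the thresholds and scaling are invented for illustration, not any real battery controller’s values:

```python
# Invented sketch: scale back both acceleration and regenerative braking
# power as the pack warms, trading a little trip time for battery life.

def power_limits_kw(battery_temp_c: float,
                    accel_max_kw: float = 80.0,
                    regen_max_kw: float = 60.0):
    """Return (acceleration, regen) power limits for this temperature."""
    if battery_temp_c <= 30.0:
        scale = 1.0
    elif battery_temp_c >= 45.0:
        scale = 0.25  # "cool off," literally
    else:
        scale = 1.0 - 0.75 * (battery_temp_c - 30.0) / 15.0
    return accel_max_kw * scale, regen_max_kw * scale

print(power_limits_kw(38.0))  # (48.0, 36.0): a warm pack gets gentler driving
```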

Robotaxis can always use the sweet spot of the battery charge duty cycle.

  • You will rarely be sent a robotaxi that, in order to get you, needs to dig deep into its maximum range.
  • Often demand is predictable, so if need be, vehicles can be charged above 60% only when such demand is expected or is arising.
  • While robotaxis will prefer to charge at night when power is cheapest, they can charge any time to get back up to the optimal level.
  • As I’ve noted before, battery swap doesn’t work well for humans, but robots don’t mind making an appointment or driving out of their way for a swap. This makes it easy to use batteries only in the sweet spot, and to charge them only at night on cheap power.
  • If battery swap is not an option, there are many options to supplement range during peak demand. Vehicles can go to depots to pick up trunk batteries, battery trailers, or even slot-in units with small motorcycle engines and liquid fuel tanks. If this is cheaper than the alternatives, it’s an option.
  • When it gets hot, robotaxis can seek out the shade, or even places with cooling, to keep the batteries from being too warm.

Robotaxis don’t mind the loss of range all that much

As a battery ages, its capacity drops. Humans hate that — having bought a car with a 100-mile range, they won’t accept that it can now only do 60. For a human, that means it’s time to replace the battery. For a robotaxi, it just means you have a shorter range, and you don’t get sent on long-range trips. Or you may decide that while before, you only charged to 60% to get maximum battery life, now you charge more, knowing it will eat the remaining life, but getting the most out of the battery.

Of course, as the range drops, you run into another problem. You’re carrying around the extra weight of a battery with half the range, and it’s costing you energy to do that, especially in an ultralight car where the battery is the biggest component of the weight. (This also enters into the math of whether it makes sense to charge only to 60%.) Eventually the time comes when the battery is not practical. This is the time to sell it. Tesla and others are working to produce a home and grid storage market for used car batteries. In those applications, the weight doesn’t matter, just the cost for the remaining lifetime watt-hours. You care about the capacity, but you pay a market price for it.

Eventually, even this is not practical, and you scrap the battery to recycle the materials.

Typical predictions for lithium-ion run from 500 to 1,000 cycles. Tesla’s techniques seem to be beating that. With robotaxis, who knows just how many lifetime kWh we’ll be able to get out of these batteries, or perhaps even other chemistries. It turns out that human drivers like a chemistry that keeps its capacity as long as possible and then falls off a cliff. Slow decline is harder to sell — but slow-decline chemistries, like lithium iron phosphate and others, could make more sense for the robots that don’t care.

Grid storage?

It’s often suggested that electric cars could be used as grid storage. The problem is, with car batteries today, it costs around 15 cents to put a kWh into a battery and get it out. That means to be grid storage, the spot price on the grid needs to be the price you bought at, plus 15 cents, plus a margin to make it worth the trouble. Night power can get as low as 6 cents, so this does happen, but not as much as one might hope. The problem is that the grid’s peak demand is around 4 to 7pm, which is also a peak time for driving. That’s the last time most car owners will want to drain off their batteries to make a bit of money on the power. You will only do that if you know you won’t be using the car. For a robotaxi fleet, that might be the case. Of course, when selling power to the grid you will do so only at a rate that does not harm your battery or warm it up too much.
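
The decision reduces to a one-line comparison. Here is a sketch using the numbers above (the required margin is my own assumption):

```python
# Prices in dollars per kWh; the 15 cent round-trip cost is from the text,
# the required margin is an assumed parameter.

def worth_selling_to_grid(spot_price, purchase_price,
                          round_trip_cost=0.15, required_margin=0.05):
    """Sell a kWh back only if the spot price covers what you paid,
    the round-trip battery cost, and a profit margin."""
    return spot_price >= purchase_price + round_trip_cost + required_margin

print(worth_selling_to_grid(spot_price=0.30, purchase_price=0.06))  # True
print(worth_selling_to_grid(spot_price=0.18, purchase_price=0.06))  # False
```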

When the grid gets to a super peak, the price can really spike to attractive numbers. That’s because building extra power-plant capacity just for those rare days is expensive, so paying almost any price for stored power is better. Here we could talk about cars as storage, when we know their batteries are not going to be used. That’s even more true of batteries sitting in a battery-swap facility.

Some Q&A on Robocars via Singularity U

At Singularity U, we’re releasing a new video series answering questions, taken from Twitter, about our future technology topics. My segment is one of the first, and while regular readers of my blog will probably have seen me talk about most of these, here is the video:

You can follow the series link or subscribe to see the other videos as they come.

Facebook makes less than $10/user, can we find alternatives to advertising?

Facebook’s ARPU (average revenue per user, annualized) in the last quarter was just under $10, declining slightly in the USA and Canada, and a much lower 80 cents in the rest of the world. This is quite a bit less than Google’s, which hovers well over $40.

That number has been mostly growing (it shrank last quarter for the first time), but it’s fairly low. I can solidly say I would happily pay $10 a year — even $50 a year — for a Facebook which was not simply advertising-free, but more importantly was motivated only to please its customers and not advertisers. Why can’t I get that?

One reason is that it’s not that simple. If Facebook had to actually charge, it would not get nearly as many users as it does being free and ad-supported. It is frictionless to join and participate in FB, and that’s important with the natural monopolies that apply to social media. You dare not do anything that would scare away users.

Valley of Distraction

Being advertising-supported bends how Facebook operates, as it will any company. The most obvious thing is the annoying ads. Particularly annoying are the ads which show up in my feed, often marked with “Friend X liked this company.” I am starting to warn my friends to please not like the pages of anybody who buys ads on FB, because these ads are even more distracting than regular ads. Also extra distracting are ads which are “just off the bulls-eye,” which is to say they are directed at me (based on what FB knows about me) and thus likely to distract me, but which turn out to be completely useless. That’s worse than an ad which was not well aimed, and so doesn’t distract me at all with its uselessness. There is a “valley of distraction” when it comes to targeting ads:

  • Ads about things I am researching or may want to buy can be actually valuable to me, and also rewarding to the advertiser.
  • Ads about things I am interested in, but have already bought or would not buy via an ad are highly distracting but provide no value to the advertiser and negative value to me.
  • Ads about things I have no interest in tend to be only mildly distracting if they are off to the side and not blinky/flashy/pop-up style.

As sites get better at ad targeting, they generate more of the middle type.


Facebook’s need to monetize with advertising gives them strong incentives to be less protective of privacy. All social networks have an anti-privacy incentive, because the more they can get you to share with more people, the more they can make things happen on their site, and the more they can attract other users. But advertising adds to this. Without ads, FB would focus only on attracting and retaining customers by serving them, which would be good for users.

As the old saying goes, “If you’re not paying, you’re not the customer, you’re the product.” To give credit to many web companies, in spite of the reality of this, they actually work hard to reduce the truth of this statement, but they can never do it entirely.

How we monetize the web

When I created the first internet-based publication in 1989, I did it by selling subscriptions. There really wasn’t a way to do it with advertising at that time, but I lamented the switch that later came, which made advertising the overwhelmingly dominant means of monetizing the web. There are a few for-pay sites, but they are very few and specialized. I regret that forces pushed the web that way, and have always wished for a mechanism to make it easier, if not as easy, to monetize a web site with payment from customers. That’s why I promoted ideas like microrefunds, as well as selling books in flat-rate pools like my Library of Tomorrow back in 1992. (Fortunately this concept is now starting to get some traction in some areas, like Amazon’s Kindle Unlimited.)

I’m also very interested in the way that low-friction digital currencies like Bitcoin, and in particular Dogecoin, have made it workable to give donations and tips. Dogecoin started as a joke, but because people viewed it as a joke, they were willing to build easy, low-security means of tipping people. The lack of value attached to Dogecoin meant people were more willing to play around with such approaches. Perhaps Bitcoin’s greatest flaw is that because its transactions are irrevocable, you must make the engine that spends them secure, and in turn, that makes it harder to use. Easy to spend means easy to lose, or easy to steal, and that’s a rule that’s hard to break. The credit card system, in order to be easy to spend, solves the problem of being easy to steal by allowing chargebacks or other human fixes when problems occur. While we can do better at making digital money easy to spend and not quite so easy to steal, it’s hard to figure out how to be perfect at that without something akin to chargebacks.

To monetize the web without advertising, we need a truly frictionless money. Advertising provides money whose only friction is the annoyance of the advertising. To consume an ad-supported product you need do nothing but waste a little time. It’s a fairly passive thing. To consume a consumer-paid product, you must pay, and that creates three frictions:

  1. The spending itself — though if it’s low that should be tolerable
  2. The mental cost of thinking about the spending — which often exceeds the monetary cost on tiny transactions
  3. The user interface cost of your means of payment.

You can’t eliminate #1, of course, but you can realize that the monetary cost is less than the negatives introduced by advertising. Eliminating #2 and #3 in a secure way is the challenge, and indeed it is the challenge the microrefund concept was devised to address.
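
For illustration, here is a toy sketch of the microrefund idea as I envision it: charges go through with no confirmation step, and any charge can be reversed later with one click. The data model is invented for this example.

```python
# Toy model of microrefunds: frictionless charges, one-click reversal.
from dataclasses import dataclass, field

@dataclass
class MicrorefundAccount:
    balance_cents: float
    charges: list = field(default_factory=list)

    def charge(self, site: str, cents: float) -> None:
        # No confirmation step: the mental cost of deciding is deferred.
        self.balance_cents -= cents
        self.charges.append((site, cents))

    def refund(self, index: int) -> None:
        # One-click, no-questions-asked reversal after the fact.
        site, cents = self.charges[index]
        self.balance_cents += cents
        self.charges[index] = (site, 0.0)

acct = MicrorefundAccount(balance_cents=500.0)  # prepaid float
acct.charge("example-news-site", 2.0)           # read an article for 2 cents
acct.refund(0)                                  # reader felt it wasn't worth it
print(acct.balance_cents)                       # 500.0
```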

Will we pay the cost?

I think lots of people would pay $10/year for Facebook, particularly if alternatives also charged money. It’s a bargain at that price. But would people pay the $50 that Google makes from them? Again, I think Google is a bargain at that price, but for a lot of the world, that could be a lot of money, and that’s Google’s average revenue, not its revenue for me. (I click on ads so rarely that I think their revenue from me is actually a lot lower.)

I already bought my ticket on Iberia!

At the same time, Google’s ads are among the least painful. The ads on search are marked and isolated, and largely text-based. The only really bad ads Google serves are the ones in the valley of distraction in AdSense. As I wrote earlier, we are all constantly seeing ads for things we already bought.

And so, even though a Google search might only cost you a couple of pennies, I doubt we could move Google to payment supported even if we could remove all the friction from it.

This is not true for many other sites, though. Video sites would be a great target for frictionless payment, since showing a 30-second video ad before a 2-minute video is a terrible bargain, yet we see it happen frequently. There are many sites that do much worse than Google at monetizing themselves through advertising, and that would welcome a way to get more decent revenue via payment — though of course they can’t get greedy, or the friction of the payment itself will reduce their business.

In addition, there are zillions of small sites, and sites about topics of no commercial value, which can’t make much money from advertising at all. Some of these sites probably don’t even exist because they can’t become going concerns in the current regime of monetizing the web — what fraction of the web are we missing because we have only one practical way to monetize it?

Google not hitting Delphi, going to Austin -- Vislab sold

The press were all a-twitter about a report from Reuters that there had been a near miss between Delphi’s test car and one of Google’s, though it was quickly denied that anything happened.

The situation described, one car cutting off another, was a very unlikely one for several reasons:

  • All these cars are operated by trained safety drivers who are expected to be vigilant and take control at any sign of trouble.
  • In particular, special moves like a lane change would get extra vigilance. If something unusual happened (such as 2 cars going for the same spot) the safety drivers would be watching in advance, tracking what the car was doing, and pull back if the car’s own displays were not telling them it was going to do the right thing.

The safety drivers are not perfect of course, but an autonomous lane change is a rare event and one that most teams are still just testing, so they would be very unlikely to miss that the car was going to cut somebody else off.

Of course, situations will arise when two cars try to change into the same spot at the same time, and robocars will probably be fairly timid in these situations. The most likely situation if two robocars tried to take the same spot would be that both would back off and return to their original lane, and it will probably be that way until being so timid is not a workable strategy.

Robocars won’t be the lane-changing demons that some people (including myself, sometimes) are. Many human drivers are constantly trying to find the fastest lane, and we weave, often finding that the lane we move into seems to become the slowest. Part of that is our psychology.

Robocars won’t do this as much because their passengers will be occupied doing other things, and in most cases will not be in a super hurry. Those passengers will prefer a stable ride where they can get work done to a weaving ride with extra starts and stops. If we’re in a big hurry, we might ask the car to work extra hard to make the fastest trip, but this will be the exception.

When we do want that, the robocar will actually have a very nice model of just how fast each lane is moving. It won’t be fooled the way we are by seeing some lanes that seem to be faster when in fact neither lane is winning by that much. If they read licence plates to identify cars, they will get excellent appraisals of what’s going on. If one lane is truly faster, they will find it. On the other hand, they will be worse at the standard game of chicken needed to change lanes in heavy traffic, where you depend on the car you are moving in front of to slow down. They will know the physics, though, and if a lane change is needed, they will warn the passengers of high acceleration and perfectly slot into a smaller gap than you might be able to make.

In other news, Google has sent two cars to Austin, Texas to expand their testing ground. I don’t have a particular insight on why they selected Austin — I know that many towns and states regularly contact Google in the hope they might bring some cars to their area, though Texas has no modified laws yet.


I’ve written a few times about the work of Vislab in Parma, Italy. They have a focus on doing self-driving with machine vision, and did a famous cross-continent trek from Italy to Shanghai a few years ago, using a lead car to map the way and a following car self-driving, mostly with vision.

This lab was spun out of its university but now has been acquired by Ambarella, a company that specializes in video compression chips. One can see why Ambarella would want a computer vision lab — but it seems this might spell the end of their self-driving efforts, unless they are spun out.


A new paper is out in Nature Climate Change on the potential for robocars to reduce emissions, inspired by some of my research in this area. Sadly, it’s behind a paywall, but the author will give a talk at Nissan’s lab in Silicon Valley on July 15th at our local self-driving car meetup.

Just a couple more days to apply for our exponential tech startup incubator

At Singularity University, our students have been forming interesting ventures after the class for the past 6 years. This fall, we’ll also be starting an SU Startup Accelerator for nascent startups working on exponential technology to solve the world’s biggest problems. We will be accelerating both for-profit ventures (for the world’s greatest problems can also be the greatest opportunities) and $50K grants for non-profit efforts.

The application deadline is coming up on June 30th — so put together your application today if you can. Follow the link and apply via AngelList.

Replacing E-mail: The calendar as communications tool

I want to begin a series of thoughts on how E-mail has failed us and what we should do about it.

Yes, E-mail has failed, and not, as we thought, because it got overwhelmed with spam. There is tons of spam but we seem to be handling it. The problem might be better described as “too much signal” rather than the signal/noise ratio. There are three linked problems:

  1. There is just too much E-mail from people we actually have relationships with. Part of this is the over-reach of businesses, who think that because you bought a tube of toothpaste you should fill out a customer satisfaction survey and get the weekly bargains mail-out, but part of it is that there really are a lot of people who want to interact with you, and e-mail makes it very easy for them to do that, particularly to “cc” you on mail in which you have only a marginal interest.
  2. Because of problem 1, people are moving away from E-mail to other tools, particularly the younger generation. They (and we) are using Facebook mail and other social tools, instant messengers, texting and more.
  3. The volume means that you can’t handle it all. Important mails scroll off the main screen and are forgotten about. And some people are just not using their E-mail, so it is losing its place as the one universal and reliable way to send somebody a message.

One of the key differences in the new media is that they focus on person-to-person communications — while there are group tools, they don’t even have the concept of a “cc” or mailing list, or of sending to two people.

I’m going to write more on these topics in the future, but today I want to talk about

The shared calendar as the communications tool

I’ve been pushing people I work with to use the calendar as the means of telling me about anything that is going to happen at a specific time. If people send me an E-mail saying, “Can we talk at 3?” I say, “don’t tell me that in an E-mail. Create an event on your calendar and invite me to it. Put the details of the conversation into the calendar entry.”

In general, I want to create a pattern of communication where if any message you send would cause the other person to put something on their calendar, you instead communicate it through the calendar by creating an event that they are an attendee of.

Our calendar and E-mail tools need to improve to make this work better. When everybody uses a shared calendar like Google Calendar, it is a lot easier, but we need tools that make it just as easy when people don’t use the same calendar tool.
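
Under the hood, the plumbing for cross-platform invitations already exists: the iCalendar (ICS) format that mail and calendar tools exchange. Here is a minimal sketch of the payload behind “create an event and invite me to it” — all names and addresses are made up for illustration:

    # Build a minimal iCalendar (RFC 5545) invitation of the kind
    # calendar tools exchange. All names/addresses are hypothetical.
    import uuid
    from datetime import datetime, timedelta

    def make_invite(summary, start_utc, minutes, organizer, attendee, details=""):
        end = start_utc + timedelta(minutes=minutes)
        fmt = "%Y%m%dT%H%M%SZ"
        return "\r\n".join([
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "PRODID:-//example//calendar-sketch//EN",
            "METHOD:REQUEST",  # REQUEST marks this as an invitation
            "BEGIN:VEVENT",
            "UID:%s@example.com" % uuid.uuid4(),
            "DTSTAMP:" + datetime.utcnow().strftime(fmt),
            "DTSTART:" + start_utc.strftime(fmt),  # UTC, so no time zone confusion
            "DTEND:" + end.strftime(fmt),
            "SUMMARY:" + summary,
            "DESCRIPTION:" + details,  # the details of the conversation go here
            "ORGANIZER:mailto:" + organizer,
            "ATTENDEE;RSVP=TRUE:mailto:" + attendee,
            "END:VEVENT",
            "END:VCALENDAR",
        ])

    ics = make_invite("Call about the contract", datetime(2015, 7, 15, 22, 0), 30,
                      "alice@example.com", "brad@example.com",
                      "Agenda: sections 3 and 4 of the draft.")

Attach that to a message as text/calendar and most mainstream calendar clients will render it as an event with accept/decline buttons — which suggests cross-platform invitations are more a tooling problem than a format problem.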

When things do get into the calendar, you get a lot of nice benefits:

  • You are much less likely to forget about or miss the task or event
  • When you want to find the data on the event near the time of the event, you don’t have to hunt around for it — it is highlighted, in my case right on the home screen of my phone
  • If the event has a location, your phone typically is able to generate a map and even warn you when you need to leave based on traffic
  • If the event has a phone call/hangout/whatever, your devices can join that with a single click, no hunting for URLs or meeting codes — particularly while driving. (Google put in a tool to add one of their hangouts to any event in the calendar.)
  • Calendar events remove any confusion on time zones when people are in different zones.

Here are some features I want, some of which exist in current tools (particularly if you attach an ICS calendar entry to an E-mail) but which don’t yet work seamlessly.

  • Your email tool, when you are writing a message, should notice if you’re talking about an event that’s not already in your calendar, parse out dates and other data, and turn it into a calendar invitation (a toy sketch of this follows the list)
  • Likewise your receiving tool should parse messages and figure this out, since the sender might not have done that.
  • E-mails that create calendar events should be linked together, so that from your calendar you can read all the email threads around the event, find any associated files or other resources.
  • Likewise it should be easy to contact any others tied to a calendar event by any means, not just the planned means of communication. For example, a good calendar should have a system where I can be phoned or texted on my cell phone by any other member of the event during the time around the event, without having to reveal my cell phone number. How often have you been waiting for a conference call to have somebody say, “does anybody know John’s number? Let’s find where he is.”
  • When I accept a calendar entry from outside and confirm, that should give them some access to use that calendar entry as a means of communication, even across calendar and mail platforms.
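
The first two items are the hard ones. As a toy sketch of the sending-side check — real tools would need far better natural-language parsing than this hypothetical “at 3” matcher — the idea is just: scan the draft, guess a time, and offer to turn it into an invitation:

    # Toy sketch: spot "can we talk at 3?" style phrases in a draft
    # message and propose a calendar event. Real parsing is much harder.
    import re
    from datetime import datetime

    def guess_event_time(body, now=None):
        now = now or datetime.now()
        m = re.search(r"\bat (\d{1,2})(?::(\d{2}))?\b", body)
        if not m:
            return None  # nothing calendar-worthy found
        hour, minute = int(m.group(1)), int(m.group(2) or 0)
        if hour < 8:     # a bare "at 3" in office talk almost always means PM
            hour += 12
        return now.replace(hour=hour, minute=minute, second=0, microsecond=0)

    when = guess_event_time("Can we talk at 3?")
    if when:
        print("Propose an invitation for", when)
        # ...then hand off to something like make_invite() from the earlier sketch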

For example, when I book a flight or hotel or rent a car, the company should respond by putting that in my calendar. I might give them a token enabling that, or manually approve their invitation. Of course the confirmation numbers, links on how to change the reservation and more will be in the calendar entry. If the flight is delayed, they should be able to use this linkage to contact me — my calendar tool should know best where I am and the best ways to reach me — and push updates to me. When I get to the check-in desk, our shared calendar entry should make my phone and their computer immediately connect and make the process seamless.

When I approach the desk of a hotel, my phone should notice this, do the handshake and by the time I walk up they should say, “Good evening, Mr. Templeton, could you please sign this form? Here’s your room key, you’re in suite 1207.” (Of course, even better if I don’t have to sign the form and my phone, or any of the magstripe, chip or NFC cards I have in my wallet automatically become my room key.)

When you think this way, you start realizing that a surprisingly large amount of our E-mails are about events with times. And, as I wrote 8 years ago, most e-mails involve tasks, and E-mail and time management should be merged. Sadly my ideas of so long ago remain unrealized, and since then, E-mail has declined.

One caveat — if we do start using calendars for communication more, we must be able to prevent spam, and even over-use by people we know. We can’t do what we did with e-mail. Invitations to an event with just one or two people can be made easy — even automatic for those with authorization. Creating multi-person events needs to be a harder thing for people who aren’t whitelisted, though not impossible. The meaning of the word “invite” also needs to be more tightly understood. A solicitation for me to buy a ticket is not an invite.

Robocars and Ultracapacitors (and other energy sources)

A reader recently asked about the synergies between robocars and ultracapacitors/supercapacitors. It turns out they are not what you would expect, and it teaches some of the surprising lessons of robocars.

Ultracaps are electrical storage devices, like batteries, which can be charged and discharged very, very quickly. That makes them interesting for electric cars, because slow charging is the bane of electric cars. They also tend to support a very large number of charge and discharge cycles — they don’t wear out the way batteries do. Where you might get 1,000 or so cycles from a good battery, you could see several tens of thousands from an ultracap.

Today, ultracaps cost a lot more than batteries. Li-ion batteries (like those in the Tesla and almost everything else) are at $500/kWh of capacity and falling fast — some forecast $200 in just a few years, and it’s already cheaper in the Tesla. Ultracaps are $2,500 to $5,000 per kWh, though people are working to shrink that.

They are also bigger and heavier. They are cited at just 10 Wh/kg, on their way to 20 Wh/kg. That’s really heavy — Li-ion is an order of magnitude better at 120 Wh/kg, and also improving.

So with the ultracap, you are paying a lot of money and a lot of weight to get a super-fast recharge. It’s so much money that you could never justify it if not for the huge number of cycles. There are two big money numbers on a battery: the $/kWh of capacity, which determines range, and the lifetime cost per kWh delivered, which drives the economics. Lifetime $/kWh is actually quite important but mostly ignored, because people are so focused on range. An ultracap, at 5x the cost but 10x or 20x the cycles, actually wins on lifetime $/kWh. That means that while it will be short on range, a vehicle doing tons of short trips between places where it can quickly recharge can win on lifetime cost, and on wasted recharging time, since an ultracap recharges in seconds, not hours. That’s why one potential application is the shuttle bus, which goes a mile between stops and can recharge briefly at every stop.
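
The arithmetic is simple enough to show. A rough sketch using the figures above (and ignoring depth of discharge, efficiency losses and calendar aging):

    # Lifetime cost per kWh actually delivered, using the figures above.
    def lifetime_cost_per_kwh(price_per_kwh, cycles):
        return price_per_kwh / cycles

    print(lifetime_cost_per_kwh(500, 1000))    # Li-ion: $0.50 per delivered kWh
    print(lifetime_cost_per_kwh(2500, 20000))  # ultracap: $0.125 -- a 4x win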

How do robocars change the equation? In some ways it’s positive, but mostly it’s not.

  • Robocars don’t mind going out of their way to charge, at least not too far out of their way. Humans hate this. So you don’t need to place charging stations conveniently, and you can have a smaller number of them.
  • Robocars don’t care how long it takes to charge. The only issue is they are not available for service while charging. Humans on the other hand won’t tolerate much wait at all.
  • Robocars will eventually often be small single-person vehicles with very low weight compared to today’s cars. In fact, most of their weight might be battery if they are electric.
  • Users don’t care about the power train of a taxi or its energy source. Only the fleet manager cares, and the fleet manager is all about cost and efficiency and almost nothing else.

Now we see the bad news for the ultracap. Its main advantage is the fast recharge time, and robots don’t care much about that at all. The fleet manager does care about the downtime, but the cost of the downtime is not that high. You need more vehicles the more downtime you have during peak loads, but since vehicles wear out by the km, not the year, the only costs of having more vehicles are the interest and the storage (parking).

The interest cost is very low today. Consider a $20,000 vehicle. At 3%, you’re paying about $1.64 per day in interest. So 4 hours of recharge downtime (which only matters at peak times, when you need every vehicle) doesn’t cost very much — certainly not as much as the extra cost of an ultracap. The cost of parking is actually much more, but will be quite low in the beginning because these vehicles can park wherever they get the best rate, and the best rate is usually zero somewhere not too far away. That may change in time, to around $2/day for surface parking of mini-vehicles, but it’s free for now in most places.
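
As a toy model of that claim — the one-spare-per-five figure below is my assumption for illustration, not a fleet statistic:

    # Interest cost of idle capital, per the paragraph above.
    price, rate = 20000, 0.03
    daily_interest = price * rate / 365
    print(round(daily_interest, 2))  # ~$1.64 per vehicle per day

    # Hypothetical: 4 peak hours of charging downtime is 1/6 of the day;
    # if all of it hits peak demand, you need roughly 1 spare per 5 cars.
    print(round(daily_interest / 5, 2))  # ~$0.33/day -- far below an ultracap premium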

In addition to the high cost, the ultracap comes with two other big downsides. The first is the weight and bulk. Especially when a vehicle is small and mostly battery by weight, adding 200kg of storage backfires — you get diminishing returns from adding more in such vehicles. The other big downside is the short range. Even with the fast recharge, you would have to limit these vehicles to short cab hops in urban spaces of just a few miles, sending them off for a recharge after just a few rides.

A third disadvantage is that you need a special charging station to quick-charge an ultracap. While level 2 electric car charging stations are in the 7-10 kW range, and rapid chargers are in the 50-100 kW range, ultracap chargers want to be in the megawatt-or-more range. That’s a much more serious proposition, and a lot more work to build.
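
The charge-time arithmetic shows why. Assuming a hypothetical 10 kWh pack for a small urban vehicle:

    # Time to replace 10 kWh at each class of charger mentioned above.
    pack_kwh = 10.0
    for name, kw in [("level 2", 7.0), ("rapid DC", 50.0), ("megawatt ultracap", 1000.0)]:
        print("%s: %.1f minutes" % (name, pack_kwh / kw * 60))
    # level 2: 85.7 min; rapid DC: 12.0 min; megawatt: 0.6 min (36 seconds)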

Finally, while ultracaps don’t wear out very fast, they might still depreciate quickly the same way your computer does — because the technology keeps improving. So while your ultracap might last 20 years, you won’t want it any more compared to the cheaper, lighter, higher capacity one you can buy in the future. It can still work somewhere, like grid storage, but probably not in your car.

The fact that robocars don’t need fast refueling in convenient locations opens up all sorts of energy options. Natural gas, hydrogen, special biofuels and electricity all become practical even with gasoline’s 100 year headstart when it comes to deployment and infrastructure, and even sometimes in competition with gasoline’s incredible convenience and energy density. But what the robocar brings is not always a boon to every different form of energy storage.

One technique that makes sense for robocars (and taxis) is battery swap. Battery swap was a big failure for human driven cars, for reasons I have outlined in other posts. But robocars and taxis don’t mind coming back to a central station, or even making an appointment for a very specific time to do their swap. They don’t even mind waiting for other cars to get their swaps, and can put themselves into the swap station when told to — very precisely if needed. Here it’s a question of whether it’s cheaper to swap or just pay the interest and parking on more cars.

Ultracaps are also used to help with regenerative braking, since they can soak up power from hard regenerative braking faster than batteries. That’s mostly not a robocar issue, though in general robocars will brake less hard and accelerate less quickly — trying to give a smooth ride to their passengers rather than an exciting one — so this has less importance there too.

Still, for convenience, the first robocars will probably be gasoline and electric.

Google Accidents, Baidu Cars, Startups and more news roundup

2 months mostly on the road, so here’s a roundup of the “real” news stories in the field.

Google begins PR campaign and talks about accidents

As the world’s most famous company, Google doesn’t need to seek press and the Chauffeur project has kept fairly quiet, but it just opened a new web site which will feature monthly reports on the status of the project. The first report gives details of all the accidents in the project’s history, which we discussed earlier. A new one just took place in the last month, but like the others, it did not involve the self-driving software. Google’s cars continue to not cause any accidents, though they have been at the receiving end of a modestly high number of impacts, perhaps because they are a bit unusual.

The zero at-fault accident number is both impressive, and possibly involves a bit of luck. Perhaps it even raises unrealistic expectations of perfection, because I believe there will be at-fault accidents in the future for both Google and other teams. Most teams, when they were first building their vehicles, had minor accidents where cars hit curbs or obstacles on test tracks, but the track records of almost all teams since then are surprisingly good. One way that’s not luck, of course, is the presence of safety drivers ready to take the controls if something goes wrong. They are trained and experienced, though some day, being human, some of them will make mistakes.

Baidu to build a prototype

In November I gave a “Big Talk” for Baidu in Beijing on cars. Perhaps there is something about search engine companies, because Baidu has now made announcements about its own project. Like Google, Baidu has expertise in mapping and various AI techniques, and has the advice of Andrew Ng, whose career holds many parallels to that of Sebastian Thrun, who started Google’s project. (Though based on my brief conversations with Andrew, I don’t think he’s directly involved.)

Virginia opens test roads

The state of Virginia has designated 70 miles of roads for robocar testing. That’s a good start for testing by those working in that state, but it skirts what to me is a dangerous idea — the thought that there would be “special” roads for robocars designated by states or road authorities. The fantastic lesson of the Darpa grand challenges was the idea that the infrastructure remains stupid and the car becomes smart, so that the car can go anywhere once its builders are satisfied it can handle that road. So it’s OK to test on a limited set of roads but it’s also vital to test in as many situations as you can, so you need to get off that set of roads as soon as you can.

Zoox startup un-stealthed

Zoox is probably the first funded startup working on a real, fully automated robocar. They were recently funded by DFJ ventures and set up shop in rented space at the SLAC linear accelerator lab. Zoox was begun by Tim Kentley-Klay, a designer and entrepreneur from Australia, and he later joined forces with Jesse Levinson, a top researcher from Stanford’s self-driving car projects.

I’ve known about Zoox since it began and have had many discussions with them. They first got attention a while back with Tim’s designs, which are quite different from typical car designs and presume a fully functional robocar — the designs feature no controls for the humans, and in some cases don’t even have a windshield to see forward. (Indeed, they don’t have a “forward,” since an essential part of the design is to be symmetrical and move equally well in both directions, avoiding the need for some twists and turns.) I like many elements of the Zoox vision, though I think it is even more ambitious than Google’s, at least from a car design standpoint — audacious in a world where most of the players think Google is going too far.

You can see details in this report on Zoox from IEEE. I haven’t reported on Zoox out of FrieNDA courtesy — in fact, the early consultations with “Singularity University” described in the article were actually discussions with me.

Zoox is not the first small startup. Kyle Vogt’s “Cruise” has been at it a while, aiming at a much less ambitious supervised product, and truck platooning company Peloton has even simpler goals, but expect to see more startups enter the fray and fight with the big boys in the year to come.

Mercedes E Class

Speaking of supervised cruising, the report is that the 2016 Mercedes E Class will offer highway speed cruising in the USA. This has been on offer in Europe in the past. As I wrote earlier, I am less enthused about supervised cruising products and think they will not do tremendously well. Tesla’s update to offer the same in their cars will probably get the most attention.


The press continue to get super excited about things that aren’t real. In spite of many reports, Uber does not yet have a car cruising the streets of Pittsburgh, though there is reality to the report that Uber has “poached” a large fraction of the robotics research crew from CMU.

In addition, many stories reported that Tesla had “solved” the liability problem of robocars through the design of their lane change system. In their system (and in several other discussed designs — they did not come up with this) the car won’t change lanes until the human signals it is OK to do so, usually by something like hitting the turn signal indicator. The Tesla plan is for a supervised car, and in a supervised car all liability is already supposed to go to the human supervisor.

Changing lanes safely is surprisingly challenging, because there is always the chance somebody is zooming up behind you at high speed. That’s common when merging into a carpool lane, or on German autobahns. Most supervised cars have only forward sensing, but to change lanes safely you need to notice a car coming up fast from behind, and you need to see it quite a distance away. This requires special sensors, such as rear radar, which most cars don’t have. So the solution of having the human check the mirrors works well for now.

More and more stories keep getting excited about “connected car” technology, in particular V2V communications using DSRC. They even write that these technologies are essential for robocars, and it gets scary when people like the transportation secretary say this. I wish the press covering this would take the simple step of asking the top teams working on robocars whether they plan to depend on, or even make early use of, vehicle-to-vehicle communications. They would find the answers range from “no, not really” to a few vague instances of “yes, someday” from car companies that made corporate support commitments to V2V. The engineers don’t actually think the technology will be crucial. The fact that the people actually building robocars have at most a mild interest in V2V, while the people who staked their careers on V2V insist it’s essential, should suggest to the press that the truth is not quite what they are told.

Don't be fooled by robots falling down at Darpa Robotics Challenge

This weekend I went to Pomona, CA for the 2015 DARPA Robotics Challenge which had robots (mostly humanoid) compete at a variety of disaster response and assistance tasks. This contest, a successor of sorts to the original DARPA Grand Challenge which changed the world by giving us robocars, got a fair bit of press, but a lot of it was around this video showing various robots falling down when doing the course:

What you don’t hear in this video are the cries of sympathy from the crowd of thousands watching — akin to when a figure skater might fall down — or the cheers as each robot would complete a simple task to get a point. These cheers and sympathies were not just for the human team members, but in an anthropomorphic way for the robots themselves. Most of the public reaction to this video included declarations that one need not be too afraid of our future robot overlords just yet. It’s probably better to watch the DARPA official video which has a little audience reaction.

Don’t be fooled as well by the lesser-known fact that there was a lot of remote human tele-operation involved in the running of the course.

Check out my Gallery of Photos from the DARPA Robotics Challenge Finals.

What you also don’t see in this video is just how very far the robots have come since the first round of trials in December 2013. During those trials the amount of remote human operation was very high, and there weren’t a lot of great fall videos because the robots had tethers that would catch them if they fell. (These robots are heavy and many took serious damage when falling, so almost all testing is done with a crane, hoist or tether able to catch the robot during the many falls which do occur.)

We aren’t yet anywhere close to having robots that could do tasks like these autonomously, so for now the research is in making robots that can do tasks with more and more autonomy with higher level decisions made by remote humans. The tasks in the contest were:

  • Starting in a car, drive it down a simple course with a few turns and park it by a door.
  • Get out of the car — one of the harder tasks as it turns out, and one that demanded a more humanoid form
  • Go to a door and open it
  • Walk through the door into a room
  • In the room, go up to a valve with a circular handle and turn it 360 degrees
  • Pick up a power drill, and use it to cut a large enough hole in a sheet of drywall
  • Perform a surprise task — in this case throwing a lever on day one, and on day 2 unplugging a power cord and plugging it into another socket
  • Either walk over a field of cinder blocks, or roll through a field of light debris
  • Climb a set of stairs

The robots have an hour to do this, so they are often extremely slow, and yet, to the surprise of most, the audience — a crowd of thousands, with thousands more online — watched with fascination and cheering, even when robots would take one step per minute, pause at a task for several minutes, or get into a problem and spend 10 minutes being fixed by humans as a penalty.

Google Accidents and Deployment, Mercedes Trucks and more

Some headlines (I’ve been on the road and will have more to say soon.)

Google announces it will put new generation buggies on city streets

Google has done over 2.7 million km of testing with their existing fleet, they announced. Now, they will be putting their small “buggy” vehicle onto real streets in Mountain View. The cars will stick to slower streets and are NEVs that only go 25mph.

While this vehicle is designed for fully automatic operation, during the testing phase, as required, it will have a temporary set of controls for the safety driver to use in case of any problem. Google’s buggy, which still has no official name, has been built in a small fleet and has been operating on test tracks up to this point. Now it will need to operate among other road users and pedestrians.

Accidents with, but not caused by self-driving cars cause press tizzy.

The press were terribly excited when reports filed with the State of California indicated that there had been 4 accidents reported — 3 for Google and 1 for Delphi. Google reported a total of 11 accidents in 6 years of testing and over 1.5 million miles.

Headlines spoke loudly about the cars being in accidents, but buried in the copy was the fact that none of the accidents by any company were the fault of the software. Several took place during human driving, and the rest were accidents that were clearly the fault of the other party, such as being rear ended or hit while stopped.

Still, as some of the smarter press noticed, this is a higher rate of being in an accident than normal — in fact almost double. Human drivers are in an accident about every 250,000 miles, so the fleet should have had only about 6.
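
The expectation arithmetic, using the reported figures:

    # Expected accidents at the human rate, over Google's reported mileage.
    miles_driven = 1500000        # reported testing miles
    miles_per_accident = 250000   # rough human average, mostly minor dings
    expected = miles_driven / miles_per_accident
    print(expected, 11 / expected)  # 6 expected; ~1.8x observed -- almost double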

The answer may be that these vehicles are unusual and have “self-driving car” written on them. They may be distracting other drivers, making it more likely those drivers will make a mistake. In addition, many people have told me their thoughts on encountering a Google car on the road. “I thought about going in front of it and braking to see what it would do,” I’ve been told by many. Aside from the fact that this is risky and dickish, and would just cause the safety drivers to immediately disengage and take over, they all also said they didn’t actually do it, and experience in the cars shows that it’s very rare for other drivers to actually “test” the car.

But perhaps some people who think about it do distract themselves and end up in an accident. That’s not good, but it’s also something that should go away as the novelty of the cars decreases.

Mercedes and Freightliner test in Nevada

There was also lots of press about a combined project of Mercedes/Daimler and Freightliner to test a self-driving truck in Nevada. There is no reason that we won’t eventually have self-driving trucks, of course, and there are direct economic benefits for trucking fleets to not require drivers.

Self-driving trucks are not new off the public roads. In fact, the first commercial self-driving vehicles were mining trucks at the Rio Tinto mine in Australia. Small startup Peloton is producing a system to let truckers convoy, with the rear driver able to go hands-free. Putting trucks on regular roads is a big step, but it opens some difficult questions.

First, it is not wise to do this early on. Systems will not be perfect, and there will be accidents. You want your first accidents to be with something like Google’s buggy or a Prius, not with an 18-wheel semi-truck. “Your first is your worst” with software and so your first should be small and light.

Secondly, this truck opens up the jobs question much more than other vehicles do, since for cars the main goal is to replace amateur drivers, not professionals. Yes, cab drivers will slowly fade out of existence as the decades pass, but nobody grows up wanting to be a cab driver — it’s a job you fall into for a short time because it’s quick and easy work that doesn’t need much training. While other people build robots to replace workers, the developers of self-driving cars are mostly working on saving lives and increasing convenience.

Many jobs have been changed by automation, of course, and this will keep happening, and it will happen faster. Truck drivers are just one group that will face this, and they are not the first. On the other hand, the reality of robot job replacement is that while it has happened at a grand scale, there are more people working today than ever. People move to other jobs, and they will continue to do so. This may be little consolation for those who must go through the transition, but the other benefits of robocars are so large that it’s hard to imagine delaying them because of this. Jobs are important, but lives are even more important.

It’s also worth noting that today there is a large shortage of truck drivers, and as such the early robotic trucks will not be taking any jobs.

I’m more interested in tiny delivery “trucks” which I call “deliverbots.” For long haul, having large shared cargo vehicles makes sense, but for delivery, it can be better to have a small robot do the job and make it direct and personal.

New Sensors

The world of sensors continues to grow. This wideband software-based radar from a student team won a prize; it claims to produce a 3D image. Today’s automotive radars have long range but very low resolution. High-resolution radar could replace lidar if it gains enough resolution: radar sees farther, sees through fog, and measures speed directly — all areas where LIDAR falls short.

Also noteworthy is this article on getting centimeter GPS accuracy with COTS GPS equipment. They claim to be able to eliminate a lot of multipath error through random movements of the antennas. If true, it could be a huge localization breakthrough. GPS just isn’t good enough for robocar positioning: aside from the fact that it goes away in some locations, like tunnels, if you want to position your robocar with it and it alone, you need it to essentially never fail — and it does, even though modern techniques can get sub-centimeter accuracy.

That said, most other localization systems, including map and image based localization, benefit from getting good GPS data to keep them reliable. The two systems together work very well, and making either one better helps.

Transportation Secretary Foxx advances DoT plan

Secretary Foxx has been out writing articles and speaking in Silicon Valley about the department’s Beyond Traffic effort. They promise big promotion of robocars, which is good. Sadly, they also keep promoting the false idea that vehicle-to-vehicle communications are valuable and will play a significant role in the development of robocars. In my view, many inside the DoT staked their careers on V2V and so feel required to promote it, even though it has minimal compelling applications and may actually be rejected entirely by the robocar community because of security issues.

This debate is going to continue for a while, it seems.

Maps, maps, maps

Nokia has put its “Here” map division up for sale, and a large part of the attention seems to relate to their HD Maps project, aimed at making maps for self-driving. (HERE published a short interview with me on the value of these maps.)

It will be interesting to see how much money that commands. At the same time, TomTom, the 3rd mapping company, has announced it will begin making maps for self-driving cars — a decision they made in part because of encouragement from yours truly.

Uber dwarfs taxis

Many who thought Uber’s valuation is crazy came to that conclusion because they looked at the size of the Taxi industry. To the surprise of nobody who has followed Uber, they recently revealed that in San Francisco, their birthplace, they are now 3 times the size of the old taxi industry, and growing. It was entirely the wrong comparison to make. The same is true of robocars. They won’t just match what Uber does, they will change the world.

There’s more news to come, during a brief visit to home, but I’m off to play in Peoria, and then Africa next week!

Second musings on the Hugo Awards and the fix

Last week’s Hugo Awards crisis caused a firestorm even outside the SF community. I felt it time to record some additional thoughts beyond the summary of the many proposals I wrote earlier.

It’s not about the politics

I think all sides have made an error by bringing the politics and personal faults of either side into the mix. Making it about the politics legitimises the underlying actions for some. As such, I want to remove that from the discussion as much as possible. That’s why in the prior post I proposed an alternate history.

What are the goals of the award?

Awards are funny beasts. They are almost all given out by societies. The Motion Picture Academy does the Oscars, and the Worldcons do the Hugos. The Hugos, though, are overtly a “fan” award (unlike the Nebulas which are a writer’s award, and the Oscars which are a Hollywood pro’s award.) They represent the view of fans who go to the Worldcons, but they have always been eager for more fans to join that community. But the award does not belong to the public, it belongs to that community.

While the award is done with voting and ballots, I believe it is really a measurement, which is to say, a survey. We want to measure the aggregate opinion of the community on what the best of the year was. The opinions are, of course, subjective, but the aggregate opinion is an objective fact, if we could learn it.

In particular, I would venture we wish to know which works would get the most support among fans, if the fans had the time to fairly judge all serious contenders. Of course, not everybody reads everything, and not everybody votes, so we can’t ever know that precisely, but if we did know it, it’s what we would want to give the award to.

To get closer to that, we use a 2 step process, beginning with a nomination ballot. Survey the community, and try to come up with a good estimate of the best contenders based on fan opinion. This both honours the nominees but more importantly it now gives the members the chance to more fully evaluate them and make a fair comparison. To help, in a process I began 22 years ago, the members get access to electronic versions of almost all the nominees, and a few months in which to evaluate them.

Then the final ballot is run, and if things have gone well, we’ve identified what truly is the best loved work of the informed and well-read fans. Understand again, the choices of the fans are opinions, but the result of the process is our best estimate of a fact — a fact about the opinions.

The process is designed to help obtain that winner, and there are several sub-goals:

  • The process should, of course, get as close to the truth as it can. In the end, the most people should feel it was the best choice.
  • The process should be fair, and appear to be fair
  • The process should be easy to participate in, administer and to understand
  • The process should not encourage any member to misrepresent their true opinion on their ballot. If members lie on their ballots, how can we know the true aggregate of their opinions?
  • As such, ballots should be generated independently, and there should be very little “strategy” to the system which encourages members to falsely represent their views to help one candidate over another.
  • It should encourage participation, and the number of nominees has to be small enough that it’s reasonable for people to fairly evaluate them all

A tall order, when we add a new element — people willing to abuse the rules to alter the results away from the true opinion of the fans. In this case, we had this through collusion. Two related parties published “slates” — the analog of political parties — and their followers carried them out, voting for most or all of the slate instead of voting their own independent and true opinion.

This corrupts the system greatly because when everybody else nominates independently, their nominations are broadly distributed among a large number of potential candidates. A group that colludes and concentrates their choices will easily dominate, even if it’s a small minority of the community. A survey of opinion becomes completely invalid if the respondents collude or don’t express their true views. Done in this way, I would go so far as to describe it as cheating, even though it is done within the context of the rules.

Proposals that are robust against collusion

Collusion is actually fairly obvious if the group is of decent size. Their efforts stick out clearly in a sea of broadly distributed independent nominations. There are algorithms which make it less powerful. There are other algorithms that effectively promote ballot concentration even among independent nominators so that the collusion is less useful.

A wide variety have been discussed. Their broad approaches include:

  • Systems that diminish the power of a nominating ballot as more of its choices are declared winners. Effectively, the more you get of what you asked for, the less likely you are to get more of it. This mostly prevents a sweep of all nominations, and also increases diversity in the final result, reflecting the true diversity of the independent nominators. (A sketch of one such system follows this list.)
  • Systems which attempt to “maximize happiness,” which is to say try to make the most people pleased with the ballot by adding up for each person the fraction of their choices that won and maximizing that. This requires that nominators not all nominate 5 items, and makes a ballot with just one nomination quite strong. Similar systems allow putting weight on nominations to make some stronger than others.
  • Public voting, where people can see running tallies, and respond to collusion with their own counter-nominations.
  • Reduction of the number of nominations for each member, to stop sweeps.
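
To make the first approach concrete, here is a minimal sketch of sequential proportional approval voting — one well-known system of this family, not the specific proposal before any business meeting. A ballot that has already had k of its choices seated counts with weight 1/(k+1):

    # Sequential proportional approval voting: a sketch, not a proposal.
    from collections import Counter
    import random

    def spav(ballots, seats=5):
        # Each ballot is a set of nominees; once k of its choices are
        # seated, it counts with weight 1/(k+1), so a winning bloc has
        # less and less say over the remaining slots.
        winners = []
        for _ in range(seats):
            scores = Counter()
            for ballot in ballots:
                k = sum(1 for w in winners if w in ballot)
                for cand in ballot - set(winners):
                    scores[cand] += 1.0 / (k + 1)
            if not scores:
                break
            winners.append(max(scores, key=scores.get))
        return winners

    # 100 colluding ballots vs. 900 independents spread over a broad field:
    # the slate still lands a nominee or two, but can no longer sweep.
    random.seed(1)
    field = ["work%d" % i for i in range(40)]
    independents = [set(random.sample(field, 3)) for _ in range(900)]
    slate = [{"S1", "S2", "S3", "S4", "S5"}] * 100
    print(spav(slate + independents))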

The proposals work to varying degrees, but they all significantly increase the “strategy” component for an individual voter. It becomes the norm that if you have just a little information about what the most common popular choices will be, that your wisest course to get the ballot you want will be to deliberately remove certain works from your ballot.

Some members would ignore this and nominate honestly. Many, however, would read articles about strategy, and either practice it or wonder if they were doing the right thing. In addition to debates about collusion, there would be debates on how strategy affected the ballot.

Certain variants of multi-candidate STV help against collusion and have less strategy, but most of the methods proposed have a lot.

In addition, all the systems permit at least one, and as many as 2 or 3, slate-choice nominees onto the final ballot. While members will probably know which ones those are, this is still not desired. First of all, these placements displace other works which would otherwise have made the ballot. You could increase the size of the final ballot to compensate, but you would need to know how many slate choices will be on it.

It should be clear that when others do not collude, slate collusion is very powerful. In many political systems, it is actually considered a great result if a party with 20% of the voters gains 20% of the “victories.” Here, we have a situation with 2,000 nominators where just 100 colluding members can saturate some categories and get several entries into all of them, and with 10% (the likely amount in 2015) they can get a large fraction of them. As such, it is not proportional representation at all.

Fighting human attackers with human defence

Considering the risks of confusion and strategy with all these systems, I have been led to the conclusion that the only solid response to organized attackers on the nomination system is a system of human judgement. Instead of hard and fast voting rules, the time has come, regrettably, to have people judge whether the system is under attack, and give them the power to fix it.

This is hardly anything new, it’s how almost all systems of governance work. It may be a hubris to suggest the award can get by without it. Like the good systems of governance this must be done with impartiality, transparency and accountability, but it must be done.

I see a few variants which could be used. Enforcement would most probably be done by the Hugo Committee, which is normally a special subcommittee of the group running the Worldcon. However, it need not be them, and could be a different subcommittee, or an elected body.

While some of the variants I describe below add complexity, it is not necessary to adopt them all. One important thing about the rule of justice is that you don’t have to get it exactly precise. You get it in broad strokes and you trust people. Sometimes it fails. Mostly it works, unless you bring in the wrong incentives.

As such, some of these proposals work by not changing almost anything about the “user experience” of the system. You can do this with people nominating and voting as they always did, and relying on human vigilance to deflect attacks. You can also use the humans for more than that.

A broad rule against collusion and other clear ethical violations

The rule could be as broad as to prohibit “any actions which clearly compromise the honesty and independence of ballots.” There would be some clarifications, to indicate this does not forbid ordinary lobbying and promotion, but does prohibit collusion, vote buying, paying for memberships which vote as you instruct and similar actions. The examples would not draw hard lines, but give guidance.

Explicit rules about specific acts

The rule could be much more explicit, with less discretion, with specific unethical acts. It turns out that collusion can be detected by the appearance of patterns in the ballots which are extremely unlikely to occur in a proper independent sample. You don’t even need to know who was involved or prove that anybody agreed to any particular conspiracy.

The big challenge with explicit rules (which take 2 years to change) is that clever human attackers can find holes, and exploit them, and you can’t fix it then, or in the next year.

Delegation of nominating power or judicial power to a sub group elected by the members

Judicial power to fix problems with a ballot could fall to a committee chosen by members. This group would be chosen by a well-established voting system, similar to those discussed for the nomination. Here, proportional representation makes sense, so if a group is 10% of the members, it should have 10% of this committee. It won’t do them much good, though, if the others all oppose them. Unlike ballots, the delegates would be human beings, able to learn and reason. With 2,000 members and 50 members per delegate, there would be 40 on the judicial committee, and it could probably be trusted to act fairly with that many people. In addition, action could require some sort of supermajority; if a 2/3 supermajority were needed, attackers would need to be 1/3 of all members.

This council could perhaps be given only the power to add nominations — beyond the normal fixed count — and not to remove them. Thus if there are inappropriate nominations, they could only express their opinion on that, and leave it to the voters what to do with those candidates, including not reading them and not ranking them.

Instead of judicial power, it might be simpler to grant pure nominating power to delegates. Collusion is useless here, because in effect all members are now colluding about their different interests, but in an honest way. Unlike pure direct democracy, the delegates — not unlike an award jury — would be expected to listen to members (and even look at nominating ballots done by them), but charged with coming up with the best consensus on the goal stated above. Such jurors would not simply vote their preferences. They would swear to attempt to examine as many works as possible in their efforts. They would suggest works to others and expect them to be likely to look at them. They would expect to be heavily lobbied and promoted to, but as long as it’s pure speech (no bribes other than free books and perhaps some nice parties), they would be expected not to be fooled so easily by such efforts.

As above, a nominating body might also only start with a member nominating system and add candidates to it and express rulings about why. In many awards, the primary function of the award jury is not to bypass the membership ballot, but to add one or two works that were obscure and the members may have missed. This is not a bad function, so long as the “real ballot” (the one you feel a duty to evaluate) is not too large.

Transparency and accountability

There is one barrier to transparency: releasing preliminary results biases the electorate in the final ballot, which would remain a direct survey of members with no intermediaries — though still with the potential to look for attacks and corruption. There could also be auditors, who are barred from voting in the awards and are allowed to see all that goes on. Auditors might be people from the prior Worldcon or some other source, or fans chosen at random.

Finally, decisions could be appealed to the business meeting. This requires a business meeting after the Hugos. Attackers would probably always appeal any ruling against them. Appeals can’t alter nominations, obviously, or restore candidates who were eliminated.

Comprehensive plan

All the above requires the two year ratification process and could not come into effect (mostly) until 2017. To deal with the current cheating and the promised cheating in 2016, the following are recommended.

  1. Downplay the 2015 Hugo Award, perhaps with sufficient fans supporting this that all categories (including untainted ones) have no award given.
  2. Conduct a parallel award under a new system, and fête it like the Hugos, though they would not use that name.
  3. Pass new proposed rules including a special rule for 2016
  4. If 2016’s award is also compromised, do the same. However, at the 2016 business meeting, ratify a short-term amendment proposed in 2015 declaring the alternate awards to be the Hugo awards if run under the new rules, and discarding the uncounted results of the 2016 Hugos conducted under the old system. Another amendment would permit winners of the 2015 alternate award to say they are Hugo winners.
  5. If the attackers gave up, and 2016’s awards run normally, do not ratify the emergency plan, and instead ratify the new system that is robust against attack for use in 2017.

People get carsick as passengers? Shocking!

Earlier this week I was sent some advance research from the U of Michigan about car sickness rates for car passengers. I found the research of interest, but wish it had covered some questions I think are more important, such as how carsickness is changed by potentially new types of car seating, such as face to face or along the side.

To my surprise, there was a huge rush of press coverage of the study, which concluded that 6 to 12% of car passengers get a bit queasy, especially when looking down in order to read or work. While it was worthwhile to work up those numbers, the overall revelation was in the “Duh” category for me, I guess because it happens to me on some roads and I presumed it was fairly common.

Oddly, most of the press was of the “this is going to be a barrier to self-driving cars” sort, while my reaction was, “wow, that happens to fewer people than I thought!”

Having always known this, I am interested in the statistics, but to me the much more interesting question is, “what can be done about it?”

For those who don’t like to face backwards, the fact that so many are not bothered is a good sign — just switch seats.

Some activities are clearly better than others. While staring down at your phone or computer in your lap is bad during turns and bumps, it may be that staring up at a screen watching a video, with your peripheral vision very connected to the environment, is a choice that reduces the stress.

I also am interested in studying if there can be clues to help people reduce sickness. For example, the car will know of upcoming turns, and probably even upcoming bumps. It could issue tones to give you subtle clues as to what’s coming, and when it might be time to pause and look up. It might even be the case that audio clues could substitute for visual clues in our plastic brains.

The car, of course, should drive as gently as it can, and because the software does not need a tight suspension to feel the road, the ride can be smoother as well.

Another interesting thing to test would be having your tablet or phone deliberately tilt its display to give you the illusion you are looking at the fixed world when you look at it, or to have a little “window” that shows you a real world level so your eyes and inner ears can find something to agree on.

More advanced would be a passenger pod on hydraulic struts able to tilt with several degrees of freedom to counter the turns and bumps, and make them always be such that the forces go up and down, never side to side. With proper banking and tilting, you could go through a roundabout (often quite disconcerting when staring down) but only feel yourself get lighter and heavier.
