Automated Vehicles Symposium Days 1 and 2

From small beginnings, this event has grown: over 800 people are here at the Ann Arbor AUVSI/TRB Automated Vehicles Symposium. Let’s summarize some of the news.

Test Track

Lots of PR about the new test track opening at the University of Michigan. I have not been out to see it, but it certainly is a good idea to share one of these facilities rather than have everybody build their own, as long as you don’t want to test in secret.

NHTSA

Mark Rosekind, the NHTSA administrator, gave a pretty good talk for an official, though he continued the DoT’s bizarre promotion of V2V/DSRC. He said that they were even open to sharing the DSRC spectrum with other users. (Those other users have been chomping at the bit to get more unlicenced spectrum opened up, and this band, which remains unused, is a prime target; the DoT realizes it probably can’t protect it.) Questions, however, clarified that he wants to demand evidence that the spectrum can be shared without interfering with the ability of cars to get a clear signal for safety purposes. Leaving aside the fact that the safety applications are not significant, this may signal a different approach: they may plan to demand this evidence, and when they don’t get it — because of course there will be interference — they will use that as grounds to fight to retain the spectrum.

I say there will be interference because the genius of the unlicenced bands (like the 2.4GHz band where your 802.11b and Bluetooth work) was the idea that if you faced interference, it was your problem to fix, not the transmitter’s, as long as the transmitter stayed low power. A regime where you must not interfere would be a very different band, one that could only be used a long distance from any road — i.e. nowhere that anybody lives.

Manufacturers

The most disappointing session for everybody was the vendors’ session, particularly the report from GM. In the past GM has shown real results from its work; instead we got a recap of ancient material. The other reports were better, but only a little. Perhaps it is a sign that the field is getting big, and people are no longer treating it like a research discipline where you share with your colleagues.

Ethics

Chris Gerdes’ report on a Stanford ethics conference was good in that it went well past the ridiculous trolley problem question (what if the machine has to choose between harming two different humans), which has become the bane of anybody who talks about robocars. You can see my answer if you haven’t by now.

Their focus was on more real problems, like when you illegally cross the double yellow line to get around a stalled car, or what you do if a child runs into the street chasing a ball. I am not sure I liked Gerdes’ proposal — that the systems compute a moral calculus, putting weights on various outcomes and following a formula. I don’t think that’s a good thing to ask the programmers to do.

If we really do have a lot of this to worry about, I think this is a place where policymakers could actually do something useful. They could set up a board of some sort. A vendor or programmer who has an ethical problem to program would put it to the board and get a ruling, and could program in that ruling secure in the knowledge that they would not be blamed, legally, for following it.

The programmers would know how to properly frame the questions, but they could also refine them. They would frame them differently than lay people would imagine, because they would know things. For example:

My vehicle encounters a child (99% confidence) who darts out from behind a parked van, and it is not possible to stop in time before hitting the child. I have an X% confidence (say 95%) that the oncoming lane is clear and a Y% confidence (90%) that the sidewalk is clear, though driving there would mean climbing a curb, which may injure my passenger. On the sidewalk, I am operating outside my programming, so my risk of causing harm increases 100-fold. What should I do?

Let the board figure it out, and let them understand the percentages, and even come back with a formula on what to do based on X, Y and other numbers. Then the programmer can implement it and refine it.
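To make that concrete, here is a minimal sketch of how such a board-supplied ruling might be encoded once it comes back as a formula. The options, harm weights and the 100-fold off-road multiplier are hypothetical placeholders standing in for whatever the board actually decides; nothing here is a real ruling.

```python
# Hypothetical encoding of a board ruling: pick the option with the lowest
# expected harm, given the perception system's confidence in each path.

def expected_harm(p_clear, harm_if_occupied, harm_if_clear=0.0, risk_multiplier=1.0):
    """Expected harm of an option, given the confidence that the path is clear."""
    return risk_multiplier * ((1 - p_clear) * harm_if_occupied + p_clear * harm_if_clear)

def choose_maneuver(p_child, p_oncoming_clear, p_sidewalk_clear):
    options = {
        # Stay in lane and brake: harm occurs if the child detection is correct.
        "brake_in_lane": expected_harm(1 - p_child, harm_if_occupied=10.0),
        # Cross the double yellow: harm occurs if the oncoming lane is not clear.
        "cross_double_yellow": expected_harm(p_oncoming_clear, harm_if_occupied=8.0),
        # Mount the sidewalk: 100-fold off-programming risk, plus curb injury risk.
        "mount_sidewalk": expected_harm(p_sidewalk_clear, harm_if_occupied=10.0,
                                        harm_if_clear=0.5, risk_multiplier=100.0),
    }
    return min(options, key=options.get)

print(choose_maneuver(p_child=0.99, p_oncoming_clear=0.95, p_sidewalk_clear=0.90))
```

With these made-up weights the formula picks the oncoming lane; the point is that once the board publishes the weights, the programmer's job is implementation and refinement, not moral judgment.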

Investment

For the first time, there was a panel about investment in the technology, with one car company, two VCs and a car-oriented family fund (Porsche). Lots more interest in the space, but still a reluctance to get involved in hardware, because it costs a lot, is uncertain, and takes a long time to generate a return.

Afternoon breakouts

I largely missed these. Many were just filled with more talks. I have suggested to conference organizers a rule that the breakout sessions be no more than 40% prepared talks, and the rest interactive discussion.

Wednesday starts with Chris Urmson of Google

Chris’ talk was perhaps the most anticipated one. (Disclaimer — I used to work for Chris on the Google team.) It had similarities to a number of his other recent talks at TED and ITS America, with lots of good video examples of the car’s perception system in operation. Chris also addressed this week’s hot topic in the press, namely the large number of times Google’s car fleet has been hit by other drivers in accidents that are clearly the fault of the other driver.

While some (including me) have speculated this might be because the car is unusual and distracting, Google’s analysis of the accidents strongly suggests that we have seriously underestimated how common small fender-bender accidents are. There are 6 million reported accidents in the US every year, and estimates from insurers and researchers suggested the real number might include another 6 million unreported ones. It’s now clear, based on Google’s experience, that the number of small accidents that go unreported is much higher.

Google thinks that is good news in several ways. First, it tells us just how distracted human drivers are, and how bad they are, and it shows that their car is doing even better than was first thought. The task of outperforming humans on safety may be easier than expected.

The anti-Urmson

Adriano Alessandrini has always been an evocative and controversial character at these events. His report on CityMobil2 (a self-driving shuttle bus that has run in several cities with real passengers) was deliberately framed as a contrast to Google’s approach. Google is building a car meant to drive existing roads, a very complex task. Alessandrini believes the right approach is to make the vehicle much simpler, and only run it on certified safe infrastructure (not mixed with cars) and at very low speeds. As much as I disagree with almost everything he says, he does have a point when it comes to the value of simplicity. His vehicles are serving real passengers, something few others can claim.

Public perception

We got to see a number of study results. Frankly, I have always been skeptical of the studies that report what the public thinks of future self-driving cars and how much they want them. In reality, only a tiny fraction of the 800 people at the conference, supposed experts in the field, probably have a really solid concept of what these future vehicles will look like. None of us truly know the final form. So I am not sure how you can ask the general public what they think of them.

Of greater interest are reports on what people think of today’s advanced features. For example, blind-spot warning is much more popular than I realized, and is changing the value of cars and which cars people will buy.

Security

On Tuesday afternoon I attended a very interesting security session. I will write more about this later, particularly about a great paper on spoofing robocar sensors (I will await first publication of the paper by its author), but in general I feel there is a lot of work to be done here.

In another post I will sum up a new expression of my thoughts here, which I will describe as “Connected and Automated: Pick only one.” While most of the field seems to be raving about the value of connectivity, and that debate has some merit, I feel that the value of connectivity (other than to the car’s HQ) is not particularly high, and does not justify the security risk that comes with it. As such, if you have a vehicle that can drive itself, that system should not be “on the internet” as it were, connecting to other cars or to various infrastructure services. It should only talk to its maker (probably over a verified and encrypted tunnel on top of the cellular data network) and it should frankly be a little scared even of talking to its maker.
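As an illustration of what “only talk to its maker over a verified tunnel” could mean in practice, here is a minimal sketch of a TLS client that trusts only the maker’s own certificate authority rather than the general public CA store. The hostname and CA file path are hypothetical, and a real system would add mutual authentication and much more; this only shows the pinning idea.

```python
import socket
import ssl

# Hypothetical endpoint and CA file -- placeholders, not a real carmaker API.
MAKER_HOST = "telemetry.example-carmaker.com"
MAKER_CA_FILE = "/etc/car/maker_ca.pem"   # the only certificate authority we accept

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations(cafile=MAKER_CA_FILE)  # trust the maker's CA, nothing else
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((MAKER_HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=MAKER_HOST) as tls:
        # Even on this channel, treat whatever the server sends as untrusted input.
        tls.sendall(b"STATUS\r\n")
        reply = tls.recv(4096)
```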

I proposed this to the NHTSA administrator, and as a huge backer of V2V he could not give me an answer — he mostly wanted to talk about the perception of security rather than the security itself — but I think it’s an important question to be discussed.

Since many people don’t accept this, there are efforts to increase security. First of all, people are working to put in the security that always should have been in cars (they have almost none at present). Secondly, there are efforts at more serious security, with the lessons of the internet’s failures fresh in our minds. Efforts at provably correct algorithms are improving, and while nobody thinks you could build a provably correct self-driving system, there is some hope that the systems which parse inputs from outside could be made provably secure, and that they could be compartmentalized from other systems so that compromise of one system would have a hard time reaching the driving system, where real danger could be done.

There were calls for standards, which I oppose — we are way too early in this game to know how to write the standards. Standards at best encode the conventional wisdom of 3 years ago, and make it hard to go beyond it. Not what we need now.

Nonetheless, there is research going on to make this more secure, if it is to be done.

Automated Vehicles Symposium Day 0: When do robocars become cheaper than standard cars?

I’m in the Detroit area for the annual TRB/AUVSI Automated Vehicle Symposium, which starts tomorrow. Today, those in Ann Arbor attended the opening of the new test track at the University of Michigan. Instead, I was at a small event with a lot of good folks in downtown Detroit, sponsored by SAFE, which is looking to wean the USA off oil.

Much was discussed, but a particularly interesting idea was just how close we are getting to something I had put further in the future — robocars that are cheaper than ordinary cars.

Most public discussion of robocars has depicted them as costing much more than regular cars. That’s because the cars built to date have been standard cars modified by placing expensive computers and sensors on them. Many cars use the $75,000 Velodyne Lidar and the similarly priced Applanix IMU/GPS, and most forecasts and polls have imagined the first self-driving cars as essentially a Mercedes with $10,000 added to the price tag to make it self driving. After all, that’s how things like Adaptive Cruise Control and the like are sold.

Google is showing us an interesting vision with their 3rd generation buggy-style car. That car has no steering wheel, brake pedal or gas pedal, and it is electric and small. It’s a car aimed at “Mobility on Demand.”

When people have asked me “how much extra will these cars cost,” my usual answer has been that while the cars might cost more, they will be available for use by the mile, where they can cost less per mile than owning a car does today — i.e. that overall it will be cheaper. That’s in part because of the savings from sharing, and having vehicles go more miles in their lifetime. More miles in the life of a car at the same cost means a lower cost per mile, even if the car costs a little more.

The sensors cost money, but that cost is already in serious decline. We’re just a few years away from $250 Lidars and even cheaper radar. Cameras are already cheap, and there are super cheap IMUs and GPSs already getting near the quality we need. Computers of course get cheaper every year.

This means we are not too far from the point where the cost of the sensors is less than the money saved by what you take out of the car. After all, having a steering wheel, gas and brakes costs money. Side mirrors cost money (ever had to replace them?) That fancy dashboard with all its displays and controls costs a lot of money, but almost everything it does in a robocar can be done by your tablet.

That said, you need a few extra things in your robocar. You need two steering motors and two braking systems. You need some more short range sensors and a cell phone radio. But there’s even more you can save, especially with time.

Because mobility on demand means you can make cars that are never used for anything but short urban trips (the majority of trips, as it turns out) you can save a lot more money on those cars. These cars need not be large or fast. They don’t need acceleration. They won’t ever go on the highway so they don’t need to be safe at 60mph. Electric drive, as we discussed earlier, is great for these cars, and electric cars have far fewer parts than gasoline ones. Today, their batteries are too expensive, but everything else in the car is cheaper, so if you solve the battery cost using the methods I outlined Saturday we’re saving serious money. And small one or two person cars are inherently cheaper to boot.

Of course, you need to make highway cars, and long-range 4WD SUVs to take people skiing. But these only need be a fraction of the cars, and people who use a mix of cars will see a big saving.

For a long time, we’ve talked about some day also removing many of the expensive safety systems from cars. When the roads become filled with robocars, you can start talking about having so few accidents you don’t need all the safety systems, or the 1/3 of vehicle weight that is attributable to passive safety. That day is still far away, though cars like the Edison2 Very-Light-Car have done amazing things even while meeting today’s crash tests. Companies like Zoox and other startups have for a while pushed visions of completely redesigned cars, some of them at lower cost. But this seems like it might become true sooner rather than later.

Evacuation in a hurricane

One participant asked how, if we only had 1/9th as many cars (as some people forecast, I suspect it’s closer to 1/4) we would evacuate sections of Florida or similar places when a hurricane is coming. I think the answer is a very positive one — simply enforce car pooling / ride sharing in the evacuation. While there is not a lot I think policymakers should do at this time, some simple mandates could help a lot in this arena. While people would not be able to haul as much personal property, it is very likely there would be more than enough seats available in robocars to evacuate a large population quickly if you fill all the seats in cars going out. Further, those cars can go back in to get more people if need be.

Filling those seats would actually get everybody out faster, because there would be far less traffic congestion and the roads would carry far more people per hour. In fact, that’s such a good idea it could even be implemented today. When there’s an evacuation, require all to use an app to register when they are almost ready to leave. If you have spare seats, you could not leave (within reason) until you picked up neighbours and filled the seats. With super-carpooling, everybody would get out very fast on much less congested roads. Those crossing the checkpoint on the way out with empty seats would be photographed and ticketed unless the app allowed them to leave like that, or the app records that it tried to reach the server and failed, or other mitigating circumstances. (This is all hours before the storm, of course, before there is panic, when people will do whatever they can.) Some storms might be so bad the cars are at risk. In that case, if the road capacity is enough, people could move out all the cars too, to protect them. But in most cases, it’s the people that are the priority.
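As a rough sketch of the “fill every seat before you may leave” rule, here is a greedy matching of registered riders to households that reported spare seats. The data structures and names are invented for illustration; a real system would also handle locations, pickup routing and exceptions.

```python
# Greedy seat-filling sketch for a hypothetical evacuation carpool app.

def assign_riders(cars, riders):
    """cars: list of (car_id, spare_seats); riders: list of rider ids.
    Returns {car_id: [rider_ids]}, filling the largest cars first."""
    assignments = {car_id: [] for car_id, _ in cars}
    queue = list(riders)
    for car_id, seats in sorted(cars, key=lambda c: -c[1]):
        while seats > 0 and queue:
            assignments[car_id].append(queue.pop(0))
            seats -= 1
    return assignments

print(assign_riders([("car_a", 3), ("car_b", 1)], ["r1", "r2", "r3", "r4", "r5"]))
# {'car_a': ['r1', 'r2', 'r3'], 'car_b': ['r4']} -- r5 waits for the next released car
```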

More tomorrow as the conference gets underway.

Will Robocars vastly increase battery life?

We know electric cars are getting better and likely to get popular even when driven by humans. Tesla, at its core, is a battery technology company as much as it’s a car company, and it is sometimes joked that the $85,000 Tesla with a $40,000 battery is like buying a battery with a car wrapped around it. (It’s also said that it’s a computer with a car wrapped around it, but that’s a better description of a robocar.)

Tesla did a lot of work on building cooling systems for standard cylindrical Lithium-Ion cells and was able to make a high performance vehicle. The Model S also by default charges to only 80% of capacity because battery life is hurt by charging all the way to full. In fact, charging to 3.92 volts (about 60% of capacity) is the sweet spot. Some of the other things that reduce battery life include:

  • Discharging too close to empty
  • Getting too warm while discharging
  • Getting too warm while charging, and in particular causing thermal expansion which creates physical damage
  • Even ordinary warmth is harmful when the vehicle is stored for long periods, particularly at high charge. The closer to freezing the better, and even above 25 degrees centigrade causes some loss.

The important but little-reported statistic for a battery is the total watt-hours you will be able to get out of it during its usable lifetime. This tells you the lifetime of the battery in miles, and the cost tells you the cost per mile. How important is this? If the Tesla $40,000 battery lasts you 150,000 miles and sells for $10,000 when done, the straight-line cost per mile is 20 cents/mile — more than the cost of gasoline in most cars, and much more than the 3 cent/mile or less cost of electricity.
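The straight-line arithmetic behind that 20 cents/mile figure is simple enough to show directly:

```python
# Battery cost per mile, using the numbers from the paragraph above.
battery_cost = 40_000      # dollars for the pack
resale_value = 10_000      # dollars when retired to stationary storage
lifetime_miles = 150_000

cost_per_mile = (battery_cost - resale_value) / lifetime_miles
print(f"battery: ${cost_per_mile:.2f}/mile")   # $0.20/mile
print("electricity: roughly $0.03/mile or less")
```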

Humans will drive as humans want to drive, and it’s hard to change that. They will accelerate for both fun and to get ahead of other cars. They will take mixes of short trips and long trips. They don’t know how long their trips are and demand a flexible vehicle always ready for anything.

Electric robotaxis change that game. They will drive predictably, rarely ever demanding quick acceleration. A driver likes zippy fun; a passenger wants a gentle ride. They can go even further, and set their driving pattern based on the temperature of their batteries. Are we making the batteries too warm? Then “cool off,” literally. This applies both to fast starts and also to slowing down. Regenerative braking conserves energy and increases range, but doing it too hard heats the batteries. Starting to slow down sooner, especially with data on what traffic lights and traffic ahead are doing, can make a big difference.

Robotaxis can always use the sweet spot of the battery charge duty cycle.

  • You will rarely be sent a robotaxi that, in order to get you, needs to dig deep into its maximum range.
  • Often demand is predictable, so if need be, vehicles can be charged above 60% only when such demand is expected or is arising.
  • While robotaxis will prefer to charge at night when power is cheapest, they can charge any time to get back up to the optimal level
  • As I’ve noted before, battery swap doesn’t work well for humans, but robots don’t mind making an appointment or driving out of their way for a swap. This makes it easy to use batteries only in the sweet spot, and to charge them only at night on cheap power.
  • If battery swap is not an option, there are many options to supplement range during peak demand. Vehicles can go to depots to pick up trunk batteries, battery trailers, or even slot-in units with small motorcycle engines and liquid fuel tanks. If this is cheaper than the alternatives, it’s an option.
  • When it gets hot, robotaxis can seek out the shade, or even places with cooling, to keep the batteries from being too warm.
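To illustrate the kind of policy a fleet manager might run behind the list above (this is my own sketch, not anything from a real fleet), here is a trivial charge-target rule that keeps the pack near the 60% sweet spot and only charges higher when the forecast demand needs the extra range:

```python
# Illustrative charge-target policy for a robotaxi pack (all numbers assumed).

def charge_target(expected_trip_kwh, usable_kwh, sweet_spot=0.60, ceiling=0.95, floor=0.20):
    """Return the state-of-charge fraction to reach before the next dispatch."""
    needed = floor + expected_trip_kwh / usable_kwh   # finish the trip still above the floor
    return min(ceiling, max(sweet_spot, needed))

print(charge_target(expected_trip_kwh=8, usable_kwh=40))    # short hops: stay at 0.60
print(charge_target(expected_trip_kwh=25, usable_kwh=40))   # long trip forecast: 0.825
```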

Robotaxis don’t mind the loss of range all that much

As a battery ages, its capacity drops. Humans hate that — having bought a car with a 100 mile range they won’t accept that it can now only do 60. For a human, that means it’s time to replace the battery. For a robotaxi, that just means you have a shorter range, and you don’t get sent on long range trips. Or you may decide that while before you only charged to 60% to get maximum battery life, now you charge more, knowing it will eat the remaining life but getting the most out of the battery.

Of course, as the range drops, now you run into another problem. You’re carrying around the extra weight of battery for half the range, and it’s costing you energy to do that, especially in an ultralight car where the battery is the biggest component of the weight. (This also enters into the math of whether it makes sense to charge only to 60%.) Eventually the time comes that the battery is not practical. This is the time to sell it. Tesla and others are working to produce a home and grid storage market for used car batteries. In those applications, the weight doesn’t matter, just the cost for the remaining lifetime watt-hours. You care about the capacity, but you pay a market price for it.

Eventually, even this is not practical and you scrap to recycle the materials.

Typical predictions for Lithium-Ion run from 500 to 1,000 cycles. Tesla’s techniques seem to be beating that. With robotaxis, who knows just how many lifetime kwh we’ll be able to get out of these batteries, or perhaps even out of other chemistries. It turns out that human drivers like a chemistry that keeps its capacity as long as possible and then falls off a cliff. Slow decline is harder to sell — but slow-decline chemistries, like Lithium Iron Phosphate and others, could make more sense for the robots that don’t care.

Grid storage?

It’s often suggested that electric cars could be used as grid storage. The problem is, with car batteries today, it costs around 15 cents to put a kwh into a battery and get it out. That means to be grid storage, you need the spot price on the grid to be the price you bought at, plus 15 cents, plus a margin to make it worth the trouble. Night power can get as low as 6 cents, so this does happen, but not as much as one might hope. The problem is that the grid’s peak demand is around 4 to 7pm, which is also a peak time for driving. That’s the last time most car owners will want to drain off their batteries to make a bit of money on the power. You will only do that if you know you won’t be using the car. For a robotaxi fleet, that might be the case. Of course, you will sell power to the grid only at a rate that does not harm your battery or warm it up too much.
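The arbitrage arithmetic is the same one the paragraph describes; only the margin figure below is my own assumed number:

```python
# When is it worth selling a stored kWh back to the grid? (margin is an assumption)
buy_price = 0.06        # $/kWh, cheap night power
round_trip_cost = 0.15  # $/kWh of battery wear to store and release one kWh
margin = 0.03           # $/kWh, assumed profit needed to make it worthwhile

sell_threshold = buy_price + round_trip_cost + margin
print(f"worth selling only above ${sell_threshold:.2f}/kWh")   # about $0.24/kWh
```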

When the grid gets to a super peak, the price can really spike to attractive numbers. That’s because building extra power plant capacity just for those rare days is expensive, and so almost any price is better. Here we could talk about cars as storage, when we know their batteries are not going to be used. That’s even more true of batteries sitting in a battery swap facility.

Some Q&A on Robocars via Singularity U

At Singularity U, we’re releasing a new video series answering questions about our future technology topics that come from Twitter. My segment is one of the first, and while regular readers of my blog will probably have seen me talk about most of these, here is the video:

You can follow the series link or subscribe to see the other videos as they come.

Facebook makes less than $10/user, can we find alternatives to advertising?

Facebook’s ARPU (average revenue per user, annualized) in the last quarter was just under $10, declining slightly in the USA and Canada, and a much lower 80 cents in the rest of the world. This is quite a bit less than Google’s, which hovers well over $40.

That number has been mostly growing (it shrank last quarter for the first time) but it’s fairly low. I can solidly say I would happily pay $10 a year — even $50 a year — for a Facebook which was not simply advertising-free, but more importantly motivated only to please its customers and not advertisers. Why can’t I get that?

One reason is that it’s not that simple. If Facebook had to actually charge, it would not get nearly as many users as it does being free and ad-supported. It is frictionless to join and participate in FB, and that’s important with the natural monopolies that apply to social media. You dare not do anything that would scare away users.

Valley of Distraction

Being advertising-supported bends how Facebook operates, as it will any company. The most obvious thing is the annoying ads. Particularly annoying are the ads which show up in my feed, often marked with “Friend X liked this company.” I am starting to warn my friends to please not like the pages of anybody who buys ads on FB, because these ads are even more distracting than regular ads. Also extra distracting are ads which are “just off the bull’s-eye,” which is to say they are directed at me (based on what FB knows about me) and thus likely to distract me, but which turn out to be completely useless. That’s worse than an ad which was not well aimed and so doesn’t distract me at all with its uselessness. There is a “valley of distraction” when it comes to targeting ads:

  • Ads about things I am researching or may want to buy can be actually valuable to me, and also rewarding to the advertiser.
  • Ads about things I am interested in, but have already bought or would not buy via an ad are highly distracting but provide no value to the advertiser and negative value to me.
  • Ads about things I have no interest in tend to be only mildly distracting if they are off to the side and not blinky/flashy/pop-up style.

As sites get better at ad targeting, they generate more of the middle type.

Privacy

Facebook’s need to monetize with advertising gives them strong incentives to be less protective of privacy. All social networks have an anti-privacy incentive, because the more they can get you to share with more people, the more they can make things happen on their site, and the more they can attract other users. But advertising adds to this. Without ads, FB would focus only on attracting and retaining customers by serving them, which would be good for users.

As the old saying goes, “If you’re not paying, you’re not the customer, you’re the product.” To give credit to many web companies, in spite of the reality of this, they actually work hard to reduce the truth of this statement, but they can never do it entirely.

How we monetize the web

When I created the first internet-based publication in 1989, I did it by selling subscriptions. There really wasn’t a way to do it with advertising at that time, but I lamented the switch that later came, which has made advertising the overwhelmingly dominant means of monetizing the web. There are a few for-pay sites but they are very few and specialized. I lament that forces pushed the web that way, and have always wished for a mechanism to make it easy, even if not quite as easy, to monetize a web site with payment from customers. That’s why I promoted ideas like microrefunds as well as selling books in flat-rate pools like my Library of Tomorrow back in 1992. (Fortunately this concept is now starting to get some traction in some areas, like Amazon’s Kindle Unlimited.)

I’m also very interested in the way that low-friction digital currencies like Bitcoin and in particular Dogecoin have made it workable to give donations and tips. Dogecoin started as a joke, but because people viewed it as a joke, they were willing to build easy and low-security means of tipping people. The lack of value attached to Dogecoin meant people were more willing to play around with such approaches. Perhaps Bitcoin’s greatest flaw is that because its transactions are irrevocable, you must make the engine that spends them secure, and in turn, that makes it harder to use. Easy to spend means easy to lose, or easy to steal, and that’s a rule that’s hard to break. The credit card system, in order to be easy to spend, solves the problem of being easy to steal by allowing chargebacks or other human fixes when problems occur. While we can do better at making digital money easy to spend and not quite so easy to steal, it’s hard to figure out how to be perfect at that without something akin to chargebacks.

To monetize the web without advertising, we need a truly frictionless money. Advertising provides a money whose only friction is the annoyance of the advertising. To consume an ad-supported product you need do nothing but waste a little time. It’s a fairly passive thing. To consume a consumer-paid product, you must pay, and that creates three frictions:

  1. The spending itself — though if it’s low that should be tolerable
  2. The mental cost of thinking about the spending — which often exceeds the monetary cost on tiny transactions
  3. The user interface cost of your means of payment.

You can’t eliminate #1 of course, but you can realize that the monetary cost is less than the negatives introduced by advertising. Eliminating #2 and #3 in a secure way is the challenge, and indeed it is the challenge which I devised the microrefund concept to address.

Will we pay the cost?

I think lots of people would pay $10/year for Facebook, particularly if alternatives also charged money. It’s a bargain at that price. But would people pay the $50 that Google makes from them? Again, I think Google is a bargain at that price, but for a lot of the world, that could be a lot of money, and that’s Google’s average revenue, not its revenue for me. (I click on ads so rarely that I think their revenue from me is actually a lot lower.)

I already bought my ticket on Iberia!

At the same time, Google’s ads are among the least painful. The ads on search are marked and isolated, and largely text based. The only really bad ads Google is doing are the ones in the valley of distraction in AdSense. As I wrote earlier, we are all constantly seeing ads for things we already bought.

And so, even though a Google search might only cost you a couple of pennies, I doubt we could move Google to payment supported even if we could remove all the friction from it.

This is not true for many other sites, though. Video sites would be a great target for frictionless payment, since showing a 30 second video ad to watch a 2 minute video is a terrible bargain, yet we see it happen frequently. There are many sites that do much worse than Google at monetizing themselves through advertising, and that would welcome a way to get more decent revenues via payment — though of course they can’t get greedy or the friction of the payment itself will reduce their business.

In addition, there are zillions of small sites and sites about topics of no commercial value that can’t make much money from advertising at all. Some of these sites probably don’t even exist because they can’t become going concerns in the current regime of monetizing the web — what fraction of the web are we missing because we have only one practical way to monetize it?

Google not hitting Delphi, going to Austin -- Vislab sold

The press were all a-twitter about a report from Reuters that there had been a near miss between Delphi’s test car and one of Google’s, though it was quickly denied that anything happened.

The situation described, one car cutting off another, was a very unlikely one for several reasons:

  • All these cars are operated by trained safety drivers who are expected to be vigilant and take control at any sign of trouble.
  • In particular, special moves like a lane change would get extra vigilance. If something unusual happened (such as 2 cars going for the same spot) the safety drivers would be watching in advance, tracking what the car was doing, and pull back if the car’s own displays were not telling them it was going to do the right thing.

The safety drivers are not perfect of course, but an autonomous lane change is a rare event and one that most people are still just testing, so they would be very unlikely to miss that the car was going to cut somebody else off.

Of course, situations will arise when two cars try to change into the same spot at the same time, and robocars will probably be fairly timid in these situations. The most likely situation if two robocars tried to take the same spot would be that both would back off and return to their original lane, and it will probably be that way until being so timid is not a workable strategy.

Robocars won’t be the lane-changing demons that some people (including me, sometimes) are. Many human drivers are constantly trying to find the fastest lane and we weave, often finding the lane we move into seems to become the slowest. Part of that is our psychology.

Robocars won’t do this as much because their passengers will be occupied doing other things, and in most cases will not be in a super hurry. Those passengers will prefer a stable ride where they can get work done to a weaving ride with extra starts and stops. If we’re in a big hurry, we might ask the car to try to work extra hard to make the fastest trip but this will be the exception.

When we do want that, the robocar will actually have a very nice model of just how fast each lane is moving. It won’t be fooled the way we are by seeing lanes that seem to be faster when in fact neither lane is winning by that much. If they read licence plates to identify cars, they will get excellent appraisals of what’s going on. If one lane is truly faster they will find it. On the other hand, they will be worse at the standard game of chicken needed to change lanes in heavy traffic, where you depend on the car you are moving in front of to slow down. They will know the physics, though, and if a lane change is needed, they will warn the passengers of high acceleration and precisely fit into a smaller gap than you might be able to make.
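A crude sketch of that lane-speed model: average the recent speeds of the vehicles the perception system has tracked in each lane. The tracking input is assumed to come from the car’s own sensors; a real system would weight observations by recency and distance.

```python
from collections import defaultdict

def lane_speeds(tracked_vehicles):
    """tracked_vehicles: iterable of (lane_index, speed_m_s) observations.
    Returns the mean observed speed per lane."""
    totals = defaultdict(lambda: [0.0, 0])
    for lane, speed in tracked_vehicles:
        totals[lane][0] += speed
        totals[lane][1] += 1
    return {lane: total / count for lane, (total, count) in totals.items()}

print(lane_speeds([(0, 24.0), (0, 26.0), (1, 18.0), (1, 19.5)]))
# {0: 25.0, 1: 18.75} -- lane 0 really is faster here
```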

In other news, Google has sent two cars to Austin, Texas to expand their testing ground. I don’t have a particular insight on why they selected Austin — I know that many towns and states regularly contact Google in the hope they might bring some cars to their area, though Texas has no modified laws yet.

Vislab

I’ve written a few times about the work of Vislab in Parma, Italy. They have a focus on doing self-driving with machine vision, and did a famous cross-continent trek from Italy to Shanghai a few years ago, using a lead car to map the way and a following car self-driving, mostly with vision.

This lab was spun out of its university but now has been acquired by Ambarella, a company that specializes in video compression chips. One can see why Ambarella would want a computer vision lab — but it seems this might spell the end of their self-driving efforts, unless they are spun out.

Emissions

A new paper is out in Nature Climate Change on the potential for robocars to reduce emissions, inspired by some of my research in this area. Sadly, it’s behind a paywall, but the author will give a talk at Nissan’s lab in Silicon Valley on July 15th at our local self-driving car meetup.

Just a couple more days to apply for our exponential tech startup incubator

At Singularity University, our students have been forming interesting ventures after the class for the past 6 years. This fall, we’ll also be starting an SU Startup Accelerator for nascent startups working on exponential technology to solve the world’s biggest problems. We will be accelerating both for-profit ventures (for the world’s greatest problems can also be the greatest opportunities) and non-profit efforts, which will receive $50K grants.

The application deadline is coming up on June 30th — so pull together your application today if you can. Follow the link and apply via AngelList.

Replacing E-mail: The calendar as communications tool

I want to begin a series of thoughts on how E-mail has failed us and what we should do about it.

Yes, E-mail has failed, and not, as we thought, because it got overwhelmed with spam. There is tons of spam but we seem to be handling it. The problem might be better described as “too much signal” rather than the signal/noise ratio. There are three linked problems:

  1. There is just too much E-mail from people we actually have relationships with. Part of this is the over-reach of businesses, who think that because you bought a tube of toothpaste you should fill out a customer satisfaction survey and get the weekly bargains mail-out, but part of it is there really are a lot of people who want to interact with you, and e-mail makes it very easy for them to do that, particularly to “cc” you on mail in which you have only a marginal interest.
  2. Because of problem 1, people are moving away from E-mail to other tools, particularly the younger generation. They (and we) are using Facebook mail and other social tools, instant messengers, texting and more.
  3. The volume means that you can’t handle it all. Important mails scroll off the main screen and are forgotten about. And some people are just not using their E-mail, so it is losing its place as the one universal and reliable way to send somebody a message.

One of the key differences the new media have is that they focus on person-to-person communications — while there are group tools, they don’t even have the concept of a “cc” or mailing list, or even sending to two people.

I’m going to write more on these topics in the future, but today I want to talk about

The shared calendar as the communications tool

I’ve been pushing people I work with to use the calendar as the means of telling me about anything that is going to happen at a specific time. If people send me an E-mail saying, “Can we talk at 3?” I say, “don’t tell me that in an E-mail. Create an event on your calendar and invite me to it. Put the details of the conversation into the calendar entry.”

In general, I want to create a pattern of communication where if any message you send would cause the other person to put something on their calendar, you instead communicate it through the calendar by creating an event that they are an attendee of.

Our calendar and E-mail tools need to improve to make this work better. When everybody uses a shared calendar like Google Calendar, it is a lot easier, but we need tools that make it just as easy when people don’t use the same calendar tool.

When things do get into the calendar, you get a lot of nice benefits:

  • You are much less likely to forget about or miss the task or event
  • When you want to find the data on the event near the time of the event, you don’t have to hunt around for it — it is highlighted, in my case right on the home screen of my phone
  • If the event has a location, your phone typically is able to generate a map and even warn you when you need to leave based on traffic
  • If the event has a phone call/hangout/whatever, your devices can join that with a single click, no hunting for URLs or meeting codes — particularly while driving. (Google put in a tool to add one of their hangouts to any event in the calendar.)
  • Calendar events remove any confusion on time zones when people are in different zones.

Here are some features I want, some of which exist in current tools (particularly if you attach an ICS calendar entry to an E-mail) but which don’t yet work seamlessly.

  • Your email tool, while you are writing a message, should notice if you’re talking about an event that’s not already in your calendar, and parse out dates and other data and turn it into a calendar invitation
  • Likewise your receiving tool should parse messages and figure this out, since the sender might not have done that.
  • E-mails that create calendar events should be linked together, so that from your calendar you can read all the email threads around the event, find any associated files or other resources.
  • Likewise it should be easy to contact any others tied to a calendar event by any means, not just the planned means of communication. For example, a good calendar should have a system where I can be phoned or texted on my cell phone by any other member of the event during the time around the event, without having to reveal my cell phone number. How often have you been waiting for a conference call to have somebody say, “does anybody know John’s number? Let’s find where he is.”
  • When I accept a calendar entry from outside and confirm, that should give them some access to use that calendar entry as a means of communication, even across calendar and mail platforms.

For example, when I book a flight or hotel or rent a car, the company should respond by putting that in my calendar. I might give them a token enabling that, or manually approve their invitation. Of course the confirmation numbers, links on how to change the reservation and more will be in the calendar entry. If the flight is delayed, they should be able to use this linkage to contact me — my calendar tool should know best where I am and the best ways to reach me — and push updates to me. When I get to the check-in desk, our shared calendar entry should make my phone and their computer immediately connect and make the process seamless.
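For the curious, what the airline would push is just a standard iCalendar (RFC 5545) object, the same format as an .ics attachment today. Here is a minimal sketch of generating one; the airline name, URL and confirmation code are invented:

```python
from datetime import datetime, timezone

def flight_invite(flight_no, depart_utc, arrive_utc, confirmation):
    """Build a minimal iCalendar invitation for a flight (illustrative values only)."""
    stamp = datetime.now(timezone.utc)
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//ExampleAir//Booking//EN",
        "METHOD:REQUEST",
        "BEGIN:VEVENT",
        f"UID:{confirmation}@exampleair.example",
        f"DTSTAMP:{stamp:%Y%m%dT%H%M%SZ}",
        f"DTSTART:{depart_utc:%Y%m%dT%H%M%SZ}",
        f"DTEND:{arrive_utc:%Y%m%dT%H%M%SZ}",
        f"SUMMARY:Flight {flight_no}",
        f"DESCRIPTION:Confirmation {confirmation}\\nManage: https://air.example/booking",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(flight_invite("EX123",
                    datetime(2015, 8, 1, 16, 30, tzinfo=timezone.utc),
                    datetime(2015, 8, 1, 19, 45, tzinfo=timezone.utc),
                    "ABC123"))
```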

When I approach the desk of a hotel, my phone should notice this, do the handshake and by the time I walk up they should say, “Good evening, Mr. Templeton, could you please sign this form? Here’s your room key, you’re in suite 1207.” (Of course, even better if I don’t have to sign the form and my phone, or any of the magstripe, chip or NFC cards I have in my wallet automatically become my room key.)

When you think this way, you start realizing that a surprisingly large amount of our E-mails are about events with times. And, as I wrote 8 years ago, most e-mails involve tasks, and E-mail and time management should be merged. Sadly my ideas of so long ago remain unrealized, and since then, E-mail has declined.

One caveat — if we do start using calendars for communication more, we must be able to prevent spam, and even over-use by people we know. We can’t do what we did with e-mail. Invitations to an event with just one or two people can be made easy — even automatic for those with authorization. Creating multi-person events needs to be a harder thing for people who aren’t whitelisted, though not impossible. The meaning of the word “invite” also needs to be more tightly understood. A solicitation for me to buy a ticket is not an invite.

Robocars and Ultracapacitors (and other energy sources)

A reader recently asked about the synergies between robocars and ultracapacitors/supercapacitors. It turns out they are not what you would expect, and it teaches some of the surprising lessons of robocars.

Ultracaps are electrical storage devices, like batteries, which can be charged and discharged very, very quickly. That makes them interesting for electric cars, because slow charging is the bane of electric cars. They also tend to support a very large number of charge and discharge cycles — they don’t wear out the way batteries do. Where you might get 1,000 or so cycles from a good battery, you could see several tens of thousands from an ultracap.

Today, ultracaps cost a lot more than batteries. Li-ion batteries (like in the Tesla and almost everything else) are at $500/kwh of capacity and falling fast — some forecast it will be $200 in just a few years, and it’s already cheaper in the Tesla. Ultracaps are $2,500 to $5,000 per kwh, though people are working to shrink that.

They are also bigger and heavier. They are cited as just 10 wh/kg and on their way to 20 wh/kg. That’s really heavy — Li-ion is an order of magnitude better at 120 wh/kg and also improving.

So with the Ultracap, you are paying a lot of money and a lot of weight to get a super-fast recharge. It’s so much money that you could never justify it if not for the huge number of cycles. That’s because there are two big money numbers on a battery — the $/kwh of capacity — which means range — and the lifetime $/kwh, which affects your economics. Lifetime $/kwh is actually quite important but mostly ignored because people are so focused on range. An ultracap, at 5x the cost but 10x or 20x the cycles actually wins out on lifetime $/kwh. That means that while it will be short range, if you have a vehicle which is doing tons of short trips between places it can quickly recharge, the ultracap can win on lifetime cost, and on wasted recharging time, since it can recharge in seconds, not hours. That’s why one potential application is the shuttle bus, which goes a mile between stops and recharges in a short time at every stop.
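A quick worked comparison of those two numbers, using the figures from the paragraphs above (and ignoring depth-of-discharge and efficiency nuances):

```python
# Capacity cost vs. lifetime cost per delivered kWh (simplified).
def lifetime_cost_per_kwh(capacity_cost_per_kwh, cycles):
    return capacity_cost_per_kwh / cycles

battery = lifetime_cost_per_kwh(500, 1_000)       # ~$0.50 per delivered kWh
ultracap = lifetime_cost_per_kwh(2_500, 20_000)   # ~$0.13 per delivered kWh
print(f"battery ${battery:.2f}/kWh, ultracap ${ultracap:.3f}/kWh")
```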

How do robocars change the equation? In some ways it’s positive, but mostly it’s not.

  • Robocars don’t mind going out of their way to charge, at least not too far out of their way. Humans hate this. So you don’t need to place charging stations conveniently, and you can have a smaller number of them.
  • Robocars don’t care how long it takes to charge. The only issue is they are not available for service while charging. Humans on the other hand won’t tolerate much wait at all.
  • Robocars will eventually often be small single-person vehicles with very low weight compared to today’s cars. In fact, most of their weight might be battery if they are electric.
  • Users don’t care about the power train of a taxi or its energy source. Only the fleet manager cares, and the fleet manager is all about cost and efficiency and almost nothing else.

Now we see the bad news for the ultracap. Its main advantage is the fast recharge time. Robots don’t care about that much at all. Instead, the fleet manager does care about the downtime, but the cost of the downtime is not that high. You need more vehicles the more downtime you have during peak loads, but as vehicles are wearing out by the km, not the year, the only costs for having more vehicles are the interest rate and the storage (parking) cost.

The interest cost is very low today. Consider a $20,000 vehicle. At 3%, you’re paying $1.60 per day in interest. So 4 hours of recharge downtime (only at peak times when you need every vehicle) doesn’t cost very much, certainly not as much as the extra cost of an ultracap. The cost of parking is actually much more, but will be quite low in the beginning because these vehicles can park wherever they can get the best rate and the best rate is usually zero somewhere not too far away. That may change in time, to around $2/day for surface parking of mini-vehicles, but free for now in most places.
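The downtime arithmetic from the paragraph above, spelled out:

```python
# Interest cost of an extra idle vehicle (numbers from the text).
vehicle_cost = 20_000
interest_rate = 0.03

interest_per_day = vehicle_cost * interest_rate / 365
print(f"about ${interest_per_day:.2f} per vehicle per day")        # roughly $1.60
print(f"4 hours of charging downtime: ${interest_per_day * 4 / 24:.2f}")
```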

In addition to the high cost, the ultracap comes with two other big downsides. The first is the weight and bulk. Especially when a vehicle is small and is mostly battery, adding 200kg of battery actually backfires, and you get diminishing returns on adding more in such vehicles. The other big downside is the short range. Even with the fast recharge time, you would have to limit these vehicles to doing only short cab hops in urban spaces of just a few miles, sending them off after just a few rides to get a recharge.

A third disadvantage is that you need a special charging station to quick-charge an ultracap. While level 2 electric car charging stations are in the 7-10kw range, and rapid chargers are in the 50kw-100kw range, ultracap chargers would need to be in the megawatt or more range, and that’s a much more serious proposition and a lot more work to build.

Finally, while ultracaps don’t wear out very fast, they might still depreciate quickly the same way your computer does — because the technology keeps improving. So while your ultracap might last 20 years, you won’t want it any more compared to the cheaper, lighter, higher capacity one you can buy in the future. It can still work somewhere, like grid storage, but probably not in your car.

The fact that robocars don’t need fast refueling in convenient locations opens up all sorts of energy options. Natural gas, hydrogen, special biofuels and electricity all become practical even with gasoline’s 100 year headstart when it comes to deployment and infrastructure, and even sometimes in competition with gasoline’s incredible convenience and energy density. But what the robocar brings is not always a boon to every different form of energy storage.

One technique that makes sense for robocars (and taxis) is battery swap. Battery swap was a big failure for human driven cars, for reasons I have outlined in other posts. But robocars and taxis don’t mind coming back to a central station, or even making an appointment for a very specific time to do their swap. They don’t even mind waiting for other cars to get their swaps, and can put themselves into the swap station when told to — very precisely if needed. Here it’s a question of whether it’s cheaper to swap or just pay the interest and parking on more cars.

Ultracaps are also used to help with regenerative braking, since they can soak up power from hard regenerative braking faster than batteries. That’s mostly not a robocar issue, though in general robocars will brake less hard and accelerate less quickly — trying to give a smooth ride to their passengers rather than an exciting one — so this has less importance there too.

Still, for convenience, the first robocars will probably be gasoline and electric.

Google Accidents, Baidu Cars, Startups and more news roundup

2 months mostly on the road, so here’s a roundup of the “real” news stories in the field.

Google begins PR campaign and talks about accidents

As the world’s most famous company, Google doesn’t need to seek press, and the Chauffeur project has kept fairly quiet, but it has just opened a new web site which will feature monthly reports on the status of the project. The first report gives details of all the accidents in the project’s history, which we discussed earlier. A new one just took place in the last month, but like the others, it did not involve the self-driving software. Google’s cars continue to not cause any accidents, though they have been at the receiving end of a modestly high number of impacts, perhaps because they are a bit unusual.

The zero at-fault accident record is impressive, and possibly involves a bit of luck. Perhaps it even raises unrealistic expectations of perfection, because I believe there will be at-fault accidents in the future for both Google and other teams. Most teams, when they were first building their vehicles, had minor accidents where cars hit curbs or obstacles on test tracks, but the track records of almost all teams since then are surprisingly good. One way that’s not luck, of course, is the presence of safety drivers ready to take the controls if something goes wrong. They are trained and experienced, though some day, being human, some of them will make mistakes.

Baidu to build a prototype

In November I gave a “Big Talk” for Baidu in Beijing on cars. Perhaps there is something about search engines, because Baidu has now made announcements about its own project. Like Google, Baidu has expertise in mapping and various AI techniques, including the advice of Andrew Ng, whose career holds many parallels to that of Sebastian Thrun, who started Google’s project. (Though based on my brief conversations with Andrew I don’t think he’s directly involved.)

Virginia opens test roads

The state of Virginia has designated 70 miles of roads for robocar testing. That’s a good start for testing by those working in that state, but it skirts what to me is a dangerous idea — the thought that there would be “special” roads for robocars designated by states or road authorities. The fantastic lesson of the DARPA grand challenges was the idea that the infrastructure remains stupid and the car becomes smart, so that the car can go anywhere once its builders are satisfied it can handle that road. So it’s OK to test on a limited set of roads, but it’s also vital to test in as many situations as you can, so you need to get off that set of roads as soon as you can.

Zoox startup un-stealthed

Zoox is probably the first funded startup working on a real, fully automated robocar. They were recently funded by DFJ ventures and set up shop in rented space at the SLAC linear accelerator lab. Zoox was begun by Tim Kentley-Klay, a designer and entrepreneur from Australia, and he later joined forces with Jesse Levinson, a top researcher from Stanford’s self-driving car projects.

I’ve known about Zoox since it began and have had many discussions with them. They first got some attention a while back with Tim’s designs, which are quite different from typical car designs, and presume a fully functional robocar — the designs feature no controls for the humans, and don’t even have a windshield to see forward in some cases. (Indeed, they don’t have a “forward,” since an essential part of the design is to be symmetrical and move equally well in both directions, avoiding the need for some twists and turns.) I like many elements of the Zoox vision, though in fact I think it is even more ambitious than Google’s, at least from a car design standpoint, which is quite audacious in a world where most of the players think Google is going too far.

You can see details in this report on Zoox from IEEE. I haven’t reported on Zoox under FrieNDA courtesy — in fact the early consultations with “Singularity University” described in the article are actually discussions with me.

Zoox is not the first small startup. Kyle Vogt’s “Cruise” has been at it a while, aiming at a much less ambitious supervised product, and truck platooning company Peloton has even simpler goals, but expect to see more startups enter the fray and fight with the big boys in the year to come.

Mercedes E Class

Speaking of supervised cruising, the report is that the 2016 Mercedes E Class will offer highway speed cruising in the USA. This has been on offer in Europe in the past. As I wrote earlier, I am less enthused about supervised cruising products and think they will not do tremendously well. Tesla’s update to offer the same in their cars will probably get the most attention.

Non-Stories

The press continue to get super excited about things that aren’t real. In spite of many reports, Uber does not yet have a car cruising the streets of Pittsburgh, though there is reality to the report that Uber has “poached” a large fraction of the robotics research crew from CMU.

In addition, many stories reported that Tesla had “solved” the liability problem of robocars through the design of their lane change system. In their system (and in several other discussed designs — they did not come up with this) the car won’t change lanes until the human signals it is OK to do so, usually by something like hitting the turn signal indicator. The Tesla plan is for a supervised car, and in a supervised car all liability is already supposed to go to the human supervisor.

Changing lanes safely is surprisingly challenging, because there is always the chance somebody is zooming up behind you at high speed. That’s common when merging into a carpool lane, or on German autobahn trips. Most supervised cars have only forward sensing, but to change lanes safely you need to notice a car coming up fast from behind you, and you need to see it quite a distance away. This requires special sensors, such as rear radars, which most cars don’t have. So the solution of having the human check the mirrors works well for now.

More and more stories keep getting excited by “connected car” technology, in particular V2V communications using DSRC. They even write that these technologies are essential for robocars, and it gets scary when people like the transportation secretary say this. I wish the press covering this would take the simple step of asking the top teams who are working on robocars whether they plan to depend on, or even make early use of, vehicle to vehicle communications. They will find out the answers range from “no, not really” to a few vague instances of “yes, someday” from car companies who made corporate support commitments to V2V. The engineers don’t actually think they will find the technology crucial. The fact that the people actually building robocars have only a mild interest, if any, in V2V, while the people who staked their careers on V2V insist it’s essential, should maybe suggest to the press that the truth is not quite what they are told.

Don't be fooled by robots falling down at Darpa Robotics Challenge

This weekend I went to Pomona, CA for the 2015 DARPA Robotics Challenge which had robots (mostly humanoid) compete at a variety of disaster response and assistance tasks. This contest, a successor of sorts to the original DARPA Grand Challenge which changed the world by giving us robocars, got a fair bit of press, but a lot of it was around this video showing various robots falling down when doing the course:

What you don’t hear in this video are the cries of sympathy from the crowd of thousands watching — akin to when a figure skater might fall down — or the cheers as each robot would complete a simple task to get a point. These cheers and sympathies were not just for the human team members, but in an anthropomorphic way for the robots themselves. Most of the public reaction to this video included declarations that one need not be too afraid of our future robot overlords just yet. It’s probably better to watch the DARPA official video which has a little audience reaction.

Don’t be fooled as well by the lesser-known fact that there was a lot of remote human tele-operation involved in the running of the course.

Check out my Gallery of Photos from the DARPA Robotics Challenge Finals.

What you also don’t see in this video is just how very far the robots have come since the first round of trials in December 2013. During those trials the amount of remote human operation was very high, and there weren’t a lot of great fall videos because the robots had tethers that would catch them if they fell. (These robots are heavy and many took serious damage when falling, so almost all testing is done with a crane, hoist or tether able to catch the robot during the many falls which do occur.)

We aren’t yet anywhere close to having robots that could do tasks like these autonomously, so for now the research is in building robots that operate with more and more autonomy, with the higher level decisions made by remote humans. The tasks in the contest were:

  • Starting in a car, drive it down a simple course with a few turns and park it by a door.
  • Get out of the car — one of the harder tasks as it turns out, and one that demanded a more humanoid form
  • Go to a door and open it
  • Walk through the door into a room
  • In the room, go up to a valve with a circular handle and turn it 360 degrees
  • Pick up a power drill, and use it to cut a large enough hole in a sheet of drywall
  • Perform a surprise task — in this case throwing a lever on day one, and on day two unplugging a power cord and plugging it into another socket
  • Either walk over a field of cinder blocks, or roll through a field of light debris
  • Climb a set of stairs

The robots had an hour to do all this, so they were often extremely slow, and yet to the surprise of most, the audience — a crowd of thousands, and thousands more online — watched with fascination and cheering, even when robots would take a step once a minute, pause at a task for several minutes, or get into a problem and spend 10 minutes being fixed by humans as a penalty.

Google Accidents and Deployment, Mercedes Trucks and more

Some headlines (I’ve been on the road and will have more to say soon.)

Google announces it will put new generation buggies on city streets

Google has done over 2.7 million km of testing with their existing fleet, they announced. Now, they will be putting their small “buggy” vehicle onto real streets in Mountain View. The cars will stick to slower streets, and are neighborhood electric vehicles (NEVs) that only go 25mph.

While this vehicle is designed for fully automatic operation, during the testing phase, as required, it will have a temporary set of controls for the safety driver to use in case of any problem. Google’s buggy, which still has no official name, has been built in a small fleet and has been operating on test tracks up to this point. Now it will need to operate among other road users and pedestrians.

Accidents with, but not caused by self-driving cars cause press tizzy.

The press were terribly excited when reports filed with the State of California indicated that there had been 4 accidents reported — 3 for Google and 1 for Delphi. Google reported a total of 11 accidents in 6 years of testing and over 1.5 million miles.

Headlines spoke loudly about the cars being in accidents, but buried in the copy was the fact that none of the accidents by any company were the fault of the software. Several took place during human driving, and the rest were accidents that were clearly the fault of the other party, such as being rear ended or hit while stopped.

Still, as some of the smarter press noticed, this is a higher rate of being in an accident than normal, in fact almost double: human drivers are in an accident about every 250,000 miles, so the fleet should have had only about 6.
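A quick back-of-envelope check of that arithmetic, using only the figures reported above:

    # Rough check of the "almost double" claim, using the numbers above.
    google_accidents = 11
    google_miles = 1.5e6
    human_miles_per_accident = 250_000   # rough human average cited above

    expected_if_human = google_miles / human_miles_per_accident
    print(expected_if_human)                       # 6.0
    print(google_accidents / expected_if_human)    # ~1.8, i.e. nearly double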

The answer may be that these vehicles are unusual and have “self driving car” written on them. They may be distracting other drivers, making it more likely those drivers will make a mistake. In addition, many people have told me of their thoughts when they encountered a Google car on the road. “I thought about going in front of it and braking to see what it would do,” I’ve been told by many. Aside from the fact that this is risky and dickish, and would just cause the safety drivers to immediately disengage and take over, they all also said they didn’t actually do it, and experience in the cars shows that it’s very rare for other drivers to try to “test” the car.

But perhaps some people who think about it do distract themselves and end up in an accident. That’s not good, but it’s also something that should go away as the novelty of the cars decreases.

Mercedes and Freightliner test in Nevada

There was also lots of press about a combined project of Mercedes/Daimler and Freightliner to test a self-driving truck in Nevada. There is no reason that we won’t eventually have self-driving trucks, of course, and there are direct economic benefits for trucking fleets to not require drivers.

Self-driving trucks are not new away from public roads. In fact the first commercial self-driving vehicles were mining trucks at the Rio Tinto mine in Australia. Small startup Peloton is producing a system to let truckers convoy, with the rear driver able to go hands-free. Putting them on regular roads is a big step, but it opens some difficult questions.

First, it is not wise to do this early on. Systems will not be perfect, and there will be accidents. You want your first accidents to be with something like Google’s buggy or a Prius, not with an 18-wheel semi-truck. “Your first is your worst” with software and so your first should be small and light.

Secondly, this truck opens up the jobs question much more than other vehicles, where the main goal is to replace amateur drivers, not professionals. Yes, cab drivers will slowly fade out of existence as the decades pass, but nobody grows up wanting to be a cab driver — it’s a job you fall into for a short time because it’s quick and easy work that doesn’t need much training. While other people build robots to replace workers, the developers of self-driving cars are mostly working on saving lives and increasing convenience.

Many jobs have been changed by automation, of course, and this will keep happening, and it will happen faster. Truck drivers are just one group that will face this, and they are not the first. On the other hand, the reality of robot job replacement is that while it has happened at a grand scale, there are more people working today than ever. People move to other jobs, and they will continue to do so. This may not be much consolation for those who will have to go through this transition, but the other benefits of robocars are so large that it’s hard to imagine delaying them because of this. Jobs are important, but lives are even more important.

It’s also worth noting that today there is a large shortage of truck drivers, and as such the early robotic trucks will not be taking any jobs.

I’m more interested in tiny delivery “trucks” which I call “deliverbots.” For long haul, having large shared cargo vehicles makes sense, but for delivery, it can be better to have a small robot do the job and make it direct and personal.

New Sensors

The world of sensors continues to grow. This wideband software-based radar from a student team won a prize. It claims to produce a 3D image. Today’s automotive radars have long range but very low resolution. Radar sees further than LIDAR, sees through fog, and gives you a speed value directly; if radar gets enough resolution, it could replace LIDAR.

Also noteworthy is this article on getting centimeter GPS accuracy with COTS GPS equipment. They claim to be able to eliminate a lot of multipath error through random movements of the antennas. If true, it could be a huge localization breakthrough. GPS just isn’t good enough for robocar positioning on its own. It goes away in some locations, like tunnels, and even though modern techniques can get sub-cm accuracy, if you want to position your robocar with GPS and GPS alone, you need it to essentially never fail. But it does.

That said, most other localization systems, including map and image based localization, benefit from getting good GPS data to keep them reliable. The two systems together work very well, and making either one better helps.
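As a toy illustration of that point (my own minimal sketch, not any team's actual localizer), a simple complementary blend shows how even imperfect GPS fixes can bound the drift of another position estimate:

    # Blend a local estimate (dead reckoning, map matching, etc.) with GPS so
    # that slow drift in one is bounded by the other.
    def fuse_position(local_estimate, gps_fix, gps_weight=0.05):
        """local_estimate and gps_fix are (x, y) in metres; returns the blend."""
        return tuple((1 - gps_weight) * l + gps_weight * g
                     for l, g in zip(local_estimate, gps_fix))

    pose = (100.0, 50.0)
    for gps in [(100.4, 50.2), (100.5, 49.9), (100.3, 50.1)]:
        pose = fuse_position(pose, gps)
    print(pose)   # pulled gently toward the GPS fixes, never trusting one alone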

Transportation Secretary Foxx advances DoT plan

Secretary Foxx has been out writing articles and speaking in Silicon Valley about their Beyond Traffic effort. They promise big promotion of robocars, which is good. Sadly, they also keep promoting the false idea that vehicle to vehicle communications are valuable and will play a significant role in the development of robocars. In my view, many inside the DoT staked their careers on V2V, and so feel required to promote it, even though it has minimal compelling applications and may actually be rejected entirely by the robocar community because of security issues.

This debate is going to continue for a while, it seems.

Maps, maps, maps

Nokia has put its “Here” map division up for sale, and a large part of the attention seems to relate to their HD Maps project, aimed at making maps for self-driving. (HERE published a short interview with me on the value of these maps.)

It will be interesting to see how much money that commands. At the same time, TomTom, the third major mapping company, has announced it will begin making maps for self-driving cars — a decision they made in part because of encouragement from yours truly.

Uber dwarfs taxis

Many who thought Uber’s valuation was crazy came to that conclusion because they looked at the size of the taxi industry. To the surprise of nobody who has followed Uber, they recently revealed that in San Francisco, their birthplace, they are now 3 times the size of the old taxi industry, and growing. It was entirely the wrong comparison to make. The same is true of robocars. They won’t just match what Uber does, they will change the world.

There’s more news to come, during a brief visit to home, but I’m off to play in Peoria, and then Africa next week!

Second musings on the Hugo Awards and the fix

Last week’s Hugo Awards crisis caused a firestorm even outside the SF community. I felt it was time to record some additional thoughts, in addition to the summary of the many proposals that I wrote earlier.

It’s not about the politics

I think all sides have made an error by bringing the politics and personal faults of either side into the mix. Making it about the politics legitimises the underlying actions for some. As such, I want to remove that from the discussion as much as possible. That’s why in the prior post I proposed an alternate history.

What are the goals of the award?

Awards are funny beasts. They are almost all given out by societies. The Motion Picture Academy does the Oscars, and the Worldcons do the Hugos. The Hugos, though, are overtly a “fan” award (unlike the Nebulas which are a writer’s award, and the Oscars which are a Hollywood pro’s award.) They represent the view of fans who go to the Worldcons, but they have always been eager for more fans to join that community. But the award does not belong to the public, it belongs to that community.

While the award is done with voting and ballots, I believe it is really a measurement, which is to say, a survey. We want to measure the aggregate opinion of the community on what the best of the year was. The opinions are, of course, subjective, but the aggregate opinion is an objective fact, if we could learn it.

In particular, I would venture we wish to know which works would get the most support among fans, if the fans had the time to fairly judge all serious contenders. Of course, not everybody reads everything, and not everybody votes, so we can’t ever know that precisely, but if we did know it, it’s what we would want to give the award to.

To get closer to that, we use a 2-step process, beginning with a nomination ballot. Survey the community, and try to come up with a good estimate of the best contenders based on fan opinion. This both honours the nominees and, more importantly, gives the members the chance to more fully evaluate them and make a fair comparison. To help, in a process I began 22 years ago, the members get access to electronic versions of almost all the nominees, and a few months in which to evaluate them.

Then the final ballot is run, and if things have gone well, we’ve identified what truly is the best loved work of the informed and well-read fans. Understand again, the choices of the fans are opinions, but the result of the process is our best estimate of a fact — a fact about the opinions.

The process is designed to help obtain that winner, and there are several sub-goals:

  • The process should, of course, get as close to the truth as it can. In the end, the most people should feel it was the best choice.
  • The process should be fair, and appear to be fair
  • The process should be easy to participate in, administer and to understand
  • The process should not encourage any member to not express their true opinion on their ballot. If they lie on their ballot, how can we know the true best aggregate of their opinions?
  • As such, ballots should be generated independently, and there should be very little “strategy” to the system which encourages members to falsely represent their views to help one candidate over another.
  • It should encourage participation, and the number of nominees has to be small enough that it’s reasonable for people to fairly evaluate them all

A tall order, when we add a new element — people willing to abuse the rules to alter the results away from the true opinion of the fans. In this case, we had this through collusion. Two related parties published “slates” — the analog of political parties — and their followers carried them out, voting for most or all of the slate instead of voting their own independent and true opinion.

This corrupts the system greatly because when everybody else nominates independently, their nominations are broadly distributed among a large number of potential candidates. A group that colludes and concentrates their choices will easily dominate, even if it’s a small minority of the community. A survey of opinion becomes completely invalid if the respondents collude or don’t express their true views. Done in this way, I would go so far as to describe it as cheating, even though it is done within the context of the rules.

Proposals that are robust against collusion

Collusion is actually fairly obvious if the group is of decent size. Their efforts stick out clearly in a sea of broadly distributed independent nominations. There are algorithms which make it less powerful. There are other algorithms that effectively promote ballot concentration even among independent nominators so that the collusion is less useful.

A wide variety have been discussed. Their broad approaches include:

  • Systems that diminish the power of a nominating ballot as more of its choices are declared winners. Effectively, the more you get of what you asked for, the less likely you are to get more of it. This mostly prevents a sweep of all nominations, and also increases diversity in the final result, reflecting the true diversity of the independent nominators. (A rough sketch of this approach appears after this list.)
  • Systems which attempt to “maximize happiness,” which is to say try to make the most people pleased with the ballot by adding up for each person the fraction of their choices that won and maximizing that. This requires that nominators not all nominate 5 items, and makes a ballot with just one nomination quite strong. Similar systems allow putting weight on nominations to make some stronger than others.
  • Public voting, where people can see running tallies, and respond to collusion with their own counter-nominations.
  • Reduction of the number of nominations for each member, to stop sweeps.
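As a rough illustration of the first approach above (a minimal sketch of my own, not any specific proposal before the business meeting), a ballot can have its weight divided down each time another of its choices becomes a finalist:

    from collections import Counter

    def select_finalists(ballots, num_finalists):
        """ballots: list of sets of nominated works; returns the finalists in order."""
        finalists = []
        for _ in range(num_finalists):
            tally = Counter()
            for ballot in ballots:
                already_won = sum(1 for work in ballot if work in finalists)
                weight = 1.0 / (1 + already_won)   # d'Hondt-style divisor
                for work in ballot:
                    if work not in finalists:
                        tally[work] += weight
            if not tally:
                break
            finalists.append(tally.most_common(1)[0][0])
        return finalists

    # Example: 100 identical slate ballots versus 900 independent ballots spread
    # over many works. The slate still earns a seat or two, roughly in proportion
    # to its numbers, but it can no longer sweep the category.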

The proposals work to varying degrees, but they all significantly increase the “strategy” component for an individual voter. It becomes the norm that if you have just a little information about what the most common popular choices will be, your wisest course to get the ballot you want will be to deliberately remove certain works from your ballot.

Some members would ignore this and nominate honestly. Many, however, would read articles about strategy, and either practice it or wonder if they were doing the right thing. In addition to debates about collusion, there would be debates on how strategy affected the ballot.

Certain variants of multi-candidate STV help against collusion and have less strategy, but most of the methods proposed have a lot.

In addition, all the systems permit at least one, and as many as 2 or 3, slate-choice nominees onto the final ballot. While members will probably know which ones those are, this is still not desired. First of all, these placements displace other works which would otherwise have made the ballot. You could increase the size of the final ballot, but then you need to know how many slate choices will be on it.

It should be clear that when others do not collude, slate collusion is very powerful. In many political systems, it is actually considered a great result if a party with 20% of the voters gains 20% of the “victories.” Here, we have a situation with 2,000 nominators where just 100 colluding members can saturate some categories and get several entries into all of them, and with 10% (the likely amount in 2015) they can get a large fraction of them. As such it is not proportional representation at all.

Fighting human attackers with human defence

Considering the risks of confusion and strategy with all these systems, I have been led to the conclusion that the only solid response to organized attackers on the nomination system is a system of human judgement. Instead of hard and fast voting rules, the time has come, regrettably, to have people judge if the system is under attack, and give them the power to fix it.

This is hardly anything new; it’s how almost all systems of governance work. It may be hubris to suggest the award can get by without it. Like the good systems of governance, this must be done with impartiality, transparency and accountability, but it must be done.

I see a few variants which could be used. Enforcement would most probably be done by the Hugo Committee, which is normally a special subcommittee of the group running the Worldcon. However, it need not be them, and could be a different subcommittee, or an elected body.

While some of the variants I describe below add complexity, it is not necessary to do them. One important thing about the rule of justice is that you don’t have to get it exactly precise. You get it in broad strokes and you trust people. Sometimes it fails. Mostly it works, unless you bring in the wrong incentives.

As such, some of these proposals work by not changing almost anything about the “user experience” of the system. You can do this with people nominating and voting as they always did, and relying on human vigilance to deflect attacks. You can also use the humans for more than that.

A broad rule against collusion and other clear ethical violations

The rule could be as broad as to prohibit “any actions which clearly compromise the honesty and independence of ballots.” There would be some clarifications, to indicate this does not forbid ordinary lobbying and promotion, but does prohibit collusion, vote buying, paying for memberships which vote as you instruct and similar actions. The examples would not draw hard lines, but give guidance.

Explicit rules about specific acts

The rule could be much more explicit, with less discretion, with specific unethical acts. It turns out that collusion can be detected by the appearance of patterns in the ballots which are extremely unlikely to occur in a proper independent sample. You don’t even need to know who was involved or prove that anybody agreed to any particular conspiracy.
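To see why such patterns are statistically glaring, here is a toy calculation; the size of the pool of plausible works is my own assumption, purely for illustration:

    from math import comb

    pool = 100      # plausible works a nominator might pick from (assumption)
    picks = 5       # nominations per ballot

    # Chance that two independent ballots happen to list exactly the same works:
    p_exact_match = 1 / comb(pool, picks)
    print(f"{p_exact_match:.1e}")   # about 1.3e-08

    # So dozens of identical or near-identical ballots are, for practical
    # purposes, impossible without coordination.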

The big challenge with explicit rules (which take 2 years to change) is that clever human attackers can find holes, and exploit them, and you can’t fix it then, or in the next year.

Delegation of nominating power or judicial power to a subgroup elected by the members

Judicial power to fix problems with a ballot could fall to a committee chosen by members. This group would be chosen by a well established voting system, similar to those discussed for the nomination. Here, proportional representation makes sense, so if a group is 10% of the members it should have 10% of this committee. It won’t do it much good, though, if the others all oppose them. Unlike books, the delegates would be human beings, able to learn and reason. With 2,000 members, and 50 members per delegate, there would be 40 on the judicial committee, and it could probably be trusted to act fairly with that many people. In addition, action could require some sort of supermajority. If a 2/3 supermajority were needed, attackers would need to be 1/3 of all members.

This council could perhaps be given only the power to add nominations — beyond the normal fixed count — and not to remove them. Thus if there are inappropriate nominations, they could only express their opinion on that, and leave it to the voters what to do with those candidates, including not reading them and not ranking them.

Instead of judicial power, it might be simpler to assign pure nominating power to delegates. Collusion is useless here, because in effect all members are now colluding about their different interests, but in an honest way. Unlike pure direct democracy, the delegates, not unlike an award jury, would be expected to listen to members (and even look at nominating ballots done by them) but be charged with coming up with the best consensus on the goal stated above. Such jurors would not simply vote their preferences. They would swear to attempt to examine as many works as possible in their efforts. They would suggest works to others and expect them to be likely to look at them. They would expect to be heavily lobbied and promoted to, but as long as it’s pure speech (no bribes other than free books and perhaps some nice parties) they would be expected not to be fooled so easily by such efforts.

As above, a nominating body might also simply start with the member nominating ballot, add candidates to it, and express rulings about why. In many awards, the primary function of the award jury is not to bypass the membership ballot, but to add one or two works that were obscure and that the members may have missed. This is not a bad function, so long as the “real ballot” (the one you feel a duty to evaluate) is not too large.

Transparency and accountability

There is one barrier to transparency, in that releasing preliminary results biases the electorate in the final ballot, which would remain a direct survey of members with no intermediaries — though still with the potential to look for attacks and corruption. There could also be auditors, who are barred from voting in the awards and are allowed to see all that goes on. Auditors might be people from the prior Worldcon or some other different source, or fans chosen at random.

Finally, decisions could be appealed to the business meeting. This requires a business meeting after the Hugos. Attackers would probably always appeal any ruling against them. Appeals can’t alter nominations, obviously, or restore candidates who were eliminated.

Comprehensive plan

All the above requires the two-year ratification process and could not come into effect (mostly) until 2017. To deal with the current cheating, and the cheating promised for 2016, the following are recommended.

  1. Downplay the 2015 Hugo Award, perhaps with sufficient fans supporting this that all categories (including untainted ones) have no award given.
  2. Conduct a parallel award under a new system, and fête it like the Hugos, though they would not use that name.
  3. Pass new proposed rules including a special rule for 2016
  4. If 2016’s award is also compromised, do the same. However, at the 2016 business meeting, ratify a short-term amendment proposed in 2015 declaring the alternate awards to be the Hugo awards if run under the new rules, and discarding the uncounted results of the 2016 Hugos conducted under the old system. Another amendment would permit winners of the 2015 alternate award to say they are Hugo winners.
  5. If the attackers gave up, and 2016’s awards run normally, do not ratify the emergency plan, and instead ratify the new system that is robust against attack for use in 2017.

People get carsick as passengers? Shocking!

Earlier this week I was sent some advance research from the U of Michigan about car sickness rates for car passengers. I found the research of interest, but wish it had covered some questions I think are more important, such as how carsickness is changed by potentially new types of car seating, like face-to-face seating or sitting along the side.

To my surprise, there was a huge rush of press coverage of the study, which concluded that 6 to 12% of car passengers get a bit queasy, especially when looking down in order to read or work. While it was worthwhile to work up those numbers, the overall revelation was in the “Duh” category for me, I guess because it happens to me on some roads and I presumed it was fairly common.

Oddly, most of the press was of the “this is going to be a barrier to self-driving cars” sort, while my reaction was, “wow, that happens to fewer people than I thought!”

Having always known this, I am interested in the statistics, but to me the much more interesting question is, “what can be done about it?”

For those who don’t like to face backwards, the fact that so many are not bothered is a good sign — just switch seats.

Some activities are clearly better than others. While staring down at your phone or computer in your lap is bad during turns and bumps, it may be that staring up at a screen watching a video, with your peripheral vision very connected to the environment, is a choice that reduces the stress.

I also am interested in studying whether there can be cues to help people reduce sickness. For example, the car will know of upcoming turns, and probably even upcoming bumps. It could issue tones to give you subtle cues as to what’s coming, and when it might be time to pause and look up. It might even be the case that audio cues could substitute for visual cues in our plastic brains.

The car, of course, should drive as gently as it can, and because the software does not need a tight suspension to feel the road, the ride can be smoother as well.

Another interesting thing to test would be having your tablet or phone deliberately tilt its display to give you the illusion that you are looking at the fixed world, or having a little “window” that shows you a real-world level so your eyes and inner ears can find something to agree on.
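A minimal sketch of the counter-tilt idea (my own toy math, not a shipping feature): use the car's lateral acceleration to roll the displayed horizon so that the apparent force keeps pointing "down" on the screen.

    import math

    def display_roll_degrees(lateral_accel_ms2, gravity_ms2=9.81):
        """How far to roll the on-screen horizon so the felt force points down."""
        return math.degrees(math.atan2(lateral_accel_ms2, gravity_ms2))

    print(round(display_roll_degrees(2.0), 1))   # ~11.5 degrees in a moderate turn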

More advanced would be a passenger pod on hydraulic struts able to tilt with several degrees of freedom to counter the turns and bumps, and make them always be such that the forces go up and down, never side to side. With proper banking and tilting, you could go through a roundabout (often quite disconcerting when staring down) but only feel yourself get lighter and heavier.

Hugo awards suborned, what can or should be done?

Since 1992 I have had a long association with the Hugo Awards for SF & Fantasy given by the World Science Fiction Society/Convention. In 1993 I published the Hugo and Nebula Anthology, which was for some time the largest anthology of current fiction ever published, and one of the earliest major e-book projects. While I did it as a commercial venture, in the years to come it became the norm for the award organizers to publish an electronic anthology of willing nominees for free to the voters.

This year, things are highly controversial, because a group of fans/editors/writers calling themselves the “Sad Puppies” had great success with a campaign to dominate the nominations for the awards. They published a slate of recommended nominations, and a sufficient number of people sent in nominating ballots with that slate so that it dominated most of the award categories. Some categories are entirely the slate; only one was not affected. It’s important to understand that the nominating and voting on the Hugos is done by members of the World SF Society, which is to say people who attend the World SF Convention (Worldcon) or who purchase special “supporting” memberships which don’t let you go but give you voting rights. This is a self-selected group, but in spite of that, it has mostly managed to run a reasonably independent vote to select the greatest works of the year. The group is not large, and in many categories it can take only a score or two of nominations to make the ballot, and victory margins are often small. As such, it’s always been possible, and not even particularly hard, to subvert the process with any concerted effort. It’s even possible to do it with money, because you can just buy memberships which can nominate or vote, so long as a real unique person is behind each ballot.

The nominating group is self-selected, but it’s mostly a group that joins because they care about SF and its fandom, and as such, this keeps the award voting more independent than you would expect for a self-selected group. But this has changed.

The reasoning behind the Sad Puppy effort is complex and there is much contentious debate you can find on the web, and I’m about to get into some inside baseball, so if you don’t care about the Hugos, or the social dynamics of awards and conventions, you may want to skip this post.

Delphi completes trans-continental drive, and Hyundai goes big

Most of the robocar press this week has been about the Delphi drive from San Francisco to New York, which was completed yesterday. Congratulations to the team. Few teams have tried to do such a long course over so many different roads. (While Google has over a million miles logged in their testing by now, it’s not been reported that they have driven 3,500 miles of distinct roads; most testing is done around Google HQ.)

The team reported the vehicle drove 99% of the time. This is both an impressive and unimpressive number, and understanding that is key to understanding the difficulty of the robocar problem.

One of the earliest pioneers, Ernst Dickmanns, did a long highway drive 20 years ago, in 1995. He reported the system drove 95% of the time, kicking out every 10km or so. This was a system simply finding the edge of the road, and keeping in the lane by tracking that. Delphi’s car is much more sophisticated, with a very impressive array of sensors — 10 radars, 6 lidars and more, and it has much more sophisticated software.

99% is not 4% better than 95%, it’s 5 times better, because the real number is the fraction of the road it could not drive. And from 99%, we need to get something like 10,000 times better, to 99.9999% of the time, to even start talking about a real full-auto robocar. In the USA we drive 3 trillion miles per year, taking about 60 billion hours, a little over half of it on the highway. Even 99.9999% for all cars would still mean too many incidents, if 1 time in a million you encountered something you could not handle.

However, this depends on what we mean by “being unable to handle it.”

  • If not handling it means “has a fatal accident,” that could map to some 3,600,000 such failures per year (one for every million minutes of driving), which would be roughly 100x the human rate and not acceptable.
  • If not handling it means “has any sort of accident” then we’re pretty good, about 1/4th of the rate of human accidents
  • If not handling it means that the vehicle knows certain roads are a problem, and diverts around them or requests human assistance, it’s no big problem at all.
  • Likewise if not handling it means identifying a trouble situation, and slowing down and pulling off the road, or even just plain stopping in the middle of the road — which is not perfectly safe but not ultra-dangerous either — it’s also not a problem.
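To make the arithmetic above concrete, here is a rough sketch. I am assuming the one-in-a-million failure chance is counted per minute of driving, which is how the 3,600,000 figure appears to be derived; the other numbers come straight from the text.

    us_hours_per_year = 60e9                    # ~3 trillion miles of US driving
    us_minutes_per_year = us_hours_per_year * 60

    def unhandled_fraction(success_rate):
        return 1.0 - success_rate

    # 99% is five times better than 95%, because what matters is the share of
    # road that could NOT be driven:
    print(round(unhandled_fraction(0.95) / unhandled_fraction(0.99), 2))   # 5.0

    # At "six nines," one minute in a million still cannot be handled:
    events_per_year = us_minutes_per_year * unhandled_fraction(0.999999)
    print(f"{events_per_year:,.0f} unhandled events per year")   # ~3,600,000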

At the same time, our technology is an exponential one, so it’s wrong to think that the statement that it needs to be 10,000 times better means the system is only 1/10,000th of the way there. In fact, getting to the goal may not be that far away, and Google is much further along. They reported a distance of over 80,000 miles between necessary interventions. Humans have accidents about every 250,000 miles.

(Delphi has not reported the most interesting number, which is necessary, unexpected interventions per million miles. To figure out if an intervention was necessary, you must replay the event in a simulator to see what the vehicle would have done had the safety driver not intervened. The truly interesting number is the combination of interventions per million miles and the fraction of roads you can drive. It’s easier, but boring, to get a low interventions-per-million-miles number on one plain piece of straight highway, for example.)

It should also be noted that Delphi’s result is almost entirely on highways, which are the simplest roads for a robot to drive. Google’s result is also heavily highway biased, though they have reported a lot more surface street work. None of the teams have testing records on complex and chaotic streets such as those found in the developing world, or in harsh weather.

It is these facts which lead some people to conclude this technology is decades away. That would be the right conclusion if you were unaware of the exponential curve the technologies and the software development are on.

Huge Hyundai investment

For some time, I’ve been asking where the Koreans are on self-driving cars. Major projects arose in many major car companies, with the Germans in the lead, and then the US and Japan. Korea was not to be seen.

Hyundai announced they would produce highway cruise cars shortly (like other makers), but they also announced they would produce a much more autonomous car by 2020 — a similar date to most other car makers. Remarkable, though, was the statement that they would invest over $70 billion in the next 4 years on what they are calling “smart cars,” including hiring over 7,000 people to work on them. While this number includes the factories they plan to build, and refers to many technologies beyond robocars, it’s still an immense number. The Koreans have arrived.

Matternet launches drone delivery platform

I often speak about deliverbots — the potential for ground-based delivery robots. There is also excitement about drone (UAV/quadcopter) based delivery. We’ve seen many proposed projects, including Amazon Prime Air, and much debate. Many years ago I was also perhaps the first to propose that drones could deliver a defibrillator anywhere, and there are a few projects underway to do this.

Some of my students in the Singularity University Graduate Studies Program in 2011 really caught the bug, and their team project turned into Matternet — a company with a focus on drone delivery in the parts of the world without reliable road infrastructure. Example applications include moving lightweight items like medicines and test samples between remote clinics, and eventually much more.

I’m pleased to say they just announced moving to a production phase called Matternet One. Feel free to check it out.

When it comes to ground robots and autonomous flying vehicles, there are a number of different trade-offs:

  • Drones will be much faster, and have an easier time getting roughly to a location. It’s a much easier problem to solve. No traffic, and travel mostly as the crow flies.
  • Deliverbots will be able to handle much heavier and larger cargo, consuming a lot less energy in most cases. Though drones able to move 40kg are already out there.
  • Regulations stand in the way of both vehicles, but current proposed FAA regulations would completely prohibit the drones, at least for now.
  • Landing a drone in a random place is very hard. Some drone plans avoid that by lowering the cargo on a tether and releasing the tether.
  • Driving to a doorway or even gate is not super easy either, though.
  • Heavy drones falling on people or property are an issue that scares people, but they are also scared of robots on roads and sidewalks.
  • Drones probably cost more but can do more deliveries per hour.
  • Drones don’t have good systems in place to avoid collisions with other drones. Deliverbots won’t go that fast and so can stop quickly for obstacles seen with short range sensors.
  • Deliverbots have to not hit cars or pedestrians. Really not hit them.
  • Deliverbots might be subject to piracy (people stealing them) and drones may have people shoot at them.
  • Drones may be noisy (this is yet to be seen) particularly if they have heavier cargo.
  • Drones can go where there are no roads or paths. For ground robots, you need legs like the BigDog.
  • Winds and rain will cause problems for drones. Deliverbots will be more robust against these, but may have trouble on snow and ice.

In the long run, I think we’ll see drones for urgent, light cargo and deliverbots for the rest, along with real trucks for the few large and heavy things we need.

Delphi's cross-country trip and a raft of Robocar News

I’ve been on the road, and there has been a ton of news in the last 4 weeks. In fact, below is just a small subset of the now constant stream of news items and articles that appear about robocars.

Delphi has made waves by undertaking a road trip from San Francisco to New York in their test car, which is equipped with an impressive array of sensors. The trip is now underway, and on their page you can see lots of videos of the vehicle along the trek.

The Delphi vehicle is one of the most sensor-laden vehicles out there, and that’s good. In spite of all those who make the rather odd claim that they want to build robocars with fewer sensors, Moore’s Law and other principles teach us that the right procedure is to throw everything you can at the problem today, because those sensors will be cheap when it comes time to actually ship. Particularly for those who say they won’t ship for a decade.

At the same time, the Delphi test is mostly of highway driving, with very minimal urban street driving according to Kristen Kinley at Delphi. They are attempting off-map driving, which is possible on highways due to their much simpler environment. Like all testing projects these days, there are safety drivers in the cars ready to intervene at the first sign of a problem.

Delphi is doing a small amount of DSRC vehicle to infrastructure testing as well, though this is only done in Mountain View where they used some specially installed roadside radio infrastructure equipment.

Delphi is doing the right thing here — getting lots of miles and different roads under their belt. This is Google’s giant advantage today. Based on Google’s announcements, they have more than a million miles of testing in the can, and that makes a big difference.

Hype and reality of Tesla’s autopilot announcement

Tesla has announced they will do an over-the-air upgrade of car software in a few months to add autopilot functionality to existing models that have sufficient sensors. This autopilot is the “supervised” class of self-driving that I warned may end up viewed as boring. The press have treated this as something immense, but as far as I can tell, this is similar to products built by Mercedes, BMW, Audi and several other companies, and even sold in the market (at least for traffic jams) for a couple of years now.

The other products have shied away from doing full highway speed in commercial products, though rumours exist of it being available in commercial cars in Europe. What is special about Tesla’s offering is that it will be the first car sold in the US to do this at highway speed, and they may offer supervised lane change as well. It’s also interesting that since they have been planning this for a while, it will come as a software upgrade to people who bought their technology package earlier.

UK project budget rises to £100 million

What started as a £10 million prize in the UK has grown to over £100 million in grants in the latest UK budget. While government research labs will not provide us with the final solutions, this money will probably create some very useful tools and results for the private players to exploit.

MobilEye releases their EyeQ4 chip

Mobileye from Jerusalem is probably the leader in automotive machine vision, and their new generation chip has been launched, but won’t show up in cars for a few years. It’s an ASIC packed with hardware and processor cores aimed at doing easy machine vision. My personal judgement is that this is not sufficient for robocar driving, but Mobileye wants to prove me wrong. (The EyeQ4 chip does have software to do sensor fusion with LIDAR and radar, so they don’t want to prove me entirely wrong.) Even if not good enough on their own, Mobileye chips offer a good alternate path for redundancy.

Chris Urmson gives a TED talk about the Google car

Talks by Google’s team are rare — the project is unusual in trying to play down its publicity. I was not at TED, but reports from there suggest Chris did not reveal a great deal new, other than repeating his goal of having the cars be in practical service before his son turns 16. Of course, humans will be driving for a long time after robocars start becoming common on the roads, but it is true that we will eventually see teens who would have gotten a licence never get around to getting one. (Teens are already waiting longer to get their licences, so this is not a hard prediction.)

The war between DSRC and more wifi is heating up.

2 years ago, the FCC warned that since auto makers had not really figured out much good to do with the DSRC spectrum at 5.9ghz, it was time to repurpose it for unlicenced use, like more WiFi.

There is now a bill being proposed to force this.

How to avoid a pilot suicide

After 9/11 there was a lot of talk about how to prevent it, and the best method was to fortify the cockpit door and prevent unauthorized access. Every security system, however, sometimes prevents authorized people from getting access, and the tragic results of that are now clear to the world. This is likely a highly unusual event, and we should not go overboard, but it’s still interesting to consider.

(I have an extra reason to take special interest here, I was boarding a flight out of Spain on Tuesday just before the Germanwings flight crashed.)

In 2001, it was very common to talk about how software systems, at least on fly-by-wire aircraft, might make it impossible for aircraft to do things like fly into buildings. Such modes might be enabled by remote command from air traffic control. Pilots resist this, they don’t like the idea of a plane that might refuse to obey them at any time, because with some justification they worry that a situation could arise where the automated system is in error, and they need full manual control to do what needs to be done.

The cockpit access protocol on the Airbus allows flight crew to enter a code to unlock the door. Quite reasonably, the pilot in the cockpit can override that access, because an external bad guy might force a flight crew member to enter the code.

So here’s an alternative — a code that can be entered by a flight crew member which sends an emergency alert to air traffic control. ATC would then have the power to unlock the door with no possibility of pilot override. In extreme cases, ATC might even be able to put the plane in a safe mode, where it can only fly to a designated airport and auto-land at that airport. In planes with sufficient bandwidth near an airport, the plane might actually be landed by remote pilots like a UAV, an entirely reasonable idea for newer aircraft. In case of a real terrorist attack, ATC would need to be ready to refuse to open the door no matter what is threatened to the passengers.

If ATC is out of range (like over the deep ocean) then the remote console might allow the flight crew — even a flight attendant — to direct the aircraft to fly to pre-approved waypoints along the planned flight path where quality radio contact can be established.

Clearly there is a risk to putting a plane in this mode, though ATC or the flight crew who did it could always return control to the cockpit.

It might still be possible to commit suicide, but it would take a lot more detailed planning. Indeed, there have been pilot suicides where the door was not locked, and the suicidal pilot just put the plane into a non-recoverable spin so quickly that nobody could stop it. Still, in many cases, even a modest impediment can make the difference.

Update: I have learned the lock has a manual component, and so the pilot in the cockpit could prevent even a remote opening for now. Of course, current planes are not set to be remotely flown, though that has been discussed. It’s non trivial (and would require lots of approval) but it could have other purposes.

A safe mode that prevents overt attempts to crash might be more effective than you think, in that with many suicides, even modest discouragement can make a difference. It’s why they still want to put a fence on the Golden Gate Bridge, and have other similar things elsewhere. You won’t stop a determined suicide, but it apparently does stop those who are still uncertain, which is a lot of them.

The simpler solution — already going into effect in countries that did not have this rule already — is a regulation insisting that nobody is ever alone in the cockpit. Under this rule, if a pilot wants to go to the bathroom, a flight attendant waits in the cockpit. Of course, a determined suicidal pilot could disable this person, either because of physical power, or because sometimes there is a weapon available to pilots. That requires more resolve and planning, though.

What colour is the dress? It's both.

Perhaps by now you are sick of the dress that 3/4 of people see as “white and gold” and 1/4 see as “dark blue and black.” If you haven’t seen it, it’s easy to find. What’s amazing is to see how violent the arguments can get between people, because the two ways we see it are so hugely different. “How can you see that as white????” people shout. They really shout.

There are a few explanations out there, but let me add my own:

  • The real dress, the one you can buy, is indeed blue and black. That’s well documented.
  • The real photo of the dress everybody is looking at, is light blue and medium gold, because of unusual lighting and colour balance.

That’s the key point. The dress and photo are different. Anybody who saw the dress in almost any lighting would say it was blue and black. But people say very different things about the photo.

To explain, here are sampled colour swatches from the photo, on white and dark backgrounds.

You can see that the colours in the photo are indeed a light blue and a medium to dark gold. Whatever the dress really is, that’s what the photo colours are.

We see things in strange light all the time. Indoors, under incandescent light bulbs, or at sunset, everything is really, really yellow-red. Take a photo at these times with your camera set to “sunshine” light and you will see what the real colours look like. But your brain sees the colours very similarly to how they are in the day. Our brains are trained to correct for the lighting and try to see the “true” (under sunlight) colours. Sunlight isn’t really white but it’s our reference for white.

Some people see the photo and this part of their brain kicks in, and does the correction, letting them see what the dress looks like in more neutral light. We all do this most of the time, but this photo shows a time when only some of us can do it.

For the white/gold folks, their brains are not doing the real correction. We (I am one of them) see something closer to the actual colour of the photo. Though not quite — we see the light blue as whiter and the gold as a little lighter too. We’re making a different correction, and it seems to go a bit in the other direction. Our correction is false; the blue/black folks are doing a better job at the correction. It’s a bit unusual that the results are so far apart. The blue/blacks see something close to the real dress, and the white/golds see something closer to the actual photo. Hard to say if “their kind” are better or worse than my kind because of it.
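A toy numerical example of those two corrections (the swatch and illuminant values are invented for illustration, not sampled from the actual photo):

    def white_balance(rgb, assumed_illuminant):
        """Von Kries-style correction: divide each channel by the illuminant."""
        return tuple(min(255, round(255 * c / i))
                     for c, i in zip(rgb, assumed_illuminant))

    swatch = (140, 140, 180)    # a hypothetical light-blue pixel from the photo

    # Brain assumes dim bluish light -> the fabric reads as near-white:
    print(white_balance(swatch, (150, 150, 195)))   # roughly (238, 238, 235)

    # Brain assumes warm golden light -> the fabric reads as a deeper blue:
    print(white_balance(swatch, (230, 220, 200)))   # roughly (155, 162, 230)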

For the white/gold folks, our brains must be imagining the light is a bit bluish. We do like to find the white in a scene to help us figure out what colour the light is. In this case we’re getting tricked. There are many other situations where we get our colour correction wrong, and I will bet you can find other situations where the white/golds see the sunlit colour, and the blue/blacks see something closer to the photograph.