Just a couple more days to apply for our exponential tech startup incubator

At Singularity University, our students have been forming interesting ventures after their classes for the past 6 years. This fall, we’ll also be starting an SU Startup Accelerator for nascent startups working on exponential technology to solve the world’s biggest problems. We will accelerate both for-profit ventures (for the world’s greatest problems can also be the greatest opportunities) and non-profit efforts, which will receive $50K grants.

The application deadline is coming up on June 30th — so pull together your application today if you can. Follow the link and apply via AngelList.

Replacing E-mail: The calendar as communications tool

I want to begin a series of thoughts on how E-mail has failed us and what we should do about it.

Yes, E-mail has failed, and not, as we thought, because it got overwhelmed with spam. There is tons of spam, but we seem to be handling it. The problem might be better described as “too much signal” rather than a poor signal/noise ratio. There are three linked problems:

  1. There is just too much E-mail from people we actually have relationships with. Part of this is the over-reach of businesses, who think that because you bought a tube of toothpaste you should fill out a customer satisfaction survey and get the weekly bargains mail-out, but part of it is that there really are a lot of people who want to interact with you, and e-mail makes it very easy for them to do that, particularly to “cc” you on mail in which you have only a marginal interest.
  2. Because of problem 1, people are moving away from E-mail to other tools, particularly the younger generation. They (and we) are using Facebook mail and other social tools, instant messengers, texting and more.
  3. The volume means that you can’t handle it all. Important mails scroll off the main screen and are forgotten about. And some people are just not using their E-mail, so it is losing its place as the one universal and reliable way to send somebody a message.

One of the key differences in the new media is that they focus on person to person communications — while there are group tools, many of these media don’t even have the concept of a “cc” or mailing list, or even sending to two people.

I’m going to write more on these topics in the future, but today I want to talk about

The shared calendar as the communications tool

I’ve been pushing people I work with to use the calendar as the means of telling me about anything that is going to happen at a specific time. If people send me an E-mail saying, “Can we talk at 3?” I say, “don’t tell me that in an E-mail. Create an event on your calendar and invite me to it. Put the details of the conversation into the calendar entry.”

In general, I want to create a pattern of communication where if any message you send would cause the other person to put something on their calendar, you instead communicate it through the calendar by creating an event that they are an attendee of.

Our calendar and E-mail tools need to improve to make this work better. When everybody uses a shared calendar like Google Calendar, it is a lot easier, but we need tools that make it just as easy when people don’t use the same calendar tool.
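Much of the plumbing for this already exists in the iCalendar (ICS) format, which works across calendar platforms. As a minimal sketch, here is Python that builds a bare-bones RFC 5545 invitation and attaches it to an E-mail using only the standard library; the names, addresses and time are made up for illustration, and a real tool would add things like time zone handling and reply processing.

    import uuid
    from datetime import datetime, timedelta, timezone
    from email.message import EmailMessage

    def make_invite(organizer, attendee, summary, start, minutes=30):
        """Build a minimal iCalendar (RFC 5545) METHOD:REQUEST body,
        which most calendar tools render as an event invitation."""
        fmt = "%Y%m%dT%H%M%SZ"  # UTC timestamps remove time zone confusion
        end = start + timedelta(minutes=minutes)
        return "\r\n".join([
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "PRODID:-//example//calendar-as-communication//EN",
            "METHOD:REQUEST",
            "BEGIN:VEVENT",
            f"UID:{uuid.uuid4()}@example.com",
            f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
            f"DTSTART:{start.strftime(fmt)}",
            f"DTEND:{end.strftime(fmt)}",
            f"SUMMARY:{summary}",
            f"ORGANIZER:mailto:{organizer}",
            f"ATTENDEE;RSVP=TRUE:mailto:{attendee}",
            "END:VEVENT",
            "END:VCALENDAR",
        ])

    msg = EmailMessage()
    msg["Subject"] = "Project status call"
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg.set_content("Details live in the attached event, not in this text.")
    msg.add_attachment(
        make_invite("alice@example.com", "bob@example.com",
                    "Phone call: project status",
                    datetime(2015, 7, 1, 15, 0, tzinfo=timezone.utc)),
        subtype="calendar")

This is the kind of invitation a mail tool could generate automatically the moment it sees you proposing a time.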

When things do get into the calendar, you get a lot of nice benefits:

  • You are much less likely to forget about or miss the task or event
  • When you want to find the data on the event near the time of the event, you don’t have to hunt around for it — it is highlighted, in my case right on the home screen of my phone
  • If the event has a location, your phone typically is able to generate a map and even warn you when you need to leave based on traffic
  • If the event has a phone call/hangout/whatever, your devices can join it with a single click, no hunting for URLs or meeting codes, which matters particularly while driving. (Google put in a tool to add one of their hangouts to any event in the calendar.)
  • Calendar events remove any confusion on time zones when people are in different zones.

Here are some features I want, some of which exist in current tools (particularly if you attach an ICS calendar entry to an E-mail) but which don’t yet work seamlessly.

  • Your email tool, when you are writing a message, should notice if you’re talking about an event that’s not already in your calendars, parse out dates and other data, and turn it into a calendar invitation
  • Likewise your receiving tool should parse messages and figure this out, since the sender might not have done that.
  • E-mails that create calendar events should be linked together, so that from your calendar you can read all the email threads around the event and find any associated files or other resources.
  • Likewise it should be easy to contact any others tied to a calendar event by any means, not just the planned means of communication. For example, a good calendar should have a system where I can be phoned or texted on my cell phone by any other member of the event during the time around the event, without having to reveal my cell phone number. How often have you been waiting for a conference call and heard somebody say, “does anybody know John’s number? Let’s find out where he is.”
  • When I accept a calendar entry from outside and confirm, that should give them some access to use that calendar entry as a means of communication, even across calendar and mail platforms.

For example, when I book a flight or hotel or rent a car, the company should respond by putting that in my calendar. I might give them a token enabling that, or manually approve their invitation. Of course the confirmation numbers, links on how to change the reservation and more will be in the calendar entry. If the flight is delayed, they should be able to use this linkage to contact me — my calendar tool should know best where I am and the best ways to reach me — and push updates to me. When I get to the check-in desk, our shared calendar entry should make my phone and their computer immediately connect and make the process seamless.

When I approach the desk of a hotel, my phone should notice this, do the handshake and by the time I walk up they should say, “Good evening, Mr. Templeton, could you please sign this form? Here’s your room key, you’re in suite 1207.” (Of course, even better if I don’t have to sign the form and my phone, or any of the magstripe, chip or NFC cards I have in my wallet automatically become my room key.)

When you think this way, you start realizing that a surprisingly large fraction of our E-mails are about events with times. And, as I wrote 8 years ago, most e-mails involve tasks, and E-mail and time management should be merged. Sadly my ideas of so long ago remain unrealized, and since then, E-mail has declined.

One caveat — if we do start using calendars for communication more, we must be able to prevent spam, and even over-use by people we know. We can’t do what we did with e-mail. Invitations to an event with just one or two people can be made easy — even automatic for those with authorization. Creating multi-person events needs to be a harder thing for people who aren’t whitelisted, though not impossible. The meaning of the word “invite” also needs to be more tightly understood. A solicitation for me to buy a ticket is not an invite.
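To make that caveat concrete, here is one possible shape for such an anti-abuse policy, sketched in Python. The tiers, thresholds and whitelist model are my own assumptions for illustration, not any existing system:

    def invitation_friction(sender, attendees, whitelist, reputation):
        """Decide how much friction an incoming invitation gets.
        Returns 'auto' (straight onto the calendar), 'confirm'
        (recipient must approve) or 'challenge' (extra hurdles)."""
        if sender in whitelist:
            return "auto"       # people you have authorized get through freely
        if len(attendees) <= 2:
            return "confirm"    # one-or-two-person invites stay easy, but not automatic
        # Multi-person events from strangers are where slate-style spam would
        # come from, so they get the most friction, though they stay possible.
        return "challenge" if reputation.get(sender, 0) < 1 else "confirm"

    # A stranger inviting five people gets the hardest tier:
    print(invitation_friction("promo@tickets.example",
                              ["me"] + [f"p{i}" for i in range(4)],
                              whitelist=set(), reputation={}))  # -> challenge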

Robocars and Ultracapacitors (and other energy sources)

A reader recently asked about the synergies between robocars and ultracapacitors/supercapacitors. It turns out they are not what you would expect, and it teaches some of the surprising lessons of robocars.

Ultracaps are electrical storage devices, like batteries, which can be charged and discharged very, very quickly. That makes them interesting for electric cars, because slow charging is the bane of electric cars. They also tend to support a very large number of charge and discharge cycles — they don’t wear out the way batteries do. Where you might get 1,000 or so cycles from a good battery, you could see several tens of thousands from an ultracap.

Today, ultracaps cost a lot more than batteries. LIon batteries (like in the Tesla and almost everything else) are at $500/kwh of capacity and falling fast — some forecast it will be $200 in just a few years, and it’s already cheaper in the Tesla. Ultracaps are $2,500 to $5,000 per kwh, though people are working to shrink that.

They are also bigger and heavier. They are cited as just 10 wh/kg and on their way to 20 wh/kg. That’s really heavy — LIon are an order of magnitude better at 120 wh/kg and also improving.

So with the Ultracap, you are paying a lot of money and a lot of weight to get a super-fast recharge. It’s so much money that you could never justify it if not for the huge number of cycles. That’s because there are two big money numbers on a battery — the $/kwh of capacity — which means range — and the lifetime $/kwh, which affects your economics. Lifetime $/kwh is actually quite important but mostly ignored because people are so focused on range. An ultracap, at 5x the cost but 10x or 20x the cycles actually wins out on lifetime $/kwh. That means that while it will be short range, if you have a vehicle which is doing tons of short trips between places it can quickly recharge, the ultracap can win on lifetime cost, and on wasted recharging time, since it can recharge in seconds, not hours. That’s why one potential application is the shuttle bus, which goes a mile between stops and recharges in a short time at every stop.
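The lifetime arithmetic behind that claim is easy to check. A quick sketch, using illustrative figures in the ranges above:

    battery  = {"cost_per_kwh": 500,  "cycles": 1000}    # LIon today
    ultracap = {"cost_per_kwh": 2500, "cycles": 20000}   # 5x the cost, 20x the cycles

    def lifetime_cost(store):
        # dollars paid per kwh actually delivered over the device's whole life
        return store["cost_per_kwh"] / store["cycles"]

    print(lifetime_cost(battery))    # $0.50 per delivered kwh
    print(lifetime_cost(ultracap))   # $0.125 per delivered kwh, 4x cheaper over its life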

How do robocars change the equation? In some ways it’s positive, but mostly it’s not.

  • Robocars don’t mind going out of their way to charge, at least not too far out of their way. Humans hate this. So you don’t need to place charging stations conveniently, and you can have a smaller number of them.
  • Robocars don’t care how long it takes to charge. The only issue is they are not available for service while charging. Humans on the other hand won’t tolerate much wait at all.
  • Robocars will eventually often be small single-person vehicles with very low weight compared to today’s cars. In fact, most of their weight might be battery if they are electric.
  • Users don’t care about the power train of a taxi or its energy source. Only the fleet manager cares, and the fleet manager is all about cost and efficiency and almost nothing else.

Now we see the bad news for the ultracap. Its main advantage is the fast recharge time, and robots don’t care about that much at all. The fleet manager does care about the downtime, but the cost of that downtime is not that high. You need more vehicles the more downtime you have during peak loads, but as vehicles wear out by the km, not the year, the only costs of having more vehicles are the interest and the storage (parking) cost.

The interest cost is very low today. Consider a $20,000 vehicle. At 3%, you’re paying $1.60 per day in interest. So 4 hours of recharge downtime (only at peak times when you need every vehicle) doesn’t cost very much, certainly not as much as the extra cost of an ultracap. The cost of parking is actually much more, but will be quite low in the beginning because these vehicles can park wherever they can get the best rate and the best rate is usually zero somewhere not too far away. That may change in time, to around $2/day for surface parking of mini-vehicles, but free for now in most places.
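Checking the interest piece of that back-of-envelope calculation, with the same illustrative figures:

    vehicle_cost  = 20_000   # dollars
    interest_rate = 0.03     # per year
    per_day  = vehicle_cost * interest_rate / 365   # about $1.64/day
    per_hour = per_day / 24                         # under 7 cents per idle hour
    print(f"${per_day:.2f}/day; 4 hours of charging downtime costs ${4 * per_hour:.2f}")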

In addition to the high cost, the ultracap comes with two other big downsides. The first is the weight and bulk. Especially when a vehicle is small and is mostly battery, adding 200kg of battery actually backfires, and you get diminishing returns on adding more in such vehicles. The other big downside is the short range. Even with the fast recharge time, you would have to limit these vehicles to doing only short cab hops in urban spaces of just a few miles, sending them off after just a few rides to get a recharge.

A third disadvantage is that you need a special charging station to quick charge an ultracap. While level 2 electric car charging stations are in the 7-10kw range, and rapid chargers are in the 50kw-100kw range, ultracap chargers want to be in the megawatt or more range, and that’s a much more serious proposition, and a lot more work to build.

Finally, while ultracaps don’t wear out very fast, they might still depreciate quickly the same way your computer does — because the technology keeps improving. So while your ultracap might last 20 years, you won’t want it any more compared to the cheaper, lighter, higher capacity one you can buy in the future. It can still work somewhere, like grid storage, but probably not in your car.

The fact that robocars don’t need fast refueling in convenient locations opens up all sorts of energy options. Natural gas, hydrogen, special biofuels and electricity all become practical even with gasoline’s 100 year headstart when it comes to deployment and infrastructure, and even sometimes in competition with gasoline’s incredible convenience and energy density. But what the robocar brings is not always a boon to every different form of energy storage.

One technique that makes sense for robocars (and taxis) is battery swap. Battery swap was a big failure for human driven cars, for reasons I have outlined in other posts. But robocars and taxis don’t mind coming back to a central station, or even making an appointment for a very specific time to do their swap. They don’t even mind waiting for other cars to get their swaps, and can put themselves into the swap station when told to — very precisely if needed. Here it’s a question of whether it’s cheaper to swap or just pay the interest and parking on more cars.

Ultracaps are also used to help with regenerative braking, since they can soak up power from hard regenerative braking faster than batteries. That’s mostly not a robocar issue, though in general robocars will brake less hard and accelerate less quickly — trying to give a smooth ride to their passengers rather than an exciting one — so this has less importance there too.

Still, for convenience, the first robocars will probably be gasoline and electric.

Google Accidents, Baidu Cars, Startups and more news roundup

2 months mostly on the road, so here’s a roundup of the “real” news stories in the field.

Google begins PR campaign and talks about accidents

As the world’s most famous company, Google doesn’t need to seek press and the Chauffeur project has kept fairly quiet, but it just opened a new web site which will feature monthly reports on the status of the project. The first report gives details of all the accidents in the project’s history, which we discussed earlier. A new one just took place in the last month, but like the others, it did not involve the self-driving software. Google’s cars continue to not cause any accidents, though they have been at the receiving end of a modestly high number of impacts, perhaps because they are a bit unusual.

The zero at-fault accident number is both impressive, and possibly involves a bit of luck. Perhaps it even raises unrealistic expectations of perfection, because I believe there will be at-fault accidents in the future for both Google and other teams. Most teams, when they were first building their vehicles, had minor accidents where cars hit curbs or obstacles on test tracks, but the track records of almost all teams since then are surprisingly good. One way that’s not luck, of course, is the presence of safety drivers ready to take the controls if something goes wrong. They are trained and experienced, though some day, being human, some of them will make mistakes.

Baidu to build a prototype

In November I gave a “Big Talk” for Baidu in Beijing on cars. Perhaps there is something about search engines, because Baidu has now announced its own project. Like Google, Baidu has expertise in mapping and various AI techniques, and has the advice of Andrew Ng, whose career holds many parallels to that of Sebastian Thrun, who started Google’s project. (Though based on my brief conversations with Andrew, I don’t think he’s directly involved.)

Virginia opens test roads

The state of Virginia has designated 70 miles of roads for robocar testing. That’s a good start for testing by those working in that state, but it edges toward what to me is a dangerous idea — the thought that there would be “special” roads for robocars designated by states or road authorities. The fantastic lesson of the DARPA grand challenges was that the infrastructure stays stupid and the car becomes smart, so that the car can go anywhere once its builders are satisfied it can handle that road. So it’s OK to test on a limited set of roads, but it’s also vital to test in as many situations as you can, so you need to get off that set of roads as soon as you can.

Zoox startup un-stealthed

Zoox is probably the first funded startup working on a real, fully automated robocar. They were recently funded by DFJ ventures and set up shop in rented space at the SLAC linear accelerator lab. Zoox was begun by Tim Kentley-Klay, a designer and entrepreneur from Australia, and he later joined forces with Jesse Levinson, a top researcher from Stanford’s self-driving car projects.

I’ve known about Zoox since it began and have had many discussions with them. They first got some attention a while back with Tim’s designs, which are quite different from typical car designs and presume a fully functional robocar — the designs feature no controls for the humans, and in some cases don’t even have a windshield to see forward. (Indeed, they don’t have a “forward,” since an essential part of the design is to be symmetrical and move equally well in both directions, avoiding the need for some twists and turns.) I like many elements of the Zoox vision, though I think it is even more ambitious than Google’s, at least from a car design standpoint, which is quite audacious in a world where most of the players think Google is going too far.

You can see details in this report on Zoox from IEEE. I haven’t reported on Zoox before now out of FrieNDA courtesy — in fact the early consultations with “Singularity University” described in the article are actually discussions with me.

Zoox is not the first small startup. Kyle Vogt’s “Cruise” has been at it a while, aiming at a much less ambitious supervised product, and truck platooning company Peloton has even simpler goals, but expect to see more startups enter the fray and fight with the big boys in the year to come.

Mercedes E Class

Speaking of supervised cruising, the report is that the 2016 Mercedes E Class will offer highway speed cruising in the USA. This has been on offer in Europe in the past. As I wrote earlier, I am less enthused about supervised cruising products and think they will not do tremendously well. Tesla’s update to offer the same in their cars will probably get the most attention.

Non-Stories

The press continue to get super excited about things that aren’t real. In spite of many reports, Uber does not yet have a car cruising the streets of Pittsburgh, though there is reality to the report that Uber has “poached” a large fraction of the robotics research crew from CMU.

In addition, many stories reported that Tesla had “solved” the liability problem of robocars through the design of their lane change system. In their system (and in several other discussed designs — they did not come up with this) the car won’t change lanes until the human signals it is OK to do so, usually by something like hitting the turn signal indicator. The Tesla plan is for a supervised car, and in a supervised car all liability is already supposed to go to the human supervisor.

Changing lanes safely is surprisingly challenging, because there is always the chance somebody is zooming up behind you at high speed. That’s common when merging into a carpool lane, or on German autobahn trips. Most supervised cars have only forward sensing, but to change lanes safely you need to notice a car coming up fast from behind, and you need to see it quite a distance away. This requires special sensors, such as rear radars, which most cars don’t have. So the solution of having the human check the mirrors works well for now.

More and more stories keep getting excited about “connected car” technology, in particular V2V communications using DSRC. They even write that these technologies are essential for robocars, and it gets scary when people like the transportation secretary say this. I wish the press covering this would take the simple step of asking the top teams working on robocars whether they plan to depend on, or even make early use of, vehicle to vehicle communications. They would find the answers range from “no, not really” to a few vague instances of “yes, someday” from car companies who made corporate support commitments to V2V. The engineers don’t actually think they will find the technology crucial. The fact that the people actually building robocars have only a mild interest, if any, in V2V, while the people who staked their careers on V2V insist it’s essential, should maybe suggest to the press that the truth is not quite what they are told.

Don't be fooled by robots falling down at the DARPA Robotics Challenge

This weekend I went to Pomona, CA for the 2015 DARPA Robotics Challenge which had robots (mostly humanoid) compete at a variety of disaster response and assistance tasks. This contest, a successor of sorts to the original DARPA Grand Challenge which changed the world by giving us robocars, got a fair bit of press, but a lot of it was around a widely shared video showing various robots falling down while doing the course.

What you don’t hear in this video are the cries of sympathy from the crowd of thousands watching — akin to when a figure skater might fall down — or the cheers as each robot would complete a simple task to get a point. These cheers and sympathies were not just for the human team members, but in an anthropomorphic way for the robots themselves. Most of the public reaction to this video included declarations that one need not be too afraid of our future robot overlords just yet. It’s probably better to watch the DARPA official video which has a little audience reaction.

Don’t be fooled as well by the lesser-known fact that there was a lot of remote human tele-operation involved in the running of the course.

Check out my Gallery of Photos from the DARPA Robotics Challenge Finals.

What you also don’t see in this video is just how very far the robots have come since the first round of trials in December 2013. During those trials the amount of remote human operation was very high, and there weren’t a lot of great fall videos because the robots had tethers that would catch them if they fell. (These robots are heavy and many took serious damage when falling, so almost all testing is done with a crane, hoist or tether able to catch the robot during the many falls which do occur.)

We aren’t yet anywhere close to having robots that could do tasks like these autonomously, so for now the research is in making robots that can do tasks with more and more autonomy with higher level decisions made by remote humans. The tasks in the contest were:

  • Starting in a car, drive it down a simple course with a few turns and park it by a door.
  • Get out of the car — one of the harder tasks as it turns out, and one that demanded a more humanoid form
  • Go to a door and open it
  • Walk through the door into a room
  • In the room, go up to a valve with a circular handle and turn it 360 degrees
  • Pick up a power drill, and use it to cut a large enough hole in a sheet of drywall
  • Perform a surprise task — in this case throwing a lever on day one, and on day two unplugging a power cord and plugging it into another socket
  • Either walk over a field of cinder blocks, or roll through a field of light debris
  • Climb a set of stairs

The robots have an hour to do this, so they are often extremely slow, and yet to the surprise of most, the audience — a crowd of thousands, plus thousands more online — watched with fascination and cheering, even when robots would take a step once a minute, pause at a task for several minutes, or get into a problem and spend 10 minutes being fixed by humans, at a penalty.

Google Accidents and Deployment, Mercedes Trucks and more

Some headlines (I’ve been on the road and will have more to say soon.)

Google announces it will put new generation buggies on city streets

Google has done over 2.7 million km of testing with their existing fleet, they announced. Now, they will be putting their small “buggy” vehicle onto real streets in Mountain View. The cars will stick to slower streets and are NEVs that only go 25mph.

While this vehicle is designed for fully automatic operation, during the testing phase, as required, it will have a temporary set of controls for the safety driver to use in case of any problem. Google’s buggy, which still has no official name, has been built in a small fleet and has been operating on test tracks up to this point. Now it will need to operate among other road users and pedestrians.

Accidents with, but not caused by, self-driving cars cause press tizzy.

The press were terribly excited when reports filed with the State of California indicated that there had been 4 accidents reported — 3 for Google and 1 for Delphi. Google reported a total of 11 accidents in 6 years of testing and over 1.5 million miles.

Headlines spoke loudly about the cars being in accidents, but buried in the copy was the fact that none of the accidents by any company were the fault of the software. Several took place during human driving, and the rest were accidents that were clearly the fault of the other party, such as being rear ended or hit while stopped.

Still, as some of the smarter press noticed, this is a higher rate of being in an accident than normal, in fact almost double: human drivers are in an accident about every 250,000 miles, so there should have been only 6.

The answer may be that these vehicles are unusual and have “self driving car” written on them. They may be distracting other drivers, making it more likely those drivers will make a mistake. In addition, many people have told me of their thoughts when they encountered a Google car on the road. “I thought about going in front of it and braking to see what it would do,” I’ve been told by many. Aside from the fact that this is risky and dickish, and would just cause the safety drivers to immediately disengage and take over, they all also said that in reality they didn’t do it, and experience in the cars shows that it’s very rare for other drivers to actually try to “test” the car.

But perhaps some people who think about it do distract themselves and end up in an accident. That’s not good, but it’s also something that should go away as the novelty of the cars decreases.

Mercedes and Freightliner test in Nevada

There was also lots of press about a combined project of Mercedes/Daimler and Freightliner to test a self-driving truck in Nevada. There is no reason that we won’t eventually have self-driving trucks, of course, and there are direct economic benefits for trucking fleets to not require drivers.

Self-driving trucks are not new off the public road. In fact the first commercial self-driving vehicles were mining trucks at the Rio Tinto mine in Australia. Small startup Peloton is producing a system to let truckers convoy, with the rear driver able to go hands-free. Putting them on regular roads is a big step, but it opens some difficult questions.

First, it is not wise to do this early on. Systems will not be perfect, and there will be accidents. You want your first accidents to be with something like Google’s buggy or a Prius, not with an 18-wheel semi-truck. “Your first is your worst” with software and so your first should be small and light.

Secondly, this truck opens up the jobs question much more than other vehicles, where the main goal is to replace amateur drivers, not professionals. Yes, cab drivers will slowly fade out of existence as the decades pass, but nobody grows up wanting to be a cab driver — it’s a job you fall into for a short time because it’s quick and easy work that doesn’t need much training. While other people build robots to replace workers, the developers of self-driving cars are mostly working on saving lives and increasing convenience.

Many jobs have been changed by automation, of course, and this will keep happening, and it will happen faster. Truck drivers are just one group that will face this, and they are not the first. On the other hand, the reality of robot job replacement is that while it has happened at a grand scale, there are more people working today than ever. People move to other jobs, and they will continue to do so. This may not be much consolation for those who will have to go through this transition, but the other benefits of robocars are so large that it’s hard to imagine delaying them because of this. Jobs are important, but lives are even more important.

It’s also worth noting that today there is a large shortage of truck drivers, and as such the early robotic trucks will not be taking any jobs.

I’m more interested in tiny delivery “trucks” which I call “deliverbots.” For long haul, having large shared cargo vehicles makes sense, but for delivery, it can be better to have a small robot do the job and make it direct and personal.

New Sensors

The world of sensors continues to grow. This wideband software-based radar from a student team won a prize. It claims to produce a 3D image. Today’s automotive radars have long range but very low resolution. High resolution radar could replace lidar if it gets enough resolution. Radar sees further, sees through fog, and gives you a speed value directly; LIDAR falls short in those areas.

Also noteworthy is this article on getting centimeter GPS accuracy with COTS GPS equipment. They claim to be able to eliminate a lot of multipath through random movements of the antennas. If true, it could be a huge localization breakthrough. GPS just isn’t good enough for robocar positioning. Aside from the fact that it goes away in some locations like tunnels, even though modern techniques can get sub-cm accuracy, if you want to position your robocar with it, and it alone, you need it to essentially never fail. But it does.

That said, most other localization systems, including map and image based localization, benefit from getting good GPS data to keep them reliable. The two systems together work very well, and making either one better helps.

Transportation Secretary Foxx advances DoT plan

Secretary Foxx has been out writing articles and speaking in Silicon Valley about their Beyond Traffic effort. They promise big promotion of robocars, which is good. Sadly, they also keep promoting the false idea that vehicle to vehicle communications are valuable and will play a significant role in the development of robocars. In my view, many inside the DoT staked their careers on V2V, and so feel required to promote it, even though it has minimal compelling applications and may actually be rejected entirely by the robocar community because of security issues.

This debate is going to continue for a while, it seems.

Maps, maps, maps

Nokia has put its “Here” map division up for sale, and a large part of the attention seems to relate to their HD Maps project, aimed at making maps for self-driving. (HERE published a short interview with me on the value of these maps.)

It will be interesting to see how much money that commands. At the same time, TomTom, the 3rd mapping company, has announced it will begin making maps for self-driving cars — a decision they made in part because of encouragement from yours truly.

Uber dwarfs taxis

Many who thought Uber’s valuation was crazy came to that conclusion because they looked at the size of the taxi industry. To the surprise of nobody who has followed Uber, they recently revealed that in San Francisco, their birthplace, they are now 3 times the size of the old taxi industry, and growing. It was entirely the wrong comparison to make. The same is true of robocars. They won’t just match what Uber does, they will change the world.

There’s more news to come, during a brief visit to home, but I’m off to play in Peoria, and then Africa next week!

Second musings on the Hugo Awards and the fix

Last week’s Hugo Awards crisis caused a firestorm even outside the SF community. I felt it time to record some additional thoughts beyond the summary of many proposals I did.

It’s not about the politics

I think all sides have made an error by bringing the politics and personal faults of either side into the mix. Making it about the politics legitimises the underlying actions for some. As such, I want to remove that from the discussion as much as possible. That’s why in the prior post I proposed an alternate history.

What are the goals of the award?

Awards are funny beasts. They are almost all given out by societies. The Motion Picture Academy does the Oscars, and the Worldcons do the Hugos. The Hugos, though, are overtly a “fan” award (unlike the Nebulas which are a writer’s award, and the Oscars which are a Hollywood pro’s award.) They represent the view of fans who go to the Worldcons, but they have always been eager for more fans to join that community. But the award does not belong to the public, it belongs to that community.

While the award is done with voting and ballots, I believe it is really a measurement, which is to say, a survey. We want to measure the aggregate opinion of the community on what the best of the year was. The opinions are, of course, subjective, but the aggregate opinion is an objective fact, if we could learn it.

In particular, I would venture we wish to know which works would get the most support among fans, if the fans had the time to fairly judge all serious contenders. Of course, not everybody reads everything, and not everybody votes, so we can’t ever know that precisely, but if we did know it, it’s what we would want to give the award to.

To get closer to that, we use a 2-step process, beginning with a nomination ballot: survey the community, and try to come up with a good estimate of the best contenders based on fan opinion. This both honours the nominees and, more importantly, gives the members the chance to more fully evaluate them and make a fair comparison. To help, in a process I began 22 years ago, the members get access to electronic versions of almost all the nominees, and a few months in which to evaluate them.

Then the final ballot is run, and if things have gone well, we’ve identified what truly is the best loved work of the informed and well-read fans. Understand again, the choices of the fans are opinions, but the result of the process is our best estimate of a fact — a fact about the opinions.

The process is designed to help obtain that winner, and there are several sub-goals:

  • The process should, of course, get as close to the truth as it can. In the end, the most people should feel it was the best choice.
  • The process should be fair, and appear to be fair
  • The process should be easy to participate in, administer and to understand
  • The process should not encourage any member to not express their true opinion on their ballot. If they lie on their ballot, how can we know the true best aggregate of their opinions?
  • As such, ballots should be generated independently, and there should be very little “strategy” to the system which encourages members to falsely represent their views to help one candidate over another.
  • It should encourage participation, and the number of nominees has to be small enough that it’s reasonable for people to fairly evaluate them all

A tall order, when we add a new element — people willing to abuse the rules to alter the results away from the true opinion of the fans. In this case, we had this through collusion. Two related parties published “slates” — the analog of political parties — and their followers carried them out, voting for most or all of the slate instead of voting their own independent and true opinion.

This corrupts the system greatly because when everybody else nominates independently, their nominations are broadly distributed among a large number of potential candidates. A group that colludes and concentrates their choices will easily dominate, even if it’s a small minority of the community. A survey of opinion becomes completely invalid if the respondents collude or don’t express their true views. Done in this way, I would go so far as to describe it as cheating, even though it is done within the context of the rules.

Proposals that are robust against collusion

Collusion is actually fairly obvious if the group is of decent size. Their efforts stick out clearly in a sea of broadly distributed independent nominations. There are algorithms which make it less powerful. There are other algorithms that effectively promote ballot concentration even among independent nominators so that the collusion is less useful.

A wide variety have been discussed. Their broad approaches include:

  • Systems that diminish the power of a nominating ballot as more of its choices are declared winners. Effectively, the more you get of what you asked for, the less likely you are to get more of it. This mostly prevents a sweep of all nominations, and also increases diversity in the final result, reflecting the true diversity of the independent nominators. (A toy version of this approach is sketched after the list.)
  • Systems which attempt to “maximize happiness,” which is to say try to make the most people pleased with the ballot by adding up for each person the fraction of their choices that won and maximizing that. This requires that nominators not all nominate 5 items, and makes a ballot with just one nomination quite strong. Similar systems allow putting weight on nominations to make some stronger than others.
  • Public voting, where people can see running tallies, and respond to collusion with their own counter-nominations.
  • Reduction of the number of nominations for each member, to stop sweeps.
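As a toy illustration of the first approach, here is a tally in Python where finalists are chosen one at a time and each ballot’s weight shrinks as more of its picks win. This is my own simplification for illustration, not the exact algorithm of any actual proposal:

    from collections import defaultdict

    def tally(ballots, num_finalists):
        """ballots: list of sets of nominated works. Picks finalists one at
        a time, discounting ballots that have already produced winners."""
        finalists = []
        while len(finalists) < num_finalists:
            scores = defaultdict(float)
            for ballot in ballots:
                wins = sum(1 for work in ballot if work in finalists)
                weight = 1.0 / (1 + wins)  # the more you got, the less pull you keep
                for work in ballot:
                    if work not in finalists:
                        scores[work] += weight
            if not scores:
                break
            finalists.append(max(scores, key=scores.get))
        return finalists

    # 100 identical slate ballots vs. 300 scattered independent ballots:
    slate = [{"S1", "S2", "S3", "S4", "S5"}] * 100
    indep = [{f"A{i % 20}", f"A{(i + 7) % 20}"} for i in range(300)]
    print(tally(slate + indep, 5))  # the slate takes 3 of the 5 slots, not all 5

Note that even in this toy run the slate still lands several nominees; it just can no longer sweep, which matches the limitation discussed below.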

The proposals work to varying degrees, but they all significantly increase the “strategy” component for an individual voter. It becomes the norm that if you have just a little information about what the most popular choices will be, your wisest course to get the ballot you want will be to deliberately remove certain works from your ballot.

Some members would ignore this and nominate honestly. Many, however, would read articles about strategy, and either practice it or wonder if they were doing the right thing. In addition to debates about collusion, there would be debates on how strategy affected the ballot.

Certain variants of multi-candidate STV help against collusion and have less strategy, but most of the methods proposed have a lot.

In addition, all the systems permit at least one, and as many as 2 or 3, slate-choice nominees onto the final ballot. While members will probably know which ones those are, this is still not desired. First of all, these placements displace other works which would otherwise have made the ballot. You could increase the size of the final ballot, but you would need to know how many slate choices will be on it.

It should be clear that when others do not collude, slate collusion is very powerful. In many political systems, it is actually considered a great result if a party with 20% of the voters gains 20% of the “victories.” Here, we have a situation with 2,000 nominators where just 100 colluding members can saturate some categories and get several entries into all of them, and with 10% (the likely amount in 2015) they can get a large fraction of them. As such it is not proportional representation at all.

Fighting human attackers with human defence

Considering the risks of confusion and strategy with all these systems, I have been led to the conclusion that the only solid response to organized attackers on the nomination system is a system of human judgement. Instead of hard and fast voting rules, the time has come, regrettably, to have people judge if the system is under attack, and give them the power to fix it.

This is hardly anything new; it’s how almost all systems of governance work. It may be hubris to suggest the award can get by without it. Like the good systems of governance, this must be done with impartiality, transparency and accountability, but it must be done.

I see a few variants which could be used. Enforcement would most probably be done by the Hugo Committee, which is normally a special subcommittee of the group running the Worldcon. However, it need not be them, and could be a different subcommittee, or an elected body.

While some of the variants I describe below add complexity, it is not necessary to do them all. One important thing about the rule of justice is that you don’t have to get it exactly precise. You get it in broad strokes and you trust people. Sometimes it fails. Mostly it works, unless you bring in the wrong incentives.

As such, some of these proposals work by not changing almost anything about the “user experience” of the system. You can do this with people nominating and voting as they always did, and relying on human vigilance to deflect attacks. You can also use the humans for more than that.

A broad rule against collusion and other clear ethical violations

The rule could be as broad as to prohibit “any actions which clearly compromise the honesty and independence of ballots.” There would be some clarifications, to indicate this does not forbid ordinary lobbying and promotion, but does prohibit collusion, vote buying, paying for memberships which vote as you instruct and similar actions. The examples would not draw hard lines, but give guidance.

Explicit rules about specific acts

The rule could be much more explicit, with less discretion, with specific unethical acts. It turns out that collusion can be detected by the appearance of patterns in the ballots which are extremely unlikely to occur in a proper independent sample. You don’t even need to know who was involved or prove that anybody agreed to any particular conspiracy.

The big challenge with explicit rules (which take 2 years to change) is that clever human attackers can find holes, and exploit them, and you can’t fix it then, or in the next year.

Delegation of nominating power or judicial power to a sub group elected by the members

Judicial power to fix problems with a ballot could fall to a committee chosen by members. This group would be chosen by a well established voting system, similar to those discussed for the nomination. Here, proportional representation makes sense, so if a group is 10% of the members it should have 10% of this committee. It won’t do them much good, though, if the others all oppose them. Unlike fixed rules, the delegates would be human beings, able to learn and reason. With 2,000 members, and 50 members per delegate, there would be 40 on the judicial committee, and it could probably be trusted to act fairly with that many people. In addition, action could require some sort of supermajority. If a 2/3 supermajority were needed, attackers would need to be 1/3 of all members.

This council could perhaps be given only the power to add nominations — beyond the normal fixed count — and not to remove them. Thus if there are inappropriate nominations, they could only express their opinion on that, and leave it to the voters what to do with those candidates, including not reading them and not ranking them.

Instead of judicial power, it might be simpler to assign pure nominating power to delegates. Collusion is useless here because in effect all members are now colluding about their different interests, but in an honest way. Unlike pure direct democracy, the delegates, not unlike an award jury, would be expected to listen to members (and even look at nominating ballots done by them) but charged with coming up with the best consensus on the goal stated above. Such jurors would not simply vote their preferences. They would swear to attempt to examine as many works as possible in their efforts. They would suggest works to others and expect them to be likely to look at them. They would expect to be heavily lobbied and promoted to, but as long as it’s pure speech (no bribes other than free books and perhaps some nice parties) they would be expected to not be fooled so easily by such efforts.

As above, a nominating body might also only start with a member nominating system and add candidates to it and express rulings about why. In many awards, the primary function of the award jury is not to bypass the membership ballot, but to add one or two works that were obscure and the members may have missed. This is not a bad function, so long as the “real ballot” (the one you feel a duty to evaluate) is not too large.

Transparency and accountability

There is one barrier to transparency, in that releasing preliminary results biases the electorate in the final ballot, which would remain a direct survey of members with no intermediaries — though still with the potential to look for attacks and corruption. There could also be auditors, who are barred from voting in the awards and are allowed to see all that goes on. Auditors might be people from the prior worldcon or some other different source, or fans chosen at random.

Finally, decisions could be appealed to the business meeting. This requires a business meeting after the Hugos. Attackers would probably always appeal any ruling against them. Appeals can’t alter nominations, obviously, or restore candidates who were eliminated.

Comprehensive plan

All the above requires the two year ratification process and could not come into effect (mostly) until 2017. To deal with the current cheating and the promised cheating in 2016, the following are recommended.

  1. Downplay the 2015 Hugo Award, perhaps with sufficient fans supporting this that all categories (including untainted ones) have no award given.
  2. Conduct a parallel award under a new system, and fête it like the Hugos, though it would not use that name.
  3. Pass new proposed rules including a special rule for 2016
  4. If 2016’s award is also compromised, do the same. However, at the 2016 business meeting, ratify a short-term amendment proposed in 2015 declaring the alternate awards to be the Hugo awards if run under the new rules, and discarding the uncounted results of the 2016 Hugos conducted under the old system. Another amendment would permit winners of the 2015 alternate award to say they are Hugo winners.
  5. If the attackers gave up, and 2016’s awards run normally, do not ratify the emergency plan, and instead ratify the new system that is robust against attack for use in 2017.

People get carsick as passengers? Shocking!

Earlier this week I was sent some advance research from the U of Michigan about car sickness rates for car passengers. I found the research of interest, but wish it had covered some questions I think are more important, such as how carsickness is changed by potentially new types of car seating, such as face to face or along the side.

To my surprise, there was a huge rush of press coverage of the study, which concluded that 6 to 12% of car passengers get a bit queasy, especially when looking down in order to read or work. While it was worthwhile to work up those numbers, the overall revelation was in the “Duh” category for me, I guess because it happens to me on some roads and I presumed it was fairly common.

Oddly, most of the press was of the “this is going to be a barrier to self-driving cars” sort, while my reaction was, “wow, that happens to fewer people than I thought!”

Having always known this, I am interested in the statistics, but to me the much more interesting question is, “what can be done about it?”

For those who don’t like to face backwards, the fact that so many are not bothered is a good sign — just switch seats.

Some activities are clearly better than others. While staring down at your phone or computer in your lap is bad during turns and bumps, it may be that staring up at a screen watching a video, with your peripheral vision very connected to the environment, is a choice that reduces the stress.

I also am interested in studying whether there can be cues to help people reduce sickness. For example, the car will know of upcoming turns, and probably even upcoming bumps. It could issue tones to give you subtle cues as to what’s coming, and when it might be time to pause and look up. It might even be the case that audio cues could substitute for visual cues in our plastic brains.

The car, of course, should drive as gently as it can, and because the software does not need a tight suspension to feel the road, the ride can be smoother as well.

Another interesting thing to test would be having your tablet or phone deliberately tilt its display to give you the illusion you are looking at the fixed world when you look at it, or to have a little “window” that shows you a real world level so your eyes and inner ears can find something to agree on.

More advanced would be a passenger pod on hydraulic struts able to tilt with several degrees of freedom to counter the turns and bumps, and make them always be such that the forces go up and down, never side to side. With proper banking and tilting, you could go through a roundabout (often quite disconcerting when staring down) but only feel yourself get lighter and heavier.

Hugo awards suborned, what can or should be done?

Since 1992 I have had a long association with the Hugo Awards for SF & Fantasy given by the World Science Fiction Society/Convention. In 1993 I published the Hugo and Nebula Anthology which was for some time the largest anthology of current fiction ever published, and one of the earliest major e-book projects. While I did it as a commercial venture, in the years to come it became the norm for the award organizers to publish an electronic anthology of willing nominees for free to the voters.

This year, things are highly controversial, because a group of fans/editors/writers calling themselves the “Sad Puppies” had great success with a campaign to dominate the nominations for the awards. They published a slate of recommended nominations, and a sufficient number of people sent in nominating ballots with that slate that it dominated most of the award categories. Some categories are entirely the slate; only one was not affected. It’s important to understand that the nominating and voting on the Hugos is done by members of the World SF Society, which is to say people who attend the World SF Convention (Worldcon) or who purchase special “supporting” memberships which don’t let you go but give you voting rights. This is a self-selected group, but in spite of that, it has mostly managed to run a reasonably independent vote to select the greatest works of the year. The group is not large, and in many categories it can take only a score or two of nominations to make the ballot, and victory margins are often small. As such, it’s always been possible, and not even particularly hard, to subvert the process with any concerted effort. It’s even possible to do it with money, because you can just buy memberships which can nominate or vote, so long as a real unique person is behind each ballot.

The nominating group is self-selected, but it’s mostly a group that joins because they care about SF and its fandom, and as such, this keeps the award voting more independent than you would expect for a self-selected group. But this has changed.

The reasoning behind the Sad Puppy effort is complex and there is much contentious debate you can find on the web, and I’m about to get into some inside baseball, so if you don’t care about the Hugos, or the social dynamics of awards and conventions, you may want to skip this post.

Delphi completes trans-continental drive, and Hyundai goes big

Most of the robocar press this week has been about the Delphi drive from San Francisco to New York, which completed yesterday. Congratulations to the team. Few teams have tried such a long course and so many different roads. (While Google has over a million miles logged in their testing by now, it’s not been reported that they have covered 3,500 miles of distinct roads; most testing is done around Google HQ.)

The team reported the vehicle drove 99% of the time. This is both an impressive and unimpressive number, and understanding that is key to understanding the difficulty of the robocar problem.

One of the earliest pioneers, Ernst Dickmanns did a long highway drive 20 years ago, in 1995. He reported the system drove 95% of the time, kicking out every 10km or so. This was a system simply finding the edge of the road, and keeping in the lane by tracking that. Delphi’s car is much more sophisticated, with a very impressive array of sensors — 10 radars, 6 lidars and more, and it has much more sophisticated software.

99% is not 4% better than 95%, it’s 5 times better, because the real number is the fraction of road it could not drive. And from 99%, we need to get something like 10,000 times better — to 99.9999% of the time, to even start talking about a real full-auto robocar. In the USA we drive 3 trillion miles per year, taking about 60 billion hours, a little over half of it on the highway; even at 99.9999%, encountering something you could not handle 1 time in a million would still mean too many accidents across all cars.
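The scale of that gap is easy to quantify. A quick sketch using the figures cited above:

    dickmanns_1995 = 0.95      # fraction of road handled in the 1995 drive
    delphi_2015    = 0.99
    target         = 0.999999

    def fail(p):
        return 1.0 - p

    print(fail(dickmanns_1995) / fail(delphi_2015))  # 5.0: 99% is 5x better than 95%
    print(fail(delphi_2015) / fail(target))          # 10,000x improvement still needed

    miles_per_year = 3e12   # US driving
    print(miles_per_year * fail(target))  # ~3 million unhandled events/year even then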

However, this depends on what we mean by “being unable to handle it.”

  • If not handling means “has a fatal accident” that could map to 3,600,000 of those, which would be 100x the human rate and not acceptable.
  • If not handling it means “has any sort of accident” then we’re pretty good, about 1/4th of the rate of human accidents
  • If not handling it means that the vehicle knows certain roads are a problem, and diverts around them or requests human assistance, it’s no big problem at all.
  • Likewise if not handling it means identifying a trouble situation, and slowing down and pulling off the road, or even just plain stopping in the middle of the road — which is not perfectly safe but not ultra-dangerous either — it’s also not a problem.

At the same time, our technology is an exponential one, so it’s wrong to think that the statement that it needs to be 10,000 times better means the system is only 1/10,000th of the way there. In fact, getting to the goal may not be that far away, and Google is much further along: they reported a distance of over 80,000 miles between necessary interventions. Humans have accidents about every 250,000 miles.

(Delphi has not reported the most interesting number, which is necessary unexpected interventions per million miles. To figure out if an intervention was necessary, you must replay the event in a simulator to see what the vehicle would have done had the safety driver not intervened. The truly interesting number is the combination of interventions per million miles and the fraction of roads you can drive. It’s easier, but boring, to get a low intervention number on one plain piece of straight highway, for example.)

It should also be noted that Delphi’s result is almost entirely on highways, which are the simplest roads to drive for a robot. Google’s result is also heavily highway biased, though they have reported a lot more surface street work. None of the teams have testing records in complex and chaotic streets such as those found in the developing world, or harsh weather.

It is these facts which lead some people to conclude this technology is decades away. That would be the right conclusion if you were unaware of the exponential curve the technologies and the software development are on.

Huge Hyundai investment

For some time, I’ve been asking where the Koreans are on self-driving cars. Major projects arose in many major car companies, with the Germans in the lead, and then the US and Japan. Korea was not to be seen.

Hyundai announced they would produce highway cruise cars shortly (like other makers) but they also announced they would produce a much more autonomous car by 2020 — a similar number to most car makers as well. Remarkable though was the statement that they would invest over $70 billion in the next 4 years on what they are calling “smart cars,” including hiring over 7,000 people to work on them. While this number includes the factories they plan to build, and refers to many technologies beyond robocars, it’s still an immense number. The Koreans have arrived.

Matternet launches drone delivery platform

I often speak about deliverbots — the potential for ground based delivery robots. There is also excitement about drone (UAV/quadcopter) based delivery. We’ve seen many proposed projects, including Amazon Prime Air, and much debate. Many years ago I was perhaps the first to propose that drones deliver a defibrillator anywhere, and there are a few projects underway to do this.

Some of my students in the Singularity University Graduate Studies Program in 2011 really caught the bug, and their team project turned into Matternet — a company focused on drone delivery in the parts of the world without reliable road infrastructure. Example applications include moving lightweight items like medicines and test samples between remote clinics, and eventually much more.

I’m pleased to say they just announced moving to a production phase called Matternet One. Feel free to check it out.

When it comes to ground robots and autonomous flying vehicles, there are a number of different trade-offs:

  • Drones will be much faster, and have an easier time getting roughly to a location. It’s a much easier problem to solve. No traffic, and travel mostly as the crow flies.
  • Deliverbots will be able to handle much heavier and larger cargo, consuming a lot less energy in most cases. Though drones able to move 40kg are already out there.
  • Regulations stand in the way of both vehicles, but current proposed FAA regulations would completely prohibit the drones, at least for now.
  • Landing a drone in a random place is very hard. Some drone plans avoid that by lowering the cargo on a tether and releasing the tether.
  • Driving to a doorway or even gate is not super easy either, though.
  • Heavy drones falling on people or property are an issue that scares people, but they are also scared of robots on roads and sidewalks.
  • Drones probably cost more but can do more deliveries per hour.
  • Drones don’t have good systems in place to avoid collisions with other drones. Deliverbots won’t go that fast and so can stop quickly for obstacles seen with short range sensors.
  • Deliverbots have to not hit cars or pedestrians. Really not hit them.
  • Deliverbots might be subject to piracy (people stealing them) and drones may have people shoot at them.
  • Drones may be noisy (this is yet to be seen) particularly if they have heavier cargo.
  • Drones can go where there are no roads or paths. For ground robots, you need legs like the BigDog.
  • Winds and rain will cause problems for drones. Deliverbots will be more robust against these, but may have trouble on snow and ice.

In the long run, I think we’ll see drones for urgent, light cargo and deliverbots for the rest, along with real trucks for the few large and heavy things we need.

Delphi's cross-country trip and a raft of Robocar News

I’ve been on the road, and there has been a ton of news in the last 4 weeks. In fact, below is just a small subset of the now constant stream of news items and articles that appear about robocars.

Delphi has made waves by undertaking a road trip from San Francisco to New York in their test car, which is equipped with an impressive array of sensors. The trip is now underway, and on their page you can see lots of videos of the vehicle along the trek.

The Delphi vehicle is one of the most sensor-laden vehicles out there, and that’s good. In spite of all those who make the rather odd claim that they want to build robocars with fewer sensors, Moore’s Law and other principles teach us that the right procedure is to throw everything you can at the problem today, because those sensors will be cheap when it comes time to actually ship. Particularly for those who say they won’t ship for a decade.

At the same time, the Delphi test is mostly of highway driving, with very minimal urban street driving according to Kristen Kinley at Delphi. They are attempting off-map driving, which is possible on highways due to their much simpler environment. Like all testing projects these days, there are safety drivers in the cars ready to intervene at the first sign of a problem.

Delphi is doing a small amount of DSRC vehicle to infrastructure testing as well, though this is only done in Mountain View where they used some specially installed roadside radio infrastructure equipment.

Delphi is doing the right thing here — getting lots of miles and different roads under their belt. This is Google’s giant advantage today. Based on Google’s announcements, they have more than a million miles of testing in the can, and that makes a big difference.

Hype and reality of Tesla’s autopilot announcement

Tesla has announced they will do an over-the-air upgrade of car software in a few months to add autopilot functionality to existing models that have sufficient sensors. This autopilot is the “supervised” class of self-driving that I warned may end up viewed as boring. The press have treated this as something immense, but as far as I can tell, it is similar to products built by Mercedes, BMW, Audi and several other companies, some even sold in the market (at least for traffic jams) for a couple of years now.

The other makers have shied away from offering full highway speed commercially, though rumours exist of it being available in commercial cars in Europe. What is special about Tesla’s offering is that it will be the first car sold in the US to do this at highway speed, and they may offer supervised lane change as well. It’s also interesting that, since they have been planning this for a while, it will come as a software upgrade to people who bought their technology package earlier.

UK project budget rises to £100 million

What started as a £10 million prize in the UK has grown to over £100 million in grants in the latest UK budget. While government research labs will not provide us with the final solutions, this money will probably create some very useful tools and results for the private players to exploit.

MobilEye releases their EyeQ4 chip

MobilEye from Jerusalem is probably the leader in automotive machine vision, and their new-generation chip has been launched, but won’t show up in cars for a few years. It’s an ASIC packed with hardware and processor cores aimed at machine vision tasks. My personal judgement is that this is not sufficient for robocar driving, but MobilEye wants to prove me wrong. (The EyeQ4 chip does have software to do sensor fusion with LIDAR and radar, so they don’t want to prove me entirely wrong.) Even if not good enough on their own, MobilEye chips offer a good alternate path for redundancy.

Chris Urmson gives a TED talk about the Google Car

Talks by Google’s team are rare — the project is unusual in trying to play down its publicity. I was not at TED, but reports from there suggest Chris did not reveal a great deal that was new, other than repeating his goal of having the cars in practical service before his son turns 16. Of course, humans will be driving for a long time after robocars start becoming common on the roads, but it is true that we will eventually see teens who would have gotten a licence never get around to getting one. (Teens are already waiting longer to get their licences, so this is not a hard prediction.)

The war between DSRC and more WiFi is heating up

Two years ago, the FCC warned that since auto makers had not really figured out much good to do with the DSRC spectrum at 5.9GHz, it was time to repurpose it for unlicensed use, like more WiFi.

There is now a bill being proposed to force this.

How to avoid a pilot suicide

After 9/11 there was a lot of talk about how to prevent a repeat, and the best answer found was to fortify the cockpit door and prevent unauthorized access. Every security system, however, sometimes prevents authorized people from getting access, and the tragic results of that are now clear to the world. This is likely a highly unusual event, and we should not go overboard, but it’s still interesting to consider.

(I have an extra reason to take special interest here, I was boarding a flight out of Spain on Tuesday just before the Germanwings flight crashed.)

In 2001, it was very common to talk about how software systems, at least on fly-by-wire aircraft, might make it impossible for aircraft to do things like fly into buildings. Such modes might be enabled by remote command from air traffic control. Pilots resist this; they don’t like the idea of a plane that might refuse to obey them at any time, because with some justification they worry that a situation could arise where the automated system is in error, and they need full manual control to do what needs to be done.

The cockpit access protocol on the Airbus allows flight crew to enter a code to unlock the door. Quite reasonably, the pilot in the cockpit can override that access, because an external bad guy might force a flight crew member to enter the code.

So here’s an alternative — a code that can be entered by a flight crew member which sends an emergency alert to air traffic control. ATC would then have the power to unlock the door with no possibility of pilot override. In extreme cases, ATC might even be able to put the plane in a safe mode, where it can only fly to a designated airport and auto-land there. In planes with sufficient bandwidth near an airport, the plane might actually be landed by remote pilots like a UAV, an entirely reasonable idea for newer aircraft. In case of a real terrorist attack, ATC would need to be ready to refuse to open the door no matter what is threatened against the passengers.

If ATC is out of range (like over the deep ocean) then the remote console might allow the flight crew — even a flight attendant — to direct the aircraft to fly to pre-approved waypoints along the planned flight path where quality radio contact can be established.

Clearly there is a risk to putting a plane in this mode, though ATC or the flight crew who did it could always return control to the cockpit.

It might still be possible to commit suicide, but it would take a lot more detailed planning. Indeed, there have been pilot suicides where the door was not locked, and the suicidal pilot just put the plane into a non-recoverable spin so quickly that nobody could stop it. Still, in many cases of suicide, even a small impediment can make the difference.

Update: I have learned the lock has a manual component, so the pilot in the cockpit could prevent even a remote opening for now. Of course, current planes are not set up to be remotely flown, though that has been discussed. It’s non-trivial (and would require lots of approval) but it could have other purposes.

A safe mode that prevents overt attempts to crash might be more effective than you think, in that with many suicides, even modest discouragement can make a difference. It’s why they still want to put a fence on the Golden Gate Bridge, and have similar barriers elsewhere. You won’t stop a determined suicide, but it apparently does stop those who are still uncertain, which is lots of them.

The simpler solution — already going into effect in countries that did not have this rule already — is a regulation insisting that nobody is ever alone in the cockpit. Under this rule, if a pilot wants to go to the bathroom, a flight attendant waits in the cockpit. Of course, a determined suicidal pilot could disable this person, either because of physical power, or because sometimes there is a weapon available to pilots. That requires more resolve and planning, though.

What colour is the dress? It's both.

Perhaps by now you are sick of the dress that 3/4 of people see as “white and gold” and 1/4 see as “dark blue and black.” If you haven’t seen it, it’s easy to find. What’s amazing is to see how violent the arguments can get, because the two ways we see it are so hugely different. “How can you see that as white????” people shout. They really shout.

There are a few explanations out there, but let me add my own:

  • The real dress, the one you can buy, is indeed blue and black. That’s well documented.
  • The real photo of the dress everybody is looking at, is light blue and medium gold, because of unusual lighting and colour balance.

That’s the key point. The dress and photo are different. Anybody who saw the dress in almost any lighting would say it was blue and black. But people say very different things about the photo.

To explain, here are sampled colour swatches from the photo, on white and dark backgrounds.

You can see that the colours in the photo are indeed a light blue and a medium to dark gold. Whatever the dress really is, that’s what the photo colours are.

We see things in strange light all the time. Indoors, under incandescent light bulbs, or at sunset, everything is really, really yellow-red. Take a photo at these times with your camera set to “sunshine” light and you will see what the real colours look like. But your brain sees the colours very similarly to how they are in the day. Our brains are trained to correct for the lighting and try to see the “true” (under sunlight) colours. Sunlight isn’t really white but it’s our reference for white.

Some people see the photo and this part of their brain kicks in, and does the correction, letting them see what the dress looks like in more neutral light. We all do this most of the time, but this photo shows a time when only some of us can do it.

For the white/gold folks, their brains are not doing the real correction. We (I am one of them) see something closer to the actual colour of the photo. Though not quite — we see the light blue as whiter and the gold as a little lighter too. We’re making a different correction, and it seems to go a bit in the other direction. Our correction is false; the blue/black folks are doing a better job at the correction. It’s a bit unusual that the results are so far apart. The blue/blacks see something close to the real dress, and the white/golds see something closer to the actual photo. Hard to say if “their kind” are better or worse than my kind because of it.

For the white/gold folks, our brains must be imagining the light is a bit blueish. We do like to find the white in a scene to help us figure out what colour the light is. In this case we’re getting tricked. There are many other situations where we get our colour correction wrong, and I will bet you can find other situations where the white/golds see the sunlit colour, and the black/blues see something closer to the photograph.
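For the technically inclined, the brain’s trick is roughly what a camera’s white balance does: divide each colour channel by the assumed colour of the light. Here is a minimal Python sketch of that von Kries-style correction; the swatch values are hypothetical stand-ins, not samples from the actual photo:

    # Von Kries-style white balance: scale channels so the assumed
    # illuminant maps to neutral white. Swatch values are hypothetical.
    def white_balance(rgb, assumed_light):
        return tuple(min(255, round(c * 255 / i)) for c, i in zip(rgb, assumed_light))

    swatch_blue = (150, 160, 190)   # stand-in for the photo's "light blue"
    swatch_gold = (130, 105, 60)    # stand-in for the photo's "medium gold"

    bluish_shadow = (170, 180, 210) # if your brain assumes bluish light...
    print(white_balance(swatch_blue, bluish_shadow))  # ...the blue goes near white
    print(white_balance(swatch_gold, bluish_shadow))  # ...and the gold gets lighter

Assume a warm, bright illuminant instead, and the same arithmetic keeps the first swatch blue and the second dark: two different illuminant guesses, two different dresses.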

Targeted Ads after I buy something are really annoying

I’m sure you’ve seen it. Shop for something and pretty quickly, half the ads you see on the web relate to that thing. And you keep seeing those ads, even after you have made your purchase, sometimes for weeks on end.

At first blush, it makes sense: the whole reason the ad companies (like Google and the rest) want to track more about us is to deliver ads that target our interests. The obvious value is in making advertising effective for advertisers, but it’s also argued that web surfers derive more value from ads that might interest them than from generic ads with little relevance to our lives. It’s one of the reasons that text ads on search have been such a success.

Anything in the ad industry worth doing seems to them to be worth overdoing, I fear, and I think this is backfiring. That’s because the ads that pop up for products I have already bought are both completely useless and much more annoying than generic ads. They are annoying because they distract my attention too well — I have been thinking about those products, I may be holding them in my hands, so of course my eyes are drawn to photos of things like what I just bought.

I already bought my ticket on Iberia!

This extends beyond the web. Woe to me for searching for hotel rooms and flights these days. I am bombarded after this with not just ads but emails wanting to make sure I had gotten a room or other travel service. They accept that if I book a flight, I don’t need another flight but surely need a room, but of course quite often I don’t need a room and may not even be shopping for one. It’s way worse than the typical spam. I’ve seen ads for travel services a month after I took the trip.

Yes, that Iberia ad I screen captured on the right is the ad showing to me on my own blog — 5 days after I booked a trip to Spain on USAir that uses Iberia as a codeshare. (Come see me at the Singularity Summit in Sevilla on March 12-14!)

I am not sure how to solve this. I am not really interested in telling the ad engines what I have done to make them go away. That’s more annoyance, and gives them even more information just to be rid of another annoyance.

It does make us wonder — what is advertising like if it gets really, really good? I mean good beyond the ads John Anderton sees in Minority Report as he walks past the billboards. What if every ad is actually about something you want to buy? It will be much more effective for advertisers of course, but will that cause them to cut back on the ads to reduce the brain bandwidth they take from us? Would companies like Google say, “Hey, we are making a $200 CPM here, so let’s only run ads 1/10th of the time that we did when we made a $20 CPM?” Somehow I doubt it.

Uber price in LA approaches robocar cheap

I was recently considering the price of UberX in Los Angeles. It’s gotten disturbingly low:

  • Flag drop: $0
  • 18 cents/minute
  • 90 cents/mile

This is not a very good deal for the driver. After Uber’s 20% cut, that’s 72 cents/mile. According to AAA, a typical car costs about 60 cents/mile to operate, not including parking. (Some cars are a bit cheaper, including the Prius favoured by UberX drivers.) In any event, the UberX driver is not making much money on their car.

The 18 cents/minute works out to $10.80 per hour, which drops to only $8.64/hour after Uber’s cut — and that’s only while driving a passenger. Not much above minimum wage. And I’m not counting the time and miles spent waiting and driving to and from rides, which the $0 flag drop does nothing to cover. There is a $1 “safe rides fee” that Uber pockets (they are being sued over that.) And there is a $4 minimum, which will hit you on rides of up to about 2.5 miles.
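Here is the driver-side arithmetic above as a quick Python sketch, using the quoted LA rates. The numbers are illustrative only, and ignore tips, surge and the unpaid deadhead miles:

    # UberX LA driver economics, from the rates quoted above.
    UBER_CUT = 0.20
    PER_MINUTE = 0.18          # dollars
    PER_MILE = 0.90            # dollars
    AAA_COST_PER_MILE = 0.60   # typical all-in cost of operating a car

    driver_per_mile = PER_MILE * (1 - UBER_CUT)            # $0.72
    margin_per_mile = driver_per_mile - AAA_COST_PER_MILE  # $0.12 over car costs
    driver_per_hour = PER_MINUTE * 60 * (1 - UBER_CUT)     # $8.64, only while driving

    print(f"per mile: ${driver_per_mile:.2f}, margin over car costs: ${margin_per_mile:.2f}")
    print(f"time rate: ${driver_per_hour:.2f}/hour before waiting and deadhead time")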

So Uber drivers aren’t getting paid that well — not big news — but a bigger thing is the comparison of this with private car ownership.

As noted, private car ownership is typically around 60 cents/mile. The Uber ride then, is only 50% more per mile. You pay the driver a low rate to drive you, but in return, you get that back as free time in which you can work, or socialize on your phone, or relax and read or watch movies. For a large number of people who value their time much more than $10/hour, it’s a no-brainer win.

The average car trip for urbanites is 8.5 miles — though that of course is biased up by long road trips that would never be done in something like Uber. I will make a guess and drop urban trips to 6.

The Uber and private car costs do have some complications:

  • That Safe Rides Fee adds $1/trip, or about 16 cents/mile on a 6 mile trip
  • The minimum fee is a minor penalty from 2 to 2.5 miles, a serious penalty on 1 mile trips
  • Uber has surge pricing some of the time that can double or even triple this price

As UberX prices drop this much, we should start seeing people deliberately dropping cars for Uber, just as I have predicted for robocars. I forecast robotaxi service can be available for even less: 60 cents/mile with no cost for a driver and minimal flag drop or minimum fees. In other words, beating the cost of private car ownership and offering free time while riding. UberX is not as good as this, but for people of a certain income level who value their own time, it should already be beating the private car.
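For the rider side, here is a rough sketch of what a typical 6-mile urban trip costs under these rates versus the 60 cents/mile of car ownership. The average speed is my assumption, and surge and tip are left out:

    # Rider cost of a 6-mile UberX trip vs. private car, rates as quoted above.
    PER_MINUTE, PER_MILE = 0.18, 0.90
    SAFE_RIDES_FEE, MINIMUM_FARE = 1.00, 4.00
    OWNERSHIP_PER_MILE = 0.60          # AAA-style cost of a private car

    def uberx_fare(miles, avg_mph=25):  # avg_mph is an assumption
        minutes = miles / avg_mph * 60
        return max(MINIMUM_FARE, PER_MILE * miles + PER_MINUTE * minutes) + SAFE_RIDES_FEE

    trip = 6
    fare = uberx_fare(trip)
    print(f"UberX:   ${fare:.2f}  (${fare / trip:.2f}/mile)")
    print(f"Own car: ${OWNERSHIP_PER_MILE * trip:.2f}  (${OWNERSHIP_PER_MILE:.2f}/mile)")

With the time charge and the fees folded in, the gap per mile is wider than the raw 90 vs. 60 cents, which is exactly why the complications listed above matter.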

We should definitely see two-car families dropping down to one car plus digital rides. The longer trips can be well handled by services like Zipcar, or even better, Car2Go or DriveNow, which are one-way.

The surge pricing is a barrier. One easy solution would be for a company like Uber to make an offer: “If you ride more than 4,000 miles/year with us, then no surge pricing for you” — or whatever deal of that sort makes economic sense. A sort of frequent-rider loyalty program. (I’m surprised none of the companies have tried loyalty programs yet.)

Another option that might make sense in car replacement is an electric scooter for trips under 2 miles, UberX like service for 2 to 30 miles, and car rental/carshare for trips over 30 miles.

If we don’t start seeing this happen, it might tell us that robocars may have a larger hurdle in getting people to give up a car for them than predicted. On the other hand, some people will actually much prefer the silence of a robocar to having to interact with a human driver — sometimes you are not in the mood for it. In addition, Americans at least are not quite used to the idea of having a driver all the time. Even billionaires I know don’t have a personal chauffeur, in spite of the obvious utility of it for people whose time is that valuable. On the other hand, having a robocar will not seem so ostentatious.

Issues in regulating robocars, and the case for a light hand

All over the world, people (and governments) are debating regulations for robocars — first for testing, and then for operation. It mostly began when Google encouraged the state of Nevada to write regulations, but now it’s in full force. The topic is so hot that there is a danger regulations might be drafted long before the first commercial deployments of the technology take shape.

As such I have prepared a new special article on the issues around regulating robocars. The article concludes that, in spite of frequent calls to regulate and standardize even before the technology has been out in the market for a while, this is in fact both a highly unusual approach and possibly even a dangerous one.

Read:

Regulating Robocar Safety: An examination of the issues around regulating robocar safety and the case for a very light touch

Time for phones to have replaceable shock corners and more battery

Everywhere I go, the vast majority of people now seem to have two things in association with their phone — a protective case, and a spare USB charging battery. The battery is there because most phones stopped having swappable batteries some time ago. The cases are there partly for decoration, but mostly because anybody who has dropped a phone and cracked the screen (or worse, the digitizer) doesn’t want to do it again — and a lot of people have done it.

There is still a market for the thinnest and lightest phone, and phone makers think that’s what everybody wants, but I am not sure that is true any longer.

When they make a phone, they do try to make the battery last all day — and it often does. From time to time, however, a runaway application or other problem will drain your battery. You pull your phone out of your pocket in horror to find it warm, knowing it will soon die. And today, when your phone is dead, you feel lost and confused, like Manfred Macx without his glasses. Even if it only happens 3 times a month, it’s so bad that people now carry a backup battery in their bag.

One reason people like the large “phablet” phones is they come with bigger batteries, but I think even those who don’t want a phone too large for their hand still want a bigger battery. The conventional wisdom for a long time was that everybody wants thinner — I am not sure that’s true. Of course, a two battery system with one swappable still has its merits, or the standardized battery sticks I talked about.

The case is another matter. Here we buy a phone that is as thin as they can make it, and then we deliberately make it thicker to protect it.

I propose that phone design include 4 “shock corners” which are actually slightly thicker than the phone, and stick out just a few mm in all directions. They would become the point of impact in all falls, and just a little shock buffer can make a big difference. What I propose further, though, even though it uses precious space in the device, is that they attach to indents at the corners of the phone, probably with a tiny jeweler’s screw or other small connection. This would allow the massive case industry to design all sorts of alternate bumpers and cases that could attach firmly to the phone. Today, cases have to wrap all the way around the phone in order to hold on, which limits their design in many ways.

You could attach many things to your phone if it had a screw hole, not just bumper cases: mounts that can easily slot into car holders or other holders, magnetic mounts and inductive charging plates, accessory mounts of all sorts — and yes, even extra batteries.

While it would be nice to standardize, the truth is the case industry has reveled in supporting 1,000 different models of phone, and so could the attachment industry.

The Oscars

While not worthy of a blog post of its own, I was amused to note on Sunday that Oscars were won by films whose subjects were Hawking, Turing, Edward Snowden and robot-building nerds. Years ago it would have been remarkable if people had even heard of all these, and today, nobody noticed. Nerd culture really has won.

Where's my fast, smart, overhead scanner?

Back in 2008, I proposed the idea of a scanner club which would share high-end scanning equipment to rid our houses of the glut of paper. It’s a harder problem than it sounds. I bought a high-end Fujitsu office scanner (original price $5K, but I paid a lot less) and it’s done some things for me, but it’s still way too hard to use on general scanning problems.

I’ve bought a lot of scanners in my day. There are now lots of portable hand scanners that just scan to an SD card, which I like. I also have several flatbeds and a couple of high-volume sheetfeds.

In the scanner club article, I outlined a different design for how I would like a scanner to work. This design is faster and much less expensive and probably more reliable than all the other designs, yet 7 years later, nobody has built it.

The design is similar to the “document camera” family of scanners, which feature a camera suspended over a flat surface, equipped with some LED lighting. Thanks to the progress in digital cameras, a fast, high-resolution camera is now something you can get cheap — for example, the $350 Hovercam Solo 8 provides an 8 megapixel (4K) image at 30 frames per second. Soon, 4K cameras will become very cheap. You don’t need video at that resolution, and still cameras in the 20 megapixel range — which means 500 pixels/inch scanning of letter-sized paper — are cheap and plentiful.

Under the camera you could put anything, but a surface of a distinct colour (like green screen) is a good idea. Anything but the same colour as your paper will do. To get extra fancy, the table could be perforated with small holes like an air hockey table, and have a small suction pump, so that paper put on it is instantly held flat, sticking slightly to the surface.

No-button scanning

The real feature I want is an ability to scan pages as fast as a human being can slap them down on the table. To scan a document, you would just take pages and quickly put them down, one after the other, as fast as you can, so long as you pause long enough for your hand to leave the view and the paper to stay still for 100 milliseconds or so.

The system will be watching with a 60 frame per second standard HD video camera (these are very cheap today.) It will watch until a new page arrives and your hand leaves. Because it will have an image of the table or papers under the new sheet, it can spot the difference. It can also spot when the image becomes still for a few frames, and when it doesn’t have your hand in it. This would trigger a high resolution still image. The LEDs would flash with that still image, which is your signal to know the image has been taken and the system is ready to drop a new page on. Every so often you would clear the stack so it doesn’t grow too high.
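As a sketch of how simple the trigger logic could be, here is a minimal Python/OpenCV loop doing frame differencing: motion (a hand or new page) arms the trigger, and a run of still frames fires the high-resolution capture. The camera index, thresholds and the capture hook are all placeholders:

    # No-button scan trigger: motion arms, stillness fires. A sketch only.
    import cv2

    STILL_FRAMES_NEEDED = 6    # ~100 ms at 60 fps
    MOTION_THRESHOLD = 5.0     # mean absolute pixel difference

    def capture_high_res_still():
        print("flash LEDs, grab 20 MP still")   # placeholder hook

    cap = cv2.VideoCapture(0)
    prev, still_count, page_pending = None, 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            if cv2.absdiff(gray, prev).mean() > MOTION_THRESHOLD:
                still_count, page_pending = 0, True    # hand or new page in view
            elif page_pending:
                still_count += 1
                if still_count >= STILL_FRAMES_NEEDED:  # scene still, hand gone
                    capture_high_res_still()
                    page_pending = False
        prev = gray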

Alternately, you could remove each page before you add a new one. This would be slower, but you would get no movement of the papers under the top page. If you had the suction table, each page would be held nice and flat, with a green background around it, allowing a highly accurate rotation and crop in the final image. With two hands it might not be much slower to pull pages out while adding new ones.

No button is pressed between scans or even to start and finish scanning. You might have some buttons on the scanner to indicate you are clearing the stack, or to select modes (colour, black and white, line art, double sided, exposure modes etc.) Instead of buttons, you could also have little tokens you put on the surface with codes that can be read by the camera. This can include sheets of paper you print with bar codes to insert in the middle of your scanning streams.

By warning the scanner, you could also scan bound books and pamphlets, and even stapled documents without unstapling them. You will get some small distortions, but the scans will be fine if the goal is document storage rather than publishing. (You could even eliminate those distortions if you use 3D scanning techniques like structured light projection onto the pages, or have 2 cameras for stereo.)

For books, this is already worked out; many places like the Internet Archive build special scanners that use overhead cameras. They have not attacked the “loose pile of paper” problem that so many of us have in our files and boxes of paper.

Why this method?

I believe this method is much faster than even high speed commercial scanners on all but the most regular of documents. You can flip pages at better than 1 per second. With small things, like business cards and photos, you can lay down multiple pages per second. That’s already the speed of typical high end office scanners. But the difference is actually far greater.

For those office scanners, you tend to need a fairly regular stack or the document feeder may mess up. Scanning a pile of different sized pages is problematic, and even general loose pages run the risk of skipping pages or other errors. As such, you always do a little bit of prep with your stacks of documents before you put them in the scanner. No button scanning will work with a random pile of cards and papers, including even folded papers. You would unfold them as you scan, but the overall process will take less time.

A scanner like this can handle almost any size and shape of paper. It could offer the option to zoom the camera out, or pull it higher, to scan very large pages, which the other scanners just can’t do. You would get a lower ppi number on the larger pages; if you can’t accept that, scan sections at full ppi and stitch them together as you would with an older scanner.

The scans will not be as clean as those from a flatbed or sheetfed scanner. There will be variations in lighting, and shading from curvature of the pages, along with minor distortions unless you use the suction table for all pages. A regular scanner puts a light source right on the page and full 3-colour scanning elements right next to it, so it’s going to be higher quality. For publication and professional archiving, the big scanners will still win. On the other hand, this scanner could handle 3-dimensional objects and any thickness of paper.

Another thing that’s slower here is double-sided pages. A few options are available:

  • Flip every page. Have software in the scanner able to identify the act of flipping — especially easy if you have the 3D imaging with structured light.
  • Run the whole stack through again, upside-down. Runs the risk of getting out of sync. You want to be sure you tie every page with its other side.
  • Build a fancier double sided table where the surface is a sheet of glass or plexi, and there are cameras on both sides. (Flash the flash at two different times of course to avoid translucent paper.) Probably no holes in the glass for suction as those would show in the lower image.

Ideally, all of this would work without a computer, storing the images to a flash card. Fancier adjustments and OCR could be done later on the computer, as well as converting images to PDFs and breaking things up into different documents. Even better if it can work on batteries, and fold up for small storage. But frankly, I would be happy to have it always there, always on. Any paper I received in the mail would get a quick slap-down on the scanning table and the paper could go in the recycling right away.

You could also hire teens to go through your old filing cabinets and scan them. I believe this scanner design would be inexpensive, so there would be less need to share it.

Getting Fancy

As Moore’s law progresses, we can do even more. Since we’re already taking video, given the power to process it, it becomes possible to combine all the video frames containing a page and produce an image that is better than any one frame, with sub-pixel resolution and superior elimination of lighting gradations and distortions.
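A crude version of that combination step is just registering the frames and averaging them, which already beats any single frame on noise and lighting variation; real sub-pixel super-resolution goes further. A sketch using OpenCV’s ECC alignment:

    # Align frames of the same page and average them. A sketch of the idea,
    # not full super-resolution.
    import cv2
    import numpy as np

    def combine_frames(frames):
        ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        acc = frames[0].astype(np.float64)
        for f in frames[1:]:
            gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
            warp = np.eye(2, 3, dtype=np.float32)
            _, warp = cv2.findTransformECC(ref, gray, warp, cv2.MOTION_AFFINE)
            acc += cv2.warpAffine(f, warp, (f.shape[1], f.shape[0]),
                                  flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        return (acc / len(frames)).astype(np.uint8)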

As noted in the comments, it also becomes possible to do all this with what’s in a mobile phone, or any video camera with post-processing. One can even imagine:

  • Flipping through a book at high speed in front of a high-speed camera, and getting an image of the entire book in just a few seconds. Yes, some pages will get missed so you just do it again until it says it has all the pages. Update: This lab did something like this.
  • Vernor Vinge’s crazy scanner from Rainbows End, which sliced off the spines and blew the pages down a tube, imaging them all the way along to capture everything.
  • Using a big table and a group of people who just slap things down on the table until the computer, using a projector, shows which things have been scanned and can be removed. Thousands of pages could go by in minutes.

Does Tesla's new home storage battery suggest an amazing breakthrough?

There has been lots of buzz over announcements from Tesla that they will sell a battery for home electricity storage manufactured in the “gigafactory” they are building to make electric car batteries. It is suggested that 1/3 of the capacity of the factory might go to grid storage batteries.

This is very interesting because, at present, battery grid storage is not generally economical. The problem is the cost of the batteries. While batteries can be as much as 90% efficient, they wear out the more you use and recharge them. Batteries vary a lot in how many cycles they will deliver, and this varies according to how you use the battery (i.e. do you drain it all the way, or use only the middle of the range, etc.) If your battery delivers 1,000 cycles using 60% of its range (from 20% to 80%) and costs $400/kwh, then you will get 600kwh over the lifetime of each kwh unit, or 66 cents per kwh (presuming no residual value.) That’s not an economical cost for energy anywhere, except perhaps off-grid. (You also lose a cent or two from losses in the system.) If you can get down to 9 cents/kwh, plus 1 cent for losses, you get parity with the typical grid. However, this is modified by some important caveats:

  • If you have a grid with very different prices during the day, you can charge your batteries at the night price and use them during the daytime peak. You might pay 7 cents at night and avoid 21 cent prices in the day, so a battery cost of 14 cents/kwh is break-even.
  • You get a backup power system for times when the grid is off. How valuable that is varies on who you are. For many it’s worth several hundred dollars. (But not too many as you can get a generator as backup and most people don’t.)
  • Because battery prices are dropping fast, a battery pack today will lose value quickly, even before it physically degrades. And yes, in spite of what you might imagine in terms of “who cares, as long as it’s working,” that matters.

The magic number that is not well understood about batteries is the lifetime watt-hours in the battery per dollar. Lots of analysis will tell you things about the instantaneous capacity in kwh, notably important numbers like energy density (in kwh/kg or kwh/litre) and cost (in dollars/kwh) but for grid storage, the energy density is almost entirely unimportant, the cost for single cycle capacity is much less important and the lifetime watt-hours is the one you want to know. For any battery there will be an “optimal” duty cycle which maximizes the lifetime wh. (For example, taking it down to 20% and then back up to 80% is a popular duty cycle.)

The lifetime watt hour number is:

Number of cycles before replacement * watt-hours in optimum cycle

The $/lifetime-wh is:

(Battery cost + interest on cost over lifetime - battery recycle value) / lifetime-wh

(You must also consider these numbers for the whole system, because in addition to a battery pack you need chargers, inverters and grid-tie equipment, though those may last longer than a battery pack.)
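The two formulas above translate directly into code. Here they are as a small Python sketch with the worked numbers from earlier ($400/kwh, 1,000 cycles at 60% depth of discharge); interest and recycle value are zeroed for simplicity:

    # $/lifetime-kwh from the formulas above. Inputs from the worked example.
    def dollars_per_lifetime_kwh(cost_per_kwh, cycles, depth_of_discharge,
                                 interest=0.0, recycle_value=0.0):
        lifetime_kwh = cycles * depth_of_discharge   # per kwh of capacity
        return (cost_per_kwh + interest - recycle_value) / lifetime_kwh

    print(dollars_per_lifetime_kwh(400, 1000, 0.60))   # ~0.67: the 66 cents above

    # Break-even for 7c night / 21c day arbitrage is 14 cents/kwh:
    print(dollars_per_lifetime_kwh(400, 1000, 0.60) <= 0.21 - 0.07)   # False today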

I find it odd that this very important number is not widely discussed or published. One reason is that it’s not as important for electric cars and consumer electronic goods.

Electric car batteries

In electric cars, it’s difficult because you have to run the car to match the driver’s demands. Some days the driver goes only 10 miles and barely discharges the battery before plugging in. Other days they want to run the car down to almost empty. Because of this, each battery will respond differently. Taxis, especially robotaxis, can plan their driving to match an optimum cycle, and this number is important for them.

A lot of factors affect your choice of electric car battery. For a car, you want everything, and in practice must make trade-offs.

  • Cost per kwh of capacity — this is your range, and electric car buyers care a great deal about that
  • Low weight (high energy density) is essential, extra weight decreases performance and range
  • Modest size is important, you don’t want to fill your cargo space with batteries
  • Ability to use the full capacity from time to time without damaging the battery’s life much is important, or you don’t really have the range you paid for and you carry its weight for nothing.
  • High discharge is important for acceleration
  • Fast charge is important as DC fast-charging stations arise. It must be easy to make the cells take charge and not burst.
  • Ability to work in all temperatures is a must. Many batteries lose a lot of capacity in the cold.
  • Safety if hit by a truck is a factor, or even safety just sitting there.
  • Long lifetime, and lifetime-wh affect when you must replace the battery or junk the car

Weight is really important in the electric car because as you add weight, you reduce the efficiency and performance of the car. Double the battery and you don’t double the range because you added that weight, and you also make the car slower. After a while, it becomes much less useful to add range, and the heavier your battery is, the sooner that comes.
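A toy model makes the diminishing returns visible: pack mass raises energy use per mile, so range grows sublinearly with capacity. Every constant below is invented purely for illustration:

    # Toy model: doubling the pack does not double the range.
    BASE_WH_PER_MILE = 250           # consumption of the base car (invented)
    PACK_KG_PER_KWH = 7              # rough pack mass per kwh (invented)
    EXTRA_WH_PER_MILE_PER_KG = 0.05  # consumption penalty per added kg (invented)

    def range_miles(pack_kwh):
        pack_kg = pack_kwh * PACK_KG_PER_KWH
        wh_per_mile = BASE_WH_PER_MILE + pack_kg * EXTRA_WH_PER_MILE_PER_KG
        return pack_kwh * 1000 / wh_per_mile

    for kwh in (40, 80, 160):
        print(kwh, round(range_miles(kwh)))   # each doubling adds less than 2x range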

That’s why Tesla makes lithium ion battery based cars. These batteries are light, but more expensive than the heavier batteries. Today they cost around $500/kwh of capacity (all-in) but that cost is forecast to drop, perhaps to $200/kwh by 2020. That initial pack in the Tesla costs $40,000, but they will sell you a replacement for 8 years down the road for just $12,000 because, in part, they plan to pay a lot less in 8 years.  read more »