brad's blog

Google Accidents and Deployment, Mercedes Trucks and more

Some headlines (I’ve been on the road and will have more to say soon.)

Google announces it will put new generation buggies on city streets

Google has done over 2.7 million km of testing with their existing fleet, they announced. Now, they will be putting their small “buggy” vehicle onto real streets in Mountain View. The cars will stick to slower streets, and are NEVs (neighborhood electric vehicles) that only go 25mph.

While this vehicle is designed for fully automatic operation, during the testing phase, as required, it will have a temporary set of controls for the safety driver to use in case of any problem. Google’s buggy, which still has no official name, has been built in a small fleet and has been operating on test tracks up to this point. Now it will need to operate among other road users and pedestrians.

Accidents with, but not caused by, self-driving cars cause press tizzy.

The press were terribly excited when reports filed with the State of California indicated that there had been 4 accidents reported — 3 for Google and 1 for Delphi. Google reported a total of 11 accidents in 6 years of testing and over 1.5 million miles.

Headlines spoke loudly about the cars being in accidents, but buried in the copy was the fact that none of the accidents by any company were the fault of the software. Several took place during human driving, and the rest were accidents that were clearly the fault of the other party, such as being rear-ended or hit while stopped.

Still, as some of the smarter press noticed, this is a higher rate of being in an accident than normal, in fact almost double — human drivers are in an accident about every 250,000 miles, and so in 1.5 million miles should have had only 6.

The answer may be that these vehicles are unusual and have “self driving car” written on them. They may be distracting other drivers, making it more likely those drivers will make a mistake. In addition, many people have told me of their thoughts when they encountered a Google car on the road. “I thought about going in front of it and braking to see what it would do,” I’ve been told by many. Aside from the fact that this is risky and dickish, and would just cause the safety drivers to immediately disengage and take over, they all also said they didn’t actually do it, and experience in the cars shows that it’s very rare for other drivers to try to “test” the car.

But perhaps some people who think about it do distract themselves and end up in an accident. That’s not good, but it’s also something that should go away as the novelty of the cars decreases.

Mercedes and Freightliner test in Nevada

There was also lots of press about a combined project of Mercedes/Daimler and Freightliner to test a self-driving truck in Nevada. There is no reason that we won’t eventually have self-driving trucks, of course, and there are direct economic benefits for trucking fleets to not require drivers.

Self-driving trucks are nothing new away from public roads. In fact the first commercial self-driving vehicles were mining trucks at the Rio Tinto mine in Australia. Small startup Peloton is producing a system to let truckers convoy, with the rear driver able to go hands-free. Putting them on regular roads is a big step, but it opens some difficult questions.

First, it is not wise to do this early on. Systems will not be perfect, and there will be accidents. You want your first accidents to be with something like Google’s buggy or a Prius, not with an 18-wheel semi-truck. “Your first is your worst” with software and so your first should be small and light.

Secondly, this truck opens up the jobs question much more than other vehicles, where the main goal is to replace amateur drivers, not professionals. Yes, cab drivers will slowly fade out of existence as the decades pass, but nobody grows up wanting to be a cab driver — it’s a job you fall into for a short time because it’s quick and easy work that doesn’t need much training. While other people build robots to replace workers, the developers of self-driving cars are mostly working on saving lives and increasing convenience.

Many jobs have been changed by automation, of course, and this will keep happening, and it will happen faster. Truck drivers are just one group that will face this, and they are not the first. On the other hand, the reality of robot job replacement is that while it has happened at a grand scale, there are more people working today than ever. People move to other jobs, and they will continue to do so. This may not be much consolation for those who will need to go through this transition, but the other benefits of robocars are so large that it’s hard to imagine delaying them because of this. Jobs are important, but lives are even more important.

It’s also worth noting that today there is a large shortage of truck drivers, and as such the early robotic trucks will not be taking any jobs.

I’m more interested in tiny delivery “trucks” which I call “deliverbots.” For long haul, having large shared cargo vehicles makes sense, but for delivery, it can be better to have a small robot do the job and make it direct and personal.

New Sensors

The world of sensors continues to grow. This wideband software-based radar from a student team won a prize. It claims to produce a 3D image. Today’s automotive radars have long range but very low resolution. High resolution radar could replace LIDAR if it gets good enough: radar sees further, sees through fog, and gives you a direct speed reading from the Doppler shift, all areas where LIDAR falls short.

Also noteworthy is this article on getting centimeter GPS accuracy with COTS GPS equipment. They claim to be able to eliminate a lot of multipath through random movements of the antennas. If true, it could be a huge localization breakthrough. GPS just isn’t good enough for robocar positioning. Aside from the fact that it goes away in some locations like tunnels, and even though modern techniques can get sub-cm accuracy, if you want to position your robocar with it, and it alone, you need it to essentially never fail. But it does.

That said, most other localization systems, including map and image based localization, benefit from getting good GPS data to keep them reliable. The two systems together work very well, and making either one better helps.
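
To make that concrete, here is a toy sketch of the usual fusion idea: a one-dimensional complementary filter that trusts dead reckoning over short spans and uses GPS fixes, when they arrive, to bleed off accumulated drift. The noise figures, biases and gain are illustrative assumptions, not numbers from any real system.

```python
# Toy 1-D complementary filter fusing dead reckoning with GPS.
# Noise figures, biases and the gain are illustrative assumptions.
import random

def fuse_position(position, velocity, dt, gps_fix=None, gain=0.05):
    """Advance by dead reckoning, then nudge toward GPS when available."""
    position += velocity * dt          # accurate short-term, drifts long-term
    if gps_fix is not None:            # GPS drops out in tunnels, multipath...
        position += gain * (gps_fix - position)   # slowly bleed off the drift
    return position

# Simulated drive: odometry reads 2% fast; GPS is noisy and drops out mid-route.
true_pos, est = 0.0, 0.0
for step in range(600):                # 600 seconds at 15 m/s
    true_pos += 15.0
    odo_velocity = 15.0 * 1.02         # biased odometry
    gps = true_pos + random.gauss(0, 3.0) if not 200 <= step < 300 else None
    est = fuse_position(est, odo_velocity, 1.0, gps)

print(f"final error: {abs(est - true_pos):.1f} m")
```

A real robocar fuses many more sources with a proper Kalman filter, but the principle is the same: local sensors for the short term, absolute fixes to cancel the drift.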

Transportation Secretary Foxx advances DoT plan

Secretary Foxx has been out writing articles and speaking in Silicon Valley about the DoT’s Beyond Traffic effort. They promise big promotion of robocars, which is good. Sadly, they also keep promoting the false idea that vehicle to vehicle communications are valuable and will play a significant role in the development of robocars. In my view, many inside the DoT staked their careers on V2V, and so feel required to promote it, even though it has minimal compelling applications and may actually be rejected entirely by the robocar community because of security issues.

This debate is going to continue for a while, it seems.

Maps, maps, maps

Nokia has put its “Here” map division up for sale, and a large part of the attention seems to relate to their HD Maps project, aimed at making maps for self-driving. (HERE published a short interview with me on the value of these maps.)

It will be interesting to see how much money that commands. At the same time, TomTom, the 3rd mapping company, has announced it will begin making maps for self-driving cars — a decision they made in part because of encouragement from yours truly.

Uber dwarfs taxis

Many who thought Uber’s valuation was crazy came to that conclusion because they looked at the size of the taxi industry. To the surprise of nobody who has followed Uber, they recently revealed that in San Francisco, their birthplace, they are now 3 times the size of the old taxi industry, and growing. It was entirely the wrong comparison to make. The same is true of robocars. They won’t just match what Uber does, they will change the world.

There’s more news to come, during a brief visit to home, but I’m off to play in Peoria, and then Africa next week!

Second musings on the Hugo Awards and the fix

Last week’s Hugo Awards crisis caused a firestorm even outside the SF community. I felt it was time to record some additional thoughts beyond the summary of many proposals I did.

It’s not about the politics

I think all sides have made an error by bringing the politics and personal faults of either side into the mix. Making it about the politics legitimises the underlying actions for some. As such, I want to remove that from the discussion as much as possible. That’s why in the prior post I proposed an alternate history.

What are the goals of the award?

Awards are funny beasts. They are almost all given out by societies. The Motion Picture Academy does the Oscars, and the Worldcons do the Hugos. The Hugos, though, are overtly a “fan” award (unlike the Nebulas, which are a writers’ award, and the Oscars, which are a Hollywood pros’ award). They represent the view of fans who go to the Worldcons, but they have always been eager for more fans to join that community. But the award does not belong to the public, it belongs to that community.

While the award is done with voting and ballots, I believe it is really a measurement, which is to say, a survey. We want to measure the aggregate opinion of the community on what the best of the year was. The opinions are, of course, subjective, but the aggregate opinion is an objective fact, if we could learn it.

In particular, I would venture we wish to know which works would get the most support among fans, if the fans had the time to fairly judge all serious contenders. Of course, not everybody reads everything, and not everybody votes, so we can’t ever know that precisely, but if we did know it, it’s what we would want to give the award to.

To get closer to that, we use a 2 step process, beginning with a nomination ballot. Survey the community, and try to come up with a good estimate of the best contenders based on fan opinion. This both honours the nominees but more importantly it now gives the members the chance to more fully evaluate them and make a fair comparison. To help, in a process I began 22 years ago, the members get access to electronic versions of almost all the nominees, and a few months in which to evaluate them.

Then the final ballot is run, and if things have gone well, we’ve identified what truly is the best loved work of the informed and well-read fans. Understand again, the choices of the fans are opinions, but the result of the process is our best estimate of a fact — a fact about the opinions.

The process is designed to help obtain that winner, and there are several sub-goals:

  • The process should, of course, get as close to the truth as it can. In the end, the most people should feel it was the best choice.
  • The process should be fair, and appear to be fair
  • The process should be easy to participate in, administer and to understand
  • The process should not encourage any member to not express their true opinion on their ballot. If they lie on their ballot, how can we know the true best aggregate of their opinions?
  • As such, ballots should be generated independently, and there should be very little “strategy” to the system which encourages members to falsely represent their views to help one candidate over another.
  • It should encourage participation, and the number of nominees has to be small enough that it’s reasonable for people to fairly evaluate them all

A tall order, when we add a new element — people willing to abuse the rules to alter the results away from the true opinion of the fans. In this case, we had this through collusion. Two related parties published “slates” — the analog of political parties — and their followers carried them out, voting for most or all of the slate instead of voting their own independent and true opinion.

This corrupts the system greatly because when everybody else nominates independently, their nominations are broadly distributed among a large number of potential candidates. A group that colludes and concentrates their choices will easily dominate, even if it’s a small minority of the community. A survey of opinion becomes completely invalid if the respondents collude or don’t express their true views. Done in this way, I would go so far as to describe it as cheating, even though it is done within the context of the rules.

Proposals that are robust against collusion

Collusion is actually fairly obvious if the group is of decent size. Their efforts stick out clearly in a sea of broadly distributed independent nominations. There are algorithms which make it less powerful. There are other algorithms that effectively promote ballot concentration even among independent nominators so that the collusion is less useful.

A wide variety have been discussed. Their broad approaches include:

  • Systems that diminish the power of a nominating ballot as more of its choices are declared winners. Effectively, the more you get of what you asked for, the less likely you are to get more of it. This mostly prevents a sweep of all nominations, and also increases diversity in the final result, even beyond the true diversity of the independent nominators. (A toy sketch of this approach appears after this list.)
  • Systems which attempt to “maximize happiness,” which is to say try to make the most people pleased with the ballot by adding up for each person the fraction of their choices that won and maximizing that. This requires that nominators not all nominate 5 items, and makes a ballot with just one nomination quite strong. Similar systems allow putting weight on nominations to make some stronger than others.
  • Public voting, where people can see running tallies, and respond to collusion with their own counter-nominations.
  • Reduction of the number of nominations for each member, to stop sweeps.
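
Here is that sketch: a minimal sequential reweighted-approval count, where a ballot weakens each time one of its picks makes the list. The 1, 1/2, 1/3 weights are my own illustrative choice; the real proposals differ in their details.

```python
# Sequential reweighted approval: each time one of your nominees makes the
# ballot, your remaining nominations count for less. The 1, 1/2, 1/3...
# weights are illustrative; actual proposals differ in their details.

def select_finalists(ballots, num_finalists=5):
    finalists = []
    while len(finalists) < num_finalists:
        scores = {}
        for ballot in ballots:
            already_won = sum(1 for work in ballot if work in finalists)
            weight = 1.0 / (1 + already_won)      # ballot weakens as it "wins"
            for work in ballot:
                if work not in finalists:
                    scores[work] = scores.get(work, 0.0) + weight
        if not scores:
            break
        finalists.append(max(scores, key=scores.get))
    return finalists

# 10 colluding slate ballots against 30 independent ballots spread widely:
slate = [["S1", "S2", "S3", "S4", "S5"]] * 10
independents = [[f"W{i % 15}", f"W{(i + 4) % 15}"] for i in range(30)]
print(select_finalists(slate + independents))
```

Run on those numbers, the slate still lands a couple of the 5 slots rather than sweeping, consistent with the point below that all these systems let some slate choices through.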

The proposals work to varying degrees, but they all significantly increase the “strategy” component for an individual voter. It becomes the norm that if you have just a little information about what the most common popular choices will be, your wisest course to get the ballot you want will be to deliberately remove certain works from your ballot.

Some members would ignore this and nominate honestly. Many, however, would read articles about strategy, and either practice it or wonder if they were doing the right thing. In addition to debates about collusion, there would be debates on how strategy affected the ballot.

Certain variants of multi-candidate STV help against collusion and have less strategy, but most of the methods proposed have a lot.

In addition, all the systems permit at least one, and as many as 2 or 3, slate-choice nominees onto the final ballot. While members will probably know which ones those are, this is still not desired. First of all, these placements displace other works which would otherwise have made the ballot. You could increase the size of the final ballot, but you would need to know how many slate choices will be on it.

It should be clear that when others do not collude, slate collusion is very powerful. In many political systems, it is actually considered a great result if a party with 20% of the voters gains 20% of the “victories.” Here, we have a situation with 2,000 nominators, where just 100 colluding members can saturate some categories and get several entries into all of them, and with 10% (the likely amount in 2015) they can get a large fraction of them. As such it is not proportional representation at all.

Fighting human attackers with human defence

Considering the risks of confusion and strategy with all these systems, I have been led to the conclusion that the only solid response to organized attackers on the nomination system is a system of human judgement. Instead of hard and fast voting rules, the time has come, regrettably, to have people judge if the system is under attack, and give them the power to fix it.

This is hardly anything new; it’s how almost all systems of governance work. It may be hubris to suggest the award can get by without it. Like the good systems of governance, this must be done with impartiality, transparency and accountability, but it must be done.

I see a few variants which could be used. Enforcement would most probably be done by the Hugo Committee, which is normally a special subcommittee of the group running the Worldcon. However, it need not be them, and could be a different subcommittee, or an elected body.

While some of the variants I describe below add complexity, it is not necessary to do them. One important thing about the rule of justice is that you don’t have to get it exactly precise. You get it in broad strokes and you trust people. Sometimes it fails. Mostly it works, unless you bring in the wrong incentives.

As such, some of these proposals work by not changing almost anything about the “user experience” of the system. You can do this with people nominating and voting as they always did, and relying on human vigilance to deflect attacks. You can also use the humans for more than that.

A broad rule against collusion and other clear ethical violations

The rule could be as broad as to prohibit “any actions which clearly compromise the honesty and independence of ballots.” There would be some clarifications, to indicate this does not forbid ordinary lobbying and promotion, but does prohibit collusion, vote buying, paying for memberships which vote as you instruct and similar actions. The examples would not draw hard lines, but give guidance.

Explicit rules about specific acts

The rule could be much more explicit, with less discretion, with specific unethical acts. It turns out that collusion can be detected by the appearance of patterns in the ballots which are extremely unlikely to occur in a proper independent sample. You don’t even need to know who was involved or prove that anybody agreed to any particular conspiracy.

The big challenge with explicit rules (which take 2 years to change) is that clever human attackers can find holes, and exploit them, and you can’t fix it then, or in the next year.

Delegation of nominating power or judicial power to a sub group elected by the members

Judicial power to fix problems with a ballot could fall to a committee chosen by members. This group would be chosen by a well established voting system, similar to those discussed for the nomination. Here, proportional representation makes sense, so if a group is 10% of the members it should have 10% of this committee. It won’t do it much good, though, if the others all oppose them. Unlike books, the delegates would be human beings, able to learn and reason. With 2,000 members, and 50 members per delegate, there would be 40 on the judicial committee, and it could probably be trusted to act fairly with that many people. In addition, action could require some sort of supermajority. If a 2/3 supermajority were needed, attackers would need to be 1/3 of all members.

This council could perhaps be given only the power to add nominations — beyond the normal fixed count — and not to remove them. Thus if there are inappropriate nominations, they could only express their opinion on that, and leave it to the voters what to do with those candidates, including not reading them and not ranking them.

Instead of judicial power, it might be simpler to grant pure nominating power to delegates. Collusion is useless here because in effect all members are now colluding about their different interests, but in an honest way. Unlike pure direct democracy, the delegates, not unlike an award jury, would be expected to listen to members (and even look at nominating ballots done by them) but charged with coming up with the best consensus on the goal stated above. Such jurors would not simply vote their preferences. They would swear to attempt to examine as many works as possible in their efforts. They would suggest works to others and expect them to be likely to look at them. They would expect to be heavily lobbied and promoted to, but as long as it’s pure speech (no bribes other than free books and perhaps some nice parties) they would be expected to not be fooled so easily by such efforts.

As above, a nominating body might also only start with a member nominating system and add candidates to it and express rulings about why. In many awards, the primary function of the award jury is not to bypass the membership ballot, but to add one or two works that were obscure and the members may have missed. This is not a bad function, so long as the “real ballot” (the one you feel a duty to evaluate) is not too large.

Transparency and accountability

There is one barrier to transparency, in that releasing preliminary results biases the electorate in the final ballot, which would remain a direct survey of members with no intermediaries — though still with the potential to look for attacks and corruption. There could also be auditors, who are barred from voting in the awards and are allowed to see all that goes on. Auditors might be people from the prior Worldcon or some other different source, or fans chosen at random.

Finally, decisions could be appealed to the business meeting. This requires a business meeting after the Hugos. Attackers would probably always appeal any ruling against them. Appeals can’t alter nominations, obviously, or restore candidates who were eliminated.

Comprehensive plan

All the above requires the two year ratification process and could not come into effect (mostly) until 2017. To deal with the current cheating and the promised cheating in 2016, the following are recommended.

  1. Downplay the 2015 Hugo Award, perhaps with sufficient fans supporting this that all categories (including untainted ones) have no award given.
  2. Conduct a parallel award under a new system, and fête it like the Hugos, though they would not use that name.
  3. Pass new proposed rules including a special rule for 2016
  4. If 2016’s award is also compromised, do the same. However, at the 2016 business meeting, ratify a short-term amendment proposed in 2015 declaring the alternate awards to be the Hugo awards if run under the new rules, and discarding the uncounted results of the 2016 Hugos conducted under the old system. Another amendment would permit winners of the 2015 alternate award to say they are Hugo winners.
  5. If the attackers gave up, and 2016’s awards run normally, do not ratify the emergency plan, and instead ratify the new system that is robust against attack for use in 2017.

People get carsick as passengers? Shocking!

Earlier this week I was sent some advance research from the U of Michigan about carsickness rates for car passengers. I found the research of interest, but wish it had covered some questions I think are more important, such as how carsickness is changed by potentially new types of car seating, such as face-to-face or along the side.

To my surprise, there was a huge rush of press coverage of the study, which concluded that 6 to 12% of car passengers get a bit queasy, especially when looking down in order to read or work. While it was worthwhile to work up those numbers, the overall revelation was in the “Duh” category for me, I guess because it happens to me on some roads and I presumed it was fairly common.

Oddly, most of the press was of the “this is going to be a barrier to self-driving cars” sort, while my reaction was, “wow, that happens to fewer people than I thought!”

Having always known this, I am interested in the statistics, but to me the much more interesting question is, “what can be done about it?”

For those who don’t like to face backwards, the fact that so many are not bothered is a good sign — just switch seats.

Some activities are clearly better than others. While staring down at your phone or computer in your lap is bad during turns and bumps, it may be that staring up at a screen watching a video, with your peripheral vision very connected to the environment, is a choice that reduces the stress.

I also am interested in studying if there can be clues to help people reduce sickness. For example, the car will know of upcoming turns, and probably even upcoming bumps. It could issue tones to give you subtle clues as to what’s coming, and when it might be time to pause and look up. It might even be the case that audio clues could substitute for visual clues in our plastic brains.

The car, of course, should drive as gently as it can, and because the software does not need a tight suspension to feel the road, the ride can be smoother as well.

Another interesting thing to test would be having your tablet or phone deliberately tilt its display to give you the illusion you are looking at the fixed world when you look at it, or to have a little “window” that shows you a real world level so your eyes and inner ears can find something to agree on.

More advanced would be a passenger pod on hydraulic struts able to tilt with several degrees of freedom to counter the turns and bumps, and make them always be such that the forces go up and down, never side to side. With proper banking and tilting, you could go through a roundabout (often quite disconcerting when staring down) but only feel yourself get lighter and heavier.
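
The physics is easy to sketch: bank at the angle where lateral acceleration and gravity combine into a single force through the floor. A quick calculation with illustrative numbers:

```python
import math

def bank_angle_deg(speed_mps, turn_radius_m, g=9.81):
    """Bank angle that turns cornering force into pure 'heavier/lighter'."""
    lateral_accel = speed_mps ** 2 / turn_radius_m
    return math.degrees(math.atan2(lateral_accel, g))

# A 15 m radius roundabout taken at 20 km/h (illustrative numbers):
v = 20 / 3.6                                 # m/s
print(f"bank angle: {bank_angle_deg(v, 15):.1f} degrees")        # ~11.9
a = v ** 2 / 15
print(f"apparent weight: {math.hypot(a, 9.81) / 9.81:.2f} g")    # ~1.02
```

So a gentle roundabout needs only a modest tilt, and the passenger feels just a couple of percent heavier.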

Hugo awards suborned, what can or should be done?

Since 1992 I have had a long association with the Hugo Awards for SF & Fantasy given by the World Science Fiction Society/Convention. In 1993 I published the Hugo and Nebula Anthology, which was for some time the largest anthology of current fiction ever published, and one of the earliest major e-book projects. While I did it as a commercial venture, in the years to come it became the norm for the award organizers to publish an electronic anthology of willing nominees for free to the voters.

This year, things are highly controversial, because a group of fans/editors/writers calling themselves the “Sad Puppies” had great success with a campaign to dominate the nominations for the awards. They published a slate of recommended nominations, and a sufficient number of people sent in nominating ballots with that slate so that it dominated most of the award categories. Some categories are entirely the slate; only one was not affected. It’s important to understand that the nominating and voting on the Hugos is done by members of the World SF Society, which is to say people who attend the World SF Convention (Worldcon) or who purchase special “supporting” memberships which don’t let you go but give you voting rights. This is a self-selected group, but in spite of that, it has mostly managed to run a reasonably independent vote to select the greatest works of the year. The group is not large, and in many categories, it can take only a score or two of nominations to make the ballot, and victory margins are often small. As such, it’s always been possible, and not even particularly hard, to subvert the process with any concerted effort. It’s even possible to do it with money, because you can just buy memberships which can nominate or vote, so long as a real unique person is behind each ballot.

The nominating group is self-selected, but it’s mostly a group that joins because they care about SF and its fandom, and as such, this keeps the award voting more independent than you would expect for a self-selected group. But this has changed.

The reasoning behind the Sad Puppy effort is complex and there is much contentious debate you can find on the web, and I’m about to get into some inside baseball, so if you don’t care about the Hugos, or the social dynamics of awards and conventions, you may want to skip this post.

Delphi completes trans-continental drive, and Hyundai goes big

Most of the robocar press this week has been about the Delphi drive from San Francisco to New York, which completed yesterday. Congratulations to the team. Few teams have tried to do such a long course and so many different roads. (While Google has over a million miles logged in their testing by now, it’s not been reported that they have covered 3,500 miles of distinct roads; most testing is done around Google HQ.)

The team reported the vehicle drove 99% of the time. This is both an impressive and unimpressive number, and understanding that is key to understanding the difficulty of the robocar problem.

One of the earliest pioneers, Ernst Dickmanns, did a long highway drive 20 years ago, in 1995. He reported the system drove 95% of the time, kicking out every 10km or so. This was a system simply finding the edge of the road, and keeping in the lane by tracking that. Delphi’s car is much more sophisticated, with a very impressive array of sensors — 10 radars, 6 lidars and more, and it has much more sophisticated software.

99% is not 4% better than 95%, it’s 5 times better, because the real number is the fraction of road it could not drive. And from 99%, we need to get something like 10,000 times better — to 99.9999% of the time — to even start talking about a real full-auto robocar. In the USA we drive 3 trillion miles per year, taking about 60 billion hours, a little over half of it on the highway. Even 99.9999% for all cars would still mean too many accidents if, 1 time in a million, you encountered something you could not handle.

However, this depends on what we mean by “being unable to handle it.”

  • If not handling means “has a fatal accident” that could map to 3,600,000 of those, which would be 100x the human rate and not acceptable.
  • If not handling it means “has any sort of accident” then we’re pretty good, about 1/4th of the rate of human accidents
  • If not handling it means that the vehicle knows certain roads are a problem, and diverts around them or requests human assistance, it’s no big problem at all.
  • Likewise if not handling it means identifying a trouble situation, and slowing down and pulling off the road, or even just plain stopping in the middle of the road — which is not perfectly safe but not ultra-dangerous either — it’s also not a problem.
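
To check the arithmetic behind those numbers, here is the back-of-envelope calculation, reading 99.9999% as failing one driving minute in a million and using the national figures above:

```python
# Back-of-envelope on the figures above: ~60 billion hours of US driving
# per year, and a system that fails 1 driving minute in a million.

hours_per_year = 60e9
minutes_per_year = hours_per_year * 60      # 3.6 trillion minutes

failure_rate = 1e-6                         # 99.9999% of the time
events = minutes_per_year * failure_rate
print(f"{events:,.0f} failure events/year")              # 3,600,000

# If every one were fatal, compare with ~33,000 US road deaths per year:
print(f"about {events / 33_000:.0f}x the human fatality rate")   # ~110x
```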

At the same time, our technology is an exponential one, so it’s wrong to think that the statement that it needs to be 10,000 times better means the system is only 1/10,000th of the way there. In fact, getting to the goal may not be that far away, and Google is much further along. They reported a distance of over 80,000 miles between necessary interventions. Humans have accidents about every 250,000 miles.

(Delphi has not reported the most interesting number, which is necessary unexpected interventions per million miles. To figure out if an intervention is necessary, you must replay the event in simulator to see what the vehicle would have done had the safety driver not intervened. The truly interesting number is the combination of interventions/mm and the fraction of roads you can drive. It’s easier, but boring, to get a low interventions/mm number on one plain piece of straight highway, for example.)

It should also be noted that Delphi’s result is almost entirely on highways, which are the simplest roads to drive for a robot. Google’s result is also heavily highway biased, though they have reported a lot more surface street work. None of the teams have testing records in complex and chaotic streets such as those found in the developing world, or harsh weather.

It is these facts which lead some people to conclude this technology is decades away. That would be the right conclusion if you were unaware of the exponential curve the technologies and the software development are on.

Huge Hyundai investment

For some time, I’ve been asking where the Koreans are on self-driving cars. Major projects arose in many major car companies, with the Germans in the lead, and then the US and Japan. Korea was not to be seen.

Hyundai announced they would produce highway cruise cars shortly (like other makers), but they also announced they would produce a much more autonomous car by 2020 — a similar target to most car makers as well. Remarkable, though, was the statement that they would invest over $70 billion in the next 4 years on what they are calling “smart cars,” including hiring over 7,000 people to work on them. While this number includes the factories they plan to build, and refers to many technologies beyond robocars, it’s still an immense number. The Koreans have arrived.

Matternet launches drone delivery platform

I often speak about deliverbots — the potential for ground based delivery robots. There is also excitement about drone (UAV/quadcopter) based delivery. We’ve seen many proposed projects, including Amazon Prime Air, and much debate. Many years ago I was also perhaps the first to propose that drones could deliver a defibrillator anywhere, and there are a few projects underway to do this.

Some of my students in the Singularity University Graduate Studies Program in 2011 really caught the bug, and their team project turned into Matternet — a company with a focus on drone delivery in the parts of the world without reliable road infrastructure. Example applications include moving lightweight items like medicines and test samples between remote clinics, and eventually much more.

I’m pleased to say they just announced moving to a production phase called Matternet One. Feel free to check it out.

When it comes to ground robots and autonomous flying vehicles, there are a number of different trade-offs:

  • Drones will be much faster, and have an easier time getting roughly to a location. It’s a much easier problem to solve. No traffic, and travel mostly as the crow flies.
  • Deliverbots will be able to handle much heavier and larger cargo, consuming a lot less energy in most cases. Though drones able to move 40kg are already out there.
  • Regulations stand in the way of both vehicles, but current proposed FAA regulations would completely prohibit the drones, at least for now.
  • Landing a drone in a random place is very hard. Some drone plans avoid that by lowering the cargo on a tether and releasing the tether.
  • Driving to a doorway or even gate is not super easy either, though.
  • Heavy drones falling on people or property are an issue that scares people, but they are also scared of robots on roads and sidewalks.
  • Drones probably cost more but can do more deliveries per hour.
  • Drones don’t have good systems in place to avoid collisions with other drones. Deliverbots won’t go that fast and so can stop quickly for obstacles seen with short range sensors.
  • Deliverbots have to not hit cars or pedestrians. Really not hit them.
  • Deliverbots might be subject to piracy (people stealing them) and drones may have people shoot at them.
  • Drones may be noisy (this is yet to be seen) particularly if they have heavier cargo.
  • Drones can go where there are no roads or paths. For ground robots, you need legs like the BigDog.
  • Winds and rain will cause problems for drones. Deliverbots will be more robust against these, but may have trouble on snow and ice.

In the long run, I think we’ll see drones for urgent, light cargo and deliverbots for the rest, along with real trucks for the few large and heavy things we need.

Delphi's cross-country trip and a raft of Robocar News

I’ve been on the road, and there has been a ton of news in the last 4 weeks. In fact, below is just a small subset of the now constant stream of news items and articles that appear about robocars.

Delphi has made waves by undertaking a road trip from San Francisco to New York in their test car, which is equipped with an impressive array of sensors. The trip is now underway, and on their page you can see lots of videos of the vehicle along the trek.

The Delphi vehicle is one of the most sensor-laden vehicles out there, and that’s good. In spite of all those who make the rather odd claim that they want to build robocars with fewer sensors, Moore’s Law and other principles teach us that the right procedure is to throw everything you can at the problem today, because those sensors will be cheap when it comes time to actually ship. Particularly for those who say they won’t ship for a decade.

At the same time, the Delphi test is mostly of highway driving, with very minimal urban street driving according to Kristen Kinley at Delphi. They are attempting off-map driving, which is possible on highways due to their much simpler environment. Like all testing projects these days, there are safety drivers in the cars ready to intervene at the first sign of a problem.

Delphi is doing a small amount of DSRC vehicle to infrastructure testing as well, though this is only done in Mountain View where they used some specially installed roadside radio infrastructure equipment.

Delphi is doing the right thing here — getting lots of miles and different roads under their belt. This is Google’s giant advantage today. Based on Google’s announcements, they have more than a million miles of testing in the can, and that makes a big difference.

Hype and reality of Tesla’s autopilot announcement

Tesla has announced they will do an over-the-air upgrade of car software in a few months to add autopilot functionality to existing models that have sufficient sensors. This autopilot is the “supervised” class of self-driving that I warned may end up viewed as boring. The press have treated this as something immense, but as far as I can tell, this is similar to products built by Mercedes, BMW, Audi and several other companies, and even sold in the market (at least for traffic jams) for a couple of years now.

The other products have shied away from doing full highway speed in commercial products, though rumours exist of it being available in commercial cars in Europe. What is special about Tesla’s offering is that it will be the first car sold in the US to do this at highway speed, and they may offer supervised lane change as well. It’s also interesting that since they have been planning this for a while, it will come as a software upgrade to people who bought their technology package earlier.

UK project budget rises to £100 million

What started as a £10 million prize in the UK has grown to over £100m in grants in the latest UK budget. While government research labs will not provide us with the final solutions, this money will probably create some very useful tools and results for the private players to exploit.

MobilEye releases their EyeQ4 chip

MobilEye from Jerusalem is probably the leader in automotive machine vision, and their new generation chip has been launched, but won’t show up in cars for a few years. It’s an ASIC packed with hardware and processor cores aimed at making machine vision easy. My personal judgement is that this is not sufficient for robocar driving, but MobilEye wants to prove me wrong. (The EyeQ4 chip does have software to do sensor fusion with LIDAR and radar, so they don’t want to prove me entirely wrong.) Even if not good enough on their own, ME chips offer a good alternate path for redundancy.

Chris Urmson gives a TED talk about the Google Car

Talks by Google’s team are rare — the project is unusual in trying to play down its publicity. I was not at TED, but reports from there suggest Chris did not reveal a great deal new, other than repeating his goal of having the cars be in practical service before his son turns 16. Of course, humans will be driving for a long time after robocars start becoming common on the roads, but it is true that we will eventually see teens who would have gotten a licence never get around to getting one. (Teens are already waiting longer to get their licences, so this is not a hard prediction.)

The war between DSRC and more WiFi is heating up.

2 years ago, the FCC warned that since auto makers had not really figured out much good to do with the DSRC spectrum at 5.9GHz, it was time to repurpose it for unlicensed use, like more WiFi.

There is now a bill being proposed to force this.

How to avoid a pilot suicide

After 9/11 there was a lot of talk about how to prevent it, and the best method was to fortify the cockpit door and prevent unauthorized access. Every security system, however, sometimes prevents authorized people from getting access, and the tragic results of that are now clear to the world. This is likely a highly unusual event, and we should not go overboard, but it’s still interesting to consider.

(I have an extra reason to take special interest here, I was boarding a flight out of Spain on Tuesday just before the Germanwings flight crashed.)

In 2001, it was very common to talk about how software systems, at least on fly-by-wire aircraft, might make it impossible for aircraft to do things like fly into buildings. Such modes might be enabled by remote command from air traffic control. Pilots resist this, they don’t like the idea of a plane that might refuse to obey them at any time, because with some justification they worry that a situation could arise where the automated system is in error, and they need full manual control to do what needs to be done.

The cockpit access protocol on the Airbus allows flight crew to enter a code to unlock the door. Quite reasonably, the pilot in the cockpit can override that access, because an external bad guy might force a flight crew member to enter the code.

So here’s an alternative — a code that can be entered by a flight crew member which sends an emergency alert to air traffic control. ATC would then have the power to unlock the door with no possibility of pilot override. In extreme cases, ATC might even be able to put the plane in a safe mode, where it can only fly to a designated airport, and auto-land at that airport. In planes with sufficient bandwidth near an airport, the plane might actually be landed by remote pilots like a UAV, an entirely reasonable idea for newer aircraft. In case of a real terrorist attack, ATC would need to be ready to refuse to open the door no matter what is threatened against the passengers.

If ATC is out of range (like over the deep ocean) then the remote console might allow the flight crew — even a flight attendant — to direct the aircraft to fly to pre-approved waypoints along the planned flight path where quality radio contact can be established.
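
To put the proposal in one place, here it is sketched as a tiny state machine. This is purely hypothetical: it just restates the rules above in code, and is not any real avionics interface.

```python
# The proposed door/control protocol, restated as states and transitions.
# Hypothetical sketch only; no real aircraft works this way today.

class CockpitDoorProtocol:
    def __init__(self):
        self.state = "NORMAL"              # pilot inside can veto door codes

    def crew_emergency_code(self):
        # The crew's code doesn't open the door; it alerts air traffic control.
        self.state = "ALERT_SENT"

    def atc_unlock(self):
        # During an alert, ATC's unlock cannot be overridden from the cockpit.
        if self.state == "ALERT_SENT":
            self.state = "DOOR_UNLOCKED"

    def atc_safe_mode(self):
        # Extreme case: restrict the plane to designated airports and auto-land.
        if self.state in ("ALERT_SENT", "DOOR_UNLOCKED"):
            self.state = "SAFE_MODE"

    def return_control(self):
        # ATC (or the crew who raised the alert) can hand control back.
        self.state = "NORMAL"
```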

Clearly there is a risk to putting a plane in this mode, though ATC or the flight crew who did it could always return control to the cockpit.

It might still be possible to commit suicide but it would take a lot more detailed planning. Indeed, there have been pilot suicides where the door was not locked, and the suicidal pilot just put the plane into a non-recoverable spin so quickly that nobody could stop it. Still, in many cases of suicide, any impediment can sometimes make the difference.

Update: I have learned the lock has a manual component, and so the pilot in the cockpit could prevent even a remote opening for now. Of course, current planes are not set up to be remotely flown, though that has been discussed. It’s non-trivial (and would require lots of approval) but it could have other purposes.

A safe mode that prevents overt attempts to crash might be more effective than you think, in that with many suicides, even modest discouragement can make a difference. It’s why they still want to put a fence on the Golden Gate Bridge, and have done similar things elsewhere. You won’t stop a determined suicide, but it apparently does stop those who are still uncertain, which is lots of them.

The simpler solution — already going into effect in countries that did not have this rule already — is a regulation insisting that nobody is ever alone in the cockpit. Under this rule, if a pilot wants to go to the bathroom, a flight attendant waits in the cockpit. Of course, a determined suicidal pilot could disable this person, either because of physical power, or because sometimes there is a weapon available to pilots. That requires more resolve and planning, though.

What colour is the dress? It's both.

Perhaps by now you are sick of the dress that 3 in 4 people see as “white and gold” and 1 in 4 sees as “dark blue and black.” If you haven’t seen it, it’s easy to find. What’s amazing is to see how violent the arguments can get between people, because the two ways we see it are so hugely different. “How can you see that as white????” people shout. They really shout.

There are a few explanations out there, but let me add my own:

  • The real dress, the one you can buy, is indeed blue and black. That’s well documented.
  • The real photo of the dress everybody is looking at, is light blue and medium gold, because of unusual lighting and colour balance.

That’s the key point. The dress and photo are different. Anybody who saw the dress in almost any lighting would say it was blue and black. But people say very different things about the photo.

To explain, here are sampled colour swatches from the photo, on white and dark backgrounds.

You can see that the colours in the photo are indeed a light blue and a medium to dark gold. Whatever the dress really is, that’s what the photo colours are.

We see things in strange light all the time. Indoors, under incandescent light bulbs, or at sunset, everything is really, really yellow-red. Take a photo at these times with your camera set to “sunshine” light and you will see what the real colours look like. But your brain sees the colours very similarly to how they are in the day. Our brains are trained to correct for the lighting and try to see the “true” (under sunlight) colours. Sunlight isn’t really white but it’s our reference for white.

Some people see the photo and this part of their brain kicks in, and does the correction, letting them see what the dress looks like in more neutral light. We all do this most of the time, but this photo shows a time when only some of us can do it.

For the white/gold folks, their brains are not doing the real correction. We (I am one of them) see something closer to the actual colour of the photo. Though not quite — we see the light blue as whiter and the gold as a little lighter too. We’re making a different correction, and it seems to go a bit in the other direction. Our correction is false; the blue/black folks are doing a better job at the correction. It’s a bit unusual that the results are so far apart. The blue/blacks see something close to the real dress, and the white/golds see something closer to the actual photo. Hard to say if “their kind” are better or worse than my kind because of it.

For the white/gold folks, our brains must be imagining the light is a bit bluish. We do like to find the white in a scene to help us figure out what colour the light is. In this case we’re getting tricked. There are many other situations where we get our colour correction wrong, and I will bet you can find other situations where the white/golds see the sunlit colour, and the black/blues see something closer to the photograph.
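
For the curious, here is a toy version of that correction, classic von Kries white balance: the perceived surface colour is the photo pixel divided by the illuminant the brain assumes. The swatch and illuminant numbers below are made up for illustration, not sampled from the actual photo.

```python
# Toy von Kries correction: perceived colour = photo pixel / assumed light.
# All RGB values are illustrative, not sampled from the real photo.

def perceived(rgb, assumed_light):
    return tuple(round(255 * c / l) for c, l in zip(rgb, assumed_light))

photo_blue = (140, 150, 185)    # the light blue as it sits in the photo
photo_gold = (120, 95, 55)      # the medium gold

dim_blue_shade = (150, 160, 200)   # what white/gold brains assume
bright_warm_sun = (500, 480, 420)  # one guess at what blue/black brains assume

print(perceived(photo_blue, dim_blue_shade))    # (238, 239, 236): white
print(perceived(photo_gold, dim_blue_shade))    # (204, 151, 70): gold
print(perceived(photo_blue, bright_warm_sun))   # (71, 80, 112): dark blue
print(perceived(photo_gold, bright_warm_sun))   # (61, 50, 33): near black
```

Same pixels, two assumed lights, two very different dresses.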

Targeted Ads after I buy something are really annoying

I’m sure you’ve seen it. Shop for something and pretty quickly, half the ads you see on the web relate to that thing. And you keep seeing those ads, even after you have made your purchase, sometimes for weeks on end.

At first blush, it makes sense: the whole reason the ad companies (like Google and the rest) want to track more about us is to deliver ads that target our interests. The obvious value is in making advertising effective for advertisers, but it’s also argued that web surfers derive more value from ads that might interest them than we do from generic ads with little relevance to our lives. It’s one of the reasons that text ads on search have been such a success.

Anything in the ad industry worth doing seems to them to be worth overdoing, I fear, and I think this is backfiring. That’s because the ads that pop up for products I have already bought are both completely useless and much more annoying than generic ads. They are annoying because they distract my attention too well — I have been thinking about those products, I may be holding them in my hands, so of course my eyes are drawn to photos of things like what I just bought.

I already bought my ticket on Iberia!

This extends beyond the web. Woe to me for searching for hotel rooms and flights these days. I am bombarded after this with not just ads but emails wanting to make sure I had gotten a room or other travel service. They accept that if I book a flight, I don’t need another flight but surely need a room, but of course quite often I don’t need a room and may not even be shopping for one. It’s way worse than the typical spam. I’ve seen ads for travel services a month after I took the trip.

Yes, that Iberia ad I screen captured on the right is the ad showing to me on my own blog — 5 days after I booked a trip to Spain on USAir that uses Iberia as a codeshare. (Come see me at the Singularity Summit in Sevilla on March 12-14!)

I am not sure how to solve this. I am not really interested in telling the ad engines what I have done to make them go away. That’s more annoyance, and gives them even more information just to be rid of another annoyance.

It does make us wonder — what is advertising like if it gets really, really good? I mean good beyond the ads John Anderton sees in Minority Report as he walks past the billboards. What if every ad is actually about something you want to buy? It will be much more effective for advertisers of course, but will that cause them to cut back on the ads to reduce the brain bandwidth it takes from us? Would companies like Google say, “Hey, we are making a $200 CPM here, so let’s only run ads 1/10th of the time that we did when we made a $20 CPM?” Somehow I doubt it.

Uber price in LA approaches robocar cheap

I was recently considering the price of UberX in Los Angeles. It’s gotten disturbingly low:

Flag drop: $0; 18 cents/minute; 90 cents/mile

This is not a very good deal for the driver. After Uber’s 20% cut, that’s 72 cents/mile. According to AAA, a typical car costs about 60 cents/mile to operate, not including parking. (Some cars are a bit cheaper, including the Prius favoured by UberX drivers.) In any event, the UberX driver is not making much money on their car.

The 18 cents/minute ($10.80 per hour) drops to only $8.64/hour after Uber’s cut, and is earned only while driving a passenger. Not that much above minimum wage. And I’m not counting the time spent waiting and driving to and from rides, nor those miles, costs that a flag drop fee would normally help cover, but here it’s $0. There is a $1 “safe rides fee” that Uber pockets (they are being sued over that.) And there is a $4 minimum, which will hit you on rides of up to about 2.5 miles.

So Uber drivers aren’t getting paid that well — not big news — but a bigger thing is the comparison of this with private car ownership.

As noted, private car ownership is typically around 60 cents/mile. The Uber ride then, is only 50% more per mile. You pay the driver a low rate to drive you, but in return, you get that back as free time in which you can work, or socialize on your phone, or relax and read or watch movies. For a large number of people who value their time much more than $10/hour, it’s a no-brainer win.

The average car trip for urbanites is 8.5 miles — though that of course is biased up by long road trips that would never be done in something like Uber. I will make a guess and drop urban trips to 6.

The Uber and private car costs do have some complications:

  • That Safe Rides Fee adds $1/trip, or about 16 cents/mile on a 6 mile trip
  • The minimum fee is a minor penalty from 2 to 2.5 miles, a serious penalty on 1 mile trips
  • Uber has surge pricing some of the time that can double or even triple this price

As UberX prices drop this much, we should start seeing people deliberately dropping cars for Uber, just as I have predicted for robocars. I forecast robotaxi service can be available for even less. 60 cents/mile with no cost for a driver and minimal flag drop or minimum fees. In other words, beating the cost of private car ownership and offering free time while riding. UberX is not as good as this, but for people of a certain income level who value their own time, it should already be beating the private car.
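
Putting the fare math above into a quick calculator (the 6-mile trip is my guess from above, and the 25 mph average speed is a further assumption):

```python
# Decompose a 6-mile UberX trip the way the post does: the per-mile charge
# covers the car, the per-minute charge pays the driver. 25 mph is assumed.

miles, avg_mph = 6, 25
minutes = miles / avg_mph * 60                     # 14.4 minutes

car_part = 0.90 * miles                            # vs ~$0.60/mile to own
driver_part = 0.18 * minutes                       # the "low rate" for a driver
total = max(car_part + driver_part, 4.00) + 1.00   # $4 minimum + $1 safe rides fee

print(f"car: ${car_part:.2f}  driver: ${driver_part:.2f}  total: ${total:.2f}")
# car: $5.40  driver: $2.59  total: $8.99 -- against ~$3.60 in a car you own,
# but in the Uber those 14 minutes are yours to use.
```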

We should definitely see 2 car families dropping down to 1 car plus digital rides. The longer trips can be well handled by services like Zipcar or even better, Car2Go or DriveNow which are one way.

The surge pricing is a barrier. One easy solution would be for a company like Uber to make an offer: “If you ride more than 4,000 miles/year with us, then no surge pricing for you.” Or whatever deal of that sort can make economic sense. Sort of frequent rider loyalty miles. (Surprised none of the companies have thought about loyalty programs yet.)

Another option that might make sense in car replacement is an electric scooter for trips under 2 miles, UberX like service for 2 to 30 miles, and car rental/carshare for trips over 30 miles.

If we don’t start seeing this happen, it might tell us that robocars may have a larger hurdle in getting people to give up a car for them than predicted. On the other hand, some people will actually much prefer the silence of a robocar to having to interact with a human driver — sometimes you are not in the mood for it. In addition, Americans at least are not quite used to the idea of having a driver all the time. Even billionaires I know don’t have a personal chauffeur, in spite of the obvious utility of it for people whose time is that valuable. On the other hand, having a robocar will not seem so ostentatious.

Issues in regulating robocars, and the case for a light hand

All over the world, people (and governments) are debating about regulations for robocars. First for testing, and then for operation. It mostly began when Google encouraged the state of Nevada to write regulations, but now it’s in full force. The topic is so hot that there is a danger that regulations might be drafted long before the shape of the first commercial deployments of the technology take place.

As such, I have prepared a new special article on the issues around regulating robocars. The article concludes that in spite of a frequent claim that we should regulate and standardize even before the technology has been out in the market for a while, this is in fact both a highly unusual approach, and possibly even a dangerous approach.

Read:

Regulating Robocar Safety: An examination of the issues around regulating robocar safety and the case for a very light touch

Time for phones to have replaceable shock corners and more battery

Everywhere I go, a vast majority of people seem to now have two things in association with their phone — a protective case, and a spare USB charging battery. The battery is there because most phones stopped having switchable batteries some time ago. The cases are there partly for decoration, but mostly because anybody who has dropped a phone and cracked the screen (or worse, the digitizer) doesn’t want to do it again — and a lot of people have done it.

While there is still a market for the thinnest and lightest phone, and phone makers think that’s what everybody wants, I am not sure that is true any longer.

When they make a phone, they do try to make the battery last all day — and it often does. From time to time, however, a runaway application or other problem will drain your battery. You pull your phone out of your pocket in horror to find it warm, knowing it will die soon. And today, when your phone is dead, you feel lost and confused, like Manfred Macx without his glasses. Even if it only happens 3 times a month, it’s so bad that people now try to carry a backup battery in their bag.

One reason people like the large “phablet” phones is they come with bigger batteries, but I think even those who don’t want a phone too large for their hand still want a bigger battery. The conventional wisdom for a long time was that everybody wants thinner — I am not sure that’s true. Of course, a two battery system with one swappable still has its merits, or the standardized battery sticks I talked about.

The case is another matter. Here we buy a phone that is as thin as they can make it, and then we deliberately make it thicker to protect it.

I propose that phone design include 4 “shock corners” which are actually slightly thicker than the phone, and stick out just a few mm in all directions. They will become the point of impact for all falls, and just a little shock buffer can make a big difference. What I propose further, though, even though it uses precious space in the device, is that they attach to indents at the corners of the phone, probably with a tiny jeweler’s screw or other small connection. This would allow the massive case industry to design all sorts of alternate bumpers and cases for phones that could attach firmly to the phone. Today, cases have to wrap all the way around the phone in order to hold it, which limits their design in many ways.

You could attach many things to your phone if it had a screw hole, not just bumper cases: mounts that slot easily into car holders or other holders, magnetic mounts and inductive charging plates, accessory mounts of all sorts, and yes, even extra batteries.

While it would be nice to standardize, the truth is the case industry has reveled in supporting 1,000 different models of phone, and so could the attachment industry.

The Oscars

While not worthy of a blog post of its own, I was amused to note on Sunday that Oscars were won by films whose subjects were Hawking, Turing, Edward Snowden and robot-building nerds. Years ago it would have been remarkable if people had even heard of all these, and today, nobody noticed. Nerd culture really has won.

Where's my fast, smart, overhead scanner?

Back in 2008, I proposed the idea of a scanner club which would share high-end scanning equipment to rid our houses of the glut of paper. It’s a harder problem than it sounds. I bought a high-end Fujitsu office scanner (original price $5K, but I paid a lot less) and it’s done some things for me, but it’s still way too hard to use on general scanning problems.

I’ve bought a lot of scanners in my day. There are now lots of portable hand scanners that just scan to an SD card, which I like. I also have several flatbeds and a couple of high volume sheetfeds.

In the scanner club article, I outlined a different design for how I would like a scanner to work. This design is faster and much less expensive and probably more reliable than all the other designs, yet 7 years later, nobody has built it.

The design is similar to the “document camera” family of scanners, which feature a camera suspended over a flat surface, equipped with some LED lighting. Thanks to the progress in digital cameras, a fast, high resolution camera is now something you can get cheap. Consider the $350 Hovercam Solo 8, which provides an 8 megapixel (4K) image at 30 frames per second. Soon, 4K cameras will become very cheap. You don’t need video at that resolution, and still cameras in the 20 megapixel range, which means 500 pixels/inch scanning of letter sized paper, are cheap and plentiful.

Under the camera you could put anything, but a surface of a distinct colour (like green screen) is a good idea. Anything but the same colour as your paper will do. To get extra fancy, the table could be perforated with small holes like an air hockey table, and have a small suction pump, so that paper put on it is instantly held flat, sticking slightly to the surface.

No-button scanning

The real feature I want is an ability to scan pages as fast as a human being can slap them down on the table. To scan a document, you would just take pages and quickly put them down, one after the other, as fast as you can, so long as you pause long enough for your hand to leave the view and the paper to stay still for 100 milliseconds or so.

The system will be watching with a 60 frame per second standard HD video camera (these are very cheap today.) It will watch until a new page arrives and your hand leaves. Because it will have an image of the table or papers under the new sheet, it can spot the difference. It can also spot when the image becomes still for a few frames, and when it doesn’t have your hand in it. This would trigger a high resolution still image. The LEDs would flash with that still image, which is your signal to know the image has been taken and the system is ready to drop a new page on. Every so often you would clear the stack so it doesn’t grow too high.
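To make the trigger logic concrete, here is a minimal sketch in Python with OpenCV. The capture_still() call is a hypothetical stand-in for firing the high-resolution camera and LED flash, and simple frame differencing stands in for real hand detection:

```python
import cv2

STILL_FRAMES = 8         # ~130 ms of stillness at 60 fps
MOTION_THRESHOLD = 8.0   # mean absolute pixel difference that counts as change

def capture_still():
    # Hypothetical stand-in: trigger the hi-res still camera and LED flash.
    print("click!")

def run_scanner(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    last_capture = None  # greyscale scene as it looked at the last still
    prev = None
    still_count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            motion = cv2.absdiff(gray, prev).mean()
            still_count = still_count + 1 if motion < MOTION_THRESHOLD else 0
        prev = gray
        # A new page is present if the (now still) scene differs from the
        # scene at the previous capture.
        changed = (last_capture is None or
                   cv2.absdiff(gray, last_capture).mean() > MOTION_THRESHOLD)
        if still_count >= STILL_FRAMES and changed:
            capture_still()
            last_capture = gray
            still_count = 0
```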

Alternatively, you could remove each page before you add a new one. This would be slower, but you would get no movement of the papers under the top page. If you had the suction table, each page would be held nice and flat, with a green background around it, allowing a highly accurate rotation and crop in the final image. With two hands it might not be much slower to pull pages out while adding new ones.

No button is pressed between scans or even to start and finish scanning. You might have some buttons on the scanner to indicate you are clearing the stack, or to select modes (colour, black and white, line art, double sided, exposure modes etc.) Instead of buttons, you could also have little tokens you put on the surface with codes that can be read by the camera. This can include sheets of paper you print with bar codes to insert in the middle of your scanning streams.

By warning the scanner in advance, you could also scan bound books and pamphlets, and even stapled documents without unstapling them. You will get some small distortions, but the scans will be fine if the goal is document storage rather than publishing. (You could even eliminate those distortions if you use 3D scanning techniques, like structured light projection onto the pages, or having 2 cameras for stereo.)

For books, this is already worked out, and many places like the Internet Archive build special scanners that use overhead cameras for books. They have not attacked the “loose pile of paper” problem that so many of us have in our files and boxes of paper.

Why this method?

I believe this method is much faster than even high speed commercial scanners on all but the most regular of documents. You can flip pages at better than 1 per second. With small things, like business cards and photos, you can lay down multiple pages per second. That’s already the speed of typical high end office scanners. But the difference is actually far greater.

For those office scanners, you tend to need a fairly regular stack or the document feeder may mess up. Scanning a pile of different sized pages is problematic, and even general loose pages run the risk of skipping pages or other errors. As such, you always do a little bit of prep with your stacks of documents before you put them in the scanner. No button scanning will work with a random pile of cards and papers, including even folded papers. You would unfold them as you scan, but the overall process will take less time.

A scanner like this can handle almost any size and shape of paper. It could offer the option to zoom the camera out, or pull it higher, to scan very large pages, which the other scanners just can’t do. You would get a lower ppi on the larger pages; if you can’t accept that, scan sections at full ppi and stitch them together as you would with an older scanner.

The scans will not be as clean as a flatbed or sheetfed scanner. There will be variations in lighting, and shading from curvature of the pages, along with minor distortions unless you use the suction table for all pages. A regular scanner puts a light source right on the page and full 3-colour scanning elements right next to it, so it’s going to be higher quality. For publication and professional archiving, the big scanners will still win. On the other hand, this scanner could handle 3-dimensional objects and any thickness of paper.

Another thing that’s slower here is double sided pages. A few options are available here:

  • Flip every page. Have software in the scanner able to identify the act of flipping — especially easy if you have the 3D imaging with structured light.
  • Run the whole stack through again, upside-down. Runs the risk of getting out of sync. You want to be sure you tie every page with its other side.
  • Build a fancier double sided table where the surface is a sheet of glass or plexi, and there are cameras on both sides. (Flash the flash at two different times of course to avoid translucent paper.) Probably no holes in the glass for suction as those would show in the lower image.

Ideally, all of this would work without a computer, storing the images to a flash card. Fancier adjustments and OCR could be done later on the computer, as well as converting images to PDFs and breaking things up into different documents. Even better if it can work on batteries, and fold up for small storage. But frankly, I would be happy to have it always there, always on. Any paper I received in the mail would get a quick slap-down on the scanning table and the paper could go in the recycling right away.

You could also hire teens to go through your old filing cabinets and scan them. I believe this scanner design would be inexpensive, so there would be less need to share it.

Getting Fancy

As Moore’s law progresses, we can do even more. Since we’re taking video anyway, if we have the power to process it, it becomes possible to combine all the video frames that contain a page, and produce an image that is better than any one frame, with sub-pixel resolution and superior elimination of gradations in lighting and distortions.
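A very rough sketch of the simplest version of this, using OpenCV: align each frame to the first with a sub-pixel ECC transform, then average them. True multi-frame super-resolution is considerably fancier; plain averaging mainly removes noise and lighting flicker, but it shows how the video stream can be mined:

```python
import cv2
import numpy as np

def fuse_frames(frames):
    # Align every frame to the first (to sub-pixel accuracy) and average.
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    acc = frames[0].astype(np.float64)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)  # initial affine guess
        _, warp = cv2.findTransformECC(ref, gray, warp, cv2.MOTION_AFFINE)
        aligned = cv2.warpAffine(frame, warp, (ref.shape[1], ref.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        acc += aligned
    return (acc / len(frames)).astype(np.uint8)
```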

As noted in the comments, it also becomes possible to do all this with what’s in a mobile phone, or any video camera with post-processing. One can even imagine:

  • Flipping through a book at high speed in front of a high-speed camera, and getting an image of the entire book in just a few seconds. Yes, some pages will get missed so you just do it again until it says it has all the pages. Update: This lab did something like this.
  • Vernor Vinge’s crazy scanner from Rainbows End, which sliced off the spines and blew the pages down a tube, being imaged all the way along to capture everything.
  • Using a big table and a group of people who just slap things down on the table until the computer, using a projector, shows you which things have been scanned and can be replaced. Thousands of pages could go by in minutes.

Does Tesla’s new home storage battery suggest an amazing breakthrough?

There has been lots of buzz over announcements from Tesla that they will sell a battery for home electricity storage manufactured in the “gigafactory” they are building to make electric car batteries. It is suggested that 1/3 of the capacity of the factory might go to grid storage batteries.

This is very interesting because, at present, battery grid storage is not generally economical. The problem is the cost of the batteries. While batteries can be as much as 90% efficient, they wear out the more you use and recharge them. Batteries vary a lot in how many cycles they will deliver, and this depends on how you use the battery (ie. do you drain it all the way, or use only the middle of the range, etc.) If your battery will deliver 1,000 cycles using 60% of its range (from 20% to 80%) and costs $400/kwh, then you will get 600 kwh over the lifetime of a kwh unit, or about 67 cents per kwh (presuming no residual value.) That’s not an economical cost for energy anywhere, except perhaps off-grid. (You also lose a cent or two from losses in the system.) If you can get down to 9 cents/kwh, plus 1 cent for losses, you get parity with the typical grid. However, this is modified by some important caveats (a worked version of this arithmetic follows the list):

  • If you have a grid with very different prices during the day, you can charge your batteries at the night price and use them during the daytime peak. You might pay 7 cents at night and avoid 21 cent prices in the day, so a battery cost of 14 cents/kwh is break-even.
  • You get a backup power system for times when the grid is off. How valuable that is depends on who you are. For many it’s worth several hundred dollars. (But not for too many, as you can get a generator as backup, and most people don’t.)
  • Because battery prices are dropping fast, a battery pack today will lose value quickly, even before it physically degrades. And yes, in spite of what you might imagine in terms of “who cares, as long as it’s working,” that matters.
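To make that arithmetic concrete, a tiny Python sketch using the illustrative numbers from above (they are not real product figures):

```python
# Lifetime cost of storage per kwh delivered, ignoring interest and
# residual value (the fuller formula appears below).
def storage_cents_per_kwh(dollars_per_kwh_capacity, cycles,
                          depth_of_discharge, loss_cents=1.0):
    lifetime_kwh = cycles * depth_of_discharge  # delivered per kwh of capacity
    return 100.0 * dollars_per_kwh_capacity / lifetime_kwh + loss_cents

print(storage_cents_per_kwh(400, 1000, 0.6))  # ~67.7 cents: not economical
print(storage_cents_per_kwh(80, 1000, 0.6))   # ~14.3 cents: near the 14-cent
                                              # night/day arbitrage break-even
```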

The magic number that is not well understood about batteries is the lifetime watt-hours in the battery per dollar. Most analyses will tell you about the instantaneous capacity in kwh, notably important numbers like energy density (in kwh/kg or kwh/litre) and cost (in dollars/kwh). But for grid storage, the energy density is almost entirely unimportant, the cost of single-cycle capacity is much less important, and the lifetime watt-hours is the number you want to know. For any battery there will be an “optimal” duty cycle which maximizes the lifetime wh. (For example, taking it down to 20% and then back up to 80% is a popular duty cycle.)

The lifetime watt hour number is:

Number of cycles before replacement * watt-hours in optimum cycle

The $/lifetime-wh is:

(Battery cost + interest on cost over lifetime - battery recycle value) / lifetime-wh

(You must also consider these numbers around the system, because in addition to a battery pack, you need chargers, inverters and grid-tie equipment, though they may last longer than a battery pack.)
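Continuing the sketch, the two formulas above translate directly into code (the interest and recycle-value inputs are placeholders to be filled in for a real system):

```python
def lifetime_wh(cycles, wh_per_optimal_cycle):
    # Number of cycles before replacement * watt-hours in optimum cycle
    return cycles * wh_per_optimal_cycle

def dollars_per_lifetime_wh(battery_cost, lifetime_interest, recycle_value,
                            cycles, wh_per_optimal_cycle):
    # (Battery cost + interest over lifetime - recycle value) / lifetime-wh
    return ((battery_cost + lifetime_interest - recycle_value)
            / lifetime_wh(cycles, wh_per_optimal_cycle))
```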

I find it odd that this very important number is not widely discussed or published. One reason is that it’s not as important for electric cars and consumer electronic goods.

Electric car batteries

In electric cars, it’s difficult because you have to run the car to match the driver’s demands. Some days the driver only goes 10 miles and barely discharges before plugging in. Other days they want to run the car all the way down to almost empty. Because of this each battery will respond differently. Taxis, especially Robotaxis, can do their driving to match an optimum cycle, and this number is important for them.

A lot of factors affect your choice of electric car battery. For a car, you want everything, and in practice must make trade-offs:

  • Cost per kwh of capacity — this is your range, and electric car buyers care a great deal about that
  • Low weight (high energy density) is essential, extra weight decreases performance and range
  • Modest size is important, you don’t want to fill your cargo space with batteries
  • Ability to use the full capacity from time to time without damaging the battery’s life much is important, or you don’t really have the range you paid for and you carry its weight for nothing.
  • High discharge is important for acceleration
  • Fast charge is important as DC fast-charging stations arise. It must be easy to make the cells take charge and not burst.
  • Ability to work in all temperatures is a must. Many batteries lose a lot of capacity in the cold.
  • Safety if hit by a truck is a factor, or even safety just sitting there.
  • Long lifetime, and lifetime-wh affect when you must replace the battery or junk the car

Weight is really important in the electric car because as you add weight, you reduce the efficiency and performance of the car. Double the battery and you don’t double the range because you added that weight, and you also make the car slower. After a while, it becomes much less useful to add range, and the heavier your battery is, the sooner that comes.
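A toy model makes the diminishing returns visible. All the constants here are invented for illustration; real vehicles also have aerodynamic and rolling-resistance terms that don’t scale with mass:

```python
# Range vs. battery size when consumption grows with vehicle mass.
BASE_MASS_KG = 1600        # car without battery (illustrative)
KG_PER_KWH = 6.0           # pack mass per kwh of capacity (illustrative)
WH_PER_KM_PER_TONNE = 110  # consumption proportional to total mass (illustrative)

def range_km(battery_kwh):
    mass_tonnes = (BASE_MASS_KG + KG_PER_KWH * battery_kwh) / 1000.0
    wh_per_km = WH_PER_KM_PER_TONNE * mass_tonnes
    return battery_kwh * 1000.0 / wh_per_km

print(range_km(60))   # ~278 km
print(range_km(120))  # ~470 km: double the battery, far short of double the range
```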

That’s why Tesla makes lithium ion battery based cars. These batteries are light, but more expensive than the heavier chemistries. Today they cost around $500/kwh of capacity (all-in), but that cost is forecast to drop, perhaps to $200/kwh by 2020. The initial pack in the Tesla costs $40,000, but they will sell you a replacement 8 years down the road for just $12,000, in part because they plan to pay a lot less for batteries in 8 years.

The Daily Show is the most valuable TV program out there, and probably will stay that way

Musings on the economics of cutting the cord.

Over the past 14 years, there has been only one constant in my TV viewing, and that’s The Daily Show. I first loved it with Craig Kilborn, and even more under Jon Stewart. I’ve seen almost all of them, even after going away for a few weeks, because when you drop the interview and commercials, it’s a pretty quick play. Jon Stewart’s decision to leave got a much stronger reaction from me than any other TV show news, though I think the show will survive.

I don’t know how many viewers are like me, but I think that TDS is one of the most commercially valuable programs on TV. It is the primary reason I have not “cut the cord” (or rather turned off the satellite.) I want to get it in HD, with the ability to skip commercials, at 8pm on the night that it was made. No other show I watch regularly meets this test. I turned off my last network show last year — I had been continuing to watch just the “Weekend Update” part of SNL along with 1 or 2 sketches. It always surprised me that the Daily Show team could produce a better satirical newscast than the SNL writers, even though SNL’s team had more money and a whole week to produce much less material.

The reason I call it that valuable is that, by and large, I am paying $45/month for satellite primarily to get that show. Sure, I watch other shows, but in a pinch I would be willing to watch those much later through other channels, like Netflix, DVD or online video stores at reasonable prices. I want the Daily Show as soon as I can get it, which is 8pm on the west coast. On the east coast, the 11pm arrival is a bit late.

I could watch it on their web site, but that’s the next day, and with forced watching of commercials. My time is too valuable to me to watch commercials; I would much rather pay to see it without them. (As I have pointed out before, you receive around $1-$2 in value for every hour of commercials you watch on regular TV, though the online edition only plays 4 ads instead of the 12-15 typical of broadcast, which I never see anyway.)

In the early days at BitTorrent when we were trying to run a video store, I really wanted us to do a deal with Viacom/Comedy Central/TDS. In my plan, they would release the show to us (in HD before the cable systems moved to HD) as soon as possible (ie. before 11pm Eastern) and with unbleeped audio and no commercials. In other words, a superior product. I felt we could offer them more revenue per pay subscriber than they were getting from advertising. That’s because the typical half-hour show only brings in around 15 cents per broadcast viewer, presuming a $10 CPM. They were not interested, in part because some people didn’t want to go online, or had a bad view of BitTorrent (though the company that makes the software is not involved in any copyright infringement done with the tools.)
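The per-viewer arithmetic is easy to check; the ad load and episode count below are rough assumptions, not Viacom’s actual numbers:

```python
# Ad revenue per broadcast viewer at a $10 CPM ($10 per 1,000 ad impressions).
CPM_DOLLARS = 10.0
ADS_PER_EPISODE = 15      # rough broadcast ad load (12-15 is typical)
EPISODES_PER_MONTH = 16   # roughly 4 shows a week

per_episode = ADS_PER_EPISODE * CPM_DOLLARS / 1000.0
print(per_episode)                       # $0.15 per viewer per episode
print(per_episode * EPISODES_PER_MONTH)  # ~$2.40/month: well under a $5 subscription
```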

It may also have been that they knew some of that true value. Viacom requires cable and satellite companies to buy a bundle of channels from them, even though the channels also show ads. Evidence suggests that the bundle of Viacom channels (including Comedy Central, MTV and Nickelodeon) costs around $2.80 per household per month. While there are people like me who watch only Comedy Central from the Viacom bundle, most people probably watch 2 or more of them. They should be happy to get $5/month from a single household for a single show, but they are very committed to the bundling, and the cable companies, who don’t like the bundles, would get upset if Viacom sold individual shows like this and cable subscribers cut the cord.

In spite of this, I think cord cutting and unbundling are inevitable. The forces are too strong. Dish Network’s supposedly bold venture with Sling, which provides 20 channels of medium popularity for $20/month over the internet, only offers live streaming (no time-shifting, no fast forwarding), so it’s a completely uninteresting product to me.

As much as I love Jon Stewart, I think The Daily Show will survive his transition just fine. That’s because it was actually pretty funny with Craig Kilborn. Stewart improved it, but he is just one part of a group of writers, producers and other on-air talent, including those who came from a revolving door with The Onion. There are other folks who can pull it off.

TDS is available a day late on Amazon Instant Video, and the next day on Google Play, for $3/episode, or almost $50/month. You can get cable for a lot less than that. It’s on iTunes for $2/episode or $10/month; the latter price is reasonable, but does anybody know when episodes get released there? The price difference is rather large.

Keep Calm and Carry Passengers -- UK robocar projects level up

The government-backed robocar projects in the UK are going full steam, with this press release from the UK government to accompany the unveiling of the prototype Lutz pod, which should ply the streets of Milton Keynes and Greenwich.

This comes along with laws enabling the testing of vehicles (with safety drivers) and discussion of changes to the UK vehicle code.

The new pod follows a similar path to other fully-autonomous prototypes, reminding me of the EN-V from GM, the MIT City Car and the Google buggy prototype. It’s electric, meant for “last mile” and will lose its steering wheel once testing is over.

I also note they talk eagerly about the Meridian shuttle being tested in Greenwich, even though that’s a French vehicle.

When it comes to changes to the vehicle code, I think it’s premature. Even without looking at the proposed changes, I would say that we don’t know enough to work out what changes are needed, even though we all might be full of ideas.

One proposal is to remove the ban on tailgating to allow convoys. A reasonable enough thing, except people are not going to build convoys for quite some time, I think. The Volvo/SARTRE experiment found a number of problems with the idea, and you don’t want to do your first deployments with something that could crash 10 cars if it goes wrong instead of one. You do that later, once you feel very confident in your tech.

Another proposal called for changing how cyclists are treated. The law in the UK (and some other places) demands cyclists be given the full berth of a car. In practice nobody ever does that, and if they did, it would mean simply following along at bicycle speed, impeding traffic. It’s one of those classic cases, like speed limits in the USA, where the law only works if nobody follows it. (Though cyclists would say that they should just get the full lane, as the law says.)

We will need to fix these areas of the vehicle codes, but we should fix them only after we see a problem, unless it’s clear that the vehicles can’t be deployed without the change. Give the developers the chance to fix the problem on their own first. If you fix the law before you know what the vehicles will be like, you may ensconce old thinking into the law and have a hard time getting it out.

It is interesting to see Governments adapt so quickly to a disruptive technology. It’s quite probable that our hype is a bit too good and will come back to bite us. I predicted this sort of jurisdictional competition as governments realize they have a chance to make their regions become players in the new automotive industry, but they are embracing some things faster than I expected.

Multi car EV chargers

Electric vehicles are moving up, at least here in California, and it’s gotten to the point that EV drivers are finding all the charging stations where they want to go already in use, forcing them to travel well out of their way, or to panic. Sometimes going without a charge is not an option. Sometimes the car taking the spot is already mostly charged, or doesn’t need the charge much, but the owner has not come back.

Here in Silicon Valley, there is a problem that the bulk of the EVs have 60 to 80 miles of range — OK for wandering around the valley, but not quite enough for a trip to San Francisco and back, at least not a comfortable one. And we do like to go to San Francisco. The natives up there don’t really need the chargers in a typical day, but the visitors do. In general, unless you are certain you are going to get a charger, you won’t want to go in a typical EV. Sure, a Tesla has no problem, but a Tesla has a ridiculous amount of battery in it. You spend $40,000 on the battery pack in the Tesla, but use the second half of its capacity extremely rarely — it’s not cost effective outside the luxury market, at least at today’s prices (and also because of the weight.)

Charging stations are somewhat expensive. Even home stations cost from $400 to $800, because they must now include EVSE protocol equipment. This does a digital negotiation between the car and the plug on how much power is available and when to send it. The car must not draw more current than the circuit can handle, and you want the lines to not be live until the connection is solid. For now that’s expensive (presumably because of the high current switching gear.) Public charging stations also need a way to do billing and access control.

Another limit on public charging stations, however, is the size of the electrical service. A typical car wants 30 amps, or up to 50 if you can get it. Put in more than a few of those and you’re talking an upgrade to the building’s electrical service in many cases.

I propose a public charging pole which has 4 or even 8 cords on it. This pole would be placed at the intersection of 4 parking spots in a parking lot. (That’s not the usual layout; more often chargers end up placed against a wall, with only 2 parking spots in range, because that’s where the power is.) The station, however, may not have enough power to charge all the cables at once.
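Here is a minimal sketch of how such a pole might share a limited service, assuming the EVSE negotiation lets the station tell each car its current limit and revise it as cars come and go. The equal-share policy and the numbers are my assumptions, not a real product’s:

```python
# Share a fixed electrical service across several charging cords.
# Policy (an assumption for illustration): grant each active cord an
# equal share, capped at what its car can draw; redistribute leftovers.
def allocate_amps(service_amps, car_max_draws):
    grants = [0.0] * len(car_max_draws)
    remaining = float(service_amps)
    active = [i for i, amps in enumerate(car_max_draws) if amps > 0]
    while active and remaining > 0.001:
        share = remaining / len(active)
        still_hungry = []
        for i in active:
            grant = min(share, car_max_draws[i] - grants[i])
            grants[i] += grant
            remaining -= grant
            if grants[i] < car_max_draws[i]:
                still_hungry.append(i)
        if len(still_hungry) == len(active):
            break  # nobody hit a cap this round; shares are equal and we're done
        active = still_hungry
    return grants

# 100 A service, four cars wanting 30, 30, 50 and 16 A:
print(allocate_amps(100, [30, 30, 50, 16]))  # [28.0, 28.0, 28.0, 16.0]
```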

Rise of the selfie drones. Is tethered a good idea?

At CES, there were a couple of “selfie drones.” The Nixie is designed to be worn on your wrist, taken off, thrown, and then it returns to you after taking a photo or video. There was also the Zano which is fancier and claims it will follow you around, tracking you as you mountain bike or ski to make a video of you just as you do your cool trick.

The selfie is everywhere. In Rome, literally hundreds of vendors tried to sell me selfie sticks in all the major tourist areas, even with a fat Canon DSLR hanging from my neck. It’s become the most common street vendor gadget. (The blue LED wind up helicopters were driving me nuts anyway.)

I also had been thinking about this, and came up with a design that’s not as capable as these designs, but might be better. My selfie drone would be tethered. You would put down the base which would have the batteries and a retractable cord. Up would fly the camera drone, which would track your phone to get a great shot of you. (If it were for me, it would also offer panorama mode where it spun around at the top shooting a pano, with you or without you.)

This drone could not follow you as you do a sport, of course, or get above a certain height. But unlike the free-flying designs, it would not get lost over the cliff in the winds, as I think might happen to a number of these free selfie drones. It turns out that cliffs and outlook points are a common place to want to take these photos; they are exactly where you need a high view to capture you and what’s below you.

Secondly, with the battery on the ground, and only a short tether wire needed, you can have a much better camera as payload. Only needing a short flight time and not needing to carry the batteries means more capabilities for the drone.

It’s also less dangerous, and is unlikely to come under regulation because it physically can’t fly beyond a certain altitude or distance from the base. It could not shoot you from water or from over the edge of the cliff as the other drones could if you were willing to risk them.

My variation would probably be a niche product. Most selfies are there to show off where you were, not to be top quality photos. Only more serious photographers would want one capable of hauling up a quality lens. Because mine probably wants a motor in the base to reel it back in (so you don’t have to wind the cables), it might even cost more, not less.

The pano mode would be very useful. In so many pano spots, the view is fantastic but is blocked by bushes and trees, and the spectacular pano shot is only available if you get up high enough. For daytime, a tethered drone would probably do fine. I’m still waiting on the Panono, a camera-studded ball from Berlin that was funded on Kickstarter. You throw the ball up, and it figures out when it is at the top of its flight and shoots the panorama all at once. Something like that could also be carried by a tethered drone, and it has the advantage of not moving between shots, as a spinning drone would be at risk of doing. This is another thing I’ve wanted for a while. After my first experiments in airplane and helicopter based panoramas showed you really want to shoot everything all at once, I imagined decent digital cameras getting cheap enough to buy 16 of them and put them in a circle. Sadly, once cameras got that cheap, there were always better cameras that I decided I needed, which were too expensive to buy for that purpose.

An instant online debate for everybody ("Youtube" debate)

In continuation of my series on fixing politics I would like to address the issue of debates. Not just presidential debates, but all levels.

The big debates are a strange animal. You need to get the candidates to agree to come, and so a big negotiation takes place which inherently waters down the debate. Usually only the big 2 candidates appear in Presidential debates, and they put in rules that stop the candidates from actually actively debating one another. Most debates outside the big ones get little attention, and they are a lot of work.

I propose the creation, on an online video site — Youtube is an obvious choice but it need not be there — of a suite of tools to allow the creation of a special online video debate. Anybody, in any race, could create a debate using these tools, and do it easily.

To run a debate, some group with a reputation (press, or even election officials) would use the system to create a new debate. They would then gather some initial questions and invite candidates; usually all candidates in the race, there being no reason to exclude anybody (as you’ll see below.) The initial questions could be in video, coming from press or voters as desired.

The first round of questions would be released to the candidates. They would then be able to record video answers to those questions, in addition to opening statements. They could record answers of any length, or even record answers of multiple lengths, or answers with logical stopping points marked at different lengths. They could also write written answers or record just audio, which is much less work.

After this, candidates could look at what the other candidates said, and then record responses, again in varying lengths if they like. They could then record responses to the responses, and so on. They could record a response to a specific candidate’s statements, or a response applying to more than one, as they wish.

Candidates could also be allowed to ask questions of other candidates, who could elect to answer or not answer. They could also agree in advance to trade answers, ie. “I will answer one of yours if you will answer one of mine.”

This process would create a series of videos, and we then get to the next part of the tool, which would allow the voter to program what sort of debate they want.

For example, a voter could say:

  • I want a debate between the Republican and Democrat, initial answers limited to around 2 minutes, follow-ups to one minute, up to 2 each.
  • I want a debate between the Republican, Democrat and Libertarian, with follow-ups and videos until I hit “next”
  • I want a debate between all candidates on Climate Change (or any other issue that’s been put in the debate)
  • I want a debate on foreign policy among the top candidates as ranked by feedback scores/Greenpeace/etc.

The voter could have exactly the debate they wanted, and candidates could go back and forth rebutting one another as long as they wanted. Candidates would be able to get statistics on the length of answers that voters are looking for, and know how long a response to give. Typically they would do one short and one long, but they could also make a long response that is structured so it can be stopped reasonably at several different points when the voter gets bored.
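To make the idea concrete, here is a sketch of the data model such a tool might use. All the names are hypothetical; the point is simply that a debate becomes a tree of video replies, and the viewer’s “program” is a filter walked over that tree:

```python
# Hypothetical data model: a debate is a set of questions, each with a
# tree of video answers; a viewer "programs" a debate by filtering it.
from dataclasses import dataclass, field

@dataclass
class Answer:
    candidate: str
    video_url: str
    length_sec: int
    replies: list = field(default_factory=list)  # responses to this answer

@dataclass
class Question:
    topic: str
    text: str
    answers: list = field(default_factory=list)

def assemble_debate(questions, candidates, topic=None,
                    max_first_sec=120, max_reply_sec=60, max_depth=2):
    """Yield the clips for one viewer's requested debate, in order."""
    def walk(answer, depth):
        limit = max_first_sec if depth == 0 else max_reply_sec
        if answer.candidate in candidates and answer.length_sec <= limit:
            yield answer
            if depth < max_depth:
                for reply in answer.replies:
                    yield from walk(reply, depth + 1)
    for q in questions:
        if topic is None or q.topic == topic:
            for a in q.answers:
                yield from walk(a, 0)
```

A request like “Republican vs. Democrat, 2-minute answers, 1-minute follow-ups, up to 2 deep” then becomes a single call: assemble_debate(questions, {"Republican", "Democrat"}, max_first_sec=120, max_reply_sec=60, max_depth=2).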

Sure, the Republican might decide not to respond to the Green Party candidate’s view on Climate Change. If the viewer asked for a Republican-Green debate, the system would just say “the candidate offered no response.” Voters who wanted could even accept seeing material from other voters.

Candidates would inevitably repeat themselves across answers, so software would convert the answers to text (or campaigns would provide the captions), and the system could automatically skip things you’ve already seen, quickly popping up the text for a few seconds instead. If desired, campaign workers could spend a fair bit of time tuning just what to show based on the history of the viewer’s watching.

For the Presidential debates, building a well crafted set of videos would take time, but probably less time than the immense prep and rehearsal they do for those debates. On the other hand, they get to do multiple takes, so they don’t need to rehearse, just say it until it feels right. It does mean you don’t get to see the candidate under pressure — there is no Rick Perry saying he will close 3 agencies and only being able to name 2. As such it may not substitute fully for that, but it would also allow a low-effort debate at every level of contest, and bring the candidates in front of more voters.
