
Google to custom-build its own car with no steering wheel

In what is the biggest announcement since Google first revealed its car project, the company has announced that it is building its own car: a small, low-speed urban vehicle for two with no steering wheel, throttle or brakes. It will act as a true robocar, delivering itself and taking people where they want to go with a simple interface. The car is currently limited to 25mph, and has special pedestrian protection features to make it even safer. (I should note that as a consultant to that team, I helped push the project in this direction.)

This is very different from all the offerings being discussed by the various car companies, and is most similar to the Navia, which went on sale earlier this year. The Navia is meant as a shuttle: up to 12 people stand in it while it moves on private campus roads, and it only goes 20 km/h rather than the 40 km/h of Google’s new car. Google plans to operate its car on public roads, and will have non-employees in test prototype vehicles “very soon.”

This is a watershed moment and an expression of the idea that the robocar is not a car but the thing that comes after the car, as the car came after the horse. Google’s car is disruptive: it seems small, silly-looking and limited if you look at it from the perspective of existing car makers. That’s because that’s how the future often looks.

I have a lot to say about what this car means, but at the same time, very little, because I have been saying it since 2007. One notable feature (which I was among those pushing for inside the team) is a soft cushion bumper and windshield. Clearly the goal is always to have the car never hit anybody, but it can still happen, because systems aren’t perfect and sometimes people appear in front of cars so quickly that it is physically impossible to stop. In this situation, cars should work to protect pedestrians and cyclists. Volvo and Autoliv have an airbag that inflates over the windshield pillars, which are the thing that most often kills a cyclist. Of the 1.2 million people killed in car accidents each year, close to 500,000 are pedestrians, mostly in the lower income nations. These are first steps in protecting them as well as the occupants of the car.

The car has 2 seats (side-by-side) and very few controls. It is a prototype, being made at first in small quantities for testing.

More details and other videos, including one of Chris Urmson giving more details, can be found at the new Google Plus page for the car. Also of interest is this interview with Chris.

I’m in Milan right now, about to talk to Google’s customers about the car — somewhat ironic — after 4 weeks on the road all over Europe. 2 more weeks to go! I will be in Copenhagen, Amsterdam, London and NYC in the coming weeks, after having been in NYC, Berlin, Krakow, Toronto, Amsterdam, Copenhagen, Oslo, the fjords and Milan. In New York, come see me at Singularity U’s Exponential Finance conference June 10-11.

Google announces urban driving milestone

News from Google’s project is rare, but today on the Google blog the team described new achievements in urban driving and reported a total of 700,000 miles driven. The car has been undergoing extensive testing in urban situations, and Google let an Atlantic reporter get a demo of the urban driving, which is worth a read.

You will want to check out the new video demo of urban operations:

While Google speakers have been saying for a while that their goal is a full-auto car that does more than the highway, this release shows the dedicated work already underway towards that goal. It is the correct goal, because this is the path to a vehicle that can operate vacant, and deliver, store and refuel itself.

Much of the early history of development has been on the highway, and most car company projects focus on the highway or traffic jam situations. Google’s cars were, in years past, primarily seen on the highways. In spite of the speed, highway driving is actually a much easier task. The traffic is predictable, and the oncoming traffic is physically separated. There are no cyclists, no pedestrians, no traffic lights, no stop signs. The scariest things are on-ramps and construction zones. Low-speed highway driving could even be considered a largely solved problem by now.

Highway driving accounts for just over half of our miles, though of course not half our hours. A full-auto car on the highway delivers two primary values: fewer accidents (when delivered) and giving productive time back to the highway commuter and long distance traveller. This time is of no small value, of course. But the big values to society as a whole come in the city, and so this is the right target. The “super-cruise” products which require supervision do not give back this time, and it is debatable whether they deliver the safety; their prime value is a more relaxing driving experience.

Google continues to lead its competitors by a large margin. (Disclaimer: they have been a consulting client of mine.) While Mercedes — probably the most advanced of the car companies — has done an urban driving test run, it is not even at the level Google was demonstrating in 2010. It is time for the car makers to get very afraid. Major disruption is coming to their industry, and the past history of high-tech disruptions shows that very few of the incumbent leaders make it through to the other side. If I were one of the car makers that doesn’t even have a serious project on this, I would be very afraid right now.

New regulations are banning the development of delivery robots

Many states and jurisdictions are rushing to write laws and regulations governing the testing and deployment of robocars. California is working on its new regulations right now. The first focus is on testing, which makes sense.

Unfortunately the California proposed regulations and many similar regulations contain a serious flaw:

The autonomous vehicle test driver is either in immediate physical control of the vehicle or is monitoring the vehicle’s operations and capable of taking over immediate physical control.

This is quite reasonable for testing vehicles based on modern cars, which all have steering wheels and brakes with physical connections to the steering and braking systems. But it presents a problem for testing delivery robots or deliverbots.

Delivery robots are world-changing. While they won’t and can’t carry people, they will change retailing, logistics, the supply chain, and even the trip to the airport in huge ways. By offering very quick delivery — under 30 minutes — of every type of physical good, at a very low price (a few pennies a mile) and on the schedule of the recipient, they will disrupt the supply chain of everything. Others, including Amazon, are working on doing this with flying drones, but for heavier items and efficient delivery, the ground is the way to go.

While making fully unmanned vehicles is more challenging than ones supervised by their passenger, the delivery robot is a much easier problem than the self-delivering taxi for many reasons:

  • It can’t kill its cargo, and thus needs no crumple zones, airbags or other passive internal safety.
  • It still must not hurt people on the street, but its cargo is not impatient, and it can go more slowly to stay safer. It can also pull to the side frequently to let people pass if needed.
  • It doesn’t have to travel the quickest route, and so it can limit itself to low-speed streets it knows are safer.
  • It needs no windshield or wheel, and can be small, light and very inexpensive.

A typical deliverbot might look like little more than a suitcase-sized box on 3 or 4 wheels. It would have sensors, of course, but little more inside than batteries and a small electric motor. It probably will be covered in padding or pre-inflated airbags, to assure it does the least damage possible if it does hit somebody or something. At a weight of under 100 lbs, with a speed of only 25 km/h and balloon padding all around, it probably couldn’t kill you even if it hit you head on (though that would still hurt quite a bit).
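
As a rough sanity check on that claim, here is some back-of-envelope arithmetic in Python (the robot’s figures come straight from the text; the comparison car is my own assumption):

```python
# Kinetic energy of a ~100 lb deliverbot at 25 km/h, versus an
# (assumed) 1500 kg car at 50 km/h for comparison.
LB_TO_KG = 0.4536

def kinetic_energy_j(mass_kg, speed_kmh):
    speed_ms = speed_kmh * 1000 / 3600
    return 0.5 * mass_kg * speed_ms ** 2

print(kinetic_energy_j(100 * LB_TO_KG, 25))   # ~1,100 J for the deliverbot
print(kinetic_energy_j(1500, 50))             # ~145,000 J for the car
```

The deliverbot carries roughly a hundredth of the car’s energy into any collision, which is the intuition behind the padding-plus-low-speed argument.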

The point is that this is an easier problem, and so we might see development of it before we see full-on taxis for people.

But the regulations do not allow it to be tested. The smaller ones could not fit a human, and even if you could get a small human inside, they would not have the passive safety systems in place for that person — something you want even more in a test vehicle. They would need to add physical steering and braking systems which would not be present in the full drive-by-wire deployment vehicle. Testing on real roads is vital for self-driving systems. Test tracks will only show you a tiny fraction of the problem.

One way to test the deliverbot would be to follow it in a chase car. The chase car would observe all operations, and have a redundant, reliable radio link to allow a person in the chase car to take direct control of any steering or brakes, bypassing the autonomous drive system. This would still be drive-by-wire(less) though, not physical control.
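
A minimal sketch of that override link, assuming a simple heartbeat protocol (the class, message format and timeout value are all hypothetical, purely to illustrate the idea):

```python
import time

# The deliverbot follows its autonomous plan only while fresh heartbeats
# arrive from the chase car; a stale link or an explicit override command
# bypasses the autonomous drive system entirely.
HEARTBEAT_TIMEOUT_S = 0.2   # assumed safety budget, not a real spec

class ChaseCarSafetyGate:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.override = None                # (steer, brake) from the chase car

    def on_radio_message(self, msg):
        self.last_heartbeat = time.monotonic()
        if msg.get("type") == "override":
            self.override = (msg["steer"], msg["brake"])

    def pick_command(self, autonomous_cmd):
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            return (0.0, 1.0)               # link lost: wheels straight, full brake
        if self.override is not None:
            return self.override            # human has direct control
        return autonomous_cmd
```

The key property is that silence counts as a stop command, so a failed radio can never leave the robot driving blind.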

These regulations also affect testing of full drive-by-wire vehicles. Many hybrid and electric cars today are mostly drive-by-wire in ordinary operations, and the new Infiniti Q50 features the first steer-by-wire system. However, the Q50 has a clutch which physically reconnects the steering column and the wheels in the event of system failure, and while the hybrids do DBW regenerative braking for the first part of the brake pedal travel, if you press all the way down you get a physical hydraulic connection to the brakes. A full DBW car without any steering wheel, like the Induct Navia, can’t be tested on regular roads under these regulations. You could put a DBW steering wheel in the Navia for testing, but it would not be physical control.

Many interesting new designs must be DBW. Things like independent control of the wheels (as on the Nissan Pivo) and steering through differential electric motor torque can’t be done through physical control. We don’t want to ban testing of these vehicles.

Yes, teams can test regular cars and then move their systems down to the deliverbots. But this bars the deliverbots from coming first, even though they are easier, and allows only the developers of passenger vehicles to get in the game.

So let’s modify these regulations to exempt vehicles which can’t safely carry a person, or which are fully drive-by-wire, and simply demand a highly reliable DBW system the safety driver can use.

Can they make a better black box pinger?

I wrote earlier on how we might make it easier to find a lost jet, which included the proposal that the pingers in the black boxes follow a schedule of slowing down their pings to make their batteries last much longer.

In most cases, we’ll know where the jet went down and even see debris, and so getting a ping every second is useful. But if it’s been a week, something is clearly wrong, and having the pinger last much longer becomes important. It should slow down, eventually dropping to intervals as long as one minute, or even an hour, to keep it going for a year or more.

But it would be even more valuable if the pinger was precise about when it pinged. It’s easy to get very accurate clocks these days, either sourced from GPS chips (which cost $5) or just synced on occasion from other sources. Unlike GPS transmitter clocks, which must sync to the nanosecond, here even a second of drift is tolerable.

The key is that a receiver who hears a ping must be able to figure out when it was sent, because from that they can get the range, and even a very rough range is magic when it comes to finding the box. Just two pings received from different places, each with a range, will probably locate the box.

I presume the audio signal is full of noise and you can’t encode data into it very well, but you can vary the interval between pings. For example, while a pinger might bleep every second, every 30 seconds it could ping twice in a second. Any listener who hears 30 seconds of pings would then know the pinger’s clock and when each ping was sent. There could be other variations in the intervals to help pin the time down even better, but it’s probably not needed. In 30 seconds, sound travels 28 miles underwater, and it’s unlikely you would hear the ping from that far away.

When the ping slows down as the battery gets lower, you don’t need the variation any more, because you will know that pings are sent at precise seconds. If pings are down to one a minute, you might hear just one, but knowing it was sent at exactly the top of the minute, you will know its range, at least if you are within about 50 miles.
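
To make the ranging concrete, here is a toy decode in Python. It assumes exactly what is proposed above: both clocks GPS-synced, the double ping emitted on each 30-second boundary of the pinger’s clock, and sound moving at roughly 1500 m/s (the real speed varies, as noted below):

```python
SOUND_SPEED_MS = 1500.0   # rough speed of sound in seawater
FRAME_S = 30.0            # the double ping marks each 30-second boundary

def range_from_double_ping(arrival_time_s):
    """arrival_time_s: the receiver's GPS-synced timestamp of the double ping."""
    travel_s = arrival_time_s % FRAME_S      # it left at the top of the frame
    return travel_s * SOUND_SPEED_MS         # metres, unambiguous out to ~45 km

# A double ping heard 4.2 s after a 30-second boundary puts the box ~6.3 km away.
print(range_from_double_ping(1234567894.2))
```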

Of course things can interfere here — I don’t know if sound travels at such a reliable speed in water, and of course, sound waves bounce off the sea floor and other things. It is possible the multipath problem for sound is much worse than I imagine, making this impossible; perhaps that’s why it hasn’t been done. This also adds some complexity to the pinger, which designers may wish to avoid. But anything that made the pings distinctive would also allow two ships tracking the pings to know they had both heard the same particular ping, and thus solve for the location of the pinger. Simple designs are possible.

Two-way pinger

If you want to get complex, of course, you could make the pinger smart, listening for commands from outside. Listening takes much less power, and a smart pinger could know not to bother pinging if it can’t hear the ship searching for it. Ships can ping with much more volume, and be sure to be heard. There is a risk that a pinger with a broken microphone might not realize its microphone is broken, but otherwise a pinger should sit silent until it hears request pings from ships, and answer those. It could answer with much more power and thus more range, because it would only ping when commanded to. It could sit under the sea for years until it heard a request from a passing ship or robot. (Like the robots made by my friends at Liquid Robotics, which cruise unmanned at 2 knots using wave power and could spend years searching an area.)

The search for MH370 has cost hundreds of millions of dollars, so this is something worth investigating.

Other more radical ideas might be a pinger able to release small quantities of radioactive material after waiting a few weeks without being found, or anything else that can be detected in extremely minute concentrations. Spotting those chemicals could be done by sampling the sea, and since we would know exactly when they were released, finding them soon enough could help narrow the search area.

Track the waves

I will repeat a new idea I added to the end of the older post. As soon as the search zone is identified, a search aircraft should drop small floating devices with small radio transmitters that make them easy to find again at modest range. Drop them as densely as you can, which might mean every 10 miles or every 100 miles, but try to get coverage of the area.

Then, if you find debris from the plane, do a radio hunt for the nearest such beacon. When you find it, or others, you can note their serial number, know where they were dropped, and thus get an idea of where the debris might have come from. Make them fancier, broadcasting their GPS location or remembering it for a dump when re-collected, and you could build a model of motion on the surface of the sea, and thus have a clue of how to track debris back to the crash site. In this case, it would have been a long time before the search zone was located, but in other cases it will be known sooner.

Conspiracy theory!

Reporting has not been clear, but it appears that the ships which heard the pings did so in the very first place they looked. With a range of only a few miles, that seems close to impossibly good luck. If it turns out they did hear the last gasp of the black boxes, this suggests an interesting theory.

The theory would be that some advanced intelligence agencies have always known where the plane went down, but could not reveal that because they did not want to reveal their capabilities. A common technique in intelligence, when you learn something important by secret means, is to engineer another way to learn that information, so that it appears it was learned through non-secret means or luck. In World War II, for example, spies who broke enemy codes and learned about troop movements would have a “lucky” recon plane “just happen” to fly over the area, to explain how they knew where the enemy was. Too much good luck, though, and the enemy might get suspicious and learn you had broken their crypto.

In this case the luck is astounding. Yes, it is the central area predicted by the one ping found by Inmarsat, but that was never so precise. In this case, though, all we might discern — if we believe this theory at all — is that maybe, just maybe, some intelligence agency among the countries searching has some hidden ways to track aircraft. Not really all that surprising as a bit of news, though.

Let’s hope they do find what’s left — but if they do, it seems likely to me it happened because the spies know things they aren’t telling us.

Robocar Prize in India, New Vislab car

I read a lot of feeds, and there are now scores of stories about robocars every week. Almost every day a new publication gives a summary of things. Here, I want to focus on things that are truly new, rather than being comprehensive.

Mahindra “Rise” Prize

The large Indian company Mahindra has announced a $700,000 “Rise” prize for robocar development aimed at India’s rather special driving challenges. Prizes have been a tremendous boost to robocar development, and DARPA’s contests changed the landscape entirely. Yet after the Urban Challenge, DARPA declared its work done and stopped, and in spite of various efforts to build a different prize at the X-Prize foundation, the right prize has never been clear. China has held annual contests for several years, but they get little coverage outside of China.

An Indian prize has merit because driving in India is very different from, and vastly more chaotic than, driving in most of the West. As such, western and east Asian companies are unlikely to spend a lot of effort trying to solve the special Indian problems first. It makes sense to spur Indian development, and of course there is no shortage of technical skill in India.

Many people imagine that India’s roads are so chaotic that a computer could never drive on them. There is great chaos, but it’s important to note that it’s slow chaos, not fast chaos. Being slow makes it much easier to be safe. Safety is the hard part of the problem. Figuring out just what is happening, playing subtle games of chicken — these are not trivial, but they can be solved, if the law allows it.

I say if the law allows it because Indians often pay little heed to the traffic law. A vehicle programmed to strictly obey the law will probably fail there without major changes. But the law might be rewritten to allow a robot to drive the way humans drive there, putting it on an equal footing. The main challenge is games of chicken. In the end, a robot will yield in a game of chicken, and humans will know that and exploit it. If this makes it impossible for the robot to advance, it might be programmed to “yield without injury” in a game of chicken. This would mean randomly claiming territory from time to time, and if somebody else refuses to yield, letting them hit you, gently. The robot would use its knowledge of physics to keep the impact at a speed low enough to cause minor fender damage but not harm people. If at fault, the maker of the robot would have to pay, but this price in property damage may be worthwhile if it makes the technology workable.
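
A toy sketch of that policy in Python (the claim probability and impact-speed bound are invented for illustration, not proposals for real values):

```python
import random

CLAIM_PROBABILITY = 0.2    # how often to stand firm (hypothetical)
MAX_IMPACT_KMH = 8         # assumed fender-damage-only bound

def chicken_decision(robot_has_right_of_way, predicted_impact_kmh):
    if not robot_has_right_of_way:
        return "yield"
    if predicted_impact_kmh > MAX_IMPACT_KMH:
        return "yield"                       # never risk hurting anybody
    if random.random() < CLAIM_PROBABILITY:
        return "claim"                       # let them hit us, gently
    return "yield"
```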

The reason it would make things workable is that drivers would come to understand that, at random, the robot will not yield (especially if it has the right-of-way), and that they are going to hit it. Yes, the robot’s maker might pay for the damage (if you had the right of way), but frankly that’s a big pain for most people to deal with. People might attempt insurance fraud and deliberately get hit, but they will be recorded in 3D, so they had better be sure they do it right, and don’t do it more than once.

Of course, the cars will have to yield to pedestrians, cyclists and, in India, cows. But so does everybody else. And if you just jump in front of a car to make it hit the brakes, it will be recording video of you, so smile.

New Vislab Car

I’ve written before about Vislab at the University of Parma. Vislab are champions of using computer vision to solve the driving problem, though their current vehicles also make use of LIDAR, and in fact they generally agree with the trade-offs I describe in my article contrasting LIDAR and cameras.

They have a new vehicle called DEEVA which features 20 cameras and 4 lasers. Like so many “not Google” projects, they have focused on embedding the sensors so they do not stand out from the vehicle. This continues to surprise me, because I have very high confidence that the first customers of robocars will be very keen that they not look like ordinary cars. They will want the car to stand out and tell everybody, “Hey, look, I have a robocar!” The shape of the Prius helped its sales as well as its drag coefficient.

This is not to say there aren’t people who, when asked, will say they don’t want the car to look too strange, or who say, looking at various sensor-adorned cars, that these are clearly just lab efforts and not something coming soon to roads near you. But the real answer is neither ugly sensors nor hidden sensors, but distinctive sensors with a design flair.

More interesting is what they can do with all those cameras, and what performance levels they can reach.

I will also note that the car uses QNX as its OS. QNX was created by a friend I went to school with in Waterloo, and it is now a unit of RIM/Blackberry (also created by classmates of mine). Go UW!

Solving the problem of money and politics

A recent Supreme Court case, which struck down limits on the total amount donors could provide to a large group of candidates, has fired up the debate on what to do about the grand problem, particularly in the USA, of the corrupting influence of money on politics. I have written about this before in my New Democracy topic, including proposals for anonymous donations, official political spam and many others.

As I strongly believe that it is very difficult to draft campaign finance rules that don’t violate the 1st amendment (the Supreme Court agrees), and also that it would be a horrible, horrible decision to weaken the 1st amendment to solve this problem, nasty as the problem is, I have been working on alternate solutions. (I also don’t believe any of the proposed weakenings of the 1st amendment would actually work rather than backfire.)

I am going to do a series here on those solutions over time, but first I want to lay out my perceptions of the various elements of the problem, for it is necessary to understand them to fix them. While political corruption is rife everywhere, the influence of big money seems most widespread in the USA.

Problem 1: Politicians feel they can’t get elected without spending a lot of money

Ask any member of congress what they did on their first day in office. The answer will be “made calls to donors.” They are always fundraising, because they don’t think they can get elected without it. They generally resent this, which is a ray of hope. If they thought they had a choice, that they could get elected without fundraising, they would reduce it a lot.

One thing that’s not easy to fix is the fact that if you fundraise, those who give you money will expect something for it, which is the thing we’re trying to eliminate. Even if the donors don’t ever explicitly state that expectation, it is always there, because every candidate will ask if what they are doing will piss off the donors, even more than they will ask what will piss off the voters. If you depend on the donations, you will do what it takes to keep them coming. Donations get a donor’s phone calls and letters answered, as well as requests for meetings.

I say that politicians feel they need money, and in fact they are often right about this. Money does produce votes. But they are not totally right, as there are alternatives.

As noted in the comments, the length of campaigns plays a role in how much money people need to raise. Due to fixed election dates, US election campaigns are extremely long compared to other countries. (In Canada, an election might be called at any time, and takes place in as little as 36 days. Fundraising is often done in advance, of course, but there is only a little time in which to spend the money.)

The most common proposed solution here is public campaign finance, but I am developing alternatives to that or systems which could work in combination with that.

Problem 2: The main reason they need money is to buy TV ads

About 60% of the budget of a big campaign is spent on ads, most of them on TV. Today, online advertising is just 10% of what is spent on TV.

There is a reason they love TV. It reaches most demographics, and your message can be very dramatic and convincing. Most of all, you reach people who were not looking for your message. Everybody has a web site, but a web site is only seen by people who actively sought it out. TV gets into the home of the ordinary voter and gives you a shot at influencing them. Other forms of advertising do that too, but few do it as well as TV.

This aspect of the problem is important because we’re in the middle of a big shift in the nature of advertising. The new advertising giant, Google, is a relatively new company with entirely different methods. We’re also in the middle of a big shift in media. Broadcast media, I feel, are on the decline, and new media forms, mostly online forms, are likely to take the lead. When this happens — and I say when, not if — it means that most of the donated political money will flow to the new media. This gives the new media a chance to either be the destination for all corruption money or to change the rules of the game, if they have the courage to do so.

In many cases, the world of advertising hasn’t simply moved from one company to a competitor. In the case of newspaper classified advertising, that industry was simply supplanted by free online ads like craigslist. Thanks to internet media, publishing is now cheap or almost free, and advertising is much more efficient and, in some areas, cheaper. The potential for disruption is ripe.

Problem 3: The other big effort is “Get out the Vote”

While most of the dollars go to advertising, a lot of them, and most of the volunteer time, go to what they call GOTV.

GOTV is so important because US voter turnouts are low. 50-60% in Presidential years, less in off-years. Because of that, by far the most productive use of campaign resources is often not trying to convince an undecided or opposing voter to switch to your side, but simply getting a voter who already supports you but doesn’t care a great deal to actually make the trek to the polls on voting day.

While you might imagine elections are fought and won with one candidate’s ads or speeches or debate performance swaying undecided voters one way or another, the reality is that turnouts are so low that GOTV is what decides a lot of races.

Aside from the basic principle that it’s crazy to decide our leaders based on who has the best system for pushing apathetic voters to the polls, it’s also true that GOTV uses a lot of money and resources, and as such is another of the big reasons for problem #1. A lot of the advertising is bought as much to make existing supporters more likely to turn out as to sway undecideds.

There are many areas for solution here, including increasing the voter turnout to a level where GOTV is not so productive. For example, in many countries, voting is mandatory — you are fined if you don’t vote. Chile gets 95% turnout this way, and Australia at 81% is the worst turnout of the compulsory nations.

It is also possible to increase turnout by making voting super-easy. Options such as online or cell-phone voting, while rife with election security and privacy problems, may be worth the risk if they reduce the power of GOTV — or simply make GOTV much cheaper.

Problem 4: Other campaign costs

While they are in 3rd place, the other campaign costs — travel, events, databases, staff, candidate’s time and many other things — still add up to a lot, and it’s money that must be fundraised. Today, all candidates build impressive computer systems from scratch every 4 years. After the election the system is discarded, because in 4 years, technology will have changed so much it is better to rewrite it from scratch.

Elections, however, are taking place every month around the world, which would justify the constant development of generalized campaign tools. If done open source, they could easily be free to campaigns, saving them lots of resources — and the need to raise money for them.

Problem 5: Buying influence pays off

Candidates raise money because they have to, but donors give it because they get good value in return. Yes, some get the “pure” good value they are supposed to get — the hope of electing a better candidate, who will run things closer to the way they want, in a general “for the country” sense rather than a personal-benefit sense. Even that is technically OK if it does not involve doing personal favours.

Sadly, they usually get much more than that. They get personal benefit, even the ability to write drafts of laws and stop laws they dislike. Congress members even have a semi-official “pork” system which spreads federal money around districts, to please voters and also donors.

Worst of all, buying influence can be profitable in a purely financial sense. While Sheldon Adelson might give money to support his views on foreign policy, corporations and many others give money because they feel it will improve their bottom line. As soon as this profit is possible, it’s almost impossible to stop money from flowing in, no matter what rules you make. (It might be noted that Libertarians believe one of the most compelling arguments for keeping the government out of the economy is that a government with no ability to hurt or benefit economic interests is one that can’t be bribed to hurt or benefit economic interests.)

This is also what makes corporations interested in donations. Corporations, at least in the pure sense, are interested only in the bottom line, and have a fiduciary duty to their stockholders to care only about shareholder value. Some closely held corporations will also take actions based on the direct political interests of their shareholders, and some organizations, like PACs, exist to do nothing but that.

Some solutions can come from changing the system so that it’s just not as productive to buy politicians. This requires new rules on how they vote, which are hard to get. An ideal system might demand that officials recuse themselves from any vote on any bill which would unduly benefit any of their constituents or voters. Vote trading would attempt to get around this, but it seems crazy that today we think it is their job to look out for their constituents (and unofficially their donors) at the expense of the rest of the country.

The most common solution for this problem is to limit donations, with caps for each donor, and also caps on amount raised or amount spent. Success is highly mixed in this area.

Paths to improvement

These nexus points, notably #1, #2 and #5, are the place to look for solutions. While problem #1 can be addressed with limits on donations, fundraising and spending (otherwise known as Campaign Finance Reform), this approach is very challenging. Because of problem #5 in particular, money will “find a way,” like water flowing downhill. You may put up a dam, but the water will find another channel if it can.

The only defence against issue #5 — that buying politicians is lucrative — is to combine the politician’s core dislike of fundraising with efforts to make it a bit less productive to buy politicians. While money will always try to buy them, if the price goes up, and the need for the money goes down, there can be improvement.

One of the most popular proposals to fix #1 is public funding of campaigns, combined with mandatory or optional limits on fundraising or spending. The latter limits are hard to do under the 1st amendment. This is not because “corporations are people” (a strange meme because that idea never appears in the Citizens United decision that many people imagine it came from) but because freedom of the press, especially for political speech, is not divisible in the 1st amendment. It has always been given to corporations (including ones like the New York Times corporation) and in fact for a century or more, until the rise of the blogging era, almost all press of significance have been corporations.

Attempts to limit what sort of political ads rich people and corporations may run are extremely difficult under the 1st amendment, as the court has said, and in spite of the terrible problem caused by the influence of money in politics, the 1st amendment deservedly remains untouched. Much of the argument around this case (and Citizens United) has been of the form, “Corruption is horribly bad, so the court should decide the 1st amendment doesn’t protect it.” Many things the 1st amendment protects are bad, but we’ve decided that letting the government decide which are good or bad is worse. Here, we can add the important consideration that giving congress extra control over how their own elections are run is another very bad idea.

In coming weeks, I will outline alternate solutions. But I also believe that neither I nor anybody else has thought up all the possible solutions. Politics, advertising and media are in a state of flux thanks to new technologies that I and my compatriots have built. Whether you think the future is bright or dark, I can assure you it’s different, and many options for solving this problem are out there, even ones we may not see as yet.

Cranes, and rooftops, should be decorated

Look at the skyline of any growing city, and what do you see but a sea of construction cranes? The theory is that each crane will go away, replaced by an architecturally interesting or pleasing building, but the cycle continues and there are always cranes.

My proposal: An ordinance requiring aesthetic elements on construction cranes. Make them look beautiful. Make them look like the birds they are named after, or anything else. Get artists to design them as grand public art installations. Obviously you can’t increase the weight a lot, or cut the visibility of the operator too much, but leave that as a challenge to the artists. And give us a city topped with giant works of art instead of eyesores.

While we’re building these skyscrapers, it seems we also don’t care about the aesthetics of our cities from above. The view from the towers, or from incoming aircraft bringing in fresh visitors, is of ugly rooftops, covered with ugly pipes, giant air conditioners and spaces everybody imagines that nobody sees. Yet we all see them.

Compare that with many European hillside towns, where everybody knew the roofs would be seen from above. At least in the old days, the roofs were crafted with the same care as the house. Today that’s been changing, and many roofs are covered with antennas, satellite dishes and, in the Middle East, black water heaters. We care a lot about how our houses look from the curb, and we imagine people don’t see the roof. But we do.

Getting rid of lines at airport security

Why are there lines at airport security? I mean, we know why the lines form: passenger load exceeds capacity, with the bottleneck usually being the X-ray machines. The real question is why this imbalance is allowed to happen.

The variable wait at airport security levies a high cost, because passengers must assume it will be long, just in case it is. That means every passenger gets there 15 or more minutes earlier than they would otherwise need to, even if there is no wait. Web sites listing wait times can help, but waits can change quickly.

For these passengers, especially business passengers, their time is valuable, and almost surely a lot more costly than that of TSA screeners. If there are extra screeners, it costs more money to keep them idle when loads are low, but the passengers would be more than willing to pay that cost to get assuredly short airport lines.

(There are some alternatives, as Orwellian programs like Clear and TSA-PRE allow you to bypass the line if you will be fingerprinted and get a background check. But this should not be the answer.)

In some cases, the limit is the size of the screening area. After 9/11, screening got more intensive, and they needed more room for machines and more table space for people to prepare their bags for all the rules about shoes, laptops, liquids and anything in their pockets.

Here are some solutions:

Appointments at security

The TSA has considered this but it is not widely in use. Rather than a time of departure, what you care about is when you need to get to the airport. You want an appointment at security, so if you show up at that time, you get screened immediately and are on your way to the gate in time. Airlines or passengers could pay for appointments, though in theory they should be free and all should get them, with the premium passengers just paying for appointments that are closer to departure time.

Double-decker X-ray machines

There may not be enough floor space, but X-ray machines could be made double decker, with two conveyor belts. No hand luggage is allowed to be more than a foot high, though you need a little more headroom to arrange your things. Taller people could be asked to use the upper belt, though by lowering the lower belt a little you can get enough room for all and easy access to the upper belt for all but children and very short folks.

A double width deck is also possible, if people are able to reach over, or use the other side to load. (See below.)

This might be overkill, as I doubt the existing X-ray machines run at even half their capacity. It is the screener’s deliberation that takes the time, and thus the next step is key…

Remote X-ray screeners

The X-ray screener’s job is to look at the X-ray image and flag suspect items. There is no need for them to be at the security station. There is no need for them to even be in the airport or the city, come to that. With redundant, reliable bandwidth, screeners could work in central screening stations, and be sent virtually to whatever security station has the highest load.

Each airport would have some local screeners, though they could work in a central facility so they can virtually move from station to station as needed, and even go there physically in the event of some major equipment failure. They would be enough to handle the airport’s base-load, but peak loads would call in screeners from other locations in the city, state or country.

Using truly remote screeners creates a risk that a network outage could greatly slow processing. This would mean delayed flights until text messages can go out to all passengers to expect longer lines and temporary workers can come in — or the outage can be repaired. To avoid this, you want reliable, redundant bandwidth, multiple screener centers and the ability to even use LTE cell phones as a backup. And, perhaps, an ability to quickly move screeners from airport to airport to handle downtimes at a particular airport. (Fortunately, there happens to be a handy technology for moving people from airport to airport!)

Screeners need not be working a specific line. Screeners could be allocated by item: one bag is looked at by screener 12 and the next bag by screener 24, each item or set of items simply going to the next available screener. This means an X-ray could actually run constantly at full speed if there are available staff. Each screener would, if they saw an issue, get to look at the other bags of the same passenger, and any bag flagged as suspect could immediately be presented to one or more other screeners for re-evaluation. In addition, as capacity is available, a random subset of bags could be looked at by 2 or more screeners.

It can also make sense to simply skip having a human look at some bags at random, to reduce wait and cost. It might even make sense to let some bags go unviewed in order to have other bags viewed by 2 screeners. Software triage of how many screeners should look at a bag (0, 1, 2, etc.) is also possible, though random selection might be better, because attackers might figure out how to fool the software. With the screeners being remote and the belts operating at a fixed speed, passengers won’t learn who was randomly selected for inspection.
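
Here is a toy sketch, in Python, of such a per-item dispatcher with random double-screening (the pool size and sampling rate are invented):

```python
import queue
import random

DOUBLE_CHECK_RATE = 0.1     # fraction of bags that get a second look

free_screeners = queue.Queue()
for screener_id in range(200):              # a national pool, not one checkpoint
    free_screeners.put(screener_id)

def dispatch(bag_image):
    # Blocks until a screener is free, which is exactly the queueing
    # behaviour a bag on a belt would experience.
    assignments = [free_screeners.get()]
    if random.random() < DOUBLE_CHECK_RATE:
        assignments.append(free_screeners.get())   # independent second look
    return assignments      # send bag_image to each; screeners re-queue when done
```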

Some screeners need to be there — the one who swabs your bag, or does an extra search on it, the one who does the overly-intimate patdown and the one with the gun who tries to stop you if you try to run. But the ones who just give advice can be remote, and the one who inspects your boarding pass can be remote for passengers able to hold those things up to the scanners. I suspect remote inspection of ID is also possible though I can see people resisting that. The scanner who looks at your nude photo can certainly be remote — currently they are out of view so you don’t feel as bothered.

This remote approach, instead of costing more, might actually save money, especially on the national level. That’s because the different time zones have different peak times, and remote workers can quickly move to follow the traffic loads.

It’s also easier with remote screeners for passengers to use both sides of the belt to load and get their stuff. Agents would have to go in among them to pull bags for special inspection, though.

Of course it could be even better

Don’t misunderstand — the whole system should be scrapped and replaced with something that is more flyer-friendly as well as more capable of catching actual hijacker techniques. But if it’s going to exist, it should be possible to remove the line for everybody, not just those who go through background checks and fingerprinting just to travel.

After 2001, a company developed bomb-proof luggage containers, and now there is a new bag approach, which would reduce the need to X-ray and delay checked luggage as much as they do. They were never widely deployed, because they cost more and weigh more.

I have 3 things I carry on planes:

  1. The things I need on the plane (like my computer, books and other items.)
  2. The vital and fragile things which I insist not leave my control, such as my camera gear and medicines.
  3. When I am not checking a bag, everything else for short trips.

I’m open to having all but #1 being put into a bomb-proof container by me and removed by me in a manner similar to gate check, so I can assure it’s always on the plane with me. Of course if I’m to do that then security (for just me and the items of type one) must be close to the plane — which it is for many international flights to the USA. That would speed up that security a lot. The use of remote screeners could make it easier to have security at the gate, too.

Personally, once the problem of taking over the cockpit was solved by new cockpit doors and access policies, I think there was an argument that you need not screen passengers at all. Sure, they could bring on guns, but they would no longer be able to hijack the aircraft, so it’s no different from a bus or a train. Kept to small items, they would not be able to cause as much damage as they could with a suitcase-sized bomb in the security line. The security line is, by definition, unsecured, and anybody can bring a large uninspected roll-aboard up to it, amidst a very large crowd — similar to what happened in Moscow in 2011.

Instead, you would have gates where a portal in the wall would have a bomb-proof luggage container into which you could put your personal bags and coats. Most people would then just get on, but a random sampling would be directed to extra security. Those wishing to bring larger things on-board (medical gear, super-fragiles, mega-laptops) would need to arrive earlier and go through security too. A forklift would quickly move the bombproof container into the hold and the plane would take off.

Making sea crashes easier to find

We’ve all learned a lot about what can and can’t be done from the tragic story of MH 370, as well as the Air France flight lost over the Atlantic. Of course, nobody expected the real transponders to be disconnected or fail, and so it may be silly to speculate about how to avoid this situation when there already is supposed to be a system that stops aircraft from getting lost. Even so, here are some things to consider:

In the next few years, Iridium plans to launch a new generation of satellites with 1 megabit of bandwidth, replacing the pitiful 2400 bps they have now. In addition, with luck, Google Loon may get launched and do even more. With that much bandwidth, you can augment the “black box” with a live stream of the most important data. In particular, you would want a box that transmits as much as it can in the event of catastrophic shock, loss of signal from the aircraft, or any unplanned descent, including of course getting close to the ground away from the target airport set at takeoff. Even the high cost of Iridium is no barrier for such rare use, and you actually have a lot of seconds in the case of planes lost at high altitude. Not enough to send much cockpit voice, but enough for all major alerts, odd readings and cockpit inputs.
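
The trigger logic could be as simple as this sketch (the thresholds are invented for illustration; a real system would work from certified avionics data):

```python
SHOCK_G_LIMIT = 3.0          # catastrophic shock
LOW_ALTITUDE_FT = 10_000     # "close to the ground"

def should_stream(shock_g, acars_ok, unplanned_descent,
                  altitude_ft, near_planned_airport):
    if shock_g > SHOCK_G_LIMIT:
        return True
    if not acars_ok:
        return True          # normal reporting has gone silent
    if unplanned_descent:
        return True
    if altitude_ft < LOW_ALTITUDE_FT and not near_planned_airport:
        return True          # low, away from the airport set at takeoff
    return False
```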

You could send more data to geosynchronous satellites, but I will assume that in a crisis it’s hard to keep an antenna aimed at them.

Another place you could stream live data is to other aircraft. Up high as they are, aircraft are often able to transmit to other aircraft within line of sight. Yes, the deep south Indian Ocean may not be one of those places, but in general the range would be 500 miles, and longer if you used a wavelength that travels beyond the horizon. Out there over the ocean, there’s nobody to interfere with, and closer to land, you can talk to the land; near land, the live stream would go to terrestrial receivers, even cell towers. Live data gives you information even if the black box is destroyed or lost. If you are sure that can never happen, the black box is enough.

It also could make sense to have the black box be on the outside of the aircraft, meant to break away on impact with ground or water, and of course, it should float. The Emergency Locator Transmitter should be set up this way as well. You want another box pinging that sinks with the plane, though. The floating ELT/black box could even eject itself from the plane on its own if it detected an imminent crash in any remote area, including the ocean. With a GPS, it will know its altitude and location. It could even have a parachute on it.

Speaking of pinging, one issue right now is the boxes only have power for 2 weeks. Obviously there is a limit on power, and you want a strong signal, but it is possible to slow down your ping rate as your battery gets low, to the point that you are perhaps only pinging a few times a day. The trick is you would ping at very specific and predictable times, so people would know precisely when to listen — even years later if they get a new idea about where to look. Computers can go to sleep on these sorts of batteries and last for years if they only have to use power once a day.

If all you want to know is where an aircraft is, we’ve seen from this that it doesn’t take much. A slightly more frequent, accurately timed ping of any kind, picked up by 2 satellites (LEO or geosync), is enough to get a pretty good idea of where a plane is. The cheapest and simplest solution might be a radio that can’t be disabled, which does this basic ping either all the time, or any time it can’t confirm that other systems like ACARS are doing their job.

Like many, I was surprised that the cell phones on board the aircraft that were left on — and every flight has many phones left on — didn’t help at all. Aircraft fly too high for most cell phones to actually associate with cell towers on the ground, so you would not see any connections made, but it seems likely that as the plane returned over inhabited areas on its way south, some of those phones probably transmitted something to those ground stations, something the ground stations ignored because they could not complete the handshake. If those stations kept lower level logs, there might be information there, but they probably don’t keep them. Because metal plane skins block signals, they might have been very weak. If the passengers were conscious, they probably would have been trying to hold their phones near the window, even though they could not connect at their altitude.

Another thing I have not understood is why we have only seen the results of one ping detected by the Inmarsat satellite over the Indian Ocean. From that ping, they were able to calculate the distance of the aircraft to the satellite, and thus draw that giant arc we’ve all seen on the maps. It’s not clear to me why there was only one ping. Another ping would have drawn another arc, and so on, giving us much more data to narrow down the course of the aircraft, as it’s a fair presumption it was flying straight. The reason they know the one ping came from the southern hemisphere is that the satellite itself is not perfectly stationary; it drifts north and south, giving a different doppler shift for north vs. south.

We may never learn the plane’s fate. I must admit, I’m probably an unusual passenger. I am an astronomer, and so would notice if a plane made such a big course correction, though I have to admit that in the southern hemisphere I would get confused. But then I would pull out my phone and ask its GPS where we were. I do this all the time, and I often notice when the aircraft I am in does something odd like divert or circle. But I guess there are not so many people of this stripe on a typical plane. (Though I have flown in and out of KL on Malaysia Airlines myself, but long ago.)

While hope for the people aboard is gone, I do hope we learn the cause of the tragedy, to see if anything we can think of that is not too expensive would prevent it from happening again. In fact, the cost need not even be that low: the cost of this search and the Air France search both added up to a lot.

Update: A new idea — as soon as the search zone is identified, a search aircraft should drop small floating devices with small radio transmitters that make them easy to find again at modest range. Drop them as densely as you can, which might mean every 10 miles or every 100 miles, but try to get coverage of the area.

Then, if you find debris from the plane, do a radio hunt for the nearest such beacon. When you find it, or others, you can note their serial number, know where they were dropped, and thus get an idea of where the debris might have come from. Make them fancier, broadcasting their GPS location or remembering it for a dump when re-collected, and you could build a model of motion on the surface of the sea, and thus have a clue of how to track debris back to the crash site. In this case, it would have been a long time before the search zone was located, but in other cases it will be known sooner.

The endgame for Bitcoin

Bitcoin is hot-hot-hot, but today I want to talk about how it ends. Earlier, I predicted a variety of possible fates for Bitcoin ranging from taking over the entire M1 money supply to complete collapse, but the most probable one, in my view, is that Bitcoin is eventually supplanted by one or more successor digital currencies which win in the marketplace. I think that successor will also itself be supplanted, and that this might continue for some time. I want to talk about not just why that might happen, but also how it may take place.

Nobody thinks Bitcoin is perfect, and no digital currency (DigiC) is likely to satisfy everybody. Some of the flaws are seen as flaws by most people, but many of its facets are seen as features by some, and flaws by others. The anonymity of addresses, the public nature of the transactions, the irrevocable transactions, the fixed supply, the mining system, the resistance to control by governments — there are parties that love these and hate these.

Bitcoin’s most remarkable achievement, so far, is the demonstration that a digital currency with no intrinsic value or backer/market maker can work and get a serious valuation. Bitcoin argues — and for now demonstrates — that you can have a money that people will accept only because they know they can get others to accept it with no reliance on a government’s credit or the useful physical properties of a metal. The price of a bitcoin today is pretty clearly the result of speculative bubble investment, but that it sustains a price at all is a revelation.

Bitcoins have their value because they are scarce. That scarcity is written into the code — in the regulated speed of mining, and in the fixed limit on coins. There will only ever be so many bitcoins, and this gives you confidence in their value, unlike, say, Zimbabwe’s 100 trillion dollar notes. This fixed limit is often criticised because it will be strongly deflationary over time, and more traditional economic theory holds that there are serious problems with a deflationary currency; people resist spending it because holding it is better than spending it, among other things.
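
For reference, the fixed limit follows from the reward schedule in Bitcoin’s own rules: the block subsidy started at 50 BTC and halves every 210,000 blocks, so the total converges just under 21 million. A small sketch (the real code counts integer satoshis; floats are close enough to show the cap):

```python
# Sum the block subsidies: 50 BTC per block, halving every 210,000
# blocks, until the reward rounds below one satoshi (1e-8 BTC).
reward_btc = 50.0
total_btc = 0.0
while reward_btc >= 1e-8:
    total_btc += 210_000 * reward_btc
    reward_btc /= 2
print(f"{total_btc:,.4f} BTC")   # just under 21,000,000
```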

Altcoins

While bitcoins have this scarcity, digital currencies as a group do not. You can always create another digital currency, and many people have. While Bitcoin is the largest, there are many “altcoins,” a few of which (such as Ripple, Litecoin and even the satirical currency Dogecoin) have serious total market capitalizations of tens or hundreds of millions of dollars. Some of these altcoins are simply Bitcoin or minor modifications of the Bitcoin protocol with a different blockchain or group of participants; others have more serious differences, such as alternate forms of mining. Ripple is considerably different. New altcoins will emerge from time to time, presumably forever.

What makes one digital coin better than another? Obviously a crucial element is who will accept the coin in exchange for goods, services or other types of currency. The leading coin (Bitcoin) is accepted at more stores, which gives it a competitive advantage.

If one is using digital currency simply as a medium (changing dollars to bitcoins to immediately buy something with bitcoins at a store), then it doesn’t matter a great deal which DigiC you use, or what its price is, as long as it is not extremely volatile. (You may be interested in other attributes, like speed of transaction and revocation, along with security, ease of use and other factors.) If you wish to hold the DigiC, you care about appreciation, inflation and deflation, as well as the risk of collapse. These factors are affected as well by the “cost” of the DigiC.

The cost of a digital currency

I will advance that every currency has a cost which affects its value. For fiat currency like dollars, all new dollars go to the government, and every newly printed dollar devalues all the other dollars, and overprinting creates clear inflation.

The world goes gaga for cool concept prototypes

One sign of how interest is building is the large reaction to some recent concept prototypes for robocars, two of which were shown in physical form at the Geneva auto show.

The most attention went to the Swiss auto research company Rinspeed’s XchangE concept, which they based on a Tesla. They included a steering wheel which could move from side to side (and, more to the point, go to the middle, out of the way of the two front seats), along with seats that could recline into sleeping positions or for watching a big-screen TV, and which could reverse for face-to-face seating.

Also attracting attention was the Link and Go, an electric shuttle. In this article it is shown on the floor in the face-to-face configuration.

This followed buzz late last year over the announcement of Zoox and their Boz concept, which features a car that has no steering wheel and is symmetrical front to back (so of course seating is face to face). The Zoox model takes this down to the low level, with 4 independent wheel motors. I’ve met a few times with Zoox’s leader, Tim Kentley-Klay of Melbourne, and the graphics skills he and his team bring, along with some dynamic vision, also generated great buzz.

All this buzz came even though none of these companies had anything to say about the self-driving technology itself, which remains 99% of the problem. And there have been a number of designers who have put out graphic concepts like these for many years, and many writers (your unhumble blogger included) who have written about them for years.

The Zoox design is fairly radical — a vehicle with no windshield and no steering wheel — it can never be manually driven and is a full robocar. Because it depends on future technologies like cheap carbon fibre and cost-effective 3-D printing at medium volumes, it’s a more expensive vehicle to build, but there may be a certain logic to that. Tesla has shown us that there are many people who will happily pay a lot more to get a car that is unlike any other, and clearly the best. They will pay more than can be rationally justified.

Speaking of Tesla, a lot of the excitement around the Rinspeed concept came from the fact that it was based on a Tesla. That appears to have been a wise choice for Rinspeed, as people got more excited about it than any other concept I’ve seen. The image of people reclining, watching a movie, brought home an idea that has been described many times in print but never shown physically to the world in quite the same way.

It’s easy for me (and perhaps for many readers of this blog) to feel that these concepts are so obvious that everybody just gets them, but it’s clearly not true. This revolution is going to take many people by surprise.

Commentary on California's robocar regulations workshop

Tuesday, the California DMV held a workshop on how they will write regulations for the operation of robocars in California. They already have done meetings on testing, but the real meat of things will be in the operation. It was in Sacramento, so I decided to just watch the video feed. (Sadly, remote participants got almost no opportunity to provide feedback to the workshop, so it looks like it’s 5 hours of driving if you want to really be heard, at least in this context.)

The event was led by Brian Soublet, assistant chief counsel, and next to him was Bernard Soriano, the deputy director. I think Mr. Soublet did a very good job of understanding many of the issues and leading the discussion. I am also impressed at the efforts Mr. Soriano has made to engage the online community to participate. Because Sacramento is a trek for most interested parties, it means the room will be dominated by those paid to go, and online engagement is a good way to broaden the input received.

As I wrote in my article on advice to governments, I believe the best course is to have a light hand today while the technology is still in flux. While it isn’t easy to write regulations, it’s harder to undo them. There are many problems to be solved, but we really should see first whether the engineers who are working day-in and day-out to solve them can do that job before asking policymakers to force a solution. It’s not the role of the government to forbid theoretical risks in advance, but rather to correct demonstrated harms and demonstrated unacceptable risks once it’s clear they can’t be solved on the ground.

With that in mind, here’s some commentary on matters that came up during the session.

How do the police pull over a car?

Well, the law already requires that vehicles pull over when told to by police, as well as pull to the right when any emergency vehicle is passing. With no further action, all car developers will work out ways to notice this — microphones which know the sound of the sirens, cameras which can see the flashing lights.
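
As a flavour of how simple the acoustic side could be, here is a minimal sketch of siren detection. It assumes a siren is a loud tone sweeping within a known band; a real system would need far more robust classification against horns, music and alarms:

    import numpy as np

    def dominant_frequency(frame, sample_rate):
        """Return the strongest frequency component in one audio frame."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
        return freqs[np.argmax(spectrum)]

    def looks_like_siren(frames, sample_rate, low=500.0, high=1800.0):
        """Heuristic: dominant pitch stays in the siren band and keeps sweeping."""
        pitches = [dominant_frequency(f, sample_rate) for f in frames]
        in_band = all(low <= p <= high for p in pitches)
        sweeping = np.std(pitches) > 100.0   # the wail rises and falls
        return in_band and sweeping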

Developers might ask for a way to make this problem easier. Perhaps a special sound the police car could make (by holding a smartphone up to its PA microphone, for example). Perhaps the police could just read the licence plate to dispatch, and dispatch could use an interface provided by the car vendor. Perhaps a radio protocol that can be loaded into an officer’s phone. Or something else — this is not yet the time to solve it.

It should be noted that this should be an extremely unlikely event. The officer is not going to pull over the car to have a chat. Rather, they would only want the car to stop because it is driving in an unsafe manner and putting people at risk. This is not impossible, but teams will work so hard on testing their cars that the probability that a police officer would be the first to discover a bug which makes the car drive illegally is very, very low. In fact, not to diminish the police or represent the developers as perfect, but the odds are much greater that the officer is in error. Still, the ability should be there.

Birth of the World Wide Web

Yesterday, I was interviewed for the public radio program Marketplace and, as is normal, 30 minutes of conversation came down to 30 seconds. So I wanted to add some commentary to that story.

As you are no doubt hearing today, it was 25 years ago that Tim Berners-Lee first developed his draft proposal for an internet-based hypertext system to tie together all the internet’s protocols: E-mail, USENET, FTP, Gopher, Telnet and a potential new protocol (HTTP) to serve up those hypertext pages. He didn’t call it the web then, and the first web tools were not written for a while, and wouldn’t make it to the outside world until 1991, but this was the germ of a system that changed the internet and the world. The first wave of public attention came when UIUC’s supercomputing center released a graphical browser called Mosaic in 1993 and CERN declared the web protocols non-proprietary. Mosaic’s main author went on to co-found Netscape, whose browser evolved through Mozilla into the Firefox browser you may be reading this with.

As the radio piece explains, many people are confused as to what the difference is between the internet and the web. (They also are unsure what a browser is, or how the web is distinct even from Google sometimes.) To most, the internet was an overnight success — an overnight success that had been developing for over 20 years.

I don’t want to diminish the importance of the web, or TimBL’s contribution to it. He writes a guest editorial on the Google blog today where he lays out a similar message. The web integrated many concepts from deeper internet history.

Prior to the web, several systems emerged to let you use the internet’s resources. Mailing lists were the first seat of community on the internet, starting with Dave Farber’s MSGGROUP in the 70s. In the early 80s, that seat of community moved to USENET. USENET was serial, rather than browsed, but it taught lessons about having a giant network with nobody owning it or being in control.

The large collection of FTP servers was indexed by Archie, the first internet search engine, from McGill University. Greater excitement came from the Gopher protocol from the U. of Minnesota, which allowed you to browse a tree of menus, moving from site to site, being taken to pages, files, local search resources and more all over the internet.

The web was not based on menus, though. It took the concept of hypertext: the ability to put links into documents that point at other documents. Hypertext concepts go back all the way to Vannevar Bush’s famous “Memex,” but the man most known for popularizing it was Ted Nelson, who wrote the popular book Computer Lib. Ted tried hard for decades to commercialize hypertext and saw his Project Xanadu system as the vision for the future computerized world. In Xanadu, links pointed to specific points in other documents, were bi-directional and also allowed for copyright ownership and billing — I could link in text from your document and you got paid when people paid to read my document. Hypertext was the base of Apple’s “HyperCard” and a few other non-networked systems.

So did TimBL just combine hypertext with internet protocols to make a revolution? One important difference with the web was that the links were one-way and the system was non-proprietary. Anybody could join the system, anybody could link to anybody, and no permission or money was needed. Where others had built more closed systems, the web embraced the internet’s philosophy of open protocols, and so it was a tool that everybody could jump aboard.

Another key difference, which allowed WWW to quickly supplant gopher, was counter-intuitive. Gopher used menus and thus was structured. Structure enables several useful things, but it’s hard to maintain and limits other things you can do. Hypertext is unstructured and produces a giant morass, what we math nerds would call a big directed graph. This “writer friendly” approach was easy to add to, in spite of the lack of plan and the many broken links.

The Web was a superset of Gopher, but by being less structured it was more powerful. This lesson would be taught several times in the future, as Yahoo’s structured menus, which made billions for its founders, were supplanted by unstructured text search from Lycos, Alta Vista and eventually Google. Wikipedia’s anybody-can-contribute approach devoured the old world of encyclopedias.

For the real explosion into the public consciousness, though, the role of Mosaic is quite important. TimBL did envision the inclusion of graphics — I remember him excitedly showing me, in 1992, an early version of Mosaic he was playing with — but at the time most of us used USENET, gopher and the very early Web through text browsers, and more to the point, we liked it that way. The inclusion of graphics into web pages was mostly superfluous and slowed things down, making it harder, not easier, to get to the meat of what we wanted. The broader public didn’t see it that way, and found Mosaic to be their gateway into the internet. In addition, many companies and content producers would not be satisfied with publishing online until they could make it look the way they wanted it to look. Graphical browsers allowed for that, but at the time, people were much more interested in the new PDF format, which let you publish a document to look just like paper, than in the HTML format, where you didn’t control the margins, fonts or stylistic elements.

(The HTML specification’s history is one of a war between those who believe you should specify the meaning of the structural elements in your documents and let the browser figure out the best way to present those, and those who want tight control to produce a specific vision. CSS has settled some of that war, but it continues to this day.)

Nobody owned the web, and while Tim is not poor, it was others like Marc Andreessen, Jerry Yang & Dave Filo who would become the early billionaires from it. The web was the internet’s inflection point, when so many powerful trends came together and reached a form that allowed the world to embrace it. (In addition, it was necessary that the Moore’s Law curves governing the price of computing and networking were also reaching the level needed to give these technologies to the public.)

25 years ago, I was busy working on the code for ClariNet, which would become the first business founded on the internet when I announced it in June — I will post an update on that 25th anniversary later this year.

A Critique of the NHTSA "Levels" for robocars

Last year, the NHTSA released a document defining “levels” from 0 to 4 for self-driving technology. People are eager for a taxonomy that lets them talk about the technology, so it’s no surprise that use of the levels has caught on.

The problem is that they are misleading and probably won’t match the actual progress of technology. That would be tolerable if it weren’t for the fact that NHTSA itself made recommendations to states about how the levels should be treated in law, and states and others are already using the vocabulary in discussing regulations. Most disturbingly, NHTSA recommendations suggested states hold off on “level 4” in writing regulations for robocars — effectively banning them until the long process of un-banning them can be done. There is a great danger the levels will turn into an official roadmap.

Because of this, it’s worth understanding how the levels are already incorrect in the light of current and soon-to-be-released technology, and how they’re likely to be a bad roadmap for the future.

Read A Critique of the NHTSA and SAE “Levels” for robocars.

Would we ever ban human driving?

I often see the suggestion that as Robocars get better, eventually humans will be forbidden from driving, or strongly discouraged through taxes or high insurance charges. Many people think that might happen fairly soon.

It’s easy to see why, as human drivers kill 1.2 million people around the world every year, and injure many millions more. If we get a technology that does much better, would we not want to forbid the crazy risk of driving? It is one of the most dangerous things we commonly do, perhaps second only to smoking.

Even if this is going to happen, it won’t happen soon. While my own personal prediction is that robocars will gain market share very quickly — more like the iPhone than like traditional automotive technologies — there will still be lots of old-style cars around for many decades to come, and lots of old-style people. History shows we’re very reluctant to forbid old technologies; instead we grandfather them in. You can still drive the cars of long ago, if you have one, even though they are horribly unsafe death traps by today’s standards, and gross polluters as well. Society is comfortable that as market forces cause the numbers of old vehicles to dwindle, this is sufficient to attain the social goals.

There are occasional exceptions, though usually only if they are easy to do. You do have to install seatbelts in a classic car that doesn’t have them, as well as turn signals and the other trappings of being street legal.

While I often talk about the horrible death toll, and how bad human drivers are, the reality is that this is an aggregation over a lot of people. A very large number of people will never have an accident in their lives, let alone one with major injuries or death. That’s a good thing! The average person probably drives around 600,000 miles in a lifetime in the USA. There is an accident for every 250,000 miles, but these are not evenly distributed. Some people have 4 or 5 accidents, and many have none.
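
To make the aggregation point concrete, here is a rough back-of-envelope calculation from those two figures, under the deliberately false assumption that accidents strike all drivers at the same uniform rate (a Poisson process):

    import math

    LIFETIME_MILES = 600_000       # rough US lifetime driving, from above
    MILES_PER_ACCIDENT = 250_000   # rough US average, from above

    mean = LIFETIME_MILES / MILES_PER_ACCIDENT   # 2.4 accidents per lifetime

    for k in range(5):
        p = math.exp(-mean) * mean**k / math.factorial(k)
        print(f"{k} lifetime accidents: {p:.1%}")

Even under this evened-out assumption, about 9% of drivers would go a lifetime with no accident; since risk is in fact concentrated in a smaller group, the real accident-free share is far larger.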

As such, forbidding driving would be a presumption of guilt where most are innocent, and a tough call from a political standpoint.

That doesn’t mean other factors won’t strongly discourage driving. You’ll still need a licence after all, and that licence might get harder and harder to get. The USA is one of the most lax places in the world. Many other countries have much more stringent driving tests. The ready availability of robotaxis will mean that many people just never go through the hassle of getting a licence, seeing no great need. Old people, who currently fight efforts to take away their licences, will not have the need to fight so hard.

Insurance goes down, not up

You will also need insurance. Today we pay about 6 cents/mile on average for insurance. Those riding in safe robocars might find that cost drops to a penny/mile, which would be a huge win. But the cost for those who insist on driving is not going to go up because of robocars, unless you believe the highly unlikely proposition that the dwindling number of human drivers will cause more or deadlier accidents per person in the future. People tolerate that 6 cent/mile cost today, and they’ll tolerate it in the future if they want to. The cost will probably even drop a bit, because human-driven cars will have robocar technologies and better passive safety (crumple zones) that make them much safer, even with a human at the wheel. Indeed, we may see many cars which are human-driven but “very hard” to crash by mistake.

The relative cost of insurance will be higher, which may dissuade some folks. If you are told, “This trip will cost $6 if you ride, and $8 if you insist on driving,” you might decide not to drive, because 33% more cost seems ridiculous — even though today you are paying more for that cost on an absolute scale.

Highly congested cities will take steps against car ownership, and possibly driving. In Singapore, for example, you can’t have a car unless you buy a very expensive certificate at auction — these certificates cost as much as $100,000 for ten years. You have to really want a private car in Singapore, but still many people do.

Governments won’t have a great incentive to forbid driving, but they might see it as a way to reduce congestion. Once robocars are packing themselves more tightly on the roads, they will want to give human-driven cars a wider berth, because those cars are less predictable. As such, the human driver takes up more road space. Humans also do more irrational things (like slow down to look at an accident). One can imagine charges placed on human drivers for the extra road congestion they cause, and that might take people out of the driver’s seat.

The all-robocar lane tricks

There are certain functions which only work, or only work well, if all cars are robocars. They will be attractive, to be sure, but will they overcome the pressure from the human-driving lobby?

  • It’s possible to build dynamic intersections without traffic lights or bridges if all cars are trusted robocars.
  • It’s possible to build low-use roads that are just two strips of concrete (like rails) if only robocars go on them, which is much cheaper.
  • It’s possible to safely redirect individual lanes on roads, without need for barriers, if all cars in the boundary lanes are robocars. Humans can still drive in the non-boundary lanes pretty safely.
  • We can probably cut congestion the most in the all-robocar world, but we can still cut it plenty as penetration increases over time.

These are nice, but only a few really good things depend on the all-robocar world. Which is a good thing, because we would never get the cars if the benefits required universal adoption.

But don’t have an accident…

All of this is for ordinary drivers who are free of accidents and tickets. This might all change if you have an accident or get lots of tickets. Just as you can lose your licence to a DUI, I can foresee a system where people lose their licence on their first accident, or certainly on their second. Or their first DUI or certain major tickets. In that world, people will actually drive with much more caution, having their licence at stake for any serious mistake. A teen who causes an accident may find they have to wait several years to re-try getting a licence. It’s also possible that governments would raise the driving age to 18 or 21 to get people past the reckless part of their lives, but this would not be a burden in a robocar world, with teens who are not even really aware of what they are missing.

I’ve driven for over 35 years and had no accidents. I’ve gotten 2 minor speeding tickets, back in the 80s — though I actually speed quite commonly, like everybody else. It seems unlikely there would be cause to forbid me to drive, even in a mostly robocar world, should I wish it. I don’t actually wish it, at least not on city streets. I will still enjoy driving on certain roads I consider “fun to drive,” in the mountains or by the coast. It’s also fun to go to a track and go beyond even today’s street rules. I don’t see that going away.

More about stolen bitcoins

Yesterday, I wrote about stolen bitcoins and the issues around a database of stolen coins. The issue is very complex, so today I will add some follow-up issues.

When stolen property changes hands (innocently), the law says that nobody in the chain had authority to transfer title to that property. Let’s assume that the law accepts bitcoins as property, and bitcoin transactions as denoting transfer of title (as well as possession/control) to it. So with a stolen bitcoin, the final recipient is required under the law to return possession of the coin to its rightful owner, the victim of the theft. However, that recipient is also now entitled to demand back whatever they paid for the bitcoin, and so on down the line, all the way to the thief. With anonymous transactions, that’s a tall order, though most real-world transactions are not that anonymous.

This is complicated by the fact that almost all Bitcoin transactions mix coins together. A Bitcoin “wallet” doesn’t hold bitcoins; rather, it holds the keys to addresses which received the outputs of earlier transactions, and those outputs are amounts of bitcoin. When you want to do a new transaction, you do two things:

  1. You gather together enough addresses in your wallet which hold outputs of prior transactions, which together add up to as much as you plan to spend, and almost always a bit more.
  2. You write a transaction that lists all those old outputs as “inputs” and then has a series of outputs, which are the addresses of the recipients of the transaction.

There are typically 3 (or more) outputs on a transaction:

  1. The person you’re paying. The output is set to be the amount you’re paying.
  2. Yourself. The output is the “change” from the transaction since the inputs probably didn’t add up exactly to the amount you’re paying.
  3. Any amount left over — normally small and sometimes zero — which does not have a specific output, but is given as a transaction fee to the miner who put your transaction into the Bitcoin ledger (blockchain.)

They can be more complex, but the vast majority work like this. While normally you pay the “change” back to yourself, the address for the change can be any new random address, and nothing in the ledger connects it to you.
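
Here is a toy sketch of that process. The names and data layout are illustrative only; real transactions are scripts over previous outputs, not Python dictionaries:

    def build_transaction(utxos, pay_to, amount, fee, change_address):
        """Gather prior outputs until they cover amount + fee, then emit outputs."""
        inputs, gathered = [], 0
        for utxo in utxos:                    # step 1: gather prior outputs
            inputs.append(utxo)
            gathered += utxo["value"]
            if gathered >= amount + fee:
                break
        if gathered < amount + fee:
            raise ValueError("insufficient funds")
        outputs = [
            {"address": pay_to, "value": amount},          # the person you're paying
            {"address": change_address,
             "value": gathered - amount - fee},            # the "change", back to you
        ]
        # whatever the outputs don't claim (here, `fee`) goes to the miner
        return {"inputs": inputs, "outputs": outputs}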

So as you can see, a transaction might combine a ton of inputs, some of which are clean, untainted coins, some of which are tainted, and some of which are mixed. After coins have been through a lot of transactions, the mix can be very complex: not so complex that computers can’t deal with it and calculate a precise fraction of the total coin that was tainted, but much too complex for humans to want to worry about.
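
One plausible accounting rule, sketched below, is to mix taint pro rata: every output inherits the value-weighted average taint of the inputs. (The “sinking taint” rule discussed later in this post is a deliberate alternative to this.)

    def taint_fraction(inputs):
        """inputs: [(value, taint_fraction)] -> taint applied to every output."""
        total = sum(value for value, _ in inputs)
        tainted = sum(value * taint for value, taint in inputs)
        return tainted / total

    # Mixing 30 clean coins with 10 fully stolen coins taints every
    # output of the transaction at 25%:
    print(taint_fraction([(30, 0.0), (10, 1.0)]))   # 0.25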

A thief will want to mix up their coins as quickly as possible, and there are a variety of ways to do that.

Right now, the people who bought coins at Mt.Gox (or those who sent them there to buy other currency) are the main victims of this heist. They thought they had a balance there, and it’s gone. Many of them bought these coins at lower prices, and so their loss is not nearly as high as the total suggests, but they are deservedly upset.

Unfortunately, if the law does right by them and recovers their stolen property, that recovery will likely come at the expense of the whole bitcoin-owning and using community, because everybody in the chain is liable. Of particular concern are the merchants who are taking bitcoin on their web sites. Let’s speculate on the typical path of a stolen coin that’s been around for a while:

  • It left Mt.Gox for cash, sold by the thief, and a speculator simply held onto the coins. That’s the “easy” one: the person who now has the stolen coins has to find the thief and get their money back. Not too likely, but legally clear.
  • It left Mt.Gox and was used in a series of transactions, ending up with one where somebody bought an item from a web store using bitcoin.
  • With almost all stores, the merchant system takes all bitcoin received and sells it for dollars that day. Somebody else — usually a bitcoin speculator — paid dollars for that bitcoin that day, and the chain continues.

There is the potential here for a lot of hassle. The store learns it sold partially tainted bitcoins. The speculator wants, and is entitled to get, a portion of her money back, and the store is an easy target to go after. The store then has to go after its customer for the missing money. The store also probably knows who its customer is. The customer may have less knowledge of where her bitcoins came from.

This is a huge hassle for the store, and might very well lead stores to reverse their decisions to accept bitcoin. If 6% of all bitcoins are stolen, as alleged in the Mt.Gox heist, most transactions are tainted. 6% is an amount worth recovering for many, and it’s probably all the profit at a typical web store. Worse, the number of stolen coins may be closer to 15% of all the circulating bitcoins, certainly something worth recovering on many transactions.

The “sinking taint” approach

Previously, I suggested a rule: if a transaction merges various inputs which are variously reported as stolen (tainted) and not, then the total tainted percentage is calculated, the first outputs receive all the tainting, and the later outputs (including the transaction fee, last of all) are marked clean. One of the outputs would remain partially tainted unless the transaction was designed to avoid this. There is no inherent rule that the “change” comes last; it is just a custom, and it would probably be reversed, so that as much of the tainted fraction as possible remains in the change, and the paid amount is as clean as possible. Recipients would want to insist on that.

This allows the creation of a special transaction that people could do with themselves on discovering they have coin that is reported stolen. The transaction would split the coin precisely into one or more purely tainted outputs, and one or more fully clean outputs. Recipients would likely refuse bitcoin with any taint on it at all, and so holders of bitcoin would be forced to do these dividing transactions. (They might have to do them again if new theft reports come on coin that they own.) People would end up doing various combinations of these transactions to protect their privacy and not publicly correlate all their coin.
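
Here is a sketch of the sinking-taint arithmetic as I proposed it above (the data layout is, of course, invented):

    def sink_taint(inputs, outputs):
        """inputs: [(value, taint_fraction)]; outputs: [value, ...].
        Returns each output's taint fraction, tainting the earliest outputs first."""
        tainted_left = sum(value * taint for value, taint in inputs)
        fractions = []
        for value in outputs:
            portion = min(value, tainted_left)
            fractions.append(portion / value)
            tainted_left -= portion
        return fractions

    # Mixing 60 clean and 40 stolen coins into outputs of [50, 49, 1 (fee)]:
    print(sink_taint([(60, 0.0), (40, 1.0)], [50, 49, 1]))   # [0.8, 0.0, 0.0]

A splitting transaction would simply size its first output at exactly the tainted amount (40 here), so that it is fully tainted and everything after it is fully clean.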

Tainted transaction fees?

The above system makes the transaction fee clean if any of the coin in the transaction is clean. If this were not done, miners might not accept such transactions. On the other hand, there is an argument that it would be good if miners refused even partially tainted transactions, other than the ones described above used to divide the stolen coins from the clean. There would need to be a rule that allows a transaction to be declared a splitting transaction which pays its fees from the clean part. In this case, as soon as coins had any taint at all, they would become unspendable in the legit markets and it would be necessary to split them. They would still be spendable with people who did not accept this system, or in some underground markets, but they would probably convert to other currencies at a discount.

This works better if there is agreement on the database of tainted coins, but that’s unlikely. As such, miners would decide what databases to use. Anything in the database used by a significant portion of the miners would make those coins difficult to spend and thus prime for splitting. However, if they are clean in the view of a significant fraction of the miners, they will enter the blockchain eventually.

This is a lot of complexity, much more than anybody in the Bitcoin community wants. The issue is that if the law gets involved, there is a world of pain in store for the system and its merchants if a large fraction of all circulating coins are reported as stolen in a police report, even a Japanese police report.

What if somebody steals a bitcoin?

Bitcoin has seen a lot of chaos in the last few months, including being banned in several countries, the fall of the Silk Road, and biggest of all, the collapse of Mt. Gox, which was, for much of Bitcoin’s early history, the largest (and only major) exchange between regular currencies and bitcoins. Most early “investors” in bitcoin bought there, and if they didn’t move their coins out, they now greatly regret it.

I’ve been quite impressed by the ability of the bitcoin system to withstand these problems. Each has caused major “sell” days, but it has bounced back each time. This is impressive because nothing underlies bitcoins other than the expectation that you will be able to use them in the future and that others will take them.

It is claimed (though doubted by some) that most of Mt.Gox’s bitcoins — 750,000 of them or over $400M — were stolen in some way, either through thieves exploiting a bug or some other means. If true, this is one of the largest heists in history. There are several other stories of theft out there as well. Because bitcoin transactions can’t be reversed, and there is no central organization to complain to, theft is a real issue for bitcoin. If you leave your bitcoin keys on your networked devices, and people get in, they can transfer all your coins away, and there is no recourse.

Or is there?

If you sell something and are paid in stolen money, there is bad news for you, the recipient of the money. If this is discovered, the original owner gets the money back. You are out of luck for having received stolen property. You might even be suspected of being involved, but even if you are entirely innocent, you still lose.

All bitcoin transactions are public, but the identities of the parties are obscured. If your bitcoins are stolen, you can stand up and declare they were stolen. More than that, unless the thief wiped all your backups, you can 99.9% prove that you were, at least in the past, the owner of the allegedly stolen coins. Should society accept bitcoins as money or property, you would be able to file a police report on the theft, and identify the exact coin fragments stolen, and prove they were yours, once. We would even know “where” they are today, or see every time they are spent and know who they went to, or rather, know the random number address that owns them now in the bitcoin system. You still own them, under the law, but in the system they are at some other address.

That random address is not inherently linked to this un-owner, but as the coins are spent and re-spent, they will probably find their way to a non-anonymous party, like a retailer, from whom you could claim them back. Retailers, exchanges and other legitimate parties would not want this, they don’t want to take stolen coins and lose their money. (Clever recipients generate a new address for every transaction, but others use publicly known addresses.)

Tainted coin database?

It’s possible, not even that difficult, to create a database of “tainted” coins. If such a database existed, people accepting coins could check whether the source transaction’s coins are in that database. If there, they might reject the coins or even report the sender. I say “reject” because you normally don’t know what coins you are getting until the transaction is published, and if the other party publishes it, the coins are now yours. You can refuse to do your end of the transaction (i.e. not hand over the purchased goods) or even publish a transaction “refunding” the coins back to the sender. It’s also possible to imagine that the miners could refuse to enter a transaction involving tainted coins into the blockchain. (For one thing, if the coins are stolen, they won’t get their transaction fees.) However, as long as some miner comes along willing to enter it, it will be recorded, though other miners could refuse to accept that block as legit.
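
The recipient-side check against such a database could be very simple. A sketch, with a hypothetical set of stolen “outpoints” (transaction id plus output index):

    STOLEN_OUTPOINTS = {("e3b0...", 0), ("9f86...", 1)}   # reported-stolen outputs

    def screen_payment(tx):
        """Refuse payments whose inputs spend reported-stolen outputs."""
        for inp in tx["inputs"]:
            if (inp["txid"], inp["vout"]) in STOLEN_OUTPOINTS:
                return "reject"   # withhold the goods, or publish a refund
        return "accept"

The hard parts are not the lookup but everything around it: who maintains the database, how theft reports are verified, and what fraction of miners and merchants honour it.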

What governments should do to help and regulate robocars

In my recent travels, I have often been asked what various government entities can and should do related to the regulation of robocars. Some of them want to consider how to protect public safety. Most of them, however, want to know what they can do to prepare their region for the arrival of these cars, and ideally to become one of the leading centres in the development of the vehicles. The car industry is about to be disrupted, and most of the old players may not make it through to the new world. The ground transportation industry is so huge (I estimate around $7 trillion globally) that many regions depend on it as a large component of their economy. For some places it’s vital.

But there are many more questions than that, so I’ve prepared an essay covering a wide variety of ways in which policymakers and robocars will interact.

Read: Governments, The Law and Robocars

US push to mandate V2V radios — is it a good choice?

It was revealed earlier this month that NHTSA wishes to mandate vehicle-to-vehicle radios in all cars. I have written extensively on the issues around this, and regular readers will know I am a skeptic of this plan. This is not to say that V2V would not be useful for robocars and regular cars. Rather, I believe that its benefits are marginal when it comes to the real problems, and for the amount of money that must be spent, there are better ways to spend it. In addition, I think that similar technology can and will evolve organically, without a government mandate, or with a very minimal one. Indeed, I think that technology produced without a mandate or pre-set standards will actually be superior and cheaper, and will be deployed far more quickly than the proposed approach.

The new radio protocol, known as DSRC, is a point-to-point wifi-style radio protocol for cars and roadside equipment. There are many applications. Some are “V2V,” which means cars report what they are doing to other cars. This includes reporting one’s position tracklog and speed, as well as events like hitting the brakes or flashing a turn signal. Cars can use this to track where other cars are, and warn of potential collisions, even with cars they can’t see directly. Infrastructure can use it to measure traffic.

The second class of applications is “V2I,” which means the car talks to the road. This can be used to learn traffic light states and timings, get warnings of construction zones and hazards, implement tolling and congestion charging, and measure traffic.

This will be accomplished by installing a V2V module in every new car which includes the radio, a connection to car information and GPS data. This needs to be tamper-proof, sealed equipment and must have digital certificates to prove to other cars it is authentic and generated only by authorized equipment.
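
To make that concrete, here is a minimal sketch of a signed position report using ECDSA via the Python cryptography library. The message fields are my own invention; the real DSRC basic safety message is a binary SAE standard, and the certificate infrastructure that vouches for the public key is far more elaborate than a bare key pair:

    import json
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # The sealed module's key pair (certificate chain omitted).
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    # An illustrative safety message: position, speed, brake status.
    message = json.dumps({
        "lat": 37.3861, "lon": -122.0839,
        "speed_mps": 13.4, "heading_deg": 272.0,
        "brake_applied": True, "timestamp_ms": 1393632000000,
    }).encode()

    signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

    # A receiving car verifies the report (raises InvalidSignature on a forgery).
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))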

Robocars will of course use it. Any extra data is good, and the cost of integrating this into a robocar is comparatively small. The questions revolve around its use in ordinary cars. Robocars, however, can never rely on it. They must be fully safe based on just their sensors, since you can’t expect every car, child or deer to have a transponder, ever.

One issue of concern is the timeline for this technology, which will look something like this:

  1. If they’re lucky, NHTSA will get this mandate in 2015, and stop the FCC from reclaiming the currently allocated spectrum.
  2. Car designers will start designing the tech into new models; however, they will not ship until the 2019 or 2020 model years.
  3. By 2022, the 2015-designed technology will be seriously obsolete, and new standards will be written, which will ship in 2027.
  4. New cars will come equipped with the technology. About 12 million new cars are sold per year.
  5. By 2030, about half of all cars will have the technology, so it can help in 25% of accidents. 3/4 of those will have the obsolete 2015 technology or need a field upgrade; the rest will have soon-to-be-obsolete 2022 technology. Most cars will also have forward collision warning by this point, so V2V is only providing extra information in a tiny fraction of that 25% of accidents.
  6. By 2040 almost all cars have the technology, though most will have older versions. Still, 5-10% of cars do not have the technology unless a mandate demands retrofit. Some cars have the equipment but it is broken.

Because of the quadratic network effect, in 2030 when half of cars have the technology, only 25% of car interactions will make use of it, since both cars must have it. (The number is, to be fair, somewhat higher, as new cars drive more than old cars.)
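
A rough sketch of that quadratic effect, using the 12 million new cars per year from the timeline above and my own crude assumptions of a 250-million-car US fleet with every new car equipped:

    FLEET = 250_000_000          # assumed US fleet size
    NEW_PER_YEAR = 12_000_000    # new car sales per year, from the text

    for years in (5, 10, 15, 20):
        p = min(1.0, years * NEW_PER_YEAR / FLEET)
        print(f"after {years} years: {p:.0%} of cars equipped, "
              f"{p * p:.0%} of car-to-car encounters covered")

Ten years of universal installation still covers under a quarter of encounters, which is why the payoff arrives so slowly.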

More World Tour: Dubai, Singapore

The Robocars world tour continues. Monday I will speak on robocars at the UAE Government conference in Dubai, where I just landed. Then it’s off to talk about them at a private event in Singapore, but I’ll also visit teams there. If I have time, I will check out Masdar — what was originally going to be the first all-robocar city — while in the UAE.

When I get back I will have more on some new announcements, particularly the vehicle-to-vehicle communications plan, and new teams forming up. For my views on the V2V issue, you can read the three-part series I wrote last year, V2V and how to build a networked technology.