
The paradox of Bitcoin proof-of-work mining

Everybody knows about Bitcoin, but fewer know what goes on under the hood. Bitcoin provides the world a trustable ledger for transactions without requiring trust in any given party such as a bank or government. Everybody can agree on what’s in the ledger and the order in which it was written, and that makes it possible to record transfers of title to property — in particular the virtual property called bitcoins — in the ledger and thus have a money system.

Satoshi’s great invention was a way to build this trust in a decentralized way. Because there are rewards, many people would like to be the next person to write a block of transactions to the ledger. The Bitcoin system assures that the next person to do it is chosen at random. Because the winner is chosen at random from a large pool, it becomes very difficult to corrupt the ledger. You would need 6 consecutive winners, each chosen at random from a large group, to all be part of your conspiracy. That’s next to impossible unless your conspiracy is so large that half the participants are in it.

How do you win this lottery to be the next randomly chosen ledger author? You need to burn computer time working on a math problem. The more computer time you burn, the more likely it is you will hit the answer. The first person to hit the answer is the next winner. This is known as “proof of work.” Technically, it isn’t proof of work, because you can, in theory, hit the answer on your first attempt, and be the winner with no work at all, but in practice, and in aggregate, this won’t happen. In effect, it’s “proof of luck,” but the more computing you throw at the problem, the more chances of winning you have. Luck is, after all, an imaginary construct.
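To make the lottery concrete, here is a minimal sketch (in Python, with a toy difficulty) of the hash-below-target search miners perform. It simplifies the real protocol, where the nonce is a field inside the block header and the target is astronomically lower:

```python
import hashlib

def mine(block_header: bytes, target: int) -> int:
    """Search for a nonce whose double-SHA256 hash falls below the target.

    The lower the target, the more hashes you must try on average; that is
    the "work", though a lucky first guess is always possible.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the winning lottery ticket
        nonce += 1

# Toy difficulty: roughly 1 in 2**16 hashes qualifies, so this runs in
# well under a second. Bitcoin's real target is astronomically lower.
print(mine(b"example block header", 2**240))
```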

Because those who win are rewarded with freshly minted “mined” bitcoins and transaction fees, people are ready to burn expensive computer time to make it happen. And in turn, they assure the randomness and thus keep the system going and make it trustable.

Very smart, but also very wasteful. All this computer time is burned to no other purpose. It does no useful work — and there is debate about whether it inherently can’t do useful work — and so a lot of money is spent on these lottery tickets. At first, existing computers were used, and the main cost was electricity. Over time, special purpose computers (dedicated processors or ASICs) became the only effective tools for the mining problem, and now the cost of these special processors is the main cost, and electricity the secondary one.

Money doesn’t grow on trees or in ASIC farms. The cost of mining is carried by the system. Miners get coins and will eventually sell them, wanting fiat dollars or goods, and this affects the price. Markets, being what they are, bring the cost of being a bitcoin miner and the reward closer and closer together over time. If the reward gets too far above the cost, people will invest in mining equipment until it normalizes. The miners make real, but not extravagant, profits. (Early miners got extravagant profits not because of mining but because of the appreciation of their coins.)

What this means is that the cost of operating Bitcoin mostly goes to the companies selling ASICs, and to a lesser extent the power companies. Bitcoin has created a funnel of money — about $2M a day — that mostly goes to people making chips that compute nothing useful, and to fuel burned calculating nothing. Yes, the miners are providing the backbone of Bitcoin, which I am not calling nothing, but they could do this with any fair, non-centralized lottery, whether it burned CPU or not. If we can think of one.

(I will note that some point out that the existing fiat money system also comes with a high cost, in printing and minting and management. However, this is not a makework cost, and even if Bitcoin is already more efficient, that doesn’t mean there should not be an effort to make it even better.)

CPU/GPU mining

Naturally, many people have been bothered by this for various reasons. A large fraction of the “alt” coins differ from Bitcoin primarily in the mining system. The first round of coins, such as Litecoin and Dogecoin, use a proof-of-work function which was much more difficult to solve with an ASIC (a sketch of the idea follows the list below). The theory was that this would make mining more democratic — people could do it with their own computers, buying off-the-shelf equipment. This has run into several major problems:

  • Even if you mined with your own computer, you tended to end up dedicating that computer to mining if you wanted to compete
  • Because people already owned the hardware, electricity became a much bigger cost component, and that waste of energy is even more troublesome than ASIC buying
  • Over time, mining for these coins moved to high-end GPU cards. This, in turn, made mining the main driver of demand for these GPUs, drying up the supply and jacking up the prices. In effect, the high-end GPU cards became like the ASICs — specialized hardware being bought just for mining.
  • In 2014, vendors began advertising ASICs for these “ASIC-proof” algorithms.
  • When mining can be done on ordinary computers, it creates a strong incentive for thieves to steal computer time from insecure computers (i.e. all computers) in order to mine. Several instances of this have already become famous.
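For the curious, here is a rough sketch of what an “ASIC-resistant” proof-of-work hash looks like, using scrypt, the memory-hard function behind Litecoin and Dogecoin, via Python’s standard library. The parameters are illustrative rather than either coin’s exact settings:

```python
import hashlib

# SHA-256 (Bitcoin's proof-of-work hash) is trivial to bake into silicon.
# scrypt forces each hash attempt to touch a sizable block of RAM, which
# for a while kept commodity CPUs and GPUs competitive with custom chips.
def scrypt_pow_hash(header: bytes, nonce: int) -> bytes:
    return hashlib.scrypt(
        header + nonce.to_bytes(8, "little"),
        salt=header,  # toy choice; a real coin fixes its own scheme
        n=1024,       # CPU/memory cost factor
        r=1,          # block size
        p=1,          # parallelism
        dklen=32,
    )

print(scrypt_pow_hash(b"example header", 0).hex())
```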

The last point is challenging. It’s almost impossible to fix. If mining can be done on ordinary computers, then they will get botted. In this case a thief will even mine at a rate that can’t pay for the electricity, because the thief is stealing your electricity too.  read more »

New regulations are banning the development of delivery robots

Many states and jurisdictions are rushing to write laws and regulations governing the testing and deployment of robocars. California is working on its new regulations right now. The first focus is on testing, which makes sense.

Unfortunately, the proposed California regulations and many similar regulations contain a serious flaw:

The autonomous vehicle test driver is either in immediate physical control of the vehicle or is monitoring the vehicle’s operations and capable of taking over immediate physical control.

This is quite reasonable for testing vehicles based on modern cars, which all have steering wheels and brakes with physical connections to the steering and braking systems. But it presents a problem for testing delivery robots or deliverbots.

Delivery robots are world-changing. While they won’t and can’t carry people, they will change retailing, logistics, the supply chain, and even going to the airport in huge ways. By offering very quick delivery of every type of physical good — less than 30 minutes — at a very low price (a few pennies a mile) and on the schedule of the recipient, they will disrupt the supply chain of everything. Others, including Amazon, are working on doing this with flying drones, but for delivery of heavier items, and for efficient delivery, the ground is the way to go.

While making fully unmanned vehicles is more challenging than ones supervised by their passenger, the delivery robot is a much easier problem than the self-delivering taxi for many reasons:

  • It can’t kill its cargo, and thus needs no crumple zones, airbags or other passive internal safety.
  • It still must not hurt people on the street, but its cargo is not impatient, and it can go more slowly to stay safer. It can also pull to the side frequently to let people pass if needed.
  • It doesn’t have to travel the quickest route, and so it can limit itself to low-speed streets it knows are safer.
  • It needs no windshield or steering wheel, and can be small, light and very inexpensive.

A typical deliverbot might look like little more than a suitcase-sized box on 3 or 4 wheels. It would have sensors, of course, but little more inside than batteries and a small electric motor. It probably will be covered in padding or pre-inflated airbags, to assure it does the least damage possible if it does hit somebody or something. At a weight of under 100 lbs, a speed of only 25 km/h and balloon padding all around, it probably couldn’t kill you even if it hit you head on (though that would still hurt quite a bit.)
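A back-of-envelope comparison (my own figures, with an assumed 1,500 kg car for contrast) shows why the low mass and speed matter so much:

```python
# Kinetic energy carried into a head-on collision, E = 1/2 * m * v^2.
lbs_to_kg = 0.4536
kmh_to_ms = 1 / 3.6

deliverbot = 0.5 * (100 * lbs_to_kg) * (25 * kmh_to_ms) ** 2  # ~1.1 kJ
car = 0.5 * 1500 * (50 * kmh_to_ms) ** 2                      # ~145 kJ

print(f"deliverbot: {deliverbot:.0f} J, car: {car:.0f} J")
# The car brings over 100x the energy, before any padding is considered.
```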

The point is that this is an easier problem, and so we might see development of it before we see full-on taxis for people.

But the regulations do not allow it to be tested. The smaller ones could not fit a human, and even if you could get a small human inside, they would not have the passive safety systems in place for that person — something you want even more in a test vehicle. They would need to add physical steering and braking systems which would not be present in the full drive-by-wire deployment vehicle. Testing on real roads is vital for self-driving systems. Test tracks will only show you a tiny fraction of the problem.

One way to test the deliverbot would be to follow it in a chase car. The chase car would observe all operations, and have a redundant, reliable radio link to allow a person in the chase car to take direct control of any steering or brakes, bypassing the autonomous drive system. This would still be drive-by-wire(less) though, not physical control.

These regulations also affect testing of full drive-by-wire vehicles. Many hybrid and electric cars today are mostly drive-by-wire in ordinary operations, and the new Infiniti Q50 features the first steer-by-wire. However, the Q50 has a clutch which, in the event of system failure, physically reconnects the steering column and the wheels, and in the hybrids, even though the first part of the brake pedal’s travel commands DBW regenerative braking, pressing all the way down gives you a physical hydraulic connection to the brakes. A full DBW car, one without any steering wheel like the Induct Navia, can’t be tested on regular roads under these regulations. You could put a DBW steering wheel in the Navia for testing, but it would not be physical.

Many interesting new designs must be DBW. Things like independent control of the wheels (as on the Nissan Pivo) and steering through differential electric motor torque can’t be done through physical control. We don’t want to ban testing of these vehicles.

Yes, teams can test regular cars and then move their systems down to the deliverbots. But this bars the deliverbots from coming first, even though they are easier, and allows only the developers of passenger vehicles to get in the game.

So let’s modify these regulations to exempt vehicles which can’t safely carry a person, or which are fully drive-by-wire, and simply demand a highly reliable DBW system the safety driver can use.

The endgame for Bitcoin

Bitcoin is hot-hot-hot, but today I want to talk about how it ends. Earlier, I predicted a variety of possible fates for Bitcoin ranging from taking over the entire M1 money supply to complete collapse, but the most probable one, in my view, is that Bitcoin is eventually supplanted by one or more successor digital currencies which win in the marketplace. I think that successor will also itself be supplanted, and that this might continue for some time. I want to talk about not just why that might happen, but also how it may take place.

Nobody thinks Bitcoin is perfect, and no digital currency (DigiC) is likely to satisfy everybody. Some of the flaws are seen as flaws by most people, but many of its facets are seen as features by some, and flaws by others. The anonymity of addresses, the public nature of the transactions, the irrevocable transactions, the fixed supply, the mining system, the resistance to control by governments — there are parties that love these and hate these.

Bitcoin’s most remarkable achievement, so far, is the demonstration that a digital currency with no intrinsic value or backer/market maker can work and get a serious valuation. Bitcoin argues — and for now demonstrates — that you can have a money that people will accept only because they know they can get others to accept it with no reliance on a government’s credit or the useful physical properties of a metal. The price of a bitcoin today is pretty clearly the result of speculative bubble investment, but that it sustains a price at all is a revelation.

Bitcoins have their value because they are scarce. That scarcity is written into the code — in the regulated speed of mining, and in the fixed limit on coins. There will only ever be so many bitcoins, and this gives you confidence in their value, unlike, say, Zimbabwe 100 trillion dollar notes. This fixed limit is often criticised because it will be strongly deflationary over time, and more traditional economic theory holds that a deflationary currency has serious problems. People resist spending it because holding it is better than spending it, among other things.

Altcoins

While bitcoins have this scarcity, digital currencies as a group do not. You can always create another digital currency. And many people have. While Bitcoin is the largest, there are many “altcoins,” a few of which (such as Ripple, Litecoin and even the satirical currency Dogecoin) have serious total market capitalizations of tens or hundreds of millions of dollars(1). Some of these altcoins are simply Bitcoin or minor modifications of the Bitcoin protocol with a different blockchain or group of participants; others have more serious differences, such as alternate forms of mining. Ripple is considerably different. New altcoins will emerge from time to time, presumably forever.

What makes one digital coin better than another? Obviously a crucial element is who will accept the coin in exchange for goods, services or other types of currency. The leading coin (Bitcoin) is accepted at more stores, which gives it a competitive advantage.

If one is using digital currency simply as a medium — changing dollars to bitcoins to immediately buy something with bitcoins at a store — then it doesn’t matter a great deal which DigiC you use, or what its price is, as long as it is not extremely volatile. (You may be interested in other attributes, like speed of transaction and revocation, along with security, ease of use and other factors.) If you wish to hold the DigiC, you care about appreciation, inflation and deflation, as well as the risk of collapse. These factors are affected as well by the “cost” of the DigiC.

The cost of a digital currency

I will advance that every currency has a cost which affects its value. For fiat currency like dollars, all new dollars go to the government, and every newly printed dollar devalues all the other dollars, and overprinting creates clear inflation.  read more »

Commentary on California's robocar regulations workshop

Tuesday, the California DMV held a workshop on how they will write regulations for the operation of robocars in California. They already have done meetings on testing, but the real meat of things will be in the operation. It was in Sacramento, so I decided to just watch the video feed. (Sadly, remote participants got almost no opportunity to provide feedback to the workshop, so it looks like it’s 5 hours of driving if you want to really be heard, at least in this context.)

The event was led by Brian Soublet, assistant chief counsel, and next to him was Bernard Soriano, the deputy director. I think Mr. Soublet did a very good job of understanding many of the issues and leading the discussion. I am also impressed at the efforts Mr. Soriano has made to engage the online community to participate. Because Sacramento is a trek for most interested parties, it means the room will be dominated by those paid to go, and online engagement is a good way to broaden the input received.

As I wrote in my article on advice to governments, I believe the best course is to have a light hand today while the technology is still in flux. While it isn’t easy to write regulations, it’s harder to undo them. There are many problems to be solved, but we really should see first whether the engineers who are working day-in and day-out to solve them can do that job before asking policymakers to force a solution. It’s not the role of the government to forbid theoretical risks in advance, but rather to correct demonstrated harms and demonstrated unacceptable risks once it’s clear they can’t be solved on the ground.

With that in mind, here’s some commentary on matters that came up during the session.

How do the police pull over a car?

Well, the law already requires that vehicles pull over when told to by police, as well as pull to the right when any emergency vehicle is passing. With no further action, all car developers will work out ways to notice this — microphones which know the sound of the sirens, cameras which can see the flashing lights.

Developers might ask for a way to make this problem easier. Perhaps a special sound the police car could make (by holding a smartphone up to their PA microphone, for example). Perhaps the police could just read the licence plate to dispatch, and dispatch could use an interface provided by the car vendor. Perhaps a radio protocol that can be loaded into an officer’s phone. Or something else — this is not yet the time to solve it.

It should be noted that this should be an extremely unlikely event. The officer is not going to pull over the car to have a chat. Rather, they would only want the car to stop because it is driving in an unsafe manner and putting people at risk. This is not impossible, but teams will work so hard on testing their cars that the probability that a police officer would be the first to discover a bug which makes the car drive illegally is very, very low. In fact, not to diminish the police or represent the developers as perfect, but the odds are much greater that the officer is in error. Still, the ability should be there.  read more »

The real Bitcoin Satoshi Nakamoto

The latest Bitcoin bombshell — distracting us even from the Mt.Gox failure — was the Newsweek cover story — their first printed issue since 2012 — declaring they had found the mythical creator of Bitcoin, known under the pseudonym of Satoshi Nakamoto, and he was a guy from near L.A. in his 60s whose real birth name was actually Satoshi Nakamoto.

He is now known as Dorian S. Nakamoto; I’ll refer to him as DSN to distinguish him from BCSN — the Bitcoin creator Satoshi Nakamoto — though of course the question is whether DSN == BCSN. DSN denies he is BCSN and says his quotes suggesting that were answers to other questions, at least in his mind.

The second surprise was a web posting from BCSN, the first in years, simply saying he is not DSN. This posting is confusing, because a little thought shows it reveals no information on that subject. If DSN is BCSN, then of course both are denying it. More to the point, BCSN is clearly somebody well versed in game theory and trust calculus, and knows very well that the denial does not add reliable information on this.

BCSN’s post does tell us one big thing though — that BCSN is still alive, around, and even willing to comment if the issue is as big as this one. Many speculated that his silence meant he was gone, and also that he had lost his estimated million bitcoins.

The Bitcoin community was quite skeptical of the Newsweek claim. One very justified reason for this skepticism is that aside from the two key disputed quotes, the article’s arguments that it has found BCSN read like nonsense to the average nerd.

DSN might be BCSN, the article reasons, because he is a nerdy engineer with good technical skills, a background working at various tech companies and government projects, is aloof from his family and neighbours, and enjoys a technical hobby such as collecting model trains, even machining his own parts. “Smart, intelligent, mathematics, engineering, computers. You name it, he can do it,” says DSN’s brother. He’s a little bit libertarian, looks scruffy and is reportedly a bit of an asshole.

Aha, thinks Leah McGrath Goodman of Newsweek — this “suggested I was on the right track.”

What she doesn’t realize, perhaps, is that I literally know hundreds of people who fit that description. It’s a profile that is actually more likely to be true than not among wide swaths of the nerd community.

Goodman’s logic reads to us like somebody saying, “I was on the track of the Zodiac killer, whom we know to be from San Francisco. I identified a suspect named John Zodiac who is a quiet loner, and is known to like the San Francisco Giants and burritos in the Mission district. I’m on the right track!”

There is only one thing in the Newsweek article that was worthy of attention. With the police he had summoned ready to usher Goodman away from his house, he told her:

“I am no longer involved in that and I cannot discuss it. It’s been turned over to other people. They are in charge of it now. I no longer have any connection.”

In the context of Bitcoin, that’s indeed proof enough. The police officers present have confirmed he did say something like this. DSN insists he felt he was being asked about his past classified work on government projects. He says he had not even heard about Bitcoin until this matter came up.

Various online forces have come up with other arguments against the match. DSN’s known writings seem fairly different from the writings of BCSN, though Goodman finds a few commonalities, including hints that BCSN is perhaps older (like DSN.)

But most of all, BCSN is known as a scrupulous protector of his or her or their own identity. BCSN made meticulous use of online identity hiding techniques to avoid being tracked, and has never spent any of the huge cache of bitcoins mined in the early days, possibly to avoid the risk of detection. This is so completely at odds with the idea of doing it all under his real name that after a perfunctory search in the early days, most people who fancied themselves Satoshi-finding detectives rarely bothered to look at people whose real name was Satoshi Nakamoto. Common wisdom, in fact, was that he/she probably wasn’t even Japanese. Certainly not somebody with no history in the cryptography or digital money communities.

But what if it is him?

While currently the tide seems to be to discredit the Newsweek story, a second question has been raised — is it good or bad if BCSN is unmasked, and if it is this guy?  read more »

More about stolen bitcoins

Yesterday, I wrote about stolen bitcoins and the issues around a database of stolen coins. The issue is very complex, so today I will add some follow-up issues.

When stolen property changes hands (innocently), the law says that nobody in the chain had authority to transfer title to that property. Let’s assume that the law accepts bitcoins as property, and bitcoin transactions as denoting transfer of title (as well as possession/control) to it. So with a stolen bitcoin, the final recipient is required under the law to return possession of the coin to its rightful owner, the victim of the theft. However, that recipient is also now entitled to demand back whatever they paid for the bitcoin, and so on down the line, all the way to the thief. With anonymous transactions, that’s a tall order, though most real-world transactions are not that anonymous.

This is complicated by the fact that almost all Bitcoin transactions mix coins together. A Bitcoin “wallet” doesn’t hold bitcoins; rather, it holds addresses which were the outputs of earlier transactions, and those outputs were amounts of bitcoin. When you want to do a new transaction, you do two things:

  1. You gather together enough addresses in your wallet which hold outputs of prior transactions, which together add up to as much as you plan to spend, and almost always a bit more.
  2. You write a transaction that lists all those old outputs as “inputs” and then has a series of outputs, which are the addresses of the recipients of the transaction.

There are typically 3 (or more) outputs on a transaction:

  1. The person you’re paying. The output is set to be the amount you’re paying
  2. Yourself. The output is the “change” from the transaction since the inputs probably didn’t add up exactly to the amount you’re paying.
  3. Any amount left over — normally small and sometimes zero — which does not have a specific output, but is given as a transaction fee to the miner who put your transaction into the Bitcoin ledger (blockchain.)

They can be more complex, but the vast majority work like this. While normally you pay the “change” back to yourself, the address for the change can be any new random address, and nothing in the ledger connects it to you. (A toy model of this structure appears below.)
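Here is a toy model of that structure. Real transactions use scripts and record amounts in satoshis (hundred-millionths of a bitcoin); the names and types are my own simplification:

```python
from dataclasses import dataclass

@dataclass
class Output:
    address: str
    amount: int  # in satoshi (1 BTC = 100,000,000 satoshi)

@dataclass
class Transaction:
    inputs: list[Output]   # outputs of earlier transactions being spent
    outputs: list[Output]  # new outputs: the payment and the change

    def fee(self) -> int:
        # Whatever the inputs don't explicitly pay out goes to the miner.
        return (sum(i.amount for i in self.inputs)
                - sum(o.amount for o in self.outputs))

# Spend two old outputs worth 1.5 BTC: pay 1.2, take 0.29 in change at a
# fresh address, and leave 0.01 as the mining fee.
tx = Transaction(
    inputs=[Output("addr_old_1", 100_000_000), Output("addr_old_2", 50_000_000)],
    outputs=[Output("addr_merchant", 120_000_000), Output("addr_change", 29_000_000)],
)
print(tx.fee())  # 1000000 satoshi = 0.01 BTC
```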

So as you can see, a transaction might combine a ton of inputs, some of which are clean, untainted coins, some of which are tainted, and some of which are mixed. After coins have been through a lot of transactions, the mix can be very complex. Not so complex that computers can’t deal with it and calculate a precise fraction of the total coin that was tainted, but much too complex for humans to wish to worry about.
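The bookkeeping itself is simple. Here is a sketch, using a proportional-mixing rule of my own framing rather than any established standard:

```python
def output_taint(inputs: list[tuple[int, float]]) -> float:
    """inputs: (amount_in_satoshi, taint_fraction) pairs being merged.

    Every output of the transaction inherits the weighted average taint.
    """
    total = sum(amount for amount, _ in inputs)
    tainted = sum(amount * taint for amount, taint in inputs)
    return tainted / total

# Merging 1.0 BTC of clean coin with 0.5 BTC of fully stolen coin leaves
# every output of the transaction one-third tainted.
print(output_taint([(100_000_000, 0.0), (50_000_000, 1.0)]))  # 0.333...
```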

A thief will want to mix up their coins as quickly as possible, and there are a variety of ways to do that.

Right now, the people who bought coins at Mt.Gox (or those who sent them there to buy other currency) are the main victims of this heist. They thought they had a balance there, and it’s gone. Many of them bought these coins at lower prices, and so their loss is not nearly as high as the total suggests, but they are deservedly upset.

Unfortunately, if the law does right by them and recovers their stolen property, that recovery is likely to come from the whole Bitcoin-owning and using community, because everybody in the chain is liable. Of particular concern are the merchants who are taking bitcoin on their web sites. Let’s speculate on the typical path of a stolen coin that’s been around for a while:

  • It left Mt.Gox for cash, sold by the thief, and a speculator simply held onto the coins. That’s the “easy” one, the person who now has stolen coins has to find the thief and get their money back. Not too likely, but legally clear.
  • It left Mt.Gox and was used in a series of transactions, ending up with one where somebody bought an item from a web store using bitcoin.
  • With almost all stores, the merchant system takes all bitcoin received and sells it for dollars that day. Somebody else — usually a bitcoin speculator — paid dollars for that bitcoin that day, and the chain continues.

There is the potential here for a lot of hassle. The store learns they sold partially tainted bitcoins. The speculator wants, and is entitled to get, a portion of her money back, and the store is an easy target to go after. The store now has to go after their customer for the missing money. The store also probably knows who their customer is. The customer may have less knowledge of where her bitcoins came from.

This is a huge hassle for the store, and might very well lead to stores reversing their decisions to accept bitcoin. If 6% of all bitcoins are stolen, as the Mt.Gox heist alleges, most transactions are tainted. 6% is an amount worth recovering for many, and it’s probably all the profit at a typical web store. Worse, the number of stolen coins may be closer to 15% of all the circulating bitcoins, certainly something worth recovering on many transactions.

The “sinking taint” approach

Previously, I suggested a rule: if a transaction merges various inputs which are variously reported as stolen (tainted) and not, then the total tainted percentage is calculated, the first outputs receive all the tainting, and the later outputs (including the transaction fee, last of all) are marked clear. One of the outputs would remain partially tainted unless the transaction was designed to avoid this. There is no inherent rule that the “change” comes last; it is just a custom, and it would probably be reversed, so that as much of the tainted fraction as possible remains in the change, and the paid amount is as clean as possible. Recipients would want to insist on that.

This allows the creation of a special transaction that people could do with themselves on discovering they have coin that is reported stolen. The transaction would split the coin precisely into one or more purely tainted outputs, and one or more fully clean outputs. Recipients would likely refuse bitcoin with any taint on it at all, and so holders of bitcoin would be forced to do these dividing transactions. (They might have to do them again if new theft reports come in on coin that they own.) People would end up doing various combinations of these transactions to protect their privacy and not publicly correlate all their coin. A sketch of the sinking-taint rule follows.
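Here is a sketch of that sinking-taint rule, in my own rendering: the tainted fraction is assigned to the earliest outputs, so the later outputs come out clean:

```python
def sink_taint(input_total: int, taint_fraction: float,
               outputs: list[int]) -> list[tuple[int, float]]:
    """outputs: amounts in satoshi, in transaction order.

    Returns (amount, taint_fraction) for each output.
    """
    tainted_left = round(input_total * taint_fraction)
    result = []
    for amount in outputs:
        t = min(amount, tainted_left)  # earliest outputs soak up the taint
        result.append((amount, t / amount))
        tainted_left -= t
    return result

# 1.5 BTC of inputs, one third tainted. Listing the change first makes it
# fully tainted (1.0) and leaves the 1.0 BTC payment fully clean (0.0).
print(sink_taint(150_000_000, 1 / 3, [50_000_000, 100_000_000]))
# [(50000000, 1.0), (100000000, 0.0)]
```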

Tainted transaction fees?

The above system makes the transaction fee clean if any of the coin in the transaction is clean. If this is not done, miners might not accept such transactions. On the other hand, there is an argument that it would be good if miners refused even partially tainted transactions, other than the ones above used to divide the stolen coins from the clean. There would need to be a rule that allows a transaction to be declared a splitting transaction which pays its fees from the clean part. In this case, as soon as coins had any taint at all, they would become unspendable in the legit markets and it would be necessary to split them. They would still be spendable with people who did not accept this system, or in some underground markets, but they would probably convert to other currencies at a discount.

This works better if there is agreement on the database of tainted coins, but that’s unlikely. As such, miners would decide what databases to use. Anything in the database used by a significant portion of the miners would make those coins difficult to spend and thus prime for splitting. However, if they are clean in the view of a significant fraction of the miners, they will enter the blockchain eventually.

This is a lot of complexity, much more than anybody in the Bitcoin community wants. The issue is that if the law gets involved, there is a world of pain in store for the system, and for merchants, if a large fraction of all circulating coins is reported as stolen in a police report, even a Japanese police report.

What if somebody steals a bitcoin?

Bitcoin has seen a lot of chaos in the last few months, including being banned in several countries, the fall of the Silk Road, and biggest of all, the collapse of Mt. Gox, which was, for much of Bitcoin’s early history, the largest (and only major) exchange between regular currencies and bitcoins. Most early “investors” in bitcoin bought there, and if they didn’t move their coins out, they now greatly regret it.

I’ve been quite impressed by the ability of the bitcoin system to withstand these problems. Each has caused major “sell” days but it has bounced back each time. This is impressive because nothing underlies bitcoins other than the expectation that you will be able to use them into the future and that others will take them.

It is claimed (though doubted by some) that most of Mt.Gox’s bitcoins — 750,000 of them or over $400M — were stolen in some way, either through thieves exploiting a bug or some other means. If true, this is one of the largest heists in history. There are several other stories of theft out there as well. Because bitcoin transactions can’t be reversed, and there is no central organization to complain to, theft is a real issue for bitcoin. If you leave your bitcoin keys on your networked devices, and people get in, they can transfer all your coins away, and there is no recourse.

Or is there?

If you sell something and are paid in stolen money, there is bad news for you, the recipient of the money. If this is discovered, the original owner gets the money back. You are out of luck for having received stolen property. You might even be suspected of being involved, but even if you are entirely innocent, you still lose.

All bitcoin transactions are public, but the identities of the parties are obscured. If your bitcoins are stolen, you can stand up and declare they were stolen. More than that, unless the thief wiped all your backups, you can 99.9% prove that you were, at least in the past, the owner of the allegedly stolen coins. Should society accept bitcoins as money or property, you would be able to file a police report on the theft, and identify the exact coin fragments stolen, and prove they were yours, once. We would even know “where” they are today, or see every time they are spent and know who they went to, or rather, know the random number address that owns them now in the bitcoin system. You still own them, under the law, but in the system they are at some other address.

That random address is not inherently linked to this un-owner, but as the coins are spent and re-spent, they will probably find their way to a non-anonymous party, like a retailer, from whom you could claim them back. Retailers, exchanges and other legitimate parties would not want this, they don’t want to take stolen coins and lose their money. (Clever recipients generate a new address for every transaction, but others use publicly known addresses.)

Tainted coin database?

It’s possible, not even that difficult, to create a database of “tainted” coins. If such a database existed, people accepting coins could check if the source transaction coins are in that database. If there, they might reject the coins or even report the sender. I say “reject” because you normally don’t know what coins you are getting until the transaction is published, and if the other party publishes it, the coins are now yours. You can refuse to do your end of the transaction (ie. not hand over the purchased goods) or even publish a transaction “refunding” the coins back to the sender. It’s also possible to imagine that the miners could refuse to enter a transaction involving tainted coins into the blockchain. (For one thing, if the coins are stolen, they won’t get their transaction fees.) However, as long as some miner comes along willing to enter it, it will be recorded, though other miners could refuse to accept that block as legit.  read more »
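A minimal sketch of what such a check might look like; the database format and names are hypothetical, purely for illustration:

```python
# Published theft reports, keyed by the "outpoint" (transaction id and
# output index) of each stolen coin fragment.
stolen_outpoints = {("9f3c…", 0), ("a1b2…", 1)}  # illustrative entries

def is_tainted(spent_outpoints: list[tuple[str, int]]) -> bool:
    """Check whether a proposed payment spends any reported-stolen output."""
    return any(op in stolen_outpoints for op in spent_outpoints)

# A merchant could refuse to ship goods, or publish a refund transaction,
# when the coins funding a payment trace back to a theft report.
print(is_tainted([("9f3c…", 0)]))  # True -> reject or refund
```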

What governments should do to help and regulate robocars

In my recent travels, I have often been asked what various government entities can and should do related to the regulation of robocars. Some of them want to consider how to protect public safety. Most of them, however, want to know what they can do to prepare their region for the arrival of these cars, and ideally to become one of the leading centres in the development of the vehicles. The car industry is about to be disrupted, and most of the old players may not make it through to the new world. The ground transportation industry is so huge (I estimate around $7 trillion globally) that many regions depend on it as a large component of their economy. For some places it’s vital.

But there are many more questions than that, so I’ve prepared an essay covering a wide variety of ways in which policymakers and robocars will interact.

Read: Governments, The Law and Robocars

US push to mandate V2V radios -- is it a good choice?

It was revealed earlier this month that NHTSA wishes to mandate vehicle-to-vehicle radios in all cars. I have written extensively on the issues around this, and regular readers will know I am a skeptic of this plan. This is not to say that V2V would not be useful for robocars and regular cars. Rather, I believe that its benefits are marginal when it comes to the real problems, and that for the amount of money that must be spent, there are better ways to spend it. In addition, I think that similar technology can and will evolve organically, without a government mandate, or with a very minimal one. Indeed, I think that technology produced without a mandate or pre-set standards will actually be superior, cheaper and deployed far more quickly than the proposed approach.

The new radio protocol, known as DSRC, is a point-to-point wifi-style radio protocol for cars and roadside equipment. There are many applications. Some are “V2V,” which means cars report what they are doing to other cars. This includes reporting one’s position, tracklog and speed, as well as events like hitting the brakes or flashing a turn signal. Cars can use this to track where other cars are, and warn of potential collisions, even with cars you can’t see directly. Infrastructure can use it to measure traffic.

The second class of applications are “V2I” which means a car talks to the road. This can be used to know traffic light states and timings, get warnings of construction zones and hazards, implement tolling and congestion charging, and measure traffic.

This will be accomplished by installing a V2V module in every new car which includes the radio, a connection to car information and GPS data. This needs to be tamper-proof, sealed equipment and must have digital certificates to prove to other cars it is authentic and generated only by authorized equipment.

Robocars will of course use it. Any extra data is good, and the cost of integrating this into a robocar is comparatively small. The questions revolve around its use in ordinary cars. Robocars, however, can never rely on it. They must be fully safe based on just their sensors, since you can’t expect every car, child or deer to have a transponder, ever.

One issue of concern is the timeline for this technology, which will look something like this:

  1. If they’re lucky, NHTSA will get this mandate in 2015, and stop the FCC from reclaiming the currently allocated spectrum.
  2. Car designers will start designing the tech into new models; however, they will not ship until the 2019 or 2020 model years.
  3. By 2022, the technology designed in 2015 will be seriously obsolete, and new standards will be written, which will ship in 2027.
  4. New cars will come equipped with the technology. About 12 million new cars are sold per year.
  5. By 2030, about half of all cars have the technology, and so it works in 25% of accidents. 3/4 of those will have the obsolete 2015 technology or need a field-upgrade. The rest will have soon to be obsolete 2022 technology. Most cars also have forward collision warning by this point, so V2V is only providing extra information in a tiny fraction of the 25% of accidents.
  6. By 2040 almost all cars have the technology, though most will have older versions. Still, 5-10% of cars do not have the technology unless a mandate demands retrofit. Some cars have the equipment but it is broken.

Because of the quadratic network effect, in 2030 when half of cars have the technology, only 25% of car interactions will make use of it, since both cars must have it. (The number is, to be fair, somewhat higher, as new cars drive more than old cars.)  read more »
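The arithmetic behind that quadratic effect, spelled out:

```python
# Both cars in an encounter need the radio, so (to first order) the
# fraction of covered interactions is the penetration rate squared.
for penetration in (0.10, 0.25, 0.50, 0.90):
    print(f"{penetration:.0%} of cars -> {penetration**2:.1%} of encounters")
# 10% -> 1.0%, 25% -> 6.2%, 50% -> 25.0%, 90% -> 81.0%
```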

Satoshi, is now the time to consider donating lots of bitcoin to charity?

I don’t know who the person or people are who, under the name Satoshi Nakamoto, created the Bitcoin system. The creator(s) want to keep their privacy, and given the ideology behind Bitcoin, that’s not too surprising.

There can only be 21 million bitcoins. It is commonly speculated that Satoshi did much of the early mining, and owns between 1 million and 1.5 million unspent bitcoins. Today, thanks in part to a speculative bubble, bitcoins are selling for $800, and have been north of $1,000. In other words, Satoshi has near a billion dollars worth of bitcoin. Many feel that this is not an unreasonable thing, that a great reward should go to Satoshi for creating such a useful system.

For Satoshi, the problem is that it’s very difficult to spend more than a small portion of this block, possibly ever. Bitcoin addresses are generally anonymous, but all transactions are public. Things are a bit different for the first million bitcoins, which went only to the earliest adopters. People know those addresses, and the ones that remain unspent are commonly believed to be Satoshi’s. If Satoshi starts spending them in any serious volume, it will be noticed and will be news.

The fate of Bitcoin

Whether Bitcoin becomes a stable currency in the future or not, today few would deny that it is unstable and undergoing speculative bubbles. Some think that because nothing backs the value of bitcoins, it will never become stable, but others are optimistic. Regardless of that, today the value of a bitcoin is fragile. The news that “Satoshi is selling his bitcoins!” would trigger panic selling, and that’s bad news in any bubble.

If Satoshi could sell, it is hard to work out exactly when the time to sell would be. Bitcoin has several possible long term fates:

  1. It could become the world’s dominant form of money. If it replaced all of the “M1” money supply in the world (cash and very liquid deposits), bitcoins could be worth $1 million each! (The arithmetic is sketched after this list.)
  2. It could compete with other currencies (digital and fiat) for that role. If it captured 1% of world money supply, it might be $10,000 a coin. While there is a limit on the number of bitcoins, the limit on the number of cryptocurrencies is unknown, and as bitcoin prices and fees increase, competition is to be expected.
  3. It could be replaced by one or more successors of superior design, with some ability to exchange during a modest window, and then drift down to minimal value.
  4. It could collapse entirely and quickly in the face of government opposition, competition and other factors during its bubble phase.
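For scale, here is the arithmetic the first two scenarios imply, using the 21 million coin cap:

```python
coin_cap = 21_000_000
# Scenario 1: $1M per coin implies Bitcoin absorbing ~$21T of money supply.
print(coin_cap * 1_000_000 / 1e12)   # 21.0 (trillion dollars)
# Scenario 2: a 1% share of that money supply spread over the same coins.
print(0.01 * 21e12 / coin_cap)       # 10000.0 dollars per coin
```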

My personal prediction is #3 — that several successor currencies will arise which fix issues with Bitcoin, with exchange possible for a while. However, just as bitcoins had their sudden rushes and bubbles, so will this exchange rate, and as momentum moves into this currency it could move very fast. Unlike exchanges that trade bitcoins for dollars, inter-cryptocurrency exchanges will be fast (though the settlement times of the currencies will slow things down.) It could be even worse if the word got out that “Satoshi is trading his coins for [Foo]Coin” as that could cause complete collapse of Bitcoin.

Perhaps he could move some coins through randomizing services that scramble the identity association, but moving the early coins to such a system would be seen as selling them.  read more »

Oh Hugo Awards, where have you gone?

I follow the Hugo awards closely, and 20 years ago published the 1993 Hugo and Nebula Anthology, which was probably the largest anthology of currently released fiction ever published at the time.

The Hugo awards are voted by around 1,000 fans who attend the World SF Convention, so they have their biases, but over time almost all the greats have been recognized. In addition, until the year 2000, in the best novel Hugo, considered the most important, the winner was always science fiction, not fantasy even though both and more were eligible. That shifted, and from 2001 to 2012, there have been 6 Fantasy winners, one Alternate History, and 5+1 SF. (2010 featured a tie between bad-science SF in the Windup Girl and genre-bending political science fiction in The City & The City.)

That’s not the only change to concern me. A few times my own pick for the best has not even been nominated. While that obviously shows a shift between my taste and the rest of the fans, I think I can point to reasons why it’s not just that.

The 2013 nominees I find not particularly inspiring. And to me, that’s not a good sign. I believe that the Hugo award winning novel should say to history, “This is an example of the best that our era could produce.” If it’s not such an example, I think “No Award” should win. (No Award is a candidate on each ballot, but it never comes remotely close to winning, and hasn’t ever for novels. In the 70s, it deservedly won a few times for movies. SF movies in the mid and early 70s were largely dreck.)

What is great SF? I’ve written on it before, but here’s an improvement of my definition. Great SF should change how you see the future/science/technology. Indeed, perhaps all great literature should change how you view the thing that is the subject matter of the literature, be it love, suffering, politics or anything else. That’s one reason why I have the preference for SF over Fantasy in this award. Fantasy has a much harder time attaining that goal.

I should note that I consider these books below as worth reading. My criticism is around whether they meet the standard for greatness that a Hugo candidate should have.

2312 by Kim Stanley Robinson

This is the best of the bunch, and it does an interesting exploration into the relationship of human and AI, and as in all of Stan’s fiction, the environment. His rolling city on Mercury is a wonder. The setup is great but the pace is as glacial as the slowly rolling city and the result is good, but not at the level of greatness I require here.  read more »

A Bitcoin Analogy

Bitcoin is having its first “15 minutes” with the recent bubble and crash, but Bitcoin is pretty hard to understand, so I’ve produced this analogy to give people a deeper understanding of what’s going on.

It begins with a group of folks who take a different view on several attributes of conventional “fiat” money. It’s not backed by any physical commodity, just faith in the government and central bank which issues it. In fact, it’s really backed by the fact that other people believe it’s valuable, and you can trade reliably with them using it. You can’t go to the US treasury with your dollars and get very much directly, though you must pay your US tax bill with them. If a “fiat” currency faces trouble, you are depending on the strength of the backing government to do “stuff” to prevent that collapse. Central banks in turn get a lot of control over the currency, and in particular they can print more of it any time they think the market will stomach such printing — and sometimes even when it can’t — and they can regulate commerce and invade privacy on large transactions. Their ability to set interest rates and print more money is both a bug (that has sometimes caused horrible inflation) and a feature, as that inflation can be brought under control and deflation can be prevented.

The creators of Bitcoin wanted to build a system without many of these flaws of fiat money, without central control, without anybody who could control the currency or print it as they wish. They wanted an anonymous, privacy protecting currency. In addition, they knew an open digital currency would be very efficient, with transactions costing effectively nothing — which is a pretty big deal when you see Visa and Mastercard able to sustain taking 2% of transactions, and banks taking a smaller but still real cut.

With those goals in mind, they considered the fact that even the fiat currencies largely have value because everybody agrees they have value, and the value of the government backing is at the very least, debatable. They suggested that one might make a currency whose only value came from that group consensus and its useful technical features. That’s still a very debatable topic, but for now there are enough people willing to support it that the experiment is underway. Most are aware there is considerable risk.

Update: I’ve grown less fond of this analogy and am working up a superior one, closer to the reality but still easy to understand.

Wordcoin

Bitcoins — the digital money that has value only because enough people agree it does — are themselves just very large special numbers. To explain this I am going to lay out an imperfect analogy using words and describe “wordcoin” as it might exist in the pre-computer era. The goal is to help the less technical understand some of the mechanisms of a digital crypto-based currency, and thus be better able to join the debate about them.  read more »

The Personal Cloud and Data Deposit Box

Last night I gave a short talk at the 3rd “Personal Clouds” meeting in San Francisco. The term “personal clouds” is a bit vague at present, but in part it describes what I had proposed in 2008 as the “data deposit box” — a means to achieve the various benefits of corporate-hosted cloud applications in computing space owned and controlled by the user. Other people are interpreting the phrase “personal clouds” to mean mechanisms for the user to host, control or monetize their own data, to control their relationships with vendors and others who will use that data, or in the simplest form, some people are using it to refer to personal resources hosted in the cloud, such as cloud disk drive services like Dropbox.

I continue to focus on the vision of providing the advantages of cloud applications closer to the user, bringing the code to the data (as was the case in the PC era) rather than bringing the data to the code (as is now the norm in cloud applications.)

Consider the many advantages of cloud applications for the developer:

  • You write and maintain your code on machines you build, configure and maintain.
    • That means none of the immense support headaches of trying to write software to run on multiple OSes, with many versions and thousands of variations. (Instead you do have to deal with all the browsers, but that’s easier.)
    • It also means you control the uptime and speed
    • Users are never running old versions of your code and facing upgrade problems
    • You can debug, monitor, log and fix all problems with access to the real data
  • You can sell the product as a service, either getting continuing revenue or advertising revenue
  • You can remove features, shut down products
  • You can control how people use the product and even what steps they may take to modify it or add plug-ins or 3rd party mods
  • You can combine data from many users to make compelling applications, particularly in the social space
  • You can track many aspects of single and multiple user behaviour to customize services and optimize advertising, learning as you go

Some of those are disadvantages for the user of course, who has given up control. And there is one big disadvantage for the provider, namely they have to pay for all the computing resources, and that doesn’t scale — 10x users can mean paying 10x as much for computing, especially if the cloud apps run on top of a lower level cloud cluster which is sold by the minute.

But users see advantages too:  read more »

Speaking on Personal Clouds in SF, and Robocars in Phoenix

Two upcoming talks:

Tomorrow (April 4) I will give a very short talk at the meeting of the personal clouds interest group. As far as I know, I was among the first to propose the concept of the personal cloud in my essays on the Data Deposit Box back in 2007, and while my essays are not the reason for it, the idea is gaining some traction now as more and more people think about the consequences of moving everything into the corporate clouds.

My lightning talk will cover what I see as the challenges to get the public to accept a system where the computing resources are responsible to them rather than to various web sites.

On April 22, I will be at the 14th International Conference on Automated People Movers and Automated Transit speaking in the opening plenary. The APM industry is a large, multi-billion dollar one, and it’s in for a shakeup thanks to robocars, which will allow automated people moving on plain concrete, with no need for dedicated right-of-way or guideways. APMs have traditionally been very high-end projects, costing hundreds of millions of dollars per mile.

The best place to find me otherwise is at Singularity University Events. While schedules are still being worked out, with luck you will see me this year in Denmark, Hungary and a few other places overseas, in addition to here in Silicon Valley of course.

V2V and connected car part 3: Broadcast data

Earlier in part one I examined why it’s hard to make a networked technology based on random encounters. In part two I explored how V2V might be better achieved by doing things phone-to-phone.

For this third part of the series on connected cars and V2V I want to look at the potential for broadcast data and other wide area networking.

Today, the main thing that “connected car” means in reality is cell phone connectivity. That began with “telematics” — systems such as OnStar — but has grown to using data networks to provide apps in cars. The ITS community hoped that DSRC would provide data service to cars, and that this would be one reason for people to deploy it, but the cellular networks took that over very quickly. Unlike DSRC, which is, as the name says, short range, the longer range of cellular data means you are connected most of the time, and all of the time in some places, and people will accept nothing less.

I believe there is a potential niche for broadcast data to mobile devices and cars. This would be a high-power shared channel. One obvious way to implement it would be to use a spare TV channel, and use the new ATSC-M/H mobile standard. ATSC provides about 19 megabits. Because TV channels can be broadcast with very high power transmitters, they reach almost everywhere in a large region around the transmitter. For broadcast data, that’s good.

Today we use the broadcast spectrum for radio and TV. Turns out that this makes sense for very popular items, but it’s a waste for homes, and largely a waste for music — people are quite satisfied instead with getting music and podcasts that are pre-downloaded when their device is connected to wifi or cellular. The amount of data we need live is pretty small — generally news, traffic and sports. (Call in talk shows need to be live but their audiences are not super large.)

A nice broadcast channel could transmit a lot of data of interest to cars:

  • Timing and phase information on all traffic signals in the broadcast zone.
  • Traffic data, highly detailed
  • Alerts about problems, stalled vehicles and other anomalies.
  • News and other special alerts — you could fit quite a few voice-quality station streams into one 19 megabit channel.
  • Differential GPS correction data, and even supplemental GPS signals.

The latency of the broadcast would be very low, of course, but what about the latency of uploaded signals? This turns out not to be a problem for traffic lights, because they don’t change suddenly on a few milliseconds’ notice, even if an emergency vehicle is sending them a command to change. If you know the signal is going to change 2 seconds in advance, you can transmit the time of the change over a long-latency channel. If need be, a surprise change can even be delayed until the ACK is seen on the broadcast channel, within certain limits. Most emergency changes have many seconds before the light needs to change.
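A small sketch of the idea; the message fields and timings are my own assumptions, not any ITS standard:

```python
import time

def make_phase_message(signal_id: str, state: str, change_at: float) -> dict:
    """Broadcast payload: which light, its current state, and the absolute
    time of the next change, so latency on the upload path doesn't matter."""
    return {"signal": signal_id, "state": state, "change_at": change_at}

now = time.time()
# Controller knows at `now` that the light flips in 2.0 seconds; the report
# takes a full second to reach the broadcast stream and go out.
msg = make_phase_message("5th-and-Main", "green", change_at=now + 2.0)
received_at = now + 1.0
print(f"warning margin: {msg['change_at'] - received_at:.1f}s")  # 1.0s left
```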

Stalled car warnings also don’t need low latency. If a car finds itself getting stalled on the road, it can send a report of this over the cellular modem that’s already inside so many cars (or over the driver’s phone.) This may take a few seconds to get into the broadcast stream, but then it will be instantly received. A stalled car is a problem that lasts minutes, you don’t need to learn about it in the first few milliseconds.

Indeed, this approach can even be more effective. Because of the higher power of the radios involved, information can travel between vehicles in places where line of sight communications would not work, or would actually only work later than the server-relayed signal. This is even possible in the “classic” DSRC example of a car running a red light. While a line of sight communication of this is the fastest way to send it, the main time we want this is on blind corners, where LoS may have problems. This is a perfect time for those longer range, higher power communications on the longer waves.

Most phones don’t have ATSC-M/H and neither do cars. But receiver chips for this are cheap and getting cheaper, and it’s a consumer technology that would not be hard to deploy. However, this sort of broadcast standard could also be done in the cellular bands, at some cost in bandwidth for them.

19 megabits is actually a lot, and since traffic incidents and light changes are few, a fair bit of bandwidth would be left over. It could be sold to companies who want a cheaper way to update phones and cars with more proprietary data, including map changes, their own private traffic and so on. Anybody with a lot of customers might find this more efficient. Very popular videos and audio streams for mobile devices could also use the extra bandwidth. If only a few people want something, point-to-point is the answer, but once something is wanted by many, broadcast can be the way to go.

What else might make sense to broadcast to cars and mobile phones in a city? While I’m not keen to take away some of the nice whitespaces, there are many places with lots of spare channels if designed correctly.

Solving V2V Part 2: Make it Phone to Phone

Last week, I began in part 1 by examining the difficulty of creating a new network system in cars when you can only network with people you randomly encounter on the road. I contend that nobody has had success in making a new networked technology when faced with this hurdle.

This has been compounded by the fact that the radio spectrum at 5.9GHz, which was intended for short-range communications (DSRC) from cars, is instead going to be released as unlicensed spectrum, like the WiFi bands. I think this is a very good thing for the world, since unlicensed spectrum has generated an unprecedented radio revolution and been hugely beneficial for everybody.

But, surprisingly, it might be good for car communications too. The people in the ITS community certainly don't think so. They're shocked, and see this as a massive setback. They've invested huge amounts of effort, and whole careers, in the DSRC and V2V concepts, and see it all as being taken away or seriously impeded. But here's why it might be the best thing ever to happen to V2V.

The innovation in mobile devices and wireless protocols over the last decade or two is a shining example for all of technology. Compare today's mobile handsets with those of 10 years ago, when the Treo was just starting to make people think about smartphones. (Go back a couple more years and there weren't any smartphones at all.) Every year there are huge strides in hardware and software, and as a result people happily throw away perfectly working phones every 2 years (or less) to get the latest, even without subsidies. Compare that to the electronics in cars. There is little in your car that wasn't planned many years ago, and usually nothing changes over the 15-20 year life of the car. Car vendors are only now toying with the idea of field upgrades and over-the-air updates.

Car vendors love to sell you fancy electronics for your center console. They can get thousands of dollars for the packages — packages that often do less than a $300 phone and quickly become obsolete. But customers have had enough, and are now forcing the vendors to give up on owning the online experience in the car and cede it to the phone. They're even getting ready to cede their "telematics" (things like OnStar) to customers' phones.

I propose this: Move all the connected car (V2V, V2I etc.) goals into the personal mobile device. Forget about the mandate in cars.

The car mandate would have started getting deployed late in this decade. And it would have been another decade before deployment got seriously useful, and another decade until deployment was over 90%. In that period, new developments would have made all the decisions of the 2010s wrong and obsolete. In that same period, personal mobile devices would have gone through a dozen complete generations of new technology. Can there be any debate about which approach would win?

V2V vs. the paths to a successful networked technology (Part 1)

A few weeks ago, in my article on myths, I wrote about why the development of "vehicle to vehicle" (V2V) communications is mostly orthogonal to that of robocars. That's very far from the view of many authors, and of most in the ITS community. I remain puzzled by the V2V plan and how it might actually come to fruition. Because there is real value in V2V, and we would like to see that value realized in the future, I am afraid the current strategy will not work out and will thus misdirect a lot of resources.

This is particularly apropos because the FCC recently issued an NPRM saying it wants to open up the DSRC band at 5.9GHz, which was meant for V2V, for unlicensed WiFi-style use. This has been anticipated for some time, but the ITS community is concerned about losing the band it received in the late 90s but has yet to use in anything but experiments. The demand for new unlicensed spectrum is, quite appropriately, very large — the opening up of 2.4GHz decades ago generated the greatest period of innovation in the history of radio — and the V2V community has a daunting task resisting it.

In this series I will examine where V2V approaches went wrong and what they might do to still attain their goals.


I want to begin by examining what it takes to make a successful cooperative technology. History has many stories of cooperative technologies (either peer-to-peer or using central relays) that grew, some of which managed to do so in spite of appearing to need a critical mass of users before they were useful.

Consider the rise and fall of fax (or, for that matter, the telephone itself). Many of us did not get a fax machine until it was clear that lots of people had fax machines and others were routinely asking us to send or receive faxes. But somebody had to buy the first fax machine — in fact, others had to buy the first million fax machines — before this could start happening.

This was not a problem because, while one fax machine is useless, two are quite useful to a company with a branch office. Fax started with pairs or small networks of machines, and one day two companies noticed they both had fax and started communicating inter-company instead of intra-company.

So we see rule one: the technology has to have strong value to the first purchasers. Use by a small number of people (though not necessarily just one) needs to be able to financially justify itself. This can be high-cost, high-value "early adopter" value, but it must be real.

This was true for fax, e-mail, the phone and many other systems, but a second principle has applied in many of the historical cases. Most, but not all, systems were able to build themselves on top of an underlying layer that already existed for other reasons. Fax came on top of the telephone. E-mail came on top of the phone network and later the internet. Skype came on top of the internet and PCs. The underlying system made it possible for two people to adopt a technology that was useful to just those two, and the two people could be anywhere. Any two offices could get a fax machine or an e-mail system and communicate; only the ordinary phone was needed.

The ordinary phone had it much harder. To join the phone network in the early days you had to go out and string physical wires. But anybody could still do it, and once they did, they got the full value they were paying for. They didn't pay for phone wires in the hope that others would some day also pay for wires so they could talk to them — they found enough value in calling the people already on the network.

Social networks are also interesting. There is a strong critical-mass factor there, but a social network is useful to a small group of friends who join. It is not necessary that other people's social groups join, at least not at first. And social networks have the advantage of viral spreading — the existing infrastructure of e-mail lets one person invite all their friends to join in.

Enter Car V2V

Car V2V doesn't satisfy these rules. There is no value for the first person to install a V2V radio, and very tiny value for the first thousands of people. An experiment is going on in Ann Arbor with 3,000 vehicles, all belonging to people who work in the same area, and another experiment in Europe will equip several hundred vehicles.

Top Myths of Robocars (and why V2V is not the answer)

There’s been a lot of press on robocars in the last few months, and a lot of new writers expressing views. Reading this, I have encountered a recurring set of issues and concerns, so I’ve prepared an article outlining these top myths and explaining why they are not true.

Perhaps of strongest interest will be one of the most frequent statements — that Vehicle to Vehicle (V2V) communication is important, or even essential, to the deployment of robocars. The current V2V (and Vehicle to Infrastructure) efforts, using the DSRC radio spec are quite extensive, and face many challenges, but to the surprise of many, this is largely orthogonal to the issues around robocars.

So please read The top 10 (or so) myths of robocars.

They are:

  • They won’t be safe
  • The big issue is who will be liable in a crash
  • The cars will need special dedicated roads and lanes
  • This only works when all cars are robocars and human driving is banned
  • We need radio links between cars to make this work
  • We won't see self-driving cars for many decades
  • It is a long time before this will be legal
  • How will the police give a robocar a ticket?
  • People will never trust software to drive their car
  • They can’t make an OS that doesn’t crash, how can they make a safe car?
  • We need the car to be able to decide between hitting a schoolbus and going over a cliff
  • The cars will always go at the speed limit

You may note that this is not my first myths FAQ, as I also have Common objections to Robocars written when this site was built. Only one myth is clearly in both lists, a sign of how public opinion has been changing.

Larry Niven and Greg Benford on "Bowl of Heaven" and Big, Dumb Objects

Last month, I invited Gregory Benford and Larry Niven, two of the most respected writers of hard SF, to come and give a talk at Google about their new book "Bowl of Heaven." Here's a YouTube video of my session. They reviewed the history of SF about "big dumb objects" — stories like Niven's Ringworld, where a huge construct is central to the story.

My interview with Vernor Vinge

Vernor Vinge is perhaps the greatest writer of hard SF and computer-related SF today. He has won five Hugo awards, including three in a row for best novel (nobody has won four in a row), and his novels have inspired many real technologies in cyberspace, augmented reality and more.

I invited him up to speak at Singularity University, but before that he visited Google to talk in the Authors@Google series. I interviewed him about his career and major novels and stories, including True Names, A Fire Upon the Deep, Rainbows End and his latest novel The Children of the Sky. We also talked about the concept of the Singularity, for which he coined the term.