Archives

More about stolen bitcoins

Yesterday, I wrote about stolen bitcoins and the issues around a database of stolen coins. The topic is complex, so today I will add some follow-up thoughts.

When stolen property changes hands (innocently), the law says that nobody in the chain had authority to transfer title to that property. Let’s assume that the law accepts bitcoins as property, and bitcoin transactions as denoting transfer of title (as well as possession/control) to it. So with a stolen bitcoin, the final recipient is required under the law to return possession of the coin to its rightful owner, the victim of the theft. However, that recipient is also now entitled to demand back whatever they paid for the bitcoin, and so on down the line, all the way to the thief. With anonymous transactions, that’s a tall order, though most real-world transactions are not that anonymous.

This is complicated by the fact that almost all Bitcoin transactions mix coins together. A Bitcoin “wallet” doesn’t hold bitcoins, rather it holds addresses which were the outputs of earlier transactions, and those outputs were amounts of bitcoin. When you want to do a new transaction, you do two things:

  1. You gather together enough addresses in your wallet which hold outputs of prior transactions, which together add up to as much as you plan to spend, and almost always a bit more.
  2. You write a transaction that lists all those old outputs as “inputs” and then has a series of outputs, which are the addresses of the recipients of the transaction.

There are typically 3 (or more) outputs on a transaction:

  1. The person you’re paying. The output is set to be the amount you’re paying.
  2. Yourself. The output is the “change” from the transaction since the inputs probably didn’t add up exactly to the amount you’re paying.
  3. Any amount left over — normally small and sometimes zero — which does not have a specific output, but is given as a transaction fee to the miner who put your transaction into the Bitcoin ledger (blockchain.)

They can be more complex, but the vast majority work like this. While normally you pay the “change” back to yourself, the address for the change can be any new random address, and nothing in the ledger connects it to you.
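To make the mechanics concrete, here is a minimal Python sketch of the bookkeeping described above. It is purely illustrative, with made-up field names, and omits the scripts and signatures a real transaction needs:

```python
# Sketch of the input/output bookkeeping described above (illustrative only).

def build_transaction(utxos, pay_address, pay_amount, change_address, fee):
    """Select prior outputs (UTXOs) and produce the typical outputs."""
    selected, total = [], 0
    for utxo in utxos:                    # gather inputs until we cover the spend
        selected.append(utxo)
        total += utxo["amount"]
        if total >= pay_amount + fee:
            break
    if total < pay_amount + fee:
        raise ValueError("insufficient funds")
    change = total - pay_amount - fee     # whatever is left over comes back to us
    outputs = [(pay_address, pay_amount)]
    if change > 0:
        outputs.append((change_address, change))
    # The fee has no explicit output: it is simply inputs minus outputs,
    # and the miner who includes the transaction collects it.
    return {"inputs": selected, "outputs": outputs}

wallet = [{"txid": "a1", "amount": 0.5}, {"txid": "b2", "amount": 0.8}]
print(build_transaction(wallet, "addr_merchant", 1.0, "addr_change", 0.0001))
```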

So as you can see, a transaction might combine a ton of inputs, some of which are clean, untainted coins, some of which are tainted, and some of which are mixed. After coins have been through a lot of transactions, the mix can be very complex. Not so complex that computers can’t deal with it and calculate a precise fraction of the total coin that was tainted, but much too complex for humans to want to track by hand.

A thief will want to mix up their coins as quickly as possible, and there are a variety of ways to do that.

Right now, the people who bought coins at Mt.Gox (or those who sent them there to buy other currency) are the main victims of this heist. They thought they had a balance there, and it’s gone. Many of them bought these coins at lower prices, and so their loss is not nearly as high as the total suggests, but they are deservedly upset.

Unfortunately, if the law does right by them and recovers their stolen property, the recovery will likely come at the expense of the whole Bitcoin-owning and -using community, because everybody in the chain is liable. Of particular concern are the merchants who are taking bitcoin on their web sites. Let’s speculate on the typical path of a stolen coin that’s been around for a while:

  • It left Mt.Gox for cash, sold by the thief, and a speculator simply held onto the coins. That’s the “easy” case: the person who now has stolen coins has to find the thief and get their money back. Not too likely, but legally clear.
  • It left Mt.Gox and was used in a series of transactions, ending up with one where somebody bought an item from a web store using bitcoin.
  • With almost all stores, the merchant system takes all bitcoin received and sells it for dollars that day. Somebody else — usually a bitcoin speculator — paid dollars for that bitcoin that day, and the chain continues.

There is the potential here for a lot of hassle. The store learns it sold partially tainted bitcoins. The speculator wants, and is entitled to, a portion of her money back, and the store is an easy target to go after. The store now has to go after its customer for the missing money. The store also probably knows who its customer is. The customer may have less knowledge of where her bitcoins came from.

This is a huge hassle for the store, and might very well lead stores to reverse their decisions to accept bitcoin. If 6% of all bitcoins are stolen, as alleged in the Mt.Gox heist, most transactions are tainted. 6% is an amount worth recovering for many, and it’s probably all the profit at a typical web store. Worse, the number of stolen coins may be closer to 15% of all circulating bitcoins, certainly something worth recovering on many transactions.

The “sinking taint” approach

Previously, I suggested a rule: if a transaction merges inputs which are variously reported as stolen (tainted) and not, the total tainted percentage is calculated, the first outputs receive all the taint, and the later outputs (including the transaction fee, last of all) are marked clean. One of the outputs would remain partially tainted unless the transaction was designed to avoid this. There is no inherent rule that the “change” comes last; it is just a custom, and it would probably be reversed, so that as much of the tainted fraction as possible remains in the change, and the paid amount is as clean as possible. Recipients would want to insist on that.
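Here is a small Python sketch of how that allocation could work, as an illustration of the rule described above rather than any agreed-upon specification:

```python
# Sketch of the "sinking taint" rule: inputs carry a tainted fraction, and the
# combined tainted amount is poured into the earliest outputs first, so at
# most one output ends up partially tainted.

def sink_taint(inputs, outputs):
    """inputs: list of (amount, tainted_fraction); outputs: list of amounts,
    with the fee treated as the last output. Returns each output's taint."""
    tainted = sum(amt * frac for amt, frac in inputs)   # total stolen value present
    fractions = []
    for amount in outputs:
        t = min(amount, tainted)       # earliest outputs absorb the taint
        fractions.append(t / amount if amount else 0.0)
        tainted -= t
    return fractions

# 1.0 BTC fully stolen mixed with 2.0 BTC clean, paid to outputs of 0.8 and
# 2.1 plus a 0.1 fee: the 0.8 output is fully tainted, the 2.1 output is
# about 9.5% tainted, and the fee is clean.
print(sink_taint([(1.0, 1.0), (2.0, 0.0)], [0.8, 2.1, 0.1]))
```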

This allows the creation of a special transaction that people could do with themselves on discovering they have coin that is reported stolen. The transaction would split the coin precisely into one or more purely tainted outputs, and one or more fully clean outputs. Recipients would likely refuse bitcoin with any taint on it at all, and so holders of bitcoin would be forced to do these dividing transactions. (They might have to do them again if new theft reports come in on coin they own.) People would end up doing various combinations of these transactions to protect their privacy and not publicly correlate all their coin.

Tainted transaction fees?

The above system makes the transaction fee clean if any of the coin in the transaction is clean. If this is not done, miners might not accept such transactions. On the other hand, there is an argument that it would be good if miners refused even partially tainted transactions, other than the ones above used to divide the stolen coins from the clean. There would need to be a rule that allows a transaction to be declared a splitting transaction which pays its fees from the clean part. In this case, as soon as coins had any taint at all, they would become unspendable in the legit markets and it would be necessary to split them. They would still be spendable with people who did not accept this system, or in some underground markets, but they would probably convert to other currencies at a discount.

This works better if there is agreement on the database of tainted coins, but that’s unlikely. As such, miners would decide what databases to use. Anything in the database used by a significant portion of the miners would make those coins difficult to spend and thus prime for splitting. However, if they are clean in the view of a significant fraction of the miners, they will enter the blockchain eventually.

This is a lot of complexity, much more than anybody in the Bitcoin community wants. The issue is that if the law gets involved, there is a world of pain in store for the system and its merchants if a large fraction of all circulating coins are reported as stolen in a police report, even a Japanese police report.

What if somebody steals a bitcoin?

Bitcoin has seen a lot of chaos in the last few months, including being banned in several countries, the fall of the Silk Road, and biggest of all, the collapse of Mt. Gox, which was, for much of Bitcoin’s early history, the largest (and only major) exchange between regular currencies and bitcoins. Most early “investors” in bitcoin bought there, and if they didn’t move their coins out, they now greatly regret it.

I’ve been quite impressed by the ability of the bitcoin system to withstand these problems. Each has caused major “sell” days but it has bounced back each time. This is impressive because nothing underlies bitcoins other than the expectation that you will be able to use them into the future and that others will take them.

It is claimed (though doubted by some) that most of Mt.Gox’s bitcoins — 750,000 of them or over $400M — were stolen in some way, either through thieves exploiting a bug or some other means. If true, this is one of the largest heists in history. There are several other stories of theft out there as well. Because bitcoin transactions can’t be reversed, and there is no central organization to complain to, theft is a real issue for bitcoin. If you leave your bitcoin keys on your networked devices, and people get in, they can transfer all your coins away, and there is no recourse.

Or is there?

If you sell something and are paid in stolen money, there is bad news for you, the recipient of the money. If this is discovered, the original owner gets the money back. You are out of luck for having received stolen property. You might even be suspected of being involved, but even if you are entirely innocent, you still lose.

All bitcoin transactions are public, but the identities of the parties are obscured. If your bitcoins are stolen, you can stand up and declare they were stolen. More than that, unless the thief wiped all your backups, you can 99.9% prove that you were, at least in the past, the owner of the allegedly stolen coins. Should society accept bitcoins as money or property, you would be able to file a police report on the theft, and identify the exact coin fragments stolen, and prove they were yours, once. We would even know “where” they are today, or see every time they are spent and know who they went to, or rather, know the random number address that owns them now in the bitcoin system. You still own them, under the law, but in the system they are at some other address.

That random address is not inherently linked to its new holder, but as the coins are spent and re-spent, they will probably find their way to a non-anonymous party, like a retailer, from whom you could claim them back. Retailers, exchanges and other legitimate parties would not want this; they don’t want to take stolen coins and lose their money. (Clever recipients generate a new address for every transaction, but others use publicly known addresses.)

Tainted coin database?

It’s possible, not even that difficult, to create a database of “tainted” coins. If such a database existed, people accepting coins could check if the source transaction coins are in that database. If there, they might reject the coins or even report the sender. I say “reject” because you normally don’t know what coins you are getting until the transaction is published, and if the other party publishes it, the coins are now yours. You can refuse to do your end of the transaction (i.e. not hand over the purchased goods) or even publish a transaction “refunding” the coins back to the sender. It’s also possible to imagine that the miners could refuse to enter a transaction involving tainted coins into the blockchain. (For one thing, if the coins are stolen, they won’t get their transaction fees.) However, as long as some miner comes along willing to enter it, it will be recorded, though other miners could refuse to accept that block as legit.
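As a sketch, the check a merchant might run is very simple (the database, its lookup keys and the identifiers here are all hypothetical):

```python
# Sketch of how a merchant might consult a tainted-coin database before
# releasing goods. Hypothetical data and API; no such standard database exists.

TAINT_DB = {("deadbeef", 0), ("c0ffee", 1)}   # (txid, output_index) pairs reported stolen

def payment_is_clean(tx_inputs):
    """tx_inputs: list of (txid, output_index) pairs the payer is spending."""
    return not any(inp in TAINT_DB for inp in tx_inputs)

incoming = [("deadbeef", 0), ("123abc", 2)]
if payment_is_clean(incoming):
    print("ship the goods")
else:
    print("refuse: refund the coins or report the sender")
```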

What governments should do to help and regulate robocars

In my recent travels, I have often been asked what various government entities can and should do related to the regulation of robocars. Some of them want to consider how to protect public safety. Most of them, however, want to know what they can do to prepare their region for the arrival of these cars, and ideally to become one of the leading centres in the development of the vehicles. The car industry is about to be disrupted, and most of the old players may not make it through to the new world. The ground transportation industry is so huge (I estimate around $7 trillion globally) that many regions depend on it as a large component of their economy. For some places it’s vital.

But there are many more questions than that, so I’ve prepared an essay covering a wide variety of ways in which policymakers and robocars will interact.

Read: Governments, The Law and Robocars

US push to mandate V2V radios -- is it a good choice?

It was revealed earlier this month that NHTSA wishes to mandate vehicle-to-vehicle radios in all cars. I have written extensively on the issues around this and regular readers will know I am a skeptic of this plan. This is not to say that V2V would not be useful for robocars and regular cars. Rather, I believe that its benefits are marginal when it comes to the real problems, and for the amount of money that must be spent, there are better ways to spend it. In addition, I think that similar technology can and will evolve organically, without a government mandate, or with a very minimal one. Indeed, I think that technology produced without a mandate or pre-set standards will actually be superior, cheaper and be deployed far more quickly than the proposed approach.

The new radio protocol, known as DSRC, is a point-to-point wifi-style radio protocol for cars and roadside equipment. There are many applications. Some are “V2V,” which means cars report what they are doing to other cars. This includes reporting one’s position, track log and speed, as well as events like hitting the brakes or flashing a turn signal. Cars can use this to track where other cars are, and warn of potential collisions, even with cars you can’t see directly. Infrastructure can use it to measure traffic.

The second class of applications is “V2I,” which means a car talks to the road. This can be used to know traffic light states and timings, get warnings of construction zones and hazards, implement tolling and congestion charging, and measure traffic.

This will be accomplished by installing a V2V module in every new car which includes the radio, a connection to car information and GPS data. This needs to be tamper-proof, sealed equipment and must have digital certificates to prove to other cars it is authentic and generated only by authorized equipment.

Robocars will of course use it. Any extra data is good, and the cost of integrating this into a robocar is comparatively small. The questions revolve around its use in ordinary cars. Robocars, however, can never rely on it. They must be fully safe based on just their sensors, since you can’t expect every car, child or deer to have a transponder, ever.

One issue of concern is the timeline for this technology, which will look something like this:

  1. If they’re lucky, NHTSA will get this mandate in 2015, and stop the FCC from reclaiming the currently allocated spectrum.
  2. Car designers will start designing the tech into new models, however they will not ship until the 2019 or 2020 model years.
  3. By 2022, the 2015-designed technology will be seriously obsolete, and new standards will be written, which will ship in 2027.
  4. New cars will come equipped with the technology. About 12 million new cars are sold per year.
  5. By 2030, about half of all cars have the technology, and so it works in 25% of accidents. 3/4 of those will have the obsolete 2015 technology or need a field-upgrade. The rest will have soon to be obsolete 2022 technology. Most cars also have forward collision warning by this point, so V2V is only providing extra information in a tiny fraction of the 25% of accidents.
  6. By 2040 almost all cars have the technology, though most will have older versions. Still, 5-10% of cars do not have the technology unless a mandate demands retrofit. Some cars have the equipment but it is broken.

Because of the quadratic network effect, in 2030 when half of cars have the technology, only 25% of car interactions will make use of it, since both cars must have it. (The number is, to be fair, somewhat higher, as new cars drive more than old cars.)
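The arithmetic behind that 25% figure, under the simplifying assumption that all cars drive equally:

```python
# Both cars in an interaction must be equipped, so coverage is roughly the
# square of the adoption rate (ignoring that new cars drive more than old ones).
for adoption in (0.25, 0.5, 0.9):
    print(f"{adoption:.0%} of cars equipped -> {adoption**2:.1%} of interactions covered")
```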

More World Tour: Dubai, Singapore

The Robocars world tour continues. Monday I will speak on robocars at the UAE Government conference in Dubai, where I just landed. Then it’s off to talk about them at a private event in Singapore, but I’ll also visit teams there. If I have time, I will check out Masdar — what was originally going to be the first all-robocar city — while in the UAE.

When I get back I will have more on some new announcements, particularly the vehicle-to-vehicle communications plan announcement, and new teams forming up. For my views on the V2V issue, you can read the three-part series I wrote last year, V2V and how to build a networked technology.

Virtual window in cruise ship comes to life

Very long-time readers of this blog will remember a proposal I made 10 years ago that cruise ship inside cabins use HDTVs with the outside view. Now a cruise ship is launching with such a system, though bigger than I proposed.

The Royal Caribbean vessel will feature an artificial balcony using an 80 inch screen including a fake railing. While the cameras used are 4K, I suspect the screens will only be HDTV, since 4K 80 inch screens are hugely expensive right now, though very shortly they will be quite affordable for this.

It will be interesting to see if the virtual balcony approach does much better than just using something meant to look like a window, which frankly would be a bunch easier, though it would not get that 3D effect from the railing. (The fact that the image and railing are at the same focus distance may actually complicate things.) I think an interesting approach would be instead to use a screen with infinity optics, which make the screen focus as though it is at infinity. This requires space outside the room, which you could get by having two adjoining cabins each take a box out of the other cabin for the mirror and lenses. (Though doing really good collimated light takes a lot of space, which is at too much of a premium in a cruise ship, though perhaps not as much in interior cabins.)

The sample photo shows a rather large stateroom — usually interior rooms are small and for those who can’t afford a window, but this might change. One reason people tolerate interior rooms is that they plan to spend very little waking time in the room, but even so, it is disconcerting not to have the subtle cues of real exposure to day and night, and to wake up not knowing what time it is. A small, enclosed, windowless space feels far more closed in than a large interior space. As I pointed out before, having a view of the real horizon helps a lot with seasickness.

If this is a success, it could lead to several things:

  • Ability to sell many more interior rooms, making better use of space in the middle of the very wide ships desired today.
  • Better use of low, central cabins, which have the least sway but in the past were not popular with the seasick, because seasickness is much worse without a window.
  • People might actually choose a larger, interior cabin at the same price as a much smaller, exterior cabin. Even if you plan to spend only modest time awake in your cabin, life in a larger cabin is more pleasant.
  • Virtual walls could be put on multiple sides of the cabin, so you get the illusion of an owner’s suite, with views in all directions.

To really get a super effect, you could even have people wear 3D glasses in the cabin — polarized ones that double as sunglasses if you can make the screens bright enough. These allow you to do a special trick if there is only one person in the room, which makes the screens simulate parallax, so that as you move your head, the background moves as though you are really looking through a window. Most ocean scenes are not very 3D themselves. It is debatable whether this would be good enough for people to find it worth wearing the glasses, and of course there is the issue of dealing with only one person in the room. You can handle 2 people in the room if you have shutter glasses, very bright screens, and 240Hz or faster displays. Handling 2 is probably enough — turn the effect off the very rare times you have guests.
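For the curious, the geometry of that parallax trick is simple similar triangles. A toy Python sketch, with assumed distances:

```python
# Toy parallax calculation for a head-tracked virtual window (illustrative).
# An object a distance D beyond the screen should shift on-screen as the
# viewer's head moves, approaching "moves with the head" as D grows large.

def on_screen_shift(head_shift_m, viewer_to_screen_m, object_beyond_screen_m):
    """Where the object's image lands on the screen plane, relative to where
    it was with the head centered (simple similar-triangles model)."""
    d, s = object_beyond_screen_m, viewer_to_screen_m
    return head_shift_m * d / (s + d)

# Head moves 0.3 m; screen 2 m away.
print(on_screen_shift(0.3, 2.0, 5.0))    # ~0.21 m for a wave 5 m out
print(on_screen_shift(0.3, 2.0, 1e9))    # ~0.30 m for the horizon
```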

Finally, I would even wonder if it made sense to pipe in outside air on demand.

4K displays can get close to eye resolution depending on the viewing distance. Interior cabins on cruise ships are dismal places, and so if this can make them more palatable, it can be financially worthwhile.

Disney has also been doing this since 2010, I have learned, with a virtual porthole. They also add animations to the video (of Disney Characters peeking in the window) which presumably the kids like. Reports are this has caused a major boost in their inside cabin sales.

End the redirect wrapper on links

A lot of sites, most notably search engines like Google, like to rewrite all the links on their pages. So search for this page and instead of http://ideas.4brad.com, the link Google gives you is http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=short-string&url=http%3A%2F%2Fideas.4brad.com%2F&ei=med-string&usg=huge-string&bvm=short-string or similar. (I have redacted the actual codes.)

What’s happening is that when you click on the link, you really go to Google. Google records what you clicked on and other parameters related to the search so they can study just how people use their search engine, what they click on and when. It’s a reasonable thing for them to want to study, though a potential privacy invasion.

Because each click goes through Google, your clicks are slowed down. Because Google has such huge resources, and is almost never down, you usually don’t notice it, though even with Google you will see the delay on slow links, like mobile GPRS and Edge connections. It also means you can’t easily cut and paste links from search results.

Other sites are not as good. They sometimes noticeably slow down your click. Worse, they sometimes break it. For example, on my phone, when I click on links in LinkedIn messages, as well as Facebook ones, which are also redirected, it doesn’t work if I’m not currently logged in to those sites. Due to some bad code, it also wants to send the link to the mobile apps of these sites, which is not what I want. (The one for LinkedIn is particularly broken, as it doesn’t seem to know where the app is, and sends me to the Play store to install it even though it is already installed.)

In other words, these links break the web from time to time. They can also interfere with spiders. On the plus side, they can be set to protect your privacy by hiding data in the REFERER field from the target web site. For sites that have been identified as malicious, they can provide a warning.

To fix this, sites can change all their links to use javascript. The href can point at the real target, with associated onClick javascript sending a web hit back to the server with the logging info.

A better solution would be to push use of the “ping” attribute in the HTML spec, which allows links to have both an href to the target and another URL which gets invoked when the link is clicked. Running in the background, this would not slow down your click, or break it. Browsers could also elect to block it, which the sites might not like but is good for users. Links to malicious sites could be treated differently if that’s part of the service. There would also be no need to fake the status window when moving the mouse over the link, as must be done with redirects.
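On the server side, handling a ping is trivial. Here is a sketch using Flask (the endpoint name is made up; the Ping-From and Ping-To headers are what the HTML spec calls for, subject to browser support and referrer policy):

```python
# Sketch of the server side of the "ping" approach: the browser fires a
# background POST to this endpoint when a pinged link is clicked, so the
# click itself goes straight to the target.
from flask import Flask, request

app = Flask(__name__)

@app.route("/click-log", methods=["POST"])
def click_log():
    # Browsers send source and destination in Ping-From / Ping-To headers
    # (subject to referrer policy); the body is just the text "PING".
    log_entry = {
        "from": request.headers.get("Ping-From"),
        "to": request.headers.get("Ping-To"),
    }
    print("link clicked:", log_entry)     # real code would record this somewhere
    return ("", 204)                      # nothing to show; the click already went through

# The markup side would be: <a href="http://example.com/" ping="/click-log">...</a>
if __name__ == "__main__":
    app.run()
```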

Let’s say no to all these redirects.

Satoshi, is now the time to consider donating lots of bitcoin to charity?

I don’t know who the person or people are who, under the name Satoshi Nakamoto, created the Bitcoin system. The creator(s) want to keep their privacy, and given the ideology behind Bitcoin, that’s not too surprising.

There can only be 21 million bitcoins. It is commonly speculated that Satoshi did much of the early mining, and owns between 1 million and 1.5 million unspent bitcoins. Today, thanks in part to a speculative bubble, bitcoins are selling for $800, and have been north of $1,000. In other words, Satoshi has nearly a billion dollars’ worth of bitcoin. Many feel that this is not an unreasonable thing, that a great reward should go to Satoshi for creating such a useful system.

For Satoshi, the problem is that it’s very difficult to spend more than a small portion of this block, possibly ever. Bitcoin addresses are generally anonymous, but all transactions are public. Things are a bit different for the first million bitcoins, which went only to the earliest adopters. People know those addresses, and the ones that remain unspent are commonly believed to be Satoshi’s. If Satoshi starts spending them in any serious volume, it will be noticed and will be news.

The fate of Bitcoin

Whether Bitcoin becomes a stable currency in the future or not, today few would deny that it is unstable and undergoing speculative bubbles. Some think that because nothing backs the value of bitcoins, it will never become stable, but others are optimistic. Regardless of that, today the value of a bitcoin is fragile. The news that “Satoshi is selling his bitcoins!” would trigger panic selling, and that’s bad news in any bubble.

If Satoshi could sell, it is hard to work out exactly when the time to sell would be. Bitcoin has several possible long term fates:

  1. It could become the world’s dominant form of money. If it replaced all of the “M1” money supply in the world (cash and very liquid deposits) a bitcoin could be worth $1 million each!
  2. It could compete with other currencies (digital and fiat) for that role. If it captured 1% of world money supply, it might be $10,000 a coin. While there is a limit on the number of bitcoins, the limit on the number of cryptocurrencies is unknown, and as bitcoin prices and fees increase, competition is to be expected.
  3. It could be replaced by one or more successors of superior design, with some ability to exchange during a modest window, and then drift down to minimal value.
  4. It could collapse entirely and quickly in the face of government opposition, competition and other factors during its bubble phase.
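The valuations in options 1 and 2 above are just division over the 21 million coin cap. The world M1 figure below is an assumption implied by those numbers:

```python
# Rough division behind the valuations above, assuming world M1 of ~$21 trillion.
TOTAL_COINS = 21e6
world_m1 = 21e12
print(world_m1 / TOTAL_COINS)          # ~$1,000,000/coin if bitcoin were all of M1
print(0.01 * world_m1 / TOTAL_COINS)   # ~$10,000/coin at a 1% share
```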

My personal prediction is #3 — that several successor currencies will arise which fix issues with Bitcoin, with exchange possible for a while. However, just as bitcoins had their sudden rushes and bubbles, so will this exchange rate, and as momentum moves into this currency it could move very fast. Unlike exchanges that trade bitcoins for dollars, inter-cryptocurrency exchanges will be fast (though the settlement times of the currencies will slow things down.) It could be even worse if the word got out that “Satoshi is trading his coins for [Foo]Coin” as that could cause complete collapse of Bitcoin.

Perhaps he could move some coins through randomizing services that scramble the identity association, but moving the early coins to such a system would be seen as selling them.

Surprising math on Obamacare levels: Go for the Bronze!

Recently I learned from health.net, the insurer which did my individual plan, that they were canceling it. I’m one of those who lost his health plan with the switch to the ACA (Obamacare) plans, so I need to shop in the healthcare marketplace and will likely end up paying more.

What surprised me when I went to the marketplace was the math of the plans. For those who don’t know, there are 4 main classes of plans (Bronze, Silver, Gold, Platinum) which are roughly the same for all insurers. There is also a 5th, “Catastrophic” plan available to under-30s and hardship cases, which is cheaper and covers even less than Bronze. Low income people get a great subsidized price in the marketplace, but people with decent incomes get no subsidy.

The 4 plans are designed so that for the average patient, the insurer will end up paying 60% (Bronze), 70% (Silver), 80% (Gold) or 90% (Platinum) of health care costs, with the patient, on average, bearing the rest. All plans come with a “Maximum out of pocket” (MOOP) that is at most $6,350 for all plans but $4,000 (or less) for the Platinum.

Here’s some analysis based on California prices and plans. The other states can vary a fair bit. Insurance is much cheaper in some regions, and there are plans that use moderately different formulae. In every state the MOOP is no more than $6,350 and the actuarial percentages are the same.

As you might expect, the Platinum costs a lot more than the Bronze. But at my age, in my early 50s, I was surprised how much more. I decided to plug in numbers for Blue Cross, which is actually slightly cheaper than many of the other plans. I actually have little information with which to compare the companies. This is quite odd — my health insurance is going to be my biggest annual expenditure after my mortgage. More than my car — but there’s tons of information to help you choose a car. (Consumer Reports does have a comparison article on the major insurance companies before the ACA for their subscribers.)

The Platinum plan costs $350/month extra over Bronze, $4200/year. Almost as much as the MOOP. So I decided to build a spreadsheet that would show me what I would end up paying on each plan in total — premiums plus my personal outlays. Here is the sheet for me in my early 50s:

The X axis is how much your health care actually cost, i.e. what your providers were paid. The Y axis is how much you had to pay. The green line is unity, with your payout equal to the cost, as might happen in theory if you were uninsured. In theory, because in reality uninsured people pay a “list price” that is several times the cost that insurance companies negotiate. Also in theory because those uninsured must pay a tax penalty.

All the plans go up at one rate until they first hit your deductibles (Bronze/Silver) and then at a slower rate until you hit your MOOP. After the MOOP they are a flat line almost no matter what your health spending does. The Silver plan is the most complex. It has a $250 drug deductible and a $2000 general deductible and the usual $6,350 MOOP. In reality, these slopes will not be smooth lines. For example, on the silver plan if you are mostly doing doctor visits and labs, you do copays, not the deductible. If you hit something else, like MRI scans or hospitalization, you pay out the full cost until you hit the deductible. So each person’s slope will be different, but these slopes are meant to represent an estimate for average patients.
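Here is a simplified Python version of the spreadsheet model. The premiums, deductibles and coinsurance rates below are illustrative stand-ins, not actual quotes:

```python
# Rough model of the chart: total yearly outlay = premiums plus out-of-pocket,
# where out-of-pocket rises at full price up to the deductible, then at the
# coinsurance rate, and is capped at the MOOP. Plan numbers are hypothetical
# stand-ins for early-50s pricing (only the $350/month Bronze-to-Platinum gap
# is taken from the text).

def total_outlay(care_cost, monthly_premium, deductible, coinsurance, moop):
    if care_cost <= deductible:
        oop = care_cost
    else:
        oop = deductible + coinsurance * (care_cost - deductible)
    return 12 * monthly_premium + min(oop, moop)

plans = {                                  # (premium, deductible, coinsurance, MOOP)
    "Bronze":   (300, 5000, 0.40, 6350),
    "Silver":   (400, 2000, 0.30, 6350),
    "Platinum": (650,    0, 0.10, 4000),
}
for cost in (1000, 8400, 50000):
    row = {name: round(total_outlay(cost, *p)) for name, p in plans.items()}
    print(cost, row)
```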

The surprising thing about this chart is that the Bronze plan is pretty clearly superior. Only for a small region of costs does your outlay exceed the other plans, and never by much. However, in the most likely region for most people (modest health care) or the danger zone (lots of health care) it is quite a bit cheaper. The catastrophic plan, if you can get your hands on it over 30, is even better. It almost never does worse than the other plans.

I will note that the zone where Bronze is not the winner is around the $8,400 average cost of health care in the USA. However, what I really want to learn is the median cost, a statistic that is not readily available, or even better, the distribution of costs at each age cohort. The actuaries obviously know this, and I would like pointers to a source.

Premiums are tax deductible for the self-employed, as are large medical expenses for all, but the outlays above premiums can also come from a Health Savings Account (HSA) which is a special IRA-like instrument. You put in up to around $3K each year tax-free, and can pay the costs above from it. (You also don’t pay tax on appreciation of the account, and can draw out the money post-retirement at a decent rate.)

The chart suggests the Bronze plan is the clear winner unless you know you will be in the $6K to $10K zone where it’s a modest loser. It seems to beat the Platinum all the time (at least in this simplified model) but might have minor competition from the Silver. The Gold is essentially always worse than the Silver.

If we move to age 60, the win for Bronze is very clear. At age 60, the $5500 extra premium for Platinum almost exceeds the MOOP on the Bronze — the Bronze will always be cheaper. This makes no sense, and seems to be a result of the fact that the MOOP remains the same no matter how old you are (and is also the same for Bronze, Silver, Gold and Catastrophic). Perhaps varying the deductibles and the MOOP with age would have created more variety.

Here the Gold is clearly a loser to the Silver if you were thinking about it. Nobody in this age group should buy the Gold plan but I doubt the sites will say that. Platinum is almost as clearly a loss.

Thinking about money every time you use health care

With the choice for the older person so obvious, this opens up another question, namely one of psychology. The rational thing to do is to buy the Bronze plan. But with its $5,000 deductible, you will find yourself paying out of pocket for almost all your health care except in years you need major treatments and hospitalizations.

Michigan to build fake-downtown robocar test site

I’m working on a new long article about advice to governments on how they should react to and encourage the development of robocars.

An interesting plan announced today has something I had not thought of: Michigan is funding the development of a fake downtown to act as a test track for robocar development. The 32 acre site will be at the University of Michigan, and is expected to open soon — in time for the September ITS World Congress.

Part of the problem with my advice to governments is that my main recommendation is to get out of the way. To not try too hard both to help and to regulate, because even those of us trying to build the vehicles don’t have a certain handle on the eventual form of the technology.

A test track is a great idea, though. Test tracks are hugely expensive to make, entirely outside the means of small entrepreneurs. They immediately resolve most safety concerns for people just starting out — every team has had small runaway issues at the very start. Once past that, they can be shared; in fact, having multiple vehicles running the track can be a bonus rather than a problem.

Big car companies all have their own test tracks, but these are mostly real tracks, not urban streets. Several companies have built pre-programmed robotic cars which drive in specific patterns to test ADAS systems and robocars. The DARPA Urban Challenge was run on a fake set of urban streets on an old military base, so this idea goes back to the dawn of the modern field. (Old military bases are popular for this — Mythbusters used a California one for their test of blind and drunk driving.)

This track will probably bring teams to Michigan, which is what they want. Detroit is in trouble, and it knows it. Robocars are going to upend the car industry. Incumbent players are going to fall, and new players are going to rise, and that could be very bad news for Detroit.

My home province of Ontario is facing the same problem, to a lesser degree. A lot of the Ontario economy is in cars as well, and so they’ve started a plan to introduce testing legislation. I don’t think this is the right plan — testing is already legal with a good supervising driver in most jurisdictions, though I have not yet examined the Ontario code. Ontario has one big advantage over Michigan, though, in that it is also a high-tech centre. Right now the car companies in Detroit are finding it very difficult to convince high-tech stars to come move to Detroit, in spite of being able to offer high pay and the fact that you can literally get a mansion for the price of the downpayment on a nice San Francisco condo. Toronto doesn’t have the same problem — in fact it’s one of the most desired places to live for Canadians, and for people from all over the world. Ontario’s combination of high-tech and big automotive might end up doing well.

At least in Ontario, everybody will be motivated to solve the snow problem sooner than the California companies are.

Please, NBC, let us choose our audio for the Olympics, especially the opening

The Olympics are coming up, and I have a request for you, NBC Sports. It’s the 21st century, and media technologies have changed a lot. It’s not just the old TV of the 1900s.

Every year, you broadcast the opening ceremony, which is always huge, expensive and spectacular. But your judgment is that we need running commentary, even when music is playing or especially poignant moments are playing out. OK, I get that, perhaps a majority of the audience wants and needs that commentary. Another part of the audience would rather see the ceremony as is, with minimal commentary.

This being the 21st century, you don’t have to choose only one. Almost every TV out there now supports multiple audio channels — either via the SAP channel (where it still exists) or more likely through the multiple audio channels of digital TV. In addition, they all support multiple channels of captions, too.

So please give us the audio without your announcers on one of the alternate audio channels. Give us their commentary on a caption channel, so if we want to read it without interfering with the music, we can read it.

If you like, do a channel where the commentary is only on the left channel. Clever viewers can then mix the commentary at whatever volume they like using the balance control. Sure, you lose stereo, but this is much more valuable.

I know you might take this as an insult. You work hard on your coverage and hire good people to do it. And so you should — but give your viewers the choice when the live audio track is an important part of the event, as it is for the opening and closing ceremonies, medal ceremonies and a few other events.

Do you agree with me? If so, share your opinion with nbcolympicsfeedback@nbcuni.com.

Electric cars as peak grid power? Not small ones, but perhaps the Tesla

An article in the LA Times suggests an idea I’ve seen frequently — use electric car batteries to meet peak power demand on the grid. After all, you have a car, and it’s plugged in, and it has a big battery, so instead of just charging it, have it send juice back to the grid when it most needs it.

The reason this is attractive is that a large part of the cost of the grid is building it to handle the peak load. Most of the capital cost is for that, and fuel costs are based on the real, variable load. Softening the peak is very valuable to the power company — to the point that power companies give rebates and credits to people who do things that will soften that peak.

This is also one of the virtues of solar. It tends to provide power during the day, which is always when the peak is. However, solar peaks at noon, while the demand peak is the hottest part of the day, which tends to be later in the afternoon. The big peak tends to be around 4-6pm when it’s hot, and people have started turning on things in their houses to get ready for dinner. On the spot markets power costs the most then.

Contrast that with the night. Because nuclear plants and some big coal plants aren’t easy to dial back, they sometimes even produce more power than is being used, and end up discarding the power into giant resistors. That makes power at night cheap.

I’ve never seen it done, but there could even be merit in the idea of mounting fixed solar panels pointing west, so that they catch less power in the morning but do better in the later afternoon when the price of electricity is highest. I presume this doesn’t happen because net metering home owners don’t get access to the “true” spot power price which would justify this. If they are lucky they do get time-of-day metering so they sell power at a high price in the day and buy it cheap in the evening, but some don’t even get that. The harsh reality is that most grids were not built to have a lot of generation at the edges, and power companies are pushing back on net metering and grid-ties that feed back too much power. Indeed, for cost reasons here in California, people should size their solar systems to not quite meet needs, and buy the rest at the cheap “tier 1” price, rather than try to sell back.

Most solar panels are erected facing due south, tilted to the latitude which maximizes total kwh, but peaks at noon. Actually, most are mounted on a section of the roof that is closest to south. If you have to choose between SE and SW, it might be that SW is best, at least for the grid. (Sadly, a number of solar panels are mounted on the front of houses, even if that points north! People are more keen on looking good than doing good. I hope that’s rarer than I’ve been told.)

Anyway, back to the cars

There are a few issues with using the batteries in the car for the peak load.

  1. The peak time is unfortunately a very popular time for driving. People either want to drive in the late afternoon — it is called the rush hour for a reason — or they plan to drive soon and want their car’s battery to be full to meet their driving needs. They don’t want to find their car half-empty at 6pm because it sold power to the grid. A study of car usage patterns detailed the numbers.
  2. The batteries in cars are expensive. Charging and discharging the battery uses up its lifetime. We don’t know how long car batteries are going to last but a typical estimate is around 150,000 miles, or about 40,000 lifetime kwh. If it’s the 22kwh pack in the LEAF (which costs $12K or so today) that’s 27 cents/kwh lifetime. Plus the cost of the electricity that went in to be resold. The peak price ranges from 25-30 cents/kwh in the west but hits as much as 48 cents in New York. So it could be profitable in New York, but barely so. Big, heavy lead batteries are more cost effective.
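For reference, the wear arithmetic in point 2 works out like this (rough figures from the text; a $12K pack gives about 30 cents, in the neighbourhood of the 27-cent figure):

```python
# Amortizing the pack price over the energy it can deliver in its lifetime.
pack_price = 12_000          # ~$ for a 22 kwh LEAF-class pack at the time
lifetime_miles = 150_000
miles_per_kwh = 3.75         # implied by the text's ~40,000 lifetime kwh
lifetime_kwh = lifetime_miles / miles_per_kwh
print(lifetime_kwh)                  # ~40,000 kwh
print(pack_price / lifetime_kwh)     # ~$0.30 of battery wear per kwh delivered
```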

There are some factors, though, which could change this:

  • Battery packs will get cheaper, and their lifetimes will increase. That will drop the cost of putting a kwh into and out of a battery.
  • Cars like the Tesla Model S have huge batteries, far more than they actually need. This, it turns out, is quite wasteful, since you buy a lot of battery and rarely use it. If you know you don’t plan a 200 mile trip, you might tolerate your long-range car being half-empty at 6pm, and be happy to sell that excess capacity. You already paid for the capacity, after all, to give you that long-trip freedom. You will still shorten the battery life, but you’ll be paid for that.
  • Weather forecasts are getting quite accurate, so demand can be predicted and this managed better.
  • The car can also be a backup in the event of grid power outages. There, the 35 cent/kwh price (and loss of driving ability) are minor compared to the burden of having no power in your home.

Calling all cars!

Now, as you might expect on this blog, robocars are also game changers here. The inverters and equipment to feed power back to the grid are expensive, so most people won’t have them. But if the robocars have a means to plug in, they can bring the power to where it’s needed. A power company, seeing a brownout coming, could send out an alert on the net. “Calling all cars” — if you have spare capacity, we’ll buy it at the following rate. Please drive to the nearest two-way intertie and plug in soon. While ideally some sort of automatic connection would be possible, this could even be a charging lot with human staff who plug in the cars as they arrive and unplug them when they have to leave.

Such charging lots might well exist for cars that need charges at night or other non-peak times. Due to cost, cars will strongly wish to avoid charging at peak-cost times. This puts them to use then. Inductive charging also works (at a loss of about 10%) and robotic plug-in is actually quite doable — there are already robotic gasoline filling stations out there. A robocar charging lot could be dense-pack, valet style, so it need not take a lot of land. It would take megawatts — but that’s OK. The robots don’t care how convenient it is, so put it next to the transformer station.
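From the car’s side, the decision is a simple threshold test. A sketch, with every number an assumption:

```python
# Sketch of the "calling all cars" decision: respond to a broadcast price only
# if there is spare energy beyond the reserve needed for planned driving, and
# the price beats battery wear plus the price paid for the charge.

def should_sell(offered_price, battery_kwh, reserve_kwh,
                wear_cost_per_kwh, energy_cost_per_kwh):
    spare = battery_kwh - reserve_kwh
    profitable = offered_price > wear_cost_per_kwh + energy_cost_per_kwh
    return spare > 0 and profitable, max(spare, 0)

# A long-range pack with a short trip planned, grid offering 48 cents/kwh:
ok, kwh = should_sell(0.48, battery_kwh=42, reserve_kwh=15,
                      wear_cost_per_kwh=0.30, energy_cost_per_kwh=0.10)
print(ok, kwh)   # True, 27 kwh available to sell
```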

The Valley of Danger -- medium speed roads for robocars

With last week’s commercial release of the Navia, I thought I would release a new essay on the challenges of driving robocars at different speeds.

As the Navia shows, you can be safe if you’re slow. And several car company “traffic jam assist” products say the same thing. On the other end, we see demos taking place at highway speeds. But what about the middle range — decent speeds on urban streets?

Turns out that’s one of the harder problems, and so there is a “valley” in the chart which makes safe operation harder in that zone.

So read my more detailed essay on these challenges: The Valley of Danger for Robocars

Induct's "Navia" officially for sale for $250,000

A significant milestone was announced this week. Induct has moved their “Navia” vehicle into commercial production, and is now taking orders, though at $250,000 you may not grab your wallet.

This is the first commercial robocar. Their page of videos will let you see it in operation in European pedestrian zones. It operates unmanned, can be summoned and picks up passengers. It is limited to a route and stops programmed into it.

The “catch” is that it stays safe by going only 20km/h, where it is much harder for it to harm things. It’s aimed at the campus shuttle market, rather than going on public roads, but it drives on ordinary pavement, not requiring special infrastructure, since it localizes using a prepared laser map of the route.

Now 20km/h (12mph) is not very fast, though suitable for a campus shuttle. This slow speed and limited territory may make some skeptical that this is an important development, but it is.

  1. This is a real product, ready to deploy with civilians, without its own dedicated track or modified infrastructure.
  2. The price point is actually quite justifiable to people who operate shuttles today, as a shuttle with human driver can cost this much in 1.5 years or less of operation.
  3. It smashes the concept of the NHTSA and SAE “Levels” which have unmanned operation as the ultimate level after a series of steps. The Navia is at the final level already, just over a constrained area and at low speed. If people imagined the levels were a roadmap of predicted progress, that was incorrect.
  4. Real deployment is teaching us important things. For example, Navia found that once in operation, teenagers would deliberately throw themselves in front of the vehicle to test it. Pretty stupid, but a reminder of what can happen.

The low speed does make it much easier to make the vehicle safe. But now it becomes much easier to show that over time, the safe speed can rise as the technology gets better and better. (To a limit — see my article on the dangers at different speeds.)

The route limitation has two elements. The first is that they want to keep it only in safe locations, which makes sense for an early release. It also avoids legal issues. The second is simpler — they are using a map based approach, so they can only drive somewhere that has been mapped. Mapping means driving a scanner over the route and building a map of all the details, and then typically having humans confirm the map. This is the same way that the cars from Google and almost all other vendors do it when they are doing complex things that go beyond following lane markers on a highway. As such it is not that big a barrier. While building new infrastructure is hugely expensive, mapping it is much more modest in comparison, though non-trivial. Covering the whole world would take time, but it becomes possible to quickly add routes and destinations.

I single out the Navia because of its ability to drive without requiring any changes to the roads or extra infrastructure. Previous shuttle-style systems like the ULTra PRT at Heathrow (which I rode a couple of months ago), the Masdar PRT and earlier Cybercar projects all required a dedicated guideway or fenced-off ground track to run. While the Navia is being kept to private property for safety and legal reasons, there is no technical reason it could not operate in public spaces, which moves it from PRT into robocar territory.

The Navia is very much designed to be a shuttle. It is open-air and doesn’t really have seats, just padded bars to lean against. There is no steering wheel or other traditional control. This belies the common expectation that the first vehicles will look just like traditional cars.

Félicitations, Induct.

CES Robocar News

CES has become a big show for announcing car technology. I’m not there this year due to other engagements, but here’s some of what has been in the news.

Most impressive is probably BMW’s prototype 2 and 6 series vehicles, which have features both for existing drivers and for future self-operation. The video below shows a BMW 235i doing a slalom around cones on its own, and then drifting on wet pavement. BMW claims their active assist will help you in both understeer and oversteer situations. That feature will be in trials in 2015. Here’s an older article on BMW efforts.

Earlier I wrote about Ford’s plan for the C-Max, which positions its solar panel under a concentrator; it remains more a concept gimmick but is still interesting.

There’s been a raft of “connected car” announcements, by which we mean cars using the mobile network to provide apps and related features. The biggest news is a new consortium planning to use Android as a platform for connected infotainment in cars, called the Open Automotive Alliance. It has GM, Audi, Honda, and Hyundai involved, and of course Google. It may be bad news for QNX, which for now is the remaining shining star in RIM/Blackberry’s portfolio, as QNX has a strong position as the infotainment OS in a number of cars. (Having gone to school at UW long ago, I am friends with all the founders of these companies.)

The win will be cars that don’t try to be too smart, and let the phones do most of the work. My phone is just a few months old, while my car is ten years old, and this ratio is not that uncommon. Put the smarts where the innovation is moving fastest, because even if you don’t, they will end up there eventually by consumer demand.

Audi is demonstrating their A7 with new self-drive features at CES. It even has Nevada plate number 046 for Autonomous vehicle testing — people are wondering who all these plates have gone to. Google only took a few, Continental took some, and Audi took some around 007. While nobody does primary testing in Nevada, everybody doing test demos at CES needs these plates.

Bosch is running a full “driverless car experience” in their booth and some panels during the show. The panel is happening in just 15 minutes as I write this.

Delphi is also doing a demo of all their driver assist tech. This is mostly aimed at driver monitoring, which is seen as important during the transition to full robocar operation, when lots of driver intervention is still required.

Induct is showing off the Navia on a track — I wrote more details about how it is now for sale. Though it’s not quite “consumer” electronics.

No, we don't want much more Fedex and UPS on Dec 24

A big story this Christmas was a huge surge in the use of rush shipping in the last 2 days before Christmas. Huge numbers of people signed up for Amazon Prime, and other merchants started discounting 2 day and overnight shipping to get those last minute sales. In turn, a lot of stuff didn’t get delivered on time, making angry customers and offers of apology discounts from merchants. This was characterized as a “first world problem” by many outside the game, of course.

When I shop, I am usually travelling outside the US and so I have to get stuff even before the 24th, and I’ve had stuff I left to the last day not delivered several times, so I know to avoid doing it. Some packages are not going to make it, and this should be expected — even desired.

While it makes sense to increase the infrastructure a bit as online shopping grows in popularity, you don’t want to go nuts at Christmas. If you need to build your infrastructure to handle every Christmas gift, you have to build it too big, and you pay for that through higher prices the rest of the year. Shippers need to figure out their real capacity, and everybody needs to plan based on it.

The failure this season was not a failure of the delivery system. Rather it was a failure of the shippers to tell the merchants what their capacity was, and/or a failure of the merchants to communicate to customers that too much was being shipped and not everybody could be promised Dec 24 delivery.

The obvious way to fix this is first to have the shippers get a solid handle on their capacity for the various types of shipping to the various destinations. They can also identify the bottlenecks and widen them a modest amount.

The next thing is for the merchants to know just how much shipping they can buy. There could either be a live spot market — where the merchant web sites just stop offering the delivery promise when the capacity is reached — or merchants could even attempt to pre-contract for capacity, paying for it whether they need it or not (or reselling it if they know they won’t need it.) Merchants should be building their own forecasts about available capacity and querying shippers for updates on just how much more is left. Capacity isn’t a fixed thing — it depends on the size of packages and where they are going and many other things — but this is a problem computers can handle.

Finally, the shippers and the merchants can start increasing the price of the rush shipping so that demand and supply match. This can be based on accurate forecasts, or just live data. As Dec 23rd wears on, the price of next-day shipping will keep going up and up so only the serious buy it. Of course, this might reveal just how keen some people are to get items, and justify having more capacity in years to come. Indeed, as the price goes up, it may make sense for Amazon to say, “Listen, we’re just going to buy this for you at your local Wal-Mart, it will be waiting for you there.” Wal-Mart surely won’t mind that.
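A toy version of that surge pricing follows; the capacity numbers and pricing curve are invented for illustration:

```python
# Sketch of surge pricing for the last shipping slots: as remaining capacity
# shrinks, the quoted price for a guaranteed Dec-24 delivery rises, so demand
# is rationed instead of promises being broken.

def rush_price(base_price, capacity_total, capacity_used, max_multiplier=5.0):
    remaining = max(capacity_total - capacity_used, 0) / capacity_total
    if remaining == 0:
        return None                      # sold out: stop offering the promise
    return round(base_price * min(max_multiplier, 1 / remaining), 2)

for used in (0, 500_000, 900_000, 1_000_000):
    print(used, rush_price(19.99, 1_000_000, used))
```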

There are also some tricks to increase capacity. For example, most people would probably tolerate having to pick up items at a retail location — FedEx and UPS and the USPS of course have tons of those — especially if it is the only option or offers a serious discount over surge priced home delivery. (This is not as good for sending gifts to remote locations.) Temporarily contracted depots could also be used. You want to streamline these depots, as lots of people will be coming in, so you want some nice system where people bring in a bar code and everything is optimized to get them out the door with the right package quickly.

All of this will push people to shop and ship a little earlier, smoothing out the rush, and avoiding having to design the system for one peak day. I have always found it remarkable that most stores and malls have giant parking lots (back in the brick and mortar world) which are only filled in December. It’s such a waste — but something robocars will fix in the future.

Delivery to the wrong address

I had a missed delivery myself this year. In this case it was on December 14th because I went home early, and I had the gifts arriving 2 days before I left. Oddly, I got the note that the package had been delivered at 6pm — but it wasn’t. Both UPS and Amazon had very little set up to handle this. Amazon’s system insists you wait at least a day to complain about this, which was no help to me. I could have used that day to replace the items if I were sure it wasn’t coming.

After I left, the package showed up on my porch on Sunday. UPS does not operate Sunday so it seems pretty likely they had left the package with a neighbour who was perhaps away for a few days. I presume the neighbour eventually came and dropped off the package but they left no note. (Of course I wish they had done it right away — replacing the gifts in Canada cost me a bunch extra.)

Amazon had already given a refund — fairly good service there — and so I just had UPS return the package as undelivered which costs me nothing, so that all worked out, except the scramble and the extra cost of replacing the items.

I don’t know how often this happens (it’s in the Amazon FAQ, so it must be often enough), but there are some obvious fixes. The UPS driver’s wand, which scans the package on delivery, should record more data: any location from a GPS in the wand or the truck, and, perhaps more easily, the MACs and signal strengths of any wifi nodes visible when the package was scanned.

That information would have allowed UPS to say, “OK, that’s odd, this doesn’t match where the package should be going” right when it was scanned, and it would have allowed me to figure out where the package went and get it right away.
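As a sketch of how the wifi half of that check might work (the data and threshold here are invented), compare the MACs seen at scan time against the MACs recorded on past successful deliveries to that address, and flag the delivery if they barely overlap:

```python
# Hypothetical misdelivery check using wifi fingerprints.
EXPECTED = {  # MACs seen on past successful deliveries to each address
    "123 Main St": {"a4:2b:b0:11:22:33", "f0:9f:c2:44:55:66", "10:0d:7f:77:88:99"},
}

def looks_misdelivered(address, seen_macs, min_overlap=0.5):
    known = EXPECTED.get(address)
    if not known:
        return False  # no history yet, nothing to compare against
    overlap = len(known & seen_macs) / len(known)
    return overlap < min_overlap

scan = {"de:ad:be:ef:00:01", "a4:2b:b0:11:22:33"}  # only 1 of 3 known MACs
if looks_misdelivered("123 Main St", scan):
    print("Warning: scan location does not match the destination address")
```

GPS could feed the same check by comparing scan coordinates against the geocoded destination, though wifi has the advantage of working where GPS is weak, such as dense streets and inside buildings.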

You’re probably wondering: did I consider that it was simply stolen? I did consider that possible, though in my safe neighbourhood it doesn’t appear to be a real danger. Somebody following UPS trucks at Christmas time to steal gifts would be very Grinchey, not that it never happens. In safe neighbourhoods, UPS and FedEx routinely just leave packages at the door, not actually signed for. I presume they just eat the loss the rare times packages are stolen, or perhaps the merchant does. It’s small enough shrinkage that the system handles it.

Ford's solar charging robocar design

One of the silly ideas I see often is the solar powered car. In 2011, I wrote an article about the solar powered robocar which explained some of the reasons why the idea is anti-green, and how robocars might help.

I was interested to see a concept from Ford for a solar charging station for a robocar which goes further than my idea.

In the Ford proposal, there is a special garage with sun exposure and a giant Fresnel lens, which can focus light on a solar panel on the car parked in the garage, effectively a concentrator-based PV system. The trick is that the car is able to move during the day, so as the sun moves (or rather the Earth and the garage turn with respect to the sun) the car adjusts to keep the panel in the beam of the Fresnel lens. They predict they could get 21 miles of range in six hours of sunlight. That’s a bit over 5 kWh, meaning the panel must generate just under 1 kW during those 6 hours.
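Checking those numbers, and assuming a typical electric car uses roughly 250 watt-hours per mile (my assumption; Ford states no figure):

```python
# Back-of-envelope check of Ford's 21-miles-in-6-hours claim.
wh_per_mile = 250   # assumed EV consumption, not from Ford
miles = 21
hours_of_sun = 6

energy_kwh = miles * wh_per_mile / 1000        # 5.25 kWh total
avg_power_w = energy_kwh * 1000 / hours_of_sun # average panel output
print(f"{energy_kwh} kWh total, about {avg_power_w:.0f} W average")  # ~875 W
```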

Normally, 1 kW of solar panel is quite large, and the garage roof must be large to make this happen. The downside is that this would make the panels really, really hot, which reduces their efficiency; frankly, they could get dangerously hot, and the heat would wear out the panels and the roof quickly. (We would need to see what temperature parameters they plan for.)

In the end, this system still falls into the pitfalls that make a green solar powered car a contradiction in terms. To be green, you must use all the power the panels generate. When this car is not in the garage, its panel will produce minimal output: as it goes about its day it will park in the shade or at the wrong angle to the sun, and the panels will be horizontal. The only way to properly exploit panels is to have them, at the very least, facing south in a permanently sunny spot, tilted to the latitude (or sun-tracking), and tied to the grid, so every single joule they generate is put to use.

There is a minor win for solar on a vehicle: while you are driving, the energy is used immediately and never stored, so battery weight can be slightly lowered and there are no storage or transmission losses. However, unless you are going to build something like the cars that compete in the solar races, this doesn’t make up for the waste of having panels whose output is mostly unused. Toyota figured out a good use for a panel on the Prius: it runs the ventilation fan, whose demand matches the sunlight and heat of the day. Every joule of that panel is used, and keeping the car cool saves on AC when driving. Had the panel fed into the hybrid battery, its output would have been thrown away whenever the battery was not low, which is most of the time.

As I noted in my earlier article, robocars could make better use of solar panels because they could arrange to always store themselves in the sun, pointed in the right direction, and could even go find connection stations to feed their power back to the grid if the batteries were not low. (You need some robotic ability to connect to the charging station without a human, and ideally without the 10% loss of inductive coupling, but even that is tolerable.)

In that world, you could put up Fresnel or other concentrating charging stations which cars could seek out to make the best use of their panels. However, these cars are now consigned to never being garaged or parking in the shade, which is not really what we’re looking for.

This does have the advantage of not needing to plug in, though inductive charging stations are also something robocars could move themselves to. For vehicles used off-grid this would be somewhat more valuable; on-grid, the panels (concentrated or not) should just feed that grid.

There’s another downside to the heat of this system. In the summer, at least, you then have to spend a fair bit of energy cooling the car back down. The extra energy gained from sitting in the sun might be entirely lost to cooling if the stay was short. Running a cooling fan while the car sits in the sun is a good idea.

In other news

Michigan has passed its law regulating the testing of robocars there. It’s being touted as a way to “save jobs” by preventing the flight of automaking innovation to other locations. That’s going to be a tall order. The Detroit car companies are opening labs in Silicon Valley, in part because it’s very difficult to recruit the very best people to come live in Detroit, no matter how cheap the housing is (you can have a mansion in Detroit for the price of a shack in San Francisco). If Michigan wants to retain its car dominance, it will need to do even more.

Several announcements are planned for CES. Delphi will be showing off its latest work, which is more ADAS-related. Bosch will be showing off its prototype cars, and presumably Audi and others will return.

Results from the Ann Arbor V2V test bed are expected soon. The original plan was for the DoT to propose regulations demanding V2V in all new cars in 2013. They missed that deadline, of course, but many expect something very soon. Results of this test bed are expected to be crucial. I predict the results will be lukewarm when viewed through the robocar lens, which is to say the V2V systems will only have been found able to prevent a tiny number of incidents that could not also be detected with advanced sensors directly on the cars. They may not publish that number, as there are incentives to report the test as a success.

Having secure open wifi (Death to wifi login part 2)

In part 1 I outlined the many problems caused by wifi login pages that hijack your browser (“captive portals”) and how to improve things.

Today I want to discuss the sad state of security on open wifi in most of the setups used today.

Almost all open wifi networks are simply “in the clear.” That means, however you got on, your traffic is readable by anybody, and can be interfered with as well, since random users near you can inject fake packets or pretend to be the access point. Any security you have on such a network depends on securing your outgoing connections. The most secure way to do this is to have a VPN (virtual private network); many corporations run these and insist their employees use them. VPNs do several things:

  • Encrypt your traffic
  • Send all the traffic through the same proxy, so sniffers can’t even see who else you are talking to
  • Put you on the “inside” of corporate networks, behind firewalls. (This has its own risks.)

VPNs have downsides. They are hard to set up. If you are not using a corporate VPN and want a decent one, you typically have to pay a 3rd party provider at least $50/year. If your VPN router is not in the same geographic region as you are, all your traffic is sent somewhere remote first, adding latency and in some cases reducing bandwidth. Doing voice or video calls over a VPN can be quite impractical: some VPNs are all TCP, without the UDP needed for that, and extra latency is always a killer. Also, there is the risk your VPN provider could be snooping on you. In fact, a VPN can make you easier to snoop on, since tapping the outbound pipe of your VPN provider is simpler than following you everywhere to tap whatever network you are on.

If you don’t have a VPN, you want to use encrypted protocols for everything you do. At a minimum, if you use POP/IMAP E-mail, it should be configured to only send and receive mail over TLS-encrypted channels. In fact, my own IMAP server doesn’t even accept connections in the clear, to make sure nobody is tempted to use one. For your web traffic, use sites in https mode as much as possible, and use EFF’s HTTPS Everywhere plugin to make your browser switch to https wherever it can.
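As an example of the kind of configuration I mean, here is a minimal sketch of fetching mail strictly over an encrypted channel, using Python’s standard imaplib; the host and credentials are placeholders.

```python
# Fetch mail over TLS only; IMAP4_SSL (port 993) speaks TLS from the
# first byte, so there is no cleartext mode to fall back to.
import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")    # placeholder host
conn.login("user@example.com", "app-password")  # placeholder credentials
conn.select("INBOX", readonly=True)
status, data = conn.search(None, "UNSEEN")
print(f"{len(data[0].split())} unread messages")
conn.logout()
```

Refusing cleartext on the client side gives you the same property my server enforces on its end: there is simply no unencrypted path to be tempted by.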