The giant security hole in auto-updating software

It's more and more common today to see software that is capable of easily or automatically updating itself to a new version. Sometimes the user must confirm the update; in other cases it is fully automatic, or manual but non-optional (i.e., the old version won't work any more). This seems like a valuable feature for fixing security problems as well as bugs.

But rarely do we talk about what a giant hole this is in general computer security. On most computers, the programs you run have access to a great deal of the machine, and in the case of Windows, often all of it. Many of these applications are used by millions of users, and in some cases by hundreds of millions.

When you install software on almost any machine, you're trusting the software, the company that made it, and the channel by which you got it -- at the time you install. When you have auto-updating software, you're trusting them on an ongoing basis. It's really like leaving a copy of the keys to your office with the software vendor, hoping they won't do anything bad with them, and hoping that nobody untrusted will get at those keys and do something bad with them. In fact, in many cases it's worse than leaving them the keys, because you're giving them access to your live, running computers. At some companies, computers are hard-secured and need passwords even to boot. Companies like Google, Microsoft and several others have access to hundreds of millions of computers at effectively all the companies of the world, should they wish to use it. Worse, unlike physical keys, which can only be used one at a time for specific break-ins, these keys are general and could be used for scalable, large-scale attacks.

I first started worrying about this when PointCast came out. PointCast was one of the first really widely adopted network apps with auto-update, reaching a million users. One day the manager of the colocation facility used by PointCast was visiting me. "How many fingers of yours would a mafioso have to break," I asked, "before you would let him into your server room?"

"They would just have to threaten," he said. No surprise there. But I was worried that the ability to inject software into a million machines might be something well worth a few threats to a criminal enterprise. I imagined some cute hacks, where I took over a million machines, and hunted for spreadsheets with next quarter's financial results on CFO's windows boxes, making a billion in the stock market, or even, more maliciously, telling a million people the Dow was down by 500 and thus making it really happen as they panic-sold. But in fact, far nastier things are possible.

So I thought I would outline the sort of principles I would want to see from auto-updating (or even prompted manual updating) software, and challenge vendors to declare that they are doing these things.

Resist DNS attacks and MITM attacks

Such software checks for new updates by querying a host belonging to the software vendor. One basic attack is to pretend to be that host, which can often be done with DNS cache poisoning or similar attacks. Proper update signing (below) defends against this, but using TLS certificates and an encrypted TLS channel to the update host is also a good idea.
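
Here is a minimal sketch of one way to harden that channel -- pinning the update server's certificate fingerprint -- in Python, using only the standard library. The host name and fingerprint are hypothetical placeholders, not any real update service:

    import hashlib
    import socket
    import ssl

    UPDATE_HOST = "updates.example.com"   # hypothetical update host
    PINNED_CERT_SHA256 = "0" * 64         # placeholder: fingerprint shipped with the installer

    def open_pinned_connection(host=UPDATE_HOST, port=443):
        """Connect over TLS, then refuse to talk unless the server's
        certificate matches the fingerprint we shipped with."""
        context = ssl.create_default_context()  # normal CA validation still applies
        sock = context.wrap_socket(socket.create_connection((host, port)),
                                   server_hostname=host)
        der_cert = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der_cert).hexdigest() != PINNED_CERT_SHA256:
            sock.close()
            raise ssl.SSLError("update server certificate does not match the pin")
        return sock

Pinning means a DNS hijacker must steal the vendor's actual certificate, not merely obtain any CA-issued one for the name.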

Sign all updates

The most fundamental step is to have the initial software release contain within it a set of public keys belonging to the software supplier, and to require that any update be signed by some number of those keys before it is accepted. A growing number of suppliers do this, which is good, but it doesn't go far enough when you consider that you are giving them the keys to your house.
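
As a sketch, the core check might look like this in Python, assuming Ed25519 keys and the third-party cryptography package (I'm not prescribing a scheme; any strong signature algorithm works):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_update(update_bytes, signature, embedded_public_key):
        """Accept the update only if it was signed by a vendor key that
        shipped, as raw 32-byte Ed25519 public material, inside the
        original install."""
        key = Ed25519PublicKey.from_public_bytes(embedded_public_key)
        try:
            key.verify(signature, update_bytes)
            return True
        except InvalidSignature:
            return False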

Signatures from multiple parties

It's not good enough for a software update to be signed by just one key. In the end, a single key often means action by just a single person. Do you want to trust all your computer security to one single employee -- even the CEO -- of every software provider you deal with? A key can be compromised not just by bad computer security or an employee gone rogue; the stakes are high enough that it could be compromised by the employee's child being kidnapped, or by far lesser threats.

As such, there should be some larger number of keys (say 6), and a new update must be signed by some subset of them, such as any 3. All 6 keyholders must keep their keys separately (for example in a USB fob on their person, or better, in a specially designed signing fob). It's much harder for the bad guys to compromise 3 key employees at once without risking exposure and the revocation of the keys. Yet because only 3 of the 6 are needed, a genuine update can usually go out quickly.
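
A sketch of that quorum rule, under the same Ed25519 assumption as above; the 3-of-6 constants and the index-to-signature layout are illustrative:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    REQUIRED = 3   # any 3 of the 6 embedded keys must vouch for an update

    def verify_quorum(update_bytes, signatures, embedded_keys):
        """signatures maps key index -> signature; embedded_keys holds the
        six raw public keys shipped with the original install. A key can
        only count once, since the dict has one entry per index."""
        valid = 0
        for index, sig in signatures.items():
            key = Ed25519PublicKey.from_public_bytes(embedded_keys[index])
            try:
                key.verify(sig, update_bytes)
                valid += 1
            except InvalidSignature:
                pass   # a forged or corrupt signature simply doesn't count
        return valid >= REQUIRED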

Quick key revocation

As noted, if there is a hint of compromise, such as a report of an extortion attempt on one keyholder, you want to be able to send out a quick update saying to disregard that key, or even all the keys. That's pretty drastic, of course, since it requires all customers to install a new set of keys -- with a skeptical eye -- before automatic updating works again. But I would rather have that than a big hole into my network.
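
One possible shape for such a revocation notice, reusing verify_quorum from the sketch above (the JSON format is my assumption). The important subtlety is that signatures from the very keys being revoked must not count toward the quorum:

    import json

    def apply_revocation(notice_bytes, signatures, embedded_keys):
        """Process a signed notice naming key indexes to distrust.
        Returns the surviving key list, or raises if the notice isn't
        vouched for by enough keys that are NOT themselves being revoked.
        verify_quorum is defined in the quorum sketch above."""
        revoked = set(json.loads(notice_bytes)["revoked_key_indexes"])
        clean_sigs = {i: s for i, s in signatures.items() if i not in revoked}
        if not verify_quorum(notice_bytes, clean_sigs, embedded_keys):
            raise ValueError("revocation notice lacks a valid quorum")
        return [k for i, k in enumerate(embedded_keys) if i not in revoked]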

Rotation of who the keyholders are

Updates should routinely revoke old keys and issue new ones, which may belong to different keyholders, so that attackers can't readily figure out who the keyholders are. (On the other hand, you don't want a conspiracy to be able to sit and wait until its members hold enough keys, so this is a dilemma.)
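
A rotation could ride inside an ordinary update: the new key set is accepted only if a quorum of the outgoing keys signs it. A sketch, again reusing verify_quorum and assuming a hex-encoded JSON manifest:

    import json

    def rotate_keys(manifest_bytes, signatures, old_keys):
        """Swap in a new key set vouched for by a quorum of the old one.
        From here on, only the new keys are trusted."""
        if not verify_quorum(manifest_bytes, signatures, old_keys):
            raise ValueError("key-rotation manifest lacks a valid quorum")
        manifest = json.loads(manifest_bytes)
        return [bytes.fromhex(k) for k in manifest["new_public_keys"]]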

Secure code build process

Securely signed updates are great, but they don't protect you if the code that gets signed is itself compromised. Generally, even the signers listed above will just sign off on any new release handed to them by the engineering teams, without directly inspecting it.

Solutions here are very difficult. In effect, it requires an independent QA team that actually builds the binaries using trusted software-building tools. That team has to have multiple engineers examine all changes before compiling them and releasing them to the executives to sign. This is an expensive and possibly slow process that many software suppliers can't easily afford -- but again, you are giving them the keys to your network.
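
One concrete discipline that supports this (my suggestion, not something mandated above) is a reproducible-build check: the independent team builds the release from the same audited source, and the executives sign only if both builds hash identically. A sketch with hypothetical paths:

    import hashlib
    from pathlib import Path

    def sha256_of(path):
        """Hash a file in 1 MB chunks so large binaries needn't fit in memory."""
        h = hashlib.sha256()
        with Path(path).open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def builds_match(engineering_build, qa_build):
        """Two independent builds of the same audited source tree should be
        bit-for-bit identical; any difference means the pipeline is suspect."""
        return sha256_of(engineering_build) == sha256_of(qa_build)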

This gets even harder when you realize that all the tools must be similarly secured. Say I want to buy an election, but the voting-machine company practices very good security. If I can compromise the vendor of the compilers they buy, or compromise some software the compiler vendor uses in order to get in there, then I can get into the voting machines and rig them. And that's worth a great deal of money (or effort by foreign governments).

Secure operating systems

The best (though not complete) answer to this problem is more secure operating systems: ones where you can run a program you don't fully trust without fearing that it can get at things on your computer you haven't authorized it to touch. We already know how to do this a great deal better than existing systems like Windows, Mac OS X or *nix do, though we aren't there yet. As we discover every day with browsers, Java and JavaScript, which include attempts at sandboxing untrusted apps, even in this limited case we often don't get it right. We need to be able to do this for non-sandboxed applications that do real things with our data. Capability-based OSs may be the best thinking in this area, but they are not enough -- many attacks come via social engineering, tricking the humans who are authorized with the dangerous capabilities into handing them over to malicious code.

Software systems that reach millions of users, particularly users in business situations where real money or power is involved, need to realize just how much access they have, and consider steps like these.

Who's doing well or poorly

If you know of companies that are following a regime this strict, I would be curious to hear about it in the comments. Conversely, companies that aren't even following the basic signature process should be outed as well.

Comments

One of the bugs we had with a particular piece of software was that the maker got bought by someone else. After a couple of years, their old domain got bought by the usual suspects. But it turned out there was no way to make the auto-update look anywhere else for updates. I assume redirection hid the problem for a while, or maybe the new owner just didn't care. But after a month or so of "cannot reach update server - please connect to the internet then press OK", we got a little sick of it.

So now we use software from another venduh.

Where I work we have given up on auto-updates, and moved to begging our customers to believe that this time the upgrade won't break everything in their system. We have a strange combination of customers: those who won't upgrade if they can possibly avoid it, and those who demand to run their live systems on prototypes that have just been demoed to them. I'm not sure which is worse...
