Software recalls and quick fixes to safety-critical computers in robocars

While giving a talk on robocars to a Stanford class on automotive innovation on Wednesday, I outlined the growing problem of software recalls and how they might affect cars. If a company discovers a safety problem in a car’s software, it may be advised by its lawyers to shut down or cripple the cars by remote command until a fix is available. Sebastian Thrun, who had invited me to address this class, felt this could be dealt with through the ability to remotely patch the software.

This brings up an issue I have written about before: the giant dangers of automatic software updates. Automatic software updates are a huge security hole in today’s computer systems. On typical home computers, there are now many packages that do automatic updates. Due to the lack of security in these OSs, a variety of companies have been “given the keys” to full administrative access on the millions of computers which run their auto-updaters. Companies which go to great lengths to secure their computers and networks routinely grant all these software companies top-level access (i.e. the ability to run arbitrary code on demand) without thinking about it. Most of these software companies are good and would never abuse this, but that doesn’t mean they have no employees who could be bribed or suborned, or no security holes in their own networks which would let an attacker in to make a malicious update that is automatically sent out.

I once asked the man who ran the server room where the servers for Pointcast (the first big auto-updating application) were housed how many fingers somebody would need to break to get into his server room. “They would not have to break any,” he told me. “Any physical threat and they would probably get in.” This is not unusual, and often there are ways in that require far less than that.

So now let’s consider software systems which control our safety. We are trusting our safety to computers more and more these days. Every elevator or airplane has a computer which could kill us if maliciously programmed. More and more cars have them, and more will over time, long before we ride in robocars. All around the world are electric devices with computer controls which could, if programmed maliciously, probably overload and start many fires, too. Of course, voting machines with malicious programs could even elect the wrong candidates and start baseless wars. (Not that I’m saying this has happened, just that it could.)

However, these systems do not have automatic updates. The temptation to add them will grow over time, both because they are cheap and because they allow safety problems to be fixed quickly, which we want for critical systems. While the internal software systems of a robocar would not be connected to the internet in a traditional way, they might be programmed to request and accept, every so often, certified updates to their firmware from the components of the car’s computer systems which are connected to the net.
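To make that concrete, here is a minimal sketch of such a pull-based check in Python. The update URL, polling interval and helper functions are hypothetical stand-ins, not any real carmaker’s system; the point is simply that the net-facing gateway asks for updates and forwards only certified images to the isolated drive computer.

    import time
    import urllib.request

    UPDATE_URL = "https://updates.example.com/firmware/latest"  # hypothetical endpoint

    def verify_certified(blob: bytes) -> bool:
        """Stand-in for real certificate-chain verification."""
        return False  # fail closed until real checks are wired in

    def install(blob: bytes) -> None:
        """Stand-in: forward the image over the internal bus to the drive computer."""
        raise NotImplementedError

    def poll_forever() -> None:
        while True:
            blob = urllib.request.urlopen(UPDATE_URL).read()
            if verify_certified(blob):  # only certified images are ever installed
                install(blob)
            time.sleep(24 * 3600)       # ask again roughly once a day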

Imagine a big car company with 20 million robocars on the road and an automatic software update facility. A malicious person who could suborn that automatic update ability could load in nasty software able to kill tens of millions. Not just the people riding in the robocars would be affected, because the malicious software could command idle cars to start moving and hit other cars or run down pedestrians. It would be a catastrophe of grand proportions, greater than a major epidemic or multiple nuclear bombs. That’s no small statement.

There are steps that can be taken to limit this. Software updates should be digitally signed, and they should be signed by multiple independent parties. This stops any one of the official parties who has been suborned (whether as a mole, or through torture, or by having a child kidnapped) from sending out an update alone. But it doesn’t change the fact that the five executives who have to sign an update will still be trusting the programming team to have delivered them a safe update. Assuring that requires a major code review of every new update, by a team that carefully examines all source changes and compiles the source themselves. Right now this just isn’t common practice.
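Here is a minimal sketch of what the multiple-signature check might look like, using the Python cryptography package; the signer roster and how keys get distributed are assumptions, and a real system would also need things like version pinning and rollback protection. The update is rejected unless every independent signer’s signature verifies.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def update_is_authorized(update_blob: bytes,
                             signatures: dict[str, bytes],
                             trusted_keys: dict[str, Ed25519PublicKey]) -> bool:
        """Require a valid signature from every independent signer (N-of-N)."""
        for signer, key in trusted_keys.items():
            sig = signatures.get(signer)
            if sig is None:
                return False            # a required signer is missing entirely
            try:
                key.verify(sig, update_blob)
            except InvalidSignature:
                return False            # any one bad signature vetoes the update
        return True

Requiring all N signatures means an attacker must suborn every party at once; a real deployment might relax this to a K-of-N threshold so that one lost key can’t block an urgent safety fix.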

However, it gets worse than this. An attacker can also suborn the development tools, such as the C compilers and linkers which build the final binaries. This is the classic “trusting trust” attack Ken Thompson described, where a compromised compiler inserts a back door into binaries built from perfectly clean source. The source might pass every review, but few companies keep perfect security on all their tools. Doing so requires that all the tool vendors pay similar attention to security in all their releases, and in all the tools they use in turn.
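One partial countermeasure is the reproducible build: if the build is deterministic, an independent auditor can rebuild the release from source and compare digests with what the vendor shipped, so a tampered toolchain on either side shows up as a mismatch. A toy sketch, with hypothetical file names and make targets:

    import hashlib
    import subprocess
    import sys

    def sha256_of(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical build command; a real project would pin the exact toolchain,
    # flags and timestamps so the output is bit-for-bit deterministic.
    subprocess.run(["make", "clean", "release"], check=True)

    shipped = sha256_of("vendor_release.bin")  # the binary the vendor signed
    rebuilt = sha256_of("build/release.bin")   # the binary we just rebuilt
    if shipped != rebuilt:
        sys.exit("digest mismatch: source or toolchain may have been tampered with")
    print("build reproduced bit-for-bit:", rebuilt)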

One has to ask if this is even possible. Can such a level of security be maintained on all the components, enough to stop a terrorist programmer or a foreign government from inserting a trojan into a tool used by a compiler vendor who then sends certified compilers to the developers of safety-critical software such as robocars? Can every machine on every network at every tool vendor be kept safe from this?

We will try, but the answer is probably no. As such, one conclusion may be that automatic updates are a bad idea. If updates spread more slowly, with the individual participation of each machine owner, there is more time to spot malicious code. That doesn’t mean malicious code can’t be spread, as individual owners who install updates certainly won’t be checking everything they approve. But it can stop the instantaneous spread, and give a chance to find logic bombs set to go off later.
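A toy sketch of such a slowed-down rollout, with the window length and approval flag as assumptions: each car sits on a new release for a hold-down period, giving outside reviewers time to study it, and activates it only with the owner’s explicit consent.

    import time

    HOLD_DOWN_SECONDS = 7 * 24 * 3600  # e.g. a week for reviewers to inspect the release

    def may_activate(published_at: float, owner_approved: bool) -> bool:
        """Gate activation on both an elapsed review window and the owner's consent."""
        aged = time.time() - published_at >= HOLD_DOWN_SECONDS
        return aged and owner_approved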

Normally we don’t want to go overboard worrying about “movie plot” threats like these. But when a single person can kill tens of millions because of a software administration practice, it starts to be worthy of notice.