Connecting untrusted devices to your computer

My prior post about USB charging hubs in hotel rooms raised the issue of security, as did my earlier hope for a world with Bluetooth keyboards scattered around.

Is it possible to design our computers so they can safely connect to untrusted devices? Clearly to a degree, in that an Ethernet connection is generally treated as untrusted. But USB was designed around full trust, and that limits it.

Perhaps in the future, an OS can be designed to understand the difference between trusted and untrusted devices connected (wired or wirelessly) to a computer or phone. This might involve a different physical interface, or the same physical interface with a secure protocol by which devices can be identified (and then recognized when plugged in again) and tagged as trusted the first time they are plugged in.
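One way to sketch the "tagged as trusted the first time" idea is trust-on-first-use, much as SSH handles host keys: the device presents a public key, the OS fingerprints it, and the user's first-time decision is remembered for later sessions. A minimal Python sketch, assuming a hypothetical trust-store location and a user-prompt callback (none of this is a real OS API):

```python
import hashlib
import json
import os

# Hypothetical location of the OS trust store (illustrative only).
TRUST_STORE = os.path.expanduser("~/.device_trust.json")

def fingerprint(device_public_key: bytes) -> str:
    """Stable identifier derived from the public key the device presents."""
    return hashlib.sha256(device_public_key).hexdigest()

def load_store() -> dict:
    if os.path.exists(TRUST_STORE):
        with open(TRUST_STORE) as f:
            return json.load(f)
    return {}

def on_device_plugged_in(name: str, device_public_key: bytes, ask_user) -> bool:
    """Return True if the device may be treated as trusted.

    ask_user is a callback prompting the user the first time an unknown
    device appears (trust-on-first-use); the answer is remembered so the
    device is recognized when plugged in again.
    """
    store = load_store()
    fp = fingerprint(device_public_key)
    if fp in store:
        return store[fp]["trusted"]   # seen before: reuse the old decision
    trusted = bool(ask_user(f"Trust new device '{name}' ({fp[:12]})?"))
    store[fp] = {"name": name, "trusted": trusted}
    with open(TRUST_STORE, "w") as f:
        json.dump(store, f)
    return trusted
```

Note the limitation discussed below in the comments: this recognizes the same key, not the same hardware, so a physically tampered device keeps its trusted status.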

For example, an unknown keyboard is a risky thing to plug in. It could watch you type and remember passwords, or it could simply send fake keystrokes to your computer to get it to install trojan software, completely taking it over. But we might allow an untrusted keyboard to type plain text into our word processors or E-mail applications. However, we would have to switch to the trusted keyboard (which might just be a touch-screen keyboard on a phone or tablet) for anything dangerous, including of course entry of passwords, URLs and commands that go beyond text entry. Would this constant switching be tolerable, or would we just get used to it? We would want to mount the inferior trusted keyboard very close to the comfy but untrusted one.
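The switching rule above amounts to a routing policy: a keystroke from an untrusted device reaches the focused field only if that field has been declared plain text. A toy sketch, with hypothetical field labels standing in for whatever declaration scheme applications would use:

```python
def deliver_keystroke(field_kind: str, device_trusted: bool) -> bool:
    """Return True if the keystroke should reach the focused field.

    field_kind values are hypothetical labels an application would declare
    for each input box, e.g. "plain_text", "password", "url_bar".
    """
    if device_trusted:
        return True   # a trusted keyboard can type anywhere
    # An untrusted keyboard may only type into declared plain-text fields;
    # passwords, URLs and commands require the trusted device.
    return field_kind == "plain_text"
```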

A mouse has the same issues. We might allow an untrusted mouse to move the pointer within a text entry window and to go to a set of menus that can’t do anything harmful on the machine, but would it drive us crazy to have to move to a different pointer to move out of the application? Alas, an untrusted mouse can (particularly if it waits until you are not looking) run applications, even bring up the on-screen keyboard most OSs have for the disabled, and then do anything with your computer.

It’s easier to trust output devices, like a printer. In fact, the main danger with plugging in an unknown USB printer is that a really nasty one might pretend to be a keyboard or CD-ROM to infect you. A peripheral bus that allows a device to be only an output device would be safer. Of course, an untrusted printer could still record what you print.

An untrusted screen is a challenge. While mostly safe, one can imagine attacks. An untrusted screen might somehow get you to go to a particular web-site and then display something else, perhaps a fake login page for a bank or other site, so that your credentials end up where an attacker can capture them. Attacks here are difficult but not impossible if the attacker can control what you see. It might be important to have the trusted screen nearby, somehow helping you be sure the untrusted screen is being good. This is a much more involved attack than the simple attacks one can mount by pretending to be a keyboard.

An untrusted disk (including a USB thumb drive) is actually today’s biggest risk. People pass around thumb drives all the time, and they can pretend to be auto-run CD-ROMs. In addition, we often copy files from them and double-click on files on them, which is risky. The OS should never allow code to auto-run from an untrusted disk, and should warn if files are double-clicked from one. Of course, even then you are not safe from traps inside the files themselves, even if the disk is just being a disk. Many companies try to establish very tight firewalls, but it’s all for naught if they allow people to plug external drives and thumb drives into their computers. Certain types of files (such as photos) are going to be safer than others (like executables and word processor files with macros or scripts). Digital cameras, which often present themselves as drives, must be supported, and can probably be trusted to hand over JPEGs and other image and video files.
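The disk rules here could be expressed as a small policy: auto-run never fires from an untrusted drive, obviously-executable content is refused, and everything else still triggers a warning. A sketch with an illustrative extension list, not a real OS file-typing mechanism:

```python
# Illustrative extension class; a real OS would use richer file typing.
DANGEROUS_EXTENSIONS = {".exe", ".msi", ".bat", ".vbs", ".docm"}  # code or macros

def autorun_allowed(drive_trusted: bool) -> bool:
    # Auto-run never fires from an untrusted disk, no exceptions.
    return drive_trusted

def action_for_open(filename: str, drive_trusted: bool) -> str:
    """Return 'open', 'warn', or 'block' for a double-clicked file."""
    if drive_trusted:
        return "open"
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in DANGEROUS_EXTENSIONS:
        return "block"   # executables and macro documents are refused outright
    return "warn"        # safer files (photos, text) still get a warning first
```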

A network connection is one of the few things you can safely plug in, precisely because a network connection should always be viewed as hostile, even one behind a firewall.

There is a risk in declaring a device trusted, such as your home keyboard: it might be compromised later, and there is not much you can do about that. A common trick today is to install a key-logger in somebody’s keyboard to snoop on them. This is done not just by police but by suspicious spouses and corporate spies. Short of tamper-proof hardware and encryption, this is a difficult problem, and for now that’s too much cost to add to consumer devices.

Still, it sure would be nice to be able to go to a hotel and use their keyboard, mouse and monitor. It might be worth putting up with having to constantly switch back, to get full-sized input devices on computers that are trying to get smaller and smaller. But it would also require rewriting of a lot of software, since no program could be allowed to take input from an untrusted device unless it has been modified to understand such a protocol. For example, your e-mail program would need to be modified to declare that a text input box allows untrusted input. This gets harder in web browsing: each web page would need to declare, in its input boxes, whether untrusted input was allowed.

As a starter, however, the computer could come with a simple “clipboard editor” which brings up a box in which one can type and edit with untrusted input devices. Then, one could copy the edited text to the OS clipboard and, using the trusted mouse or keyboard, paste it into any application of choice. You could always get back to the special editing window using the untrusted keyboard and mouse, but you would have to use the trusted ones to leave it. Cumbersome, but not as cumbersome as typing a long e-mail on an iPhone screen.
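The clipboard-editor idea can be sketched as a sandbox buffer that untrusted keystrokes may freely edit, with export to the system clipboard gated on an action from a trusted device (all names here are hypothetical):

```python
class ClipboardEditor:
    """Sandbox buffer for untrusted input devices (illustrative only).

    Untrusted keystrokes may only edit this buffer; moving the text out
    to the system clipboard requires an action from a trusted device.
    """

    def __init__(self):
        self.buffer = []
        self.clipboard = None   # stands in for the OS clipboard

    def type_untrusted(self, ch: str):
        # Untrusted devices can append or backspace inside the sandbox only.
        if ch == "\b":
            if self.buffer:
                self.buffer.pop()
        else:
            self.buffer.append(ch)

    def copy_to_clipboard(self, from_trusted_device: bool) -> bool:
        # Only an event from a trusted device may export the text.
        if not from_trusted_device:
            return False
        self.clipboard = "".join(self.buffer)
        return True
```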

This is just DRM

The notion of trusted input and output devices is really just DRM with the pre-stated goal of being useful to the user (and only well-educated users of guaranteed sound state of mind, at that).

Relying on individual applications to correctly mark their inputs as needing to be trusted or not is distributing the problem too widely. An analogy is that web developers need to sanitize all inputs and know their authentication domains. History demonstrates that the same mistakes are repeatedly made there. This would be no different.

Little connection here

While DRM requires a platform with security, this does not mean that security is DRM. DRM is mostly (perhaps entirely) concerned with security on outputs, while what I describe is mostly about security on inputs.

I agree that it is quite a step for apps to indicate where they can take untrusted input and where they can’t. The problem is we need to be able to execute privileged operations from our keyboards, and lots of the operations we do are privileged when it comes to worrying about an attacker. The mere ability to run an application and give it input is something you can’t trust to an untrusted keyboard.

Nothing is truly safe. We could let the untrusted keyboard type into this web box from which I am commenting on the blog. It could post a bogus comment. But I would at least see this. However, we can’t let the keyboard go to the administration menu of the blog and enable permissions for users, for example.

You got me wondering how a

You got me wondering how a keyboard alone could post a bogus comment. It would first have to rely on heuristics to determine whether the user is typing in a suitable comment field, which is probably not too hard.

Step two is to insert its own text. Does it simply append its text? This is the easy option but can hardly fake any appearance of authenticity. Deleting your already-entered text runs into problems, since Mac and Windows have different conventions for deleting all text, but they may not conflict; I can't work it out right now. Software should probably ask for confirmation before selecting or deleting all (or most) text - an obstacle which would make using a hotel keyboard infuriating for some people.

Step three is to submit the form. This will be quite impossible to do reliably if the user browses a variety of sites and the keyboard doesn't know which site it is on. It might recognize the site directly from the user's entering a URL with it, which is why entering a URL should be a trusted operation. There is, however, a gotcha: if the attacker can sniff your internet traffic and communicate that data to the keyboard, then it can circumvent any attempt to prevent the keyboard from learning the URL. Bluetooth and unlicensed low-power radio alike could communicate URL data to the keyboard quite simply. (Low-power digital transceivers for 417MHz and other government-approved frequencies are very cheap and very easy to get.)

You know what? Before I started writing this I was quite tolerant of the idea of a hotel keyboard. Now I don't want to touch one with a barge pole! :)

This might involve a

This might involve a different physical interface, or using the same physical interface, but a secure protocol by which devices can be identified and tagged once as trusted the first time they are plugged in.

Trusting

You can probably trust the devices in your own house, so that when you plug them in and see their ID, you accept them. It’s harder to trust a keyboard at a hotel, even if you have seen that keyboard before, because somebody could have gone in and modified the keyboard to make it do bad things, but not changed the ID. Only if you knew the keyboard had truly tamper-proof electronics could you trust it, but it could still have a key logger.
