Privacy

Do we need to ban the password?

Ok, I’m not really much of a fan of banning anything, but the continued reports of massive thefts of password databases from web sites are not slowing down. Whether the recent Hold Security report of discovering a Russian ring that got a billion account records from huge numbers of websites is true or not, we should imagine that it is.

As I’ve written before, there are two main kinds of password-using sites: sites that keep a copy of your password (i.e. any site that can e-mail you your password if you forget it) and sites that keep only an encrypted/hashed version of your password (these can reset your password for you via e-mail if you forget it). The latter class is vastly superior, though a stolen database of hashed passwords is still a problem, since it makes it easier for attackers to mount brute-force attacks.
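
To make the difference concrete, here is a minimal sketch of the better approach, assuming Python and the standard library’s PBKDF2 (a dedicated slow hash like bcrypt or scrypt would be better still; all names are illustrative):

```python
import hashlib
import hmac
import os

# Bad pattern: keep the password itself. Any breach (or curious insider)
# reveals every user's credential, here and on every site where it was reused.
plaintext_store = {}   # username -> password. Don't do this.

# Better pattern: keep only a salted, slow hash. The site can verify a
# login and reset a forgotten password, but can never mail the password
# back, because it does not have it.
hashed_store = {}      # username -> (salt, hash)

def set_password(user, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    hashed_store[user] = (salt, digest)

def check_password(user, attempt):
    salt, digest = hashed_store[user]
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

set_password("alice", "correct horse battery staple")
print(check_password("alice", "wrong guess"))                   # False
print(check_password("alice", "correct horse battery staple"))  # True
```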

Sites that are able to e-mail you a lost password should be stamped out. While I’m not big on banning, it may make sense to require that any site which is going to remember your password in plain form carry a big warning on the password-setting page and login page:

This site is going to store your password without protection. There is significant risk attackers will someday breach this site and get your ID and password. If you use these credentials on any other site, you are giving access to these other accounts to the operators of this site or anybody who compromises this site.

Sites which keep a hashed password (including the Drupal software running this blog, though I no longer do user accounts) probably should have a lesser warning too. If you use a well-crafted password unlikely to be checked in a brute-force attack, you are probably OK, but only a small minority do that. Such sites still carry a risk if they are taken over, because a compromised site can see any passwords typed by people logging in while the attacker controls it.

Don’t feel too guilty for re-using passwords. Everybody does it. I do it, in places where it’s no big catastrophe if the password leaks. It’s not the end of the world if one blog site has the multi-use password I use on another blog site. With hundreds of accounts, there’s no way to not re-use with today’s tools. For my bank accounts or other accounts that could do me harm, I keep better hygiene, and so should you.

But in reality we should not use passwords at all. Much better technology has existed for many decades, but it’s never been built in a way to make it easy to use. In particular it’s been hard to make it portable — so you can just go to another computer and use it to log into a site — and it’s been impossible to make it universal, so you can use it everywhere. Passwords need no more than your memory, and they work for almost all sites.

Even our password security is poor. Most sites use your password just to create a session cookie that keeps you authenticated for a long session on the site. That cookie’s even easier to steal than a password at most sites.

Having secure open wifi (Death to wifi login part 2)

In part 1 I outlined the many problems caused by wifi login pages that hijack your browser (“captive portals”) and how to improve things.

Today I want to discuss the sad state of security on the open wifi networks in most of the setups used today.

Almost all open WIFI networks are simply “in the clear.” That means, however you got on, your traffic is readable by anybody, and can be interfered with as well, since random users near you can inject fake packets or pretend to be the access point. Any security you have on such a network depends on securing your outgoing connections. The most secure way to do this is to have a VPN (virtual private network) and many corporations run these and insist their employees use them. VPNs do several things:

  • Encrypt your traffic
  • Send all the traffic through the same proxy, so sniffers can’t even see who else you are talking to
  • Put you on the “inside” of corporate networks, behind firewalls. (This has its own risks.)

VPNs have downsides. They are hard to set up. If you are not using a corporate VPN, and want a decent one, you typically have to pay a 3rd party provider at least $50/year. If your VPN router is not in the same geographic region as you are, all your traffic is sent to somewhere remote first, adding latency and in some cases reducing bandwidth. Doing voice or video calls over a VPN can be quite impractical — some VPNs are all TCP without the UDP needed for that, and extra latency is always a killer. Also, there is the risk your VPN provider could be snooping on you — in fact it can be much easier to snoop on you by tapping the outbound pipe of your VPN provider than to follow you everywhere to tap where you are.

If you don’t have a VPN, you want to try to use encrypted protocols for all you do. At a minimum, if you use POP/IMAP E-mail, it should be configured to only send and receive mail over TLS encrypted channels. In fact, my own IMAP server doesn’t even accept connections in the clear, to make sure nobody is tempted to use one. For your web traffic, use sites in https mode as much as possible, and use EFF’s HTTPS Everywhere plugin to make your browser switch to https wherever it can.
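
For the IMAP side, here is a sketch of what a TLS-only connection looks like using Python’s standard imaplib; the hostname and credentials are placeholders:

```python
import imaplib

# An IMAP4_SSL connection speaks TLS from the first byte; there is no
# cleartext fallback to misconfigure. Host and credentials are placeholders.
conn = imaplib.IMAP4_SSL("mail.example.com", 993)
conn.login("user", "password")
conn.select("INBOX")
typ, data = conn.search(None, "UNSEEN")
print("unread messages:", len(data[0].split()))
conn.logout()
```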

Locking devices down too hard, and other tales of broken phones

One day I noticed my nice 7 month old Nexus 4 had a thin crack on the screen. Not sure where it came from, but my old Nexus One had had a similar crack, and when the screen was on you barely saw it and the phone worked fine, so I wasn’t scared — until I saw that the crack stopped the digitizer from recognizing my finger in a band in the middle of the screen. A band which included dots from my “unlock” code.

And so, while the phone worked fine, you could not unlock it. That was bad news because with 4.3, the Android team had done a lot of work to make sure locked phones are secure if people randomly pick them up. As I’ll explain in more detail, you really can’t unlock it. And while it’s locked, it won’t respond to USB commands either. I had enabled debugging some time ago, but either that doesn’t work while the phone is locked or that state had been reset in a system update.

No unlocking meant no backing up the things that Google doesn’t back up for you. It backs up a lot, these days, but there are still dozens of settings, lots of app data, logs of calls and texts, your app screen layout and much more that’s lost.

I could repair the phone — but when LG designed this phone they merged the digitizer and screen, so the repair is $180, and the parts take weeks to come in at most shops. Problem is, you can now buy a new Nexus 4 for just $199 (which is a truly great price for an unlocked phone) or the larger model I have for $249. Since the phone still has some uses, it makes much more sense to get a new one than to repair, other than to get that lost data. But more to the point, it’s been 7 months and there are newer, hotter phones out there! So I eventually got a new phone.

But first I did restore functionality on the N4 by doing a factory wipe. That’s possible without the screen, and the wiped phone has no lock code. It’s actually possible to use quite a bit of the phone. Typing is a pain since a few letters on the right don’t register, but you can get them by rotating. You would not want to use this long term, but many apps are quite usable, such as maps and in particular eBook reading — for cheap I have a nice small eBook reader. And you can make and receive calls. (Even on the locked phone I could receive a call somebody made to me — it was the only thing it could do.) In addition, by connecting a bluetooth mouse and keyboard, I could use the phone fully — this was essential for setting the phone up again, where the lack of that region on the touchscreen would have made it impossible.

One of my security maxims is “Every security system ends up blocking legitimate users, often more than it blocks out the bad guys.” I got bitten by that.

Cats against surveillance

I always feel strange when I see blog and social network posts about the death of a pet or even a relative. I know the author but didn’t know anything about the pet other than that the author cared.

So as I report the end for our kitty, Bijou, I will make it interesting by relaying a fun surveillance related story of how she arrived at our house. She had been rescued as a stray by a distant relative. When that relative died there was nobody else to take the cats, so we took two of them, even though the two would have nothing to do with each other. Upon arrival at our house, both cats discovered that the garage was a good place to hide, but the hiding was quite extreme, and after about 4 days we still could not figure out where Bijou was hiding. Somebody was coming to eat the food, but we could not tell from where.

I had a small wireless camera with an RF transmitter on it. So I set it up near the food bowl, and we went into the TV room to watch. As expected, a few minutes later, the cat emerged — from inside the bottom of the washing machine through a rather small hole. After emerging she headed directly and deliberately to the camera and as she filled the screen, suddenly the view turned to distortion and static. It was the classic scene of any spy movie, as shot from the view of the surveillance camera. The intruder comes in and quickly disables the camera.

What really happened is that the transmitter is not very powerful and you must aim the antenna. When a cat sees something new in her environment, her first instinct is to come up to it and smell it, then rub her cheek on it to scent-mark it. And so this is what she did, bumping the antenna to lose the signal, though it certainly looked like she was the ideal cat for somebody at the EFF.

It’s also a good thing we didn’t run the washing machine. But I really wish I had been recording the video. Worthy of Kittywood studios.

She had happy years in her new home (as well as some visits to her old one before it was sold) and many a sunbeam was lazily exploited and the evil bright red dot creature never captured, but it could not be forever.

RIP Bijou T. Cat, 199? - 2013

We need a security standard for USB and other plug-in devices

Studies have shown that if you leave USB sticks on the ground outside an office building, 60% of them will get picked up and plugged into a computer in the building. If you put the company logo on the sticks, closer to 90% of them will get picked up and plugged in.

USB sticks, as you probably know, can pretend to be CD-ROMs and that means on many Windows systems, the computer will execute an “autorun” binary on the stick, giving it control of your machine. (And many people run as administrator.) While other systems may not do this, almost every system allows a USB stick to pretend to be a keyboard, and as a keyboard it also can easily take full control of your machine, waiting for the machine to be idle so you won’t see it if need be. Plugging malicious sticks into computers is how Stuxnet took over Iranian centrifuges, and yet we all do this.

I wish we could trust unknown USB and bluetooth devices, but we can’t, not when they can be keyboards and mice and drives we might run code from.

New OS generations have to create a trust framework for plug-in hardware, which includes USB and FireWire and to a lesser degree even eSATA.

When we plug in any device that might have power over the machine, the system should ask us if we wish to trust it, and how much. By default, we would give minimum trust to drives, and no trust to pointing devices or keyboards and the like. CD-ROMs would not get the ability to autorun, though that could be granted by those willing to take the risk, poor a choice as it is.

Once we grant the trust, the devices should be able to store a provided key. After that, the device can then use this key to authenticate itself and regain that trust when plugged in again. Going forward all devices should do this.
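
Here is a minimal sketch of that provision-then-reauthenticate idea as an HMAC challenge-response, in Python. The class names and message flow are invented for illustration; a real version would live in the OS and the device’s firmware:

```python
import hashlib
import hmac
import os

class Host:
    def __init__(self):
        self.trusted = {}          # device_id -> shared secret key

    def enroll(self, device):
        # Runs once, after the user explicitly grants trust.
        key = os.urandom(32)
        device.store_key(key)
        self.trusted[device.device_id] = key

    def verify(self, device):
        # Runs on every later plug-in: challenge the device to prove
        # it still holds the key we provisioned.
        key = self.trusted.get(device.device_id)
        if key is None:
            return False
        nonce = os.urandom(16)
        expected = hmac.new(key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, device.respond(nonce))

class Device:
    def __init__(self, device_id):
        self.device_id = device_id
        self._key = None           # would live in the device's flash

    def store_key(self, key):
        self._key = key

    def respond(self, nonce):
        return hmac.new(self._key, nonce, hashlib.sha256).digest()

host, keyboard = Host(), Device("keyboard-1234")
host.enroll(keyboard)         # first plug-in: user confirms trust
print(host.verify(keyboard))  # later plug-ins re-authenticate silently: True
```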

The problem is they currently don’t, and people won’t accept obsoleting all their devices. Fortunately devices that look like writable drives can just have a token placed on the drive. This token would change every time, making it hard to clone.
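
A sketch of that rotating token, assuming it is simply a file the host rewrites on the drive after each successful check (the file name and layout are invented):

```python
import os

TOKEN_FILE = ".trust-token"    # invented name for the file kept on the drive

def verify_and_rotate(drive_path, remembered_tokens):
    """Check that the drive still holds the token written last time,
    then replace it. A cloned image of the drive goes stale as soon as
    the real drive is used again, so a clone cannot keep the trust."""
    path = os.path.join(drive_path, TOKEN_FILE)
    try:
        with open(path) as f:
            current = f.read().strip()
    except FileNotFoundError:
        return False
    if current != remembered_tokens.get(drive_path):
        return False               # unknown drive, or a stale clone
    new_token = os.urandom(16).hex()
    with open(path, "w") as f:
        f.write(new_token)
    remembered_tokens[drive_path] = new_token
    return True
```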

Some devices can be given a unique identifier, or a semi-unique one. For devices that have any form of serial number, this can be remembered and the trust level associated with it. Most devices at least have a lot of identifiers related to the make and model of device. Trusting this would mean that once you trusted a keyboard, any keyboard of the same make and model would also be trusted. This is not super-secure but prevents generic attacks — attacks would have to be directly aimed at you. To avoid a device trying to pretend to be every type of keyboard until one is accepted, the attempted connection of too many devices without a trust confirmation should lock out the port until a confirmation is given.

The protocol for verification should be simple so it can be placed on an inexpensive chip that can be mass produced. In particular, the industry would mass produce small USB pass-through authentication devices that should cost no more than $1. These devices could be stuck on the plugs of old devices to make it possible for them to authenticate. They could look like hubs, or be truly pass-through.

All of this would make USB attacks harder. In the other direction, I believe as I have written before that there is value in creating classes of untrusted or less trusted hardware. For example, an untrusted USB drive might be marked so that executable code can’t be loaded from it, only classes of files and archives that are well understood by the OS. And an untrusted keyboard would only be allowed to type in boxes that say they will accept input from an untrusted keyboard. You could write the text of emails with the untrusted keyboard, but not enter URLs into the URL bar or passwords into password boxes. (Browser forms would have to indicate that an untrusted keyboard could be used.) In all cases, a mini text-editor would be available for use with the untrusted keyboard, from where one could cut and paste using a trusted device into other boxes.

A computer that as yet has no trusted devices of a given class would have to trust the first one plugged in. I.e. if you have a new computer that’s never had a keyboard, it has to trust its first keyboard unless there is another way to confirm trust when that first keyboard is plugged in. Fortunately mobile devices all have built-in input hardware that can be trusted at manufacture, avoiding this issue. If a computer has lost all its input devices and needs a new one, you could either trust implicitly, or provide a pairing code to type on the new keyboard (would not work for a mouse) to show you are really there. But this is only a risk on systems that normally have no input device at all.

For an even stronger level of trust, we might want to be able to encrypt the data going through. This stops the insertion of malicious hubs or other MITM intercepts that might try to log keystrokes or other data. Encryption may not be practical in low power devices that need to be drives and send data very fast, but it would be fine for all low speed devices.

Of course, we should not trust our networks, even our home networks. Laptops and mobile devices constantly roam outside the home network where they are not protected, and then come back inside able to attack if trusted. However, some security designers know this and design for this.

Yes, this adds some extra UI the first time you plug something in. But that’s hopefully rare and this is a big gaping hole in the security of most of our devices, because people are always plugging in USB drives, dongles and more.

A Bitcoin Analogy

Bitcoin is having its first “15 minutes” with the recent bubble and crash, but Bitcoin is pretty hard to understand, so I’ve produced this analogy to give people a deeper understanding of what’s going on.

It begins with a group of folks who take a different view on several attributes of conventional “fiat” money. It’s not backed by any physical commodity, just faith in the government and central bank which issues it. In fact, it’s really backed by the fact that other people believe it’s valuable, and you can trade reliably with them using it. You can’t go to the US treasury with your dollars and get very much directly, though you must pay your US tax bill with them. If a “fiat” currency faces trouble, you are depending on the strength of the backing government to do “stuff” to prevent that collapse. Central banks in turn get a lot of control over the currency, and in particular they can print more of it any time they think the market will stomach such printing — and sometimes even when it can’t — and they can regulate commerce and invade privacy on large transactions. Their ability to set interest rates and print more money is both a bug (that has sometimes caused horrible inflation) and a feature, as that inflation can be brought under control and deflation can be prevented.

The creators of Bitcoin wanted to build a system without many of these flaws of fiat money, without central control, without anybody who could control the currency or print it as they wish. They wanted an anonymous, privacy protecting currency. In addition, they knew an open digital currency would be very efficient, with transactions costing effectively nothing — which is a pretty big deal when you see Visa and Mastercard able to sustain taking 2% of transactions, and banks taking a smaller but still real cut.

With those goals in mind, they considered the fact that even the fiat currencies largely have value because everybody agrees they have value, and the value of the government backing is at the very least, debatable. They suggested that one might make a currency whose only value came from that group consensus and its useful technical features. That’s still a very debatable topic, but for now there are enough people willing to support it that the experiment is underway. Most are aware there is considerable risk.

Update: I’ve grown less fond of this analogy and am working up a superior one, closer to the reality but still easy to understand.

Wordcoin

Bitcoins — the digital money that has value only because enough people agree it does — are themselves just very large special numbers. To explain this I am going to lay out an imperfect analogy using words and describe “wordcoin” as it might exist in the pre-computer era. The goal is to help the less technical understand some of the mechanisms of a digital crypto-based currency, and thus be better able to join the debate about them.
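
The full essay develops the wordcoin analogy; as a quick taste of what “very large special numbers” means, here is a toy hashcash-style proof-of-work in Python, a number that takes real work to find but is trivial for anyone to verify:

```python
import hashlib

def mine(zero_bits, payload=b"wordcoin demo"):
    """Search for a nonce whose SHA-256 hash of (payload + nonce) starts
    with the required number of zero bits: hard to find, easy to check."""
    target = 1 << (256 - zero_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(20)   # roughly a million attempts on average
print(nonce, digest)       # anyone can re-hash once to verify the "coin"
```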

The Personal Cloud and Data Deposit Box

Last night I gave a short talk at the 3rd “Personal Clouds” meeting in San Francisco. The term “personal clouds” is a bit vague at present, but in part it describes what I had proposed in 2008 as the “data deposit box” — a means to achieve the various benefits of corporate-hosted cloud applications in computing space owned and controlled by the user. Other people are interpreting the phrase “personal clouds” to mean mechanisms for the user to host, control or monetize their own data, to control their relationships with vendors and others who will use that data, or in the simplest form, some people are using it to refer to personal resources hosted in the cloud, such as cloud disk drive services like Dropbox.

I continue to focus on the vision of providing the advantages of cloud applications closer to the user, bringing the code to the data (as was the case in the PC era) rather than bringing the data to the code (as is now the norm in cloud applications.)

Consider the many advantages of cloud applications for the developer:

  • You write and maintain your code on machines you build, configure and maintain.
    • That means none of the immense support headaches of trying to write software to run on multiple OSs, with many versions and thousands of variations. (Instead you do have to deal with all the browsers but that’s easier.)
    • It also means you control the uptime and speed
    • Users are never running old versions of your code and facing upgrade problems
    • You can debug, monitor, log and fix all problems with access to the real data
  • You can sell the product as a service, either getting continuing revenue or advertising revenue
  • You can remove features, shut down products
  • You can control how people use the product and even what steps they may take to modify it or add plug-ins or 3rd party mods
  • You can combine data from many users to make compelling applications, particularly in the social space
  • You can track many aspects of single and multiple user behaviour to customize services and optimize advertising, learning as you go

Some of those are disadvantages for the user of course, who has given up control. And there is one big disadvantage for the provider, namely they have to pay for all the computing resources, and that doesn’t scale — 10x users can mean paying 10x as much for computing, especially if the cloud apps run on top of a lower level cloud cluster which is sold by the minute.

But users see advantages too:

Speaking on Personal Clouds in SF, and Robocars in Phoenix

Two upcoming talks:

Tomorrow (April 4) I will give a very short talk at the meeting of the personal clouds interest group. As far as I know, I was among the first to propose the concept of the personal cloud in my essays on the Data Deposit Box back in 2007, and while my essays are not the reason for it, the idea is gaining some traction now as more and more people think about the consequences of moving everything into the corporate clouds.

My lightning talk will cover what I see as the challenges to get the public to accept a system where the computing resources are responsible to them rather than to various web sites.

On April 22, I will be at the 14th International Conference on Automated People Movers and Automated Transit speaking in the opening plenary. The APM industry is a large, multi-billion dollar one, and it’s in for a shakeup thanks to robocars, which will allow automated people moving on plain concrete, with no need for dedicated right-of-way or guideways. APMs have traditionally been very high-end projects, costing hundreds of millions of dollars per mile.

The best place to find me otherwise is at Singularity University events. While schedules are being worked on, with luck you’ll see me this year in Denmark, Hungary and a few other places overseas, in addition to here in Silicon Valley of course.

Your session has expired. Forgot your password? Click Here!

We see it all the time. We log in to a web site but after not doing anything on the site for a while — sometimes as little as 10 minutes — the site reports “your session has timed out, please log in again.”

And you get the login screen. Which offers, along with the ability to log in, a link marked “Forgot your password?” which offers the ability to reset (OK) or recover (very bad) your password via your E-mail account.

The same E-mail account you are almost surely logged into in another tab or another window on your desktop. The same e-mail account that lets you go a very long time idle before needing authentication again — perhaps even forever.

So if you’ve left your desktop and some villain has come to your computer and wants to get into that site that oh-so-wisely logged you out, all they need to do is click to recover the password, go into the E-mail to learn it, delete that E-mail and log in again.

Well, that’s if you don’t, as many people do, have your browser remember passwords, and thus they can log-in again without any trouble.

It’s a little better if the site does only password reset rather than password recovery. In that case, they have to change your password, and you will at least detect they did that, because you can’t log in any more and have to do a password reset. That is if you don’t just think, “Damn, I must have forgotten that password. Oh well, I will reset it now.”

In other words, a lot of user inconvenience for no security, except among the most paranoid who also have their E-mail auth time out just as quickly, which is nobody. Those who have their whole computer lock with the screen saver are a bit better off, as everything is locked out, as long as they also use whole disk encryption to stop an attacker from reading stuff off the disk.

Meter to show speakers when they are losing the audience

Any speaker or lecturer is familiar with a modern phenomenon. A large fraction of your audience is using their tablet, phone or laptop doing email or surfing the web rather than paying attention to you. Some of them are taking notes, but it’s a minority. And it seems we’re not going to stop this; even speakers do it when attending the talks of others.

However, while we have open wireless networks (which we shouldn’t) there is a trick that could be useful. Build a tool that sniffs the wireless net and calculates what fraction of the computers are doing something that suggests distraction — or doing anything on the internet at all.

While you could get creepy here and do internal packet inspection to see precisely what people are doing (for example, are they searching wikipedia for something you just talked about?) you don’t need to go that far. The simple fact that more people in the room are doing stuff on the internet, or doing heavy stuff on the internet, is a clue. You can also tell when people are doing a few core functions, like web surfing vs. SMTP vs. streaming, based on the port numbers they are going to. You can also tell if they are using a common web-mail service from the IP address. All of this works even if they are encrypting all their traffic like they should be (to stop prying tools like this!)

Only if they have set up a VPN (which they also should) will you be unable to learn things like ports and IP addresses, but again, it’s a nice indicator to know just what total traffic is, and how many different machines it’s coming from, and that will almost never be hidden.
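
As a sketch of such a meter, here is what the counting side might look like with the scapy packet library, assuming root privileges and an interface positioned to see the room’s traffic; the interface name and port table are illustrative:

```python
from collections import Counter
from scapy.all import sniff, TCP   # pip install scapy; needs root to sniff

# Rough labels for destination ports; purely illustrative.
PORT_HINTS = {80: "web", 443: "web (tls)", 25: "smtp", 587: "smtp",
              993: "imap", 1935: "streaming"}

def distraction_meter(iface="wlan0", window=10):
    """Count distinct machines touching the net, and what kind of
    traffic it looks like, over one sampling window."""
    seen = Counter()

    def classify(pkt):
        if pkt.haslayer(TCP):
            seen[(pkt.src, PORT_HINTS.get(pkt[TCP].dport, "other"))] += 1

    sniff(iface=iface, prn=classify, store=False, timeout=window)
    machines = {mac for mac, _ in seen}
    print(f"{len(machines)} machines active in the last {window}s")
    for (mac, kind), pkts in seen.most_common(5):
        print(f"  {mac}  {kind}: {pkts} packets")

distraction_meter()
```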

When the display tells you that most of your audience is using the internet, you could pause and ask for questions or find out why they are surfing. The simple act of asking when distraction gets high will reduce it, and make people embarrassed to have done so. Of course, a sneaky program that learns the MACs of various students could result in the professor asking, “What’s so fascinating on the internet, Mr. Wilson?” At the very least it would encourage the people in the audience to use more encryption. But you don’t have to get that precise. The broad traffic patterns are plenty of information.

Don't count my old passwords as failed login attempts

Like most people, I have a lot of different passwords in my brain. While we really should have used a different system from passwords for web authentication, that’s what we are stuck with now. A general good policy is to use the same password on sites you don’t care much about and to use more specific passwords on sites where real harm could be done if somebody knows your password, such as your bank or email.

The problem is that over time you develop many passwords, and sometimes your browser does not remember them for you. So you go back to a site and try to log in, and you end up trying all your old common passwords. The problem: At many sites, if you enter the wrong password too many times, they lock you out, or at least slow you down. That’s not unwise on their part, but a problem for you.

One solution: Sites can remember hashes of your old passwords. If you type in an old password, they can say, “No, that used to be your password but you have a new one now.” And not count that as a failed attempt by a password cracker. This adds a very slight risk, in that it lets a very specific attacker who knows you super well get a few free hits if they have managed to learn your old passwords. But this risk is slight.
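
A sketch of that login logic in Python, storing only hashes as the next paragraph explains (the hash parameters and structure are illustrative):

```python
import hashlib
import hmac
import os

def hash_pw(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class Account:
    def __init__(self, password):
        self.salt = os.urandom(16)
        self.current = hash_pw(password, self.salt)
        self.old = []          # hashes of retired passwords
        self.failures = 0      # feeds the lockout / slowdown logic

    def change_password(self, new_password):
        self.old.append(self.current)
        self.current = hash_pw(new_password, self.salt)

    def login(self, attempt):
        h = hash_pw(attempt, self.salt)
        if hmac.compare_digest(h, self.current):
            self.failures = 0
            return "ok"
        if any(hmac.compare_digest(h, o) for o in self.old):
            # A former password: tell the user, and do not feed the
            # counter that guards against password crackers.
            return "that used to be your password; you changed it"
        self.failures += 1     # a genuinely wrong guess
        return "wrong password"
```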

Of course they should store a hash of the password, not the actual password. No site should store the actual password. If a site can offer to mail you your old password rather than offering a link to reset the password, it means they are keeping it around. That’s a security risk for you, and also means if you use a common password on such sites, they now know it and can log in as you on all the other sites you use that password at. Alas, it’s hard to tell when creating an account whether a site stores the password or just a hash of it. (A hash allows them to tell if you have typed in the right password by comparing the hash of what you typed and the stored hash of the password back when you created it. A hash is one-way so they can’t go from the hash to the actual password.) Alas, only a small minority of sites do this right.

This is just one of many things wrong with passwords. The only positive about them is you can keep a password entirely in your memory, and thus go to a random computer and log in with nothing but your brain. That is also part of what is wrong with them, in that others can do that too, and the remote computer can quite easily be compromised and recording the password as you type it. The most secure systems use the combination of something in your memory and information in a device. Even today, though, people are wary of solutions that require them to carry a device. Pretty soon that will change and not having your device will be so rare as to not be an issue.

Understanding when and how to be secure

Over the years I have come to the maxim that “Everything should be as secure as is easy to use, and no more secure,” to steal a theme from Einstein. One of my peeves has been the many companies who, feeling that E-mail is insecure, instead send you an E-mail that tells you you have an E-mail, if you would only log onto their web site (often one you rarely log into) with the password you set up 2 years ago to read it. I often get these for things like bills and statements — “Your statement is now available online.” A few nicer ones tell me that my statement is online but the e-mail does contain the total in the statement. Only if the total is unexpected do I need to log in to see the statement.

None of these sites seem to offer me the option of saying, “My E-mail is secure, at least if you are doing your job, so just send me the data in E-mail,” or of using one of the end-to-end encrypted E-mail systems. Alas, there is more than one E-mail system, but it’s not hard to do the two most popular, PGP/GPG and S/MIME, and they are fairly widely supported in mailers.

As I noted, my own mail is secure in that I run an SMTP server on my home server, and only access it over encrypted IMAP. If they have set up their server to do encrypted SMTP (which should be the default by now, frankly) then the mail is generally secure (though it does do a brief unencrypted stop at my spam filter system.)
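
One quick way to see whether a given mail server offers encrypted SMTP is to probe for the STARTTLS extension. A sketch with Python’s standard smtplib, with a placeholder hostname:

```python
import smtplib

# Probe a mail server for the STARTTLS extension. The hostname is a
# placeholder; port 25 is the server-to-server port.
with smtplib.SMTP("mail.example.com", 25, timeout=10) as server:
    server.ehlo()
    if server.has_extn("starttls"):
        server.starttls()      # upgrade the channel to TLS
        server.ehlo()
        print("server speaks encrypted SMTP")
    else:
        print("mail to this server travels in the clear")
```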

However, sometimes the contents of the mail need no security, and so instead it’s just annoyance. I have an account with Wachovia bank, and yesterday got an E-mail that there was an “important, secure E-mail” I should read on their server. After logging in, I found that all they had to say was public information about their merger with Wells Fargo, and how accounts would be shifted over. There was no reason that needed to be secure, since the only secret to reveal was that I had an account there, and the E-mail revealed that.

So I wrote a note back to complain, telling them not to make me jump through hoops to read public information. What’s so much fun is the response I got back:

Thank you for contacting Wachovia. My name is Tulanee E, and I am happy to assist you.

Mr. Templeton, I would be happy to assist you. However, to guarantee the security of your information prior to confidential information being disclosed or any account activities being performed we need to verify your personal information. For this we kindly ask you to please call us at 1-800-950-2296 to discuss this issue. Representatives are available to assist you 24 hours a day, seven days a week.

I apologize for any inconvenience.

My goal today was to provide you a complete and helpful answer. Thank you for banking with Wachovia.

Sincerely,

Tulanee E
Online Services Team
Online Customer Service: 1-800-950-2296

The efficacy of trusted traveler programs

A new paper on trusted traveler programs from RAND Corp goes into some detailed math analysis of various approaches to a trusted traveler program. In such a program, you pre-screen some people, and those who pass go into a trusted line where they receive a lesser security check. The resources saved in the lesser check are applied to give all other passengers a better security check. This was the eventual goal of the failed CLEAR card — though while it operated it just got you to the front of the line, it didn’t reduce your security check.

The analysis shows that with a “spherical horse” there are situations where the TT program could reduce the number of terrorists making it through security with some weapon, though it concludes the benefit is often minor, and sometimes negative. I say spherical horse because they have to idealize the security checks in their model, just declaring that an approach has an X% chance of catching a weapon, and that this chance increases when you spend more money and decreases when you spend less, though it has diminishing returns since you can’t get better than 100% no matter what you spend.

The authors know this assumption is risky. Turns out there is a form of security check which does match this model, which is random intense checking. There the percentage of weapons caught is pretty closely tied with the frequency of the random check. The TTs would just get a lower probability of random check. However, very few people seem to be proposing this model. The real approaches you see involve things like the TTs not having to take their shoes off, or somehow bypassing or reducing one of the specific elements of the security process compared to the public. I believe these approaches negate the positive results in the RAND study.

This is important because while the paper puts a focus on whether TT programs can get better security for the same dollar, the reality is I think a big motive for the TT approach is not more security, but placation of the wealthy and the frequent flyer. We all hate security and the TSA, and the airlines want to give better service and even the TSA wants to be hated a bit less. When a grandmother or 10 year old girl gets a security pat down, it is politically bad, even though it is the right security procedure. Letting important passengers get a less intrusive search has value to the airlines and the powerful, and not doing intrusive searches that seem stupid to the public has political value to the TSA as well.

We already have such a program, and it’s not just the bypass of the nudatrons (X-ray scanners) that has been won by members of congress and airline pilots. It’s called private air travel. People with their own planes can board without security at all for them or their guests. They could fly their planes into buildings if they wished, though most are not as big as the airliners from 9/11. Fortunately, the chance that the captains of industry who fly these planes would do this is tiny, so they fly without the TSA. The bypass for pilots seems to make a lot of sense at first blush — why search a pilot for a weapon she might use to take control of the plane? The reality is that giving a pass to the pilots means the bad guy’s problem changes from getting a weapon through the X-ray to creating fake pilot ID. It seems the latter might actually be easier than the former.

The "Forgetful Broker" is needed for Data Deposit Box

For some time I’ve been advocating a concept I call the Data Deposit Box as an architecture for providing social networking and personal data based applications in a distributed way that tries to find a happy medium between the old PC (your data live on your machine) and the modern cloud (your data live on 3rd party corporate machines) approach. The basic concept is to have a piece of cloud that you legally own (a data deposit box) where your data lives, and code from applications comes and runs on your box, but displays to your browser directly. This is partly about privacy, but mostly about interoperability and control.

This concept depends on the idea of publishing and subscribing to feeds from your friends (and other sources). Your friends are updating data about themselves, and you might want to see it — i.e. things like the Facebook wall, or Twitter feed. Feeds themselves would go through brokers just for the sake of efficiency, but would be encrypted so the brokers can’t actually read them.
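
A sketch of such an encrypted relay using the PyNaCl library’s sealed boxes, assuming each subscriber has a keypair and the broker only ever handles ciphertext:

```python
from nacl.public import PrivateKey, SealedBox   # pip install pynacl

# Each subscriber holds a keypair; the publisher encrypts a feed item
# to every subscriber's public key, and the broker relays opaque blobs.
friend = PrivateKey.generate()

def publish(item, subscriber_public_keys):
    # One sealed blob per subscriber; the broker sees only ciphertext.
    return [SealedBox(pk).encrypt(item) for pk in subscriber_public_keys]

blobs = publish(b"Strawberries are ripe", [friend.public_key])
print(SealedBox(friend).decrypt(blobs[0]))      # only the friend can read it
```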

There is a need for brokers which do see the data in certain cases, and in fact there’s a need that some types of data are never shown to your friends.

Crush

One classic example is the early social networking application the “crush” detector. In this app you get to declare a crush on a friend, but this is only revealed when both people have a mutual crush. Clearly you can’t just be sending your crush status to your friends. You need a 3rd party who gets the status of both of you, and only alerts you when the crush is mutual. (In some cases applications like this can be designed to work without the broker knowing your data, through the cryptographic technique known as blinding.)
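
A toy version of the crush broker makes the trust tradeoff plain: it reveals nothing to either party until the crush is mutual, but it does learn everyone’s declarations, which is why blinding is attractive. A sketch in Python:

```python
class CrushBroker:
    """Holds declared crushes and reveals only the mutual ones. Note the
    broker itself still learns every declaration; blinding would be
    needed to hide even that from it."""

    def __init__(self):
        self.declared = set()

    def declare(self, who, on_whom):
        self.declared.add((who, on_whom))
        if (on_whom, who) in self.declared:
            return f"Mutual crush: {who} and {on_whom}!"
        return "Noted. Nothing revealed."

broker = CrushBroker()
print(broker.declare("alice", "bob"))   # Noted. Nothing revealed.
print(broker.declare("bob", "alice"))   # Mutual crush: bob and alice!
```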

Working on Robocars at Google

As readers of this blog surely know, for several years I have been designing, writing and forecasting about the technology of self-driving “robocars” in the coming years. I’m pleased to announce that I have recently become a consultant to the robot car team working at Google.

Of course all that work will be done under NDA, and so until such time as Google makes more public announcements, I won’t be writing about what they or I are doing. I am very impressed by the team and their accomplishments, and to learn more I will point you to my blog post about their announcement and the article I added to my web site shortly after that announcement. It also means I probably won’t blog in any detail about certain areas of technology, in some cases not commenting on the work of other teams because of conflict of interest. However, as much as I enjoy writing and reporting on this technology, I would rather be building it.

My philosophical message about Robocars I have been saying for years, but it should be clear that I am simply consulting on the project, not setting its policies or acting as a spokesman.

My primary interest at Google is robocars, but many of you also know my long history in online civil rights and privacy, an area in which Google is often involved in both positive and negative ways. Indeed, while I was chairman of the EFF I felt there could be a conflict in working for a company which the EFF frequently has to either praise or criticise. I will be recusing myself from any EFF board decisions about Google, naturally.

Banks: Give me two passwords

Passwords are in the news thanks to Gawker media, who had their database of userids, emails and passwords hacked and published on the web. A big part of the fault is Gawker’s, who was saving user passwords (so it could email them) and thus was vulnerable. As I have written before, you should be very critical of any site that is able to email you your password if you forget it.

Some of the advice in the wake of this to users has been to not use the same password on multiple sites, and that’s not at all practical in today’s world. I have passwords for many hundreds of sites. Most of them are like gawker — accounts I was forced to create just to leave a comment on a message board. I use the same password for these “junk accounts.” It’s just not a big issue if somebody is able to leave a comment on a blog with my name, since my name was never verified in the first place. A different password for each site just isn’t something people can manage. There are password managers that try to solve this, creating different passwords for each site and remembering them, but these systems often have problems when roaming from computer to computer, or trying out new web browsers, or when sites change their login pages.

The long term solution is not passwords at all, it’s digital signatures (though those have all the problems listed above) and it’s not to even have logins at all, but instead use authenticated actions so we are neither creating accounts to do simple actions nor using a federated identity monopoly (like Facebook Connect). This is better than OpenID too.

Can your computer be like your priest?

I’ve had a blogging hiatus of late because I was heavily involved last week with Singularity University, a new teaching institution about the future created by NASA, Google, Autodesk and various others. We’ve got 80 students, most from outside North America, here for the summer graduate program, and they are quite an interesting group.

On Friday, I gave a lecture to open the policy, law and ethics track and I brought up one of the central questions — should we let our technology betray us? Now our tech can betray us in a number of ways, but in this case I mean something more literal, such as our computer ratting us out to the police, or providing evidence that will be used against us in court. Right now this is happening a lot.

I put forward the following challenge: In history, certain service professions have been given a special status when it comes to being forced to betray us. Your lawyer, your doctor and your priest must keep most of what you tell them in confidence, and can’t be compelled to reveal it in court. We have given them this immunity because we feel their services are essential, and that people might be afraid to use them if they feared they could be betrayed.

Our computers are becoming essential too, and even more intimately entangled with our lives. We’re carrying our cell phone on our body all day long, with its GPS and microphone and camera, and we’re learning that it is telling our location to the police if they ask. Soon we’ll have computers implanted in our bodies — will they also betray us?

So can we treat our personal computer like a priest or doctor? Sadly, while people we trust have been given this exemption, technology doesn’t seem to get it. And there may be a reason, too. People don’t seem as afraid to disclose incriminating data to their computers as they are of disclosing it to other people. Right now, we know that people can blab, but we don’t seem to appreciate how much computers can blab. If we do, we’ll become more afraid to trust our computers and other technology, which hurts their value.

Can the ethics that developed around the trusted professions move to our technology? That’s for the future to see.

Explicit interfaces for social media

The latest Facebook flap has caused me to write more about privacy of late, and that will continue as we head into the June 15 conference on Computers, Freedom and Privacy, where I will be speaking on privacy implications of robots.

Social networks want nice easy user interfaces, and complex privacy panels are hard to negotiate by users who don’t want to spend the time learning all the nuances of a system. People usually end up using the defaults.

One option that might improve things is to make data publication more explicit in the interface, and to let users choose, in an easy way, the level of exposure for a specific act.

Consider twitter. Instead of having a “Tweet” button, it should have a “Tweet to the world” button and a “Tweet to my followers” button. (Twitter wisely does not tweet when you hit Enter, as many people forget it is not the search box.) For people tweeting by SMS or other means, they could define a special character to put at the front of the tweet, like starting your tweet with a “%” to make it private (or public depending on your default.) Of course, your followers could still log and republish your private tweets, but they would at least not go into public archives. (Unless you’ve accepted a follower who does that, which is admittedly a problem with their design.)

This interface might seem complex but what’s important is that it’s clear. You know what you are doing. Here your choice makes sense to you and you are not squeezed into a set of defaults, ie. their choices.

Facebook has come close to this. There is a little lock icon next to the Share button, and it becomes a select box where you can set who you will share a posting with. It has a bit too much UI, but it’s on the right track. A select box can make it smaller but it should say “With the world” when that is the default state, to make your action explicit for you. This should be extended to many other actions on Facebook, so that buttons which do things which will inform the world, or your friends, say it. “Share this photo with the world.” “Tell all 430 friends your Strawberries are ripe.” The use of the number is a good idea, to make it clear just how many people you are publishing to.

Of course “with the world” is somewhat bulky and “with all friends of your friends” is even bulkier. The UI can start this way, but the user should be able to go to a page where they can switch to icons, once it is clear that they know what the icons mean. When Facebook again tries to move our social graph out into partner sites, this approach should follow. Instead of “Like” it would be “Tell your friends you Like” and so on. Verbose, but worth being verbose about.

This only applies to social media, of course, where there is a choice. If you comment on this blog it doesn’t yet say “post your comment to everybody” because there really isn’t any other choice expected on public blogs. Private/public blog systems like LiveJournal have featured a means to make postings available only to friends for a long time.

When is "opt out" a "cop out?"

As many expected would happen, Mark Zuckerberg did an op-ed column with a mild about-face on Facebook’s privacy changes. Coming soon, you will be able to opt out of having your basic information defined as “public” and exposed to outside web sites. Facebook has a long pattern of introducing a new feature with major privacy issues, being surprised by a storm of protest, and then offering a fix which helps somewhat, but often leaves things more exposed than they were before.

For a long time, the standard “solution” to privacy exposure problems has been to allow users to “opt out” and keep their data more private. Companies like to offer it, because the reality is that most people have never been exposed to a bad privacy invasion, and don’t bother to opt out. Privacy advocates ask for it because compared to the alternative — information exposure with no way around it — it seems like a win. The companies get what they want and keep the privacy crowd from getting too upset.

Sometimes privacy advocates will say that disclosure should be “opt in” — that systems should keep information private by default, and only let it out with the explicit approval of the user. Companies resist that for the same reason they like opt-out. Most people are lazy and stick with the defaults. They fear if they make something opt-in, they might as well not make it, unless they can make it so important that everybody will opt in. As indeed is the case with their service as a whole.

Neither option seems to work. If there were some way to have an actual negotiation between the users and a service, something better in the middle would be found. But we have no way to make that negotiation happen. Even if companies were willing to have negotiation of their “I Agree” click contracts, there is no way they would have the time to do it.

The peril of the Facebook anti-privacy pattern

There’s been a well justified storm about Facebook’s recent privacy changes. The EFF has a nice post outlining the changes in privacy policies at Facebook which inspired this popular graphic showing those changes.

But the deeper question is why Facebook wants to do this. The answer, of course, is money, but in particular it’s because the market is assigning a value to revealed data. This force seems to push Facebook, and services like it, into wanting to remove privacy from their users in a steadily rising trend. Social network services often will begin with decent privacy protections, both to avoid scaring users (when gaining users is the only goal) and because they have little motivation to do otherwise. The old world of PC applications tended to have strong privacy protection (by comparison) because data stayed on your own machine. Software that exported it got called “spyware” and tools were created to rout it out.

Facebook began as a social tool for students. It even promoted that those not at a school could not see in, could not even join. When this changed (for reasons I will outline below) older members were shocked at the idea their parents and other adults would be on the system. But Facebook decided, correctly, that excluding them was not the path to being #1.
