Submitted by brad on Mon, 2007-06-18 21:34.
For some time I’ve been warning about a growing danger to the 4th amendment. The 4th amendment protects our “persons, houses, papers and effects” but police and some courts have been interpreting this to mean that our private records kept in the hands of 3rd parties — such as E-mail on an ISP or webmail server — are not protected because they are not papers and not in our houses. Or more to the point, that we do not have a “reasonable expectation of privacy” when we leave our private data in the hands of 3rd parties. They have been seizing E-mail without getting a warrant, using the lower standards of the Stored Communications Act.
Recently, we at the EFF got involved in a case challenging that, and argued in our amicus brief that this mail deserved full protection. We won a lower court round and are thrilled that today, the 6th circuit court of appeals has issued a ruling affirming the logic in our amicus and protecting E-mail. We hope and expect this to become the full law of the land, though for now, I might advise all E-mail service providers to move their servers to the 6th circuit (MI, OH, TN, KY) for full protection. It will save you money as you will be able to more simply deal with requests for customer E-mails.
You can read more details on the EFF page on Warshak v USA. Congrats to Kevin Bankston who did the work on the brief. (Amusingly, Google owes him a big debt today, and last week they were hassling him to provide a notarized driver’s license photo in order to get removed from their Street View!)
Submitted by brad on Sat, 2007-06-16 11:54.
From time to time I come up with ideas that are interesting but I can't advocate because they have overly negative consequences in other areas, like privacy. Nonetheless, they are worth talking about because we might find better ways to do them.
There is some controversy today over whether driving while talking on a cell phone is dangerous, and should be banned, or restricted to handsfree mode. It occurs to me that the data to answer that question is out there. Most cars today have a computer, and it records things like the time that airbags deploy, or even in some cases when you suddenly dropped in speed. (If not, it certainly could.) Your cell phone, and your cell company know when you're on the phone. Your phone knows if you are using the handsfree, though the company doesn't. Your phone and cell company also know (but usually don't record) when you're driving and suddenly stop moving for an extended period.
In other words, something with access to all that data (and a time delta for the car's clock) could quickly answer the question of what cell phone behaviours are more likely to cause accidents. It would get a few errors (such as if the driver borrows their passenger's phone) but would be remarkably comprehensive in providing an answer.
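To make the idea concrete, here is a sketch of the kind of join such a study would do. Everything below (names, records, the 30-second clock-skew allowance) is invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical records: when each driver was on a call, and when their
# car logged a sudden stop (e.g. an airbag deployment). All data invented.
calls = [
    ("alice", datetime(2007, 6, 1, 8, 0), datetime(2007, 6, 1, 8, 10)),
    ("bob",   datetime(2007, 6, 1, 9, 0), datetime(2007, 6, 1, 9, 5)),
]
sudden_stops = [
    ("alice", datetime(2007, 6, 1, 8, 7)),   # stopped mid-call
    ("bob",   datetime(2007, 6, 1, 17, 30)), # stopped hours after any call
]

def incidents_during_calls(calls, stops, clock_skew=timedelta(seconds=30)):
    """Find sudden stops that fall inside a call window, allowing a
    small time delta between the car's clock and the network's."""
    hits = []
    for who, stop_time in stops:
        for caller, start, end in calls:
            if caller == who and start - clock_skew <= stop_time <= end + clock_skew:
                hits.append((who, stop_time))
    return hits

# Alice's stop falls inside her call window; Bob's does not.
print(incidents_during_calls(calls, sudden_stops))
```

The real study would of course need millions of records and careful handling of borrowed phones, but the core computation is no more than this join.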
But to gather this data involves way too many scary things. We don't really want our cars or phone companies recording data which can be used against us. They could record things like if we speed, and where we go that we don't want others to know about, and who we're talking to at the time, and much more.
In our quest for learning from private data, we have often sought anonymization technologies that can somehow collect the data and disassociate it from the source. That turns out to be very hard to do, often near impossible, and the infrastructure built for this sort of collection can almost always be trivially repurposed for non-anonymous use; now all that is needed is to flick a switch.
Now I do expect that soon we will see, after a serious car accident, attempts to get at this data on a case by case basis. The insurance companies will ask for cell phone records at the time of the accident, or data from the phone itself. We’re already going to lose that privacy once there is an accident, though at least case by case invasions don’t scale. Messy problem.
Submitted by brad on Wed, 2007-05-30 11:32.
I wrote recently about the paradox of identity management and how the easier it is to offer information, the more often it will be exchanged.
To address some of these issues, let me propose something different: The creation of an infrastructure that allows people to generate secure (effectively anonymous) pseudonyms in a manner that each person can have at most one such ID. (There would be various classes of these IDs, so people could have many IDs, but only one of each class.) I’ll call this a QID (the Q “standing” for “unique.”)
The value of a unique ID is strong — it allows one to associate a reputation with the ID. Because you can only get one QID, you are motivated to carefully protect the reputation associated with it, just as you are motivated to protect the reputation on your “real” identity. With most anonymous systems, if you develop a negative reputation, you can simply discard the bad ID and get a new one which has no reputation. That’s annoying but better than using a negative ID. (Nobody on eBay keeps an account that gets a truly negative reputation. An account is abandoned as soon as the reputation seems worse than an empty reputation.) In effect, anonymous IDs let you demonstrate a good reputation. Unique IDs let you demonstrate you don’t have a negative reputation. In some cases systems try to stop this by making it cost money or effort to generate a new ID, but it’s a hard problem. Anti-spam efforts don’t really care about who you are, they just want to know that if they ban you for being a spammer, you stay banned. (For this reason many anti-spam crusaders currently desire identification of all mailers, often with an identity tied to a real world ID.)
I propose this because many web sites and services which demand accounts really don’t care who you are or what your E-mail address is. In many cases they care about much simpler things — such as whether you are creating a raft of different accounts to appear as more than one person, or whether you will suffer negative consequences for negative actions. To solve these problems there is no need to provide personal information to use such systems.
Submitted by brad on Wed, 2007-05-16 16:34.
Since the dawn of the web, there has been a call for a “single sign-on”
facility. The web consists of millions of independently operated web sites,
many of which ask users to create “accounts” and sign-on to use the site.
This is frustrating to users.
Today the general single sign-on concept has morphed into what is now called
“digital identity management” and is considerably more complex. The most recent
project of excitement is OpenID which is a standard which allows users
to log on using an identifier which can be the URL of an identity service,
possibly even one they run themselves.
Many people view OpenID as positive for privacy because of what came before it.
The first major single sign-on project was Microsoft Passport which came
under criticism both because all your data was managed by a single company and
that single company was a fairly notorious monopoly. To counter that, the
Liberty Alliance project was brewed by Sun, AOL and many other companies,
offering a system not run by any single company. OpenID is simpler and even
more decentralized.
However, I feel many of the actors in this space are not considering an inherent
paradox that surrounds the entire field of identity management. On the
surface, privacy-conscious identity management puts control over who gets
identity information in the hands of the user. You decide who to give identity
info to, and when. Ideally, you can even revoke access, and push for minimal
disclosure. Kim Cameron summarized a set of laws of identity
outlining many of these principles.
In spite of these laws one of the goals of most identity management
systems has been ease of use. And who, on the surface, can argue with ease
of use? Managing individual accounts at a thousand web sites is hard.
Creating new accounts for every new web site is hard. We want something easier.
However, here is the contradiction. If you make something easy to do,
it will be done more often. It’s hard to see how this can’t be true.
The easier it is to give somebody ID information, the more often it will
be done. And the easier it is to give ID information, the more palatable
it is to ask for, or demand it.
Submitted by brad on Thu, 2007-05-03 13:28.
While I was at Tim O’Reilly’s Web 2.0 Expo, I did an interview with an online publication called Web Pro News. I personally prefer written text to video blogging, but for those who like to see video, you can check out:
Video Interview on Privacy and Web 2.0
The video quality is pretty good, if not the lighting.
The main focus was to remind people that as we return to timesharing, which is to say, move our data from desktop applications to web based applications, we must be aware that putting our private data in the hands of 3rd parties gives it less constitutional protection. We’re effectively erasing the 4th Amendment.
I also hint at an essay I am preparing on the evils of user-controlled identity management software, and deliver my usual rant about thinking through how you would design software if you were living in China or Saudi Arabia.
I also was interviewed some time ago about Google and other issues by a French/German channel. That’s a 90-minute program entitled Faut-il avoir peur de Google ? (Should we fear Google?). It’s also available in German. It was up for free when I watched it, but it may now require payment. (I only appear for a few minutes, my voice dubbed over.)
When I was interviewed for this I offered to, with some help, speak in French. I am told I have a pretty decent accent, though I no longer have the vocabulary to speak conversationally in French. I thought it would be interesting if they helped me translate and then I spoke my words in French (perhaps even dubbing myself later if need be.) They were not interested since they also had to do German.
Another video interview by a young French documentarian producing a show called Mix-Age Beta can be found here. The lighting isn’t good, but this time it’s in English. It’s done under the palm tree in my back yard.
Submitted by brad on Sat, 2007-03-03 23:07.
I have written before how future technology affects our privacy decisions today. DNA collection is definitely one of these areas. As you may know, law enforcement in the USA is now collecting DNA from people convicted of crimes, and even those arrested in a number of jurisdictions — with no ability to expunge the data if not found guilty. You may feel this doesn’t affect you, as you have not been arrested.
As DNA technology grows, bioinformatics software is becoming able to determine that a sample of DNA is a “near match” for somebody in a database. For example, they might determine that a person in the database is not the source of the DNA being studied, but is a relative of that person.
In a recent case, a DNA search turned up not the perpetrator, but his brother. They investigated the male relatives of the brother and found and convicted the man in question.
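For the curious, here is a toy sketch of how such near-match scoring might look. The loci named below are real forensic markers, but the allele values and thresholds are invented; real familial searching uses likelihood ratios over many more loci:

```python
# Toy familial-search scoring on STR profiles (locus -> pair of alleles).
def shared_alleles(profile_a, profile_b):
    """Count alleles the two profiles have in common, locus by locus."""
    total = 0
    for locus, alleles_a in profile_a.items():
        remaining = list(profile_b.get(locus, ()))
        for allele in alleles_a:
            if allele in remaining:
                remaining.remove(allele)
                total += 1
    return total

def classify(sample, candidate, n_loci=3):
    shared = shared_alleles(sample, candidate)
    if shared == 2 * n_loci:      # every allele at every locus matches
        return "match"
    if shared >= n_loci:          # siblings share roughly half
        return "possible relative"
    return "no match"

suspect  = {"D3S1358": (15, 18), "vWA": (16, 17), "FGA": (21, 24)}
sibling  = {"D3S1358": (15, 16), "vWA": (17, 19), "FGA": (22, 24)}
stranger = {"D3S1358": (14, 19), "vWA": (14, 15), "FGA": (20, 26)}

print(classify(suspect, suspect))   # match
print(classify(suspect, sibling))   # possible relative
print(classify(suspect, stranger))  # no match
```

The privacy point is that the database ensnares not just those in it, but everyone who shares their genes.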
Submitted by brad on Thu, 2007-03-01 23:46.
I was discussing the Zfone encrypting telephone system with its creator, Phil Zimmermann, today. In his system, phone calls are encrypted with opportunistic, certificateless cryptography, which I applaud because it allows a zero user interface and no centralization. It is vulnerable to “man in the middle” attacks only if the MITM can be present in all communications.
His defence against MITM is to allow the users of the system to do a spoken authentication protocol at any time in their series of conversations. While it’s good to do it on the first call, his system works even when done later. In their conversation, they can, using spoken voice, read off a signature of the crypto secrets that are securing their conversation. The signatures must match — if they don’t, a man-in-the-middle is possibly interfering.
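The spoken check amounts to both ends hashing the session secrets down to a few human-readable words and comparing them aloud. This is not Zfone’s actual derivation (ZRTP has its own scheme and word lists); the word list and truncation below are invented to show the shape of the idea:

```python
import hashlib

# A toy word list. The real system uses a standardized list and a
# different derivation; treat this purely as an illustration.
WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima",
         "mike", "november", "oscar", "papa"]

def short_auth_string(shared_secret: bytes, n_words: int = 2) -> str:
    """Derive a short spoken-verification string from the session's
    crypto secrets. Both ends compute it independently; the users
    read it aloud and check that the words match."""
    digest = hashlib.sha256(shared_secret).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:n_words])

# Both parties derived the same session secret, so their strings match.
alice = short_auth_string(b"session-secret-123")
bob   = short_auth_string(b"session-secret-123")
print(alice, "|", bob)
assert alice == bob
```

A man in the middle who substituted his own keys would (with high probability) produce different words at each end, which is what the spoken comparison catches.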
I brought up an attack he had thought of and called the Rich Little attack, involving impersonation with a combination of a good voice impersonation actor and hypothetical computerized speech modification that turns a good impersonator into a near perfect one. Phil believes that trying to substitute voice in a challenge that can come at any time, in any form, in any conversation is woefully impractical.
A small amount of thought produced this attack: two impersonators. Early in a series of conversations, the spy agency trying to break in brings in two impersonators who have listened to Alice and Bob respectively (remember, the agency can hear their calls) and learned their mannerisms. A digital audio processor helps match the tones of their voices, which is all the easier on an 8 kHz phone channel.
Submitted by brad on Mon, 2007-02-19 12:54.
If you’re like me, you select special unique passwords for the sites that count, such as banks, and you use a fairly simple password for things like accounts on blogs and message boards where you’re not particularly scared if somebody learns the password. (You had better not be scared, since most of these sites store your password in the clear so they can mail it to you, which means they learn your standard account/password and could pretend to be you on all the sites you duplicate the password on.) There are tools that will generate a different password for every site you visit, and of course most browsers will remember a complete suite of passwords for you, but neither of these work well when roaming to an internet cafe or friend’s house.
However, every so often you’ll get a site that demands you use a “strong” password, requiring it to be a certain length, to have digits or punctuation, spaces and mixed case, or some subset of rules like these. This of course screws you up if the site is an unimportant one where you want to use your easy-to-remember password: you must generate a variant of it that meets their rules, and then remember that variant. These are usually sites where you can’t imagine why you want to create an account in the first place, such as stores you will shop at once, or blogs you will comment on once, and so on.
Strong passwords make a lot of sense in certain situations, but it seems some people don’t understand why. You need a strong password in case it is possible or desirable for an attacker to do a “dictionary” attack on your account. This means they try thousands, or even millions, of passwords until they hit the one that works. If you use a dictionary word, they can try the most common words in the dictionary and learn your password.
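A little arithmetic makes the point concrete. The numbers below are round illustrations, not measurements of any real system:

```python
def guesses_needed(alphabet_size: int, length: int) -> int:
    """Worst-case guesses for a brute-force search over a full alphabet."""
    return alphabet_size ** length

# A password taken from a dictionary of ~100,000 common words falls to
# a dictionary attack in at most 100,000 guesses, however long the word.
# A random 8-character password over, say, 40 symbols (lowercase letters,
# digits, a few punctuation marks) forces a vastly bigger search.
dictionary_attack = 100_000
brute_force = guesses_needed(40, 8)   # 6,553,600,000,000

print(f"{brute_force // dictionary_attack:,}x more guesses")
```

Of course, none of this matters for a throwaway blog account nobody would bother attacking, which is exactly the complaint above.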
Submitted by brad on Mon, 2007-01-29 16:23.
I’ve written before about ZUI (Zero user interface) in crypto, and the need for opportunistic encryption based upon it. Today I want to reinforce the concept by pointing to mistakes we’ve seen in the past.
Many people don’t know it, but our good friends at Microsoft put opportunistic encryption into Outlook Express and other mailers many years ago. And their mailers were and still are the most widely used. Just two checkboxes in MSOE allowed you to ask that it sign all your outgoing mail, and further to encrypt all mail you sent to people whose keys you knew. If they signed their outgoing mail, you automatically learned their keys, and from then on your replies to them were encrypted.
However, it wasn’t just two checkboxes — you also had to get an E-mail certificate. Those are available free from Thawte, but the process is cumbersome and was a further barrier to adoption.
But the real barrier? Microsoft’s code imagined you had one primary private key and certificate. As such, any access to that private key was treated as a highly important security act: use of it had to be highly protected, since after all you might be signing important documents, even cheques, with it.
As a result, every time you sent a mail with the “automatic sign” checkbox on, it put up a prompt telling you a program wanted to use your private key, and asked if you would approve that. Every time you received a mail that was encrypted because somebody else knew your key, it likewise prompted you to confirm access should be given to the private key. That’s the right approach on the private key that can spend the money in my bank account (in fact it’s not strong enough even for that) but it’s a disaster if it happens every time you try to read an E-mail!
We see the same with SSL/TLS certificates for web sites. Web sites can pay good money to the blessed CAs for a site certificate, which verifies that a site is the site you entered the domain name of. While these are overpriced, that’s a good purpose. Many people however want a TLS certificate simply to make sure the traffic is encrypted and can’t be spied upon or modified. So many sites use a free self-signed certificate. If you use one, however, the browser pops up a window, warning you about the use of this self-signed certificate, and you must approve its use, and say for how long you will tolerate it.
That’s OK for the self-signed certificate on my own E-mail server, since only a few people use it, and we can confirm it once without trouble. However, if every time you visit a new web site you have to confirm use of its self-signed key, you’re going to get annoyed. And thus, while the whole web could be encrypted, it’s not, in part due to this.
What was needed was what security experts call an understanding of the “threat model” — what are you scared of, and why, and how much hassle do you want to accept in order to try to be secure?
It would be nice for a TLS certificate to say, “I’m not certifying anything about who this is” and just arrange for encryption. All that would tell you is that the site is the same site you visited before. The Lock icon in the browser would show encryption, but not any authentication. (A good way to show authentication would be to perhaps highlight the authenticated part of the URL in the title bar, which shows you just what was authenticated.)
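That “same site you visited before” property is what security folks call trust-on-first-use, or key pinning. A toy sketch of the check a browser could do, with an invented in-memory store standing in for the browser’s database:

```python
import hashlib

# A toy trust-on-first-use store: remember the fingerprint of the
# certificate each site presented the first time, and complain only
# if it later changes. This checks continuity, not identity.
known_fingerprints = {}

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def check_site(hostname: str, cert_der: bytes) -> str:
    fp = fingerprint(cert_der)
    if hostname not in known_fingerprints:
        known_fingerprints[hostname] = fp
        return "first visit: remembering this site's key"
    if known_fingerprints[hostname] == fp:
        return "same site as before"
    return "WARNING: key changed, possible man in the middle"

print(check_site("example.com", b"cert-A"))  # first visit, no prompt
print(check_site("example.com", b"cert-A"))  # same cert, no prompt
print(check_site("example.com", b"cert-B"))  # changed cert, warn
```

The user is bothered only in the rare case that matters, which is the whole ZUI argument in miniature.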
In E-mail, it is clear what was needed was a different private key, used only to do signing and opportunistic
encryption of E-mail, and not used for authorizing cheques. This lesser key could be accessed readily by the mail program, without needing confirmation from the user every time. (You might, if concerned, have it get confirmation or even a pass code on a once a day basis, to stop e-mail worms from sending mail signed as you at surprising times.)
Paranoid users could ask for warnings here too, but most would not need them.
TLS supports client side certificates too. They are almost never used. Clients don’t want to get certificates for most uses, but they might like to be able to tell a site they are the same person as visited before — which is mostly what the login accounts at web sites verify. A few also verify the account is tied to a particular e-mail address, but that’s about it.
Perhaps if we move to get the client part working, we can understand our threat model better.
Submitted by brad on Wed, 2006-12-13 23:17.
A new program has appeared at San Jose Airport, and a few other airports like Orlando. It’s called “Clear” and is largely the product of the private company Clear at flyclear.com. But something smells very wrong.
To get the Clear card, you hand over $99/year. The private company keeps 90% and the TSA gets the small remainder. You then have to provide a fingerprint, an iris scan and your SSN, among other things.
What do you get for this? You get to go to the front of the security line, past all the hoi polloi. But that’s it. Once at the front of the line, you still go through the security scan the same as anybody else. Which is, actually, the right thing to do since “trusted traveller” programs which actually let you bypass the security procedure are in fact bad for security compared to random screening.
But what doesn’t make sense is this: why all the background checks and biometrics just to go to the head of the line? Why wouldn’t an ordinary photo ID card work? It doesn’t matter who you are. You could be Usama bin Ladin, for all the card proves; all it bought you was skipping the wait in line.
So what gives? Is this just an end run to get people more used to handing over fingerprints and other information as a natural consequence of flying? Is it a plan to change the program into one that lets the “clear” people actually avoid being x-rayed? As it stands, it certainly makes no sense.
Note that it’s not paying to get to the front of the line that makes no sense, though it’s debatable why the government should be selling such privileges. It’s the pointless security check and privacy invasion. For some time United Airlines at their terminal in SFO has had a shorter security line for their frequent flyers. But it doesn’t require any special check on who you are. If you have status or a 1st class ticket, you’re in the short line.
Submitted by brad on Mon, 2006-08-21 11:44.
One of the few positive things about the recent giant AOL data spill (which we have asked the FTC to look into) is that it has hopefully taught a few lessons about just how hard it is to truly anonymize data. With luck, the lesson will be “don’t be fooled into thinking you can do it” and not “just avoid what AOL did.”
There is some irony in that, in general, AOL is one of the better performers. They don’t keep a permanent log of searches tied to userid, though it is tied, reports say, to a virtual ID. (I have seen other reports suggesting even this is erased after a while.) AOL also lets you turn off short-term logging of the association with your real ID. Google, MSN, Yahoo and others keep the data effectively forever.
Everybody has pointed out that for many people, just the search queries themselves can be enough to identify a person, because people search for things that relate to them. But many people’s searches will not be trackable back to them.
However, the AOL records maintain the exact time of the search, to the second or perhaps more accurately. They also maintain the site the user clicked on after doing the search. AOL may have wiped logs, but most sites don’t. Let’s say you go through the AOL logs and discover an AOL user searched and clicked on your site. You can go into your own logs and find that search, both from the timestamp, and the fact the “referer” field will identify that the user came via an AOL search for those specific terms.
Now you can learn the IP address of the user, and their cookies or even account with your site, if your site has accounts.
If you’re a lawyer, however, doing a case where you can subpoena information, you could use that tool to identify almost any user in the AOL database who did a modest volume of searches. And the big sites with accounts could probably identify all their users who are in the database, getting their account id (and thus often name and email and the works.)
So even if AOL can’t uncover who many of these users are due to an erasure policy, the truth is that’s not enough. Even removing the clicked-on site from the data does not stop the big sites from tracking their own users, because their own logs have the timestamped searches. And an investigator could look at a query, run the query himself, see what sites you would likely have clicked on, and search the logs of those sites. They would still find you. Even without the timestamp this is possible for an uncommon query. And uncommon queries are surprisingly common. :-)
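To see how mechanical the re-identification is, here is a sketch of the join, with invented log records in the shape described above (the virtual ID, IP address, and query are all made up):

```python
from datetime import datetime, timedelta

# An AOL-style record has a virtual user ID, a query, and a click time;
# the destination site's own log has the real IP, the hit time, and a
# referer carrying the search terms. All values here are fictitious.
aol_log = [
    {"vid": 17556639, "query": "rare orchid diseases",
     "clicked": datetime(2006, 3, 1, 14, 5, 22)},
]
site_log = [
    {"ip": "203.0.113.7", "time": datetime(2006, 3, 1, 14, 5, 23),
     "referer": "http://search.aol.com/?q=rare+orchid+diseases"},
]

def reidentify(aol_log, site_log, window=timedelta(seconds=5)):
    """Join the 'anonymized' log against a site's own access log on the
    search terms in the referer plus a timestamp within a small window."""
    matches = []
    for a in aol_log:
        terms = a["query"].replace(" ", "+")
        for s in site_log:
            if terms in s["referer"] and abs(s["time"] - a["clicked"]) <= window:
                matches.append((a["vid"], s["ip"]))
    return matches

print(reidentify(aol_log, site_log))  # [(17556639, '203.0.113.7')]
```

The virtual ID is now tied to an IP address, and from there to cookies or an account. Nothing about this requires any cleverness, only access to the second log.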
I have a static IP address, so my IP address links directly to me. Broadband users who have dynamic IP addresses may be fooled — if you have a network gateway box or leave your sole computer on, your address may stay stable for months at a time — it’s almost as close a tie as a static IP.
The point here is that once the data are collected, making them anonymous is very, very hard. Harder than you think, even when you take into account this rule about how hard it is.
Submitted by brad on Fri, 2006-08-18 22:56.
You probably heard yesterday’s good news that the ACLU prevailed in their petition for an injunction against the NSA warrantless wiretapping. (Our case against AT&T to hold them accountable for allegedly participating in this now-ruled-unlawful program continues in the courts.)
However, the ruling was appealed (no surprise) and the government also asked for, and was granted a stay of the injunction. So the wiretaps won’t stop unless the appeal is won.
But this raises the question: “Why do you need a stay?”
The line from the White House has been that the government engaged in this warrantless wiretapping because the President had the authority to do that, both inherently and under the famous AUMF. And they wanted to use that authority because they complained the official system mandated by law, requiring process before the FISA court, was just too cumbersome. Even though the FISA law allows immediate emergency wiretaps without a warrant, as long as a retroactive application is made soon after.
We’ve all wondered just why that’s too cumbersome. But they seemed to be saying that since the President had the authority to bypass the FISA court, why should they impede the program with all that pesky judicial oversight?
But now we have a ruling that the President does not have that authority. Perhaps that will change on appeal, but for now it is the ruling. So surely this should mean that they just go back to doing it the way the FISA regulations require it? What’s the urgent need for a stay? Could they not have been ready with the papers to get the warrants they need if they lost?
Well, I think I know the answer. Many people suspect that the reason they don’t go to FISA is not because it’s too much paperwork. It’s because they are trying to do things FISA would not let them do. So of course they don’t want to ask. (The FISA court, btw, has only told them no once, and even that was overturned. That’s about all the public knows about all its rulings.) I believe there is a more invasive program in place, and we’ve seen hints of that in press reports, with data mining of call records and more.
By needing this stay, the message has come through loud and clear. They are not willing to get the court’s oversight of this program, no way, no how. And who knows how long it will be until we learn what’s really going on?
Submitted by brad on Mon, 2006-08-07 13:51.
The blogosphere is justifiably abuzz with the release by AOL of “anonymized” search query histories for over 500,000 AOL users, trying to be nice to the research community. After the fury, they pulled it and issued a decently strong apology, but the damage is done.
Many people have pointed out obvious risks, such as the fact that searches often contain text that reveal who you are. Who hasn’t searched on their own name? (Alas, I’m now the #7 “brad” on Google, a shadow of my long stint at #1.)
But some of those browsing the data have discovered something far darker. There are searches in there for things like “how to kill your wife” and child porn. Once that’s discovered, isn’t that now going to be sufficient grounds for a court order to reveal who that person was? It seems there is probable cause to believe user 17556639 is thinking about killing his wife. And knowing this very specific bit of information, who would impede efforts to investigate and protect her?
But we can’t have this happening in general. How long before sites are forced to look for evidence of crimes in “anonymized” data, with warrants then issued to nymize it? (Did I just invent a word?)
After all, I recall a year ago, I wanted to see if Google would sell adwords on various nasty searches, and what adwords they would be. So I searched for “kiddie porn” and other nasty things. (To save you the stigma, Google clearly has a system designed to spot such searches and not show ads, since people who bought the word “kiddie” may not want to advertise on those results.)
So had my Google results been in such a leak, I might have faced one of those very scary kiddie porn raids, which in the end would find nothing after tearing apart my life and confiscating my computers. (I might hope they would have a sanity check on doing this to somebody from the EFF, but who knows. And you don’t have that protection even if somebody would accord it to me.)
I expect we’ll be seeing the repercussions from this data spill for some time to come. In the end, if we want privacy from being data mined, deletion of such records is the only way to go.
Submitted by brad on Thu, 2006-07-20 14:46.
Big news today. Judge Walker has denied the motions — particularly the one by the federal government — to dismiss our case against AT&T for cooperating with the NSA on warrantless surveillance of phone traffic and records.
The federal government, including the heads of the major spy agencies, had filed a brief demanding the case be dismissed on “state secrets” grounds. This common law doctrine, which is often frighteningly successful, allows cases to be dismissed, even if they are of great merit, if following through would reveal state secrets.
Here is our brief note, which has a link to the decision.
This is a great step. Further application of the state secrets rule would have made legal oversight of
surveillance by spy agencies moot. We can write all the laws we want governing how spies may operate, and how surveillance is to be regulated, but if nobody can sue over violations of those laws, what purpose do they really have? Very little.
Now our allegations can be tested in court.
Submitted by brad on Fri, 2006-06-30 16:17.
When you buy stuff with a credit card online these days, they always want your address, because they will plug it into their credit card verification system, even if they are not shipping you a physical product.
I’m trying to give my physical address out less and less these days, and would in the long term love something like the addresscrow system I proposed.
However, as an interim, it might be nice to formalize a “fake” credit card billing address, authorized by the credit card company, that you can give when placing orders that will not be shipped to your physical address.
You can already do this, in that credit card verification systems tend to focus only on your street number and zip code, and rarely on your phone number, so you can make up a fake address based on this. If you live at 124 Elm St. 60609, you can usually get credit card verification with “124 Fake St. Chicago, IL 60609” choosing a street name that doesn’t exist so the post office will discard that mail. (Though often post offices try to be “good” and will get mail to you even if the street name is wrong. I guess you could try 124 DoNotDeliver St. to give them the hint.)
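The trick works because of what the verification actually compares. Here is a toy model of that check; real AVS implementations differ in their details, and the comparison rule below is a simplification of the behavior described above:

```python
import re

def avs_match(given_address: str, given_zip: str,
              bank_address: str, bank_zip: str) -> bool:
    """Toy address verification in the style described above: compare
    only the leading street number and the ZIP code, ignoring the
    street name entirely. Real AVS systems vary."""
    def street_number(addr: str) -> str:
        m = re.match(r"\s*(\d+)", addr)
        return m.group(1) if m else ""
    return (street_number(given_address) == street_number(bank_address)
            and given_zip == bank_zip)

# The made-up street name passes, because only "124" and "60609" are checked.
print(avs_match("124 Fake St.", "60609", "124 Elm St.", "60609"))  # True
print(avs_match("125 Elm St.", "60609", "124 Elm St.", "60609"))   # False
```

So the “fake” address clears verification today; formalizing it would simply make the practice official.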
If it became official, the post offices could better learn what to do. There are arguments for and against letting the biller realize the address is fake. Good billers would accept this and not add it to mailing lists. Bad billers might refuse to let you enter the address.
Submitted by brad on Thu, 2006-06-15 12:20.
In recent times, we’ve seen a lot of debate about eroding the 4th amendment protections against surveillance in the interests of stopping terrorists and other criminals.
It’s gotten so prevalent that it seems the debate has become only about how much to weaken the 4th. Nobody ever suggests the other direction, strengthening it.
Let’s dip back into historical perspective, and think of the late 18th century, when it was written. In those days surveillance was a simple thing to understand. It required human beings who were physically present to watch you, or search your house. The closest thing to remote surveillance was the idea of opening somebody’s mail while in transit.
More importantly, it didn’t scale. To watch 100 people you needed 100 teams. You could watch the town square but otherwise large scale surveillance simply wasn’t physically possible.
And yet, even with this limited set of things to worry about, the signers of the bill of rights felt they had plenty to fear. If you could describe today’s techniques of surveillance to them — where we can observe people from a distance, plant bugs in their homes, see them through walls, detect sounds from windows and read electronic emissions; where we can listen to a person by keying in a number at our desk, and where, most shockingly of all, through computers observe the activities of effectively everybody — they would have gasped in shock.
Their reaction would not have been to say, “We had not realized there would be all these new useful tools of surveillance. We had better open up exceptions in the 4th to be sure they can be used effectively.” I think they would have instead worked to strengthen the 4th to prevent these new tools.
After all, they were revolutionaries. Had the King been able to data-mine the call records of colonial America, no doubt he would have discovered all those seditious founding fathers and rounded them up quickly.
So I ask, as the surveillance tools become stronger, doesn’t it make sense that the protection from them should become stronger, to retain balance? Society can still benefit from better police technology by making it more precise, rather than more broad. This is not saying give up what technology can do to protect us from crime, but rather to channel it in the right direction.
Because the tools are going to get even better and “better.” The balance is going to continue to shift until there’s very little of the original design left.
Submitted by brad on Tue, 2006-05-02 00:03.
Here’s an interesting problem. In the movies we always see scenes where the good guy is fighting the Evil Conspiracy (EvilCon) and he tells them he’s hidden the incriminating evidence with a friend who will release it to the papers if the good guy disappears under mysterious circumstances. Today EvilCon would just quickly mine your social networking platform to find all your friends and shake them down for the evidence.
So here’s the challenge. Design a system so that if you want to escrow some evidence, you can do it quickly, reliably and not too expensively, at a brief stop at an internet terminal while on the run from EvilCon. Assume EvilCon is extremely powerful, like the NSA. Here are some of the challenges:
- You need to be able to pay those who do escrow, as this is risky work. At the same time there must be no way to trace the payment.
- You don’t want the escrow agents to be able to read the data. Instead, you will split the encryption keys among several escrow agents in a way that some subset of them must declare you missing to assemble the key and publish the data.
- You need some way to vet escrow agents to assure they will do their job faithfully, but at the same time you must assume some of them work for EvilCon if there is a large pool.
- They must have some way to check if you are still alive. Regularly searching for you in Google or going to your web site regularly might be traced.
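The key-splitting requirement in the second point is essentially Shamir's secret-sharing scheme: encode the key as the constant term of a random polynomial over a prime field, give each agent one point on the curve, and any threshold-sized subset can interpolate the key back. A minimal sketch (toy parameters, illustration only, not production crypto):

```python
import random

# Toy Shamir secret sharing over a prime field. A real escrow system
# would use a vetted crypto library, not this sketch.
PRIME = 2**127 - 1  # Mersenne prime, big enough for a 126-bit key

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n_shares points; any `threshold` recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # modular inverse of den via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 0xDEADBEEFCAFE  # the data-encryption key you want escrowed
shares = split_secret(key, n_shares=5, threshold=3)
assert recover_secret(shares[:3]) == key   # any 3 agents suffice
assert recover_secret(shares[2:]) == key   # a different 3 also work
```

Note the property that matters here: any two colluding agents (even ones secretly working for EvilCon) learn nothing at all about the key; only when three of them agree you have disappeared can the data be decrypted and published.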
Some thoughts below…
Submitted by brad on Fri, 2006-03-31 17:24.
April 1, 2006, San Francisco, CA: In a surprise move, Department of Justice (DoJ) attorneys filed a subpoena yesterday in federal court against the National Security Agency, requesting one million sample Google searches. They plan to use the searches as evidence in their defence of the constitutionality of the Child Online Protection Act.
The DoJ had previously requested a subpoena against Google, Inc. itself for the records, but Google mounted a serious defence, resulting in much more limited data flow. According to DoJ spokesperson Charles Miller, “Google was just putting up too much of a fight. The other sites and ISPs mostly caved in quickly and handed over web traffic and search records without a fuss, but Google made it expensive for us. We knew the NSA had all the records, so it seemed much simpler to just get them by going within the federal government.”
“Yahoo, of course, gave in rather easily. If they hadn’t, we could have just asked our friends in the Chinese government to demand the records. Yahoo does whatever they say.”
After the New York Times broke the story in December, the White House confirmed that the NSA has been performing warrantless searches of international phone, e-mail and internet traffic. Common speculation suggests they have been tapping other things as well, data mining the vast sea of internet traffic for patterns that might point to enemy activity.
“The NSA has the wires into all the hubs already, it’s just a lot faster for them to get this data.”
“We can neither confirm nor deny we have these search records,” said an unnamed NSA spokesperson. “In fact, even asking if we have them makes you suspect.”
(Thanks to John Gilmore for the suggestion.)
Submitted by brad on Wed, 2006-03-22 21:46.
For some time in my talks on CALEA and VoIP I’ve pointed out that because the U.S. government is mandating a wiretap backdoor into all telephony equipment, vendors are building these backdoors in to sell to the U.S. market, and then selling the same backdoored equipment all over the world. Even if you trust the USGov not to run around randomly wiretapping people without warrants (since that would never happen), there are a lot of governments and phone companies in other countries who can’t be trusted but whom we’re enabling. All to catch the 3 stupid criminals who use VoIP and don’t use an encrypted system like Skype.
Recently this story about a wiretap on the Greek PM’s phone was forwarded to me by John Gilmore. Ericsson says that they installed wiretap backdoors to allow legal wiretaps, and that this system was abused because Vodafone didn’t protect it very well — a claim Vodafone denies. As a result the prime minister’s phone was tapped for months, along with those of foreign dignitaries and a U.S. Embassy phone. Well, there’s irony.
We’re hearing about this because there is accountability in Greece. But I have to assume it’s going to happen a lot in countries where we will never hear about it. If you build the apparatus of the surveillance society, even with the best of intentions, it will get used that way, either here, or in less savoury places.
It would be nice if U.S. companies would at least refuse to sell the wiretap functions, or charge a fortune for them, to countries that, unlike the USA, have no legal requirement for them. Of course, thanks to the U.S. lead, soon there won’t be many such countries left, and the companies will have to include the backdoors to do business in all those nations. Will U.S. companies have the guts to say, “Sorry China, Saudi Arabia, et al. — no wiretap backdoors in our product, law or not. Add it yourself if you can figure it out.”
Submitted by brad on Tue, 2006-03-21 00:32.
You may be familiar with steganography, the technique for hiding messages in other messages so that not only can the black-hat not read the message, they aren’t even aware it’s there at all. It’s arguably the most secure way to send secret data over an open channel. A classic form of “stego” involves encrypting a message and then hiding it in the low order “noise” bits of a digital photograph. An observer can’t tell the hidden bits from real noise. Only somebody with the key can extract the actual message.
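The classic low-order-bit scheme just described can be sketched in a few lines. Here the “image” is simply a list of 8-bit pixel values (a real tool would pull them from an actual photo, and would encrypt the payload before embedding it):

```python
import random

def embed(pixels, message):
    """Hide message bits in the least-significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "cover image too small"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the noise bit
    return out

def extract(pixels, length):
    """Read `length` bytes back out of the low-order bits."""
    message = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        message.append(byte)
    return bytes(message)

cover = [random.randrange(256) for _ in range(4096)]  # stand-in for photo noise
stego = embed(cover, b"meet at dawn")
assert extract(stego, 12) == b"meet at dawn"
```

Each pixel changes by at most 1, which is invisible to the eye and statistically indistinguishable from sensor noise — but it also shows the flaw discussed next: one pixel carries only one bit, so the cover must be eight times larger than the payload, and usually far more.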
This is great but it has one flaw — the images must be much larger than the hidden text. To receive a significant amount of text, you must download tons of images, which may look suspicious. If your goal is to make a truly hidden path through something like the great firewall of China, not only will it look odd, but you may not have the bandwidth.
Spammers, bless their hearts (how often do you hear that?) have been working hard to develop computer generated text that computers can’t readily tell isn’t real human written text. They do this to bypass the spam filters that are looking for patterns in spam. It’s an arms race.
Can we use these techniques and others, to win another arms race with the national firewalls? I would propose a proxy server which, given the right commands, fetches a desired censored page. It then “encrypts” the page with a cypher that’s a bit more like a code, substituting words for words rather than byte blocks for byte blocks, but doing so under control of a cypher key so only somebody with the key can read it.
Most importantly, the resulting document, while looking like gibberish to a human being, would be structured to look like a plausible innocuous web page to censorware. And while it is rumoured the Chinese have real human beings looking at the pages, even they can’t have enough to track every web fetch.
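The word-for-word cypher described above can be sketched as a keyed codebook: derive a permutation of a vocabulary from the shared key, so the output is still made of ordinary words even though it reads as gibberish. (The vocabulary and key names here are hypothetical stand-ins, and a plain codebook is weak against frequency analysis — this only illustrates the format-preserving idea, with the real security coming from stronger encryption underneath.)

```python
import hashlib
import random

# Tiny stand-in vocabulary; a real proxy would use a large dictionary
# so the scrambled page still looks like plausible English to a filter.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "home",
         "tree", "fish", "swam", "blue", "red", "fast", "slow"]

def keyed_table(key):
    """Derive a deterministic word->word permutation from the shared key."""
    rng = random.Random(hashlib.sha256(key.encode()).digest())
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return dict(zip(VOCAB, shuffled))

def encode(text, key):
    table = keyed_table(key)
    # words outside the vocabulary pass through unchanged in this sketch
    return " ".join(table.get(w, w) for w in text.split())

def decode(text, key):
    inverse = {v: k for k, v in keyed_table(key).items()}
    return " ".join(inverse.get(w, w) for w in text.split())

page = "the cat sat on a mat"
scrambled = encode(page, key="shared-secret")
assert decode(scrambled, key="shared-secret") == page
```

Because both ends derive the same table from the key, no codebook ever travels over the wire, and to the censorware the scrambled page is just another page of common words.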
A plan like this would require lots and lots and lots of free sites to install the special proxy, serving only those in censored countries. Ideally they would only be used on pages known to be blocked, something tools behind the censorware would be measuring and publishing hash tables about.
Of course, there is a risk that the censors would deliberately pretend to join the proxy network to catch people who are using it. And of course with live human beings they could discover use of the network so it would never be risk-free. On the other hand, if use of the proxies were placed in a popular plugin so that so many people used it as to make it impossible to effectively track or punish, it might win the day.
Indeed, one could even make the encrypted pages look like spam, which flows in great volumes in and out of places like China, stegoing the censored web pages in apparent spam!
(Obviously proxying in port 443 is better, but if that became very popular the censors might just limit 443 to a handful of sites that truly need it.)