Submitted by brad on Tue, 2006-03-21 00:32.
You may be familiar with steganography, the technique for hiding messages inside other messages so that a black-hat not only can’t read the message, but isn’t even aware it’s there at all. It’s arguably the most secure way to send secret data over an open channel. A classic form of “stego” involves encrypting a message and then hiding it in the low-order “noise” bits of a digital photograph. An observer can’t tell the hidden bits from real noise. Only somebody with the key can extract the actual message.
This is great, but it has one flaw — the images must be much larger than the hidden text. To receive a significant amount of text, you must download tons of images, which may look suspicious. If your goal is to make a truly hidden path through something like the great firewall of China, not only will it look odd, but you may not have the bandwidth.
Spammers, bless their hearts (how often do you hear that?), have been working hard to develop computer-generated text that computers can’t readily tell apart from real human-written text. They do this to bypass the spam filters that are looking for patterns in spam. It’s an arms race.
Can we use these techniques, and others, to win another arms race, this one with the national firewalls? I would propose a proxy server which, given the right commands, fetches a desired censored page. It then “encrypts” the page with a cypher that’s a bit more like a code, substituting words for words rather than byte blocks for byte blocks, but doing so under control of a cypher key so only somebody with the key can read it.
Most importantly, the resulting document, while looking like gibberish to a human being, would be structured to look like a plausible innocuous web page to censorware. And while it is rumoured the Chinese have real human beings looking at the pages, even they can’t have enough to track every web fetch.
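To make the idea concrete, here is a toy sketch of such a word-for-word cypher in Python. Everything here is hypothetical: a real system would need proper key derivation and far more care with word frequencies to fool censorware.

```python
import random

def build_codebook(key, wordlist):
    # Derive a keyed permutation of the wordlist: each word maps to
    # another word, a bijection only the key holder can invert.
    rng = random.Random(key)  # toy keying; a real system would use a proper KDF
    shuffled = wordlist[:]
    rng.shuffle(shuffled)
    return dict(zip(wordlist, shuffled))

def encode(text, book):
    return " ".join(book.get(w, w) for w in text.split())

def decode(text, book):
    inverse = {v: k for k, v in book.items()}
    return " ".join(inverse.get(w, w) for w in text.split())

words = ["the", "firewall", "blocks", "this", "page", "cat", "sat", "mat"]
book = build_codebook("shared-secret", words)
scrambled = encode("the firewall blocks this page", book)
recovered = decode(scrambled, book)
```

Because each word maps to another real word, the output is still word-shaped text, which is exactly the property that lets the outer layer be dressed up to look like an innocuous page.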
A plan like this would require lots and lots and lots of free sites to install the special proxy, serving only those in censored countries. Ideally they would only be used on pages known to be blocked, something tools behind the censorware would be measuring and publishing hash tables about.
Of course, there is a risk that the censors would deliberately pretend to join the proxy network to catch people who are using it. And of course with live human beings they could discover use of the network so it would never be risk-free. On the other hand, if use of the proxies were placed in a popular plugin so that so many people used it as to make it impossible to effectively track or punish, it might win the day.
Indeed, one could even make the encrypted pages look like spam, which flows in great volumes in and out of places like China, stegoing the censored web pages in apparent spam!
(Obviously proxying in port 443 is better, but if that became very popular the censors might just limit 443 to a handful of sites that truly need it.)
Submitted by brad on Mon, 2006-03-06 18:06.
Looking at printed wedding gift ribbon some time ago, Kathryn thought it would be amusing to put the 4th amendment on the ribbon, and tie it around our suitcases.
That turned out to be hard to make, but I did make a design for shipping tape which you can see below. The printed shipping tape has the text set at a slant, so that as the pattern repeats, the 4th amendment appears both as a long continuous string and as a block.
You can put this shipping tape on your packages and your airplane luggage. Every time I fly, my luggage gets a card in it telling me how “for my protection” they have searched it.
Now, when they open my luggage, they will have to literally slice the 4th amendment in half in order to do this.
Too bad we can’t wrap it around our phone wires, but at least the EFF is suing AT&T to stop the NSA wiretaps.
We ordered several cases of this tape for the EFF. You can get it as a gift if you join the EFF or buy it directly from the EFF Store. There is a fat markup of course, which goes to protecting your civil rights. Buy some for your own shipping tape gun, or give the gift of privacy rights to a friend.
And yeah, I know it probably won’t stop them from searching. But if, like John Perry Barlow on his way back from Burning Man, I have to go to court over it, it will be nice to tell the judge that they cut the 4th amendment up to search my bags.
(Minor note: The printer could not always get the repetitions to line up perfectly, so sometimes there’s a vertical gap.)
Submitted by brad on Wed, 2006-02-01 03:03.
Tom Selleck narrates:
Have you ever arranged a wiretap in Las Vegas without leaving your office in
Or listened in on a mother tucking in her baby from a phone booth, all without the bother of a warrant?
Or data mined the call records of millions of Americans with no oversight?
And the company that will bring it to you… AT&T
Submitted by brad on Tue, 2006-01-31 16:32.
A big announcement today from those of us at the EFF regarding the NSA illegal wiretap scandal. We have filed a class-action lawsuit against AT&T because we have reason to believe they have provided the NSA, and possibly other agencies, with access to not only their lines but also their “Daytona” database, which contains the call and internet records of AT&T customers, and probably the customers of other carriers who outsource database services to Daytona.
AT&T, we allege, gave access to this database when it should have told the federal agents to come back with a warrant. These are the communications records of not just people phoning Al-Qaida. They are the records of millions of ordinary Americans.
Allowing access to these records without a warrant is both a violation of the law and a violation of their duties to protect the privacy of their customers. Worse, we believe AT&T may still be doing it.
We’re asking the court to make AT&T stop giving the NSA or others access without proper warrants, and to exact penalties for having done so. The potential penalties are very, very large. We want to send a message to carriers and operators like AT&T that they have a duty to follow the law and protect their customers.
You can read more at our AT&T wiretap lawsuit page.
Submitted by brad on Mon, 2006-01-30 23:05.
Last week I spoke at O’Reilly’s Emerging Telephony (ETEL) conference about CALEA and other telecom regulations that are coming to VoIP. CALEA is a law requiring telecom equipment to have digital wiretap hooks, so police (with a warrant, in theory) can come and request a user’s audio streams. It’s their attempt to bring alligator clips into the digital world.
Recently the FCC issued notice that they would apply CALEA to interconnected VoIP providers and broadband providers. They don’t have that power, and the EFF and several other groups filed suit last week to block this order.
In my talk, however, I decided to turn the tables. My “evil twin” gave a talk addressed to incumbent carriers (the Bells, etc.) and big equipment vendors as to why they should love CALEA, Universal Service and the E911 regulations.
A podcaster recorded it and here’s the blue box security podcast with that recording or you can go directly to the mp3 of my talk. I start 3 minutes into the recording, and it’s a 15 minute session. It was well received, at least based on the bloggers who covered it. You may not hear the audience laughter too well, but they got it, and came to understand just how bad these laws can be for the small innovator moving in on the incumbent’s cash cows.
Indeed, I like the “evil twin” so much that he’ll be back, and I’ll try to write up my talk as text some day if I get the time. When bad things happen, it’s useful to understand why some people might push for them.
A more muffled version including audience can be found via Skype Journal.
Submitted by brad on Sat, 2006-01-28 13:09.
With too many people defending the new levels of surveillance, I thought I would introduce a new word: Panoptopia — a world made wonderful by having so much surveillance that we can catch all the bad guys.
David Brin introduced the concept to many in The Transparent Society, though he doesn’t claim it’s a utopia, just better than the alternative as he sees it.
It used to be that “If you are innocent you have nothing to hide” was supposed to be a statement whose irony was obvious to all. Today, I see people saying it seriously.
Because of that, we’re on our way to building the pushbutton panopticon. We’re building the apparatus of very high levels of surveillance and pretending we are putting checks and balances on their use. Cameras everywhere. NSA taps into all international communications. Total Information Awareness and other large data mining projects. Vast amounts of our private records stored on 3rd party servers of search engines and email companies where we have fewer rights and even less control. CALEA requirements that phone equipment and broadband lines have pre-built wiretapping facilities, in theory to be turned on only with a warrant.
In all these cases we are told the information won’t be abused, that process will be followed. And in most cases, I can even believe them.
But the problem is this. Now our rights are protected not by physical limits or extreme costs, but by a policy decision. To the extreme, by a simple policy bit, a single switch. Now to change the society from a free one to a police state can become effectively just throwing a switch if you have the political will.
In the old days, creating a police state required taking over the radio stations with tanks, and putting police on all the street corners. We are building a world where it involves getting the political will to throw a switch. And we’re selling that switch to all the countries of the world as they buy our technology.
Can you wonder why I fear this doesn’t end well?
Submitted by brad on Mon, 2006-01-23 20:18.
We’re always coming up with new technologies that affect privacy and surveillance. We’ve seen court cases over infrared heat detectors seeing people move inside a house. We’ve seen parabolic microphones and lasers that can measure the vibration of the windows from the sound in a room. We’ve seen massive computers that can scan a billion emails in a short time, and estimates of speech recognition tools that can listen to millions of phone calls.
Today we’re seeing massive amounts of outsourced computing. People are doing their web searching, E-mails and more using the servers of third party companies, like Google, Yahoo and Microsoft.
Each new technology makes us wonder how it can or should be used. The courts have set a standard of a “reasonable expectation of privacy” to decide if the 4th amendment applies. You don’t have it walking down the street. You do have it in your house. You don’t have it on records you hand over to 3rd parties to keep, or generate with those 3rd parties in the first place.
But I fear that as the pace of change accelerates, we’ve picked the wrong default. Right now, the spooks and police feel their job is to see how close to the 4th amendment and statutory lines they can slice. Each new technology is seen as an opportunity for more surveillance ability, in many cases a way to get information that could not be gotten before either due to scalability, or the rules. Right now, when technology changes the rules, most of the time the result is to lessen privacy. Only very rarely, and with deliberate effort (i.e. the default encryption in Skype) are we getting the more desirable converse. Indeed, when it looks like we might get more privacy, various forces try to fight it, with things like the encryption export controls, and the clipper chip, and mandatory records retention rules in Europe.
I think we need a different default. I think we need to start saying, “When a new technology changes the privacy equation, let’s start by assuming it should make things more protected, until we’ve had a chance to sit down and look at it.”
Today, the new tech comes along, privacy gets invaded, and then society finally looks at the technology and decides to write the rules to set the privacy balance. Sometimes that comes from legislatures (for example the ECPA) and more often from courts. These new rules will say to the spooks and LEOs, “Hold on a minute, don’t go hog wild with this technology.”
We must reverse this. Let the new technologies come, and let them not be a way to perform new surveillance. Instead, let the watchers come to the people, or the courts and say, “Wow, we could really do our jobs a lot better if we could only look through walls, or scan all the e-mails, or data mine the web searches.” Then let the legislatures and the courts answer that request.
Sometimes they will say, “But our new spy-tech is classified. We can’t ask for permission to use it in public.” My reaction is that this is tough luck, but at the very least there should be a review process in the classified world to follow the same principles. Perhaps you can’t tell the public your satellites can watch them in their backyards, but you should not be able to do so until at least a secret court or legislative committee, charged with protecting the rights of the public, says you can do so.
If we don’t set such a rule, then forever we will be spied upon by technologies society has not yet come to grips with — because the spooks of course already have.
Submitted by brad on Thu, 2006-01-19 21:52.
Google is currently fighting a subpoena from the DoJ for their search logs. The DoJ experts in the COPA online porn case want to mine Google’s logs, not for anybody’s data in particular, but because they are such a great repository of statistics on internet activity. Google is fighting hard as they should. Apparently several Google competitors caved in.
These logs are a treasure trove of information, just as the DoJ experts say they are. No wonder they want them. They are particularly valuable to Google, of course, so much so that they have resisted all calls to wipe them or anonymize them. In fact, Google has built a fancy system with its own custom computer language to do massively parallel computing to let it gather statistics from this giant pool of data.
The DoJ and the companies that didn’t fight the order insist there is no personally identifiable information in these logs, but that’s certainly not true of the source logs. Even if you remove the Google account cookie that is now sent with most people’s queries, the IP address is recorded. I have a static IP address myself on my DSL. It’s always the same, and so it would be easy to extract all my searches, which include some pretty confidential stuff, things like me entering the names of medicines I have been prescribed. (It even includes me searching for “Kiddie Porn” because I wanted to see if any adwords would be presented on such a search. There were none, in case you are wondering.) Yahoo and MSN state that the IP address and other information were stripped from what they handed over.
Static IPs are the norm for corporations and more savvy internet users, but while most DSL and cable users have a dynamic IP, it isn’t really very dynamic. If you have a home gateway box or computer that is on all the time, it changes very infrequently, in some cases, never. All your activity can be linked back to you through that address. Only dial-up users can expect any anonymity from their dynamic IP, and even then ISPs keep logs for some period of time which connect dynamic IPs and accounts.
But there is something far more frightening about this collection of data. I hope Google wins its fight over this data, because the DoJ really has no business forcing a private company to help them with their statistics problems.
But what about when a subpoena comes about an individual? Imagine you are under investigation for something, or just in a frivolous lawsuit or even a messy divorce. You can bet lawyers are going to want to say, for those with mostly-static IPs, “I want the search records for this IP, or this cookie.” And it’s going to be a lot harder for search engines to turn down those requests, because they will be specific and will relate to the data the search companies are holding on all of us.
One way to hold the lawyers back will be to make it expensive. But how long will it remain expensive? After a few requests, the software to pull the records will exist, and it will not be possible to claim it’s more expensive than the data mining Google already does for itself, to improve its own business.
Now, before it seems like I am ragging on Google here, let’s not forget that Google’s competition — AOL, Yahoo and MSN — hasn’t even been so good as to fight this first salvo. Yahoo has a whole department to comply with legal requests for their records, and famously handed over the ID of a journalist who sent an E-mail that has landed him in a Chinese jail. When it comes to intent, Google has indeed been the “do the least evil” company here.
But with court orders, intent matters not. This pool of data is an “attractive nuisance.” In the end, I think Google will realize it has to start anonymizing this data to the point that it can respond to requests with “we don’t have that information.” Doing so will erase information that can be valuable to Google’s business. It will come at a cost to them. Worse, the cost can’t be predicted because they will lose the ability to learn new things they haven’t even realized they want to learn about how people use their tools. But in the end, it’s the only choice, both to keep their subpoena costs down, and to make users comfortable with searching.
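To sketch what anonymizing these logs might look like (my illustration, not anything Google has described), the logs could store a keyed, truncated hash of the IP instead of the address itself, with the key destroyed at the end of each period:

```python
import hmac
import hashlib

SECRET_KEY = b"rotate-me-then-destroy-me"  # hypothetical per-period key

def anonymize_ip(ip):
    # Keyed, truncated hash: the same IP yields the same token within a
    # key period, so aggregate statistics still work, but once the key
    # is destroyed there is no way back to the raw address.
    return hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()[:16]

raw = {"ip": "203.0.113.7", "query": "name of a prescribed medicine"}
scrubbed = {"ip": anonymize_ip(raw["ip"]), "query": raw["query"]}
```

With a scheme like this, the honest answer to a subpoena for "everything this IP searched for" becomes "we don't have that information."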
Perhaps these logs were handed over without IPs or user names. But what if somebody browses them and sees queries on things like kiddie porn or white house security or how to build a nuclear bomb? Could that be sufficient cause for a further order to get the identifying information associated with that query?
In the meantime, if you feel motivated to foolishly search for things that could be misinterpreted, as I did, may I recommend you do so through Tor, the anonymizing proxy. (The EFF provided significant financial support to the development of Tor.) Tor bounces your web requests through a series of randomly chosen servers, all encrypted, so nobody can trace back your requests to you. Be sure not to login when using it, though!
Submitted by brad on Wed, 2006-01-18 18:28.
How often does it happen? There’s an important idea or action which is controversial. The bravest come out in support of it early, but others are wary. Will support for this idea hurt them in other circles? Is the idea against the “party line” of some group they belong to, even though a sizeable number of the group actually support it? How can you tell?
What the world needs is a way that people can register their support for something anonymously and learn how many other members of their group also secretly support it — but not who. However, once the support reaches a certain threshold, their support would become public. And not just public, but an actual binding commitment to the support.
For example, Republicans may oppose the war, or the wiretapping, but are afraid to say so, even among their closer associates. What if really a lot of people feel that way, but nobody speaks up?
Now, obviously, you can do this with a trusted web site where people register and then can vote on issues. But you have to really, really trust the web site, because some of the positions such a system is designed to record are ones that could get you branded a traitor to the group. For issues like war, no web site could be trusted.
So can it be done cryptographically? Is there a way to do this in a public space? I think that with the use of things like Chaum’s blinding algorithms and fragmented keys (so that a secret message can be decoded in the presence of N of M key fragments, but no fewer than N), it would be possible to create a club, give everybody fragments of everybody else’s key for a given message, and thus arrange that only after at least N votes of support arrive can everybody decrypt the identities of the supporters. But it’s a bit messy, and might require generating new keys for every question, plus various other complex logistics.
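The N-of-M fragment idea is essentially Shamir secret sharing. Here is a toy version over a prime field to show the mechanics (illustrative only, not production cryptography):

```python
import random

P = 2**61 - 1  # a prime; all polynomial arithmetic is done mod P

def split_secret(secret, n_needed, m_total):
    # Random polynomial of degree n_needed - 1 whose constant term is
    # the secret; each fragment is one point on the curve.
    coeffs = [secret] + [random.randrange(P) for _ in range(n_needed - 1)]
    shares = []
    for x in range(1, m_total + 1):
        y = 0
        for c in reversed(coeffs):  # Horner's rule, mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover_secret(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split_secret(123456789, 3, 5)  # any 3 of the 5 fragments suffice
```

With fewer than N points, the polynomial is completely undetermined, so no coalition smaller than the threshold learns anything about who the supporters are.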
There is a particular danger as well. Opponents of a proposition might well pretend to be supporters, in order to bump the support number above the threshold and reveal who the “traitors” are. The opponents would make sure to record that their support was fake in some notarized location so they can renounce it when the names are revealed.
As such, in a governing body, it would be necessary to make the measures of support non-repudiable, which is to say they would be binding votes.
Say you wanted to have a vote to legalize gay marriage. There might be lawmakers who would support it, but could not do so publicly while it’s likely to lose. However, once it is assured to pass, they would accept making their support public — as is necessary in an open legislature. People would see the tally go up, and once it hit a majority the vote would pass. This stops people from pretending to support something just to unmask the real supporters.
Of course none of this prevents regular open support or opposition on things. Would the temporary secrecy cause risks due to some temporarily reduced transparency? And of course on failed propositions, the transparency would be permanent. (Or perhaps permanent until the person leaves office or dies or whatever.) Would it be good or bad that we knew that 30% of the house would vote to ban abortion if they could win, without knowing who they were?
Submitted by brad on Sun, 2006-01-01 22:05.
One particularly interesting argument seen in the Underwatergate scandal is the one that the NYT, by revealing the existence of warrantless wiretaps on international communications lines, compromised national security.
Reporters asked how that can be. After all, surely the bad guys knew the U.S. had the ability to perform surveillance on them, and has a secret intelligence court, and was presumably getting lots of secret warrants to watch them, and was furthermore watching them overseas without being subject to the 4th amendment.
The White House response was effectively, "Well, we're catching some of them with this program. So obviously in spite of the fact that they should know we are listening, they forget, and we learn things." In other words, the bad guys are sometimes stupid, and by bringing a lot of publicity on the surveillance (legal or illegal) we're reminding them not to be stupid.
I've seen this issue talked about before. Many members of the mafia have been caught with wiretaps, saying things on phones that you think they would know are probably tapped. This argument is used to counter the claim that since encrypted communications are readily available (such as in Skype), the smart criminals will not get caught with wiretaps.
Furthermore, in this case, while the White House revealed only minimal details of the program, security experts in blogs and other media around the world engaged in all sorts of informed speculation about what's really going on. While the NYT didn't reveal any technical details, some kernels of that discussion almost surely do.
I'm willing to accept that even the smart criminals make mistakes, and get caught this way, and this will continue. So indeed, heavy publicity around the surveillance techniques and issues probably does, as they claim, instruct or remind some bad guys not to use certain communications that could put them at risk for being caught.
The harder question is this: Does that imply we must keep silent on these issues? I think the answer is clearly no. The standard the spooks and White House suggest is untenable, and there is no clear way to draw the line. Because if we use the stupidity of criminals as a standard, then it's hard to see what public discourse might not be considered potentially harmful to the exploitation of the criminal's mistakes. Yes, it's clear to see that a massive public debate with constant articles in all major media is more likely to remind a bad guy to watch what he says on the phone, more than a single blog posting would. But this is a difference of degree, not of kind.
In the end, it's a security through obscurity argument of a particularly high order. Not only must we not let the bad guys know that we can wiretap, we must not remind them after it is presumed they already know. It's hard to imagine a rule against this that would not chill speech at an extreme level.
Submitted by brad on Wed, 2005-12-21 16:08.
A lot of new developments in the warrantless wiretap scandal. A FISA judge has resigned in disgust. A Reagan-appointed former DoJ official calls the President a clear and present danger. And the NSA admits they have on rare occasions tapped entirely domestic phone calls, because sometimes people calling to or from international cell phones while those phones are in the USA would see the traffic go overseas and come back again. I have made such calls to Europeans and Australians visiting the USA.
So they can’t spot those calls as domestic and thus are performing surveillance on them. But what about E-mail? With E-mail, it’s a great deal harder to identify where the parties are, and what citizenship they hold. In some cases, almost impossible.
And more to the point, E-mails between two U.S. persons will quite often go through international servers. Unlike phones, where it’s expensive, anybody who travels outside the USA for long enough to warrant an E-mail address out there can easily keep it and many do. There’s not even a big reason for multinational ISPs to avoid routing messages to servers in Canada or other places. I maintain aliases on my own domain for all my family, for example, though most of them are not in the same country as the server. I am not alone.
Further, it’s likely that the order of surveillance they have done on E-mail is vastly greater than on phones. For the NSA, monitoring of all unencrypted E-mail — all of it — would be only a modest amount of work. We used to joke in the old days about putting NSA traps in our messages, see this thread from 21 years ago on the topic, and many others if you search for it. If enough people put those in messages, it would overload the systems, we mused.
Back then we were mostly kidding around. Today we have reason to be scared. And it’s time to put opportunistic crypto into E-mail as I detailed years ago, by default. (Since then, some projects to do this have popped up — one from Simson Garfinkel and another from PGP. MS Outlook also does it, but with an untenable user interface.)
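The essence of opportunistic crypto is to encrypt whenever the other side can, without asking the user. Here is a minimal sketch of that policy, written against an object that behaves like Python's smtplib.SMTP (the helper function and any host names are my inventions):

```python
import smtplib

def send_opportunistically(server, sender, rcpt, message):
    # Opportunistic crypto: take encryption whenever the other side
    # offers it, fall back to cleartext rather than refusing to deliver,
    # and never ask the user to do anything. (A sketch; a careful
    # deployment would also verify certificates when it can.)
    server.ehlo()
    if server.has_extn("starttls"):
        server.starttls()
        server.ehlo()  # re-identify over the now-encrypted channel
        encrypted = True
    else:
        encrypted = False
    server.sendmail(sender, rcpt, message)
    return encrypted

# Real usage would pass smtplib.SMTP("mail.example.com") (hypothetical host).
```

The point of the design is that mail always flows; encryption happens silently whenever both ends support it, which is exactly the default the mass of users will actually live with.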
Submitted by brad on Wed, 2005-12-21 00:30.
Seeing as this scandal seems to be revolving around the tapping, without warrants, of signals over the undersea telecom cables, I propose we call it Underwatergate.
Submitted by brad on Tue, 2005-12-20 13:30.
It’s long, but I can strongly recommend the transcript of today’s press briefing on the NSA warrantless wiretaps. It’s rare to see the NSA speak about this topic.
One can read a fair bit between the lines. The reporters were really on the ball here, far more than one usually sees.
Particularly interesting notes include:
- General Hayden of the NSA describes many reasons why they don’t use the FISA court, citing mostly “efficiency”
- Reporters ask if they are listening for the word “bomb” — The AG says there is no blanket surveillance
- The general states that the “physics” of the intercepts require one end be outside the USA
Independently, Senator Rockefeller’s letter, in which he wrote that he felt he needed “technical” advice to understand the issues, and that it reminded him of Poindexter’s TIA, is very telling.
The efficiency claim is a smokescreen. They would not have taken this level of legal risk, no matter how much they feel what they did was legal, just to gain a little efficiency. It’s clear to me that they are telling the truth when they say they could not use the FISA court — they are performing surveillance that the FISA court would not authorize for them.
The question is, what? The AG says it is not “blanket” but clearly there is some fancy computerized surveillance going on here, something secret, beyond Carnivore. I can readily believe that all sorts of fancy broad surveillance could take place and not be considered “blanket” by the AG. (The AG actually says, “The President has not authorized blanket surveillance of communications here in the United states.”) I certainly hope he has not authorized that. But has he authorized it on all communications coming in and out of the USA?
Or something less, like computer search of all E-mails or phone calls to or from entire towns or nations? Perhaps speaker recognition to look for certain people’s voices on all international calls, no matter what number they use? Perhaps looking for all arabic calls, and then doing blanket surveillance on them?
So much is possible, and all of this would not be authorized by the FISA court.
They knew they would get in legal trouble, so it’s also possible the intercepts, which the General says are on the international cables, are even placed outside the USA, either with or without the permission of foreign governments. (In extremes, they send submarines down to make taps.) Taps outside the USA are not under the rules of the wiretap act, though the 4th amendment still applies to US persons.
Spooky stuff. More to come.
P.S. If you have not been following it, it has now come out that the New York Times sat on this story for over a year, since before the 2004 election, whose outcome might have changed based on this news.
Submitted by brad on Fri, 2005-12-02 15:45.
This is an idea from several years ago I’ve never written up fully, but it’s one of my favourites.
We’ve seen lots of pushes for online identity management — Microsoft Passport, Liberty Alliance and more. But what I want is for the online world to help me manage my physical identity. That’s much more valuable.
I propose a service I call “addrescrow” which holds and protects your physical address. It will give that address to any delivery company you specify when they have something to deliver, but has limits on how else it will give away info from you. It can also play a role in billing and online identity.
You would get one or more special ID names you could use in place of your address (and perhaps your name and everything else) when ordering stuff or otherwise giving an address. If my ID was “Brad Ideas” then somebody would be able to send a letter, fedex or UPS to me addressed simply to “Brad Ideas” and it would get to me, wherever I was.
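A toy sketch of how the core of such a service might behave. The class, its API and the addresses are all hypothetical, since addrescrow does not exist:

```python
class Addrescrow:
    # Toy model of the proposed service: it holds the real address and
    # releases it only to delivery companies the owner has authorized.
    def __init__(self):
        self._records = {}  # alias -> (address, set of allowed couriers)

    def register(self, alias, address):
        self._records[alias] = (address, set())

    def authorize(self, alias, courier):
        self._records[alias][1].add(courier)

    def resolve(self, alias, courier):
        # Only an authorized courier, with something to deliver, ever
        # sees the physical address behind the alias.
        address, allowed = self._records[alias]
        if courier not in allowed:
            raise PermissionError("courier not authorized by the owner")
        return address

escrow = Addrescrow()
escrow.register("Brad Ideas", "123 Example St, San Francisco")
escrow.authorize("Brad Ideas", "FedEx")
```

The interesting design choice is that the sender never learns the address at all; the mapping is resolved only at delivery time, so moving means updating one record rather than notifying everyone you deal with.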
Submitted by brad on Wed, 2005-11-16 23:45.
I don’t post most EFF news here, since the EFF has a news page and 2 blogs for that, but today I’m doing it twice because Congress is voting tomorrow on renewal of the PATRIOT act. There was a lot of effort to reduce the bad stuff in the bill, efforts that seemed to be getting somewhere but were ignored.
Ok, do I have to tell you why this erosion of so many fundamental rights is a bad idea? At first, I thought the PATRIOT act only came about because in the weeks after Sept 11, the country was acting in anger and shock. It did things it wouldn’t do with time to be calm and reasoned about it. And the PATRIOT act has resulted in huge waves of new surveillance, as we’ve been seeing in the past few years.
So do what you can to stop it. Our Action Center will help you contact your representatives to give them the message. Plus you can read the bill and commentary on it on the EFF web site.
We’ve been saying these things for so long you may be getting tired of it. But every time we strip away rights and make society a little bit more scared — each time we live in fear — I think that’s exactly what terrorists want. Like the name says, their goal is to sow fear and terror in hope of getting their way. Sure, people were angry when this law was first passed, but there is no excuse today. Take action yourself. Donate to organizations like the EFF and others if you don’t have time to take all the action you think you should, but do have the money. It’s as simple as that.
Submitted by brad on Tue, 2005-08-23 22:51.
A mantra in the security community, at least among some, has been that crypto that isn’t really strong is worse than having no crypto at all. The feeling is that a false sense of security can be worse than having no security, as long as you know you have none. The bad examples include, of course, truly weak systems (like 40-bit SSL and even DES), systems that appear strong but have not been independently verified, and perhaps the greatest villain, “security through obscurity,” where the details of the security are kept secret — and thus unverified by 3rd parties — in the hope that this might make them safer from attack.
On the surface, all of these arguments are valid. From a cryptographer’s standpoint, since we know how to design good cryptography, why would we use anything less?
However, the problem is more complex than that, for it is not simply a problem of cryptography, but of business models, user interface and deployment. I fear that the attitude of “do it perfectly or not at all” has left the public with “not at all” far more than it should have.
An interesting illustration of the conflict is Skype. Skype encrypts all its calls as a matter of course. The user is unaware it’s even happening, and does nothing to turn it on. It just works. However, Skype is proprietary. They have not allowed independent parties to study the quality of their encryption. They advertise they use AES-256, which is a well trusted cypher, but they haven’t let people see if they’ve made mistakes in how they set it up.
This has caused criticism from the security community. And again, there is nothing wrong with the criticism in an academic sense. It certainly would be better if Skype laid bare their protocol and let people verify it. You could trust it more.
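To see why verification matters even with a trusted cypher like AES-256, consider the classic setup mistake of ECB mode: each block is encrypted independently, so identical plaintext blocks produce identical ciphertext blocks. The toy sketch below uses a hash-based stand-in for a 16-byte block cipher (NOT real cryptography) purely to show the structural leak an audit would catch:

```python
import hashlib

def toy_block_encrypt(key, block):
    # Stand-in for a strong 16-byte block cipher (NOT real crypto):
    # a deterministic keyed mapping built from SHA-256.
    return hashlib.sha256(key + block).digest()[:16]

def encrypt_ecb(key, plaintext):
    # ECB mode: each block encrypted independently -- the classic mistake.
    blocks = [plaintext[i:i + 16] for i in range(0, len(plaintext), 16)]
    return b"".join(toy_block_encrypt(key, b) for b in blocks)

key = b"secret key"
msg = b"ATTACK AT DAWN!!" * 2   # two identical 16-byte blocks
ct = encrypt_ecb(key, msg)

# Identical plaintext blocks give identical ciphertext blocks, leaking
# structure even though the underlying block primitive may be strong.
print(ct[:16] == ct[16:32])  # True
```

The point is not that Skype made this particular mistake — nobody outside can tell — but that mistakes of exactly this kind are invisible from a marketing claim like “we use AES-256” and visible immediately to an independent reviewer.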
Submitted by brad on Thu, 2005-05-12 05:50.
There have been many efforts at internet "identity" systems, such as Microsoft Passport, Liberty Alliance, and a variety of others. A recent conference on the subject was held in SF; I didn't go, but I thought it was time to put forward one important idea.
Also, sometimes something goes into a server because business rules demand it. You can only make money from it as a service you sell, so you build it that way.
Submitted by brad on Tue, 2005-04-19 14:05.
During the 1990s, the US Government made a major effort to block the deployment of encryption by banning its export. We won that fight, but during the formative years of most internet protocols, they made it hard to add good authentication and privacy to internet tools. They forced vendors to jump through hoops, made users download special "encryption packs" and made encryption the exception rather than the norm in online work.
This, combined with bad design decisions made even without the help of the government, has caused some of the security holes that are bugging people today.
A recent issue is DNS poisoning, now going by the name of “pharming.” The scammers send fake DNS answers in advance to buggy DNS servers running on MS Windows Service Pack 2 or earlier, or very old *nix copies of BIND. They tell the server that www.yourbank.com should really go to their address, which hosts a fake version of the site.
Now of course we should have made DNS reliable and secure to stop this, or at least done the very basic things found in the most up to date DNS servers, but even so, this attack should not have been enough.
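One of those very basic things is refusing any answer that doesn't match the random transaction ID of an outstanding query. A toy sketch (hypothetical, nothing like a full DNS implementation) of the check that buggy servers skip:

```python
import secrets

class ToyResolver:
    """Minimal sketch of why DNS transaction IDs matter.
    Hypothetical; a real resolver also randomizes source ports,
    validates the answering server's address, etc."""
    def __init__(self):
        self.pending = {}  # query name -> random 16-bit transaction ID

    def send_query(self, name):
        txid = secrets.randbelow(1 << 16)
        self.pending[name] = txid
        return txid

    def accept_answer(self, name, txid, address):
        # A buggy server that skips this check caches whatever answer
        # arrives first -- the poisoning attack described above.
        if self.pending.get(name) != txid:
            return None          # forged or stale answer: reject
        del self.pending[name]
        return address           # matching answer: safe to cache

r = ToyResolver()
real_id = r.send_query("www.yourbank.com")
forged_id = (real_id + 1) % (1 << 16)   # attacker guesses wrong

print(r.accept_answer("www.yourbank.com", forged_id, "6.6.6.6"))    # None
print(r.accept_answer("www.yourbank.com", real_id, "203.0.113.7"))  # 203.0.113.7
```

With only a 16-bit ID an attacker can still win by flooding guesses, which is why real resolvers layer on source-port randomization as well — but servers that skip even this check fall to a single forged packet.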
That's because SSL certificates were supposed to assure that you were really talking to yourbank.com when the browser said you were, even if somebody hijacked the connection like this. And they do. The phisher can't pretend to be yourbank.com while the little "lock" icon in the status bar of your browser shows locked. But they can pretend when the icon shows unlocked.
And surprise, surprise, people forget to look at the icon. A lot. They turn off the warnings about transitions to insecure pages because they go off all the time, and nobody pays attention to an alarm that's always going off. Encryption and SSL are rare, special things limited to login screens. We tolerate all the rest of life being unencrypted and in the clear -- and vulnerable, just like the USDoJ wanted it.
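Modern TLS libraries have since taken this check out of the user's hands: certificate and hostname verification are on by default, and a mismatch is a hard failure rather than an icon to eyeball. A short sketch using Python's standard ssl module (the bank hostname is made up):

```python
import ssl

# The default client context refuses to proceed unless the certificate
# both validates against trusted CAs and matches the hostname -- the
# check users were supposed to perform by looking at the lock icon.
context = ssl.create_default_context()
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True

# A connection would then be wrapped like this (no network used here;
# "yourbank.example" is a made-up hostname):
#
# with socket.create_connection(("yourbank.example", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="yourbank.example") as tls:
#         ...  # a mismatched cert raises ssl.SSLCertVerificationError
```

The design lesson matches the post: an always-on hard failure works; an optional indicator that humans must remember to inspect does not.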
Submitted by brad on Fri, 2005-01-28 10:31.
You may have run into the story of a fireman charged with burning down his own home. They charged him because his Safeway Club card records showed he had purchased the type of firestarter that was used in the arson on his house.
Sounds like a good case? The problem is, somebody else later confessed to the arson. The fireman is now a free man.
People often wonder why privacy advocates get up in arms about things like the Safeway database. I mean, how can it harm you, especially if you're not doing anything suspicious?
The problem is that police are attracted to the evidence that is easy to find. But when databases become more and more comprehensive, the chance that they will contain something interesting grows.
In an old-time investigation, finding receipts for the firestarters would be a major clue, and might convict somebody. That's because searches of what you bought weren't so easy. If you bought the very tool used in the crime, and it was prominent enough that they found it, it looked bad for you.
But the cops aren't aware they are falling into one of the traps of bad science. When you have a lot of data, you can always find something that matches what you are looking for. When you find it, your intuition tells you "this is too strange to be a coincidence." But in fact the math tells us it isn't strange at all. That's why you must never start with the conclusion and dig around in a big pool of data looking for evidence of your conclusion. Good scientists have known not to do this for years. Cops haven't.
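A back-of-envelope calculation shows why. Suppose (made-up numbers for illustration) that 1 in 1,000 cardholders happened to buy that brand of firestarter, and the database covers 100,000 people:

```python
# The "too strange to be coincidence" trap, quantified: with enough
# records, at least one match is a near certainty. Numbers are made up.
p = 0.001       # fraction of cardholders who bought that firestarter
n = 100_000     # cardholders in the database

prob_at_least_one = 1 - (1 - p) ** n
print(round(prob_at_least_one, 6))  # 1.0 -- someone WILL match
```

So finding *a* purchaser in the database proves nothing by itself; about a hundred innocent people are expected to match. Evidence gathered by searching everyone's records for a fact, rather than checking a suspect identified independently, carries almost no weight.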
Submitted by brad on Wed, 2004-12-08 08:51.
When I give an E-mail address to a web site, I give a different one to each site. I have many domains, including one where all addresses are forwarded to me unless I turn them off.
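One way to run such a scheme on a catch-all domain, sketched below with hypothetical names, is to tag each per-site address with a short keyed hash. You can then verify that an address really was issued by you, see which site leaked or sold it, and turn off just that address:

```python
import hashlib
import hmac

SECRET = b"my-private-key"   # hypothetical personal secret, kept offline
DOMAIN = "example.com"       # a catch-all domain you own (made up here)

def address_for(site):
    # Per-site address such as "somestore.1a2b3c4d@example.com".
    # The HMAC tag binds the site name to your secret key.
    tag = hmac.new(SECRET, site.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{site}.{tag}@{DOMAIN}"

def is_valid(addr):
    # Check that an incoming address carries a tag you actually issued.
    local, _, domain = addr.partition("@")
    site, _, tag = local.rpartition(".")
    expected = hmac.new(SECRET, site.encode(), hashlib.sha256).hexdigest()[:8]
    return domain == DOMAIN and hmac.compare_digest(tag, expected)

a = address_for("somestore")
print(a)
print(is_valid(a))                                  # True
print(is_valid("somestore.wrongtag@example.com"))   # False
```

A mail filter could bounce anything with an invalid tag, and a small blocklist of revoked site names handles addresses that start drawing spam.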