Submitted by brad on Mon, 2004-05-03 07:01.
I recently spoke to Gordon Bell about the MyLifeBits project he's doing at Microsoft Research, digitizing his entire life. I'm seeing more and more evidence that a prediction I made several years ago for "P-Day" may already have come true.
The prediction was this. We don't have the AI-level technology today to perform ubiquitous automatic surveillance of our society, and that's a good thing. However, we have developed the technology to start recording everything. The cameras are already in lots of places (with their number growing), and storage has become cheap enough to keep all those recordings forever, and eventually to put them online.
Today we can't do anything so bold as run facial recognition on all those images to track people. But that won't always be true. In the future we'll build such technology, thanks to Moore's law (see the prior post!).
But this technology will be able to do more than find people in the cameras of the future. Thanks to those recordings, it will be able to track people into the past. Audio and image archives will become records of people. Data trails impossible to correlate today will be correlated in the future. The raw material for complete computerized tracking of your life is being captured already; only the computing power to write it all down awaits.
P-Day is the day your privacy went away but you didn't yet know it. Thanks to other people digitizing their lives, it may have already happened to you. What touristed public space today is not constantly being camcordered or digitally photographed?
Walter Jon Williams recently explored this question in his Hugo-nominated story "The Green Leopard Plague." I've known him for many years, having published his story "Prayers on the Wind" in my 1993 Hugo & Nebula Anthology, and highly recommend his work.
Submitted by brad on Tue, 2004-04-27 09:43.
In thinking about the GMail encryption problem, I came to realize that for ordinary users, liable as they are to forget their passwords, it would not do to tell them after such an event that all their email archives are forever lost. This means some sort of key escrow -- not the nasty kind done with the Clipper chip, but a voluntary one.
I came up with a system I call Friendscrow. (I suspect others have also thought of the same thing.) This is a ZUI (Zero User Interface) system, at least for normal operation.
Your key would be broken up into some number of fragments, say 20. The fragments would be arranged so that getting any 10 of them recovers the key, but getting fewer gets you no closer.
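The "any 10 of 20, fewer get you nothing" property is exactly threshold secret sharing. Here is a minimal sketch of Shamir's 1979 scheme over a prime field (the field size, function names, and parameters are my own illustration, not anything from an actual Friendscrow design):

```python
import random

# A prime larger than any 128-bit key; all arithmetic is modulo this prime.
PRIME = 2**127 - 1

def split_secret(secret, n=20, k=10):
    """Split `secret` into n fragments such that any k recover it and
    k-1 reveal nothing: the fragments are points on a random polynomial
    of degree k-1 whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(fragments):
    """Lagrange-interpolate the polynomial at x=0 to get the secret back.
    Needs at least k fragments."""
    secret = 0
    for xi, yi in fragments:
        num, den = 1, 1
        for xj, _ in fragments:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
frags = split_secret(key)
assert recover_secret(frags[:10]) == key
assert recover_secret(random.sample(frags, 10)) == key
```

Each (x, f(x)) point would be one fragment, encrypted into a contact's files; in practice the shared secret would be a symmetric key that in turn encrypts the mail archive.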
The system would search your mail logs to find your 20 most frequent correspondents within the system. (It has to be a big and popular system for this to work; otherwise some UI is needed.) Most of these will be your friends; a few may be enemies. Techniques would be used to eliminate mailing lists, etc. If you want to add basic UI, you might scan and approve the list.
The key fragments are then distributed to the 20 close contacts. They will not know this has been done; the fragment will just be placed in their files, encrypted with their key.
If you lose your key (or when you die) you use your friends to get it back. You mail those you know to be your closest correspondents a special message. It says to them, "You may not know it, but you may have a fragment of my lost key. Go to the system and click on the link to help a friend recover a password."
The link explains that you should first confirm you are really talking to the friend through some other means than e-mail. Or confirm that they are dead. It will ask you to confirm they are not under duress. Then it will give you the fragment to hand over to the authorized person.
You should be able to find half the fragments, which would be enough to get back your key and read your archives again.
Submitted by brad on Tue, 2004-04-20 16:05.
Most people have heard about the various debates around Google's new GMail service. I wear many hats here, both as a friend and consultant to Google and as chairman of the EFF. There have been some tinfoil-hat flaps, but there are also genuine privacy concerns brought about by people moving their lives online and into the hands of even a well-meaning third party.
Check out the essay on privacy issues in GMail and webmail. I welcome your comments in the blog.
Submitted by brad on Mon, 2004-04-05 04:38.
Those of us who opposed TIA (Total Information Awareness) and other such programs were recently branded as "privacy nuts" for doing so. Hiawatha Bray wrote that it was stupid to quash this sort of research just because it might lead to abuse.
Nonetheless, it is important to understand that this is exactly the role of the privacy advocate.

Protecting privacy is one of the most difficult tasks in the civil-rights pantheon, for several reasons. One is that people are rarely concerned about privacy invasions until after they have taken place. The consequences of privacy invasion are often subtle as well, even after the fact. The simple fact that you know you are being watched alters your behaviour in subtle ways, causing self-censorship of all sorts of speech and activities. After all, who acts the same at home at dinner with their mother as they do out on their own at college, away from her eye for the first time?

Thus it is important not only that the government not engage in general surveillance; it must, like Caesar's wife, be _seen_ not to engage in such activity. Anything that gives the public grounds to fear they are under surveillance impinges on freedom, even if the watchers are well intentioned, well behaved, and don't exceed their authority. But of course, even though they may be well intentioned, abundant evidence shows they do exceed their authority, and not infrequently.

Thus we come to the next principle: we must not build the infrastructure of the police state. We must not make it so that the only action needed to have a real police state is to flip a switch or change a policy. Perhaps the risk that the switch will actually be flipped is one in 100 in your judgement; to me the cost of such a state is so high that we must not let even that level of risk go by. Instead, let us make sure that those who would want a surveillance state have to do both things -- change the policy and create the infrastructure. Let us not do the hard work for them.
Submitted by brad on Wed, 2004-02-25 14:52.
Each year, when Tivo reminds people that it gathers anonymized viewing data on Tivo usage by reporting Super Bowl stats, a debate arises. A common view is that it's OK because they go to a lot of work (which indeed they do) to strip the data of the identity of the user.
As noted, I've read Tivo's reports and talked to Tivo's programmers, and they did work hard to keep the data secure and anonymized.
So why worry? A number of principles are at stake. Privacy is an unusual issue: you only care about privacy invasions _after_ your privacy is violated. To avoid invasions, some people have to be a little paranoid, and justifiably argue against building the infrastructure of a massive surveillance system, even if the people who build it have good intentions. They might not always be the ones running it.

This is not simply an Orwellian fear of the TV watching you (though that does play a part). Recently, the studios sued SonicBlue over the ReplayTV, a competitor to Tivo. To gather data, they sought a court order requiring Replay to modify their code to monitor their users and collect data for the court -- and Replay doesn't do even the anonymous monitoring Tivo does. There was great outcry, and the order was reversed. Sadly, that's a lesson that will cause the next such order to be issued in secret.

And unfortunately, Tivo has done 90% of the work needed to make such an order easy. Yes, they anonymize the data, but they do it by choice, not natural law. They can undo that choice, either because they change their minds, or because a court or police agency changes it for them.

How paranoid is it to be worried about something that is not just hypothetical, but has already taken place at least once?
Submitted by brad on Tue, 2004-02-24 04:32.
RSA today announced a version of Ron Rivest's blocker tag, a supposed defence against unwanted RFID scans.
The tag, explained simply, answers affirmatively to an entire subsection of the RFID space, so that any scanner looking for a tag in that space always hears a yes (or gives up) and thus can't find a tag in that space.
(RFID scanners, if you didn't know, find tags by doing a binary descent of their code number, asking "Anybody here start with 1? Yes? Ok, anybody start with 10? No? How about 11? Yes? Anybody start with 110?" and so on.)
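That tree walk is easy to simulate. A toy sketch of my own (8-bit IDs for brevity; real EPC tags carry 64- or 96-bit codes, and real readers resolve collisions at the radio layer):

```python
def scan(tags, bits=8):
    """Singulate tags by binary descent: ask "anybody start with <prefix>?"
    and recurse into every prefix that draws an answer."""
    found = []
    def anybody(prefix):
        return any(t.startswith(prefix) for t in tags)
    def descend(prefix):
        if len(prefix) == bits:
            found.append(prefix)   # a full-length prefix is one tag's ID
            return
        for bit in "01":
            if anybody(prefix + bit):
                descend(prefix + bit)
    descend("")
    return found

tags = {"00101101", "00101110", "11010001"}
print(sorted(scan(tags)))   # ['00101101', '00101110', '11010001']
```

The reader never broadcasts a full ID; it only narrows prefixes until each remaining branch holds exactly one tag.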
The blocker would work against existing scanners, but it doesn't seem very secure to me.
All a snooper would need is a scanner that can tell the difference between two tags answering and one tag answering. On the left side of the tree, it might hear both the blocker tag and real tags; on the right side, only the blocker tag. If it can tell the difference, it can still descend the tree and read your tag.
A very smart blocker tag that knows not to answer when the specific tags it is blocking will answer could defeat this, but that's a much more expensive tag, effectively an active device. And even this could be defeated by a reader with more than one antenna, or any directionality to its antenna, letting it know the answers came from two different sources.
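To make the weakness concrete, here is a toy model of my own (8-bit IDs): a blocker that answers yes to every prefix query, and a reader that can distinguish "exactly one responder" from "several." A branch where only the blocker spoke contains no real tag, so the reader prunes it and still recovers every ID:

```python
def responders(tags, prefix, blocker=True):
    """Number of distinct sources answering a query for `prefix`.
    The blocker tag answers every query in the protected space."""
    return sum(1 for t in tags if t.startswith(prefix)) + (1 if blocker else 0)

def scan_counting(tags, bits=8):
    """Binary descent that prunes any branch where only the blocker answered."""
    found = []
    def descend(prefix):
        if responders(tags, prefix) <= 1:   # blocker alone: no real tag here
            return
        if len(prefix) == bits:
            found.append(prefix)
            return
        descend(prefix + "0")
        descend(prefix + "1")
    descend("")
    return found

tags = {"00101101", "11010001"}
print(sorted(scan_counting(tags)))   # ['00101101', '11010001'] -- blocker defeated
```

A reader that merely hears "yes/no" sees every branch as occupied and gives up; one that counts responders walks straight past the blocker.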
What this means is the ordinary reader won't be able to scan the tags on your clothes as you walk into a building, but one designed for that purpose could do so. So we'll have snooping for the rich, but not for the public. Though at least you could detect when this has been done to you, if you had an active tag looking for this. But what could you do about it?
Submitted by brad on Wed, 2004-02-18 15:36.
Many people, trying to address concerns about the privacy implications of RFID tags, have suggested that it could just become the norm, or even a requirement, to "burn out" the RFID tags in purchased products as they are sold.
I'll get to why that doesn't work in a moment, but first some background. RFID tags are cheap passive radio devices planned to go into most consumer products, replacing the bar-code. A reader within a certain range of the tags can read the serial numbers of all tags in the area. Every tag has a unique number, so it makes a great bar-code for inventory control.
Soon your body will be covered with RFID tags everywhere you go: in your clothes, boots, watch, wallet, glasses, ID badges, credit cards, etc. Scanners may show up everywhere too -- on city streets, in doorways, airports, train stations and so on -- and once somebody has scanned you once, your movements can be tracked everywhere. Pretty 1984.
Here's the rub. We're going to want some of these tags. Not just to return products to the store. Today the readers are expensive, but soon they will be cheap, and we'll want to have something we can use to find our keys, wallet, glasses, watch or other losables. To let us know what's in our closets, on our bookshelves. We'll love it. So we won't burn out the IDs.
The only answer I have thought of (I don't think Rivest's jammer will work) is more expensive RFIDs that can be modified instead of burned out, so that they will no longer respond to any scanner, just to our personal one. So they work for us, not for others.
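One way such an owner-only tag might work is challenge-response: rather than broadcasting a fixed ID, the tag answers only a scanner that proves knowledge of a shared secret. This sketch is entirely my own illustration of the idea -- today's passive tags have nowhere near the compute budget for it:

```python
import hashlib
import hmac
import os

class PersonalTag:
    """Sketch of a tag that responds only to its owner's scanner."""
    def __init__(self, tag_id, owner_key):
        self.tag_id = tag_id
        self.key = owner_key   # secret shared only with the owner's scanner

    def query(self, challenge, proof):
        # Answer only if the scanner MACed our challenge with the shared key.
        expected = hmac.new(self.key, challenge, hashlib.sha256).digest()
        return self.tag_id if hmac.compare_digest(proof, expected) else None

key = os.urandom(16)
tag = PersonalTag("wallet-0042", key)
challenge = os.urandom(8)   # the tag would issue a fresh challenge per query

owner_proof = hmac.new(key, challenge, hashlib.sha256).digest()
assert tag.query(challenge, owner_proof) == "wallet-0042"
assert tag.query(challenge, b"\x00" * 32) is None   # stranger's scanner: silence
```

A fresh challenge per query also stops a snoop from replaying an owner's old proof, though it makes the tag even more expensive.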
Even with this the IDs in credit cards, access cards and such will need to work in more scanners. How will we turn them off?
Submitted by brad on Fri, 2004-01-16 05:54.
I plan for this to be mostly an essay blog rather than a link blog, but I could not resist this story of yet another nightmare at airport security, as a student is ordered to "dispose" of her pet fish while trying to take it home with her on the plane.
So now the essay. Like many of you, I have read tons of these crazy security stories. Stories of cruelty, stories of pointless security that simply gives the appearance of security without making things safer. Innocent people harassed. Pilots arrested over nail clippers. People groped. People kept off planes for reading the wrong book.
Now, you might be firmly of the mind that we must increase security and make some sacrifices. But if that's the case, there need to be checks and balances on the security decisions.
Right now, when people go overboard on security, there is no negative consequence for them when they get it wrong. They have no reason not to go overboard. We're too afraid that if we discipline them at all, they might let something slide and let Al Qaida on the plane.
But it can't work that way. There needs to be a consequence for going over the top. People should be able to lose jobs, get fined or disciplined. Security workers need to think twice about whether they should do something.
This should particularly be true if it's obvious that what they are doing gives only the appearance of security rather than real security, unless we decide it is our policy that the appearance of security is more important.
(And while I see the argument some might make for that -- that the public has to feel confident that they are safe in transit or they won't travel -- I don't buy it long term at all.)
The men who made the fish policy, or the men who enforced it, should be called to account. If they have a reason for it, they should have it ready for the passenger. If they don't, somebody should be disciplined.
No, I'm not saying don't give them any discretion or let them make any mistakes. But the mistakes should be noted, explained, and if repeated, punished.
Submitted by brad on Tue, 2003-12-30 11:27.
Next Monday, the USA will start fingerprinting and photographing all visitors, except those from 28 allied countries -- fortunately for me, Canada is among those exempted.
Sixty years ago, the USA gave its all to take down a growing empire that wanted everybody to show their papers any time they moved. Now the USA is moving closer to what it fought. Aside from hurting the tourist industry, it's yet another example of removing fundamental rights from people who lack the luck of the right accident of birth.
Here's the story from the San Jose Mercury News.