Submitted by brad on Thu, 2008-10-09 12:26.
Ford is making a new car-limiting system called MyKey standard in future models. It allows the car owner to enable various limits and permissions on the keys they give to their teenagers. The current system's limits include an 80 mph speed cap, a 40% volume limit on the stereo, never-ending seatbelt reminders, earlier low-fuel warnings, audio speed alerts and the inability to disable various safety systems.
My reaction is of course mixed. If you own something, it is reasonable for you to be able to constrain its use by people you lend it to. At the same time it is easy to see this literal paternalism turn into social paternalism. While it’s always been possible to build cars that, for example, can’t go over the speed limit, it’s always been seen as a “non-starter” with the public. The more cars that are out there which have governors on them, the more used to the idea people will get. (“Valet” keys that can’t go over 25mph or open the trunk have been common for some time.)
This is going to be one of the big questions on the path to Robocars — will they be able to violate traffic laws at the command of their owners? I have an essay on that coming up for the future, where I will also ask how much sense traffic laws make in a robocar world.
The Ford key limits speed to 80 mph to allow the teen to pass on the highway. Of course, on some highways here you could not go in the fast lane with that governor on, which probably suits the parents just fine. What they probably want is more like a limit on average speed, allowing the teen, for short periods, to burst to the full power of the car if it's needed, but not from a standing start, and of course with advance warning when the car has gone too fast for too long, to give a chance to safely slow down.
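An average-speed governor like that could be quite simple. Here is a minimal sketch, assuming made-up parameters (a 70 mph average over a 60-second window) rather than anything Ford's MyKey actually implements:

```python
from collections import deque

class AverageSpeedGovernor:
    """Limit average speed over a sliding window, allowing short bursts.

    Hypothetical sketch: the window length, limits, and class name are
    invented for illustration, not taken from any real product.
    """
    def __init__(self, avg_limit_mph=70, window_secs=60, warn_fraction=0.9):
        self.avg_limit = avg_limit_mph
        self.window = window_secs
        self.warn_at = avg_limit_mph * warn_fraction
        self.samples = deque()  # (timestamp, speed) pairs

    def record(self, t, speed_mph):
        """Add a speed sample; return 'ok', 'warn', or 'limit'."""
        self.samples.append((t, speed_mph))
        # Drop samples that have fallen out of the sliding window.
        while self.samples and self.samples[0][0] < t - self.window:
            self.samples.popleft()
        avg = sum(s for _, s in self.samples) / len(self.samples)
        if avg >= self.avg_limit:
            return "limit"   # begin the gentle, well-warned slowdown
        if avg >= self.warn_at:
            return "warn"    # alert the driver before the governor acts
        return "ok"
```

A brief burst of full power keeps the window average below the limit, while sustained speeding trips the warning and then the governor.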
The earlier low-gas warning is just silly. The earlier you make a warning, the more you teach people to ignore it. If you have an early warning (subtle) and then a “this time we really mean it” warning most people will probably just use the second one. Many cars with digital fuel meters refuse to estimate fuel left below a certain amount, because they don’t want to be blamed for making you think you have more gas than you do. So they tell you nothing instead, which is silly.
What might make more sense would be the ability to make full use of speed, but the threat of reporting it to mom & dad if it’s over-used. (Such a product would be easy to add to existing cars, I wonder if anybody has made a product like that?) Ideally the product would warn the teen if they were getting close to the limit, to let them govern themselves, knowing that they would face a lecture and complete loss of car privileges if they go over the limitations.
On one hand, this is less paternalistic, because it does not constrain the vehicle and teaches the child to discipline themselves rather than making technology enforce the discipline. On the other hand, it is somewhat Orwellian, though the system need not report the particulars of the infringement, just the fact of it. Though we can certainly see parents wanting to know all the details.
Of course, we’ll see a lot more of that sort of surveillance asked for. Track logs from the GPS, in fact. Logging GPSs that can be hidden in cars cost only $80, and I am sure parents are buying them. (I have one; they are handy for geotagging photos.) We might also start seeing “smart” logging systems that measure speed infractions based on what road you are on. I.e., 80 mph nowhere near a highway is an infraction, but on the highway it isn’t.
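The road-aware infraction check is easy to sketch. Assume each GPS sample has already been map-matched to a road class (that matching is the hard part, and is taken as given here); the limits below are illustrative, not any real product's settings:

```python
def flag_infractions(track, highway_limit=80, street_limit=45):
    """Flag GPS samples whose speed is improbable for the road type.

    `track` is a list of (speed_mph, road_type) pairs. Road
    classification is assumed to come from map-matching the GPS fix.
    Returns (index, speed, road_type) tuples for each infraction.
    """
    infractions = []
    for i, (speed, road) in enumerate(track):
        limit = highway_limit if road == "highway" else street_limit
        if speed > limit:
            infractions.append((i, speed, road))
    return infractions
```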
I doubt we’ll be able to stop this sort of governing or monitoring technology — so how can we bend it to protect freedom and privacy?
Submitted by brad on Mon, 2008-09-29 22:40.
Most of us have had to stand in a long will-call line to pick up tickets. We probably even paid a ticket “service fee” for the privilege. Some places are helping by having online printable tickets with a bar code. However, that requires that they have networked bar code readers at the gate which can detect things like duplicate bar codes, and venues seem to prefer giant lines and lots of staff to getting such machines.
Can we do it better?
Well, for starters, it would be nice if tickets could be sent not as a printable bar code, but as a message to my cell phone. Perhaps a text message with a coded string, which I could then display to a camera that does OCR on it. Same as a bar code, but I can actually get it while I am on the road and don’t have a printer. And I’m less likely to forget it.
Or let’s go a bit further and have a downloadable ticket application on the phone. The ticket application would use bluetooth and a deliberately short range reader. I would go up to the reader, and push a button on the cell phone, and it would talk over bluetooth with the ticket scanner and authenticate the use of my ticket. The scanner would then show a symbol or colour and my phone would show that symbol/colour to confirm to the gate staff that it was my phone that synced. (Otherwise it might have been the guy in line behind me.) The scanner would be just an ordinary laptop with bluetooth. You might be able to get away with just one (saving the need for networking) because it would be very fast. People would just walk by holding up their phones, and the gatekeeper would look at the screen of the laptop (hidden) and the screen of the phone, and as long as they matched wave through the number of people it shows on the laptop screen.
Alternately you could put the bluetooth antenna in a little faraday box to be sure it doesn’t talk to any other phone but the one in the box. Put phone in box, light goes on, take phone out and proceed.
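The gate-side logic of that Bluetooth scheme can be sketched in a few lines. This is an invented illustration, not a real protocol: the ticket carries an authentication code under a venue key, the scanner checks it and rejects duplicates, and on success both the scanner and the phone display the same random colour so the gatekeeper knows which phone actually synced:

```python
import hashlib
import hmac
import secrets

COLORS = ["red", "green", "blue", "yellow", "purple"]

def scan_ticket(ticket_id, ticket_mac, venue_key):
    """Gate-side check of a ticket presented over the short-range link.

    Returns the symbol both screens should display, or None if the
    ticket is invalid or has already been used. A real system would
    run a proper challenge-response over Bluetooth; the message format
    and names here are made up.
    """
    expected = hmac.new(venue_key, ticket_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket_mac):
        return None
    if ticket_id in scan_ticket.used:   # duplicate detection
        return None
    scan_ticket.used.add(ticket_id)
    # Random symbol shown on both the scanner and the phone, so the
    # gatekeeper can tell it wasn't the guy in line behind you.
    return secrets.choice(COLORS)

scan_ticket.used = set()
```

Because the check is a single in-memory lookup, one laptop could plausibly handle a whole gate without any networking, as suggested above.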
One reason many will-calls are slow is they ask you to show ID, often your photo-ID or the credit card used to purchase the item. But here’s an interesting idea. When I purchase the ticket online, let me offer an image file with a photo. It could be my photo, or it could be the photo of the person I am buying the tickets for. It could be 3 photos if any one of those 3 people can pick up the ticket. You do not need to provide your real name, just the photo. The will call system would then inkjet print the photos on the outside of the envelope containing your tickets.
You do need some form of name or code, so the agent can find the envelope, or type the name in the computer to see the records. When the agent gets the envelope, identification will be easy. Look at the photo on the envelope, and see if it’s the person at the ticket window. If so, hand it over, and you’re done! No need to get out cards or hand them back and forth.
A great company to implement this would be PayPal. I could pay with PayPal, not revealing my name (just an E-mail address), and PayPal could have a photo stored, and forward it on to the ticket seller if I check the box to do this. The ticket seller never knows my name, just my picture. You may think it’s scary for people to get your picture, but in fact it’s scarier to give them your name. They can collect and share data about you under your name. Your picture is not very useful for this, at least not yet, and if you like you can use one of many different pictures each time — you can’t keep using different names if you need to show ID.
This could still be done with credit cards. Many credit cards offer a “virtual credit card number” system which will generate one-time card numbers for online transactions. They could set these up so you don’t have to offer a real name or address, just the photo. When picking up the item, all you need is your face.
This doesn’t work if it’s an over-21 venue, alas. They still want photo ID, but they only need to look at it, they don’t have to record the name.
It would be more interesting if one could design a system so that people can find their own ticket envelopes. The guard would let you into the room with the ticket envelopes, and let you find yours, and then you can leave by showing your face is on the envelope. The problem is, what if you also palmed somebody else’s envelope and then claimed yours, or said you couldn’t find yours? That needs a pretty watchful guard which doesn’t really save on staff as we’re hoping. It might be possible to have the tickets in a series of closed boxes. You know your box number (it was given to you, or you selected it in advance) so you get your box and bring it to the gate person, who opens it and pulls out your ticket for you, confirming your face. Then the box is closed and returned. Make opening the boxes very noisy.
I also thought that for Burning Man, which apparently had a will-call problem this year, you could just require all people fetching their ticket be naked. For those not willing, they could do regular will-call where the ticket agent finds the envelope. :-)
I’ve noted before that, absent the need of the TSA to know all our names, this is how boarding passes should work. You buy a ticket, provide a photo of the person who is to fly, and the gate agent just looks to see if the face on the screen is the person flying, no need to get out ID, or tell the airline your name.
Submitted by brad on Sat, 2008-08-02 14:35.
There’s a bit of an internet buzz this week around a video of a law lecture on why you should never, ever, ever, ever talk to the police. The video begins with the law professor and criminal defense attorney, who is a good speaker, making that case, and then a police detective, interesting but not quite as eloquent, agreeing with him and describing the various tricks the police use every day with people stupid enough to talk to them.
The case is very good. In our society of a zillion laws, you are always guilty of something, and, he explains, even if you’re completely innocent and you tell nothing but the truth, there are still a lot of ways you could end up in jail. Not that it happens every time, but the chance is high enough and the cost is so great that he advocates that you should never, ever talk to the police. (He doesn’t say this, but I presume he does not include filing a complaint about a crime against you, or being a witness to a crime against others, where the benefits may outweigh the risk.)
Now fortunately for the police, few people follow the advice. Lots of people talk to the police. Some 80% of cases, the detective declares, are won because of a confession by the suspect. Cops love it, and they will lie (and are permitted to lie) to make it happen if they can.
But since a rational person should never, ever, under any circumstances talk to the police, this prevents citizens from ever helping the police. And there are times when society, and law enforcement, would be better if citizens could help the police without fear.
What if there existed a means for the police to do a guaranteed off-the-record interview with a non-suspect? Instead of a Miranda warning, the police would inform you that:
“You are not a suspect, and nothing from this interview can be used against you in a court of law.”
First of all, could this work? I believe our laws of evidence are strong enough that actual quotes from the interview could not be used. To improve things, you could be allowed to record the interview, or the officer could record it but hand you the only copy, and swear it’s the only copy. It could be a digitally signed, authenticated copy, which can never be taken from you by warrant or subpoena, or used even if you lose it, perhaps until some years after your death.
However, clearly if the police learn something in the interview that makes them suspect you, they will try to find ways to “learn” it again through other, admissible means. And this could come back to bite you. While we could have a fruit-of-the-poisonous-tree doctrine which would forbid this, it is much harder to get full rigour about such doctrines. Is this fear enough to make it still always the best advice to never speak to the police? Is there a way we could make it safe to assist the police?
I will note that if we had a safe means to assist the police, it would sometimes “backfire” in the eyes of the public. There would be times when interviewees would (foolishly, but still successfully) say “nyah, nyah, I did it and you can’t get me” and the public would be faced with the usual confusion over people who are let free even when we know they are guilty. And indeed there would be times when the police learn things in such interviews that could have led them to evidence they are prohibited from using, getting the public up in arms because some rapist, kidnapper, murderer or even terrorist goes free.
Submitted by brad on Tue, 2008-07-29 19:08.
There are a variety of tools out there to help recover stolen technological devices. They make the devices “phone home” to the security company, and if stolen, this can be used to find the laptop (based on IP traceroutes etc.) and get it back. Some of these tools work hard to hide on the machine, even claiming they will survive low level disk formats. Some reportedly get installed into the BIOS to survive a disk swap.
This has always been interesting to me, but it seems like something that could be used to track you against your will. I don’t know how all the different products work inside — they are deliberately obscure about some parts of it — but here’s a design for one that you could perhaps trust with your privacy.
- When setting it up, you would create a passphrase. Write it down somewhere else, as you need it for recovery.
- Every so often, it will make a DNS request of a magic DNS server. In the request will be embedded a random number, and an encryption of the random number based on the passphrase.
- Without the passphrase, these requests mean nothing to the tracking company. They don’t know who made the request.
- When your device is stolen, you give the tracking company your passphrase.
- When a request comes in, the tracking company checks it using the passphrases of the devices that are currently reported stolen. If it matches, bingo.
- In a match, return a DNS answer that says, “You’re stolen. Do the stuff you should do.” That answer is of course also encrypted with the passphrase.
- At that point, the device can do complex traceroutes, take photos with its built in camera, record audio, you name it.
If there are a lot of stolen laptops in the database, the search could be sped up one of two ways:
- The random number isn’t random, it’s the date. The site can then pre-compute all the codes it is likely to get from stolen laptops that day. Changing the date on the computer won’t help, as that just means a little more CPU on that particular request.
- Include an 8 or 9 bit hash of the passphrase + date. That can reduce by a factor of 256 or 512 how many phrases you must try. This identifies you a bit but if the company has lots of customers you are fine.
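The client-side encoding and the server-side matching described above might look something like this. Everything here is a sketch under assumptions: the zone name is a placeholder, and the choice of SHA-256/HMAC and a 16-character MAC is mine, not any real product's:

```python
import datetime
import hashlib
import hmac

def beacon_hostname(passphrase, day=None, zone="find.example.com"):
    """Build the device's daily check-in hostname for the DNS beacon.

    Per the scheme above, the 'random' number is just the date, MACed
    under the owner's passphrase, plus a short hash hint so the server
    can narrow its search without identifying the device.
    """
    day = day or datetime.date.today().isoformat()
    key = hashlib.sha256(passphrase.encode()).digest()
    mac = hmac.new(key, day.encode(), hashlib.sha256).hexdigest()[:16]
    # 8-bit hint: cuts the server's passphrase search by roughly 256x
    # while revealing almost nothing about which customer this is.
    hint = hashlib.sha256((passphrase + day).encode()).hexdigest()[:2]
    return f"{hint}.{mac}.{day}.{zone}"

def server_matches(query_host, stolen_passphrases):
    """Server side: which reported-stolen passphrase made this query?"""
    parts = query_host.split(".", 3)   # hint, mac, day, zone
    day = parts[2]
    for phrase in stolen_passphrases:
        if query_host == beacon_hostname(phrase, day):
            return phrase
    return None
```

Requests from devices that are not reported stolen match no passphrase and are simply ignored, which is the privacy property the design is after.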
Note that DNS requests tend to get through just about any firewall other than a firewall deliberately tuned to block sneaky DNS requests.
This system could be integrated into a BIOS or right into an ethernet card. However, since it is the high level OS that does DHCP etc. you need a bit of network layer cheating to do this right. I presume they already do that.
You can also run the DNS server yourself, if you are so inclined. It’s not that hard. But this system lets you trust a 3rd party as they learn nothing about you as long as they have lots of customers.
Submitted by brad on Thu, 2008-06-19 19:19.
Sadly, I must report that after our initial success in getting the members of the House to not grant immunity to telcos who participated in the illegal warrantless wiretap program which we at the EFF are suing over, the attempt to join the Senate bill (which grants immunity) to the House bill has, by reports, resulted in a so-called compromise that effectively grants the immunity.
I have written earlier about this issue and asked you to contact your members of congress, particularly the House and the House leadership about this issue, so now I must do it one last time.
It disturbs me that House members got the issue the first time, but that conservative “blue dog” Democrats are bolting and going to President Bush’s side. The White House arguments make no sense — if the programs were not illegal, no immunity is needed, and since the new bills make the programs legal, the companies will have no fear of complying with new orders under the new law. The only activity these lawsuits should chill would be illegal activity. It’s like the White House is saying, “If they don’t get immunity, they will be scared to break the law when we ask them to again.”
The solution is simple. When the White House comes calling and asks you to break the law, once it’s not an emergency, you should say, “Why don’t we clear this up before a judge?” That’s what EFF is doing now, 7 years later. Asking a judge to look it over, and see if it’s legal. Should have been done long ago, but certainly shouldn’t be stopped now.
Call your members of congress. Tell them you care about the rule of law and the constitution, and not to grant immunity, in particular this so-called compromise which still grants immunity as long as the White House promised it was all legal.
You can get the contact information for your member at the EFF Action Center.
Update: Damn. Even Obama came out and endorsed the “compromise.” The supposed “compromise” says that as long as the administration swears that they told the phone companies that it’s legal, it’s legal. Gee, what are the odds that’s going to happen? How can Obama and the rest of the Democratic leadership side with the President like this? Where are their spines? Obama says he wants to fight in the Senate to remove the immunity, but it’s sadly too late there, and he has to know that, unless he goes all out with his leadership power. He could have done much more earlier in the week by telling Democrats to not support immunity, but he didn’t.
Submitted by brad on Wed, 2008-05-21 18:23.
Recently, I wrote about the data deposit box, an architecture where applications come to the data rather than copying your personal data to all the applications.
Let me examine some more of the pros and cons of this approach:
The biggest con is that it does make things harder for application developers. The great appeal of the Web 2.0 “cloud” approach is that you get to build, code and maintain the system yourself. No software installs, and much less portability testing (browser versions) and local support. You control the performance and how it scales. When there’s a problem, it’s in your system so you can fix it. You design it how you want, in any language you want, for any OS you want. All the data is there, there are no rules. You can update the software at any time; the only pieces outside your control are the user’s browser and plugins.
The next con is the reliability of users’ data hosts. You don’t control them. If a user’s data host is slow or down, you can’t fix that. If you want the host to serve data to their friends, it may be slow for other people. The host may not be located in the same country as the person getting data from it, making things slower.
The last con is also the primary feature of data hosting. You can’t get at all the data. You have to get permissions, and do special things to get at data. There are things you just aren’t supposed to do. It’s much easier, at least right now, to convince the user to just give you all their data with few or no restrictions, and just trust you. Working in a more secure environment is always harder, even if you’re playing by the rules.
Those are pretty big cons. Especially since the big “pro” — stopping the massive and irrevocable spread of people’s data — is fairly abstract to many users. It is the fundamental theorem of privacy that nobody cares about it until after it’s been violated.
But there’s another big pro — cheap scalability. If users are paying for their own data hosting, developers can make applications with minimal hosting costs. Today, building a large cloud app that will get a lot of users requires a serious investment in providing enough infrastructure for it to work. YouTube grew by spending money like water for bandwidth and servers, and so have many other sites. If you have VCs, it’s relatively inexpensive, but if you’re a small-time garage innovator, it’s another story. In the old days, developers wrote software that ran on users’ PCs. Running the software didn’t cost the developer anything, but trying to support a thousand different variations of the platform did.
With a data hosting architecture, we can get the best of both worlds. A more stable platform (or so we hope) that’s easy to develop for, but no duty to host most of its operations. Because there is no UI in the data hosting platform, it’s much simpler to make it portable. People joked that Java became write-once, debug everywhere for client apps but for server code it’s much closer to its original vision. The UI remains in the browser.
For applications with money to burn, we could develop a micropayment architecture so that applications could pay for your hosting expenses. Micropayments are notoriously hard to get adopted, but they do work in more restricted markets. Applications could send payment tokens to your host along with the application code, allowing your host to give you bandwidth and resources to run the application. It would all be consolidated in one bill to the application provider.
Alternately, we could develop a system where users allow applications to cache results from their data host for limited times. That way the application providers could pay for reliable, globally distributed resources to cache the results.
For example, say you wanted to build Flickr in a data hosting world. Users might host their photos, comments and resized versions of the photos in their data host, much of it generated by code from the data host. Data that must be aggregated, such as a search index based on tags and comments, would be kept by the photo site. However, when presenting users with a page filled with photo thumbnails, those thumbnails could be served by the owner’s data host, but this could generate unreliable results, or even missing results. To solve this, the photo site might get the right to cache the data where needed. It might cache only for users who have poor hosting. It might grant those who provide their own premium hosting with premium features since they don’t cost the site anything.
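The caching policy in the Flickr example reduces to a simple fallback rule. A minimal sketch, with the network fetch stubbed out as a callable and all names invented:

```python
def serve_thumbnail(photo_id, owner_host_fetch, cache):
    """Serve a thumbnail from the owner's data host, falling back to
    the photo site's cache when the host is slow or unreachable.

    `owner_host_fetch` stands in for the network call to the owner's
    data host; `cache` is the site's own store, used only to paper
    over unreliable hosting.
    """
    try:
        data = owner_host_fetch(photo_id)
        cache[photo_id] = data   # refresh the cache while the host is healthy
        return data
    except Exception:
        # Host down: fall back to the cached copy, or a placeholder.
        return cache.get(photo_id, b"<missing thumbnail>")
```

Users on premium hosting would rarely hit the fallback path, which is why the site could afford to treat them as costing it nothing.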
As such, well funded startups could provide well-funded quality of service, while no-funding innovators could get going relying on their users. If they became popular, funding would no doubt become available. At the same time, if more users buy high quality data hosting, it becomes possible to support applications that don’t have and never will have a “business model.” These would, in effect, be fee-paid apps rather than advertising or data harvesting funded apps, but the fees would be paid because the users would take on the costs of their own expenses.
And that’s a pretty good pro.
Submitted by brad on Thu, 2008-05-15 13:56.
Recently we at the EFF have been trying to fight new rulings about the power of U.S. customs. Right now, it’s been ruled they can search your laptop, taking a complete copy of your drive, even if they don’t have the normally required reasons to suspect you of a crime. The simple fact that you’re crossing the border gives them extraordinary power.
We would like to see that changed, but until then what can be done? You can use various software to encrypt your hard drive — there are free packages like TrueCrypt, and many laptops come with this as an option — but most people find having to enter a password every time they boot to be a pain. And customs can threaten to detain you until you give them the password.
There are some tricks you can pull, like having a special inner-drive with a second password that they don’t even know to ask about. You can put your most private data there. But again, people don’t use systems with complex UIs unless they feel really motivated.
What we need is a system that is effectively transparent most of the time. However, you could take special actions when going through customs or otherwise having your laptop be out of your control.
Submitted by brad on Mon, 2008-05-12 12:46.
A recent story today about discussions for an official defense Botnet in the USA prompted me to post a question I’ve been asking for the last year. Are some of the world’s botnets secretly run by intelligence agencies, and if not, why not?
Some estimates suggest that up to 1/3 of PCs are secretly part of a botnet. The main use of botnets is sending spam, but they are also used for DDOS extortion attacks and presumably other nasty things like identity theft.
But consider this — having remote control of millions of PCs, and a large percentage of the world’s PCs seems like a very tempting target for the world’s various intelligence agencies. Most zombies are used for external purposes, but it would be easy to have them searching their own disk drives for interesting documents, and sniffing their own LANs for interesting unencrypted LAN traffic, or using their internal state to get past firewalls.
Considering the billions that spy agencies like the NSA, MI6, CSEC and others spend on getting a chance to sniff signals as they go over the wires, being able to look at the data all the time, any time as it sits on machines must be incredibly tempting.
And if the botnet lore is to be accepted, all this was done using the resources of a small group of young intrusion experts. If a group of near kids can control hundreds of millions of machines, should not security experts with billions of dollars be tempted to do it?
Of course there are legal/treaty issues. Most “free nation” spy agencies are prohibited from breaking into computers in their own countries without a warrant. (However, as we’ve seen, the NSA has recently been lifted of this restriction, and we’re suing over that.) However, they are not restricted on what they do to foreign computers, other than by the burdens of keeping up good relations with our allies.
However, in some cases the ECHELON loophole may be used, where the NSA spies on British computers and MI6 spies on American computers in exchange.
More simply, these spy agencies would not want to get caught at this, so they would want to use young hackers building spam-networks as a front. They would be very careful to assure that the botting could not be traced back to them. To keep it legal, they might even just not take information from computers whose IP addresses or other clues suggest they are domestic. The criminal botnet operators could infect everywhere, but the spies would be more careful about where they got information and what they paid for.
Of course, spy agencies of many countries would suffer no such restrictions on domestic spying.
Of all the spy agencies in the world, can it be that none of them have thought of this? That none of them are tempted by being able to comb through a large fraction of the world’s disk drives, looking for both bad guys and doing plain old espionage?
That’s hard to fathom. The question is, how would we detect it? And if it’s true, could it mean that spies funded (as a cover story) the world’s spamming infrastructure?
Submitted by brad on Mon, 2008-05-05 20:08.
I’ve been ranting of late about the dangers inherent in “Data Portability” which I would like to rename as BEPSI to avoid the motherhood word “portability” for something that really has a strong dark side as well as its light side.
But it’s also important to come up with an alternative. I think the best alternative may lie in what I would call a “data deposit box” (formerly “data hosting.”) It’s a layered system, with a data layer and an application layer on top. Instead of copying the data to the applications, bring the applications to the data.
A data deposit box approach has your personal data stored on a server chosen by you. That server’s duty is not to exploit your data, but rather to protect it. That’s what you’re paying for. Legally, you “own” it, either directly, or in the same sense as you have legal rights when renting an apartment — or a safety deposit box.
Your data box’s job is to perform actions on your data. Rather than giving copies of your data out to a thousand companies (the Facebook and Data Portability approach) you host the data and perform actions on it, programmed by those companies who are developing useful social applications.
As such, you don’t join a site like Facebook or LinkedIn. Rather, companies like those build applications and application containers which can run on your data. They don’t get the data, rather they write code that works with the data and runs in a protected sandbox on your data host — and then displays the results directly to you.
To take a simple example, imagine a social application wishes to send a message to all your friends who live within 100 miles of you. Using permission tokens provided by you, it is able to connect to your data host and ask it to create that subset of your friend network, and then e-mail a message to that subset. It never sees the friend network at all.
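The data-host side of that example can be sketched directly. This is an invented illustration of the sandbox contract, not any real platform's API: the host computes the nearby subset (haversine distance on a sphere) and mails it, and the application's code gets back only a count, never an address or a name:

```python
import math

def friends_within(friends, my_lat, my_lon, radius_miles=100):
    """Subset of friend records within radius_miles of the owner.

    Runs on the data host; the friend-record format is made up.
    """
    R = 3959.0  # Earth radius in miles

    def dist(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R * math.asin(math.sqrt(a))

    return [f for f in friends
            if dist(my_lat, my_lon, f["lat"], f["lon"]) <= radius_miles]

def send_to_nearby(friends, my_lat, my_lon, message, mailer):
    """What the sandboxed application code invokes: the friend list
    stays on the host, and only a count comes back."""
    nearby = friends_within(friends, my_lat, my_lon)
    for f in nearby:
        mailer(f["email"], message)
    return len(nearby)
```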
Submitted by brad on Fri, 2008-04-25 14:00.
I’ve spoken about the Web 2.0 movement that is now calling itself “data portability.” Now there are web sites, and format specifications and plans are underway to make it possible to quickly export the personal data you put on one social networking site to another. While that sounds like a good thing — we like interoperability, and cooperation, and low barriers to entry on new players — I sometimes seem like a lone voice warning about some of the negative consequences of this.
I know I’m not going to actually stop the data portability movement, and nor is that really my goal. But I do have a challenge for it: Switch to a slightly negative name. Data portability sounds like motherhood, and this is definitely not a motherhood issue. Deliberately choosing a name that includes the negative connotations would make people stop and think as they implement such systems. It would remind them, every step of the way, to consider the privacy implications. It would cause people asking about the systems to query what they have done about the downsides.
And that’s good, because otherwise it’s easy to put on a pure engineering mindset and say, “what’s the easiest way we can build the tools to make this happen?” rather than “what’s a slightly harder way that mitigates some of the downsides?”
A name I dreamed up is BEPSI, standing for Bulk Export of Personal and Sensitive Information. This is just as descriptive, but reminds you that you’re playing with information that has consequences. Other possible names include EBEPSI (Easy Bulk Export…) or OBEPSI (One-click Bulk Export…) which sounds even scarier.
It’s rare for people to do something so balanced, though. Nobody likes to be reminded there could be problems with what they’re doing. They want a name that sounds happy and good, so they can feel happy and good. And I know the creator of dataportability.org thinks he’s got a perfectly good name already so there will be opposition. But a name like this, or another similar one, would be the right thing to do. Remind people of the paradoxes with every step they take.
Submitted by brad on Thu, 2008-03-13 16:47.
Earlier I wrote an essay on the paradox of identity management describing some counter-intuitive perils that arise from modern efforts at federated identity. Now it’s time to expand these ideas to efforts for portable personal data, especially portable social networks.
Partly as a reaction to Facebook’s popular applications platform, other social networking players are seeking a way to work together to stop Facebook from taking the entire pie. The Google-led OpenSocial effort is the leading contender, but there are a variety of related technologies, including OpenID, hCard and other microformats. The primary goal is to make it easy, as users move from one system to another or run sub-applications on one platform, to provide all sorts of data, including the map of their social network, to the other systems.
Some are also working on a better version of this goal, which is to allow platforms to interoperate. As I wrote a year ago interoperation seems the right long term goal, but a giant privacy challenge emerges. We may not get very many chances to get this right. We may only get one.
The paradox I identified goes against how most developers think. When it comes to greasing the skids of data flow, “features” such as portability, ease of use and user control, may not be entirely positive, and may in fact be on the whole negative. The easier it is for data to flow around, the more it will flow around, and the more that sites will ask, and then demand that it flow. There is a big difference between portability between applications — such as OpenOffice and MS Word reading and writing the same files — and portability between sites. Many are very worried about the risks of our handing so much personal data to single 3rd party sites like Facebook. And then Facebook made it super easy — in fact mandatory with the “install” of any application — to hand over all that data to hundreds of thousands of independent application developers. Now work is underway to make it super easy to hand over this data to every site that dares to ask or demand it.
Submitted by brad on Sun, 2008-02-17 15:17.
As many of you will know, it’s been a tumultuous week in President Bush’s battle to get congress to retroactively nullify our lawsuit against AT&T over the illegal wiretaps our witnesses have testified to. The President convinced the Senate to pass a bill with retroactive immunity for the phone companies — an immunity against not just this but all sorts of other illegal activities that have been confirmed but not explained by administration officials. But the House stood firm, and for now has refused. A battle is looming as the two bills must be reconciled. I encourage you to contact your members of congress soon to tell them you don’t want immunity.
And here, I’m going to outline in a slightly different way, why.
I’ve talked about the rule of law, and the problems with retroactive get out of jail free cards that “make it legal.” But let’s go back to when these programs started, and ask some important questions about the nature of democracy and its checks and balances.
The White House decided it wanted a new type of wiretap, and that it wouldn’t, or most probably couldn’t get a warrant from the special court convened just to deal with foreign intelligence wiretaps. They have their reasoning as to why this is legal, which we don’t agree with, but even assuming they believe it themselves, there is no denying by anybody — phone company employees, administration officials, members of congress or FISA judges — that these wiretaps were treading on new, untested ground. Wiretaps of course are an automatic red flag, because they involve the 4th amendment, and in just about every circumstance, everybody agrees they need a warrant as governed by the 4th amendment. Any wiretap without a warrant is enough to start some fine legal argument.
In the USA, the government is designed with a system of checks and balances. This is most important when the bill of rights is being affected, as it is here. The system is designed so that no one branch is allowed to interfere with rights on its own. The other branches get some oversight, they have a say.
So when the NSA came to the phone companies, asking for a new type of wiretap with no warrant, the phone companies had to decide what to do about it. The law tells them to say no, and exacts financial penalties if they don’t say no to an illegal request. The law is supposed to be simple and to not ask for too much judgment on the part of the private sector. In this situation, with a new type of wiretap being requested, the important question is who makes the call? Who should decide if the debatable orders are really legal or not?
There are two main choices. Phone company executives or federal judges. If, as the law requires, the phone company says “come back with a warrant” this puts the question of whether the program is legal in the hands of a judge. The phone company is saying, “this is not our call to make — let’s ask the right judge.”
If the administration says, “No, we say it’s legal, we will not be asking a judge, are you going to do this anyway?” then we’re putting the call in the hands of phone company executives.
That’s what happened. The phone companies made the decision. The law told them to kick it back to the judge, but the White House, it says, assured them the program was legal. And now that lawsuits like ours are trying to ask a different federal judge if the program was legal, the Senate has passed this retroactive immunity. This immunity does a lot of bad things, but among them it says that “it was right for the phone companies to be making the call.” That the pledges of the administration that the program was legal were enough. We’ve even been told we should thank the phone companies for being patriots.
But it must be understood. Even if you feel this program was necessary for the security of the nation, and was undertaken by patriots, this was not the only decision the phone company made. We’re not suing them because they felt they had a patriotic duty to help wiretap al Qaeda. We’re suing them because they took the decidedly non-patriotic step of abandoning the checks and balances that keep us free by not insisting on going to either a judge or congress or both.
Officials in the three branches take a solemn oath to defend the constitution. Phone company executives, as high minded or patriotic as they might be, don’t. So the law was written to tell them it is not their call whether a wiretap is legal, and to tell them there are heavy penalties if they try to make that decision. Those who desire immunity may think they are trying to rescue patriots, but instead they will be rewarding the destruction of proper checks and balances. And that’s not patriotic at all.
Some have argued that there was a tremendous urgency to this program, and this required the phone companies to act quickly and arrange the warrantless wiretaps. While I disagree, I can imagine how people might think that for the first week or two after the requests come in. But this wasn’t a week or two. This has gone on since 2001. There was over half a decade of time in which to consult with judges, congress or both about the legitimacy of the wiretaps. It’s not that they didn’t know — one company, Qwest, refused them at their own peril. If you argued for immunity for the actions of that first week or two, I could understand the nature of your argument. But beyond that, it’s very hard to see. For this is immunity not just for illegal wiretapping. This is immunity for not standing by the law and saying “let’s ask a judge.” For years, and years. Why we would want to grant immunity for that I just can’t understand, no matter how patriotic the goals. This system of freedom, with checks and balances, is the very core of what patriots are supposed to be defending.
Submitted by brad on Tue, 2008-01-29 10:21.
A couple of weeks ago many wrote about the mistakes of Spock, which made us call it the “evil Spock” for the way it had you mass-mail your friends by fooling you into thinking they were already users of Spock.
The newest company to make a similar mistake is called NotchUp. I am loath to discuss their business, because this means they get publicity for being bad actors, but it involves companies paying candidates for the chance to interview them, rather than all the fees going to the headhunters. (Something that could only work in a boom market, I expect.) But in this case, some of the fees go to the headhunters, of course, and in a particularly nasty turn, 10% of them go to the “friend” who “invited” you to sign up.
When I get a bunch of invites for something brand new in a short period, it’s either something really hot, or something fishy. In this case it’s the latter. And one person suggests they didn’t authorize NotchUp to email their entire LinkedIn contact list, so there may be something really fishy.
Here are some of the mistakes:
- The offering of affiliate fees to spam your friends, effectively an Amway style marketing system, has been pernicious for some time. While this should be strongly discouraged, I am not calling for its total prohibition, but it should never be secret. Every such message should contain a note explaining the financial incentive.
- The ad comes with your friend’s name on it, but the reply address is a dummy “invite@notchup” which I presume doesn’t work. Any site that does this sort of mailing should put in the friend’s real e-mail, so I can complain to them.
- The ad comes as a combined HTML and plain text message. Which would be good except the plain text part is just “Go read the HTML part.” Seriously. Boy is that evil.
- The site contains no “contact us” information for users who have issues. Their FAQ is all about signing up.
- The site has no “opt out” to stop my friends from doing these mass mailings to me. These are not particularly useful, because I have many email addresses and in fact whole domains that come to me, but they are better than nothing.
- The site may offer some of these things after you sign up. Of course, as somebody who wants to opt out, I hardly want to create an account just to do that. A few other sites have had this flaw. (I have no idea if you can opt out by signing up; I presume it at least lets you stop further mailings, since your friend has already pitched you.)
Whether their headhunting model sounds interesting or not, the company’s practices seem slimy enough that I would wait for a nicer competitor to come along if you want to get headhunted this way.
Submitted by brad on Tue, 2008-01-15 13:10.
Bruce Schneier has made a fuss by writing about how he leaves his wireless internet open. As a well-regarded security expert, how can he do this? You’ll see many arguments for and against in his posting. I’ll expand on one of mine.
Part of Bruce’s argument is one I express differently. I sometimes say “Firewalls are a hoax.” They are the wrong choice for security, but we sell them as a good choice. Oddly, however, this very fact does make them a valid choice. I will explain the contradiction.
Firewalls, I should say, are a form of network security — creating an internal network which is “trusted” and protected from the outside world. In an obscure way, encrypting your wireless net is in this class of security. Note that the “firewall” programs that run on PCs are not network firewalls, so they are generally not in this class of security, though they share the name.
The right way to do things, in the ideal world, is to secure each PC, and to have that PC encrypt its traffic end-to-end with all the sites it communicates with. If you do this, you have almost no need for firewalls or encryption on the network. This is important because in many cases, the idea that your internal network is trustable is a dangerous one. That’s because many networks are populated with insecure consumer computers which frequently get infected with malware (viruses, trojans etc.) They can get infected because they are laptops that visit exposed networks they are not secured well enough for — because you thought you could get away with less on the home net — or because their owner is tricked into downloading malware, or going to a web site that exploits a browser bug, etc.
Once a local computer is infected, your trusted local net betrays you, as the malware now gets to take advantage of all that trust.
We don’t live in that ideal world. The same insecurity these consumer computers (and yes, I mean Windows but other OSs are not immune) have makes them unsuitable for general exposure. The firewall industry gets to sell firewalls because the workstations are so insecure.
In the real world, virus/trojan attacks are the most common. Up to 30% of PCs are “botted” — taken over by malware and acting as zombies under the control of some distant master. A significant number are just plain compromised in other ways, though botting seems the most popular motive today for taking control of systems. The volume of attacks coming in via outsiders sniffing or connecting to your wireless network is insignificant in comparison, I think research would show.
And sadly, while we would like all web traffic to be HTTPS and all E-mail to be secured over TLS, this is just not an option. Most web servers don’t offer encrypted versions, and even the ones that do are rarely used because the UI was not set up correctly for it. (Ideally, http should have been designed so that you don’t have to put your encryption desires into the URL — https vs. http — so that it could be negotiated for each connection. Even then, it would be hard to do this, though identity certificates could make it happen.)
So we must surf the web in the open, or at best through an encrypted tunnel to a proxy that surfs in the open. So this does call for encrypting one’s wifi. However, again, the number of people sniffing private homes wifi is tiny in comparison to the other threats.
One of the factors supporting Bruce’s choice is that most security continues to have bad UI. The computer and security industries regularly vastly underestimate the importance of good UI. The hard truth is that good security with bad (hard to use) UI simply doesn’t get deployed very much unless you force it and force it hard. This suggests that lesser security with good UI can actually deliver more real world results than better security with bad UI.
For encrypting networks, the UI is poor. Different vendors use different passphrase algorithms to input keys. For many devices (phones, digital picture frames etc.) even entering a passphrase is difficult. We’re starting to see some better UI but it’s slow to deploy and for now it is no surprise that people want to leave their nets open, both for their own devices, and to give access to guests in their home or office.
To my mind the ideal UI is a device tries to connect to the network, and the AP or a computer flashes a light that says that one, and exactly one device is asking to join the net. You then push a button to confirm that device. Also good is the ability to allow arbitrary devices to connect in a secured channel but with no special ability to route packets to one another or into general devices. A full configuration has an internal net (with routing), guest devices that can’t route to the internal net or to other guests, and host devices which can be seen by guests but not the outside world.
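That push-button flow might be sketched like this. A hypothetical sketch only: the class, method names and one-pending-request rule are invented for illustration (real systems such as WPS push-button pairing differ in detail):

```python
# Sketch of the "push a button to confirm exactly one device" flow:
# a device asks to join, the AP flashes its light, and the owner's
# button press admits only that single pending device.
import secrets

class AccessPoint:
    def __init__(self):
        self.pending = None          # at most one device may be pending
        self.authorized = {}         # device_id -> session key

    def request_join(self, device_id):
        """A device asks to join; refuse if another request is already pending."""
        if self.pending is not None:
            return False             # the light stays tied to the first requester
        self.pending = device_id
        return True                  # the AP flashes its light now

    def owner_confirms(self):
        """Owner pushes the button: admit the single pending device."""
        if self.pending is None:
            return None
        key = secrets.token_hex(16)  # fresh per-device session key
        self.authorized[self.pending] = key
        self.pending = None
        return key
```

The point of the one-pending-request rule is that the owner always knows exactly which device the button press will admit.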
Oddly, as I said at the start, the choices we make affect the value of the choices. Because NATs and firewalls provide some security, people (and vendors) allow the computers behind these NATs and firewalls to be insecure in a way they never would or could if the NATs and firewalls weren’t there. This in turn makes the NATs and firewalls worthwhile. And yes, random attacks from outside will always be more probable than attacks from the inside from compromised machines, and they will be more probable than attacks from neighbours. So it’s not as simple as we like. However, computers are going to roam more and more. My PDA has wifi and roams. It also has EVDO and some day those networks will open and need more endpoint security.
So is Bruce right or wrong? Both. The real world risk of what he’s doing isn’t great. It’s not zero, either. The real question is whether the UI penalties of an encrypted network are worse than the risk. And that decision varies from person to person. Better UI and protocol design could mostly eliminate the tradeoff, which is the real lesson.
Submitted by brad on Tue, 2007-12-18 13:04.
This week, like many, I have gotten a bunch of invites to join people’s trust networks on the people-search/social networking site called “Spock.” Normally I have started to ignore most new invites from social networking services. There are far too many, and I can’t possibly maintain accounts on them all, so a new site will have to get very, very, very compelling before I will join it.
I’m waiting for the social networking sites to figure out how to interoperate in a meaningful way, so that I can join just one, and befriend people on others, and use apps that work over both. The new Google offering is a step in that direction but is mostly about making apps portable over networks.
However, the volume of mail from Spock was much higher than for a typical new network. One blogger, Benson, identified the reason, suggesting the site was designed by the evil Spock from “Mirror, Mirror” (Star Trek). The trick is that the site has already spidered other social networking sites and web sites to build profiles on people, and thus declares that almost everybody in your address book “already has a profile.” This is convincing friends to authorize the semi-spam. And Wired News has discovered something even nastier about this spidering.
However, I see a deeper problem, even without these flaws in Spock’s system. We have to consider just how much we want to allow applications to “mail everybody in your address book.” This started with Plaxo and Goodcontacts, which wanted to be address book managers, and now has moved into social networking tools.
The problem is I have 1,000 or more people in my address book. If the average person engages in “mail everybody in my address book” once a year, I will get on average 3 such mails a day, and so will most others.
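A quick back-of-the-envelope check of that arithmetic, using the numbers assumed above:

```python
# With 1,000 contacts, each of whom does one "mail everybody in my
# address book" blast per year, the daily volume you receive is:
contacts = 1000
blasts_per_contact_per_year = 1
mails_per_day = contacts * blasts_per_contact_per_year / 365
print(round(mails_per_day, 1))  # roughly 2.7 per day
```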
Facebook actually clued into that and forbids applications from mailing solicitations to everybody in your facebook profile. You are limited to a modest number per day. Even with this, it didn’t stop Zombie invitations from getting pretty annoying to people.
E-mail viruses, of course, also spread by mailing everybody in your address book, to the extent that email programs had to move to make that a more guarded operation, and antivirus programs had to detect it.
Now mailing most of your address book isn’t spam (even with commercial) because you know the people. Many of us mail a subset of it to announce parties or major events in our lives, or to send end of year letters. But we do need to generate a different ethic over mail to your whole list that is triggered by a 3rd party web site or application. With so many apps wanting to “market like a virus” this just doesn’t scale, and our boxes will become full of this spam-from-friends. (A bit like the way pyramid schemes also encourage friend spam.) It needs to be clear that this is not something apps should do, and not something our friends should let apps do without a lot of consideration.
Note: If you are on Spock, and you agree they went too far, you should delete your profile. Only by seeing people flee will they figure out they did wrong. Or, at the very least, change your profile to a stub that says you find Spock’s privacy practices unacceptable and you ask people not to network with you on it.
Submitted by brad on Mon, 2007-12-17 01:32.
Update: Harry Reid has delayed the bill until 2008. Let’s hope we can keep the immunity out when it returns again next year. Let your senators know.
Usually, when you start a legal action, you consider the merits and go ahead when you have a good case. If your case is just, you should win.
You don’t usually expect your case to cause the President to personally lobby congress to grant a retroactive immunity to the parties who broke the law. You don’t usually expect to have them try to toss out your case by having an act of congress grant amnesty to those you are suing.
But this could happen tomorrow, in our battle against AT&T for letting the NSA wiretap without warrants. The House passed a bill without the amnesty the President wanted, and the Senate had two bills, but right now they’ve picked the bad one, with the amnesty, and powerful forces are pushing to make it go through quickly, and then add the amnesty to the House bill.
Senator Chris Dodd is going to show some great spine tomorrow and try to filibuster the bill and trigger debate. However, pro-amnesty forces are gathering the 60 senate votes needed to shut down the bill and grant amnesty. Your senator is probably among them. One of my senators, Dianne Feinstein, is among the worst. But it’s not too late to call your own senator and tell them not to engage in this travesty of justice.
In Star Wars: The Phantom Menace, Darth Sidious, a.k.a. Emperor Palpatine, tells his puppet trade federation to invade Naboo.
“But my lord, is that legal?” asks the trader.
“I will make it legal” says Lord Sidious.
That’s the precedent they are setting, as I’ve written before. Do what the President says, ignore checks and balances, because he can make it legal, retroactively. It’s a sad day for the rule of law.
Do me a favour and call your senator and let them know what you think about this issue. Let them know their constituents will remember this action, and see if you can turn the tide.
Submitted by brad on Sun, 2007-12-02 02:10.
All over the net, a huge number of sites offer you the option of E-mailing you your password if you have forgotten it. While this seems to make sense, it is actually a dreadful security policy, and if you see it, you should complain and point them to this article or others to get them to stop. As an alternate, they should at most offer to E-mail you a new, randomly chosen temporary password, which you can use to log in and set a more memorable password.
If a site can mail you your password, it means they are keeping a copy of it. They should not be doing that. First of all, almost everybody re-uses passwords at different sites. That means if one site has a security breach — as Convio did this week for a wide variety of sites that are its clients — your password will be stolen, and it can then be used on all the other sites you use it at. (This is a good reason to always use more protected, less duplicated passwords on sites where actual damage can be done or money can be spent, like banks, eBay, paypal etc.)
Instead, they should keep a “hash” of your password. A hash is a one way function. Given the plain password, they can hash it, and store the result, but you can’t get the plain password back from the hash. So you can check to see if a password that was typed matches the password without storing what the password is. This is actually a very easy thing to do in most systems, and its main downside is the fact that they can no longer e-mail you your password. They can, however, set it to something random and mail you that. That’s a touch more work in the rare event of a lost password, but worth the trouble.
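To illustrate the scheme, here is a minimal sketch in Python using only standard-library pieces. It is illustrative, not production advice: real sites should use a slow, dedicated password hash (bcrypt, scrypt or Argon2); PBKDF2 stands in for one here, and the function names are my own.

```python
# Store a salted hash of the password, never the password itself.
import hashlib, hmac, os, secrets

def hash_password(password, salt=None):
    """Hash a password with a random salt; store (salt, digest), discard the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, stored):
    """Re-hash the typed password and compare, without ever storing the plain text."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)   # constant-time comparison

def temporary_password():
    """For 'forgot my password': mail a fresh random value, never the original
    (which the site no longer has in any recoverable form)."""
    return secrets.token_urlsafe(12)
```

With this in place, the site can verify logins but literally cannot e-mail you your password, which is the point.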
There is, oddly, one minor downside to hashed passwords. With hashed passwords, you must provide the site your real password, and they can then test it and forget it. You must trust them to forget it. The real password, however, is sent over the internet, and if you don’t use an encrypted channel, like SSL/TLS/https, it could be intercepted by people tapping the line. Some password systems (including the less commonly used HTTP Digest authentication) have the browser hash the password (in a special way that is different every time) and send the hash to log in. In this case, the real password is not sent, and can’t be sniffed, but it must be in storage at the remote site. However, if you use an encrypted channel (https), there is no worry about the password going over the internet, and so there’s no reason not to do it that way.
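The challenge-response idea can be sketched as follows. This is a simplified illustration of the general technique, not the actual HTTP Digest algorithm, and the function names are invented:

```python
# The server sends a fresh random nonce; the client sends a keyed hash
# over it. The plain password never crosses the wire, and a sniffed
# response is useless for the next login (new nonce) -- but the server
# must keep the secret to verify against.
import hashlib, hmac, secrets

def make_challenge():
    return secrets.token_hex(16)     # fresh nonce, different every login

def client_response(password, nonce):
    return hmac.new(password.encode(), nonce.encode(), hashlib.sha256).hexdigest()

def server_verify(stored_secret, nonce, response):
    expected = hmac.new(stored_secret.encode(), nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```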
There is a better way to do all of this. With digital signatures, you can prove that you’re you using a secret private key only you know. Nobody else ever gets this key, and nobody can figure it out by watching the communications you send. While this technology has been around for some time, and is in fact implemented in most browsers (though far from perfectly), it is not a common way to authenticate to web sites at all.
However, next time a site offers to E-mail your password, point them to the Convio data theft and to this page and ask them to get their act together.
Submitted by brad on Thu, 2007-11-22 15:39.
The hot new thing of the web of late has been facebook apps. I must admit Facebook itself has been great for me at finding old friends because for unknown reasons, almost 20% of Canada is on Facebook compared to 5% of the USA. Facebook lets 3rd parties write apps, which users can “install” and after installing them, the apps get access to the user’s data (friend list) and can insert items into the user’s “feed” (which all their friends see) and sometimes send E-mails to friends.
I haven’t examined the API enough to understand the reason, but there are many Facebook apps that are very, very annoying in how they operate. Most won’t let you get anything from them unless you “install” them and give them access to a lot of your data. (There are a few that let you have more limited temporary use through a login.)
This is annoying because you constantly get data in feeds (or emails) which is just a teaser. “Fred Smith wrote something on your pixie wall.” You have to follow the link, and find you must install the application before it will show you what the other person wrote. It could easily have shown you the text in the feed or email, but it doesn’t want to do that, it wants to spread virally.
But this is far beyond viral. Viral apps usually work because friends recommend them. These apps push to install just because a friend used the app in reference to you.
Outside of facebook there was a different dynamic. Usually if you used a social app which emailed your friends, your friends could do their part just on the web site, without creating an account, or providing personal data, or “installing” something. (The install on facebook isn’t like a PC software install, but given the data it gets access to, it is pretty insidious, a form of super-spyware.)
There were a few apps which required your contacts to create accounts and enter data. They got a lot of pushback, and this largely stopped. Most of the apps certainly encouraged your friends to create accounts, but few forced it or sent a message that was useless unless they did create one. (Not counting deliberate invitations to join a system which obviously work this way, and which you tend to send one-by-one, or so most companies learned.) As much as I hate evite they still let the people you invite RSVP without doing any account creation.
In facebook it’s the reverse. One app I tried and hated asked questions. It ended up putting text into the feed and emails of the form, “Joe has asked a question, click here to see what it is” and “Mary has answered Joe’s question, click here to read the answer” instead of putting these short text questions and answers right into the email. And answering a question required installing the app.
I see a few things that have driven it this way. First of all, when you install a Facebook app, it informs all your friends in the feed. That’s publicity for the app. And they get to increase their total number of installed users, which gives them more visibility when people look to see what’s popular. If the app let your friends get data without making them join, it would not have so many users.
Apps are not forced to do this. A number of good apps will let people see the data, even put it in feeds, without you having to “install” and thus give up all your privacy to the app. What I wish is that more of us had pushed back against the bad ones. Frankly, even if you don’t care about privacy, this approach results in lots of spam which is trying to get you to install apps. Everybody thinks having an app with lots of users is going to mean bucks down the road, with Facebook valued as highly as it is.
A lot of it is plain old spam, but we’re tolerating it because it’s on Facebook. (Which itself is no champion. They have an extremely annoying email system which sends you an e-mail saying, “You got a message on facebook, click to read it” rather than just including the text of the message. To counter this, there is an “E-mail me instead” application which tries to make it easier for people to use real E-mail. And I recently saw one friend add the text “Use E-mail not facebook message” to her profile picture.)
Submitted by brad on Mon, 2007-10-22 16:41.
I only post a modest number of EFF news items here, because I know that if you want to see them all, you should be reading some of the EFF blogs, such as Deeplinks, the action alerts, EFFector or others.
However, something remarkable is happening. As you may know, we filed suit against AT&T because we have evidence they allowed the government to engage in a massive spying program within the US without warrants or other proper legal authority. Special secret rooms were installed in San Francisco and other locations, rooms under the control of the NSA, and massive data pipes with all internet traffic and more were forked and fed into these NSA rooms. We want to get to the bottom of this, and punish the phone companies if they violated the very explicit laws which were set up after watergate to stop the President from doing this exact sort of thing. Congress told the phone companies that Nixon showed us we can’t trust the President all the time, and so they have a duty to protect their customers as well, even if the President tells them not to.
But as our lawsuit has progressed, forces are pushing Congress to not just enable this spying, but to grant a retroactive amnesty on the phone companies that violated the law. In one sense I am glad our lawsuit has scared them so much — you know you are on to something when they try to get congress to pass retroactive laws to stop your lawsuits — but the enormity of such action boggles my mind.
The phone companies and White House are pushing for a “get out of jail free” card for their past activity. Whatever you think about the need for such massive surveillance, retroactive immunities are something else entirely. Allowing such immunities will let the President tell people, “Don’t worry whether this is illegal or not. As you can see, I can make it legal.” Congress might give him the proof he needs to back up such claims. It doesn’t matter that he won’t be able to “make it legal” every time he promises it. The fact that he did it this time is still going to get more people to feel at less risk in joining illegal conspiracies. It undermines the rule of law.
The American people need to convince their Senators and House members not to do this. If your rep has already decided they like the surveillance program — even if you have decided you like it — they must realize this get out of jail free card is a horrible idea.
You can use our action alert system to find your rep and their phone numbers, and give them a call. Calls matter the most.
See if your reps are on the right committees and talk to them about it.
The house was ready to pass a bill without immunity and pro-immunity forces scuttled it and are pushing to get it added.
Call House Members
The Senate Intelligence Committee passed a bill with telco immunity in it. The Judiciary Committee is now looking at it.
Call Senate Members
Submitted by brad on Mon, 2007-07-16 20:48.
Earlier I wrote about the ability to find you from a DNA sample by noting it’s a near match with one of your relatives. This is a concern because it means that if relatives of yours enter the DNA databases, voluntarily or otherwise, it effectively means you’re in them too.
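To see why a relative’s profile effectively puts you in the database too, here is a toy sketch of partial matching. The loci, allele values and counts are invented for illustration and bear no relation to real forensic STR panels or thresholds:

```python
# At each marker (locus) a person has two alleles. An exact match shares
# essentially all of them; a sibling shares many; a stranger shares few.
# A "near match" search flags profiles with unusually high overlap.

def shared_alleles(profile_a, profile_b):
    """Count alleles in common across all loci (each person has two per locus)."""
    total = 0
    for locus, alleles_a in profile_a.items():
        alleles_b = profile_b.get(locus, ())
        total += len(set(alleles_a) & set(alleles_b))
    return total

suspect  = {"L1": (7, 9),  "L2": (12, 12), "L3": (5, 8)}
sibling  = {"L1": (7, 11), "L2": (12, 14), "L3": (5, 8)}
stranger = {"L1": (6, 10), "L2": (13, 15), "L3": (4, 9)}

print(shared_alleles(suspect, sibling))   # high overlap flags a likely relative
print(shared_alleles(suspect, stranger))  # little or no overlap
```

The policy question in the post is about exactly this: the sibling never gave a sample, yet the high-overlap hit points straight at his family.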
On a recent 60 Minutes on the topic, they told the story of Darryl Hunt, who had been jailed for rape and murder. It wasn’t clear to me why, but this was done even though his blood type did not match the rapist’s DNA. Even after DNA testing improved and the non-match was better confirmed, he was still kept in jail, because he was believed to be the murderer, if not the rapist, i.e., an accomplice.
Later, they did a DNA search on the rapist’s DNA and found a near match: his brother, who had been entered into the database due to a minor parole violation. So they interviewed the brothers of the near-match and found Willard Brown, who turned out to be the rapist. Once they could see that Hunt was not an associate of the rapist, he was freed after 19 years of false imprisonment.
The piece also told the story of another rapist, who had raped scores of women and stolen their shoes as souvenirs, but had become a cold case. He was caught because his sister was in a DNA database due to a DUI.
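The near-match searching in both stories can be sketched in code. The following is a deliberately simplified illustration, not how CODIS or any real forensic system actually works: the profiles, loci names, allele values, and the kinship threshold are all hypothetical, and homozygous loci are glossed over by using set intersection.

```python
# Illustrative sketch of familial "near match" DNA searching over
# simplified STR profiles (locus name -> pair of alleles).
# All names, values, and thresholds here are hypothetical.

def shared_alleles(profile_a, profile_b):
    """Count alleles shared between two profiles across common loci."""
    total = 0
    for locus, alleles_a in profile_a.items():
        alleles_b = profile_b.get(locus)
        if alleles_b is None:
            continue
        # Each person carries two alleles per locus; count the overlap.
        # (Set intersection undercounts homozygous loci -- a deliberate
        # simplification for this sketch.)
        total += len(set(alleles_a) & set(alleles_b))
    return total

def classify(profile_a, profile_b, loci_count):
    """Classify a database hit as exact, possible relative, or unrelated."""
    shared = shared_alleles(profile_a, profile_b)
    if shared == 2 * loci_count:   # every allele matches
        return "exact match"
    if shared >= loci_count:       # hypothetical kinship threshold
        return "possible relative"
    return "unrelated"

# Toy example: siblings typically share many, but not all, alleles.
crime_scene = {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24)}
brother     = {"D3S1358": (15, 16), "vWA": (16, 18), "FGA": (21, 22)}
stranger    = {"D3S1358": (12, 13), "vWA": (14, 15), "FGA": (19, 20)}

print(classify(crime_scene, brother, 3))   # "possible relative"
print(classify(crime_scene, stranger, 3))  # "unrelated"
```

The point of the sketch is only that a relative's profile scores well above random but below an exact hit, which is exactly the signal that led investigators to the brothers in both cases.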
Now much of our privacy law is based on the principle that your own private data should not be seized and used against you without probable cause. It's easy to answer the case of the shoe rapist. There is a wide variety of superior surveillance tools we could allow the police to use, and they would help them catch criminals, and in many cases thus prevent those criminals from committing future crimes. But we deliberately don't give the police those tools, because we don't want a world where the government has such immense surveillance power. And a large part of that goal is protecting the innocent. Our rules that let criminals walk free when police use improper evidence gathering and surveillance to catch them exist in part to keep the police from using those powers on the innocent.
But the innocent man who was freed presents a more interesting challenge. Can we help him, without enabling 1984? In considering this question, I asked, “What if we allowed DNA near matches to be used only when they would prove innocence?” Of course, in Hunt’s case, and many others, the innocence is proven by finding the real guilty party.
So what if, in such cases, it was ruled that while they might find the guilty party, they could not prosecute him or her? And further, that any other evidence learned as a result was considered fruit of the poisonous tree? That's a pretty tough rule to follow, since once the police know who the real perpetrator is, this will inspire them to find other sorts of evidence that they would not have thought to look for before, and they will find ways to argue that these were discovered independently. It might be necessary to impose a stronger standard, and simply give immunity to the real perpetrator if sufficient time has passed since the crime to declare the case cold.
Setting out the right doctrine would be difficult. But if it frees innocents, might it be worth it?