Privacy

Police robots everywhere?

It is no coincidence that two friends of mine have recently founded companies to build telepresence robots. These are easy-to-drive, remote-controlled robots with a camera and screen at head height. You can inhabit the robot, drive it around a flat area and talk to people by videoconferencing. You can join meetings, visit people or inspect a factory. Companies building these robots, initially at high prices, intend to sell them both to executives who want to tour distant offices remotely and to companies who want to give cheaper remote employees a more physical presence back at HQ.

There are also a few super-cheap telepresence robots, such as the Spykee, which runs Skype video conferencing and can be had for as low as $150. It’s not very good: the camera sits very low and there is no screen, but it shows just how cheap such a product can get.

“Anybots” QA telepresence robot

When they get down to a price like that, it seems inevitable to me that we will see an emergency services robot on every block, primarily for use by the police. When there is a police, fire or ambulance call to an address, an officer could immediately connect to the robot on that block and drive it to the scene, to be telepresent. The robot would live in a small, powered protective closet, perhaps paid for by the city but more likely donated by some neighbour on the block who wants the fastest possible emergency response. Called into action, the robot’s garage door would open and the robot would drive out, and probably be at the location of the emergency within 60 to 120 seconds, depending on how densely they are placed. In the meantime actual first responders might also be on the way.

What could such a robot do?  read more »

Towards a more secure web, and better TLS

Today an interesting paper (written with the assistance of the EFF) was released. The authors have found evidence that governments are compromising trusted “certificate authorities” by issuing warrants to them, compelling them to create a false certificate for a site whose encrypted traffic they want to snoop on.

That’s just one of the many ways in which web traffic is highly insecure. The biggest reason, though, is that the vast majority of all web traffic takes place “in the clear” with no encryption at all. This happens because SSL/TLS, the “https” system, is hard to set up, hard to use, considered expensive and subject to many false-alarm warnings. The tendency of security professionals to deprecate anything but perfect security often leaves us with no security at all. My philosophy is different. To paraphrase Einstein:

Ordinary traffic should be made as secure as can be made easy to use, but no more secure

In this vein, I have prepared a new article on how to make the web much more secure, and it makes sense to release it today in light of the newly published threat. My approach requires new browser behaviour and some optional new practices for sites, and calls for the following:

  • Make TLS more lightweight so that nobody is bothered by the cost of it
  • Automatic provisioning (Zero UI) for self-signed certificates for domains and IPs.
  • A different meaning for the lock icon: Strong (Locked), Ordinary (no icon) and in-the-clear (unlocked).
  • A new philosophy of browser warnings with a focus on real threats and on changes in security, rather than static states deemed insecure.
  • A means for sites to provide a file advising browsers about which warnings make sense at that site.

There is one goal in mind here: the web must become encrypted by default, with no effort on the part of site operators or users, and the false-positive warnings that fire so often that they make security poor and hard to use must be eliminated.
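To make the second bullet concrete, here is a rough sketch of what zero-UI provisioning could look like: on first startup, a web server generates its own key and self-signed certificate with no operator involvement. This is only an illustration using Python’s cryptography package; the hostname and lifetime are placeholders, and nothing here is specific to any particular server software.

```python
# Sketch only: zero-UI provisioning of a self-signed certificate the first
# time a server starts. Hostname and lifetime below are placeholders.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def provision_self_signed(hostname):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)               # self-signed: subject == issuer
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )
    key_pem = key.private_bytes(serialization.Encoding.PEM,
                                serialization.PrivateFormat.PKCS8,
                                serialization.NoEncryption())
    cert_pem = cert.public_bytes(serialization.Encoding.PEM)
    return key_pem, cert_pem

key_pem, cert_pem = provision_self_signed("www.example.com")
```

Under the proposal, a browser connecting to a site provisioned this way would simply treat it as “ordinary” security (no icon) rather than throwing a scary warning.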

If you have interest in browser design and security policy I welcome your comments on A new way to secure the web.

The privacy risks of genetic genealogy (23andMe part 2)

Last week, I wrote about interesting experiences finding cousins who were already friends via genetic testing. 23andMe’s new “Relative Finder” product identifies the other people in their database of about 35,000 customers to whom you are related, and guesses how close the relationship is. Surprisingly, 2 of the 4 relatives I made contact with were already friends of mine, but not known to be relatives.

Many people are very excited about the potential for services like Relative Finder to take the lid off the field of genealogy. Some people care deeply about genealogy (most notably the Mormons) and others wonder what the fuss is. Genetic genealogy offers the potential to finally link all the family trees built by the enthusiasts and to provably test already known or suspected relationships. As such, the big genealogy web sites are all getting involved, and the Family Tree DNA company, which previously did mostly worthless haplogroup studies (and more useful haplotype scans), is opening up a paired-chromosome scan service for $250, half the price of 23andMe’s top-end scan. (There is some genealogical value to the deeper clade Y studies FTDNA does, but the mitochondrial and 12-marker Y studies show far less than people believe about living relatives. I have a followup post about haplogroups and haplotypes in genealogy.) Note that in March 2010, 23andMe is offering a scan for just $199.

The cost of this is going to keep decreasing and soon will be sub-$100. At the same time, the cost of full sequencing is falling by a factor of 10 every year (!) and many suspect it may reach the $100 price point within just a few years. (Genechip testing only finds the SNPs, while full sequencing reads every letter of your genome, and perhaps in the future your epigenome.)

Discovery of relatives through genetics has one big surprising twist to it. You are participating in it whether you sign up or not. That’s because your relatives may already be participating, and as it gets cheaper, they almost certainly will be. You might be the last person on the planet to accept sequencing but it won’t matter.  read more »

Terror and security

One of the world’s favourite (and sometimes least favourite) topics is the issue of terrorism and security. On one side, there are those who feel the risk of terrorism justifies significant sacrifices of money, convenience and civil rights to provide enough security to counter it. That side includes both those who honestly come by that opinion, and those who simply want more security and see terrorism as the excuse to get it.

On the other side, critics point out a number of counter arguments, most of them with merit, including:

  • Much of what is done in the name of security doesn’t actually enhance it, it just gives the appearance of doing so, and the appearance of security is what the public actually craves. This has been called “Security Theatre” by Bruce Schneier, who is a friend and advisor to the E.F.F.
  • We often “fight the previous war,” securing against the tactics of the most recent attack. The terrorists have already moved on to planning something else. They did planes, then trains, then subways, then buses, then nightclubs.
  • Terrorists will attack where the target is weakest. Securing something just makes them attack something else. This has indeed been the case many times. Since everything can’t be secured, most of our efforts are futile and expensive. If we do manage to secure everything they will attack the crowded lines at security.
  • Terrorists are not out to kill random people they don’t know. Rather, that is their tool to reach their real goal: sowing terror (for political, religious or personal goals.) When we react with fear — particularly public fear — to their actions, this is what they want, and indeed what they plan to achieve. Many of our reactions to them are just what they planned to happen.
  • Profiling and identity checks seem smart at first, but careful analysis shows that they just give a freer pass to anybody the terrorists can recruit whose name is not yet on a list, making their job easier.
  • The hard reality is that, frightening as terrorism is, in the grand scheme we are far more likely to face harm and death from other factors that we spend much less of our resources fighting. We could save far more people by applying our resources in other ways. This is spelled out fairly well in this blog post.

Now Bruce’s blog, which I link to above, is a good resource for material on the don’t-panic viewpoint, and in fact he is sometimes consulted by the TSA and I suspect they read his blog, and even understand it. So why do we get such inane security efforts? Why are we willing to ruin ourselves, and make air travel such a burden, and strip ourselves of civil rights?

There is a mistake that both sides make, I think. The goal of counter-terrorism is not to stop the terrorists from attacking and killing people, not directly. The goal of counter-terrorism is to stop the terrorists from scaring people. Of course, killing people is frightening, so it is no wonder we conflate the two approaches.  read more »

The odds of knowing your cousins: 23andme Part 1

Bizarrely, Jonathan Zittrain turns out to be my cousin — which is odd because I have known him for some time and he is also very active in the online civil rights world. How we came to learn this will be the first of my postings on the future of DNA sequencing and the company 23andMe.

(Follow the genetics for part two and other articles.)

23andMe is one of a small crop of personal genomics companies. For a cash fee (ranging from $400 to $1000, but dropping with regularity) you get a kit to send in a DNA sample. They can’t sequence your genome for that amount today, but they can read around 600,000 “single-nucleotide polymorphisms” (SNPs), which are single-letter locations in the genome that are known to vary among different people and are the subject of various research on disease. 23andMe began with the hope of letting their customers know how their own DNA predicts their risk for a variety of diseases and traits. The result is a collection of information — some of which will just make you worry (or breathe more easily) and some of which is actually useful. However, the company’s second-order goal is the real money-maker. They hope to get the sequenced people to fill out surveys and participate in studies. For example, the more people who fill out their weight in surveys, the more likely they are to notice, “Hey, all the fat people have this SNP, and the thin people have that SNP, maybe we’ve found something.”

However, recently they added a new feature called “Relative Finder.” With Relative Finder, they will compare your DNA with that of all the other customers, and see if they can find long identical stretches which are very likely to have come from a common ancestor. The more of this they find, the more closely related two people are. All of us are related, often more closely than we think, but this technique, in theory, can identify closer relatives like 1st through 4th cousins. (It gets a bit noisy after this.)
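The core of the matching is conceptually simple: scan two people’s SNP calls for unusually long runs where they are compatible (share at least one allele) at every marker. Here is a toy sketch of that idea; real matching works on phased haplotypes, measures segments in centimorgans and tolerates genotyping errors, all of which this ignores.

```python
# Toy sketch of relative finding: look for long runs of SNPs where two
# genotypes are compatible (share at least one allele). Real tools use
# phased haplotypes, centimorgan lengths, and error tolerance.
def compatible(g1, g2):
    """Genotypes like 'AG'; True if they share at least one allele."""
    return bool(set(g1) & set(g2))

def shared_segments(genome1, genome2, min_len=500):
    """Return (start, end) index ranges of long compatible runs."""
    segments, start = [], None
    for i, (g1, g2) in enumerate(zip(genome1, genome2)):
        if compatible(g1, g2):
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(genome1) - start >= min_len:
        segments.append((start, len(genome1)))
    return segments
```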

Relative Finder shows you a display listing all the people you are related to in their database, and for some people, it turns out to be a lot. You don’t see the name of the person but you can send them an E-mail, and if they agree and respond, you can talk, or even compare your genomes to see where you have matching DNA.

For me it showed one third cousin, and about a dozen 4th cousins. Many people don’t get many relatives that close. A third cousin, if you were wondering, is somebody who shares a great-great-grandparent with you, or more typically a pair of them. It means that your grandparents and their grandparents were “1st” cousins (ordinary cousins.) Most people don’t have much contact with 3rd cousins or care much to. It’s not a very close relationship.
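To put numbers on how close these relationships are (and why the matching gets noisy past 4th cousins): the expected fraction of DNA shared drops by roughly a factor of four with each additional degree of cousinship. A quick back-of-envelope, using average figures only (actual sharing varies widely):

```python
# Expected fraction of autosomal DNA shared with an n-th cousin, on average.
# n-th cousins are separated by 2*(n+1) meioses from the shared ancestral
# couple; the leading 2 accounts for sharing a couple rather than one person.
def expected_sharing(nth_cousin):
    return 2 * 0.5 ** (2 * (nth_cousin + 1))

for n in range(1, 5):
    print(f"cousin degree {n}: ~{expected_sharing(n):.2%} shared")
# degree 1: ~12.50%, degree 2: ~3.12%, degree 3: ~0.78%, degree 4: ~0.20%
```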

However, I was greatly shocked to see the response that this mystery cousin was Jonathan Zittrain. Jonathan and I are not close friends, more appropriately we might be called friendly colleagues in the cyberlaw field, he being a founder of the Berkman Center and I being at the EFF. But we had seen one another a few times in the prior month, and both lectured recently at the new Singularity University, so we are not distant acquaintances either. Still, it was rather shocking to see this result. I was curious to try to figure out what the odds of it are.  read more »

10 year term as EFF chairman winds down, EFF 20th anniversary tonight

In early 2000, after a tumultuous period in the EFF’s history, with the staff down to just a handful, I was elected chair of the Electronic Frontier Foundation. I had been on the board for just a few years, but had been close to the organization since it was founded, including participating with it as a plaintiff in the landmark Supreme Court case which struck down the Communications Decency Act in 1996.

Having now served 10 years as chairman, I feel it is time to rotate out, and I am happy to report the election of John Buckman, founder of Magnatune and Bookmooch (among other ventures), as our new chair. As a part-time resident of Europe, John will, like me, offer an international perspective to the EFF’s efforts. Pam Samuelson, a law professor of stunning reputation and credentials, is the vice-chair for the coming 5-year term, replacing John Perry Barlow.

I would love to claim credit for the EFF’s tremendous growth and success during my tenure, but the truth is that our active and star-studded board is a board of equals. We all take an active role in setting policy and attempting to guide the organization in its mission to protect important freedoms in the online world. While it would shock most of my previous employees, my board management has been very laissez-faire. I and the other board members try to let our great team do their stuff.

After I became chairman, one of the best things we on the board did was to re-recruit Shari Steele, our former legal director, to become the new executive director. Shari had been with the EFF for many years but had left to work on a new venture. We brought her back and it’s been positive ever since. We also recruited Cindy Cohn to be our legal director. Cindy had a long history of friendship with the organization, having worked tirelessly with our help on the fight to stop export controls on encryption. With these two appointments, I and my fellow board members set the course for an incredible decade. In spite of a chaotic global economy, during this period our fundraising, budget and staff size have more than tripled. (That may seem minor for a dot-com but it’s great news for a non-profit.) We’ve boosted membership and membership donations, increased funding from foundations, and created an endowment to assure the EFF’s future.

The EFF is now 20, so I’ve been privileged to chair it for half of its lifetime. In that period we’ve seen dramatic victories for free speech, privacy and freedom to program. We’ve stopped e-voting abuse and rootkits in your music CDs. We’ve protected bloggers as journalists and preserved anonymous speech online. We’ve stopped encryption software from being controlled like a munition and had so many other triumphs, big and small. We’ve also seen an expanded technical and activism program, as our technologists have led the way in unveiling things like the secret dots generated by colour laser printers that track your printouts back to you, and network interference with filesharing by cable ISPs.

We’ve also had our failures, but even those have spoken loudly about the quality of our team. When we took Grokster/Streamcast to the supreme court, our client lost, but the court laid down a fairly narrow standard that allows software developers building new generations of publishing products to know how to stay clear of liability. Our cases against the White House’s warrantless wiretapping program have hit major hurdles, one of which was an act of congress created specifically to nullify our attempts to have a court examine this program — granting a retroactive immunity to the phone companies that did it. Bad as that was, I figure if they have to get an act of congress to stop you, you know you’ve hit a nerve.

We’ve also hit many nerves with our great FOIA team that has uncovered all sorts of attacks on your rights, and continues to do so, and our team of activists and our new international team are working hard to promote our doctrine of free speech and freedom to develop technology around the world. With all our team does, many are shocked to find it is only around 30 people. Still, we could do much more and your donations are still what makes it all happen. I hope that if you believe in the duty to protect fundamental freedoms online, you will work towards that end directly, or consider outsourcing that work with a donation to us.

I am not leaving the EFF — far from it. I will continue to be an active boardmember. In addition, I will begin to re-explore commercial ventures, seek new opportunities, and continue on my quest to become a leading evangelist for one of the world’s most exciting new technologies — robotic transportation. At my robocars site you can see my beginnings of a book on the subject, and why it may have the largest positive effect on the world that computer technology delivers in the medium term. Of course with my EFF hat on you will find growing sections on the freedom and privacy issues of the technology.

During my tenure, I have served with a tremendous group of fellow board members, as you can see from the biographies at the EFF board page. I will continue to work with them to protect your rights as the world becomes digital, and I hope you will all join with me in supporting the EFF with your thoughts and your dollars.


We’ll mark the transition tonight, Feb 10 at the special EFF 20th birthday bash at DNA lounge. This fundraiser can be attended with a requested $30 donation, and there is also a special VIP event earlier where you can mingle more intimately with the special guests, such as Mythbuster Adam Savage. We have quite a program planned.

Border Travel in an underpants bomber world

I just landed on a flight from Toronto to San Francisco. If you were inside the USA you may not have heard about the various crazy rules applied to travel to the USA, or at least not experienced them. While we were away the rules changed every day, and perhaps every hour.

Toronto was hit the hardest because it has the most flights to the USA of any airport in the world (with a few other Canadian airports not far behind.) Due to the busy border, you clear U.S. customs and immigration through their satellite office in Toronto, so your plane lands you at domestic gates in the USA, making connections far easier.

The USA started insisting on intimate pat-downs on all passengers and complete hand screening of all carry-ons. For a while there was even a regulation that passengers would have to sit in their seats with nothing on their laps (not blankets, not books, not computers) for the last hour of the flight. That got reverted to “pilot’s discretion” and in our case there was no talk of this.

The heavy search requirements brought Toronto’s heavy to-USA traffic to a standstill. Even with extra Mounties pitching in, there was no way to get all those people through the terminal, so CATSA brought in a near-ban on carry-ons. You could only carry on items from a short list. Notable things not on the list (i.e. banned) included books, kids’ toys, lenses and various items people bring on not because they need them in flight, but because they are essential to their trip, or are fragile.

After a few days of reduced carry-ons, they got the processing down, as long as you got there 3 hours in advance, sometimes more. That is a real burden on 1-hour flights to New York, Boston or Washington. It was still a burden on my 5-hour flight to SFO, since that was at 7am, meaning getting to the airport at 4am (1am Pacific Time, about the time I would normally get to bed.)

The process included the fairly standard x-ray (with agents making various exceptions for people, generally allowing books that could be paged through and even some small knapsacks), with a pat-down only if you set off the alarm. Then, shortly after you started walking down the row of gates, came a second checkpoint. There you got a serious pat-down that might remind you of a massage, and a complete hand inspection of everything in your bags. (I suggest they should let you pay extra for a real massage, which of course also detects anything on your body.) Many checks of ID and boarding pass, and you are on your way.

There are many disturbing things about the reaction to the underpants bomber but a few stand out.

  • It is certain that the TSA and all other major agencies knew about the risk of somebody strapping explosives to their legs and taking them through the magnetometer. So a plan should have been in place long ago about what to do about it, and how to react at the first public incident.
  • In spite of this, the agencies are out running around like chickens with their heads cut off, changing plans every day, with no sign of forethought. Are they just testing the public to see what they will tolerate?
  • Lots of talk of THz scanners to see everybody naked. Is this a way to get those accepted, after people complained?
  • For Toronto, and most of the Canadian airports, a bad guy can quite readily drive just 90 minutes and go to another airport like Buffalo and get no special screening! While the public does not like this extra trek, it’s no burden to the terrorist to do this. Only the innocent are punished.
  • You could still smuggle your stuff inside a laptop, or a body cavity or several other places I noticed.
  • Keep this up and people will stop flying, and they will definitely go to airports like Buffalo.
  • As I have suggested before, appointments for security inspections are one answer to the 3 hour early arrival.
  • For me the worst thing was packing lenses in a checked bag. I had to improvise protection for them. When such a rule is put in place by surprise over Christmas, you have to expect that a lot of people brought stuff they needed to carry on for the way back, even if they would not plan a new trip today expecting to carry on their fragiles.

With some irony, all this came after a lunch with Peter Watts. If you didn’t hear, Peter was crossing back into Canada at Port Huron/Sarnia and got pulled over for exit inspection leaving the USA. Because he wasn’t a complete little sheep, he reports, he was beaten up by the border patrol and is now charged with assaulting an officer. I really doubt he did those things, but the most disturbing thing is the people who comment on the story saying it’s his fault for not being subservient enough. I understand the reasons for letting police do their jobs, but when you are just inspecting people driving out of the country, with no special reason to believe they are criminals or worthy of above-average suspicion or anything but the presumption of innocence we are all owed, then there should be standards, and better defined rights for the subject of the inspections. If a person is not a known threat, why should they not get to ask questions about what is being done to them and their vehicle? Yes, one time in many thousands, an actual nasty criminal might do something odd and need to be set upon with force. It’s one of the risks people take doing an armed policing job. It can happen anywhere, any time. But must the people give up their rights and be complete sheep because of it?

Can’t we have a system where different situations suggest different levels of police control? Where the police, while they may have the power to give you orders that you must obey without much chance to question, get in trouble if they abuse that power in a non-hostile situation? Where they have a simple way of explaining that they think the situation has escalated, and a way of declaring it that we are taught in school to understand? So if the cop says, “I’m escalating: get on the ground now,” you have to get on the ground, but the cop has to justify later why he escalated. Simply being a citizen who is mindful of his rights doesn’t seem much grounds for that.

Why facebook wants you to open up your profile

There is some controversy, including a critique from our team at the EFF of Facebook’s new privacy structure, and their new default and suggested policies that push people to expose more of their profile and data to “everyone.”

I understand why Facebook finds this attractive. “Everyone” means search engines like Google, and also totally third-party apps like those that sprang up around Twitter.

On Twitter, I tried to have a “protected” profile, open only to friends, but that’s far from the norm there. And it turns out it doesn’t work particularly well. Because Twitter is mostly exposed to public view, all sorts of tools appeared that treat Twitter more as a microblogging platform than as a way to share short missives with friends. None of these new functions worked on a protected account. With a protected account, you could not even publicly reply to people who did not follow you. Even the Facebook app that imports your tweets to Facebook doesn’t work on protected accounts, though it certainly could.

Worse, many people try to use twitter as a “backchannel” for comments about events like conferences. I think it’s dreadful as a backchannel, and conferences encourage it mostly as a form of spam: when people tweet to one another about the conference, they are also flooding the outside world with constant reminders about the conference. To use the backchannel though, you put in tags and generally this is for the whole world to see, not just your followers. People on twitter want to be seen.

Not so on Facebook and it must be starting to scare them. On Facebook, for all its privacy issues, mainly you are seen by your friends. Well, and all those annoying apps that, just to use them, need to know everything about you. You disclose a lot more to Facebook than you do to Twitter and so it’s scary to see a push to make it more public.

Being public means that search engines will find material, and that’s hugely important commercially, even to a site as successful as Facebook. Most sites in the world are disturbed to learn they get a huge fraction of their traffic from search engines. Facebook is an exception but doesn’t want to be. It wants to get all the traffic it gets now, plus more.

And then there’s the cool 3rd party stuff. Facebook of course has its platform, and that platform has serious privacy issues, but at least Facebook has some control over it, and makes the “apps” (really embedded 3rd party web sites) agree to terms. But you can’t beat the innovation that comes from having less controlled entrepreneurs doing things, and that’s what happens on twitter. Facebook doesn’t want to be left behind.

What’s disturbing about this is the idea that we will see sites starting to feel that abandoning or abusing privacy gives them a competitive edge. We used to always hope that sites would see protecting their users’ privacy as a competitive edge, but the reverse could take place, which would be a disaster.

Is there an answer? It may be to try to build applications in more complex ways that still protect privacy. Though in the end, you can’t do that if search engines are going to spider your secrets in order to do useful things with them; at least not the way search engines work today.

Swap should be encrypted by default

There are a variety of tools that offer encrypted filesystems for the various OSs. None of them are as easy to use as we would like, and none have reached the goal of “Zero User Interface” (ZUI), which is the only thing that leads to successful deployment of encryption (e.g. Skype, SSH and SSL.)

Many of these tools have a risk of failure if you don’t also encrypt your swap/paging space, because your swap file will contain fragments of memory, including the contents of encrypted files and even, in some cases, decryption keys. There is a lot of other confidential data which can end up in swap — web banking passwords and just about anything else.

It’s not too hard to encrypt your swap on Linux, and the ecryptfs tools package includes a tool to set up encrypted swap (the swap itself is encrypted not with ecryptfs but with dm-crypt, the block-device encryptor; the tool just sets it up for you.)

However, I would propose that swap be encrypted by default, even if the user does nothing. When you boot, the system would generate a random key for that session, and use it to encrypt all writes and reads to the swap space. That key of course would never be swapped out, and furthermore, the kernel could even try to move it around in memory to avoid the attacks the EFF recently demonstrated where the RAM of a computer that’s been turned off for a short time is still frequently readable. (In the future, computers will probably come with special small blocks of RAM in which to store keys which are guaranteed — as much as that’s possible — to be wiped in a power failure, and also hard to access.)
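To illustrate the idea (not the actual kernel mechanism, which would live in dm-crypt or the swap code itself), here is a toy user-space sketch: a random key generated at “boot” encrypts every page written to swap and decrypts it on the way back, and is never stored anywhere.

```python
# Toy illustration of per-boot encrypted swap: a random key generated at
# "boot" encrypts pages written out and decrypts pages read back in.
# The real mechanism would be dm-crypt in the kernel; this just shows the idea.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SESSION_KEY = AESGCM.generate_key(bit_length=256)   # never written to disk
aead = AESGCM(SESSION_KEY)

def swap_out(page):
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, page, None)

def swap_in(blob):
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, None)

stored = swap_out(b"secret page contents")
assert swap_in(stored) == b"secret page contents"
```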

The automatic encryption of swap does bring up a couple of issues. First of all, it’s not secure with hibernation, where your computer is suspended to disk. Indeed, to make hibernation work, you would have to save the key at the start of the hibernation file. Hibernation would thus eliminate all security on the data — but this is no worse than the situation today, where all swap is insecure. And many people never hibernate.  read more »

A super-fast web transaction (and Google SPDY)

(Update: I had a formatting error in the original posting, this has been fixed.)

A few weeks ago when I wrote about the non deployment of SSL I touched on an old idea I had to make web transactions vastly more efficient. I recently read about Google’s proposed SPDY protocol which goes in a completely opposite direction, attempting to solve the problem of large numbers of parallel requests to a web server by multiplexing them all in a single streaming protocol that works inside a TCP session.

While calling attention to that, let me outline what I think would be the fastest way to do very simple web transactions. It may be that such simple transactions are no longer common, but it’s worth considering.

Consider a protocol where you want to fetch the contents of a URL like “www.example.com/page.html” and you have not been to that server recently (or ever.) You want only the plain page, you are not yet planning to fetch lots of images and stylesheets and javascript.

Today the way this works is pretty complex:

  1. You do a DNS request for www.example.com via a UDP request to your DNS server. In the pure case this would also mean first asking the root servers where “.com” is, but your DNS server almost surely knows that already. So a UDP request is sent to the “.com” master server.
  2. The “.com” master server returns the address of the DNS server for example.com.
  3. You send a DNS request to the example.com server, asking where “www.example.com” is.
  4. The example.com DNS server sends a UDP response back with the IP address of www.example.com
  5. You open a TCP session to that address. First, you send a “SYN” packet.
  6. The site responds with a SYN/ACK packet.
  7. You respond to the SYN/ACK with an ACK packet. You also send the packet with your HTTP “GET” request for “/page.html.” This is a distinct packet but there is no roundtrip, so this can be viewed as one step. You may also close off your sending with a FIN packet.
  8. The site sends back data with the contents of the page. If the page is short it may come in one packet. If it is long, there may be several packets.
  9. There will also be acknowledgement packets as the multiple data packets arrive in each direction. You will send at least one ACK. The other server will ACK your FIN.
  10. The remote server will close the session with a FIN packet.
  11. You will ACK the FIN packet.

You may not be familiar with all this, but the main thing to understand is that there are a lot of roundtrips going on. If the servers are far away and the time to transmit is long, it can take a long time for all these round trips.

It gets worse when you want to set up a secure, encrypted connection using TLS/SSL. On top of all the TCP, there are additional handshakes for the encryption. For full security, you must encrypt before you send the GET because the contents of the URL name should be kept encrypted.

A simple alternative

Consider a protocol for simple transactions where the DNS server plays a role, and short transactions use UDP. I am going to call this the “Web Transaction Protocol” or WTP. (There is a WAP variant called that but WAP is fading.)

  1. You send, via a UDP packet, not just a DNS request but your full GET request to the DNS server you know about, either for .com or for example.com. You also include an IP and port to which responses to the request can be sent.
  2. The DNS server, which knows where the target machine (or the next-level DNS server) is, forwards the full GET request to that server for you. It also sends back the normal DNS answer to you via UDP, including a flag to say it forwarded the request for you (or that it refused to, which is the default for servers that don’t even know about this.) It is important to note that quite commonly, the DNS server for example.com and the www.example.com web server will be on the same LAN, or even be the same machine, so there is no hop time involved.
  3. The web server, receiving your request, considers the size and complexity of the response. If the response is short and simple, it sends it in one UDP packet (or possibly a few) to your specified address. If no ACK is received in a reasonable time, it resends a few times until it gets one.
  4. When you receive the response, you send an ACK back via UDP. You’re done.

The above transaction would take place incredibly fast compared to the standard approach. If you know the DNS server for example.com, it will usually mean a single packet to that server, and a single packet coming back — one round trip — to get your answer. If you only know the server for .com, it would mean a single packet to the .com server which is forwarded to the example.com server for you. Since the master servers tend to be in the “center” of the network and are multiplied out so there is one near you, this is not much more than a single round trip.  read more »
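To make the round-trip savings concrete, here is a minimal sketch of what the client side of such a WTP fetch might look like. The wire format and port number are invented purely for illustration; a real proposal would need a proper packet format, congestion handling and a security story.

```python
# Minimal sketch of the proposed single-round-trip "WTP" fetch.
# The wire format here is invented purely for illustration: one UDP packet
# carrying the GET plus a reply port, one UDP packet back (possibly from a
# different host than the one we sent to), then a small ACK.
import socket

def wtp_get(dns_server, host, path, timeout=2.0, retries=3):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 0))                      # reply port for the response
    sock.settimeout(timeout)
    reply_port = sock.getsockname()[1]
    request = f"WTP GET {host} {path} REPLY-PORT {reply_port}\n".encode()
    for _ in range(retries):
        sock.sendto(request, (dns_server, 53535))   # port number is invented
        try:
            data, addr = sock.recvfrom(65535)       # response may come from
            sock.sendto(b"WTP ACK\n", addr)         # the web server directly
            return data
        except socket.timeout:
            continue                                # resend a few times
    raise TimeoutError("no WTP response")

# page = wtp_get("198.51.100.1", "www.example.com", "/page.html")
```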

Do you get Twitter? Is a "sampled" medium good or bad?

I just returned from Jeff Pulver’s “140 Characters” conference in L.A. which was about Twitter. I asked many people if they get Twitter — not if they understand how it’s useful, but why it is such a hot item, and whether it deserves to be, with billion dollar valuations and many talking about it as the most important platform.

Some suggested Twitter is not as big as it appears, with a larger churn than expected and some plateau appearing in new users. Others think it is still shooting for the moon.

The first value I found in Twitter was as a broadcast SMS. While I would not text all my friends when I go to a restaurant or a club, having a way so that they will easily know that (and might join me) is valuable. Other services have tried to do things like this, but Twitter is the one that succeeded, in spite of not being aimed at any specific application like this.

This explains the secret of Twitter. By being simple (and forcing brevity) it was able to be universal. By being more universal it could more easily attain critical mass within groups of friends. While an app dedicated to some social or location based application might do it better, it needs to get a critical mass of friends using it to work. Once Twitter got that mass, it had a leg up at being that platform.

At first, people wondered if Twitter’s simplicity (and requirement for brevity) was a bug or a feature. It definitely seems to have worked as a feature. By keeping things short, Twitter makes it less scary to follow people. It’s hard for me to get new subscribers to this blog, because subscribing to the blog means you will see my moderately long posts every day or two, and that’s an investment in reading. To subscribe to somebody’s Twitter feed is no big commitment. Thus people can get a million followers there, when no blog has that. In addition, the brevity makes it a good match for the mobile phone, which is the primary way people use Twitter. (Though usually the smart phone, not the old SMS way.)

And yet it is hard not to be frustrated at Twitter for being so simple. There are so many things people do with Twitter that could be done better by some more specialized or complex tool. Yet it does not happen.

Twitter has made me revise slightly my two axes of social media — serial vs. browsed and reader-friendly vs. writer friendly. Twitter is generally serial, and I would say it is writer-friendly (it is easy to tweet) but not so reader friendly (the volume gets too high.)

However, Twitter, in its latest mode, is something different. It is “sampled.” In normal serial media, you usually consume all of it. You come in to read and the tool shows you all the new items in the stream. Your goal is to read them all, and the publishers tend to expect it. Most Twitter users now follow far too many people to read it all, so the best they can do is sample — they come in at various times of day and find out what their stalkees are up to right then. Of course, other media have also been sampled, including newspapers and message boards, just because people don’t have time, or because they go away for too long to catch up. On Twitter, however, going away for even a couple of hours will give you too many tweets to catch up on.

This makes Twitter an odd choice as a publishing tool. If I publish on this blog, I expect most of my RSS subscribers will see it, even if they check a week later. If I tweet something, only a small fraction of the followers will see it — only if they happen to read shortly after I write it, and sometimes not even then. Perhaps some who follow only a few will see it later, or those who specifically check on my postings. (You can’t. Mine are protected, which turns out to be a mistake on Twitter but there are nasty privacy results from not being protected.)

TV has an unusual history in this regard. In the early days, there were so few stations that many people watched, at one time or another, all the major shows. As TV grew to many channels, it became a sampled medium. You would channel surf, and stop at things that were interesting, and know that most of the stream was going by. When the Tivo arose, TV became a subscription medium, where you identify the programs you like, and you see only those, with perhaps some suggestions thrown in to sample from.

Online media, however, and social media in particular were not intended to be sampled. Sure, everybody would just skip over the high volume of their mailing lists and news feeds when coming back from a vacation, but this was the exception and not the rule.

The question is, will Twitter’s nature as a sampled medium be a bug or a feature? It seems like a bug but so did the simplicity. It makes it easy to get followers, which the narcissists and the PR flacks love, but many of the tweets get missed (unless they get picked up as a meme and re-tweeted) and nobody loves that.

On Protection: It is typical to tweet not just blog-like items but the personal story of your day. Where you went and when. This is fine as a thing to tell friends in the moment, but with a public twitter feed, it’s being recorded forever by many different players. The ephemeral aspects of your life become permanent. But if you do protect your feed, you can’t do a lot of things on twitter. What you write won’t be seen by others who search for hashtags. You can’t reply to people who don’t follow you. You’re an outsider. The only way to solve this would be to make Twitter really proprietary, blocking all the services that are republishing it, analysing it and indexing it. In this case, dedicated applications make more sense. For example, while location based apps need my location, they don’t need to record it for more than a short period. They can safely erase it, and still provide me a good app. They can only do this if they are proprietary, because if they give my location to other tools it is hard to stop them from recording it, and making it all public. There’s no good answer here.

The overengineering and non-deployment of SSL/TLS

I have written before about how overzealous design of cryptographic protocols often results in their non-use. Protocol engineers are trained to be thorough and complete. They rankle at leaving in vulnerabilities, even against the most extreme threats. But the perfect is often the enemy of the good. None of the various protocols to encrypt E-mail have ever reached even a modicum of success in the public space. It’s a very rare VoIP call (other than Skype) that is encrypted.

The two most successful encryption protocols in the public space are SSL/TLS (which provide the HTTPS system among other things) and Skype. At a level below that are some of the VPN applications and SSH.

TLS (the successor to SSL) is very widely deployed but still very rarely used. Only the tiniest fraction of web sessions are encrypted. Many sites don’t support it at all. Some will accept HTTPS but immediately push you back to HTTP. In most cases, sites will have you log in via HTTPS so your password is secure, and then send you back to unencrypted HTTP, where anybody on the wireless network can watch all your traffic. It’s a rare site that lets you conduct your entire series of web interactions entirely encrypted. This site fails in that regard. More common is the use of TLS for POP3 and IMAP sessions, because it’s easy: there is only one TCP session, and the set of users who access the server is small and controlled. The same is true with VPNs — one session, and typically the users are all required by their employer to use the VPN, so it gets deployed. IPSec code exists in many systems, but is rarely used in stranger-to-stranger communications (or even friend-to-friend) due to the nightmares of key management.

TLS’s complexity makes sense for “sessions” but has problems when you use it for transactions, such as web hits. Transactions want to be short. They consist of a request, and a response, and perhaps an ACK. Adding extra back and forths to negotiate encryption can double or triple the network cost of the transactions.
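A quick back-of-envelope makes the point. Assuming a 100 ms round trip, one RTT for the TCP handshake and roughly two more for a classic full TLS negotiation before the GET can even be sent (the numbers are illustrative only):

```python
# Back-of-envelope: added latency of a full TLS handshake on a short
# transaction, assuming a 100 ms round trip. Numbers are illustrative.
RTT = 0.100                       # seconds per round trip

plain_http = RTT * (1 + 1)        # TCP handshake + GET/response
https_full = RTT * (1 + 2 + 1)    # TCP + ~2 RTTs of TLS setup + GET/response

print(f"plain HTTP            : {plain_http * 1000:.0f} ms")
print(f"HTTPS (full handshake): {https_full * 1000:.0f} ms")
# Roughly double the latency before a single byte of the page arrives.
```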

Skype became a huge success at encrypting because it is done with ZUI — the user is not even aware of the crypto. It just happens. SSH takes an approach that is deliberately vulnerable to man-in-the-middle attacks on the first session in order to reduce the UI, and it has almost completely replaced unencrypted telnet among the command line crowd.

I write about this because now Google is finally doing an experiment to let people have their whole gmail session be encrypted with HTTPS. This is great news. But hidden in the great news is the fact that Google is evaluating the “cost” of doing this. There also may be some backlash if Google does this on web search, as it means that ordinary sites will stop getting to see the search query in the “Referer” field until they too switch to HTTPS and Google sends traffic to them over HTTPS. (That’s because, for security reasons, the HTTPS design says that if I made a query encrypted, I don’t want that query to be repeated in the clear when I follow a link to a non-encrypted site.) Many sites do a lot of log analysis to see what search terms are bringing in traffic, and may object when that goes away.  read more »

Secrets of the "Clear" airport security line

Yesterday it was announced that “Clear” (Verified ID Pass) the special “bypass the line at security” card company, has shut its doors and its lines. They ran out of money and could not pay their debts. No surprise there, they were paying $300K/year rent for their space at SJC and only 11,000 members used that line.

As I explained earlier, something was fishy about the program. It required a detailed background check, with fingerprint and iris scan, but all it did was jump you to the front of the line — which you get for flying in first class at many airports without any background check. Their plan, as I outline below, was to also let you use a fancy shoe and coat scanning machine from GE, so you would not have to take them off. However, the TSA was only going to allow those machines once it was verified they were just as secure as existing methods — so again no need for the background check.

To learn more about the company, I attended a briefing they held a year ago for a contest they were holding: $500,000 to anybody who could come up with a system that sped up their lines at a low enough cost. I did have a system, but also wanted to learn more about how it all worked. I feel sorry for those who worked hard on the contest who presumably will not be paid.  read more »


Authenticated actions as an alternative to login

The usual approach to authentication online is the “login” approach — you enter userid and password, and for some “session” your actions are authenticated. (Sometimes special actions require re-authentication, which is something my bank does on things like cash transfers.) This is so widespread that all browsers will now remember all your passwords for you, and systems like OpenID have arisen to provide “universal sign on,” though to only modest acceptance.

Another approach which security people have been trying to push for some time is authentication via digital signature and certificate. Your browser is able, at any time, to prove who you are, either for special events (including logins) or all the time. In theory these tools are present in browsers but they are barely used. Login has been popular because it always works, even if it has a lot of problems with how it’s been implemented. In addition, for privacy reasons, it is important your browser not identify you all the time by default. You must decide you want to be identified to any given web site.

I wrote earlier about the desire for more casual authentication for things like casual comments on message boards, where creating an account is a burden and even use of a universal login can be a burden.

I believe an answer to some of the problems can come from developing a system of authenticated actions rather than always authenticating sessions. Creating a session (ie. login) can be just one of a range of authenticated actions, or AuthAct.

To do this, we would adapt HTML actions (such as submit buttons on forms) so that they could say, “This action requires the following authentication.” This would tell the browser that if the user is going to click on the button, their action will be authenticated and probably provide some identity information. In turn, the button would be modified by the browser to make it clear that the action is authenticated.

An example might clarify things. Say you have a blog post like this one, with a comment form. Right now the button below says “Post Comment.” On many pages, you could not post a comment without logging in first, or, as on this site, you may have to fill in other fields to post the comment.

In this system, the web form would indicate that posting a comment is something that requires some level of authentication or identity. This might be an account on the site. It might be an account in a universal account system (like a single sign-on system). It might just be a request for identity.

Your browser would understand that, and change the button to say, “Post Comment (as BradT).” The button would be specially highlighted to show the action will be authenticated. There might be a selection box in the button, so you can pick different actions, such as posting with different identities or different styles of identification. Thus it might offer choices like “as BradT” or “anonymously” or “with pseudonym XXX” where that might be a unique pseudonym for the site in question.

Now you could think of this as meaning “Login as BradT, and then post the comment” but in fact it would be all one action, one press. In this case, if BradT is an account in a universal sign-on system, the site in question may never have seen that identity before, and won’t, until you push the submit button. While the site could remember you with a cookie (unless you block that) or based on your IP for the next short while (which you can’t block) the reality is there is no need for it to do that. All your actions on the site can be statelessly authenticated, with no change in your actions, but a bit of a change in what is displayed. Your browser could enforce this, by converting all cookies to session cookies if AuthAct is in use.
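One way to picture such stateless authentication: the browser signs the action itself (identity, site, form contents, a nonce) and the site verifies that signature on each submission, with no session to set up or tear down. A minimal sketch, assuming the browser holds a keypair whose public half the site learned on first contact; this is not a spec, just an illustration of the shape of it.

```python
# Sketch of a statelessly authenticated action: the "browser" signs the
# action payload; the site verifies it per request, keeping no session.
# Assumes an Ed25519 keypair whose public key the site already knows.
import json, os, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

browser_key = Ed25519PrivateKey.generate()
site_known_pubkey = browser_key.public_key()      # learned on first contact

def sign_action(identity, site, action, fields):
    payload = json.dumps({
        "identity": identity, "site": site, "action": action,
        "fields": fields, "nonce": os.urandom(8).hex(), "ts": int(time.time()),
    }, sort_keys=True).encode()
    sig_hex = browser_key.sign(payload).hex().encode()
    return payload + b"\n" + sig_hex

def verify_action(blob):
    payload, sig_hex = blob.rsplit(b"\n", 1)
    site_known_pubkey.verify(bytes.fromhex(sig_hex.decode()), payload)
    return json.loads(payload)                    # raises if signature invalid

blob = sign_action("BradT", "example.com", "post_comment",
                   {"comment": "Nice post!"})
print(verify_action(blob)["identity"])            # -> BradT
```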

Note that the first time you use this method on a site, the box would say “Choose identity” and it would be necessary for you to click and get a menu of identities, even if you only have one. This is because there are always tools that try to fake you out and make you press buttons without you knowing it, by taking control of the mouse or covering the buttons with graphics that skip out of the way — there are many tricks. The first handover of identity requires explicit action. It is almost as big an event as creating an account, though not quite that significant.

You could also view the action as, “Use the account BradT, creating it if necessary, and under that name post the comment.” So a single posting would establish your ID and use it, as though the site doesn’t require userids at all.  read more »

Will we give up our privacy for unspoiled milk?

I recently attended the eComm conference on new telephony. Two notes in presentations caught my attention, though they were mostly side notes. In one case, the presenter talked about the benefits of having RFID tags in everything.

“Your refrigerator,” he said, “could read the RFID and know if your milk was expired.” In the old days we just looked at the date or smelled it.

Another presenter described a project where, with consent, they tracked people wherever they went using their cell phones, and then correlated the data, to figure out what locations were hot night spots etc. In a commercialization of the project, he said the system could notice you were visiting car dealerships and send you an email offering a bargain on a car.

Now I won’t try to say I haven’t seen some interesting applications for location data. In fact, many years ago, I started this blog with an article about a useful location aware service of my own design.

But why is it that when people are asked to come up with applications for some of the most intrusive technologies, they often come up with such lame ones? Perhaps you have concluded that your privacy is doomed and these invasive technologies are coming, but if so, can we at least give up our privacy for something a bit more compelling than not having to smell the milk?

I mean, RFIDs in everything (and thus the trackability of everything for good and ill) just so your fridge can be a touch smarter? So you can be marketed to better and thus, in theory, get slightly cheaper products — at least until all sides have the technology and the competitive advantage goes away.

Have we revealed all our data about ourselves and our friends to Facebook just so we can throw sheep?

I’m not saying that throwing sheep (or the other, more practical applications of Facebook) isn’t fun, but is it worth the risk? I say risk rather than cost because you don’t see the cost until long after, when there has been a personal invasion. What if Falun Gong’s members had all been on Facebook when the Chinese government decided it was time to round them up? Mark my words, there will, before too long, be some group that a government decides to round up, using a social networking tool to find them. What cool apps are worth that?

There are ways to do applications on private data that are not nearly as risky. My yellow button application only transmits your location when you take an action, and that transmission can use a pseudonym. The real function can take place in the phone, knowing where it is, and knowing where the interesting locations are that it needs to know more about. In this case, the network only learns something about you during explicit actions. The dangerous applications are the ones that are on all the time, that track and record your whole sea of data to do something useful. It is your whole sea of data that is the most dangerous to you, because if untrained eyes look in a big sea of data with something already in mind, they will find it, whether it’s there or not. That’s not as true for specialized subsets.

Comments welcome, even Anonymous ones!

Data hosting could let me make Facebook faster

I’ve written about “data hosting/data deposit box” as an alternative to “cloud computing.” Cloud computing is timesharing — we run our software and hold our data on remote computers, and connect to them from terminals. It’s a swing back from personal computing, where you had your own computer, and it erases the 4th amendment by putting our data in the hands of others.

Lately, the more cloud computing applications I use, the more I realize one other benefit that data hosting could provide as an architecture. Sometimes the cloud apps I use are slow. It may be because of bandwidth to them, or it may simply be because they are overloaded. One of the advantages of cloud computing and timesharing is that it is indeed cheaper to buy a cluster mainframe and have many people share it than to have a computer for everybody, because those computers sit idle most of the time.

But when I want a desktop application to go faster, I can just buy a faster computer. And I often have. But I can’t make Facebook faster that way. Right now there’s no way I can do it. If it weren’t free, I could complain, and perhaps pay for a larger share, though that’s harder to solve with bandwidth.

In the data hosting approach, the user pays for the data host. That data host would usually be on their ISP’s network, or perhaps (with suitable virtual machine sandboxing) it might be the computer on their desk that has all those spare cycles. You would always get good bandwidth to it for the high-bandwidth user interface stuff. And you could pay to get more CPU if you need more CPU. That can still be efficient, in that you could possibly be in a cloud of virtual machines on a big mainframe cluster at your ISP. The difference is, it’s close to you, and under your control. You own it.

There’s also no reason you couldn’t allow applications that have some parallelism to them to try to use multiple hosts for high-CPU projects. Your own PC might well be enough for most requests, but perhaps some extra CPU would be called for from time to time, as long as there is bandwidth enough to send the temporary task (or sub-tasks that don’t require sending a lot of data along with them.)

And, as noted before, since the users own the infrastructure, this allows new, innovative free applications to spring up because they don’t have to buy their infrastructure. You can be the next youtube, eating that much bandwidth, with full scalability, without spending much on bandwidth at all.

EFF Year in Review in Music

It’s been a remarkably dramatic year at the EFF. We worked in a huge number of areas, acting on or participating in a lot of cases. The most famous is our ongoing battle over the warrantless wiretapping scandal, where we sued AT&T for helping the White House. As you probably know, we certainly got their attention, to the point that President Bush got the congress to pass a law granting immunity to the phone companies. We lost that battle, but our case still continues, as we’re pushing to get that immunity declared unconstitutional.

We also opened a second front, based on the immunity. After all, if the phone companies can now use the excuse “we were only following orders they promised were legal” then the people who promised it was legal are culpable if it actually wasn’t. So we’ve sued the President, VP and several others over that. We’ll keep fighting.

But this was just one of many cases. The team made up a little musical animation to summarize them for you. I include it here, but encourage you to follow the link to the site and see what else we did this year. I want you to be impressed, because these are tough times, and that also makes it tough for non-profits trying to raise money. I know most of you have wounded stock portfolios and are cutting back.

But I’m going to ask you not to cut back to zero. It’s not that bad. If you can’t give what you normally would like to give to make all this good work happen, decide some appropriate fraction and give it. Or if you are one of the few who is still flush, you may want to consider giving more to your favourite charities this year, to make up for how they’re hurting in regular donations.

The work the EFF does needs to be done. You need it to be done. You have a duty to protect your rights and the rights of others. If you can’t do the work to protect them yourself, I suggest you outsource it to the EFF. We’re really good at it, and work cheap. You’ll be glad you did.


Learn more about this video and support EFF!

Plus, we have cool T-shirts and shipping tape.

Where does the Ford MyKey lead?

Ford is making a new car-limiting system called MyKey standard in future models. It allows the car owner to place various limits and permissions on the keys they give to their teenagers. The current system’s limits include an 80 mph top speed, a 40% volume cap on the stereo, never-ending seatbelt reminders, earlier low-fuel warnings, audible speed alerts and an inability to disable various safety systems.

My reaction is of course mixed. If you own something, it is reasonable for you to be able to constrain its use by people you lend it to. At the same time it is easy to see this literal paternalism turn into social paternalism. While it’s always been possible to build cars that, for example, can’t go over the speed limit, that’s always been seen as a “non-starter” with the public. The more cars out there with governors on them, the more used to the idea people will get. (“Valet” keys that can’t go over 25 mph or open the trunk have been common for some time.)

This is going to be one of the big questions on the path to Robocars — will they be able to violate traffic laws at the command of their owners? I have an essay on that coming up, in which I will also ask how much sense traffic laws make in a robocar world.

The Ford key limits speed to 80 mph to allow the teen to pass on the highway. Of course, on some highways here you could not use the fast lane with that governor on, which probably suits the parents just fine. What they probably want is more of a limit on average speed, allowing the teen to burst, for short periods, to the full power of the car if it’s needed, but not from a standing start, and of course with advance warning when the car has gone too fast for too long, to give a chance to slow down safely.
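Just as a sketch of that idea (certainly not Ford’s actual algorithm), an average-speed governor might look something like this in Python; the window size and thresholds are made-up values:

# Sketch of the rolling-average idea (not Ford's actual algorithm): the car may
# briefly exceed the cap, but a warning fires, and then a limit, once average
# speed over a recent window has stayed too high for too long.
from collections import deque

WINDOW_SECONDS = 30       # how much recent history to average (assumed value)
AVERAGE_CAP_MPH = 75      # long-run cap (assumed value)
WARN_MARGIN_MPH = 5       # warn this far before the cap is reached

class AverageSpeedGovernor:
    def __init__(self):
        self.samples = deque()  # (seconds, mph), roughly one sample per second

    def update(self, t, mph):
        """Feed one speed sample; returns 'ok', 'warn' or 'limit'."""
        self.samples.append((t, mph))
        while self.samples and t - self.samples[0][0] > WINDOW_SECONDS:
            self.samples.popleft()
        avg = sum(s for _, s in self.samples) / len(self.samples)
        if avg > AVERAGE_CAP_MPH:
            return "limit"    # the car would now throttle back gently
        if avg > AVERAGE_CAP_MPH - WARN_MARGIN_MPH:
            return "warn"     # give the driver time to slow down safely
        return "ok"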

The earlier low-fuel warning is just silly. The earlier you make a warning, the more you teach people to ignore it. If you have an early (subtle) warning and then a “this time we really mean it” warning, most people will probably just rely on the second one. Many cars with digital fuel meters refuse to estimate the fuel left below a certain amount, because they don’t want to be blamed for making you think you have more gas than you do. So they tell you nothing instead, which is silly.

What might make more sense would be the ability to make full use of speed, but with the threat of reporting it to mom & dad if it’s over-used. (Such a product would be easy to add to existing cars; I wonder if anybody has made one?) Ideally the product would warn the teen when they were getting close to the limit, letting them govern themselves, knowing they would face a lecture and complete loss of car privileges if they went over.

On one hand, this is less paternalistic, because it does not constrain the vehicle and teaches the child to discipline themselves rather than having technology enforce the discipline. On the other hand, it is somewhat Orwellian, though the system need not report the particulars of the infraction, just the fact of it. We can certainly see parents wanting to know all the details, though.

Of course, we’ll see a lot more of that sort of surveillance asked for: track logs from a GPS, in fact. Logging GPSs that can be hidden in cars cost only $80, and I am sure parents are buying them. (I have one; they are handy for geotagging photos.) We might also start seeing “smart” logging systems that judge speed infractions based on what road you are on, i.e. 80 mph on a road that isn’t a highway is an infraction, but on the highway it isn’t.
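A rough sketch of what such road-aware logging could look like; the map lookup is assumed to come from some map database the caller supplies, it is not a real API:

# Sketch of road-aware logging: flag a speed sample only if it exceeds the
# posted limit of the road the GPS says you are on. The speed_limit_for
# callable is a stand-in for a real map lookup (an assumption, not a real API).
def infractions(track_log, speed_limit_for, tolerance_mph=5):
    """track_log: iterable of (lat, lon, mph) samples.
    speed_limit_for: callable (lat, lon) -> posted limit in mph.
    Returns the samples worth reporting."""
    flagged = []
    for lat, lon, mph in track_log:
        limit = speed_limit_for(lat, lon)
        if mph > limit + tolerance_mph:
            flagged.append((lat, lon, mph, limit))
    return flagged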

I doubt we’ll be able to stop this sort of governing or monitoring technology — so how can we bend it to protect freedom and privacy?

Better forms of Will-Call (phone and photo)

Most of us have had to stand in a long will-call line to pick up tickets. We probably even paid a ticket “service fee” for the privilege. Some places help by offering online printable tickets with a bar code. However, that requires networked bar code readers at the gate which can detect things like duplicate bar codes, and venues seem to prefer giant lines and lots of staff to getting such machines.

Can we do it better?

Well, for starters, it would be nice if tickets could be sent not as a printable bar code, but as a message to my cell phone. Perhaps a text message with a coded string, which I could then display to a camera that does OCR on it. Same as a bar code, but I can actually get it while I am on the road and don’t have a printer, and I’m less likely to forget it.

Or let’s go a bit further and have a downloadable ticket application on the phone. The ticket application would use Bluetooth and a deliberately short-range reader. I would go up to the reader, push a button on the cell phone, and it would talk over Bluetooth with the ticket scanner and authenticate the use of my ticket. The scanner would then show a symbol or colour, and my phone would show that same symbol/colour to confirm to the gate staff that it was my phone that synced. (Otherwise it might have been the guy in line behind me.) The scanner could be just an ordinary laptop with Bluetooth. You might be able to get away with just one (saving the need for networking) because it would be very fast: people would just walk by holding up their phones, and the gatekeeper would glance at the (hidden) laptop screen and the screen of the phone, and as long as they matched, wave through the number of people the laptop shows.

Alternately, you could put the Bluetooth antenna in a little Faraday box to be sure it doesn’t talk to any phone but the one inside it. Put the phone in the box, the light goes on, take the phone out and proceed.
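For the curious, here is a rough Python sketch of just the handshake part of such a ticket application. The Bluetooth transport itself is omitted, and the ticket-secret scheme and the colour trick are my own assumptions, not any existing product:

# Sketch of the handshake only (the Bluetooth transport is omitted). The colour
# trick: both sides derive the same display colour from the scanner's fresh
# random challenge, so the gatekeeper can see which phone actually answered.
import hashlib
import hmac
import os

COLOURS = ["red", "green", "blue", "yellow", "purple", "orange"]

def scanner_challenge():
    return os.urandom(16)                    # fresh nonce for each presentation

def phone_response(ticket_secret, challenge):
    """The phone proves it holds the ticket secret issued at purchase (assumed scheme)."""
    return hmac.new(ticket_secret, challenge, hashlib.sha256).digest()

def scanner_verify(ticket_secret_on_file, challenge, response):
    expected = hmac.new(ticket_secret_on_file, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def display_colour(challenge):
    """Both the laptop and the phone show the colour derived from this challenge."""
    return COLOURS[hashlib.sha256(challenge).digest()[0] % len(COLOURS)]

Because the colour comes from the scanner’s one-time challenge, a phone that never completed the exchange has no better than a one-in-six chance of showing the right one.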

Photo will-call

One reason many will-calls are slow is that they ask you to show ID, often your photo ID or the credit card used to purchase the tickets. But here’s an interesting idea: when I purchase the ticket online, let me offer an image file with a photo. It could be my photo, or it could be the photo of the person I am buying the tickets for. It could be three photos if any one of those three people can pick up the tickets. You do not need to provide your real name, just the photo. The will-call system would then inkjet-print the photos on the outside of the envelope containing your tickets.

You do need some form of name or code, so the agent can find the envelope or bring up the record in the computer. When the agent gets the envelope, identification is easy: look at the photo on the envelope and see if it matches the person at the ticket window. If so, hand it over, and you’re done! No need to get out cards or hand them back and forth.
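As a sketch, the record such a system stores could be as minimal as the following; the field names are hypothetical, and note that no purchaser name appears anywhere:

# Hypothetical will-call record: photos plus a short pickup code, no name.
from dataclasses import dataclass, field

@dataclass
class WillCallOrder:
    pickup_code: str                  # short code the buyer quotes at the window
    event_id: str
    ticket_count: int
    photo_files: list = field(default_factory=list)   # JPEGs of anyone allowed to pick up

def envelope_label(order: WillCallOrder) -> dict:
    """What gets printed on the envelope: the photos and the lookup code only."""
    return {
        "code": order.pickup_code,
        "tickets": order.ticket_count,
        "photos": order.photo_files,
    }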

A great company to implement this would be PayPal. I could pay with PayPal, not revealing my name (just an e-mail address), and PayPal could have a photo stored and forward it on to the ticket seller if I check a box to allow it. The ticket seller never knows my name, just my picture. You may think it’s scary for people to get your picture, but in fact it’s scarier to give them your name. They can collect and share data about you under your name. Your picture is not very useful for that, at least not yet, and if you like you can use a different picture each time; you can’t keep using different names if you need to show ID.

This could still be done with credit cards. Many credit cards offer a “virtual credit card number” system which will generate one-time card numbers for online transactions. They could set these up so you don’t have to offer a real name or address, just the photo. When picking up the item, all you need is your face.

This doesn’t work if it’s an over-21 venue, alas. They still want photo ID, but they only need to look at it, they don’t have to record the name.

It would be more interesting if one could design a system that lets people find their own ticket envelopes. The guard would let you into the room with the envelopes, let you find yours, and then you could leave by showing that your face is on the envelope. The problem is, what if you also palmed somebody else’s envelope and then claimed yours, or said you couldn’t find yours? That needs a pretty watchful guard, which doesn’t really save on staff as we’re hoping. It might be possible to keep the tickets in a series of closed boxes. You know your box number (it was given to you, or you selected it in advance), so you fetch your box and bring it to the gate person, who opens it and pulls out your ticket for you, confirming your face. Then the box is closed and returned. Make opening the boxes very noisy.

I also thought that for Burning Man, which apparently had a will-call problem this year, you could just require all people fetching their ticket be naked. For those not willing, they could do regular will-call where the ticket agent finds the envelope. :-)

I’ve noted before that, absent the TSA’s need to know all our names, this is how boarding passes should work. You buy a ticket, provide a photo of the person who is to fly, and the gate agent just checks whether the face on the screen is the person flying; no need to get out ID or tell the airline your name.

Should we let people safely talk to the police?

There’s a bit of an internet buzz this week around a video of a law lecture on why you should never, ever, ever, ever talk to the police. The video begins with the law professor and criminal defense attorney, who is a good speaker, making that case, and then a police detective, interesting but not quite as eloquent, agreeing with him and describing the various tricks the police use every day with people stupid enough to talk to them.

The case is very good. In our society of a zillion laws, you are always guilty of something, he explains, and even if you’re completely innocent and tell nothing but the truth, there are still a lot of ways you could end up in jail. Not that it happens every time, but the chance is high enough and the cost so great that he advocates you should never, ever talk to the police. (He doesn’t say this, but I presume he does not include filing a complaint about a crime against you, or acting as a witness to a crime against others, where the benefits may outweigh the risk.)

Now fortunately for the police, few people follow the advice. Lots of people talk to the police. Some 80% of cases, the detective declares, are won because of a confession by the suspect. Cops love it, and they will lie (and are permitted to lie) to make it happen if they can.

But since a rational person should never, ever, under any circumstances talk to the police, this prevents citizens from ever helping the police. And there are times when society, and law enforcement, would be better if citizens could help the police without fear.

What if there existed a means for the police to do a guaranteed off-the-record interview with a non-suspect? Instead of a Miranda warning, the police would inform you that:

“You are not a suspect, and nothing from this interview can be used against you in a court of law.”

First of all, could this work? I believe our laws of evidence are strong enough that actual quotes from the interview could not be used. To improve things, you could be allowed to record the interview, or the officer could record it but hand you the only copy and swear it’s the only copy. It could be a digitally signed, authenticated copy, which could never be taken from you by warrant or subpoena, or used even if you lose it, perhaps until some years after your death.
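As an illustration of how such an authenticated copy could be produced, here is a sketch using an Ed25519 signature from the Python cryptography package; the specific scheme and the key handling are my assumptions, not part of any existing procedure:

# Sketch: the department signs a hash of the recording and hands the
# interviewee the audio file plus the signature, so the copy's authenticity
# can later be checked against the department's published public key.
# The scheme and key handling are illustrative assumptions.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_recording(dept_key: Ed25519PrivateKey, audio_bytes: bytes) -> bytes:
    """Signature the interviewee takes home along with the recording."""
    return dept_key.sign(hashlib.sha256(audio_bytes).digest())

def verify_recording(dept_pub: Ed25519PublicKey, audio_bytes: bytes, sig: bytes) -> bool:
    """Anyone can later confirm the recording is the unaltered original."""
    try:
        dept_pub.verify(sig, hashlib.sha256(audio_bytes).digest())
        return True
    except InvalidSignature:
        return False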

However, clearly if the police learn something in the interview that makes them suspect you, they will try to find ways to “learn” it again through other, admissible means. And this could come back to bite you. While we could have a Fruit of the Poisonous Tree doctrine which would forbid this, it is much harder to get full rigour out of such doctrines. Is this fear enough that the best advice is still to never speak to the police? Or is there a way we could make it safe to assist the police?

I will note that if we had a safe means to assist the police, it would sometimes “backfire” in the eyes of the public. There would be times when interviewees would (foolishly, but still successfully) say “nyah, nyah, I did it and you can’t get me,” and the public would be faced with the usual confusion over people who go free even when we know they are guilty. And there would be times when the police learn things in such interviews that could have led them to evidence they are now prohibited from using, getting the public up in arms because some rapist, kidnapper, murderer or even terrorist goes free.
