The peril of the Facebook anti-privacy pattern
There's been a well-justified storm about Facebook's recent privacy changes. The EFF has a nice post outlining the changes in Facebook's privacy policies, which inspired this popular graphic showing those changes.
But the deeper question is why Facebook wants to do this. The answer, of course, is money, but in particular it's because the market is assigning a value to revealed data. This force seems to push Facebook, and services like it, toward removing privacy from their users in a steadily rising trend. Social network services often begin with decent privacy protections, both to avoid scaring users (when gaining users is the only goal) and because they have little motivation to do otherwise. The old world of PC applications tended to have strong privacy protection (by comparison) because data stayed on your own machine. Software that exported it got called "spyware," and tools were created to root it out.
Facebook began as a social tool for students. It even promoted the fact that those not at a school could not see in, could not even join. When this changed (for reasons I will outline below), older members were shocked at the idea that their parents and other adults would be on the system. But Facebook decided, correctly, that excluding them was not the path to being #1.

If you don't protect user privacy, building a service is easier. You never have to worry about what you can and can't do with data once you have declared them public. You can't have a leak of what is already disclosed. This allows greater innovation, and in particular it allows innovation outside the company, or outside the community. When everybody can see your data, everybody can figure out and try out cool new things to do with them. Some will be stupid; some will be dangerous; some will be popular, and even useful.
When a site is protecting privacy, even if it is the largest -- especially if it is the largest -- outside competitors will see if they can do something new without following those rules. Ignoring the constraints is the easiest way to get an edge on a big player. Small players are not subject to much scrutiny by privacy watchdogs, and because they start off without the old privacy promises, they carry no "legacy burden." They don't have to sell a change of policy to users. They don't have to build a UI so users can opt in or opt out, and even if they do, they can set the default as they like.
This new approach may turn off some users, but the hard reality is that it won't impede a new business much. There are always plenty of early adopters ready to try something new and cool, and because of the fundamental theorem of privacy -- nobody cares about privacy until after they've been through an invasion -- only the most privacy-aware will avoid the service.
As the upstart grows, the larger, older player will find itself forced to take notice. Users may be migrating, or complaining that the old service is not as full-featured as those of the upstarts. There is strong competitive pressure to abandon the old protections. Worse, there is an added disadvantage: the old service now has to develop a UI that supports both the old and new systems, and it doesn't want a giant and complex UI.
A good example of this is Twitter. When Facebook added a feed about what friends were doing, it caused some uproar, even though only your friends could see it. But Twitter arose and by default, all your updates were visible not just to followers but the whole world, and they were archived forever. You could try to use Twitter with a "protected" account, as I did, but you quickly realized you were missing out on what Twitter was about. An ecosystem of external apps grew up around Twitter without the Twitter company having to do anything, because the data were public. Protected Twitter users could generally not use these new, hot apps.
A company like Facebook had to look at Twitter and salivate over what they could do, without the legacy constraints. And indeed, before long, the Facebook feed was modified to look more like the Twitter feed, though still largely available only to friends. But now the new default makes your wall available to outsiders.
This trend will continue. New sites will arise that expose more data, and sites like Facebook will feel pressure to widen what they make available, even if there is no revenue reason for doing so.
Facebook as the internet identity gorilla
This becomes even more troubling as Facebook makes a play to be the main provider of what are sometimes called "identity" services on the internet. Federated identity began with services like Microsoft Passport (now called LiveID), which mainly attempted to be a single sign-on carrying a fairly small amount of data. Many efforts have sought to expand that, including the hopefully more distributed OpenID system, the Liberty Alliance, and a few others. Facebook, however, with 400 million users, surged onto the scene with Facebook Connect, which has quickly grown because so many users are already routinely logged into Facebook and need do nothing more to make use of it. This has expanded greatly with Facebook's partner program, which not only puts "Like" buttons onto many web sites but also allows special partners to get access to much of a user's Facebook profile, including who their friends are.
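To make concrete what a single sign-on service actually hands a relying site, here is a minimal sketch in Python. This is my own illustration, not Facebook Connect's or LiveID's actual protocol: the provider signs a small blob of profile data, and the partner site verifies the signature instead of running its own login. The secret and profile fields are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret between the identity provider and one partner site.
SECRET = b"partner-site-secret"

def issue_assertion(profile: dict) -> tuple[bytes, str]:
    """Identity provider side: serialize the profile and sign it."""
    payload = json.dumps(profile, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify_assertion(payload: bytes, sig: str):
    """Partner site side: accept the profile only if the signature checks out."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, sig):
        return json.loads(payload)
    return None

payload, sig = issue_assertion({"id": "12345", "name": "Alice"})
assert verify_assertion(payload, sig) == {"id": "12345", "name": "Alice"}
assert verify_assertion(payload, "forged-signature") is None
```

The privacy jump the text describes lives entirely in what goes into that payload: the old systems signed little more than a user ID, while a Facebook-style system can stuff your name, birthday, and friend list into the same envelope.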
While the old systems offered little more than a login ID, Facebook offers its partners your whole life when it serves them your identity. It has been doing that for its "application" partners for some time, and those partners were recently allowed to retain the data about you that they fetch from Facebook. Now they want to take it out to the whole web, and they have a decent shot at success.
Some of us remember a day when it was considered rude for a web site to ask your name or E-mail address. Now, with no effort, users will offer up everything to -- if Facebook gets its way -- every site. It's a potentially rich experience, but at a huge cost to privacy, and a big jump toward the fully instrumented surveillance state.
Users will demand the rich experience, but what they need is a way to ensure that sites that want to make use of personal information only ask for, and only get, what they truly need to make that experience work. If a site doesn't need your birthday, or all your friends' names, it should not get them. And this must happen all the time, not just when you take the time to use a complex privacy console to control what sites will be given. This is not something individual users can or will negotiate on a site-by-site basis. They don't have the power to negotiate it, and the companies don't have the time. Real give and take requires parties of roughly equal power.
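The data-minimization idea above can be sketched in OAuth-style terms. The endpoint, client, and scope names below are hypothetical, not Facebook's actual API; the point is only that the request itself declares how much of your life a site is asking for, so "ask only for what you need" is mechanically checkable.

```python
from urllib.parse import urlencode

def build_auth_url(base: str, client_id: str, redirect_uri: str, scopes: list) -> str:
    """Build an OAuth-style authorization URL requesting only the named scopes."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),
    }
    return base + "?" + urlencode(params)

# A site that only needs to know you are logged in asks for nothing extra:
minimal = build_auth_url(
    "https://id.example.com/authorize",   # hypothetical identity provider
    "demo-app",
    "https://site.example.com/callback",
    ["openid"],                           # just "prove who I am"
)

# A data-hungry site asks for far more than the experience requires:
greedy = build_auth_url(
    "https://id.example.com/authorize",
    "demo-app",
    "https://site.example.com/callback",
    ["openid", "birthday", "friends.read", "email"],
)
```

A browser, proxy, or watchdog tool could flag or strip the extra scopes in the second request before the user ever sees the consent screen, which is roughly the automatic protection the paragraph above calls for.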
If we don't solve this, the two forces (market pressure to reduce privacy, and natural monopolies in identity provision) will drive us in a direction we don't want to go. As I have written before, I believe the only answer is to move social apps back closer to our own computers and away from the cloud, as tempting as the cloud is. Only if the data never leave our hands will they remain under our control. We need a resurgence of the belief that software which exports our data for inappropriate purposes is spyware. Facebook and its partners are now purveyors of spyware, yet no anti-spyware program is ready to delete it from your browser for you. Indeed, under the new way the protections work, your friends are offering up information about you when they visit the partner sites, and you have even less control over that.
Facebook argues that their whole service is "opt in" because you have to join it. That's true to an extent, but it dodges the real issue: if social apps are going to be useful, we should find a way to build them without the pressure to strip users of all privacy, rather than offering people only the choice of living in a glass house or never leaving the house at all.