Let’s start with a story. Bono, from U2, was on vacation in St. Tropez. I guess this happens when you’re a rock star with a social conscience, but he meets two 19-year-olds, Andrea and Hannah. And Bono has just a grand old time with them. How do I know this? Because Andrea posted pictures to Facebook, and from there, they went viral. It’s great. They’re wearing bikinis; he’s wearing those tinted one-piece sunglasses; The Daily Mail publishes the photos, Fox News, bam, so much for a private little walk on the beach.
So here are four things we could say about this story, using Andrea as a stand-in for her fellow Facebook users: that she doesn't care about privacy; that she made a rational choice to post those photos; that any expectation of privacy on Facebook is unrealistic; and that restricting what Facebook can do with personal data would have protected her.
All four of these claims are, I will argue, false. They are myths, urban legends, about privacy on Facebook and other social network sites.
The first claim you sometimes see is that Facebook users — or, if you prefer, “kids these days” — don’t care about privacy. They use Facebook, after all, deliberately exposing personal information to others on a web site accessible to anyone in the world.
But if you ask Andrea and Facebook users like her, they'll tell you that they do care about privacy. When Facebook rolled out News Feed, which made it possible to get a real-time stream of updates about everything your contacts were doing on the site, it sparked massive user protests, to the point that Mark Zuckerberg had to apologize to the Facebook community. And when the kids these days find out that employers, deans, parents, or police are looking at their Facebook profiles, they also object. These are the protests of people for whom privacy matters.
Nor is it cheap talk. Andrea and her peers also act in ways that show a regard for privacy. In the first place, using Facebook at all is a privacy-positive move. The alternative, after all, is the open Web. Facebook is a controlled network, and the visible mechanic of choosing whom to friend and whom not to serves as a way of limiting the spread of information. Facebook users also create fake profiles, tell outright lies on their real profiles, and engage in a million other techniques of information modulation. After raging keg parties, college kids go back to their dorm rooms, check Facebook, and untag their names from any photos of them doing keg stands, lest their athletic coaches or campus police catch them drinking.
Why, then, does the idea that Facebook users don't want privacy have such persistence? In part because of a related belief: that people make rational cost-benefit tradeoffs when evaluating privacy online. If that were true, the regularly observed low-privacy outcomes of Facebook use could only be the result of conscious choice. But the premise fails, and spectacularly so.
For one thing, users have a massive lack of understanding about Facebook's privacy architecture and settings. One study found that over half of the Facebook users surveyed didn't know that their profiles were searchable by millions of others. Another found that two-fifths of Facebook users were willing to add a green plastic frog as a friend. (The frog's name, in a nice touch, was "Freddi Staur," an anagram of "ID Fraudster.")
Take Andrea; she posted the photos to her account, thinking they'd be visible only to her friends and networks. The trouble is that one of her networks was "New York City," whose membership, by default, consisted of anyone in New York City with a Facebook account. Oops. Facebook itself eventually eliminated this "feature" of networks, having concluded that it was never going to be able to educate its users into understanding how they worked.
This mismatch between the privacy Facebook users expect and the privacy they get is hardly surprising. The design of social network sites plays into plenty of well-understood social cognition errors. The most basic heuristic of privacy self-help — know your audience — is hard to use in an electronically mediated environment that gives you little feedback on to whom any given communication is visible. People don't realize they're exposing information to unwanted viewers, in part, because social network sites activate the subconscious cues that make users think they're interacting within bounded, closed, private spaces.
Of course, given that Facebook so regularly smashes its users' fragile, precious hopes for privacy, it's also regularly argued that Facebook users must simply accept that everything that happens there is public, and learn to live with the consequences. This is a third myth: that Facebook users' desire for privacy is unrealistic, and thus we should either laugh at them or smile sadly but knowingly.
Let me emphasize once again that Facebook is NOT a fully public site. It creates back-and-forth conversations among small groups, social contexts that are not intended to be intelligible to outsiders, and spaces for interaction that do have bounds. Information spreads in limited ways within social networks. Most things said and done on Facebook aren’t likely to make the jump into fully public spaces on their own. And privacy scholars have recognized the importance of preserving context. To treat everything on Facebook as fair game is to run a steamroller over the millions of differentiated, localized social contexts on Facebook, each with its own norms of appropriateness and flow.
Let me also emphasize that Facebook is subject to a whole flood of attacks on privacy, many of which are in no sense their victims' fault. Miss New Jersey 2007 was blackmailed by someone who got hold of some mildly racy photographs in what she thought was a Facebook photo album restricted to friends only. A group of students at MIT were able to identify gay users on Facebook with surprisingly high accuracy, even when they kept their sexual orientation private, simply by looking to see whether they had gay friends. And Facebook's ill-fated Beacon advertising program used users' names and faces to hawk the products they'd bought on other sites, like Blockbuster or Zappos — a model of information sharing and exposure that nothing in their previous online experience would have led them to expect.
Just to make things even bleaker, let me turn to an only slightly caricatured commonplace of information privacy debates. The theory goes that if only we prevented large commercial entities from collecting or transferring personal data about individuals, all would be well. No, it wouldn't. That's a myth, too. Data collection rules will not help.
Think about it. If you were to limit the categories of data Facebook could collect, you would kill it. A typical Facebook profile contains answers to most of the questions you're not allowed to ask in a job interview: race, sex, age, national origin, religion, marital status — it's one-stop shopping! People are voluntarily uploading it all because they're social and because Facebook scratches social itches. It lets you establish an identity by saying who you are in a detailed, textured way. It lets you build relationships with new friends and reconnect with old ones. And it lets you situate yourself within communities, to create and be part of social groups. These are the bread and butter of ordinary social life; they're why people use Facebook; and if you tell Facebook it can't collect personal information, people will stop using it. And if you believe, as I do, that social software offers profound social benefits, that would be a tragic outcome.
Nor can we do much better on the back end. Telling Facebook it can't transfer personal information to third parties would also be a death sentence; every other user is a third party! Limiting the rule to transfers to companies and governmental entities might be more manageable, but would also be beside the point. Every privacy harm I have mentioned today is a social harm; people are exposed to their peers in embarrassing ways. The interventions that would work for government snooping or commercial profiling are powerless to deal with these peer-to-peer privacy violations. Here, the privacy harms flow from the very purposes for which the data was collected and from the same sorts of transfers the data subjects intended. The call is coming from inside the house.
To recap, then, Facebook users do care about privacy, but they have understandable difficulty acting to secure it. They deserve our sympathy and help, but our usual regulatory interventions are likely to be ineffective.
While I don’t have a set of legislative reforms to propose—I’m not yet convinced I know what the right ones would be—I do want to suggest what might be a more productive way of thinking about this problem. My suggestion is that we approach it as a product-safety issue.
It’s true that using Facebook can be hazardous to your privacy, but a hammer can also be hazardous to your thumb. People need tools, and sometimes they need dangerous tools. Hammers are physically dangerous; Facebook is socially dangerous. We shouldn’t ban hammers, and we shouldn’t ban Facebook. The challenge for policymakers is to ensure that the tools people do use are not unnecessarily dangerous.
What could we learn from this framing? A few things, possibly:
For all of that, I’m optimistic. The silver lining to the dark, dark clouds of privacy violations on social network sites is that users still do work to secure their privacy, as best they can, in incredibly complex social settings. Our challenge is to help them while respecting their agency and autonomy—because ultimately, that’s what privacy is about. Facebook privacy isn’t an oxymoron and it doesn’t need to be a myth.
November 21, 2009
This essay is licensed under a Creative Commons Attribution 3.0 United States License. It is canonically available at http://james.grimmelmann.net/essays/FacebookMyths.
I welcome your comments, critiques, and corrections.