Privacy in Context
Although it was not the largest of its kind, or the most invasive, or even particularly surprising, the recent Cambridge Analytica scandal produced a remarkable amount of outrage and commentary. If nothing else, it was yet another reminder that we have gradually slipped into a regime where certain aspects of our privacy that could once be taken for granted are now long gone. Are people concerned? Is this something we should be worried about? What exactly are the harms that come from this sort of loss of privacy?
Part of the difficulty with trying to answer these sorts of questions is that privacy is a somewhat slippery term, which can mean different things to different people. Surely everyone has aspects of their lives which they think of as being completely private — certain moments, or spaces, or memories — and yet it is also true that collectively we are using the equivalent of a digital megaphone to broadcast details about ourselves in a way that would once have been unthinkable. How should we approach this concept?
By far the best book I have read on the subject is Helen Nissenbaum’s Privacy in Context. Nissenbaum is a Professor at Cornell Tech, well known for her work on the social and ethical implications of information technology. The book lays out a principled yet pragmatic approach to privacy: Nissenbaum begins with an overview of why privacy matters and how people have historically thought about it, then outlines her proposed framework, which she calls “contextual integrity,” along with many examples of how we can think through these issues.
Although it is nearly a decade old now, Nissenbaum’s book still feels incredibly fresh and relevant. It is true today, as it was then, that recent developments in technology have vastly expanded the type and extent of threats to our privacy, including biometric surveillance, social networks, and ubiquitous computing. Although people have always monitored other people in various ways (even something as mundane as taking attendance), tracking has become much more complete, and vastly more automated, widespread, and invasive. In some cases this tracking is more a side effect of a technology than its original intent (as in transportation payment systems such as E-ZPass), but such systems nevertheless contribute to the sum of data that is available on any individual.
Although most people agree that privacy is important, it can be difficult to articulate exactly why, or precisely what negative consequences might follow from its loss. Surveying the literature, Nissenbaum touches on ideas from a wide range of thinkers, suggesting many reasons why we should value privacy, both individually and collectively.
Perhaps the most obvious is the possibility of individual harm. If there were no mechanisms in place to protect your information, it would presumably be much easier for someone to use that information against you, even in something as prosaic as identity theft. The classic rebuttal to this idea, of course, is that only those who have something to hide have something to fear. And yet, even (perhaps especially) those who have done the most to undermine our privacy seem to be quite concerned with keeping their own information private, demonstrating a hypocrisy which is instructive.
Revealingly, one of the most potent and terrifying weapons that people have to use against us today is “doxing” — obtaining and making public quintessentially private information, including address, phone number, email, family members, photos, and so on. Ironically, this is exactly the information that is perhaps most sensitive (as it allows escalation to the next level of threat), but also most widely available. In writing her book Dragnet Nation, for example, Julia Angwin was able to uncover hundreds of data brokers in the US, many of whom knew all the places she had ever lived, and far more.
Beyond the threat to your individual information, there is also a somewhat more diffuse danger in the possibility of discrimination. Given how biased society remains on many levels, it is completely understandable that people would prefer not to automatically reveal every personal detail about themselves in every situation. Even something as prosaic as religion might create a barrier to initial communication, despite very possibly having no bearing on anything that might follow.
At an even higher level, there is the loss of privacy that comes from the aggregation of information about individuals by governments and corporations. At its most benign, this brings with it the potential to discover ways in which the world could be made better (for example, by reducing inequality). At the same time, however, there is every reason to be concerned about the power that this sort of information grants to manipulate people en masse. Although we may never know how much of a role it played in the last presidential election, it is quite clear that Facebook, at least, could have a massive directed influence on politics if it chose to do so.
While the power of companies like Facebook is perhaps more pervasive, the power of the government is still in many ways more concerning. Although we can be thankful for the many layers of checks and balances built into the US system, the biggest concern is that if these were somehow to fail, the giant surveillance machinery that has been put in place could suddenly and easily be turned against the population in force — monitoring people’s actions, quashing dissent, and allowing a would-be tyrant to parlay a small amount of power into full-scale tyranny. Whether or not such a thing is likely to happen in a country like the United States (where there are many actors with competing interests) is unclear, but the potential is certainly there. One only needs to look to a country like Iran to see how powerful the state can be when it takes control of electronic surveillance and asserts its power through it. As Frank Church lamented in 1975,
“If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it was done, is within the reach of the government to know.”
Beyond specific threats, we quickly get into more philosophical territory. Just as constitutions have historically protected certain types of personal information, many people have suggested that this sort of individual privacy is essential for certain types of personal development — the freedom to experiment and develop artistically, morally, politically, etc. Ultimately, the liberal tradition holds that this sort of free thinking by individuals is what allows for society to progress.
The reverse of this is the kind of “chilling effect” that might occur if we knew that everything we said was being recorded, such as people not being willing to criticize the government. Nissenbaum is somewhat cautious about embracing any certainty around all of this: “As far as I can see, there is insufficient evidence to claim that these systems and practices [data collection, aggregation, and profiling], in general, lead to coercion, deception, and manipulation, only that they may.” (p. 84)
There is also the intriguing argument from people like Charles Fried and James Rachels that privacy is in some sense essential for developing deep personal relationships (in that sharing private information with select individuals is a key part of intimacy). Again, this highlights the idea that we would probably be willing to share even our most personal information in the right circumstances; it is all a question of who we are revealing it to, and under what conditions.
Finally, people have also suggested that we should think of privacy as a public good, akin to clean air or water, with the implication that privacy may have value for all, even if individuals are not incentivized to prioritize it. As such, the market may not be a particularly good mechanism for ensuring that privacy is protected, as there is a clear free-rider problem. This certainly seems to be the case: more and more people are flaunting their individual ability to share everything, and yet there can still be a strenuous backlash when people feel their sense of privacy has been violated.
One of the mysteries that comes up frequently in discussions of this sort is the so-called “privacy paradox” — the idea that most people say they care about privacy, and yet in their actions seem to place almost no value on it. While people were intensely concerned that the government might be recording the metadata of their phone calls, they were simultaneously tweeting details about their personal lives, posting photos from inside their homes on Instagram, and seemingly indifferent to the surveillance cameras now pervasively installed throughout society, or so the story goes. Moreover, in experimental studies, people are often unwilling to pay even tiny amounts of money to obtain greater privacy (at least within the context of the experiment). As Nissenbaum says, “In almost all situations in which people must choose between privacy and just about any other good, they choose the other.” (p. 105)
Nissenbaum does a great job of unpacking this paradox and shedding some light on what is happening. In the case of observed behaviour, such as the sharing of sensitive information (address, social security number, etc.), it is not clear that these are truly free or deliberate choices; in many cases people may be unaware that their privacy is being violated, or have no other options. Similarly, on the experimental side, part of the evidence may be explained by how things were framed. According to Nissenbaum, when the privacy-related aspects of a choice are made salient, people take it more seriously, and in some cases are willing to pay at least slightly more for privacy. Finally, it may be that most people have simply not thought very hard about the potential downsides of giving up their personal information, or feel it is too late to start worrying.
Regardless of shifting attitudes, however, we should avoid falling into the trap of concluding that privacy doesn’t matter if no one is actively trying to protect it. As Nissenbaum says,
“Privacy’s moral weight, its importance as a value, does not shrink or swell in direct proportion to the numbers of people who want or like it, or how much they want or like it. Rather, privacy is worth taking seriously because it is among the rights, duties, or values of any morally legitimate social and political system.” (p. 66)
This quote highlights one final reason to value privacy: because we have long done so (at least collectively). Although this is inherently a rather conservative argument, there is some logic in not letting go of tradition too carelessly. Given how fragile functioning democracies seem to be, and how difficult they are to create, we should be cautious about jettisoning anything that has long been regarded as a fundamental pillar of such a system, unless there is a compelling ethical case for doing so.
In an effort to tie together all the diverse threads of thinking on privacy and the reasons for its importance, Nissenbaum proposes a framework which she calls “contextual integrity.” A large part of the book is devoted to exploring this proposal, but the basic premise is that we all have certain expectations about how the world works with respect to the flow of information — who has access to our data, what they will do with it, who they might share it with, and so on. The main insight of contextual integrity is that when something changes what is possible or how information is handled (for example, through technological development), it is likely to eventually provoke a strong negative reaction if it differs too greatly from established norms.
To take a rather analog example, for many years people sent (and presumably still send) postcards — single pieces of card sent through the post without an envelope. Although a postcard served a roughly equivalent purpose to that of a letter, there is the obvious difference that the text written on it is open to inspection by anyone who might encounter it. Although this method of communication in some sense lacked privacy, it was usually taken for granted that the post office would not intentionally read messages written on postcards, both because of a presumed integrity, and because of the sheer volume of mail it had to process. Nevertheless, the way in which postcards were handled set certain norms about how they should be used. Most commonly, postcards served to convey a kind of “I am here” message (often showing a picture of the place they were being sent from). We might be somewhat incensed if a friend were to write extremely personal information about us on a postcard, even if it was unlikely to have been read by anyone along the way. Similarly, we would presumably be outraged if we learned that the post office was surreptitiously opening and reading all of our regular mail, even if we hadn’t sent anything truly personal. The idea is not that we have suffered some direct harm, but that our expectations about what was proper have been violated.
For many years, people seem to have acted as though emails and other forms of instant messages were effectively like sealed letters — safe from the eyes of others between sender and receiver. In fact, of course, all along they were much more like postcards — open to inspection by the companies through whose wires they passed. Moreover, while these companies might not have been interrogating each of our messages individually, they were able to solve the problem of sheer volume faced by the post office by algorithmically analyzing the text (for the purpose of advertising, etc.).
Contextual integrity does not itself try to resolve what is right or wrong, or what the best system of communication and information processing would be. Rather, it provides a framework for thinking about technologies that might have an impact on these things. Specifically, it suggests that the developers of a system or technology ask how their proposed development might alter the ways in which information is obtained, stored, and transmitted, who might be affected by this, and how it might affect them. If this implies an alteration of existing norms, then the change should be carefully evaluated in terms of costs and benefits, and clearly explained to users; alternatively, a design might be found which implies less of a violation of expectations.
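Although the book itself is not a technical one, the framework is regular enough that it has been formalized in later academic work by Nissenbaum and collaborators. As a rough illustration of what the parameters of an information flow look like, here is a minimal sketch in Python. The five fields follow the framework’s standard description of a flow (sender, recipient, subject, information type, and transmission principle), but all of the class, function, and context names are my own inventions for illustration, not an established API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow, described by the five parameters of
    contextual integrity: the roles of sender, recipient, and data
    subject, the type of information, and the transmission principle."""
    sender: str       # role transmitting the information
    recipient: str    # role receiving it
    subject: str      # role the information is about
    info_type: str    # e.g. "medical history"
    principle: str    # constraint on the flow, e.g. "with consent"

def violates_context(flow: Flow, entrenched_norms: list) -> bool:
    """A flow that matches none of a context's entrenched informational
    norms is a prima facie violation, flagged for further evaluation."""
    return flow not in entrenched_norms

# Entrenched norms of a toy healthcare context: patients confide in
# their physician, who may consult a specialist with consent.
healthcare_norms = [
    Flow("patient", "physician", "patient", "medical history", "voluntarily"),
    Flow("physician", "specialist", "patient", "medical history", "with consent"),
]

# A novel flow introduced by some new technology: patient records
# sold to an advertiser.
novel_flow = Flow("physician", "advertiser", "patient", "medical history", "sold")

print(violates_context(novel_flow, healthcare_norms))  # True
```

The point is not that real norms can be enumerated as a finite list (they cannot), but that who, about whom, to whom, what, and under what constraints are precisely the variables the framework asks a designer to check before changing how information flows.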
Nissenbaum admits that there is an element of conservatism in her proposed framework, with a default tendency towards keeping things the same. This is not, however, to suggest that we should never change things; just that we should be prepared for blowback if things change too abruptly or dramatically. And although there are cases where systems are truly in need of change, it seems far more common these days that changes tend toward less privacy and potentially worse outcomes. As Nissenbaum says,
“An evaluation that finds a given system or practice to be morally or politically problematic, in my view, is grounds for resistance and protest, for challenge, and for advocating redesign or even abandonment. If special circumstances are so compelling as to override this prescription, the burden of proof falls heavily upon the shoulders of proponents.” (p. 191)
One of the great virtues of Nissenbaum’s framework is that it allows us to make sense of the response to the revelations about how Cambridge Analytica had abused Facebook’s terms of service. It wasn’t just that people were upset that some of their “private” information on Facebook had been shared with a third party; it was that this information was gathered en masse, that it was gathered by exploiting the very people who were ostensibly their “friends,” that it was then given to a third party they found questionable, and that it was then used (potentially) to help elect a politician that many of them likely despise.
As a final extended case study, Nissenbaum spends some time on the idea of data aggregation, and the harms that can follow from it. Perhaps one of the most concerning examples (because of the power imbalance involved) is the trend towards putting all court records online. This is a case where the same information has long been in some sense public, in that an individual (a journalist, for example) could easily request it and gain access. By making all records available in one place, however, with just a few clicks, almost anyone can suddenly harvest a large amount of information on a wide range of individuals. When combined with other kinds of information, companies can easily bootstrap this into highly detailed portraits of individuals. This is a classic case of a massive systemic change resulting from what might seem like a fairly superficial technological change. (A related low-tech example is the reverse phone directory: it contains exactly the same information as a phone book, and yet it enables a totally different type of information retrieval, as the sketch below makes literal.)
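The reverse directory example can be expressed almost word for word in code: inverting a mapping adds no new information, but it changes which questions are cheap to ask. A minimal sketch (the entries are, of course, made up):

```python
# A phone book maps names to numbers; identifying an unknown caller
# would require scanning every entry.
phone_book = {
    "Alice Smith": "555-0142",
    "Bob Jones": "555-0199",
}

# Inverting the mapping preserves exactly the same pairs...
reverse_directory = {number: name for name, number in phone_book.items()}

# ...but a previously awkward query is now trivial: a bare number
# resolves directly to a name.
print(reverse_directory["555-0199"])  # Bob Jones
```

Aggregated court records and broker databases do the same thing at scale: they reorganize nominally public information so that queries which once required real effort become effectively free.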
In any event, aggregation is something that should now concern all of us. The possible harms include unfair discrimination against individuals for arbitrary reasons, the possibility of “price discrimination” (charging different people different amounts depending on what they are able or willing to pay, something we are already seeing in the price of airline tickets and other areas), mass manipulation, and so forth. Moreover, aggregation will almost certainly exacerbate informational inequality, in that far more will be known about those who are vulnerable, and far less about those who are savvy or rich enough to build barriers around their privacy.
It seems trivially true that no one can reasonably expect complete control over what is known about them, and this is not what Nissenbaum is advocating. Rather, the point is that we must pay attention to the context of informational flows, and if a technological change threatens to disrupt that context, or the nature of those flows, we should put serious thought into the implications of such a disruption, and be prepared for the possibility that people will react quite negatively once the full extent of the change is understood.
In her concluding words, Nissenbaum reminds us that
“We have a right to privacy, but it is neither a right to control personal information nor a right to have access to this information restricted. Instead, it is a right to live in a world in which our expectations about the flow of personal information are, for the most part, met; expectations that are shaped not only by force of habit and convention but a general confidence in the mutual support these flows accord to key organizing principles of social life.”