So my privacy has been violated. So what?
In which we try to consider privacy violation consequences in a systematic fashion
True story, when drafting this post I accidentally named it “why do we care about privacy?” just like the last post. Because I’m creative and original!
This will pretty much be a summary and exploration of the ideas in Karwatzki et al.’s work on the consequences of privacy violations.
Last week we examined an alternative perspective on the perpetual question “how do I get people to care about privacy”. This week I’m going to do something similar but different – why do we care about privacy? Or more specifically, what are the potential consequences of having our privacy violated, of having information leaked or stolen?
This has obvious implications for threat modelling – something grossly under-utilised and misunderstood in the privacy “community” for reasons I’m not going into here. Part of threat modelling is considering the possible consequences of a piece of information being revealed against your will, which informs how much effort/money/inconvenience you should exert to prevent that. Because privacy always involves at least some trade-off against other things – because that is how life is – you should always weigh that cost against the risk being mitigated. What benefits are worth what costs is going to ultimately be individually determined, but having some idea as to what the benefits are is still necessary to do this in a sensible way.
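To make that weighing a bit more concrete, here’s a tiny back-of-envelope sketch. This isn’t from Karwatzki, and every number in it is invented – the point is only the shape of the comparison: expected loss (likelihood times impact) versus what the mitigation costs you.

```python
# Rough illustration only: the probabilities and dollar figures below are invented,
# and in real life these are subjective guesses, not measurements.

# Each threat: (rough probability per year, rough impact if it happens, in $-equivalents)
threats = {
    "employer finds my political posts": (0.05, 20_000),  # lost job opportunity
    "data broker leaks my home address": (0.01, 5_000),   # moving costs, stress
}

# What the mitigation (deleting old accounts, using an alias, etc.) costs per year,
# lumping money, time and inconvenience into one rough figure.
mitigation_cost = 300

expected_loss = sum(p * impact for p, impact in threats.values())
print(f"Expected yearly loss without mitigation: ~${expected_loss:,.0f}")
print("Mitigation looks worth it" if expected_loss > mitigation_cost
      else "Mitigation probably isn't worth it")
```

Obviously nobody actually knows these probabilities, and the “impacts” aren’t really reducible to dollars – but even a hand-wavy version of this comparison beats not doing it at all.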
There are several ways to do this, but today we’re looking at one work by Karwatzki and colleagues. Building on their previous work, Karwatzki et al were trying to measure the potential costs of privacy violations, which requires at least some kind of classification system. Which makes sense – being physically beaten is very different from being turned down for a job, which is different again from being prosecuted by the government. I might be really worried about the government, for example, discovering I practise a prohibited religion because I don’t want to be prosecuted and thrown into prison, but less worried about my income being limited. It’s not that one is necessarily worse than the other – although that’s definitely plausible – so much as they’re qualitatively different concerns which require different types and levels of intervention to prevent.
Karwatzki broke the potential consequences into 7 broad categories: physical safety, social ramifications, resource/economic impacts, psychological impacts, prosecution/legal, career concerns and freedom considerations. Obviously these are going to cross over and inter-relate, sometimes in a very direct way – being locked up is not generally great for your career or social standing, for example. So don’t think of these as rigid categories, but as shades of colour that can mix. The difference isn’t binary, but rather one of emphasis.
This post is going to be somewhat less “psychology-focused” than my usual. There are going to be psychological elements, of course – that’s how I roll – but the topic of necessity wanders afield of that.
Physical safety
This is probably the philosophically and psychologically simplest concern – if some piece of information becomes publicly known, then my physical safety is significantly reduced. For example, GSM (gender or sexual minority) people often keep their gender or sexuality secret out of concern about being physically attacked, whether by the government or by plain violent thugs.
(Just to be clear, there’s some really interesting argument about what “violence” means – does it have to be physical, or can other behaviours be considered “violent”? I’m not interested in that argument here; for the purposes of this post “violence” means “being physically attacked and/or directly physically harmed without your consent”.)
The FBI recorded about 8,000 “hate crimes” in 2020, of which about 20% were motivated by the target’s sexual orientation and about 13.3% by their religion. England estimated 19,000 and 23,000 “personal crimes” motivated by these characteristics respectively over 2016 to 2018 (although these numbers have question marks over them, so you can’t conclude England is dramatically worse than the US based on those numbers alone). Both GSM status and religion are aspects of a person that you can’t necessarily tell by looking, so people who belong to targeted groups might have legitimate concerns about their membership becoming public.
Other people who might have concerns about their physical safety should information about them be leaked are stalking victims, or victims of domestic violence, especially if they’ve moved away from their abuser/stalker. In his OSINT podcast, Michael Bazzell tells a story about a client of his who was in this exact situation, to the point where when their house got broken into, having the client’s name on the police report was a potential concern as it might reveal the client’s new address. Finding statistics on how common this situation is is obviously near-impossible, but since 15% of women and 6% of men have experienced stalking in the US, even if we (totally arbitrarily) assume that 1% of victims need this kind of privacy, that’s still a lot of people in the absolute sense.
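For a rough sense of scale – and to be clear, the adult population figures below are my own ballpark assumptions; only the 15%, 6% and the arbitrary 1% come from above:

```python
# Back-of-envelope only: the population figures are rough assumptions,
# and the 1% is (as noted above) totally arbitrary.
adult_women_us = 130_000_000  # assumed ballpark
adult_men_us = 125_000_000    # assumed ballpark

stalking_victims = 0.15 * adult_women_us + 0.06 * adult_men_us  # 15% / 6% from the text
needing_strict_privacy = 0.01 * stalking_victims                # the arbitrary 1%

print(f"~{stalking_victims:,.0f} stalking victims")
print(f"~{needing_strict_privacy:,.0f} people needing this kind of privacy")
```

Even with that deliberately conservative 1%, you end up with something in the hundreds of thousands of people.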
And lastly, of course, there are people in the local equivalent of a witness protection program. If they’re subject to a long-term or lifelong protection or hiding program due to a particularly persistent or well-resourced threat (like an organised crime group), leaks of their personal information can literally put their lives at risk, or the lives of their loved ones.
Social consequences
We all have things that we keep secret from our friends, even from our loved ones. This might be because they’re just personal things (as I described last week, things we have strong “psychological ownership” over), but it might also be because having that information known will impact our relationships with them. For example, I have a friend (John) who does not tell his family his religion, simply because he does not want to deal with the hassle and drama that would result if the family knew John had a different religious perspective than they do.
That would be a fairly mild form, but illustrative of what this concern is – if a piece of information comes out, it will harm or negatively impact our relationships. This can be as mild as my friends making boring jokes about my deep and abiding love for hair metal, or as serious as family members disowning us entirely because of our religion, sexuality or politics. Every time the US has an election these days you hear stories about people cutting ties with family members because they voted for a different candidate. The Brexit vote was notoriously tribalistic and socially toxic – I can’t find any hard numbers, but it’s plausible that at least some people kept their votes or perspective private to avoid social consequences.
The effect can be less concrete as well – sometimes you keep aspects of yourself private in general, even if your close friends know them, in order to avoid issues with your broader reputation. For example, an organiser of an unrelated community group (e.g. a board game group, a photography group) might keep their religious views private to avoid accusations of it impacting the community group (insert your own examples here). If the religion (or lack thereof) has negative stereotypes, a person might get tired of the “Yes, I’m X, but not like that” dance, so they just keep it to themselves.
This obviously ties into the later career consideration. If your reputation is harmed enough, it may impact your ability to get or keep a job – even if you’re a perfectly lovely member of Group X, if people think Group X are generally terrible, they’re going to be less likely to employ you because a) they don’t want to take the risk that you are a terrible X-er, and b) they may be concerned about their customers not wanting to think they support Group X. (Notice that this effect does not require Group X to actually be any more terrible than everyone else, only to be thought of that way.)
I have a friend (Bob) who has the idea that people of a certain political ideology are all raging fanatics who will destroy society and who deny basic reality. I know for a fact that there are people who he knows and gets along with who fall within that umbrella. I don’t know if Bob knows they do and he’s carved out a little mental exception, or if he’s so blinded by the stereotype he has in his head that he just doesn’t know. I do know that those people are generally not going to risk bringing it up where Bob can hear. Not because they’re afraid for their safety – Bob can be a tool, but he’s not going to be remotely violent – they just know that it’d make interacting with Bob more difficult.
Resource/economic impacts
The previous two were fairly individual in their impacts – people are managing what Bob knows, or John is managing what his family knows, but it’s ultimately part of the general social networks we operate within, which are, at the end of the day, technically optional. If John wanted, he could cut ties with his family. It’d suck in many ways, but it’s an option.
This category is not that kind of thing, and refers to the broad systems we work within. Basically, it has to do with someone’s access to resources, or their economic functioning within society. For example, if it becomes known that I secretly deeply love being tied upside-down and being beaten with an electrified flogger while drinking huge amounts of vodka, a health insurance company might deem me at greater risk, and charge me more for coverage, or deny it entirely. This would impact me economically, and depending on the system I work within I might not be able to access medical services as easily.
More real-world scenarios can include being charged more for insurance depending on your sexual orientation or gender due to perceived risk. In the US, pre-existing conditions have been grounds for refusing cover, although it’s currently illegal to do so. But as is always the case with protection based on law, the law can always change, and once information is out there, it’s notoriously hard to recall. The UK, Australian and Canadian situations are more complicated.
Other possible risks involve things like receiving government benefits. Australia recently flirted with bringing in mandatory drug testing for people on welfare, a policy that the US is fond of as well. The UK apparently sometimes uses drug testing as a consideration in family courts.
Many welfare programs require quite invasive information about recipients’ personal lives. Australia, for example, can require information about your sex life to determine payments (no word on how asexual people are assessed on this criterion). There was a well-publicised situation where two pensioners who lived together had to argue about whether their sexual relationship affected the benefits they received. Same-sex couples have historically been officially described as “friends”, so whether they were “out” or not really impacted their ability to receive benefits.
(Look, I swear I’m not targeting Australia in this section. The Australian welfare system is just really weird.)
In some contexts certain vaccinations may also be necessary to gain access to benefits, including childcare. This varies heavily not just by country but by state and even individual business, so I’m not going to bother citing sources lest I give a misleading impression – I don’t need someone saying “Oh, you said this happens, but this is illegal in X jurisdiction!”.
Psychological consequences
Ironically, given the usual psychological focus of my approach, this is the hardest to talk about coherently. But I do think it’s one of the most common concerns.
A lot of people who get into privacy, especially if they haven’t created a coherent threat model, can’t point to a specific harm they’re worried about, but instead regard the regular violations of personal privacy by corporations or governments as just generally discomfort-inducing. “Icky” is the word I like to use in this context, because it reminds me that it’s not necessarily rationally based, but still has important meaning to me. True story, the main reason I stopped using Facebook was because it made me feel icky. I didn’t seriously anticipate my physical safety or finances or whatever being substantially impacted, but the sheer extent to which it violated my day-to-day expectation of privacy made me really uncomfortable in itself, even though I knew it was very unlikely to cause any concrete harm.
Is this invalid? Of course not! Our emotions are often cues to when something is violating our values, even if we can’t specifically describe how. And that is completely valid – I don’t have to give a 100% rational argument why I don’t like something to justify not engaging with it. I don’t have to give a rational argument why I don’t watch war movies; “I don’t enjoy them” is entirely sufficient. As Philosophy Tube once said, albeit in a very different context: “I don’t have to tell you why, I don’t even have to know why, the fact that I want it is enough.” (Although that video is talking about abortion, I think it’s telling that the villain in the piece, immediately after that, cites the victim’s wife and children and address as a clear threat.)
That said, it is important to distinguish when something is just making us feel icky from when it poses a more concrete or direct harm, because again, there’s a qualitative difference here which we need to deal with differently.
Ickiness is a very mild form, but for other people having their privacy violated can cause very serious anxiety, even rising to clinical significance. One study of identity theft victims found that the majority suffered ongoing emotional and physical symptoms for months after the theft was discovered. In addition, victims were more likely to be subsequently targeted for offline violent crimes like muggings, even after controlling for pre-identity-theft victimisation and risk. (My personal guess is that the victimisation and stress lead to changes in posture and behaviour, including cues that make victims more attractive targets to muggers, but I can’t offer much support for that beyond “seems plausible?” and a broad shrug.) While this is an obvious case of consequence types overlapping (the emotional consequences were mediated by victim income, suggesting that richer people experienced less stress, probably because the marginal impact on their lives was smaller), this and similar findings elsewhere illustrate just how seriously having our privacy violated can affect us on a psychological level. Other findings show it causing less trust both of online services (which in general is probably wise, but this also includes legitimate ones, and can cause serious difficulty in your ongoing life if you don’t trust e.g. internet banking) and of people in general.
So, yes. Although it’s easy to sniff at the psychological kind of consequences, and it’s important to weight them appropriately, these can be – as the kids say – no joke.
Prosecution/legal
After things like identity theft, this is probably where most people’s minds go when we think about privacy violations. Whether you’re a GSM person in a repressive country, a person with a prohibited political perspective in a repressive country, or even just an out-and-out criminal – we may not like to think about criminals in the privacy sphere, but being prosecuted for a crime is a legitimate privacy concern.
(I’m specifically not dealing with the question of how we define which behaviours count as crimes, but know that that is a deeply complicated political topic. Remember that homosexuality and inter-racial relationships used to be outright illegal even in countries which are pretty free today. Or, for a contemporary example, abortion: it was legal in the US for a while, is now illegal again, and might be legal again in the next few years. Or the question of using deadly force when you’re being attacked: whether it’s totally permissible, or only if it’s the only option available, or whether the attack needs to be actually happening or simply anticipated, can vary widely depending on where you live. And, of course, even “actual” criminals like terrorists or serial killers or paedophiles are still humans, and as such deserve human rights, including privacy.)
Even if we grant that law enforcement requires privacy-violating powers to some extent, most of us would agree that those powers should have some kind of limits (and definitely should not extend to full-blown entrapment). So if the police want to know about my collection of erotic horse poetry, they can’t just wander into my house and read it, no matter how much I torture the rhyming meter. Even if we grant that my crimes against the written word would absolutely justify state involvement, they probably don’t warrant violating innocent people’s privacy. If, however, I was kidnapping people off the streets and torturing them in my basement, and people could hear the screams of the victims, or someone saw me carrying unconscious people inside, it’s somewhat easier to justify the police coming in.
Of particular interest is the idea that some piece of information about myself might be totally acceptable or legal now, but that could change in the future. For example, fifty years ago someone (Jane) who was politically active against GSM rights wouldn’t have faced nearly the same degree of social censure that the same behaviour would attract now. If it comes out that Jane used to march and write letters and campaign against things like same-sex marriage, it’s very possible that Jane would face greater investigation into potential criminal behaviours than someone else would, and as we all know, if the police are looking they’ll find something to charge you with, even if what drew you to their attention in the first place was totally legal. A lot of minorities end up charged with so-called nuisance crimes like loitering way more than non-minorities – technically those are laws on the books, but if a police officer doesn’t like people of a particular group, they often have a lot of “crimes” they can bring out. Keeping your membership in those groups hidden – easier for some than others – is not a bad way to avoid this scenario.
Career
While all of these consequences naturally bleed into each other – being prosecuted for something is definitely going to have elements pertaining to physical safety, psychological, financial and social consequences, for example – career is one that really does that in a way that makes it hard to separate into its own category. Karwatzki examines it using questions like whether it would “affect [your] career negatively” or “reduce [your] career prospects”, but I like to think of it in more grounded terms.
Example: I own a cafe. It’s a pretty nice place, in a good position in a CBD to get a lot of foot traffic, and business has been picking up to the point where I want to hire another barista. After I advertise for a while, I get, let’s say, 50 applications, which I whittle down to a half-dozen interviews. After the interviews, I have it down to two main candidates who would be a good fit and can probably do the job. I do a quick search, and it comes out that one applicant is pretty publicly associated with a somewhat controversial communist party. In and of itself I don’t care, but now I’m worried whether they’re going to start calling my customers corporate lapdogs of capital, or cause trouble in other ways. So I go with the other candidate.
The point of that story isn’t to say I’m being reasonable or not in that scenario, but to illustrate a way in which an aspect of the candidate’s personal life – of arguable relevance – has directly cost them a job opportunity. Very similar patterns happen with corporate advancement – partnership offers at law firms are notoriously affected by arguably irrelevant personal details, for example – so this can be a legitimate concern for someone. Many countries have laws on the books that outlaw religious, gender-based, racial or other discrimination, but these are notoriously difficult to enforce because they don’t outlaw behaviour so much as the reasoning behind behaviour. So the law doesn’t say I have to hire the communist, but it says that I can’t use their political affiliation to make my decision. But knowing our own reasoning is notoriously difficult even within ourselves – guessing why someone else made a decision is near-impossible, much less proving it in a formal proceeding like a lawsuit.
True story: I applied for a job at a supermarket, and they noted that part of the job application involved a medical check, a police check and a social media check. I’m not on social media, and I’m pretty sure that’s part of why I didn’t get that job, although I can’t prove it. In theory, what I say on my personal social media, or who I follow, should have absolutely zero bearing on whether I can stock shelves or scan groceries. And depending on what they ask, my medical history – beyond “can you physically do the job”, which they can just, you know, ask – and my criminal history are also none of their business. But I don’t know how they made their decision, so the potential for abuse and discrimination is massive and almost entirely impossible to prove.
Now, let’s maintain perspective – this was a basic job at a supermarket, not a six-figure job at a law firm or high political office. But for many people, this will be their first job, or if they’re a member of a discriminated-against class, this may be the best they can aim for, or at least a necessary first step towards establishing a work history and getting references. And if the person making the decision holds negative beliefs about people of your race/gender/religion/political view/movie taste, they can act on those beliefs relatively easily. The only real defence against that kind of thing is to keep it hidden – if they don’t know you’re Jewish, they can’t discriminate against you for it. Now, this is obviously easier for some things than others – it’s pretty hard to hide your gender or your ethnicity, for example. But it illustrates the concern here.
Freedom
This is both really obvious as a concern and also the vaguest and hardest to talk about coherently. So I’m going to just kind of wave generally in its direction to illustrate what it refers to, and ask for your co-operation and charitable reading.
It’s important to differentiate this from the concerns about being prosecuted for crimes or other legal consequences. This refers to the concern about having your freedom compromised in a more insidious and subtle way, through manipulation or censorship.
Real example: in 2018, a former employee of Cambridge Analytica went public about how the company had used illegally gained information to shape advertising on things like electoral campaigns and the Brexit campaign, probably among others. The impact of these campaigns is unclear (and arguably impossible to establish), but by using private information to target advertising, in theory much greater effectiveness could be reached. And if we grant that people follow at least broadly consistent (if very complex) patterns – that is, if psychology and behavioural neuroscience are remotely feasible as sciences – then it is at least feasible in principle to influence people this way.
This is exactly the goal of marketing – including corporate advertising but also things like health campaigns to discourage smoking or domestic abuse, or to encourage exercise, or indeed privacy-encouraging behaviours. And while it’s probably not possible to craft a 30-second ad that will totally change people’s behaviour and outlook, it’s totally plausible to cause changes around the margins and even potentially change cultural norms over a long period of time. So at least some people will have their minds changed against their will.
This can definitely happen with censorship, and is arguably one of the common justifications given for it. For example, most countries have some kind of media classification system that differentiates between material suitable for children and material suitable only for adults, the argument being that some material has the potential to cause some degree of harm to those who are too immature to engage with it critically.
Film classification is maybe not the best example if we’re talking about this from a privacy perspective, but it illustrates the ways in which media can be both actively manipulative (through targeted propaganda and ads) and able to hide things (through censorship, especially selective censorship). Because it’s not hard to imagine, say, Facebook deciding to selectively hide certain kinds of information or pages from its users in order to encourage certain viewpoints or behaviours – arguably this is part of how they propose to “combat misinformation”. It wouldn’t be hard to have an algorithm act differently depending on, say, the user’s age; young people are viewed as more easily manipulated, so maybe posts pointing out ambiguous evidence around masks in combating a pandemic get hidden from those users, so as to nudge them towards masking up, for example.
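Just to make the mechanism concrete – and this is a purely hypothetical sketch of my own, not a description of any real platform’s code or policy – that kind of selective, attribute-based filtering is a trivially small piece of logic:

```python
# Purely hypothetical sketch of attribute-based selective filtering.
# Topic names, the age threshold, and the data shape are all made up for illustration.

SUPPRESSED_FOR_YOUNGER_USERS = {"ambiguous_mask_evidence"}
AGE_THRESHOLD = 25  # arbitrary

def visible_posts(posts, user_age):
    """Return the feed, hiding certain topics from users under the threshold."""
    if user_age >= AGE_THRESHOLD:
        return posts
    return [p for p in posts if p["topic"] not in SUPPRESSED_FOR_YOUNGER_USERS]

feed = [
    {"id": 1, "topic": "ambiguous_mask_evidence"},
    {"id": 2, "topic": "cat_pictures"},
]
print(visible_posts(feed, user_age=19))  # only the cat pictures survive
```

The unsettling part isn’t the code, which is trivial; it’s that the private attribute driving the decision, and the decision itself, are both invisible to the person on the receiving end.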
Conclusions
So, that’s seven broad categories that the consequences of privacy violations can fall into. As I said at the start, which are more likely or more severe is going to depend on the individual and their context, as well as what information is being revealed and to whom. Whether your concern is hate groups looking for lists of people who fall within their targets, being profiled out of a job, or having some nominally-public information hidden from you based on your location or some other private characteristic, it’s going to vary wildly.
Threat modelling is the most important aspect of privacy. You can’t hide everything, and arguably you probably don’t want to. Even if you could, the very extremity of the steps you’d have to take would in itself draw attention to you. To a point, this is inevitable; using Tor where that’s rare really does make you stand out, but if you’re less concerned with people knowing that you’re using Tor than with what you’re looking at, that’s an acceptable trade-off. If you have to never talk to anyone, ever, about your religious beliefs, that’s pretty extreme, but if you live in a place where you might get killed for them, it’s potentially justified. I’m not going to say you’re right or wrong about where you fall on those trade-off curves – I have a policy against arguing with people’s threat models (although I will definitely invite people to examine the reasoning behind them) – but in order to have a coherent one, you need to have considered the potential consequences of the information coming out, so you have a sensible answer to what it’s worth to hide it.
You don’t have to follow this classification system. It definitely has issues, for sure – the categories bleed into each other to the point where separating them is sometimes almost impossible; how do you draw a hard line between legal prosecution, freedom concerns, physical safety and social consequences? But I find that a starting framework helps massively, because you can modify it to suit your specific case rather than coming up with one ex nihilo.