One of the most common questions within the privacy “community” is “how do I get people to care about privacy?” The people asking this question see that our society is either outright hostile to the idea of personal privacy (e.g. “What do you have to hide?”) or views it as a luxury – nice to have, but ultimately subservient to more important things (e.g. stopping terrorists, making Alphabet another trillion dollars). I’m not interested in litigating the extremely complicated questions of how we trade off between personal privacy and law enforcement investigation, what constitutes “consent”, or how we differentiate between “private” and “public” spaces. That’s all work for the philosophers, and while I’ll confess to having read more philosophy than is probably healthy, I’m ultimately a psychologist, and I’m interested in what people do and why.
What those people asking the question are really saying is “how do I make people value what I value to the same extent as I do?” Which is, ultimately, a psychological question. Why do people value what they value, and how does this valuation affect behaviour? What does it mean, psychologically, to value things? How do people trade off between various values – like the perpetual privacy/security and convenience trade-off?
This is an extremely big question, and I’m certainly not going to be able to answer it here. My own research does deal with this, but a) only a small part of it and b) I haven’t actually gathered and analysed the data yet. But I can go over what work has been done, and maybe this will be useful, or at least interesting. My intention here is just to get you thinking in a different direction than you might otherwise.
Cost/benefit analysis
There are a lot of ways to think about this question. Barth and de Jong did a systematic literature review (in case you don’t know, a systematic literature review is exactly what it sounds like – you define a set of inclusion criteria, grab all the papers that fall within those criteria, and then look at them and see if any patterns emerge. It’s not quite a meta-analysis – you’re not doing any additional statistical analysis – but it’s a great way to just kind of get the vibe of a body of work), and they found that a common theme in the research is that people engage in a kind of systematic, semi-rational cost/benefit analysis – I can choose not to use Facebook, but then I find it harder to communicate with my friends and connect with community groups. So I choose to “sell” certain parts of my information to “buy” that benefit of communication ease. On the surface, this is a tempting and intuitive perspective – it certainly matches how people describe their process – but people are never that simple. There are two major issues with this perspective:
Just because that’s how people describe their process doesn’t mean it’s actually what’s going on in their heads. There’s a whole body of work suggesting that people’s stated reasons are often justifications of decisions made for completely different reasons – reasons we often don’t really know ourselves. Look at all those people who honestly and sincerely say they called the police on someone for “acting suspiciously” on a public bench when that person was black or shabbily dressed, but who wouldn’t call about a white or well-dressed person sitting on the same bench in the same context. Those people are not lying – they honestly do believe that was why they acted the way they did. But what actually happened was their brain saw a black person, associated that with “dangerous” for various complicated reasons, and triggered the appropriate response (call the police). There’s a pretty well-known example where you show someone a series of paired photos and ask which face they’d prefer to work with/date/whatever. Then you ask them why they chose each face – except you show them the face they didn’t choose. With enough pairs (they had around 10), people will come up with a justification for the choice they never actually made, and believe that justification.
A lot of research has to deal with the problem that how you ask the question – or which question you’re asking – shapes the answer you get. So if a lot of people are taking a cost/benefit analysis perspective in their work, even for good and valid reasons, then the research body as a whole will suggest a strong effect of cost/benefit analyses. Not because it’s what people actually do, but because it’s how the researchers chose to approach the question. It’s like if I chose to measure intelligence through how many decimal places of pi someone happens to know – my work is probably going to show that intelligence largely consists of remembering long, arbitrary strings of digits. I’m not being dishonest or unreasonable; I may have good and valid reasons for that approach, which I clearly lay out and note the limitations of. But if enough people do it, the pattern can be set, and if you’re not very, very careful when reading through it, you can be mistaken about what the pattern means.
So when Barth and de Jong find that several papers note this is not how people actually behave when measured, they have two main choices. One is “well, I guess the cost/benefit analysis perspective hasn’t captured all of how people engage with this question”; the other is “wow, some people must really be dumb”.
But I don’t want to be too harsh. I do think this perspective has value. People might not work strictly as cost/benefit machines, but we do at least sometimes. If I can buy a pretty good iced coffee for $3, or one that’s really terrible for $10, then unless I’ve got a pretty compelling reason (e.g. signalling wealth) I’m going to choose the good one. We would be pretty confused by anyone who didn’t. So the fact that it doesn’t perfectly explain all of human behaviour is no reason to ignore its contribution.
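To make the model concrete, here’s a toy sketch of what the cost/benefit (“privacy calculus”) view implies if you take it completely literally – a minimal sketch, where the variables and weights are entirely made up for illustration; nobody claims people consciously run this computation:

```python
# Toy sketch of the "privacy calculus" (cost/benefit) model of disclosure.
# Everything here is illustrative – the items and weights are invented,
# and no researcher claims people literally compute this.

def would_disclose(benefits: dict[str, float], risks: dict[str, float]) -> bool:
    """Disclose if the summed perceived benefits outweigh the summed perceived risks."""
    return sum(benefits.values()) > sum(risks.values())

# The Facebook trade-off described above, with made-up weights:
benefits = {"easy contact with friends": 0.8, "community group access": 0.5}
risks = {"data sold to advertisers": 0.6, "behavioural profiling": 0.4}

print(would_disclose(benefits, risks))  # True – benefits (1.3) > risks (1.0)
```

The research question is essentially whether anything this tidy is happening in people’s heads – and the answer seems to be “sometimes, sort of”.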
Kehr and colleagues examined people’s engagement in privacy-endangering behaviour, specifically using a (fake) app that allegedly measured their driving and gave feedback on how to improve (e.g. leaving more space when following, speed recommendations when turning). The app passed information on to insurance companies, which meant users could get more accurate estimations of risk, and thus potentially save money on car insurance in addition to the targeted driving recommendations. (I’m deliberately skipping over potential confounds where privacy behaviour could be correlated with riskier driving behaviours.) They told the participants that the app would only pick up certain information, deliberately chosen to be more or less “sensitive” – low sensitivity was things like what year the car was built; high sensitivity was things like detecting speeding. They measured participants on a few criteria – I’m not going to go over all of them here, but the ones I’m going to talk about are:
General institutional trust – specifically, how much they trusted that smartphone app companies would not misuse the data and be honest about what data they were gathering
General privacy concern – basically, how generally concerned they were about their personal information being gathered without their knowledge
Perceived benefits of information disclosure – the extent to which they felt they would gain some material benefit for using the app
So Kehr and friends found that intention to disclose personal information was positively predicted by institutional trust, perceived benefits, and expected privacy (as you’d expect), and negatively predicted by general privacy concerns and perceived risks (again, no surprises here). This was consistent across both countries (the USA and Switzerland), with pretty similar correlation strengths. Dinev did a similar study comparing the USA and Italy and found a mixed bag of results – sometimes similar across countries, sometimes pretty different – but they’re broadly similar enough to Kehr for our purposes; I just wanted to mention it.
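Written out, the structure of that finding looks roughly like a standard linear model (the notation is mine, not Kehr’s – a sketch of the relationships, not their exact analysis):

$$\text{intention to disclose} = \beta_1\,\text{trust} + \beta_2\,\text{benefits} + \beta_3\,\text{expected privacy} - \beta_4\,\text{privacy concern} - \beta_5\,\text{perceived risk} + \varepsilon, \qquad \beta_i > 0$$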
So this would suggest that, from this perspective, people use services when the benefits are viewed as worth the risk, and when they have a general sense of trust that the companies in question will use the information gathered in ethical ways – like Uber Eats showing me nearby restaurants rather than far away ones. They’re less likely to use them when they think the companies will use the information in unethical ways – like selling it to potentially malevolent or untrusted bodies (which is part of why you see so many people angry about TikTok and China, among many other reasons) – or when they see the risks as too high.
No surprises there, really.
Psychological ownership
There’s one variable that doesn’t get talked about much, and which I think sheds much more light on how we think about these things: “psychological ownership”. This refers to the sense in which we view something as, in some sense, “ours”. This doesn’t have to be legal ownership, and in fact often isn’t. Renters might come to call where they live “their place”, even though they know they don’t own it. People who work in an office generally have a sense that their desk is “theirs” in a way beyond just “the desk I work at”. That’s what I mean by psychological ownership – it’s not invalid, it’s a real thing that really happens, and it plays a really important psychological role.
One of those roles can be in regard to how we view personal information. I have a lot of information tied to and about me, but I don’t really feel the same way about all of it. Some I view as very private, very “my information” or “my business”; others I feel less possessive of. My medical history, for example, is something I’m generally reluctant to share unless I have a compelling reason, because I just view that as “not your business”. In comparison, my hobbies are really not something I view as especially “mine”. This has obvious ties to identity – what I feel ownership of, in many ways, makes up how I see myself. But it’s important not to conflate the two – I can’t “own” being straight or gay or asexual or bisexual or whatever (although the idea of a “personal characteristic” patent market is an amusing one). But it would be much easier to feel ownership over the information of what my sexual orientation is – that has a much closer tie to me as an individual. While this isn’t the same as viewing that piece of information as “private”, it’s very closely related – it would be hard for me to view my medical history as “mine” while telling everyone I meet about it, in the same way that it’d be hard to view the tomatoes I grow as mine if I constantly give them away. I could, theoretically, but it’s pretty unlikely in practice. People who give away a lot of money tend to describe not viewing that money as “theirs” – they don’t think they earned it, they were given it by God to help the world, whatever. On the other hand, people who view a place as “theirs” are more likely to work to protect it, or donate money towards helping it.
Unsurprisingly, if we feel more ownership of a piece of personal information, we’re less likely to “sell” it. Which makes sense – the more I view something as “mine”, the more “value” I’m going to place on it. So in regard to that question at the start (“how do I get people to care about privacy?”), the asker is approaching it from the wrong perspective; changing people’s values is hard, and questionably ethical.
(I’m going to do a post at some point about the body of work in marketing and service design which takes as a basic tenet that it’s OK to basically trick people into giving up private information against their will and apparently doesn’t feel any need to critically examine this position.)
Instead, maybe we should be asking “how do we increase people’s psychological ownership of their information?” And while I can’t find any research looking at this – gap in the research! Opportunity for low-hanging fruit! – I have an intuition that a lot of people who “don’t value privacy” generally have low ownership of this information, for whatever reason. This lines up with what they say: “oh, it’s not like they can get anything really important” – i.e. nothing they place importance on, or feel ownership of. In fact, I’d hazard a theory that the extent to which people feel ownership over their personal information is a key psychological difference between “privacy-conscious” and “privacy-unconscious” people.
OK, so what do we do?
How do you do that? I’m not sure. Psychological ownership in regard to personal information is a new approach, so there’s not a lot of work on it. But if we look at psychological ownership in other areas, maybe we can hazard some guesses.
A common perspective is that ownership can be created by giving people a degree of control over the space or project – marketing students who choose which product they’re creating a campaign around report a much greater feeling of ownership over the project; people often put personal items in their office space. Relating these to information is tricky – I can’t put a skull-and-crossbones flag on my medical information, and I can’t choose what type of kidney failure to develop. At a flat guess, my main thought is to emphasise how the information relates uniquely to the person. Yes, lots of people might have the same medical conditions or status as me, but the information that I have those conditions or status is unique to me. The fact that I do or do not need to take X medication is my information. I may choose to share it if I wish, or not.
Remember that information is not static, but constantly created and grown. I used to take X, but no more. I identified as Y sexual orientation five years ago, and while I might still think of myself that way, the role it plays in my life might have changed – maybe I’m with a different partner, or whatever. We constantly generate our own information, and that’s the information that is often harvested – you sometimes hear about information becoming “stale”.
Social norms
I’ve written about cost/benefit analyses and psychological ownership, and I do think these views are important and useful. But there’s another aspect we’re overlooking – the social aspect. If everyone we want to connect with only has Facebook as their main contact method, we’re going to face social costs if we don’t use it. Part of that is not being able to contact people as easily, but we’re also going to be viewed as “weird”, and people love to dismiss weird people’s perspectives and arguments. Also, we just don’t like to stand apart from our friends – humans are very social creatures, and we have a nagging sense that if we’re doing something very different, then we’re doing something wrong.
Heirman and colleagues examined the role of social norms around giving away personal information in exchange for discounts or whatever – like those “loyalty programs” where you give the company your phone number, and you get every 20th coffee 5% off. They measured the relative contribution of a group of variables, and they found that social norms were the single largest predictor of intention to disclose information, which itself was the largest predictor of actually disclosing the information. In other words, the perception that this was what everyone else was doing beat out aspects like past privacy violation experiences, concern about privacy, tendency to trust, and whether people thought disclosing was a good/smart thing or not. It wasn’t even close, either – the correlation between intention to disclose and social norms was 0.76, while the next strongest relationship was attitude (whether people thought disclosure was a good thing to do) at 0.56. This broad pattern has been replicated in Singaporean students adopting privacy behaviours and in adolescent sexting, and partially replicated in engagement with ads on Facebook (social norms were important, but in that case were beaten out by attitude).
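For a rough sense of what those numbers mean (my arithmetic, not from the paper): squaring a correlation gives the proportion of variance two variables share, so

$$0.76^2 \approx 0.58 \qquad \text{versus} \qquad 0.56^2 \approx 0.31$$

– i.e. social norms shared nearly twice as much variance with intention to disclose as attitude did.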
(I’ve reached out to Heirman to get some more detail on the findings, specifically how some of the variables predict each other so as to tease out the unique contribution to outcome, but he hasn’t had time to dig into the data yet. I’ll let you know if this changes. Academics love to talk about their work, but they’re also often busy people.)
So, social norms matter. A lot. I’ll probably do a post digging into this more in the future (my own research partially touches on this, so hopefully I’ll have my own contribution to make), but for now I think we can settle on “people’s social circles really matter in determining behaviour”. It’s important to note that this applies to “people whose opinion I place positive value in” – so the Taliban is probably a bad role model here, but if the people around you like you and think you know what you’re talking about, they’re more likely to adopt your behaviours, like using Signal or Session or whatever. If you’re not using Facebook and your friends think you’re paranoid or foolish, then you’re probably not going to shift them much. But if they see that you’re generally pretty sensible and cool, then you’ve got more of a chance. Network effects are a pain in the neck, especially with social media, but I personally know that my ceasing to use Facebook has weakened the degree to which a lot of my friends use it, which means their friends have less incentive, and so on.
So the practical lesson here is “don’t be obnoxious or annoying”. Try to appear knowledgeable but not superior, and approachable and reasonable. Don’t go on about “shadow profiles” or the “Meta pixel” – you’re factually right, but you sound like a crazy person. Maybe talk more about how social media just makes you feel icky to use – goodness knows there are enough toxic spaces on there that you can make a decent argument that the whole platform just isn’t worth your time. Discord has problems, certainly, but it’s relatively mainstream and for all its flaws doesn’t follow you around the web like Facebook and Twitter. WhatsApp absolutely collects all the metadata, but it’s better than Facebook Messenger, which collects everything – so don’t push for Session if that’s too far outside your social circle’s Overton Window. Pushing for more than people are willing or able to do makes you appear paranoid (even if you’re right – we’re talking perception here), and drastically reduces your ability to improve your friends’ and family’s situations. And if you can’t get them to make even that small change, do your best to appear as reasonable and friendly as possible, even if you can’t find a middle ground. Again, people adopt the viewpoints of people they like – when was the last time someone insulted you into doing what they wanted, and it worked? I’d guess roughly never, unless there was some massive power imbalance, like with your boss or school.
Take-aways
So, to sum up, the key points are:
People do – sometimes – act as if they’re weighing benefits and risks when making decisions, but not always. Trust in the company and platform is part of the picture, but not the only part. So bringing up Facebook’s data breaches and the like is a good idea, but it’s unlikely to be the sole determinant.
The degree of ownership we feel over our data is a strong predictor of how willing we are to “sell” it. So if you’re trying to get people to improve their privacy stance, focus instead on creating that sense of ownership, because I guarantee you companies are trying to strip it away.
Social norms are massive. Like, I cannot overstate how powerful they are in determining behaviour – they’re huge. But we tend to only adopt the views and behaviours of people we like, trust, and view as knowing what they’re talking about. So ranting and raving about how the CCP owns Discord isn’t going to help, even if it were true (it isn’t, or at least there’s no evidence of it). So appear reasonable and sensible as much as possible, while nudging your friends to better places. Marginal improvement is the name of the game here.