“Security is mostly a superstition. It does not exist in nature, nor do the children of men as a whole experience it. … Serious harm, I am afraid, has been wrought to our generation by fostering the idea that they would live secure in a permanent order of things. It has tended to weaken imagination and self-equipment and unfit them for independent steering of their destinies. Now they are staggered by apocalyptic events and wrecked illusions. They have expected stability and find none within themselves or their universe. Before it is too late they must learn and teach others that only by brave acceptance of change and all-time crisis-ethics can they rise to the height of superlative responsibility.”
Helen Keller, The Open Door, pp. 17–18
So I wrote about how a structurally privacy-invasive system can create a kind of learned helplessness in people with regard to the exposure of their personal information, and the attendant problems that creates – fraud and the like when that information is inevitably breached or misused. And although I try to steer away from politics on this blog, it’s kind of a fact that government is going to be a part of that picture. More specifically, how people view government or authority is going to affect how they view any given event or system.
So I’m going to do my best to focus on the psychology here, and not get distracted by political arguments. I have my views on that1 and I’m not going to pretend they don’t affect my analysis. I had a conversation with a colleague about the role of legislation in research definitions and such, and I had to very carefully structure my thoughts and phrasing to avoid going off on a tangent. But I do recognise that my view is not the only view, nor is it necessarily objectively more correct than a different view, but rather an expression of values and experiences. Also, while I encourage discussion on the topic, I want to remind you of the comment policy – comments must be useful, true or kind, pick any two. Disagreement is fine, personally attacking someone for having a different view is completely out.
OK? OK.
Political psychology is a fascinating field, albeit one I haven’t spent much time studying – something I kind of regret, but also not. To paraphrase a friend of mine, politics is full of brain-worms2. It has a way of sliding into your mind and making you just really insufferable to be around. And I am in no way immune to this! Since starting to pay more attention to a very small arena of politics – that related to privacy – I know I’ve become substantially less pleasant to be around, in part due to my tendency to go off on rants about things that annoy me. In combination with my bad habit of getting annoyed by what I perceive as bad reasoning (and the sheer endemic nature of bad reasoning in political rhetoric), this is a very toxic space, and I know I have to exert strong effort to even partially mitigate these effects. This is part of why I wanted to be extra-careful in any comments.
One of the common areas of discussion and investigation is how people view political bodies and authority. This can be “the government” generally, more specific levels (e.g. “the current administration of my region”), specific institutions (e.g. “law enforcement”), specific examples (e.g. “this particular police force”) or even specific people. And this is obviously relevant to how people view any specific legislation or act or agency – if I view the federal government of my country as fundamentally malicious, then I’m going to view their passing of “feed the puppies” legislation more cynically – maybe it feeds money to their friends, or is a vehicle for suppressing opposing speech, or is a distraction to cover up something malicious they’re doing. Conversely, if I think the current administration is basically honest, then even things like banning certain words or symbols, or extending mass surveillance, can be justified – “Yes, normally, it’d be bad, but come on, it’s the swastika. It’s not like there’s legitimate usage”, or our old favourite “if you have nothing to hide you have nothing to fear”.
Do I trust them?
A common finding of political psychology – one you’ve almost certainly heard of – is the general decrease in trust in various institutions over the last few decades. A 2017 poll in the US found that 35% of people overall felt there was “a lot of” misinformation in mainstream media outlets (among those who identified as Republican voters, the figure was closer to 80%). Causes are unclear, and are almost certainly complex, multi-faceted and affect different groups in different ways and magnitudes – while Muslim immigrants and Christian native-born people may both trust the government of their country less, it is likely for different reasons, at least in part. A conservative might view government as being overtaken by virtue-signalling woke brigades who are totally happy with stomping on “regular” people in order to be seen to cater to privileged minorities, pointing to increasingly strict and wide-reaching anti-discrimination laws that can be weaponised, or a perceived unwillingness of law enforcement to investigate crimes against them. A progressive might point to ongoing abuse by police against marginalised groups, with perceived support by lawmakers. A libertarian might see the ever-increasing centralisation of power in government, with a demonstrated commitment of seemingly all groups to more surveillance, more heavily armed police forces and fewer civil liberties, along with more and more societal infrastructure being owned and controlled by private entities who are not subtle about their own agendas. 3
But – at the risk of sounding like a philosopher – what do we mean when we talk about trust? When you think about it, there are a few different dimensions to that idea, aren’t there? I trust my doctor to know medical stuff, but I wouldn’t necessarily lend them money. I trust my friends to not stab me in the back, but I wouldn’t necessarily think they are omnicompetent in all areas. I have a friend who is some kind of programmer (developer, maybe? He keeps explaining it to me and I keep forgetting – honestly I’m not clear on the difference) and I implicitly trust that he knows more than I do in his area, but when it comes to other areas… well, we’ll just say his knowledge is more lacking than mine and leave it at that. There are one or two people I would be only mildly worried about if they walked up and put a sharp knife to my neck, but does that mean I would let them do surgery on me?
OK, so what is trust?
I’m so glad you asked.
We’ve already established that trust is not a one-dimensional thing – that is, it doesn’t make sense to say “I trust X more than Y across the board in all things and ways”. So what are some of those dimensions? There’s been a fair amount of work on this, and it all seems to more-or-less converge on some common things, so I’ll just talk about those things separately. Be warned that basically all the work notes that these are inter-correlated, so there is no presumption of independence here. I’m just separating them because it makes it easier to talk about.
In addition, I’m going to be talking in very general terms. I’ll try to make it clear when individual studies are talking about specific institutions or groups, but it’s pretty clear that people are going to view different authorities differently. I view law enforcement very differently than I view education policy makers, who I view differently again to non-government scientific authorities. Further, I probably have somewhat different views of specific examples within each of those categories.
Benevolence
This is probably the closest to what people casually mean when they say “trust”. Basically, it refers to the extent to which I think X has my best interest at heart. In the case of government, it might be the extent to which I think individual/agency/administration is attempting to maximise the welfare of the people it governs.
In examining the views of police amongst Chicago school students, one study found that an important aspect of how they viewed police was the extent to which “the police really cared about what is good for their neighborhood... and treat most people fairly”, with all the usual correlations you’d expect to find on that dimension 4. A similar study in Boston found similar things, with an emphasis on the extent to which police “share our priorities or motives” (that is, reducing crime and violence), and the degree to which the police can be trusted to “be respectful, courteous, and fair”. Another study – although, it must be noted, this was validating a pre-existing scale rather than deriving one from data – found significant involvement of governmental “benevolence”, or the degree to which people expect a “government organization to care about the welfare of the public and to be motivated to act in the public interest”. In non-governmental examples, the extent to which an authority is viewed as valuing their “subjects’” well-being has been found in studies of Italian lecturers and Turkish managers, so this is probably an element of how we view authority figures more generally.
Intuitively, this makes sense. A common view of everyone’s favourite bad guy the Taliban is that they are absolutely abhorrent morally, but very, very competent, while long-haired hippies talking about love and peace are probably very benevolent in their intentions, but utterly useless at actually helping people. And it’s very useful to bear the aims of any given authority in mind – we would generally oppose a government that really, really wants to violate our civil liberties even if we all agree it’s so hilariously incapable that it’d never be able to actually meaningfully do so. Similarly, there’s a long history of supporting at least the ideals of a benevolent group, even if we agree that in practice it probably won’t work. There are a lot of good strategic reasons for this, most obviously that showing support for a given set of principles hopefully increases the chance that other authorities will cater to those principles. In addition, there’s a general presumption that the more power a group has, the better able they are to put their principles into effect, while you can’t easily retrofit principles onto a group that already has power. So we would prefer a benevolent authority that accomplishes 10% of their goals over a malevolent authority that accomplishes 50%.
Competence
I kind of alluded to this above, but another common dimension is competence – the extent to which an authority or group is able to realise their goals. So a benevolent police force is one that cares about the welfare of the citizenry it serves, while a competent one is one that actually achieves whatever goals it cares about (benevolent goals being things like safety of the citizenry, malevolent goals being oppression of non-favoured groups or growth of personal power). One of the items in this study was “The municipality of [XX] carries out its duty very well”, while the Boston study focused on the extent to which police “have the knowledge or skills to play the roles required of them” and “have the knowledge and skills to effectively and consistently enforce the law, control crime and maintain high levels of safety”. In addition, people tended to report that even when they disliked the police and felt regularly disrespected and treated like criminals absent any proof, they usually conceded that the police had reduced drug crime in their area. When talking about their lecturers, this dimension got the most attention out of all criteria for judgement at 4.2 words/sentences per participant, compared to 2.9 for benevolence.
So it’s clear that the degree to which an authority is capable of achieving their goals is also an aspect of “trust”. Which, again, makes sense – we say things like “I trust that the doctor knows what they’re doing”, or “I don’t trust the police to find the criminal responsible”. Neither of these refers to the entity valuing the same things I do, simply to whether I believe they can/will do what they set out to do regardless of their motivations. The police could find the criminal to lock him up; the doctor could do their doctoring because they want to make money. This aspect of trust only speaks to ability, not motivation. I’m sure you have friends who you absolutely would not trust to do surgery on you – not because they wouldn’t want to help, but because they’re not surgeons and don’t have the abilities or skills.
I’m jumping ahead a little bit, but you see this in discussions about how effective mass surveillance is at preventing terrorism. It’s commonly repeated that mass surveillance “doesn’t work” – that every time a terrorist or drug lord or child pornographer is caught, it was either with existing, non-mass surveillance techniques or it could easily have been. Without getting into a very complicated debate that my half-remembered criminology degree is woefully inadequate for, the literature on the topic is… complicated and muddled. Some forms of surveillance do demonstrably reduce crime – CCTV notably so – but others not so much. Rather than ask “does mass surveillance reduce crime/increase safety”, it’s better to ask “what kind of mass surveillance, against what types of crime, in what context, by how much”, and then decide if that’s worth the cost, both financial and otherwise. Notice that this is an argument about capability – a cousin to competence – not benevolence. While many people may have negative views of the NSA, few people would dispute that it is motivated to support the US Department of Defense and monitor potential threats.
Integrity
This is sometimes folded into benevolence, but I think it’s best viewed separately. “Integrity” in this context means, basically, “honesty” – the extent to which an authority is viewed as telling the truth, dealing openly and transparently, and (trying to) fulfil its promises. So if an authority promises to gather up all your personal information and give it to corporations to increase their profit margins, and then does exactly that, it would be acting with integrity, but not benevolence (unless you’re a corporation, I guess). If they try to do this but fail, they have integrity but not competence.
Real-life example 5: in the early days of the COVID-19 pandemic, certain parts of the US government said that masks were unlikely to be effective – something we now know to be untrue, and some argue that the medical authorities at the time did know, saying the contrary to avoid a panic and to allow them to stockpile supplies for critical usage. This is debated – because it’s US politics and the colour of the sky is debated – but that’s a common narrative. For the purposes of this example, I’m going to treat it as true. This would be a really good example of an authority – in this case the CDC – acting (arguably) benevolently and competently, but totally lacking integrity. Saying something you know to be untrue, in direct contravention of your stated mandate, is not honest in any way, shape or form. It may be driven by benevolent motivations – if the authority sincerely believes that doing so will lead to a better outcome for its people – and if it works (or there’s good reason to believe it would work) it may well be the competent and effective choice, but it’s not honest. And that is exactly how that behaviour is viewed – as illustrative of why people shouldn’t trust the CDC – which caused a lot of problems later on, compounded by mixed messaging from much of the media, which is also viewed as lacking integrity due to political bias. And then of course there’s the counter-narrative that none of that is true, that it’s actually Trump’s fault, and that he’s totally lacking in integrity, and also totally malevolent, and also incompetent, but also caused a lot of damage.
That kind of compounding and long-term impact is why I think it’s best to view integrity as its own dimension. The idea of the “noble lie” is not new, but it does cause problems – it’s hard to view someone as being benevolent when they do not deal honestly with you. If I had a friend who regularly lied to me “for my own good”, I probably wouldn’t trust them very much – who knows what they might decide they need to do “for my own good”? Also, it makes it really easy to infer malevolence and – since you discovered their deceit – incompetence.
You’ve probably heard of the “halo effect” – the phenomenon where people who have one good trait (e.g. physically attractive, good speaker, materially successful, good at math) are assumed to have greater ability in other, unrelated areas. The classic finding is that attractive people tend to be assumed to be more trustworthy, friendly and honest, despite the fact that there is zero reason why someone who is attractive cannot be dishonest or unfriendly. This then feeds into a self-fulfilling prophecy – if Alex is assumed to be more friendly, then people will approach them as more friendly, which will obviously increase the chance that Alex responds in a friendly way. It’s really easy to be friendly to people who are friendly to you, after all.
Slightly less well-known is the so-called “horns effect” – basically the reverse. If someone has some negative trait the assumption is that they are more negative on unrelated things. So if Riley is unattractive, I might assume that they are more likely to be unfriendly, or socially unskilled, or of lower moral standards. This then leads into the usual self-fulfilling prophecy – if I assume Riley is more unfriendly, then I’m going to interpret ambiguous things as indicating that. So I’m not going to give Riley a chance to demonstrate friendliness, which means they’ll be more socially isolated.
So if I view Authority X as basically benevolent, I’m probably going to be more likely to assume they’re basically honest and capable, all else being equal. And if I view them as basically incapable, it’s not hard to conclude that they’re also malevolent and dishonest. Again, these factors are not independent, and we shouldn’t assume they are.
How does this relate to privacy?
I’m almost there, I promise.
So we’ve established that we don’t interpret people’s actions in a disinterested and neutral way, but that we tend to interpret things in line with pre-existing conceptions. In addition, when it comes to authority, trust tends to boil down to benevolence, competence and integrity, all of which are likely influenced by unrelated things like attractiveness or success, as well as affecting each other in complicated ways.
But how do they affect how we react to statements made? As in, if I view an authority as competent but not benevolent, do I react to their statements the same as if I view them as benevolent but not competent?
I mean, no. Obviously. That’s the whole point of the previous section.
If a company has a position on some new technology (for or against), and if I view that company as honest/dishonest or competent/incompetent, those factors will combine in complicated ways. More specifically, if the company is for a technology, I will tend to agree with them if I view them as basically competent. However, if they’re against the technology, I will tend to discredit their view if I view them as basically dishonest. The study I’m basing this on worked on an existing and politicised technology (carbon capture systems) so I think people’s priors are causing problems here. I’d be curious to see a replication on a less politicised or even fictional technology, but that’s what we’ve got so let’s go with it.
In addition, when it comes to judging people’s competence, we tend to view success and failure differently than morality.
When judging competence, we tend to be at least a bit forgiving of failure. If someone loses a game of pool or go, I’m not going to view it as condemning them to incompetence forever. Everyone loses or fails sometimes, so a failure doesn’t really communicate much in itself. Success, on the other hand, does communicate something – in this case, that the person was capable, or at least more capable than their opponent. As a result, we weight success as more meaningful when judging competence.
In contrast, when judging someone’s morality (in this example either benevolence or integrity), we tend to weight moral failures as more meaningful. We’re more likely to view moral failings as revealing their “true nature”, and moral successes as less indicative. If someone spends weeks and weeks working to help homeless people, but once, when they’re really stressed and tired, they snap at someone who comes up to ask them for money, we’re going to view that snap as their “real self” 6, with the rest of the time spent just covering that up. Maybe they’re just trying to look good to other people or gain some nebulous reward. The single moral failure is weighted much more strongly than their moral successes – the weeks spent trying to help.
No, seriously, what does this have to do with privacy?
OK, OK.
So in my previous post I spoke about the effect of legally-mandated privacy-endangering behaviours and how it can generate a kind of learned helplessness. In response to that, it’s easy to argue that “well, that’s bad law, made by bad people”. But a lot of the people I speak to don’t view it that way – they view it as just how things are, how they have to be. They point – reasonably – to things like legitimate law enforcement concerns. If I’m getting threatening calls at all hours of the day and night, I’d really like the police to be able to find out who’s doing that and try to intervene, rather than having to block specific numbers or change my number or other really annoying things. If I’m laundering money from my illegal dealings, we’d generally agree the government should probably do something about that, especially since that money is probably helping me get away with my crimes.
The problem is when we ask “is the government/police/whatever actually doing that, or are they gathering it for their own malicious agenda?” There are stories of the NSA wildly abusing its power, passing around sexy pictures of innocent civilians. There are more stories than I can count of law enforcement officials abusing surveillance databases to stalk their ex-partners. The FBI has accessed information it had no legal right to, an average of approximately 40 times a day for about 20 years. And then of course there’s the previously mentioned debate about whether mass surveillance “works” or not.
And how you respond to that question is probably going to hinge pretty heavily on how you view those agencies in specific and the government more generally. If you view the FBI as fundamentally benevolent, you’re probably going to chalk that up to a few bad agents, or poor communication about standards, or something else that’s basically fixable. If you view the government as basically malevolent or dishonest, however, you’re more likely to view that as more or less expected, and instead advocate for a more restricted role of government. “Yes”, you might say, “this will lead to worse outcomes in many ways. More threatening phone calls, more money laundering 7. But also less stalking by law enforcement officials, fewer regular violations of our privacy, and less exposure via data breaches through legally-mandated information collection and retention.”
Overall, my point is: yes, we should ask questions as to whether a given measure actually helps achieve whatever the goal is. But that’s only half the question – we also need to ask “do we trust the people to whom we give the power to do The Thing?”
It’s a common example I use with non-privacy people. Let us assume that you don’t care if Facebook knows you have a foot fetish. OK, that’s totally legitimate. But you are also trusting Facebook not to leak that knowledge to every cyber criminal and marketing person out there.
Maybe I trust, say, my local police to know my routine. But do I trust them to keep that information secure from people who would want to burgle my place and steal my priceless collection of gaming dice? They may be benevolent, but are they also competent?
(Trust graph image taken from Wikimedia Commons.)
Maybe you know what they are, maybe you don’t – I don’t hide them very well, but I’ve been called slurs from all parts of the political spectrum, from libtard to fascist, so who knows
Or, as the rationalists say, “politics is the mind-killer”
I’m sure some of you, if not all of you, are saying “yes, maybe, but those examples are not equal in their empirical support! X is clearly more/less established and has more/less actual material impact on actual people!”. That may well be true – I am simply establishing that trust is down across the board, and that every group can provide subjectively persuasive reasons for its viewpoint.
Potentially. I’ll note this throughout, but a lot of this is subject to the usual narrative and revision and biased reasoning that is so common in these kinds of areas. If you don’t think it’s true, then just treat it as another fictional example.
Nonsense idea if there ever was one
Or maybe not, if you also view them as incompetent!