Disclaimers
Fair warning, this one’s a bit nastier than my other posts. I deliberately held off on posting it until I had a couple under my belt, and I’m going to make a point of posting more positive ones after this, and sooner than I usually would. I do not care for the kind of content that consists of pointing at Problematic Figures and saying “They are so problematic, isn’t it great we’re not problematic like them?”, which this definitely has shades of, but on the other hand I do think this is a serious problem that we really need to look at. And I really do think an examination of some of the psychology going on would be useful to that process. This is, at best, an interesting starting point for thinking about what might be happening. Do not take this as a definitive description of what is going on with any given individual. Definitely do not take this as any kind of justification of abuse. I tried to keep it to descriptions of behaviour and not cross the line between “describing bad behaviour” and “attacking people doing that behaviour”; please try to read this in that spirit.
OK? OK.
This post is about what I call “extreme threat modellers” or “excessive threat modellers” – ETMs for short. Before I get into who that is, I want to be very clear who it isn’t. It does not refer to people who, for whatever reason, have a genuinely extreme threat model – persecuted minorities, stalking or domestic violence victims, journalists working under oppressive governments, government officials, military personnel, whistle-blowers, and others. I’m not talking about them. These groups are distinct in several ways – their behaviour tends to be very different, for one. Their motivations tend to be much clearer – “the government will arrest and maybe kill me if they discover I’m gay” or “I need to keep my location a secret because I’m stationed on a secret military base” are really good reasons to take steps that Bob Average just doesn’t have to. In addition, for obvious reasons you tend not to encounter these people on the usual platforms like Discord or Reddit or Twitter, and if you do they’re definitely not going to be talking about how private or secure they are in a public or semi-public space with a relative stranger online.
Nor is it people who enjoy exploring the privacy space as a kind of challenge, trying to minimise the amount of information that they leak or can be found about them, not because they view it as an actual threat, but because it’s their version of fiddling with stuff. They just get a kick out of playing with things in this way. They tend to be pretty distinct as well - much more chill, more able to explain their reasoning, and usually less likely to “push” their preferred solutions.
So who are we talking about here?
So I’ve laid out what this group isn’t, but that just raises the question of what it is. ETMs are, in short, those people you see in the “privacy community” who are always talking about how private they are and pushing people to use their solution, often bragging – implicitly or explicitly – about how extreme their threat model is. This group has certain behaviours that characterise them:
Extreme threat modelling: Adopting, or claiming to adopt, a very extreme threat model in both severity and breadth. These people don’t want anyone knowing anything about them, ever. Whether it’s Signal knowing your phone number, or the local cinema having an e-mail address for pre-purchasing tickets (even an alias), or their bank knowing that they spent $2.83 at the local supermarket. Now, there are some legitimate reasons to have reservations about each of these, of course. However, what differentiates this behaviour from the (rare) legitimate need for an extreme threat model described above is that they are usually unwilling to present a coherent reason for it. There’s a characteristic hollowness to the reasoning they give for their threat models when pressed, which is almost always strongly disproportionate to their actual situation. They don’t say “I’m concerned about stalkers buying my information from data brokers”, or “I’m trying to limit spam”; their stated threats stay vague, all-encompassing, and unmoored from anything in their actual lives.
One True Way-ism: If someone asks “are there any known major risks with X solution?”, they will immediately chime in, promoting their very specific solution – you can’t mitigate the issues with Windows, you have to use Qubes (a security-focused operating system which is notoriously tricky to work with), without even asking about the person’s threat model or situation. Theirs is The One True Way, and to do otherwise is to be doing nothing at all. Marginal improvements are at best pointless, and more likely to be decried as “security/privacy theatre”.
Thin Justifications: They accept, or at least perform accepting, extremely thin evidence for more extreme or restrictive behaviours, regardless of how plausibly it applies either to reality in general or to their situation in particular. For example, there was a research paper a while back in which the researchers could pull data including passwords and such from even air-gapped systems (for those who don’t know, “air-gapping” is basically totally isolating a system. So no WiFi, no ethernet cables, nothing) by modulating the speed of the fans and decoding the signals sent. Very cool research, but really not something Bob Average needs to worry about – it requires planting very specific malware onto the system, the monitoring needs to be pretty close (since it’s working off sound), and the data rate is painfully slow. Really, apart from sheer coolness, it’s mostly only interesting as a proof-of-concept for a very specific kind of situation. But ETMs look at this and conclude (or perform concluding) that the NSA is using this en masse to steal your KeePass database. Or, more often: “Pfft, you don’t use mikes-fan-modulator-defense.exe? Guess you don’t really care about privacy then”, and only when pressed about the value of the software will they vaguely gesture at the paper as “proof”, completely ignoring the question of whether it’s an actual problem, or whether the software actually helps solve it.
Entitlement/Arrogance: This is a bit more vague, and I’ll get into this below, but there’s a characteristic sneer to all of their communication – you can’t figure out how to flash GrapheneOS (a custom security-focused phone operating system) onto your phone, or you can’t afford a Pixel phone? Then you’re just stupid, if you really cared about privacy you’d Find A Way. If you want to move away from Facebook Messenger, but your friends are being obstinate? Just leave! If they really value you, they’ll download Session through Tor in a VM.
(Or maybe you shouldn’t even have a phone! Pixels are made by Google, and Google is the devil, and can cast magic spells to steal everything that touches or comes near a product they have touched, or any product that comes near one of those products, or…
To be clear, Google and the like are notoriously invasive, and I totally believe they would do that if they could – it’s their entire business model. Surveillance capitalism is very real. But for all their resources and such, they’re not magic.)
According to these people, there is no such thing as compromise or limitation, and the very suggestion of such is a sign that you’re either naive, stupid, willfully ignorant or a shill. It’s a form of purity politics – or, on an individual level, splitting – in which a person is either 100% on board with whatever the speaker says and will do absolutely anything to achieve those levels, or they’re basically blasting every single personal detail to everyone. The idea that some people can’t always use the “best” OS or solution, or that the “best” solution varies with the individual situation, is completely antithetical to their position – while being absolutely crucial to their motivations.
What are their motivations? I’m so glad you asked. Again, I don’t know these people personally, and everyone is a little bit different, but as a starting point I have three broad theories: social clout, paranoia and narcissism. This post will only really go into the first – I want to do a post about narcissism and paranoia on their own because they’re just super-interesting topics, and the clout one is big enough by itself.
Social signalling
The social clout is both the simplest and (to me) most interesting aspect. In simple terms, we all want to be viewed positively by those around us – we want to be viewed as morally good, or as intelligent or caring or beautiful or thorough or whatever. This is normal and healthy – it acts as a motivation to try to be good/intelligent/caring/beautiful/thorough/whatever. It’s part of why signalling behaviour is so very common. If I can’t easily tell by looking how clever you are, but I need to know for some reason, the fact that you can figure out how to self-host an e-mail system is a pretty good indicator that you’re at least somewhat technically capable, so I can give your statements appropriate weight, which in this case means social status.
(If you’re still unclear on what “signalling” means in this context, consider a job interview. You’re trying to convince the interviewer that you’re a good employee, and they’re trying to figure out if you’re lying. You are signalling honesty/capability/hard-working/whatever when you describe situations you’ve faced in the past or answer questions about what fruit you think most describes you, they’re attempting to detect dishonest signals. If they could tell just by looking at you then recruitment specialists or person-job fit researchers would be out of a job. It’s important to remember that signalling does not have to be dishonest or even deliberate – I signal to my friends all the time that I care about their well-being because I do care and do things consistent with that, from which they can infer my emotional state.)
Since our society values intelligence and knowledge, if you crave social status (which is very common and healthy!), appearing competent in a field perceived as difficult and desirable is a good way to get it – and you want to discourage people from viewing you in any other way. There are several ways you can do this, but a common one is to discredit people who visibly disagree with or question you, to discourage others from agreeing with them. In technical terms, they assert dominance by attending to stereotypic information (information that justifies treating others as adhering to a useful stereotype) and ignoring counter-stereotypic information (downplaying or outright dismissing information that contradicts how you want other people to view that person). In this case, the stereotype is that of the “rube” – a technically ignorant and naively trusting person who does not have the competence to recognise their own incompetence. By treating other people as foolish, and encouraging other people to do the same, you discredit their criticisms of you, leaving you free to make yourself look good.
(Side note: the Dunning-Kruger effect is actually much more controversial and uncertain than usually depicted. And even if it is true, it’s much more fragile than is usually understood. It does not take much competence to negate the effect almost entirely – it only makes statements about those who are utterly incompetent at a task, not those who are just somewhat bad at it.)
It’s important to pause here and emphasise that this dynamic is entirely dependent on perceived intelligence. We can’t perceive intelligence directly, we have to infer it. You can look at me and see how tall I am, the colour of my skin, hair and eyes. You can hear my voice, smell my cologne, and if you poke me you can tell how firm my musculature is. But you cannot see how smart I am – you have to look at the complexity of the ideas I present, how valid my logical reasoning is, how well I demonstrate domain-specific knowledge, etc. This is not necessarily a bad thing, but it is very different from seeing it. And because it’s inferred rather than perceived directly, this opens up the possibility of mistaken inference, maybe even deliberately induced, and it definitely allows for manipulation. You know that person who’s always trying so very hard to seem smart by quoting philosophers or passing off ideas they got elsewhere as their own? This is what they’re aiming at – regardless of what we think about Freud’s theories, he was legitimately intelligent because he came up with them, building on previous ideas in a complex and interesting way that few people of the time could. But if I just read Freud and repeat his ideas about humour as a way of processing logical contradictions as if they were my own reflections, then I am clearly doing something different – I do not attain Freud’s level of intelligence through simple repetition of his ideas, even if I do genuinely understand his reasoning.
Cynical paranoia as status
Related to this is the trope of “cynicism is intelligence”, or the equally annoying “pessimism is intelligence”. Let there be no ambiguity here: I disagree, strongly, with this position. I have less than no time for it. However, it is also a very important aspect of people attempting to signal intelligence – we’ve all known (or been) that person who always points out problems in an attempt to appear “good at critical thinking”. Or who suggests that a law won’t work, or is maliciously intended, not because of problems with how the law is written or enforced, but because conceding otherwise would involve things improving or being less than maximally terrible. I may loathe it with every fibre of my being, but it does seem to resonate with some people.
These two factors are obvious in these people. By accepting every single indicator of a particular privacy violation as not only possible but already widespread – and, this is important, as something you already knew – you can position yourself as intelligent and knowledgeable without having to actually have any of those insights. And by predicting “bad” outcomes, you can essentially play the odds; if your prediction comes true, then you can use it as evidence of your intelligence, and if it doesn’t you can just quietly ignore it. And unless someone is keeping detailed records, it’s very hard to tell a person’s success rate, especially if the ETM in question phrases their predictions vaguely; “Proton will reveal that they don’t actually care about privacy” could refer to anything from adopting an encryption implementation the ETM doesn’t like to revealing the information of a user to a law enforcement agency.
Collaborative hierarchies
However, you may have noticed an issue within this proposed dynamic. It relies on the ETM being viewed as more intelligent or knowledgeable than the average person in order to gain social status, but the existence of other ETMs, or just knowledgeable people who disagree, threatens this position. So this tends to lead to the creation of a two-tier system: people who the ETM wants to affiliate with, and the majority to flex over. I call these “agree-ers” and “flex fodder” respectively.
“Agree-ers” basically refers to a small group – the smaller the better – who provide validation to the ETM. They do this by expanding the ETM’s information flow – three people can absorb more information than one – which improves their ability to appear more knowledgeable than Bob Average. In the same way that directors of intelligence organisations often appear to know a lot, because they have dozens of people collating and organising the reports of hundreds or thousands rather than any magical omniscience on their part, the agree-ers essentially form a co-operative base to improve their apparent knowledge. In addition, since the group now appears more knowledgeable through this sharing, by associating with each other they gain associative clout – kind of like MENSA; nobody joins MENSA because they’re smart, they join because they want other people to know they’re smart, or to have access to smart people. In addition, by sharing more cynical and paranoid suggestions amongst themselves, they make it easier for the ETM to further refine and extremify their threat models and responses – you might not even think about the possibility of your router’s firmware spying on your Signal messages, but they have. So when they suggest it, you think “oh, I didn’t even think about that” and they look more insightful. Also, by agreeing with each other, they make their perspective seem more reasonable – one person saying Google is spying on you is a nutcase. Ten people are harder to dismiss, although beware the Chinese Robber Fallacy and the appeal to popularity.
In addition, the internal dynamics of such groups are far more interesting than their external dynamics with the wider community. They rely on each other to support their own positions – “I’m correct because my friends believe the same things as me”. But because a large part of the motivation is to appear more intelligent/paranoid/cynical than the average, they start developing internal positional tensions as a “new average” emerges. It’s an often-underappreciated fact of human psychology that we primarily don’t compare ourselves to any objective standard, or to humanity or our country generally. We tend to compare ourselves to our local communities or social circles – globally, I’m far wealthier than most of humanity, but I’m poorer than most of my friends, so I feel worse off, and if my self-worth came from my wealth, that’d bother me. Jack and John are both ETMs, but Jack wants to feel superior, so he looks for ways to feel superior to John, most often by drawing ever-thinner conclusions and adopting ever-more-thinly-justified solutions. Both are already more extreme and appear more “knowledgeable” than the majority of people, even people interested in privacy, but their primary reference has become each other. If your “very private” group uses Signal, then you can gain extra style points by using Session, because it doesn’t require a phone number. And then that can become the norm (because nobody wants to be the least paranoid, because in those circles paranoia = intelligence = status), so then people start adopting XMPP. Then someone puts their car and house in a trust so it’s not in their name. And then, crucially, you tell as many people as possible that you did it. Because while the agree-ers have become each other’s reference points locally, as a group they depend on the flex fodder to hold the whole system up. The lowest-ranked people in the agree-er camp rely on the flex fodder to shore up their wider social status as well as their own self-worth. “I might not be as knowledgeable/paranoid as John, but at least I’m better than the unwashed masses of people who still use Facebook.” This legitimises the whole system, because if they suddenly decide this whole thing is nonsense… what do they have left? Are they just going to be like everyone else, those… normies?
Of course, this whole thing is further complicated by the One True Way-ism noted above. If John thinks XMPP is the One True Way, but Jack thinks it’s an NSA honeypot, they’re going to conflict. This is one way you tend to end up with multiple such agree-er groups, which itself further reinforces the system. Not only are the ETMs smarter than the flex fodder, they’re also smarter than the “fakers”. You’ll notice they spend a lot of time sharing videos or pieces by other people perceived as knowledgeable and mocking them, thus improving their perceived status: “Edward Snowden is knowledgeable, he’s pointing out flaws in Snowden’s reasoning, so he must be even more knowledgeable!”
There’s a bit of a joke in the privacy community about people arguing about how they’re the most private people in a YouTube comment section, or on Facebook, or whatever. The idea being they’re using such a privacy-invasive and public-facing platform to talk about how private they are. That is, of course, half the point – in the same way that the real challenge isn’t climbing Everest but doing so and not telling anyone, a huge amount of the motivation is fundamentally social clout, which you can’t have without people seeing your accomplishments. Which, since this whole game is using the topic of privacy as its setting, is just really funny.
So what?
Now, there’s a question hovering over all of this. Let us assume that this is a substantially accurate description of the dynamics at play in this group. So what? Even if we grant that this is correct and bad, what can we, individually or collectively, do? That depends on what you want the community to be, and what you want out of it. But I would suggest a couple of things:
Don’t reward bad behaviour, model good behaviour: I don’t think it’s unwarranted to say that these dynamics at least have heavy potential to be toxic as all get-out. They’re based on lording over other people and either making them feel/look foolish or demanding their slavish worship. They push particular solutions forward regardless of whether they’re the right solution for the situation, and frankly they make us all look like crazy, unpleasant people and make even marginal improvements to privacy seem beyond 95% of people’s ability, which is completely untrue and harmful. So while I absolutely do not condone attacking or abusing people, I absolutely do condone not giving such behaviour credibility or status. If someone is pushing one solution without regard for whether it’s the right tool for the job at hand? Ignore them, and try to engage with the situation. If someone is mocking someone for being on Facebook or using Telegram or WhatsApp? Tell them to stop, and try to answer the target’s questions as best you can.
Engage in mindful self-awareness: We all make mistakes, it happens. Not a big deal. But try to notice when you find yourself engaging in these behaviours and – this is important – try to stop, or at least cut back. I, personally, have a bad habit of engaging with bad arguments – they just really hit my buttons. I’m trying to notice when I do this, and when I do I make a point of disengaging – changing the subject, or just distracting myself. If I’m in a situation where that’s appropriate, I leave (really easy online). Same thing here – if you see yourself just reflexively recommending one solution without knowing the situation, then notice, stop, and try to do better. We can improve, but it takes work, and nobody else will do the work for us.
Notice whether spaces are good or bad, and leave bad ones: It’s a truism that there are spaces, especially online, that are just full of bad people. Subreddits where 80% of the comments are just nastiness, Twitter tags which are guaranteed to make you angry, or Facebook groups where basic human decency is apparently a radical idea that nobody there wants a bar of. Here’s a little secret: 99% of the time, you can leave without consequences. In fact, I guarantee you’ll be better off if you leave. How you differentiate between “bad space” and “space that doesn’t cater to my particular preferences” is up to you, but if you regularly end up angry or depressed when you’re in there? Maybe best not to go back. Mute the channel, unfollow the group. See how you go. Let the awful people be awful to each other, and try to guide other people to better spaces.
But that’s pretty speculative. The social dynamics are definitely real things that definitely happen, but applying them in this specific case is… wobblier than I’m comfortable with. You know what’s way more fun for all the family? Narcissism and paranoia. I’ll cover those in their own posts.