So, while I’m in various privacy spaces, it probably won’t surprise anyone that I’m much more of a lurker. I’ll post on occasion – sometimes questions, sometimes (trying to) answer other people’s questions – but mostly it’s pointing out how what someone said isn’t justified by their arguments, etc. It’s a bad habit of mine – what I perceive as bad arguments really get under my skin, and more than once I’ve missed an aspect of a discussion because of that knee-jerk annoyance.
But what I see a fair amount is the recurrence of various things that aren’t just bad arguments, but flat-out, provably wrong. Some examples of the more common ones I see:
Discord is owned by Tencent/CCP
VPNs stop viruses/tracking
Proton is a honeypot
Tor is a honeypot
Now, I want to very briefly go over each of these, because while they are all wrong, they are not coming out of nowhere. And I think knowing the context they exist within is useful for understanding why they keep coming back, no matter how often they’re disproven.
Discord owned by Tencent/CCP
This is arguably the most complicated example on this list, in terms of potential justifications. As you might be aware, the founder of Discord previously had another company called OpenFeint, which – after being acquired by Gree – was found to be harvesting user data it wasn’t supposed to have access to, in ways the people running it must have known about, and which bypassed the usual protections. OpenFeint, like Discord, gathered a lot of venture capital funding and had an unclear business model, seemingly relying on games paying OpenFeint to be included.
After selling OpenFeint, Jason Citron went on to found Discord, a company with a lot of superficial similarities: it’s a communication platform aimed at gamers, which relies heavily on venture capital funding and an unclear business model (in this case the freemium Nitro service), and which gathers a lot of data from its users and misleads them (especially in regard to account deletion). One of those early investors was indeed Tencent, although I couldn’t easily find how much they dropped on it. The figure of Discord being 38% owned by Tencent comes up periodically, but from what I can tell it came out of nowhere and is based on nothing. In addition, just because someone invested in a company does not mean they own part of that company. It can mean that, for sure; if you buy shares in a company, that’s exactly what you’re doing, but it’s far from certain. It’s not an uncommon arrangement to invest capital in exchange for a share of revenue, with no management or policy control; this is what’s called a silent partnership. Now, we don’t know what kind of arrangement Tencent made with Discord, but we do know that formal ownership has to be declared if it exceeds a certain threshold (I found a few sources saying 10%, but I’m not a tax expert). So it’s pretty unlikely that it’s an ownership arrangement.
VPNs stop viruses/tracking
This one is much simpler to trace the origins of; VPN ads regularly lie about what a VPN can do. Or, at minimum, present things in a deeply skewed and misleading fashion. Thankfully, this particular chestnut seems to be dying as better information is coming out.
I’m not nearly technically savvy enough to properly describe everything a VPN does and does not do, but the short version is that a VPN encrypts your traffic between your device and the VPN provider’s server, which hides that traffic from your ISP and masks your IP address from the sites you visit. It does nothing to stop malware, and very little to stop tracking, which mostly relies on cookies, logins and browser fingerprinting rather than your IP address. For anything deeper than that, I’m sure you can find someone smarter than me.
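To make the tracking point concrete, here’s a toy model – the `ToyTracker` class is entirely hypothetical, my own illustration and not how any real tracker is implemented – showing why changing your IP with a VPN doesn’t, on its own, break tracking: trackers typically key their profiles on browser state such as cookies, not on IP addresses.

```python
# Toy illustration (not real tracker code): a "tracker" that identifies
# visitors by a cookie ID rather than by IP address.
class ToyTracker:
    def __init__(self):
        # cookie_id -> list of (ip, page) visits
        self.profiles = {}

    def record_visit(self, cookie_id, ip, page):
        self.profiles.setdefault(cookie_id, []).append((ip, page))

    def visits_for(self, cookie_id):
        return self.profiles[cookie_id]

tracker = ToyTracker()
# Same browser (same cookie), first without a VPN, then with one:
tracker.record_visit("cookie-abc", "203.0.113.7", "/news")       # home IP
tracker.record_visit("cookie-abc", "198.51.100.9", "/shopping")  # VPN exit IP
# The IP changed, but both visits still land in the same profile:
print(len(tracker.visits_for("cookie-abc")))  # → 2
```

The point of the sketch: the VPN changed the apparent IP, but the cookie tied the visits together anyway, which is why “VPNs stop tracking” is provably wrong as a blanket claim.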
Proton/Tor is a honeypot
This is not an uncommon one, albeit one that gets a fair amount of mockery. And although I’m describing it as Proton or Tor (because those are the main two I see), it’s hardly unique to them.
Of course, complicating this is that honeypots are a thing that happens. ANOM ran for three or so years, and by all accounts caught most of its users off guard. Malicious Tor nodes are an ongoing concern that the Tor Project has to constantly manage. And as Tom Scott says in the video I linked above (after being explicitly clear that he does not think this is likely to actually be happening):
If you wanted to see what the most paranoid, security-conscious people are connecting to, and you wanted to install software on their systems that is designed to read all their network traffic and then redirect it through a single choke point, then setting up a VPN service with a huge advertising budget would be a great way to do it.
Of course, Tor and ProtonVPN are both open source, so if this were happening, I would imagine it would be detected, which is why I’m comfortable describing this as “provably false”. But again, this idea isn’t coming from nowhere.
But why, though?
I know I say this a lot, but it keeps being true: people are complicated and weird. So I can’t give you the whole answer, or even an appreciable chunk of it. But I can go over part of what’s going on. The way I see it, reasons can be broadly broken down into two categories: social and individual.
Again, my coverage of these is not going to be remotely comprehensive; consider this an introduction or quick overview of part of what’s going on.
Social
Loud voices
One cause of these ideas constantly resurfacing is quite simple: people with loud voices keep repeating them, sometimes for their own purposes. Rob Braxman is notorious for promoting misinformation while presenting himself as the only one attempting to offer a solution. However, he’s hardly the only one, or even the worst offender. That’s not the point; the point is that people with large platforms, who are trusted, repeat provably false information. Sometimes they do this for commercial reasons (“X service cannot be trusted, unlike my service!”), sometimes for ideological reasons (X service is run by Y person, whom they dislike for reasons which may or may not be valid, true, or relevant), and sometimes they’re just wrong, and bad at updating their own understanding due to factors idiosyncratic to them.
Checking information for accuracy is somewhat rare, especially if you’re inclined to agree with that information for whatever reason. This is especially true when checking is hard: when it relies on not-easily-accessible data, involves interpretations of deeply technical ideas, or you’re limited on time or mental energy. I’ve made no secret of the fact that I am not especially technically savvy, which is why I’m very careful not to put forward technical claims, and why I try to cite my sources. But if you’re just starting to become aware of the degree to which our information is hoovered up by government and corporate entities who may or may not have your best interests at heart, you’re probably not going to be able to meaningfully distinguish between a source that is more-or-less correct and a source that is spewing total gibberish. This becomes even more difficult when the various sources tend to dislike each other: The Hated One makes no secret that he dislikes Techlore, Techlore doesn’t like Daniel Micay, and Braxman seemingly doesn’t like anyone, from what I can tell. Each of those people has their reasons, and regardless of how valid I think those reasons are, or how much I think they affect the information on offer, it makes it hard for Bob Average to know how much he can trust other information sources. The horn effect is very real, after all. So Bob absorbs the bad information, and maybe repeats it to others.
Out-group denigration
This leads to another social factor that can affect people repeating bad information: out-group denigration. In simple terms, this refers to the phenomenon where people are more likely to say or believe bad things about people who are part of the out-group.
Actually, I’m going to drill down a bit more into the terms “in-group” and “out-group”, because they’re easy to misunderstand. In very simple terms, your “in-group” is people who you identify with or relate to in some substantial way. I consider other research psychologists to be basically similar to me in some important ways; we have overlapping interests and broadly similar value systems (knowledge about people is good, psychology should take empirical and rational investigative approaches to attain that knowledge, etc) in a way that I don’t with, say, car mechanics. I don’t dislike car mechanics, to be clear; I’m very glad they exist, especially when my car needs work, and they have very useful and valuable knowledge and skills. But the basis for identification or relation is just much smaller: the approach they take to their work is fundamentally different, and from my limited experience they tend to have somewhat different value systems, cultural views and social dynamics. And as those differences grow, people move further out of my “in-group” and further into my “out-group”.
It’s not a simple matter of one or the other, either. It’s more a mess of distances between me and any given individual across many different criteria and dimensions, not all of which I am consciously aware of. But you can simplify it down to “in-group” meaning more-or-less similar to me, and “out-group” meaning more-or-less different from me. Most individuals will fall within a weird quantum place where X quality makes them “closer”, Y quality pushes them “further away”, and there’s a lot of variation and inter-relationship in how those qualities matter for any given individual.
It’s important not to conflate “being different” with “being disliked”. For example, I have little in common with people in the Belgian military; we speak a different language, and we definitely have different values, cultures and experiences, but I don’t have any particular negative view of them. Conversely, I have more in common with neurocognitive researchers, but by and large I tend not to get on with them, probably because I deal with them more often and such interactions tend to be pretty negative. Most of those I’ve dealt with have been antagonistic (on both sides), so I’ve built up a pretty negative stereotype.
So, out-group denigration basically refers to the generally more negative view I’m likely to have of an out-group. If you told me a member of a fascist movement beat someone up I’d probably be very willing to believe it, for example.
Obviously this is relevant to prejudice studies; occasionally you’ll hear people saying that racism is just normal group dynamics. Even if that’s true, it doesn’t mean it’s invalid to ask why this particular quality has this particular effect. That kind of objection basically says “it’s natural, thus examining it is pointless because you can’t change it” – which definitely isn’t true; smallpox was natural, but we eliminated it, and that was definitely a good thing.
You see this in conspiracy theories. People who find their positive group self-image (that is, the image they have of their group as basically good) threatened are more likely to believe conspiracy theories, especially those which posit that the in-group is somehow persecuted or disliked by the target of the theory (e.g. Germans before WW2 were very willing to believe that Jewish people were responsible for losing WW1, and for the resulting humiliation). This is also affected by a concept called “collective narcissism”, which basically means “my group is great, and people don’t appreciate how great my group is”.
In this case, people feel (justly) attacked/persecuted by large technology companies, and a lot of these false ideas directly paint these companies as bad. In some cases this is justified, in other cases not so much, but the important part is that X company/service is bad, and the reasoning is basically just tacked on to justify why. See, if you say “Tor is bad!”, that immediately raises the question “why?” And if you can’t offer some reason why, you don’t look very credible. But if you can offer some justification, people are going to be more likely to believe you, even if that justification is false.
Social bonding
This is especially true if the speaker and listener belong to the same in-group and are thus more likely to believe each other – particularly if the group identity is built partly on disliking the target: Nazis dislike Jewish people, anti-racists tend to dislike white supremacists (to the point of being somewhat overly prone to identifying people as such), and people who like FOSS tend to dislike large technology companies with proprietary software. To a point, that mutual dislike is part of what bonds the group together, and participating in it can become a social bonding exercise, in the same way that talking about favourite Batman stories serves as a bonding exercise for Batman fans. The truth of any given statement is less important than that bonding. There’s a sense that “well, as long as it drives people away from harmful services, there’s no harm”.
Individual
Confirmation bias
One of the major individual factors, which I touched on recently, is obviously confirmation bias. I went into that in that post so I won’t repeat myself, but I will go into a variation called the “Semmelweis reflex”, which refers to the tendency to reject evidence that contradicts established norms. The canonical version of this arose around the growth of germ theory. A lot of research suggested that if doctors washed their hands between seeing different patients, the death rate dropped hugely. This idea was seriously pushed back on, in part because it blamed doctors, at least partly, for causing disease – the opposite of what doctors were seen to be working towards – to the point of being seen as a deadly insult.
Another, similar, bias at play is authority bias. Basically, this is a bias where we tend to place more value on the perspective of people we see as authorities. So if I think of Henry from Techlore as an authority, or someone who knows more than I do in certain areas (which I do), and he says something, I’m probably going to believe him unless I have a compelling reason not to. This ties in with system justification theory, which I wrote about last week, so I shan’t go into detail, but it’s important to bear in mind as a variable.
Coherent worldview
I touched on this in my continuation of the ideas in Techlore Talks #3, but as humans we need to build a coherent worldview and understanding of ourselves to function. Failure to do this is (part of) what underlies things like borderline personality disorder; people cannot form a coherent or consistent way of seeing themselves, which means they find it very hard to relate to the world and people around them. This is maybe best explained using an example:
Chris has a job at a supermarket. Their friends tell them they’re planning to see a movie later and invite Chris along, but Chris has also been offered an extra shift at work. How Chris makes that decision is going to depend on what Chris values (time with friends, more income, getting along with their boss and the workplace benefits that brings), but if Chris doesn’t have a good sense of what they value, Chris isn’t going to be able to make that decision in a coherent fashion. So Chris will probably make decisions in a chaotic and random fashion, which means their friends are probably going to be less likely to make plans with them, they’re going to have difficulty getting or keeping a job, they might drift into things like substance abuse, and so on.
Because Chris doesn’t have a good sense of their values, they’re going to struggle to form a sense of whether their decisions are good or bad, which will interfere with their ability to be resilient against incidental feelings of guilt and the like. As a result, they’re more likely to suffer from depression, self-medicate with addictive substances, fall into abusive relationships, and so on.
I described Chris’s issues in terms of values rather than worldview, largely because it’s simpler. But worldview works much the same way; whether I view the world as fundamentally hostile or fundamentally just makes a huge difference in how I interact with it, what I expect to happen, how I relate to people, and a dozen other things, as well as to my value system. And a lot of how we create and structure our worldview is through narratives: stories we tell ourselves about how the world is and how it works. A very simple version of this is proverbs; take, as an example, “there’s no smoke without fire”. Although laughably simplistic, there is a narrative there: some underlying cause exists, which later starts putting out (metaphorical or literal) smoke.
From this perspective, you can see claims like “Discord is owned by the CCP” not just as incorrect statements, but as parts of a story, events in a larger narrative. Like “the big bad wolf blew down the second pig’s house”, it serves as an event in a series of events that conveys a worldview (wolves like to eat pigs that build houses; social platforms are malevolent). Their factual correctness is less important than the story they’re telling; even if Discord isn’t controlled by the CCP (which it very probably isn’t), it doesn’t matter, because social platforms are not made for you – they’re traps to lure you in, harvest your data, and sell it to whoever wants it. Which… isn’t exactly wrong.
Narrative psychology is actually a fascinating subfield of psychology. It’s new enough that you can actually come to a pretty decent understanding with not a huge amount of work, and it actually really gives a legitimately different way of looking at people and how we move through the world. I’d recommend checking it out, if you’re curious. I’ve done a fair amount of work looking at narrative identity development in my time, and I’d describe it as being probably the closest to being my “default” approach to identity.
This worldview gives us a framework to interact with the world and connect ideas to each other. When this is functional, the parts reinforce and support each other, allowing us to navigate the chaotic and weird universe we exist within.
People form beliefs and attitudes in a given context, and those beliefs are later reconstructed within similar contexts. That can lead to over-confidence in the initial beliefs or attitudes, because our brains retain the original emotional states better than the cognitive, factual elements. If the emotional contexts are similar, our confidence in our understanding is reinforced because it feels harmonious; if they’re different, we’re less confident, because it just feels like it doesn’t hang together.
Again, let’s explore this with an example. You form a belief (Proton is a bad company) in a given context (Proton does something smelly; you generally view tech companies as malevolent). Later, you encounter a similar context (people talking about malevolent tech companies), and someone mentions Proton, even in a tangential fashion. You retrieve the (primarily emotional) cues – because our brains hold onto those better than factual details – and reconstruct your belief that Proton is malevolent. Because it all seems consistent (your brain isn’t properly remembering things), you feel confident, and over-commit by believing it’s malevolent – maybe even a honeypot. Social context also plays a big role here (wanting to “win” an argument, for example). Add in the difficulty of updating our understanding, and you have a recipe for weird ideas recurring over and over.
Feelings of control
This last category is a bit of a red herring, but it’s important enough (and interesting enough) that I want to talk about it briefly.
It’s tempting to describe conspiracy theories as deriving from a need to understand and feel in control. However, if that’s their purpose, they’re bad at it; mere exposure to conspiratorial ideas makes people feel less in control. People who belong to marginalised groups – who, by definition, have less control over their environment – are more likely to believe them, and believing them reduces their ability to trust even people from uninvolved entities. For example, say a person from a particular marginalised community thinks their community is being persecuted by Tesla (to pick a random example). In that worldview, the people from the corner store have no involvement with Tesla, so it shouldn’t be relevant – but people who believe in this conspiracy theory are nonetheless less likely to trust the people from the corner store.
Which raises an interesting question: does marginalisation make people more susceptible to beliefs, or do their beliefs induce lack of trust in others, which inhibits their ability to work with others to improve their situation in society, leading to their marginalisation? Which is the conspiratorial chicken and which is the epistemological egg?
Conclusion
So… I’m not sure how to wrap this up, to be honest. I started out wanting to examine why demonstrably false ideas keep recurring, and I think we covered some really interesting ideas, but ultimately I don’t think I can give a sensible answer. Certainly not a simple one. People are complicated, and groups of people are both more and less complex than individuals. We need to create coherent worldviews to function, but many parts of those worldviews don’t need to be accurate to achieve that goal. We need authorities to cover gaps in our understanding, but we often place too much weight on them, which undermines their usefulness. We rely on narratives and touchstones to form and maintain communities, but those narratives and touchstones, again, need not be accurate or even plausible to function.
“People are weird” isn’t an especially satisfying way to end, but I guess at least it’s probably accurate?