Response to Techlore Talks #3
In which I lay out all the evidence why you're completely correct
Quick Introduction
The YouTube channel Techlore recently did a video where Henry (the face of the channel) and Jonah (community administrator for Techlore, moderator in the Techlore Discord and major figure at Privacy Guides) talked about cognitive dissonance and confirmation bias, and how they saw them manifesting in the privacy "community", for the worse. Me being me, I obviously started having thoughts about what they were saying; some good (these are definitely things that happen and can be really negative and toxic if not kept in check) and some less good (mostly due to what I saw as inaccuracies or over-simplifications). So, I decided to write about it.
EDIT: I completely screwed up Jonah’s role. This was a complete research failure on my part, and I have no excuse. I apologise to Jonah specifically, and to anyone else who was misled.
Disclaimers
(Question to the reader: do you think these are necessary or useful? As in, do you think the limitations of my understanding, and the scope of these posts, are pretty clear without my explicitly noting them?)
I just want to be 100% absolutely clear here: this post is in no way intended as an attack or a “take-down” (as if I could), or anything like that. The video in question included discussion of some psychological concepts that I think would be more interesting to dig deeper into and explore the complexities of; this is an expansion of the ideas presented, a continuation of the conversation. Because although there are a couple of points made in the video that I do disagree with, I agree with Henry’s basic assumption that talking about this stuff is of use to the privacy “community”.
Based a whole blog on that notion, after all.
Cognitive dissonance
Cognitive dissonance is a tricky concept to talk about, honestly. On the surface it's relatively simple, and that's how it was taught to me in my undergraduate studies. But if you dig any deeper it gets more complicated and murkier, and there's a surprising amount of sass in the research. I don't want to say that nobody throws shade like academics, but academics do do it in the most delightfully polite fashion:
“We suggest that backfire effects are not a robust empirical phenomenon, and more reliable measures, powerful designs, and stronger links between experimental design and theory could greatly help move the field ahead … ”
Or, in non-academic terms: the effect may well not exist at all, and the studies looking for it need to be a lot better.
But complexities aside, cognitive dissonance essentially refers to the discomfort we feel when there's a perceived inconsistency between our actions and our beliefs/attitudes/values, or between two beliefs/attitudes/values. For example: I enjoy eating meat, but I also oppose animal cruelty. I also know that eating meat directly supports factory farming, which can be horrifically cruel. When I hold both of those facts in my mind at the same time, I experience pretty serious psychological discomfort. I feel that I am in some important way wrong, that I am allowing my individual, inessential pleasure to override my moral values, which challenges my self-concept as someone who does their best to do what's right (even if I don't always live up to that standard). My actions are dissonant with what I perceive as my values.
When I experience that discomfort, I obviously want it to end. There are a few ways I can do that:
I can ignore the knowledge I have about factory farming, and instead focus on the tasty, tasty steak or chicken I’m eating
I can change my behaviours and stop eating meat, thereby removing the contradiction
I can either change my values and stop opposing animal cruelty, or change my beliefs and doubt whether factory farming is actually all that bad or whether my actions contribute to it in any significant way
Or some other method, or a combination thereof. What I do is less important at the moment than the basic dynamic of “notice contradiction, feel discomfort, do something to mitigate discomfort”. That, in essence, is cognitive dissonance.
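Just to make that dynamic concrete, here's a minimal toy sketch in Python. To be very clear: every number and variable in it is something I invented for illustration – real minds are not remotely this tidy – it just mechanises the loop of "notice contradiction, feel discomfort, find the cheapest way to make it stop".

```python
# A toy model of dissonance reduction. Everything here is invented for
# illustration; "discomfort" is just a number, not a real psychological scale.
from dataclasses import dataclass

@dataclass
class Agent:
    eats_meat: bool = True          # the behaviour
    opposes_cruelty: float = 0.9    # strength of the value, 0..1
    believes_link: float = 0.8      # "my eating supports cruelty", 0..1
    attends_to_link: float = 1.0    # how much I let myself notice that link

    def discomfort(self) -> float:
        # Dissonance only bites when behaviour, value, and noticing all line up.
        if not self.eats_meat:
            return 0.0
        return self.opposes_cruelty * self.believes_link * self.attends_to_link

print(f"baseline discomfort: {Agent().discomfort():.2f}")

# The three mitigation strategies from the list above:
strategies = {
    "ignore the link (tasty steak)":    Agent(attends_to_link=0.1),
    "change behaviour (stop eating)":   Agent(eats_meat=False),
    "change belief (it's not so bad)":  Agent(believes_link=0.2),
}
for name, agent in strategies.items():
    print(f"{name}: discomfort -> {agent.discomfort():.2f}")
```

The point of the toy isn't accuracy; it's that all three "sliders" reduce the same number, which is why dissonance reduction can happen through whichever route is cheapest for the person at the time.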
That's a pretty clear, conscious example, but often that discomfort isn't conscious. In fact, the theory and research suggest that most of the time the dissonance goes unrecognised, at least consciously. Which is strange – how can discomfort, a subjective experience, not be recognised but still be a Thing, much less influence behaviour? That, I can't speak to, but it does seem to work that way.
There's a classic method in attitude research where you get people to write a short essay in support of some position, then measure their attitude to that position in contrast to their attitude before writing the essay (the "before" measure can be obtained in a few ways, each with various positives and negatives that lie outside the scope of this post). Unsurprisingly, this paradigm is common in dissonance research.
What we tend to see is that people's attitudes after writing the essay skew further toward supporting whatever position they were writing about, regardless of their initial view, and even if that position was randomly assigned to them by the experimenter. So I might start out being pretty anti-animal cruelty, but after writing 500 words in support of animal cruelty or against animal welfare or whatever, I'm probably going to be less against cruelty than I was before.
Now, part of that result is probably that I'm being forced to really confront and think about arguments for a given side that I might not have properly considered before, which you would expect to give me a greater appreciation for the nuance of that side – even if I still disagree, maybe I'm less confident or black-and-white about it. Except this also happens with positions the person already agrees with, or topics the person is already knowledgeable about and so presumably already appreciates the nuance of. No, the effect seems to be deeper than that, and one theory is that part of it is my brain saying "I am constructing arguments in favour of X position. But I disagree with X position. But then why would I be doing that? I must agree with X position.", and the various sliders on my attitudes around that topic shift somewhat.
Now, in the video they focused on what is sometimes called "attitudinal dissonance", where I hold two contradictory attitudes. So I might dislike violence, but enjoy watching MMA, for example. The source of the dissonance is different from above – it's two attitudes, rather than a behaviour and an attitude – but the basic dynamic is the same. There's a dissonance, and if that dissonance becomes perceived on some level, then I need to deal with it either by ignoring one side of the conflict (putting my distaste for violence aside while watching MMA, for example), or by changing one or both of the conflicting attitudes.
There are a lot of different perspectives on this. Some argue that it's fundamentally an interpersonal effect, except the observer and the observed happen to be the same person, while others argue that it's the result of not wanting to be responsible for creating bad consequences, rather than of dissonance between attitudes and/or actions. Honestly, these all have their benefits and drawbacks, but the argument is really very technical and surprisingly tribalistic. This is a decent starting point on the various interpretations, albeit one that clearly supports the OG Festinger position, but it gives you some things to look up.
Confirmation bias
You know this one. Confirmation bias, in simple terms, is the idea that people tend to prefer information that confirms their pre-existing ideas. However, like cognitive dissonance (which it is almost certainly related to), once you start digging, the details start getting confusing. I generally advise viewing these ideas less as "essentially unitary processes with clear characteristics" and more as a Wittgensteinian family of members that resemble each other and fall under a broad umbrella – in the same way that there isn't any key characteristic true of both chess and football that isn't also true of court cases or movies, yet we recognise chess and football as both being "games" and court cases and movies as something else. So instead of arguing over whether X or Y is "really" confirmation bias or not, let's just lump all of these together for now:
A tendency to seek out information that confirms one's own previously held ideas and beliefs
Notice that this is different from seeking out previously encountered information; when writing, I often remember things I have read before that would be useful to re-read or cite to illustrate my points, and I spend time looking for that particular work. Not because it supports my point (although it usually does), but because I remember it explaining things in a really clear way, or containing some useful insight
A tendency to weigh information that one comes across that confirms one's own previously held ideas more heavily than contradictory information (e.g. if I come across a study suggesting that drinking mercury increases intelligence, I'm probably not going to give it as much credibility as one suggesting that good nutrition in childhood does the same thing, even if the apparent methodological rigour is the same)
This also includes dismissing information that contradicts my beliefs; when an anti-Semite who believes that Jewish people are disproportionately powerful and rich is confronted with evidence that this belief is false, they will posit (totally without basis) that all or most of the rich people are secretly Jewish-by-descent, dismissing all the counter-examples as "exceptions" while holding to the (false) "general trend"
This tends to be what the layman thinks of when they say “confirmation bias”
When forced to change beliefs, a tendency to generate new beliefs that are as close as feasible to previously held beliefs, or to find generating new beliefs very difficult (e.g. if I am a devout atheist, and I receive undeniable proof of some kind of divinity, I may continue to be an atheist simply because I cannot generate a belief in that divinity that is coherent with the rest of my identity and worldview)
Now, because I have a strange voice in my head who constantly makes bad arguments (who is definitely a Flanderised version of a friend of mine), I have to say: just because this is a bias does not necessarily make these things wrong or bad. I should weigh the hundreds of studies I've read on a given topic more heavily than one study, even if I can't find any major flaws in it. I am not committing any moral or epistemological wrong by not seeking out every possible counter-argument for every position I hold – that would be exhausting, for gain that is often pretty unclear. What I am saying is that these phenomena do exist, these are behaviours that actual human beings actually do, and they are therefore interesting to explore and talk about.
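Since the "weighting" flavour above is the one doing the most work, here's a minimal sketch of it in Python. The formula and every number in it are pure invention on my part – a toy, not a real model from the literature – but it shows how two equally rigorous studies can end up feeling very differently credible once prior fit leaks into the judgement:

```python
# Toy model of biased evidence weighting. The formula and the numbers are
# invented for illustration; this is not a real psychological model.

def perceived_credibility(rigour: float, prior_fit: float, bias: float) -> float:
    """How credible a study *feels*, 0..1.

    rigour:    actual methodological quality of the study, 0..1
    prior_fit: how well the conclusion fits my existing beliefs, 0..1
    bias:      how much prior fit leaks into my judgement, 0..1
    """
    return (1 - bias) * rigour + bias * (rigour * prior_fit)

# Two studies with identical rigour, judged by a moderately biased reader:
mercury   = perceived_credibility(rigour=0.8, prior_fit=0.05, bias=0.5)
nutrition = perceived_credibility(rigour=0.8, prior_fit=0.95, bias=0.5)

print(f"'mercury boosts IQ' feels:   {mercury:.2f}")    # ~0.42
print(f"'nutrition boosts IQ' feels: {nutrition:.2f}")  # ~0.78
```

Note that with bias set to 0 the two studies come out identical, which is the point: the asymmetry comes entirely from how well the conclusion fits what I already believe, not from the evidence itself.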
As you can imagine, this is a broad grouping of behaviours that is very common across pretty much all of humanity, including the privacy "community". However, in the video in question, Henry and Jonah seemed to mostly conflate it with what's called the "hostile media effect" or "hostile media phenomenon". Basically, this refers to the tendency of people on opposing sides to view the same piece of media as biased against "their" side. In the classic study, two groups of people were shown a (crafted to be specifically neutral) news story about the Israeli-Palestinian conflict; people who were pro-Israel interpreted it as being biased against Israel, while pro-Palestine people interpreted it as being biased against Palestine. Conversely, people tend to view articles biased against their outgroup as unbiased.
So in the privacy world, an example of this might be a piece going through the benefits and drawbacks of a particular solution. Some people will interpret it as clearly biased against the solution, while others will simultaneously accuse the piece of "shilling" that same solution. Henry notes this as common when he does Surveillance Report with Nate from The New Oil; while they make a strong effort to be as unbiased as possible, they do flag their personal views clearly, usually with something like "Personal opinion: …". And they have both noted in the past that they are regularly accused of being biased for or against any number of things.
I reached out to Henry and Nate for comment on this phenomenon, but unfortunately they were utterly swamped and didn't have time. But they've spoken on this several times, so I don't think I'm misrepresenting their views or experiences. If I have, I apologise, and will update this post accordingly once someone lets me know.
So cognitive dissonance and confirmation bias are deeply related, and one may in fact play a role in the other. If I believe two things (A and B) that are coherent with each other, but Belief A is false, then when I encounter disproving information I'm suddenly faced with a choice: change Belief A (unpleasant in itself, and it will now be dissonant with Belief B, which might have flow-on effects), or dismiss the evidence that Belief A is wrong.
(And this is part of why science generally, and psychology specifically, is dealing with a replication crisis.)
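That Belief A/Belief B trade-off can also be sketched as a toy "belief network", loosely in the spirit of coherence models of cognition. Again, every weight here is a number I invented purely for illustration:

```python
# Toy belief network: beliefs are +1 (held) or -1 (rejected), and an edge
# rewards two beliefs for agreeing. All weights are invented for illustration.
beliefs = {"A": 1, "B": 1}        # I currently hold both A and B
edges = {("A", "B"): 2.0}         # A and B strongly support each other

def coherence(b: dict) -> float:
    return sum(w * b[x] * b[y] for (x, y), w in edges.items())

evidence_against_A = 1.5          # strength of the disproof I just ran into

# Option 1: dismiss the evidence and keep A (pay the evidence as a penalty).
keep_A = coherence(beliefs) - evidence_against_A
# Option 2: accept the evidence and flip A (pay the lost coherence with B).
flip_A = coherence({**beliefs, "A": -1}) + evidence_against_A

print(f"dismiss the evidence: {keep_A:+.1f}")   # +0.5
print(f"revise Belief A:      {flip_A:+.1f}")   # -0.5
# Because A is entangled with B, the internally "best" move is to shrug
# off the evidence – exactly the dismissal described above.
```

Make Belief A load-bearing for a whole worldview (many edges, big weights) and the evidence has to be overwhelming before revision ever wins out.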
In an attempt to provide some kind of guidance for improvement, Henry noted that he hoped that simply being aware of these biases would help people avoid them. I'm not going to say that has literally zero benefit, but these are extremely hard to avoid, because they rarely operate at the level of conscious awareness and choice. I do not consciously think "OK, I'm going to look for evidence that supports my pre-existing beliefs and discredit contradictory evidence"; I think "I'm curious about X, let's look some stuff up", and am naturally drawn to dismiss stuff that is obviously nonsense (which is mostly defined by "disagrees with what I believe") unless I make a consciously concerted effort. Avoiding confirmation bias is really, really hard, even in the short term, and in the long term I'd say the best we can do is mitigate it through the ways we move through the world and make decisions. When writing these posts, I consciously attempt to weigh the evidence in an impartial and informed way, but the way I think is not unbiased; I'm inclined to view certain papers as more or less informed partially as a result of how well they fit with my understanding of the topic at hand.
I dug into the research on how to reduce the effect of confirmation bias, and frankly I couldn't find anything remotely actionable. It seems to vary depending on the exact domain – addressing it in political beliefs is different from avoiding it in a research context. If you can find something useful, drop it in the comments! I'm honestly not happy with coming up empty on this, so that'd be cool.
For example: I read an article about an attempt to reduce the effect of confirmation bias, which included a hypothesis about how people with confirmation bias would act differently from those without. On the surface that's a fine hypothesis, but since I understand confirmation bias as a human universal, it's actually completely unfalsifiable, because one group will always remain empty. This made me think of the authors as less knowledgeable, which made me less likely to interpret their points charitably – there's a long tradition of people from other fields (especially technical fields) thinking psychology is easy and just needs someone with more rigour to Do Studies Properly, because illusory superiority is something that happens to other people, I guess? Was this a warranted downgrading of the article based on observable evidence, or was it me discrediting potentially valuable information because the authors disagreed with my previously held beliefs, plus a kind of tribalistic defensiveness? And is there a meaningful difference between the two?
Identity and worldview are obviously important here. In the above paragraph I alluded to tribal or social identity, in which my group membership – along with my stereotypes of other groups and the relations between various groups – made it easier for me to reinforce my worldview by discrediting the source, rather than sincerely engaging with the actual arguments or evidence put forward. Both confirmation bias and cognitive dissonance can be viewed as a kind of self-protective mechanism. We need to have a worldview and self-view that at least appears to us as consistent over time. While we may intellectually acknowledge that we act differently depending on context (who I am at work is very different to who I am when writing this, which is different again to who I am when I'm with my friends), we tend to downplay the significance of these differences. The idea of being "two-faced" or a "chameleon" is still very much an insult.
You see this in political rhetoric. Whenever a person is attempting to argue for a particular position, they will almost always say something along the lines of "I have believed for many years that…", or point out that their opponent was inconsistent on a topic. Admitting that you learned and grew over time is extremely rare, and conventional wisdom is that admitting you were wrong in politics is a death sentence, only to be done when there is no other choice, or when you can reframe it as actually self-glorifying ("They're so humble!").
There is a complicating factor here, in that people who are being mass-harassed for some perceived wrongdoing often face demands to apologise publicly and say they were wrong in whatever they said. However, this is usually more about demanding submission to the mob than about the actual positions or beliefs at play, so I'm tempted to classify it as its own phenomenon.
Further, if we were wrong once, we might be wrong again now. Which is inherently incompatible with the requirements of basic functioning (I have to assume that I'm broadly correct in order to interact with the world in any semi-rational fashion). Adding social dynamics into play – and, us being human, they're always in play – only exacerbates things. If I'm part of a group that thinks dogs are great, then admitting that dogs might not be so great risks backlash from my own community, or at least loss of status. Which humans really don't like.
In the video, Henry and Jonah go on to talk about how having central, deeply-held beliefs that, if attacked, cause strong negative feelings is bad. This… I can't phrase this nicely or delicately, so I will instead go for clarity: this is just a bad take. We all have deeply held beliefs, and we should. Maybe you believe that people should try to help others whenever possible, or that rape is bad, or that fair working conditions are a human right, or whatever. Imagine that you came across someone in your life who argued, entirely sincerely, that that position is completely wrong. Rape is actually great, because it helps more people come into being! People don't have a right to good and fair working conditions, and any laws that try to enforce that are at best unnecessary (because everyone has free choice of where to work) and at worst harmful (because they disincentivise competition). Helping people is bad, because it makes them dependent on that help rather than improving their own capabilities. Privacy is unnecessary, because if you have nothing to hide you'll be fine. (Note: I'm not arguing any of these; they're examples.) Can you tell me, in all seriousness, that you would not have some kind of strong negative reaction? Can you really say that you believe "rape is good actually" is within the pale of legitimate positions that could be held by a reasonable person? Of course not.
So saying that you don't have any deeply held beliefs like that signals one of three things:
Ignorance about your own values, either because they've never been tested or because you've never seriously introspected about what is important to you
A pretty severe failure of imagination, and blindness
Sociopathic nihilism, where nothing actually matters to you
None of these are good or healthy, and certainly not anything to put forward as desirable. This isn't even about cognitive dissonance or confirmation bias. This goes to what values even are, and how they form a crucial part of identity and worldview.
To be clear, I don't think this is what they meant to say. What I think they meant was "becoming overly attached to any one position can lead to over-reacting to perceived criticism of that position, which can hurt your ability to learn and grow, as well as making you unpleasant to be around". Which is probably broadly correct, although I can't remember or find any research supporting that position (gap-in-the-literature opportunity!). But the idea of "you have sincerely held values that you actually defend or act on; that makes you unpleasant, therefore bad" is just a really bad take, and while I'm sure that's not what they meant, the way they phrased it was sufficiently poor that that was how it came off.
Anyway, that’s all for me this week. Next week I hope to have a really cool announcement to make, along with a post about a topic with little direct relation to privacy, just something I think is kind of interesting.