DRMacIver's Notebook

Morality tests as self-fulfilling prophecies

Suppose someone is making a bad argument for a political point that you basically agree with. Do you correct their argument?

Probably not. Because if you correct their argument, you will be treated as if you disagree with their conclusion.

This is not wrong of them per se: Almost everybody who corrects an argument does so because they disagree with the conclusion.

Why is that the case?

Well, because if you correct the argument you will be treated as disagreeing with the conclusion, and that is rarely worth it unless you actually disagree with the conclusion. This is particularly true in politics, where disagreeing with the conclusion will usually be treated as a sign that you are a bad person.

This seems a little self-referential (A because B because A because B because…) but in fact it’s a very straightforward feedback loop.

If you do disagree with the conclusion, it is worth correcting the argument. Therefore there is a core population of people who will always (yes this is a simplification. This is a toy model. Deal with it) correct the argument.

Suppose about 50% of the population disagree with the conclusion, and initially the other 50% are all happy to correct a bad argument for it. So if someone disagrees with your argument, there's a 50% chance that they're a bad person. That's a reasonable chance, but it's not conclusive by any means. It creates a suspicion at best.

But now everyone who corrects an argument has to do a certain amount of work to pass the morality test: To demonstrate that they’re not a bad person. Initially it’s not much - “Look, obviously (conclusion), but I don’t think this is the right reasoning because (correction). Instead (alternative)” will easily clear the bar.

But that's still a nontrivial amount of work. Say about 20% of the population who agree with the conclusion (i.e. 10% of the total population) just don't care enough to do the work. Now 50% of the population will argue with you because they disagree with the conclusion, and 40% of the population will argue with you because they agree with the conclusion but disagree with the argument. So \(\frac{5}{9} \approx 56\%\) of the people who disagree with your argument are bad people. This increases the level of suspicion, and thus the level of work required to pass the morality test, and so slightly more people drop out.

Over time, thanks to this feedback loop, people increasingly only disagree with your argument if they also disagree with your conclusion, or if they are prepared to put in a huge (in some cases impossible) amount of effort to call you on your bad argument.
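To make the feedback loop concrete, here's a minimal Python sketch of the toy model above. The specific drop-out rule (each round, the share of agree-ers who give up is proportional to the current level of suspicion, calibrated so that 20% drop out at 50% suspicion) is my own assumption; the paragraph above only spells out the first step.

```python
# Toy model of the feedback loop. "disagree" is the fraction of the
# population who disagree with the conclusion (and always correct the
# argument); "agree_correct" is the fraction who agree with the
# conclusion but are still willing to correct a bad argument for it.
disagree = 0.5
agree_correct = 0.5

for step in range(10):
    # Suspicion = P(disagrees with the conclusion | corrects your argument).
    suspicion = disagree / (disagree + agree_correct)
    print(f"step {step}: suspicion = {suspicion:.0%}")
    # Assumed drop-out rule: the higher the suspicion, the more of the
    # remaining agree-ers decide passing the morality test isn't worth it.
    # Calibrated so 20% drop out when suspicion is 50%, as above.
    agree_correct *= 1 - 0.4 * suspicion
```

Running it, suspicion starts at 50%, hits the 56% from the calculation above after one round of drop-outs, and then creeps towards 100% as the remaining agree-ers dwindle away.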

In this way the prediction (if you disagree with my argument you may be a bad person) has turned into a norm (only bad people disagree with my argument).

I don’t know about you, but I hate this. I am extremely in favour of knowing true things, and making sure arguments are correct is a part of that. If my argument is wrong I usually want to know (although I’ll admit that this doesn’t mean that I accept correction gracefully all the time, especially if it comes from strangers, especially especially if it comes from strangers on Hacker News). This norm essentially means that I cannot trust anybody’s arguments to be reliably truth tracking.

I also hate it more because this pattern repeats over and over again. It’s one of the driving factors of political polarisation - if you disagree with me on one political issue, that’s evidence that you disagree with me on all the others (and thus are a bad person) etc. In general, people are not allowed to agree with the outgroup on anything, because that is treated as evidence that they are a member of the outgroup.

I would like to fix these feedback loops.

I think for public discourse with strangers we're basically fucked, and there's no way to fix them right now except to set up much harder boundaries. The best we can hope to do in most of these cases right now is to deescalate - politely rebuff people instead of shouting at them. It won't fix the problem, but it might ease off on accelerating it a bit.

But I do think we can do better with people we know, and I think this starts by accepting more of the burden of proof: If we suspect someone we know, especially a friend, of being a bad person, we should give them the benefit of the doubt. This doesn’t require us to make excuses for them, but it does require us to be willing to do the work ourselves to figure out what’s going on rather than immediately leaping to conclusions.

In order to get the ball rolling on that I will make the following prediction: if you’re not willing to do that, chances are you’re a bad friend.