Quick Summary: In this Substack I talk about disagreement. I set out ways to take the disagreeableness out of disagreements - by recognising the role chance has to play in our beliefs, by clarifying terms and by turning a moral question into a data one.
*Philosophy Warning* In the last part I show why chance is (I think) an irremovable part of knowledge.
If someone had ChatGPT, YouGov polling data and a very basic biography of me, they could predict almost all my political opinions. This (1) made me wonder what on earth the point was of spending £27K on a politics degree and (2) illustrates a broader truth. Our beliefs are predictable, and the factors that predict them aren’t ones we choose, or ones that seem to have much of a bearing on the accuracy of a belief.
I’m not the only person who is this predictable. The best predictor (after party choice in the last election) of how people will vote is not the books they’ve read, or the way they reason - but how their parents voted. The correlation of beliefs and demographics is way too strong for us to kid ourselves that we come to our beliefs independently. We’re just not these marvellous rational agents, carefully weighing the information presented to us. This isn’t great for our egos. But recognising this correlation should make us appreciate the important truth that many of our beliefs are an accident of time, birth and circumstance.
I thought about this a lot in my first year at Oxford. Many people from the London Bubble of private schools dispensed a lot of judgement on others for thinking the ‘right’ or ‘wrong’ things. I thought the ‘right’ things, but only because I had learnt them in the Bubble. I didn’t know them before, and I don’t know whether I would have done had I not gone to a school in the Bubble. A lucky scholarship application was all that stood between me and the supposed-deplorables under discussion. (Of course, this isn’t the case for everyone, but it was certainly true for me.) And so the consequences drawn - taking the presence of these beliefs as a marker of moral superiority, their absence as a mark against someone’s character - didn’t seem warranted when it was such a matter of chance what beliefs you arrived with.
There are people with PhDs on both sides of most issues we discuss. There are whole interlocking chains of arguments and counter-arguments that we will never know, and very few of us are anywhere near seeing even a single debate in its entirety. Most of the time we’ve chosen to stop tracing a chain at a certain point and adopted that position as our own, often with a lens that distorts how we evaluate any evidence from that point forward. Where we end up is a product of confirmation bias and chance.
At the moment, when we disagree, we tend to trace arguments, a call-and-response of For and Against. And we’re unlikely to change our minds. We’ve spent time learning our side’s arguments and counterarguments, and this can make our positions feel entrenched, solid. They form a fortress of beliefs we want to defend and are loath to abandon. But we don’t have the chain in its entirety, so our location on it is arbitrary.
An alternative to this ritualistic tennis match is to trace the sources of our beliefs and where they might have come from. It is to answer the question “Why is it this avenue of arguments and counterarguments that I have been exposed to?” and locate the discrepancy that prompted the disagreement. To embrace the ad hominem. This takes the heat away by turning the conversation into a kind of ethnography - a tracing back of how different lives and different stories lead to different edifices of belief.
***
The second way to avoid disagreement is to make sure you know what you’re actually talking about. First, check your definitions - or, as Hume puts it, “An explanation of the terms commonly ends the controversy, and the disputants are surprised to find that they had been quarrelling while at bottom they agreed in their judgement.” Often we think we’re disagreeing when we’re actually agreeing violently in different language.
I think we also mix up data questions and normative questions. Questions that seem incendiary need not be, because they are - at root - data questions. Take Isla Bryson - a trans woman imprisoned in a women’s prison.
Maybe this is a little naïve, but it seems to me most people accept two things. The first is that men who pretend to be trans women to rape people are bad. And I think it is fair to say that these men could exist – and recognising their existence is not a means to marginalise the trans community, so much as to express cynicism about the actions of some men. The second is that trans women deserve respect wherever it can be given and that some will suffer a lot if placed in a men’s prison.
If we accept these two things, the question of whether trans women should be housed in a women’s prison is an unpleasant calculation. Does the harm wrongful imprisonment could do to trans women outweigh the risk to the female prison population from evil, trans-impersonating men?
This isn’t a conceptual question. It’s not about who is a woman, or what a woman is. It’s a data question - one we could, in principle, settle with an alternative-hypothesis machine and some maths. And the task then becomes not how to unknot a conceptual debate, but how to gather evidence to form a workable hypothesis about which of those two harms would be greater.
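To make that sum concrete, here is a toy version of the calculation. Every symbol below is a placeholder I’ve invented for illustration - none of them is a real statistic:

```latex
% A toy expected-harm comparison. All quantities are hypothetical placeholders:
%   p_t = chance a trans woman comes to serious harm if housed in a men's prison
%   h_t = severity of that harm
%   p_m = chance a bad-faith man exploits placement in a women's prison
%   h_m = severity of the harm he inflicts there
% House trans women in women's prisons when the first expected harm is greater:
\[
  p_t \cdot h_t \;>\; p_m \cdot h_m
\]
```

The hard part, of course, is that none of those four numbers is easy to measure - which is exactly why this is a data question rather than a conceptual one.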
Recognising the status of something as an unresolved data question takes the disagreeableness out of the disagreement. It makes the task one of figuring out how to find evidence, rather than one of laying down the law. The disagreement becomes a shared project: a question to be answered together. It allows us to stand side by side and look for the answer.
***
We should expect to disagree. Time spent agreeing is odd-time-out. There are far more ways to disagree than to agree, and agreeing is far quicker: agreement requires little more than a lot of nodding, while disagreement demands a detailed post-mortem. Things are hard, we often don’t know the right answer, and even if you think you are ‘more right’, that doesn’t stop others being a ‘bit right’. There’s almost always something to take away from disagreeing.
***
Warning: A Philosophy Bit
If I were a billionaire, I’d put bank account details here where you could win 14 MILLION POUNDS for venturing beyond the philosophy warning. But I am not, so instead you’re going to get a potted bit of epistemology and some rambling about crosswords. Which is infinitely more valuable. Aren’t you lucky?
There are two models of how we build up knowledge. The first is called Foundationalism. Foundationalism suggests that knowledge is a bit like a pyramid. There are some fundamental basic beliefs - there is a chair in front of me; I think therefore I am; coffee is a human right - and we build up using inferences from those basic beliefs. Take ‘I think therefore I am’ and ‘there is a chair in front of me’ and they can get you (ish) to ‘the chair exists as something I can see’.
Coherentism suggests that knowledge is a little like a web or a raft. We know lots of different things, each supporting the others, but none are primary. None are anchoring the raft down. Instead, we know things by virtue of their position in the web - because they fit with the other things that we know. I know that King Charles is the ruler of England, because I know that I can rely on newspaper reports, because I know that he is the son of Queen Elizabeth, because I can remember watching a play called Charles III. It fits into my web of knowledge, and so I accept it as true.
Each model has problems. Foundationalism seems to require us to make big inferential leaps to know anything of substance. Basic beliefs are things like ‘I know the sun rose this morning’ - and it seems to require quite a long chain to get from there to ‘Charles III is King of England’. But coherentism seems a bit too chancy. Without basic beliefs to anchor our web of knowledge to the world, how do we distinguish between my nice normal web containing ‘Charles III is King of England’, and a web of belief encompassing it alongside ‘Queen Elizabeth is really a lizard’?
So (and a truly horrible portmanteau is coming up) an epistemologist named Susan Haack came up with an alternative: foundherentism. On this model, knowledge is neither a raft nor a pyramid. Instead, she suggests, it is like a crossword. We have some foundational beliefs - like the first clues that we work out - but we update or abandon them based on how they fit into the whole. We’re willing to discard them if they stop the puzzle fitting together correctly.
Initially this seemed like the best of both worlds. You’ve got the anchoring of foundationalism, combined with the group-properties of coherentism. But I think there’s still a problem here. Look at the below crossword.
1 down: Gendered colour
1 across: To hit something
Show of affection
There are two reasonable answers to 1 down (gendered colour) - pink or blue - and each seems equally likely. Depending on which one you plump for, your answer to 1 across (to hit something) - punch or knock - will change. And so, in turn, will your evaluation of the next answer (show of affection) - kissing or hugging.
Whether you end up with the blue crossword or the pink one depends on what you started with. There’s still an element of chance in where you end up - a matter of luck in whether you reached for pink or blue first.
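If it helps to see that path-dependence mechanically, here is a toy sketch in Python. The crossing pairs are invented for illustration - it is a cartoon of the idea, not a real crossword solver:

```python
# A toy model of Haack's crossword: two starting answers, each leading
# to a fully consistent but entirely different fill.

# Hypothetical crossing constraints: which pairs of answers share a square.
COMPATIBLE = {
    ("pink", "punch"), ("blue", "knock"),        # 1 down crosses 1 across
    ("punch", "kissing"), ("knock", "hugging"),  # 1 across crosses the next clue
}

CLUE_OPTIONS = [
    ["punch", "knock"],      # to hit something
    ["kissing", "hugging"],  # show of affection
]

def solve(first_answer: str) -> list[str]:
    """Fill the grid, keeping only answers that cohere with the one before."""
    grid = [first_answer]
    for options in CLUE_OPTIONS:
        for option in options:
            if (grid[-1], option) in COMPATIBLE:
                grid.append(option)
                break
    return grid

# Which crossword you get is fixed entirely by the first guess:
print(solve("pink"))  # ['pink', 'punch', 'kissing']
print(solve("blue"))  # ['blue', 'knock', 'hugging']
```

Both fills pass every coherence check the solver can run; nothing inside the puzzle tells you which one is right.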
(For more on this - and for any basic philosophy generally - check out the IEP. It’s brilliant - super accessible and really interesting. Coherentism is here; foundationalism is here; and there’s also a great article on aeon - thank you Aidan for the recommendation).
Have a lovely week everybody!! Please appreciate my posting this after approximately three hours sleep and an awful lot of tequila. Dedication.
***
this is great, and overlaps with lots that i also think about. i’ve also shared the desire to reduce more policy questions to ‘data questions’, but i think this brushes over the fact that the question you consider, like almost any political disagreement, also roots itself in a particular moral framework (what is ‘should’, what is ‘best’), and unless you all agree on an ethics, and it’s a neat and calculational one (and perhaps not even then!), this reduction doesn’t obviously get you any closer to a framework for assessing true agreement/disagreement, or perhaps truth itself. this comes up perhaps most commonly when party politicians in moments of difficulty proclaim that all the main parties ‘share the same goals’ - those goals presumably being making britain a better place for everyone to live in. aside from the fact that even this statement is obviously factually incorrect, the fact that a politician could sincerely believe it would seem to suggest all politics should reduce to the simple calculation of what’s best, which it never seems to.
the other objection, of course, commonly applied to utilitarian arguments, is that the supposed data questions to which we’re in the business of reducing things don’t have knowable answers (in the form, e.g., of clear truth criteria) - is it 67% bad to put someone’s safety at risk in a particular situation, or is it 68%?
i’m not totally swayed by these objections, and sometimes approximations will be good enough. in general i think the thrust of ‘people should be significantly more precise about exactly what they disagree on when they disagree’ is something i believe very strongly; i just think hoping this can always become a ‘data question’, or that doing so will solve things, is overoptimistic
The story of Edshu - the god who walked down a road wearing a hat half red and half blue. Some in the village saw the red side, others the blue, and everyone started fighting about the colour of the hat.