Quick Summary: In the first part of this Substack, I set out two different ways of thinking about the right thing to do. In the second, I discuss why it is difficult to know what the right thing to do is when something has gone wrong - why it is difficult to navigate the world of the second best. In the third, I describe a lesson I had to learn - about thinking about right as ‘a little less wrong’.
Is what is ‘right’ better-than-what-exists-at-the-moment or is ‘right’ that-which-belongs-in-a-perfect-world? As I wrote that question, I felt the Earth subtly shift on its axis, the rotation disrupted – Superman style – not from the power of the insight, but the torque of a thousand eye rolls.
But! It’s an important question, and this is a simple way of making the distinction. ‘Better-than-what-exists-at-the-moment’ - Route 1 - is the idea that the right thing to do is the best out of all the possible options available to you. (Formally, out of the set of accessible possible worlds, choose the best one.) ‘In-a-perfect-world’ - Route 2 - is the idea that the right thing to do is what would happen in the most perfect world available. (Formally, the right thing to do is whatever is identical to what occurs in a perfect world.)
‘Lauren, these are basically the same. Go and contribute to GDP in some way. Go outside. Drink a beer. SAVE ME FROM THIS WRETCHED PEDANTRY PLEASE.’
No. Sozzles.
We mix these two up a lot. We can end up talking at cross-purposes when discussing the right thing to do, and having this distinction in mind is a useful way to work out what we’re on about. An example. Alice and Bob are discussing a black beauty pageant. Alice says it’s not right. Racism shouldn’t exist, there shouldn’t be a need for a black beauty pageant, and so it’s not right that it exists. Bob says it is. Given that racism exists, it’s the best thing we can do - it’s right that it exists. Alice is Perfect-Worlding. Bob is Best-of-all-Alternativesing.
There are advantages and disadvantages to each. Best-of-all-alternatives has a bit of a problem - which alternatives? How wide do you cast your net? What’s the relevant horizon to pick from? How much are the alternatives dependent on other people, and how should I take that into account? You might have to make squirmy tradeoffs - is the best alternative world one where there is discrimination, but less hunger? More pleasure and less truth? You might have to calculate or do complicated forecasting or even make a spreadsheet with LOADS of formulae. (Shudders).
If that idea fills you with dread, you may want to move to Perfect Worlds. I think that Perfect-Worlds are intuitively clearer.* We use Perfect Worlds explicitly (‘well, in a perfect world…’, ‘ideal scenario’, ‘North Star’) but also implicitly - when we invoke principles like ‘what if everyone did that?’. When we do this, we appeal to a space where people all act fairly to show that an action is wrong - or, rather, we appeal to what would happen in a Perfect World.
(I think people can be sorted into two ‘types’ depending on what their default is - are they a Perfect-Worlder or an Alternativeser. For a freebie ‘Pop Quiz: What Sort of Ethicist Are YOU???’ - CLICK HERE!)
***
I was a hardcore Perfect Worlder for a long time. My default was to think of right things as those that could exist in a perfect world - and some things just intuitively seemed wrong. They didn’t seem to fit with the right thing to do. And then economics ruined everything.
There’s a bit of maths in welfare economics called the Lipsey-Lancaster Theorem. This states that ‘if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal’. What this means is that we may have an ideal, a utopia. But if one piece of that utopia falls apart, there’s absolutely no reason to think that the ideal is a good guide to the next best option.
Imagine you’re a judge at Crufts. Dog A is the perfect spaniel. She has fluffy ears and spots. Dogs B and C both lack fluffy ears. Without the fluffy ears, the spots on Dog B just seem sad. So second place goes to Dog C - even though Dog B was more similar to the winner. Once one piece was lost, the system as a whole required shifting about to reach a new optimum. The same rule applies to any ideal. Once you’ve fallen short in one area, it becomes really hard to work out how the other variables will trade off to get to the second-best. The Perfect World isn’t a very useful guide any more.
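The theorem’s logic shows up even in a toy optimisation problem. Here’s a minimal sketch (the welfare function and all the numbers are invented purely for illustration): two variables each ‘want’ to be 1, but they also want to be equal to each other. When one variable gets stuck away from its ideal value, the best available value of the *other* variable shifts too - the first-best values stop being a good guide.

```python
def welfare(x, y):
    # Invented welfare function: x and y each want to be 1,
    # but there's also a penalty for them being unequal.
    return -(x - 1) ** 2 - (y - 1) ** 2 - (x - y) ** 2

# Candidate values for y, on a fine grid.
ys = [i / 1000 for i in range(-2000, 2001)]

# First best: x is free to sit at its ideal value of 1.
# The best y is then also 1.
first_best_y = max(ys, key=lambda y: welfare(1.0, y))

# Second best: suppose x is stuck at 0 (one optimality
# condition fails). The best y is now 0.5 - NOT the
# first-best value of 1.
second_best_y = max(ys, key=lambda y: welfare(0.0, y))

print(first_best_y, second_best_y)  # 1.0 0.5
```

The point of the sketch: the ideal values of the remaining variables aren’t a reliable guide once one constraint binds - exactly the Crufts situation, where losing the fluffy ears changes how the spots should be judged.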
I thought about this a lot in Kenya. I taught two girls who’d lived through a war on the conservancy. They’d been shot at, and one had attachment anxiety as a result. When trying to work out how to help her, all the ‘normal’ rules about the right thing to do went out the window - because we were so far from the ideal starting point. There’s no reason to think what works for a child who’s had a stable starting point in life would work for these kids - and it could even be harmful. So we had to dismiss most of the information available as inapplicable and work it out from scratch.
More generally, the Lipsey-Lancaster Theorem makes clear the costs of moving away from that first, ideal equilibrium. Once something is thrown off kilter, it becomes much, much harder to figure out what the right course is. Any theory becomes pretty useless, because theories are generally written for ideals, not for the many, many ways a second-best can come about. There are far more ways for something to go wrong than to go right, and no theory can cover them all. And the Lipsey-Lancaster theorem, dry as it may sound, is so important if we’re going to be able to recognise this, and recognise the limits of theories aimed only at ideal scenarios. We have to work out for ourselves how to live in this land of the second-best.**
***
One of the other ways of thinking about right is as ‘less wrong’ or failing smaller. And this type of right is odd. It’s always associated with a failure - something that, on its own, seems to be a bad thing. It requires taking a step back and looking at trajectories, celebrating that you’re getting closer to the way you want the world to be, even if right now things still seem to be getting worse.
I once helped a friend during a very bad drug addiction. Ordering alone, hospital trips, relapses - every rock bottom turned out to be another false floor. But - credit to her - she got clean. And there wasn’t a silver bullet, but one of the things I think helped was celebrating smaller failures.
Eventually, the tide turned. Every day became every couple of days, became every week, became a couple of times a month, became - close enough to - sobriety. Enough to stop it seeming like dying was imminent. And, instead of treating each relapse as a disaster, we celebrated that the gap was longer than the time before. Which was hard. It’s really hard to celebrate when the drug addiction that is putting your friend in hospital has happened again.
But it worked - and the reason I think it worked was because it split the binary between ‘success’ and ‘failure’. Relapses changed from being just more evidence that she was never going to get better to being a sign of a little more progress. This required breaking things down - splitting out speed and acceleration, and allowing one metric to succeed even as the other failed. It taught me that sometimes what we see as the ‘right’ thing has to involve celebrating some ‘wrong’ things. That’s one of the advantages of being a bit less binary in how I think about right and wrong.
Thank you for reading everyone - slightly over-sharey headspace for this one. Hope you enjoy, hope everyone is happy, hope everyone is well! Much love! xxx
*Some people would say there’s no such thing as a perfect world. But I think we do have intuitions about what could or couldn’t belong in it, even if those intuitions don’t line up with a neat philosophical analysis - even if those intuitions are inconsistent, they do exist, and they do play a role when we’re trying to judge something as right or wrong in everyday discussion.
** It’s also really powerful, in part because the flexibility of economic modelling means basically anything can be couched as an optimisation problem. So it always provides a pro tanto reason for rejecting an ideal theory in a non-ideal scenario. It means there’s always more work to do to identify if a given situation is ‘ideal-thinking-apt’.