Darkwood Spheres, Barbara Hepworth, 1936
Quick Summary: Some quick thoughts on AI. I suggest (1) how I think AI will increase unfairness in the workplace, (2) an idea about the ‘texture of thought’ - we know AI can be clever, but could it be wise? - and (3) a summary of https://situational-awareness.ai and some other recommendations.
Thank you to the VERY CLEVER friends who have been pinging me recommendations and thoughts. You guys know who you are. (Also apologies in advance. I consumed an absurd amount of wine with my siblings yesterday and I’m alternating writing this with trying quite hard not to be sick on the train. Keeping it classy.)
There is a type of fraud often called the 419 scam (strictly, this variant is the ‘perfect prediction’ scam). Send a message to 10,000 people predicting the outcome of a boxing fight, telling half it will be a win and half a loss. Repeat with the 5,000 who received the right prediction. After a dozen or so rounds, one person believes you have correctly predicted every fight in a row, and you charge them millions to access your powers of divination.
When I was in management consulting, I focused on commodity trading. One of my mentors had a theory that the so-called gift for commodity trading was just the logic behind the 419 fraud, scaled up. Start with a thousand entry-level traders, and someone has to get lucky at a seemingly improbable frequency. The star trader was just the lucky odd one out - reaping extortionate rewards for what was, in reality, little more than hope and random chance.
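The arithmetic here is easy to check with a quick sketch. This is a hypothetical illustration (the numbers of marks, traders and trades are mine, not the essay’s): halving 10,000 marks takes only 13 fights to leave a single true believer, and among 1,000 coin-flipping traders someone always racks up an ‘improbable’ winning streak.

```python
import random

# The halving behind the scam: each fight eliminates the half
# who were sent the wrong prediction.
marks, rounds = 10_000, 0
while marks > 1:
    marks //= 2
    rounds += 1
print(rounds)  # 13 fights leave one convinced mark

# The trader lottery: 1,000 traders each make 50 coin-flip trades.
random.seed(0)

def longest_streak(flips):
    """Length of the longest run of consecutive wins."""
    best = run = 0
    for win in flips:
        run = run + 1 if win else 0
        best = max(best, run)
    return best

streaks = [
    longest_streak(random.random() < 0.5 for _ in range(50))
    for _ in range(1_000)
]
# Pure chance, but the best streak looks like a gift for trading.
print(max(streaks))
```

On any run, the longest streak across the cohort comfortably beats what any individual trader would expect - which is the whole point: someone has to be the outlier.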
I think AI means this Randomness Reward is going to become increasingly common. AI smooths edges. It gives you the right answer and ticks off all the easy wins, so most of the work is done off the bat. Where, then, do people come in? I think the role of humans will be to introduce grit at the edges - to provide the random quirk that distorts where the different AI systems settle. On a playing field levelled by access to a machine that automatically generates the textbook answer, the competitive advantage - and the role of people - will be doing something unpredictable: pulling a company or a decision out of the rhythms of AI, out of this new space where everything is the thing-that-makes-sense. If everyone is doing the thing-that-makes-sense because everyone has access to it, humans will come in to introduce the ‘non-sense’, and prompt new changes and disruption in their wake.
We will probably think that we came to a conclusion, or ended up at a given outcome, through some feat of reasoning - some specifically human flair, from being as special as our mums told us. Those of us who end up in the fortunate position of the commodity trader, or the unfortunate position of the fraud victim, will find a way to rationalise our own success post hoc. But it won’t be that. We’ll each have introduced some grit that forces very complicated systems that know more than we ever will to reach a new equilibrium. Sometimes those equilibria will pay off, and sometimes they won’t. Some people will have winning streaks of equilibria. And they will be rewarded.
I don’t think the extent of this Randomness Reward would be an issue if it weren’t for how much it relies on perception. It is not enough to arrive at a new equilibrium; you also have to present the arrival as a project of your own doing. For commodity traders this is less of an issue - it’s a reasonably democratic Randomness Reward, because the spoils of a trade are clearly assignable to a single person. Where it will be a problem is in teams, where most outcomes are generated by a Randomness Reward from grit introduced by a range of different people. Because then all the credit goes to the person who is best at presenting themselves as responsible for the outcome…
***
Imagine a very precocious 15 year old. Think an all-A*s, all-A-Levels, ‘Oh yes, Rupert finished the entire works of Kant after his Grade 8 trombone exam’ sort of chap. Rupert has consumed a whole bunch of philosophy - everything from Aristotle to Žižek - and is now (a) everyone’s favourite dinner-party neighbour and (b) convinced he is pretty well qualified to opine on Life, the Universe and Everything.
Would you talk to Rupert about how to console your widowed mother?
The precocious 15 year old is used in philosophy to illustrate the difference between being clever and being wise. You can know a lot, you can be very clever, but that doesn’t make you wise. AI is definitely clever. But I’m not sure if it’s wise. There are a lot of scales on which AI has advanced (reasoning, recognition, deduction etc.). I don’t know if wisdom is one of them.
I was really struck by a section in the Situational Awareness essays about the way AI ‘reads’ at the moment. AI skims right now, extracting all the facts and information, but it’s not able to sit with a text and think about it. If you’ve ever skimmed poetry, you’ll appreciate the difference. You finish and you’ve taken nothing really away - you’ve just been moving your eyes over the page (and feeling productive). You don’t become wiser as a result.
The author, Leopold Aschenbrenner (fantastic Bond-villain name), predicts that this will change - that AI will stop skimming and start reading things in greater depth, spending the equivalent of months or years thinking about them. There’s a course at Duke University that is based around 50 books everyone should read***, and I wonder what an AI trained on those books would look like - whether that sort of model would be wise, whether the texture of its thought would change. I wonder - qualitatively - what thought and consciousness would be like as that sort of model.*
***
Recommendations:
I thought this was really good (cheers Henry!) - https://situational-awareness.ai. As a quick summary, it says that we should assume AI is advancing exponentially (that is to say, in orders of magnitude), with explosive step changes in what it can do in the very near future. Unless there is a very good reason to think some cap or physical barrier will bind (e.g. data, power, spend, hardware), we’ll have AGI by 2027.
This is because there are easy wins from (i) improving how data is used (greater thinking/reflection), (ii) algorithmic tweaks and (iii) greater context and ‘unhobbling’ - use of tools, unblocking some simple problems. We can achieve superintelligence when AGI becomes self-refining, but it will also reach a point where we are unable to check if it’s aligned with our goals. Superintelligence represents a very real security risk, in the very near future, and requires adopting a war footing. Current security is far too lax relative to the risks AI represents. Read it! It’s scary, but it’s a clear-eyed, well-written intro.
I also have a soft spot for Novacene by James Lovelock. Lovelock’s pretty ‘woo-woo’ (he came up with the Gaia hypothesis) but I think this is a pretty creative and brave attempt to try and answer the question of what it is actually like to be an artificially intelligent being. He describes them seeing us as we see plants, with a very slow process of perception and action, and suggests that the role of life in regulating the temperature of the Earth means there will be an incentive for them to keep us around. Probably (and hopefully) not right, but interesting nonetheless.
***
Some Quick Predictions**
In the next 2-3 years, one of Google Cloud or Amazon Web Services will do something to try and seize data that they’re hosting. If you can choke people off from data, you have a very good moat for your AI. I reckon there’ll be a ‘move-fast-and-break-things’ moment, quite a bit of dudgeon, and then one of them will become the new AI winner.
In the next 5 years, we’re going to see hyper-personalised trading markets in increasingly small classes of goods (like shoes etc.) and much more differentiated pricing. Combine data with the ability to be super responsive, and you’ll see an acceleration of trends that are already underway.
*Although I think the bar for wisdom is a lot lower than this may make it seem. I know the texture of my own thought. I know what it is like for me to think, and I know the difference between how I thought at 5 and how I think now. I think I know how it feels to be wiser. And it seems boggling to imagine a model being ‘wise’ in the sense I feel myself to be.
But (reverse-Uno here), the AI doesn’t need to reach that. I also don’t know the texture of Rupert the Precocious 15 Year Old’s thought. I don’t know the texture of the wise man’s thought - I just make an assumption. The AI only needs to be functionally equivalent to the wise man and perform the same job in my life (this is the same insight behind the Turing Test). The fun combination of scepticism and the privacy of other minds means everyone else could already be AI robots in the matrix for all I know.
**These are primarily because if I’m right, I’ll brag about this forever, and if I’m wrong, it’ll be a good check on my ego. Plus, I read Tetlock’s ‘Superforecasting’ recently, and his entire argument is that we should be willing to put out time-bound predictions so we can know if we’re actually saying anything useful.
*** Although I can’t find any evidence for this online, and I’m slightly concerned I might have dreamt it.
There’s also a limit on the role AI can have with traders - Palmer’s three-week horizon for even the most granular models. So a prediction that trading will return to a longer-term perspective/become slightly more future-orientated?
A prediction that it’s all going to be gobbled up by rent-seeking, unless there’s regulation.