The perils of underestimating second-order effects
Or the parable of the crypto man with the utilitarian plan.
So uhhh… Hey everyone.
This past week has been weird and terrible. For those who live under a rock (this sounds nice right about now. Is there any space left?), FTX’s implosion has caused some serious ripples in the EA world.
This disaster happened to occur around the same time I published an article, one that now seems rather awkward given the current state of affairs. The article’s main thrust was that sometimes people overestimate the importance of second-order effects when evaluating cause and effect. I still stand by this point, but man, what unfortunate timing.
“Sometimes” is doing a lot of heavy lifting here. Indeed, sometimes people actually underestimate the importance of second-order effects. Disappointingly, that sort of reasoning has recently hurt a lot of people. Let’s explore what happened and then unpack how this might have been avoided.
It increasingly seems that Sam Bankman-Fried engaged in behaviour that sits somewhere between dangerously reckless and outright fraudulent.
Let me caveat this by saying that I barely understand how crypto works, let alone FTX. As far as I know, a liquidity crisis is what happens when I drink too much coffee, and Alameda is the focal point of that Lynyrd Skynyrd song. Any attempt on my part to pinpoint exactly which wrongs have occurred would be irresponsible at this point. I’m not sure where the truth stands, as the story is still unravelling before our eyes.
But wow, what I’m seeing does not look good.
Nonetheless, I think we have enough evidence to justify a much-needed discussion of “do-bad-and-or-foolish-things-in-the-name-of-the-greater-good” moral reasoning. I’ll simplify and just refer to this as naive utilitarianism — aka when we attempt highly consequential expected-value number crunching, even though we’re dumb humans who can’t reason as carefully as the moral calculus requires.
As usual, I’m standing on Eliezer Yudkowsky’s shoulders here.
In one of my least favourite pieces of writing ever (sorry Eliezer, this wasn’t the move. But that’s for a separate blog post), Yudkowsky further highlights the dangers of naive utilitarianism:
“There are, I think, people whose minds readily look for and find even the slightly-less-than-totally-obvious considerations of expected utility, what some might call "second-order" considerations. Ask them to rob a bank and give the money to the poor, and they'll think spontaneously and unprompted about insurance costs of banking and the chance of getting caught and reputational repercussions and low-trust societies and what if everybody else did that when they thought it was a good cause; and all of these considerations will be obviously-to-them consequences under consequentialism.” - Eliezer Yudkowsky
And this is where we return to the current situation with FTX and Sam. Being reckless with billions of dollars of other people’s money is the kind of thing that bad utilitarians might be inclined to do, so long as the profits from doing so can be used for good. Good utilitarians, on the other hand, don’t just consider the immediate and obvious first-order effects of their actions. They take into account the second-order effects too — the messy ripples that happen afterwards.
A bad utilitarian might take the expected haul from robbing a bank and divide that number by $4,500 (roughly the widely cited cost of saving a life through a top charity). A good utilitarian realises that the second-order effects — living in a society where bank robberies occur and so on — would be bad, even if those effects are fuzzy and incalculable. Instead of robbing the bank, the good utilitarian wisely chooses to spend his afternoon putting together a Canva poster for the vegan dinner party he’s hosting for his local EA group.
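To put the naive arithmetic on paper (a toy calculation with a made-up haul; only the $4,500 figure comes from the paragraph above):

$$\text{lives saved (naively)} = \frac{\text{expected haul}}{\text{cost per life}} = \frac{\$1{,}000{,}000}{\$4{,}500} \approx 222$$

The number looks impeccable right up until you price in everything the division leaves out.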
Neglecting to consider second-order effects when setting out to do big, ambitious things in the name of a better world is not just foolish. It’s dangerous. The deeper problem is that properly weighing second-order effects can, depending on the seriousness of the task at hand, be a fool’s errand — and the cost of getting it wrong is only magnified as the stakes grow.
“It is true there are cases in which, if we confine ourselves to the effects of the first order, the good will have an incontestable preponderance over the evil. Were the offence considered only under this point of view, it would not be easy to assign any good reasons to justify the rigour of the laws. Everything depends upon the evil of the second order; it is this which gives to such actions the character of crime, and which makes punishment necessary.” - Jeremy Bentham
The solution to this conundrum is not to narrowly pursue things that are good on a first-order basis, but to pause and think very fucking carefully before doing anything rash. If the second-order effects are unclear, but the decision might have huge repercussions, please for the love of god think extra hard. And if your plan for doing good involves doing bad, tell a trusted friend who can talk some sense into you. This sort of foresight was neglected during the FTX saga, and the world is worse off as a result.
What’s the lesson here?
Perhaps utilitarians should consider adding a little bit of — may Bentham forgive me — deontology to the mix. Deontological guardrails can be useful for ensuring that motivated reasoning, poor epistemics, and the other quirks of our silly little brains don’t lead us off a cliff on our journey to Utilityville. This whole saga has a few important lessons to teach: the world is more complicated than it seems; doing advanced utilitarian math is enticingly dangerous; and we need to think hard about our guardrails, lest we become foolish utilitarians ourselves.
Maybe one way to slice it is to model the chain of effects as an infinite series and ask whether it converges to a finite number or diverges to infinity. If it diverges, the second-order effects overwhelm the primary effect; if it converges, they don’t.
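As a sketch (a toy model, assuming each successive order of ripple scales the previous one by a constant factor $r$, which real life rarely honours): let the first-order effect be $E_1$ and set $E_n = E_1 r^{\,n-1}$. The total effect is then the geometric series

$$\sum_{n=1}^{\infty} E_1 r^{\,n-1} = \frac{E_1}{1-r}, \qquad |r| < 1.$$

If the ripples dampen ($|r| < 1$), the sum converges and the first-order term dominates; if they amplify ($r \ge 1$), the series diverges and the higher-order effects swamp everything. The catch, of course, is that nobody hands you $r$ in advance.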
Slightly funny to think that this whole disaster might never have happened if SBF had just *read the Sequences*.