6 Comments
Sep 21, 2022 · edited Sep 21, 2022 · Liked by Julian

Thanks for writing this. I also cringe when people argue that EA actually isn't focused on longtermism now, when it's pretty clear that ~all of the high-status EAs, as well as the majority of highly engaged EAs, have moved toward it.

I might add an analogy to arguments around veganism. Anti-vegans often level arguments like "your crops are picked by humans in bad conditions, so why not fix the human suffering first before worrying about animals?" The sensible response for a full-time vegan activist isn't to argue that many other vegans work on improving human lives, but to argue that they have limited resources which they think are best directed, on the margin, toward reducing animal suffering. That response doesn't diminish the importance of human suffering.

Sep 21, 2022 · edited Sep 21, 2022 · Liked by Julian

I enjoyed this post and think it makes an important point. A quibble:

"Some EAs don’t care about longtermism whatsoever, focusing instead on legible ways to improve existing people’s lives, like providing vitamin A supplementation to children living in poverty. Others think global poverty is a “rounding error” and that the most important cause is protecting future generations. Others want animals to live happier lives. Some want all of these things."

This paragraph seemed a bit off, for two reasons. First, not all of these sentences are about what outcomes people do or do not want; they're about what people think should be prioritized on the margin, and in that sense it's definitionally impossible to want "all of these things" (especially the rounding-error one in combination with the other two). Second, and relatedly, if we instead construe the preceding sentences as being not about prioritization but about outcomes - lower existential risk, less poverty, more animal welfare - then probably most EAs (and many cosmopolitan-minded people) do want all of those things, not just "some" of them.

(ETA: you make all of this clearer later on in the dialog, when the discussant distinguishes between "not caring" about poverty and choosing to prioritize something else)

Author · Sep 21, 2022 · edited Sep 21, 2022

(FWIW, I updated the "rounding error" bit to be more diplomatic.)

Thanks for pointing this out. I agree with what you're saying and I think I could've been clearer.

Maybe what I was trying to say is something like: "There are large disagreements amongst EAs about what the most important priorities are, where some people think priority A is the most important and others think priority B is. Some EAs are largely uncertain and instead bucket them all together as important. But the key takeaway is that there is a lot of disagreement about prioritisation."


I think the argument is that dollars and time are fungible, so a dollar or career spent on making the future of humanity better is a dollar not spent on improving the lives of those 700m people. This is an explicit tradeoff, and people will rightly argue that it means you're willing to let suffering happen today to prevent it tomorrow.

It's possible to defend this by arguing that it helps prevent things like existential risks - though I have quibbles, that's a logically valid argument. But there is an equilibrium to find somewhere between "humanity dies out in a plague that we could've stopped" and "saving one life 100 years from now is morally equal to saving one life today".

Author

Yeah, agreed.

What I would prefer is if longtermist EAs just bit the bullet and said "yeah there are tradeoffs to everything, but here is why I'm making that tradeoff" and then argued at the object-level rather than saying something along the lines of "oh yeah but we EAs are working on improving the lives of people in poverty silly" as if to obfuscate the (very real) opportunity costs of focusing on longtermism.


Yes, that would definitely clarify things for all parties to this argument.
