6 Comments
Sep 21, 2022 · edited Sep 21, 2022 · Liked by Julian

Thanks for writing this. I also cringe when people argue that EA actually isn't focused on longtermism now, when it's pretty clear that nearly all of the high-status EAs, as well as the majority of highly engaged EAs, have moved toward it.

I might add an analogy to arguments around veganism. Anti-vegans often level arguments like "your crops are picked by humans in bad conditions, so why not fix the human suffering first before worrying about animals?" The sensible response for a full-time vegan activist isn't to argue that many other vegans work on improving human lives, but to argue that they have limited resources, which they think are best directed on the margin toward reducing animal suffering. That doesn't diminish the importance of human suffering.

Sep 21, 2022 · edited Sep 21, 2022 · Liked by Julian

I enjoyed this post and think it makes an important point. A quibble:

"Some EAs don’t care about longtermism whatsoever, focusing instead on legible ways to improve existing people’s lives, like providing vitamin A supplementation to children living in poverty. Others think global poverty is a “rounding error” and that the most important cause is protecting future generations. Others want animals to live happier lives. Some want all of these things."

This paragraph seemed a bit off, for two reasons. First, not all of these sentences are about what outcomes people do or do not want. They're about what people think should be prioritized on the margin, and so in that sense it's definitionally impossible to want "all of these things" (especially the rounding error one in combination with the other two). Secondly, and relatedly, if we instead construe the preceding sentences as being not about prioritization but as about outcomes - lower existential risk, less poverty, more animal welfare - then probably most EAs (and many cosmopolitan-minded people) do want all of those things, not just "some" of them.

(ETA: you make all of this clearer later on in the dialog, when the discussant distinguishes between "not caring" about poverty and choosing to prioritize something else)


I think the argument is that dollars and time are fungible, so a dollar or career spent on making the future of humanity better is a dollar not spent on improving the lives of those 700m people. This is an explicit tradeoff, and people will rightly argue that accepting it means you're willing to let suffering happen today to prevent it tomorrow.

It's possible to defend this tradeoff by arguing for preventing things like existential risks; I have quibbles, but that's a logically valid argument. Still, there is an equilibrium to be found somewhere between "humanity dies out in a plague that we could've stopped" and "saving one life 100 years from now is morally equal to saving one life today".
