“The philosophers have only interpreted the world in various ways,” said Marx, adding: “The point, however, is to change it.” By this standard, Peter Singer is almost certainly the most influential philosopher alive and his 1972 paper “Famine, Affluence, and Morality” is the most influential philosophy paper ever written.
Influence in philosophy is normally measured in hundreds of journal citations. The influence of Singer’s paper can be measured in billions of dollars. “Famine, Affluence, and Morality” is credited with inspiring the Effective Altruism movement, which started taking shape about fifteen years ago and has since gone global. The Gates Foundation and other large philanthropic organizations have joined the bandwagon, bringing total pledges to nearly $50 billion.
The Effective Altruism movement is hardly free from controversy. One of its most prominent supporters has been Sam Bankman-Fried, the disgraced head of the cryptocurrency exchange FTX. This blog post will say a bit about what Effective Altruism is and ask what we should make of it in the light of Bankman-Fried’s downfall.
Singer’s paper centres on a simple and vivid thought-experiment. Suppose you were walking by a shallow pond and saw a child drowning. To jump in and save the child would mean ruining the nice new clothes you’re wearing. But it would be monstrous not to. You wouldn’t be going above and beyond the call of duty in sacrificing your clothes for the sake of the child. It would be morally wrong not to prioritize the life of the child over your clothes.
The world is full of drowning children, Singer says. He was writing at the time of a devastating famine in what is now Bangladesh. But it’s painfully easy to extrapolate to the present day. Every year, millions of people die preventable deaths due to a lack of access to food, medicine, and other necessities. If it’s wrong not to make a sacrifice for the sake of the drowning child, it’s wrong not to make a sacrifice for those who just happen to be farther away.
One premise of this argument is that distance is morally irrelevant. Why should the drowning child ten feet from you matter more than the child with intestinal worms half a world away? Especially when a few clicks online can send deworming medicine to that child, distance doesn’t seem to matter.
Typically we think of charitable donations as optional generosity. It’s nice of you to give precisely because giving is supererogatory—it’s doing more than you’re strictly required to do. But if Singer’s argument is sound, charitable giving isn’t supererogatory but morally obligatory. Not to give is as bad as walking past the drowning child and letting her die.
(Singer’s argument is one of the “ten dangerous ideas” in my self-guided introduction to philosophy course. I explore these issues in more detail in the course.)
Taking Singer’s argument seriously can effect a wholesale change in your life. How can I spend my money on that fancy new iPhone, that smart jacket in the shop window, that pricey restaurant meal, when every dollar I spend on these things is a dollar I’m not spending to help those in desperate need?
One distinguishing mark of philosophers is that they’re averse to ducking a cogent argument. If they can’t find satisfactory reasons for resisting an argument’s conclusion, they feel compelled to accept it, and whatever follows from it.
Toby Ord found the logic of Singer’s reasoning irresistible. As a graduate student at Oxford University, he resolved to budget for himself an income that would support a modest lifestyle and to donate everything above that limit to charity. In 2009, he founded an organization called Giving What We Can that encouraged members to pledge 10% of their earnings to charitable causes.
Ord was joined in this venture by Will MacAskill (then Crouch), who had recently completed his undergraduate studies at Oxford. MacAskill also helped set up 80,000 Hours, an organization that dispenses career advice to people wanting to make the most impact (80,000 hours being the average number of hours in a person’s working life), and the Centre for Effective Altruism, which oversaw these ventures.
I was a graduate student at Oxford myself when all this was getting underway. At first, Giving What We Can seemed like another student club, where small groups of keen young people come together over a shared passion. But this one caught on—and spread. The technical approach to improving the world resonated with the Silicon Valley set and that’s when the real money started flowing in.
Both words in the name “Effective Altruism” warrant some unpacking. The altruism part is fairly straightforward: you should do as much good as you can. But the “good” should be understood in an objective and impersonal way. Remember the implication of Singer’s drowning-child thought-experiment: it doesn’t matter whether the good you do benefits your best friend or a stranger you never meet. The goal is simply to do the most good you can.
The effective part of Effective Altruism is where the technical rigour takes hold. If you want to do the most good you can, it’s not enough just to give away as much money as you can. You have to spend it in a way that maximizes its impact. The signature feature of Effective Altruism is its use of rigorous methods to measure the impact of different kinds of giving.
Let me sketch out the technical thinking. To talk about “saving a life” sounds worthy but it’s vague. One reason we mourn the deaths of children is that they had so much more potential life ahead of them. Saving a child’s life seems to count for more than saving the life of an octogenarian. And let’s face the grim fact: you only ever “save” a life temporarily. Everyone dies eventually.
So rather than “saving lives,” maybe we should talk about “extending lives.” But even that doesn’t capture the full scope of potential impacts. Many debilitating illnesses, such as intestinal worms, don’t normally kill the people they afflict. But they do significantly reduce the quality of their lives. Interventions like deworming treatments don’t stop people from dying but they do help a lot.
One popular metric among effective altruists is the quality-adjusted life year, or QALY. One QALY is one year of life lived in full health. A surgical intervention that keeps someone from dying and permits them to live another twenty years in full health would be worth 20 QALYs. An eye operation that restores sight to someone who can then live for forty years in full health is also worth 20 QALYs if we suppose that those forty years would have been half as good without eyesight. (Effective altruists use survey methods to determine just how much different health issues impact quality of life.)
The idea, then, is to get the most QALY bang for your buck. Some of the results are surprising. Training guide dogs for the blind sounds like a worthy cause, but it’s on the Effective Altruism blacklist. Training a guide dog is costly, and the dog’s working life is limited, so it takes an investment of something like $50,000 to gain one QALY from guide dogs. By contrast, a deworming treatment for a child in sub-Saharan Africa will buy a QALY for $5. Forget about the guide dogs, say the effective altruists, and invest in deworming.
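The arithmetic behind these comparisons is simple enough to sketch in a few lines of code. The figures below are the illustrative examples from this post (the eye operation, the $50,000 guide dog, the $5 deworming treatment), not real charity-evaluation data:

```python
# A minimal sketch of QALY accounting, using this post's illustrative figures.

def qalys_gained(years, quality_with, quality_without):
    """QALYs gained by an intervention: years of life affected, times the
    improvement in quality of life (where 1.0 means full health)."""
    return years * (quality_with - quality_without)

def cost_per_qaly(total_cost, qalys):
    """Cost-effectiveness: dollars spent per QALY gained."""
    return total_cost / qalys

# Eye operation: 40 years at full health instead of 40 years at half quality.
eye_surgery = qalys_gained(40, 1.0, 0.5)   # 20 QALYs

# Comparing interventions by cost per QALY:
guide_dog = cost_per_qaly(50_000, 1)       # $50,000 per QALY
deworming = cost_per_qaly(5, 1)            # $5 per QALY

print(eye_surgery, guide_dog, deworming)
```

On these (stipulated) numbers, a dollar spent on deworming buys ten thousand times the benefit of a dollar spent on guide dogs, which is why the metric produces such stark rankings.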
So far, so virtuous. But things get more counter-intuitive from here. If your goal is to do as much good as possible, spending your resources effectively is only one part of the equation. Another is how much you have to give. You can do a lot of good as a doctor treating malaria in low-income countries. But think how many more malaria treatments you could supply if you stayed home, got rich, and earmarked those riches for effective charities.
The idea of “earning to give” first got hashed out by the philosopher Peter Unger, and it’s the driving notion behind 80,000 Hours. If you’re clever and have a head for money, why not join a hedge fund, get filthy rich, and then lavish your riches on effective charities?
If you recognize Sam Bankman-Fried in this description, you’re on the mark. SBF (as he’s commonly known) was until recently a poster child for earning to give. He credited conversations with MacAskill with pushing him into the world of high finance so as to do as much good as possible. MacAskill was closely involved in the FTX Future Fund, and the former CEO of the FTX Foundation, Nick Beckstead, is another committed proponent of Effective Altruism.
The collapse of FTX has inevitably tarnished the reputation of Effective Altruism. But I think we need to be careful to distinguish what this scandal reveals about Effective Altruism and what it doesn’t. If the issue is that an unscrupulous person associated himself with a movement that purports to do a lot of good, I invite you to show me a positive movement that hasn’t attracted its share of shysters. MacAskill, Beckstead, and others may have been naïve to put their faith in Bankman-Fried, but SBF’s disgrace needn’t discredit the movement as a whole.
There’s a subtler way in which the FTX fiasco puts a spotlight on what I think is a valid criticism of Effective Altruism. Our word “ethics” derives from the Greek ēthos, which means “character.” Greek ethics is about cultivating virtues of character—qualities like courage, temperance, and justice—in service of living a good life.
In the utilitarian tradition from which Effective Altruism derives, character doesn’t enter into it. What matters isn’t the goodness of people but the goodness of outcomes. A surly jerk who donates thousands to effective charities has done more good than a noble sweetheart who donates hundreds. Utilitarians care not about how good you are but about how much good is done.
Because their focus is on the outcomes of our choices and not on the characters of the people making the choices, utilitarian thinkers risk being inattentive to the ways that choice and character inform one another. My character is shaped by the people around me and the prevailing norms of my culture. If I throw myself into a line of work in which profit maximization overrides all other interests, I’m unlikely to be unchanged by the experience.
There’s a danger of self-alienation in utilitarian moral philosophy. I become nothing more than a tool in the grand project of global utility maximization. Seen in a sympathetic light, this outlook is attractively selfless. If I want to do good, it shouldn’t be about me. But the technical rigour in these calculations of utility risks turning people into abstractions—both me and the people I intend to help. Ends and means can get jumbled and I can lose sight of how my single-minded pursuit of noble-seeming goals might be corrupting my character. It takes deliberate effort to keep the human context in view when you’re dealing in abstractions. Evidently some people fail to make that effort.
The FTX scandal is just one reason people might feel lukewarm about Effective Altruism. This is a movement shaped by clever, mostly young, mostly white, mostly male people who mostly come from privileged backgrounds. If you don’t share these characteristics, you’re liable to find the whole thing a bit off-putting, the more so since its proponents claim their cleverness does far more good than your good-heartedness.
Another reason to shy away from Effective Altruism is its demandingness. Every creature comfort you enjoy is money that could have been spent on those in greater need. Singer’s “Famine, Affluence, and Morality” can read like the ultimate guilt trip.
But don’t let these qualms let you off the hook. You might not recognize yourself in the technically minded proponents of Effective Altruism; you might feel uneasy about its implications (I haven’t even touched on longtermism—that would have to be the topic of another post); you might even feel a touch of Schadenfreude in the wake of SBF’s downfall. But none of that takes away from the fact that deworming treatments, anti-malarial bed nets, and the like unquestionably do a lot of good. You don’t have to be a fully paid-up member of the Effective Altruist movement to recognize this—and to recognize that you could probably be doing more to help others than you presently are.