The truth about Elon Musk, Sam Bankman-Fried, and effective altruism – Fast Company

Posted: September 16, 2022 at 2:43 am

If you happen to be reading this a million years from now, maybe a movement called effective altruism really took off. Perhaps it protected the lives of the 80 trillion human beings between our generation and yours, who managed to stave off ravaging poverty, man-made pathogens, and nuclear war.

More likely, you're reading this in 2022. If so, chances are that eight years ago, you or some close friends dumped an entire bucket of ice water over your head and shared the footage on Facebook, with a $10 donation to the ALS Association. The research group reported that, in total, the 2014 Ice Bucket Challenge raised $115 million for ALS, the deadly progressive neurodegenerative disease also called Lou Gehrig's disease. That sounds like a lot of people doing a lot of good.

But if you're an effective altruist, you would probably say that it was "funding cannibalism." It was ineffective giving because it pumped millions into a cause that isn't a high priority: ALS already has sufficient attention, and the research required for a cure will be slow and expensive, essentially depriving more worthy diseases of donations. At the time, the founder of effective altruism (EA), Scottish philosopher William MacAskill, wrote: "If someone donates $100 to the ALS Association, he or she will likely donate less to other charities." So, he said, the Ice Bucket Challenge did more harm than good.

This kind of rational pragmatism is a central tenet of EA. The phrase itself was coined in 2011, and the movement, which lies at the junction of philosophy and philanthropy, burgeoned in the halls of the University of Oxford and has now permeated the world of the ultrarich. By leaving the safe collegial confines of the academy, however, EA has been able to grow, attracting a broader range of adherents who often bring their own more elaborate ideas. Even when those ideas are promoted by the founding members, they're easily transformed into further fantasy by acolytes far detached from the movement's core creeds, but wealthy enough to push them.

The grounds of the philosophy are as follows: We should give to the charities that alleviate the world's biggest problems, and do so with the most effective use of dollars. That premise seems hard to dispute, but there's more. To help achieve it, the movement dictates a narrow set of valuable causes, such as alleviating global poverty, investing in biomedical research, and ending factory farming, rigorously selected using empirical evidence to compute cost-effectiveness. Natural disaster relief doesn't pass the test because it's oversubscribed. Donating to the treatment of intestinal worms may be more advisable than to tuberculosis, for example, because even though the parasitic disease causes relatively milder illness, it's more neglected and more easily remediable at scale.

Sam Bankman-Fried [Photo: Lam Yik/Bloomberg/Getty Images]

Now, EA is emerging from obscurity, delving into political spheres, and unfastening the wallets of billionaires. When I talk to William MacAskill on Zoom, he estimates the total of inner-circle EAs at 10,000, up from 100 in 2009. Included in that 10,000 is cryptocurrency-exchange founder Sam Bankman-Fried, who rubs shoulders with Tom Brady and Gisele Bündchen, having thrust them into a $20 million Super Bowl ad this year for his company, FTX. And perhaps there's now a new endorser of the movement: Elon Musk, the fifth most-followed person on Twitter (ranking between Rihanna and Ronaldo), who tweeted his support for MacAskill's newest book. This has all formed heavyweight momentum for the rollout of the title, What We Owe the Future, which would be the envy of any product launch.

Much of the newfound enchantment with EA springs from a shake-up of the doctrine in favor of a philosophical concept called longtermism. Between MacAskill's first book, 2015's Doing Good Better, and this year's, we've suffered a nightmarish pandemic, climate change has spiraled, and tech has produced disquieting side effects. Responding to those new threats, EAs now argue that it's essential to protect not only our population, but also hundreds of coming generations, whose well-being is just as important as ours. That requires even more methodical consideration: calculating not only the cost-effectiveness of philanthropic strategies, but their estimated value for millions of humans, millions of years into the future.

Now embraced, and financed, by some of the world's richest and most powerful, EA has gone from a simple argument for better allocation of charitable dollars to part of elite discussions about space colonies and digital human enhancement. That could mean not just eschewing things like the Ice Bucket Challenge, but also constituting a free pass for the wealthy class to abrogate responsibility for addressing today's societal ills while cloaking themselves in a presumably enlightened outlook.

"Don't Follow Your Passion," MacAskill titles a chapter in Doing Good Better. To wit, fledgling EAs commit to embarking on career paths where they're either working for impactful nonprofits or "earning to give": working in well-paid industries, like finance or software engineering, that allow them the luxury of setting aside heaps of cash for donations, typically at least 10% of their total earnings. They'd say that anyone reading this should be doing the same, because they're privileged enough to. MacAskill, 35, who says he lives on 26,000 pounds ($31,000) after donating half of his income to charity, is still in the top 3% of the world's richest, even with his two housemates, lack of a car, and a leaky shower.

The philanthropic causes to which EAs contribute are said to be ones that are relatively neglected, easily solvable, and affect enough people in the world to be impactful if solved. Global poverty has long ranked near the top of lists; other priorities include climate action, criminal justice reform, and animal welfare. To not attend to animals would be to practice speciesism: All creatures are sentient beings capable of pain, and widespread factory farming subjects animals to a lifetime of extreme suffering.

To decide how to tackle those issues, they analyze the causes' impacts with randomized controlled trials. They determine cost-effectiveness using quality-adjusted life years, or QALYs, a numerical measure of the relationship between the predicted number of years a person has left to live and the quality of those years. This should help givers weigh the value of saving a life versus improving the quality of one: Would it be more effective to prevent 10 people from suffering from AIDS or 100 from severe arthritis? EA-aligned organizations, such as GiveWell, prescribe the best routes for charitable giving. For alleviating poverty, the recommended paths are funding parasitic-deworming medicines and bed nets, which respectively cure intestinal parasites and protect against malaria-bearing mosquitos, and making direct cash transfers to people in developing countries via charities like GiveDirectly.
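The QALY comparison above can be made concrete with a small sketch. Everything here is invented for illustration — the function, the life expectancies, the quality weights, and the case counts are hypothetical assumptions, not figures from GiveWell or from the article:

```python
# Hypothetical QALY arithmetic: a QALY weights years of remaining life
# by a quality factor between 0 (death) and 1 (full health).
# All numbers below are assumptions for illustration only.

def qalys_gained(people, years_left, quality_before, quality_after):
    """Total QALYs an intervention adds across a group: each person's
    remaining years, multiplied by the improvement in quality weight."""
    return people * years_left * (quality_after - quality_before)

# Preventing 10 people from suffering from AIDS (a large per-person gain)...
aids = qalys_gained(people=10, years_left=30,
                    quality_before=0.5, quality_after=1.0)

# ...versus preventing severe arthritis in 100 people (a smaller gain each).
arthritis = qalys_gained(people=100, years_left=30,
                         quality_before=0.75, quality_after=0.875)

print(aids)       # 150.0 QALYs
print(arthritis)  # 375.0 QALYs: under these assumed weights, the
                  # "milder" disease wins on aggregate impact
```

Dividing each figure by the intervention's cost would then give the QALYs-per-dollar ratio that EA-aligned evaluators use to rank charities against one another.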

This validation of prioritizing causes is compellingly novel, says Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute. Throughout American philanthropic history, "there's been ultimately a deference and nonjudgmental attitude toward the ways that people give," he says, fueled by individuals' identities, priorities, and prerogatives. EA has opened a space for the community to scrutinize the often self-indulgent philanthropic choices of the wealthy (and to a much more scrupulous extent than past one-off instances of criticism, such as when hotelier Leona Helmsley left $12 million to her dog, a Maltese named Trouble). Previously, people wouldn't want to "[push] back on gifts to Harvard and Stanford and Princeton" as a waste of money, he says, despite their relative ineffectiveness.

But a common concern is that the movement's rational assessment of causes removes emotion from giving: that it has an unfeeling, robotic, utilitarian calculus, Soskis says. (EA is explicitly rooted in utilitarianism, the British philosophical movement of the 18th and 19th centuries that held that actions are right if they are useful or benefit a majority.) But emotion may be the most important factor in deciding where people give: When Michael Bloomberg gives billions to his alma mater, Johns Hopkins, it may not be the most effective use of the funds, but he feels a genuine sense of connection to the school. And the Ice Bucket Challenge had a shared sense of community among friends and family; it's an example of what Jennifer Rubenstein, an associate professor of politics at the University of Virginia and an EA critic, calls intimate donating, like how she'd feel pleased to donate to her niece's dance-a-thon for cancer research. But cancer may not pass the prioritization test because it's not neglected enough.

"I think the emotion is still there," says MacAskill on the Zoom call. "It's just channeled particularly in one way rather than another." In practice, there needs to be some detachment in order to do the most good. Take ER doctors: "How much emotion are they feeling day to day?" he asks. "It might be a fair amount, but if someone dies under their watch, it's not the same amount of emotion as if a friend or family member of theirs dies. If you were intensely emotionally resonating with every single person you were interacting with, you just wouldn't be able to do your job."

MacAskill is clearly not devoid of emotion; he opens up about the eureka moments that sparked the movement, one of which was his visceral reaction as a teenager to learning about the broad neglect of the global AIDS crisis. "I was just like, that's fucked up," he says. "I cannot believe that people aren't talking about this." But rather than emotion, he speaks in terms of ethics. His work stems from a deep moral desire to make the world better. (In the intervening years, AIDS has become a more prominent global cause, so EAs tend to focus more on malaria and parasitic diseases, though some advocate for funding AIDS interventions.)

MacAskill wants to build a collective movement that effects large-scale moral change, in the way that abolitionists and suffragettes did. Those movements take time, but he's patient; the first public statement against slavery was released in 1688, but slavery wasn't fully abolished globally for another 300 years (Mauritania was the holdout, until 1981). He envisions, in 100 years, a cultural shift whereby it becomes normal for everyone to consider how they'll make the world a better place. And, naturally, they'll design their plans of action using high-quality evidence and careful reasoning.

The moral underpinnings of EA come from the work of Australian philosopher Peter Singer, specifically his drowning-child analogy from 1972. If you walked by a pond, so it goes, and saw a child submerged, the moral obligation to save them would clearly outweigh the small cost of dirtying your clothes and being late for your obligations. Just as critically, the duty extends to a child in a far-flung place across the globe, whom you could still save at a small cost. That's the rationale for contributing money to relieve world poverty.

But more recently, the understanding of where the drowning child could exist has become more expansive in the eyes of EA thinkers. Some in the movement now advocate that there's no distinction between the spatial and the temporal. Just as we want to help people in other geographic areas, we should be as concerned about people in the future: people who don't exist, and won't for centuries, or millennia, or even millions of years. Homo sapiens' history thus far is minuscule; there could be vastly more humans in the future than have ever lived, so preserving that majority should be the priority. When the child is drowning is just as important as where.

"The effective altruism movement has absolutely evolved," MacAskill says. "I've definitely shifted in a more longtermist direction." Longtermism is rooted in the notion of existential risk, promulgated in 2013 by another Oxford philosopher, Nick Bostrom: it is a more important moral priority to reduce the risks of future extinction than to provide any other global public good. The human race needs to improve its ability to deal with risks to our species' continued existence, so we should generously fund mitigation strategies.

Various extinction scenarios preoccupy EAs: global poverty and climate change, still, but also pandemics (natural and engineered), nuclear war, and potentially the takeover of malevolent artificial intelligence, a worry that Elon Musk expressed long before his more overt championship of EA. "Vastly more risk than North Korea," he tweeted in 2017. (Though EAs would say that stable dictatorships, undemocratic governments that stand firm against the international order, are also a high-importance risk.)

So EA is now in the business of catastrophes. But it's still informed by empiricism; EAs say there's a risk of between 1% and 3% that an engineered pandemic could kill off the entire human race this century, and a 20% risk of a third world war by 2070. Again, the rationale can feel cold. Derek Parfit, a philosophy professor who mentored MacAskill at Oxford, once wrote that there is a much greater difference between a nuclear war that kills 99% of the world's population and one that kills 100%, than between a war that kills 99% and complete peace, because after the 99% war, humanity is able to regather and rebuild civilization. And future people need the resources with which to do that.

Some of those resources may be fossil fuels. They're more tried and tested than renewable sources, MacAskill writes in his book, and solar panels and wind turbines degrade over time. Future people would need a reserve if they had to come back from the brink of a cataclysm, so we shouldn't deplete fossil fuels now. We have 200 billion tonnes of carbon left in surface coal, a stockpile that would be easy to access using technology as simple as a shovel, he writes, and enough to produce the energy we used from 1800 to 1980.

To many critics, these arithmetic predictions for scenarios so far into the future seem absurd; one called them "Pascalian probabilities." EAs unemotionally commit to "shut up and multiply": to enumerate the expected utility of an intervention aeons into the future by multiplying the value of an outcome by the probability that it will happen. Even the population figures of future people are vague and varying. Some say humanity could exist one million years into the future, based on other mammals' survival rates, but because we're more developed, it may be closer to 50 million. Or millions, billions, trillions of years, suggested Nick Beckstead, yet another Oxford alumnus. What matters, Bostrom has said, is not the exact numbers, but the fact that they are huge.
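The "shut up and multiply" rule is just expected value: the value of an outcome times its probability. A minimal sketch of why the arithmetic alarms critics, with every probability and population figure below invented for illustration (they are not the movement's actual estimates):

```python
# Expected utility = value of an outcome x probability it happens.
# All inputs are assumptions chosen to show the shape of the argument.

def expected_value(probability, value):
    """Expected utility of a single uncertain outcome."""
    return probability * value

# A near-certain intervention that saves 100,000 present-day lives...
certain_good = expected_value(probability=1.0, value=100_000)

# ...versus a 0.1% chance of safeguarding 10 trillion future lives.
speculative_good = expected_value(probability=0.001, value=10_000_000_000_000)

# The multiplication favors the speculative bet by orders of magnitude,
# which is why critics invoke "Pascalian probabilities": a tiny chance
# of an astronomical payoff dominates any sure thing.
print(speculative_good > certain_good)  # True
```

The smaller the probability, the larger the population figure needed to keep the product huge — which is why, as Bostrom says, the exact numbers matter less than their sheer scale.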

Evangelizing that future people matter just as much could create an injustice to people who are currently living, including the 1.3 billion in global poverty, says Ted Lechterman, assistant professor of philosophy at IE University in Madrid and previously a research fellow at the Institute for Ethics in AI at Oxford, who's written extensive criticisms of EA. "Those trade-offs with present and near-term concerns . . . are difficult to justify." He appreciates the way the movement challenges common-sense morality, and that it's generally open to debating its ideas, but thinks EAs are overvaluing the future.

They also run the risk of overfunding some far-flung, sci-fi, oddball causes, such as asteroid collisions and robot apocalypse, Lechterman says. Some causes do feel outlandish; the EA forums host animated debates about the importance of reducing insect pain. MacAskill defends those discussions, not because he imagines that saving the ants will become a defining cause, but because "the dismissal of weird moral ideas has a very bad track record," he says, again citing early abolitionists, whose beliefs were peculiar to the majority of their era. Thrashing out insect welfare, he says, helps us mull over morality and apply that thought to other concepts.

Lately, EAs' pocketbooks have become more plentiful, as two tech billionaires have infused the movement with funds. Along with his wife, Cari Tuna, Facebook cofounder Dustin Moskovitz, who's worth a reported $15.7 billion, launched the nonprofit Open Philanthropy, which, a spokesperson told me, committed more than $450 million in grants last year, and $500 million so far this year, to a variety of EA causes, including vaccine development, criminal justice reform, the welfare of carp, tilapia, and shrimp, and adversarial-robustness research.

(One leading cause is growing effective altruism itself, through grants to the Effective Altruism Foundation and to MacAskill's nonprofit 80,000 Hours, named for the timespan an average person works in their lifetime. On the 80,000 Hours website, promoting effective altruism receives five-out-of-five scores on importance and neglectedness, and a four on solvability, totaling a whopping 14/15, among the highest of any cause.)

Sam Bankman-Fried is probably the most prominent example of the EA earning-to-give model: that you can donate the most by working lucrative jobs, a course of action reportedly influenced by MacAskill himself, whom Bankman-Fried met in 2012 as an MIT undergraduate. The CEO of FTX has granted $130 million since February from his Future Fund, which is dedicated to solving longtermist problems. The fund welcomes grant applications from anyone working on projects such as better PPE, advocacy for high-skilled immigration, biological-weapons shelters, dealing with population decline, and the ability to rapidly scale food production in case of nuclear winter.

The donors have plunged the movement into politics. Moskovitz and Tuna donated $20 million to Democrats in 2016, making them the third-largest donors of the cycle. This year, Bankman-Fried bankrolled the Democratic primary campaign of Oregon House candidate Carrick Flynn, who ran on an EA platform; he lost, receiving 19% of the vote. EA has long endorsed political spending, and voting, as routes to better policies that improve the world; MacAskill has been a policy advisor to 10 Downing Street. But $11 million on a failed campaign suggests squandered money, the antithesis of EA's dogma of effective expenditure. "Looking back, I think that was too much," MacAskill says (though he wasn't involved in the spending).

Still, Bankman-Fried has since said he will contribute more than $100 million to the 2024 election, perhaps north of $1 billion if he has to stop Donald Trump from winning again. Speaking on the podcast What's Your Problem, he said: "I would hate to say [a billion is a] hard ceiling, because who knows what's going to happen between now and then." (Fast Company reached out to Bankman-Fried for an interview but did not receive a response; Moskovitz politely declined.) Even Lechterman, the critic, says political spending may be justified in this case, for preventing a horrible candidate from coming to power. He says denying not only Trump but also other recently elected global leaders, by funding opposition candidates, could have saved a dramatic number of lives, while also improving standards of life and reducing social injustices, which are moral improvements in the EA mold.

The substantial involvement of the wealthy has kindled fears that they could start to drive the movement. Soskis, the philanthropy expert (who is partly funded by Open Philanthropy), thinks there's enough insulation in the movement to keep a mega-donor takeover at bay. There are a lot of people like himself who don't label themselves EAs but are involved in the discourse, intrigued by the novel philanthropic ideas, and willing to steer them in the right directions. Their influence, he thinks, is certainly more significant than their numbers would suggest.

Nor is MacAskill overly concerned. His book discusses "value lock-in," the notion that some very niche groups tend to define what the world's values are, for good or evil, and can change the trajectory of the future forever. He runs through the prominent value influencers of the past (Jesus, Confucius, Hitler), concluding, "I really don't think it's the rich that systematically determined the values of the future." (Hitler, though, was thought to have amassed vast wealth, more than $6 billion in today's money.) One of the earliest pivotal abolitionists, he says, was Benjamin Lay, a modest Quaker who lived in a cave. The modern environmentalist movement grew to success from the ground up, all along opposed to corporate interests.

But another billionaire might be the source of some unease. Elon Musk has been effusive about EA, asserting that it aligns closely with his ideology. "Maybe more than anyone else in the world, Elon has a worldview," MacAskill says. "If [longtermism] were wedded to any one particular person, I think it would be a real shame." Musk, who didn't respond to a request for comment, has reportedly not yet donated to causes based on EA, though he's charged Igor Kurganov, a pro poker player and EA follower, with guiding his philanthropy plan. (An interesting six-degrees-of-EA-separation tidbit: Kurganov's partner, Liv Boeree, is a former housemate of MacAskill's.)

MacAskill says that Musk seems to believe in the uncontroversial aspects of EA but also has his own cause priorities, such as starting Martian civilizations. Some reports suggest he's fixated on transhumanism: using technology to enhance our natural human states and transcend biological evolution, to achieve greater intelligence and super longevity. He has discussed the importance of keeping the Earth populated; Musk himself might be playing a first-person role in that procreation program. "There's a worry in general, as ideas get more popular, that they get twisted," MacAskill says.

Elon Musk [Photo: Michael Gonzalez/Getty Images]

In his book, MacAskill does endorse reproduction, he says, to counter an expanding worldview that it's immoral to have kids because of your carbon impact. He stops short of recommending it for everyone, because he respects personal reproductive choices, but he believes failing to reproduce could cause future technological stagnation. Even if the generations ahead don't face a calamitous extinction event, they could go through another Dark Ages, deprived of tech innovation, and an existential brain drain could exacerbate those sluggish eras and collapse society.

But the transhumanism obsession began inside the Oxford halls, particularly in the mind of Nick Bostrom. He has researched genetic enhancement of intelligence via embryo selection, to engineer designer babies with high IQs, which he has acknowledged is reminiscent of eugenics. "Transhumanism goes further, in changing the very substrate of persons from carbon-based biological beings to persons based in silicon computers," wrote philosopher Mark Walker. Bostrom has suggested that if we venture into transhumanism, we could create vastly greater numbers of future people. He is also a fan of space expansion, claiming in his "Astronomical Waste" paper, retweeted by Musk, that we waste 100 trillion human lives for each second that we do not colonize space.

The stagnation concern raises some worry about the fate of the future global poor, initially the very people whom EAs deemed most worthy of our help. Beckstead, who is now CEO of the FTX Foundation, wrote in 2013 that saving a life in a rich country is substantially more important than saving a life in a poor country, because wealthy nations have more potential to innovate. For Lechterman, the critic, the main source of EA disapproval is that the movement has power over the poor, with a heroic, elitist mentality that our global problems are things that smart, wealthy people can solve on their own.

Deciding what's right for poorer countries creates a dangerous power dynamic, he says. Cash transfers may be better than bed nets and deworming drugs because they're less paternalistic, allowing people the autonomy to spend money as they see fit, but they're still incentives for societies to put off addressing the root causes of poverty. He says the movement should prioritize investing in advocacy groups and grassroots movements, to put resources in the hands of the people suffering the most and give them the power to effect long-lasting systemic change.

It can be terribly hubristic for an elite few to make important decisions on the world's behalf, Lechterman says, "even if their motivations are, in fact, pure, and their beliefs are correct." That's paramount now, as billionaires flock to the operation without the same philosophical introspection as the Oxfordian thinkers. That's where things can especially go awry.

