The Effective Altruism movement is deeply flawed and probably not salvageable
Meet the new boss, same as the old boss
‘Effective Altruism’ is two things: a general philosophical approach to charity, and a specific set of organizations and projects which claim to be based on those principles. Emphasis on the word ‘claim’. If you had an organization dedicated to the creation of round skyscrapers called ‘The Round Skyscraper Institute’, and it had early on made a round skyscraper but thereafter had only made square skyscrapers, you might question whether its name carried any weight. If the org published long screeds about the effectiveness of making square skyscrapers because they generated massive profits which would in the future allow it to create even more round skyscrapers than it could by simply starting on round skyscrapers today, that would be an argument, but you’d probably be justifiably skeptical.
Okay, that analogy might be a bit on the nose, but let’s get into specifics, in particular how existing orgs could change for the better (whether they’re interested is another question). The most obvious problem is that the founder and leader of the Effective Altruism movement mentored, fostered, and provided cover for one of the biggest con artists in history, someone who had deep ties to the whole movement in many other ways. Arguably this has so permanently and completely damaged the brand that it should be abandoned in favor of something else, regardless of what else is done. Unfortunately the movement hasn’t even had a coherent reckoning of why and how this happened and what it’s going to do differently in the future, so maybe my talking about specifics of what could be done better is a bit pointless, but I’m going to do it anyway.
The big meta problem with traditional charity work is that it tends to be run opaquely by people who are overly concerned with their own dubious pet causes and are often con artists. The philosophical foundation of EA is to acknowledge this and try to do things differently. The big problem which has come up in the actual EA movement is that it’s run opaquely by people who are overly concerned with their own dubious pet causes, and it has included some very bad con artists. Obviously the root problem in traditional charity has not been addressed here. That meta problem aside, let me drill into more specifics.
I’ve already mentioned the con artist problem, and I don’t see it getting any better until a full-throated admission of having screwed up badly is made. While earning to give sounds great in principle, serious thinking needs to be done about whether a profession which pays very well gets its money by unjustly taking it from others rather than by being fabulously productive and getting rewarded commensurately. EA critiques do a lot of measuring of the positive benefits of various professions but pretend that negative effects simply can’t happen. Obviously this has already become a problem.
The pet causes which EA people are over-torqued on are AI safety and veganism. Don’t get me wrong, I think AI safety and animal welfare are both real causes worth worrying about, but the way they’re handled in the EA movement is a problem. AI safety is being run by a bunch of doomsayers whose whole framework of thinking about it is fundamentally wrong and counterproductive. That should really be the subject of its own post, though it’s a bit weird that I’m writing philosophical critiques justifying things which almost everybody already knows. For animal welfare the problem is that while reasonably apples-to-apples comparisons can be made between different animal welfare programs, and apples-to-apples comparisons can be made between non-animal-welfare programs, there must be some conversion ratio between the two. What that constant should be has been argued ad nauseam; many extremists have set truly outrageous values for it and won’t budge, and continuing to discuss it doesn’t help. The practical thing to do would be to move animal welfare to its own conferences and banish discussion of it from more general charity work, to get rid of the repetitive, unproductive arguments.
Finally we get to the general opaqueness problem. One fair critique is that when EA orgs evaluate charities, even big ones which operate in poor countries, they seem to dismiss out of hand charities started and run by locals. That’s unfortunately consistent with the patronizing attitude which led to the aforementioned massive scandal. Beyond that there’s the general problem of charity selection and effectiveness. There’s a general principle that whenever a charity appears to be very effective, there’s a strong correlation with the analysis being dubious and the return on investment diminishing rapidly. This is because even the parts of the world which don’t use the term ‘Effective Altruism’ do care about effectiveness and aren’t totally awful at it. The result is that EA groups identify particularly effective charities, but those tend to be fully funded, so whatever money outsiders give to them gets offset by some previous donor putting their money somewhere else, and what that something else might be is of course completely opaque. Again, this is a dubious practice which has long been common in traditional charities. In particular I’d like to point out that one of the charitable activities with rapidly diminishing ROI is the meta one of evaluating other charities, and if you find yourself engaging in spam to raise money for EA causes you should stop and go do something else with your life.
I found writing this post a bit depressing. I might have talked myself into believing that there isn’t much of anything salvageable in the existing EA movement and its continued existence is getting in the way of something better forming in the future.
I agree that the exchange rate between animal-welfare causes and human-related causes should not be presented as equivalent, but I also truly believe it’s getting easier than ever to make less harmful food choices. The trade-off between the level of effort needed to address the problem (basically, educating and increasing options) and the magnitude of the issue is pretty favorable, at least if you value non-human animals and minimizing their experienced suffering. Maybe it’s best solved by the market, though, as alternatives become more cost-effective to produce.
I can see that vegan talking points are uncomfortable, but it can be frustrating to watch all your close friends (some of whom intensely love their pet dogs) eat tortured cows, pigs, or chickens for almost every meal, and then try to dismiss the horrors of factory farm conditions. Even though human rights take priority, that’s still alarming enough that it’s not worth completely banishing the topic from altruistic circles (because those who hold these views may not want to define altruism purely in terms of humans and not other animals).
Other thoughts: if Effective Altruism was supposed to be an implementable (as in, money-in-the-right-places) version of regular altruism, but doesn’t work because reality has all kinds of disincentives, the general problem of finding good causes/charities should still be worth solving, right? Like, what set of provably true statements demonstrates that a cause is worth investing in to decrease something Specific&Bad from happening, and who could be trusted to identify those criteria?
It sounds like EA has become a fancy label in the same way as ‘dolphin-safe’ tuna labelling: if you see the label, it’s likely that some dolphins have been harmed.
Effective Altruism = Family. Maybe we need to extend that concept. On the surface a family is a closed organism defined by heritage (in various strict forms), but I see a future where the concept of family takes its values and primacy and puts them into a form of organization. The Mafia is a good but flawed version of this. I’m going a little off on a tangent, but so many people in the West get scammed nowadays with exactly that angle: victims trust people they don’t know and get taken advantage of because they seem to long for some kind of family. There seems to be a deep desire there, so maybe it could be leveraged. Just my incoherent thoughts about what the kind of future you mention at the end could look like.