Pascal's Mugging and AI Safety
Not working on AI because of safety concerns is a bad idea
Let’s say someone comes to you claiming to be an alien from another planet whose ship has been stranded. They need $20 to repair their ship, after which they’ll go home and return with $1 bazillion worth of resources, and you will be credited with bringing about utopia for the whole human race, all for just $20 today. Should you give them the $20?
If you’re a very sophisticated and intelligent philosopher you might run the calculation: the downside of giving them the $20 is exactly $20, while the upside is the chance that they’re telling the truth, which is some very small value, times $1 bazillion, which still multiplies out to $1 million, so you should give it to them. While this logic is internally consistent, it seems kind of… stupid.
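The philosopher’s expected-value arithmetic can be sketched in a few lines. The specific probability and payoff figures below are illustrative assumptions (the post’s “$1 bazillion” is deliberately fictional), chosen so a very small probability times a huge payoff still works out to roughly $1 million:

```python
# Illustrative expected-value calculation behind Pascal's Mugging.
# All numbers are assumptions for illustration only.
cost = 20.0      # the $20 the "alien" asks for
payoff = 1e12    # a stand-in for "$1 bazillion"
p_truth = 1e-6   # some "very small" chance they're telling the truth

# Expected gain from paying: tiny chance of the huge payoff,
# near-certain loss of the $20.
expected_gain = p_truth * payoff - (1 - p_truth) * cost
print(f"${expected_gain:,.2f}")  # ≈ $999,980 — the tiny probability
                                 # times the huge payoff swamps the $20
```

The point of the essay is that this calculation, while internally consistent, produces an obviously exploitable decision rule.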
Let’s consider what would happen if there were an entire planet full of ridiculously credulous people who would fall for that sort of scheme. Inevitably the productivity of the planet as a whole would collapse to almost nothing as all resources got funneled into paying con artists claiming to be aliens. Given that, an appropriate response to someone attempting Pascal’s Mugging would be: ‘You make a very compelling pitch, and I’m prepared in principle to immediately give vast resources to any wayward alien in need of help. But I’m very concerned that if I misallocate resources by giving them to a con artist now, I’ll be unable to help a real alien who shows up in the future. And despite being eloquent, you outwardly appear to be a human being. So I’m afraid that unless you can provide some actual evidence that you’re an alien, I’ll have to demur.’
Now let’s talk about AI safety. There’s a currently very prominent form of AI doomsaying making two claims:
AI will inevitably bring about armageddon
All that any work on AI can do is hasten the armageddon.
They then go on to conclude that no work on AI should be done. Thus far they haven’t full-throatedly called for banning AI research, and haven’t at all called for assassinating AI researchers, but the armageddon hasn’t happened yet and there’s still time.
(You might object that I started talking about Pascal’s Mugging when AI doomsaying isn’t asking for resources. Unfortunately that isn’t true. There have been multiple iterations of The Pascal Institute for AI Safety, which make a big deal about working on AI safety without actually building AI. The people behind these things act very important but have accomplished diddly.)
The problem with both tenets of AI doomsaying is that they’re rank speculation, and just as plausible counterfactuals can be made in the opposite direction. Maybe AI will bring about utopia and we should make it happen as soon as possible. Maybe safety-focused AI research can put off the inevitable armageddon by decades or centuries and we should work on it as much as possible.
In practice many of the researchers concerned with AI safety have voted with their feet and are working on further AI development with a strong focus on safety. That helps both with the many beneficial things AI is doing today and with the many less-than-armageddon-level safety and ethics issues facing it. This is all a good thing, and it’s reasonable to extrapolate that armageddon-level AI issues are best handled with the same approaches that work for less-than-armageddon-level ones.