Not working on AI because of safety concerns is a bad idea
What if someone gave that alien $20 and AI is the reward? And what if AI ends up becoming the way we can communicate with that higher power? I'm really disappointed you didn't use XCH in your example though, lol.
It would be a story worth $20 to tell
Bram, the issue with comparing AI risk to Pascal's Mugging - or Pascal's Wager, for that matter - is that those scenarios compensate for tiny (sub-1%) probabilities by positing benefits or harms large enough to outweigh them in a utilitarian calculus.
But there is widespread belief, documented in many surveys, that the chance of AI-driven extinction in the next few decades is 5% or more. At that probability, no such outweighing is necessary, even though very large benefits and harms also happen to be under discussion.
For the exact same reason, it would be wrong to call concern about nuclear annihilation a Pascal's Mugging or a Pascal's Wager.
I hope you'll retract your claim.