$1,000: Reduce my P(doom|AGI) with technical, mechanistic reasoning explaining why it isn't high by default

Greg Colbourn
Other type of bounty

I would really appreciate it if someone could provide a detailed technical argument for believing P(doom|AGI)≤10%.

I’m hereby offering up to $1000 for someone to provide one, on a level that takes into account everything written here (https://forum.effectivealtruism.org/posts/THogLaytmj3n8oGbD/p-doom-or-agi-is-high-why-the-default-outcome-of-agi-is-doom), and the accompanying post (https://forum.effectivealtruism.org/posts/8YXFaM9yHbhiJTPqp/agi-rising-why-we-are-in-a-new-era-of-acute-risk-and). Please also link me to anything already written, and not referenced in any answers here (https://forum.effectivealtruism.org/posts/idjzaqfGguEAaC34j/if-your-agi-x-risk-estimates-are-low-what-scenarios-make-up), that you think is relevant. I’m very interested in further steel-manning the case for high P(doom|AGI).

So far, none of the comments or answers I've received have done anything to lower my P(doom|AGI). You can see that I've replied to and rebutted all the comments so far. A lot of people's reasons for low P(doom|AGI) seem to rely on wishful thinking (the AI being aligned by default, alignment somehow being solved in time, etc.), or perhaps stem from people not wanting to sound alarmist. In particular, people don't seem to be using a security mindset, or projecting forward the need for alignment techniques to scale to superhuman/superintelligent AI (which will come after AGI). Alignment needs to be 100% watertight, with zero prompt-engineering hacks possible!
