9. Don’t work on long-term AGI x-risk now

Suppose you believe that AGI will be invented in 200 years, and that, if it is invented before the alignment problem is solved, everyone will be dead forever. Then you probably shouldn't work on AGI Safety right now.

First, our ability to work on AGI Safety will increase as we get closer to actually building AGI. It is preposterous to think such a problem can be solved purely by reasoning from first principles: no science makes progress without observation, not even pure mathematics. Trying to solve AGI risk now is as absurd as trying to cure aging before the invention of the microscope.

Second, spending resources now is much more expensive than spending them in 100 years: assuming a 4% annual growth rate of the economy, it would be around 50 times as expensive. (In all honesty, I don't actually believe in unlimited exponential economic growth. But my job here is to attack the AI Safety premise, not to accurately represent my own beliefs.)
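
If you want the compound-growth arithmetic behind that factor of 50 spelled out, here is a minimal sketch; the 4% rate and the 100-year horizon are the assumptions from the paragraph above, not independent claims:

```python
# Compound growth: one unit of resources spent today corresponds to
# (1 + r)^t units of the economy t years from now.
growth_rate = 0.04   # assumed annual growth rate of the economy
years = 100          # assumed horizon

multiplier = (1 + growth_rate) ** years
print(f"Spending now is ~{multiplier:.1f}x as expensive")  # ~50.5x
```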

Solving AGI Safety becomes easier over time, and relatively cheaper on top of that. Hence you should not work on AGI Safety now if you think it can wait.
