Comment by Nick Bostrom

Philosopher; 'Superintelligence' author; FHI founder
Developing superintelligence is not like playing Russian roulette; it is more like undergoing risky surgery for a condition that will otherwise prove fatal. [...] Models incorporating safety progress, temporal discounting, quality-of-life differentials, and concave QALY utilities suggest that even high catastrophe probabilities are often worth accepting. [...] The optimal strategy would involve moving quickly to AGI capability, then pausing briefly before full deployment: swift to harbor, slow to berth.
Unverified source (2026)
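The comment's claim about discounting and concave QALY utilities can be illustrated with a toy expected-utility calculation. This is a minimal sketch, not the model the comment refers to; every parameter value (discount rate, utility curvature, time horizons, quality levels) is an illustrative assumption.

```python
# Toy model of the comment's claim (illustrative assumptions throughout):
# compare the discounted, concave-utility QALY stream of the status quo
# (a condition that proves fatal within a few years) against a risky
# intervention that either succeeds (long healthy life) or fails
# (catastrophe, zero further utility).

def discounted_qalys(quality, years, rate=0.03, alpha=0.7):
    """Sum of discounted concave utilities u(q) = q**alpha over `years`."""
    return sum((quality ** alpha) / ((1 + rate) ** t) for t in range(years))

# Status quo: degraded quality of life (0.5), fatal within 10 years.
status_quo = discounted_qalys(quality=0.5, years=10)

# Intervention: probability p of catastrophe (utility 0),
# otherwise full quality of life for 60 more years.
def intervention_value(p_catastrophe):
    return (1 - p_catastrophe) * discounted_qalys(quality=1.0, years=60)

# Under these assumed parameters, even a 40% catastrophe probability
# leaves the gamble with higher expected discounted utility.
print(status_quo, intervention_value(0.4))
```

The qualitative point survives many parameter choices: a short, low-quality default horizon is worth little under discounting, so a long-horizon payoff can dominate even after a large probability of total loss.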
Policy proposals and claims