Nick Bostrom
Philosopher; author of 'Superintelligence'; founder of the Future of Humanity Institute (FHI)
ai (2)
ai-risk (2)
ai-safety (2)
existential-risk (2)
future (2)
ai-alignment (1)
ai-ethics (1)
ai-governance (1)
ai-policy (1)
ai-regulation (1)
Should humanity build artificial general intelligence?
Nick Bostrom strongly agrees and says:
I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we’ll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. […] I think ultimately this transition to the superintelligence era is one we should do. It would be in itself an existential catastrophe if we forever failed to develop superintelligence. (2025; source unverified)
Could AGI quickly lead to superintelligence?
Nick Bostrom strongly agrees and says:
once we have full AGI, superintelligence might be quite close on the heels of that. (2025; source unverified)