Nick Bostrom
Philosopher; 'Superintelligence' author; FHI founder
ai-safety (5)
ai (4)
ai-governance (4)
existential-risk (2)
future (2)
ai-alignment (1)
ai-regulation (1)
cern-for-ai (1)
economics (1)
- AI poses an existential threat to humanity
Nick Bostrom votes For and says:
the greatest existential risks over the coming decades or century arise from certain anticipated technological breakthroughs that we might make, in particular machine superintelligence. Unverified source (2017)
- Build artificial general intelligence
Nick Bostrom votes For and says:
I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we’ll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. […] I think ultimately t… Unverified source (2025)
- AGI will create abundance
Nick Bostrom votes For.
- Participate in shaping the future of AI
Nick Bostrom votes For and says:
Yudkowsky has proposed that a seed AI be given the final goal of carrying out humanity’s “coherent extrapolated volition” (CEV) [...] Yudkowsky sees CEV as a way for the programmers to avoid arrogating to themselves the privilege or burden of determi… Unverified source (2014)
- Mandate the CERN for AI to build safe superintelligence
Nick Bostrom votes For and says:
The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such su… Unverified source (2003)
- Could AGI quickly lead to superintelligence?
Nick Bostrom votes For and says:
once we have full AGI, superintelligence might be quite close on the heels of that. Unverified source (2025)
- Ban superintelligence development until safety consensus is reached
Nick Bostrom votes For and says:
Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintel… Unverified source (2014)