Nick Bostrom
Philosopher; 'Superintelligence' author; FHI founder
ai-governance (4)
ai-safety (3)
ai (1)
ai-regulation (1)
cern-for-ai (1)
existential-risk (1)
future (1)
AI poses an existential threat to humanity
Nick Bostrom votes For and says:
the greatest existential risks over the coming decades or century arise from certain anticipated technological breakthroughs that we might make, in particular machine superintelligence. Unverified source (2017)
Participate in shaping the future of AI
Nick Bostrom votes For and says:
Yudkowsky has proposed that a seed AI be given the final goal of carrying out humanity’s “coherent extrapolated volition” (CEV) [...] Yudkowsky sees CEV as a way for the programmers to avoid arrogating to themselves the privilege or burden of determi... Unverified source (2014)
Mandate the CERN for AI to build safe superintelligence
Nick Bostrom votes For and says:
The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such su... Unverified source (2003)
Ban superintelligence development until safety consensus is reached
Nick Bostrom votes For and says:
Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintel... Unverified source (2014)