Nick Bostrom
Philosopher; author of 'Superintelligence'; founder of the Future of Humanity Institute (FHI)
ai (4)
ai-safety (4)
existential-risk (4)
ai-governance (3)
ai-regulation (3)
ai-policy (2)
ai-risk (2)
future (2)
ai-alignment (1)
ai-ethics (1)
cern-for-ai (1)
democracy (1)
ethics (1)
eu (1)
international-relations (1)
Should humanity build artificial general intelligence?
Nick Bostrom strongly agrees and says:
I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we’ll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. […] I think ultimately this transition to the superintelligence era is one we should do. It would be in itself an existential catastrophe if we forever failed to develop superintelligence. (2025) source Unverified
Should a CERN for AI aim to build safe superintelligence?
Nick Bostrom strongly agrees and says:
The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. [...] Suppose we develop superintelligence safely and ethically, and that we make good use of the almost magical powers this technology would unlock. We would transition into an era in which human labor becomes obsolete—a "post-instrumental" condition in which human efforts are not needed for any practical purpose. (2003) source Unverified
Could AGI quickly lead to superintelligence?
Nick Bostrom strongly agrees and says:
once we have full AGI, superintelligence might be quite close on the heels of that. (2025) source Unverified
Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?
Nick Bostrom agrees and says:
Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn’t overlooked some flaw in our reasoning. Unfortunately, we do not have the ability to pause. (2014) source Unverified