Nick Bostrom
Philosopher; 'Superintelligence' author; FHI founder
Should a CERN for AI aim to build safe superintelligence?
Nick Bostrom strongly agrees and says:
The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. [...] Suppose we develop superintelligence safely and ethically, and that we make good use of the almost magical powers this technology would unlock. We would transition into an era in which human labor becomes obsolete—a "post-instrumental" condition in which human efforts are not needed for any practical purpose. (2003; source unverified)