Future of Life Institute
Nonprofit on existential risks
Should a CERN for AI aim to build safe superintelligence?
Future of Life Institute agrees and says:
Haydn Belfield, a researcher at the University of Cambridge’s Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, proposes two reinforcing institutions: an International AI Agency (IAIA) and a CERN for AI. The IAIA would primarily serve as a monitoring and verification body, enforced by chip import restrictions: only countries that sign a verifiable commitment to certain safe compute practices would be permitted to accumulate large amounts of compute. Meanwhile, a “CERN for AI” is an international scientific cooperative megaproject on AI which would centralise frontier model training runs in one facility. [...] As an example of reinforcement, frontier foundation models would be shared out of the CERN for AI, under the supervision of the IAIA. (2024)