Future of Life Institute

Nonprofit on existential risks
  • Should a CERN for AI aim to build safe superintelligence?
    Haydn Belfield, a researcher at the University of Cambridge’s Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, proposes two mutually reinforcing institutions: an International AI Agency (IAIA) and a CERN for AI. The IAIA would serve primarily as a monitoring and verification body, enforced through chip import restrictions: only countries that sign a verifiable commitment to certain safe compute practices would be permitted to accumulate large amounts of compute. The CERN for AI, meanwhile, would be an international cooperative scientific megaproject that centralises frontier model training runs in a single facility. [...] As an example of how the two reinforce each other, frontier foundation models would be shared out of the CERN for AI under the supervision of the IAIA. (2024)