Nonprofit on existential risks
Haydn Belfield, a researcher at the University of Cambridge’s Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, proposes two mutually reinforcing institutions: an International AI Agency (IAIA) and a CERN for AI. The IAIA would primarily serve as a monitoring and verification body, enforced through chip import restrictions: only countries that sign a verifiable commitment to certain safe compute practices would be permitted to accumulate large amounts of compute. The CERN for AI, meanwhile, would be an international scientific cooperative megaproject that centralises frontier model training runs in a single facility. [...] As an example of how the two institutions would reinforce each other, frontier foundation models would be shared out of the CERN for AI under the supervision of the IAIA.
(2024)
Unverified
Replying to the Future of Life Institute