Chatham House
International affairs think tank
Should a CERN for AI aim to build safe superintelligence?
Chatham House disagrees and says:
Proponents of a CERN-like body for AI have called for its creation as a way to build safer AI systems, enable more international coordination in AI development, and reduce dependence on private industry labs for the development of safe and ethical AI systems. Rather than creating its own AI systems, some argue, a CERN-like institution could focus specifically on research into AI safety. Some advocates, such as computer scientist Gary Marcus, also argue that the CERN model could help advance AI safety research beyond the capacity of any one firm or nation. The new institution could bring together top talent under a mission grounded in principles of scientific openness, adherence to a pluralist view of human values (such as the collective goals of the UN's 2030 Agenda for Sustainable Development), and responsible innovation. (2024)
Should we create a global institute for AI, similar to CERN?
Chatham House disagrees and says:
Long timelines and cost overruns often plague ambitious big-science collaborations. Physics breakthroughs have required enormous hardware investments over many years: to build CERN's Large Hadron Collider, over 10,000 scientists and engineers from hundreds of universities and labs contributed to its design and construction over a decade. While current computer clusters for AI research have yet to require such large workforces, constructing data centres and network infrastructure at scale for a new institute would still take time, investment, and reliable access to currently undersupplied specialized chips for AI development. That said, the modular nature of graphics processing units (GPUs) and servers could allow AI infrastructure to scale much faster than has been feasible in previous science megaprojects. Challenges in AI safety also differ from those of particle physics, so addressing them may require more dynamic, distributed initiatives. Care would need to be taken to involve diverse stakeholders and to balance capabilities against controls. Inflated expectations for AI governance via a CERN-like model could backfire if they are not realistic about such an organization's inherent limitations. (2024)