Chatham House
International affairs think tank
ai-governance (4)
cern-for-ai (4)
ai-policy (3)
research-policy (3)
ai (2)
ai-safety (2)
eu (2)
international-relations (2)
science-funding (2)
ai-regulation (1)
ethics (1)
ethics-in-research (1)
existential-risk (1)
open-science (1)
public-interest-ai (1)
Should a CERN for AI have a central hub in one location?
Chatham House disagrees and says:
Researchers have also raised concerns that giving a centralized institution access to the advanced AI models of leading labs might compromise the security of those labs and models. For example, effective access to design evaluations and benchmarks may require the ability to copy a given model, which could undermine the commercial interests of those labs and enable diffusion of those models before adequate testing. This may be less of an issue for mechanistic interpretability and similar research, which may not require access to the latest models. Lastly, a CERN for AI would have to grapple with rising geopolitical tensions. It is arguably harder today to start an international governance body than it was in the era immediately after the Second World War. Most leading AI labs are based in the US and China, two countries that are arguably engaged in a ‘new cold war’ that is fuelling a technological arms race between them. (2024) source (unverified)
Should a CERN for AI aim to build safe superintelligence?
Chatham House disagrees and says:
Proponents of a CERN-like body for AI have called for its creation as a way to build safer AI systems, enable more international coordination in AI development, and reduce dependencies on private industry labs for the development of safe and ethical AI systems. Rather than creating its own AI systems, some argue, a CERN-like institution could focus specifically on research into AI safety. Some advocates, such as computer scientist Gary Marcus, also argue that the CERN model could help advance AI safety research beyond the capacity of any one firm or nation. The new institution could bring together top talent under a mission grounded in principles of scientific openness, adherence to a pluralist view of human values (such as the collective goals of the UN’s 2030 Agenda for Sustainable Development), and responsible innovation. (2024) source (unverified)
Should we create a global institute for AI, similar to CERN?
Chatham House disagrees and says:
Long timelines and cost overruns often plague ambitious big science collaborations. Physics breakthroughs have required enormous hardware investments over years. For example, to build CERN’s Large Hadron Collider, over 10,000 scientists and engineers from hundreds of universities and labs contributed to its design and construction over a decade. But while current computer clusters for AI research have yet to require such large workforces, constructing data centres and network infrastructure at scale for a new institute will still take time, investment, and reliable access to currently undersupplied specialized chips for AI development. That said, the modular nature of graphics processing units (GPUs) and servers could allow for much faster scaling up of AI infrastructure than has been feasible in previous science megaprojects. Challenges in AI safety also differ from those of particle physics, so addressing them may require more dynamic, distributed initiatives. Care would need to be taken to involve diverse stakeholders, and to balance capabilities against controls. Inflated expectations for AI governance via a CERN-like model could backfire if they are not realistic about such an organization’s inherent limitations. (2024) source (unverified)
Should a CERN for AI be completely non-profit?
Chatham House agrees and says:
Some advocates, such as computer scientist Gary Marcus, also argue that the CERN model could help advance AI safety research beyond the capacity of any one firm or nation. The new institution could bring together top talent under a mission grounded in principles of scientific openness, adherence to a pluralist view of human values (such as the collective goals of the UN’s 2030 Agenda for Sustainable Development), and responsible innovation. Similar sentiments have been repeated by other prominent actors in the AI governance ecosystem, including Ian Hogarth, chair of the UK’s AI Safety Institute, who argues that an international research institution offers a way to ensure safer AI research in a controlled and centralized environment without being driven by profit motive. [...] A publicly funded international research organization conducting safety research might be more resilient than private sector labs to economic pressures, and better able to avoid the risk of profit-seeking motives overriding meaningful research into AI safety measures. (2024) source (verified)