Chatham House
International affairs think tank
Tags: ai-governance (3), ai-policy (3), cern-for-ai (3), research-policy (3), ai (2), eu (2), international-relations (2), science-funding (2), ai-regulation (1), ai-safety (1), ethics (1), ethics-in-research (1), existential-risk (1), open-science (1), public-interest-ai (1)
Should a CERN for AI have a central hub in one location?
Chatham House disagrees and says:
Researchers have also raised concerns that giving a centralized institution access to the advanced AI models of leading labs might compromise the security of those labs and models. For example, effective access to design evaluations and benchmarks may require the ability to copy a given model, which could undermine the commercial interests of those labs and enable diffusion of those models before adequate testing. This may be less of an issue for mechanistic interpretability and similar research, which may not require access to the latest models. Lastly, a CERN for AI would have to grapple with rising geopolitical tensions. It is arguably harder today to start an international governance body than it was in the era immediately after the Second World War. Most leading AI labs are based in the US and China, two countries that are arguably engaged in a ‘new cold war’ that is fuelling a technological arms race between them. (2024) source Unverified
Should a CERN for AI aim to build safe superintelligence?
Chatham House disagrees and says:
Proponents of a CERN-like body for AI have called for its creation as a way to build safer AI systems, enable more international coordination in AI development, and reduce dependencies on private industry labs for the development of safe and ethical AI systems. Rather than creating its own AI systems, some argue, a CERN-like institution could focus specifically on research into AI safety. Some advocates, such as computer scientist Gary Marcus, also argue that the CERN model could help advance AI safety research beyond the capacity of any one firm or nation. The new institution could bring together top talent under a mission grounded in principles of scientific openness, adherence to a pluralist view of human values (such as the collective goals of the UN’s 2030 Agenda for Sustainable Development), and responsible innovation. (2024) source Unverified
Should a CERN for AI be completely non-profit?
Chatham House agrees and says:
Some advocates, such as computer scientist Gary Marcus, also argue that the CERN model could help advance AI safety research beyond the capacity of any one firm or nation. The new institution could bring together top talent under a mission grounded in principles of scientific openness, adherence to a pluralist view of human values (such as the collective goals of the UN’s 2030 Agenda for Sustainable Development), and responsible innovation. Similar sentiments have been repeated by other prominent actors in the AI governance ecosystem, including Ian Hogarth, chair of the UK’s AI Safety Institute, who argues that an international research institution offers a way to ensure safer AI research in a controlled and centralized environment without being driven by profit motive. [...] A publicly funded international research organization conducting safety research might be more resilient than private sector labs to economic pressures, and better able to avoid the risk of profit-seeking motives overriding meaningful research into AI safety measures. (2024) source Verified