Chatham House

International affairs think tank
  • Should a CERN for AI aim to build safe superintelligence?
    Proponents of a CERN-like body for AI have called for its creation as a way to build safer AI systems, enable greater international coordination in AI development, and reduce dependence on private industry labs for the development of safe and ethical AI. Rather than creating its own AI systems, some argue, a CERN-like institution could focus specifically on AI safety research. Some advocates, such as computer scientist Gary Marcus, also argue that the CERN model could help advance AI safety research beyond the capacity of any one firm or nation. The new institution could bring together top talent under a mission grounded in principles of scientific openness, adherence to a pluralist view of human values (such as the collective goals of the UN's 2030 Agenda for Sustainable Development), and responsible innovation. (2024)
  • Should a CERN for AI be completely non-profit?
    Similar sentiments have been voiced by other prominent actors in the AI governance ecosystem, including Ian Hogarth, chair of the UK's AI Safety Institute, who argues that an international research institution offers a way to ensure safer AI research in a controlled and centralized environment without being driven by a profit motive. [...] A publicly funded international research organization conducting safety research might be more resilient than private sector labs to economic pressures, and better able to avoid the risk of profit-seeking motives overriding meaningful research into AI safety measures. (2024)