Should we create a global institute for AI, similar to CERN?
Results (29):
Quotes (22)
- Yoshua Bengio (AI pioneer, Turing Award winner) strongly agrees: “In order to reduce the probability of someone intentionally or unintentionally bringing about a rogue AI, we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs.” (source verified)
- Demis Hassabis (Nobel laureate, AI researcher, CEO of DeepMind) strongly agrees: “What I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally.” (source verified)
- Ian Hogarth (UK AI Safety Institute chair) strongly agrees: “A thought experiment for regulating AI in two distinct regimes is what I call The Island. In this scenario, experts trying to build God-like AGI systems do so in a highly secure facility: an air-gapped enclosure with the best security humans can build. All other attempts to build God-like AI would become illegal; only when such AI were provably safe could they be commercialised ‘off-island’. This may sound like Jurassic Park, but there is a real-world precedent for removing the profit motive from potentially dangerous research and putting it in the hands of an intergovernmental organisation. This is how Cern, which operates the largest particle physics laboratory in the world, has worked for almost 70 years. [...] I would support significant regulation by governments and a practical plan to transform these companies into a Cern-like organisation.” (2024; source verified)
- Tim Berners-Lee (inventor of the World Wide Web) strongly agrees: “We can’t let the same thing happen with AI. I coded the world wide web on a single computer in a small room. But that small room didn’t belong to me, it was at Cern. Cern was created in the aftermath of the second world war by the UN and European governments who identified a historic, scientific turning point that required international collaboration. It is hard to imagine a big tech company agreeing to share the world wide web for no commercial reward like Cern allowed me to. That’s why we need a Cern-like not-for-profit body driving forward international AI research. I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late.” (2025; source verified)
- Gary Marcus (professor of psychology and neural science) strongly agrees: “In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, non profit and neutral.” (source verified)
- JM Monguet (UPC professor; collective and artificial intelligence) strongly agrees: “AI security is an issue of utmost importance, and ultimately international institutions have an important role to play in bringing the diversity of interests and perceptions to the table.”
- Haydn Belfield (research scientist at Google DeepMind; research affiliate at the University of Cambridge’s CSER and the Leverhulme CFI) strongly agrees: “Haydn Belfield, a researcher at the University of Cambridge’s Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, proposes two reinforcing institutions: an International AI Agency (IAIA) and CERN for AI. The IAIA would primarily serve as a monitoring and verification body, enforced by chip import restrictions: only countries that sign a verifiable commitment to certain safe compute practices would be permitted to accumulate large amounts of compute. Meanwhile, a ‘CERN for AI’ is an international scientific cooperative megaproject on AI which would centralise frontier model training runs in one facility. As an example of reinforcement, frontier foundation models would be shared out of the CERN for AI, under the supervision of the IAIA.” (2024; source verified)
- Gideon Lichfield (former editor-in-chief of Wired and MIT Technology Review) agrees: “It’s a CERN or DARPA for AI. Many of the US’s biggest technological innovations in the 20th century came out of the research labs at firms like AT&T, Xerox, and IBM. But those firms still had profits in their sights, not societal goals. DARPA, however, funds research that’s crucial to US national security. CERN pools billions of dollars worth of research funding that individual countries wouldn’t be able to muster alone. Public AI could do the same, giving scientists the means to do cutting-edge research and develop AI models for uses the private sector might not. For example, a specialist medical AI for public-health research, a housing AI to help solve problems of affordable housing, or a legal AI to improve the justice system.” (2025; source verified)
- Brookings Institution (U.S. public policy think tank) agrees: “For joint R&D, recommendation R15 of the progress report called for development of ‘common criteria and governance arrangements for international large-scale AI R&D projects,’ with the Human Genome Project (HGP) and the European Organization for Nuclear Research (CERN) as examples of the scale and ambition needed. ‘Joint research and development applying to large-scale global problems such as climate change or disease prevention and treatment can have two valuable effects: It can bring additional resources to the solution of pressing global challenges, and the collaboration can help to find common ground in addressing differences in approaches to AI.’” (source unverified)
- Tony Blair Institute for Global Change (policy institute led by Tony Blair) agrees: “An effort such as Sentinel would loosely resemble a version of CERN for AI and would aim to become the ‘brain’ of an international regulator of AI, which would operate similarly to how the International Atomic Energy Agency works to ensure the safe and peaceful use of nuclear energy. Sentinel would initially focus on ensuring best practice in the top AI labs, but the five-year aim of such an organisation would be to form the international regulatory function across the AI ecosystem, in preparation for the proliferation of very capable models. Recommendation: The UK government should create a new national laboratory effort to test, understand and control AI to ensure it remains safe. This effort should be given sufficient freedom, funding and authority to empower it to succeed.” (2023; source unverified)
- European Laboratory for Learning and Intelligent Systems (ELLIS) (pan-European AI research network) strongly disagrees: “At ELLIS we believe that a strong AI research landscape distributed across various locations in Europe is essential for maintaining Europe’s sovereignty in this competitive field. AI research does not require access to centralized, expensive and unique physical facilities (known for example from fields like particle physics). Having a central research body would risk isolating AI research from the sectors, research communities and citizens that it should serve. A multi-centric AI research laboratory with strong institutions in all parts of Europe and well-rooted in the regional ecosystems will generate real innovation for Europe, best leverage Europe’s cultural diversity, and integrate European values in the development of future technology.” See also the ELLIS statement “AI Foundation Models – A Roadmap for Europe”, which includes a call to establish an intergovernmental multi-centric AI research organization with high-performance computing facilities in Europe. (source unverified)
- Daniela Diaconu (ELLIS scientific coordinator) disagrees: “So many needs for innovation are put under the label of AI that there will be no way to address them with a single organisation.” (2020; source verified)
- Eric Schmidt (former Google CEO; tech investor) disagrees: “I spent a lot of years hoping that the collaboration would occur, and there are many people in our industry who think that the arrival and development of this new intelligence is so important, it should be done in a multinational way. It should be done in the equivalent of CERN, which is the great physics laboratory, which is global in Switzerland. The political tensions and the stress over values is so great. There’s just no scenario. There’s just — I want to say it again, there’s just no scenario where you can do that.” (2024; source verified)
- Antony Blinken (U.S. Secretary of State) disagrees. (No quote provided.)
- Michael Wooldridge (Oxford AI professor; multi-agent systems expert) strongly agrees. (No quote provided.)
- Gina Raimondo (U.S. Secretary of Commerce) disagrees: “… cooperation with our allies through a global scientific network on AI safety.” (2024; source unverified)
- OpenAI (AI research organization) agrees: “Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.” (2023; source unverified)
- Chatham House (international affairs think tank) disagrees: “Long timelines and cost overruns often plague ambitious big science collaborations. Physics breakthroughs have required enormous hardware investments over years. For example, to build CERN’s Large Hadron Collider, over 10,000 scientists and engineers from hundreds of universities and labs contributed to its design and construction over a decade. But while current computer clusters for AI research have yet to require such large workforces, constructing data centres and network infrastructure at scale for a new institute will still take time, investment, and reliable access to currently undersupplied specialized chips for AI development. That said, the modular nature of graphics processing units (GPUs) and servers could allow for much faster scaling up of AI infrastructure than has been feasible in previous science megaprojects. Challenges in AI safety also differ from those of particle physics, so addressing them may require more dynamic, distributed initiatives. Care would need to be taken to involve diverse stakeholders, and to balance capabilities against controls. Inflated expectations for AI governance via a CERN-like model could backfire if they are not realistic about such an organization’s inherent limitations.” (2024; source unverified)
- Holger Hoos (machine learning professor; CAIRNE co-founder) strongly agrees: “One possibility would be to establish a large-scale research facility, a CERN for AI. That would hit the headlines everywhere and attract talent from all over the world. If you want to have global appeal, you need a beacon that is really big and bright.” (2022; source verified)
- Huw Roberts (AI policy researcher, Oxford) strongly disagrees. (No quote provided.)
- CLAIRE (Confederation of Laboratories for AI Research in Europe; European AI research network) strongly agrees. (No quote provided.)
- Centre for Future Generations (CFG) (EU policy think tank) agrees: “A CERN for AI could give Europe the computational infrastructure to build its own frontier AI models …” (2024; source unverified)
- Ursula von der Leyen (President of the European Commission) strongly agrees: “We want to replicate the success story of the CERN laboratory in Geneva. CERN hosts the largest particle accelerator in the world. And it allows the best and the brightest minds in the world to work together. We want the same to happen in our AI Gigafactories. We provide the infrastructure for large computational power. Researchers, entrepreneurs and investors will be able to join forces.” (source verified)