Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
ai
international-relations
public-interest-ai
ai-governance
ai-risk
ai-policy
ai-regulation
ai-safety
Quotes (21):
- Demis Hassabis (Nobel laureate, AI researcher, CEO of DeepMind) strongly agrees and says: "Then what I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally. And then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things." (2023)
- Ursula von der Leyen (President of the European Commission) agrees and says: "First of all, let’s not forget that A.I. is a tremendous opportunity — if used responsibly. But responsibility at the frontier cannot be left to chance or to any single company or country. Just as the world built institutions to govern nuclear power — with the IAEA at the core of a U.N. framework — we should work with partners on a global architecture for A.I. that can set clear rules for the most compute-intensive systems, assess risks, and verify compliance. That means independent testing, shared scientific expertise, and an international authority that can see across borders. Europe will do its part. But only a U.N.-anchored, IAEA-like approach for the highest-risk A.I. can give citizens everywhere the confidence that this technology will be safe and trustworthy." (2024)
- Yann LeCun (computer scientist, AI researcher) strongly disagrees and says: "Calls for a global A.I. regulator modelled on the IAEA are misguided. Nuclear technology is a narrow, slow-moving domain with obvious materials to track and a small set of state actors; A.I. is a broad, fast-moving field with millions of researchers and developers worldwide. A U.N.-led, IAEA-style body that ‘oversees’ compute-intensive A.I. would be unworkable in practice and harmful in principle: it would freeze progress, entrench incumbents, and starve open research — all while failing to stop bad actors who won’t participate. What we need instead is open science, open models, and targeted rules for concrete harms. Safety and robustness should be advanced by more eyes on the code and more researchers able to test and improve systems — not by a centralized global authority trying to police computation itself." (2023)
- Eric Schmidt (former Google CEO; tech investor) agrees and says: "We are moving well past regulatory understanding, government understanding of what is possible. That’s why, in my view, A.I. is begging for global rules. Think of institutions the world already knows: an IPCC to organize scientific consensus and, for the riskiest, compute-heavy systems, an IAEA-style body rooted in the U.N. system to set standards, inspect, and verify. We did this for nuclear technology because the stakes were existential; with frontier A.I., the stakes are at least comparable in their potential for harm if we get it wrong. A U.N.-anchored watchdog could focus on the narrow slice that truly warrants it: the most powerful training runs and deployments. It would not micromanage every app or model. But it would give governments confidence that someone with access and authority is watching the red lines and sounding the alarm when needed, so innovation can continue without sleepwalking into catastrophe." (2023)
- Mustafa Suleyman (Microsoft AI CEO; author) agrees and says: "So, an intergovernmental panel on A.I. would be one that has access to all of the largest commercial labs and academic labs all around the world developing these large language models. They would be able to probe them and test them, audit them, look at what data they are, you know, using to do training and try to find weaknesses and failure modes in the models. Once they discover those, they should then be able to share those with other national or international commercial competitors in order to improve the quality and performance of those models. But the first step is really just understanding and auditing and establishing the fact pattern of what are the boundaries that these models can’t cross today and what — where are they headed in the future. In short, we need a standing, global mechanism with legitimacy and access to oversee the most compute-intensive A.I. development, the way the IAEA oversees nuclear technology — not to stop progress, but to make it safe and accountable for everyone." (2023)
- Akash Wasil (Lawfare contributing writer) disagrees and says: "Many have turned to the International Atomic Energy Agency as a model for international A.I. institutions. But an ‘IAEA for A.I.’ is not a panacea. The IAEA works in a context with material stockpiles, treaty hooks, and decades of consensus about the nature of the risk. Advanced A.I. lacks that clarity. Verification of compute use is far more complex than accounting for fissile material, and enforcement would still depend on geopolitics at the U.N. Security Council. If nations eventually converge on the need for strict controls at the frontier, an IAEA-style body could help with incident reporting, standards, inspections, and emergency response. Until then, the better path is building verifiable agreements step by step — beginning with hardware safeguards and jurisdictional standards — rather than leaping to a U.N.-led super-regulator for all compute-intensive A.I." (2024)
- Ryan Nabil (tech policy researcher, NTU Foundation) disagrees and says: "I have attached my paper entitled ‘Global AI Governance and the United Nations: The UN Should Update Its Existing Institutional Framework Instead of Creating a Global AI Agency’." (2023)
- Mary Robinson (former Irish president; Elders chair) agrees and says: "Along with my fellow Elders, I reaffirm our call from May 2023 for an international AI safety body." (2024)
- The Elders (independent group of global leaders) strongly agree and say: "A global architecture is needed [...] drawing on the model of the International Atomic Energy Agency [...] via the UN General Assembly [...] establish an AI safety agency." (2023)
- Ian J. Stewart (nonproliferation expert and scholar) strongly disagrees and says: "But the nuclear governance model is actually not a good one for regulation of Artificial General Intelligence." (2023)
- Mike Watson (Hudson Institute fellow and writer) strongly disagrees and says: "If the alarmists are right about the dangers of AI and try to fix it with an IAEA-style solution, then we’re already doomed." (2023)
- Kenneth Payne (war studies professor, KCL) strongly disagrees and says: "OpenAI, the company behind ChatGPT, is worried about the emergence of a ‘superintelligent’ AI - perhaps even an AGI, or Artificial General Intelligence, that will be far smarter than humans. So, this week they published a memo sketching out how that risk might be managed. They’re proposing governance that looks like the international regime for nuclear energy, the IAEA. This will regulate who does what research (perhaps by international treaty, it’s not clear) and then monitor compliance. Alas, there are some big, I think insurmountable, problems with their proposal. [...] Back to the idea of international regulation. If only we could get all governments to agree to regulation along OpenAI’s lines… Fat chance." (2023)
- Zlatko Lagumdžija (former Bosnian prime minister) strongly agrees and says: "So today I think we urgently need a leadership call, and I see our gathering as one of the leadership calls from this profile of people for establishment of an IAIA—International Artificial Intelligence Agency—as a next step to understanding the governance of the new AI and Internet ecosystem for work and life. So using the same UN words defining the International Atomic Energy Agency, we can say that the IAIA is the ‘world’s central intergovernmental forum for scientific and technical cooperation in the AI field. It works for the safe, secure, and peaceful use of AI and digital governance, contributing to international peace and security in accordance to the Universal Declaration of Human Rights and to the UN Sustainable Development Goals (SDGs).’ I hope it will not take ten years to get from the Commission to a proper agency where we deal with artificial intelligence, like it took us ten years to come to some kind of functioning body when it was dealing with atomic energy. I see today our discussion as some kind of call to leaders to show the wisdom in the interest of using science and technology fruits in order to move our plans in the right direction." (2021)
- Ian Bremmer (geopolitics expert; Eurasia Group founder) strongly disagrees and says: "That’s a critical, critical aspect of fighting climate change. And with A.I., so much more urgent and fast-moving, you will need an organization like that, a multistakeholder one. But another thing we’re calling for is a techno credential approach, like you see in the global financial community. In other words, a geotechnology stability board where you will have governments and non-state actors together being able to respond to crises in real time as they occur. This isn’t like the United Nations. This is more like how the world responded to the 2008 financial crisis, and where even though they were different governments, the United States and China are both members of the IMF. They’re both members of the Bank for International Settlements. You will need A.I. institutions to be similarly inclusive and similarly non-politicized to be able to respond to challenges that are fundamentally global." (2023)
- Rishi Sunak (former UK prime minister) disagrees and says: "We’re a long way from anyone establishing an IAEA equivalent for AI, those things are long into the distance. [...] But in the first instance just talking through this with like-minded countries is a sensible thing. We will all benefit from hearing and talking to each other in conversation with businesses themselves." (2023)
- Nick Clegg (Meta president of global affairs) agrees and says: "The fundamental idea is, how should we as a world react if and when AI develops a degree of autonomy or agency? [...] Once we do that, we do cross a Rubicon. If that happens, by the way, there’s debate among experts; some say in the next 18 months, some say not within 80 years. But once you cross that Rubicon, you’re in a very different world. The large language models we’ve released are very primitive compared to that vision of the future. [...] But if it does emerge, I do think, whether it’s the IAEA or some other regulatory model, you’re in a completely different ballgame." (2023)
- António Guterres (UN Secretary-General) strongly agrees and says: "I would be favorable to the idea that we could have an artificial intelligence agency." (2023)
- Gary Marcus (professor of psychology and neural science) agrees and says: "I think the UN, UNESCO, places like that have been thinking about this for a long time." (2023)
- Dmitry Polyanskiy (Russia’s Deputy U.N. Ambassador) strongly disagrees and says: "We oppose the formation of supra-national oversight bodies in the field of AI." (2023)
- Robert Trager (Oxford political scientist; governance expert) disagrees.
- Sam Altman (CEO at OpenAI) agrees and says: "We talk about the IAEA as a model where the world has said ‘OK, very dangerous technology, let’s all put some guard rails.’" (2023)