OpenAI
AI research organization
ai-safety (6)
ai-governance (4)
ai (2)
ai-regulation (2)
cern-for-ai (2)
ai-alignment (1)
ai-risk (1)
future (1)
policy (1)
research-policy (1)
- Create a global institute for AI, similar to CERN
  OpenAI votes For and says:
  Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, ... Unverified source (2023)
- Participate in shaping the future of AI
  OpenAI votes For and says:
  We aim to develop mechanisms that empower human stakeholders to express their intent clearly and supervise AI systems effectively - even in complex situations, and as AI capabilities scale beyond human capabilities. Decisions about how AI behaves and ... Unverified source (2025)
- Mandate the CERN for AI to build safe superintelligence
  OpenAI votes For and says:
  First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. ... Unverified source (2023)
- Open-source AI is more dangerous than closed-source AI
  OpenAI votes For and says:
  Due to our concerns about malicious applications of the technology, we are not releasing the trained model. Unverified source (2020)
- Build artificial general intelligence
  OpenAI votes For and says:
  OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and benefic... Unverified source (2018)
- Ban open source AI models capable of creating WMDs
  OpenAI votes For and says:
  Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningf... Unverified source (2025)
- Ban superintelligence development until safety consensus is reached
  OpenAI votes Against and says:
  We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require... Unverified source (2023)