OpenAI
Tags:
ai-governance (7)
ai-regulation (6)
ai-safety (5)
ai-policy (4)
ai (3)
ai-ethics (3)
policy (3)
research-policy (3)
cern-for-ai (2)
ethics (2)
existential-risk (2)
future (2)
international-relations (2)
defense (1)
law (1)
OpenAI votes For and says:
Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, … (unverified source, 2023)
OpenAI votes For and says:
We aim to develop mechanisms that empower human stakeholders to express their intent clearly and supervise AI systems effectively - even in complex situations, and as AI capabilities scale beyond human capabilities. Decisions about how AI behaves and … (unverified source, 2025)
OpenAI votes For and says:
First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. … (unverified source, 2023)
OpenAI votes For and says:
Due to our concerns about malicious applications of the technology, we are not releasing the trained model. (unverified source, 2020)
OpenAI votes For and says:
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and benefic… (unverified source, 2018)
OpenAI votes For and says:
Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningf… (unverified source, 2025)
OpenAI votes Against and says:
We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require… (unverified source, 2023)