OpenAI
ai-governance (4)
ai-policy (4)
ai-regulation (3)
ai-ethics (2)
ai-safety (2)
international-relations (2)
ai (1)
cern-for-ai (1)
defense (1)
ethics (1)
future (1)
law (1)
policy (1)
regulations (1)
research-policy (1)
- OpenAI votes For and says: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model." Unverified source (2020)
- OpenAI votes For and says: "We aim to develop mechanisms that empower human stakeholders to express their intent clearly and supervise AI systems effectively - even in complex situations, and as AI capabilities scale beyond human capabilities. Decisions about how AI behaves and..." Unverified source (2025)
- OpenAI votes For and says: "Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems,..." Unverified source (2023)
- OpenAI votes For and says: "Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningf..." Unverified source (2025)