OpenAI
ai-governance (9)
ai-safety (7)
policy (7)
ai-regulation (6)
research-policy (6)
ai-policy (5)
ai-ethics (4)
ai-risk (4)
regulations (4)
ai (3)
existential-risk (3)
open-source (3)
cern-for-ai (2)
ethics (2)
transparency (2)
OpenAI votes For and says:
"OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and benefic..."
Unverified source (2018)

OpenAI votes For and says:
"Our Raising Concerns Policy [...] expressly prohibits harassment and retaliation [...]. Our policy [...] employees have the right to make reports or disclosures to government agencies [...]."
Unverified source (2026)

OpenAI votes For and says:
"Safety is foundational [...] for open models. gpt-oss models perform comparably [...] on internal safety benchmarks. We’re sharing the results [...] in the model card."
Unverified source (2025)

OpenAI votes For and says:
"First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. ..."
Unverified source (2023)

OpenAI votes For and says:
"The Safety Hub provides public access to safety evaluation results for our models. [...] evaluations of models under our Preparedness Framework prior to their deployment."
Unverified source (2025)

OpenAI votes For and says:
"Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems,..."
Unverified source (2023)

OpenAI votes For and says:
"Due to our concerns about malicious applications of the technology, we are not releasing the trained model."
Unverified source (2020)

OpenAI votes For and says:
"Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningf..."
Unverified source (2025)

OpenAI votes Against and says:
"We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require..."
Unverified source (2023)