OpenAI
Delegate: choose a list of delegates, and your vote follows the majority of them unless you vote directly.
ai-governance (4)
ai-risk (4)
ai-safety (4)
policy (4)
ai-regulation (3)
regulations (3)
ethics (2)
existential-risk (2)
open-source (2)
research-policy (2)
ai (1)
ai-ethics (1)
ai-policy (1)
cern-for-ai (1)
nuclear (1)
OpenAI votes For and says:
Due to our concerns about malicious applications of the technology, we are not releasing the trained model. Unverified source (2020)
OpenAI votes For and says:
First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. … Unverified source (2023)
OpenAI votes For and says:
The Safety Hub provides public access to safety evaluation results for our models. […] evaluations of models under our Preparedness Framework prior to their deployment. Unverified source (2025)
OpenAI votes For and says:
Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningf… Unverified source (2025)