Delegate
Choose a list of delegates; your vote will follow their majority unless you vote directly.
ai-governance (3)
ai-regulation (3)
ai-safety (3)
open-source (3)
policy (3)
ai-risk (2)
regulations (2)
ai (1)
ai-ethics (1)
ethics (1)
existential-risk (1)
nuclear (1)
open-science (1)
research-policy (1)
transparency (1)
OpenAI votes For and says:
Due to our concerns about malicious applications of the technology, we are not releasing the trained model. Unverified source (2020)
OpenAI votes For and says:
Safety is foundational [...] for open models. gpt-oss models perform comparably [...] on internal safety benchmarks. We're sharing the results [...] in the model card. Unverified source (2025)
OpenAI votes For and says:
Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningf... Unverified source (2025)