OpenAI
AI research organization
Tags: ai (2), ai-ethics (2), ai-governance (2), ai-policy (2), public-interest-ai (2), ai-deployment (1), ai-regulation (1), ai-risk (1), ai-safety (1), digital-democracy (1), existential-risk (1), future (1)
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
OpenAI strongly agrees and says:
We aim to develop mechanisms that empower human stakeholders to express their intent clearly and supervise AI systems effectively, even in complex situations and as AI capabilities scale beyond human capabilities. Decisions about how AI behaves and what it is allowed to do should be determined by broad bounds set by society, and evolve with human values and contexts. AI development and deployment must have human control and empowerment at its core. We create transparent, auditable, and steerable models by integrating explicit policies and “case law” into our model training process. We facilitate transparency and democratic input by inviting public engagement in policy formation and incorporating feedback from various stakeholders. Our Democratic Inputs to AI grant program was one example of exploring possible democratic processes for deciding AI model behavior. Another example is our publishing of the Model Spec, making explicit the tradeoffs and decisions that went into shaping it and inviting public input on future versions. (2025, source unverified)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
OpenAI strongly agrees and says:
Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats. If a model reaches a High capability threshold, we won’t release it until we’re confident the risks have been sufficiently mitigated. Our Safety Advisory Group, a cross-functional team of internal leaders, partners with internal safety teams to administer the framework. For high-risk launches, they assess any remaining risks, evaluate the strength of our safeguards, and advise OpenAI leadership on whether it’s safe to move forward. Our Board’s Safety and Security Committee provides oversight of these decisions. This can mean delaying a release, limiting who can use the model, or turning off certain features, even if it disappoints users. For a High biology capability model, this would mean putting in place sufficient safeguards that would bar users from gaining expert capabilities given their potential for severe harm. (2025, source unverified)
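To make the release-gating rule described in the statement above concrete, here is a minimal, hypothetical sketch in Python. The names (CapabilityLevel, Evaluation, release_decision) and the safeguards_sufficient flag are illustrative assumptions, not OpenAI's actual Preparedness Framework or code; the sketch only captures the stated rule that a model meeting a High capability threshold in a risk domain is held back until safeguards are judged sufficient.

```python
# Hypothetical illustration of the gating logic described above.
# All names and structures are assumptions for clarity, not OpenAI's implementation.
from dataclasses import dataclass
from enum import Enum


class CapabilityLevel(Enum):
    BELOW_HIGH = "below_high"
    HIGH = "high"


@dataclass
class Evaluation:
    domain: str                   # e.g. "biology"
    capability: CapabilityLevel   # outcome of capability evaluations
    safeguards_sufficient: bool   # judgment reached after safety review


def release_decision(evaluation: Evaluation) -> str:
    """Return a coarse release decision for a single risk domain."""
    if evaluation.capability is CapabilityLevel.HIGH and not evaluation.safeguards_sufficient:
        # A High-capability model is held back (delayed, access-restricted,
        # or feature-limited) until risks are sufficiently mitigated.
        return "hold: mitigate risks before release"
    return "release"


print(release_decision(Evaluation("biology", CapabilityLevel.HIGH, False)))
```

This sketch omits the human steps the statement emphasizes (Safety Advisory Group review, leadership decision, Board oversight); it is only a compact restatement of the threshold-and-safeguards condition.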