OpenAI

Info
AI research organization
  • Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
    We aim to develop mechanisms that empower human stakeholders to express their intent clearly and supervise AI systems effectively, even in complex situations and as AI capabilities scale beyond human levels. Decisions about how AI behaves and what it is allowed to do should be determined by broad bounds set by society, and evolve with human values and contexts. AI development and deployment must have human control and empowerment at its core. We create transparent, auditable, and steerable models by integrating explicit policies and “case law” into our model training process. We facilitate transparency and democratic input by inviting public engagement in policy formation and incorporating feedback from various stakeholders. Our democratic inputs to AI grant program was one example of exploring possible democratic processes for deciding AI model behavior. Another example is our publishing of the Model Spec, making explicit the tradeoffs and decisions that went into shaping it, and inviting public input on future versions. (2025) source Unverified
  • Should we ban future open-source AI models that can be used to create weapons of mass destruction?
    Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats. If a model reaches a High capability threshold, we won’t release it until we’re confident the risks have been sufficiently mitigated. Our Safety Advisory Group, a cross-functional team of internal leaders, partners with internal safety teams to administer the framework. For high-risk launches, they assess any remaining risks, evaluate the strength of our safeguards, and advise OpenAI leadership on whether it’s safe to move forward. Our Board’s Safety and Security Committee provides oversight of these decisions. This can mean delaying a release, limiting who can use the model, or turning off certain features, even if it disappoints users. For a High biology capability model, this would mean putting in place safeguards sufficient to bar users from gaining expert-level capabilities, given their potential for severe harm. (2025) source Unverified
  • Should humanity build artificial general intelligence?
    OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles: [...] We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. (2018) source Unverified
  • Should we create a global institute for AI, similar to CERN?
    Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say. (2023) source Unverified
  • Should humanity ban the development of superintelligence until there is strong public buy-in and broad scientific consensus that it will be done safely and controllably?
    We are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. (2023) source Unverified