OpenAI
AI research organization
ai-governance (3)
ai-safety (3)
ai (2)
ai-policy (2)
ai-regulation (2)
cern-for-ai (2)
existential-risk (2)
ai-deployment (1)
ai-ethics (1)
ai-risk (1)
ethics (1)
eu (1)
international-relations (1)
public-interest-ai (1)
research-policy (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
OpenAI strongly agrees and says:
Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats. If a model reaches a High capability threshold, we won’t release it until we’re confident the risks have been sufficiently mitigated. Our Safety Advisory Group, a cross-functional team of internal leaders, partners with internal safety teams to administer the framework. For high-risk launches, they assess any remaining risks, evaluate the strength of our safeguards, and advise OpenAI leadership on whether it’s safe to move forward. Our Board’s Safety and Security Committee provides oversight of these decisions. This can mean delaying a release, limiting who can use the model, or turning off certain features, even if it disappoints users. For a High biology capability model, this would mean putting in place sufficient safeguards that would bar users from gaining expert capabilities given their potential for severe harm. (2025) source Unverified
Should a CERN for AI aim to build safe superintelligence?
OpenAI agrees and says:
First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year. And of course, individual companies should be held to an extremely high standard of acting responsibly.

Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.

Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into. (2023) source Unverified
Should we create a global institute for AI, similar to CERN?
OpenAI agrees and says:
Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say. (2023) source Unverified