Michael Jacob
Council on Strategic Risks fellow
Tags: ai, ai-deployment, ai-ethics, ai-governance, ai-policy, ai-regulation, ai-risk, ai-safety, existential-risk, public-interest-ai
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Michael Jacob strongly agrees and says:
Via the Australia Group and the US Department of Commerce, the US Government should explicitly design export controls to limit the open sourcing of the riskiest AI-enabled Biological Design Tools (BDTs). Because publishing a tool online can be considered an “export,” new export-control restrictions would necessarily limit the ability to freely open source such software. This is a feature, not a bug, of the export-control process: open sourcing should not be a loophole that allows dangerous AI-enabled software to proliferate. For these export controls to be effective, the United States should consider adding a new, narrow carve-out to the “publicly available” exclusion. (2024)