Jonas B. Sandbrink
Biosecurity researcher, University of Oxford
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Jonas B. Sandbrink agrees and says:
LLMs, such as GPT-4 and its successors, might provide dual-use information and thus remove some of the barriers encountered by historical biological weapons efforts. [...] BDTs may enable the creation of pandemic pathogens substantially worse than anything seen to date, and could enable more predictable and targeted forms of biological weapons. In combination, the convergence of LLMs and BDTs could raise the ceiling of harm from biological agents and make such agents broadly accessible. A range of interventions would help to manage these risks. Independent pre-release evaluations could help clarify the capabilities of models and the effectiveness of safeguards. Options for differentiated access to such tools should be carefully weighed against the benefits of openly releasing systems. (2023) [source unverified]