Are open-source AI models more dangerous than closed models like GPT-4? A new study says no, and offers recommendations for policymakers.
Open Foundation Models (OFMs) offer significant benefits by fostering competition, accelerating innovation, and distributing power more broadly, concludes a study by the Stanford Institute for Human-Centered Artificial Intelligence. The authors examined the social and political implications of OFMs, compared their potential risks with those of closed models, and offered recommendations for policymakers.
The risks of open foundation models examined include disinformation, biorisks, cybersecurity threats, spear phishing, non-consensual intimate images (NCII), and child sexual abuse material (CSAM). The study concludes that there is currently limited evidence that OFMs pose a marginal risk beyond that of closed models or existing technologies.