Artificial intelligence is advancing so rapidly that many who have been part of its development are now among the most vocal about the need to regulate it. While AI will bring many benefits, it is also potentially dangerous; it could be used to create cyberweapons or bioweapons or to launch massive disinformation attacks. And if an AI model is stolen or leaked even once, it could be impossible to prevent it from spreading throughout the world. These concerns are not hypothetical; such a leak has already occurred. In March, an AI model developed by Meta called LLaMA appeared online. LLaMA was not intended to be publicly accessible, but Meta shared the model with AI researchers who requested access to further their own projects. At least two of them abused Meta's trust and released the model online, and Meta has been unable to remove LLaMA from the internet. The model can still be accessed by anyone. (2023)