America’s footprint in artificial intelligence is prodigious, and it is hard to overstate how consequential this will be for the American national interest if the technology develops with the right balance between innovation and guardrails. This new technology has split into two divergent directions on the basic structure of innovation: open-source and company-controlled. ChatGPT exemplifies the latter model, developed and licensed by OpenAI; Meta’s LLaMa is an example of open-source AI.[1] In this paper, we explore the benefits and drawbacks of open-source AI and conclude that open-source can help balance the safety and security we want from AI with the innovation necessary to set the standard for the world. Both models have merits for innovation, safety, and competition. [...]

Just as open-source encryption algorithms are a public good, so too are open-source AI models like LLaMa. However, as with cryptography, democratizing knowledge entails democratizing it for "bad guys," too. Senators[19] and experts[20] have raised this concern regarding LLaMa, although actual reports of LLaMa-enabled abuse can be counted on one hand. But it is true: LLaMa is a possible tool in the toolkit of criminals, hackers, propagandists, and foreign spies who may have use for it, if not now then certainly in the future. This problem is not unique to open-source models. Iran used closed-source ChatGPT for its election interference operations. (OpenAI and Microsoft thwarted the campaign, which received little traction.)[21] China’s extensive experience hacking and spying to steal intellectual property[22] will also make it practically impossible to keep closed-source AI models secure from exfiltration indefinitely, especially given the AI replication technique known as "distillation."[23]