Yann LeCun

Info
Computer scientist, AI researcher
X: @ylecun · Wikipedia
Location: United States
  • Should we ban future open-source AI models that can be used to create weapons of mass destruction?
    Yann LeCun strongly disagrees and says:
    I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. What works against this is people who think that for reasons of security, we should keep AI systems under lock and key because it's too dangerous to put it in the hands of everybody. That would lead to a very bad future in which all of our information diet is controlled by a small number of companies whose proprietary systems. (2024) source Verified
  • Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
    Yann LeCun strongly disagrees and says:
    Calls for a global A.I. regulator modelled on the IAEA are misguided. Nuclear technology is a narrow, slow‑moving domain with obvious materials to track and a small set of state actors; A.I. is a broad, fast‑moving field with millions of researchers and developers worldwide. A U.N.-led, IAEA‑style body that ‘oversees’ compute‑intensive A.I. would be unworkable in practice and harmful in principle: it would freeze progress, entrench incumbents, and starve open research — all while failing to stop bad actors who won’t participate. What we need instead is open science, open models, and targeted rules for concrete harms. Safety and robustness should be advanced by more eyes on the code and more researchers able to test and improve systems — not by a centralized global authority trying to police computation itself. (2023) source Unverified
  • Does AI pose an existential threat to humanity?
    Yann LeCun strongly disagrees and says:
    The point is, AI systems, as smart as they might be, will be subservient to us. We set their goals, and they don't have any intrinsic goal that we would build into them to dominate. It would be really stupid to build that. It would also be useless. Nobody would buy it anyway. (2024) source Unverified
  • Should humanity build artificial general intelligence?
    I don't like the phrase AGI. I prefer human-level intelligence, because human intelligence is not general. Internally, we call this AMI (advanced machine intelligence). We have a pretty good plan on how to get there. First, we are building systems that understand the physical world, which learn by watching videos. Second, we need LLMs to have persistent memory. Humans have a special structure in the brain that stores our working memory, our long-term memory, factual, episodic memory. We don't have that in LLMs. And the third most important thing is the ability to plan and reason. (2024) source Unverified