Yann LeCun
Computer scientist, AI researcher
ai (5)
ai-safety (5)
existential-risk (5)
ai-governance (4)
ai-regulation (4)
ai-ethics (3)
ai-policy (3)
ai-risk (3)
ai-deployment (1)
cern-for-ai (1)
democracy (1)
ethics (1)
eu (1)
future (1)
international-relations (1)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Yann LeCun strongly disagrees and says:
I see the danger of this concentration of power through proprietary AI systems as a much bigger danger than everything else. What works against this is people who think that for reasons of security, we should keep AI systems under lock and key because it’s too dangerous to put it in the hands of everybody. That would lead to a very bad future in which all of our information diet is controlled by a small number of companies with proprietary systems. (2024) source Verified
- Does AI pose an existential threat to humanity?
Yann LeCun strongly disagrees and says:
The point is, AI systems, as smart as they might be, will be subservient to us. We set their goals, and they don't have any intrinsic goal that we would build into them to dominate. It would be really stupid to build that. It would also be useless. Nobody would buy it anyway. (2024) source Unverified
- Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in?
Yann LeCun strongly disagrees and says:
My first reaction to [the Pause Giant AI Experiments letter] is that calling for a delay in research and development smacks me of a new wave of obscurantism. (2023) source Unverified
- Should humanity build artificial general intelligence?
Yann LeCun agrees and says:
I don't like the phrase AGI. I prefer human-level intelligence because human intelligence is not general. Internally, we call this AMI, advanced machine intelligence. We have a pretty good plan on how to get there. First, we are building systems that understand the physical world, which learn by watching videos. Second, we need LLMs to have persistent memory. Humans have a special structure in the brain that stores our working memory, our long-term memory, factual, episodic memory. We don't have that in LLMs. And the third most important thing is the ability to plan and reason. (2024) source Unverified
- Should a CERN for AI aim to build safe superintelligence?
Yann LeCun disagrees and says:
AI could theoretically replace humans, but it is unlikely due to societal resistance. Humans would remain in control, effectively becoming the 'boss' of superintelligent AI systems. [...] [LeCun] downplayed fears of a doomsday scenario caused by AI, labeling them as sci-fi clichés, and argued that current AI advancements are not close to achieving superintelligence. He suggested that to mitigate misuse and unreliability in AI, the focus should be on creating better AI systems with common sense and reasoning capabilities. (2025) source Unverified