Yann LeCun
Computer scientist, AI researcher
ai-safety (7)
ai-governance (6)
ai (3)
ai-regulation (2)
existential-risk (2)
policy (2)
ai-alignment (1)
ai-risk (1)
cern-for-ai (1)
future (1)
international-relations (1)
- Ban open source AI models capable of creating WMDs
  Yann LeCun votes Against.
- Participate in shaping the future of AI
  Yann LeCun votes For and says:
  "Human feedback for open source LLMs needs to be crowd-sourced, Wikipedia style. It is the only way for LLMs to become the repository of all human knowledge and cultures." (Unverified source)
- Build artificial general intelligence
  Yann LeCun votes For and says:
  "I don't like the phrase AGI. I prefer human-level intelligence because human intelligence is not general. Internally, we call this AMI, advanced machine intelligence. We have a pretty good plan on how to get there. First, we are building systems that..." (Unverified source, 2024)
- Establish a UN-led body to oversee compute-intensive AI
  Yann LeCun votes Against and says:
  "Calls for a global A.I. regulator modelled on the IAEA are misguided. Nuclear technology is a narrow, slow-moving domain with obvious materials to track and a small set of state actors; A.I. is a broad, fast-moving field with millions of researchers..." (Unverified source, 2023)
- AI poses an existential threat to humanity
  Yann LeCun votes Against and says:
  "The point is, AI systems, as smart as they might be, will be subservient to us. We set their goals, and they don't have any intrinsic goal that we would build into them to dominate. It would be really stupid to build that. It would also be useless..." (Unverified source, 2024)
- Open-source AI is more dangerous than closed-source AI
  Yann LeCun votes Against and says:
  "We know for a fact that open-source software platforms are both more powerful and more secure than the closed-source versions." (Unverified source, 2024)
- Ban superintelligence development until safety consensus is reached
  Yann LeCun votes Against and says:
  "My first reaction to [the Pause Giant AI Experiments letter] is that calling for a delay in research and development smacks me of a new wave of obscurantism." (Unverified source, 2023)
- Could AGI quickly lead to superintelligence?
  Yann LeCun votes Against and says:
  "There is no such thing as an intelligence explosion. There is no reason AI should become in control just because it is more capable." (Unverified source, 2025)
- Mandate the CERN for AI to build safe superintelligence
  Yann LeCun votes Against and says:
  "AI could theoretically replace humans, but it is unlikely due to societal resistance. Humans would remain in control, effectively becoming the 'boss' of superintelligent AI systems. [...] He downplayed fears of a doomsday scenario caused by AI, label..." (Unverified source, 2025)