ai-risk (5)
ai-safety (5)
ai-governance (4)
existential-risk (4)
ai (3)
ai-policy (3)
ai-regulation (3)
ethics (3)
policy (3)
agi (2)
open-source (2)
regulations (2)
cern-for-ai (1)
future (1)
nuclear (1)
- Yann LeCun votes Against and says:
- Yann LeCun votes For and says: "Current LLMs are intrinsically unsafe, but with world models, guardrail objectives can be implemented — so by construction, they will not knowingly produce actions that will produce dangerous results." Unverified source (2026)
- Yann LeCun votes Against and says: "We know for a fact that open-source software platforms are both more powerful and more secure than the closed-source versions." Unverified source (2024)
- Yann LeCun votes Against and says: "There is no such thing as an intelligence explosion. There is no reason AI should become in control just because it is more capable." Unverified source (2025)
- Yann LeCun votes Against and says: "AI could theoretically replace humans, but it is unlikely due to societal resistance. Humans would remain in control, effectively becoming the 'boss' of superintelligent AI systems. [...] He downplayed fears of a doomsday scenario caused by AI, label..." Unverified source (2025)