Comment by Yann LeCun
Computer scientist, AI researcher
Current LLMs are intrinsically unsafe, but with world models, guardrail objectives can be implemented — so by construction, they will not knowingly produce actions that will produce dangerous results.
Unverified source (2026)
Policy proposals and claims