Comment by Yann LeCun

Current LLMs are intrinsically unsafe, but with world models, guardrail objectives can be implemented so that, by construction, the systems will not knowingly take actions whose predicted outcomes are dangerous. Unverified source (2026)
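The claim above can be illustrated with a toy sketch: an agent uses a world model to predict the outcome of each candidate action, discards any action whose predicted outcome violates a guardrail objective, and only then picks the best remaining action by task cost. Every name here (`predict_next_state`, `guardrail_cost`, `choose_action`, the threshold of 10.0) is a hypothetical illustration of the idea, not any real system or API.

```python
# Toy sketch of "guardrail objectives" enforced through a world model.
# All functions and thresholds are hypothetical illustrations.

def predict_next_state(state, action):
    # Toy world model: state is a number, an action shifts it.
    return state + action

def guardrail_cost(state):
    # Hypothetical guardrail: states above 10.0 count as "dangerous".
    return max(0.0, state - 10.0)

def task_cost(state, goal):
    # Distance to the goal state.
    return abs(goal - state)

def choose_action(state, goal, candidate_actions, guardrail_limit=0.0):
    # By construction, any action whose *predicted* outcome breaches the
    # guardrail is filtered out before task cost is even compared.
    safe = [a for a in candidate_actions
            if guardrail_cost(predict_next_state(state, a)) <= guardrail_limit]
    if not safe:
        return None  # no admissible action: refuse rather than act
    return min(safe, key=lambda a: task_cost(predict_next_state(state, a), goal))

print(choose_action(8, 20, [-1, 1, 2, 3]))  # -> 2; action 3 would breach the guardrail
```

The point of the construction is that safety does not depend on the agent's task objective being well behaved: unsafe actions are pruned by the guardrail filter regardless of how attractive their task cost is, and if nothing admissible remains the agent returns no action at all.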