Comment by Vitalik Buterin

Humans and LLMs fail in distinct ways, and so requiring human + LLM 2-of-2 confirmation to take risky actions (and allowing human override only with much more friction and/or time delay) is much safer than fully relying on either one alone.
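The 2-of-2 gate with a delayed human override can be sketched in a few lines. This is a minimal illustrative sketch, not anything proposed in the post itself; all names here (`RiskyActionGate`, `OVERRIDE_DELAY_S`) are hypothetical, and the 24-hour delay is an assumed example value.

```python
import time

# Assumed example: a human-only override must wait 24 hours.
OVERRIDE_DELAY_S = 24 * 3600

class RiskyActionGate:
    """Hypothetical gate for risky actions: 2-of-2 (human AND LLM),
    with a human-only override that only unlocks after a time delay."""

    def __init__(self, now=time.time):
        self.now = now  # injectable clock, for testing
        self.override_requested_at = None

    def approve(self, human_ok: bool, llm_ok: bool) -> bool:
        # Fast path: both the human and the LLM must agree.
        return human_ok and llm_ok

    def request_override(self):
        # Human-only path: start the friction/time-delay clock.
        self.override_requested_at = self.now()

    def override_ready(self) -> bool:
        # The override succeeds only after the delay has elapsed.
        return (self.override_requested_at is not None
                and self.now() - self.override_requested_at >= OVERRIDE_DELAY_S)
```

Because the two approvers fail in distinct ways, the fast path requires both, while the slow override path preserves ultimate human authority at the cost of deliberate friction.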