Comment by Liron Shapira

Tech investor and AI risk commentator; host of Doom Debates podcast
The real extinction-level AI safety challenge, the reason we're nowhere close to surviving superintelligence, is something AI companies decided they won't mention anymore, because it exposes their AI safety efforts as a shockingly inadequate facade. [...] It's an excellent facade. It lets researchers, executives, and policymakers feel relieved that the 'AI safety plan' box is checked. It just doesn't do anything when the superintelligence comes.
Unverified source (2026)
Policy proposals and claims