Comment by Joe Carlsmith

Senior research analyst at Open Philanthropy; writes on AI safety, alignment, and existential risk from power-seeking AI
"[...] we would indeed see significant (read: multi-year) restraint on the development of artificial superintelligence while we improve our understanding of how to ensure its safety. [...] A wiser and more coordinated civilization would likely be employing quite a lot of capability restraint in building advanced AI, especially as we start to approach transformatively powerful systems." (Unverified source, 2026)