Comment by Samuel Buteau

AI researcher and PhD candidate at Mila (Quebec AI Institute).
Barring an international agreement, humanity will quite likely lack the ability to build safe superintelligence by the time the first superintelligence is built. Pursuing superintelligence at this stage is therefore likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely.
Verified source (2025)

Verification History

Verified · Migrated from legacy verification · Hector Perez Arenas · 6mo ago