Samuel Buteau, AI researcher and PhD candidate at Mila (Quebec AI Institute).
Barring an international agreement, humanity will quite likely not have the ability to build safe superintelligence by the time the first superintelligence is built. Therefore, pursuing superintelligence at this stage is quite likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely. (2025)