Samuel Buteau, AI researcher and PhD candidate at Mila (Quebec AI Institute):
Barring an international agreement, humanity will quite likely not have the ability to build safe superintelligence by the time the first superintelligence is built. Therefore, pursuing superintelligence at this stage is quite likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely.
(2025)