Eliezer Yudkowsky
AI researcher and writer
democracy (2)
ai (1)
ai-governance (1)
ai-regulation (1)
ai-safety (1)
cybersecurity (1)
digital-democracy (1)
existential-risk (1)
transparency (1)
voting-systems (1)
- Should we use electronic voting machines?
- Should humanity ban the development of superintelligence until there is strong public buy-in and broad scientific consensus that it will be done safely and controllably?
Eliezer Yudkowsky strongly agrees and says:
The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. [...] If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike. (2023)