Delegate
Choose a list of delegates; your vote follows their majority unless you vote directly.
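The delegation rule above can be sketched in a few lines. This is a minimal illustration, not the platform's actual implementation; the function name `resolve_vote` and the strict-majority tiebreak are assumptions.

```python
from collections import Counter

def resolve_vote(direct_vote, delegate_votes):
    """Return the user's effective vote (hypothetical sketch).

    direct_vote: "For", "Against", or None if the user did not vote directly.
    delegate_votes: list of the delegates' votes ("For"/"Against").
    """
    if direct_vote is not None:
        return direct_vote  # a direct vote always overrides delegation
    if not delegate_votes:
        return None  # no delegates and no direct vote: abstain
    tally = Counter(delegate_votes)
    winner, count = tally.most_common(1)[0]
    # Assumed tiebreak: require a strict majority, otherwise abstain.
    if count * 2 > len(delegate_votes):
        return winner
    return None
```

For example, a user with delegates voting For, For, Against inherits a For vote, while an even split yields no vote.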
ai-ethics (4)
ai-governance (3)
ai-policy (3)
ai-regulation (3)
ai-safety (3)
policy (3)
agi (2)
ai (2)
existential-risk (2)
future (2)
research-policy (2)
international-relations (1)
regulations (1)
- Eliezer Yudkowsky votes For and says: "Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. [...] preventing AI extinction scenarios is considered a priority above preventing a [...] nuclear exchange." (Unverified source, 2023)
- Eliezer Yudkowsky votes Against and says: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in 'maybe...'" (Unverified source, 2023)