Delegate
Choose a list of delegates; your vote follows their majority unless you vote directly.
ai-ethics (4)
ai (3)
ai-regulation (3)
existential-risk (3)
agi (2)
ai-governance (2)
ai-policy (2)
ai-safety (2)
future (2)
policy (2)
ai-risk (1)
research-policy (1)
Nick Bostrom votes For and says:
the greatest existential risks over the coming decades or century arise from certain anticipated technological breakthroughs that we might make, in particular machine superintelligence. Unverified source (2017)
Nick Bostrom votes For and says:
I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we’ll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. […] I think ultimately t… Unverified source (2025)
Nick Bostrom votes For and says:
Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintel… Unverified source (2014)
Nick Bostrom votes For and says:
there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria Unverified source (2022)