Nick Bostrom
Delegate: choose a list of delegates to vote as the majority of them, unless you vote directly.
Tags: ai-ethics (3), ai-governance (3), ai-regulation (3), ai-safety (3), existential-risk (3), ai (2), future (2), policy (2), ai-policy (1), ai-risk (1)
- Nick Bostrom votes For and says: "The greatest existential risks over the coming decades or century arise from certain anticipated technological breakthroughs that we might make, in particular machine superintelligence." Unverified source (2017)
- Nick Bostrom votes For and says: "Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence…" Unverified source (2014)
- Nick Bostrom votes For and says: "There could well be other systems now, or in the relatively near future, that would start to satisfy the criteria." Unverified source (2022)