Nick Bostrom
ai (6)
ai-safety (5)
ai-governance (4)
ai-regulation (4)
existential-risk (4)
future (4)
ai-ethics (2)
ai-risk (2)
ethics (2)
policy (2)
research-policy (2)
ai-policy (1)
cern-for-ai (1)
economics (1)
public-interest-ai (1)
- Nick Bostrom votes For and says:
  the greatest existential risks over the coming decades or century arise from certain anticipated technological breakthroughs that we might make, in particular machine superintelligence. Unverified source (2017)
- Nick Bostrom votes For and says:
  I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we’ll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. […] I think ultimately t… Unverified source (2025)
- Nick Bostrom votes For.
- Nick Bostrom votes For and says:
  Yudkowsky has proposed that a seed AI be given the final goal of carrying out humanity’s “coherent extrapolated volition” (CEV) [...] Yudkowsky sees CEV as a way for the programmers to avoid arrogating to themselves the privilege or burden of determi… Unverified source (2014)
- Nick Bostrom votes For and says:
  The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such su… Unverified source (2003)
- Nick Bostrom votes For and says:
  once we have full AGI, superintelligence might be quite close on the heels of that. Unverified source (2025)
- Nick Bostrom votes For and says:
  Superintelligence could become extremely powerful and be able to shape the future according to its preferences. If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintel… Unverified source (2014)