Nick Bostrom
Philosopher; author of 'Superintelligence'; founder of the Future of Humanity Institute
ai (4)
ai-safety (2)
future (2)
ai-alignment (1)
ai-governance (1)
economics (1)
existential-risk (1)
Build artificial general intelligence
Nick Bostrom votes For and says:
"I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we'll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. […] I think ultimately t…" (Unverified source, 2025)
AGI will create abundance
Nick Bostrom votes For and says:
Participate in shaping the future of AI
Nick Bostrom votes For and says:
"Yudkowsky has proposed that a seed AI be given the final goal of carrying out humanity's 'coherent extrapolated volition' (CEV) [...] Yudkowsky sees CEV as a way for the programmers to avoid arrogating to themselves the privilege or burden of determi…" (Unverified source, 2014)
Could AGI quickly lead to superintelligence?
Nick Bostrom votes For and says:
"once we have full AGI, superintelligence might be quite close on the heels of that." (Unverified source, 2025)