AI safety researcher, professor at the University of Louisville
The condition would be not time, but capabilities: pause until you can do X, Y, Z. And if I’m right and you cannot, because it’s impossible, then it becomes a permanent ban. But if you’re right and it is possible, then as soon as you have those safety capabilities, go ahead.
If we create general superintelligences, I don’t see a good outcome long-term for humanity. So there is X‑risk, existential risk, where everyone’s dead. There is S‑risk, suffering risk, where everyone wishes they were dead. [...] It’s not obvious what you have to contribute to a world where superintelligence exists.
(2024)
source