Tags: agi (2), ai-policy (2), ai-safety (2), existential-risk (2), ai (1), ai-ethics (1), ai-governance (1), ai-regulation (1), ai-risk (1), future (1), policy (1)
Roman V. Yampolskiy votes For and says:
"The condition would be not time, but capabilities. Pause until you can do X, Y, Z. And if I’m right and you cannot, it’s impossible, then it becomes a permanent ban. But if you’re right, and it’s possible, so as soon as you have those safety capabili…" (Unverified source, 2024)
Roman V. Yampolskiy abstains and says:
"Until some company or scientist says ‘Here’s the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,’ I don’t think we should be developing those general superintelligences. We can get most of the benefits w…" (Unverified source, 2024)