Roman V. Yampolskiy
Tags: ai-safety (3), existential-risk (3), ai (2), ai-governance (2), ai-regulation (2), ai-ethics (1), ai-risk (1), future (1), policy (1), research-policy (1)
Roman V. Yampolskiy votes For and says:
"The condition would be not time, but capabilities. Pause until you can do X, Y, Z. And if I'm right and you cannot, it's impossible, then it becomes a permanent ban. But if you're right, and it's possible, so as soon as you have those safety capabili..." — Unverified source (2024)
Roman V. Yampolskiy votes Against and says:
"We haven't lost until we have lost. We still have a great chance to do it right and we can have a great future. We can use narrow AI tools to cure aging, an important problem and I think we are close on that front. Free labor, physical and cognitive,..." — Unverified source
Roman V. Yampolskiy abstains and says:
"Until some company or scientist says 'Here's the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,' I don't think we should be developing those general superintelligences. We can get most of the benefits w..." — Unverified source (2024)