Roman V. Yampolskiy
AI safety researcher and professor at the University of Louisville
ai (3)
ai-governance (3)
ai-regulation (3)
ai-safety (3)
ai-ethics (2)
ai-policy (2)
ai-risk (2)
existential-risk (2)
democracy (1)
future (1)
transparency (1)
trust-in-ai (1)
Should humanity build artificial general intelligence?
Roman V. Yampolskiy strongly disagrees and says:
We haven’t lost until we have lost. We still have a great chance to do it right, and we can have a great future. We can use narrow AI tools to cure aging, an important problem, and I think we are close on that front. Free labor, physical and cognitive, will give us a lot of economic wealth to do better in many areas of society where we are struggling today. People should try to understand the unpredictable consequences and existential risks of bringing AGI or superintelligent AI into the real world. Eight billion people are part of this experiment they never consented to; and it is not just that they have not consented, they cannot give meaningful consent, because nobody understands what they are consenting to. It’s not explainable, it’s not predictable, so by definition it’s an unethical experiment on all of us. So we should put pressure on the people who are irresponsibly moving too quickly on AI capabilities development to slow down, to stop, to look in the other direction, and to allow us to develop only AI systems we will not regret creating. (source unverified)
Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in?
Roman V. Yampolskiy strongly agrees and says:
The condition would be not time, but capabilities: pause until you can do X, Y, Z. And if I’m right and you cannot, because it’s impossible, then it becomes a permanent ban. But if you’re right and it is possible, then as soon as you have those safety capabilities, go ahead. If we create general superintelligences, I don’t see a good outcome long-term for humanity. So there is X‑risk, existential risk, where everyone is dead. There is S‑risk, suffering risk, where everyone wishes they were dead. [...] It’s not obvious what you have to contribute to a world where superintelligence exists. (2024) (source unverified)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Roman V. Yampolskiy strongly disagrees and says:
Advanced AIs would not be able to accurately explain some of their decisions. (2019) (source unverified)