Ben Goertzel
SingularityNET founder; AGI researcher
ai (2)
ai-governance (2)
ai-policy (2)
policy (2)
ai-ethics (1)
ai-regulation (1)
ai-risk (1)
ai-safety (1)
economics (1)
ethics (1)
existential-risk (1)
future (1)
innovation-policy (1)
law (1)
regulations (1)
AGI will create abundance
Ben Goertzel votes For and says:
Situated in this specific species, place, and time, I care a lot about the condition of all of us humans, and so I would like to not only create a powerful general intelligence, but create one which is [...] going to be beneficial to humans and other life forms on the planet, even while in some ways going beyond everything that we are. There are so many virtuous cycles among these different technologies, the more you advance in any of them, the more you're going to advance in all of them. And it's the coming together of all of these that's going to create, you know, radical abundance and the technological [...] Singularity. [...] (2018) (source unverified)
Ban superintelligence development until safety consensus is reached
Ben Goertzel votes Against and says:
I don’t think Nick Bostrom or anyone else is going to stop the human race from developing advanced AI because it’s a source of tremendous intellectual curiosity but also of tremendous economic advantage. So if let’s say President Trump decided to ban artificial intelligence research – I don’t think he’s going to but suppose he did. China will keep doing artificial intelligence research. If U.S. and China ban it, you know, Africa will do it. Everywhere around the world has AI textbooks and computers. And everyone now knows you can make people’s lives better and make money from developing more advanced AI. So there’s no possibility in practice to halt AI development. What we can do is try to direct it in the most beneficial direction according to our best judgment. (source unverified)