Comment by Ben Goertzel
SingularityNET founder; AGI researcher
My own view is that once you get to human-level AGI, within a few years you could get a radically superhuman AGI, unless the AGI throttles its own development out of its own conservatism. I think once an AGI can introspect on its own mind, it can do engineering and science at a human or superhuman level.
It should then be able to build a smarter AGI, then an even smarter one, leading to an intelligence explosion.
Unverified source (2024)