Comment by Ben Goertzel

SingularityNET founder; AGI researcher
My own view is that once you get to human-level AGI, within a few years you could get a radically superhuman AGI, unless the AGI throttles its own development out of conservatism. I think once an AGI can introspect its own mind, it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, and then an intelligence explosion.