Professor of computer science at UW and author of 'The Master Algorithm'
One is that artificial general intelligence is still very far away. We’ve made a lot of progress in AI, but there’s far, far more still to go. It’s a very long journey: we’ve come a thousand miles, but there are a million more to go. So a lot of the talk we hear, as if AI, human-level general intelligence, is just around the corner, really reflects a lack of knowledge of the history of AI and of how hard the problem is. We know now that this is a very hard problem. In the beginning, the pioneers underestimated how hard it was, and people who are new to the field still do that. That’s one aspect. The other aspect, which is more subtle but ultimately more important, is that even if AGI were around the corner, there would still be no reason to panic.
We can have AI systems that are as intelligent as humans are, in fact far more, and not have to fear them. People fear AI because when they hear “intelligence” they project onto the machine all these human qualities, like emotions and consciousness and the will to power and whatnot, and they think AI will outcompete us as a species. That ain’t how it works. AI is just a very powerful tool, and as long as we... I can imagine hackers trying to create an evil AI, and we need a cyber police to deal with that. But short of that, think, for example, that you want to use AI to cure cancer, which is, of course, a very real application. We want it to be as intelligent as possible, and the same goes for any other application. So the more intelligent we make the AI, the better off we are, provided that we stay in control.
(2021)
Pedro Domingos