Geoffrey Hinton
Godfather of Deep Learning
Tags: ai (5), ai-safety (5), ai-governance (4), ai-risk (4), ai-ethics (3), ai-policy (3), ai-regulation (3), existential-risk (3), public-interest-ai (2), ai-deployment (1), democracy (1), transparency (1), trust-in-ai (1)
- Does AI pose an existential threat to humanity?
  Geoffrey Hinton strongly agrees and says:
  I used to think it was a long way off, but I now think it's serious and fairly close [...] The alarm bell I’m ringing has to do with the existential threat of them taking control. (source, unverified)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
- Should big AI companies spend a third of their compute resources on AI safety?
  Geoffrey Hinton strongly agrees and says:
  It’s research but only the big companies have the resources to do this research because it’s research on the large cutting edge models. My belief is that government’s the only people who are powerful enough to deal with these large companies and even they may not be. My belief is the government ought to mandate that they spend a certain fraction of their computing resources on safety research. Now it would be great if that happened. [...] I find it very hard to keep up with what’s happening. There’s new models coming out every day and there’s new techniques being invented every day because there’s a very large number of very smart people working on it now. I find that scary. So it will be hard to regulate. But if you say something like spend a third of your computing resources on AI safety research, that’s sort of more generic and easier to do. (2025) (source, unverified)
- Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
  Geoffrey Hinton strongly disagrees and says:
  I’m an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is [whether] regulators should insist that you can explain how your AI system works. I think that would be a complete disaster. People can’t explain how they work, for most of the things they do. When you hire somebody, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People have no idea how they do that. If you ask them to explain their decision, you are forcing them to make up a story. Neural nets have a similar problem. When you train a neural net, it will learn a billion numbers that represent the knowledge it has extracted from the training data. If you put in an image, out comes the right decision, say, whether this was a pedestrian or not. But if you ask “Why did it think that?” well if there were any simple rules for deciding whether an image contains a pedestrian or not, it would have been a solved problem ages ago. (2017) (source, unverified; see the sketch after this list)
- Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in?
  Geoffrey Hinton agrees and says:
  I don’t think they should scale this up more until they have understood whether they can control it. We’ve entered completely unknown territory. We’re capable of building machines that are stronger than ourselves, but what if we develop machines that are smarter than us? We have no experience dealing with these things. The best I can recommend is that many very smart people try to figure out how to contain the dangers of these things. There’s no use waiting for the AI to outsmart us; we must control it as it develops. If that control and the broad societal confidence in it aren’t there yet, then the responsible course is not to push ahead to ever larger systems. (2023) (source, unverified)
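
The 2017 interpretability quote above rests on a technical point: a trained neural network's knowledge is just a large collection of learned numbers, with no accompanying rule that could be read off as an explanation. Below is a minimal illustrative sketch, not taken from the source; the architecture, sizes, and the pedestrian/not-pedestrian framing are hypothetical stand-ins. It shows that everything such a classifier "knows" lives in its parameter tensors, and that a forward pass returns a decision with no explanation attached.

```python
# Minimal illustrative sketch (not from the source): a tiny image classifier
# whose "knowledge" is nothing but arrays of learned numbers. The architecture,
# sizes, and the pedestrian/not-pedestrian framing are hypothetical stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learned filters: just numbers
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # two outputs: pedestrian / not
)

# Everything the network "knows" after training lives in these tensors.
n_params = sum(p.numel() for p in model.parameters())
print(f"learned parameters: {n_params}")         # a pile of numbers, no rules

# A forward pass yields a decision, but no explanation comes with it.
image = torch.randn(1, 3, 64, 64)                # stand-in for an input image
decision = model(image).argmax(dim=1)            # 0 or 1, with no "why"
print("pedestrian" if decision.item() == 1 else "not a pedestrian")
```

The systems Hinton refers to have on the order of a billion such parameters rather than the few hundred here, which is the scale at which, in his view, mandating a human-readable explanation for each decision becomes unworkable.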