Geoffrey Hinton

Godfather of Deep Learning
  • Should we ban future open-source AI models that can be used to create weapons of mass destruction?
    Geoffrey Hinton strongly agrees and says:
    Let's open source nuclear weapons too to make them safer. The good guys (us) will always have bigger ones than the bad guys (them) so it should all be OK. (2023) [source: Verified]
  • Should big AI companies spend a third of their compute resources on AI safety?
    Geoffrey Hinton strongly agrees and says:
    It’s research but only the big companies have the resources to do this research because it’s research on the large cutting edge models. My belief is that government’s the only people who are powerful enough to deal with these large companies and even they may not be. My belief is the government ought to mandate that they spend a certain fraction of their computing resources on safety research. Now it would be great if that happened. [...] I find it very hard to keep up with what’s happening. There’s new models coming out every day and there’s new techniques being invented every day because there’s a very large number of very smart people working on it now. I find that scary. So it will be hard to regulate. But if you say something like spend a third of your computing resources on AI safety research, that’s sort of more generic and easier to do. (2025) [source: Unverified]
  • Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
    Geoffrey Hinton strongly disagrees and says:
    I’m an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is [whether] regulators should insist that you can explain how your AI system works. I think that would be a complete disaster. People can’t explain how they work, for most of the things they do. When you hire somebody, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People have no idea how they do that. If you ask them to explain their decision, you are forcing them to make up a story. Neural nets have a similar problem. When you train a neural net, it will learn a billion numbers that represent the knowledge it has extracted from the training data. If you put in an image, out comes the right decision, say, whether this was a pedestrian or not. But if you ask “Why did it think that?” well if there were any simple rules for deciding whether an image contains a pedestrian or not, it would have been a solved problem ages ago. (2017) [source: Unverified]
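
Aside (an editor's illustration, not from the source): the "billion numbers" point in the last quote shows up even at toy scale. Below is a minimal sketch, assuming plain NumPy and the classic XOR task, that trains a tiny two-layer network. The learned weights solve the task, yet inspecting them yields only a grid of numbers, nothing resembling a human-readable rule.

import numpy as np

# Toy illustration (assumptions: XOR task, NumPy, 4 hidden units;
# this is not Hinton's example or code).
rng = np.random.default_rng(0)

# XOR: four inputs, no single linear rule separates the classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error through both layers.
    dp = (p - y) * p * (1.0 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1.0 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Gradient descent step.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("predictions:", p.ravel().round(3))  # should approach [0, 1, 1, 0]
print("learned W1:\n", W1.round(2))        # the "explanation": opaque numbers

The network's behavior is correct, but its "reason" is distributed across every weight; scale the same opacity up to billions of parameters and you have the explanation problem the quote describes.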