Geoffrey Hinton
Godfather of Deep Learning
Tags: ai (4), ai-risk (4), ai-safety (4), ai-ethics (3), ai-governance (3), ai-policy (3), ai-regulation (2), existential-risk (2), public-interest-ai (2), ai-deployment (1), transparency (1), trust-in-ai (1)
- Should big AI companies spend a third of their compute resources on AI safety?
Geoffrey Hinton strongly agrees and says:
"The government should insist that the big companies do lots of safety experiments, spend considerable resources like a third of their compute resources on doing safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety like Ilya Sutskever wanted significant resources to be spent on safety. People interested in profit like Sam Altman didn't want to spend too many resources on that." (source: unverified)
- Does AI pose an existential threat to humanity?
Geoffrey Hinton strongly agrees and says:
"I used to think it was a long way off, but I now think it's serious and fairly close [...] The alarm bell I'm ringing has to do with the existential threat of them taking control." (source: unverified)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
- Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Geoffrey Hinton strongly disagrees and says:
"I'm an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that's relevant is [whether] regulators should insist that you can explain how your AI system works. I think that would be a complete disaster. People can't explain how they work, for most of the things they do. When you hire somebody, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People have no idea how they do that. If you ask them to explain their decision, you are forcing them to make up a story. Neural nets have a similar problem. When you train a neural net, it will learn a billion numbers that represent the knowledge it has extracted from the training data. If you put in an image, out comes the right decision, say, whether this was a pedestrian or not. But if you ask 'Why did it think that?' well, if there were any simple rules for deciding whether an image contains a pedestrian or not, it would have been a solved problem ages ago." (2017) (source: unverified)