Geoffrey Hinton
Godfather of Deep Learning
ai (3)
ai-safety (3)
existential-risk (3)
ai-ethics (2)
ai-governance (2)
ai-regulation (2)
ai-risk (2)
ai-deployment (1)
ai-policy (1)
democracy (1)
public-interest-ai (1)
- Does AI pose an existential threat to humanity?
  Geoffrey Hinton strongly agrees and says:
  I used to think it was a long way off, but I now think it's serious and fairly close [...] The alarm bell I'm ringing has to do with the existential threat of them taking control. (source unverified)

- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
- Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?
  Geoffrey Hinton agrees and says:
  I don't think they should scale this up more until they have understood whether they can control it. We've entered completely unknown territory. We're capable of building machines that are stronger than ourselves, but what if we develop machines that are smarter than us? We have no experience dealing with these things. The best I can recommend is that many very smart people try to figure out how to contain the dangers of these things. There's no use waiting for the AI to outsmart us; we must control it as it develops. If that control and the broad societal confidence in it aren't there yet, then the responsible course is not to push ahead to ever larger systems. (2023, source unverified)