Demis Hassabis
Nobel laureate, AI researcher, and CEO of DeepMind
- Should we create a global institute for AI, similar to CERN?
- Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Demis Hassabis strongly agrees and says:
Then what I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally. And then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things. (2023) source Unverified
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Demis Hassabis strongly agrees and says:
endowing rogue nations or terrorists with tools to synthesize a deadly virus. [...] keep the “weights” of the most powerful models out of the public’s hands. (2025) source Unverified
- Should a CERN for AI aim to build safe superintelligence?
Demis Hassabis strongly agrees and says:
I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research-focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible. You would also have to pair it with a kind of an institute like IAEA, to monitor unsafe projects. So a kind of like UN umbrella, something that is fit for purpose for that, a technical UN. (2025) source Unverified
- Should member states have majority governance control in a CERN for AI?
Demis Hassabis agrees and says:
We must take the risks of AI as seriously as other major global challenges, like climate change. It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI. I think we have to start with something like the IPCC, where it’s a scientific and research agreement with reports, and then build up from there. Then what I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally. And then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things. (2023) source Unverified
- Should humanity build artificial general intelligence?
Demis Hassabis agrees and says:
Yeah, I think those systems would be right on the boundary. So I think most emergent systems, cellular automata, things like that could be model-able by a classical system. You just sort of do a forward simulation of it and it’d probably be efficient enough. Of course there’s the question of things like chaotic systems where the initial conditions really matter and then you get to some uncorrelated end state. Now those could be difficult to model. So I think these are kind of the open questions, but I think when you step back and look at what we’ve done with the systems and the problems that we’ve solved, and then you look at things like Veo 3 on video generation sort of rendering physics and lighting and things like that, really core fundamental things in physics, it’s pretty interesting. I think it’s telling us something quite fundamental about how the universe is structured in my opinion. So in a way that’s what I want to build AGI for is to help us as scientists answer these questions like P equals NP. source Unverified