Eric Schmidt
Former Google CEO; tech investor
ai (3)
ai-governance (3)
ai-policy (3)
ai-regulation (2)
ai-safety (2)
cern-for-ai (2)
eu (2)
international-relations (2)
research-policy (2)
ai-risk (1)
ethics (1)
existential-risk (1)
public-interest-ai (1)
science-funding (1)
Should a CERN for AI have a central hub in one location?
Eric Schmidt disagrees and says:
I spent a lot of years hoping that the collaboration would occur, and there are many people in our industry who think that the arrival and development of this new intelligence is so important, it should be done in a multinational way. It should be done in the equivalent of CERN, which is the great physics laboratory, which is global in Switzerland. The political tensions and the stress over values is so great. There’s just no scenario. There’s just — I want to say it again, there’s just no scenario where you can do that. (2024, source: verified)
Should a CERN for AI aim to build safe superintelligence?
Eric Schmidt disagrees and says:
I spent a lot of years hoping that the collaboration would occur, and there are many people in our industry who think that the arrival and development of this new intelligence is so important, it should be done in a multinational way. It should be done in the equivalent of CERN, which is the great physics laboratory, which is global in Switzerland. The political tensions and the stress over values is so great. There’s just no scenario. There’s just — I want to say it again, there’s just no scenario where you can do that. (2024, source: verified)
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Eric Schmidt agrees and says:
We are moving well past regulatory understanding, government understanding of what is possible. That’s why, in my view, A.I. is begging for global rules. Think of institutions the world already knows: an IPCC to organize scientific consensus and, for the riskiest, compute‑heavy systems, an IAEA‑style body rooted in the U.N. system to set standards, inspect, and verify. We did this for nuclear technology because the stakes were existential; with frontier A.I., the stakes are at least comparable in their potential for harm if we get it wrong. A U.N.-anchored watchdog could focus on the narrow slice that truly warrants it: the most powerful training runs and deployments. It would not micromanage every app or model. But it would give governments confidence that someone with access and authority is watching the red lines and sounding the alarm when needed, so innovation can continue without sleepwalking into catastrophe. (2023, source: unverified)