Eric Schmidt
Former Google CEO; tech investor
Topics: ai-governance (4), ai-safety (4), ai (3), ai-regulation (3), ai-policy (2), cern-for-ai (2), existential-risk (2), international-relations (2), ai-risk (1), democracy (1), ethics (1), eu (1), public-interest-ai (1), research-policy (1), science-funding (1)
- Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in?
Eric Schmidt strongly disagrees and says:
I’m not in favor of a six-month pause because it will simply benefit China. What I am in favor of is getting everybody together to discuss what are the appropriate guardrails. So, I’m in favor of letting the industry try to get its act together. This is a case where you don’t rush in unless you understand what you’re doing. (2023) source Unverified
- Should we create a global institute for AI, similar to CERN?
Eric Schmidt disagrees and says:
I spent a lot of years hoping that the collaboration would occur, and there are many people in our industry who think that the arrival and development of this new intelligence is so important, it should be done in a multinational way. It should be done in the equivalent of CERN, which is the great physics laboratory, which is global in Switzerland. The political tensions and the stress over values is so great. There’s just no scenario. There’s just — I want to say it again, there’s just no scenario where you can do that. (2024) source Verified
- Should a CERN for AI aim to build safe superintelligence?
Eric Schmidt disagrees and says:
I spent a lot of years hoping that the collaboration would occur, and there are many people in our industry who think that the arrival and development of this new intelligence is so important, it should be done in a multinational way. It should be done in the equivalent of CERN, which is the great physics laboratory, which is global in Switzerland. The political tensions and the stress over values is so great. There’s just no scenario. There’s just — I want to say it again, there’s just no scenario where you can do that. (2024) source Verified
- Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Eric Schmidt agrees and says:
We are moving well past regulatory understanding, government understanding of what is possible. That’s why, in my view, A.I. is begging for global rules. Think of institutions the world already knows: an IPCC to organize scientific consensus and, for the riskiest, compute‑heavy systems, an IAEA‑style body rooted in the U.N. system to set standards, inspect, and verify. We did this for nuclear technology because the stakes were existential; with frontier A.I., the stakes are at least comparable in their potential for harm if we get it wrong. A U.N.-anchored watchdog could focus on the narrow slice that truly warrants it: the most powerful training runs and deployments. It would not micromanage every app or model. But it would give governments confidence that someone with access and authority is watching the red lines and sounding the alarm when needed, so innovation can continue without sleepwalking into catastrophe. (2023) source Unverified