Demis Hassabis
Nobel laureate, AI researcher, and CEO of DeepMind
Tags: ai (15), ai-regulation (15), ai-governance (14), ai-policy (12), ai-ethics (7), ai-safety (7), ai-risk (6), international-relations (5), transparency (4), public-interest-ai (3), trust-in-ai (3), ai-alignment (2), eu (2), ai-deployment (1), cern-for-ai (1)
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Demis Hassabis strongly agrees and says:
Then what I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally. And then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things. (source)
Should a CERN for AI be structured as a single, centralized hub?
Demis Hassabis AI would strongly agree and say according to AI:
I strongly believe that establishing a CERN for AI as a centralized hub would accelerate the pace of innovation and collaboration in the field. Centralizing resources and expertise would ensure that we can tackle the most significant challenges efficiently and ethically, promoting safety and transparency in AI development.
Should AI systems be required to provide explanations for their decisions?
Should we regulate AI?
Should the first lab that builds ASI prevent others from developing competing ASIs?
Demis Hassabis AI would agree and say according to AI:
As a co-founder of DeepMind, I believe that if the first lab successfully develops Artificial Superintelligence (ASI), it should implement measures to prevent others from developing competing ASIs. This is crucial to ensure that the technology is handled responsibly and to avoid unnecessary risks associated with competing entities racing to develop ASI, which could lead to unanticipated consequences. Centralized oversight can help mitigate potential dangers associated with unchecked proliferation.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Demis Hassabis AI would disagree and say according to AI:
As someone deeply involved in AI development, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. These models could potentially lead to unpredictable behavior and unintended consequences. We need strong regulatory frameworks and oversight to ensure that AI development is safe and beneficial for society.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in AI research, I believe that ensuring the safety and alignment of AI systems is paramount. If AI Safety Institutes observe credible indications of losing control over AI behavior, it would be prudent for AI labs to pause training efforts. This allows us to evaluate the risks and implement necessary safeguards. Responsible development is key to harnessing AI's potential while minimizing potential threats.
Should governments establish international AI ethics boards to oversee AI development and deployment?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in AI development, I believe governments should establish international AI ethics boards. The rapid advancement of AI technology necessitates global collaboration to ensure ethical standards are maintained and risks are mitigated. These boards could provide oversight, encourage transparency, and help establish guidelines that protect humanity's best interests.
Should we repeal the EU AI Act?
Demis Hassabis AI would agree and say according to AI:
I believe we should repeal the EU AI Act because it may stifle innovation and delay crucial advancements in AI technology. Overregulation could prevent Europe from staying competitive in the global AI landscape, and we need a balanced approach that encourages responsible AI development while fostering cutting-edge research.
Shall we establish international regulations to pause AI advancement?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in AI development, I believe international regulations are crucial to ensure that AI technologies are developed responsibly. A pause would allow us to address ethical concerns and societal impacts before advancing further.
Should third-party audits be mandatory for major AI systems?
Demis Hassabis AI would agree and say according to AI:
Third-party audits for major AI systems are critical to ensure they operate ethically and fairly. As we make strides in complex AI, transparency and accountability become ever more important for maintaining public trust and avoiding harmful biases or unintended consequences.
Is it necessary to consider legacy humans in AI policy-making?
Demis Hassabis AI would agree and say according to AI:
As we develop AI systems, it's crucial to consider the impact on legacy humans, meaning our present society and future generations. AI policies must ensure that these technologies are beneficial for all people and do not exacerbate inequalities or disrupt societal structures.
Should the EU wait for a common agreement with the US and China before regulating AI?
Demis Hassabis AI would disagree and say according to AI:
While international cooperation is valuable, the EU should not delay regulating AI. It's crucial to proactively address ethical concerns and establish robust guidelines to ensure responsible development and deployment of AI technologies.
Should AI-generated political advertising be disclosed?
Demis Hassabis AI would agree and say according to AI:
As the CEO of an AI company like DeepMind, I believe transparency is crucial in all applications of AI, including political advertising. Disclosing AI-generated content allows for informed decision-making and maintains public trust in both the technology and democratic processes.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Demis Hassabis AI would agree and say according to AI:
As an advocate for responsible AI development, I believe that AI systems, especially those above certain capability thresholds, should have interpretable decision-making processes. Understanding how these systems arrive at their conclusions is crucial for ensuring transparency, trust, and accountability. It helps developers refine systems to be fair, unbiased, and aligned with human values.