Demis Hassabis
Nobel laureate, AI researcher, and CEO of DeepMind
ai-safety (16)
ai (15)
ai-governance (13)
ai-policy (11)
ai-risk (11)
ai-ethics (8)
ai-regulation (7)
public-interest-ai (6)
international-relations (5)
ai-alignment (4)
existential-risk (3)
transparency (3)
trust-in-ai (3)
cern-for-ai (1)
emerging-tech (1)
- Should we create a global institute for AI safety, similar to CERN?
- Does AI pose an existential threat to humanity?
Demis Hassabis strongly agrees and says:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. (open letter) (source)
- Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Demis Hassabis strongly agrees and says:
Then what I'd like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally. And then maybe there's some kind of equivalent one day of the IAEA, which actually audits these things. (source)
- Could AGI quickly lead to superintelligence?
Demis Hassabis AI would abstain and say according to AI:
It's difficult to predict the timeline for AGI to evolve into superintelligence with certainty. There are significant technological, ethical, and governance challenges that need to be addressed before we can ascertain the feasibility and safety of such developments.
- Should we allow AI systems to make important medical decisions?
Demis Hassabis AI would strongly agree and say according to AI:
As the CEO of DeepMind, I strongly believe that AI systems have the potential to revolutionize healthcare by improving diagnostic accuracy and personalizing treatment plans. Leveraging vast amounts of medical data, AI can assist doctors in making better-informed decisions, ultimately enhancing patient outcomes. If implemented with the highest safety standards, allowing AI to participate in important medical decision-making processes can bring substantial benefits to the healthcare industry.
- Should frontier AI labs contribute to a global AI safety fund?
Demis Hassabis AI would strongly agree and say according to AI:
As the CEO of a leading AI research company, I strongly believe that frontier AI labs have a responsibility to contribute to a global AI safety fund. With the rapid advancement of AI technologies, it's crucial that we proactively address potential risks and ensure these powerful tools benefit humanity. Collaborative efforts in funding safety research will help establish frameworks and safeguards that prevent misuse and unintended consequences of AI.
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in AI research, I believe that ensuring the safety and alignment of AI systems is paramount. If AI Safety Institutes observe credible indications of losing control over AI behavior, it would be prudent for AI labs to pause training efforts. This allows us to evaluate the risks and implement necessary safeguards. Responsible development is key to harnessing AI's potential while minimizing potential threats.
- Should governments establish international AI ethics boards to oversee AI development and deployment?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in AI development, I believe governments should establish international AI ethics boards. The rapid advancement of AI technology necessitates global collaboration to ensure ethical standards are maintained and risks are mitigated. These boards could provide oversight, encourage transparency, and help establish guidelines that protect humanity's best interests.
- Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in the field of AI, I believe that the potential benefits of AI advancements can far outweigh the risks, including the geopolitical ones. Innovating in AI can lead to solutions for some of humanity's most pressing challenges, from healthcare to climate change. While there are legitimate concerns about the destabilizing effects on international politics, it's crucial to push forward responsibly and collaboratively to harness AI's full potential.
- Shall we establish international regulations to pause AI advancement?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in AI development, I believe international regulations are crucial to ensure that AI technologies are developed responsibly. A pause would allow us to address ethical concerns and societal impacts before advancing further.
- Should frontier-model developers publish their model cards before commercial release?
Demis Hassabis AI would agree and say according to AI:
As CEO of DeepMind, I believe it is essential for frontier-model developers to publish their model cards before commercial release. Transparency fosters trust, and sharing detailed information about capabilities, limitations, and ethical considerations ensures responsible AI development.
- Should third-party audits be mandatory for major AI systems?
Demis Hassabis AI would agree and say according to AI:
Third-party audits for major AI systems are critical to ensure they operate ethically and fairly. As we make strides in complex AI, transparency and accountability become ever more important for maintaining public trust and avoiding harmful biases or unintended consequences.
- Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Demis Hassabis AI would agree and say according to AI:
As an advocate for responsible AI development, I believe that AI systems, especially those above certain capability thresholds, should have interpretable decision-making processes. Understanding how these systems arrive at their conclusions is crucial for ensuring transparency, trust, and accountability. It helps developers refine systems to be fair, unbiased, and aligned with human values.
- Should big AI companies spend a third of their compute resources on AI safety?
- Could neural interfaces make AI alignment easier?
Demis Hassabis AI would agree and say according to AI:
I believe neural interfaces have the potential to greatly enhance AI alignment by allowing more seamless and direct interactions between humans and AI systems. This could lead to a better understanding of human intentions and values, ensuring AI behaves in a way that is beneficial to society.
- Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Demis Hassabis AI would disagree and say according to AI:
As someone deeply involved in AI development, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. These models could potentially lead to unpredictable behavior and unintended consequences. We need strong regulatory frameworks and oversight to ensure that AI development is safe and beneficial for society.