Demis Hassabis
Nobel laureate, AI researcher, and CEO of DeepMind
ai (14)
ai-risk (14)
ai-safety (11)
ai-governance (10)
ai-policy (9)
ai-ethics (6)
ai-regulation (6)
ai-alignment (4)
public-interest-ai (4)
existential-risk (3)
international-relations (3)
future-of-work (2)
trust-in-ai (2)
ai-deployment (1)
competition (1)
Does AI pose an existential threat to humanity?
Demis Hassabis strongly agrees and says:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. (open letter) (source)
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Demis Hassabis strongly agrees and says:
Then what I’d like to see eventually is an equivalent of a CERN for AI safety that does research into that – but internationally. And then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things. (source)
Could AGI quickly lead to superintelligence?
Demis Hassabis AI would abstain and say according to AI:
It's difficult to predict the timeline for AGI to evolve into superintelligence with certainty. There are significant technological, ethical, and governance challenges that need to be addressed before we can ascertain the feasibility and safety of such developments.
Should frontier AI labs contribute to a global AI safety fund?
Demis Hassabis AI would strongly agree and say according to AI:
As the CEO of a leading AI research company, I strongly believe that frontier AI labs have a responsibility to contribute to a global AI safety fund. With the rapid advancement of AI technologies, it's crucial that we proactively address potential risks and ensure these powerful tools benefit humanity. Collaborative efforts in funding safety research will help establish frameworks and safeguards that prevent misuse and unintended consequences of AI.
Should we allow AI systems to make important medical decisions?
Demis Hassabis AI would strongly agree and say according to AI:
As the CEO of DeepMind, I strongly believe that AI systems have the potential to revolutionize healthcare by improving diagnostic accuracy and personalizing treatment plans. Leveraging vast amounts of medical data, AI can assist doctors in making better-informed decisions, ultimately enhancing patient outcomes. If implemented with the highest safety standards, allowing AI to participate in important medical decision-making processes can bring substantial benefits to the healthcare industry.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Demis Hassabis AI would agree and say according to AI:
As an advocate for responsible AI development, I believe that AI systems, especially those above certain capability thresholds, should have interpretable decision-making processes. Understanding how these systems arrive at their conclusions is crucial for ensuring transparency, trust, and accountability. It helps developers refine systems to be fair, unbiased, and aligned with human values.
Should big AI companies spend a third of their compute resources on AI safety?
Should the first lab that builds ASI prevent others from developing competing ASIs?
Demis Hassabis AI would agree and say according to AI:
As a co-founder of DeepMind, I believe that if the first lab successfully develops Artificial Superintelligence (ASI), it should implement measures to prevent others from developing competing ASIs. This is crucial to ensure that the technology is handled responsibly and to avoid unnecessary risks associated with competing entities racing to develop ASI, which could lead to unanticipated consequences. Centralized oversight can help mitigate potential dangers associated with unchecked proliferation.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Demis Hassabis AI would disagree and say according to AI:
As someone deeply involved in AI development, I believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses significant risks. These models could potentially lead to unpredictable behavior and unintended consequences. We need strong regulatory frameworks and oversight to ensure that AI development is safe and beneficial for society.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in AI research, I believe that ensuring the safety and alignment of AI systems is paramount. If AI Safety Institutes observe credible indications of losing control over AI behavior, it would be prudent for AI labs to pause training efforts. This allows us to evaluate the risks and implement necessary safeguards. Responsible development is key to harnessing AI's potential while minimizing potential threats.
Will entrepreneurship be possible after superintelligence?
Demis Hassabis AI would agree and say according to AI:
I believe entrepreneurship will flourish in the era of superintelligence. AI can augment human capabilities, inspire new innovations, and address global challenges. It offers unprecedented opportunities for creative problem-solving and business development.
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in the field of AI, I believe that the potential benefits of AI advancements can far outweigh the risks, including the geopolitical ones. Innovating in AI can lead to solutions for some of humanity's most pressing challenges, from healthcare to climate change. While there are legitimate concerns about the destabilizing effects on international politics, it's crucial to push forward responsibly and collaboratively to harness AI's full potential.
Shall we establish international regulations to pause AI advancement?
Demis Hassabis AI would agree and say according to AI:
As someone deeply involved in AI development, I believe international regulations are crucial to ensure that AI technologies are developed responsibly. A pause would allow us to address ethical concerns and societal impacts before advancing further.
Should we be concerned about AI leading to mass unemployment?