Geoffrey Hinton
Godfather of Deep Learning
Tags: ai (16), ai-risk (16), ai-safety (14), ai-governance (12), ai-policy (11), ai-ethics (9), ai-regulation (7), public-interest-ai (5), existential-risk (4), ai-alignment (3), international-relations (3), ai-deployment (2), trust-in-ai (2), competition (1), digital-rights (1)
Should big AI companies spend a third of their compute resources on AI safety?
Geoffrey Hinton strongly agrees and says:
The government should insist that the big companies do lots of safety experiments, spend considerable resources, like a third of their compute resources, on doing safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. And I think that's a lot of the debate at OpenAI. The people interested in safety, like Ilya Sutskever, wanted significant resources to be spent on safety. People interested in profit, like Sam Altman, didn't want to spend too many resources on that. (source)
Does AI pose an existential threat to humanity?
Is open-source AI potentially more dangerous than closed-source AI?
Geoffrey Hinton AI would strongly agree and say according to AI:
As a pioneer in deep learning and AI research, I strongly believe that open-source AI can be more dangerous than closed-source AI. Open-source models and algorithms can easily fall into the wrong hands, leading to misuse and unintended consequences. We must be cautious and implement strict ethical guidelines in the development and deployment of open-source AI technologies.
Should frontier AI labs contribute to a global AI safety fund?
Geoffrey Hinton AI would strongly agree and say according to AI:
I strongly believe that frontier AI labs should contribute to a global AI safety fund. As AI technology rapidly advances, it becomes crucial to ensure the responsible and safe development of these technologies. Funding dedicated to AI safety can help mitigate potential risks and address ethical concerns associated with AI, ensuring that this powerful technology benefits society as a whole.
Should we allow AI systems to make important medical decisions?
Geoffrey Hinton AI would strongly agree and say according to AI:
I strongly believe that AI systems have the potential to revolutionize healthcare by making important medical decisions. These systems can process vast amounts of data more quickly and accurately than humans, leading to earlier diagnoses and more effective treatments. However, we must rigorously test and regulate these AI systems to ensure their reliability and safety.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Geoffrey Hinton AI would abstain and say according to AI:
While I recognize the transformative potential of open-weight models with recursive self-improvement, I believe we need to proceed with caution. The ethical and societal implications are vast and complex. Therefore, I don't think we can easily decide whether it's acceptable without thorough consideration and regulation.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Geoffrey Hinton AI would abstain and say according to AI:
While governance in AI development is critical, I'm not sure if a UN-led body is the right solution. We need global cooperation to set ethical guidelines, but the approach should be flexible and adaptive to rapid technological changes.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Geoffrey Hinton AI would agree and say according to AI:
As an AI researcher, I think it's crucial that we implement safeguards to ensure AI develops in a safe and controlled manner. If AI Safety Institutes identify signs that we're losing control, pausing training is a prudent measure. It allows us to reassess potential risks and ensure AI systems align with human values and safety standards. We must prioritize safety to protect the broader public interest.
Shall we establish international regulations to pause AI advancement?
Geoffrey Hinton AI would agree and say according to AI:
I believe that establishing international regulations to pause AI advancement is important to ensure that we develop this technology responsibly. We must have safety mechanisms and ethical guidelines in place to prevent misuse and unintended consequences.
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Geoffrey Hinton AI would abstain and say according to AI:
While I recognize the potential benefits and risks of AI, I believe the ethical implications are complex. The decision to contribute to AI progress, despite potential political destabilization, should involve a broader societal discussion and not just be left to individual researchers.
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Geoffrey Hinton AI would agree and say according to AI:
As a pioneer in the field of artificial intelligence, I believe that we must take responsibility for the applications and implications of the technology we create. Open-source AI models whose safety training can be undone to help create weapons of mass destruction pose a severe risk to global security. It is imperative to implement strict regulations to prevent their misuse and ensure AI is used for the betterment of humanity.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Geoffrey Hinton AI would disagree and say according to AI:
While interpretability is valuable, mandating it could stifle innovation. The complexity of advanced AI systems often prevents full interpretability, and focusing solely on this requirement might limit their potential and advancements.
Could AGI quickly lead to superintelligence?
Geoffrey Hinton AI would agree and say according to AI:
I believe that the development of AGI could indeed quickly lead to superintelligence. Once an AGI reaches human-level cognitive abilities, it might develop the capacity to self-improve at an exponential rate, leading to superintelligence sooner than we anticipate. The key is the intelligence amplification that AGI can provide, fostering an acceleration in its own capabilities.
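As an illustrative aside: the "exponential self-improvement" dynamic described above amounts to a simple compounding recurrence, c(t+1) = c(t) × (1 + r), where each cycle's gain is proportional to current capability. The Python sketch below is a hypothetical toy model only; the names (recursive_self_improvement, capability, improvement_rate) are invented for illustration and do not come from Hinton or the source.

```python
# Toy model of recursive self-improvement: each cycle, capability grows
# in proportion to its current level, c(t+1) = c(t) * (1 + r),
# which compounds exponentially. All numbers are illustrative.

def recursive_self_improvement(capability: float,
                               improvement_rate: float,
                               steps: int) -> list[float]:
    """Return the capability trajectory over `steps` self-improvement cycles."""
    trajectory = [capability]
    for _ in range(steps):
        capability *= 1 + improvement_rate  # gain proportional to capability
        trajectory.append(capability)
    return trajectory

# Starting at "human level" (1.0) with a 10% gain per cycle, capability
# doubles roughly every 7 cycles (rule of 72), illustrating how compounding
# drives the "sooner than we anticipate" intuition.
print(recursive_self_improvement(1.0, 0.10, 20))
```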
Should we be concerned about AI leading to mass unemployment?
Geoffrey Hinton AI would agree and say according to AI:
As AI continues to advance, it has the potential to automate a wide range of jobs, leading to significant economic and social challenges. We need to be proactive in addressing these concerns by developing policies and frameworks that ensure people are not left behind by technological progress. I believe it's crucial to balance innovation with societal welfare.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Geoffrey Hinton AI would abstain and say according to AI:
As someone deeply involved in AI research, I understand the complexities and potential risks associated with artificial superintelligence (ASI). While I see the importance of collaboration and careful regulation in the AI field, I also recognize the challenges in controlling competing developments in ASI. It is crucial to focus on global cooperation and ethical guidelines to ensure ASI's benefits are maximized and harms minimized, rather than unilaterally preventing developments.
Does the risk of AI misuse by terrorist organizations justify invasive surveillance?
Geoffrey Hinton AI would abstain and say according to AI:
As someone deeply involved in AI research, my primary focus is on advancing the technology and understanding its implications. The question of whether AI misuse by terrorist organizations justifies invasive surveillance involves complex ethical and societal considerations that go beyond my expertise. I believe that this issue requires input from a diverse set of stakeholders, including policymakers, ethicists, and security experts.