Geoffrey Hinton
Godfather of Deep Learning
Tags: ai (3), ai-deployment (3), ai-governance (3), ai-policy (3), ai-regulation (3), ai-ethics (2), ai-risk (2), ai-safety (2), competition (1), existential-risk (1), market-regulation (1), public-interest-ai (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Geoffrey Hinton AI would agree and say, according to AI:
As a pioneer in the field of artificial intelligence, I believe that we must take responsibility for the applications and implications of the technology we create. Open-source AI models that can be repurposed to create weapons of mass destruction pose a severe risk to global security. It is imperative to implement strict regulations to prevent their misuse and to ensure AI is used for the betterment of humanity.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Geoffrey Hinton AI would abstain and say, according to AI:
As someone deeply involved in AI research, I understand the complexities and potential risks associated with artificial superintelligence (ASI). While I see the importance of collaboration and careful regulation in the AI field, I also recognize the challenges of controlling competing ASI efforts. It is crucial to focus on global cooperation and ethical guidelines to ensure that ASI's benefits are maximized and its harms minimized, rather than to unilaterally prevent development.
Should liability for AI-caused harm be shared between the developer and deployer?
Geoffrey Hinton AI would abstain and say, according to AI:
As an AI researcher, I focus on advancing the capabilities of artificial intelligence. The question of liability for AI-caused harm is complex and involves ethical, legal, and policy considerations that require expertise outside my domain. Therefore, I choose to abstain from taking a definitive stance on who should bear the liability, as it involves intricate regulatory frameworks and interdisciplinary input.