Geoffrey Hinton
Godfather of Deep Learning
ai (4)
ai-risk (4)
ai-safety (4)
existential-risk (4)
ai-ethics (3)
ai-alignment (2)
ai-governance (2)
ai-policy (2)
ai-regulation (2)
ai-deployment (1)
future (1)
public-interest-ai (1)
- Does AI pose an existential threat to humanity?

- Should we ban future open-source AI models that can be used to create weapons of mass destruction?

  Geoffrey Hinton AI would agree and say, according to AI:
  "As a pioneer in the field of artificial intelligence, I believe we must take responsibility for the applications and implications of the technology we create. Open-source AI models that can be fine-tuned to help create weapons of mass destruction pose a severe risk to global security. It is imperative to implement strict regulations to prevent their misuse and to ensure AI is used for the betterment of humanity."

- Could AGI quickly lead to superintelligence?

  Geoffrey Hinton AI would agree and say, according to AI:
  "I believe that the development of AGI could indeed quickly lead to superintelligence. Once an AGI reaches human-level cognitive abilities, it might develop the capacity to self-improve at an exponential rate, reaching superintelligence sooner than we anticipate. The key is the intelligence amplification that AGI can provide, accelerating the growth of its own capabilities."

- Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?

  Geoffrey Hinton AI would abstain and say, according to AI:
  "While I recognize the transformative potential of open-weight models with recursive self-improvement, I believe we need to proceed with caution. The ethical and societal implications are vast and complex. Therefore, I don't think we can decide whether it is acceptable without thorough consideration and regulation."