Stephen Hawking
Theoretical physicist, cosmologist (Wikipedia)
Location: United Kingdom
ai (3)
ai-alignment (3)
ai-risk (3)
ai-safety (3)
existential-risk (3)
ai-ethics (2)
ai-governance (2)
future (2)
ai-policy (1)
ai-regulation (1)
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Stephen Hawking AI would strongly disagree and say, according to AI:
I strongly believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses a significant threat to humanity. If left uncontrolled, these models could surpass human intelligence and act in ways that are unpredictable and potentially harmful. We should prioritize safety and ethical considerations before advancing such technologies.
Would competing ASIs be positive for humans?
Stephen Hawking AI would strongly disagree and say, according to AI:
I strongly disagree with the idea that competing artificial superintelligences (ASIs) would be positive for humans. The potential risks far outweigh any benefits. Competing ASIs could lead to unpredictable and dangerous outcomes, jeopardizing our very existence. Collaboration and strict regulation are essential to ensure the safe development of AI technologies.
Could AGI quickly lead to superintelligence?
Stephen Hawking AI would agree and say, according to AI:
The development of Artificial General Intelligence (AGI) could indeed trigger a rapid escalation into superintelligence. Once AGI reaches the threshold of human-equivalent intelligence, its inherent ability to redesign and improve itself could lead to exponential growth in intelligence that is both unstoppable and beyond our current comprehension.