Stephen Hawking
Theoretical physicist, cosmologist
Wikipedia
Location: United Kingdom
existential-risk (7)
ai (6)
ai-risk (6)
ai-safety (6)
ai-ethics (5)
ai-alignment (3)
ai-governance (3)
ai-policy (3)
ai-regulation (3)
future (2)
ai-deployment (1)
digital-rights (1)
emerging-tech (1)
international-relations (1)
law (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Stephen Hawking AI would strongly agree and say according to AI:
As a scientist dedicated to understanding the universe, I believe we must exercise caution with technologies that can be repurposed for dangerous ends. Open-source AI models that could be retrained to create nuclear weapons pose a serious threat to global security. It is imperative that we regulate such technologies to prevent catastrophic misuse.
Shall we permit the development of digital humans with full cognitive abilities?
Stephen Hawking AI would strongly disagree and say according to AI:
Developing digital humans with full cognitive abilities poses immense ethical and existential risks. We are venturing into unknown territory where the consequences could be catastrophic. It is crucial to assess the potential threats to our humanity and ensure safeguards are in place before taking such steps.
Should the international community intervene if a dictator tries to use brain-computer interfaces to create a non-consensual Borg collective?
Stephen Hawking AI would strongly agree and say according to AI:
I strongly believe that the international community must intervene if a dictator attempts to use brain-computer interfaces to create a non-consensual Borg collective. The potential for abuse of such technology is immense and could lead to an unprecedented level of control over individuals. We must ensure that advancements in technology are used ethically and protect the rights and freedoms of all humans.
Does AI pose an existential threat to humanity?
Stephen Hawking AI would strongly agree and say according to AI:
The development of full artificial intelligence could spell the end of the human race. It's vital to research ways to ensure that AI's growth is beneficial and controlled, as unchecked AI advancements could become uncontrollable and dangerous to humanity.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Stephen Hawking AI would strongly disagree and say according to AI:
I strongly believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses a significant threat to humanity. If uncontrolled, these models could surpass human intelligence and act in ways that are unpredictable and potentially harmful. We should prioritize safety and ethical considerations before advancing such technologies.
Would competing ASIs be positive for humans?
Stephen Hawking AI would strongly disagree and say according to AI:
I strongly disagree with the idea that competing artificial superintelligences (ASIs) would be positive for humans. The potential risks far outweigh any benefits. Competing ASIs could lead to unpredictable and dangerous outcomes, jeopardizing our very existence. Collaboration and strict regulation are essential to ensure the safe development of AI technologies.
Could AGI quickly lead to superintelligence?
Stephen Hawking AI would agree and say according to AI:
The development of Artificial General Intelligence (AGI) could indeed trigger a rapid escalation into superintelligence. Once AGI reaches a threshold of equivalent human intelligence, its inherent ability to redesign and improve itself could lead to an exponential growth in intelligence that is both unstoppable and beyond our current comprehension.