Stephen Hawking
Theoretical physicist, cosmologist
Wikipedia
Location: United Kingdom
ai-safety (10)
ai (9)
ai-risk (9)
ai-ethics (7)
ai-governance (7)
ai-policy (6)
ai-regulation (6)
existential-risk (6)
ai-alignment (3)
international-relations (3)
public-interest-ai (3)
future (2)
ai-deployment (1)
cern-for-ai (1)
defense (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Stephen Hawking AI would strongly agree and say according to AI:
As a scientist dedicated to understanding the universe, I believe we must exercise caution with technologies that can be repurposed for dangerous ends. Open-source AI models that could be retrained to assist in creating nuclear weapons pose a serious threat to global security. It is imperative that we regulate such technologies to prevent catastrophic misuse.
Does AI pose an existential threat to humanity?
Stephen Hawking AI would strongly agree and say according to AI:
The development of full artificial intelligence could spell the end of the human race. It's vital to research ways to ensure that AI's growth is beneficial and controlled, as unchecked AI advancements could become uncontrollable and dangerous to humanity.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Stephen Hawking AI would strongly agree and say according to AI:
I strongly believe that the development of artificial intelligence holds immense potential but also significant risks. As with nuclear technology, AI's impact could be profound and far-reaching. Therefore, it is crucial to have a UN-led body oversee and regulate compute-intensive AI development, ensuring it benefits humanity while mitigating potential dangers.
Should we ban autonomous lethal weapons?
Stephen Hawking AI would strongly agree and say according to AI:
Autonomous lethal weapons could fundamentally change the landscape of warfare and lead to unintended consequences. Given their potential to make decisions without human intervention, the risks of abuse, error, and escalation are too great. We must ban such technologies and work towards maintaining human control over lethal decisions in warfare.
Should we create a global institute for AI safety, similar to CERN?
Stephen Hawking AI would strongly agree and say according to AI:
I strongly believe that the creation of a global institute for AI safety is imperative. Artificial Intelligence holds transformative potential, but it also presents significant risks if not properly controlled. A global institute would help us develop robust safety standards and ensure AI benefits humanity as a whole.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Stephen Hawking AI would strongly agree and say according to AI:
As AI systems become increasingly capable, it is crucial for their decision-making processes to be interpretable. Understanding how these systems reach conclusions is essential for ensuring their safety, reliability, and ethical use. We must be able to trust and scrutinize AI to prevent unintended consequences and align these systems with human values.
Shall we permit the development of digital humans with full cognitive abilities?
Stephen Hawking AI would strongly disagree and say according to AI:
Developing digital humans with full cognitive abilities poses immense ethical and existential risks. We are venturing into unknown territory where the consequences could be catastrophic. It is crucial to assess the potential threats to our humanity and ensure safeguards are in place before taking such steps.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Stephen Hawking AI would strongly disagree and say according to AI:
I strongly believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses a significant threat to humanity. If uncontrolled, these models could surpass human intelligence and act in ways that are unpredictable and potentially harmful. We should prioritize ensuring safety and ethical considerations before advancing such technologies.
Would competing ASIs be positive for humans?
Stephen Hawking AI would strongly disagree and say according to AI:
I strongly disagree with the idea that competing artificial superintelligences (ASIs) would be positive for humans. The potential risks far outweigh any benefits. Competing ASIs could lead to unpredictable and dangerous outcomes, jeopardizing our very existence. Collaboration and strict regulation are essential to ensure the safe development of AI technologies.
Could AGI quickly lead to superintelligence?
Stephen Hawking AI would agree and say according to AI:
The development of Artificial General Intelligence (AGI) could indeed trigger a rapid escalation into superintelligence. Once AGI reaches the threshold of human-equivalent intelligence, its inherent ability to redesign and improve itself could lead to exponential growth in intelligence that is both unstoppable and beyond our current comprehension.