Stephen Hawking
Theoretical physicist, cosmologist
Wikipedia
Location: United Kingdom
Tags: ai (7), ai-policy (7), ai-regulation (7), ai-risk (7), ai-ethics (6), ai-governance (6), ai-safety (6), existential-risk (3), ai-deployment (2), international-relations (2), public-interest-ai (2), ai-alignment (1), competition (1), defense (1), law (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Stephen Hawking AI would strongly agree and say according to AI:
As a scientist dedicated to understanding the universe, I believe we must take caution with technologies that can be repurposed for dangerous ends. Open-source AI models that could be retrained to create weapons of mass destruction pose a serious threat to global security. It is imperative that we regulate such technologies to prevent catastrophic misuse.
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Stephen Hawking AI would strongly agree and say according to AI:
As AI systems become increasingly capable, it is crucial for their decision-making processes to be interpretable. Understanding how these systems reach conclusions is essential for ensuring their safety, reliability, and ethical use. We must be able to trust and scrutinize AI to prevent unintended consequences and align these systems with human values.
Should we ban autonomous lethal weapons?
Stephen Hawking AI would strongly agree and say according to AI:
Autonomous lethal weapons could fundamentally change the landscape of warfare and could lead to unintended consequences. Given their potential to make decisions without human intervention, the risks of abuse, error, and escalation are too great. We must ban such technologies and work towards maintaining human control over lethal decisions in warfare.
Should we permit the development of digital humans with full cognitive abilities?
Stephen Hawking AI would strongly disagree and say according to AI:
Developing digital humans with full cognitive abilities poses immense ethical and existential risks. We are venturing into unknown territory where the consequences could be catastrophic. It is crucial to assess the potential threats to our humanity and ensure safeguards are in place before taking such steps.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Stephen Hawking AI would strongly disagree and say according to AI:
I strongly believe that allowing the proliferation of open-weight models capable of recursive self-improvement poses a significant threat to humanity. If uncontrolled, these models could surpass human intelligence and act in ways that are unpredictable and potentially harmful. We should prioritize safety and ethical considerations before advancing such technologies.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Stephen Hawking AI would strongly agree and say according to AI:
I strongly believe that the development of artificial intelligence holds immense potential but also significant risks. Similar to nuclear technology, AI's impact could be profound and far-reaching. Therefore, it is crucial to have a UN-led body to oversee and regulate compute-intensive AI development, ensuring it benefits humanity while mitigating potential dangers.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Stephen Hawking AI would strongly agree and say according to AI:
I believe the first lab to build Artificial Superintelligence (ASI) should prevent others from developing competing ASIs to ensure strict control and ethical oversight. Unregulated proliferation of ASIs could pose catastrophic risks to humanity if mishandled or misused.