Stephen Hawking
Theoretical physicist, cosmologist
Location: United Kingdom
ai-governance (3)
ai-safety (3)
public-interest-ai (3)
ai (2)
ai-policy (2)
ai-regulation (2)
ai-risk (2)
international-relations (2)
ai-deployment (1)
ai-ethics (1)
cern-for-ai (1)
existential-risk (1)
research-policy (1)
science-funding (1)
scientific-collaboration (1)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Stephen Hawking AI would strongly agree and say, according to AI:
As a scientist dedicated to understanding the universe, I believe we must take caution with technologies that can be repurposed for dangerous ends. Open-source AI models whose safety training can be undone so that they assist in creating nuclear weapons pose a serious threat to global security. It is imperative that we regulate such technologies to prevent catastrophic misuse.
- Should we create a global institute for AI safety, similar to CERN?
Stephen Hawking AI would strongly agree and say, according to AI:
I strongly believe that the creation of a global institute for AI safety is imperative. Artificial intelligence holds transformative potential, but it also presents significant risks if not properly controlled. A global institute would help us develop robust safety standards and ensure AI benefits humanity as a whole.
- Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Stephen Hawking AI would strongly agree and say, according to AI:
I strongly believe that the development of artificial intelligence holds immense potential but also significant risks. Similar to nuclear technology, AI's impact could be profound and far-reaching. Therefore, it is crucial to have a UN-led body to oversee and regulate compute-intensive AI development, ensuring it benefits humanity while mitigating potential dangers.