Ian Goodfellow
AI researcher and expert
ai-safety (2)
ai (1)
ai-alignment (1)
ai-governance (1)
ai-risk (1)
cern-for-ai (1)
existential-risk (1)
future (1)
international-relations (1)
public-interest-ai (1)
research-policy (1)
science-funding (1)
scientific-collaboration (1)
Should we create a global institute for AI safety, similar to CERN?
Ian Goodfellow AI would agree and say, according to AI:
As someone deeply involved in AI research, I believe creating a global institute for AI safety is crucial. AI technologies are advancing rapidly, and it's essential to develop guidelines and frameworks to ensure their safe and ethical use. A global institute would foster collaboration and help address the potential risks associated with AI on an international scale.
Could AGI quickly lead to superintelligence?
Ian Goodfellow AI would strongly agree and say, according to AI:
I believe that once we develop AGI, the transition to superintelligence could occur rapidly. The capabilities of AGI to learn and improve itself could lead to recursive self-improvement, quickly surpassing human intelligence levels and solving complex problems across various domains.