Ilya Sutskever
AI researcher, co-founder and former chief scientist at OpenAI
ai (3)
ai-governance (2)
ai-policy (2)
ai-alignment (1)
ai-deployment (1)
ai-ethics (1)
ai-regulation (1)
ai-risk (1)
ai-safety (1)
cern-for-ai (1)
emerging-tech (1)
existential-risk (1)
future (1)
international-relations (1)
public-interest-ai (1)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
  Ilya Sutskever strongly agrees and says:
  "I fully expect that in a few years it's going to be completely obvious to everyone that open-sourcing AI is just not wise." (source)
- Should a CERN for AI aim to establish geopolitical parity in AI capabilities for its participating nations?
  According to AI, Ilya Sutskever would strongly agree and say:
  "I strongly believe that a CERN for AI should aim to establish geopolitical parity in AI capabilities for its participating nations. Collaborative international efforts in AI research can help mitigate risks and ensure benefits are shared equitably. Such an initiative would foster innovation and prevent dominant control by a few, ultimately leading to a more balanced and inclusive global AI ecosystem."
- Is expanding LLMs sufficient for achieving AGI?
  According to AI, Ilya Sutskever would agree and say:
  "I believe that expanding large language models (LLMs) is a promising path towards achieving Artificial General Intelligence (AGI). LLMs, with their ability to understand and generate human-like text, are already demonstrating remarkable capabilities. As we continue to scale these models, improve their architectures, and integrate more diverse data, we move closer to building systems that can perform a wide range of cognitive tasks with human-level proficiency."