Dario Amodei
CEO of Anthropic (formerly VP of Research at OpenAI)
Tags: ai (3), emerging-tech (3), tech-ethics (3), future (2), innovation-policy (2), ai-governance (1), ethics (1), nuclear (1), science-funding (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Dario Amodei agrees and says:
From a business perspective, the difference between open and closed is a little bit overblown. From a security perspective, the difference between open and closed models is, for some intents and purposes, overblown. The most important thing is how powerful a model is. If a model is very powerful, then I don't want it given to the Chinese by being stolen. I also don't want it given to the Chinese by being released. If a model is not that powerful, then it's not concerning either way. (source)
Will there be a general AI system more capable than any living human in every respect by 2028?
An AI simulation of Dario Amodei would abstain and say:
Predicting the exact timeline for achieving a general AI system more capable than any living human is extremely difficult. While significant progress is being made, the complexities and unknowns in AI research make it challenging to set concrete dates. Instead, we should focus on making AI safe and beneficial.
Is expanding LLMs sufficient for achieving AGI?
An AI simulation of Dario Amodei would abstain and say:
While expanding large language models is an interesting direction, there are still many challenges to achieving artificial general intelligence. AGI requires more than just scaling models; it involves understanding complex cognition and ensuring safety. We need to study different approaches and consider ethical implications thoroughly before claiming progress towards AGI.