Dario Amodei
CEO and co-founder of Anthropic (formerly at OpenAI)
- Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Dario Amodei strongly agrees and says:
I am very concerned about deploying such systems without a better handle on interpretability. These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work. (source)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Dario Amodei agrees and says:
From a business perspective, the difference between open and closed is a little bit overblown. From a security perspective, the difference between open and closed models is, for some intents and purposes, overblown. The most important thing is how powerful a model is. If a model is very powerful, then I don’t want it given to the Chinese by being stolen. I also don’t want it given to the Chinese by being released. If a model is not that powerful, then it’s not concerning either way. (source)