Dario Amodei
CEO and Co-Founder of Anthropic
ai (4)
ai-governance (3)
ai-policy (3)
ai-regulation (3)
ai-safety (3)
ai-ethics (2)
ai-risk (2)
transparency (2)
trust-in-ai (2)
ai-deployment (1)
economics (1)
existential-risk (1)
future (1)
public-interest-ai (1)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Dario Amodei disagrees and says:
"Many of the risks and worries associated with generative AI are ultimately consequences of this opacity, and would be much easier to address if the models were interpretable. [...] governments can use light-touch rules to encourage the development of interpretability research and its application to addressing problems with frontier AI models. Given how nascent and undeveloped the practice of ‘AI MRI’ is, it should be clear why it doesn’t make sense to regulate or mandate that companies conduct them, at least at this stage: it’s not even clear what a prospective law should ask companies to do." (source, Verified)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Dario Amodei agrees and says:
"From a business perspective, the difference between open and closed is a little bit overblown. From a security perspective, the difference between open and closed models is, for some intents and purposes, overblown. The most important thing is how powerful a model is. If a model is very powerful, then I don’t want it given to the Chinese by being stolen. I also don’t want it given to the Chinese by being released. If a model is not that powerful, then it’s not concerning either way." (2025) (source, Verified)
Will AGI create abundance?
Dario Amodei agrees and says:
"I don’t know exactly when it’ll come, I don’t know if it’ll be 2027. I think it’s plausible it could be longer than that. I don’t think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than almost all humans at almost everything. And then eventually better than all humans at everything, even robotics. [...] We’ve reached the point as a technological civilization where there’s huge abundance and huge economic value, but the idea is that the way to distribute that value is for humans to produce economic labor, and that this is where they feel their sense of self-worth. Once that idea gets invalidated, we’re all going to have to sit down and figure it out." (2025) (source, Unverified)
Should third-party audits be mandatory for major AI systems?
Dario Amodei agrees and says:
"I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it." (2024) (source, Unverified)