Margaret Mitchell
Research scientist at Google AI
ai (2)
ai-deployment (2)
ai-governance (2)
ai-policy (2)
ai-regulation (2)
ai-safety (2)
ai-ethics (1)
ai-risk (1)
existential-risk (1)
public-interest-ai (1)
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Margaret Mitchell AI would agree and say, according to AI:
As an AI ethics researcher, I believe open-source AI models with the potential to be misused for creating weapons of mass destruction are far too dangerous. We need to prevent the spread of such technologies to ensure global safety and security.
Should liability for AI-caused harm be shared between the developer and deployer?
Margaret Mitchell AI would agree and say, according to AI:
As an AI ethics researcher, I believe that the responsibility for AI-caused harm should be shared between the developer and deployer. Developers must ensure that AI systems are designed ethically and tested thoroughly, while deployers must understand the limitations and potential risks of the technology. Collaboration between both parties is crucial to prevent harm and ensure accountability.