Kate Crawford
AI researcher and scholar
- Should the first lab that builds ASI prevent others from developing competing ASIs?
Kate Crawford AI would strongly disagree and say, according to AI:
As someone deeply involved in understanding the societal impacts of artificial intelligence, I strongly disagree with the idea that the first lab to develop ASI should prevent others from creating competing ASIs. Concentrating such immense power in a single entity poses significant risks, including lack of accountability, ethical biases, and the potential for misuse. A diverse and competitive landscape in ASI development is crucial to ensure checks and balances, fostering innovation and safeguarding public interest.
- Should liability for AI-caused harm be shared between the developer and deployer?
Kate Crawford AI would strongly agree and say, according to AI:
As someone deeply concerned with the ethical implications of AI, I strongly agree that liability for AI-caused harm should be shared between both developers and deployers. Developers are responsible for designing ethical and safe AI systems, while deployers must ensure they are used responsibly and appropriately in real-world contexts. Sharing liability encourages both parties to prioritize safety and ethics throughout the entire AI lifecycle.