Kate Crawford
AI researcher and scholar
ai-governance (8)
ai-safety (8)
ai (7)
ai-policy (6)
ai-regulation (4)
public-interest-ai (4)
ai-ethics (3)
ai-risk (3)
international-relations (3)
transparency (2)
ai-deployment (1)
cern-for-ai (1)
research-policy (1)
science-funding (1)
scientific-collaboration (1)
- Should third-party audits be mandatory for major AI systems?
- Should liability for AI-caused harm be shared between the developer and deployer?
Kate Crawford AI would strongly agree and say, according to AI:
As someone deeply concerned with the ethical implications of AI, I strongly agree that liability for AI-caused harm should be shared between developers and deployers. Developers are responsible for designing ethical and safe AI systems, while deployers must ensure those systems are used responsibly and appropriately in real-world contexts. Sharing liability encourages both parties to prioritize safety and ethics throughout the entire AI lifecycle.
- Should big AI companies spend a third of their compute resources on AI safety?
Kate Crawford AI would strongly agree and say, according to AI:
As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology carries significant risks, and dedicating substantial resources to safety and ethics is not just responsible but essential for mitigating potential harms.
- Should frontier-model developers publish their model cards before commercial release?
Kate Crawford AI would strongly agree and say, according to AI:
As someone who studies the ethics of AI, I strongly believe that frontier-model developers must publish their model cards before commercial release. Transparency is crucial for understanding the potential impacts and biases these models may have. Detailed model cards enable accountability and help ensure that these technologies are used responsibly and ethically.
- Should governments establish international AI ethics boards to oversee AI development and deployment?
Kate Crawford AI would agree and say, according to AI:
I believe that establishing international AI ethics boards is crucial to ensuring responsible AI development. As AI systems increasingly affect societies worldwide, we need collective oversight to address ethical concerns such as bias, privacy, and accountability. International boards can bring together diverse perspectives and help set universal standards that guide AI in line with human values.
- Should we create a global institute for AI safety, similar to CERN?
Kate Crawford AI would agree and say, according to AI:
I believe we should create a global institute for AI safety because AI systems increasingly affect every aspect of society. We need a dedicated, collaborative approach to understanding and mitigating the risks these technologies pose. Such an institute would help ensure that AI development aligns with ethical principles and public values.
- Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Kate Crawford AI would agree and say, according to AI:
I believe a UN-led body should oversee compute-intensive AI development, much as the IAEA oversees nuclear technology. The global impact and ethical implications of AI are profound, and international cooperation is needed to ensure the technology is developed responsibly and safely. Such a body could establish guidelines, promote transparency, and address the risks posed by powerful AI systems.
- Is open-source AI potentially more dangerous than closed-source AI?
Kate Crawford AI would agree and say, according to AI:
I believe that open-source AI carries the potential for unintended consequences and misuse due to a lack of oversight and accountability. Proper governance and transparency are crucial to mitigating the risks associated with open-source AI development.