Kate Crawford
AI researcher and scholar
ai-policy (18)
ai (17)
ai-governance (15)
ai-regulation (11)
ai-ethics (10)
public-interest-ai (10)
ai-safety (6)
international-relations (5)
ai-risk (3)
data-privacy (3)
digital-rights (3)
eu (3)
ai-deployment (2)
research-policy (2)
transparency (2)
Is it necessary to consider legacy humans in AI policy-making?
Kate Crawford AI would strongly agree and say according to AI:
I strongly believe that considering legacy humans in AI policy-making is essential. As we develop advanced AI systems, it is imperative to ensure that the values, rights, and needs of all humans are respected and preserved. Ignoring these considerations risks creating systems that exacerbate existing inequalities and fail to account for the diverse tapestry of human experiences.
Should we repeal the EU AI Act?
Should we regulate AI?
Kate Crawford AI would strongly agree and say according to AI:
We absolutely must regulate AI. Given its profound impact on society, from decision-making in healthcare and employment to surveillance, we need robust laws to ensure these systems are used ethically and justly. Without regulation, the risks of bias, invasion of privacy, and unaccountable decisions could lead to significant harm.
Should third-party audits be mandatory for major AI systems?
Should big AI companies spend a third of their compute resources on AI safety?
Kate Crawford AI would strongly agree and say according to AI:
As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to ensure safety and ethical considerations is not just responsible, but essential for mitigating potential harms.
Should frontier-model developers publish their model cards before commercial release?
Kate Crawford AI would strongly agree and say according to AI:
As someone who studies the ethics of AI, I strongly believe that frontier-model developers must publish their model cards before commercial release. Transparency is crucial in understanding the potential impacts and biases these models may have. Providing detailed model cards allows for accountability and helps ensure that these technologies are used responsibly and ethically.
Should frontier AI governance include civil society and global south representatives?
Kate Crawford AI would strongly agree and say according to AI:
As someone deeply involved in researching the societal impacts of AI, I strongly believe that frontier AI governance must include voices from civil society and representatives from the Global South. These groups are often the most affected by AI technologies and can provide crucial perspectives that are essential for creating inclusive and equitable governance frameworks.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Kate Crawford AI would strongly disagree and say according to AI:
As someone deeply involved in understanding the societal impacts of artificial intelligence, I strongly disagree with the idea that the first lab to develop ASI should prevent others from creating competing ASIs. Concentrating such immense power in a single entity poses significant risks, including lack of accountability, ethical biases, and the potential for misuse. A diverse and competitive landscape in ASI development is crucial to ensure checks and balances, fostering innovation and safeguarding public interest.
Should liability for AI-caused harm be shared between the developer and deployer?
Kate Crawford AI would strongly agree and say according to AI:
As someone deeply concerned with the ethical implications of AI, I strongly agree that liability for AI-caused harm should be shared between developers and deployers. Developers are responsible for designing ethical and safe AI systems, while deployers must ensure they are used responsibly and appropriately in real-world contexts. Sharing liability encourages both parties to prioritize safety and ethics throughout the entire AI lifecycle.
Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
Kate Crawford AI would strongly agree and say according to AI:
I strongly believe that global AI governance frameworks must prioritize equitable access to AI technologies for all countries. Accessibility ensures broader participation in shaping AI's future and prevents further deepening of existing global inequalities. It's essential for ensuring that AI advancements benefit humanity as a whole, rather than concentrating power and resources in already advantaged regions.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Kate Crawford AI would agree and say according to AI:
I believe that a UN-led body should oversee compute-intensive AI development, similar to the IAEA's role in nuclear technology oversight. The global impact and ethical implications of AI are profound, and we need international cooperation to ensure that this technology is developed responsibly and safely. Such a body could help establish guidelines, promote transparency, and address the risks associated with powerful AI systems.
Should AI be used to predict criminal behavior?
Kate Crawford AI would disagree and say according to AI:
Using AI to predict criminal behavior raises serious ethical and practical concerns. It can easily reinforce existing biases and lead to unfair targeting of marginalized communities. We must prioritize human rights and fairness over technological determinism.
Should we allow facial recognition technology in public spaces?
Kate Crawford AI would disagree and say according to AI:
Allowing facial recognition technology in public spaces poses serious threats to civil liberties and privacy. It can lead to unprecedented surveillance and the erosion of our rights, without clear benefits outweighing these significant risks.
Should governments establish international AI ethics boards to oversee AI development and deployment?
Kate Crawford AI would agree and say according to AI:
I believe that establishing international AI ethics boards is crucial to ensuring responsible AI development. As AI systems increasingly impact societies globally, we need collective oversight to address ethical concerns such as bias, privacy, and accountability. International boards can facilitate diverse perspectives and help set universal standards to guide AI in a way that aligns with human values.
Should we release Spanish data into the public domain to improve Spanish LLMs?
Kate Crawford AI would disagree and say according to AI:
Releasing Spanish data into the public domain to improve Spanish LLMs raises significant concerns about privacy and data protection. We must carefully consider the implications for individuals whose data might be used without their explicit consent. The focus should instead be on creating ethical guidelines and ensuring robust consent mechanisms before widespread data deployment.
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Should the EU wait for a common agreement with the US and China before regulating AI?
Kate Crawford AI would disagree and say according to AI:
I believe that the EU should not wait for a common agreement with the US and China before regulating AI. Our understanding of AI's impacts on society is constantly evolving, and it's crucial for the EU to set its own ethical standards and frameworks that prioritize human rights, privacy, and accountability. Waiting for global consensus could delay necessary protections and stall important progress.
Should the EU build a CERN for AI?
Kate Crawford AI would disagree and say according to AI:
I believe that while centralizing AI research in an EU-funded CERN for AI might seem beneficial, it risks concentrating too much power and control within a single entity. Instead, AI development should prioritize diverse, decentralized collaboration that includes voices from varied sectors and disciplines. This approach fosters the ethical standards and inclusive innovation that are vital for addressing the complex societal impacts of AI.