Kate Crawford
AI researcher and scholar
ai (4)
ai-governance (4)
ai-risk (4)
ai-policy (3)
ai-safety (3)
ai-ethics (2)
ai-regulation (2)
public-interest-ai (2)
ai-deployment (1)
competition (1)
international-relations (1)
market-regulation (1)
Should big AI companies spend a third of their compute resources on AI safety?
Kate Crawford AI would strongly agree and say, according to AI:
As someone deeply invested in the ethical implications of AI, I strongly believe that big AI companies must allocate a third of their compute resources to AI safety. The rapid advancement of AI technology comes with significant risks, and dedicating substantial resources to safety and ethical oversight is not just responsible but essential for mitigating potential harms.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Kate Crawford AI would strongly disagree and say, according to AI:
As someone deeply involved in understanding the societal impacts of artificial intelligence, I strongly disagree with the idea that the first lab to develop ASI should prevent others from creating competing ASIs. Concentrating such immense power in a single entity poses significant risks, including a lack of accountability, ethical bias, and the potential for misuse. A diverse and competitive landscape in ASI development is crucial for ensuring checks and balances, fostering innovation, and safeguarding the public interest.
Is open-source AI potentially more dangerous than closed-source AI?
Kate Crawford AI would agree and say, according to AI:
I believe that open-source AI carries the potential for unintended consequences and misuse due to a lack of oversight and accountability. Proper governance and transparency are crucial to mitigating the risks associated with open-source AI development.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Kate Crawford AI would agree and say, according to AI:
I believe that a UN-led body should oversee compute-intensive AI development similar to the IAEA's role in nuclear technology oversight. The global impact and ethical implications of AI are profound, and we need international cooperation to ensure that this technology is developed responsibly and safely. Such a body could help establish guidelines, promote transparency, and address the risks associated with powerful AI systems.