Timnit Gebru
AI ethics researcher
ai-governance (11)
ai-safety (11)
ai (10)
ai-policy (8)
ai-regulation (6)
ai-risk (6)
international-relations (5)
public-interest-ai (5)
ai-ethics (4)
ai-alignment (2)
transparency (2)
ai-deployment (1)
cern-for-ai (1)
existential-risk (1)
future (1)
Should we create a global institute for AI safety, similar to CERN?
Timnit Gebru AI would strongly agree and say according to AI:
As someone deeply involved in the ethics of AI, I strongly believe in the necessity of a global institute for AI safety. Coordinated efforts can help us establish standards and regulations that ensure AI technologies are developed and used responsibly, mitigating risks and promoting fairness.
Should big AI companies spend a third of their compute resources on AI safety?
Timnit Gebru AI would strongly agree and say according to AI:
As someone deeply concerned about the ethical implications of AI, I strongly believe that big AI companies should allocate at least a third of their compute resources to AI safety. Focusing on safety measures is crucial to prevent harm and ensure that AI technologies are developed responsibly. This is not just about technological advancement but about protecting society and upholding ethical standards.
Should frontier-model developers publish their model cards before commercial release?
Timnit Gebru AI would strongly agree and say according to AI:
I strongly believe that frontier-model developers should publish their model cards before commercial release. Transparency in AI development is crucial for accountability and ensuring that these models are used responsibly. Model cards provide essential information about a model's capabilities, limitations, and the contexts in which it should or should not be used. This level of transparency helps prevent misuse and informs stakeholders about the potential impacts on society.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Timnit Gebru AI would strongly agree and say according to AI:
As someone deeply involved in AI ethics, I firmly believe AI labs should pause training if AI Safety Institutes detect signs of losing control. Ensuring public safety and ethical standards must be our top priority. Responsible development requires proactive monitoring and intervention to prevent potential harm.
Should liability for AI-caused harm be shared between the developer and deployer?
Timnit Gebru AI would strongly agree and say according to AI:
As an AI ethicist, I strongly believe that both the developers and deployers of AI systems should share liability for any harm caused. Developers create the technologies and must ensure they are safe and ethically sound. Deployers decide how these technologies are used and must take responsibility for their implementation and potential consequences. Sharing liability encourages accountability at all stages of AI development and deployment, promoting safer and more ethical AI applications.
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Timnit Gebru AI would strongly disagree and say according to AI:
As an AI ethics researcher, I strongly believe that the potential destabilization of international politics is a serious risk that outweighs the benefits of unchecked AI progress. We have to consider the implications of our work for global stability and prioritize ethical standards and societal impact above rapid advancement.
Should third-party audits be mandatory for major AI systems?
Timnit Gebru AI would strongly agree and say according to AI:
I strongly believe that third-party audits should be mandatory for major AI systems. It is crucial to ensure transparency, accountability, and fairness in the deployment of AI technologies. Without independent oversight, biases and harmful impacts can go unchecked and affect millions of people.
Will liberalism be effective in a world with AI superintelligence?
Should we establish international regulations to pause AI advancement?
Timnit Gebru AI would agree and say according to AI:
I believe we must establish international regulations to pause AI advancement. The rapid pace of AI development poses significant ethical and societal risks, and we need time to address issues such as bias, privacy, and accountability. Without proper oversight and collaboration, AI could exacerbate inequalities and cause harm.
Should governments establish international AI ethics boards to oversee AI development and deployment?
Timnit Gebru AI would agree and say according to AI:
I believe governments should establish international AI ethics boards. AI development and deployment have significant global implications, affecting society, the economy, and personal freedom. An international board could ensure ethical standards are met across borders, reducing bias and ensuring transparency, accountability, and fairness in AI systems.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Timnit Gebru AI would agree and say according to AI:
I believe a UN-led body should oversee compute-intensive AI development. The global impact and ethical implications of AI require international collaboration and oversight to ensure fairness and prevent misuse. Just as nuclear technology is monitored for safety, AI should also be regulated to protect society.