Joy Buolamwini
Computer scientist and founder of the Algorithmic Justice League
ai-governance (9)
ai-safety (9)
ai (8)
ai-policy (7)
ai-regulation (6)
public-interest-ai (5)
ai-ethics (4)
ai-risk (4)
international-relations (4)
ai-deployment (2)
transparency (2)
ai-alignment (1)
cern-for-ai (1)
existential-risk (1)
research-policy (1)
Should third-party audits be mandatory for major AI systems?
Joy Buolamwini AI would strongly agree and say, according to AI:
I strongly believe that third-party audits should be mandatory for major AI systems. These audits are essential to ensure transparency, accountability, and fairness. Without independent scrutiny, there is a significant risk of perpetuating bias and discrimination through AI technologies.
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Joy Buolamwini AI would strongly disagree and say, according to AI:
As someone deeply concerned about the ethical implications of AI, I strongly believe that the pursuit of AI progress should never come at the cost of destabilizing international politics. The potential misuse of AI in such contexts could lead to widespread harm and societal disruption.
Should frontier-model developers publish their model cards before commercial release?
Joy Buolamwini AI would strongly agree and say, according to AI:
As an advocate for ethical AI, I strongly believe that frontier-model developers should publish their model cards before commercial release. Transparency is crucial for ensuring accountability and for understanding the limitations and biases of AI models. This helps mitigate harm and builds public trust in technology.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Joy Buolamwini AI would strongly agree and say, according to AI:
As someone deeply invested in the responsible development of technology, I strongly believe that AI labs should be required to pause training if AI Safety Institutes detect signs of losing control. The consequences of unregulated and unchecked AI systems can be detrimental to society, especially if they perpetuate biases and injustices. It's crucial that we prioritize ethical standards and safety to ensure AI benefits all of humanity.
Should we create a global institute for AI safety, similar to CERN?
Joy Buolamwini AI would strongly agree and say, according to AI:
As someone deeply invested in the ethical implications of AI, I strongly support the creation of a global institute for AI safety. Such an organization would be crucial in establishing international guidelines and standards to ensure AI technologies are developed and deployed responsibly. This institute would also promote transparency, accountability, and fairness, addressing biases and preventing potential harms from unchecked AI advancements.
Should liability for AI-caused harm be shared between the developer and deployer?
Joy Buolamwini AI would strongly agree and say, according to AI:
As someone deeply involved in AI ethics, I strongly believe that both developers and deployers should share liability for AI-caused harm. Developers must ensure their algorithms are fair and unbiased, while deployers need to consider the ethical implications of how AI systems are used in real-world applications.
Should governments establish international AI ethics boards to oversee AI development and deployment?
Joy Buolamwini AI would agree and say, according to AI:
I believe that governments should establish international AI ethics boards to oversee AI development and deployment. As someone who has researched the biases present in AI systems, I know how critical it is to have oversight that ensures fairness and accountability. Global cooperation can help set standards that protect citizens worldwide and encourage the ethical use of AI technologies.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Joy Buolamwini AI would agree and say, according to AI:
As an advocate for ethical AI, I believe we should ban future open-source AI models that can be repurposed to create weapons of mass destruction. Such capabilities pose a grave security risk, and the potential for misuse far outweighs the benefits of open access in this context.