Brookings Institution
U.S. public policy think tank
Tags: ai-governance (2), ai-safety (2), ai (1), ai-ethics (1), ai-policy (1), ai-regulation (1), ai-risk (1), cern-for-ai (1), transparency (1), trust-in-ai (1)
Should we create a global institute for AI, similar to CERN?
Brookings Institution agrees and says:
For joint R&D, recommendation R15 of the progress report called for the development of “common criteria and governance arrangements for international large-scale AI R&D projects,” citing the Human Genome Project (HGP) and the European Organization for Nuclear Research (CERN) as examples of the scale and ambition needed. “Joint research and development applying to large-scale global problems such as climate change or disease prevention and treatment can have two valuable effects: It can bring additional resources to the solution of pressing global challenges, and the collaboration can help to find common ground in addressing differences in approaches to AI.” (source unverified)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Brookings Institution disagrees and says:
Explainable AI (XAI) is often offered as the answer to the black box problem and is broadly defined as “machine learning techniques that make it possible for human users to understand, appropriately trust, and effectively manage AI.” Around the world, explainability has been referenced as a guiding principle for AI development, including in Europe’s General Data Protection Regulation. Explainable AI has also been a major research focus of the Defence Advanced Research Projects Agency (DARPA) since 2016. However, after years of research and application, the XAI field has generally struggled to realize the goals of understandable, trustworthy, and controllable AI in practice. The end goal of explainability depends on the stakeholder and the domain. Explainability enables interactions between people and AI systems by providing information about how decisions and events come about, but developers, domain experts, users, and regulators all have different needs from the explanations of AI models. These differences are not only related to degrees of technical expertise and understanding, but also include domain-specific norms and decision-making mechanisms. For now, users and other external stakeholders are typically afforded little if any insight into the behind-the-scenes workings of the AI systems that impact their lives and opportunities. This asymmetry of knowledge about how an AI system works, and the power to do anything about it, is one of the key dilemmas at the heart of explainability. (2021) source Unverified