MIT Lincoln Laboratory

MIT research lab for national security
  • Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
    As autonomous systems and artificial intelligence (AI) become increasingly common in daily life, new methods are emerging to help humans check that these systems are behaving as expected. One method, called formal specifications, uses mathematical formulas that can be translated into natural-language expressions. Some researchers claim that this method can be used to spell out decisions an AI will make in a way that is interpretable to humans. MIT Lincoln Laboratory researchers wanted to check such claims of interpretability. Their findings point to the opposite: formal specifications do not seem to be interpretable by humans. In the team's study, participants were asked to check whether an AI agent's plan would succeed in a virtual game. Presented with the formal specification of the plan, the participants were correct less than half of the time. “The results are bad news for researchers who have been claiming that formal methods lent interpretability to systems. It might be true in some restricted and abstract sense, but not for anything close to practical system validation,” says Hosea Siu, a researcher in the Laboratory's AI Technology Group. (2023)
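To make the idea of a formal specification concrete: the study's exact specification language and game are not described here, so the snippet below is a minimal, hypothetical sketch. It uses a two-operator fragment of linear temporal logic (a common formal-specification language), checks formulas against an agent's execution trace, and produces the kind of literal natural-language rendering whose interpretability the study questioned. All identifiers and the toy trace are illustrative assumptions, not the researchers' setup.

```python
# Hypothetical sketch, not the study's actual method: a tiny fragment of
# linear temporal logic (LTL) evaluated over a finite execution trace,
# plus a mechanical natural-language rendering of each formula.

from dataclasses import dataclass
from typing import List, Set, Union

Trace = List[Set[str]]  # each step: the set of propositions true there


@dataclass
class Eventually:  # F p: proposition p holds at some step of the trace
    prop: str


@dataclass
class Always:  # G p: proposition p holds at every step of the trace
    prop: str


Spec = Union[Eventually, Always]


def holds(spec: Spec, trace: Trace) -> bool:
    """Evaluate a specification over a finite trace."""
    if isinstance(spec, Eventually):
        return any(spec.prop in step for step in trace)
    return all(spec.prop in step for step in trace)


def to_english(spec: Spec) -> str:
    """Literal natural-language translation of the formula."""
    if isinstance(spec, Eventually):
        return f"at some point, '{spec.prop}' holds"
    return f"at every point, '{spec.prop}' holds"


# A toy agent plan: reach the goal while staying safe the whole time.
trace: Trace = [{"moving", "safe"}, {"moving", "safe"}, {"at_goal", "safe"}]
for spec in (Eventually("at_goal"), Always("safe")):
    print(f"{to_english(spec)}: {holds(spec, trace)}")
```

Even in this toy form, the translated sentences track the logic rather than an intuitive account of the plan, which is consistent with the study's finding that such renderings are hard for people to validate.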