MIT’s Lincoln Laboratory Intelligence and Decision Technologies Group has unveiled a neural network capable of explaining its reasoning. It’s the latest attempt at solving the black box problem, and a new tool for fighting biased AI.

Dubbed the Transparency by Design network (TbD-net), MIT’s latest machine learning marvel is a neural network designed to answer complex questions about images. The network parses a query by breaking it down into subtasks that are handled by individual modules.

If you asked it to determine the color of “the large square” in a picture showing several different shapes of varying size and color, for example, it would start by having a module capable of looking for “large” objects process the image and then display a heatmap indicating which objects it believes to be large. Next it would scan the image with a module that determines which of the large objects are squares. And finally, it would use a module to determine the large square’s color and output the answer along with a visual representation of the process by which it came to that conclusion.
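To make the idea concrete, here is a minimal sketch of that module-chaining pattern in Python. It is not MIT’s actual TbD-net code: the module names (`attend_large`, `attend_square`, `query_color`), the toy feature-map layout, and the thresholds are all assumptions made for illustration. Each module takes the image features plus the previous attention heatmap and returns a refined heatmap, so every intermediate step can be inspected.

```python
import numpy as np

def attend_large(features, prev_attention):
    # Hypothetical stand-in for a "large" attention module:
    # keep attention only where the size score (channel 0) is high.
    return prev_attention * (features[..., 0] > 0.5)

def attend_square(features, prev_attention):
    # Hypothetical stand-in for a "square" attention module:
    # keep attention only where the squareness score (channel 1) is high.
    return prev_attention * (features[..., 1] > 0.5)

def query_color(features, attention):
    # Read out the color channels at the most strongly attended cell.
    idx = np.unravel_index(np.argmax(attention), attention.shape)
    return features[idx][2:]  # remaining channels encode color

# Toy 4x4 feature map with channels [size, squareness, R, G, B].
features = np.random.rand(4, 4, 5)
attention = np.ones((4, 4))                      # start by attending everywhere

attention = attend_large(features, attention)    # heatmap of "large" objects
attention = attend_square(features, attention)   # restrict to large squares
print(query_color(features, attention))          # color at the attended cell
```

The point of the design is that each intermediate heatmap can be rendered and inspected, which is what lets the network “show its work” rather than emit a single opaque answer.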

According to an MIT press release:

The researchers evaluated the model using a visual question-answering dataset consisting of 70,000 training images and 700,000 questions, along with test and validation sets of 15,000 images and 150,000 questions. The initial model achieved 98.7 percent test accuracy on the dataset, which, according to the researchers, far outperforms other neural module network–based approaches.

A 98.7 percent accuracy rating – with the ability to show its work – is remarkable for an image recognition AI. But even more astounding is the fact that the researchers were able to use feedback from the network’s explanations of its reasoning to tweak the system and achieve a near-perfect 99.1 percent accuracy.

This isn’t the first attempt we’ve seen at taking AI out of the black box. Earlier this year TNW reported on a similar system by Facebook researchers. But the MIT network’s impressive accuracy proves that performance doesn’t necessarily have to get thrown out the window if you want transparency.

Experts agree that biased AI is among the chief technological concerns of our time. Recent research indicates that deep learning systems can develop prejudicial bias on their own. And that’s to say nothing of the dangers of embedded human bias manifesting itself in machine learning code.

The fact of the matter is that a machine with the potential to harm human lives – such as a self-driving car, or a neural network that determines sentencing for convicted criminals – shouldn’t be trusted unless it can tell us how it arrives at its conclusions.
