You can’t fool all the people all the time, but a new dataset of unaltered nature photos seems to confuse advanced computer vision models all but two percent of the time. AI just isn’t very good at understanding what it sees, unlike humans, who can use contextual clues.

The new dataset is a small subset of ImageNet, an industry-standard database containing more than 14 million hand-labeled images in over 20,000 categories. The purpose of ImageNet is to teach AI what an object is. If you want to train a model to recognize cats, for example, you’d feed it hundreds or thousands of images from the “cats” category. Because the images are labeled, you can compare the AI’s accuracy against the ground truth and adjust your algorithms to make it better.
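That ground-truth comparison boils down to checking a model's predicted labels against the hand-assigned ones. Here's a minimal sketch; the class indices and predictions are made up for illustration, not taken from ImageNet's actual evaluation pipeline:

```python
import numpy as np

def accuracy(predictions, labels):
    """Fraction of predicted class indices that match the ground-truth labels."""
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    return float(np.mean(predictions == labels))

# Hypothetical model outputs vs. hand-assigned labels for five images.
preds = [3, 1, 4, 1, 5]
truth = [3, 1, 4, 2, 5]
print(accuracy(preds, truth))  # 0.8 — four of five images classified correctly
```

Benchmarks like the ImageNet competition report essentially this number (often as top-1 or top-5 accuracy) over tens of thousands of held-out test images.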

ImageNet-A, as the new dataset is called, is full of images of everyday objects that fool fully trained AI models. The 7,500 photographs comprising the dataset were hand-picked, but not manipulated. This is an important distinction, because researchers have shown that manipulated images can fool AI too. Adding noise or other invisible or near-invisible manipulations, called an adversarial attack, can fool most AI. But this dataset is all natural, and it confuses models 98 percent of the time.
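To see how a tiny, near-invisible perturbation can flip a classification, here's a toy gradient-sign attack (in the spirit of FGSM) against a simple linear classifier. The weights, input, and step size are all made-up values for illustration, not a real image model:

```python
import numpy as np

# Made-up weights for a toy linear "classifier" over a 3-pixel "image".
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Binary decision: class 1 if the linear score is positive, else class 0."""
    return int(np.dot(w, x) + b > 0)

x_clean = np.array([0.5, 0.2, 0.4])     # score = 0.4, so classified as 1

# Gradient-sign attack: for a linear model the score's gradient w.r.t. the
# input is just w, so step each "pixel" slightly against sign(w).
epsilon = 0.15                          # small per-pixel perturbation
x_adv = x_clean - epsilon * np.sign(w)  # score drops to -0.125, flips to 0

print(predict(x_clean), predict(x_adv))  # 1 0
```

The point of ImageNet-A is that no such perturbation step is needed: ordinary, unedited photos trip up the models on their own.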

According to the research team, which was led by UC Berkeley PhD student Dan Hendrycks, naturally occurring adversarial examples are much harder to solve than the human-manipulated variety:

Recovering this accuracy is not simple. These examples expose deep flaws in current classifiers, including their over-reliance on color, texture, and background cues.

ImageNet is the brainchild of former Google AI chief Fei-Fei Li. She started work on the project in 2006, and by 2011 the ImageNet competition was born. At first, the best teams achieved about 75 percent accuracy with their models. But by 2017 the event had effectively peaked, as dozens of teams were able to achieve higher than 95 percent accuracy.

This may sound like a great accomplishment for the field, but the past few years have shown us that what AI doesn’t know can kill us. This happened when Tesla’s ill-named “Autopilot” mistook the white trailer of an 18-wheeler for a cloud and crashed into it, resulting in the death of its driver.

So how do we stop AI from confusing trucks with clouds, or turtles with rifles, or members of Congress with criminals? That’s a tough nut to crack. Basically, we need to teach AI to understand context. We could keep making datasets of adversarial images to train AI on, but that’s probably not going to work, at least not by itself. As Hendrycks told MIT’s Technology Review:

If people were to just train on this data set, that’s just memorizing these examples. That would be solving the data set but not the task of being robust to new examples.
