A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.

What it means: Fear over whether computers will soon be able to produce videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.

The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
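The article doesn’t spell out the SUNY team’s exact algorithm, but a standard way to detect blinks with computer vision is the eye aspect ratio (EAR): the ratio of an eye’s vertical landmark distances to its horizontal one collapses toward zero when the eye closes. The sketch below is illustrative, not the researchers’ implementation – the landmark ordering, 0.2 threshold, and minimum-frame count are conventional example values, and in practice the six eye landmarks would come from a face-landmark detector run on each video frame.

```python
import math

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six 2-D eye landmarks.

    `eye` is a sequence of (x, y) points ordered p1..p6: p1 and p4 are the
    horizontal eye corners, (p2, p6) and (p3, p5) are vertical pairs.
    EAR is large when the eye is open and drops sharply during a blink.
    """
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = 2.0 * math.dist(p1, p4)
    return vertical / horizontal

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series.

    A blink is a run of at least `min_frames` consecutive frames whose EAR
    falls below `threshold`. A video whose subject never blinks over a long
    stretch would yield a count of zero – the telltale sign described above.
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # handle a blink that ends at the last frame
        blinks += 1
    return blinks
```

Applied per frame, a sequence of EAR values with no dips below the threshold over many seconds of footage would be flagged as suspicious, since real humans blink regularly.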

The big deal: DeepFake is a deep learning method that makes it possible to replace the face of a person in a video with someone else’s. Upon its release to the public last year it was soon used to exploit people through the creation of fake pornography featuring celebrity faces pasted over adult film actors’ bodies.

Until now, detecting these videos has been a matter of personal expertise. If you know what to look for, you can usually determine whether a video has been faked. But the potential for dangerous abuse still exists. Furthermore, the technology used to create fake videos continues to advance at an alarming rate. Experts believe we’ll reach a point where the only way to determine whether a video was AI-generated will be through the use of advanced detection tools.

What’s next: We’re certain that someone will make an AI that can produce fake videos with humans that blink. And then a research team is going to have to figure out how to beat that one.
