Nvidia's research team has just developed a new AI that can use an arbitrary video and just one image to make the person in the image imitate moves from the video.

Technically, the method, known as video-to-video synthesis, takes an input video – such as a segmentation mask or human poses – and turns it into a photorealistic video using an image.
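To make the data flow concrete, here is a minimal sketch of that idea: a sequence of pose maps plus one reference image goes in, a sequence of frames comes out. The functions and the toy "generator" below are hypothetical stand-ins for illustration only – Nvidia's actual system is a trained neural network, not this arithmetic.

```python
# Illustrative sketch of video-to-video synthesis as conditional,
# frame-by-frame generation. Not Nvidia's model; a toy stand-in.
import numpy as np

def generate_frame(pose_map, reference_image, prev_frame=None):
    """Toy 'generator': combine the structure of a pose map with the
    appearance (per-channel color statistics) of the reference image,
    blending in the previous output for temporal smoothness."""
    appearance = reference_image.mean(axis=(0, 1))   # (3,) mean color
    frame = pose_map[..., None] * appearance         # H x W x 3
    if prev_frame is not None:
        frame = 0.7 * frame + 0.3 * prev_frame       # temporal blending
    return frame

def video_to_video(pose_sequence, reference_image):
    """Translate a sequence of pose maps into frames that take their
    appearance from a single reference image."""
    frames, prev = [], None
    for pose_map in pose_sequence:
        prev = generate_frame(pose_map, reference_image, prev)
        frames.append(prev)
    return np.stack(frames)

# Usage: 10 toy 64x64 pose maps plus one 64x64 RGB reference image.
poses = np.random.rand(10, 64, 64)
ref = np.random.rand(64, 64, 3)
out = video_to_video(poses, ref)
print(out.shape)  # (10, 64, 64, 3)
```

The point of the sketch is the interface: motion comes from the input video, appearance comes from the single image, and each frame conditions on the previous one so the result stays temporally coherent.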

The research team said there are two major problems with the current crop of AI models trying to accomplish the same: First, these models need a trove of target images to turn them into a video. And second, the ability of these models to generalize to new subjects is limited.

To overcome these obstacles, researchers trained a new model that learns to generate videos of previously unseen humans or scenes – ones that weren't present in the training dataset – using just a few images of them. The team then tested this across various scenarios such as dance moves and talking heads. You can check out the AI in action in the video below:

The model can also be used on paintings or streets to create live avatars or digitally altered street scenes. This can be really handy for creating movies and games.
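The few-shot adaptation described above can be sketched roughly as follows: a handful of example images of an unseen subject is summarized into an embedding, which then conditions generation. The "encoder" and modulation here are illustrative stand-ins of my own, not the paper's network.

```python
# Hedged sketch of the few-shot idea: summarize K example images of an
# unseen subject into an embedding, then condition generation on it.
import numpy as np

def encode_examples(example_images):
    """Summarize K example images into one appearance embedding
    (here simply per-channel color statistics)."""
    stacked = np.stack(example_images)          # K x H x W x 3
    return stacked.mean(axis=(0, 1, 2))         # (3,) embedding

def adapted_generate(pose_map, appearance_embedding):
    """Generate one frame of the unseen subject by modulating the
    pose structure with the few-shot appearance embedding."""
    return pose_map[..., None] * appearance_embedding   # H x W x 3

# Usage: adapt to a "new" subject from just 3 example images.
examples = [np.random.rand(64, 64, 3) for _ in range(3)]
emb = encode_examples(examples)
frame = adapted_generate(np.random.rand(64, 64), emb)
print(frame.shape)  # (64, 64, 3)
```

The design point is that the generator itself stays fixed; only a small embedding changes per subject, which is why a few images suffice instead of a full training set.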

As folks in this Hacker News thread pointed out, the AI is not quite perfect, and it's hard to tell if it's getting all the details right in these low-resolution videos. However, it's a useful step in research toward generating synthetic videos.

Read next: New technology will make your web browser more private – but it might help crooks avoid justice