Nvidia’s researchers developed an AI that converts standard videos into incredibly smooth slow motion.

The broad strokes: Capturing high quality slow motion footage requires specialty equipment, plenty of storage, and setting your devices to shoot in the proper mode ahead of time.

Slow motion video is typically shot at around 240 frames per second (fps), meaning 240 individual images make up each second of video. The more frames you capture, the smoother the motion looks when the footage is slowed down.

The impact: Anyone who has ever wished they could convert part of a regular video into a fluid slow motion clip can appreciate this.

If you’ve captured your footage in, for example, standard smartphone video format (30fps), trying to slow down the video will result in something choppy and hard to watch.
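The arithmetic behind that choppiness is simple. Here is a quick back-of-the-envelope sketch (the 8x slowdown factor is just an illustrative assumption, not a figure from Nvidia):

```python
# Why slowing down 30fps footage looks choppy: the effective frame rate
# drops by the slowdown factor.
capture_fps = 30        # standard smartphone video
slowdown = 8            # play the clip back 8x slower (illustrative choice)

effective_fps = capture_fps / slowdown
print(f"30fps source slowed 8x plays at {effective_fps:.1f} fps")   # ~3.8 fps, visibly choppy

# Footage shot at 240fps survives the same slowdown at a smooth rate.
print(f"240fps source slowed 8x plays at {240 / slowdown:.1f} fps")  # 30 fps
```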

Nvidia’s AI can estimate what additional frames would look like and create new ones to fill the space. It can take any two existing consecutive frames and hallucinate an arbitrary number of new frames to connect them, ensuring any motion between them is preserved.
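To see the shape of the problem, here is a deliberately naive sketch that generates in-between frames by simply cross-fading two consecutive frames. Nvidia’s network learns the motion between frames rather than blending pixels, so treat this only as an illustration of the input/output contract; the function and frame sizes are hypothetical:

```python
import numpy as np

def naive_inbetween_frames(frame_a: np.ndarray, frame_b: np.ndarray, count: int):
    """Return `count` evenly spaced frames between frame_a and frame_b by linear blending.
    A learned interpolator would instead move objects along their motion paths."""
    frames = []
    for i in range(1, count + 1):
        t = i / (count + 1)                      # position between the two frames, 0..1
        blended = (1 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Two dummy 720p RGB frames; in practice these would be decoded from a video.
frame_a = np.zeros((720, 1280, 3), dtype=np.uint8)
frame_b = np.full((720, 1280, 3), 255, dtype=np.uint8)

# Ask for 7 in-between frames, turning one 30fps frame pair into an 8x slower sequence.
middle = naive_inbetween_frames(frame_a, frame_b, count=7)
print(len(middle), middle[0].shape)
```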

According to a company blog post:

Using Nvidia Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework, the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network predicted the extra frames.
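For readers curious what that kind of setup looks like in practice, here is a minimal PyTorch sketch of training a frame-interpolation network on frame triplets sampled from high-frame-rate footage. The model, loss, and data below are stand-ins for illustration; this is not Nvidia’s released code or architecture:

```python
import torch
import torch.nn as nn

class TinyInterpolationNet(nn.Module):
    """Toy CNN (hypothetical): takes two frames stacked on the channel axis
    and predicts the frame that should sit between them."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame_a, frame_b):
        return self.net(torch.cat([frame_a, frame_b], dim=1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyInterpolationNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Dummy batch standing in for triplets cut from 240fps clips:
# (frame_t, ground-truth middle frame, frame_t+1).
frame_a = torch.rand(4, 3, 128, 128, device=device)
target  = torch.rand(4, 3, 128, 128, device=device)
frame_b = torch.rand(4, 3, 128, 128, device=device)

# One training step: predict the middle frame and penalize the pixel error.
prediction = model(frame_a, frame_b)
loss = loss_fn(prediction, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss on dummy batch: {loss.item():.4f}")
```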

The bottom line: Nvidia’s AI research continues to push the limits of what we think is possible. It creates people out of thin air and changes the weather in videos. But it might be a while before we see anything like this embedded in our devices or available for download. The team has plenty of obstacles to overcome, and this research exists at the cutting edge of deep learning.
