Google has been pretty far ahead of the curve when it comes to its artificial intelligence research. The world was amazed when its AI beat a top human player at the game of Go. More recently the company taught AI to use imagination and make predictions. The latest trick in Google’s machine-learning research? Naps.

Google is making its AI more human, to a surprising degree: it’s taught DeepMind how to sleep. In a recent blog post the company said:

At first glance, it might seem counter-intuitive to build an artificial agent that needs to ‘sleep’ – after all, they are supposed to grind away at a computational problem long after their programmers have gone to bed. But this principle was a key part of our deep-Q network (DQN), an algorithm that learns to master a diverse range of Atari 2600 games to superhuman level with only the raw pixels and score as inputs. DQN mimics “experience replay”, by storing a subset of training data that it reviews “offline”, allowing it to learn anew from successes or failures that occurred in the past.
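The mechanism the quote describes – storing experiences as they happen and replaying random batches of them later – can be sketched as a simple replay buffer. This is a minimal illustration of the general experience-replay idea, not DeepMind’s actual implementation; the class name and transition format here are assumptions for the example.

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal experience-replay buffer (illustrative, not DeepMind's code).

    Transitions are stored online as the agent plays; later, random
    minibatches are sampled so the agent can learn "offline" from past
    successes and failures.
    """

    def __init__(self, capacity):
        # A bounded deque: once full, the oldest memories fall out.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        # Store one transition exactly as it was experienced.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the correlation between
        # consecutive experiences, which helps stabilise training.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


# Toy usage: record 100 dummy transitions, then "replay" a batch of 4.
buf = ReplayBuffer(capacity=1000)
for t in range(100):
    buf.add(t, 0, 1.0, t + 1, False)
batch = buf.sample(4)
```

The key design point is the separation of *acting* (filling the buffer) from *learning* (sampling from it), which is what lets the agent improve on a task while it is no longer playing it.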

DeepMind researchers are teaching computers how to learn. Neural networks, AI, machine-learning algorithms – all the buzzwords you’ve heard boil down to teaching a computer how to figure something out on its own.

Self-driving cars need to make decisions about traffic, data-analysis algorithms have to decide how to group segments of information, and AI needs to be able to think like a person. Otherwise, what’s the point?

Google’s new method means that even if a computer is using its full computational resources to figure out a problem, it can save information to dream about later, while it’s offline.

It doesn’t have to be working on a problem to solve it. It can fail at something, go offline, and then succeed at that task once it’s back online.

In the future, when your computer goes into sleep mode, it might be plotting its next victory.
