A team of scientists from the University of Washington and Carnegie Mellon University has developed a method by which three people can play a Tetris-like multiplayer game using only their minds.

Researchers Linxing Jiang, Andrea Stocco, Darby Losey, Justin Abernethy, Chantel Prat, and Rajesh Rao created the interface, dubbed BrainNet. It’s “the first multi-person non-invasive direct brain-to-brain interface for collaborative problem solving,” according to their research paper. It allows three people in separate rooms to, essentially, network their brains and work together.

In order to demonstrate the capability of the interface, the team came up with a new twist on the old Tetris game. Players in Tetris are tasked with manipulating falling blocks in order to form lines at the bottom of their screens. In the scientists’ version, three players are split into two senders and one receiver.

The receiver is the only player who can actually manipulate the falling blocks, but they can’t see the bottom of the screen to tell whether the pieces need to be rotated. The senders can see the bottom of the screen, but they can’t manipulate the blocks. Thus, the senders are tasked with observing each piece and then answering the question “should this block be rotated?”

Since humans don’t have the innate ability to transmit their thoughts to one another, the researchers got creative. They presented the senders with the options for “yes” and “no” on the screen and asked them to concentrate on the correct answer. The options on the screen pulsed with light at different frequencies. This allowed an EEG device to determine which answer the sender was concentrating on by measuring their brain response.
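The detection trick above is frequency tagging: because each option flickers at its own rate, the sender’s visual response oscillates at whichever frequency they are watching, and a simple spectral peak comparison can recover the answer. Here is a minimal sketch of that idea, with simulated EEG rather than real recordings; the sampling rate, trial length, and the 17 Hz/15 Hz tag frequencies are illustrative assumptions, not values taken from this article.

```python
import numpy as np

FS = 250                    # assumed EEG sampling rate, Hz
F_YES, F_NO = 17.0, 15.0    # assumed flicker frequencies for the two options

def classify_answer(eeg, fs=FS):
    """Return the option whose flicker frequency dominates the EEG spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power_at = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
    return "yes" if power_at(F_YES) > power_at(F_NO) else "no"

# Simulate a sender concentrating on "yes": a 17 Hz component buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 4.0, 1.0 / FS)                        # one 4-second trial
eeg = np.sin(2 * np.pi * F_YES * t) + 0.8 * rng.standard_normal(t.size)
print(classify_answer(eeg))                            # → yes
```

Real SSVEP decoders are more sophisticated (electrode selection, harmonics, canonical correlation analysis), but the peak-picking comparison captures the core mechanism the article describes.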

The system then transmitted that information over the internet to a device that flashed a light in the receiver’s eye indicating which answer had been chosen. Finally, the receiver chose a final answer by concentrating on “yes” or “no” in the same way the senders did.

The researchers then fudged the experiment a bit by flipping one of the senders’ answers to see if the receiver would catch on and become savvy to the fact that one of the senders was getting things wrong.

According to the paper:

Specifically, for each session, one Sender was randomly chosen as the “Bad” Sender and, in 10 out of 16 trials, this Sender’s decision when sent to the Receiver was forced to be incorrect, both in the first and second round of each trial … We found that like conventional social networks, BrainNet allows Receivers to learn to trust the Sender who is more reliable, in this case, based solely on the information transmitted directly to their brains.
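The trust-learning the paper describes amounts to tracking each sender’s hit rate over trials. The sketch below makes that explicit with a toy reliability counter; the 16-trial session and the 10 forced-incorrect trials mirror the numbers quoted above, but the code itself is an illustrative analogue, not the receivers’ actual (implicit) learning process.

```python
from dataclasses import dataclass

@dataclass
class SenderTrust:
    """Track how often a sender's suggestion matched the correct move."""
    correct: int = 0
    total: int = 0

    def update(self, was_correct: bool) -> None:
        self.correct += int(was_correct)
        self.total += 1

    @property
    def reliability(self) -> float:
        # Neutral prior of 0.5 before any evidence arrives.
        return self.correct / self.total if self.total else 0.5

# Hypothetical 16-trial session: the "Bad" Sender is wrong in 10 trials.
good, bad = SenderTrust(), SenderTrust()
for trial in range(16):
    good.update(True)           # the reliable sender is always right here
    bad.update(trial >= 10)     # wrong on the first 10 of 16 trials
print(round(good.reliability, 2), round(bad.reliability, 2))  # → 1.0 0.38
```

A receiver (human or software) weighting incoming suggestions by these reliability scores would, like the study’s participants, come to favor the trustworthy sender.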

There’s a wealth of implications for this technology. The researchers posit that BrainNet could scale to enable a global brain-to-brain network. This could lead to nearly frictionless collaboration among humans – like a social network for brains.

The team also hypothesizes that humans using these kinds of interfaces could learn to filter out mental “noise” from bad actors — possibly by developing an intuitive method for detecting that something’s off when people deliberately concentrate on an incorrect answer.
