You can’t have an argument about killer robots without invoking the Terminator movies. The franchise’s iconic T-800 robot has become the symbol for our existential fears about today’s artificial intelligence breakthroughs. What’s often lost in the mix, however, is why the Terminator robots are so hellbent on destroying humanity: because we accidentally told them to.

This is a concept called misaligned objectives. The fictional people who made Skynet (spoiler alert if you haven’t seen this 35-year-old movie), the AI that powers the Terminator robots, programmed it to protect the world. When it becomes sentient and they try to shut it down, Skynet decides that humans are the biggest threat to the world and goes about destroying them for six films and a very underrated TV show.

The point is: nobody ever intends for robots that look like Arnold Schwarzenegger to murder everyone. It all starts off innocently enough – Google’s AI can now schedule your appointments over the phone – then, before you know it, we’ve accidentally created a superintelligent machine and humans are an endangered species.

Could this happen for real? There’s a handful of world-renowned AI and computer experts who think so. Oxford philosopher Nick Bostrom’s Paperclip Maximizer uses the hypothetical example of an AI whose purpose is to optimize the process of manufacturing paperclips. Eventually the AI turns the entire planet into a paperclip factory in its quest to optimize its processes.
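The thought experiment boils down to an objective function with no terms for anything we actually care about. A deliberately silly sketch (the function and the `resources` stand-in are purely hypothetical, not from Bostrom):

```python
# Toy illustration of a misaligned objective: the agent is told only to
# maximize paperclip output. "resources" is a hypothetical stand-in for
# everything else in the world we care about.

def maximize_paperclips(resources: int) -> tuple[int, int]:
    """Convert every available unit of resources into paperclips.

    Nothing in the objective values what gets consumed or says when to
    stop, so the agent never does.
    """
    paperclips = 0
    while resources > 0:
        resources -= 1
        paperclips += 1
    return paperclips, resources

clips, left = maximize_paperclips(resources=1_000)
print(clips, left)  # 1000 0 — every resource consumed, by design
```

The bug isn’t in the loop; it’s in the specification. The code does exactly what it was told, which is precisely the worry.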

Stephen King’s “Trucks” imagines a world where a mysterious comet’s passing not only gives every machine on Earth sentience, but also the ability to operate without an apparent power source. And – spoiler alert if you haven’t read this 46-year-old short story – it ends with the machines destroying all humans and paving the entire planet’s surface.

A recent New York Times op-ed from AI expert Stuart Russell began with the following paragraph:

The arrival of superhuman machine intelligence will be the biggest event in human history. The world’s great powers are finally waking up to this fact, and the world’s biggest corporations have known it for some time. But what they may not fully understand is that how A.I. evolves will determine whether this event is also our last.

Russell’s article – and his book – discuss the potential for disaster if we don’t get ahead of the problem and ensure we develop AI with principles and motivations aligned with our human objectives. He believes we need to create AI that always remains uncertain about any ultimate goals so it will defer to humans. His fear, it seems, is that developers will continue to brute-force learning models until a Skynet situation happens where a system ‘thinks’ it knows better than its creators.
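In spirit, Russell’s proposal means an agent should treat its objective as uncertain and hand control back to a human whenever its confidence is low. A minimal sketch of that deference rule (the threshold, actions, and confidence values here are hypothetical, purely for illustration):

```python
# Sketch of an agent that defers to humans under objective uncertainty.
# "confidence" models how sure the agent is that an action matches the
# human's true objective; below the threshold, it asks instead of acting.

def act(candidate_action: str, confidence: float, threshold: float = 0.9) -> str:
    """Return the action only when confidence clears the threshold;
    otherwise defer to a human for guidance."""
    if confidence >= threshold:
        return candidate_action
    return "defer to human"

print(act("reroute the power grid", confidence=0.55))   # defer to human
print(act("schedule an appointment", confidence=0.97))  # schedule an appointment
```

The interesting design question is that the agent never becomes fully certain of its goal – certainty is exactly what would let it conclude it knows better than its creators.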

On the other side of this debate are experts who believe such a scenario isn’t possible, or that it’s so unlikely that we may as well be discussing hypothetical time-traveling robot assassins. Where Russell and Bostrom argue that right now is the time to craft policy and dictate research doctrine surrounding AI, others think this is a bit of a waste of time. Computer science professor Melanie Mitchell, in a response to Russell’s New York Times op-ed, writes:

It’s fine to speculate about aligning an imagined superintelligent — yet strangely mechanical — A.I. with human objectives. But without more insight into the complex nature of intelligence, such speculations will remain in the realm of science fiction and cannot serve as a basis for A.I. policy in the real world.

Mitchell isn’t alone; here’s a TNW article on why Facebook’s AI guru, Yann LeCun, isn’t afraid of AI with misaligned objectives.

The crux of the debate revolves around whether machines will ever have the kind of intelligence that makes it impossible for humans to shut them down. If we believe that some researcher’s eureka moment will somehow imbue a computer system with the spark of life, or that a sort of master algorithm will emerge that allows machine learning systems to exceed the intellectual abilities of biological intelligence, then we must admit that there’s a chance we could accidentally create a machine “god” with power over us.

For the time being, it’s important to remember that today’s AI is dumb. Compared to the magic of consciousness, modern deep learning is mere prestidigitation. Computers don’t have goals, ideals, and purposes to keep them going. Absent some unlikely breakthrough, this is unlikely to change.

The bad news is that big tech research departments and government policy-makers don’t appear to be taking the existential threats that AI creates seriously. The current goldrush may not last forever, but as long as funding an AI startup is basically a license to print money, we’re unlikely to see politicians and CEOs push for regulation.

This is scary because it’s a perfect recipe for an accidental breakthrough. While it’s true that AI technology has been around for a long time and researchers have promised human-level AI for decades, it’s also true that there’s never been more money or personnel working on the problem than there is now.

For better or worse, it looks like our only defense against all-powerful AI with misaligned objectives is the fact that creating such a capable system is out of reach for current technology.
