Why developing AI to defeat us may be humanity’s only hope

One glance at the state of things and it’s apparent humanity has painted itself into a corner. On the one hand, we’re smart enough to create machines that learn. On the other, people are dying in Texas because elected officials want to keep the government out of Texas. Chew on that for a second.

What we need isn’t a superhero, it’s a better villain.

Humans fight. Whether you believe it’s an intrinsic part of our animal psyche or that we’re capable of restraint but unwilling, the fact that we’re a violent species is inescapable.

And it doesn’t appear that we’re getting better as we evolve. Researchers from the University of Iowa conducted a review of the existing research covering ‘human aggression’ in 2002 and their findings, as expected, painted a pretty nasty picture of our species:

In its most extreme forms, aggression is human tragedy unsurpassed. Hopes that the horrors of World War II and the Holocaust would produce a worldwide revulsion against killing have been dashed. Since World War II, homicide rates have actually increased rather than decreased in a number of industrialized countries, most notably the United States.

The rational end game for humanity is self-wrought extinction. Whether via climate change or mutually assured destruction through military means, we’ve entered a deadlock against progress.

Luckily for us, humans are highly adaptive creatures. There’s always hope we’ll find a way to live together in peace and harmony. Typically, these hopes are abstract – if we can just solve world hunger with a food replicator like the ones in Star Trek then maybe, just maybe, we can achieve peace.

But the entire history of humanity is evidence against that ever happening. We are violent and competitive. After all, we have the resources to feed everyone on the planet right now. We’re just choosing not to.

That’s why we need a better enemy. Choosing ourselves as our greatest enemy is self-defeating and stupid, but nobody else has stepped up. We’re even starting to kick the coronavirus’ ass at this point.

Simply put: we need the aliens from the movies to come down and just beat the crap out of us.

Or… killer robots

Just to be clear, we’re not advocating for extraterrestrials to come and annihilate us. We just need to focus all of our adaptive intelligence on an enemy other than ourselves.

In artificial intelligence terms, we need a real-world generative adversarial network where humans are the learners and aliens are the discriminators. That’s pretty much the plot of Independence Day, the 1996 film starring Will Smith (spoilers):

  • Humans are so war-like that two handsome men, Will Smith and Harry Connick Jr, are forced to become warfighters in the military
  • Aliens, apparently having been watching, decide this is a travesty and come to destroy us
  • Humans band together in one united front and defeat the aliens

The aliens created a problem and challenged us to solve it. The only solution was optimization. We optimized and solved the problem, thus creating an acceptable output. Anything less than total cooperation and our species would have failed to pass the discriminator’s test and the aliens would have swatted our attack away like a cosmic Dikembe Mutombo.
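For the curious, here’s a minimal, tongue-in-cheek sketch of that adversarial loop in Python. It isn’t a real generative adversarial network – there are no neural networks here, and the cooperation threshold, learning rate, and round count are invented purely for illustration – it just captures the dynamic described above: the discriminator rejects anything short of total cooperation, and each rejection nudges the “generator” a little closer to passing the test.

```python
# Toy illustration of the adversarial loop described above – not a real GAN.
# The threshold, learning rate, and round count are made-up values.

def discriminator(cooperation: float, threshold: float = 0.99) -> bool:
    """The 'aliens': accept nothing short of near-total cooperation."""
    return cooperation >= threshold


def train(rounds: int = 100, learning_rate: float = 0.1) -> float:
    """The 'generator' (humanity) keeps adjusting until it passes the test."""
    cooperation = 0.0  # we start out busy fighting each other
    for round_number in range(1, rounds + 1):
        if discriminator(cooperation):
            print(f"Round {round_number}: passed the discriminator's test")
            break
        # Each rejection is feedback that pushes us toward total cooperation
        cooperation += learning_rate * (1.0 - cooperation)
    return cooperation


if __name__ == "__main__":
    print(f"Final cooperation level: {train():.3f}")
```

Run it and the loop converges after a few dozen rounds – the point being that a well-designed adversary only has to keep raising the bar.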

The real problem

Humans are often evil, bigoted, and full of malice. But we’re still people. The real problem is that aliens won’t just cooperate and come attack us. It’d be hard to focus on things like Brexit, a US election, or whether Google’s latest plan to deal with ethics in AI is a good one if aliens were currently firing lasers at cities all over the world.

We can’t control aliens. In fact, it’s possible they don’t even exist. Aliens are not dependable enemies.

We do, however, have complete control over our computers and artificial intelligence systems. And we should absolutely start teaching them to continuously challenge us.

The easy solution

With AI, we can dictate how capable an adversary it becomes with smart, well-paced development. We could avoid the whole shooting lasers at cities part of the story and just slowly work our way toward the rousing part where we all work together to win.

The current paradigm for AI development is creating things that help us. And maybe doing so in a vacuum is what’s hurting us. Our soldiers use AI to help them fight other humans. Our educators and business leaders use AI to adapt classrooms and workplaces. Some of this is good on its surface, some of it’s supposedly for the greater good.

But how much easier can we possibly make it to be a human before we get a full-blown case of Wall-E Syndrome? We should be developing artificial intelligence that challenges every one of us in tandem with life-saving and life-affirming AI.

Take the domain of Chess, for example. There’s no longer any question whether humans or AI dominate the Chess board. The greatest human Chess players can be beaten by robots running on smartphone processors; ladies and gentlemen, it’s a wrap. And that’s a good thing.

For centuries we’ve used Chess as an analogy for war strategy. Now, if we get into a fight with a future evil, sentient AI, we know it’ll probably be able to best us tactically.

We need to focus on training our troops to fight an enemy that’s stronger and more capable than humans and, more importantly, on developing defense methods that don’t rely on killing any humans, but on protecting all of them.

Humans clearly crave challenge. We compete for sport and for fun. The only reason billionaires and trillionaires exist is because sheer human hubris makes the idea of being “the best” seem like a better idea than being “the best for us all.”

Perhaps a technology shift toward developing ways to challenge humans in ways we can’t challenge one another could turn the tides back in our favor.

Imagine a video game that continuously challenged you in deeply personal ways or a workplace evaluation system that adapted to your unique personality and experiences, thus always pushing you to do your best work.

The overarching goal would be to create a system by which tribalism is replaced with cooperation. Our DNA itself seems laden with humanity’s birth trauma and we’ve spent the last 5,000 years (at least, per recorded history) coming up with ways to work around that. But, with concentrated redirection, maybe our fondness for fighting could become a strength for our species.

Maybe we need an AI adversary to be our “Huckleberry” when it comes to the urge for competition. If we can’t make most humans non-violent, then perhaps we could direct that violence toward a tangible, non-human adversary we can all feel good about defeating.

We don’t need killer robots or aliens for that. All we need is for the AI community and humanity at large to stop caring about making it even easier to do all the violent things we’ve always done to each other and to start giving us something else to do with all those hostile intentions.

Maybe it’s time we stopped fighting against the idea of robot overlords, and came up with some robot overlords to fight.

Will Smith could not be reached for comment.

Published February 18, 2021 — 20:11 UTC
