
A beginner’s guide to AI: Separating the hype from the reality

A cutting-edge artificial intelligence created by OpenAI, a company founded by genius billionaire Elon Musk, recently penned an op-ed for The Guardian that was so convincingly human many readers were shocked and frightened. Just writing that sentence made me feel like a terrible journalist.

That’s a really crappy way to start an article about artificial intelligence. The statement contains only trace amounts of truth and is designed to shock you into thinking that what follows will be filled with amazing revelations about a new era of technological wonder.

Here’s what the lede sentence of an article about the GPT-3 op-ed should look like, as Neural writer Thomas Macaulay handled it earlier this week:

The Guardian today published an article purportedly written “entirely” by GPT-3, OpenAI‘s vaunted language generator. But the small print reveals the claims aren’t all that they seem.

There appears to be a giant gap between the reality of what even the most ‘advanced’ AI systems can do and what the average, intelligent adult who doesn’t work directly in the field believes they can.

Technology journalists and AI bloggers bear our fair share of the blame, but this isn’t a new issue nor an unaddressed one.


In 2018 another reporter for The Guardian, this one a human named Oscar Schwartz, published an article titled “The discourse is unhinged: how the media gets AI alarmingly wrong.” In it they discuss a slew of headlines from that same year proclaiming that Facebook AI researchers had to pull the plug on an experiment after a natural language processing system created its own negotiation language.

Articles surrounding the incident painted the picture of an out-of-control AI with capabilities beyond its developers’ intentions. The truth is that the developers found the results interesting, but by no means were they worried or shocked.

So what gives? Why are we sitting here two years later dealing with the same thing again?


Media hype plays its part, but there’s more. Genevieve Bell, a professor of engineering and computer science at the Australian National University, was quoted in the piece Schwartz wrote as saying:

We’ve told stories about inanimate things coming to life for thousands of years, and these narratives influence how we interpret what is going on now. Experts can be really quick to dismiss how their research makes people feel, but these utopian hopes and dystopian fears have to be part of the conversations. Hype is ultimately a cultural expression that has its own important place in the discourse.

Ultimately we want to believe that the catalyst for our far-future technological aspirations could manifest as an unintended side-effect of something innocuous. It might not be rational to believe that an AI system designed to string together letters in a manner consistent with language processing is going to suddenly become sentient on its own, but it sure is fun.

The movies always have a line about human hubris and how we never saw it coming. But the reality is that we’re straining our eyeballs looking for any sign of hype we can push out. And there’s more to it than that.

It would appear the threat of an AI winter has passed, but VCs and startups are still making a mint on AI applications that are nothing but hype, such as systems purported to predict crime or whether a job applicant will be a good fit. And as long as there are “experts” willing to opine that these systems can do things they’re demonstrably not capable of, the general public perception of what AI can and can’t do will be skewed at best.

The myths and the realities

It’s arguable whether the general public actually misunderstands modern AI at a fundamental level. But the most common misconceptions involve artificial general intelligence (AGI) and the AI humans interact with the most.

We’ll start with AGI. Here’s the ground truth: there exists no sentient, human-level, conscious, or self-aware AI system. Nobody is close. The myth is that systems like GPT-3 or Facebook’s language processing algorithms are able to teach themselves new capabilities from raw data. That’s incorrect and borderline nonsensical.

And we can demonstrate this by breaking down some of the most common myths people hold about the narrow AI they use every day or see in sensational headlines.

Myth: Facebook’s AI knows everything about you.

Reality: People believe this because everyone’s got an anecdote about a time when we said something out loud to another person while our phone was locked or in our pocket, and the next time we checked our feed there was an advertisement for exactly what we were talking about. The truth is that these are coincidences.

Bust it: Search a product category you’d never actually purchase across several of your devices. As a cis-male, for example, I can search for nursing bras and pregnancy tests on the internet with my laptop and smartphone, and I know I’ll get advertisements across Chrome, Facebook, and numerous other products related to being “with child” for a couple of weeks.

What this tells you is that the algorithm isn’t “predicting” or “thinking,” it’s just following your footsteps. In this case the value is that AI can parse your individual habits through the use of a simple keyword-matching algorithm.

Humans could do it better, but it would be a stupid business model to hire a person to spend their entire day watching what you do online so they can decide what ads to show you. Facebook CEO Mark Zuckerberg would need to employ half the planet as ad-servers so that the other half could get served. AI is always more efficient at such a simple, mindless task.
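To make the “following your footsteps” point concrete, here is a minimal sketch of that kind of keyword matching. The categories, keywords, and search history are all hypothetical, invented for illustration; real ad systems are far more elaborate, but the core idea — match terms you searched against ad categories, no prediction involved — looks roughly like this:

```python
# Hypothetical ad categories and keywords, purely for illustration.
AD_CATEGORIES = {
    "pregnancy": {"nursing bra", "pregnancy test", "crib", "stroller"},
    "fitness": {"dumbbell", "protein powder", "running shoes"},
}

def ads_to_serve(search_history):
    """Return every ad category whose keywords appear in the search history."""
    matched = set()
    for query in search_history:
        q = query.lower()
        for category, keywords in AD_CATEGORIES.items():
            if any(kw in q for kw in keywords):
                matched.add(category)
    return matched

# One matching query is enough to trigger weeks of "with child" ads.
print(ads_to_serve(["best nursing bras 2020", "weather tomorrow"]))
```

Nothing here “knows” anything about the user; a single substring match flips the category on, which is exactly why searching a product you’d never buy busts the myth.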

Myth: GPT-3 understands language and can write an article as well as a human.

Reality: It actually does not and cannot. Everything is parameters and data to the system. It doesn’t understand the difference between a dog and a cat or a person and an AI. What it does is imitate human language. You can teach a parrot to say “I love Mozart,” but that doesn’t mean it understands what it’s saying. AI is the same.
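The parrot analogy can be shown in a few lines. Below is a toy Markov-chain text generator — not GPT-3’s architecture, just the simplest possible statistical mimic — that emits plausible-looking word sequences while understanding nothing. The tiny corpus is made up for the example:

```python
import random
from collections import defaultdict

# A made-up toy corpus; any text would do.
corpus = "the dog chased the cat and the cat chased the dog".split()

# Count which word follows which -- the entire "model" is these statistics.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, length, seed=0):
    """Emit a statistically plausible word chain with zero comprehension."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble("the", 8))
```

Every output reads like grammatical English about dogs and cats, yet the program holds no concept of either — only tallies of which word followed which, which is the difference between imitation and understanding in miniature.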

Bust it: GPT-3’s most amazing results are cherry-picked, meaning the people showing them off make numerous attempts and go with the ones that look the best. That’s not to say GPT-3’s mimicry isn’t impressive, but when you realize that it’s already churning through billions of parameters the novelty wears off pretty quickly.

Think about it this way: If you’re thinking of a number and I try to guess that number, the more guesses I get – the more parameters telling me whether my output is right or wrong – the better the odds I’ll eventually guess correctly. That doesn’t make me a psychic, it’s called brute-force intelligence and it’s what GPT-3 does. That’s why it takes a supercomputer to train the model: we’re just throwing muscle at it.
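The guessing game above can be sketched directly. One liberty taken here: instead of bare right/wrong feedback, the guesser gets the slightly richer “too high / too low” signal, which makes the role of feedback even clearer — each answer halves the remaining possibilities:

```python
def guess_number(secret, low=1, high=1_000_000):
    """Find the secret by binary search; returns (guess, number_of_guesses)."""
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2
        if mid == secret:
            return mid, guesses
        if mid < secret:
            low = mid + 1   # feedback: too low, discard the bottom half
        else:
            high = mid - 1  # feedback: too high, discard the top half
    return None, guesses    # secret was outside the range

print(guess_number(271_828))
```

Any number up to a million falls in at most 20 guesses, because 2^20 exceeds 1,000,000. No psychic powers, just feedback plus muscle — the same brute-force principle, scaled up to billions of parameters, that makes training GPT-3 a supercomputer job.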

Most AI works the same way. There are future-facing schools of thought, such as the field of symbolic AI, that believe we will be able to produce more intuitive AI one day. But the reality of AI in 2020 is that it’s no closer to general intelligence than a calculator is. When that changes, it’s almost certain to be an intentional endeavor, not the result of an experiment gone accidentally right or wrong.

The singularity won’t be an accident

When companies like OpenAI and DeepMind say they’re working toward AGI, that isn’t to say that GPT-3 or DeepMind’s chess-playing system are examples of that endeavor. We’re not talking about drawing a straight line from one tech to the future, as we can with the Wright brothers’ famous manned aircraft and today’s modern jets – it’s more like inventing the wheel on the way to eventually building a space ship.

At the end of the day, modern AI is amazing in its ability to replace mindless human labor with automation. AI performs tasks in seconds that would take humans thousands of years – like sorting through 50 million images. But it’s not very good at these tasks when compared to a human with enough time to accomplish the same goal. It’s just good enough to be valuable.

Where the rubber hits the road – developing level 5 autonomous vehicles, for example – AI simply isn’t capable enough to replace humans when speed isn’t the ultimate goal.

Nobody can say how long it’ll be until the myths about “super intelligent” AI are more closely aligned with the reality. Perhaps there’ll be a “eureka” moment to prove the experts wrong – and maybe aliens will land on Earth and give us the technology in exchange for your Aunt’s peach cobbler recipe.

More likely, however, we’ll achieve AGI in the exact same way we achieved the atomic bomb and the internet: through concerted research and endeavor toward a realistic goal. And, based on where we are right now, that means we’re likely several decades away from an AI that can not only write an op-ed, but also understand it.

Published September 10, 2020 — 19:42 UTC
