Why humans and AI are stuck in a stalemate against fake news

Fake news is a plague on the global community. Despite our best efforts to combat it, the problem lies deeper than just fact-checking or squelching publications that specialize in misinformation. The popular thinking still tends to favor an AI-powered solution, but what does that really mean?

According to recent research, including this paper from scientists at the University of Tennessee and the Rensselaer Polytechnic Institute, we’re going to need more than just clever algorithms to fix our broken discourse.

The problem is simple: AI can’t do anything a person can’t do. Sure, it can do plenty of things faster and more efficiently than people – like counting to a million – but, at its core, artificial intelligence only scales things people can already do. And people really suck at identifying fake news.

According to the above researchers, the problem lies in what’s called “confirmation bias.” Basically, when a person thinks they know something already, they’re less likely to be swayed by a “fake news” tag or a “dubious source” description.

Per the team’s paper:

In two consecutive studies, using data collected from news consumers through Amazon Mechanical Turk (AMT), we study whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations users are more receptive to the advice of the AI, and further, under this condition tailored advice is more effective than generic one.

This makes it extremely difficult to design, develop, and train an AI system to spot fake news.

While most of us may think we can spot fake news when we see it, the truth is that the bad actors creating misinformation aren’t doing so in a void: they’re better at lying than we are at telling the truth. At least when they’re saying something we already believe.

The scientists found people – including actual Amazon Mechanical Turk workers – were more likely to wrongly view an article as fake if it contained information contrary to what they believed to be true.

On the flip side, people were less likely to make the same mistake when the news being presented was considered part of a novel news situation. In other words: when we think we know what’s going on, we’re more likely to agree with fake news that lines up with our preconceived notions.

While the researchers do go on to identify several methods by which we can use this information to shore up our ability to inform people when they’re presented with fake news, the gist of it is that accuracy isn’t the issue. Even when the AI gets it right, we’re still less likely to accept a real news article when the facts don’t line up with our personal bias.

This isn’t surprising. Why should someone trust a machine built by big tech in place of the word of a human journalist? If you’re thinking: because machines don’t lie, you’re absolutely wrong.

When an AI system is built to identify fake news, it typically has to be trained on prior data. In order to teach a machine to recognize and flag fake news in the wild, we have to feed it a mixture of real and fake articles so it can learn how to spot which is which. And the datasets used to train AI are usually labeled by hand, by humans.
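To make the training process concrete, here is a minimal sketch of the idea: a toy classifier that learns word frequencies from a tiny hand-labeled set of articles, then labels new text by whichever class shares more of its vocabulary. The training examples and function names are hypothetical; real systems use far larger datasets and proper statistical models, but the dependency on human-assigned labels is the same.

```python
from collections import Counter

# Hypothetical hand-labeled training set: humans assigned these labels
# first, which is exactly where human bias enters the pipeline.
TRAIN = [
    ("scientists publish peer reviewed study on vaccines", "real"),
    ("official report confirms election results", "real"),
    ("miracle cure doctors dont want you to know", "fake"),
    ("shocking secret the government is hiding from you", "fake"),
]

def train(examples):
    """Count word frequencies per label -- a toy stand-in for training."""
    counts = {"real": Counter(), "fake": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label new text by which class shares more of its words."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "shocking miracle cure they are hiding"))  # prints "fake"
```

Note that the model can only ever reflect the labels it was given: if the annotators' confirmation bias leaks into `TRAIN`, the classifier scales that bias rather than correcting it.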

Often this means crowd-sourcing labeling duties to a third-party cheap-labor outfit such as Amazon’s Mechanical Turk or any number of data shops that specialize in datasets, not news. The humans deciding whether a given article is fake may or may not have any actual experience or familiarity with journalism and the tricks bad actors can use to create compelling, hard-to-detect fake news.
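Crowd-sourced labels are typically reconciled by simple majority vote, which means a shared bias among annotators becomes the "ground truth." The sketch below, with hypothetical article IDs and votes, shows the usual first step for aggregating disagreeing workers:

```python
from collections import Counter

# Hypothetical crowd labels from three Mechanical Turk-style workers;
# disagreement on article_3 reflects the annotators' own judgments.
labels = {
    "article_1": ["fake", "fake", "real"],
    "article_2": ["real", "real", "real"],
    "article_3": ["fake", "real", "real"],
}

def majority_vote(votes):
    """Resolve annotator disagreement by simple majority."""
    return Counter(votes).most_common(1)[0][0]

gold = {article: majority_vote(votes) for article, votes in labels.items()}
print(gold)  # prints {'article_1': 'fake', 'article_2': 'real', 'article_3': 'real'}
```

If two of three workers share the same misconception, the vote enshrines it: the dissenting (possibly correct) label is simply discarded before the model ever sees the data.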

And, as long as humans are biased, we’ll continue to see fake news thrive. Not only does confirmation bias make it difficult for us to differentiate facts we don’t agree with from lies we do, but the persistence and acceptance of outright lies and misinformation from celebrities, our family members, peers, bosses, and the highest political offices makes it difficult to convince people otherwise.

While AI systems can certainly help identify egregiously false claims, especially those made by news outlets that consistently engage in fake news, the fact remains that whether or not a news article is true isn’t really an issue for most people.

Take, for instance, the most watched cable network on television: Fox News. Fox News’ own lawyers have repeatedly argued in court that numerous programs – including the second highest-viewed program on its network, hosted by Tucker Carlson – are actually fake news.

In a defamation case against Carlson, U.S. District Judge Mary Kay Vyskocil — a Trump appointee — ruled in favor of Carlson and Fox, determining that reasonable people wouldn’t take the host’s typical rhetoric as truthful:

The “‘general tenor’ of the show should then inform a viewer that [Carlson] is not ‘stating actual facts’ about the topics he discusses and is instead engaging in ‘exaggeration’ and ‘non-literal commentary.’ … Fox persuasively argues, that given Mr. Carlson’s reputation, any reasonable viewer ‘arrive[s] with an appropriate amount of skepticism’.

And that’s why, under the current news paradigm, it may be impossible to create an AI system that can definitively determine whether any given news story is true or false.

If the news outlets themselves, the general public, elected officials, big tech, and the so-called experts can’t decide whether a given news article is true or false without bias, there’s no way we can trust an AI system to do so. As long as the truth remains as subjective as a given reader’s politics, we’ll be inundated with fake news.

Published March 16, 2021 — 21:57 UTC
