Facebook issues $100K challenge to build an AI that can detect hateful memes

Memes are now a basic part of how people communicate on the internet. While many of these memes can cheer you up, plenty of them are hateful and discriminatory.

At the same time, AI models that are trained primarily on text to detect hate speech struggle to identify hateful memes. So, Facebook is throwing a new $100,000 challenge to developers to create models that can recognize hateful images and memes.

As a part of the challenge, Facebook said it’ll provide developers with a dataset of 10,000 ‘hateful’ images licensed from Getty Images:

We worked with trained third-party annotators to create new memes similar to existing ones that had been shared on social media sites. The annotators used Getty Images’ collection of stock images to replace the original visuals while still preserving the semantic content.

In a blog post, the company explained that creating an AI model to detect hateful memes is a multimodal problem. The model has to look at the text, look at the image, and then consider the context of how they’re used in conjunction. Facebook said that annotators have ensured that examples in the dataset pose a multimodal problem for the AI to solve. So, some of the existing models for text or image recognition might not work out of the box.

These are some examples from Facebook of ‘mean’ memes. Taken separately, the text and images are innocuous.
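To make the multimodal point concrete, here is a minimal sketch of a late-fusion classifier in PyTorch. It is not Facebook’s actual baseline; the class name, feature dimensions, and encoders are assumptions. It simply concatenates precomputed text and image feature vectors and classifies the fused representation as hateful or not.

```python
# Minimal late-fusion sketch (hypothetical, not Facebook's baseline).
# Assumes you already have a text encoder and an image encoder that
# produce fixed-size feature vectors per meme.
import torch
import torch.nn as nn

class LateFusionMemeClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=512):
        super().__init__()
        # Fuse the two modalities by concatenating their feature vectors,
        # then classify the combined representation.
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # two classes: hateful / not hateful
        )

    def forward(self, text_features, image_features):
        fused = torch.cat([text_features, image_features], dim=-1)
        return self.classifier(fused)

# Dummy feature vectors standing in for real encoder outputs:
model = LateFusionMemeClassifier()
text_feats = torch.randn(4, 768)    # e.g. sentence embeddings of the meme text
image_feats = torch.randn(4, 2048)  # e.g. pooled CNN features of the image
logits = model(text_feats, image_feats)
print(logits.shape)  # torch.Size([4, 2])
```

A unimodal model would see only `text_feats` or only `image_feats`; the dataset is built so that such models miss cases where the hatefulness only emerges from the combination.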

Facebook is careful enough to open up this dataset only to accredited researchers. The company said the dataset contains memes of a sensitive nature often shared on social media, including the following categories:

A direct or indirect attack on people based on characteristics, including ethnicity, race, nationality, immigration status, religion, caste, sex, gender identity, sexual orientation, and disability or disease. We define attack as violent or dehumanizing (comparing people to non-human things, e.g., animals) speech, statements of inferiority, and calls for exclusion or segregation. Mocking hate crime is also considered hate speech.

Detecting hate speech is a difficult problem for Facebook and other social networks. Memes add an extra layer of complexity, as moderators and AI have to understand the context of the posted meme. Companies can’t apply a one-size-fits-all solution, as the cultural, racial, and language-based context of memes changes very frequently.

While this challenge might not deliver a readymade solution for the social network giant, it might give the company some ideas as to how to solve this problem.

You can learn more about the competition here, and you can read the accompanying paper describing methods and benchmarks here. Selected researchers will present their papers at NeurIPS 2020 in December.

Published May 13, 2020 — 07:06 UTC