Facebook Issues $100K Challenge to Build an AI that Can Identify Hateful Memes
Memes are now an integral part of how people communicate on the internet. While many memes can cheer you up, plenty of them are hateful and discriminatory. At the same time, AI models trained primarily on text to detect hate speech struggle to identify hateful memes. So Facebook is launching a new $100,000 challenge for developers to create models that can recognize hateful images and memes.

As part of the challenge, Facebook said it'll provide developers with a dataset of 10,000 'hateful' memes built from images licensed from Getty Images:

"We worked with trained third-party annotators to create new memes similar to existing ones that had been shared on social media sites. The annotators used Getty Images' collection of stock images to replace the original visuals while still preserving the semantic content."

In a blog post, the company explained that creating an AI model to detect hateful memes is a multimodal problem: the model has to look at the text, look at the image, and then consider the context in which the two are used in conjunction. Facebook said annotators ensured that the examples in the dataset pose a genuinely multimodal problem for the AI to solve, so some existing text-only or image-only detection models may not work out of the box. In Facebook's examples of 'mean' memes, the text and the image are each innocuous when taken separately.

Facebook is careful to open this dataset only to approved researchers. The company said the dataset contains memes of a sensitive nature, of the kind often reported on social media, including the following categories:

"A direct or indirect attack on people based on characteristics, including ethnicity, race, nationality, immigration status, religion, caste, sex, gender identity, sexual orientation, and disability or disease. We define attack as violent or dehumanizing (comparing people to non-human things, e.g., animals) speech, statements of inferiority, and calls for exclusion or segregation.
Mocking hate crime is also considered hate speech."

Detecting hate speech is a difficult problem for Facebook and other social networks, and memes add an extra layer of complexity because moderators and AI have to understand the context of the posted meme. Companies can't apply a one-size-fits-all solution, as the cultural, racial, and language-based context of memes changes very frequently. While this challenge might not ship a ready-made solution for the social networking giant, it might give the company some ideas about how to solve the problem.

You can learn more about the competition here, and you can read the accompanying paper describing methods and benchmarks here. Selected researchers will present their papers at NeurIPS 2020 in December.

Published May 13, 2020, 07:06 UTC
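The multimodal difficulty described in the article can be sketched in a few lines of toy Python. This is a hypothetical illustration, not anything Facebook actually uses: the keyword list, visual tags, and scoring functions are all made-up stand-ins. The point it demonstrates is that each modality can look benign on its own, and only late fusion of the two signals flags the meme.

```python
# Toy illustration of multimodal hateful-meme scoring.
# All vocabularies, tags, and thresholds below are hypothetical stand-ins.

def text_score(caption: str) -> float:
    """Hypothetical text classifier: fraction of caption words on a tiny watchlist."""
    watchlist = {"smell", "ugly"}  # stand-in vocabulary, not a real lexicon
    words = caption.lower().split()
    return sum(w in watchlist for w in words) / max(len(words), 1)

def image_score(tags: set) -> float:
    """Hypothetical image classifier: score based on detected visual tags."""
    sensitive_tags = {"skunk"}  # stand-in tag set
    return len(tags & sensitive_tags) / max(len(tags), 1)

def multimodal_score(caption: str, tags: set) -> float:
    """Late fusion: multiplying the scores rewards cases where BOTH modalities
    fire together, which is exactly the pattern unimodal models miss."""
    return text_score(caption) * image_score(tags)

# The same caption over two different images:
benign = multimodal_score("love the way you smell today", {"flowers", "garden"})
mean = multimodal_score("love the way you smell today", {"skunk"})
assert mean > benign  # only the text+image combination scores as hateful
```

Real entries to the challenge would, of course, use learned text and vision encoders fused into one model rather than hand-written rules, but the fusion step serves the same purpose: judging the caption and the image jointly instead of in isolation.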
This article first appeared on thenextweb.com.