Duration: 1 January 2019 to 31 December 2021
Supported by: reference PGC2018-096212-B-C31

Although social media are the default channel used by people to share information, ideas and opinions, they may paradoxically contribute to the polarization of society, as we have recently witnessed in the last presidential elections in the USA and in the Brexit referendum. Every user ends up receiving only the information that matches her personal beliefs and viewpoints, with the risk of intellectual isolation (filter bubble), where beliefs may be reinforced by a message repeated inside a closed community (echo chamber). A harmful effect is that when information matches our beliefs, we tend to share it without checking its veracity. This facilitates the propagation of disinformation (fake news) and further polarization. Another harmful effect is that the relative anonymity of social media facilitates the propagation of aggressive language, hate speech and exclusion messages, which contribute to creating increasingly polarized communities and to demoting argumentation in favor of pure confrontation. In the MISMIS-FAKEnHATE project we aim at: (i) identifying fake news, taking into account the language it employs, the level of emotivity it triggers, and the multimodal information (images and video) it contains. We plan to address the identification of disinformation from a multimodal perspective, combining visual information with textual content and employing deep learning based techniques. A multilingual analysis will be carried out when the content is in different languages.
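The multimodal combination of textual and visual content described above can be illustrated with a minimal late-fusion sketch: a text embedding and an image embedding are concatenated and passed to a linear classifier. This is only an illustrative assumption about the architecture (the project does not fix one here); the embedding sizes, weights and function names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def late_fusion_score(text_emb, image_emb, w, b):
    """Score a post by concatenating its text and image embeddings
    and applying a linear classifier with a sigmoid output."""
    fused = np.concatenate([text_emb, image_emb])
    return 1.0 / (1.0 + np.exp(-(w @ fused + b)))

# Illustrative dimensions: a 4-d text embedding and a 3-d image embedding.
text_emb = rng.normal(size=4)
image_emb = rng.normal(size=3)
w = rng.normal(size=7)   # one weight per fused dimension
b = 0.0

score = late_fusion_score(text_emb, image_emb, w, b)
print(score)  # probability-like score in (0, 1)
```

In practice the embeddings would come from pretrained deep networks (one per modality) and the classifier would be trained end to end; the sketch only shows where the fusion step sits.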
Moreover, it will be important to determine whether the author of a fake news item is a bot or a person, and in the latter case to profile her demographic traits. (ii) detecting aggressive language and hate speech in social media. In order to monitor hate speech, we plan to use a terminological approach, extracting hateful terms and linguistic patterns from previously analysed hate speech datasets and systematically monitoring whether they occur in new online conversations. Hate speech will be addressed from a multilingual perspective, and machine translation techniques will be used when part of the messages have been written in a different language. From a multimodal perspective, deep learning techniques will be employed to combine visual and textual information. The problem of profiling haters, i.e., those users who write hateful messages, will also be addressed. (iii) organizing evaluation campaigns to profile the bots and humans behind disinformation, to identify hate speech in social media, and to profile haters, e.g. in 2019 at PAN (shared task on Profiling bots and gender) and at SemEval (Multilingual detection of hate speech against immigrants and women in Twitter).
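The terminological monitoring approach in aim (ii) can be sketched as a lexicon of patterns matched against incoming messages. The patterns below are hypothetical placeholders standing in for terms that would actually be extracted from analysed hate speech datasets; this is a minimal sketch of the matching step, not the project's system.

```python
import re

# Hypothetical lexicon of hateful terms and linguistic patterns, standing in
# for entries extracted from previously analysed hate speech datasets.
HATE_PATTERNS = [
    r"\bgo back to\b",
    r"\bget out of\b",
    r"\bvermin\b",
]
COMPILED = [re.compile(p, re.IGNORECASE) for p in HATE_PATTERNS]

def flag_message(text):
    """Return the lexicon patterns that occur in a message (empty if clean)."""
    return [p.pattern for p in COMPILED if p.search(text)]

# Monitoring a stream of new conversations: report any message with a hit.
messages = [
    "Great match last night!",
    "They should GO BACK TO where they came from.",
]
for m in messages:
    hits = flag_message(m)
    if hits:
        print("flagged:", m, "->", hits)
```

A real monitor would combine such lexicon hits with the multilingual and deep learning components described above, since pure keyword matching misses obfuscated or implicit hate speech.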

Members