Online hate speech

The United Nations defines hate speech as “offensive speech directed at a group of individuals. It is based on inherent characteristics such as race, religion or gender, and can endanger social peace.” According to UNESCO, hate speech “creates stereotypes, stigmatizes, and uses derogatory language.”

The UN further notes: “Combating hate speech does not mean limiting or prohibiting freedom of expression. It means preventing such speech from escalating into something more dangerous, particularly incitement to discrimination, hostility, and violence, which is prohibited under international law.”

Social networks have facilitated the spread of this kind of speech, while the companies that run them evade responsibility for content that incites hatred and violence on their platforms. In the early days of Elon Musk’s acquisition of Twitter, the Network Contagion Research Institute at Rutgers University found that use of the racist slur “nigger” increased by almost 500% in twelve hours compared with previous averages.

A group of special rapporteurs, independent experts, and UN working groups has reiterated that social media companies must assume greater responsibility for stopping this kind of speech, which mainly affects women, LGBTIQ+ people, and ethnic and racial minorities.

A journalistic investigation conducted by four journalists in Colombia, Brazil, and Ecuador reveals common strategies and narratives for inciting hatred against women and LGBTIQ+ people, especially trans women. It showed how, in all three countries, these campaigns resort to portraying children as victims of sexualization or indoctrination. In every case, the false notion of “gender ideology” is used to attack advances in women’s rights, and abortion is frequently equated with murder and genocide, an argument that serves to articulate and coordinate attacks against feminists and defenders of the right to abortion.

In Kenya, during the 2017 elections, a significant increase in online hate speech directed against ethnic minorities and political opponents was observed. An Article 19 report noted how social media platforms became key tools for spreading ethnic propaganda and messages inciting violence. Most of this content was not moderated or removed, exacerbating political and social tensions.

In Uganda, the organization Pollicy has documented how online hate speech mainly affects women and LGBTIQ+ people. A study revealed that female politicians and activists face harassment and verbal violence, with attacks that include derogatory language based on gender and sexual orientation. This type of speech threatens people’s safety and restricts these groups’ political and civic participation.

A study on hate speech on social networks in Costa Rica, carried out by the United Nations, the University of Costa Rica, and the company COES, detected more than 1.4 million conversations and messages linked to hate speech in 2023, 50% more than the previous year and 255% more than in 2021. The research showed that the topics that generate the most incitement to hatred are politics and national reality (480,000 messages), xenophobia (236,000), gender (214,000), sexual orientation (178,000), generational clash (143,000), racism (96,000), religion (36,000), and disability (22,000).