From Online Hate Speech to Deep Fakes: Shared Challenges and Solutions Around Elections
The Internet and digital media have revolutionized virtually all aspects of our lives, including how we elect our leaders. But the online platforms that expand political participation and inclusion can also be used to fuel discrimination and incite violence. So, what constitutes “responsible” online behavior in elections and how can it be brought about? Where is the line between hate speech and freedom of speech? And who gets to decide?
Online tools have enabled participation in elections on an unprecedented scale. But these tools have also been used to spread disinformation and hate speech, fuel discrimination and incite violence. We all have a broad sense of what “responsible” online behavior in politics and elections looks like. And yet, defining this more precisely and agreeing on how to bring it about have turned out to be quite complex, mostly because doing so raises difficult questions: Who sets the rules? Who decides whether a negative comment about an electoral opponent, or criticism of the electoral process, is acceptable? Is what someone might perceive as hate speech simply someone else’s freedom of speech?
This is one of the emerging issues considered in the latest report of the Secretary-General on strengthening the role of the United Nations in enhancing periodic and genuine elections and the promotion of democratization (A/76/266). Other topics addressed include holding elections during public health crises; women’s political participation; and the impact of climate disruptions on elections. The report also raises important considerations regarding inclusive electoral processes, including the participation of persons with disabilities, young people, indigenous peoples and civil society.
To tackle the harmful use of social media, the report suggests that multiple actors can, and should, play a role. “The Secretary-General’s report recognizes that addressing responsibility for online content in and around elections is a balancing act — to foster political participation and protect human rights, but at the same time to ensure the space is safe and that any regulations do not impose undue restrictions”, says Craig Jenness, Director of the Electoral Assistance Division of the UN Department of Political and Peacebuilding Affairs.
Politicians and codes of conduct
The primary responsibility for a successful, peaceful election lies with candidates and other political leaders. Indeed, such leaders have a powerful influence on public discourse and on the perceptions of their followers about elections and their outcomes. Their words have a particularly wide reach and resonance online. Messages of discontent or of hatred against opponents can be perceived as an incitement to action, and when these reach large, loyal audiences there is a real risk that violence may follow.
Codes of conduct for candidates, politicians and political parties can therefore be important prevention measures to counter the spread of disinformation and promote a conducive election environment. The Secretary-General’s report encourages candidates and other political leaders from across the spectrum to come together and mutually commit to standards of responsible behaviour in elections. Earlier this year, for example, the United Nations supported an ethics agreement concluded between the two presidential run-off candidates in São Tomé and Príncipe, under which they committed not to publish false or defamatory allegations against the other candidate or their representatives. The public signing of the agreement helped inspire trust and calm in the days leading up to voting.
In this digital age, such political commitments also need to apply online and can take many forms: for example, agreeing to refer only to verified sources of election information; to abstain from hate speech and incitement to violence, including harassment against women; to refrain from knowingly conveying false or misleading information; and to reject manipulated content, as well as leaked or stolen digital material. United Nations electoral assistance in this area could comprise both technical support and political engagement.
These codes can also apply to media, social media platforms, the electorate and others. In Uruguay, for example, the United Nations supported political parties in 2019 as they agreed to an ethical pact against disinformation, committing them not to generate or promote false news or disinformation campaigns.
Disseminating and protecting verified election information
But it is not only political leaders who have a role to play in tackling the harmful use of social media. Electoral authorities are increasingly having to address the online spread of incorrect and misleading messages about the voting process itself, such as people’s eligibility to vote or the time and place of polling, as well as unsubstantiated claims of voter fraud or fabricated election results. To address this, some have partnered with technology and social media companies to support official electoral messages or connect voters with the correct information. In Brazil, the Superior Electoral Court (TSE) worked with Instagram and Twitter to direct the platforms’ users to official sources of information on the country’s 2020 elections, and partnered with WhatsApp to develop a chatbot that answered election-related questions. The TSE also worked with fact-checking agencies to establish a website exposing misleading electoral information.
Social media platforms are also developing ways for electoral authorities to identify problematic or misleading election-related content. In Australia, Azerbaijan and Somalia, for example, Facebook enabled official pages of electoral management bodies to include a verified badge, which confirms a page’s authenticity.
Mechanisms can also be established for citizens to directly submit complaints of alleged disinformation. For the 2019 elections in South Africa, a platform set up to receive public complaints was connected to the South African Electoral Commission, with submissions considered by a panel of experts who offered recommendations for possible further action.
Partnerships with, and among, civil society organizations can be effective in fact-checking and in monitoring abuses and misinformation. In Peru in 2021, the social media counter-disinformation network Ama Llula (“You will not lie”) was established by media outlets with United Nations support. The initiative verified content on social networks and messaging applications, including in two indigenous languages.
Regulation and restriction
There may be circumstances where informal, voluntary initiatives are insufficient to manage online behavior. Governments, too, may therefore play a role by adopting regulatory and legal responses to hate speech that are carefully balanced against the right to freedom of expression and the right of access to information. International expert guidance formulated under United Nations auspices in recent years helps legislators to identify when speech crosses over into incitement to hatred, and places particular weight on the status of the speaker. Social media companies can also play a role by developing policies to monitor online harassment and hate speech. This could lead, and in some cases already has led, to platforms reducing access to or removing content that constitutes incitement to violence or discrimination. An inclusive approach appears to offer the best prospect for arriving at sound legislative and policy initiatives.
Together, these efforts can contribute to building greater resilience online and an environment conducive to holding peaceful and credible elections. Digital tools, and the ways they are used or misused, will continue to evolve; responses therefore need not only to address current risks but also to look ahead to impending challenges, such as “deep fakes” powered by artificial intelligence and their potential impact on future electoral processes. The United Nations, where requested and appropriate, can help to navigate such complexities within its established framework of electoral assistance.