Strengthening Democracy in the Digital Age

In a year when more than two billion people are eligible to vote in national elections, Microsoft is dedicated to bolstering election security and resilience through innovative solutions and tools for voters, candidates, political campaigns, and election authorities.

With the world in the midst of a pivotal election year, the integrity of democratic processes is threatened by AI-powered disinformation and deepfakes, tools that can deceive voters, distort discourse, and erode trust in political systems. Microsoft is working to reinforce election security and a healthy information environment with technology designed to protect both the voting process and its key players. In today's complex digital landscape, the stakes go beyond technology: what is being secured is the future of democracy itself.

A New Era of Election Threats

Headlines frequently remind us of democracy's fragility, and deepfakes are rapidly becoming one of its most formidable threats. Deepfake technology uses AI to create convincing but false images, videos, or audio, adding a new dimension to election security concerns. Recent research by Sumsub shows a ten-fold increase in deepfakes across industries globally, with North America seeing a staggering 1,740% rise in detections last year alone. More than 500,000 deepfake videos and audio clips were shared on social media worldwide in 2023, illustrating the scale of the problem.

The Rise of Deepfakes in Political Campaigns

Deepfakes make it easier than ever for malicious actors to manipulate media. What once required sophisticated skills is now accessible to anyone with an internet connection and basic AI tools. This shift not only fuels identity theft and phishing scams but also jeopardizes public trust in election processes, information, and results. The second Microsoft Threat Intelligence Election Report reveals key insights into how AI is being weaponized in elections.

How AI is Used to Deceive Voters

The report highlights several factors shaping the generative AI risk to elections in 2024:

  • AI-enhanced content is more influential than fully AI-generated content;
  • AI audio is more impactful than AI video;
  • Fake content appearing to come from a private setting such as a phone call is more effective than fake content from a public setting, such as a deepfake video of a world leader;
  • Disinformation messaging has more cut-through during times of crisis and breaking news; and
  • Impersonations of lesser-known people work better than impersonations of very well-known people such as world leaders.

Amy Larsen, Director of Global Field Engagement & Strategic Projects at Microsoft's Democracy Forward initiative, emphasized the company's commitment to combating these threats: "Microsoft has been at the forefront of fostering responsible AI development and working to ensure these new technologies are resistant to abuse. This includes a number of actions to help safeguard voters, candidates, campaigns, and election authorities in Europe and around the world."

Collaborating to Combat AI-Driven Disinformation

At the Munich Security Conference in February 2024, Microsoft joined over two dozen technology companies in signing the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. This voluntary pledge targets deepfakes in the form of video, audio, and images that fake or alter the appearance, voice, or actions of political candidates, campaigns, or election officials. By rallying industry leaders, the accord aims to make it harder for bad actors to create and distribute fake or altered media of political candidates and officials.

Microsoft has also launched deepfake detection tools and training sessions, educating thousands of political stakeholders across more than 20 countries. These efforts include the expansion of Microsoft Content Integrity tools, which allow political candidates and newsrooms to verify the authenticity of digital media. Based on the open-source standard developed by the Coalition for Content Provenance and Authenticity (C2PA), these tools provide transparency by showing when and where content was created, and whether it has been tampered with since its creation. In the Baltics, Microsoft AccountGuard offers free advanced threat detection and notification services to democratic stakeholders, protecting hundreds of accounts. Additionally, Microsoft's deepfake public awareness campaigns in Central and Eastern Europe have reached about half a million people this year.
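The provenance idea behind C2PA can be illustrated with a minimal sketch: a manifest records who created the content and when, bound to a cryptographic hash of the bytes, and signed; re-hashing the content later reveals any tampering. This is not the C2PA format or the Content Integrity API. Real Content Credentials use X.509 certificate-based signatures and a binary manifest embedded in the file; the HMAC, JSON manifest, and function names below are simplifications for illustration only.

```python
# Simplified sketch of signed content provenance (the idea behind C2PA).
# An HMAC stands in for the certificate-based signatures real C2PA uses.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for the creator's private signing key


def create_manifest(content: bytes, creator: str, created_at: str) -> dict:
    """Record who made the content and when, bound to a hash of its bytes."""
    claim = {
        "creator": creator,
        "created_at": created_at,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify(content: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and the content is unaltered."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    content_ok = claim["content_sha256"] == hashlib.sha256(content).hexdigest()
    return signature_ok and content_ok


original = b"campaign video bytes"
manifest = create_manifest(original, "Example Newsroom", "2024-05-01T12:00:00Z")
assert verify(original, manifest)             # untouched content verifies
assert not verify(b"edited bytes", manifest)  # any alteration is detected
```

Because the hash is bound into the signed claim, an attacker cannot swap in altered media without invalidating the signature, which is what lets a newsroom or voter distinguish original footage from a doctored copy.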

Evolving Threats from Nation-State Actors

Despite these efforts, democracies still face new and evolving challenges in the digital age. As Amy Larsen mentioned in a recent article, "Nation state threat actors are also upskilling, leading to the potential for increasingly sophisticated and scalable cyberattacks and disinformation campaigns." As the Microsoft Threat Analysis Center (MTAC) noted in recent reports, nation-state threat actors such as China and Russia are adopting generative AI tools as they continue to exacerbate political and social divisions in the United States and other democracies.

A Commitment to Responsible AI

Microsoft’s leadership in responsible AI development is highlighted through its partnerships and continued advocacy for balanced regulation. In collaboration with organizations like TrueMedia.org, the company is developing frameworks to address AI risks while maximizing its benefits for democratic processes. As Amy Larsen noted, “Beyond this election year, we must remain vigilant and proactive in protecting the public from abusive AI-generated content that can erode trust, spread misinformation, and harm individuals and communities. Technology is not just about the tools we create, but about the people it serves and the challenges it helps solve.”

A Path Forward: AI for Good

In a recently published white paper, Microsoft outlines its approach and policy recommendations for addressing this issue. Microsoft remains dedicated to playing its part in strengthening democracy by providing solutions and tools that help enhance the security and resilience of elections; protect vulnerable and targeted members of society such as children, women, the elderly, and individuals running for office; and bolster the integrity of the information ecosystem. We are committed to continuing this important work with stakeholders in Europe and finding new ways to protect the most vulnerable members of our society.

In the digital era, safeguarding democracy demands ongoing vigilance and collaboration from all parties involved: governments, tech companies, civil society, the media, and citizens. As nation-state threat actors increasingly use advanced technologies like AI and cyber tools to target democratic societies, it is essential for everyone to play a part in defending the democratic systems that support our individual and collective well-being.
