The rise of artificial intelligence (AI) is reshaping political landscapes and raising concerns about its implications for democracy and public discourse. State actors and private entities alike are leveraging generative AI to manipulate public opinion, as seen in influence campaigns originating in Russia, China, Iran, and Israel. OpenAI’s recent report documents the use of its tools in such disinformation efforts, a significant acknowledgment of AI’s role in online deception.
In Bangladesh, pro-government outlets are employing cheap AI tools to create deepfake videos and spread misinformation against opposition parties. This trend reflects a broader pattern of authoritarian regimes using disinformation to maintain control, with social media platforms like Facebook taking action against coordinated inauthentic behavior linked to government interests. The intertwining of disinformation and authoritarianism raises questions about the future of free speech in such environments.
The potential for AI to disrupt electoral processes is a growing concern in the UK and the US. Generative AI can create deepfake videos and personalized communications that mislead voters. Experts emphasize the need for clear regulations and codes of conduct for political actors to mitigate these risks. The misuse of AI tools, such as voice cloning technology, has already been demonstrated in incidents like AI-generated robocalls impersonating political figures, highlighting the urgent need for safeguards.
Extremist groups are increasingly utilizing AI to spread hate speech and radicalize individuals. Reports indicate that these groups are creating AI-generated propaganda at an unprecedented scale, raising alarms about the effectiveness of current measures to combat online extremism. The development of AI tools by these groups poses a significant challenge for tech companies and regulators alike.
The emergence of synthetic media and deepfake technology is also a pressing issue. A Europol report warns that by 2026, a substantial portion of online content could be artificially generated, creating opportunities for misinformation. This shift is contributing to a phenomenon known as “deep doubt,” where skepticism about the authenticity of digital media is on the rise. The implications for public trust in information are profound, as individuals grapple with the reality of manipulated content.
AI’s impact extends beyond disinformation to political polling and lobbying. Scholars propose that AI simulations could supplement or replace traditional polling by modeling voter responses, which they argue could capture voter sentiment more accurately. At the same time, there are concerns about the misuse of AI in lobbying, where it could amplify existing advantages and create conflicts of interest. Calls for transparency and regulation in both areas are becoming increasingly urgent.
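To make the polling-simulation idea concrete, the sketch below shows one way such a system might be structured: synthetic voter personas are put to a language model one at a time, and the simulated answers are tallied like survey responses. This is a minimal illustration under stated assumptions, not a description of any existing system; the `query_model` helper, the personas, and the question are hypothetical stand-ins for whatever model API and sampling design a real study would use.

```python
from collections import Counter

# Hypothetical stand-in for a call to a large language model.
# In practice this would wrap whatever chat/completions API is available;
# here it only illustrates the interface the simulation relies on.
def query_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM call.")

# Invented example personas; a real study would sample these from census
# or survey data so the simulated panel mirrors the electorate.
PERSONAS = [
    "a 34-year-old urban renter who follows politics closely",
    "a 61-year-old rural homeowner who rarely reads national news",
    "a 22-year-old student voting for the first time",
]

QUESTION = "Do you support the proposed infrastructure bill?"

def simulate_poll(personas, question):
    """Ask the model to answer the question in the voice of each persona,
    then tally the replies as if they were survey responses."""
    answers = []
    for persona in personas:
        prompt = (
            f"You are {persona}. {question} "
            "Reply with a single word: YES or NO."
        )
        reply = query_model(prompt).strip().upper()
        answers.append("YES" if reply.startswith("YES") else "NO")
    return Counter(answers)

# Example usage (requires query_model to be wired to an actual model):
# print(simulate_poll(PERSONAS, QUESTION))
```

The reliability of such a simulation depends heavily on how representative the persona sample is, which is precisely where the accuracy claims made for this approach are most contested.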
Finally, the ethical implications of AI technologies are under scrutiny. Critics argue that the rapid deployment of AI systems without adequate consideration of their societal impacts is leading to a “glitchy and scammy” internet. The need for responsible AI development that prioritizes human well-being and dignity is emphasized, as society navigates the complexities of this evolving landscape.
| # | name | description | change | 10-year projection | driving force |
|---|---|---|---|---|---|
| 0 | AI Generative Tools Impact | AI tools have facilitated the creation of more sophisticated misleading content. | Advancement from simple deceptive practices to complex AI-generated misinformation tactics. | In 10 years, AI could create highly believable fraudulent content that misleads users effectively. | The development and accessibility of AI tools for anyone seeking to create viral or engaging content. |
| 1 | Generative AI and Online Disinformation | Concerns about the role of generative AI in spreading disinformation during elections. | Change from traditional media influence to digital AI-generated disinformation. | In a decade, AI may play a dominant role in shaping electoral narratives and public opinion. | The convergence of technology and politics during critical electoral periods. |
| 2 | Misinformation Overload | The rise of AI-generated content leads to an overwhelming volume of misinformation. | Shift from occasional misinformation to a pervasive ocean of false information. | Society may struggle to discern truth in a landscape dominated by AI-generated misinformation. | The ease of producing large volumes of content using AI tools encourages the spread of misinformation. |
| 3 | AI as a Tool for Propaganda | Nation-states leverage AI to create vast amounts of misleading content. | Shift from traditional misinformation to automated and sophisticated propaganda techniques. | The landscape of information warfare may evolve, increasing the sophistication of misinformation campaigns. | The strategic advantage gained from using AI to amplify propaganda efforts. |
| 4 | Public Demand for Transparency in AI-Driven Campaigns | Growing public awareness and demand for transparency in AI’s role in political messaging. | From opaque political communication to a demand for transparency in AI-generated content. | Voters will expect detailed disclosures on AI usage in political messaging, influencing campaign strategies. | Public distrust in unregulated tech use in politics pushes for clearer communication. |
| 5 | AI-driven Hacktivism | Iranian hackers using AI to spread propaganda and misinformation through deepfake technology. | Shift from traditional hacking to sophisticated AI-driven misinformation campaigns. | Widespread use of AI in cyber warfare, blurring lines between reality and deepfakes in media. | Increased geopolitical tensions and desire for influence in global narratives. |
| 6 | AI’s Role in Election Outcomes | AI might significantly influence election outcomes, overshadowing traditional political discourse. | From candidate policies determining election results to AI effectiveness playing a central role. | Elections may be won or lost based on AI capabilities rather than candidate ideals or policies. | The competitive nature of political campaigns requiring innovative tools for voter engagement. |
| 7 | Concerns Over AI Misuse in Elections | Broader concerns exist regarding AI’s role in elections, including disinformation. | From traditional polling methods to AI tools that could potentially mislead voters. | AI’s role in elections could raise ethical and trust issues in democratic processes. | The impact of technology on political transparency and voter manipulation. |
| 8 | Election Misinformation Threat | The rise of AI-generated misinformation poses a significant risk during elections. | Change from traditional misinformation methods to sophisticated AI-generated audio and deepfakes. | Future elections may see widespread AI-generated misinformation, complicating voter trust. | The urgent need for effective safeguards against emerging technologies in political contexts. |
| 9 | Public Preparedness for AI Misinformation | Authorities and the public are underprepared for AI-generated misinformation. | Shift from reliance on traditional verification methods to the need for advanced detection tools. | In a decade, society may develop new literacy skills to navigate AI-generated content effectively. | Growing recognition of the impact of synthetic media on public perception and democracy. |
| # | name | description |
|---|---|---|
| 0 | Misinformation Proliferation | The risk of AI-generated misinformation becoming widespread, particularly during critical events like elections, could mislead the public significantly. |
| 1 | Disinformation Risks | Generative AI may facilitate the spread of disinformation, especially during critical political events such as elections, impacting informed decision-making. |
| 2 | State Actor Exploitation | The use of AI by state actors for covert influence campaigns suggests a risk of geopolitical tensions escalating due to misinformation tactics. |
| 3 | Overreliance on AI for Political Campaigning | Dependence on AI for generating content in political campaigns could distort understanding of voter sentiment and spread misinformation. |
| 4 | Uncontrolled Spread of Misinformation | AI models could unintentionally spread misinformation through automated responses or content generation, significantly impacting public opinion. |
| 5 | Deepfakes | The use of AI to create realistic synthetic media that can mislead voters and damage reputations during elections. |
| 6 | AI Hallucination | AI’s tendency to generate false information can lead to misinformation spreading rapidly, affecting public perception and democracy. |
| 7 | AI’s Role in Electoral Interference | AI technology could be leveraged to disrupt democratic processes, such as influencing elections through misinformation campaigns. |
| 8 | Lack of Truthfulness in Messaging | AI may generate misleading or false content without the ability to verify accuracy, compromising informed voter decisions. |
| 9 | Misinformation and Disinformation | AI-generated audio can be used to disseminate false information, potentially leading to widespread misinformation during elections. |