The spread of misinformation remains a pressing concern, particularly in the context of social media. Studies reveal that a small group of “supersharers” significantly amplifies the reach of fake news, especially during critical events like elections. These individuals, often older and politically aligned, play a crucial role in shaping public perception and undermining trust in legitimate information sources. The rise of synthetic media, including deepfakes, further complicates this landscape, as it blurs the lines between reality and fabrication, leading to increased skepticism about the authenticity of digital content.
The phenomenon of “deep doubt” reflects a growing public wariness towards digital media, fueled by advancements in AI-generated imagery. This skepticism is not limited to social media; it extends to the broader media landscape, where the credibility of information is increasingly questioned. The concept of the “liar’s dividend” illustrates how deceptive technologies can erode trust in genuine evidence, allowing misinformation to thrive.
In response to these challenges, some countries are taking proactive measures. Finland has integrated digital literacy and fact-checking into its school curriculum, aiming to equip students with the skills needed to navigate the complex media environment. This educational approach emphasizes critical thinking and encourages young people to become informed consumers of information. However, the effectiveness of such initiatives depends on adequate teacher training and support.
The role of technology companies in combating misinformation is under scrutiny. Meta’s decision to halt political advertising in the EU highlights the tension between regulatory compliance and corporate interests. Critics argue that this move undermines transparency and accountability, raising concerns about the potential for unchecked misinformation on social media platforms. Similarly, a study by the European Commission found that X (formerly Twitter) has the highest proportion of disinformation among major social networks, prompting calls for stricter oversight.
The use of AI in disinformation campaigns is not limited to social media. Reports indicate that state-sponsored actors, such as those from Iran and Russia, are leveraging AI tools to create convincing fake content and manipulate public opinion. These tactics pose significant threats to democratic processes and public trust in media.
As the media landscape evolves, traditional media companies are adapting to the changing dynamics. The rise of brand publishing within investment banking illustrates a shift towards content creation as a means of enhancing corporate reputation. However, this trend raises questions about the authenticity of information and the potential for conflicts of interest.
The increasing prevalence of surveillance technology and the normalization of constant monitoring also raise critical privacy concerns. As cameras become ubiquitous, the volume of data generated necessitates the use of AI for analysis, yet this development outpaces the establishment of legal and ethical frameworks to protect individual privacy.
In this complex environment, the need for transparency, accountability, and media literacy is more urgent than ever. The interplay between technology, misinformation, and public perception underscores the challenges facing society as it navigates the digital age.
| # | name | description | change | 10-year outlook | driving force |
|---|---|---|---|---|---|
| 0 | Potential for Increased Misinformation | Political voices not being amplified due to restrictions may lead to wider spread of misinformation. | Greater spread of political misinformation unregulated by major platforms. | Possibility of a fragmented political discourse without platform moderation or support. | The desire to avoid regulation may push companies to limit essential political communication. |
| 1 | Dynamic Media Environments | Media are active processes that shape human experience and engagement. | Shift from passive consumption of media to active engagement in dynamic environments. | In 10 years, media environments will heavily influence our social interactions and personal identities. | The blending of technology with social behaviors drives engagement and transformation within media environments. |
| 2 | Invisibility of Media Effects | Media environments often operate invisibly, influencing perceptions subtly. | Increased awareness of the hidden effects of media on society and individuals. | In 10 years, people will develop better tools to understand and measure media’s subtle influences. | The need for digital literacy and critical thinking skills in an increasingly mediated world drives this change. |
| 3 | Trust in Digital Content Erosion | Erosion of trust in online content as synthetic media becomes pervasive. | Shifting from trust in traditional media to skepticism towards digital content. | Audiences will rely on verification tools and critical thinking to assess content authenticity. | Increased awareness of misinformation and the capabilities of synthetic media. |
| 4 | Deep Doubt Era | Increasing public skepticism towards the authenticity of media due to AI-generated content. | Shift from trust in media to widespread skepticism about the authenticity of visuals. | In 10 years, media consumption may prioritize transparency and verification tools over traditional sources. | The proliferation of advanced AI tools that enable easy creation of convincing fake media. |
| 5 | Liar’s Dividend Recognition | Recognition of the ‘liar’s dividend’ in public discourse regarding media authenticity. | From theoretical discussions to practical implications in everyday media consumption. | Public and legal frameworks may develop robust methods to counteract the liar’s dividend. | Growing awareness of misinformation and the need for reliable information sources. |
| 6 | Deepfake Technology in Misinformation | The use of deepfake technology to create deceptive news broadcasts. | Transition from traditional media manipulation to advanced AI-generated misinformation. | Deepfake technology could become a standard tool in political and social manipulation campaigns. | Advancements in AI and machine learning making deepfake creation accessible. |
| 7 | Shift in Journalism Standards | Major publications are prioritizing ‘content’ over traditional journalism ethics. | From quality journalism to clickbait-driven content creation. | Public trust in media may decline further, leading to alternative news sources emerging. | Audience engagement metrics increasingly dictate editorial decisions in media outlets. |
| 8 | Commercialization of Personal Image | The rise of advertising led to the commodification of individual images without consent. | From personal images being private to being used commercially without permission. | In a decade, individuals may actively monetize their own likenesses in new ways. | The growth of influencer culture and digital branding changing how images are valued. |
| 9 | Crisis of Representation | The exploitation of images sparked a crisis regarding representation and control. | From passive consumption of images to active discussions about representation and ethics. | In 10 years, representation in media may prioritize ethical considerations and consent. | Societal shifts toward equity and representation in media and advertising practices. |
| # | name | description |
|---|---|---|
| 0 | Invisibility of Media Environments | Media operates as an invisible environment affecting human behavior and societal norms, often unnoticed until significant changes occur. |
| 1 | Potential Stagnation in Media Understanding | Focusing solely on media as tools may prevent a deeper understanding and utilization of their broader societal functions and implications. |
| 2 | Erosion of Public Trust | Widespread disinformation campaigns may lead to a significant erosion of public trust in media and political institutions. |
| 3 | Disinformation Proliferation | The rise of synthetic media could lead to widespread disinformation, making it hard to distinguish truth from falsehood. |
| 4 | Media Literacy Challenges | Taiwan’s struggle to counter disinformation highlights a potential gap in media literacy among the population, making them susceptible to false narratives. |
| 5 | Exploitation of Cognitive Biases | Manipulation of psychological biases through disinformation can distort public perception and decision-making. |
| 6 | Proliferation of Misinformation | The distribution of false information may lead to increased prejudices and societal division. |
| 7 | Erosion of Trust in Media | The rise of AI-generated imagery leads to skepticism about the authenticity of all media, undermining trust in legitimate news and documentation. |
| 8 | Misinformation and Bias in Financial Content | Increased brand publishing may lead to biased information that promotes the interests of banks over factual reporting. |
| 9 | Erosion of Trust in Journalism | Brand publishing initiatives could further blur the lines between journalism and marketing, undermining trust in traditional media. |



