Misinformation on social media has transformed news, shaping people's perceptions of events and trending stories. Be it political influence campaigns or publicity stunts designed to misguide viewers, today’s social media echoes a tale of propaganda and fake news. In the aftermath of the devastating California wildfires, many counterfeit images and videos (including AI-generated ones) circulated on social media platforms, spreading disinformation and false claims about the disaster. This shows how social media has become a conduit for propaganda, misinformation, disinformation, and fake news.
Social media concentrates attention into algorithmic echo chambers, amplifying extreme and emotionally salient content. These amplification mechanics make disinformation spread faster than corrections, increasing the chance that false claims become widely accepted by social media users. Platforms such as Facebook, Twitter, and Google have the potential to alter civic engagement, hijacking democracy by influencing how individuals think.
Social media and news channels are often seen as vehicles for giving a voice to the voiceless, and as a way of circumventing state-controlled media and content. Yet when users consume unvetted information through these same channels, the resulting distortions can strain democracy to the point where its foundations erode.
The majority of US adults now access news on platforms with weak verification protocols, shaping how they first encounter events. According to the Pew Research Center’s News Platform Fact Sheet (2024), about 54% of US adults say they at least sometimes get news from social media, and roughly one in five gets news on individual major platforms. This reliance matters because these platforms are increasingly used to amplify disruptive messages, voices, or ideologies: unlike traditional newsrooms, social media does not follow the established journalistic rules of vetting and news reporting.
Political actors exploit these dynamics by designing tailored narratives that resonate emotionally rather than factually. Empirical research on public-health crises shows misinformation can change beliefs, increase preventable harm, and reduce compliance with safety advice. Platforms' business incentives (engagement and time-on-site) bias systems toward outrage and simplification. A Time report suggests that state-backed influence operations now operate at an industrial scale, using bot networks and paid “amplifiers” to drown out rivals. Additionally, sophisticated microtargeting episodes illustrate how data harvesting enables political segmentation and personalized persuasion. Misinformation harms not just facts but collective trust in institutions, lowering civic resilience. Independent audits show platform moderation is uneven across languages and regions, creating safe havens for political misinformation.
Influencers, often untrained in verification, can act as force multipliers for false narratives. Survey evidence links exposure to social-platform misinformation with increased misperceptions about public events and policy. Disinformation campaigns exploit emotional triggers to change how people interpret the same factual event.
Political actors routinely pay for promoted content and employ networks of volunteers to amplify partisan narratives. Algorithmic feeds prioritize engagement signals, rewarding polarizing frames and rapid reactions. Microtargeted political ads use behavioral profiles to identify receptive audiences and adjust messaging in real time. Data-driven persuasion lowers the cognitive cost of accepting tailored falsehoods because the messages match prior preferences.
Regulatory gaps let influencers and political advertisers operate with limited transparency in many jurisdictions. Global surveys since 2020 show sizable partisan gaps in trust, making politically tailored misinformation more effective among already skeptical groups. Capacity constraints and inconsistent policy enforcement let harmful campaigns persist across election cycles unless platforms and regulators act in alignment.
Social media has a profound and multifaceted influence on US politics, serving as a tool for democratization and engagement while intensifying challenges like the spread of misinformation and political polarization. Social media has transformed the political landscape and contributed to a global rise of populism. The participatory role audiences play on social media has become a great opportunity for populist actors to spread political agendas. Active communities on social media disseminate hate messages to their members and distribute propaganda to recruit new ones. Perhaps social media’s deepest contribution to any political or ideological movement lies in the fact that it shapes users’ behavior.
US lawmakers have urged the Federal Trade Commission (FTC) to take enforcement action against VPN apps that mislead users and misuse user data. VPN misuse on social platforms is primarily associated with activities violating a platform’s terms of service or with illegal online behavior like fraud. We have already discussed that misinformation can lower civic resilience, and VPNs complicate this picture further: they enable content access and distribution that evades geographic moderation. Actors in countries with weak media freedom use VPNs to bypass censorship controls and spread state narratives overseas.
State actors leverage VPNs and proxy networks to publish content under false flags and evade takedowns. Cross-border coordination enables narratives to migrate across platforms and languages within hours. Disinformation actors exploit encrypted channels and VPNs to coordinate across borders beyond straightforward moderation reach. Some countries ban or restrict VPNs to control information flows, while others allow them, creating uneven global governance.
Social media manipulation has soared, with governments and political actors contributing to the spread of misinformation. Covert accounts and sockpuppet networks imitate local communities to make foreign narratives feel indigenous. Automated amplification makes repeated exposure cheap and scalable, increasing the chance that falsehoods stick.
Misinformation's effects are cumulative: small belief shifts across many individuals can reshape public opinion. Platforms’ moderation tools struggle with cross-language detection and context, limiting removal effectiveness, and responses such as labeling, demoting, or removing content reduce spread but cannot keep pace with its scale and speed. VPNs also enable the evasion of regional content takedowns, letting bad actors re-upload removed material from new endpoints.
Although law enforcement and platform cooperation have improved, gaps in international law hinder rapid takedown across borders. Media literacy programs and influencer training can reduce the spread by increasing verification norms among content creators. Global trust surveys recorded a sharp erosion of confidence in institutions as misinformation rose, magnifying the electoral stakes of online propaganda.
Stronger ad transparency, cross-platform data sharing for researchers, and harmonized VPN regulation would help fill the current enforcement gaps. Without such measures, malicious actors will keep exploiting technical seams to reframe events and polarize societies.
Social media has evolved into a news distributor in recent years, and this shift has had serious consequences for identifying what is fact and what merely gets conflated with the truth. Social media is a common news source for 18-to-29-year-old Americans, and 54% of Americans turn to social media to access some of their news. Today, platforms like YouTube, Facebook, and others have become crucial news channels.
As news migrates to social media, it accelerates changes already underway in the journalism industry. With this shift, people’s expectations of what news should look like have shifted too. When everyone is a journalist, social media gets flooded with endless content, feeding users around the clock. Sensationalized coverage and extreme opinions generate more and more misinformation, and in that environment, holding onto ethical journalism is a challenge for news providers.
Platforms like The CEO Views are committed to ethical journalism, ensuring the news shared on our social media handles is authentic and well-researched. Be it LinkedIn, X, or Facebook, we make sure our users are exposed to verified and trustworthy information.
Whatever we see on social media is determined by a content-curation algorithm whose job is to keep us hooked for as long as possible. To do that, the algorithm uses our data to decide what to show next. These algorithms reward users who share content frequently by broadcasting their posts to a large number of social feeds, earning them likes, comments, and shares. By nudging those users to post more high-performing content, the algorithm fuels a network of misinformation.
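The feedback loop described above can be sketched as a toy ranking function. This is a simplified illustration only; real platform rankers use far richer signals, and the weights and field names below are hypothetical assumptions, not any platform's actual formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    likes: int
    comments: int
    shares: int
    age_hours: float

def engagement_score(post: Post) -> float:
    """Toy engagement score: shares and comments count more than likes,
    and newer posts are boosted. Weights are illustrative, not real."""
    raw = post.likes + 3 * post.comments + 5 * post.shares
    freshness = 1.0 / (1.0 + post.age_hours)  # decay older posts
    return raw * freshness

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-engagement posts float to the top of the feed, so
    # emotionally charged, heavily shared content gets seen first.
    return sorted(posts, key=engagement_score, reverse=True)
```

Note how a heavily shared rumor outscores a calmly received factual post of the same age: the ranking optimizes for reaction, not accuracy, which is exactly the dynamic that rewards frequent sharers.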
Through transparency features, we can prevent the spread of misinformation on social media to a certain extent. Features like content labels, fact-checking notifications, source credibility indicators, “read before you share” prompts, and reporting mechanisms limit the sharing of misinformation across social networks. For example, labels like “false,” “partially false,” or “disputed” are applied to content by social media platforms to identify misinformation.
Limiting the enormous volume of misinformation shared on social media is an uphill battle, but transparency features at least give users the context they need to decide which content to follow and which to avoid.