How Social Media Algorithms Amplify Conspiracy Theories

Published on September 8, 2025 | By Dr. Sarah Mitchell

In the digital age, social media platforms have become primary sources of information for millions of Australians. However, the recommendation algorithms that decide what these platforms show are inadvertently contributing to the spread of conspiracy theories and misinformation. Understanding how these systems work is crucial for developing effective strategies to combat false information.

The Engagement Trap

Social media algorithms are designed to maximize user engagement, keeping people on platforms for as long as possible. This creates a fundamental problem: controversial and emotionally charged content, including conspiracy theories, tends to generate more engagement than factual, nuanced information.

Research conducted by our team at Merri Mortar has shown that conspiracy-related content receives 70% more engagement on average than mainstream news articles. This higher engagement signals to algorithms that users want to see more of this type of content, creating a feedback loop that amplifies false information.
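To see how this feedback loop can emerge from nothing more than engagement ranking, consider a toy simulation. It is a minimal sketch, not any platform's real ranker: the two items and their engagement rates are hypothetical (0.085 simply encodes the roughly 70% gap noted above), and the epsilon-greedy policy is a deliberate simplification.

```python
import random

# Two items with fixed "true" engagement rates (hypothetical numbers).
items = {
    "nuanced_news": 0.05,      # probability a view becomes a like/share
    "conspiracy_post": 0.085,  # ~70% higher, matching the gap noted above
}
clicks = {name: 0 for name in items}
shows = {name: 1 for name in items}  # start at 1 to avoid division by zero

for _ in range(10_000):
    if random.random() < 0.05:  # occasional random exploration
        choice = random.choice(list(items))
    else:                       # otherwise show the best-performing item
        choice = max(items, key=lambda n: clicks[n] / shows[n])
    shows[choice] += 1
    if random.random() < items[choice]:
        clicks[choice] += 1

for name, n in shows.items():
    print(f"{name}: shown {n:,} times")
# The marginally more engaging item ends up dominating the feed.
```

Nothing in this loop "prefers" conspiracies; a small engagement edge simply compounds, session after session, into near-total dominance of the feed.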

Echo Chambers and Filter Bubbles

Recommendation algorithms create what researchers call "filter bubbles" – personalized information environments where users are primarily exposed to content that confirms their existing beliefs. When someone begins engaging with conspiracy content, the algorithm learns this preference and continues to recommend similar material.

This phenomenon is particularly dangerous because it:

  • Reinforces existing misconceptions through repeated exposure
  • Limits exposure to fact-checking and corrective information
  • Creates the illusion of widespread agreement with conspiracy theories
  • Gradually normalizes increasingly extreme viewpoints
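The narrowing itself is straightforward to reproduce. The sketch below is illustrative only: it uses made-up three-topic vectors and a generic content-based ranker (cosine similarity against a user profile that drifts toward whatever was clicked), not a reconstruction of any platform's model.

```python
import numpy as np

rng = np.random.default_rng(0)
catalog = rng.random((500, 3))   # 500 items described by 3 topic weights
profile = np.ones(3) / 3         # user starts with no strong preference

def recommend(profile, k=10):
    # Rank the catalog by cosine similarity to the user profile.
    sims = catalog @ profile / (np.linalg.norm(catalog, axis=1)
                                * np.linalg.norm(profile))
    return catalog[np.argsort(-sims)[:k]]

for _ in range(50):
    feed = recommend(profile)
    clicked = feed[0]                        # user engages with the top item
    profile = 0.9 * profile + 0.1 * clicked  # profile drifts toward clicks

# Topic spread of the final feed vs. the whole catalog: the bubble.
print("catalog topic spread:", catalog.std(axis=0).round(2))
print("feed topic spread:   ", recommend(profile).std(axis=0).round(2))
```

After fifty clicks the recommended feed spans a far narrower slice of topic space than the catalog it is drawn from, even though nothing in the system was designed to exclude anything.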

The Rabbit Hole Effect

One of the most concerning aspects of algorithmic amplification is the "rabbit hole effect" – the tendency for recommendation systems to lead users from relatively mild conspiracy content to increasingly extreme material. Our research has documented pathways where users starting with skepticism about mainstream media can be led to dangerous conspiracy theories within a matter of weeks.

For example, someone searching for alternative health information might first encounter content questioning pharmaceutical companies, then be recommended videos promoting anti-vaccine theories, and eventually find themselves consuming content about global health conspiracies. Each step seems logical within the context of the previous content, making the progression particularly insidious.
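One crude way to reason about this drift is as a biased random walk over content clusters. In the sketch below, the three stages mirror the example pathway above, and every transition probability is an invented assumption, chosen only to show how a small per-session bias toward "one step deeper" compounds.

```python
import random

# Stages mirror the example pathway; probabilities are invented for illustration.
stages = ["pharma skepticism", "anti-vaccine theories", "global health conspiracies"]

def next_stage(i):
    r = random.random()
    if r < 0.15 and i + 1 < len(stages):
        return i + 1   # nudged one step deeper
    if r < 0.20 and i > 0:
        return i - 1   # occasionally steps back toward milder content
    return i           # usually stays put

reached_deepest = 0
for _ in range(1_000):             # 1,000 simulated users
    state, reached = 0, False
    for _ in range(42):            # six weeks of daily sessions
        state = next_stage(state)
        reached = reached or state == len(stages) - 1
    reached_deepest += reached

print(f"{reached_deepest / 10:.0f}% of simulated users hit the deepest stage")
```

With only a 15% chance per session of moving one step deeper (against a 5% chance of stepping back), a large share of simulated users reach the most extreme cluster within weeks, consistent with the weeks-scale pathways described above.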

Australian Case Studies

During the COVID-19 pandemic, we observed several concerning trends in Australian social media environments:

  • Health Misinformation Networks: Algorithms connected users interested in natural health to anti-vaccine communities, creating robust networks of misinformation sharing.
  • Political Polarization: Election-related conspiracy theories were amplified through partisan echo chambers, with users receiving increasingly extreme political content.
  • Climate Denial Pipelines: Environmental skepticism was algorithmically linked to broader conspiracy movements, undermining climate science acceptance.

Solutions and Interventions

Addressing algorithmic amplification of conspiracy theories requires a multi-faceted approach:

Platform Responsibility: Social media companies must redesign their algorithms to prioritize authoritative information and break up conspiracy echo chambers. This includes implementing "circuit breakers" that interrupt recommendation patterns leading toward extreme content.
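What might such a circuit breaker look like in practice? The sketch below is one possible shape, not a description of any deployed system: it watches a sliding window of recent recommendations and, when too many come from a flagged cluster, interrupts the streak with an authoritative item. The window size, threshold, and cluster labels are all assumptions.

```python
from collections import deque

class CircuitBreaker:
    def __init__(self, window=10, threshold=7):
        self.recent = deque(maxlen=window)  # cluster labels of recent items
        self.threshold = threshold

    def filter(self, item, authoritative_pool):
        self.recent.append(item["cluster"])
        if self.recent.count("borderline") >= self.threshold:
            self.recent.clear()           # reset after tripping
            return authoritative_pool[0]  # interrupt the streak
        return item

breaker = CircuitBreaker()
authoritative = [{"title": "Health agency explainer", "cluster": "authoritative"}]
feed = [{"title": f"video {i}", "cluster": "borderline"} for i in range(12)]
for item in feed:
    print(breaker.filter(item, authoritative)["title"])
# After seven borderline items in a row, the explainer is injected.
```

The point is not these particular numbers but the mechanism: the breaker changes the trajectory of a recommendation streak without censoring any individual piece of content.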

User Education: Digital literacy programs should teach users how algorithms work and how to recognize when they're being led down problematic information pathways. Understanding the mechanics of personalization can help users make more conscious choices about their information consumption.

Diverse Information Diets: Encouraging users to actively seek out diverse perspectives and authoritative sources can help counteract algorithmic bias. This includes following fact-checking organizations, mainstream news outlets, and expert sources.
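Platforms can support the same goal algorithmically. One standard technique is diversity-aware re-ranking such as maximal marginal relevance (MMR), which scores each candidate by its relevance minus its similarity to items already selected. The sketch below applies MMR to the same kind of hypothetical topic vectors used earlier; the lambda weight and the vectors themselves are assumptions.

```python
import numpy as np

def mmr(candidates, profile, k=5, lambda_=0.6):
    """Select k items, trading off relevance against redundancy."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    selected, pool = [], list(range(len(candidates)))
    while pool and len(selected) < k:
        def score(i):
            relevance = cos(candidates[i], profile)
            redundancy = max((cos(candidates[i], candidates[j])
                              for j in selected), default=0.0)
            return lambda_ * relevance - (1 - lambda_) * redundancy
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
items = rng.random((100, 3))                 # hypothetical topic vectors
print(mmr(items, profile=np.array([1.0, 0.2, 0.2])))
```

Raising lambda_ favors pure relevance; lowering it forces the selection to spread across topics. The same trade-off could also be exposed to users directly, for example as a "show me something different" control.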

Moving Forward

The challenge of algorithmic amplification of conspiracy theories is not insurmountable, but it requires coordinated effort from platforms, policymakers, educators, and users themselves. By understanding how these systems work and implementing targeted interventions, we can begin to restore the information ecosystem's integrity.

At Merri Mortar, we continue to research these phenomena and develop evidence-based strategies for combating algorithmic bias. Our goal is not to eliminate all controversial content, but to ensure that factual, well-sourced information has a fair chance to compete in the attention economy.

The future of democracy may well depend on our ability to solve this challenge. By working together, we can create digital environments that promote critical thinking, factual accuracy, and informed public discourse.
