Invisibility makes algorithmic influence particularly powerful. New research reveals that social media feeds can be manipulated in ways users don’t consciously perceive, yet these subtle changes produce dramatic shifts in political attitudes. A single week of slightly altered content exposure produced a shift in polarization equivalent to roughly three years of gradual societal change.
The experiment focused on X during the contentious 2024 US presidential election. Researchers built systems to score posts for anti-democratic content and partisan hostility in real time, then reordered what appeared in users’ feeds accordingly. Over 1,000 Democrats and Republicans participated, and most never realized their experience differed from normal platform operation.
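The study’s actual classification and ranking pipeline isn’t described here, but the core mechanic is straightforward to sketch: score each post for hostility, then nudge high-scoring posts down (or up) the feed. The following is a minimal Python sketch under assumed details; toy_hostility_score, rerank, and the weight parameter are hypothetical stand-ins for illustration, not the researchers’ implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str
    base_rank: int  # position assigned by the platform's own ranking

def toy_hostility_score(text: str) -> float:
    """Toy stand-in for the study's real-time classifier; the actual
    system presumably used a trained model to detect anti-democratic
    content and partisan hostility. Returns a score in [0, 1]."""
    hostile_cues = ("traitor", "enemy", "destroy", "rigged")
    hits = sum(cue in text.lower() for cue in hostile_cues)
    return min(1.0, hits / 2)

def rerank(feed: list[Post],
           score: Callable[[str], float] = toy_hostility_score,
           demote: bool = True,
           weight: float = 10.0) -> list[Post]:
    """Nudge divisive posts down (demote=True) or up (demote=False).

    Each post keeps its platform-assigned rank, adjusted by a penalty
    (or boost) proportional to its hostility score. Posts are reordered,
    not removed, so the feed still looks ordinary to the user.
    """
    sign = 1.0 if demote else -1.0
    return sorted(feed, key=lambda p: p.base_rank + sign * weight * score(p.text))

feed = [
    Post("The other party are traitors out to destroy the country", 1),
    Post("Local election office extends early-voting hours", 2),
    Post("New poll shows a tight race in three swing states", 3),
]
for p in rerank(feed):
    print(p.base_rank, p.text)  # divisive post drops from 1st to last
```

Because an intervention of this kind reorders rather than deletes content, the feed remains visually indistinguishable from an unmodified one, which is consistent with participants not noticing anything had changed.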
This lack of awareness is crucial. When people know they’re being influenced, they can consciously resist or discount the messaging. But algorithmic curation operates below the threshold of conscious perception, making its effects more difficult to counter. Users believe they’re simply seeing what’s naturally popular or relevant, unaware that invisible systems are shaping their information environment according to optimization objectives.
The psychological effects extended beyond political attitudes to emotional states. Participants exposed to more divisive content reported feeling sadder and angrier overall, suggesting that algorithmic choices affect mental health alongside political opinions. This dual impact—on both thoughts and feelings—makes algorithmic influence particularly potent and concerning for individual and societal wellbeing.
Researchers measured changes using a “feeling thermometer” that captured how warm or cold participants felt toward political opponents. Shifts exceeded two degrees on the 100-point scale, equivalent to roughly three years of the gradual rise in partisan animosity recorded between 1978 and 2020. The compression of this timeline from years to days represents a fundamental change in how political division can form and spread.
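The “three years” equivalence is a simple rate comparison. A back-of-envelope check follows, with the per-year rate assumed for illustration, since the article reports only the endpoints of the historical trend:

```python
# Back-of-envelope check of the "years of change in a week" comparison.
# ASSUMPTION: the historical rate below is illustrative; the article
# gives only the endpoints (1978-2020), not the per-year figure.
historical_rate = 0.6   # assumed avg. rise in animosity, degrees/year
observed_shift = 2.0    # degrees on the 100-point feeling thermometer

equivalent_years = observed_shift / historical_rate
print(f"{observed_shift}-degree shift ≈ {equivalent_years:.1f} years of drift")
# -> 2.0-degree shift ≈ 3.3 years of drift, consistent with the
#    "three years of gradual change" figure quoted above.
```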

