How social media platforms use addictive design to keep you scrolling

On 25 March 2026, a Los Angeles jury found Meta and Google negligent in the design and operation of their social media platforms. After days of testimony, including Meta CEO Mark Zuckerberg on the stand, the jury awarded $6 million in damages to a 20-year-old woman, known in court as Kaley G.M., who argued that a decade of using Instagram and YouTube had caused her anxiety, depression, and body dysmorphia. The jury also found the companies had acted fraudulently and with malice, triggering a separate phase for punitive damages.
The legal strategy in the case was deliberate and significant. Rather than challenging the content users post, which is largely protected under Section 230 of the Communications Decency Act, the plaintiffs targeted how the platforms are built. Infinite scrolling, algorithmic recommendations, and autoplay features were at the centre of the argument. After reviewing internal company documents and hearing executive testimony, the jury concluded that these design choices were harmful and that the companies knew it.
"This is the first time in history a jury has heard testimony by executives and seen internal documents that we believe prove these companies chose profits over children." Joseph VanZandt, plaintiff's attorney, New York Times, March 2026
The verdict came in a bellwether case tied to roughly 2,000 pending lawsuits brought by parents, families, and school districts across the United States. Legal analysts have drawn comparisons to the Big Tobacco litigation of the 1990s. The day before, a New Mexico jury had separately ordered Meta to pay $375 million for failing to protect young users from harm on Instagram and Facebook.
A month earlier, the European Commission had issued preliminary findings that TikTok breached the Digital Services Act through what regulators formally called its "addictive design." The specific features named were the same ones at the centre of the Los Angeles trial: infinite scroll, autoplay, and push notifications. TikTok's own risk assessments had argued these were mitigated through optional screen time tools. The Commission's position was that optional controls layered on top of an addictive architecture do not constitute compliance.
What is striking is that, despite originating in completely different legal traditions, both cases converged on the same target: not what users see, but how the platforms were built to keep them there.
"The verdict validated the plaintiffs' approach of shifting the legal target; instead of focusing on the content people see on social media, the case put the spotlight on how social media services were designed." NPR, March 2026
What infinite scroll actually removes
Before infinite scroll became standard, feeds had pages. Reaching the bottom meant a small, natural decision point: click to load more, or stop. Infinite scroll removed that moment entirely.
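The difference is easy to see in code. Below is a minimal sketch, with a hypothetical feed endpoint and element names assumed for illustration: under pagination, loading more is an explicit user action; under infinite scroll, the same fetch fires automatically and the decision point never surfaces.

```typescript
// Hypothetical feed loader contrasting the two models.
// fetchPosts(page) is an assumed API returning one page of posts.

async function fetchPosts(page: number): Promise<string[]> {
  const res = await fetch(`/api/feed?page=${page}`);
  return res.json();
}

// Paginated model: loading more is an explicit user decision.
// The "Load more" click is the natural stopping point.
function setupPaginated(button: HTMLButtonElement, feed: HTMLElement) {
  let page = 0;
  button.addEventListener("click", async () => {
    const posts = await fetchPosts(page++);
    posts.forEach((p) => feed.insertAdjacentHTML("beforeend", `<article>${p}</article>`));
  });
}

// Infinite scroll: the identical fetch fires automatically whenever
// the user nears the bottom. The decision point never surfaces.
function setupInfinite(sentinel: HTMLElement, feed: HTMLElement) {
  let page = 0;
  new IntersectectionObserver
}
```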
Feeds on TikTok, Instagram, and X have no endpoint, no arrival point, no friction where the question of whether to continue would naturally surface. The ADDICT study (2026), a 55-question taxonomy of addictive platform design funded by the Chamber of Labour Vienna and conducted by the Institute for Advanced Studies, rates the absence of a feed endpoint at its highest risk level.
The ADDICT study risk ratings across platforms:
- TikTok: 44 high-risk ratings out of 55 features assessed
- Instagram: 40 high-risk ratings out of 55 features assessed
These figures reflect platforms where the vast majority of design choices push in the same direction: more time on screen, fewer natural moments to stop.
Autoplay and the asymmetry of defaults
Autoplay is built around a straightforward asymmetry: continuing requires nothing, while stopping requires a conscious act. That structure appears across TikTok, YouTube, Instagram Reels, and Netflix in various forms.
On TikTok, videos loop and scroll automatically regardless of user settings. The ADDICT study classifies this at the highest risk level, reserved for features that are present and cannot be turned off. On YouTube and Netflix, autoplay can technically be disabled, but research consistently finds most users never change default settings regardless of what they say they prefer.
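A minimal sketch makes the asymmetry concrete. Nothing here is any platform's real API; the player interface and the five-second window are assumptions for illustration. Inaction continues playback; stopping requires a deliberate act inside a short countdown.

```typescript
// Minimal autoplay sketch. `Player` and `autoplayEnabled` are
// assumptions for illustration, not any platform's real code.

interface Player {
  playNext(): void;
  showCountdown(seconds: number, onCancel: () => void): void;
}

// Note the default: autoplay is on unless the user has changed it.
let autoplayEnabled = true;

function onEpisodeEnd(player: Player) {
  if (!autoplayEnabled) return; // stopping is the exception, not the rule

  // Continuing costs nothing: after 5 seconds of inaction, play next.
  const timer = setTimeout(() => player.playNext(), 5000);

  // Stopping costs a deliberate act within the window.
  player.showCountdown(5, () => clearTimeout(timer));
}
```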
A 2025 experimental study from the University of Chicago measured the actual effect of disabling autoplay on Netflix. Researchers assigned 76 users to two groups, one with autoplay on and one with it off, and tracked their viewing across several weeks.
What they found when autoplay was disabled:
- Average session length dropped by around 18 minutes
- Users took longer between episodes
- Participants described their viewing as more deliberate and considered
The researchers described autoplay as one of the most effective attention-capture dark patterns in streaming, noting it functions not by deceiving anyone about content, but by eliminating the natural pause where a decision would otherwise happen.
Phantom notifications and sensory design
Push notifications have an obvious purpose: telling you something happened. Researchers have also documented a related category called phantom notifications, sent not because anything happened but because a user has not opened the app recently. The content of the notification is secondary to its function, which is getting the user back into the app.
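In engineering terms, a phantom notification is a re-engagement trigger keyed to inactivity rather than to an event. A hypothetical sketch, with all names and thresholds assumed for illustration:

```typescript
// Hypothetical re-engagement job: nothing here is any platform's
// real code. The trigger is the absence of activity, not an event.

interface User {
  id: string;
  lastOpenedAt: Date;
}

const INACTIVITY_THRESHOLD_MS = 48 * 60 * 60 * 1000; // 48 hours

function pickPhantomMessage(): string {
  // The content is secondary to the function of prompting a return.
  const templates = [
    "You have new recommendations waiting",
    "See what you missed today",
    "People you follow have been active",
  ];
  return templates[Math.floor(Math.random() * templates.length)];
}

// Assumed to run on a schedule (e.g. hourly) over all users.
function sendReengagementPushes(users: User[], push: (id: string, msg: string) => void) {
  const now = Date.now();
  for (const user of users) {
    if (now - user.lastOpenedAt.getTime() > INACTIVITY_THRESHOLD_MS) {
      push(user.id, pickPhantomMessage());
    }
  }
}
```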
The ADDICT study flags phantom notifications alongside the broader sensory architecture of alerts: the red badge on an app icon, the sound each platform uses, the haptic pulse on a phone screen. These were not chosen incidentally. They borrow from a design tradition in gambling machine research, where audiovisual cues are calibrated to interrupt attention and signal urgency regardless of what the user is doing.
Internal documents released during US litigation in 2025 and 2026 put numbers on how this plays out. Among TikTok's heaviest minor users, those spending more than 6 hours daily on the platform, screen time tools cut daily use by roughly 12 minutes, from approximately 6 hours to 5 hours 48 minutes, a reduction of about 3%. The documents also showed a stark difference in tool adoption depending on how the tools were set:
- When break reminders were set as defaults: adoption was high
- When break reminders required opt-in: fewer than 2% of users enabled them
- Sleep reminders set as opt-in: between 0.7% and 1.8% uptake
Tools designed to reduce time on the platform were consistently not set as defaults.
The variable reward structure behind likes
The design principle with the longest research history in this field is variable ratio reinforcement. B.F. Skinner documented it in the 1930s: when rewards arrive on an unpredictable schedule rather than a consistent one, behaviour becomes more persistent. The unpredictability, not the reward itself, is what sustains the behaviour.
Social media applies this at scale. When someone posts, the response is unpredictable in timing, quantity, and source. Checking back for likes and comments is structurally the same behaviour as pulling a lever that sometimes pays out. Loren Brichter, the designer who invented the pull-to-refresh gesture, has said publicly that he modelled it consciously on the slot machine lever. That comparison was his own, not a critic's.
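The two schedules are simple to simulate. In the sketch below (illustrative parameters, not drawn from any study), a fixed-ratio schedule rewards every tenth check while a variable-ratio schedule rewards each check with one-in-ten probability: the long-run payout is the same, but under the variable schedule any individual check might be the one that pays.

```typescript
// Simulating Skinner's schedules. Same expected reward rate;
// only the predictability differs.

// Fixed ratio: every nth check pays out. Fully predictable.
function fixedRatio(n: number): (check: number) => boolean {
  return (check) => check % n === 0;
}

// Variable ratio: each check pays out with probability 1/n.
// Same long-run rate, but any individual check might be the one.
function variableRatio(n: number): (check: number) => boolean {
  return () => Math.random() < 1 / n;
}

// Run 1,000 simulated "checks" under each schedule.
const fixed = fixedRatio(10);
const variable = variableRatio(10);
let fixedHits = 0;
let variableHits = 0;
for (let check = 1; check <= 1000; check++) {
  if (fixed(check)) fixedHits++;
  if (variable(check)) variableHits++;
}
// Both land near 100 rewards; under the variable schedule the gaps
// between rewards are irregular, which is what sustains checking.
console.log({ fixedHits, variableHits });
```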
The design of exit
The ADDICT study examined what platforms do when users try to leave.
- Attempting to log out on most major platforms produces a prompt to switch accounts instead
- Attempting to delete an account typically surfaces deactivation as the preselected default, keeping the profile intact and ready for return
These reflect the same underlying logic as every other design choice in this category: the platform reduces friction for actions that increase engagement and adds it to actions that reduce it.
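Expressed as flow logic, the asymmetry sits in the preselected default and in how many steps each path takes. A hypothetical sketch, not any platform's actual code:

```typescript
// Hypothetical exit flow. The default keeps the account recoverable;
// permanent deletion requires overriding it and clearing extra gates.

type ExitChoice = "deactivate" | "delete";

function startExitFlow(userChoice?: ExitChoice): string[] {
  // Preselected default: the profile stays intact, ready for return.
  const choice = userChoice ?? "deactivate";

  const steps = ["open settings", "find account controls"];
  if (choice === "deactivate") {
    steps.push("confirm once"); // low-friction path
  } else {
    // High-friction path: override the default, then clear extra gates.
    steps.push("override preselected option", "re-enter password",
               "dismiss retention prompt", "wait out grace period");
  }
  return steps;
}

console.log(startExitFlow());         // deactivation: 3 steps
console.log(startExitFlow("delete")); // deletion: 6 steps
```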
The Commission's TikTok findings made this explicit: compliance with the Digital Services Act, regulators concluded, requires modification of core design features, not the addition of optional controls sitting on top of systems designed to maximise retention.
Who is responsible for this?
The dominant response to heavy social media use has framed it as a personal problem: diagnoses of "internet use disorder," recommendations for "digital detox," and advice about self-discipline. The ADDICT study argues this framing places the burden entirely on users while the structural cause sits with the platforms that built the environment.
Philosopher Jesper Aagaard raised a related point in a 2020 paper in Phenomenology and the Cognitive Sciences. Calling compulsive technology use "addiction," he argued, can pathologise behaviour that is better understood as habit formation in environments specifically built to form habits. His point was not that the design is harmless, but that the addiction label shifts attention toward individuals when the more actionable target is the design itself.
"While tech addiction is bad and must be eliminated, good tech habits can be trained and cultivated." Aagaard, Phenomenology and the Cognitive Sciences (2020)
At FairPatterns, we believe that fundamental human rights like freedom, dignity, and privacy should not "dissolve in the digital world". Addictive design is a predatory practice, preying on the people these platforms pretend to serve. That's why we spent three years of R&D creating the concept of "fair patterns" (interfaces that empower users to make their own free and informed choices) and building a multimodal AI that scans sites, apps, and social media to find and fix dark patterns and addictive design.
We're building the Human Safety Tech architecture that is now indispensable for protecting humans online and in their interactions with AI.
Sources: ADDICT Study, IHS Vienna (2026); Flayelle et al., Nature Reviews Psychology (2023); Schaffner et al., University of Chicago (2025); Aagaard, Phenomenology and the Cognitive Sciences (2020); European Commission TikTok DSA preliminary findings (February 2026); Knight-Georgetown Institute, TikTok litigation analysis (2026); Meta and Google negligence verdict, Los Angeles Superior Court (March 25, 2026); NPR coverage of the verdict (March 25, 2026)

