The Courts Are Catching Up to Big Tech's Attention Machine

For years, the business model was hiding in plain sight. Platforms built recommendation algorithms, autoplay features, and notification systems that were never really designed to serve users. They were designed to retain them. To keep people scrolling past the point of wanting to stop, to surface content that triggered emotional reactions, to make disengaging feel harder than it should.
Three recent court decisions suggest that era may finally be coming to an end.
What happened in the US
A California jury handed down a landmark verdict last week, finding Meta and YouTube liable for negligence in the design of their platforms and awarding the plaintiff $6 million in damages: $3 million compensatory and $3 million punitive. Meta was found 70% responsible, YouTube 30%. The jury also found that both companies acted with malice, oppression, or fraud in how they treated young users.
The plaintiff, a 20-year-old woman referred to throughout proceedings as Kaley, testified that she started using YouTube at age six and Instagram at nine. By the time she finished elementary school, her use had contributed to depression, body dysmorphia, and suicidal thoughts. Her lawyers argued that design features including infinite scroll, autoplay, push notifications, and cosmetic filters were engineered to hook young users, and that internal documents showed executives knew it. One internal Meta memo noted that 11-year-olds were four times more likely to return to Instagram than to competing apps, despite the platform requiring users to be at least 13.
Critically, this was not a case about content. Social media companies have long shielded themselves from liability using Section 230 of the Communications Decency Act, which protects platforms from responsibility for what users post. Kaley's lawyers deliberately targeted the design layer instead, and it worked. The verdict is the first of more than 1,500 similar cases to reach trial, making it a bellwether for what comes next. Hundreds more are scheduled in California alone, and legal experts have drawn comparisons to the tobacco litigation of the 1990s.
Two rulings in Europe
In the Netherlands, the Amsterdam District Court issued a preliminary injunction on 26 March against xAI (the company behind Grok), X Corp, and XIUC, X's EU-facing entity. The case was brought by Offlimits, a Dutch nonprofit focused on online sexual abuse, alongside victim support organisation Fonds Slachtofferhulp. Their complaint: that Grok's image editing functionality allowed users to generate non-consensual "undressing" images of real people, and to produce content qualifying as child sexual abuse material under Dutch law.
xAI had repeatedly assured the court that safeguards were in place and that such content could no longer be generated. What undermined that position was a demonstration conducted on 9 March, the same day xAI sent Offlimits a categorical denial: Offlimits uploaded a photo of a woman and, with a single prompt, produced a sexualised video without any consent check. The court found that hard to reconcile with the company's assurances, and ruled accordingly. Non-consensual undressing images were found to violate the GDPR, and facilitating child sexual abuse material was found to constitute unlawful conduct under Dutch tort law. xAI and XIUC now face fines of €100,000 per day for non-compliance, capped at €10 million per entity, and must confirm in writing, within ten working days, how they have complied.
The German case, decided a week earlier by the Higher Regional Court of Bamberg, had a different focus but a familiar underlying dynamic. TikTok's mechanism for reporting illegal content was found to be too difficult for ordinary users to locate: the correct reporting path was buried inside an inconspicuous menu option, with nothing to indicate that only this path triggers an official procedure under the Digital Services Act (DSA). The court also found TikTok's profiling-based recommender system non-compliant, because the non-personalised alternative, though technically available, was practically impossible for users to find; the DSA requires that users be able to access this option easily. The case was brought by Verbraucherzentrale Bayern, the Bavarian consumer organisation. The ruling is not yet legally binding and can still be appealed.
What these cases are really about
Stepping back across all three rulings, the pattern is less about any specific feature and more about a shared logic: that optimising relentlessly for engagement, while making it harder for users to understand or change what is being done to them, eventually runs into legal limits. The mechanisms differ: an algorithm that serves minors content calibrated to keep them scrolling, an AI tool that generates sexual imagery on demand, a reporting system deliberately buried so users cannot easily flag harm. What they share is that the platform's interests were placed well ahead of the user's ability to see what was happening or to opt out of it.
Courts are now being asked to put a legal name to that, and increasingly they are. Platform companies have long relied on the argument that their products are neutral infrastructure, and that responsibility lies with users or with the content itself. That argument is getting harder to sustain when internal documents show executives tracking teen retention rates by age group, when a nonprofit can generate prohibited content on the same day a company swears it cannot, and when a reporting button is so well hidden it might as well not exist.
Who is actually driving this
One thing worth noting is that most of this litigation is being driven not by regulators but by civil society organisations, consumer groups, and individual plaintiffs. Offlimits moved to court after concluding that regulatory enforcement was moving too slowly relative to the pace of harm. Verbraucherzentrale Bayern did not wait for the European Commission to act on TikTok's DSA compliance. In California, it was a single plaintiff's legal team that spent six weeks putting Zuckerberg on the stand.
That approach is slow and expensive, and it depends on litigants willing to take on well-resourced defendants. But it is producing results, and each ruling makes the legal ground firmer for the cases that follow. For anyone building digital products, the message coming out of Amsterdam, Bamberg, and Los Angeles is becoming harder to ignore: design decisions that deliberately exploit users are starting to carry real legal consequences.

