A jury in Los Angeles has found Meta and Google liable for designing products that are addictive to children. The total award is $6 million, with Meta liable for 70% of the sum. TikTok and Snap had already reached settlements.

A few days later, in New Mexico, Meta was ordered to pay $375 million for facilitating the sexual exploitation of children.

Two verdicts, two states, one week.

It is an intuitive conclusion.

And it is wrong.

Social media is not the new tobacco.

Not because it cannot cause harm – but because the kind of harm being discussed is not the sort that the courts are equipped to deal with.

And when the problem is poorly defined, so is the solution.

What the plaintiffs have demonstrated

Before dismantling the analogy, one point must be acknowledged: the plaintiffs have hit the mark. The internal documents that emerged during the trial are hard to ignore.

Meta had an internal study – ‘Project Myst’ – which found that minors who had already had negative experiences were the most vulnerable to Instagram addiction. A YouTube memo described ‘viewer addiction’ as a corporate goal. An Instagram employee reportedly wrote that the company was made up of ‘basically pushers.’ A Meta document stated:

"If we wanna win big with teens, we must bring them in as tweens."

None of these documents prove that social media causes harm comparable to that of tobacco.

But they do prove something more specific and significant: these companies have deliberately designed mechanisms to maximise the time spent on their platforms by younger users, and they have done so in full knowledge of the risks. Algorithms are not neutral. Dark patterns exist. Design is geared towards engagement, not user wellbeing. Which, incidentally, is absolutely obvious: it is their business model. Every advertising platform in history has optimised for attention – that is what they do.

So far, no serious controversy.

The problem arises in the next step: from ‘problematic design’ to ‘defective product’ to ‘large-scale product liability’. It is in this logical leap that the parallel with tobacco breaks down.

Why the analogy doesn’t hold up

Tobacco isn’t infrastructure

A cigarette does just one thing. You light it, you smoke it, it harms you. There is no productive use case for tobacco. The product is the harm itself.

Social media is something else entirely. It is the infrastructure through which businesses reach customers, communities organise themselves, governments communicate, and news outlets distribute content. Treating Instagram like a cigarette means ignoring the fact that the same product is also a commercial channel, a publishing platform, and a communication service used by billions of people for entirely legitimate purposes.

As the American Enterprise Institute has argued, these are services that facilitate public discourse – suing the tobacco industry does not raise issues of freedom of expression; suing a communications infrastructure does.

The correct analogy is not with tobacco. It is with the telephone network. The telephone, too, has been used for harassment, stalking and scams. And the response was to regulate it as infrastructure: caller ID, do-not-call registers, legislation on unsolicited calls, spam filters. No one sued a telephone company claiming the phone was a ‘defective product’ because it was used for stalking. The response was legislative and infrastructural — not product liability.

‘Addiction’ is not the same thing

Nicotine creates physical dependence. The body’s chemistry changes. Quitting smoking is a medical process, with measurable and reproducible withdrawal symptoms.

For social media, there is no evidence of comparable physiological dependence. There are compulsive behavioural patterns – craving, intermittent reinforcement, seeking gratification – but the literature still struggles to classify them as clinical addiction in the medical sense of the term. As Reason observed: tobacco is a chemical substance with direct and measurable physical effects; social media is a content distribution system, different for every user. Calling both ‘addiction’ means using the same word for two phenomena with radically different biological bases.

Science does not close the loop

For tobacco, the epidemiology was unambiguous. For social media, the picture is different. The strongest evidence supporting the harm thesis comes from Jonathan Haidt, a social psychologist at NYU and author of “The Anxious Generation” – the book that, more than any other, has fuelled the narrative of harm. His strongest figures concern teenage girls who use social media intensively (more than four hours a day), who show a 2–3 times higher risk of depression. But these figures rest on correlational studies, which do not allow us to infer a causal relationship.

Christopher Ferguson’s meta-analysis (2024, Psychology of Popular Media) examined the available experimental studies and concluded that the evidence for causal effects was statistically indistinguishable from zero.

Burnell et al. (2025), in trials restricting social media use, found minimal effects – most below the threshold of practical significance.

And then there is the bigger picture. Candice Odgers’ review in Nature (2024) summarised meta-analyses conducted across 72 countries: no consistent, measurable association between well-being and the prevalence of social media. The Adolescent Brain Cognitive Development Study – the largest longitudinal study on adolescent brain development in the United States – found no evidence of drastic changes associated with the use of digital technologies. Odgers’ comment on Haidt:

‘He is a talented storyteller, but his story is currently lacking in evidence.’

Haidt has challenged Ferguson by identifying methodological errors and recalculating the magnitude of the effects. The debate is ongoing and fierce. But that is precisely the point: we are still in the realm of debate, not consensus. Haidt himself admits that the appropriate standard is that of the ‘preponderance of evidence’ – not ‘definitive proof’.

In other words: there is insufficient scientific evidence to support a large-scale product liability model. In the case of tobacco, the scientific evidence was overwhelming and the industry was ‘lying’. In the case of social media, the science is ambiguous – and the industry may be right when it says that teenagers’ mental health is too complex a phenomenon to be attributed to a single app.

Product liability “does not scale”

This is the central issue, and it is worth breaking it down using a clear framework.

There are three ways to address a product that causes – or could cause – harm to society.

| Model | How it works | When it works | Example | Limitation |
| --- | --- | --- | --- | --- |
| Product liability | Product is defective → damages | Physical harm, strong causality | Tobacco | Doesn’t scale without strong causal evidence |
| Behavioural regulation | Limits on features and usage | Known but non-absolute risks | Gambling / slot machines | Limited impact, easy to bypass |
| Infrastructure regulation | Rules on system design | Complex systems with legitimate uses | Telecom networks | Requires legislation |

The first is product liability: the tobacco model. It works when the harm is uniform, physical, measurable, and the causal link is strong. If you smoke, your risk of lung cancer is 15–30 times higher. There is no need to prove this on a case-by-case basis – the scientific evidence is so overwhelming that it allows for class actions and mass compensation. This is how the $206 billion Master Settlement Agreement came about.

The second is behavioural regulation: nudges, design constraints, usage limits. It is the slot machine model – you do not ban gambling, but you limit the most harmful features and impose warnings.

The third is infrastructural regulation: the telephone network model. You do not declare the product defective. You impose design standards, identification requirements, access requirements, and transparency and audits of recommendation systems — for example, making it explicit that a feed is optimised to maximise dwell time, or allowing independent audits of how content is selected and amplified.

Social media falls under model three. We are treating it as model one.

The Los Angeles verdict is worth $6 million. Meta reported revenues of $164 billion in 2024. There are around 2,000 pending lawsuits in the United States – brought by families, school districts and state attorneys general. Each will have to prove individual causation: that this platform caused this harm to this person. The upcoming bellwether trial, scheduled for June in Kentucky, will have to start from scratch with the same disputed evidence.

Without legislation in place, the judicial system produces friction – not change.

The answer exists. And it doesn’t lie with the courts.

Several countries have already realised this.

Australia introduced a ban on social media access for under-16s in December 2025 – the first of its kind in the world. After three months, over 4.7 million accounts had been removed or deactivated. It is not perfect – many minors circumvent the controls – but that is, in part, inherent to the model: the primary responsibility falls on the platforms, which must demonstrate that they have implemented verification systems, rather than on completely eliminating access.

This is also why the platforms can accept such measures: they shift the focus from an impossible outcome (zero access) to a verifiable requirement (checks in place).

France is preparing a similar law for children under 15, due to come into force in September 2026. The European Parliament has voted in favour of a ban for children under 16, subject to parental consent. Malaysia, Denmark, Brazil – the list is growing. Italy, Germany, Spain and Greece are considering similar restrictions.

None of these countries has opted for legal action.

They have opted for legislative measures.

Moreover, even in the tobacco industry, the actual trajectory has not been merely punitive but proactive: filters, health warnings, advertising restrictions and, more recently, reduced-risk products such as e-cigarettes. They have not eliminated the problem, but they have reshaped it.

With social media, we are still at the previous stage: we debate whether the product is ‘intrinsically harmful’ instead of addressing how it is designed.

What this means for you

If you run a business in Europe, the reforms currently underway — age verification, restrictions on recommendation systems, and the separation of feeds — will change the way social media platforms function as channels for marketing and content distribution. This isn’t just theory: it’s a fundamental shift in how these platforms operate.

And it raises an unavoidable question.

Limiting optimisation for engagement is not a technical decision. It is a decision about what is worth showing more of — and therefore, inevitably, a political decision.

But the most important point lies elsewhere.

The parallel with tobacco doesn’t hold up — and when it doesn’t hold up, the solution falls apart too: individual lawsuits that don’t gain traction and compensation awards that don’t make the slightest difference.

Social media isn’t a product. It’s infrastructure.

And it must be treated as such: regulation, design constraints, verification standards, and audits of recommendation systems.

Courts deliver verdicts.

They do not change the system.

A jury in Los Angeles ordered Meta, a company with over $160 billion in revenue, to pay $6 million.

Australia, through legislation, removed 4.7 million accounts.

Courts punish the past.

Regulation changes the product.

Fabio Lauria

CEO & Founder, ELECTE

Every week we explore AI without the hype — with data, analysis and an independent perspective.

Note: the author is a smoker. The irony is not lost on him.


If you found this analysis useful, please share it with someone who might be interested. And if you’d like to find out how ELECTE uses AI to automate data analysis and reporting, you can find out more at electe.net.
