Originally published in Spanish by El Economista

February 25, 2026

THE NEW CIGARETTES: HOW SOCIAL MEDIA IS REDESIGNING OUR BRAINS

When did we trade cigarettes for the infinite scroll?

Smoking rates have dropped dramatically — an undeniable public-health victory — but in cigarettes' place we have adopted a quiet digital dependency, one so deeply embedded in everyday life that it is difficult to recognize as a problem.

What we are facing now with social media is not new. It replicates the logic that once drove the global rise of tobacco: accessibility, social appeal, and neurochemical reinforcement. But digital platforms go further. They are faster, frictionless, and completely intertwined with our daily routines. And unlike tobacco, their addictive power is not accidental: it is the business model.

The regulatory frameworks and public-health strategies that gradually transformed the tobacco industry now provide useful models for addressing the growing crisis of digital dependence.

The architecture of addiction: from nicotine to notifications

The neurological mechanism that makes cigarettes addictive is well documented. As Neal Benowitz, professor of medicine at the University of California and one of the leading authorities on nicotine research, explains, the stimulation of nicotinic acetylcholine receptors in the central nervous system triggers the release of several neurotransmitters — particularly dopamine — responsible for behavioral reinforcement and habit formation.

Social media platforms have transferred this logic into the digital era, though with one crucial difference. While the tobacco industry stumbled upon neurochemical reinforcement almost accidentally, the contemporary digital environment was designed from the outset to capture and sustain attention.

Social media are permanently accessible from any device, without physical or temporal barriers.

At the same time, they have transformed abstract metrics — likes, followers, views — into visible indicators of social value, embedded at the core of contemporary identity.

A functional MRI study published in Psychological Science showed that among adolescents, receiving “likes” activates brain regions associated with reward processing and social evaluation. Later research in Nature Communications demonstrated that user behavior on social media follows computational models of reward learning similar to those explaining other reinforced behaviors: the greater the perceived reward, the greater the probability of repetition.

A key factor in this process is unpredictability. Notifications, updates, and social responses do not follow a stable pattern. They arrive intermittently. Behavioral psychology has demonstrated for decades that variable reward schedules — those in which the stimulus is unpredictable — are particularly effective at sustaining compulsive behavior. In the digital environment, this logic translates into a constant expectation: checking the device for the possibility of new validation.

These dynamics are not accidental. Features such as infinite scroll, autoplay, and notification systems were designed to eliminate natural stopping points and reduce conscious decision-making. Former technology industry executives have acknowledged this publicly. In 2017, Sean Parker, former president of Facebook, admitted it plainly: “The thought process was: ‘How do we consume as much of your time and conscious attention as possible?’” Former Facebook executive Chamath Palihapitiya was equally direct: “The short-term, dopamine-driven feedback loops we have created are destroying how society works.”

Vulnerability intensifies during adolescence. At this stage, the brain’s reward system is highly sensitized, while regions associated with inhibitory control and planning — such as the prefrontal cortex — are still developing. The American Psychological Association and the U.S. Surgeon General have warned that this neurobiological asymmetry makes adolescents especially susceptible to digital environments designed to maximize social validation, constant comparison, and prolonged exposure.

Recent data reinforces this concern. According to Common Sense Media (2023), more than 60% of American children between the ages of 10 and 12 already use social media despite formal age restrictions.

In Latin America, reports from UNICEF show early adoption and intensive exposure to screens, associated with higher levels of anxiety, sleep disruption, and emotional distress.

The consequences are increasingly well documented. A study published in JAMA Psychiatry (2019) found that adolescents who spend more than three hours a day on social media face a significantly higher risk of developing symptoms of depression and anxiety. Longitudinal research in JAMA (2018) identified an association between frequent digital media use and the emergence of attention-deficit symptoms. Large-scale analyses led by Jean Twenge showed a sharp decline in adolescent psychological well-being beginning around 2012, coinciding with the widespread adoption of smartphones and social media.

As with tobacco before effective regulation, the problem lies not only in the act of consumption but in the architecture that makes it persistent, normalized, and difficult to abandon. History shows that when an environment is designed to reinforce dependency, appeals to individual self-regulation are insufficient.

Learning from tobacco regulation

The regulation of tobacco was not the result of an abrupt prohibition or sudden moral consensus. It was a long, cumulative, and deeply contested process that unfolded over more than four decades. For much of the twentieth century, smoking was framed as a personal choice, even as a symbol of cultural sophistication. The turning point came when harm ceased to be anecdotal and began to be systematically documented.

The shift was catalyzed by the 1964 report of the U.S. Surgeon General, which conclusively established the link between smoking and lung cancer, chronic bronchitis, and premature mortality. From that moment onward, gradual regulations began to emerge: mandatory health warnings on cigarette packages, increasing restrictions on advertising, and eventually limitations on smoking in public spaces.

The process reached a new stage when it acquired an international dimension. In May 2003, the World Health Organization adopted the Framework Convention on Tobacco Control (FCTC), which entered into force in February 2005. It was the first global public-health treaty dedicated to a commercial product.

The FCTC marked a qualitative shift: it recognized that tobacco harm could not be addressed solely through individual behavior but required intervention across an entire ecosystem composed of advertising, pricing, availability, product design, and industry interference.

The treaty consolidated tools that had already proven effective — high taxes, comprehensive bans on advertising and sponsorship, graphic health warnings, and protection from secondhand smoke — and integrated them into a binding legal framework. It also introduced a crucial political principle: the obligation of governments to protect public-health policies from interference by the tobacco industry.

Since its adoption, the convention has become the global standard and has contributed decisively to the reduction of tobacco consumption worldwide.

This experience now offers a particularly relevant framework for thinking about the regulation of digital media. Like tobacco for decades, digital platforms operate within a model that normalizes dependency, minimizes harm, and shifts responsibility onto the user. The experience of the FCTC shows that effective policy does not merely warn about risks; it intervenes in the structural conditions that produce them. Applied to the digital environment, this means moving beyond self-regulation and the rhetoric of “personal choice.”

Just as tobacco control addressed advertising, design, and accessibility, a digital regulation inspired by that model would need to address algorithmic architecture, content amplification, marketing directed at minors, systematic attention capture, and platform-funded programs for education and mental health.

The parallel is not literal but conceptual: in both cases the goal is not prohibition, but harm reduction through intervention in the system that generates it.

Australia did it again

Australia was one of the first countries in the world to confront smoking as a structural public-health issue rather than an isolated individual choice. Beginning in the 1990s, it implemented a coherent set of policies — high taxes, advertising bans, graphic health warnings, and plain packaging — that radically transformed the cultural landscape of cigarette consumption.

The result is measurable. According to the Australian Institute of Health and Welfare, the proportion of adult daily smokers fell from around 24% in 1991 to 8.3% in 2022–2023, one of the lowest rates globally.

The World Health Organization recognizes Australia as a leading example of sustained tobacco-consumption reduction through consistent evidence-based public policy.

This precedent helps explain why Australia has once again taken a pioneering role in confronting another phenomenon initially perceived as benign: digital harm. In 2015, the country became the first in the world to pass a dedicated online safety law — the Enhancing Online Safety Act — which created the eSafety Commissioner.

This was more than just another law; it was a powerful political signal. For the first time, a state established a public authority dedicated exclusively to protecting users — especially children and adolescents — from harms generated by digital environments, including online harassment, the non-consensual distribution of intimate images, and virtual abuse.

That framework did not remain static. In 2021, Australia passed the Online Safety Act, which replaced and expanded the 2015 law. The new regime extended platform responsibilities to social networks, messaging services, forums, and online games, while strengthening mechanisms for the rapid removal of harmful content.

More recently, the eSafety Commissioner has begun enforcing the Basic Online Safety Expectations (BOSE), a set of regulatory standards — strengthened in 2024 — that require technology companies to actively demonstrate, not merely declare, how they prevent harassment, child exploitation, radicalization, and risks associated with algorithmic recommendation systems.

The emphasis is not only on content but on system design. The key question is no longer simply what circulates on platforms, but what architectures make it circulate.

Europe has taken this logic even further. With the entry into force of the Digital Services Act (DSA) and the Digital Markets Act (DMA), the European Union established between 2023 and 2024 the most ambitious digital governance framework in the world.

The DSA fundamentally redefines platform responsibility by requiring companies to identify, assess, and mitigate systemic risks arising from their algorithms — from disinformation and electoral manipulation to mental-health impacts, particularly among minors. It also introduces unprecedented transparency obligations regarding recommendation systems, data access for independent researchers, and a strict ban on targeted advertising based on the profiling of children and adolescents.

The DMA, meanwhile, addresses the structural dimension of digital power by imposing antitrust obligations on designated “gatekeepers” — including Alphabet (Google), Amazon, Apple, ByteDance (TikTok), Meta, and Microsoft — with potential penalties of up to 10% of global revenue.

Together, these regulations do more than govern platforms; they redefine Europe’s digital social contract, placing attention, privacy, and user autonomy at the center of political debate.

The United States presents a different picture. Digital regulation there advances in a fragmented, reactive, and heavily litigated manner. The Children’s Online Privacy Protection Act (COPPA), in force since 2000, remains the primary federal instrument for protecting children online.

In 2025, the Federal Trade Commission finalized its most significant review since 2013, expanding the definition of personal data and restricting certain forms of algorithmic monetization of children’s attention.

At the state level, initiatives in California, Utah, and elsewhere attempt to impose age verification and establish digital duties of care for platforms, although many of these measures have been suspended or challenged in court over potential conflicts with the First Amendment.

Meanwhile, the Kids Online Safety Act (KOSA) continues to be debated in Congress, proposing that companies be required to assess the mental-health impact of their products on young users before deployment.

The result is a regulatory mosaic reflecting the structural tension in the United States between technological innovation, freedom of expression, and the protection of well-being.

In Latin America, Argentina occupies a distinctive position. It was one of the first countries in the region to enact a personal data protection law (Law 25,326, in 2000) and is currently undergoing an active process of comprehensive reform led by the Agency for Access to Public Information (AAIP).

The stated goal is to align the local legal framework with the standards of the European Union’s General Data Protection Regulation (GDPR).

Draft proposals include rights such as data portability, the right to be forgotten, stronger requirements for informed consent, and new provisions addressing automated decision-making and artificial intelligence systems.

At the same time, legislative proposals are being debated that would establish a minimum age of 16 for social media use, require verifiable parental consent, and recognize digital violence as a specific form of gender-based violence.

Although these initiatives have not yet become law, they signal an important conceptual shift: data protection and digital well-being are increasingly understood not merely as technical issues but as human rights linked to mental health, autonomy, and identity integrity.

A legal déjà vu

In the United States, an unprecedented wave of lawsuits is emerging against major technology platforms for the harm they allegedly cause to the mental health of children and adolescents — a development many analysts compare to the historic class-action lawsuits against the tobacco industry.

In January 2026, a trial began in the California Superior Court in Los Angeles against Meta (Facebook and Instagram), TikTok, and Snap. A young plaintiff claims that the addictive design of these platforms damaged her psychological well-being from childhood.

This case is only one among thousands consolidated in a federal multidistrict litigation that groups more than 2,000 lawsuits filed by individuals, parents, and even school districts against social media companies for alleged addictive effects and impacts including anxiety, depression, eating disorders, and suicide.

At the same time, several states — including Kansas — and the city of New York have filed governmental lawsuits accusing these companies of deliberately designing addictive features that foster mental-health problems among minors.

Internal documents presented by plaintiffs suggest that companies were aware of these risks yet prioritized their business models over meaningful protections for young users — echoing the historical conduct of cigarette manufacturers that once concealed information about nicotine addiction.

This growing body of litigation raises a fundamental question about the responsibility of digital platforms for children’s public health, recalling the lawsuits that forced the tobacco industry to account for decades of harm caused by nicotine.

Not prohibition, but redesign

As with tobacco, the regulation of the digital environment does not arise from a single gesture or an immediate solution. It advances through the accumulation of evidence, social pressure, and gradual political decisions.

The debate about how to regulate social media is no longer theoretical. It is already underway. As with tobacco — where scientific evidence preceded political action by decades — what we are witnessing today is a slow but inevitable transition: the recognition that human attention and mental health require global frameworks of protection.

Unlike tobacco, for which no safe level of consumption exists, social media holds genuine potential for connection, learning, and creativity. The goal is not abstinence but integrity. Technology should strengthen autonomy, not erode it.

Tristan Harris, former Google design ethicist and co-founder of the Center for Humane Technology, puts it this way: “It’s about recognizing how to realign technology with our own minds and limits.” His vision of “humane technology” suggests what could be possible if we demanded better digital environments.

The question is not whether social media is the new cigarette. The question is whether this time we will act faster to protect ourselves and our children from its effects.

The roadmap exists.
The evidence is accumulating.
It is time to act.

© 2023 by Isabel Englebert Studio