Big Tech sued by U.S. schools for “hooking” youngsters
On January 6, Seattle Public Schools, the largest school district in Washington state (United States), filed a lawsuit against Snap (owner of Snapchat), Meta (owner of Facebook and Instagram), ByteDance (owner of TikTok) and Alphabet (parent company of YouTube) for encouraging addictive use of their respective social networks among minors.
In the complaint, the plaintiffs allege that these platforms have consciously exploited the vulnerability of young brains to keep users hooked for as long as possible, often exposing them to harmful content that undermines their psychological and physical health. This, they argue, has fuelled depression, anxiety, eating disorders and cyberbullying.
The footprint of addiction in the classroom
The schools, the plaintiffs argue, have standing to sue insofar as this deterioration in mental health directly affects their mission: among students suffering from these conditions, academic results and classroom behaviour worsen, while learning difficulties, absenteeism and drug use increase. Moreover, to deal with these problems, education authorities are forced to spend additional public money on support staff. In 2020 alone, more than three million U.S. students received psychological assistance at school, making schools one of the leading providers of these services.
As in other countries, the mental health of young Americans has shown signs of crisis for years. Suicidal behaviours (ideation, attempted and completed suicides) are rising, and self-harm is increasing, especially among girls. The pandemic has exacerbated the problem, but the trend predates it. Certainly, not everything can be blamed on the abusive use of social networks or screens. However, several studies cited in the lawsuit have shown a clear correlation between this factor and the prevalence of psychological disorders.
The U.S. Surgeon General, Dr Vivek H. Murthy, declared at the end of 2021 that the situation amounts to a true “national emergency.” Among other factors, Murthy pointed out that “through the media and popular culture, young people are bombarded with messages that erode their self-esteem, telling them they are not good-looking, popular, smart or rich enough.”
The Seattle schools’ lawsuit also echoes this problem. The complaint explains that a good portion of depressive conditions stems from “unhealthy social comparison,” which often leads to eating disorders such as anorexia. In addition, according to the text, young people encounter other potentially dangerous content on the networks, such as sexual material, political disinformation or content promoting self-harm.
Dark burrows
These toxic materials would not be so harmful if the platforms’ own algorithms were not steering young people towards them. Yet, according to the plaintiffs, this is precisely what is happening.
Specifically, the plaintiffs cite two investigations by journalists from The Wall Street Journal, who created fake accounts on TikTok to see how the algorithm that decides which videos appear in users’ feeds works. In the first, the bots pretended to be adults; in the second, girls between the ages of 13 and 15.
In both cases, after a slight “nudge” by the WSJ programmers (for example, having a supposed teenager type the name of a well-known adult content social network into the app’s search engine, or watch a video with a hashtag about marijuana), they found that the algorithm quickly directed the user into so-called rabbit holes, “burrows” where destructive content (ads for paid pornography websites, sadomasochistic practices, etc.) increasingly predominates. The same happened with a “push” associated with sadness in the research with adult bots: in a short time, the feed filled with depressive videos and vague references to suicide.
Designed to hook
But the harm to young people is not just about harmful content. Addictive consumption, of whatever kind, is itself a problem. According to a national Pew Research Center survey cited in the lawsuit, one in five young people admits to using YouTube “almost constantly”; one in six says the same of TikTok and Snapchat. Forty percent report spending more time on these platforms than they would like.
In the plaintiffs’ view, this is no accident. The designers of these services strive to encourage addictive use. For example, the complaint argues that endless scrolling induces a “flow state” in the user’s mind that weakens their executive capacities. Meanwhile, the constant refreshing of the feed works like a slot machine: we reload to see what new suggestions we have been dealt. In this sense, it is telling that, according to some research, the average YouTube user spends almost 70% of their time on the platform watching videos recommended by the algorithm. On TikTok, the figure reaches 95%.
Other mechanisms these platforms have created to generate addiction include the famous “double check”, which shows that the recipient has read a message (and keeps the sender in suspense until then), and content with a set expiry that self-destructs shortly after being posted (Instagram stories, Snapchat messages).
Clash of laws
The Seattle schools’ lawsuit is based on a state statute against unjustified public nuisance, the Washington Public Nuisance Law, which outlaws “activities that harm public health, expose offensive content, or impede the comfortable enjoyment of life.” The technology companies consider this wording too vague and rely instead on Section 230 of the Communications Decency Act, which stipulates that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of information provided by another information content provider”. For many analysts, these words effectively give platforms almost total immunity for content published by third parties.
However, the plaintiffs contend that Meta, Google and TikTok are liable insofar as they do not limit themselves to impartially displaying third-party content. In particular, they claim that the algorithms, developed by the companies themselves, are designed to hook young people and steer them towards increasingly extreme content. The companies have also created notification systems and photo filters that lead many young people, especially girls, to become obsessed with an unattainable ideal of beauty. In any case, the plaintiffs explain, these companies’ business depends on users spending as much time as possible on their platforms, regardless of what content they consume, so the blame cannot simply be shifted onto the content creators.
Initiatives suspended, lawsuits pending
Seattle is not the first territory to declare war on Big Tech over its harmful influence on young people. In March of last year, a bill was introduced in California to make it easier for state prosecutors (in the first version, parents) to sue the companies if they knowingly encourage addictive behaviour in minors. The initiative was promoted by a Democratic and a Republican lawmaker. However, after being approved in the assembly, it was “suspended” (a way of letting it die) in a Senate fiscal committee, according to some analysts after intense lobbying by TikTok, Snapchat and other companies.
Meanwhile, according to Bloomberg in September, dozens of lawsuits filed by parents across the country are still pending. The report explained that the revelations by Frances Haugen, a former Meta employee, about how the company’s executives knew that young people’s abusive use of Instagram increased their risk of depression marked a milestone in the visibility of the problem and led many families to file complaints.
Several complaints employ arguments similar to those used in the lawsuits against Big Tobacco: the companies know their product is addictive and could reduce this risk, yet not only do they fail to do so; their design and marketing pursue precisely the opposite. However, some jurists quoted in the Bloomberg article noted that, in the case of the technology companies, it will not be easy to prove that, of all the factors affecting the psychological well-being of young people today, it is precisely the use of social networks that damages their mental health.
A proposal inspired by the U.K.
At about the same time as the failed bill was presented in California, another bipartisan initiative, which is still alive, was taking its first steps in the U.S. Congress. However, almost a year later, no date has yet been set for its discussion. It is the Kids Online Safety Act (KOSA). If passed, it would oblige Big Tech to act “in the child’s best interest”. Specifically, platforms would have to report on how their algorithms treat young users and what protections shield their private data; prevent minors from receiving ads for products that are illegal for them or that have sexual content; prepare an annual report on the potential risks to minors on their respective platforms; and give parents tools to monitor their children’s use of the networks.
The KOSA text is inspired by a law passed in the U.K., a code of conduct to which all websites with a “high likelihood” of being used by minors must submit. Although that text focuses more on privacy than on addictive use, it also obliges platforms to improve their transparency and filter content better, and it makes it easier for parents to monitor their children’s activity.
This article was written by Fernando Rodríguez Borlado and originally published in Aceprensa in January 2023. Sally shares this article as part of the “Sally Families” project, which aims to help families be safe online.