Social media challenges can be deadly. Here’s what to do.
A Chester mom is suing TikTok after her 10-year-old daughter died doing a challenge she saw on the platform. It doesn't have to be this way.
Late last year, 10-year-old Nylah Anderson of Chester saw something in her personalized TikTok feed that caught her attention: a challenge that dared users to choke themselves until they nearly passed out. Nylah tried it, and she died.
Nylah’s mom is now suing TikTok in federal court in Philadelphia, accusing the social media platform of using a dangerous algorithm that directed her daughter to this deadly challenge.
But TikTok is far from the only purveyor of dangerous “challenges” and algorithms that feed users harmful content.
Many challenges are innocuous, involving cup stacking, making homemade slime, or dancing. And the concept isn’t all bad: during pandemic quarantines, social media challenges helped people stay connected by, for instance, recreating a favorite piece of art or fashioning an outfit out of a pillowcase. But there have also been dangerous stunts, like the “Tide Pod challenge” that took off a few years ago.
Challenges are not the only dangerous aspect of social media. Our own research at the Annenberg Public Policy Center has identified ways in which social media algorithms can spread harmful content, such as serving portrayals of self-harm to vulnerable young people. We also found that young adults exposed to self-harm content on Instagram were more likely to exhibit self-harming thoughts and behaviors in the month after seeing it.
So what do we do about this? We can’t cut off kids’ access to social media, and we can’t expect parents to monitor everything their kids view online. Instead, social media platforms should take more responsibility for monitoring harmful content.
Right now, Twitter, Facebook, and other platforms say they are doing this monitoring, but it’s not nearly enough, and there are too few legal consequences when they fail. Even video recordings of mass shootings posted to social media can remain searchable long after they first appear.
The largest barrier to getting digital platforms to take responsibility for what they distribute is the 1996 Communications Decency Act. To allow the internet to flourish, Section 230 of the act declared that internet platforms are not the publishers of the content posted on them, and hence are not responsible for it. But the internet has changed a lot since 1996.
In the past 25 years, as advertising has become how websites make money, they have come to look more and more like publishers, much as magazines and other older media always have. In fact, they’re even better at it: With highly refined user and tracking data, along with algorithms that identify target audiences, the platforms can sell advertising far more precisely than traditional media ever could.
Directing users to content they might find engaging need not result in harm. But when there is no accountability for what the platforms serve up, harmful but engaging content can proliferate.
For instance, we found that the vast majority of young people exposed to Instagram posts depicting self-harm said they encountered that content by accident, not because they had sought it out.
We don’t know how Nylah got to the challenge that led to her death. But if TikTok served her the challenge because she fit a profile of people who had taken it, or who had sought out content like it, then the creators and custodians of that algorithm could bear some of the responsibility.
One way to make that happen is to modify Section 230. A bill introduced by U.S. Sens. Amy Klobuchar (D., Minn.) and Ben Ray Luján (D., N.M.), the Health Misinformation Act, would define retransmission of user content that can harm health as a form of publishing, thereby removing the free pass that Section 230 has given internet platforms. (The Department of Health and Human Services would define what is harmful to health.) The act was proposed to protect the public from misleading content during the pandemic, but it could easily be applied to other forms of content harmful to health, including the sorts of dangerous challenges that led to Nylah’s suffocation.
While this may look like government infringement on free speech, such a law could be narrowly tailored to reduce the spread of content that poses a direct threat to public health. I think this sort of regulation is worth considering if it saves the lives of 10-year-old girls.
Dan Romer is the research director of the Annenberg Public Policy Center of the University of Pennsylvania, where his research focuses on media and social influences on adolescent health and development.