Children searching for ‘Gaza’ on Instagram can easily find images of corpses, a new report finds

New research finds that children are able to easily access graphic content, including images of corpses, when searching for the Israel-Hamas war on social media platforms—and in turn are served more explicit content by the algorithms.

The results were published earlier this week by the U.K.-based Institute for Strategic Dialogue (ISD), which created fictitious profiles of 13-year-olds on Instagram, TikTok, and Snapchat. The researchers found more than 300 posts or videos “portraying extremely graphic, distressing, or violent imagery” when browsing hashtags like #Gaza and #Gazaconflict over a two-day period.

ISD researchers found most of the extreme content on Instagram, where 16.9% of the searches for “Gaza” turned up graphic or violent content, including images of naked and mutilated bodies and infants’ skulls.

On TikTok, graphic content made up 3% of the search results, but researchers noted that the video app automatically suggested phrases like “Gaza dead children,” “Gaza dead kids,” and “dead woman Gaza” in its search bar.

In a follow-up search conducted on Thursday for Fast Company, the researchers found that on one fictitious 13-year-old Instagram user’s home feed, roughly one-fifth of the recommended posts were images of corpses.

Isabelle Francis-White, the head of technology and society at ISD and a report coauthor, says the results stunned her. “It’s always possible for researchers to find something violative at any given time, but in this instance, I was shocked at both the volume of the content, but more specifically, just how accessible it was,” she tells Fast Company.

A spokesperson for Meta (the parent company of Instagram) referred to a recent blog post, in which the company outlined a number of steps it had taken to reduce graphic and violent content. “We already use technology to avoid recommending potentially violating and borderline content across Facebook, Instagram and Threads,” the company wrote. “We’re working to further reduce the possibility of this happening by lowering the threshold at which our technology will take action to avoid recommending this type of content.”

A TikTok spokesperson pointed to a blog post in which the platform said it is “evolving” its automated detection systems to “automatically detect and remove graphic and violent content,” and adding more Arabic- and Hebrew-speaking moderators to review content. The spokesperson added that the problematic autofill searches identified by ISD researchers had also been disabled.

Snap, the company that operates Snapchat, did not respond to a request for comment.

But Francis-White says her research shows the platforms aren’t doing a good enough job at enforcing their own policies—and suggests new regulations are needed.

Francis-White pointed to the E.U.’s sweeping new Digital Services Act, which includes requirements for tech platforms to enforce their own content moderation policies and protect their younger users’ psychological well-being. Earlier this week, authorities in Brussels cited the law in warning letters to companies including Meta, X, TikTok, and YouTube about alleged Gaza-related disinformation on their platforms.

By contrast, “In the U.S., all members [of Congress] can do is send letters and request briefings, but there’s no teeth to that, and there’s no enforcement,” Francis-White says. “We’ve dragged our feet for far too long on regulation.”

There are ongoing efforts to pass child online safety laws in the U.S., including the bipartisan Kids Online Safety Act, which would impose a legal responsibility on platforms to mitigate “harms to minors.” But that effort faces pushback from an unlikely alliance of digital rights activists and tech industry lobbyists, who argue that certain aspects of child safety legislation, even if well-intentioned, could end up harming all internet users, since the government would be left to define what constitutes harmful content. “We have politicians who think that children seeing drag shows is harmful,” says Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation (EFF). “So there is definitely the potential for over-censorship.”

Instead, EFF supports comprehensive data privacy laws, which it argues would disincentivize social media platforms from using toxic content to scoop up user data, and lower the barriers for users to choose alternative platforms.

Carl Szabo, vice president and general counsel at NetChoice, a Silicon Valley industry group that represents companies including Meta, Google, X, and TikTok, opposes the idea of an age verification mandate, which he says would create a privacy disaster: “We are talking about massive data collection just to do something as simple as in this case, an internet search.”

He argues that, rather than pursuing new regulations, “the correct answer is to encourage and engage parents more, to better understand how to use these tools, and work with our children and our families to keep them safe online.”

For now, parents have their work cut out for them. According to data from parental monitoring software company BrightCanary, searches for Gaza conflict-related terms on Google and YouTube have spiked this month among their customers’ 8- to 12-year-old users, including a 1,674% increase in searches for the term “hostage,” a 218% increase in searches for “bombing,” and a 287% increase in searches for “violence.” (A YouTube spokesperson told Fast Company that users must be 13 or older to use the service, and that it terminates younger users’ accounts when discovered. But it’s easy for kids to sign up with a fake age, says BrightCanary CEO Karl Stillner.)

In one sequence recorded by BrightCanary’s software, an 11-year-old user who searched for the term “israel farm” ended up landing on a news segment about a Hamas attack that killed “children, babies, and old people.” It contained brief footage, blurred out, of a dead body on the ground.

Even news videos that don’t appear to violate YouTube’s guidelines could still “report the realities of war in ways that are traumatic for younger children,” Stillner says.
