The Washington Post | Democracy Dies in Darkness

Raw videos of violent incidents in Texas rekindle debate about graphic images

News organizations have long held back from publishing explicit or violent images of death, which are now rapidly disseminated across social media platforms

Adriana Rosales, left, is consoled by Therese Badiki on May 7 as she weeps at a makeshift memorial outside Allen Premium Outlets in Allen, Tex., where a day earlier a gunman killed eight people before being shot dead by police. (Jeffrey McWhorter for The Washington Post)

The shooter who killed eight people outside an outlet mall in Allen, Tex., on May 6 was captured on a dash-cam video as he stood in the middle of a parking lot, methodically murdering people.

The next day, when a driver plowed his SUV into a cluster of men waiting for a bus in Brownsville, Tex., a video showed him speeding into and rolling over so many human beings that the person behind the camera had to pan across nearly a block-long field of mangled bodies, pools of blood and moaning, crying victims to capture the carnage. The driver killed eight people.

These gruesome videos almost instantly appeared on social media and were viewed millions of times before, in many cases, being taken down. Yet they still appear in countless back alleys of the internet.

The footage made clear that the deaths were horrific and the suffering unspeakable. The emotional power of the images would shake almost any viewer. Their rapid dissemination also rekindled an unsettling debate — one that has lingered since the advent of photography: Why does anyone need to see such images?

Images of violence can inform, titillate, or rally people for or against a political view. Ever since 19th-century photographer Mathew Brady made his pioneering photos of fallen soldiers stacked like firewood on Civil War battlefields, news organizations and now social media platforms have grappled with questions of taste, decency, purpose and power that suffuse decisions about whether to fully portray the price of deadly violence.

Newspaper editors and television news executives have long sought to filter out pictures of explicit violence or bloody injuries that could generate complaints that such graphic imagery is offensive or dehumanizing. But such policies have historically come with exceptions, some of which have galvanized popular sentiments. The widely published photo of the mangled body of the lynched 14-year-old Emmett Till in 1955 played a key role in building the civil rights movement. And although many news organizations decided in 2004 not to publish explicit photos of torture by U.S. service members at the Abu Ghraib prison in Iraq, the images that did circulate widely contributed to a shift in public opinion against the war in Iraq, according to several studies.

More recently, the gruesome video of a police officer killing George Floyd on a Minneapolis street in 2020 was repeatedly published across all manner of media, sparking a mass movement to confront police violence against Black Americans.

Following the killings in Allen and Brownsville, traditional news organizations, including The Washington Post, mostly steered clear of publishing the most grisly images.

“Those were not close calls,” said J. David Ake, director of photography for the Associated Press, which did not use the Texas videos. “We are not casual at all about these decisions, and we do need to strike a balance between telling the truth and being sensitive to the fact that these are people who’ve been through something horrific. But I am going to err on the side of humanity and children.”

But even as news organizations largely showed restraint, the Allen video spread widely on Twitter, YouTube, Reddit and other platforms, shared in part by individuals who expressed anguish at the violence and called for a change in gun policies.

“I thought long and hard about whether to share the horrific video showing the pile of bodies from the mass shooting,” tweeted Jon Cooper, a Democratic activist and former Suffolk County, N.Y., legislator. He wrote that he decided to post the video, which was then viewed more than a million times, because “maybe — just maybe — people NEED to see this video, so they’ll pressure their elected officials until they TAKE ACTION.”

Others who posted the video used it to make false claims about the shooter, such as the notion that he was a Black supremacist who shouted anti-White slogans before killing his victims.

From government-monitored decisions about showing deaths during World War II to friction over explicit pictures of devastated civilians during the Vietnam War and on to the debate over depictions of mass killing victims in recent years, editors, news consumers, tech companies and relatives of murdered people have made compelling but opposing arguments about how much gore to show.

The dilemma has only grown more complicated in this time of information overload, when more Americans are saying they avoid the news because, as a Reuters Institute study found last year, they feel overwhelmed and the news darkens their mood. And the infinite capacity of the internet has upped the ante for grisly images, making it harder for any single image to provoke the widespread outrage that some believe can translate into positive change.

Recent cutbacks in content moderation teams at companies such as Twitter have also accelerated the spread of disturbing videos, experts said.

“The fact that very graphic images from the shooting in Texas showed up on Twitter is more likely to be content moderation failure than an explicit policy,” said Vivian Schiller, executive director of Aspen Digital and former president of NPR and head of news at Twitter.

Twitter’s media office responded to an emailed request for comment with only a poop emoji, the company’s now-standard response to press inquiries.

Efforts to study whether viewing gruesome images alters popular opinion, changes public policy or affects the behavior of potential killers have generally been unsuccessful, social scientists say.

“There’s never been any solid evidence that publishing more grisly photos of mass shootings would produce a political response,” said Michael Griffin, a professor of media and cultural studies at Macalester College who studies media practices regarding war and conflict. “It’s good for people to be thinking about these questions, but advocates for or against publication are basing their views on their own moral instincts and what they would like to see happen.”

The widely available videos of the two incidents in Texas resurfaced long-standing conflicts over the publication of images of death stemming from wars, terrorist attacks or shootings.

One side argues that widespread dissemination of gruesome images of dead and wounded victims is sensationalistic, emotionally abusive, insensitive to the families of victims and ultimately serves little purpose other than to inure people to horrific violence.

The other side contends that media organizations and online platforms ought not to proclaim themselves arbiters of what the public can see, and should instead deliver the unvarnished truth, either to shock people into political action or simply to allow the public to make its own assessment of how policy decisions play out.

Schiller said news organizations are sometimes right to publish graphic images of mass killings. “Those images are a critical record of both a specific crime but also the horrific and unrelenting crisis of gun violence in the U.S. today,” she said. “Graphic images can drive home the reality of what automatic weapons do to a human body — the literal human carnage.”

It’s not clear, however, that horrific images spur people to protest or action. “Some gruesome images cause public outrage and maybe even government action, but some result in a numbing effect or compassion fatigue,” said Folker Hanusch, a University of Vienna journalism professor who has written extensively about how media outlets report on death. “I’m skeptical that showing such imagery can really result in lasting social change, but it’s still important that journalists show well-chosen moments that convey what really happened.”

Others argue that even though any gory footage taken down by the big tech companies will nonetheless find its way onto many other sites, traditional news organizations and social media companies should still set a standard to signify what is unacceptable fare for a mass audience.

The late writer Tom Wolfe derisively dubbed the gatekeepers of the mainstream media “Victorian gentlemen,” worried about protecting their audience from disturbing images. Throughout the last half-century, media critics have urged editors to give their readers and viewers a more powerful and visceral sense of what gun violence, war and terrorism do to their victims.

Early in the Iraq War, New York columnist Pete Hamill asked why U.S. media were not depicting dead soldiers. “What we get to see is a war filled with wrecked vehicles: taxis, cars, Humvees, tanks, gasoline trucks,” he wrote. “We see almost no wrecked human beings. … In short, we are seeing a war without blood.”

After pictures of abuses at Abu Ghraib appeared, it was “as though, rather suddenly, the gloves have come off, and the war seems less sanitized,” wrote Michael Getler, then the ombudsman at The Post.

Still, news consumers have often made clear that they appreciate restraint. In a 2004 survey, two-thirds of Americans told Pew Research Center that news organizations were right to withhold images of the charred bodies of four U.S. contractors killed in Fallujah, Iraq.

Images of mass shooting victims have been published even less frequently than grisly pictures of war dead, journalism historians have found. “Mass shootings happen to ‘us,’ while war is happening ‘over there,’ to ‘them,’” Griffin said. “So there’s much more resistance to publication of grisly images of mass shootings, much more sensitivity to the feelings” of families of victims.

But despite decades of debate, no consensus has developed about when to use graphic images. “There’s no real pattern, not for war images, not for natural disasters, not for mass shootings,” Hanusch said. “Journalists are very wary of their audience castigating them for publishing images they don’t want to see.”

Ake, the AP photo director, said that over the years, “we probably have loosened our standards when it comes to war images. But at the same time, with school shootings, we might have tightened them a little” to be sensitive to the concerns of parents.

For decades, many argued that decisions to show explicit images of dead and mangled bodies during the Vietnam War helped shift public opinion against the war.

But when social scientists dug into news coverage from that era, they found that pictures of wounded and dead soldiers and civilians appeared only rarely. And in a similar historical survey of coverage of the 1991 Persian Gulf War, images of the dead and wounded made up fewer than 5 percent of news photos, as noted by professors at Arizona State and Rutgers universities.

Some iconic images from the Vietnam War — the running, nude Vietnamese girl who was caught in a napalm attack, for example — gained their full historic import only after the war.

In the digital age, publication decisions by editors and social media managers can sometimes feel less relevant because once images are published somewhere, they spread virtually uncontrollably throughout the world.

“People are just getting a fire hose of feeds on their phones, and it’s decontextualized,” Griffin said. “They don’t even know where the images come from.”

The flood of images, especially on highly visual platforms such as Instagram and TikTok, diminishes the impact of pictures that show what harm people have done to one another, Griffin said. He pointed to the photo of 3-year-old Aylan Kurdi, the Syrian refugee found washed ashore on a Turkish beach in 2015, a powerful and disturbing image that many people at the time compared with iconic pictures from the Vietnam War.

“At the time, people said this is going to be like the napalm girl from Vietnam and really change people’s minds,” Griffin said. “But that didn’t happen. Most people now don’t remember where that was or what it meant.”

Social media companies face pressure to set standards and enforce them either before grisly images are posted or immediately after they surface. With every new viral video from a mass killing, critics blast the social media platforms for being inconsistent or insufficiently rigorous in taking down sensational or grisly images; the companies say they enforce their rules with algorithms that filter out many abuses, with their content moderator staffs and with reports from users.

Soon after the Allen shooting, a Twitter moderator told a user who complained about publication of the gruesome video that the images did not violate the site’s policy on violent content, the BBC reported. But a day later, images of dead bodies at the mall — bloody, crumpled, slumped against a wall — were taken down.

Although the biggest social media platforms eventually removed the video, images of the shooter firing his weapon and photos of the shooter sprawled on his back, apparently already dead, are still widely available, for example on Reddit, which has placed a red “18 NSFW” warning on links to the video, indicating that the images are intended for adults and are “not safe for work.”

A moderator of Reddit’s “r/masskillers” forum told his audience that the platform’s managers had changed their policy, requiring images of dead victims to be removed.

“Previously, only livestreams of shootings and manifestos from the perpetrators were prohibited,” the moderator wrote. Now, “[g]raphic content of victims of mass killings is generally going to be something admins are going to take down, so we’ll have to comply with that.”

The group, which has 147,000 members, focuses on mass killings, but its rules prohibit users from sharing or asking for live streams of shootings or manifestos from shooters.

After the attack in Allen, YouTube “quickly removed violative content … in accordance with our Community Guidelines,” said Jack Malon, a spokesman for the company. In addition, he said, to make sure users find verified information, “our systems are prominently surfacing videos from authoritative sources in search and recommendations.”

At Meta, videos and photos depicting dead bodies outside the mall were removed and “banked,” creating a digital fingerprint that automatically removes the images when someone tries to upload them.

But people often find ways to post such videos even after companies have banned them, and Griffin argued that “you can’t get away anymore with ‘Oh, we took it down quickly,’ because it’s going to spread. There is no easy solution.”

Tech platforms such as Google, Meta and TikTok generally prohibit particularly violent or graphic content. But those companies often make exceptions for newsworthy images, and it can take some time before the platforms decide how to handle a particular set of images.

The companies consider how traditional media organizations are using the footage, how the accounts posting the images are characterizing the events and how other tech platforms are responding, said Katie Harbath, a technology consultant and former public policy director at Meta.

“They’re trying to parse out if somebody is praising the act ... or criticizing it,” she said. “They usually [want to] keep up the content denouncing it, but they don’t want to allow praise. … That starts to get really tricky, especially if you are trying to use automated tools.”

In 2019, Meta, YouTube, Twitter and other platforms were widely criticized for their role in publicizing the mass killing at two mosques in Christchurch, New Zealand. The shooter, Brenton Tarrant, had live-streamed the attack on Facebook with a camera affixed to his helmet. Facebook took the video down shortly afterward, but not until it had been viewed thousands of times.

By then, the footage had gone viral, as internet users evaded the platforms’ artificial-intelligence content-moderation systems by making small changes to the images and reposting them.

But just as traditional media outlets find themselves attacked both by those who want grisly images published and those who don’t, so too have tech companies been pummeled both for leaving up and taking down gruesome footage.

In 2021, Twitch, a live-streaming service popular among video game players, faced angry criticism when it suspended an account that rebroadcast video of Floyd’s death at the hands of Minneapolis police officer Derek Chauvin. The company takes a zero-tolerance approach to violent content.

“Society’s thought process on what content should be allowed or not allowed is definitely still evolving,” Harbath said.

Jeremy Barr contributed to this report.