Friday’s slaughter in two New Zealand mosques played out as a dystopian reality show delivered by some of America’s biggest technology companies. YouTube, Facebook, Reddit and Twitter all had roles in publicizing the violence and, by extension, the hate-filled ideology behind it.
The alleged shooter, a heavily armed man whom authorities have not yet named, also released a 74-page manifesto denouncing Muslims and immigrants that spread widely online. He left behind a social media trail on Twitter and Facebook that amounted to footnotes to his manifesto. In the two days before the shooting, he posted roughly 60 links, many of them the same across platforms; nearly half pointed to YouTube videos that were still active late Friday.
The horror began Friday morning in New Zealand, as the alleged shooter used Facebook to live-stream his assault on Al Noor Mosque, one of the two Christchurch mosques he attacked and the scene of most of the 49 fatalities. Many hours later, long after the man and other suspects had been arrested, some internet users were still uploading and re-uploading the video to YouTube and other online services. A search for keywords related to the event, such as “New Zealand,” surfaced a long list of videos, many of them lengthy, uncensored views of the massacre.
The almost instantaneous spread of online images from Friday’s shooting underscored how deeply entwined social media platforms have become, with savvy users moving content back and forth across platforms faster than the platforms themselves can react. It also was a reminder of the repeated inability of YouTube, the world’s biggest video site, to detect and remove some types of violent content, even though it has for years automatically flagged nudity and copyrighted music.
“The rapid and wide-scale dissemination of this hateful content — live-streamed on Facebook, uploaded on YouTube and amplified on Reddit — shows how easily the largest platforms can still be misused,” said Sen. Mark Warner (D-Va.). “It is ever clearer that YouTube, in particular, has yet to grapple with the role it has played in facilitating radicalization and recruitment.”
Policy experts say U.S. regulators and Congress are ill-equipped to intervene in the problem, but technology companies have become uncharacteristically vulnerable in Washington. A growing number of policymakers from both parties have raised the prospect of new privacy laws, fines and even breakups of past tech mergers. How regulation would prevent the posting of such content was not immediately clear, however.
Public and political scrutiny is growing over YouTube in particular. The site has been at the center of a succession of controversies in recent years for spreading hateful online conspiracies, violent terrorist recruiting videos and a wide range of inappropriate content that reaches children, including suicide instructions spliced into kids’ videos.
Tech companies “have a content-moderation problem that is fundamentally beyond the scale that they know how to deal with,” said Becca Lewis, a researcher at Stanford University and the think tank Data & Society. “The financial incentives are in play to keep content first and monetization first.”
The New Zealand massacre video, which appeared to have been recorded with a GoPro helmet camera, was being discussed on the fringe message board 8chan, an anonymous forum known for its politically extreme and often hateful commentary, even before the attack began. Users on the site followed the attack in real time, cheering or expressing horror.
They also traded links to the alleged shooter’s hate-filled postings and to copies of his videos on various sites, encouraging one another to download the clips before they were taken offline. By Friday afternoon, short clips of the shooting had been edited to superimpose footage of YouTube personalities, as if they were live-streaming a video game.
The first-person shooting video spread particularly widely on YouTube as the people uploading it outraced website moderators’ ability to delete the clips. Some of the videos were named after quotes from the shooter, such as, “Let’s get this party started.”
YouTube tweeted Friday morning: “Our hearts are broken over today’s terrible tragedy in New Zealand. Please know we are working vigilantly to remove any violent footage.” Twitter said it had suspended the account of one of the suspects and was working to remove the video from its network, saying both violated its policies.
On message boards such as Reddit, people posted links to the videos, which would then sometimes get deleted, only for others to post new links to alternative “mirror” sites, beginning the cycle anew.
Reddit, one of America’s most popular websites, on Friday banned forums named “gore” and “watchpeopledie,” where the videos had been reposted for users to narrate and comment on in real time. A moderator of the “watchpeopledie” forum had defended keeping the video online because it offered “unfiltered reality.” The 7-year-old forum had more than 300,000 subscribers at the time of the New Zealand shooting.
Reddit said in a statement that it was “actively monitoring the situation in Christchurch, New Zealand. Any content containing links to the video stream are being removed in accordance with our site-wide policy.”
When a shooting video gets uploaded to a social media site, review teams often use that video to create a digital fingerprint, known as a hash, that they can add to a blacklist and check automatically when the video gets posted again. The years-old algorithmic technique, first popularized as a tactic to combat the spread of child pornography, has since been used to automatically flag copyrighted material, porn and other content that violates the social media sites’ rules.
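To illustrate the idea in its simplest form, here is a minimal sketch, not any platform’s actual system; the function names and the in-memory blacklist are hypothetical, of exact-match fingerprinting with a cryptographic hash:

```python
import hashlib

# Hypothetical blacklist of fingerprints built from videos that review teams flagged.
blacklist: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    """Exact-match fingerprint: a SHA-256 hash of the raw file bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

def flag(video_bytes: bytes) -> None:
    """Called after human review: remember this video's fingerprint."""
    blacklist.add(fingerprint(video_bytes))

def should_block(upload_bytes: bytes) -> bool:
    """Called on every new upload: blocks byte-identical copies of flagged videos."""
    return fingerprint(upload_bytes) in blacklist
```

In practice, matching systems such as Microsoft’s PhotoDNA rely on perceptual rather than cryptographic hashes, so that visually similar files, not just byte-identical ones, produce matching fingerprints.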
But such algorithms remain limited, experts say. Uploaders can sidestep them by altering clips in small ways, such as adding a watermark, distorting the audio, or changing a video’s size, length or speed. Several of the shooting videos reposted to YouTube appeared to carry such alterations, though it was unclear whether the changes contributed to their remaining online.
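Why do small alterations work? A cryptographic hash changes completely if even one byte of the file changes, and even a perceptual hash, which tolerates minor edits by comparing fingerprints within a bit-distance threshold, can be pushed past that threshold by heavier edits. The toy “average hash” below is a simplified sketch for illustration, not any platform’s real matcher:

```python
def average_hash(pixels: list[int]) -> int:
    """Toy perceptual hash of a frame downsampled to grayscale values (0-255):
    one bit per pixel, set when that pixel is brighter than the frame's mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def bit_distance(a: int, b: int) -> int:
    """Hamming distance: how many bits two fingerprints disagree on."""
    return bin(a ^ b).count("1")

# A flagged frame and a lightly re-encoded copy with a small brightness shift.
frame = [120] * 32 + [200] * 32
brightened = [min(p + 5, 255) for p in frame]

# The perceptual fingerprints stay close, so a small distance threshold
# still catches the altered copy ...
assert bit_distance(average_hash(frame), average_hash(brightened)) <= 8

# ... whereas a cryptographic hash of the altered bytes would differ entirely.
# Heavier edits (cropping, speed changes, watermarks) can push even the
# perceptual distance past the threshold, which is the evasion experts describe.
```

The design trade-off is the threshold itself: set it too tight and trivially altered reposts slip through; set it too loose and unrelated videos get flagged, which is one reason human review teams remain in the loop.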
Friday’s massacre in New Zealand marked at least the third time that Facebook has been used to broadcast video of a murder. In 2015, a gunman uploaded smartphone video that showed him shooting two television journalists from a station in Roanoke, Va. In 2017, a gunman posted video of his fatal shooting of a bystander in Cleveland, then went on Facebook Live to talk about the killing.
“Shock videos — especially with graphic first-person footage — is where reality television meets violent gaming culture meets attention-amplification algorithms,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “The modern internet has been designed engagement-first, and this works in opposition to quickly halting the spread of harmful material and ideas — especially for sensational ultra violent terrorism footage.”
Facebook and YouTube have said that artificial-intelligence algorithms help them patrol the onslaught of content posted to their platforms every minute, and that early successes have helped them crack down on explicit video and terrorist propaganda. Both companies in recent years also have made major new investments in human and automated systems for detecting and removing problematic content, together hiring tens of thousands of new employees to help.
“New Zealand Police alerted us to a video on Facebook shortly after the live stream commenced, and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” said Mia Garlick, a Facebook spokeswoman. “We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand Police as their response and investigation continues.”
The company said it removed the original video of the New Zealand shooting within an hour, but duplicates were quickly reposted on Facebook and other platforms.
Live video has been one of the biggest drivers of growth for Silicon Valley. In 2016, when Facebook chief Mark Zuckerberg announced an expansion of live video, he said it was to “support whatever the most personal and emotional and raw and visceral ways people want to communicate are as time goes on.”
But live video has also attracted bad actors who want to use the full force of that technical infrastructure to propel violent videos and hate speech around the world.
With live-streaming, “the potential for profit and notoriety [is] astronomical, and that can incentivize certain types of behavior,” said Joan Donovan, director of the Technology and Social Change Research Project at Harvard University’s Shorenstein Center. “The point isn’t to gain attention to the violence; the point is to gain attention to the ideology.”
The companies, she said, have little motive to police content because fast and easy sharing helps boost users, views and advertising revenue. Content moderation, meanwhile, is expensive and carries with it the potential for politicization, including recent claims by some conservatives and liberals that their posts are being unfairly suppressed.
The New Zealand Department of Internal Affairs said Friday that people who share video of the mosque shooting online “are likely to be committing an offense” because “the video is likely to be objectionable content under New Zealand law.”
“The content of the video is disturbing and will be harmful for people to see. This is a very real tragedy with real victims and we strongly encourage people to not share or view the video,” the agency said in a statement.
The department said it is working with social media platforms to remove the clips and urged the public to report objectionable content if they come across it. The agency also acknowledged the problem of autoplay, on social media and on the websites of traditional news outlets, through which people may see disturbing content without choosing to do so.
“We are aware that people may have unsuspectingly viewed the video on social media platforms thinking it is a media article, so please be vigilant of images that yourself and those around you are viewing, particularly our young people.”