In U.S., Social Media Platforms Not Liable When Hate Speech Leads to Harm

Shauna Curphey
4 min read · Sep 25, 2018

Facebook’s role in the Rohingya genocide in Myanmar raises a multitude of questions about how to prevent the spread of dangerous hate speech without quashing vital free expression online. So many questions, in fact, that this is part two in a series exploring them. I’m a lawyer, so my thoughts naturally turn to lawsuits as a way to change corporate behavior.

Can the survivors of the Rohingya genocide sue Facebook in the U.S. for its role in propagating the hate speech that contributed to the violence against them?

The answer, so far, is surprisingly unequivocal: no.

Photo credit: Marco Verch, Creative Commons 4.0

Several similar lawsuits have failed. In Cohen v. Facebook and Force v. Facebook, plaintiffs, who include residents of Israel and the family members of victims of terrorist attacks there, alleged that Palestinian terrorists used Facebook “to incite, enlist, organize, and dispatch would-be killers to ‘slaughter Jews.’” In Crosby v. Twitter, victims of the mass shooting at the Pulse nightclub in Orlando alleged that ISIS used Facebook, Twitter, and Google to post proselytizing videos, which radicalized the shooter, triggering the attack. Similar suits have been brought by the victims of the November 2015 attack in Paris, the December 2015 shooting in San Bernardino, the July 2016 truck attack in Nice, and the August 2017 van attack in Barcelona.

To date, courts have dismissed the lawsuits at the pleading stage — meaning, even assuming that everything the plaintiffs alleged was true, they still had no case. They ruled that Section 230 of the Communications Decency Act of 1996 forecloses these types of claims. In essence, Section 230 protects website operators from liability for material posted on their sites by someone else.

Plaintiffs have tried to overcome that immunity, arguing that social media sites play an active role in shaping their content by using algorithms to place ads and to connect users with content and with each other, by deciding to remove some posts but leave others, and by banning some users but not others. But courts have held that decisions about the structure and operation of a website, or about who may obtain an account, do not disqualify a social media site from Section 230 protection.

And this is, perhaps, a good thing. The Electronic Frontier Foundation has called Section 230 “the most important law protecting internet speech.”

Without Section 230, online intermediaries, a category that includes not just giants like Facebook, Twitter, and YouTube but every website that allows a user to post a comment or a review, would have to take a much more active role in policing what users say online, and that burden might lead them to eliminate user-contributed content altogether.

Moreover, even if Section 230 were not an issue, standing poses an additional legal hurdle in these types of cases. Essentially, standing requires a personal stake in the outcome of the case. Courts have held that plaintiffs’ fear of falling victim to the violence promoted against them online is insufficient to confer standing absent a showing that there is a risk of imminent harm. In Cohen v. Facebook, for example, the court found that the plaintiffs, who were 20,000 residents of Israel, lacked standing because they had failed to allege that they specifically “will be the target of any future, let alone imminent, terrorist attack.”

Even where plaintiffs have suffered harm, a case will fail if a court decides that the defendant’s actions were not the “proximate cause,” i.e., the acts were too attenuated or too indirect to be considered the cause of the harm. In Crosby v. Twitter, for example, the victims of the Pulse nightclub shooting alleged that content posted by ISIS on social media radicalized the shooter. The court held that plaintiffs failed to establish causation because they did not allege any contacts or communications between anyone — let alone between ISIS and the shooter — that directly concerned the attack before it took place.

The standing and causation requirements illustrate that, even absent the immunity granted by Section 230, litigation is often ill-suited to address the broad human rights concerns raised by Facebook’s role in the Rohingya genocide. Lawsuits typically result in the payment of monetary damages to the victims. Any other form of relief (e.g., injunctive relief) requires a showing of a substantial likelihood of future harm and that the proposed remedy is appropriate. Thus, a lawsuit would not fix the fact that Facebook has become the de facto internet in Myanmar. It would not remedy Facebook’s failure to hire a single employee in Myanmar despite its 18 million users there. (Facebook is currently advertising in Dublin for a Burmese market specialist.) And, even assuming Section 230 did not apply, a lawsuit would not grapple with the broader question of how to impose liability for harmful hate speech without destroying opportunities for free expression online.

Litigation is a limited tool, indeed. In this example, it does not appear to be the best way to address the host of problems posed by Facebook’s power to spread hate. There are multiple ways to seek to change corporate behavior, however. I plan to explore them in this series. More to come.

Shauna Curphey

Lawyer, Researcher and Advocate: Business and Human Rights; Corporate Accountability; Access to Remedy | @shaunamc | www.justground.org