Google (GOOG) (GOOGL) presented arguments before the U.S. Supreme Court Tuesday in a case that could reform the internet – and especially the business models of social media companies – by further defining how much risk comes with hosting third-party content.
The court’s decision in Gonzalez v. Google is expected to clarify the scope of a 27-year-old law that provides broad liability protection to websites and apps that host others’ ideas, images, and compositions. The bottom line: the case could alter whether interactive sites like YouTube, Facebook, Instagram (META), TikTok, and Twitter can steer clear of legal responsibility when they recommend third-party content.
Arguments concluded Tuesday, and a decision could be handed down in June.
During arguments on Tuesday, justices questioned both sides: the plaintiffs’ contention that site owners, under Section 230 of the 1996 Communications Decency Act, can be held liable when their organizational algorithms recommend particular content by generating thumbnails for suggested videos, and Google’s claim that the company’s mere organizational choices can’t strip it of Section 230 protection.
According to the plaintiffs, YouTube's act of creating thumbnails — images that appear in internet search results as representations of available third-party content — converts the company from a passive host of third-party content into a recommender of content, more akin to publishers or speakers that are not covered by Section 230's liability shield.
The justices centered many of their questions around how to draw a line between publishing and hosting.
"The question is what you do with the algorithm," Gonzalez's lawyer Eric Schnapper told the court. "It's the recommendation practice that we think is actionable."
Google's lawyer Lisa Blatt objected, arguing that for content providers, there's no way around making choices to organize and provide search results to their users.
"We have to organize somehow," Blatt told the court.
Daniel Lyons, a law professor and associate dean at Boston College Law School, said the Gonzalez petitioners didn’t have a good day in court.
"They seemed to be struggling to explain what precisely their argument was," Lyons said. "Multiple lines of questions showed the justices struggling with where to draw the line between user speech and the platform's own speech."
Associate Justice Elena Kagan asked if Section 230 should protect websites like YouTube, regardless of whether their search algorithms are neutral, a distinction made important by the Ninth Circuit Court of Appeals in Dyroff v. Ultimate Software Group, which held that Section 230 provided liability protection because the defendant's algorithm recommended harmful third-party content in the same way that it recommended other third-party content.
Associate Justice Neil Gorsuch suggested that the Ninth Circuit's rule is flawed given that algorithms by design are incapable of genuine neutrality.
Justices Amy Coney Barrett and Ketanji Brown Jackson asked Blatt if Congress meant to limit Section 230 immunity, given that a central goal of the legislation was to block harmful and offensive online content. Such a limitation, Blatt said, wasn't Congress' plan because too much or too little moderation would have produced only "garbage on the internet."
"Your theory suggests to me that it's exactly the opposite of what Congress was trying to do," Justice Jackson said.
The complexity of the issue was highlighted by several justices who voiced confusion over the plaintiffs' arguments.
"These are not the nine greatest experts on the internet," Justice Kagan said.
In a separate argument on behalf of the plaintiffs, a lawyer for the U.S. Justice Department argued that platforms should lose their immunity protections when they organize or recommend content in a way that's unlawful, such as by discriminating or knowingly repeating libelous content.
In Gonzalez, family members and the estate of Nohemi Gonzalez, a 23-year-old U.S. citizen killed in a December 2015 ISIS shooting at Paris' La Belle Equipe bistro, argue that Google should be held at least partially liable for her death. That's because, they allege, the company's YouTube service knowingly permitted and recommended, via algorithms, inflammatory ISIS-created videos that allegedly played a key role in recruiting the attackers.
A decision in the plaintiffs’ favor would not only overhaul the precautions platforms must take to protect against harmful third-party posts, it would also threaten the lucrative advertising revenue streams that rely on user-generated content.
Lyons cautioned against predicting how the court would rule, though he said he was surprised that even Justice Clarence Thomas seemed hostile toward the Gonzalez arguments and that Justice Jackson seemed sympathetic to the plaintiffs' cause.
"Since 2019, he has been the loudest voice on the court for taking a Section 230 case to narrow the scope of the statute," Lyons said. "But he seemed unable to accept the petitioners' arguments today."
Google drew support from 47 friend-of-the-court briefs, including filings from Instagram and Facebook parent company Meta (META), TikTok parent company ByteDance, and Microsoft (MSFT). NetChoice, a trade group that advocates for major tech firms, also filed in support of Google.
Lawmakers on both sides of the aisle have suggested remaking the 1996 law. Some Republicans say the law allows Big Tech to silence content with impunity, while certain Democrats, including President Joe Biden, say it allows the companies to spread false information with ease.
Before the Supreme Court took up the Gonzalez case, the U.S. District Court for the Northern District of California dismissed the lawsuit at Google's request, concluding that Section 230 barred the claims because ISIS, not Google, created the videos. Meanwhile, judges in separate jurisdictions, faced with similar claims, applied varying interpretations to Section 230's liability shield.
In a case alleging that Facebook's algorithmic recommendations led to killings by Islamist militant group Hamas, the U.S. Court of Appeals for the Second Circuit similarly held in Force v. Facebook that Section 230 protected the social media company from liability. The court reasoned that recommendations brought about the same effect as native third-party posts.
However, in a partial dissent, Judge Robert Katzmann argued that content recommendations convey a message from the platform itself and therefore fall outside Section 230 protection. Extending Section 230 to recommendations, he said, immunizes social media companies for the "unsolicited, algorithmic spreading of terrorism."
Once online companies transitioned from subscription-based revenue streams to advertising-based revenue streams, the plaintiffs argue, their motivation to use automated algorithms skyrocketed. The algorithms, they say, tend to increase the time that users spend at their websites.
“The public has only recently begun to understand the enormous prevalence and increasing sophistication of these algorithm-based recommendation practices,” the plaintiffs wrote in court documents.
On Wednesday, the court will hear a related case over platform liability, Twitter v. Taamneh, in which the plaintiffs allege that Twitter should be held liable under anti-terrorism law for providing material aid to ISIS terrorists.
This story has been updated to include comments from Boston College Law School professor Daniel Lyons.
Alexis Keenan is a legal reporter for Yahoo Finance. Follow Alexis on Twitter @alexiskweed.