The Supreme Court heard oral arguments in Gonzalez v. Google on Feb. 21. At issue is the safe-harbor liability protection that web platforms enjoy under Section 230 of the Communications Decency Act of 1996. Evangelists of Section 230 have long argued that any significant gutting of the politically controversial law could upend the internet.

Whether the debate plays out through court cases or legislation, there is also a belief among tech executives, social justice activists, civil libertarians and defenders of the First Amendment that the law is necessary for encouraging viewpoint diversity and the rapid growth of the digital economy.

Nohemi Gonzalez was a college student studying abroad in France when she was gunned down by Islamic State extremists in the 2015 Paris terror attacks. Her family now claims that Google-owned YouTube is partly liable for her death because the terrorist group posted propaganda videos on the platform and YouTube's algorithm recommended those videos to other users.

The legal counsel for Nohemi Gonzalez’s family, University of Washington law professor Eric Schnapper, argued that Section 230 doesn’t necessarily cover recommendation algorithms. Typically, a recommendation system for a website such as YouTube is built on machine-learning models trained on large volumes of data collected from user behavior.

The algorithm uses that data to recommend content based on user interests. Such data includes past purchases, viewing history, Google search history, demographics and other factors. Algorithms like these are used across the web to optimize user experiences.
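For readers who want a concrete picture, here is a minimal sketch of how a content-based recommender of this kind might work. It is a hypothetical illustration, not YouTube's actual system; the feature vectors are invented stand-ins for signals such as viewing history and demographics.

```python
# Hypothetical content-based recommender sketch; NOT YouTube's
# actual algorithm. Feature vectors stand in for real signals
# such as watch history, searches and demographics.
import numpy as np

def build_user_profile(watched_vectors: np.ndarray) -> np.ndarray:
    """Average the feature vectors of items the user has engaged with."""
    return watched_vectors.mean(axis=0)

def recommend(profile: np.ndarray, candidates: np.ndarray, k: int = 3) -> np.ndarray:
    """Rank candidate items by cosine similarity to the user profile."""
    norms = np.linalg.norm(candidates, axis=1) * np.linalg.norm(profile)
    scores = candidates @ profile / np.where(norms == 0, 1.0, norms)
    return np.argsort(scores)[::-1][:k]  # indices of the top-k candidates

# Toy data: 4-dimensional feature vectors for videos.
watched = np.array([[0.9, 0.1, 0.0, 0.2],
                    [0.8, 0.2, 0.1, 0.1]])
catalog = np.array([[0.85, 0.15, 0.05, 0.1],   # similar to history
                    [0.10, 0.90, 0.40, 0.0],   # dissimilar
                    [0.70, 0.30, 0.00, 0.3]])

print(recommend(build_user_profile(watched), catalog))
```

Real production systems layer many more signals and learned models on top of this, but the basic pattern of building a profile from past behavior and ranking content against it is the one at issue in the case.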

Schnapper, however, argued that while Section 230 was created to immunize platforms from liability for illegal content published by third-party users, it doesn’t shield those websites from liability for recommending such content to others. This interpretation could significantly narrow the federal statute and place more liability on web platforms.

Lower courts have broadly interpreted Section 230 to immunize platforms from liability for malicious third-party users and their content. Platforms, in this reading, are simply venues for publishers, not publishers themselves. Justice Clarence Thomas has long lobbied for the high court to take up Section 230’s safe-harbor provisions. Now that the court has done so, many justices displayed confusion and caution across ideological lines.

Justice Elena Kagan struck a chord when she said the justices are “not the nine greatest experts on the internet.” Kagan’s remark seemed to align with other justices unsure about the potential damage to the digital economy and the several industries reliant on Section 230’s safe harbor.

Justice Brett Kavanaugh pointed out that the lower appeals courts agree that web platforms such as YouTube, Twitter, Facebook and Instagram shouldn’t be held legally liable when their algorithms happen to surface illegal content to other users.

That consensus holds only if algorithms built to optimize content recommendations aren’t intentionally designed to promote illegal content. Most recommendation algorithms don’t intentionally promote an illegal post from a third-party user. That is standard operating procedure in a legal environment that encourages web platforms to self-regulate and moderate posts that are bigoted or harmful, that spread misinformation or disinformation, or that simply violate terms of use.

Virtually every web platform is affected by the safe-harbor provisions of Section 230, and any major reform of the statute, even through case law and judicial review, could lead to what Kavanaugh called significant “economic dislocation.” Kavanaugh proved to be the voice of reason on the panel. He also questioned whether the high court was the correct venue for Section 230 reform, suggesting that Congress should lead any effort to amend the statute or else leave the current interpretation alone.