A week after Google filed a defense brief with the US Supreme Court warning that altering Section 230 of the Communications Decency Act (CDA) would “upend the internet,” several corporations, including Twitter, Meta and Microsoft, filed their own legal briefs supporting Google’s argument that narrowing the statute could have dire consequences for digital publishers.
Under the 1996 CDA statute, companies are protected against liability for user content, including comments, reviews and advertisements. However, the Supreme Court has been asked to consider whether Section 230 is still relevant and appropriate, given that it was enacted before the internet became such an integral part of daily life.
The statute came under scrutiny after a lawsuit was brought by the family of Nohemi Gonzalez, a 23-year-old US citizen who was killed by ISIS in Paris in November 2015. Gonzalez’s family argues that recommendation algorithms should be considered editorial content that is not protected against liability under Section 230, and that Google-owned YouTube therefore violated the Anti-Terrorism Act (ATA) when its algorithms recommended ISIS-related content to users.
The Supreme Court is set to hear oral arguments in the case on February 21.
Lawmakers criticize Section 230 protections for social media
Republican and Democratic lawmakers alike have criticized the statute’s protections. Republicans say the liability protections allow social media sites to make biased decisions on content removal, while Democrats want sites to take more responsibility for content moderation. US President Joe Biden has said his administration would support the position that Section 230 protections should not extend to recommendation algorithms.
In its January 19 filing, Microsoft argued that if the Supreme Court were to make changes to Section 230, it would “strip these digital publishing decisions of long-standing, critical protection from suit—and it would do so in illogical ways that are inconsistent with how algorithms actually work.”
It added that any such ruling to narrow the statute “would thereby expose interactive computer services to liability for publishing content to users whenever a plaintiff could craft a theory that sharing the content is somehow harmful.”
In its own brief, Meta stated that the petitioners’ argument is “deeply flawed as a legal matter,” since interpreting Section 230 as protecting sites from liability for user content while removing that protection for recommending content “ignores the way the internet actually works.”
It went on to describe the complainants’ assertion as “misguided as a practical matter” and said that a ruling in their favor would ultimately “incentivize online services to remove important, provocative, and controversial content on issues of public concern.”
Twitter says liability protection needed for web sites to function
Twitter said that the current interpretation of Section 230 “ensures that websites like Twitter and YouTube can function notwithstanding the unfathomably large amounts of information they make available and the potential liability that could result from doing so.”
Since Elon Musk’s takeover of Twitter, the platform has come under fire for reinstating previously banned users, including former President Donald Trump and social media personality Andrew Tate, who is currently under investigation in Romania over allegations of rape and human trafficking.
However, before those oral arguments take place, several other high-profile cases are due to be considered.
Today, the Supreme Court is set to discuss whether to hear two cases that challenge laws in Texas and Florida barring online platforms from taking down certain political content. Additionally, a case with notable similarities to Gonzalez v. Google, Twitter v. Taamneh, is scheduled for oral arguments on February 22. In that case, Twitter, Facebook, and YouTube are alleged to have aided and abetted a different ISIS attack.