By Wall Street CN
The shield that tech giants have relied on to evade legal responsibility for three decades is facing unprecedented challenges.
Last week, Meta and Google's YouTube lost two separate jury trials, with total damages amounting to approximately $400 million.
At the same time, a number of new lawsuits have been filed, with plaintiffs' lawyers systematically chipping away at the legal immunity technology platforms have long enjoyed by crafting claims that sidestep Section 230 of the U.S. Communications Decency Act.
The Communications Decency Act was passed by the U.S. Congress in 1996 and signed into law by then-President Bill Clinton. Section 230 allows websites to act as content moderators without bearing liability for the user content they ultimately host.
Over the past 30 years, platforms such as Meta, Google, TikTok, and Snap have benefited from this clause, allowing them to define themselves as neutral platforms and thus avoid a large number of potential lawsuits.
As the tech industry transitions from the era of traditional search and social networks to a new landscape dominated by artificial intelligence, the nature of legal risks is subtly changing. Platforms are no longer simply passively hosting user content; instead, they are proactively shaping user experience through algorithmic recommendations, autoplay, and even AI-generated content.
Two lawsuits lost, product design becomes the new line of attack
Last week, a plaintiff using the pseudonym Jane Doe filed a class-action lawsuit against Google, accusing the company's AI model of generating its own summaries and links that leaked personally identifiable information of Epstein victims, including names, phone numbers, and email addresses.
According to CNBC, plaintiffs' attorney Kevin Osborne stated that the lawsuit was filed because Google refused the plaintiffs' request to remove victims' contact information from the AI model. Osborne said that because the information was spreading so rapidly, the case had to move quickly:
We chose to file the lawsuit at that time because we needed to act quickly to remove these things. People were receiving calls from complete strangers and death threats. It was a nightmare.
Osborne added that the timing was "purely coincidental," given Meta's loss in court last week, but he noted that what these cases have in common is that the plaintiffs all attempted to circumvent Section 230. Osborne said:
In this case, it involves AI-generated content, an area the courts have not yet explored in depth.
Last week, a New Mexico jury found Meta liable in a case involving child safety; meanwhile, a Los Angeles jury found Google's YouTube negligent in a separate personal injury case.
Both companies have stated their intention to appeal last week's verdicts.
Legislative deadlock and judicial prospects
At the U.S. Congressional level, both parties have proposed various reform plans for Section 230 of the Communications Decency Act, but none have been implemented.
Trump supported imposing more restrictions on social media companies during his first term; Joe Biden also publicly stated during his 2020 campaign that the provision should be repealed.
Nadine Farid Johnson, policy director of the Knight First Amendment Institute at Columbia University, attributes the legislative challenges to the fact that "these issues are extremely complex."
Farid Johnson is currently calling on Congress to take a more cautious reform path, suggesting that technology companies be allowed to receive Section 230 protection under the Communications Decency Act only after meeting specific conditions such as data privacy and platform transparency.
She warned:
As platforms continue to expand their use of generative AI and upgrade their algorithmic capabilities, the related legal challenges will become increasingly complex. Our concern is that each technological iteration turns into a game of whack-a-mole.
Legal experts say these cases could ultimately reach the U.S. Supreme Court on appeal, at which point the Court would issue an authoritative ruling on whether platforms are protected by the law.
David Greene, senior legal counsel at the Electronic Frontier Foundation, pointed out that there is currently no consensus in the legal community regarding whether product functionality is protected under Section 230 of the Communications Decency Act or even the First Amendment. Greene stated:
Simply labeling a feature as a 'design feature' is meaningless; if it is essentially speech, it is protected by both the First Amendment and Section 230 of the Communications Decency Act.