A Comment on Malwarebytes v. Enigma

From Bibliotheca Anonoma

Disclaimer: I am not a lawyer, and this article is not and should not be relied on as legal advice.

At first glance, Malwarebytes v. Enigma may not seem very important. It is one of those cases where one party has appealed all the way up to the Supreme Court, only for the Court to decide the case is not important enough to take.

But even if its legal effect may not extend beyond the medium or even the short term, it has historical value. For one, it is the first time the Supreme Court has interpreted, or even mentioned, a statute of great importance to the Internet: section 230 of the Communications Decency Act[1]. For another, it shows how the Supreme Court, or at least Clarence Thomas (nobody else joined this "statement"), is inclined to interpret section 230 should a case turning on it ever come before the Court.

When a judge interprets a law, he will of course look first to the plain meaning of the text. If the text yields no clear meaning, or a clearly absurd one, he will then look to the purpose of the legislature in enacting the law.

So first, a brief discussion of the Communications Decency Act, with a view to its purpose. In the early 1990s, the Internet had been opened to commercial use, and massive growth began as entrepreneurs decided to make their pot of gold by colonizing it. The Religious Right, being very much in power at the time, feared that content on the Internet would corrupt "good morals". Hence the name of the Communications Decency Act: its aim was to make it illegal to transmit "indecent" content on the Internet, even content the Supreme Court had found otherwise protected by the First Amendment.

Most state codes have provisions stating that headings of sections are not part of the law and shouldn't be used to interpret laws. The US Code is different in that section "catchlines" and "analyses" (what it calls headings and tables of contents) are part of the law. So in some cases you'll have tables of contents that don't match up with the actual headings, and the people who update the US Code are powerless to fix it, because only Congress can change them. What I'm getting at is that section headings can be used to find out what Congress meant in the sections they head.

So, the title of the actual enactment, "Telecommunications Act of 1996", is a welcome break from the blatantly ideological titles and/or contrived attempts to come up with "clever" acronyms that have become so common in recent years. This doesn't tell us much, and that's a good thing.

Title V of this Act has the short title of "Communications Decency Act" and the long title of "Obscenity and Violence". Section 509 is located within Subtitle A, headed "Obscene, Harassing, and Wrongful Utilization of Telecommunications Facilities". So we get an idea of what the Congress wanted to target by making this law.

Section 509 itself is entitled "Online Family Empowerment". The section it inserts into the Communications Act of 1934 is entitled "Protection for private blocking and screening of offensive material." This accords with the general purpose of the CDA: the purpose of this section is to allow families to block "offensive material" from their children through private means.

Now we come to subsections (a) and (b) of this section. Respectively, they declare the "findings" of Congress and the "policy" of the United States. They are an almost uniquely American feature; their purpose is to help the courts discover the purpose of the laws they are in, and so make it more likely for them to interpret the laws according to the will of Congress.

The fourth "finding" basically sums up the gist of all five findings: "The Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation." The "policy" states that Congress wants to keep the Internet free, but is concerned about obscene and harassing content online; it will encourage the private development of technology that will block such content.

And now, the meat of the whole section, subsection (c):

(c) Protection for `Good Samaritan' Blocking and Screening of Offensive Material.—
   ``(1) Treatment of publisher or speaker.—No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
   ``(2) Civil liability.—No provider or user of an interactive computer service shall be held liable on account of—
       ``(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
       ``(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

This section was intended as a replacement for the punitive provisions of the CDA, which made all indecency on the Internet illegal. Instead, it was enacted as just another section of the CDA, and the punitive provisions remained in the final law[2].

No doubt, paragraph (2) was intended to be the more important of the two short paragraphs of the section. It seems to fit better with the heading of the subsection. Yet in the long run, paragraph (1) became more momentous. Here's why: if you are not treated as the publisher or speaker of content someone else created, you cannot be held liable for defamation over it.

There was much outrage over the Act; for instance, it was the impetus for John Perry Barlow's Declaration of the Independence of Cyberspace. The ACLU soon took the Attorney General, Janet Reno, to court over it. In Reno v. ACLU (1997), the Supreme Court struck down the punitive provisions of the CDA but kept this section, which was not challenged. Subsections (d) and (e) simply set out ancillary provisions. "Information content provider", as defined by this section, includes users of the Internet as well as websites; "interactive computer service" includes websites as well as ISPs.

Since then, section 230 has become one of the most important enactments regulating the Internet. It allows sites to exist that cannot reasonably moderate all the content posted on them, because they are not liable for posted content they don't know about. And even if they remove some content, this section means they are not treated as the publisher or speaker of the content they choose to leave up, as long as they act in good faith.

Section 230 has been amended twice. The first amendment, in 1998, required ISPs to help customers find software that could help them stop minors from viewing obscene content. The second, in 2018, was the first real limitation on the expansive protection of section 230: facilitating sex trafficking and prostitution became something websites were liable for as publishers. If they knew about it, they had an obligation to do all they could to stop it, or else they would be civilly liable. This amendment is called FOSTA, and it did not affect the power of websites to freely remove content.

Now, despite the polarized political atmosphere in the United States, both sides of the aisle have attempted to reduce the scope of section 230 protections: the Democrats against perceived racism and hatred, the Republicans against perceived liberal bias in the moderation practices of the largest websites. For the first time, with this statement, the Supreme Court throws into the ring the hat it should have thrown a decade ago.

Thus much for background.

The statement itself is divided into two parts. Part I encompasses the main body, while Part II is the conclusion. It does not purport to interpret section 230. Rather, it points out some defects in how federal courts have interpreted it.

Part I consists of four divisions. Each division contains a criticism of how lower courts have interpreted section 230.

  1. Division A states that at common law, "distributors" are responsible for the content they distribute if they know it is illegal. Thomas' opinion is that Congress did not intend to eliminate distributor liability when it enacted section 230. To prove that, he cites section 502, which imposed distributor liability, without mentioning that this section was struck down by Reno v. ACLU[3].
  2. Division B states that courts have ignored the word "another" in section 230 by "giving Internet companies immunity for their own content." Thomas especially criticizes how websites that alter submitted content are still not liable for the content so altered according to the jurisprudence of some federal courts.
  3. Division C states that some courts have concluded that section 230(c)(1) protects corporations' decisions to remove content as well as to host it. Thomas views that line of thought as conflicting with the more specific liability shield in (c)(2)(A).
  4. Division D states that courts have used section 230 to "protect companies from a broad array of product-defect suits". Given as an example is a suit against Facebook for "recommending content by terrorists", which was dismissed under section 230.

Part II, as is customary for Supreme Court opinions, suggests possible alternative courses of action to the disapproved measure: for instance, it recommends the States and the Federal Government "update their liability laws to make them more appropriate for an Internet-driven society."

To conclude, here are some principles which should guide the debate on Section 230:

  • Being sued, even if you're completely, unquestionably in the right, is such a bad experience most will do anything to get out of it. On the other hand, the person suing you also has to do the same calculation in their head. On a disembodied limb, there is an infinite number of potential plaintiffs, and so a high chance at least one of them will decide suing you is worth it, and only one defendant, you!
  • Justice Thomas is easily the most conservative of the Supreme Court Justices. It is difficult to say, with Amy Coney Barrett's investiture and the conservative turn on the Highest Court, whether the Court is more or less likely to interpret Section 230 with a view to obviating Thomas's criticisms of the current interpretation. Conservative politicians are pro-corporate on one hand and anti-"Big Tech bias" on the other; time will tell if and how they balance each other out.
  • Even if section 230 is completely removed or reinterpreted beyond recognition, it will still be harder to win a defamation case in the United States than almost anywhere else. For "public figures", proof of "actual malice", that is, "knowledge of falsehood or reckless disregard of falsity", is required. In many other countries, the burden of proof is on the speaker to prove not only that their statement is true, but that they made it with good motives; in the United States, the burden of proof is always on the allegedly defamed party.

I am not American so I hope any American readers may forgive these unsolicited comments on American politics. In this present age of identity politics, on both Biden and Trump's campaign websites there are lists of "coalitions" showing particular groups of people for either candidate, including such exotic ones as "Sportsmen & Sportswomen For Biden"[4] and "Chaldeans for Trump"[5]. However, on both candidate webpages I saw not a single word for users of the Internet - not a single word for us. Until we unite to protect our own interests, the assault on our rights will, even if it doesn't succeed, never end.

We shouldn't give up our "true diversity of political discourse" over the real-life politics we may feel strongly and deeply about; on the contrary, we should recognize that our viewpoint diversity is untenable without freedom of speech and debate online. The Democrats and the Republicans use the Internet for their own ends, much as their predecessors a century and a half ago used Southern poor whites and African-Americans to reduce each other's power without actually caring about their well-being or intending to enforce their rights. Until we organize and speak up for ourselves, we will keep being "used", as Obama "used" the Internet and Trump "used" social media, instead of being spoken to.


  1. To be sure, this is section 230 of the Communications Act of 1934, added by section 509 of the Telecommunications Act of 1996. The CDA is the name of the part of the Telecommunications Act that section 509 sits in. This was codified at 47 USC 230. Confusing, I know.
  2. 521 U.S. 844, 858
  3. The CDA, as evidenced by its being named as though it were a separate law, was a "rider" on the Telecommunications Act of 1996 that did not pass through the hearings process the other six titles of the Act went through. And as shown above, section 230 is a rider on top of that rider.
  4. https://joebiden.com/coalitions/ Please archive as soon as possible
  5. https://www.donaldjtrump.com/coalitions Please archive as soon as possible