Trump Administration Launches New Crackdown on Content Moderation
TL;DR: President Trump signed an executive order on January 20, 2025, directing federal agencies to investigate and end government pressure on social media platforms to moderate content, while the FTC pursues a separate regulatory inquiry into tech companies' content moderation practices. Multiple legal battles are underway challenging both government restrictions and platform editorial decisions.
Key Takeaways
- President Trump signed an executive order on January 20, 2025, titled "Restoring Freedom of Speech and Ending Federal Censorship," directing the Attorney General to investigate federal government activities that pressured social media companies to moderate content over the past four years [5]
- The Federal Trade Commission launched an inquiry in February 2025 into "tech censorship," requesting public comment on how social media platforms' content moderation practices may constitute unfair or deceptive practices under the FTC Act [7]
- The House Judiciary Committee, chaired by Rep. Jim Jordan, released a 160-page report in February 2026 accusing the European Union's Digital Services Act of forcing American tech companies to censor U.S. users' speech [12]
- Elon Musk's X Corp sued New York State in June 2025, challenging the Stop Hiding Hate Act, which requires social media companies to disclose their content moderation practices and submit biannual reports to the state attorney general [17]
- The U.S. Supreme Court declined to resolve the constitutional question in Moody v. NetChoice (2024), leaving lower courts to continue wrestling with whether state laws restricting platform content moderation violate the First Amendment [4]
What is the Trump Administration's approach to content moderation?
The Trump Administration has taken a multi-pronged approach to challenging content moderation practices. The centerpiece is Executive Order "Restoring Freedom of Speech and Ending Federal Censorship," signed on Trump's first day in office [5]. This order establishes a policy that "no Federal Government officer, employee, or agent engages in or facilitates any conduct that would unconstitutionally abridge the free speech of any American citizen" [5].
The order specifically targets what it describes as the previous administration's practice of "exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve" [5]. The Attorney General is directed to investigate these activities and prepare recommendations for remedial actions.
Separately, the State Department has launched diplomatic attacks on European content moderation regulations, calling the EU's Digital Services Act "Orwellian" and claiming it amounts to censorship [18]. Secretary of State Marco Rubio announced visa restrictions targeting foreign officials involved in censoring Americans' online posts [18].
How are federal agencies targeting tech companies?
The Federal Trade Commission has opened a formal inquiry into what it terms "tech censorship," requesting public comment on how platforms' content moderation decisions may violate consumer protection laws [7]. The FTC is exploring whether platforms' "adverse actions" against users constitute "deceptive" or "unfair" practices under Section 5 of the FTC Act [13].
However, legal experts warn this approach faces significant First Amendment hurdles. The Future of Free Speech, a think tank at Vanderbilt University, cautioned in public comments that "content moderation may spark public disagreement, but that's not the same as deception or unfairness under consumer protection law" [7]. They argue that treating editorial discretion as consumer deception would transform the FTC into an "unconstitutional speech regulator" [7].
The Federal Communications Commission, under Chairman Brendan Carr, has also criticized European content moderation rules as threatening free speech, though the FCC has limited authority over online content [2].
What legal challenges are platforms facing?
Multiple lawsuits are testing the boundaries between platform editorial rights and government regulation. The most significant ongoing case stems from Moody v. NetChoice, where the Supreme Court in 2024 declined to resolve whether state laws restricting platform content moderation violate the First Amendment [4].
The case involved conflicting circuit court decisions on Texas and Florida laws. The Fifth Circuit upheld a Texas neutrality requirement prohibiting platforms from moderating content based on viewpoint, while the Eleventh Circuit struck down a Florida law mandating that platforms carry speech from certain speakers like journalists and political candidates [4].
| Platform Challenge | Legal Theory | Current Status |
|---|---|---|
| X Corp v. New York | First Amendment editorial discretion | Active lawsuit filed June 2025 [17] |
| Moody v. NetChoice | State content moderation restrictions | Remanded by Supreme Court for further analysis [4] |
| FTC Antitrust Inquiry | Consumer protection/competition law | Public comment period ongoing [7] |
Elon Musk's X Corp has been particularly aggressive in challenging government oversight, filing suit against New York's Stop Hiding Hate Act, which requires platforms to disclose content moderation practices [17]. X argues these disclosure requirements violate the First Amendment by compelling companies to reveal "how their editorial processes work" [17].
What is Congress investigating?
House Judiciary Committee Chairman Jim Jordan has conducted a year-long investigation into European influence on American content moderation. His committee's February 2026 report accuses the EU's Digital Services Act of conducting "a decade-long campaign to censor political speech worldwide" [12].
The report, based on subpoenaed documents from ten major tech companies, claims EU officials pressured platforms to censor content on COVID-19 vaccines, the Ukraine war, and transgender issues [12]. Internal TikTok documents allegedly show the platform revised its global Community Guidelines in 2024 specifically to comply with the Digital Services Act [12].
Jordan's investigation found that European regulations effectively control speech worldwide because major platforms use global content moderation policies rather than country-specific rules [12]. The report characterizes this as "extraterritorial actions" that "directly infringe on American sovereignty" [12].
Why This Matters
These developments represent the most significant challenge to current content moderation practices since Section 230 of the Communications Decency Act was enacted in 1996. The legal framework that has allowed platforms to moderate content while avoiding liability for user posts is under unprecedented pressure from multiple directions [6].
The outcome will determine whether platforms retain editorial discretion over their content or face government mandates about what speech they must carry. This has implications beyond social media, potentially affecting how all online services—from comment sections to app stores—manage user-generated content.
The international dimension adds complexity, as American platforms must navigate conflicting regulatory demands from the U.S. and Europe. The Trump Administration's diplomatic pressure on European regulators suggests this could become a broader trade and sovereignty issue.
FAQ
Q: Can the government force social media platforms to host certain content?
A: The First Amendment generally protects platforms' editorial discretion, similar to newspapers' right to choose what to publish. However, the Supreme Court has not definitively ruled on this question for social media platforms, leaving the issue unresolved [4].

Q: What is Section 230 and how might it change?
A: Section 230 of the Communications Decency Act provides legal immunity to platforms for user-generated content and clarifies their ability to moderate content. While not directly targeted by current actions, the broader regulatory pressure could lead to legislative changes affecting these protections [6].

Q: How do European regulations affect American users?
A: Because major platforms often use global content policies rather than region-specific rules, European regulations like the Digital Services Act can influence what content Americans see, even though the laws don't directly apply in the U.S. [12].

Q: What happens if platforms refuse to comply with disclosure requirements?
A: Platforms could face enforcement actions, fines, or other penalties depending on the specific law. However, they may also challenge these requirements in court on First Amendment grounds, as X Corp is doing with New York's law [17].

Q: Will these changes make social media less moderated?
A: The ultimate impact depends on how legal challenges resolve and what specific policies are implemented. Some proposals could reduce content moderation, while others focus more on transparency and process rather than limiting moderation itself [1].
Sources
[1] https://www.congress.gov/crs-product/R46662
[2] https://www.fcc.gov/consumers/guides/fcc-and-speech
[3] https://www.ebsco.com/research-starters/communication-and-mass-media/free-speech-and-regulation-social-media-content
[4] https://www.theregreview.org/2025/10/15/huang-a-new-approach-to-understanding-content-moderation/
[5] https://www.whitehouse.gov/presidential-actions/2025/01/restoring-freedom-of-speech-and-ending-federal-censorship/
[6] https://www.newamerica.org/insights/everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user-generated-content/legal-frameworks-that-govern-online-expression/
[7] https://futurefreespeech.org/the-future-of-free-speech-urges-ftc-to-stay-within-constitutional-bounds-on-content-moderation/
[8] https://progresschamber.org/research/toxic-by-design-ftc-fcc-content-moderation-230/
[9] https://www.pewresearch.org/short-reads/2023/07/20/most-americans-favor-restrictions-on-false-information-violent-content-online/
[10] https://www.youtube.com/watch?v=bxidA-9toEY
[11] https://www.rstreet.org/?post_type=research&p=95059
[12] https://www.cleveland.com/news/2026/02/jim-jordan-report-accuses-eu-of-censoring-american-speech-through-tech-companies.html
[13] https://www.aei.org/research-products/report/first-amendment-problems-with-using-antitrust-law-against-social-media-platforms-content-decisions/
[14] https://missouriindependent.com/2025/09/10/former-missouri-ag-shelves-rule-on-social-media-moderation-as-he-leaves-office/
[15] https://www.cambridge.org/core/journals/cambridge-forum-on-ai-law-and-governance/article/metas-ai-moderation-and-free-speech-ongoing-challenges-in-the-global-south/2DB952F896DB5744A43CD3E6C1A6DCB4
[16] https://hls.harvard.edu/today/rappaport-forum-talks-first-amendment-limits-of-content-moderation-lawful-but-awful-speech-on-social-media/
[17] https://time.com/7295402/elon-musk-x-new-york-lawsuit-free-speech-content-moderation/
[18] https://www.politico.com/news/2025/07/22/state-department-attacks-europe-free-speech-00468916
[19] https://www.nytimes.com/2025/06/09/technology/youtube-videos-content-moderation.html
[20] https://www.aljazeera.com/news/2025/6/17/elon-musks-x-sues-new-york-to-block-social-media-hate-speech-law