
After Musk's acquisition, hate speech on the platform increased by approximately 50%.

Annual Platform Data Analysis: Trends in Homophobic, Racist, and Transphobic Speech and Assessment of Fake Account Activity


Published: 23/12/2025

Contents

  1. Research Background and Core Problem Statement
  2. Association Between Online Hate Speech and Offline Hate Crimes
  3. Timeline of Musk's Acquisition of Platform X and His Commitments
  4. Measurement Methods for English Hate Speech
  5. Evaluation Criteria for Fake and Bot-like Accounts
  6. Comparison of Weekly Hate Speech Incidence Before and After the Acquisition
  7. Trends in Specific Types of Hate Speech (Homophobia, Racism, Transphobia)
  8. Analysis of Changes in Likes on Hate Speech Posts
  9. Evolution of Fake Account Activity
  10. Comparison of Research Findings with Platform X's Public Statements
  11. Research Limitations and Cautious Explanation of Causality
  12. Online Platform Safety and Content Moderation Policy Recommendations

Document Introduction

As social media platforms become increasingly influential in the public sphere, the spread of online hate speech and misinformation has become a critical issue for social safety and the public interest. Previous research has found an association between online hate speech and offline hate crimes, while bots and bot-like accounts cause a range of harms by spreading misinformation and spam, including facilitating fraud, interfering with real-world elections, and undermining public health campaigns.

On October 27, 2022, Elon Musk completed his acquisition of the former Twitter platform and became its CEO. Despite his commitment to reducing bot activity on the platform, existing research showed an initial increase in hate speech after the acquisition, with no decrease in fake accounts. However, whether this trend persisted until Musk stepped down as CEO in June 2023 had not previously been established; that question is the core focus of this study.

To address this research gap, Daniel Hickey and colleagues at the University of California, Berkeley, used validated research methods to conduct a systematic analysis of English-language hate speech and fake account activity on Platform X between 2022 and 2023. Through quantitative analysis, the study focused on key metrics such as the incidence of hate speech before and after the acquisition, changes in specific types of hate speech, the reach of related posts, and the evolution of fake account activity.
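The core before/after comparison described above can be illustrated with a minimal sketch. All post counts, dates, and the per-10,000 normalization below are invented for illustration; the study's actual data, hate-speech classifier, and statistical methods are not reproduced here.

```python
# Hypothetical sketch: comparing weekly hate-speech incidence before and
# after a cutoff date. The weekly counts are invented example values.
from datetime import date
from statistics import mean

# (week_start, hate_posts, total_posts) -- invented example values
weeks = [
    (date(2022, 9, 5), 120, 100_000),
    (date(2022, 9, 12), 115, 98_000),
    (date(2022, 11, 7), 190, 101_000),
    (date(2022, 11, 14), 185, 99_000),
]

CUTOFF = date(2022, 10, 27)  # acquisition date

def weekly_rate(hate_posts, total_posts):
    """Hate posts per 10,000 posts for one week."""
    return hate_posts / total_posts * 10_000

pre = [weekly_rate(h, t) for d, h, t in weeks if d < CUTOFF]
post = [weekly_rate(h, t) for d, h, t in weeks if d >= CUTOFF]

# Percent change in the mean weekly rate after the cutoff
pct_change = (mean(post) - mean(pre)) / mean(pre) * 100
print(f"pre: {mean(pre):.2f}  post: {mean(post):.2f}  change: {pct_change:+.1f}%")
```

This captures only the arithmetic of a pre/post rate comparison; the published study additionally had to classify hate speech at scale and account for changes in overall platform activity.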

The results indicate that the surge in hate speech that began shortly before Musk's acquisition persisted until at least May 2023. Compared with the months before the acquisition, the weekly incidence of hate speech on the platform was approximately 50% higher afterward, with significant increases in specific types of hate speech such as homophobia, racism, and transphobia. At the same time, the average number of likes on hate speech posts increased by 70%, indicating that more users were exposed to such harmful content. Notably, the number of bot and other fake accounts on the platform did not decrease and may actually have increased.

These findings are inconsistent with Platform X's public claim that "exposure to hate speech declined after the acquisition." The researchers note that, lacking information about specific internal policy changes at Platform X, they cannot establish a clear causal relationship between Musk's acquisition and the observed trends. Nonetheless, based on the results, they expressed concern about the safety of online platforms and called on Platform X to strengthen its content moderation. They also recommended further research to fully understand how harmful content spreads on social media platforms.

The study authors emphasize that current policies aimed at reducing user exposure to harmful content appear insufficiently effective, a conclusion that offers an important empirical reference for future platform governance and policy-making. The results were published on February 12, 2025, in the open-access journal *PLOS One*. The findings hold significant academic value and practical implications for understanding social media platform governance, harmful content control, and digital safety.