TL;DR

X has announced plans to speed up its review of hate and terrorist content in the UK, committing to assess reports within 24 hours and to restrict access to accounts posting illegal terrorist content. The move comes amid regulator investigations and ongoing scrutiny over the platform’s rise in hate speech.

X has announced it will improve its response to hate and terrorist content in the UK, committing to review reports within 24 hours and to restrict access to accounts that post illegal terrorist content. The announcement comes amid increased regulatory pressure and concerns over the rise in hate speech on the platform since Elon Musk’s acquisition.

According to Ofcom, the UK’s communications regulator, X has committed to reviewing and assessing reports of hate and terrorist content within an average of 24 hours, or at least 85 percent of such reports within 48 hours. The platform also plans to work with UK experts and ban offending accounts to combat hate speech and terrorism. Ofcom will monitor X’s performance quarterly over the next year as part of its ongoing investigation into the platform’s compliance with online safety regulations.

This announcement follows a study from UC Berkeley indicating that hate speech on X increased by 50 percent after Elon Musk’s acquisition, partly due to a rise in bot activity. X says it is now taking steps to address the issue, including faster content review and account bans. The regulator is also continuing its investigation into Elon Musk’s Grok AI over the generation of illegal content, and previously fined 4chan nearly $700,000 for violations of the UK’s Online Safety Act.

Why It Matters

This development is significant because it signals regulatory pressure on X to curb hate speech and terrorist content in the UK, an issue that has escalated since Musk’s takeover. If the platform’s commitments are carried out effectively, they could improve online safety and hate crime prevention in the country. However, skepticism remains about X’s willingness and ability to follow through, given Musk’s history of controversial posts and the platform’s previous increase in hate speech.


Background

Since Elon Musk acquired Twitter and rebranded it as X, multiple studies and reports have documented a rise in hate speech and bot activity on the platform. UK regulators, including Ofcom, have signaled increased concern over the proliferation of illegal and harmful content, prompting investigations and regulatory actions. Prior to this, X faced criticism for its handling of hate speech, with some experts questioning whether the platform’s moderation policies have been sufficient. The UK government has also emphasized the importance of online safety, especially following recent hate-motivated crimes affecting the Jewish community.

“We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action.”

— Oliver Griffiths, Ofcom’s Online Safety Group Director

“We are committed to reviewing and assessing hate and terrorist content promptly and working with experts to enhance safety measures in the UK.”

— X spokesperson


What Remains Unclear

It remains unclear whether X will fully meet its proposed review timeframes or effectively remove all illegal hate and terrorist content. Skepticism persists given Musk’s personal posting history and the platform’s moderation priorities, both of which could undermine these commitments. Ofcom’s investigation into X’s compliance is still in progress, and the platform’s actual performance data has yet to be reported.


What’s Next

Ofcom will review X’s quarterly performance data over the next year to assess compliance with its commitments. The regulator’s ongoing investigation into X’s content moderation practices and Musk’s AI tools will also influence future regulatory actions. The platform’s ability to deliver on its promises will likely be scrutinized through these reviews and public reporting.


Key Questions

Will X actually reduce hate speech in the UK?

It is uncertain whether X will effectively implement its commitments, given past increases in hate speech and Musk’s controversial posting history. Ongoing regulator oversight will determine actual outcomes.

What specific actions will X take to remove hate and terrorist content?

X plans to review reports within 24 hours on average, assess at least 85 percent of reports within 48 hours, work with UK experts, and ban offending accounts. Details on enforcement are still emerging.

How will Ofcom monitor X’s compliance?

Ofcom will review X’s performance data quarterly over the next year, evaluating the platform’s responsiveness and effectiveness in removing harmful content.

Has X been penalized for hate speech before?

While Ofcom has fined other platforms, such as 4chan, for violations, there is no record of X being penalized directly for hate speech prior to this commitment. The ongoing investigation may lead to future sanctions if non-compliance is found.
