Twitter suspended the accounts of well-known white nationalists Monday, moving swiftly after putting into place new rules on what it sees as abusive content.
The account of far-right group Britain First, a small group that regularly posts inflammatory videos purporting to show Muslims engaged in acts of violence, was among the first to go dark. The individual accounts of two of its leaders, Jayda Fransen and Paul Golding, were also suspended.
President Donald Trump caused a stir last month when he retweeted a post by Fransen, drawing criticism from British Prime Minister Theresa May. Fransen and Golding were arrested in Belfast last week for allegedly stirring up hatred.
The guidelines, announced a month ago and put into force this week, address hateful images or symbols, including those attached to user profiles.
Monitors at the company will weigh hateful imagery in the same way they do graphic violence and adult content.
If a user wants to post symbols or images that might be considered hateful, the post must be marked "sensitive media." Other users would then see a warning that would allow them to decide whether to view the post.
Twitter is also prohibiting users from abusing or threatening others through their profiles or usernames.
The account for American Renaissance, a white nationalist online magazine run by Jared Taylor, was among those suspended. The magazine responded to the Twitter ban with the terse message, "this isn't goodbye," and referred readers to a chat site frequented by white nationalists. Brad Griffin, who blogs under the name Hunter Wallace on the website Occidental Dissent, said in a blog post that he was also suspended, along with Michael Hill of the League of the South and others.
The white nationalist Richard Spencer, whose account was not suspended, tweeted that he had lost more than a hundred followers in the past 24 hours and noted that he didn't "see any systematic method to the #TwitterPurge."
There appeared to be some inconsistencies in the enforcement. David Duke remained on Twitter, with some of his posts hidden behind the "sensitive material" warning. Yet Twitter allowed him to keep the message "It's Ok To Be White" as his header, even though the same phrase was hidden by the warning on his pinned tweet.
Twitter said it would not comment on individual accounts.
While the new guidelines are now in effect, the social media company is still developing internal monitoring tools and revamping its appeals process for banned or suspended accounts. In the meantime, Twitter will begin accepting reports from users.
Users can now report entire profiles, or the users behind them, that they consider to be in violation of Twitter policy. Previously, users could only report individual posts they deemed offensive.
Now being targeted are "logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin."
There is no specific list, however, of banned symbols or images. Rather, the company will review complaints individually to consider the context of the post or profile, including cultural and political considerations.
It is also broadening existing policies intended to reduce threatening content, to include imagery that glorifies or celebrates violent acts. That content will be removed and repeat offenders will be banned. Beginning Monday, the company will ban accounts affiliated with "organizations that use or promote violence against civilians to further their causes."
While more content is now banned, the company has also given itself more leeway after drawing criticism that strict application of its rules led to unwarranted account suspensions.
There was a backlash against Twitter after it suspended the account of actress Rose McGowan, who had opened a public campaign over sexual harassment and abuse, specifically naming Hollywood mogul Harvey Weinstein. Twitter eventually reinstated McGowan's account and said it had been suspended because of a tweet that violated its rules on privacy.
"In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process," Twitter said in its blog post.
Twitter relies in large part on user reports to identify problematic accounts and content, but the company said it is developing "internal tools" to bolster its ability to police content.
Twitter also seeks to improve communications with users about the decisions it makes. That includes telling those who have been suspended which rules they had violated.