Twitter announced on Thursday that it had cut off 235,000 accounts during the past six months in a heightened crackdown on use of the global messaging service to promote violent extremist causes.
The suspensions raised to 360,000 the total number of accounts sidelined since the middle of 2015 and were helping "drive meaningful results" in curbing the activity, according to the San Francisco-based company. Twitter has been striving to balance protecting free speech at the one-to-many messaging service with not providing a stage for extremist groups to spread violent messages and enlist people to their causes.
In February, Twitter said that it had neutralised 125,000 accounts for violating rules against violent threats and promotion of terrorism. "Since that announcement, the world has witnessed a further wave of deadly, abhorrent terror attacks across the globe," Twitter said in a blog post. "We strongly condemn these acts and remain committed to eliminating the promotion of violence or terrorism on our platform."
Daily suspensions of accounts are up more than 80 per cent since last year, and spike in the immediate aftermath of terror attacks, according to Twitter.
Twitter said that it is getting quicker at identifying extremist content and shutting down the accounts involved, resulting in dramatic decreases in the number of followers such accounts attract while their posts are active. The company has also made it tougher for people behind suspended accounts to return to Twitter immediately, and has expanded the teams that review reports of suspected terror content.
Like Twitter, Facebook and YouTube rely heavily on users to point out posts that violate standards or policies. Tech titans have increasingly been experimenting with software to battle extremist propaganda.
"There is no one 'magic algorithm' for identifying terrorist content on the Internet," Twitter said in the post. "But, we continue to utilise other forms of technology, like proprietary spam-fighting tools, to supplement reports from our users and help identify repeat account abuse."
During the past six months, automated tools have helped Twitter identify more than a third of the accounts suspended for promoting terrorism, according to the company.
Twitter said that it collaborates with other online social platforms in the fight against terror content. Since the deadly attacks in Paris and in San Bernardino, California, pressure has been growing on online social networks to stop extremist groups from exploiting their platforms.
A US judge last week tossed out a lawsuit accusing Twitter of abetting terrorism by allowing Islamic State (IS) group propaganda to be broadcast using the messaging platform. District Court Judge William Orrick granted a motion by Twitter to dismiss the case, reasoning that providing a platform for speech is within the law and that the company did not create the content.
The Communications Decency Act (CDA) protects online platforms from being held responsible for what users post. The suit was filed in San Francisco federal court by the families of two government contractors killed late last year while working at a police training center run by the United States in Amman, according to court documents.
A Jordanian police captain studying at the center fatally shot the two men, and IS later claimed the captain was a "lone wolf" working for the group's cause, the judge recounted in his ruling. "As horrific as these deaths were, under the CDA Twitter cannot be treated as a publisher or speaker of ISIS's hateful rhetoric and is not liable under the facts alleged," Orrick said in the decision, using another name for IS.
The suit accused Twitter of providing "material support" by letting accounts spread the message of the extremist group. The judge left open the option of refiling an amended version of the suit.