New Delhi, India
In the wake of the recent terror attack in London on Saturday night, British Prime Minister Theresa May issued a statement outlining measures she believes are needed to counter terrorism. Emphasising the need to overhaul strategies to combat extremism, May held tech companies partly responsible for providing extremists a “safe space” to grow.
“We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet, and the big companies that provide internet-based services provide,” said May in a statement on the London Bridge terror attack.
It is true that extremist groups use the web to recruit members, spread hate and extremist propaganda, send coded messages and coordinate attacks. But it would not be right to condemn internet companies wholesale, as these companies appear to be doing a fair job of taking down content and accounts that are involved in terrorist activities or encourage support for terrorism.
Facebook, for instance, has been using a mix of tech and humans to prevent the spread of extremist content on the website. "Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it,” said Simon Milner, director of policy at Facebook in an emailed statement.
“And if we become aware of an emergency involving imminent harm to someone's safety, we notify law enforcement,” added Milner.
Twitter, for its part, suspended a total of 3,76,890 accounts for promoting terrorism between July and December 2016. According to the company, three-fourths of the accounts taken down were discovered with the help of Twitter's internal, proprietary spam-fighting tools, while only two per cent were suspended in that period following government requests.
Beyond Facebook and Twitter, Google has also been playing an active role in fighting terrorism across its platforms. YouTube, for instance, prohibits hateful content and removes any content that aims to incite violence; its software then prevents the same video from being reposted. In 2015 alone, YouTube removed 92 million videos for policy violations through a mix of user flagging and its own spam-detection technology, though videos taken down for terrorism or hate-speech violations accounted for only about 1 per cent of those removals.
In fact, companies including Facebook, Twitter, Microsoft and YouTube joined hands last year to help curb extremist content online. They teamed up to create a shared industry database of unique digital fingerprints for images and videos that support extremist organisations. These digital fingerprints help the companies identify extremist content so that it can be removed quickly.
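To illustrate how such a shared fingerprint database works in principle, here is a minimal sketch. The real industry system uses proprietary perceptual-hashing technology rather than plain cryptographic hashes, and all names in this snippet are hypothetical, not the companies' actual APIs:

```python
import hashlib

# Hypothetical shared database of fingerprints contributed by member companies.
shared_hash_database = set()

def fingerprint(media_bytes: bytes) -> str:
    """Compute a digital fingerprint for an image/video payload.

    Sketch only: the industry database relies on perceptual hashes that
    survive re-encoding, not an exact SHA-256 digest as used here.
    """
    return hashlib.sha256(media_bytes).hexdigest()

def flag_known_extremist_content(media_bytes: bytes) -> bool:
    """Return True if this upload matches a fingerprint already shared."""
    return fingerprint(media_bytes) in shared_hash_database

# One company removes a piece of content and shares its fingerprint...
shared_hash_database.add(fingerprint(b"example-removed-video-bytes"))

# ...so another company can detect a re-upload of the same file instantly,
# without the original media ever being exchanged between them.
print(flag_known_extremist_content(b"example-removed-video-bytes"))  # True
print(flag_known_extremist_content(b"some-unrelated-upload"))        # False
```

The key design point is that only fingerprints are shared, not the media itself, so each company can match re-uploads against the pooled database while deciding independently how to act on a match.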
Cyberspace regulation
Not only did May hold tech companies responsible for growing extremism online, she also expressed the need “to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning." Critics, however, believe May’s call to regulate the internet and encryption could aggravate the problem further.
Explaining why cyberspace regulation would be the wrong move, privacy advocacy organisation Open Rights Group said, “This could be a very risky approach. If successful, Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe.”
“While governments and companies should take sensible measures to stop abuse, attempts to control the Internet is not the simple solution that Theresa May is claiming,” the British non-profit added.
The government, on the other hand, views encrypted data as a threat to national security. It has long argued that companies should provide investigators with backdoor access to encrypted information. But privacy advocates contend that forcing tech companies to share encrypted data would not protect national security.
Weakening encryption would not guarantee people’s safety, says cyber security expert Richard Forno. Terrorists could simply develop their own encrypted channels instead, he suggested. "The bad guys are not constrained by the law. That's why they're bad guys,” he added.
Moreover, building backdoors into encryption could make data, including personal and confidential details, more vulnerable to hacking.
Tech companies like Facebook and Google are clearly making efforts to prevent the spread of extremist content without compromising user privacy. But the bigger question is: is it solely the tech companies' job to police the internet?
Will internet censorship help curb terrorism? We don’t believe so.
(WION)