The world will never agree on controlling technology to fight terror

Countries have different definitions of what is acceptable and repressive regimes cannot be allowed access to decryption keys, Mark Weston writes

Jun 08, 2017

The government says that evil terrorists have all been radicalised by flourishing terrorist propaganda on the internet and that they plan attacks and learn skills to implement their plans online. It follows, says Theresa May, that the government must immediately move to control these technologies and “safe terrorist spaces”.

But this is knee-jerk rhetoric, and it betrays a fundamental misunderstanding of how the internet works and of how different standards in different parts of the real world interact online.

The prime minister proposes two controls: first, to force the large tech companies such as Google and Facebook to take responsibility for removing content; second, to force the developers of end-to-end encryption apps, which allow unbreakable secure communication and are used by terrorists to spread hate and co-ordinate plans, to give governments a “master decryption” key.

Laudable ambitions, but they are doomed to fail.

Most technology services hosting user-generated content, such as YouTube and Facebook, are US-based. The first amendment to the US constitution enshrines freedom of expression to a far greater extent than in the UK. Simply put, Americans allow people to say things that we do not. Every country has different standards.

Also, the sheer volume of uploaded material means most of it cannot be checked early enough: the resources required would make the services we all take for granted uneconomical to offer.

To make a real impact, there would need to be a cast-iron international agreement among internet service providers, technology companies and governments about what is not acceptable. That is easier to achieve with something like images of child sexual abuse, because everybody agrees on the definition of what is bad.

But what constitutes a real threat to incite terrorism is difficult to police within a first amendment principle of “say what you like as long as you do not cause harm” because no two cultures agree, particularly when the idiosyncrasies of language and culture are considered. For example, in a Facebook post, is “I could murder an Indian” a genuine desire to murder someone of Indian origin or an urge to grab a spicy curry? Is saying “non-believers should be punished” an incitement to terrorism or a political statement of opinion?

Potentially, master decryption keys would allow repressive regimes to access communications between freedom fighters, or allow organised crime to monitor the intelligence services. Keys do find their way to those who should not have them.

Also, new applications are constantly created. Telegram, which Islamic State uses, is based in Russia. What of a new app based in Congo? A locally based service can be used anywhere globally. And would China or North Korea play ball, as they would need to for seamless decryption coverage?

The solution does not lie in controlling the technology. Human answers are needed.

Mark Weston is a partner at the City of London office of the law firm Hill Dickinson

The Brief team

Articles by The Brief's team of reporters and daily guest columnists
