Protest and authoritarianism in the internet age

Riot police officers detain a protester during a demonstration in the Tsim Sha Tsui neighborhood of Hong Kong on Aug. 11, 2019. (Issei Kato/Reuters)


Hong Kong protests have focused attention on a new stage in the battle for the internet age: the use of increasingly advanced technical tools to quash dissent.

The situation in Hong Kong stands in stark contrast to recent protests in Sudan and Kashmir. In those cases, authorities used blunt instruments, cutting off internet access and mobile communication. Hong Kong's authorities have instead turned to the higher-tech tools at their disposal: facial-recognition cameras and stores of personal data available through ID documents and payment cards. Mainland China has even more advanced tools.

Why We Wrote This

The development of increasingly sophisticated high-tech tools that can track critics and stymie protest is raising profound political and ethical questions about technology development.

China has been accused of deploying those tools against Uyghur Muslim residents in Xinjiang. And at least two other authoritarian states, Russia and Iran, are building up domestic internet architectures to bring usage under central control.

For rights groups, a key concern centers on ethical issues. Last week, Hannah Fry, a mathematics researcher at University College London, urged instituting a “Hippocratic oath” for tech developers. She pointed to companies “filled with very young, very inexperienced” tech whizzes building systems to cull and market personal data. “They have never been asked to think about ethics. ... These are the people who are designing the future for all of us.”

It’s a battle for the internet age, with authoritarian governments across the globe moving to tighten control of web content and digital communications in order to stymie and ultimately defeat critics and protest movements.

Yet the continuing protests in Hong Kong have focused attention on a new stage in that campaign, raising political and potentially ethical questions about the use of increasingly advanced technical tools to quash dissent or target individuals and groups deemed to be a threat. One leading British academic has gone so far as to suggest the need for young math-and-technology students to take a “Hippocratic oath” to ensure they’re aware of the wider implications of their work.

The situation in Hong Kong, the former British colony handed back to China in 1997, stands in stark contrast to the Sudanese army’s crackdown on protesters earlier this year, or India’s move this month to end decades of political autonomy in Muslim-majority Kashmir. In those cases, the authorities also moved on the digital front. But they relied on a fairly low-tech approach. They in effect flicked a switch, cutting off all internet access and mobile communication.


For Hong Kong, removing internet and mobile-phone access for the more than 7 million inhabitants of a major international business and finance hub was never going to be a viable option.

But authorities’ efforts to identify, isolate, and arrest protest organizers have drawn attention to the higher-tech tools at their disposal: facial-recognition cameras, the enormous store of personal data available through Hong Kong’s ID documents, and the information captured by the contactless payment cards used for a range of commercial transactions, including public transport. Some protesters are also concerned by the arrest of the coordinator of a group communicating via Telegram, an encrypted messaging program, although it remains unclear whether that arrest was the result of electronic infiltration or of an informer inside the group.

The fear of technological targeting may help explain the original catalyst for the Hong Kong protests: an extradition bill that would have allowed residents to be handed over to mainland China, which has a far more extensive network of facial recognition cameras and other electronic data, and a range of advanced tools to make use of the information.

China has been accused of deploying those tools during its “reeducation” campaign in Xinjiang, which has seen as many as a million Uyghur Muslim residents interned in special camps. The Chinese authorities have also reportedly been installing an application on some local mobile phones and on those of some visitors to Xinjiang that combs the devices for what they deem to be suspicious content.

And at least two other authoritarian states – Russia and Iran – are building up their own domestic internet architectures in a broader move that could bring their citizens’ internet usage under central control.

Discourage, restrict, block

Like the Chinese with their Great Firewall, Russia and particularly Iran already regulate the internet by discouraging, restricting, or blocking access to a range of foreign websites. Their security services also use deep packet inspection technology to identify and frustrate a widespread way of evading such censorship: VPNs, or virtual private networks, which mask the identity and location of users’ computers.

But Russia’s move in recent days to pressure Google over the livestreaming of large-scale street demonstrations in Moscow leaves little doubt about the Kremlin’s desire to deny political protesters access to internet outlets. The Russians have also had decidedly mixed success with a formal ban last year on Telegram: Large numbers still use it, often with the help of VPNs.

That helps underscore the potential significance of a law passed in Russia earlier this year that combined expanded central control of VPN searches and other online data with a drive to set up its own national internet.

Iran, meanwhile, has claimed its equivalent is about 80% complete. Once such domestic systems are operational, they could provide a far more radical option for monitoring and controlling access and content: cutting these networks off altogether from the global internet.

An ethics oath?

The more immediate fear for media-freedom and human rights groups remains precisely what is unsettling demonstrators in Hong Kong: the potential combination of large amounts of personal data, sophisticated artificial intelligence applications, and security services determined to track, identify, and move against those they define as a threat.

Last week, one of Britain’s top mathematics researchers, University College London’s Hannah Fry, sounded a warning.

Pointing out that in democratic countries as well, internet users are ceding enormous amounts of personal data, she cited the example of a genetic testing firm. “We literally hand over our most private data, our DNA,” she told The Guardian newspaper, “but we’re not just consenting for ourselves. We are consenting for our children, and our children’s children.” Her worst-case scenario: governments that, decades from now, might embark on policies of genetic discrimination. “And we are paying to add our DNA to that dataset.”

She pointed to the proliferation of technology companies – “filled with very young, very inexperienced” math and computer-science whizzes – that are constructing systems to cull and market personal data. As one immediate step, she urged a shift in mathematics, technology, and computer education to institute a “Hippocratic oath” like that impressed on medical students. “They have never been asked to think about ethics. ... And ultimately, these are the people who are designing the future for all of us.”
