Electronic surveillance: Minority Report minus the precogs

Details are emerging about the US National Security Agency's (NSA) secret surveillance programme to collect and analyse data from computer and telephone networks. It is hardly a surprise that the US government has a surveillance programme. Rather, it is the secretiveness, the extent and the intrusiveness of the programme that are causing shockwaves.

[Image: Adams cartoon, The Telegraph, 12 June 2013]

The monitoring of suspicious individuals is not a new phenomenon, of course. Fingerprinting, phone tapping and DNA profiling – to name a few – have long been used in surveillance programmes. Thus, it could be argued that electronic monitoring is a natural evolution for homeland security. At the end of the day, digital data is already used extensively by private and public organisations alike to inform the simplest of management decisions – for instance, this report outlines several formal initiatives to use everyday data to support health management, engineering initiatives and climate research, among others.

[Image: CCTV]

But the recent revelations are about much more than updating surveillance programmes. The details of the NSA's global surveillance programme reveal the incredible intensity and reach of present monitoring activities: a stage where any citizen is a subject of interest. There is another crucial difference between traditional means of surveillance and those used in PRISM and similar programmes: fingerprinting and other traditional surveillance methods are focused on past crimes – they aim to prove what happened, who did it, and who or what enabled it. PRISM and similar electronic surveillance programmes, on the contrary, are focused on pre-empting crime. It's about identifying would-be terrorists and anticipating threats.

How is this done?

  • In its simplest form, the observed pattern is compared with the ‘norm’ in order to identify outliers. If the behaviour does not fit the expected pattern – e.g., not keeping your family savings in a bank account – it comes under scrutiny (see the illustrative sketch after this list). This immediately puts minorities and deviant groups at a heightened likelihood of coming to the attention of the authorities.
  • Alternatively, the observations are compared with the known patterns of behaviour of criminals. The problem here is that terrorist behaviour is not only relatively rare but also highly variable, so there isn’t enough reliable, stable evidence on which to build the profiles. Louise Amoore explains in an interview with the BBC:

For example, post-Boston there may be more attention in the US to travel to particular parts of the world, perhaps including Chechnya and Dagestan. We could imagine, post-Woolwich, that there might be greater attention in the refining of algorithms to think about patterns of travel and links to deportation (…) But of course, it’s using data from past events. Our research is suggesting that the tuning of the algorithm reflects almost always past events.

  • Lastly, there is a growing trend to use futuristic scenarios and to work back from the scenario to the actions that would enable it and, from there, to the behaviours that ought to be monitored in the present. Researchers Alexandra Hall and Jonathan Mendel describe one such programme in the paper “Threatprints, threads and triggers: Imaginaries of risk in the ‘war on terror’”, published in the Journal of Cultural Economy (volume 5, issue 1, pages 9–27). They write:

The cutting edge of data analytics is a move towards ‘threat blueprints’ – or threatprints. (…) The threatprint is not a pattern generated via profiles of established criminal or terrorist activity. The threatprint does not attempt to foil a threat already known, or extrapolate forward by applying knowledge of past experience. The threatprint approach directly confronts the uncertainty of the future by projecting a range of future scenarios from which it is possible to look back and ask: What should we have seen in the data? How did the digital clues appear? Whose ‘footprint’ was suspicious? Who should we have searched? As an analytical approach, the threatprint takes anomalous data deviations and then moves one step ahead to hypothesise and envisage possible future events that have yet to happen. The threatprint directs the search for evidence as if the future threat event had already happened (…) It is the proliferation of endless potential future scenarios that may or may not be probable, and which may or may not come to fruition (…) The risk here is that these programmes lead on to ‘actions against projected futures and the creation of targetable populations that have yet to fully emerge’.
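To make the first of these approaches concrete, here is a minimal sketch, in Python, of norm-based outlier detection. The feature, the names, the figures and the two-standard-deviation threshold are all hypothetical, chosen only to illustrate the idea; real programmes combine far richer data and more sophisticated models.

```python
# Illustrative sketch only: flag individuals whose behaviour deviates from the
# population 'norm'. All names, figures and the threshold are hypothetical.

from statistics import mean, stdev

# Hypothetical feature: fraction of household savings held in a bank account.
savings_in_bank = {
    "person_a": 0.92,
    "person_b": 0.88,
    "person_c": 0.95,
    "person_d": 0.90,
    "person_e": 0.87,
    "person_f": 0.93,
    "person_g": 0.91,
    "person_h": 0.05,  # keeps savings in cash rather than in a bank account
}

mu = mean(savings_in_bank.values())
sigma = stdev(savings_in_bank.values())

# Anyone more than two standard deviations from the mean is flagged for scrutiny.
THRESHOLD = 2.0
flagged = {
    name: round((value - mu) / sigma, 2)
    for name, value in savings_in_bank.items()
    if abs(value - mu) > THRESHOLD * sigma
}

print(flagged)  # {'person_h': -2.47}: distance from the majority, not evidence of guilt
```

Note that the only thing such a flag encodes is distance from the majority's behaviour, which is precisely why this kind of screening disproportionately singles out minorities and unconventional lifestyles.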

It’s life imitating fiction – the film Minority Report, minus the precogs.

And before you say ‘nothing to hide, nothing to fear’, consider that, under US legislation, anyone deemed to have assisted terrorists – directly or indirectly, knowingly or unknowingly – can be pursued by law enforcement. Add to that the fact that individuals can be blacklisted and their assets frozen as preventive measures, without the need for a trial, and that the evidence supporting such interventions is often highly classified (and therefore impossible to assess for quality)… and you can see how easily you could find yourself drawn into a nightmare, not because you did something wrong, but because your electronic behaviour fell into the gaps of a highly fallible system. And that is why we need transparency about government surveillance programmes.
