I read two things recently that made me think about the New Media & Society paper by Draper and Turow, entitled ‘The corporate cultivation of digital resignation’. In this 2019 paper, Nora A. Draper and Joseph Turow examine the role of companies in sustaining, or at least amplifying, the phenomenon of digital resignation.
Digital resignation describes the situation in which people continue using digital services even though they are aware that the companies behind those services use their data in ways that infringe on their privacy (namely, through extensive surveillance), manipulate them (e.g., profiling for political advertising), or reduce their overall wellbeing (e.g., through social engineering and the exploitation of our psychological vulnerabilities).
The literature on what is usually referred to as the “privacy paradox” explores the psychological processes that explain this behaviour. In rough terms, this literature argues that people continue using those services because they don’t really understand the problems (for instance, how much a company actually knows about them). Alternatively, they continue to use those services because they overestimate the benefits (e.g., fear of missing out) and underestimate the costs of using them.
However, Draper and Turow cite numerous studies showing that people are, indeed, aware of those costs, and that they are also aware of the limited benefits (the so-called “Tradeoff Fallacy”). The problem, the authors argue, is that people feel powerless in the face of those surveillance practices, and resign themselves to accepting this exploitation as the price to pay to be part of the digital economy. Draper and Turow go on to examine how companies create this sense of powerlessness among the users of their digital services, by using the following tactics:
- Placation – Efforts to falsely appease concerns. For instance, Facebook took some steps in the wake of the Cambridge Analytica scandal to create the impression that it was being transparent about how it shares user data with third parties;
- Diversion – Efforts to shift individuals’ focus away from controversial practices. For instance, the ability to check, download and amend (some) personal data that Facebook has on us, creates the perception of ownership and control of those data;
- Jargon – Use of terminology that is difficult to understand. For instance, privacy policies are notoriously long and difficult to read;
- Misnaming – Use of misleading labels to obscure business practices. For instance, it has been shown that calling these contracts “privacy policies” gives users the illusion that their rights are protected, whereas, in reality, they only state all the ways in which the companies give themselves the right to use our data, without any protection for users whatsoever.
The two articles that I read recently echo this theme of resignation; though they seem to use another tactic – I am not sure what to call it. The “you have no choice” tactic? The “the cure is worse than the disease” tactic?
Let me tell you about the articles.
The first one was an analysis of the EU’s draft AI regulations. Alongside the merits of the proposed regulations in terms of curbing uses of AI that could be deemed invasive and unethical, the article reported that some voices had expressed concerns that the proposed regulation would stifle AI innovation in Europe. The implicit reasoning is that AI is a fact of life, and that if we / Europe don’t use it, then companies from other nations (namely, China), with no qualms about mass surveillance or the abuse of personal data, will occupy this space. I.e., you have no choice about whether surveillance happens; just about who gets to do it.
The second one was the news, this morning, that the head of GCHQ had warned that a lack of investment in the technologies that enable smart cities would leave the country vulnerable in terms of security and defence policy. He did recognise the privacy and anonymity risks presented by a network of sensors and cameras around the country. Though, again, the choice is not whether to install such technology, but rather who controls it and, hence, the personal data generated in the process of using it.
As I mentioned, these two articles made me think about that paper. They both speak to digital resignation, and they both rest on the premise that we can’t choose whether mass surveillance happens, only who gets to do the spying.
Is that a reasonable assumption? What should we name this tactic?