Algorithms as the problem vs the solution to discrimination in the digital age

We know that algorithms are everywhere, from curating our news and social media feeds, to scoring our job and credit applications. And many have warned that algorithms deepen unfairness. For instance, in 2021, the UK’s Competition and Markets Authority (CMA) released a report breaking down how algorithms may harm consumers and markets, because they can:

  • Personalise prices in ways that target vulnerable consumers.
  • Prefer a platform’s own products (self-preferencing).
  • Facilitate collusion between competitors.
  • Manipulate choice architecture through “dark patterns.”

And because algorithms run at scale and are hard to scrutinise, these harms can spread quickly and go unnoticed.

But that’s not the whole story.

While cleaning my inbox recently, I came across a paper that partly challenges this view: “Discrimination in the Age of Algorithms” by Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan and Cass R. Sunstein, published in the Journal of Legal Analysis.

Kleinberg and colleagues agree with the view expressed in the CMA report that the use of algorithms can disadvantage individuals and communities (e.g., based on gender, race…). However, they also make a compelling counter-argument: algorithmic decision making can actually be fairer than human decision making.

The black box of human decision making

Kleinberg and colleagues’ key claim is that, like algorithms, humans are biased too. See, for instance, the well-known 2011 study of Israeli parole judges, which found that judges were more lenient at the start of a session or after a break, and harsher just before a break or lunch.

The difference between human and algorithmic bias, according to Kleinberg et al., is that algorithms do not have unconscious bias. Where bias is present, it results from one or more of the following:

  • The training data
  • The decision logic
  • The output patterns

In contrast, in humans the bias is often unconscious. Thus, there is no audit trail to assess where the bias comes from and what needs to be corrected: 

“Given the black-box nature of human cognition, even cooperative managers may not be able to explain it. Nor may they be able to articulate what predictor variables for future productivity were used, or why those were chosen over other candidate predictors” (p. 130).

The solution: auditing and transparency

Both the CMA report and Kleinberg et al.’s paper reject the myth of “algorithmic neutrality” and stress that human design choices determine outcomes. Both sources therefore advocate algorithmic auditing, transparency, and explainability as key regulatory tools. Kleinberg et al. propose legal mandates for record-keeping, so that discrimination can be proven and then prevented or corrected. The CMA proposes technical audits, sandboxes, and ongoing monitoring to ensure compliance with competition and consumer law.
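
To make the idea of an audit trail concrete, here is a minimal, hypothetical sketch (mine, not the CMA’s or Kleinberg et al.’s) of the kind of record an algorithmic audit could produce: selection rates by group and each group’s ratio to the highest-rate group, flagged against a threshold loosely modelled on the “four-fifths rule” used in US employment contexts. The function names, the toy data, and the 0.8 threshold are all illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def disparity_report(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (a rough four-fifths-rule style check)."""
    rates = selection_rates(decisions)
    benchmark = max(rates.values())
    return {
        group: {
            "rate": round(rate, 3),
            "ratio_to_highest": round(rate / benchmark, 3),
            "flagged": rate / benchmark < threshold,
        }
        for group, rate in rates.items()
    }

# Hypothetical decision log: (group, was_selected)
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparity_report(log))
```

The specific metric matters less than the broader point: every input, rule, and output of an automated system can be logged and inspected in a way that human deliberation cannot.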

The key point of difference, in my view, is what kind of algorithmic governance we build. If we focus only on harms, we end up pushing for strict limits or bans on algorithmic decision making in sensitive areas. But if we also recognise that algorithms have the potential for transparency, we are encouraged to consider how auditing, explainability standards, and algorithmic due process might turn them into instruments for more transparent decision making.

One example that came to mind of how automation can reduce discrimination is the British Academy’s decision to introduce partial randomisation for its Small Research Grants programme. In stage 1, applications are assessed by human judges. In stage 2, a computer programme randomly selects a number of applications to fund from all those that passed stage 1. This means that every application meeting the quality threshold has an equal chance of being funded, removing subjectivity from the final stage of selection (a rough code sketch of this lottery follows the quote below). Since the introduction of this two-stage process, applications have increased by 70%, and diversity among award holders has risen significantly. As reported in this post for the LSE Impact blog:

“The pool of applicants and of award-holders has become more diverse, with a notable increase in the number of applicants and awards held by researchers of Asian or Asian British background, and a rise in those of Black or Black British background. Institutional diversification has also been notable with awards going to some institutions either not previously supported by the Academy or only rarely supported”.
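
For what it’s worth, the mechanics of that second stage are simple enough to express in a few lines of code. The sketch below is my own illustration, not the British Academy’s actual implementation: the function and parameter names are invented, and I assume only that a fixed number of awards is drawn uniformly at random from the applications that cleared stage 1.

```python
import random

def stage_two_lottery(eligible_ids, budget, seed=None):
    """Randomly select which stage-1-approved applications to fund.

    Every application that met the quality threshold has an equal
    chance of being selected, so no further human judgement enters.
    """
    rng = random.Random(seed)        # a published seed would make the draw verifiable
    eligible = sorted(eligible_ids)  # fix an order so the draw is reproducible
    return set(rng.sample(eligible, min(budget, len(eligible))))

# Hypothetical example: six applications cleared stage 1, three awards available.
passed_stage_one = {"app-01", "app-02", "app-03", "app-04", "app-05", "app-06"}
print(stage_two_lottery(passed_stage_one, budget=3, seed=2024))
```

Publishing the seed (or some equivalent commitment) would, in principle, also let applicants verify that the draw was fair.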

Are you aware of other situations where an automated system improved inclusion?
