Understanding and solving opacity in algorithms

One of the key challenges presented by algorithms is their opacity – that is, the inability to see how an algorithm produced a specific output. For instance, we cannot see how a search engine algorithm ranks content, how a credit rating algorithm ranks the characteristics of potential borrowers, or how a self-driving algorithm ranks external inputs.

Opacity is problematic because without understanding how algorithms produce their outputs, we can’t challenge the results and, therefore, we can’t prevent the harmful outcomes that derive from their use. For instance, we can’t fight filter bubbles, we can’t challenge discrimination, and we can’t make educated decisions about buying a product. However, solving the problem of algorithm opacity is not as simple as encouraging – or demanding – that the corporations running those algorithms share their code.

In the paper “How the machine ‘thinks’: Understanding opacity in machine learning algorithms”, Jenna Burrell argues that the solution to algorithm opacity very much depends on the reason for that opacity; and that code sharing – even if it happened – would only address some of the reasons why it may be difficult to understand how the algorithm produced a specific output.

Looking at the specific case of machine learning algorithms, Burrell describes three types of opacity:

  • Opacity as a form of secrecy – this is an intentional form of opacity, where the firm that uses the algorithm deliberately hides its workings. This form of opacity results from a desire to protect corporate secrets, maintain competitive advantage, or avoid public scrutiny. An example would be YouTube video recommendations.
  • Opacity as technical illiteracy – this is a functional form of opacity, manifested in the inability of those without technical training to read code. It’s like my inability to decipher a text in – say – Greek, because I don’t speak that language. This form of opacity results from the fact that writing and reading code is a specialised activity, for which most of us don’t get any training. An example would be open source algorithms, which are publicly available, but whose operation remains largely incomprehensible to those without appropriate training.
  • Opacity as a characteristic of machine learning algorithms – this is an inherent form of opacity, where even programmers can’t fully understand how the algorithm reads and acts on data. This form of opacity results from the fact that an algorithm doesn’t break down its tasks in a way that is readily intelligible to humans. An example is provided in the image below, which represents the hidden layer of a handwriting recognition algorithm; each box in the image represents a digit. While you and I would read the overall shape of the digit “6”, the machine learning algorithm “only” sees pixels on a black-and-white scale (a minimal code sketch of this gap follows the image).
[Image: the hidden layer of a handwriting recognition algorithm – each box shows a digit as a grid of pixel values]
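To make the gap concrete, here is a minimal sketch – my own illustration, using scikit-learn’s bundled digits dataset rather than the example in the image – of what a human sees versus what the classifier actually receives:

    # A minimal sketch (assuming scikit-learn is installed) of what a learning
    # algorithm "sees" when classifying a handwritten digit: not the shape of
    # the digit, but a flat vector of pixel intensities.
    from sklearn.datasets import load_digits

    digits = load_digits()       # 8x8 greyscale images of handwritten digits
    image = digits.images[0]     # what a human looks at: an 8x8 grid of pixels
    features = digits.data[0]    # what the classifier receives: 64 plain numbers

    print(image)                 # rows of pixel intensities that roughly trace the digit
    print(features)              # the same values flattened into a shapeless vector
    print(digits.target[0])      # the human-assigned label for this sample

The classifier never handles the “shape” of the digit at all; it only ever works on the 64 numbers.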

An additional problem is that:

“With greater computational resources, and many terabytes of data to mine (now often collected opportunistically from the digital traces of users’ activities), the number of possible features to include in a classifier rapidly grows way beyond what can be easily grasped by a reasoning human. In an article on the folk knowledge of applying machine learning, Domingos (2012) notes that ‘intuition fails at high-dimensions.’ In other words, reasoning about, debugging, or improving the algorithm becomes more difficult with more qualities or characteristics provided as inputs, each subtly and imperceptibly shifting the resulting classification.” (page 9).
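To get a feel for how quickly the feature count outruns human intuition, here is a toy sketch – my own, using scikit-learn’s CountVectorizer rather than anything from the paper – of how even a tiny text corpus balloons once word pairs are counted as features:

    # A toy illustration of the dimensionality problem described above: turning
    # documents into word and word-pair features multiplies the number of input
    # dimensions, each of which subtly shifts the resulting classification.
    from sklearn.feature_extraction.text import CountVectorizer

    documents = [
        "please visit our site to claim the funds you want",
        "we want to confirm your visit before the transfer",
        "meeting notes attached, please review before friday",
    ]

    unigrams = CountVectorizer(ngram_range=(1, 1)).fit(documents)
    bigrams = CountVectorizer(ngram_range=(1, 2)).fit(documents)

    print(len(unigrams.vocabulary_))  # features when counting single words only
    print(len(bigrams.vocabulary_))   # roughly double once word pairs are added
    # On real corpora built from millions of user traces, the same expansion
    # produces hundreds of thousands of features - far beyond human intuition.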

An example of this complexity is the AI negotiation bots developed by Facebook, which created their own language, incomprehensible to humans.

Burrell goes on to discuss how each of these three forms of opacity requires a completely different approach to understanding how the algorithm works.

To solve intentional opacity, Burrell proposes: “to make code available for scrutiny, through regulatory means if necessary. […] Such measures could render algorithms ineffective though […] it may still be possible with the use of an independent, ‘trusted auditor’ who can maintain secrecy while serving the public interest.” (page 4).

To solve functional opacity, Burrell suggests: “Writing for the computational device demands a special exactness, formality, and completeness that communication via human languages does not. The art and ‘craft’ of programming is partly about managing this mediating role and entails some well-known ‘best practices’ like choosing sensible variable names, including ‘comments’ (one-sided communication to human programmers omitted when the code is compiled for the machine), and choosing the simpler code formulation, all things being equal.” (page 4).
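A small, made-up illustration of those practices – the two functions below are computationally identical, but only the second communicates to the human reader as well as to the machine:

    # Two versions of the same calculation. The first is perfectly clear to the
    # machine; the second is equally clear to the machine and far clearer to us.

    def f(a, b, c):
        return a * b * (1 + c)

    def order_total(unit_price, quantity, tax_rate):
        """Total cost of an order line, including sales tax."""
        subtotal = unit_price * quantity    # cost before tax
        return subtotal * (1 + tax_rate)    # add tax as a proportion of the subtotal

    print(f(10.0, 3, 0.2))            # 36.0 - but what do a, b and c mean?
    print(order_total(10.0, 3, 0.2))  # 36.0 - same result, legible intent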

She also highlights the importance of “developing ‘computational thinking’ at all levels of education […] widespread educational efforts would ideally make the public more knowledgeable about these mechanisms that impact their life opportunities and put them in a better position to directly evaluate and critique them.” (page 9).

The third form of opacity – the one that is inherent to machine learning – is harder to solve. Burrell says that it would take a team of code auditors many, many hours of work to untangle the logic of a relatively simple machine learning algorithm. And that, even then, it may not be possible to fully understand “why” the algorithm behaved in a particular way. She illustrates this difficulty by examining a machine learning algorithm that sorts through e-mails to identify possible Nigerian 419-style scams, and which ended up attributing a relatively high weight to the words “visit” and “want”.
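A toy reconstruction – emphatically not Burrell’s actual model or data – makes it easy to see where such weights come from, and how little they explain:

    # Train a simple linear classifier on a handful of invented e-mails and
    # inspect the weights it assigns to individual words. Even in this toy
    # setting, the weights invite the same "why?" that Burrell asks.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    emails = [
        "dear friend i want to transfer funds please visit our bank contact",
        "urgent business proposal we want your account to receive the sum",
        "hi team the quarterly report is attached see you at the meeting",
        "lunch tomorrow? also can you review my slides before the call",
    ]
    labels = [1, 1, 0, 0]  # 1 = scam, 0 = legitimate (invented for illustration)

    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(emails)
    model = LogisticRegression().fit(features, labels)

    # Pair each word with its learned weight and list the strongest scam signals.
    weights = sorted(zip(model.coef_[0], vectorizer.get_feature_names_out()), reverse=True)
    for weight, word in weights[:5]:
        print(f"{word}: {weight:.3f}")

The weights tell us which words the model leans on, but not why those words – rather than anything a human would recognise as the “logic” of a scam – ended up mattering.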

While there are ways of simplifying machine learning models (e.g., feature extraction), and metrics for evaluating algorithm-induced discrimination, ultimately the exponential increase in the number of features, and the resulting complexity of the algorithms, may render them impossible to unpack. In which case, Burrell argues, we might need to “avoid using machine learning algorithms in certain critical domains of application”.
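One such metric, as an illustration – my own sketch, not something from Burrell’s paper – is a demographic parity check, which compares the rate of positive decisions a model makes for two groups:

    # A hypothetical set of model decisions for applicants from two groups.
    predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]  # 1 = approved, 0 = rejected
    groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    def positive_rate(group):
        decisions = [p for p, g in zip(predictions, groups) if g == group]
        return sum(decisions) / len(decisions)

    gap = abs(positive_rate("a") - positive_rate("b"))
    print(f"demographic parity gap: {gap:.2f}")  # 0.60 vs 0.20 -> gap of 0.40
    # A large gap flags possible discrimination, but it says nothing about why
    # the model treats the groups differently - which is precisely the problem
    # with opaque models.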

I highlight one such domain in my paper looking at the potential and limitations of using machine learning to detect money laundering. I note that, in the UK, “by law, financial service providers must always be able to prove that the technologies that they use do not unfairly discriminate against certain customers.” Thus, I conclude that while this industry “seems ripe for machine learning deployment, and some industry players are investing in this technology”, the opacity of machine learning algorithms is a significant barrier to the widespread use of this technology in financial services.

What other domains should avoid machine learning algorithms?
