Critical science’s framework to classify the risks from AI

Artificial Intelligence has great potential, but it also presents many risks, from displacing jobs to making biased decisions. Rather than thinking about these risks separately and reactively, it would be useful to have a framework for identifying them holistically and proactively.

Shakir Mohamed, Marie-Therese Png and William Isaac suggest one such framework, drawing on decolonial theories, which offer a historical lens for studying issues of power and value. The framework is presented in the paper “Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence”, published in the journal Philosophy & Technology.

Mohamed and colleagues argue that AI affects society through the same mechanisms by which colonial powers affected the regions they occupied. Those mechanisms are oppression, exploitation and dispossession. The paper’s ideas are summarised below.

Mechanism: Oppression
  • Description: Subordination of one social group and the privileging of another, maintained by a network of restrictions.
  • Applications: Algorithmic decision systems – e.g., facial recognition in Singapore; predictive policing in India; welfare interventions in New Zealand.

Mechanism: Exploitation
  • Description: Taking advantage of people by unfair or unethical means, for the asymmetrical benefit of industries.
  • Applications: Ghost working – e.g., labelling of input data by humans, including by prisoners and the economically vulnerable. Beta-testing – e.g., fine-tuning of early versions of software systems; testing of predictive systems.

Mechanism: Dispossession
  • Description: Centralisation of power, assets, or rights in the hands of a minority, and the deprivation of power, assets, or rights from a disempowered majority.
  • Applications: National politics and AI governance – e.g., under-representation of geographic areas such as Africa, South and Central America and Central Asia in the AI ethics debate. International social development – e.g., AI for Good and AI for the Sustainable Development Goals.

I think that, when faced with an AI application (e.g., marketing personalisation), this taxonomy can help me think through questions such as the ones below; a rough sketch of turning them into a reusable checklist follows the list:

  • In what ways does this application oppress, exploit or dispossess?
  • Who is being oppressed, exploited or dispossessed? How are their interests being protected?
  • Who gains from the oppression, exploitation or dispossession?
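To make the questions easier to reuse across applications, here is a minimal sketch of how the paper's three mechanisms could be encoded as a simple review checklist. This is my own illustration, not something from the paper: the names Mechanism, FRAMEWORK and review_questions are hypothetical, and the descriptions are abbreviated from the summary above.

```python
from dataclasses import dataclass, field

@dataclass
class Mechanism:
    """One mechanism from the Decolonial AI framework (summarised, not verbatim)."""
    name: str
    description: str
    examples: list[str] = field(default_factory=list)

FRAMEWORK = [
    Mechanism(
        name="oppression",
        description="Subordination of one social group and privileging of another, "
                    "maintained by a network of restrictions.",
        examples=["algorithmic decision systems (facial recognition, predictive "
                  "policing, welfare interventions)"],
    ),
    Mechanism(
        name="exploitation",
        description="Taking advantage of people by unfair or unethical means, "
                    "for the asymmetrical benefit of industries.",
        examples=["ghost working (human labelling of input data)",
                  "beta-testing on vulnerable populations"],
    ),
    Mechanism(
        name="dispossession",
        description="Centralisation of power, assets, or rights in the hands of a "
                    "minority, and their deprivation from a disempowered majority.",
        examples=["under-representation in AI governance",
                  "international social development programmes"],
    ),
]

def review_questions(application: str) -> list[str]:
    """Generate the three review questions for each mechanism, for a given AI application."""
    questions = []
    for m in FRAMEWORK:
        questions.append(f"In what ways does {application} involve {m.name}?")
        questions.append(f"Who is affected by that {m.name}, and how are their interests protected?")
        questions.append(f"Who gains from that {m.name}?")
    return questions

if __name__ == "__main__":
    # Example: walk through the checklist for a marketing personalisation system.
    for q in review_questions("marketing personalisation"):
        print("-", q)
```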

Let me know if you use other holistic ways of thinking about the consequences of deploying AI in a given context.
