I have been reading a lot about artificial intelligence (AI) lately, and reflecting on its implications for how organisations interact with their customers. I have noticed that AI is often equated with the algorithm that underpins it (which, in turn, is usually assumed to be a machine learning algorithm). That makes sense, because algorithms are fundamental to how AI works, and to whether it succeeds… or not.

Remember when I asked Google Home to play the national anthem, and got “The Star-Spangled Banner” instead of “God Save the Queen”? That was an algorithmic failure. Be it through biases in the training dataset or biases among the coders programming it, the algorithm came to interpret a request to “play the national anthem” as a request to play the US anthem.
Likewise, the fatal accident when one of Uber’s self-driving vehicles killed a pedestrian occurred because the algorithm wasn’t trained to look for pedestrians outside of crosswalks.
However, not all AI problems result from algorithmic failure. Take Tay, the AI chatbot designed by Microsoft to “experiment with and conduct research on conversational understanding” on Twitter. The chatbot was launched on March 24th 2016, but had to be put on hold 16 hours later, when it started producing obscene content. The problem there was not the algorithm, which was designed to learn from its conversational partners and to adjust its conversational style and topics accordingly. And that is exactly what it did. The problem was that various users were purposefully engaging in hateful conversations with the bot. In other words, they were knowingly providing harmful input which, given the nature of the algorithm (i.e., machine learning), produced a harmful outcome.
Another example would be an AI drawing on inputs from different databases, some of which record dates in the UK format (i.e., dd/mm/yy) and others in the US format (i.e., mm/dd/yy). Unless the algorithm can distinguish which database uses which date format, it is very likely to make mistakes in date-sensitive decisions, such as calculating someone’s age or the amount of interest accrued or owed, because of problems related to the input, not the algorithm.
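To make the ambiguity concrete, here is a minimal Python sketch; the date string and the two candidate formats are chosen purely for illustration:

```python
from datetime import datetime

raw = "03/04/2021"  # ambiguous: 3 April or 4 March?

uk_reading = datetime.strptime(raw, "%d/%m/%Y")  # UK format -> 3 April 2021
us_reading = datetime.strptime(raw, "%m/%d/%Y")  # US format -> 4 March 2021

# The two readings are a month apart -- enough to distort an age
# calculation or an interest computation downstream.
print((uk_reading - us_reading).days)  # 30
```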
Problems can also occur because of the type of output produced by the AI. For instance, AI-powered chatbots may be cost-effective ways of providing 24-hour customer service. However, they can also create confusion and frustration (especially voice-activated ones), and introduce unnecessary delays.
Moreover, AIs that autonomously aggregate and disseminate news can spread unverified information and rumours, such as false cures or protections against the coronavirus. Autonomous content aggregation and decision making were also at the heart of the 2010 flash crash in the US stock market. Finally, AI can also be a source, or an intensifier, of social discrimination by virtue of the type of output produced – for instance, if it privileges those who have access to a smartphone, or even to a specific app. And, as evidenced by the Tay case, if left unchecked, AI can also amplify misogyny and other social problems.

For all of these reasons, I advocate that, when considering the benefits and pitfalls of adopting AI, we need to think of AI as an assemblage of technological components rather than a single technology. AI is a technological solution composed of elements that collect inputs, process them, and produce outputs in ways that simulate human intelligence.
The first component of the AI assemblage is the input dataset. Data are so integral to the functioning of AI that, without them, AI has been described as mathematical fiction. One common type of data used is historical data. For instance, Fraugster uses transaction data, such as billing vs shipping address and the type of IP connection used, to detect payment fraud. AI solutions such as chatbots, retail beacons or recommendation systems use data collected in real time, via physical sensors or by tracking online activity. Other AI solutions can also tap into the firm’s knowledge databases, for example records of whether previous product recommendations were accepted or rejected.
The second key component is the processing algorithm, i.e., the computational procedure that processes the data inputs. Often these are machine learning algorithms, but that is not necessarily the case: an AI solution can also use if-then rules written entirely by coders. Machine learning algorithms, in turn, can use supervised learning, unsupervised learning or reinforcement learning.
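As a rough illustration of the difference, here is a Python sketch contrasting a hand-written if-then rule with a supervised machine learning model; the feature names, thresholds and toy data are invented for illustration, not taken from any real fraud-detection system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# If-then approach: every condition is written explicitly by a coder.
def rule_based_fraud_flag(amount, billing_matches_shipping):
    return amount > 1000 and not billing_matches_shipping

# Supervised machine learning approach: the decision boundary is
# learned from labelled historical transactions (toy data below).
X = np.array([[50, 1], [1200, 0], [300, 1], [2500, 0]])  # [amount, addresses match?]
y = np.array([0, 1, 0, 1])                               # 0 = legitimate, 1 = fraud
model = LogisticRegression().fit(X, y)

print(rule_based_fraud_flag(1500, False))  # True: the hand-coded rule fires
print(model.predict([[1500, 0]]))          # prediction learned from the examples
```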
The third key AI component is the output decision resulting from the algorithmic processing. At the lower end of the spectrum, AI may produce a single result, for instance a deception score, which has no performative value until an analyst decides to act on it. Alternatively, the system may produce a selection of results for further action by human analysts, such as flagging content for the attention of moderators on online platforms. Finally, some AI systems have the autonomy to act on the results of their analysis; for instance, a self-driving car can drive, steer or brake without human intervention.
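That spectrum can be sketched, loosely, as three handling modes for the same output; the function name and threshold values below are hypothetical and purely illustrative:

```python
def handle_deception_score(score, autonomy="score_only"):
    """Illustrative handling of one AI output at three levels of autonomy."""
    if autonomy == "score_only":
        # Lower end: report the score; a human analyst decides whether to act.
        return {"score": score}
    if autonomy == "flag_for_review":
        # Middle: shortlist suspicious cases for human moderators.
        return {"score": score, "flagged": score > 0.7}
    if autonomy == "autonomous":
        # Upper end: the system acts on its own analysis.
        return {"score": score, "action": "block" if score > 0.9 else "allow"}
    raise ValueError(f"Unknown autonomy level: {autonomy}")

print(handle_deception_score(0.85, autonomy="flag_for_review"))
```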
[Figure: Key components of an AI solution]
Opportunities – as well as problems – can arise from any of these elements. So, when planning to deploy AI technology, make sure to consider what value can be destroyed at each level, and how. I examine this phenomenon in the paper “Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential”, co-authored with my colleague Fintan Clear, and published in Business Horizons.