As we enter the fourth industrial revolution, Artificial Intelligence (AI) and Machine Learning (ML) technologies are being used to automate business processes in more and more areas, from calculating optimal transport loads to shortlisting loan applicants without human input. These technologies promise to create business value, for instance, by improving productivity and reducing mistakes. However, these technologies can also destroy business value, as evidenced by the flash crashes in the US stock market caused by automatic trading algorithms, or the death of a pedestrian caused by one of Uber’s self-driving vehicles.
In the paper “Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential”, Fintan Clear and I unpack how the fundamental characteristics of AI systems can destroy business value. The paper was published in Business Horizons earlier this year. A free version is available here.
Characteristic 1: Connectivity
The first characteristic that we looked at is “connectivity”. The various components of an AI solution need to be interconnected in order to work. For instance, an AI algorithm is connected to a data input, such as textual and visual data, or meta-data; and self-driving cars are connected to each other so that when one car makes a mistake, the learning can be quickly shared with the network.
Problems may occur because the business has no control over how external inputs were collected or labelled. Therefore, it may be using data that are corrupted, incomplete or that mean something different from what the data label suggests.
Connectivity also requires the various components to be compatible with one another (e.g., dates need to be entered in the same format across all data sources). However, standardisation reduces AI’s flexibility and limits its contextual richness. Moreover, the need to use compatible programming languages may lead to particular algorithms being chosen for pragmatic reasons, rather than because they are best suited to the specific business problem.
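The date-format problem above can be sketched in a few lines of Python. The formats and values here are hypothetical, chosen only to illustrate why heterogeneous sources must be normalised to one standard before they can be combined:

```python
from datetime import datetime

# Hypothetical example: three data sources record the same date in
# different formats, so inputs must be normalised before merging.
KNOWN_FORMATS = [
    "%d/%m/%Y",   # e.g. UK-style "31/01/2024"
    "%m-%d-%Y",   # e.g. US-style "01-31-2024"
    "%Y%m%d",     # e.g. compact "20240131"
]

def normalise_date(raw: str) -> str:
    """Try each known format and return an ISO 8601 date string."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {raw!r}")

# All three inputs collapse to the same standardised value.
print(normalise_date("31/01/2024"))  # 2024-01-31
print(normalise_date("01-31-2024"))  # 2024-01-31
print(normalise_date("20240131"))    # 2024-01-31
```

Note the trade-off the paper describes: an ambiguous input such as “01/02/2024” is silently resolved by whichever format is tried first, so standardisation can discard the contextual meaning the original source carried.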
Connectivity can also create problems at the level of the AI system’s output. For instance, poor outputs can spread broadly and quickly, increasing the scope and likelihood of mistakes, as in the case of bots that automatically aggregate news feeds’ content, thus spreading unverified information and rumours.
Characteristic 2: Cognitive ability
The second characteristic that we considered is “cognitive ability”, that is, the AI system’s capability to reason and solve problems. For instance, machine learning algorithms can detect patterns in the input data, learn from mistakes, and self-correct. One popular example is AlphaGo Zero, which mastered the board game Go simply by playing against itself over and over again.
AI’s increasing cognitive ability has led to the deployment of applications that collect data, process them, and produce outputs autonomously. However, as evidenced by the Tay chatbot case, allowing the system unfettered access to its input data led to disastrous consequences for Microsoft’s reputation. Not all input data are suitable for learning purposes, especially in the case of unsupervised learning algorithms.
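One basic mitigation for the problem above is to screen user-supplied inputs before they are fed back into a system that learns online. The sketch below is a deliberately minimal, hypothetical illustration (the blocklist terms and messages are placeholders; a production filter would be far richer):

```python
# Hypothetical sketch: screening incoming messages before they are
# used to update a model that learns from user interactions.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real lexicon

def is_suitable_for_training(message: str) -> bool:
    """Reject any message containing a blocklisted word."""
    words = set(message.lower().split())
    return not words & BLOCKLIST

incoming = ["hello there", "you are a slur1", "nice weather"]
training_batch = [m for m in incoming if is_suitable_for_training(m)]
print(training_batch)  # ['hello there', 'nice weather']
```

A simple word filter like this would not have saved Tay on its own, but it illustrates the design point: an autonomous learner needs an explicit gate between raw inputs and the data it learns from.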
AI’s cognitive ability has also led to it being applied in areas that stretch machine learning’s ability to convert complex features or ideas into binary formats. One example of over-simplification is the attempt to use AI to predict a person’s sexual orientation based on facial features. The algorithm uses a binary definition of gender identity and sexual orientation, failing to reflect the variety of ways in which they can be defined, both physiologically and psychologically.
Moreover, AI’s cognitive ability has led to a move away from describing how consumers behave towards predicting, and even trying to influence, that behaviour, as in the case of personalised customer experiences. However, the quality of machine learning predictions is very difficult to assess prior to implementation and scaling, as evidenced by the Apple credit card fiasco. It is also difficult to assess whether the patterns identified through machine learning hold for the population at large, or only for the data set available. Furthermore, machine learning algorithms can produce outputs that are not comprehensible to humans and, therefore, impossible to correct or control. One example was when Facebook’s AI negotiation bots developed their own language, incomprehensible to humans.
Characteristic 3: Imperceptibility
The third characteristic that we investigated was AI’s “imperceptibility”, given that the vast majority of AI applications go unnoticed by users. The subtlety of many AI applications can support the acceptance of this technology, and satisfaction with AI-supported interactions. However, AI’s imperceptibility can also create problems for business and other AI users.
AI’s imperceptibility means that its use may go unchecked and unchallenged. This presents ethical and reputational threats, as data collection expands from explicit interactions between the firm and the customer, to include the customers’ social life, or even their home life, via personal wearables and other internet-enabled devices. Plus, it undermines the principles of choice and informed consent as illustrated by Google Duplex’s AI voice assistant presentation.
In addition, the imperceptibility of AI makes it difficult to assess whether the data needed can be accessed at all, and whether they can be accessed securely. For instance, certain US law-enforcement agencies have been using AI to find criminals in a crowd. However, as the solution was developed by third parties, the agencies do not know what data the AI is using, what weight is given to different features, or what assumptions were made when defining the variables. Firms may also be unable to access and update the underlying model, assumptions and data sources.
Finally, it has been observed that people act differently when they realise that they are interacting with AI. Without knowing whether the observations resulted from interactions with perceptible AI, managers cannot assess how representative of reality the data being modelled are. There may also be fewer opportunities to correct mistakes and biased outcomes. In the case of Apple’s credit card, the discriminatory outcomes were easily detected because they involved males and females in the same family unit. But in other cases, such as HR recruitment or service upgrades, the biased outputs may not be so obvious.
| | Connectivity | Cognitive ability | Imperceptibility |
|---|---|---|---|
| Input data | Use of external data over which the firm has limited quality control | Dataset may be unsuitable for predictive profiling | User unable to provide informed consent; data may not be representative |
| Processing algorithm | Trade-off between standardisation and compatibility vs. fit and flexibility | Formulae oversimplify complex phenomena | No ability to access, assess and update model |
| Output decision | Mistakes and poor outputs can go viral | Difficulty in verifying quality of predictions, or even understanding ML outputs | Impossible to check, challenge or correct outcomes |
In summary, while AI has great business potential, it can also result in cost-ineffective applications, customer frustration, embarrassing situations for the brand, and so on. Our paper illustrates the importance of examining how the characteristics of AI technology may impact on the various elements of an AI solution, so that business problems can be anticipated, and either averted or minimised.