In a recent interview on Lex Fridman’s podcast, Meta’s CEO, Mark Zuckerberg, was asked whether he worried about the existential threat posed by the rapid development of Artificial Intelligence (AI) systems. He replied:
“My own view is that, where we really need to be careful is on the development of autonomy, and how you think about that… I mean, it could be simple computer code, that is not particularly intelligent, but just spreads itself and does a lot of harm…
I just think these are somewhat separate things, and a lot of what I think we need to develop, when people talk about safety and responsibility, is really the governance on the autonomy that can be given to systems…
Building intelligent systems can create a huge advance in terms of people’s quality of life and productivity and growth in the economy. But it’s the autonomy part of this that I think we really need to make progress on. How to govern these things, responsibly, before we build the capacity for them to make a lot of decisions on their own, or give them goals, or things like that. I do think that, to some degree, [intelligence and autonomy] are somewhat separable things.”
I can’t comment on whether AI represents an existential risk in the near future. However, I find myself agreeing with Zuckerberg about the risks of autonomous AI, for what may be only the second time ever. The first was when a woman said that she advised her granddaughters to marry the nerd, and he replied that she should be advising them to be the nerd instead.
AI and Machine Learning (ML) technologies are being used to automate business processes in more and more areas without human input. They promise to be more cost-effective than humans, but they can also be problematic. For instance, automatic trading algorithms have triggered flash crashes in the US stock market, and one of Uber’s self-driving vehicles hit and killed a pedestrian.
In the paper “Artificial Intelligence and Machine Learning as business tools: factors influencing value creation and value destruction”, Fintan Clear and I analysed how three characteristics of ML specifically (a subset of AI technology) can create problems when the technology is allowed to act autonomously.
The first such characteristic is connectivity between the various AI components. For instance, self-driving cars are connected to each other, so that when one car makes a mistake, the lesson can be quickly shared across the network. AI can also connect to external data sources, such as search engines or social media, to draw on textual, visual, metadata and other types of external data.
This means that autonomous AI systems can spread poor outputs broadly and quickly, increasing the scope and likelihood of mistakes. For example, bots that automatically aggregate content from news feeds can spread unverified information and rumours.
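To make that mechanism concrete, here is a minimal sketch of such an aggregator bot (the feeds and functions are hypothetical stand-ins, not any real bot’s code). Note that nothing in the loop verifies a story before it is rebroadcast:

```python
# Hypothetical sketch: an autonomous aggregator with no verification step.
SOURCE_FEEDS = {
    "feed_a": ["Earthquake hits city X", "CEO of Y resigns (unconfirmed)"],
    "feed_b": ["CEO of Y resigns (unconfirmed)"],  # the same rumour, reshared
}

seen = set()

def fetch(feed_name):
    """Stand-in for an RSS/API call to an external source."""
    return SOURCE_FEEDS[feed_name]

def publish(item):
    """Stand-in for posting to the bot's own, widely followed feed."""
    print(f"[bot] {item}")

# Core loop: pull from every connected source and republish anything new.
# Nothing here checks whether a story is true, so a rumour planted in one
# upstream feed reaches the bot's whole audience in a single pass.
for name in SOURCE_FEEDS:
    for item in fetch(name):
        if item not in seen:
            seen.add(item)
            publish(item)
```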
The second characteristic is cognitive ability. ML detects patterns in the input data, learns from mistakes and self-corrects. For instance, AlphaGo Zero mastered the board game Go simply by playing against itself over and over again.
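As a rough illustration of the self-play idea (a toy, tabular sketch of my own, not AlphaGo Zero’s actual algorithm), consider an agent that learns a trivial counting game with no human examples at all:

```python
import random

# Toy game: players alternately remove 1 or 2 stones from a pile of 10;
# whoever takes the last stone wins. Under perfect play, positions that
# are multiples of 3 are lost for the player to move.
PILE = 10
value = {}  # stones left (on the mover's turn) -> estimated chance of winning

def play_one_game(epsilon=0.2, alpha=0.1):
    states = []  # the position each mover faced, in order of play
    stones = PILE
    while stones > 0:
        moves = [m for m in (1, 2) if m <= stones]
        if random.random() < epsilon:  # explore occasionally
            move = random.choice(moves)
        else:  # otherwise leave the opponent the worst-looking position
            move = min(moves, key=lambda m: value.get(stones - m, 0.5))
        states.append(stones)
        stones -= move
    # The player who just moved took the last stone and won; walk back
    # through the game, crediting alternate plies with the win/loss.
    result = 1.0
    for state in reversed(states):
        old = value.get(state, 0.5)
        value[state] = old + alpha * (result - old)
        result = 1.0 - result

for _ in range(20000):
    play_one_game()

# Multiples of 3 should end up with values well below 0.5.
print({s: round(v, 2) for s, v in sorted(value.items())})
```

The point is not the game but the loop: no human supplies training examples; the system generates its own data and gradually corrects its own estimates.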
This cognitive ability means that autonomous AI systems can produce outputs that are not comprehensible to humans and, therefore, are impossible to correct or control. For example, ML may analyse complex features (e.g., X-ray images) and produce a binary classification (e.g., cancer vs. not cancer) whose reasoning humans are unable to follow; similarly, Facebook’s AI negotiation bots developed their own language, incomprehensible to humans.
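Here is a small sketch of that opacity problem (synthetic data standing in for something like image features; this is not the medical pipeline itself). The model emits a confident yes/no label, but its only “reasoning” is a few thousand numeric weights:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for complex inputs (e.g., features extracted from scans).
X, y = make_classification(n_samples=500, n_features=64, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                      random_state=0).fit(X, y)

label = model.predict(X[:1])[0]                # 0 or 1 -- the binary verdict
confidence = model.predict_proba(X[:1])[0, 1]  # a probability, but no rationale
print(f"label={label}, p={confidence:.2f}")

# The only 'explanation' the model can offer is its raw parameters:
n_weights = sum(w.size for w in model.coefs_)
print(f"{n_weights} learned weights, none individually meaningful to a human")
```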
Finally, there is the imperceptibility of AI use. The vast majority of AI applications go unnoticed by users, as exemplified by Google’s demonstration of Duplex, its AI voice assistant.
This means that AI use may go unchecked and unchallenged, with fewer opportunities to correct mistakes and biases – for instance, when it produces unfair results. Imperceptibility also presents ethical and reputational threats, and undermines the principles of choice and informed consent, as data collection expands from explicit interactions between the firm and the customer to include the customer’s social life, or even their home life, via personal wearables and other internet-enabled devices.
The table below, adapted from our paper, summarises the risks presented by the connectivity, cognitive ability and imperceptibility of AI – see the last row for the effects when AI is allowed to act autonomously.
| Component | Connectivity | Cognitive ability | Imperceptibility |
| --- | --- | --- | --- |
| Input data | Use of external data over which the firm has limited quality control | Dataset may be unsuitable for predictive profiling | User unable to provide informed consent; data may not be representative |
| Processing algorithm | Trade-off between standardisation and compatibility vs. fit and flexibility | Formulae oversimplify complex phenomena | No ability to access, assess and update the model |
| Output decision | Mistakes and poor outputs can go viral | Difficulty in verifying the quality of predictions, or even understanding ML outputs | Impossible to check, challenge or correct outcomes |
When assessing how these characteristics can create or destroy value for businesses, Clear and I advise considering the following:
The main yardstick for assessing the performance of AI and ML in business settings is cost-efficiency. AI solutions are said to be cheaper, faster and less prone to mistakes than humans, particularly when applied to mechanical and analytical tasks. However, AI and, particularly, ML are also valued for their ability to produce novel outcomes, such as finding previously unknown patterns in the available datasets, or new ways of solving a problem. In addition, the connectivity aspect of AI and ML enables complementarity among different nodes in a network, such as individual vehicles in a self-driving fleet.
However, if the cost of achieving these benefits is defined too narrowly, it may underestimate costs such as reputational damage. Cost calculations may also fail to account for trade-offs such as calculation speed vs. degree of confidence in the results, or accuracy vs. interpretability of the algorithms. Trade-offs can also occur over time: for instance, accuracy can be increased if the business is prepared to tolerate mistakes in the short term, or to invest in quality checkers to train the ML. Another issue to consider is the business’s starting point: analytical capabilities and big-data handling skills vary significantly across firms, which means that different firms will face different hurdles when deploying AI and ML.
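The accuracy-vs-interpretability trade-off, in particular, is easy to demonstrate. The sketch below (synthetic data; the numbers are illustrative only) pits a small decision tree, whose entire logic can be printed and audited, against a random forest that typically scores higher but cannot be read by a human:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable: a depth-2 tree whose full decision logic fits on a page.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
print(export_text(tree))                      # human-readable if/else rules
print("tree accuracy:  ", tree.score(X_te, y_te))

# More accurate, but opaque: an ensemble of 300 unconstrained trees.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("forest accuracy:", forest.score(X_te, y_te))
```

Which point on that spectrum a firm should choose depends on the stakes: a model that cannot be audited is a poor fit for decisions that may later need to be checked, challenged or corrected.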
You can hear the full interview here, or watch it here. The discussion about AI intelligence vs. autonomy, and existential risk, starts at 2:10:50.

