Algorithms are not all-powerful, autonomous entities

Facebook contractors working on content moderation are, reportedly, being forced back into the office because Facebook’s attempt to use Artificial Intelligence for this difficult task failed. An open letter from those contractors states:

Facebook tried using ‘AI’ to moderate content—and failed.

At the start of the pandemic, both full-time Facebook staff and content moderators worked from home. To cover the pressing need to moderate the masses of violence, hate, terrorism, child abuse, and other horrors that we fight for you every day, you sought to substitute our work with the work of a machine. 

Without informing the public, Facebook undertook a massive live experiment in heavily automated content moderation. Management told moderators that we should no longer see certain varieties of toxic content coming up in the review tool from which we work— such as graphic violence or child abuse, for example. 

The AI wasn’t up to the job. Important speech got swept into the maw of the Facebook filter—and risky content, like self-harm, stayed up. 

The lesson is clear. Facebook’s algorithms are years away from achieving the necessary level of sophistication to moderate content automatically. They may never get there.

This letter reminded me of a book chapter co-authored by my colleague Ace Simpson. The chapter is entitled “Artificial Intelligence and the Future of Practical Wisdom in Business” and was published in the “Handbook of Practical Wisdom in Business and Management”. In the chapter, Ace and his co-authors note that:

(B)oth algorithms and robots tend to be rhetorically misrepresented as powerful, autonomous agencies (…) (This) misrepresentation … becomes a device for reducing corporate responsibility for the consequences of organizational actions. (…) Designing (and using) an algorithm implies a process of delegating decisional responsibilities, and delegation does not remove responsibility.

(…) Technology is constituted within an ensemble of social, political, economic relations, and exists within a certain mode and relations of production … In the case of algorithms, this happens both by accepting their black-boxing (as inscrutable, self-generating entities) and by considering their control a purely technological issue. (…)

A further illustration of these issues occurs in the debate on self-driving vehicles. For instance, the German government has issued rules for the programming of autonomous vehicles with the intent of making sure that machine intelligence will comply with duty of care principles (The Federal Government 2017). Acknowledging the limitations of the judgment of machine intelligence, these principles prescribe that, in case of an unavoidable collision, the AI should opt for harm minimization, making no discrimination on the basis of a person’s relative worth (e.g., in terms of age, gender, health, relationships, etc.). While these normative principles seem reasonable, they demonstrate the perils of considering the management of unpredictable, morally charged situations as a technical problem rather than one that is sociological. First, this principle overlooks that people have inconsistent requests, in that most agree with driverless cars making utilitarian decisions but only if these decisions do not pose a risk to the car’s occupants…: in practice, no one would buy a car that decides to kill the driver to save pedestrians (which makes the issue moot). Second, it is a mistake to draw on extreme scenarios (such as the “trolley problem”), because they are very rare and based on extreme simplification. The problems encountered by a self-driving car are the same as those encountered by a human driver: in most cases, choices are standardizable and are regulated by legal dictates (e.g., traffic rules) that are formulated through collective decision-making. In following such rules, a machine can actually be more consistent (and law abiding) than a human driver, minimizing accidents. In unpredictable and sudden events requiring moral deliberation (saving the driver or another person), it could be argued that the choice is best made randomly, similar to the instinctive, prerational deliberation that would be made by a human driver in those circumstances.
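To make the decision logic described in that passage concrete, here is a minimal, hypothetical sketch of my own (it is not taken from the chapter or from the German rules, and every name in it is invented): follow codified traffic rules first, minimize expected harm without looking at anyone’s personal attributes, and fall back to a random pick when the remaining options are morally equivalent.

```python
import random
from dataclasses import dataclass

# Toy illustration only: a decision policy that (1) prefers rule-compliant
# options, (2) minimizes expected harm without using personal attributes,
# and (3) chooses randomly among morally equivalent options.

@dataclass
class Option:
    label: str
    violates_traffic_rules: bool
    expected_harm: float  # e.g., expected number of people injured

def choose(options: list[Option]) -> Option:
    # 1. Prefer options that comply with codified traffic rules.
    legal = [o for o in options if not o.violates_traffic_rules] or options
    # 2. Minimize expected harm; note there is no attribute here for age,
    #    gender, health, etc., so no such discrimination is possible.
    least_harm = min(o.expected_harm for o in legal)
    candidates = [o for o in legal if o.expected_harm == least_harm]
    # 3. If several options remain equivalent, pick one at random,
    #    mirroring the "random choice" argument in the quoted passage.
    return random.choice(candidates)

if __name__ == "__main__":
    options = [
        Option("swerve onto sidewalk", violates_traffic_rules=True, expected_harm=1.0),
        Option("brake in lane", violates_traffic_rules=False, expected_harm=0.5),
        Option("swerve into empty lane", violates_traffic_rules=False, expected_harm=0.5),
    ]
    print(choose(options).label)
```

Even this toy shows where the sociological questions hide: someone has to decide what counts as “harm”, whose estimates feed the numbers, and whether a random tie-break is acceptable at all.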

Facebook, YouTube, and other social media companies often argue that they bear no responsibility for the spread of misinformation and hate messages on their platforms because content is filtered and/or promoted by algorithms. The use of algorithms gives these companies’ activities a veneer of legitimacy.

But, as my colleague Ace argues, and as the content moderation episode shows, algorithms are not all-powerful, autonomous entities. Not only do they perform poorly in novel scenarios, but they are only embraced when they serve the interests of the companies that deploy them. For Facebook, the risk of letting child abuse and other such content slip through the net is simply too great.

So, when you hear a company say “We are doing X because the algorithm says so”, ask yourself “Whose interests and which priorities are being served by this decision?”.
