The human brains behind the veil of Artificial Intelligence

Abishek Thamma, Tom Pelletreau-Duris · 2023-02-05

How do human cognitive biases and commercial incentives shape the supposedly "objective" algorithms that increasingly govern our attention, decisions, and social institutions—and can transparency offer a path toward accountability?

1. Introduction

We are currently in the era of a new form of business competition, the "Attention Economy" [1], in which technology providers fight for the limited attention span of the consumer. This system incentivizes technology companies to attract users' attention through personalised content curation, clickbait content, and frequent and often unnecessary notifications. Companies end up spending large amounts of time developing highly sophisticated algorithms that identify target audiences through the creation of user profiles and that build recommendation systems to maximise the user's interaction with their application or website.

The term Artificial Intelligence (AI) is often used to refer to systems and algorithms designed to exhibit smart (often human-like) behaviour and decision-making. However, this term fails to recognize the human subjectivity that lies behind these algorithms. AI is often presented as an impartial and objective technology, which obscures the complexities of the human subjectivity that guides these algorithms, as well as the potential for human error when building these systems. We have come to realise that these AI systems are less "intelligent" systems than models that formalize collective representations of culture, values and beliefs, and by extension their biases and stereotypes [2]. When such a tool is portrayed under the guise of objectivity, people are more likely to accept it and not question the way it has been made, often unable to criticise it because they cannot understand it.

In this essay, we explore two topics in the era of widespread AI adoption: 1) how the term AI is a veil behind which a multitude of subjective, human cognitive biases lie, and 2) how users of AI systems run the risk of being manipulated into parting with their attention.

2. Cognitive biases during human-algorithm interaction

Statistical learning algorithms require data to produce results, and the quality of the data determines the quality of the results. While biases in algorithm behaviour attributed to biases in data are commonly acknowledged in both the popular press and the scientific literature, biases stemming from algorithm design are not, despite evidence pointing to their existence. If we define these biases as those that arise during the human-algorithm interaction of the design process, and assume that they are not due to intentional design choices, we can try to understand them through the lens of well-documented human cognitive biases.

The results of algorithms depend on the way in which programmers have written them. Programmers, however, are human beings, and much research in psychology and cognitive science shows the existence of cognitive biases in decision making (Tversky and Kahneman, 1974) [3]. These cognitive biases can lead to biases in the algorithms. Cognitive biases are distortions of the way information is processed relative to rational behaviour or reality. For example, the "bandwagon" or "Panurge's sheep" bias may lead a programmer to follow models that are popular without ensuring their accuracy. Anticipation and confirmation biases can lead a programmer to favour their own worldview even when the available data challenge it. The "illusory correlation" bias can lead a person to detect correlations between two independent events. All of these biases can lead to choices of variables that reflect a particular perception of the phenomena and that guide the algorithm towards biased decisions. It is easy to believe that scientists and researchers are better placed than the mainstream press to understand the biases of AI; however, the reality is far from it. To illustrate, we can look at the highly controversial work of Kosinski and Wang. Their work on detecting an individual's sexual orientation from facial images has been the subject of many articles in the mainstream press, including one in the Guardian (Levin, 2017) [4] headlined "LGBT groups denounce 'dangerous' AI that uses your face to guess sexuality", which concluded that their work is probably more indicative of their own perceptions than of any reality.
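To make the "illusory correlation" bias concrete, here is a minimal sketch in Python using purely synthetic data of our own invention (it is not drawn from any of the studies cited here): two variables that are independent by construction can still show a sizeable sample correlation when the sample is small, which is exactly the kind of pattern a designer might mistake for a meaningful feature.

```python
import random
import statistics

# Two independent standard-normal variables: any correlation we "detect"
# between them is pure sampling noise (synthetic illustration only).
def sample_correlation(n):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]  # independent of x by construction
    return statistics.correlation(x, y)

random.seed(1)
small = max(abs(sample_correlation(15)) for _ in range(20))    # 20 small datasets
large = max(abs(sample_correlation(5000)) for _ in range(20))  # 20 large datasets
print(f"largest |r| across 20 samples of size 15:   {small:.2f}")  # often above 0.4
print(f"largest |r| across 20 samples of size 5000: {large:.2f}")  # close to 0
```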

In addition to biases during design, we can also see cognitive biases playing a role in the human-algorithm interaction. A known type is stereotype bias, which can occur when an individual acts with reference to the social group with which they identify rather than their individual abilities. A lot of research has shown that individual performance can decrease when an individual thinks he or she is being judged or selected on the basis of negative stereotypes. These biases can appear, for example, in online job advertisements, where women self-select and respond to advertisements they believe will give them a higher probability of being accepted. In turn, algorithms that feed on click data reinforce these stereotype threats. This type of bias can then be present both in the data that feed the model and (worse) in the selection made by the designer. Such stereotype biases have been highlighted in many critical situations, such as police control or social services "scoring" algorithms. In the Smithsonian magazine article "Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?" (Rieland, 2018) [5], the journalist tells the story of a computer scientist at Carnegie Mellon University who developed a crime-predicting software tool called CrimeScan several years ago. His original concept was that violent crime is in some ways like a communicable disease: it tends to break out in geographic clusters. He and his colleagues also came to believe that lesser crimes can be a harbinger of more violent ones, so they built an algorithm using a wide range of "leading indicator" data, including reports of crimes such as simple assaults, vandalism and disorderly conduct, and 911 calls about such things as shots fired or a person seen with a weapon. The idea is to track sparks before a fire breaks out. "We look at more minor crimes," Neill says. "Simple assaults could harden to aggravated assaults. Or you might have an escalating pattern of violence between two gangs." The American Civil Liberties Union (ACLU), the Brennan Center for Justice and various civil rights organisations have all raised questions about the risk of bias being baked into the software. Indeed, critics contend that historical data from police practices can create a feedback loop through which algorithms make decisions that both reflect and reinforce attitudes about which neighbourhoods are "bad" and which are "good." That is why AI based primarily on arrest data carries a higher risk of bias: it is likely to be more reflective of police decisions than of actually reported crime.
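The feedback-loop argument can be made concrete with a deliberately simplified simulation. In the sketch below (a toy model constructed for illustration, not CrimeScan or any real system), two neighbourhoods have identical underlying crime rates, but patrols are allocated in proportion to past arrests and arrests can only be recorded where officers are sent; the recorded disparity then mirrors and perpetuates the initial imbalance in the historical data rather than any difference in actual crime.

```python
import random

# Toy illustration of the feedback loop critics describe: identical true
# crime rates, but patrol allocation follows historical arrest counts and
# arrests are only recorded where patrols go.
true_crime_rate = {"A": 0.3, "B": 0.3}   # same underlying crime in both areas
arrests = {"A": 10, "B": 5}              # historical imbalance in the records
patrols_per_round = 100

random.seed(0)
for _ in range(20):
    total = arrests["A"] + arrests["B"]
    # Allocate patrols in proportion to past arrests (the feedback step).
    allocation = {h: round(patrols_per_round * arrests[h] / total) for h in arrests}
    for hood, patrols in allocation.items():
        # An arrest can only happen where an officer is present to record it.
        new_arrests = sum(random.random() < true_crime_rate[hood] for _ in range(patrols))
        arrests[hood] += new_arrests

print(arrests)  # "A" ends up with roughly twice the recorded arrests of "B"
```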

3. Algorithms in the Attention Economy

The concept of the "Attention Economy" was first theorized by the economist and psychologist Herbert A. Simon, who writes about the scarcity of attention in an information-rich world as follows [1]: "In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it." In an attempt to compete for the scarce resource of user attention, tech companies are in a form of arms race to improve their AI systems. Through the use of a wealth of user data, companies attempt to build sophisticated user profiles, including the ability to categorise a user's personality more accurately than their human counterparts can. The algorithms leverage these extensive user profiles to provide custom-curated content, in an attempt to maximise the user's engagement with the product, and to serve them "relevant" targeted advertising [6]. The authors of the paper "Personality Panorama" [7] discuss the ability of large-scale big data systems to build accurate personality profiles from behavioural personality data, and how the same data, if handled carelessly, can be used to build a "personality panopticon", a dystopian system inspired by Jeremy Bentham's Panopticon, where all behaviours are "ceaselessly observed and regulated".

Through the use of reinforcement-based techniques, algorithms end up trying to maximise user attention. Examples include rewards in the form of "likes" or daily log-in rewards. Research has shown that human behaviour on social media applications consistently conforms to "reward learning" theory, which was originally developed to explain the behaviour of animals [8]. Additionally, social media applications are designed to be "deliberately" addictive to users [9]. By combining "sufficient motivation", a trigger in the form of a vibrating notification, and an action, the apps create a cycle of habit formation, which in the longer term turns out to be addictive [10]. As the famous adage goes, "If something is free, then you are the product."
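As a rough illustration of the reward-learning idea (not the actual model of Lindström et al. [8]; the update rule, parameters and the hypothetical likes_received function below are simplifying assumptions of our own), a user's posting effort can be modelled as adapting to the prediction error between expected and received "likes":

```python
import random

# Sketch of a reward-learning loop: the user's expectation of likes is updated
# by a prediction error, and posting effort drifts toward whatever the reward
# history reinforces. Purely illustrative parameters.
learning_rate = 0.1
expected_reward = 0.0   # running estimate of likes per post
posting_effort = 1.0    # proxy for how often/eagerly the user posts

def likes_received(effort, platform_generosity=5.0):
    """Hypothetical platform response: noisy reward that grows with effort."""
    return max(0.0, random.gauss(platform_generosity * effort, 1.0))

random.seed(42)
for _ in range(200):
    reward = likes_received(posting_effort)
    prediction_error = reward - expected_reward        # better or worse than expected?
    expected_reward += learning_rate * prediction_error
    posting_effort = max(0.1, posting_effort + 0.01 * prediction_error)

print(f"learned reward estimate: {expected_reward:.1f}, posting effort: {posting_effort:.1f}")
```

Even in this crude sketch, engagement ratchets upward as long as the platform keeps delivering rewards above expectation, which is the behavioural lever the "attention economy" exploits.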

4. Transparency: A possible approach towards a solution

The lack of transparency surrounding proprietary software and its algorithms makes it difficult to understand the choices it makes and the criteria under which it makes them, sometimes even for the engineers who created the AI system. This "black box" phenomenon means that there is little accountability for the decisions the AI system makes. As machine learning becomes more advanced, this lack of transparency and accountability will only become more pronounced. Both the issue of biases creeping in during design or human-algorithm interaction and the manipulation of users through the algorithms employed in social media could have solutions that involve greater transparency about those algorithms.

In the example of social media algorithms, personalisation achieved through algorithmic curation can often lead to a lack of diversity in media content and possible "echo chambers" [11]. Diakopoulos (2016) [12] argues that "complete source-code transparency of algorithms, however, is overkill in many if not most cases. Instead, the disclosure of certain key pieces of information, including aggregate results and benchmarks, would be far more effective in communicating algorithmic performance to the public".
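As a sketch of what such disclosure could look like in the echo-chamber setting (the metric, topic labels and example feeds below are our own illustrative choices, not something Diakopoulos prescribes), a platform could publish an aggregate content-diversity benchmark rather than its source code:

```python
import math
from collections import Counter

def topic_diversity(feed_topics):
    """Shannon entropy (bits) of the topic mix in a feed: 0 is a perfect
    echo chamber, higher values mean more diverse exposure."""
    counts = Counter(feed_topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Invented example feeds for illustration only.
curated_feed = ["politics"] * 18 + ["sports"] * 2             # heavily personalised
balanced_feed = ["politics", "sports", "science", "arts"] * 5

print(f"curated feed diversity:  {topic_diversity(curated_feed):.2f} bits")
print(f"balanced feed diversity: {topic_diversity(balanced_feed):.2f} bits")
```

Aggregated over many users, a figure of this kind would communicate how much personalisation narrows exposure without revealing any proprietary code.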

In a different context, this problem becomes even more critical when it concerns scoring algorithms used for health, social welfare or financial aid. AI and scoring algorithms rely on data and on designers, both of which can be biased, and because of this they end up formalizing some form of bias. Elizabeth Brico wrote the article "An algorithm took my kid" (2018) [13] about her personal situation. She asserts that her children were unfairly removed from her custody due to the influence of biased predictive analysis software. She discusses how social services often target those who cannot afford to escape the algorithm's judgement, creating a system that views poverty as a moral failing. Brico believes this to be a grave injustice. Transparency thus appears to be a necessity. But transparency about the algorithmic criteria may actually feed the "self-fulfilling algorithmic prophecy" (Burk, 2021) [14] by revealing to potential applicants the characteristics they should emphasise and display in their applications in order to be judged favourably. As Dan L. Burk highlights in his paper, this tendency makes performativity play a stronger role in our social administrations. Indeed, if AI deployments amplify practices already at work in human institutions, extending human cognitive capabilities and analysing data sets to identify patterns beyond human perception, they also extend existing social practices such as performativity, enhancing these practices for good or ill.

5. Conclusion

Despite the expectation that algorithms will provide "objective" and superior performance compared to humans, studies have found that algorithms are often subject to the same biases that humans are, as they can replicate systematic biases learned from training data. However, bias can also arise at another level: in the human-AI interaction, in how decision-makers interpret and act upon algorithmic outputs. Thus, while the use of algorithms in high-stakes public sector areas is often justified on the grounds of improved performance and "objectivity," it is important to remember that bias can still arise in how decision-makers process, interpret and act upon algorithmic outputs. Furthermore, there exists a power imbalance in the context of social media, where corporations have a heavy influence over user behaviour while users have close to zero knowledge of which of their data is being used and how it is being used to "personalise" their experience. While demands for greater transparency can help minimise the bias and the power imbalance, there is a long, hard way to go. To quote the French sociologist and philosopher Jean Baudrillard, "The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence." We should be able to conclude, however, that while AI does not necessarily have artifice by deliberate design, artifice could end up being emergent, along with intelligence.

References

  1. Herbert A Simon et al. Designing organizations for an information-rich world. Computers, communications, and the public interest, 72:37, 1971.
  2. Sanne Blauw. Artificial intelligence is more human than it seems. So who's behind it? The Correspondent, October 2019.
  3. Amos Tversky and Daniel Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124–1131, 1974.
  4. Sam Levin. LGBT groups denounce 'dangerous' AI that uses your face to guess sexuality. The Guardian, September 2017.
  5. Randy Rieland. Artificial intelligence is now used to predict crime. Is it biased? Smithsonian Magazine, March 2018.
  6. Ricardo Baeza-Yates and Usama M. Fayyad. The Attention Economy and the Impact of Artificial Intelligence, pages 123–134. Springer International Publishing, Cham, 2022.
  7. R. L. Boyd, P. Pasca, and K. Lanning. The personality panorama: Conceptualizing personality through big behavioural data. European Journal of Personality, 2020.
  8. B. Lindström, M. Bellander, D.T. Schultner, and et al. A computational reward learning account of social media engagement. Nature Communications, 12:1311, 2021.
  9. Hilary Andersson. Social media apps are 'deliberately' addictive to users, July 2018.
  10. Hannah Schwär. How Instagram and Facebook are intentionally designed to mimic addictive painkillers, August 2021.
  11. Emilee Rader, Kelley Cotter, and Janghee Cho. Explanations as mechanisms for supporting algorithmic transparency. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, page 1–13, New York, NY, USA, 2018. Association for Computing Machinery.
  12. N. Diakopoulos. Accountability in algorithmic decision making. Communications of the ACM, 59(2):56–62, 2016.
  13. Elizabeth Brico. New algorithms perpetuate old biases in child welfare cases, September 2018.
  14. Dan L. Burk. Racial bias in algorithmic IP. 106 Minnesota Law Review Headnotes, Forthcoming, August 2022. Available at SSRN.