By Sushmita Srivastava, Associate Professor, Organisation & Leadership Studies, S. P. Jain Institute of Management and Research (SPJIMR) and Aiden Gigi Samuel, Student, Artificial Intelligence & Data Science, College of Engineering, Mumbai
In today's rapidly evolving, tech-driven world, the convergence of human biases and technological advancement has become a topic of profound significance. From unconscious biases influencing our decisions to algorithmic biases shaping AI-driven systems, understanding the complex interplay between these two realms is essential for informed decision-making and for fostering a culture of trust and transparency.
The human brain possesses remarkable capabilities, yet it is not without its imperfections. Scientific research has illuminated the presence of various mental errors known as “cognitive biases,” which can exert influence over our thoughts and behaviours. These biases, often leading us to draw inaccurate conclusions, confirm pre-existing beliefs, or distort our recollection of events, are an inherent aspect of human cognition.
However, the intricate interplay between cognitive biases and unconscious biases adds a layer of complexity to our understanding. Cognitive biases, as previously discussed, are the mental missteps we take when processing information, frequently contributing to flawed decision-making. Unconscious biases, in contrast, are deeply ingrained attitudes and beliefs that shape our perceptions and interactions without conscious awareness.
The intriguing connection emerges because these two forms of bias can interact and mutually reinforce each other. Our automatic cognitive biases can unwittingly amplify the strength of unconscious biases. For instance, confirmation bias, which drives us to seek information that aligns with our existing beliefs, can intensify unconscious biases by filtering out contradictory viewpoints. Similarly, availability bias, where the information that comes to mind most readily guides our thinking, can perpetuate unconscious biases by over-weighting the familiar information that already aligns with them.
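To make this filtering mechanism concrete, the toy simulation below shows how an agent that discards evidence contradicting its current leaning drifts toward an extreme position, while an unbiased agent converges on reality. The agent model, the initial leaning, and the update rule are illustrative assumptions, not claims about how people actually weigh evidence.

```python
import random

random.seed(42)

def update(belief, evidence, weight=0.05):
    """Nudge a belief (0..1) toward a piece of binary evidence (0 or 1)."""
    return belief + weight * (evidence - belief)

def simulate(confirmation_bias, steps=500, true_rate=0.5):
    """An agent observes 50/50 evidence; with confirmation bias on,
    it silently drops any observation that contradicts its leaning."""
    belief = 0.6  # small initial leaning, standing in for an unconscious bias
    for _ in range(steps):
        evidence = 1 if random.random() < true_rate else 0
        leaning = 1 if belief >= 0.5 else 0
        if confirmation_bias and evidence != leaning:
            continue  # contradictory viewpoints are filtered out
        belief = update(belief, evidence)
    return belief

print(f"unbiased agent: {simulate(False):.2f}")  # hovers near 0.50
print(f"biased agent:   {simulate(True):.2f}")   # drifts toward 1.00
```

The point of the sketch is the asymmetry: the biased agent never registers disconfirming evidence, so even a perfectly balanced world pushes its belief toward an extreme.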
Recognizing this linkage underscores the importance of addressing cognitive and unconscious biases in tandem. By comprehending the potential for cognitive biases to magnify unconscious ones, we gain insight into how to counteract their negative influence: deliberately seeking out diverse perspectives, challenging assumptions, and cultivating a mindset of inclusivity. Acknowledging and tackling the interplay between these two types of bias can improve decision-making on both personal and societal scales, fostering a more equitable and balanced thought process.
Research on unconscious bias by the Human Resources Professional Association reveals that even well-intentioned managers may inadvertently favour candidates who share similarities with them. This subconscious preference perpetuates systemic inequalities, highlighting the need for introspection and active effort to counteract such biases. The Implicit Association Test (IAT) goes further, uncovering language bias: the subconscious word associations it measures point to biases we may not even be aware we hold.
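The IAT's core idea is quantitative: people respond faster when paired concepts match their implicit associations than when they clash. The snippet below illustrates that scoring idea with fabricated reaction times; the real test uses a more elaborate procedure (Greenwald et al.'s D measure, with multiple blocks and error handling), so treat this as a sketch of the principle only.

```python
from statistics import mean, stdev

# Hypothetical reaction times (ms) from one respondent.
# "Compatible" trials pair concepts that match the respondent's implicit
# associations; "incompatible" trials pair concepts that cut against them.
compatible = [642, 580, 611, 598, 655, 603, 590, 627]
incompatible = [801, 765, 840, 712, 790, 823, 768, 755]

# Simplified IAT-style effect: the latency gap scaled by pooled variability.
pooled_sd = stdev(compatible + incompatible)
d_score = (mean(incompatible) - mean(compatible)) / pooled_sd

print(f"mean gap: {mean(incompatible) - mean(compatible):.0f} ms")
print(f"D-style score: {d_score:.2f} (larger = stronger implicit association)")
```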
In a world where AI increasingly guides human resource planning, the potential for “algorithmic bias” becomes a concern. AI algorithms learn from historical data, which can be riddled with deeply ingrained biases. Language bias, interview bias, and educational pedigree bias can permeate even AI-driven systems, necessitating transparency and vigilant evaluation. IBM’s approach to fine-tuning AI-based compensation strategies showcases the importance of adapting algorithms to fit an organization’s unique culture and values.
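To see how deeply ingrained biases in historical data become algorithmic bias, consider the minimal sketch below. Everything in it is fabricated for illustration: the synthetic hiring records, the "pedigree" proxy feature, and the effect sizes are assumptions, not results from any real system. The model never sees the protected attribute, yet it learns to penalise one group through a correlated proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical historical hiring data: "group" is a protected attribute,
# and past managers hired group 0 at a higher rate for equally skilled people.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * group) > 0  # biased labels

# The model never sees `group` directly, but a correlated proxy leaks it in
# (think of an "educational pedigree" feature tied to group membership).
pedigree = 0.9 * (1 - group) + rng.normal(0, 0.3, n)
X = np.column_stack([skill, pedigree])

model = LogisticRegression().fit(X, hired)

# Score two equally skilled applicants who differ only in the proxy feature.
applicants = np.array([[0.5, 0.9],   # pedigree typical of group 0
                       [0.5, 0.0]])  # pedigree typical of group 1
probs = model.predict_proba(applicants)[:, 1]
print(f"group-0-like applicant: {probs[0]:.2f}")
print(f"group-1-like applicant: {probs[1]:.2f}")  # lower, despite equal skill
```

This is the essence of educational pedigree bias in an algorithm: the protected attribute is absent from the inputs, but a feature entangled with it carries the historical prejudice forward.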
The transformative potential of AI in reducing bias is exemplified by a company that adopted Pymetrics' AI-based gamified evaluation for hiring. The shift led to a remarkable 30% increase in performance rates for marketing and sales hires, mitigating two entrenched problems: interview bias and educational pedigree bias. Implementing AI nonetheless requires a thoughtful approach, as haste can lead to unintended consequences. Just as IBM honed its AI strategies over years, other organizations must invest time in refining AI systems to align with their goals and values.
The interplay of biases extends beyond the pairing of cognitive and unconscious biases and enters the realm of algorithms. Algorithmic bias, rooted in biased data inputs, can propagate skewed predictions, and humans' reactions to those predictions can either magnify or mitigate it. Such reactions can produce an increase in confirmation bias, availability bias, and automation bias, as shown in Figure 1. ChatGPT is a case in point: however capable it is of providing accurate information, users must validate and critically analyse its output to avoid succumbing to automation bias.
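The loop in Figure 1 can be caricatured in a few lines of code. In the sketch below, a model's slightly skewed estimate shapes human decisions, and those decisions become the next round's training data; the numbers, the gain parameter, and the update rule are arbitrary assumptions chosen only to show how the same loop can either amplify or dampen an initial skew.

```python
def feedback_loop(audited, rounds=10, gain=0.15):
    """One-number caricature of the human-algorithm feedback loop:
    the model's estimate shapes decisions, and decisions retrain the model."""
    estimate = 0.55  # slight initial algorithmic skew (ground truth: 0.50)
    for _ in range(rounds):
        if audited:
            # Humans who question outputs pull decisions back toward reality.
            decisions = estimate - gain * (estimate - 0.50)
        else:
            # Automation and confirmation bias: humans lean into the suggestion.
            decisions = estimate + gain * (estimate - 0.50)
        estimate = min(max(decisions, 0.0), 1.0)  # decisions become new data
    return estimate

print(f"unaudited loop: {feedback_loop(False):.2f}")  # ~0.70, skew compounds
print(f"audited loop:   {feedback_loop(True):.2f}")   # ~0.51, pulled back
```

The same mechanism, with only the sign of the human reaction changed, produces either runaway bias or correction, which is why human validation sits at the centre of the loop.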
This intricate web of biases is magnified on social media platforms, where algorithms exploit confirmation bias by tailoring content to users' existing beliefs. Such personalization narrows perspectives and hinders holistic understanding. Automation bias is equally evident in our interactions with AI systems, as illustrated by my own preference for certain systems because of their perceived real-time capabilities. This predisposition towards automation can skew our perception of which information is authentic.
The impact of biases on trust and acceptance of AI systems cannot be overstated. As our reliance on AI grows, user personality becomes intricately linked to the adoption of these systems. However, while technology exposes us to biases, it also presents the opportunity to confront and challenge them. The narrative of bias is intricate, its influence on decision-making profound. As we advance into a tech-enabled future, cultivating awareness about biases—both human and technological—becomes a shared responsibility. This awareness lays the foundation for a world where technology and human biases are acknowledged, understood, and transcended.
In this evolving landscape, the convergence of human and algorithmic biases underscores the need for caution and informed decision-making. Overreliance on AI in aviation exemplifies the consequences of automation bias: excessive trust in autopilot systems can lead to accidents when human intervention is absent. The shift from traditional machine-learning algorithms trained on expert-labelled data to algorithms trained on unchecked information from the general population poses further challenges, because biased samples and biased labels can infiltrate the models, producing algorithmic bias that then influences human decisions.
Humans, in turn, can inadvertently exacerbate algorithmic bias. When people react to the biased outputs of machine-learning methods, decisions based on that biased information perpetuate the cycle. An intriguing case arises with systems such as ChatGPT and Bard: when users challenge their responses, the dynamics between human understanding, algorithmic bias, and automation bias come to the forefront. A user's past experiences and beliefs may drive them to "correct" the AI, prompting a reiteration of the initially biased response.
Most research on algorithmic bias treats it as a static factor, overlooking its dynamic, iterative nature. Bias can evolve over time and be reinforced by repeated exposure to information that confirms it. Factors such as confirmation bias and iterative filter bias in personalized user interfaces can produce inequality in relevance estimation, limiting users' access to diverse perspectives. The phenomenon is visible on social media platforms such as YouTube and Instagram, where users are shown mostly content that aligns with their existing beliefs.
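Iterative filter bias is easy to reproduce in miniature. In the sketch below, a naive recommender weights topics by past clicks squared, a crude stand-in for engagement-optimised relevance estimation, so a slight initial lean compounds round after round; the topics, the weighting rule, and the feed size are all invented for the illustration.

```python
import random
from collections import Counter

random.seed(1)
TOPICS = ["politics", "sports", "science", "music", "travel"]

# The click history starts nearly uniform, with a slight lean to one topic.
history = Counter({t: 10 for t in TOPICS})
history["politics"] += 2

def recommend(history, k=20):
    """Over-serve already-clicked topics: weight ~ clicks squared,
    a crude stand-in for engagement-optimised relevance estimation."""
    topics = list(history)
    weights = [history[t] ** 2 for t in topics]
    return random.choices(topics, weights=weights, k=k)

for round_no in range(1, 9):
    feed = recommend(history)
    history.update(feed)  # the user clicks what they are shown
    share = history["politics"] / sum(history.values())
    print(f"round {round_no}: politics share of history = {share:.0%}")
```

Because the relevance estimate feeds on its own output, the lean grows without any change in the user's underlying interests, which is exactly the iterative property that static treatments of algorithmic bias miss.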
Personal experience also sheds light on the intricate relationship between biases and automation. I came to rely on Google's Bard for up-to-date information because it was not constrained by a 2021 training cutoff, and that reliance slid into automation bias, driven by my trust in Bard's perceived accuracy. It is essential to recognize that no AI system is infallible: ChatGPT includes disclaimers encouraging users to verify its responses, yet automation bias may stop users from doing so.
The intersection of biases and technology is a critical juncture where our trust and decision-making are on the line. As AI systems continue to shape our experiences, understanding biases becomes paramount. Trust in AI systems is intimately tied to user personality, which amplifies the importance of building trustworthy systems. The path to trustworthy AI systems informed by user personality remains largely unexplored territory, and more research is needed.
In conclusion, the intricate relationship between human biases and technological advancements necessitates a multifaceted approach. From acknowledging unconscious biases to evaluating and fine-tuning AI systems, fostering transparency and vigilant awareness is crucial. As AI continues to evolve, so do biases. The challenge lies in embracing this evolution, understanding its implications, and forging a path towards a balanced and informed society. Only through this collective effort can we ensure that technology serves as a tool for progress rather than perpetuating existing biases.