By Nandita Krishan – Vice President, OD Consulting & Facilitation, Marching Sheep
Artificial Intelligence, or AI as we all call it, is the new buzzword. Organisations and individuals alike are using it more and more. Yet as we move towards Machine Learning and Artificial Intelligence, there are also concerns surrounding this shift to newer technology. Will AI be a threat to humanity and surpass human intelligence? Will AI be a threat to our jobs? Can we trust the judgment of AI systems and be sure it is completely logical and correct?
When a child develops, they use senses like hearing, vision, and touch to learn from the world around them. Their understanding of the world, their opinions, and the decisions they end up making are all heavily influenced by their upbringing. Machine learning models are the same. Instead of using senses as inputs, they use data - data that we give them!
Machine Learning bias, also known as algorithmic bias or Artificial Intelligence bias, refers to the tendency of algorithms to reflect human biases. It arises when an algorithm delivers systematically skewed results because of erroneous assumptions in the Machine Learning process. In today’s climate of increasing representation and diversity, this becomes even more problematic, because algorithms could be reinforcing existing biases.
Bias represents injustice against a person or a group. A lot of existing human bias can be transferred to machines, because technologies are not neutral; they are only as good, or as bad, as the people who develop them. Here is an example of how bias can lead to prejudice, injustice, and inequality in corporate organisations, one where bias in artificial intelligence was identified and the ethical risk mitigated.
In 2014, a team of software engineers at Amazon was building a program to review the resumes of job applicants. By 2015, they had realised that the system discriminated against women for technical roles: it had been trained on a decade of resumes submitted to the company, most of which came from men, and so it learned to penalise resumes that indicated the applicant was a woman. Amazon recruiters did not use the software to evaluate candidates because of these discrimination and fairness issues.
So even AI is not free from bias; it is not immune to human prejudice. Often the underlying data, rather than the algorithm itself, is the source of the bias.
With that in mind, let’s look at a few biases in AI technology:
1. Sample Bias: This happens when our training data does not accurately reflect the real-world population the model will serve. In an ever-evolving world where everybody is unique in their own way, no sample can perfectly represent the whole, and when some groups are under-represented, bias creeps in and the outcomes become skewed and unfair (a short sketch after this list illustrates the effect).
2. Label Bias: A lot of the data required to train ML algorithms needs to be labeled before it is useful. You actually do this yourself quite often when you log in to websites. Been asked to identify the squares that contain traffic lights, buses, or bicycles? You are confirming a set of labels for that image to help train visual recognition models. The way we label data varies a lot, however, and inconsistencies in labeling can introduce bias into the system.
3. Aggregation Bias: Sometimes we aggregate data to simplify it or to present it in a particular fashion. If the aggregation hides meaningful differences between groups, it can introduce bias, regardless of whether it happens before or after we create our model.
4. Confirmation Bias: Simply put, confirmation bias is our tendency to trust information that confirms our existing beliefs and discard information that doesn’t. With AI, it creeps in when we accept a model’s output because it matches what we already expected, or dismiss it when it doesn’t.
5. Temporal Bias: This is bias that arises with the passage of time. We can build a machine-learning model that works well today but fails in the future, because we didn’t factor in possible future changes when building the model (the second sketch below shows a simple case).
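To make sample bias concrete, here is a minimal sketch in Python. The scenario is entirely hypothetical: two groups of candidates, A and B, whose qualification scores happen to be distributed differently, and a deliberately tiny one-parameter “model” that just learns a score cutoff. The group sizes, score distributions, and cutoff search are all made up to show the mechanism, not to model any real hiring system.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, qualified_mean, unqualified_mean):
    """Generate n candidates: half qualified, half not, with group-specific scores."""
    scores = np.concatenate([
        rng.normal(qualified_mean, 0.5, n // 2),
        rng.normal(unqualified_mean, 0.5, n // 2),
    ])
    labels = np.array([1] * (n // 2) + [0] * (n // 2))
    return scores, labels

# Sample bias: group A dominates the training data (900 vs 100 examples),
# even though both groups are equally common in the real applicant pool.
xa, ya = make_group(900, qualified_mean=6.0, unqualified_mean=3.0)
xb, yb = make_group(100, qualified_mean=4.0, unqualified_mean=1.0)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# A one-parameter "model": pick the score cutoff that maximises training accuracy.
candidates = np.linspace(0.0, 8.0, 801)
accuracies = [np.mean((x_train > t) == y_train) for t in candidates]
cutoff = candidates[int(np.argmax(accuracies))]

# Evaluate on large, representative samples of each group.
xa_test, ya_test = make_group(10_000, 6.0, 3.0)
xb_test, yb_test = make_group(10_000, 4.0, 1.0)
print(f"learned cutoff: {cutoff:.2f}")
print(f"accuracy on group A: {np.mean((xa_test > cutoff) == ya_test):.0%}")
print(f"accuracy on group B: {np.mean((xb_test > cutoff) == yb_test):.0%}")
```

Because group A supplies 90% of the training examples, the learned cutoff is tuned to group A’s score distribution; run the sketch and group B’s test accuracy comes out well below group A’s, even though the model was never told anyone’s group.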
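Temporal bias can be sketched just as simply. Again the numbers are hypothetical: a detector learns what a “normal” transaction amount looks like in year one and flags the top 1% as suspicious; a year later, spending has drifted upward, but the rule has not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Year 1: the detector learns what "normal" spending looks like and flags
# the top 1% of transaction amounts as suspicious.
train_amounts = rng.normal(100, 20, 5000)
cutoff = np.percentile(train_amounts, 99)

# Year 2: prices and habits have drifted upward, but the cutoff hasn't moved.
future_amounts = rng.normal(140, 25, 5000)
flag_rate = np.mean(future_amounts > cutoff)

print(f"cutoff learned in year 1: {cutoff:.0f}")
print(f"share of year-2 transactions flagged: {flag_rate:.0%}")  # far above the intended 1%
```

The cutoff was right for the world the model was trained in; once that world moves on, the same rule flags a large share of perfectly legitimate transactions, far more than the 1% it was designed to catch.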
It is not just humans; even machines and technology are biased. So, will AI ever be unbiased? The answer is both yes and no. It is possible in principle, but an entirely impartial AI is unlikely ever to exist, for the simple reason that an entirely impartial human mind is unlikely ever to exist. An Artificial Intelligence system is only as good as the quality of the data it receives as input. If you could clear your training dataset of conscious and unconscious preconceptions about race, gender, and other ideological notions, you would be able to build an artificial intelligence system that makes impartial, data-driven judgments.
In the real world, however, we know this is unlikely. AI is shaped by the data it is given and learns from, and humans are the ones who generate that data. Human prejudices are many, and new ones are being identified all the time. As a result, an entirely impartial human mind, and with it an entirely impartial AI system, may never be achieved. After all, people are the ones who generate the skewed data, and humans and human-made algorithms are the ones who verify the data to detect and correct biases.