In light of the recent protests against racism in the US, tech giant IBM has announced that it will no longer offer its facial recognition or analysis software. In a letter to Congress, IBM’s CEO Arvind Krishna called for new efforts to pursue justice and racial equality.
The company opposes the use of the technology for mass surveillance and racial profiling. It will no longer research, develop, or offer facial recognition products, and its visual object detection software will not be used for facial recognition either.
What is facial recognition technology?
Facial recognition is a form of artificial intelligence that uses algorithms to map the unique geometry of a person’s face and compare it against the faces stored in a dataset. The consumer world is already well acquainted with the technology, since it is used for unlocking high-end phones.
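For readers curious how the comparison step works in practice, here is a minimal sketch: an upstream model is assumed to have already converted each face image into a fixed-length embedding vector, and matching reduces to finding the closest enrolled embedding. The match_face function, the 128-dimensional embeddings, and the 0.6 threshold are illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of the matching step in a facial recognition pipeline.
# Assumes an upstream model has already turned each face image into a
# fixed-length embedding (128 floats here); the vectors below are
# placeholders, not real biometric data.
import numpy as np


def match_face(probe, gallery, threshold=0.6):
    """Return the identity whose enrolled embedding is closest to the probe,
    or None if no distance falls under the acceptance threshold."""
    best_name, best_dist = None, float("inf")
    for name, enrolled in gallery.items():
        dist = np.linalg.norm(probe - enrolled)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None


# Hypothetical usage: in a real system these embeddings would come from a
# trained face-embedding model, not a random number generator.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + rng.normal(scale=0.01, size=128)  # noisy re-capture
print(match_face(probe, gallery))  # -> "alice"
```

The key design point is the threshold: set it too loosely and strangers are matched to enrolled identities; set it too tightly and legitimate users are rejected, and where those errors fall can differ across demographic groups.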
It can also be used to fight crime through mass surveillance, which the Chinese government has been doing at scale. However, the datasets behind these systems are prepared by humans, and if the humans harbour a bias, it is reflected in the AI platform.
This is why cases of racial profiling bias have surfaced in facial recognition technology. Tech giant Google has also found such bias in its own systems, though it was corrected soon after it was noticed.
What does IBM’s move mean?
With IBM’s move to completely stop its facial recognition offerings, the decision amounts to a bigger push for better laws and for solidarity against police brutality. Arvind Krishna also added that he wishes to work with Congress on three key policy areas: police reform, responsible use of technology, and broadening skills and educational opportunities.
IBM’s Watson and other AI offerings have been widely appreciated. In an exclusive interview with Express Computer, IBM Research India discusses the developments in AI and cloud at its labs.
Resolving the bias
The death of George Floyd has sparked protests across the US with protestors demanding racial equality. It is crucial for artificial intelligence to correct any kind of bias that exists in its algorithms.
A larger, more representative dataset would be one way to get rid of these toxic biases; the other is reforming the deep-rooted biases that exist in human minds. The latter needs to be worked on at many levels, while the former is the first step towards promoting equality.
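As a rough illustration of how such bias can be surfaced before a system is deployed, the sketch below computes a recognition error rate per demographic group on a labelled evaluation set. The group labels and records are fabricated for illustration only and do not describe IBM’s or Google’s internal tooling; real audits use large, carefully labelled benchmarks.

```python
# Illustrative bias audit: compare a recognition system's error rate
# across demographic groups in a labelled evaluation set.
from collections import defaultdict

# Each record: (group label, ground-truth identity, predicted identity).
# Fabricated data for illustration.
results = [
    ("group_a", "alice", "alice"),
    ("group_a", "carol", "carol"),
    ("group_b", "dave",  "erin"),   # misidentification
    ("group_b", "frank", "frank"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    errors[group] += int(truth != predicted)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")

# A large gap between groups signals that the training data, or the model
# itself, needs rebalancing before the system is put into use.
```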