A team of US researchers has developed an Artificial Intelligence (AI) tool for detecting unfair discrimination, such as discrimination on the basis of race or gender. AI systems, such as those used to select candidates for a job or for admission to a university, are trained on large amounts of data.
“But if these data are biased, they can affect the recommendations of AI systems,” said Vasant Honavar, Professor at Pennsylvania State University. For example, if a company has historically never hired a woman for a particular type of job, then an AI system trained on that historical data will not recommend a woman for a new job.
The team, from Pennsylvania State University and Columbia University, created the AI tool to detect discrimination with respect to a protected attribute, such as race or gender.
“For example, the question, ‘Is there gender-based discrimination in salaries?’ can be reframed as, ‘Does gender have a causal effect on salary?’ or, in other words, ‘Would a woman be paid more if she were a man?’” said Aria Khademi from Penn State.
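The article does not describe the underlying algorithm, but the counterfactual question can be illustrated with a rough sketch: fit a predictive model of salary, flip the gender attribute for each woman, and compare the predictions. The column names, synthetic data and simple linear model below are assumptions for illustration only, not the researchers' actual tool.

```python
# A minimal sketch of the counterfactual question quoted above, NOT the
# researchers' method. It asks "would a woman be paid more if she were a man?"
# by flipping the gender attribute and comparing model predictions, holding
# other covariates fixed. This is only a valid causal estimate under strong,
# assumed conditions (e.g. no unmeasured confounders, correct model form).
# All column names and the synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),      # 0 = woman, 1 = man (assumed coding)
    "experience": rng.normal(10, 4, n),
    "education": rng.integers(10, 21, n),
})
# Synthetic salaries with a built-in gender gap, for illustration only.
df["salary"] = (20_000 + 2_000 * df["experience"] + 1_500 * df["education"]
                + 8_000 * df["gender"] + rng.normal(0, 5_000, n))

cols = ["gender", "experience", "education"]
model = LinearRegression().fit(df[cols], df["salary"])

# Flip the gender flag for the observed women and average the predicted change.
women = df[df["gender"] == 0].copy()
as_men = women.assign(gender=1)
effect = (model.predict(as_men[cols]) - model.predict(women[cols])).mean()
print(f"Estimated salary change for women if treated as men: ${effect:,.0f}")
```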
The researchers tested their method on various types of available data, such as income data from the US Census Bureau, to determine whether there is gender-based discrimination in salaries.
They also tested their method using the New York City Police Department’s stop-and-frisk programme data to determine whether there is discrimination against people of colour in arrests made after stops.
“We found evidence of gender-based discrimination in salary. Specifically, we found that the odds of a woman having a salary greater than $50,000 per year is only one-third that for a man. This would suggest that employers should look for and correct, when appropriate, gender bias in salaries,” explained Honavar.
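The “one-third” figure is an odds ratio. As a rough illustration of the arithmetic (not the researchers' analysis), the sketch below computes it from a two-by-two table of people above and below the $50,000 threshold; the counts are placeholders in the style of a Census-derived income extract such as the UCI “Adult” data set.

```python
# Back-of-the-envelope odds ratio of earning more than $50,000 per year for
# women versus men. The counts are placeholders, not the researchers' data;
# with a real Census-derived table you would substitute the actual counts.
def odds(high: int, low: int) -> float:
    """Odds of the high-income outcome, given counts above/below the threshold."""
    return high / low

women_high, women_low = 1_179, 9_592    # placeholder counts: women >$50k / <=$50k
men_high, men_low = 6_662, 15_128       # placeholder counts: men   >$50k / <=$50k

odds_ratio = odds(women_high, women_low) / odds(men_high, men_low)
print(f"Odds ratio (women vs men, income > $50k): {odds_ratio:.2f}")
# A value around 0.3 would correspond to the "one-third" figure in the quote;
# the researchers' tool additionally asks whether the gap is causal rather
# than a raw association.
```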
“Our tool,” he said, “can help ensure that such systems do not become instruments of discrimination, barriers to equality, threats to social justice and sources of unfairness.”