The rise of artificial intelligence (AI) has brought a new and important element to the diversity and inclusion conversation. How can we mitigate biases in AI?
AI is increasingly being integrated into our daily lives, from virtual assistants to self-driving cars. However, there is growing concern about the potential for biases to be built into these systems, which could have significant negative impacts on Diversity, Equality and Inclusion (DEI).
It’s easy to question how AI can exhibit bias, but an AI system is only as good as the data it is trained on. AI systems learn from training data sets: if the data contains a bias that has not been identified and accounted for, the system will learn and reinforce that bias. For example, if a facial recognition system is trained primarily on images of white males, it may have difficulty accurately identifying individuals who do not fit that profile.
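One simple first check is to audit how well each demographic group is represented in the training data before any model is built. The sketch below is illustrative, not a production tool: the group labels and the 10% minimum-share threshold are hypothetical assumptions chosen for the example.

```python
from collections import Counter

def representation_report(group_labels, min_share=0.10):
    """Compute each group's share of the dataset and flag groups
    whose share falls below a minimum threshold (hypothetical cutoff)."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    shares = {group: count / total for group, count in counts.items()}
    underrepresented = [g for g, s in shares.items() if s < min_share]
    return shares, underrepresented

# Illustrative face-dataset labels, heavily skewed toward one demographic.
groups = ["white_male"] * 80 + ["white_female"] * 12 + ["black_female"] * 8
shares, flagged = representation_report(groups)
# "black_female" makes up only 8% of the data, so it is flagged for review.
```

A real audit would of course use the dataset's actual demographic annotations and a threshold agreed with domain experts, but even a count like this can surface the kind of skew described above before it is baked into a model.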
The World Economic Forum shared an example from Amazon, where software engineers built a program in 2014 to review job applicants’ resumes. They soon realised the system discriminated against women applying for technical roles, and Amazon recruiters did not use the software to evaluate candidates because of these discrimination and fairness issues. If an AI system is used for hiring decisions, it is important to ensure that the system does not discriminate against individuals based on protected characteristics, such as race or gender.
There is also potential for bias in algorithms: the sets of rules and procedures that AI systems use to analyse data and make decisions. A biased algorithm produces biased outcomes. For example, if a credit scoring algorithm places more weight on factors that disadvantage certain demographic groups, defined by characteristics such as race or gender, it may result in unfair credit scores for individuals from those groups.
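One common way to quantify this kind of outcome bias is the demographic parity gap: the difference in favourable-decision rates between groups. The sketch below assumes hypothetical binary credit decisions (1 = approved) and made-up group labels "A" and "B", purely to illustrate the calculation.

```python
def approval_rates(decisions, groups):
    """Per-group approval rate for binary decisions (1 = approved)."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions: group A is approved far more often than group B.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
# Approval rates here are A = 0.8 and B = 0.2, a large gap worth investigating.
```

A large gap is not proof of unfairness on its own, but it is exactly the kind of signal that should trigger a closer human review of the algorithm's inputs and weighting.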
Developers should take steps to identify and address biases before deploying their AI systems. These include:
- Understand how datasets may contain bias
- Test algorithms for potential bias
- Implement processes for monitoring and addressing issues that arise
To mitigate biases that arise from biased data, you should ensure that the training data used is representative of the population that the AI system will be used for. This means that the training data should include individuals from diverse backgrounds and demographics.
The algorithms used should also be transparent and auditable. This means humans can review the decision-making process of the AI system and carefully evaluate its outcomes to identify and correct any biases that may have arisen.
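An outcome audit like this can be as simple as comparing the system's accuracy across demographic groups and flagging disparities for human review. The example below is a minimal sketch with invented predictions, group labels and a hypothetical 10% review threshold.

```python
def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken down by demographic group, for human review."""
    acc = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        acc[g] = sum(t == p for t, p in pairs) / len(pairs)
    return acc

def needs_review(acc, max_gap=0.10):
    """Flag the system if the accuracy gap between the best- and
    worst-served groups exceeds a review threshold (hypothetical)."""
    return max(acc.values()) - min(acc.values()) > max_gap

# Invented audit data: the system performs much worse for group B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
acc = per_group_accuracy(y_true, y_pred, groups)
# acc shows group A at 100% accuracy and group B at 25%, so the
# audit flags the system for human investigation.
```

The point of keeping the audit this transparent is that the numbers, the threshold and the grouping can all be challenged and adjusted by the humans reviewing the system, rather than being hidden inside the model.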
It’s wise to involve a diverse group of stakeholders in the development and implementation of these systems, including individuals from diverse backgrounds and demographics. This reduces the potential for biases to be built into AI systems, and allows the ethical implications to be more thoroughly considered.
If you are using or considering utilising AI systems, how will you mitigate biases?
Take a look at these resources for further reading: