Bias in AI models

Article written by Victoria Maldonado

Feb 22, 2021

In her book “Weapons of Math Destruction,” Cathy O’Neil describes a model as an abstract representation of some process: it takes what we know and uses it to predict responses in various situations. Therefore, as she explains, a model can be biased, because it encodes our own biases in mathematical form.

An interesting example from her introduction is a program launched in Washington, D.C. in 2009 to improve school performance. The system evaluated teachers based on their students’ end-of-year test scores and recommended replacing the teachers whose students performed worse than the rest. The goal of improving education is not a bad one; the problem is how the algorithm works. Because it was based solely on students’ end-of-year exam performance, it could not account for the many other things that distinguish a good teacher from a bad one. The assumption that a good teacher’s only job is to produce high test scores reveals the biases of the people who designed the algorithm in the first place.

Teaching is far more complex than making sure students memorize material. It includes dealing with behavioral and emotional problems, and helping students become better than they were the year before, not necessarily better than students in other schools, as the algorithm expected. These expectations reflect the biases people hold toward schools with lower budgets and, perhaps, more troubled children, and they do nothing to acknowledge the work a teacher puts in to help students become not just smarter, but better people.
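To make the problem concrete, here is a minimal sketch in Python, with invented scores and hypothetical teacher names, of the difference between the rule described above (judge only the final exam score) and a growth-based rule (judge how much students improved). It is an illustration of the idea, not a reconstruction of the actual D.C. model.

```python
from statistics import mean

# Hypothetical per-teacher data: (start-of-year, end-of-year) score pairs,
# one pair per student. All numbers are invented purely for illustration.
classes = {
    "Teacher A": [(85, 88), (90, 92), (78, 79)],  # well-funded school
    "Teacher B": [(40, 61), (45, 70), (38, 58)],  # under-resourced school
}

def avg_final(pairs):
    """Average end-of-year score (the only thing the flawed rule sees)."""
    return mean(end for _, end in pairs)

def avg_growth(pairs):
    """Average improvement over the year (what the flawed rule ignores)."""
    return mean(end - start for start, end in pairs)

# The flawed rule: rank teachers by final scores alone.
by_final = max(classes, key=lambda t: avg_final(classes[t]))
# A growth-based rule: rank teachers by how much students improved.
by_growth = max(classes, key=lambda t: avg_growth(classes[t]))

print(f"Best by final score:  {by_final}")   # Teacher A
print(f"Best by improvement:  {by_growth}")  # Teacher B
```

Run it, and the teacher from the well-funded school wins on final scores while the teacher from the under-resourced school wins on improvement, which is exactly the distinction the scoring system ignored.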

This example is only one of hundreds that expose the biases of the people who create these algorithms. That is why I think it is really important to, first, recognize that these biases exist and, second, minimize them as much as possible, either by diversifying the workforce in charge of the algorithms or by making the algorithms public so that people know how they are being judged.
