
Algorithmic Decision Making: What is it and why should Blacks know about it?


Should Black People be worried?

Algorithms are part and parcel of life in the 21st century. Every time you switch on a computer, use your phone, have a criminal record check carried out or even stop at traffic lights, you are being exposed to algorithms. While you might have heard of them, you’d be forgiven for not knowing exactly what one is.

As we become more reliant on digital technologies in our lives, it is algorithms that we depend on to make the decisions that keep things running smoothly.

Put simply, an algorithm is a series of well-defined instructions used to carry out a specific task or solve a specific problem. For example, every time you run a Google search, an algorithm ranks the available pages and displays the ones most relevant to your query.
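To make that concrete, here is a small, purely illustrative Python sketch (not Google's actual system, which is vastly more sophisticated) of what a "series of well-defined instructions" looks like: it scores a handful of pages by how often they mention the words in your search and returns them in order.

```python
# A toy "search ranking" algorithm: a fixed series of instructions that
# scores each page by how many of the query's words it contains,
# then returns the pages from most to least relevant.
# (Purely illustrative -- real search engines use far more signals than this.)

def rank_pages(query, pages):
    query_words = query.lower().split()

    def score(page_text):
        words = page_text.lower().split()
        # Count how many times any query word appears in the page.
        return sum(words.count(w) for w in query_words)

    # Sort pages by score, highest first.
    return sorted(pages, key=score, reverse=True)


pages = [
    "adopting a rescue dog a beginners guide",
    "ten easy pasta recipes for busy weeknights",
    "how to train your dog to walk on a lead",
]

print(rank_pages("train a dog", pages))
# The two dog-related pages come back ahead of the pasta recipes.
```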

But a recent landmark study commissioned by the UK government has confirmed what many people already knew about machine learning algorithms: algorithmic decision-making software can learn, reflect and accentuate existing societal biases. This bias is immediately visible in something as everyday as Google. One of our previous articles showed that, in an image search for the term 'girl', 46 of the 50 images returned were white, 3 were Asian and only 1 was Black.

Google are far from being the only culprit. What’s more concerning is that algorithms are being increasingly relied upon in recruitment to help automate the process. Why pay someone to do something when you can rely on a machine to do it, right? Whilst automation is a boon for many businesses, unwittingly perpetuating systemic biases via your recruitment process isn’t.

Learnt Bias

Algorithmic bias doesn’t occur because machines are inherently racist, sexist or intolerant. They become biased because these biases already exist in society. Machine learning algorithms are generally trained on existing banks of data to work out what counts as a good or bad outcome in a given situation.

One example comes from a British university in the 1980s. Inundated with a record number of applicants to its medical school, it designed a computer programme to help sort strong candidates from weak ones. What was found is that women and applicants from immigrant backgrounds were treated unfairly. This is because the historical data fed to the algorithm in the first place was biased against these groups, so they were marked as unsuitable candidates.
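To see how this "learning from history" goes wrong, here is a small, entirely hypothetical Python sketch (not the actual 1980s programme, just an illustration of the mechanism): a screening rule that scores new applicants by how often similar applicants were accepted in the past. If the past decisions were biased, the rule it learns is biased too.

```python
# A hypothetical screening rule "trained" on biased historical decisions.
# It scores each new applicant by the past acceptance rate of applicants
# with the same profile -- so any bias in the old decisions is carried
# straight into the new ones.

from collections import defaultdict

# Made-up historical records: (profile, was_accepted)
history = [
    (("male", "non-immigrant"), True),
    (("male", "non-immigrant"), True),
    (("male", "non-immigrant"), False),
    (("female", "non-immigrant"), True),
    (("female", "non-immigrant"), False),
    (("female", "non-immigrant"), False),
    (("male", "immigrant"), False),
    (("male", "immigrant"), False),
    (("female", "immigrant"), False),
]

# "Training": compute the historical acceptance rate for each profile.
counts = defaultdict(lambda: [0, 0])  # profile -> [accepted, total]
for profile, accepted in history:
    counts[profile][0] += int(accepted)
    counts[profile][1] += 1

acceptance_rate = {p: a / t for p, (a, t) in counts.items()}

# "Decision rule": shortlist applicants whose profile was historically
# accepted at least half the time.
def shortlist(profile):
    return acceptance_rate.get(profile, 0.0) >= 0.5

print(shortlist(("male", "non-immigrant")))   # True
print(shortlist(("female", "immigrant")))     # False -- the old bias, automated
```

Notice that merit never appears anywhere in this rule; the only thing the machine has "learned" is who used to get through.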

Until we learn to eliminate bias in people, machines and algorithms are unlikely to stand a chance, and the fact is that humans are inherently biased. Stereotyping is a well-documented psychological shortcut that has existed since the dawn of time, helping early humans navigate the dangers around them. The problem in the modern world is that racist and sexist tropes have become so ingrained in contemporary stereotypes that many people accept them as truth.

Bias in healthcare

Being discriminated against when searching for a job is one thing; having your life hinge on the decision of a biased machine is another matter entirely. An alarming study published in Science in 2019 demonstrated that an algorithm widely used to determine referrals to healthcare programmes was significantly biased against Black patients.

The study showed that Black patients were significantly less likely to be referred to healthcare programmes than white patients, despite often being significantly more unwell than their white counterparts. The authors estimated that, had the algorithm been free of this bias, the proportion of Black patients referred for extra care would have risen from 17.7% to 46.5%.

Princeton University professor Ruha Benjamin believes this is partly down to the demographic make-up of the development ecosystem. She suggests that a lack of diversity in the field of algorithm design means that developers aren’t equipped to anticipate these problems.