Artificial intelligence: Robot racism

There is something rather democratic about Bender, the robot from the TV show Futurama, and his desire to kill all humans. He doesn’t care about race, nationality, gender or dietary preference; he hates all humans just the same: they are whiny meat bags doomed to a self-inflicted apocalypse.

Bender’s equal-opportunity misanthropy seems far removed from today’s artificial intelligence (AI): news stories keep popping up about AI failing to treat all humans equally, often along racial lines. These machines are sexist and racist, and they make so many mistakes that we seem to have created autonomous technology that looks more like us than we might like. But how bad is it really?

The famous COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) programme, used to score the risk that a convict might re-offend, was revealed by ProPublica in 2016 to be biased against black offenders. The ProPublica article used a few examples to illustrate how harshly the system scored black offenders while showing exceptional leniency towards white offenders.

COMPAS surprises: a white offender who had spent five years in a state prison scored three out of ten, while a black man whose only offence was resisting arrest scored a ten.

A recent story in the New York Times, aptly titled “Facial Recognition Is Accurate, if You’re a White Guy”, reported that facial-recognition programmes could identify the gender of “white guys” correctly 99% of the time, while error rates for black women ran as high as 35%.

Just a week before writing this, the Financial Times reported that the UK Home Office had erroneously told thousands of foreign students to leave the UK after a voice-recognition system concluded that they had cheated in an English proficiency test, with no opportunity to appeal to a human.

These stories do not so much say that these tools are racially biased as that they are not legal agents and we cannot hold them accountable. That’s a problem, since these systems are being handed greater responsibilities every day. In the UK visa example, it is easy to say that the system made a mistake and that technicians will look into it; had a few humans made the same decision, the Twitter horde would have happily ended someone’s career.

Google’s photo software has been known to label black people as apes, and Snapchat got into trouble for a “hot” filter that gave users distinctly European features to make them look “hotter”. In such cases, including many that involve Facebook, the cop-out has been that no one really knows what goes on inside the mysterious box of algorithms that makes such mistakes.

It’s not that humans have deliberately built bias into the machines; rather, the machines learn from humans and from the data we feed them. A system trained on real-world data that paints black people in a bad light will deem black offenders a higher risk than white offenders.
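To make that concrete, here is a minimal sketch in Python using synthetic data and scikit-learn. Everything in it is hypothetical: it is not COMPAS or any real system, just an illustration of how a model trained on skewed historical labels reproduces the skew even when two groups behave identically.

```python
# Minimal sketch, synthetic data only: two groups with identical true
# behaviour, but historical records that log one group more harshly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B (hypothetical)
priors = rng.poisson(1.0, n)           # prior offences, same distribution for both
true_reoffend = rng.random(n) < 0.2    # identical 20% true re-offending rate

# Biased record-keeping: members of group B are wrongly logged as re-offenders
# an extra 15% of the time, e.g. because they are policed more heavily.
recorded = true_reoffend | ((group == 1) & (rng.random(n) < 0.15))

model = LogisticRegression().fit(np.column_stack([group, priors]), recorded)

# Identical circumstances (one prior offence), different group membership:
risk = model.predict_proba(np.array([[0, 1], [1, 1]]))[:, 1]
print(f"predicted risk, group A: {risk[0]:.2f}; group B: {risk[1]:.2f}")
```

Note that nobody wrote “treat group B more harshly” anywhere in the code; the unfairness arrives entirely through the recorded labels the model learns from.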

Our history and prejudices affect how we collect data and who gets to curate it, and they determine which select few have the means to create bots and unleash them on the world.

As we relinquish more of our judgments of society to these tools, we should never forget that we have created them in our own image, and should therefore not see them as innocent gods with impeccable reason. They hold our collective values, except that those values come mainly from a privileged few who are ignorant of lived experiences outside their own circles and circumstances.

Imagine an algorithm built by someone who not only benefits from white male privilege but is completely unaware of it. Such a tool would see the world as a perfect meritocracy, and by design the machine would not be flawed in any way, from a certain point of view. A certain point of view? Yes, Luke: the flaw is in the human who made it; the machine performs its function perfectly. Such a machine could be made to help us pass an important judgment we are too afraid to make ourselves, and with that, as things stand, we bypass all accountability.

My biggest worry with AI isn’t that it will become self-aware and kill all humans, like Bender, but that it will kill only some of us.
