On International Women’s Day (March 8th), NEoN hosted a screening of Coded Bias, a documentary by Shalini Kantayya. Coded Bias explores the history, dangers, and biases of artificial intelligence (AI). It drives home the point that algorithms end up racist or sexist because of the data that programmers feed them. Cathy O’Neil, author of Weapons of Math Destruction, defines an algorithm as “using historical information to make a prediction about the future.” Bias against women, people of colour, and other minorities is in fact “coded,” since the data we provide as training sets for algorithms is a reflection of our history. Coded Bias makes these concepts accessible to everyone, whether or not you have a background in science.
Researcher Joy Buolamwini took a class in science fabrication at MIT, where she was encouraged to build something straight out of science fiction. When making a mirror that used computer vision, Buolamwini found that it refused to recognise her face; when she put on a white mask, it recognised her. This prompted her to look further into the biases that can creep into technology. AI was essentially established in one room at Dartmouth College in 1956 by a group of white men, and those men got to define the direction of the entire field. Everyone has unconscious biases, and people embed their own biases into technology.
The group at Dartmouth postulated that intelligence could be demonstrated by the ability to play games, specifically chess. In reality, intelligence has many more facets than this, but our ideas about technology came from a small and homogenous group of people; they have been carried forward into the present day and keep being replicated. According to O’Neil, there is increasingly blind faith in what is termed “Big Data.” Algorithms are touted as objective truth because they are mathematical, and this allows companies such as Google and Amazon to use them as a shield for corrupt practices.
The underlying mathematical structure of algorithms isn’t racist in itself, but data embeds the past. That is to say, data used to train algorithms creates outcomes that replicate oppressive structures of the past and the present. Amazon reportedly abandoned an AI hiring tool when it saw the tool was biased against women: the algorithm rejected anyone with a female name, and anyone who had belonged to a women-only society at college. It was simply replicating what we see in the world today, where there are very few women in powerful tech jobs and fewer than 14% of AI researchers are women. The decisions the algorithm makes are mathematical, but they are in no way ethical.
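To make the idea that “data embeds the past” concrete, here is a minimal sketch, using entirely hypothetical data and not Amazon’s actual tool: a toy model that “learns” nothing more than historical hiring rates per group will faithfully carry whatever discrimination appears in that history into its predictions about the future.

```python
# Toy illustration (hypothetical data, not any real hiring system):
# a "model" that only learns historical hire rates per group.
from collections import defaultdict

# Hypothetical historical records: (applicant_group, was_hired)
history = [
    ("men's chess club", True), ("men's chess club", True),
    ("men's chess club", True), ("men's chess club", False),
    ("women's chess club", False), ("women's chess club", False),
    ("women's chess club", False), ("women's chess club", True),
]

# "Training": count how often each group was hired in the past.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    totals[group] += 1
    hires[group] += hired

def predict_hire(group: str) -> bool:
    """Predict 'hire' if the group's historical hire rate exceeds 50%."""
    return hires[group] / totals[group] > 0.5

# Two equally qualified applicants get different predictions purely
# because of who was hired before them.
print(predict_hire("men's chess club"))    # True
print(predict_hire("women's chess club"))  # False
```

Nothing in the arithmetic here is prejudiced; the prejudice arrives with the training data, which is exactly the point the documentary makes.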
Many algorithms are a “black box”: data is fed in and a result is spat out, but no-one can see the process in between, not even the programmers. This raises the question of how we get justice in a system where we don’t know how the algorithm is working. There are algorithms determining whether people get into college, what rates they get on their mortgages, and whether people get fired. In the United States, the 14th Amendment grants citizens the right to due process; it is unlawful to arrive at a decision without telling the individual in question how and why that decision was made. This was the basis on which Daniel Santos, an award-winning middle school teacher, sued the Houston Independent School District after the Educational Value Added Assessment System (EVAAS) algorithm rated him ineffective.
In 2010, Facebook carried out a social experiment on 61 million people. A message saying “It’s election day” was shown once in each of these 61 million people’s feeds. One version had small thumbnails of the person’s Facebook friends below it, showing who had voted; the other version had no thumbnails. Facebook matched people’s names to voter rolls and concluded that the experiment had moved an extra 300,000 people to the polls. The documentary gives context: the 2016 US presidential election was decided by about 100,000 votes. If Facebook hadn’t disclosed the 2010 experiment, we would have no idea it had happened. Facebook could likewise show a higher proportion of ads for a specific candidate, and we still wouldn’t know, because everyone’s feed is different, screen by screen.
Buolamwini asked herself, “What does it mean if there’s no-one to advocate for those who aren’t aware of what the technology is doing?” She formed the Algorithmic Justice League, which spreads the word about the social implications of AI. Everyone has a stake and is impacted, so everyone should have a say in how their data is used. Facial recognition is often inaccurate, and while that inaccuracy is damaging and can ruin lives, 100% accuracy would not necessarily make it better: even if everyone is perfectly classified, it simply enables surveillance. In China, one element of the Social Credit Score system is that whatever you say about the Chinese Communist Party (CCP) affects your score. Your score affects things as big as your ability to rent a flat and as small as your ability to buy a can of Pepsi from a vending machine. It has implications beyond yourself, too: when your score drops, your friends’ and family’s scores drop with it. Cathy O’Neil calls this system “algorithmic obedience training.”
In Cape Town, South Africa, Buolamwini spoke about “supremely white data and the coded gaze.” Her research into the biases of the facial recognition technologies of Amazon, IBM, and other major companies had pushed them to improve. IBM’s facial recognition now scored 100% accuracy for white women; its lowest score, 96.5%, was for darker-skinned women. “What changed was making it a priority, and acknowledging what our differences are,” Buolamwini said.
Coded Bias lays plain so many of the pitfalls of AI and makes the danger impossible to ignore. This is a political, feminist issue. Joy Buolamwini went to Congress to testify about the use of facial analysis technology, showing that AI can easily have life-shattering effects. The people behind AI, and the data they rely on, must be held to a rigorous ethical code; without one, we cannot begin to hold anyone accountable. Algorithms reflect and perpetuate the biases that have been, and continue to be, global oppressive structures.
Find out more about the Algorithmic Justice League here.
Words: Beatrix Livesey-Stephens
Image: Screenshot from Coded Bias (Shalini Kantayya)