AI & THE LEGAL SYSTEM



Artificial Intelligence has for years cultivated an image of clean machinery and future-oriented progress. However, that is not the case for many of the programmes being produced at the moment. A lot of these AI systems perpetuate outdated and harmful ideologies, clearly showing tendencies towards racist and sexist views.

According to Data USA, 73.1% of the people awarded a degree in this field in the US in 2017 were men 1, and 63.6% of graduates were white 2. That same year, only 1.42% of those awarded a degree were African American 3. This inequality in the workforce is bound to produce programmes that reflect the same bias towards a certain race and gender. This is not, however, a critique of the AI field alone: similar discrepancies in these percentages can be found in many other courses and areas. So, when advocating for change in these ratios, we are advocating for a reform of the educational system as a whole.

To put this into a more practical example: back in 2014, a woman by the name of Brisha Borden and her friend saw a bike on the side of the road and took it; the owner of the bike, however, caught them in the act. Borden was charged with burglary and petty theft. Similarly, Vernon Prater was charged with a comparable crime when he was caught shoplifting from a Home Depot store. Prater had previously been charged with more serious crimes, including armed robbery, and had served five years in prison, whereas Borden's record consisted only of misdemeanours committed as a juvenile. When an AI programme compared both cases to predict the likelihood of either of them committing a crime again, it showed a clear racial bias: Borden, a black woman, was rated high risk, while Prater, a white man, was rated low risk. It is also important to note that two years later Vernon Prater committed a new crime and was arrested for it, whereas Brisha Borden has not. This is only one case, but there are multiple other cases judged by AI programmes that show the same patterns and misjudgments 4.

I wanted to write about this topic mainly because I have always had a big interest in the way AI processes information and how useful it can be when used right. Despite all the fascination with the concept and the potential of artificial intelligence, I think it is important to talk about where this type of technology falls short. I also chose to combine the topic of AI with gender and race politics because some of the best essays I have written are those where I was able to express my political views. I am a big advocate for equality, and I hope I can translate those ideas through my practical work but also through my dissertation.




1 Integrated Postsecondary Education Data System (IPEDS) (2017) Gender Imbalance for Common Institutions. Available at: https://datausa.io/profile/cip/artificial-intelligence#demographics (Accessed: 30 March 2020).
2 Integrated Postsecondary Education Data System (IPEDS) (2017) Race & Ethnicity by Gender. Available at: https://datausa.io/profile/cip/artificial-intelligence#demographics (Accessed: 30 March 2020).
3 Integrated Postsecondary Education Data System (IPEDS) (2017) Race & Ethnicity by Degrees Awarded. Available at: https://datausa.io/profile/cip/artificial-intelligence#demographics (Accessed: 30 March 2020).
4 Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016) Machine Bias. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (Accessed: 2 April 2020).
