On Racism and Sexism in Branding, User Interface, and Tech
Race, ethnicity, sexuality, and gender are complex topics. For years, the world has been trying to move past stereotypes and transcend prejudice. Yet as technology and design advance, they uncover ever more deep-seated, unconscious biases dwelling in the human mind.

Today, we'd like to explore a less technical topic. In this article, we'll look at some unfortunate design, copy, and tech decisions made by massive international brands. It's safe to say that the repercussions of these decisions could have been mitigated if only the products had been developed with diversity in mind. Let's dive right in.

A glimpse of the problem

In 2017, we saw what seems like one of the most unfortunate combinations of tech and racism: a machine learning (ML) risk assessment algorithm used by US courts was shown to be heavily biased against black prisoners.

"The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate as white people (45% to 24%)." (Source)

This case showed us that we may have to deal with a new, more technological strain of systemic discrimination. More importantly, it shed some light on the complexity of the problem we all face.

White buyer personae

While many tech companies are trying to create more diverse workplaces, racism has seeped into the tech side of things. The Compas case, unfortunately, was not a turning point for tech companies. As awareness grew, we started seeing more and more unconscious bias permeating products. There are still fitness trackers, such as Fitbit and the Apple Watch, that don't work properly on dark or tattooed skin. This issue probably stems from their R&D departments' unwillingness to test on a diverse range of users or to invest in finding solutions.
From this, we can assume that black customers are an edge case for many tech companies, and that their buyer personae are predominantly white.

A study published by Stanford University indicates that conversational user interfaces are somewhat racially divided. The researchers tested five voice assistants from five large companies and found that the interfaces recognize words spoken by white users more reliably than those spoken by black users. The devices misidentified 19% of words pronounced by white people, and 2% of their audio snippets were deemed unreadable; for black people, the same figures were 35% and 20%.

Masters and slaves

If you've ever worked with computer parts, CRMs, or code, you've almost certainly come across the terms "master" and "slave." While many companies and corporations have long abandoned these archaic and unnecessarily cruel terms, many still haven't. The idea of renaming them gained new impetus with the BLM protests in the US. Many businesses have replaced the pair with "primary" and "replica" or "main" and "secondary." There is no shortage of ways to name this relationship without surfacing some of the most shameful of human behaviors. GitHub, home to over 40 million users worldwide, only very recently decided to abandon the use of "master," "slave," and "blacklist." While these are tiny steps toward improvement, they are a vital part of our journey toward equality.

The default

Icons and avatars are very often subject to inherent bias.
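As a concrete illustration of the "master"-to-"main" renaming discussed above, here is a minimal sketch for a local Git repository. The commands are standard Git; the throwaway repo and the demo author identity exist only for illustration, and the remote-side steps are commented out because they depend on your hosting provider.

```shell
#!/bin/sh
# Sketch: renaming a Git repository's default branch from "master" to "main".
# Uses a throwaway repo for illustration; requires Git 2.28+ for `init -b`.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

git branch -m master main          # rename the branch in place
git branch --show-current          # prints "main"

# For a repository with a remote (remote name "origin" assumed), you would
# also push the renamed branch and delete the old one:
#   git push -u origin main
#   git push origin --delete master
```

Hosting platforms such as GitHub additionally let you change the repository's default branch in its settings, so clones and pull requests target the new name.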
Like to keep reading?
This article first appeared on adamfard.com. If you'd like to keep reading, follow the white rabbit.