This article was originally published on The Conversation.
At almost every point in our day, we interact with digital technologies that collect our data. From the moment our smartphones wake us up, to the watches that track our morning runs, every time we use public transport, every coffee we purchase with a bank card, every song skipped or liked, until we return to bed and let sleep apps monitor our dreaming habits – all of these technologies are collecting data.
This data is used by tech companies to develop their products and provide more services. While film and music recommendations might be useful, the same systems also decide where to build infrastructure, power facial recognition tools used by the police, determine whether you get a job interview, and even choose who should die in a crash with an autonomous vehicle.
Despite huge databases of personal information, tech companies rarely have enough to make properly informed decisions, and this leads to products and technologies that can enhance social biases and inequality, rather than address them.
Microsoft apologized after its chatbot started spewing hate speech. “Racist” soap dispensers failed to work for people of color. Algorithm errors caused Flickr to mislabel concentration camps as “jungle gyms.” Resume-sorting tools rejected applications from women, and there are deep concerns over police use of facial recognition tools.
These issues aren’t going unnoticed. A recent report found that 28% of British tech workers were worried that the tech they worked on had negative consequences for society.
Most tech companies, big and small, claim they’re doing the right things to improve their data practices. Yet it’s often the very fixes they propose that create the biggest problems, because these solutions are born from the same ideas, tools and technologies that got us into this mess. The master’s tools, as the Black lesbian activist and writer Audre Lorde once said, will never dismantle the master’s house. Instead of collecting more data about users, or plugging gaps with more education about digital technology, we need a radically different approach.
The reasons biases against women or people of color appear in technology are complex. They’re often attributed to incomplete data sets and to the fact that the technology is often built by people who aren’t from diverse backgrounds. Increasing the diversity of people working in the tech industry is important. Many companies are also collecting more data to make it more representative of the people who use digital technology, in the vain hope of eliminating racist soap dispensers or recruitment bots that exclude women.
The problem is that these are social, not digital, problems. Attempting to solve them through more data and better algorithms only serves to hide the underlying causes of inequality. Collecting more data doesn’t actually make people better represented; instead, it increases how much they are surveilled by poorly regulated tech companies. The companies become instruments of classification, categorizing people into different groups by gender, ethnicity and economic class, until their database looks balanced and complete.
These processes have a limiting effect on personal freedom by eroding privacy and forcing people to self-censor – hiding details of their lives that, for example, potential employers may find and disapprove of. Increasing data collection has disproportionately negative effects on the very groups that the process is supposed to help. Additional data collection leads to the over-monitoring of poorer communities by crime prediction software, or other issues such as minority neighborhoods paying more for car insurance than white neighborhoods with the same risk levels.
People are often lectured about how they should be careful with their personal data online. They’re also encouraged to learn how data is collected and used by the technologies that now rule their lives. While there are some merits to helping people better understand digital technologies, this approaches the problem from the wrong direction. As the media scholar Siva Vaidhyanathan has noted, it often does little more than place the burden of making sense of manipulative systems squarely on the user, who is usually still left powerless to do anything about them.
Access to education isn’t universal either. Inequalities in education and in access to digital technologies mean that this knowledge is often out of reach for precisely the communities most negatively affected by social biases and by the digital efforts to address them.
The tech industry, the media and governments have become obsessed with building ever bigger data sets to iron out social biases. But digital technology alone can never solve social issues. Collecting more data and writing “better” algorithms may seem helpful, but this only creates the illusion of progress.
Turning people’s experiences into data hides the causes of social bias – institutional racism, sexism and classism. Digital and data-driven “solutions” distract us from the real issues in society, and away from examining real solutions.
We need to slow down, stop innovating, and examine social biases not within the technology itself, but in society. Should we even build any of these technologies, or collect any of this data at all?
Better representation in the tech industry is vital, but the industry’s digital solutions will always fall short. Sociology, ethics, and philosophy hold the answers to social inequality in the 21st century.