
Refuting the myth of neutral technology.

Text: Bianca Prietl

Artificial intelligence is meant to produce objective truths free of human error. Yet cases of algorithmic discrimination give the lie to this promise time and again. How can we imagine technoscientific futures without holding onto the hope of technological neutrality?

Prof. Dr. Bianca Prietl. (Illustration: Studio Nippoldt)

When we talk about artificial intelligence (AI) today, we are usually referring to a data-centric approach based on “machine learning”: Algorithms process massive datasets (often referred to as big data) to identify patterns in the data and derive rules about the phenomenon in question. These rules are then used to produce prognoses about probable future developments. The current boom in AI is only possible thanks to digital technologies that generate enormous quantities of data. It is also linked to a widespread belief in data solutionism, which holds that data contain an information potential that, once “mined”, makes it possible to solve many problems – in particular, to overcome the limits of human performance and human bias.
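To make this data-centric loop concrete, here is a minimal sketch in Python. It uses numpy and scikit-learn and entirely hypothetical data, none of which appear in the article: a model is fitted to historical records, the learned coefficients form the derived “rule”, and that rule is then used to score a new, unseen case.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "historical" records: two numeric features per case and a
# binary label recording how each case was decided in the past.
X_past = rng.normal(size=(1000, 2))
y_past = (X_past[:, 0] + 0.5 * X_past[:, 1] > 0).astype(int)

# "Identify patterns": fit a simple model to the historical data.
model = LogisticRegression().fit(X_past, y_past)
print("derived rule (coefficients):", model.coef_, model.intercept_)

# "Prognosis": score a new, unseen case purely from the learned pattern.
x_new = np.array([[0.3, -1.2]])
print("predicted probability:", model.predict_proba(x_new))
```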

It is against this backdrop that we now see rising demand for the use of AI, for instance, in making fair hiring decisions, speeding up legal proceedings while rendering them more equitable, or optimizing public assistance programs. “Learning algorithms” are thus employed to assess social situations, and their assessments directly affect people’s ability to participate in society and their chances in life.

Algorithmic discrimination

This practice is highly controversial, not least because cases of AI discrimination come to light on a regular basis. For instance, Amazon was forced to scrap an AI tool that had been developed for HR recruiting after it was revealed that the tool favored the applications of men over those of women.

In Austria, a technology that the country’s employment agency had tested to evaluate job seekers’ chances of reentering the labor market drew negative press because it systematically lowered the scores of women with children, immigrants and older candidates. A technology used in the corrections systems of many US states came under fire when it was found that Black people and other People of Color who had been convicted of a crime were assigned a higher risk of recidivism than their ‘white’ counterparts.

Little detailed knowledge exists about Switzerland’s own use of AI systems in policing, corrections, public administration and medicine. According to the “Automating Society Report 2020”, for instance, an automated risk analysis tool is in use in all German-speaking Swiss cantons to evaluate the probability that people who have committed a crime in the past will reoffend. However, due to the lack of transparency of this and most other such tools, which are usually developed commercially, it is difficult to assess their social consequences.

Whenever cases of algorithmic discrimination crop up, they not only cast doubt on the promise of data solutionism; they also demonstrate again and again that the people facing discrimination by AI technologies are the same ones already experiencing marginalization, exclusion and inequality in our society.

That is no wonder, since AI uses datasets – which, by definition, always stem from the past – to derive patterns and generate prognoses about the future. It thus preserves the patterns it finds in the data, including the unjust ones.

Hence, the recruiting tool developed by Amazon “learned” from the previous ten years of applicant data that a disproportionate number of men had been hired in the past. The sorting system designed by the Austrian employment agency “recognized” that women with children, immigrants and older applicants had a harder time finding work. The symbolic authority of data and algorithms now legitimizes these past human decisions, rendering them more difficult to contest. In both cases, the use of technology reinforces established structures of social inequality and undermines the very transformation toward a more equitable society that these technologies promised to bring about.
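How a learning system carries such past patterns forward can be shown with a toy example. The synthetic data below are purely hypothetical and stand in for neither the Amazon nor the Austrian system; numpy and scikit-learn are again assumed. Two groups are equally skilled, but the historical “hired” labels favor one group, and the fitted model reproduces that preference when scoring new, equally skilled candidates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# Hypothetical applicants: a binary group attribute and a "skill" score
# that is distributed identically in both groups.
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Historical hiring decisions: equally skilled, but group 1 was hired
# less often – the injustice is encoded in the labels themselves.
hired = (skill + rng.normal(scale=0.5, size=n) - 0.8 * group > 0).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# At identical skill, the learned rule still scores group 1 lower:
# the past pattern is preserved and projected into the future.
equal_skill = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(equal_skill)[:, 1])
```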

Not a mirror image of reality

In the current “gold rush” around big data and data-centric AI, it is also easily forgotten that simply improving flawed datasets, as data solutionism would suggest, is not enough. Data do not simply represent a preexisting reality. Data are themselves the product of social processes and practices involving decisions about what to include or exclude, about what is relevant and what is irrelevant.

Amazon’s recruiting AI learned from a dataset that was the product of many years of decision-making by HR staff who had favored applications from men. The fact that the application data differentiate only between women and men is at the same time indicative of a heteronormative gender order that recognizes just two genders. Non-binary and trans people are, as a result, not reflected in the data.

Data do not provide us with unmediated access to social reality. On the contrary, they are inseparable from our cultural norms and consequently cannot offer neutral insights into that reality.

What we care about

Yet this insight also gives us reason to hope. If we let go of our belief in technological neutrality and objective AI, we can begin the search for technoscientific futures that are no longer based on motives of rationalization and profit but rather are designed around the principle of care. As an alternative guiding principle, care allows us to ask ourselves what matters to us as a society, what we “care” about or what we wish to “care for” and which technologies we want to develop and implement to pursue those aims.

The answers to these questions do not have to be – and moreover, cannot be – neutral. They would have to be found in a broadly based, participatory societal discussion that encompasses a diverse array of lived experiences. In this way, the voices of those very people who are so seldom heard in our society could be the ones to precipitate new perspectives on care-ful development and the care-ful use of technology.
