Artificially intelligent, inherently racist
A new study of online language-prediction models reveals that they discriminate against young, non-white men
Talk of artificial intelligence tends to fall into two camps: that of an interconnectedness that streamlines every aspect of human life – or a dystopian HAL 9000-type technological singularity in which “I’m sorry, Dave. I’m afraid I can’t do that,” is the last thing we hear before the machines take over and turn us into fleshy slaves.
Right now, we’re in the grey zone of prototypes, so some of our forays into AI are less than perfect. Take language prediction models. They’re used in everything from Google searches to legal cases, but a new study by researchers at China’s National University of Defense Technology and the University of Copenhagen shows they have a systemic racial bias.
Deeply ingrained tech
The language models under the microscope were ELECTRA, GPT-2, BERT, DistilBERT, ALBERT and RoBERTa. If you’re wondering why so many are called ‘Bert’, they’re all offshoots of the original ‘Bidirectional Encoder Representations from Transformers’ – a type of machine-learning technique developed by Google in 2018.
To give an idea of how prevalent these models are: at the end of 2019, BERT had been adopted by Google’s search engine in 70 languages. By 2020 the model was used in almost every English-language search query. This is the technology that fills in the gap in the search bar when you type “Why am I so ___?”
The study in detail
The study measured the models' performance differences across demographics in English-language 'cloze tests' (fill-in-the-gap tests). Because BERT-style models are trained on a masked-word task that closely resembles the cloze test, the researchers could evaluate the models directly on it.
Some 3,085 sentences were completed by 307 human subjects, who were asked to fill in the most likely word based on their experience. The subjects were sorted into 16 demographic groups according to age, gender, education and race. The 'fairness' of the language models' responses was then measured by whether the risk of error was roughly equal across any two demographic groups.
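The fairness criterion described above can be sketched in a few lines of code: compute each group's cloze error rate, then look at the largest gap between any two groups. This is a minimal illustration, not the study's actual code; the group labels and toy data below are invented for the example.

```python
# Sketch of an "equal risk" fairness check for cloze completions.
# A model is fair in this sense if its error rate is roughly the same
# for every demographic group of speakers.

def error_rate(predictions, answers):
    """Fraction of cloze gaps the model filled differently from the human answers."""
    wrong = sum(p != a for p, a in zip(predictions, answers))
    return wrong / len(answers)

def max_risk_gap(groups):
    """Largest difference in error rate between any two demographic groups.

    `groups` maps a group label to (model_predictions, human_answers).
    A fair model keeps this gap close to zero.
    """
    rates = {g: error_rate(p, a) for g, (p, a) in groups.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: the model's word choices match one group's answers exactly,
# but disagree with another group's on two of three sentences.
groups = {
    "white_male_under_40": (["happy", "tired", "late"], ["happy", "tired", "late"]),
    "nonwhite_male_under_40": (["happy", "tired", "late"], ["glad", "tired", "behind"]),
}

gap, rates = max_risk_gap(groups)
print(rates)  # per-group error rates
print(gap)    # large gap -> the model systematically favours one group
```

In this toy run, the model's completions agree perfectly with one group and disagree on two of three gaps for the other, so the gap is about 0.67; a bias-free model would keep it near zero regardless of who wrote the sentences.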
The results showed a systemic bias against young non-white male speakers, and older white speakers were also poorly aligned. Not only do the models learn stereotypical associations, they also learn to speak more like some groups than others – in this case, white men under the age of 40.
Why is it important?
We already know that BERT is an integral part of our online navigation system, so users who do not align with the models receive unequal results and opportunities.
When GPT-2 was announced in February 2019 by San Francisco technology company OpenAI, James Vincent of The Verge described its writing as “one of the most exciting examples yet” of language generation programs.
“Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next,” he said.
The Guardian called it “plausible newspaper prose”, while journalists at Vox mused that GPT-2 may be the technology that kicks them out of their jobs. A study by the University of Amsterdam even found that some participants were unable to distinguish poems generated by GPT-2 from those written by humans.
The upshot should be better training, argue the researchers at the University of Copenhagen, so the models more accurately represent the diversity of users.