Artificial intelligence tools used to make hiring decisions could have built-in biases that favour men and disadvantage women.
Researchers from the University of Melbourne looked into the way supposedly objective tools can be influenced by pre-existing bias, confirming that implicit human bias can be transferred into and amplified by machine learning systems.
The experiment began with a hiring panel who made quantifiable judgements about a series of CVs for three different jobs – data analyst, finance officer, and recruitment officer – which the researchers used to inform machine learning algorithms that would rank CVs.
One of the algorithms, created using a linear regression technique, did a decent job of matching the results of its human counterparts.
Except those results were already biased.
The researchers found that their human panel tended to favour male candidates both for the male-dominated data analyst role and for the gender-balanced finance officer role.
When looking for sources of bias in the machine, the researchers found little evidence that it stemmed from methods of machine learning – such as keyword matching language models and classifier/predictor models – but strong evidence that human bias from the modelled dataset influenced the final output.
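The mechanism is easy to reproduce in miniature. Below is a toy sketch (not the study's actual code or data) of how a linear regression trained on biased human panel scores inherits the bias: the candidate features, score formula, and bias size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical CV features: years of experience, a skills score,
# and a gender indicator (1 = male, 0 = female).
experience = rng.uniform(0, 10, n)
skills = rng.uniform(0, 10, n)
is_male = rng.integers(0, 2, n)

# Simulated "human panel" scores: mostly driven by merit, but with
# an assumed built-in bonus of 0.5 points for male candidates.
panel_score = (0.6 * experience + 0.4 * skills
               + 0.5 * is_male + rng.normal(0, 0.3, n))

# Fit an ordinary-least-squares model to the biased scores,
# mirroring the linear-regression ranker described in the study.
X = np.column_stack([experience, skills, is_male, np.ones(n)])
coefs, *_ = np.linalg.lstsq(X, panel_score, rcond=None)

# The model recovers the bias: the gender coefficient comes out
# clearly positive, so otherwise-identical CVs rank higher when
# the candidate is male.
print("learned gender coefficient:", round(coefs[2], 2))
```

Nothing in the fitting step is faulty; the regression is simply faithful to its training labels, which is exactly why bias in the labels survives into the rankings.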
“We know that these biases can be exaggerated by artificial intelligence,” said co-author of the study, Leah Ruppanner.
“This means online job seeking and CV ranking will continue to work against women in jobs where humans exhibit a stronger preference for male candidates.
“Computers ‘learn’ that a successful candidate has a man’s name because of human decisions and will be more and more likely to rank a CV from a man more highly as a result.
“Computers don’t ask why. The onus is on us to understand the subconscious bias behind job hiring decisions, before we start embedding these problematic preferences into artificial intelligence algorithms.”
Problems with AI in hiring have already been established, such as when Amazon decided to scrap its AI recruiting tool once it was found to be biased against women.
Because the Amazon AI recruiter was trained on a dataset that mostly included male CVs, it ended up concluding that male characteristics were preferable and automatically downgraded female candidates regardless of relevant skillsets.
The University of Melbourne research recommends that, where AI is used as a recruiting tool, it be transparent, regularly audited, and built with the intention of reducing gender bias.
The researchers also want to see more training so that human resources professionals are aware of the potential for bias in algorithmic hiring processes.
Published at Thu, 03 Dec 2020 01:07:30 +0000
Vaccines in development by Moderna, Pfizer, AstraZeneca and others, currently in Phase III clinical trials, may not cover people of Black or Asian genetic ancestry as well as they do white people, a study released Wednesday by the Massachusetts Institute of Technology indicated.
The study was published Thursday in the scholarly journal Cell Systems.
“There are obviously many other factors to consider, but our preliminary results suggest that, on average, people of Black or Asian ancestry could have a slightly increased risk of vaccine ineffectiveness,” one of the authors of the report, David K. Gifford, who is with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), said in a press release issued by MIT.
The report, entitled “Predicted Cellular Immunity Population Coverage Gaps for SARS-CoV-2 Subunit Vaccines and their Augmentation by Compact Peptide Sets,” had originally been posted on the bioRxiv pre-print server.
Enthusiasm has surged in recent weeks as Moderna, Pfizer and AstraZeneca all announced initial results from Phase III trials in human patients that showed surprisingly powerful rates of immunity, with test subjects given the vaccines being 94% to 95% less likely than people given a placebo to contract COVID-19.
Those three vaccine efforts are only the most prominent in a vast array of efforts. There are 51 vaccines in clinical trials in total, according to the World Health Organization, and another 163 in a pre-clinical stage of evaluation.
Many of the vaccines, including those from Moderna, Pfizer and AstraZeneca, share the same weakness, the MIT report contends: they do not use a diverse enough set of viral peptides to stimulate the same level of immune response in everyone, regardless of genetic makeup.
The report draws on in silico computer models. Gifford and co-authors Ge Liu and Brandon Carter, two PhD students with MIT’s CSAIL, used machine learning models to predict, based on patient data and models of proteins in the immune system, how likely vaccines would be to have a “hit,” meaning, to successfully stimulate an immune response, in different population groups based on self-reported ethnic type or genetic ancestry.
The work in the paper builds on work done this summer by the group to develop two computer models that predict vaccine coverage. One, called OptiVax, predicts a vaccine’s stimulation of immune responses. A second, called EvalVax, maps that immune response to the biochemistry of population groups by ethnic or genetic ancestral status.
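The core calculation behind such coverage predictions can be sketched in a few lines. The following is a toy illustration, not the OptiVax/EvalVax code: the allele names, frequencies, and binding predictions are all hypothetical, and it simplifies to a single HLA locus under a Hardy-Weinberg assumption.

```python
# Hypothetical allele frequencies for one HLA locus in two populations,
# and the (assumed) subset of alleles the vaccine's peptides can bind.
freqs = {
    "pop_A": {"A*01": 0.5, "A*02": 0.3, "A*03": 0.2},
    "pop_B": {"A*01": 0.1, "A*02": 0.2, "A*03": 0.7},
}
binding_alleles = {"A*01", "A*02"}  # peptides display only on these


def uncovered_fraction(allele_freqs, binding):
    """Fraction of a population predicted to get zero peptide-HLA hits."""
    # Probability that one allele copy fails to display any peptide.
    p_miss = sum(f for allele, f in allele_freqs.items()
                 if allele not in binding)
    # Diploid genotype: both copies must miss for an individual to be
    # uncovered (Hardy-Weinberg assumption, single locus).
    return p_miss ** 2


for pop, allele_freqs in freqs.items():
    print(pop, round(uncovered_fraction(allele_freqs, binding_alleles), 3))
```

Because allele frequencies differ across populations, the same peptide set leaves very different uncovered fractions in each group, which is the kind of coverage gap the MIT models flag: here the hypothetical second population is far more likely to carry only non-binding alleles.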
The vaccine mechanism is modeled by the programs. When an invading organism enters the body, such as a virus, some of the bits of virus, short strings of perhaps 8 to 25 amino acids, known as peptides, fit into a groove in the surface of a person’s cells. The cell is then able to present the bits of the virus to the body’s T cells as a signal of the invasion. The T cells begin a process of killing off such infected cells.
That’s how natural human immunity works, and vaccines mimic that process by using a bit of the virus specially engineered to artificially simulate the cell’s response.
In the report this summer, Gifford and team had warned that not using enough different parts of the virus could leave gaps in coverage. That is because humans have different “alleles,” versions of genes, in what’s called the major histocompatibility complex, the area of the human genome that encodes the cell-surface receptors that are supposed to match the viral peptides. Some alleles produce cell receptors that will bind more or less reliably to some viral peptides.
In the present study, Gifford and team built upon that study to show that the vaccines from Moderna and Pfizer and AstraZeneca have exactly the weakness that the researchers had predicted in their computer modeling.
All the vaccines use the same part of the virus, the so-called Spike protein, or S protein, and in particular a special area of it called the Receptor Binding Domain, or RBD. “All reported current efforts for COVID-19 vaccine design that are part of the United States Government’s Operation Warp Speed use variants of the spike subunit of SARS-CoV-2 to induce immune memory,” Gifford and team write.
That focus on a limited number of viral peptides becomes a common weakness in the vaccines, they argue. “We find that proposed SARS-CoV-2 subunit vaccines exhibit population coverage gaps in their ability to generate a robust number of predicted peptide-HLA hits in every individual.” HLAs is the technical term for the cell surface receptors that bind with the peptides.
The researchers relate several instances where their models show that the lack of greater peptide diversity leads to widely varying coverage:
Based on our prediction, the receptor binding domain (RBD) subunit had no MHC class II peptides displayed in 15.12% of the population (averaged across Asian, Black, and White self-reporting individuals) […] We note that the uncovered population of RBD with no predicted display of MHC class II peptides ranges from 0.811% for the population self-reporting as White, to a high of 37.287% for the population self-reporting as Asian.
ZDNet reached out to Moderna, Pfizer, and AstraZeneca for comment and will update the article with any response.
The authors have a couple of suggestions for the drug makers. One is to take into account genetic ancestry explicitly. “Clinical trials need to carefully consider ancestry in their study designs to ensure that efficacy is measured across an appropriate population,” they write.
Second, as they did in de novo vaccine synthesis over the summer, Gifford and collaborators were able to tweak vaccine designs to include a greater diversity of peptides.
Their computer model of the drug designs suggests the proportion of people who would be covered would substantially improve if a greater mix of peptides were included, the authors write:
The computed sets of augmentation peptides were predicted to substantially reduce the populations predicted to be insufficiently covered by each subunit. Post augmentation the predicted uncovered population for RBD with no peptide-MHC hits is reduced to 0.003% (MHC class I) and 4.351% (MHC class II) with MIRA positive peptides only, and 0.0% (MHC class I) and 0.309% (MHC class II) with all filtered peptides from SARS-CoV-2. (Table S1).
The authors note that their code and data are freely available on GitHub.
Published at Wed, 02 Dec 2020 20:03:45 +0000