Study: When Allocating Scarce Resources with AI, Randomization Can Improve Fairness

Organizations are increasingly using machine learning models to allocate scarce resources or opportunities. For example, such models can help companies screen resumes to select job interview candidates, or help hospitals evaluate kidney transplant patients based on their likelihood of survival.

When deploying a model, users typically try to ensure that its predictions are fair by reducing bias. This often involves techniques such as adjusting the features the model uses to make decisions or calibrating the scores it generates.

However, researchers at MIT and Northeastern University argue that these fairness methods are insufficient to address structural injustices and inherent uncertainties. In a new paper, they show how randomizing a model’s decisions in a structured way can improve fairness in certain situations.

For example, if several companies use the same machine learning model to deterministically rank candidates for a job interview—without any randomization—then one deserving individual could be the lowest-ranked candidate for each job, perhaps because of how the model weights the responses provided in the online form. Introducing randomization into model decisions could prevent one worthy person or group from ever being denied a rare resource such as a job interview.
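As a toy illustration of that failure mode, the sketch below (all names, scores, and slot counts are hypothetical, not from the paper) contrasts deterministic top-k selection, under which the lowest-ranked candidate is shut out by every employer, with a score-weighted lottery, under which that candidate retains a nonzero chance at each opening:

```python
import random

# Hypothetical setup: five employers use the same model scores to pick
# two of four candidates for interviews; "D" is ranked last everywhere.
scores = {"A": 0.9, "B": 0.8, "C": 0.7, "D": 0.6}
k = 2  # interview slots per employer

def deterministic_select(scores, k):
    """Every employer picks the same top-k candidates."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def lottery_select(scores, k, rng):
    """Sample k candidates without replacement, weighted by score."""
    pool = dict(scores)
    chosen = []
    for _ in range(k):
        names = list(pool)
        pick = rng.choices(names, weights=[pool[n] for n in names])[0]
        chosen.append(pick)
        del pool[pick]
    return chosen

rng = random.Random(0)
det = [deterministic_select(scores, k) for _ in range(5)]
lot = [lottery_select(scores, k, rng) for _ in range(5)]

# Deterministic ranking excludes "D" from every single interview list;
# the weighted lottery gives "D" a nonzero chance with each employer.
print(all("D" not in s for s in det))  # True
print(lot)
```

The lottery here weights candidates by raw score for simplicity; the paper's mechanism is more carefully calibrated, but the systemic-exclusion contrast is the same.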

Through their analysis, the researchers found that randomization can be particularly beneficial when the model’s decisions involve uncertainty or when the same group consistently receives negative decisions.

They present a framework for introducing a controlled amount of randomization into a model’s decisions by allocating resources through a weighted lottery. This method, which a person can tailor to their own situation, can improve fairness without reducing the model’s efficiency or accuracy.

“Even if you could make fair predictions, should you be deciding these social allocations of scarce resources or opportunities strictly based on scores or rankings? As things scale and we see more and more opportunities decided by these algorithms, the inherent uncertainties in these scores can be amplified. We show that fairness may require some sort of randomization,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of the paper.

Jain was joined on the paper by Kathleen Creel, assistant professor of philosophy and computer science at Northeastern University; and senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research will be presented at the International Conference on Machine Learning.

Considering claims

This work builds on a previous paper in which the researchers examined the harms that can occur when deterministic systems are used at scale. They found that using a machine learning model to deterministically allocate resources can amplify inequalities that exist in the training data, which can reinforce bias and systemic inequality.

“Randomization is a very useful concept in statistics, and to our delight it satisfies the demands for fairness coming from both a systemic and an individual perspective,” says Wilson.

In this paper, they examined the question of when randomization can improve fairness. They framed their analysis around the ideas of philosopher John Broome, who wrote about the value of using lotteries to allocate scarce resources in a way that respects all individual claims.

A person’s claim to a scarce resource, such as a kidney transplant, can stem from merit, deservingness, or need. For example, everyone has a right to life, and their claims for a kidney transplant can stem from that right, Wilson explains.

“When you recognize that people have different claims to these scarce resources, fairness will require that we respect all claims of individuals. If we always give the resource to someone with a stronger claim, is that fair?” says Jain.

This kind of deterministic allocation could cause systemic exclusion or exacerbate patterned inequality, which occurs when receiving one allocation increases the likelihood that an individual will receive future allocations. Additionally, machine learning models can make mistakes, and a deterministic approach could cause the same mistake to be repeated.

Randomization can overcome these problems, but that doesn’t mean all the decisions a model makes should be equally random.

Structured randomization

The researchers use a weighted lottery to adjust the level of randomization based on the amount of uncertainty in the model’s decision-making. A decision that is less certain should involve more randomization.

“For kidney allocation, planning is usually based around projected life span, and that is deeply uncertain. If two patients are only five years apart, it becomes much harder to measure. We want to leverage that level of uncertainty to calibrate the randomization,” says Wilson.

The researchers used statistical uncertainty quantification methods to determine how much randomization is needed in different situations. They show that calibrated randomization can improve outcomes for individuals without significantly affecting the utility or efficiency of the model.
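The paper’s exact calibration procedure isn’t reproduced here, but the core idea, that more uncertainty should mean a flatter lottery, can be sketched with a softmax whose temperature is set by an uncertainty estimate (the function name and all numbers are illustrative assumptions, not from the paper):

```python
import math

def lottery_weights(scores, uncertainty):
    # Softmax over scores with a "temperature" given by the uncertainty
    # estimate: near-zero uncertainty approaches deterministic selection
    # of the top score; high uncertainty flattens toward a uniform lottery.
    t = max(uncertainty, 1e-6)  # avoid division by zero
    exps = [math.exp(s / t) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [0.9, 0.8, 0.7, 0.6]

confident = lottery_weights(scores, uncertainty=0.02)  # top candidate dominates
uncertain = lottery_weights(scores, uncertainty=1.0)   # weights near-uniform

print([round(w, 3) for w in confident])
print([round(w, 3) for w in uncertain])
```

With a confident model the lottery collapses toward a deterministic ranking, while a highly uncertain model yields nearly equal chances, which matches the intuition that randomization matters most when the scores themselves are least reliable.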

“A balance needs to be struck between overall utility and respecting the claims of individuals receiving a scarce resource, but often the trade-off is relatively small,” says Wilson.

However, the researchers point out that there are situations where randomizing decisions would not improve fairness and could harm individuals, such as in criminal justice contexts.

But there may be other areas where randomization can improve fairness, such as college admissions, and the researchers plan to study other use cases in future work. They also want to explore how randomization can affect other factors, such as competition or pricing, and how it could be used to improve the robustness of machine learning models.

“We hope that our paper is a first step in illustrating that randomization can be beneficial. We offer randomization as a tool. How much randomization you want to introduce will be up to all the stakeholders involved. And of course, how they decide is another research question altogether,” says Wilson.
