Algorithms of Inequality

The algorithms states are rolling out to ration scarce resources during the COVID-19 pandemic may ensure that white patients and wealthy patients are more likely to receive life-saving care.

Last week, Massachusetts unveiled how it will determine which patients will have access to ventilators and intensive care beds in the event that these resources become scarce—and, by extension, which patients will not. If the epidemic curve does not flatten enough, the virus may overwhelm hospitals, and the number of patients who require intensive care and mechanical ventilation to survive an infection may outstrip available resources.

The state, where I have been caring for COVID-19 patients at a safety-net public hospital since the pandemic began, rolled out an algorithm intended to make rationing decisions “as objective as possible,” to quote the new state guidelines. The Department of Public Health assures us that this algorithm excludes “factors that have no bearing on the likelihood or magnitude of benefit, including but not limited to race…socioeconomic status, [or] perceived social worth.”

This statement sounds comforting. But beneath its cloak of objectivity, the state’s proposed algorithm relies on measurements shaped by the very factors the Department of Public Health claims to have excluded.

If it is implemented in the event of a ventilator shortage, it will consign a disproportionate number of Black, Latinx, and poor Americans to death. 

The same is true of the algorithms other states have outlined. It is already clear that the virus is not an equal-opportunity predator, and Black and Latinx Americans are disproportionately affected. How we choose to ration care may worsen that trend.

Decision-makers have long turned to algorithms to resolve thorny dilemmas in healthcare, education, and criminal justice. Besides diluting responsibility for outcomes, these tools foster the perception of impartiality based on the illusion that they are not subject to the same biases as human beings. This stems in part from their “black box” quality—the relationship between the data that goes into them, the factors shaping that data, and the resulting decisions is opaque. However, such algorithms are only as good as their inputs. If they are fed the trappings of an inherently unjust society, they will return unjust decisions, not just reflecting but also amplifying and systematizing the preexisting disparities.

At least Massachusetts has forgone the most biased type of algorithmic tool, one that explicitly bases decisions on the presence of chronic underlying illnesses.

The Charlson Comorbidity Index, a prominent example, is a key component of Colorado’s new rationing plan and has already been used to triage ventilators in Italy’s hard-hit Piedmont region. It tallies a patient score based on preexisting illnesses, including kidney disease, diabetes “with chronic complications,” and HIV/AIDS.
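To make concrete how such a tally turns chronic illness into a triage penalty, here is a minimal sketch. The weights for the three conditions the index names above follow the original 1987 Charlson index; the function, the names, and the simple set-of-conditions input are illustrative assumptions, not Colorado’s or any hospital’s actual implementation.

```python
# A minimal sketch of a Charlson-style comorbidity tally.
# Weights for these three conditions follow the original 1987
# Charlson index; the names and structure are illustrative, not
# any state's actual implementation.

CHARLSON_WEIGHTS = {
    "moderate_or_severe_kidney_disease": 2,
    "diabetes_with_chronic_complications": 2,
    "hiv_aids": 6,
    # ...the full index assigns weights to many more conditions
}

def comorbidity_score(conditions):
    """Sum the index weights of a patient's preexisting illnesses."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

# A patient whose poverty left diabetes undiagnosed until it caused
# chronic complications and kidney damage enters triage four points
# behind an otherwise identical patient with neither condition.
print(comorbidity_score({"diabetes_with_chronic_complications",
                         "moderate_or_severe_kidney_disease"}))  # 4
```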

But these are uniformly diseases of disparity. They are a direct consequence of poverty and poor access to healthcare, conditions that in the United States are marred by immense racial and ethnic disparities. Moreover, Americans of color suffer from worse health outcomes regardless of wealth, the consequence of systematic discrimination and of a history of fraught relationships with the healthcare system.

Black Americans are 60 percent more likely to be diagnosed with diabetes, 2.8 times as likely to have end-stage kidney disease, and 8.4 times as likely to be diagnosed with HIV as their white counterparts. Latinx Americans are also considerably more likely to be diagnosed with diabetes or chronic kidney disease.

The likelihood of developing diabetes, for instance, reflects a lack of access to healthy foods. For individuals without insurance and primary care, disproportionately people of color, diabetes often goes undiagnosed until later stages, when treatment is more difficult. Even once they are in the healthcare system, Black and Latinx Americans are routinely offered worse care and less monitoring for their diabetes. Insulin, an essential treatment for some diabetics, has to be refrigerated—not a reliable option for many Americans. People with poor nutrition, healthcare, and treatment are thus far more likely to develop the “chronic complications” of diabetes that worsen a patient’s score on the Comorbidity Index—and they would fare worse if hospitals used that index to withhold ventilators from some patients.

Massachusetts, unlike Colorado, does not use scores that take chronic conditions as direct inputs. But it relies on another algorithm, the Sequential Organ Failure Assessment (SOFA) score, which brings such measurements in more surreptitiously.

On the surface, the SOFA score may appear more equitable. It takes into account only the state of organ systems within the body at the moment the triage decision is being made. But even these metrics are likely to favor white patients over patients of color and wealthy patients over their poorer neighbors—it is impossible to divorce the state of one’s body in a moment of crisis from the accumulation of chronic illnesses that result from a lifetime of inequality.

For instance, the SOFA score considers creatinine level, which measures kidney function. Creatinine is affected by the presence of underlying chronic kidney disease—one of the diseases that Black patients are significantly more likely to suffer from.
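To see how chronic disease leaks into this moment-of-crisis metric, consider the renal component of the score. The creatinine cutoffs below are the standard SOFA thresholds in mg/dL; the sketch is a simplification that omits the urine-output criterion and the five other organ systems the full score grades.

```python
# Simplified sketch of the renal component of the SOFA score.
# Cutoffs are the standard SOFA creatinine thresholds (mg/dL); the
# full score also grades urine output and five other organ systems,
# all omitted here.

def sofa_renal_points(creatinine_mg_dl):
    """Map serum creatinine to the SOFA renal subscore (0-4)."""
    if creatinine_mg_dl >= 5.0:
        return 4
    if creatinine_mg_dl >= 3.5:
        return 3
    if creatinine_mg_dl >= 2.0:
        return 2
    if creatinine_mg_dl >= 1.2:
        return 1
    return 0
```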

The decision to use the SOFA score rests on the notion that the only relevant variable in allocating scarce resources should be a patient’s likelihood of surviving their illness. Even if this were a fair way of making such decisions, there is no satisfactory evidence that differences in SOFA scores between patients accurately predict who is likeliest to survive COVID-19, an infection that remains a considerable unknown.

In fact, some racially determined factors that feed into SOFA say nothing about a patient’s health at all. Multiple studies have found creatinine to be significantly higher in Black Americans than in white Americans, even when both have fully functioning kidneys.
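Using the sketch above, a hypothetical pair of patients shows how a skewed baseline translates directly into points even under an identical acute injury; the creatinine values here are invented purely to illustrate the mechanism.

```python
# Hypothetical illustration: the same acute kidney injury (a rise of
# 1.0 mg/dL) lands two patients in different SOFA tiers because
# their baseline creatinine levels differ. Values are invented.
acute_rise = 1.0
for label, baseline in [("lower baseline", 0.9), ("higher baseline", 1.5)]:
    points = sofa_renal_points(baseline + acute_rise)
    print(label, "->", points, "point(s)")
# lower baseline  -> 1 point(s)   (creatinine 1.9)
# higher baseline -> 2 point(s)   (creatinine 2.5)
```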

The Massachusetts plan and other algorithms like it, then, would not adequately predict who is likely to survive. Instead, they would determine who gets to survive, generating a devastating and racially biased self-fulfilling prophecy. 

If we use these algorithms on patients who would still have a substantial chance of surviving given equal access to ventilators and ICU beds, we are choosing to withhold critical care resources from them for reasons tied to their race and class.

The virus has become a crushing reminder of the inequality of our existing healthcare system. But the current state of crisis will not last forever. At the other end of this pandemic, we will remember the irrevocable decisions we have made. We should make them with eyes wide open rather than relying unquestioningly on barely veiled proxies of inequality in deciding who should live and who should die.

Pria Anand is a writer and physician. She cares for patients at Boston University School of Medicine, where she is an assistant professor of neurology.
