
UC Berkeley researchers find widely used health care algorithm is biased


OCTOBER 28, 2019

Campus researchers contributed to a study that found a widely used health care algorithm reinforces racial, socioeconomic and gender biases despite being designed to overcome them.

According to the study, the algorithm determines patients' access to high-risk care management programs, and those programs consistently admit healthier white patients. Fixing the algorithm would increase the share of Black patients admitted to the programs from 17.7% to 46.5%.

Ziad Obermeyer, campus acting associate professor of public health and the study's lead researcher, said the algorithm's bias originates in the inequalities of the health care system itself.

“The problem is that when you’re Black and sick, you cost less than a white patient because of inequality and their access to health care,” Obermeyer said.

Bias occurs in the system because the algorithm uses health costs as a proxy for health needs, according to the study. The algorithm falsely concludes that Black patients are healthier than white patients because less money is spent on Black patients with the same level of need.
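That proxy effect can be illustrated with a toy simulation. In the sketch below, every number is an assumption chosen for illustration, not a figure from the study: when less is spent on Black patients at the same level of need, any model trained to predict cost will assign them lower risk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Group membership and true health need (e.g., count of chronic
# conditions); both groups get the same need distribution in this toy.
is_black = rng.random(n) < 0.12
need = rng.poisson(2.0, size=n)

# Illustrative assumption: at equal need, 30% less is spent on Black
# patients, standing in for the access gap described above.
cost = need * np.where(is_black, 0.7, 1.0) * 1000 + rng.normal(0, 200, n)

# A score trained to predict cost inherits the gap: at every level of
# true need, Black patients' average cost, and so their predicted
# "risk," is lower.
for level in range(5):
    at_level = need == level
    print(f"need={level}: Black ${cost[at_level & is_black].mean():,.0f}"
          f" vs white ${cost[at_level & ~is_black].mean():,.0f}")
```

The point of the sketch is that no group label enters the model at all; using cost as the training target is enough to reproduce the disparity.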

Obermeyer and the other researchers partnered with an academic hospital that used the risk-based algorithm to determine who had preferential access to a health care program. The study's sample consisted of 6,079 people who self-identified as Black and 43,539 people who self-identified as white.

The researchers obtained the algorithm's predicted risk scores and compared them to direct measures of health, such as the number of chronic illnesses. At the same risk score, Black patients turned out to be in poorer health than white patients.
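That comparison can be made concrete. Below is a minimal sketch of that kind of audit, assuming NumPy arrays; score, illnesses and is_black are hypothetical stand-ins for the study's actual data, not its published code.

```python
import numpy as np

def audit_by_risk_score(score, illnesses, is_black, n_bins=10):
    """Compare a direct health measure (e.g., number of chronic
    illnesses) between groups within each decile of the algorithm's
    risk score. An unbiased score would give matching group averages."""
    edges = np.quantile(score, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(score, edges[1:-1]), 0, n_bins - 1)
    for b in range(n_bins):
        in_bin = bins == b
        print(f"decile {b + 1}: "
              f"Black {illnesses[in_bin & is_black].mean():.2f} vs "
              f"white {illnesses[in_bin & ~is_black].mean():.2f} illnesses")
```

If Black patients show more illness than white patients at the same score, the score is understating their need, which is the pattern the study reports.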

For campus social welfare professor Tina Sacks, the algorithm’s bias was not a surprise.

“Researchers and lay people alike tend to decontextualize AI and big data without understanding that fundamentally all of these systems are created by humans with the attendant biases and overt discrimination baked into the cake,” Sacks said in an email. “What is surprising is that anyone would think a human-made ‘machine’ would not reproduce the biases of the humans that built it.”

There have been efforts to address the issue of bias in health care systems, however.

The researchers eventually reached out to the software company that makes the algorithm. The company was responsive, verified the bias and tried to generate a solution, according to Obermeyer, who added that the company was able to reduce the bias by 84%.

Obermeyer added that it is up to students to lead the way in overcoming this bias, and that those studying data science and algorithms are among the leading catalysts for change.

“We want a health system that anticipates health needs and a lot of these algorithms would be made by students today,” Obermeyer said. “It is an extremely important job because it’s up to us to find solutions to inequalities in the system.”

Contact Marc Escobar at [email protected] and follow him on Twitter at @mescobar_dc.

LAST UPDATED NOVEMBER 10, 2019

