Could Technology Worsen Healthcare Disparities?

Sarah Mackel

In today's healthcare system, primary care providers are under increasing stress, A&E is struggling to meet its target waiting times(1), and the King’s Fund think tank estimates a level of NHS staff vacancies “threatening the ability of the service to deliver safe, high-quality care”(13). Artificial Intelligence (AI) has been proposed as a way to automate healthcare decision-making and alleviate the pressure on medical institutions. DeepMind, a Google subsidiary, has already developed an AI capable of diagnosing 50 eye diseases with 94.5% accuracy, a performance surpassing that of expert clinicians(12). Hence, some claim that AI could streamline medical practice through its purported objectivity and indefatigability; after all, cognitive error contributes to 74% of misdiagnoses(4). These very features, however, could embed unwitting biases in algorithms that perpetuate a grim legacy of medical inequality. Do faster diagnoses compensate for the risk of failure, especially when Black, Asian, and minority ethnic (BAME) populations are disproportionately affected?

Already, diagnostic algorithms trained predominantly on images of light-skinned patients underperform when diagnosing melanoma on darker skin(9), with potentially severe consequences, perpetuating a longstanding diagnostic inequality(10). Algorithms have run into similar problems reading chest x-rays after being trained on gender-imbalanced data(8). The fault lies with the humans who feed information into machine-learning systems: the programmers never gave the AI a complete picture of the patients it would serve. Though this is largely due to limited data sets, it highlights one of the key concerns about using AI in high-stakes sectors: rather than eliminating diagnostic inequalities and cognitive errors through machine learning, AI programs might simply automate them.
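To make the mechanism concrete, here is a minimal sketch in Python (entirely synthetic data via scikit-learn; the two groups, the five features, and the 95/5 split are illustrative assumptions, not figures from the studies cited above). A classifier trained mostly on one group can look accurate in aggregate while failing the under-represented group, which is why per-group evaluation matters:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic "image features"; the disease threshold differs by group,
    # standing in for how a condition can present differently across groups.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > 5 * shift).astype(int)
    return X, y

# Training set: 95% group A, 5% group B (mirroring a dataset dominated
# by one patient population).
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets, per group rather than in aggregate:
# overall accuracy hides the gap that a stratified report reveals.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(accuracy_score(yt, model.predict(Xt)), 3))
```

Run as written, the model scores well on group A but near chance on group B, even though a single aggregate accuracy figure would look respectable.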

Problems surrounding training data extend even further. Existing medical training data has a known diversity problem(5), and recent backlash against companies’ overuse of personal data has created a skeptical, fearful public, complicating any attempt to aggregate sensitive data. Whether sensitive information like genomic data should be shared with healthcare providers or insurance companies has itself become a significant ethical debate.

Even with diverse data, fairness is not guaranteed. In 2019, a United States healthcare AI was found to have inadvertently developed racial bias: it routinely referred healthier white patients, rather than sicker BAME patients, to a high-risk healthcare management program(11). This arose because the algorithm treated higher healthcare spending as a marker of illness severity. However, unequal access to care meant that BAME patients spent less on healthcare than equally sick white patients, so BAME patients were given lower risk scores and were referred to the high-risk program less often(11). Nor are these isolated cases: Harvard University researchers recently reviewed thirteen clinical algorithms and unearthed numerous examples of bias that made BAME patients less likely to receive appropriate care(6).
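This proxy problem can be reproduced in a few lines. The sketch below uses synthetic numbers and an assumed 40% spending gap; it is not the actual model or data from the study, only the mechanism it describes: a "risk score" that predicts cost rather than health need under-refers the group with less access to care.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

sickness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need
group_b = rng.random(n) < 0.5                        # group membership flag

# Assumed access gap: group B incurs ~40% lower cost at equal sickness.
cost = sickness * np.where(group_b, 0.6, 1.0) + rng.normal(scale=0.1, size=n)

# A risk score trained to predict cost is, at best, the cost itself;
# the top decile of scores is referred to the high-risk program.
referred = cost >= np.quantile(cost, 0.9)

# Among truly high-need patients (top decile of sickness), compare how
# often each group is actually referred.
high_need = sickness >= np.quantile(sickness, 0.9)
for name, mask in [("group A", ~group_b), ("group B", group_b)]:
    share = referred[mask & high_need].mean()
    print(name, "share of truly high-need patients referred:", round(share, 2))
```

Even though sickness is distributed identically in both groups here, nearly all of group A's high-need patients clear the referral threshold while most of group B's do not, because the label being optimised was never health need in the first place.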

Algorithmic bias is not limited to healthcare. Women are less likely to be shown advertisements for highly-paid positions(3), and facial recognition systems underperform when identifying the faces of women and Black individuals(7). An Amazon hiring AI was discontinued in 2018 because it discriminated against female applicants(2). Blaming the AI itself oversimplifies the true issue: the algorithms merely pick up on and automate pre-existing inequalities because they do not understand their significance. The causes are limited training data and programmers’ errors, and such errors could erode public trust in medical institutions, a mutual trust that is central to the integrity of medicine and must be protected.

The medical profession is only starting to grapple with longstanding systemic biases: a report in the journal Pediatrics found that previously healthy Black children remain three times more likely than white children to die or experience complications after surgery(6). Although AI is a tempting solution to healthcare's challenges, the knowledge transferred to an AI automates its designers' assumptions and beliefs rather than producing a purely objective program. Recognizing the need for unbiased programs is central to AI's continued technological development.

Technology needs to promote equality instead of exacerbating discrimination. If AI is to play a larger role in the health industry, then programmers and users must be cognizant of its potential to amplify patterns of inequality. As the third year of the COVID-19 pandemic rages on, AI must help sustain the bond of trust between medical institutions and the public.


Works Cited: 

  1. Baker, Carl. "NHS Key Statistics: England, October 2019." House of Commons Library, 2019.

  2. Dastin, Jeffrey. "Amazon scraps secret AI recruiting tool that showed bias against women." Reuters, 11 Oct. 2018, www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G. Accessed 20 Oct. 2021.

  3. Datta, Amit, et al. "Automated Experiments on Ad Privacy Settings." Proceedings of Privacy Enhancing Technologies, 16 Apr. 2015, https://doi.org/10.1515/popets-2015-0007. Accessed 19 Oct. 2021.

  4. Graber, Mark L., et al. "Diagnostic Error in Internal Medicine." Archives of Internal Medicine, vol. 165, no. 13, 2005, p. 1493, doi:10.1001/archinte.165.13.1493. https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/486642.

  5. Kaushal, Amit, et al. "Health Care AI Systems Are Biased." Scientific American, 17 Nov. 2020, www.scientificamerican.com/article/health-care-ai-systems-are-biased/. Accessed 19 Oct. 2021.

  6. Kent, Chloe. "A Race to the Bottom: How AI Encodes Racial Discrimination within Medicine." Medical Technology, Sept. 2020, medical-technology.nridigital.com/medical_technology_sep20/ai_racial_discrimination_medicine. Accessed 17 Oct. 2021.

  7. Klare, Brendan F., et al. "Face Recognition Performance: Role of Demographic Information." IEEE Transactions on Information Forensics and Security, 9 Oct. 2012, https://doi.org/10.1109/TIFS.2012.2214212. Accessed 19 Oct. 2021.

  8. Larrazabal, Agostina J., et al. "Gender Imbalance in Medical Imaging Datasets Produces Biased Classifiers for Computer-Aided Diagnosis." Proceedings of the National Academy of Sciences, 26 May 2020, https://doi.org/10.1073/pnas.1919012117. Accessed 17 Oct. 2021.

  9. Lashbrook, Angela. "AI-Driven Dermatology Could Leave Dark-Skinned Patients Behind." The Atlantic, Atlantic Media Company, 16 Aug. 2018, https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/.

  10. Mahendraraj, Krishnaraj, et al. "Malignant Melanoma in African-Americans: A Population-Based Clinical Outcomes Study Involving 1106 African-American Patients from the Surveillance, Epidemiology, and End Result (SEER) Database (1988-2011)." Medicine, vol. 96, no. 15, 2017, e6258, doi:10.1097/MD.0000000000006258.

  11. Obermeyer, Ziad, et al. "Dissecting racial bias in an algorithm used to manage the health of populations." Science, vol. 366, no. 6464, 25 Oct. 2019, https://doi.org/10.1126/science.aax2342. Accessed 15 Oct. 2021.

  12. Shead, Sam. "Google DeepMind's AI Can Detect 50 Eye Disease Conditions And Save Sight." Forbes, Forbes Magazine, 13 Aug. 2018, https://www.forbes.com/sites/samshead/2018/08/13/google-deepminds-ai-can-detect-50-eye-disease-conditions-and-save-sight/#3adb22b127f3.

  13. West, Michael. "The NHS Crisis of Caring for Staff." The King's Fund, 1 Mar. 2019, https://www.kingsfund.org.uk/blog/2019/03/nhs-crisis-caring.