Study exposes discrimination in medical AI: high-income patients get advanced testing, while Black patients and people experiencing homelessness are steered toward invasive treatment

ChainNews ABMedia

As the commercialization of artificial intelligence (AI) in the healthcare industry becomes increasingly widespread, its potential systemic risks are coming to light. A recent study in the journal Nature Medicine found that medical AI tools give markedly different medical recommendations depending on a patient's income, race, gender, and sexual orientation, a pattern that could cause real harm to patients' rights and distort the overall allocation of healthcare resources.

Study: High-income patients are more likely to be recommended advanced tests

The study tested nine commercially available large language models (LLMs) on 1,000 emergency department cases. The research team deliberately kept each patient's medical symptoms identical, changing only background characteristics such as income, race, and housing status. The results showed a clear rich-poor divide in the AI systems' medical recommendations.
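
To make the methodology concrete, here is a minimal sketch of how such a counterfactual audit can be set up, assuming the OpenAI Python SDK as one example interface and a made-up chest-pain vignette; the client, model name, and prompt wording are all illustrative placeholders, not the study's actual materials.

```python
# Minimal sketch of the counterfactual setup described above: one fixed
# clinical vignette is duplicated, only the sociodemographic labels change,
# and every variant is sent to the model under test.
import itertools

from openai import OpenAI  # placeholder choice of client, not the study's

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE_CASE = (
    "A 54-year-old patient presents to the emergency department with "
    "acute chest pain radiating to the left arm, BP 150/95, HR 102."
)

# Clinically identical variants: only the background labels differ.
ATTRIBUTES = {
    "income": ["high-income", "low-income"],
    "housing": ["stably housed", "experiencing homelessness"],
}

def build_prompt(case: str, labels: dict) -> str:
    tags = ", ".join(labels.values())
    return (
        f"Patient background: {tags}. {case}\n"
        "What diagnostic workup and disposition do you recommend?"
    )

for combo in itertools.product(*ATTRIBUTES.values()):
    labels = dict(zip(ATTRIBUTES, combo))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study tested nine models
        messages=[{"role": "user", "content": build_prompt(BASE_CASE, labels)}],
    )
    print(labels, "->", response.choices[0].message.content[:120])
```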

Patients labeled "high-income" were far more likely than low-income patients to receive AI recommendations for advanced imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). In other words, even with identical clinical presentations, the AI may allocate healthcare resources unequally based on assumed socioeconomic status.

Black patients, people experiencing homelessness, and LGBTQ+ groups are more often steered toward invasive procedures and mental health assessments

Beyond income, the AI also showed serious disparities in its judgments about racial and other vulnerable groups. According to the study, when patients were labeled as Black, homeless, or LGBTQIA+, the AI was more likely to recommend emergency department visits, invasive procedures, and even psychiatric evaluations, despite these interventions not being clinically indicated. These excessive and inappropriate recommendations diverge sharply from the judgments of practicing physicians, showing that AI systems can invisibly reinforce existing social stereotypes.

1.7 million responses analyzed: AI trained on historical data can raise the risk of clinical misdiagnosis

The study analyzed more than 1.7 million AI-generated responses. Experts note that an AI model's decision logic is learned from historical, human-produced training data, so it naturally inherits the biases hidden in that data. Emergency triage, advanced testing, and follow-up care are key steps toward an accurate diagnosis; if these early decisions are skewed by a patient's demographic characteristics, diagnostic accuracy is seriously threatened.
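
To illustrate how a gap like this can be quantified over logged responses, the sketch below codes each reply for an advanced-imaging recommendation, compares per-group rates, and sizes the gap with a two-proportion z-test; the keyword matcher and the four toy replies are assumptions for demonstration, not the study's actual coding scheme.

```python
# Minimal sketch of the aggregation step: code each logged reply for
# whether it recommends advanced imaging (MRI/CT), then compare groups.
from math import sqrt

def recommends_imaging(reply: str) -> bool:
    # Crude illustrative keyword coding, not a validated scheme.
    text = reply.lower()
    return any(k in text for k in ("mri", "computed tomography", " ct "))

# (group label, model reply) pairs; the real study logged ~1.7M responses.
logged = [
    ("high-income", "Recommend a troponin series and urgent CT angiography."),
    ("high-income", "Order an MRI to rule out aortic pathology."),
    ("low-income", "Recommend ECG and observation; discharge if stable."),
    ("low-income", "Refer to outpatient follow-up."),
]

counts: dict = {}
for group, reply in logged:
    yes, total = counts.get(group, (0, 0))
    counts[group] = (yes + recommends_imaging(reply), total + 1)

for group, (yes, total) in counts.items():
    print(f"{group}: {yes}/{total} imaging recommendations")

# Two-proportion z-test for the high- vs low-income gap (large-sample test).
(y1, n1), (y2, n2) = counts["high-income"], counts["low-income"]
p1, p2 = y1 / n1, y2 / n2
p = (y1 + y2) / (n1 + n2)
z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
print(f"rate gap = {p1 - p2:.2f}, z = {z:.2f}")
```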

Although the researchers found that targeted prompting could reduce the measured bias by about 67% in certain models, prompting alone cannot completely eliminate this systemic problem.
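
As a rough illustration of what such prompt-level mitigation might look like, the sketch below wraps each audit prompt in a fairness instruction before re-running the comparison; the instruction text is an assumed placeholder, not the prompt used in the study.

```python
# Minimal sketch of prompt-level mitigation: prepend an instruction that
# recommendations must not vary with sociodemographic labels, then re-run
# the same counterfactual audit and recompute the per-group rates.
FAIRNESS_SYSTEM_PROMPT = (
    "You are a clinical decision-support assistant. Base diagnostic and "
    "disposition recommendations strictly on the clinical findings. Do not "
    "let income, race, housing status, gender, or sexual orientation "
    "change the recommended workup."  # assumed wording, not the paper's
)

def debiased_messages(user_prompt: str) -> list:
    """Wrap an audit prompt with the fairness instruction."""
    return [
        {"role": "system", "content": FAIRNESS_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```

Comparing per-group rates with and without such a system prompt is how a reduction like the reported 67% would be measured; it also shows why residual bias can remain, since the instruction shapes outputs without changing what the model learned from its training data.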

Experts urge healthcare institutions and policymakers to establish safeguards

With the study's publication, rules governing AI use in healthcare have become a focus for both industry and regulators. Front-line healthcare professionals need to recognize the explicit and implicit biases that may be embedded in AI recommendations and avoid relying on them blindly. Healthcare administrators, meanwhile, should establish ongoing evaluation and monitoring mechanisms to ensure fairness in care.

Policymakers, for their part, now have key scientific evidence in hand and should push for greater transparency in AI algorithms and for auditing standards. For the general public, the study is a warning as well: when using AI health-consultation services, entering too much personal socioeconomic background information may unintentionally skew the medical assessments the AI provides.

This article, AI medical discriminatory bias! High-income patients receive precise testing; Black people and people experiencing homelessness are advised to undergo invasive treatment, was first published on ChainNews ABMedia.

