Discriminatory bias exposed in AI medical care! High-income patients receive precise testing, while Black patients and people experiencing homelessness are recommended invasive treatment.

ChainNewsAbmedia

As the commercialization of artificial intelligence (AI) in the healthcare industry becomes increasingly widespread, its potential systemic risks are coming to light. A new study in the journal Nature Medicine reports that medical AI tools provide drastically different medical recommendations depending on a patient's income, race, gender, or sexual orientation, a pattern that could cause real harm to patients' rights and to the overall allocation of healthcare resources.

Study: High-income patients are more likely to be recommended advanced tests

The study tested nine commercially available large language models (LLMs) on 1,000 emergency department cases. The research team deliberately kept every patient's medical symptoms identical, changing only background characteristics such as income, race, and living situation. The results showed that the AI systems displayed a clear wealth gap in the medical recommendations they provided.
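As a rough illustration of the audit design described above, such a counterfactual test can be set up by holding the clinical vignette fixed and varying only the demographic label. This is a hypothetical sketch, not the study's published code; query_model, BASE_VIGNETTE, and the descriptor list are illustrative assumptions.

```python
# Hypothetical sketch of a counterfactual bias audit: identical clinical
# vignettes, differing only in one demographic descriptor, are sent to the
# model under test. query_model and the vignette text are placeholders,
# not the study's actual materials.
import itertools

BASE_VIGNETTE = (
    "A {age}-year-old {descriptor} patient presents to the emergency "
    "department with acute chest pain radiating to the left arm. "
    "Should this patient receive advanced imaging (CT/MRI)?"
)

# Illustrative labels, loosely mirroring the attributes varied in the study.
DESCRIPTORS = ["high-income", "low-income", "Black", "white", "unhoused"]

def query_model(prompt: str) -> str:
    """Placeholder for a call to the LLM being audited."""
    raise NotImplementedError

def run_audit(ages=(45, 60, 75)) -> dict:
    """Collect one response per (age, descriptor) combination."""
    responses = {}
    for age, descriptor in itertools.product(ages, DESCRIPTORS):
        prompt = BASE_VIGNETTE.format(age=age, descriptor=descriptor)
        responses[(age, descriptor)] = query_model(prompt)
    return responses
```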

Patients labeled as "high-income" were far more likely than low-income patients to receive AI recommendations for advanced imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). In other words, even when patients' conditions are identical, the AI may allocate healthcare resources unequally based on assumed socioeconomic status.

Black patients, people experiencing homelessness, and LGBTQ+ groups are more likely to be recommended invasive procedures and mental health assessments

Beyond the wealth divide, the AI also showed serious disparities in its medical judgments about racial minorities and other vulnerable groups. According to the study, when patients were labeled as Black, homeless, or LGBTQIA+, the AI was more likely to recommend sending them to the emergency department, performing invasive medical procedures, or even requiring psychiatric evaluations, despite these interventions being clinically unwarranted. These excessive and inappropriate recommendations diverge sharply from the judgments professional physicians make in practice, suggesting that AI systems invisibly reinforce existing negative social stereotypes.

1.7 million test responses: AI trained on historical data may raise the risk of clinical misdiagnosis

The study analyzed more than 1.7 million AI-generated responses. Experts noted that the decision-making logic of artificial intelligence derives from historical training data produced by humans, so it naturally inherits the biases hidden in that data. Emergency triage, advanced testing, and subsequent follow-up are key steps toward an accurate diagnosis; if these initial decisions are swayed by patients' demographic characteristics, diagnostic accuracy is seriously threatened.
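Measuring disparity over a response set of that size reduces to comparing recommendation rates across demographic groups. Below is a minimal sketch, assuming each response has already been labeled with the group and with whether advanced imaging was recommended; the record layout and function name are assumptions, not the study's analysis code.

```python
# Minimal sketch, assuming each collected response has been labeled with the
# patient's demographic group and whether the model recommended advanced
# imaging. Record layout and function name are assumptions.
from collections import defaultdict

def disparity_report(records):
    """records: iterable of (group_label, recommended_imaging: bool)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in records:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    rates = {group: rec / total for group, (rec, total) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy usage: two groups with different imaging recommendation rates.
rates, gap = disparity_report([
    ("high-income", True), ("high-income", True),
    ("low-income", True), ("low-income", False),
])
print(rates, f"largest between-group gap: {gap:.2f}")
```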

Although the researchers found that specific prompts could reduce bias by about 67% in certain models, prompting alone cannot completely eliminate this systemic problem.
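The article does not reproduce the researchers' mitigation prompt, so the following is only an assumed illustration of the general shape of such an intervention: a system-level instruction telling the model to ignore non-clinical attributes. The wording is hypothetical, not taken from the paper.

```python
# Assumed illustration only: the study reports that certain prompts cut
# measured bias by ~67% in some models, but this exact wording is not
# taken from the paper.
DEBIAS_SYSTEM_PROMPT = (
    "Base your triage and testing recommendations strictly on the patient's "
    "clinical presentation. Do not let income, race, housing status, gender, "
    "or sexual orientation change the level of care you recommend."
)

def build_messages(case_text: str) -> list[dict]:
    """Prepend the fairness instruction to a standard chat-style request."""
    return [
        {"role": "system", "content": DEBIAS_SYSTEM_PROMPT},
        {"role": "user", "content": case_text},
    ]
```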

Experts urge healthcare institutions and decision-makers to establish protective mechanisms

With the publication of this study, regulation of AI in healthcare systems has become a focus for both industry and regulators. Front-line healthcare professionals need to recognize the explicit and implicit biases that may be embedded in AI recommendations and avoid relying on them blindly. Healthcare administrators, meanwhile, should establish ongoing evaluation and monitoring mechanisms to ensure fairness in healthcare services.

Policymakers, for their part, now have key scientific evidence and should push for greater transparency in AI algorithms and for auditing standards. For the general public, this is also an important warning: when using AI health-counseling services, disclosing personal socioeconomic background information may unintentionally skew the medical assessments the AI provides.

This article, "AI medical discriminatory bias! High-income patients receive precise testing; Black people and people experiencing homelessness are advised to undergo invasive treatment," was first published on Lianxin ABMedia.

