International Journal on Science and Technology

E-ISSN: 2229-7677     Impact Factor: 9.88


Bias detection and mitigation in AI models trained on clinical datasets

Author(s) Veerendra Nath Jasthi
Country United States
Abstract Artificial Intelligence (AI) models have become embedded in clinical decision-making systems, where they support diagnosis and treatment recommendation. These models can nonetheless underperform because of biases present in the clinical datasets used to train them. Such biases can lead to uneven performance across demographic groups, which is ethically, legally, and clinically problematic. This paper examines the origins of bias in clinical AI models, methods for detecting it, and mitigation measures such as reweighting, data augmentation, and algorithmic fairness constraints (an illustrative sketch of these steps appears after the article details below). Experimental analysis on benchmark clinical datasets shows how overlooked bias can produce unequal effects across gender, age, and ethnicity subgroups. After the mitigation strategies were applied, model fairness scores improved without a substantial loss of accuracy. These findings underscore the need for bias-aware AI development pipelines to build equitable and trustworthy applications in healthcare settings.
Keywords AI fairness, clinical datasets, algorithmic bias, bias detection, bias mitigation, healthcare AI, data disparity, ethical AI, model equity, demographic parity.
Field Engineering
Published In Volume 16, Issue 1, January-March 2025
Published On 2025-02-07
DOI https://doi.org/10.71097/IJSAT.v16.i1.7999
Short DOI https://doi.org/g92nt6
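
The following is a minimal sketch, not the paper's own code, of the kind of bias detection and reweighting mitigation the abstract describes: it measures a demographic-parity gap (difference in positive-prediction rates between two subgroups) for a baseline classifier, then retrains with inverse group-frequency sample weights. The synthetic dataset, the binary sensitive attribute, and the choice of logistic regression are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical dataset: features X, outcome y,
# and a binary sensitive attribute g (e.g., a demographic subgroup).
n = 5000
g = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + g[:, None] * 0.5        # subgroup-correlated features
y = (X[:, 0] + 0.8 * g + rng.normal(size=n) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.3, random_state=0
)

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two subgroups."""
    rates = [y_pred[group == k].mean() for k in (0, 1)]
    return abs(rates[0] - rates[1])

# Detection: train a baseline model and measure the subgroup gap.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("baseline DP gap:", demographic_parity_gap(base.predict(X_te), g_te))

# Mitigation: reweight training samples so each (group, label) cell
# contributes roughly equal total weight.
cells, counts = np.unique(np.stack([g_tr, y_tr], axis=1), axis=0, return_counts=True)
cell_weight = {(int(c[0]), int(c[1])): len(y_tr) / (len(counts) * cnt)
               for c, cnt in zip(cells, counts)}
w = np.array([cell_weight[(int(gi), int(yi))] for gi, yi in zip(g_tr, y_tr)])

mitigated = LogisticRegression(max_iter=1000).fit(X_tr, y_tr, sample_weight=w)
print("reweighted DP gap:", demographic_parity_gap(mitigated.predict(X_te), g_te))
print("accuracy change:", mitigated.score(X_te, y_te) - base.score(X_te, y_te))

On data of this shape, the reweighted model typically narrows the parity gap at a small accuracy cost; in practice the same pattern can be checked per attribute (gender, age band, ethnicity) and with other fairness metrics, as the paper discusses.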
