Keywords
artificial intelligence, preventive medicine, public health, health promotion
Document Type
Research Studies
Abstract
Background: Artificial intelligence (AI) applications within the health sciences, including education and healthcare practice, are now common. Patients are likely to use AI to better understand their health status or to learn what they can do to improve their health.
Purpose: The purpose of this study was to evaluate ChatGPT outputs for three common health conditions seen in the southern United States, assessing whether the outputs included medically biased care or lifestyle-related advice, whether they cited legitimate sources of information, and what reading levels the generated text required.
Methods: Health care professionals rated the outputs in four areas, and interrater agreement was assessed via Cronbach's alpha. Reading levels were assessed with the Flesch-Kincaid tool in Microsoft Word.
Results: Raters agreed that the information was not medically biased, gave lifestyle advice, did not suggest resources, and did not provide referenced information. Interrater comparisons among professional groups showed very strong agreement among nurses as well as among physicians (Cronbach's alpha > 0.90, p
Conclusions: Additional research should be performed on the trustworthiness of AI-generated health advice. AI algorithms should account for the reading levels of average Americans.
Recommended Citation
Evans Jr, M. W., Harrison Swartz, J., Ndetan, H., Francis, D., Moore, J., Kozub, M., Kaninjing, E., Jones-Locklear, J., Greene, D., & Doss, J. (2025). Personal Health Advice in the World of Artificial Intelligence: An Assessment of Responses from ChatGPT for Three Common Health Conditions in the Southern United States. Journal of Public Health in the Deep South, 5(3), 6. DOI: https://doi.org/10.55533/2996-6833.1111