DOI: 10.1055/a-2577-7214
Patient and Physician Exposure to Artificial Intelligence Hype
Abstract
Both patients and physicians are routinely exposed to the corporate promotion of artificial intelligence (AI) for healthcare products. Hype for AI products may impact both patient behavior and attitudes about healthcare. Corporate AI hype may intentionally overlook the known limitations associated with AI products and focus solely on potential benefits. As AI is increasingly integrated into medicine, physicians are also routinely subject to AI hype. As the promotion and use of AI products have grown dramatically in recent years, physicians should be aware of the potential benefits and risks of AI products despite the hype.
Introduction
Exposure to corporate marketing of artificial intelligence (AI) has become a routine part of daily life for both patients and physicians. Spending on AI development and marketing has grown enormously. For example, worldwide spending on generative AI solutions was expected to double in 2024 from $19.4 billion in 2023 and to reach $151.1 billion in 2027 [1]. In the US, the adoption of generative AI was faster than the adoption of the personal computer or the Internet, with 39% of the US population aged 18–64 using generative AI in August 2024 [2].
For emerging science and technology products that have commercial potential, hype often simplifies and sensationalizes, focusing on the benefits and understating the risks [3] [4]. Given the scale of these expenditures, much of the marketing of AI can fairly be described as hype. Hype can directly affect the valuation of a company [5]. For example, OpenAI was valued in the billions in 2024 without ever having turned a profit [5]. Companies are using celebrities, who often have financial interests at stake, to endorse AI tools [6] [7] [8]. The hype for technology products often includes sophisticated videos that promise more than is actually delivered [5]. The hype is often repetitive, since people tend to believe things that are repeated frequently, including falsehoods, because familiarity is not easily distinguished from truth [9]. Hype often exaggerates the capabilities of AI products, distorting expectations [10]. Physicians and their patients need to be aware that they are routinely exposed to AI hype from corporate promotional activities that include advertising, marketing, and public relations [3].
Background on artificial intelligence
The quest for AI is an extremely complex process that has developed over decades, with some promising results, although general intelligence remains outside the capabilities of our programmed computers [11]. AI includes a set of various engineering techniques. Much of the focus today is on generative AI, which uses large amounts of data to make predictions about things humans would do in a similar context, for example, to predict what word a human would add at the end of a particular sequence of words. Large language models (LLMs) are the basis of the best-known generative AI products, such as OpenAI’s GPT4, Microsoft’s Bing, and Meta’s LLaMA [5].
LLMs are not anchored in facts and cannot distinguish between fact and fiction [5]. Although an LLM may create responses that are coherent and grammatically correct, it does not understand the text [12]. For example, an LLM should not be trusted to provide financial advice, since its answers may contain arithmetic errors and it lacks the common sense to recognize answers that are obviously wrong [12]. Online misinformation, including images, is frequently generated by AI [13].
Humans are actively involved in the creation of an AI system, including algorithm development, training, testing, deployment, commercialization, updating, and re-training [14]. However, companies may characterize their AI products as “superhuman” or “operating without human knowledge” even when the AI systems were entirely developed by humans [15]. Many companies hide the human involvement in an AI system, which leads people to think that AI products work better than they actually do [16]. For example, at least 10,000 workers in the Philippines were involved with the US company Scale AI, which collects data for large American technology companies, including Meta, Microsoft, and generative AI companies like OpenAI [17].
Rise of artificial intelligence hype
Businesses often exaggerate the capabilities of AI products in marketing materials, an exaggeration that often originates in research and development environments [15]. After the release of ChatGPT, AI hype became so pervasive that AI widely penetrated the public consciousness [18]. Organizations often label their products as AI to attract attention, funding, and talent, some even presenting AI as having near-magical intelligence [19]. However, non-specialists should beware of overenthusiastic marketing claims. To take advantage of the potential of AI, it is important to understand the spectrum of AI capabilities and appropriate uses for AI and to recognize inflated claims [19] [20].
Artificial intelligence hype in healthcare targeting both patients and physicians
Patients are subjected to hype of AI capabilities in healthcare, including both utopian visions of AI as a magic cure and dystopian fears that AI will lead to deskilling and the collapse of the healthcare system [21]. In healthcare, press releases for the general public related to medical research may contain exaggerated claims about imminent practical achievements of AI [11]. The hype in healthcare seen by the general public includes statements like “AI may be as effective as medical specialists at diagnosing disease” [22]. There are also increasing advertisements to patients from organizations that use AI as part of the patient care process [23] [24]. Another concern with scientific hype in the popular press is that it becomes ubiquitous without reflection on the accuracy of the claims [25]. Additionally, journalists may not have the technical background to understand the limitations and challenges of AI and to accurately simplify the technology for a general audience [26].
Physicians are also subject to AI hype as AI is integrated throughout medicine [27] [28] [29]. As of fall 2024, the US Food and Drug Administration had approved 692 AI-based medical devices, including 531 in radiology, 71 in cardiology, and 20 in neurology [30]. Despite the hype, AI has well-defined pitfalls that are of particular concern in medicine. For example, LLMs are subject to accuracy issues, hallucinations, glitches, data biases, data quality problems, unpredictable outputs, and privacy and ethical concerns [27] [28]. LLMs are not capable of formal, logical reasoning [31] [32]. There are many known AI challenges related to the data, including quality, quantity, representativeness, and completeness [28] [33] [34]. An AI-powered transcription tool, Whisper, invented text and sentences in hospital transcriptions [35]. Another potential pitfall is that AI may work on a test dataset but not perform well when implemented in the clinical production environment [36] [37] [38]. Additionally, the costs of implementing and supporting an AI system may be much higher than assumed [16].
In articles discussing AI, known limitations are often de-emphasized, omitted, or addressed with framing language such as “skeptics say” [39]. One major concern is the lack of understanding of the limitations of AI, especially when it is deployed in high-risk settings such as healthcare. Another danger of AI hype in healthcare is that it distracts scientists from the real issues, including attention to technical details and the selection of appropriate products and services [16]. In orthopedic research, AI hype has resulted in one review article for every two original reports [40].
Limitations
This article is focused on AI hype and does not discuss the hype of non-AI products. The potential and varied benefits of the AI products being hyped in healthcare, such as reducing administrative costs, improving outcomes, and minimizing inequalities, are not discussed [41] [42]. For example, AI tools may assist with patient education by improving the readability of patient educational materials [43] [44] [45]. Technical details related to the algorithms used to develop AI products and issues related to regulation and cybersecurity are omitted. Methods for auditing AI products in healthcare and legal liabilities for medical errors related to the use of AI products are also omitted [46] [47]. The huge energy and water requirements of AI data centers are not discussed. Additionally, potential measures to counter AI hype and the aggressive marketing of AI products are not included.
Conclusion
Physicians should be aware that they and their patients are routinely exposed to corporate promotional hype for AI products. Effort is required to mitigate the effects of hype on patient expectations and physician treatment decisions. Further research into the impact of AI hype on society is needed.
Funding Sources
No funding was received.
Conflict of Interest
The authors declare that they have no conflict of interest.
Acknowledgement
Author Contributions: SM and TG wrote the initial draft. All authors reviewed and approved the final manuscript.
References
- 1 IDC. IDC Forecasts Spending on GenAI solutions will double in 2024 and grow to $151.1 billion in 2027. 2023 https://www.idc.com/getdoc.jsp?containerId=prUS51572023
- 2 Bick A, Blandin A, Deming DJ. The rapid adoption of generative AI. Federal Reserve Bank of St. Louis Working Paper 2024-2027 2024;
- 3 Bourne C. AI hype, promotional culture, and affective capitalism. AI and Ethics 2024; 1-3
- 4 Caulfield T, Condit C. Science and the sources of hype. Public Health Genomics 2012; 15: 209-217
- 5 Marcus GF. Taming silicon valley: How we can ensure that AI works for us. MIT Press; 2024
- 6 Kissinger HA, Schmidt E, Huttenlocher D. The age of AI: And our human future. Hachette UK. 2021
- 7 Narayanan A, Kapoor S. AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press; 2024
- 8 Bobrowsky M, Krouse S. The Celebrities Lending Their Voices to Meta’s New AI. The Wall Street Journal. 2024 https://www.wsj.com/tech/ai/meta-turns-to-awkwafina-john-cena-to-draw-users-to-ai-7ffc302a
- 9 Kahneman D. Thinking, fast and slow. Macmillan; 2011
- 10 LaGrandeur K. The consequences of AI hype. AI and Ethics 2024; 4: 653-656
- 11 Smil V. Invention and innovation: A brief history of hype and failure. MIT Press; 2023: 156-159
- 12 Smith GN. LLMs can’t be trusted for financial advice. Journal of Financial Planning. 2024 https://www.financialplanningassociation.org/learning/publications/journal/MAY24-journal-may-2024
- 13 Pearson J. Google research shows the fast rise of AI generated misinformation. CBC. 2024 https://www.cbc.ca/news/science/artificial-intelligence-misinformation-google-1.7217275
- 14 Tubaro P. Learners in the loop: Hidden human skills in machine intelligence. Sociologia del Lavoro 2022; 163: 110-129
- 15 Thais S. Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community. In: Proceedings of the AAAI/ACM conference on AI, ethics, and society 2024; 7: 1455-1465
- 16 Funk J. Unicorns, hype, and bubbles: A guide to spotting, avoiding, and exploiting investment bubbles in tech. Hampshire: Harriman House; 2024
- 17 Tan R, Cabato R. Behind the AI boom, an army of overseas workers in ‘digital sweatshops’. The Washington Post. 2023 https://www.washingtonpost.com/world/2023/08/28/scale-ai-remotasks-philippines-artificial-intelligence/
- 18 Vadde A. Inside and outside the Language Machines. PMLA 2024; 139: 553-558
- 19 Stanford. Getting beyond the hype: A guide to AI’s potential 2024 https://online.stanford.edu/getting-beyond-hype-guide-ais-potential
- 20 IBM. AI in action 2024 https://www.ibm.com/think/reports/ai-in-action
- 21 Strange M. Three different types of AI hype in healthcare. AI and Ethics. 2024: 1-8
- 22 Princeton. AI may be as effective as medical specialists at diagnosing disease 2024 https://www-cs-princeton-edu.accesdistant.sorbonne-universite.fr/~sayashk/ai-hype/cnn/cnn.html
- 23 Doctronic. 2024 https://www.doctronic.ai/?matchtype=p
- 24 Mayo Clinic Press. AI in healthcare: The future of patient care and health management 2024 https://mcpress.mayoclinic.org/healthy-aging/ai-in-healthcare-the-future-of-patient-care-and-health-management/
- 25 Caulfield T. Spinning the genome: Why science hype matters. Perspect Biol Med 2018; 61: 560-571
- 26 Dierickx L, Slavkovik M. Talking to journalists about AI. 2024 https://www.researchgate.net/profile/Laurence-Dierickx/publication/385086106_Talking_to_journalists_about_AI/links/6715243509ba2d0c760eae3b/Talking-to-journalists-about-AI.pdf
- 27 Glassock RJ. Artificial intelligence in medicine and nephrology: hope, hype, and reality. Clin Kidney J 2024; 17: sfae074
- 28 Omiye JA, Gui H, Rezaei SJ. et al. Large language models in medicine: The potentials and pitfalls: a narrative review. Ann Intern Med 2024; 177: 210-220
- 29 Silcox C, Zimlichmann E, Huber K. et al. The potential for artificial intelligence to transform healthcare: Perspectives from international health leaders. NPJ Digit Med 2024; 7: 88
- 30 Robeznikes A. AI is already reshaping care. Here's what it means for doctors. AMA. 2024 https://www-ama-assn-org.accesdistant.sorbonne-universite.fr/practice-management/digital/ai-already-reshaping-care-heres-what-it-means-doctors
- 31 Marcus G. LLMs don’t do formal reasoning - and that is a HUGE problem. 2024b https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and
- 32 Mirzadeh I, Alizadeh K, Shahrokhi H. et al. GSM-Symbolic: Understanding the limitations of mathematical reasoning in large language models. arXiv preprint arXiv:2410.05229. 2024
- 33 Lowe D. AI and biology. Science. 2024 https://www.science.org/content/blog-post/ai-and-biology
- 34 Xu Z, Biswas B, Li L. et al. AI/ML in precision medicine: A look beyond the hype. Ther Innov Regul Sci 2023; 57: 957-962
- 35 Burke G, Schellmann H. Researchers say an AI-powered transcription tool used in hospitals invents things no one said. AP. https://www.ap.org/news-highlights/best-of-the-week/honorable-mention/2024/researchers-say-an-ai-powered-transcription-tool-used-in-hospitals-invents-things-no-one-said/
- 36 Finlayson SG, Subbaswamy A, Singh K. et al. The clinician and dataset shift in artificial intelligence. N Engl J Med 2021; 385: 283-286
- 37 Monteith S, Glenn T, Geddes JR. et al. Differences between human and artificial/augmented intelligence in medicine. Computers in Human Behavior: Artificial Humans 2024; 2: 100084
- 38 Perry TS. Andrew Ng X-rays the AI hype. IEEE Spectrum 2021 https://spectrum-ieee-org.accesdistant.sorbonne-universite.fr/andrew-ng-xrays-the-ai-hype
- 39 Kapoor S, Narayanan A. A checklist of eighteen pitfalls in AI journalism. 2022 https://www-cs-princeton-edu.accesdistant.sorbonne-universite.fr/~sayashk/ai-hype/ai-reporting-pitfalls.pdf
- 40 Ramkumar PN, Pang M, Polisetty T. et al. Meaningless applications and misguided methodologies in artificial intelligence–related orthopaedic research propagates hype over hope. Arthroscopy 2022; 38: 2761-2766
- 41 Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. In: Artificial Intelligence in healthcare. Academic Press; 2020: 25-60
- 42 Chen IY, Joshi S, Ghassemi M. Treating health disparities with artificial intelligence. Nat Med 2020; 26: 16-17
- 43 Moons P, Van Bulck L. Using ChatGPT and Google Bard to improve the readability of written patient information: A proof of concept. Eur J Cardiovasc Nurs 2024; 23: 122-126
- 44 Kirchner GJ, Kim RY, Weddle JB. et al. Can artificial intelligence improve the readability of patient education materials?. Clin Orthop Relat Res 2023; 481: 2260-2267
- 45 Alowais SA, Alghamdi SS, Alsuhebany N. et al. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med Educ 2023; 23: 689
- 46 Mello MM, Guha N. Understanding liability risk from using health care artificial intelligence tools. N Engl J Med 2024; 390: 271-278
- 47 Liu X, Glocker B, McCradden MM. et al. The medical algorithmic audit. Lancet Digit Health 2022; 4: e384-e397
Publication History
Received: 06 January 2025
Accepted after revision: 10 March 2025
Article published online: 12 May 2025
© 2025. Thieme. All rights reserved.
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany