Published on 03.Nov.2025 in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/76340.
Generative Artificial Intelligence in Medical Education: Enhancing Critical Thinking or Undermining Cognitive Autonomy?


Viewpoint

1One Health Research Group, Universidad de Las Américas, Quito, Ecuador

2Institute for Diagnostic and Interventional Radiology, School of Medicine and Health, Technical University of Munich, Munich, Germany

Corresponding Author:

Esteban Ortiz-Prado, MD, MSc, MPH, PhD

One Health Research Group

Universidad de Las Américas

Via Nayon S/N

Quito, 170134

Ecuador

Phone: 593 0992561230

Email: e.ortizprado@gmail.com


Generative artificial intelligence (GenAI) enables the production of coherent and contextually relevant text by processing large-scale linguistic datasets. Tools such as ChatGPT, Gemini, Claude, and LLaMA are increasingly integrated into medical education, assisting students with a range of tasks, including clinical reasoning, literature review, scientific writing, and formative assessment. Although these tools offer significant advantages in terms of productivity, personalization, and cognitive support, their impact on critical thinking—a cornerstone of medical education—remains uncertain. The aim of this viewpoint paper is to critically assess the influence of GenAI on critical thinking within medical training, examining both its potential to enhance cognitive skills and the risks it poses to cognitive autonomy. Users have reported increased efficiency and improved linguistic output; however, concerns have also been raised regarding the risk of cognitive overreliance. Current evidence presents a mixed picture, indicating both improvements in learner engagement and potential drawbacks such as passivity or susceptibility to misinformation. Without curricular integration that prioritizes ethical use, prompt engineering, and critical evaluation, GenAI may compromise the cognitive autonomy of medical students. Conversely, when thoughtfully embedded into pedagogical frameworks, these tools can act as cognitive enhancers—supporting, rather than replacing, clinical reasoning. Medical education must adapt to ensure that future physicians engage with GenAI in a critical, ethical, and context-aware manner, especially in complex decision-making scenarios. This transformation demands not only technological fluency but also reflective practice and sustained oversight by faculty and academic institutions.

J Med Internet Res 2025;27:e76340

doi:10.2196/76340



Critical reasoning is an essential component of medical training, enabling students to develop complex cognitive abilities that directly contribute to improved decision-making, deeper clinical insight, and safer patient care [1]. This mode of thinking extends beyond the mere recall or application of knowledge; it encompasses problem analysis, evidence evaluation, and the formulation of well-informed clinical judgments [2]. As such, critical reasoning is widely recognized as a cornerstone of high-quality medical education and professional practice [3].

Over the past few decades, technological advancements have profoundly transformed educational ecosystems, including those within medical schools [4]. The advent of the internet and metasearch engines revolutionized access to scientific information, democratizing medical education on a global scale [4-6]. More recently, a new paradigm shift has emerged with the rise of generative artificial intelligence (GenAI) [7]. Tools such as ChatGPT, Claude, DeepSeek, Gemini, Perplexity, LLaMA, and Google Med-PaLM are becoming increasingly embedded in the academic routines of medical students [8,9]. From supporting clinical reasoning and diagnostic processes to assisting with critical essay writing and literature reviews, these platforms are rapidly reshaping the landscape of medical education [10].

This manuscript does not present original data but offers a critical synthesis of current evidence and theoretical perspectives on the role of GenAI in medical education. It explores key issues such as the balance between technological assistance and reflective thinking, the role of faculty guidance, and the ethical implications of artificial intelligence (AI) use. Rather than providing definitive answers, this paper aims to inform pedagogical discussion and support future curricular development.

The aim of this paper is to critically examine the dual impact of GenAI on critical thinking in medical education, exploring both its potential to enhance cognitive skills and the risks it poses to cognitive autonomy.


GenAI is a subfield of artificial intelligence that employs advanced machine learning models to generate humanlike language [10]. Large language models such as ChatGPT, Gemini, Claude, LLaMA, and Mistral are built on transformer architectures that leverage self-attention mechanisms to evaluate the contextual relevance of words within a sequence [11]. Trained on massive datasets, these models can generate coherent, context-sensitive, and semantically rich responses across a wide range of tasks [12].
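To illustrate the self-attention computation described above, the following NumPy sketch implements single-head scaled dot-product attention on a toy sequence; the dimensions and random weights are purely illustrative and bear no relation to any production model.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a sequence of token embeddings X.

    Each token's query is compared against every token's key, so the
    output at one position is a context-weighted mix of all values --
    this is how the model weighs the contextual relevance of words.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens to queries/keys/values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # pairwise relevance, scaled for stability
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V                         # context-sensitive token representations

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                        # toy sizes: 5 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(scaled_dot_product_attention(X, Wq, Wk, Wv).shape)  # (5, 8)
```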

Potential Cognitive Enhancer

GenAI tools have rapidly permeated educational ecosystems, often outpacing the development of appropriate regulations, curricular integration, and institutional guidelines. In medical education—a field characterized by dense curricula and high-performance pressure—these tools offer notable advantages: instant support for complex inquiries, rapid summarization of extensive content, clinical case simulations, and formative feedback [13]. Additionally, GenAI significantly enhances the presentation of assignments, essay quality, and research processes, streamlining academic productivity [14].

Threat to Cognitive Autonomy

Despite its potential benefits, concerns persist regarding GenAI’s impact on students’ cognitive autonomy and critical thinking in medical education. Although some evidence points to enhancements in specific dimensions of cognition—particularly when GenAI is used within well-designed instructional frameworks—other studies highlight cognitive stagnation or even deterioration when AI tools are used without pedagogical scaffolding [15-17].

Cognitive autonomy refers to the learner’s capacity to make independent judgments, analyze information critically, and regulate decisions based on internal reasoning rather than external cues [18]. In medical education, fostering cognitive autonomy is essential to developing safe, competent professionals capable of making sound clinical decisions.

This concept aligns with the principles of self-regulated learning, defined as the extent to which students are metacognitively, motivationally, and behaviorally involved in their own learning process [19]. Self-regulated learning promotes autonomy, sustained engagement, and strategic thinking, all of which are central to critical reasoning in medical education.

However, uncritical or habitual reliance on GenAI tools may lead to cognitive offloading, wherein learners increasingly delegate analytical and creative tasks to external systems. This behavior may erode internal reasoning structures and reduce opportunities for deliberate practice and error-based learning—both core elements of clinical expertise. Without proper supervision or contextual understanding, students may also internalize biased, inaccurate, or hallucinated content [12,20,21]. Concerns extend beyond performance metrics. In a study of younger learners, Gerlich [22] found a significant negative correlation between frequent GenAI use and critical thinking scores, with cognitive offloading identified as a mediating factor.

These divergent findings suggest that the effectiveness of GenAI in supporting—or impairing—critical thinking among medical students is likely influenced by several variables, including tool design, task complexity, and the degree of faculty oversight [15,23].

Moreover, ethical risks such as plagiarism, dependency, or erosion of academic authorship may further compromise learners' sense of intellectual agency and accountability. In such cases, GenAI shifts from being a support tool to a potential threat to students' development as independent thinkers and professionals [24,25].

These conceptual risks are further supported by empirical and theoretical analyses (Table 1), which emphasize the negative impact of unsupervised GenAI use on learners’ cognitive engagement and higher-order reasoning skills.

Table 1. Representative studies illustrating the cognitive risks and benefits of generative artificial intelligence in educational settings.
| Study | Study design, population | Key outcome | Relevance to our argument |
| --- | --- | --- | --- |
| Zhai et al [21] | Systematic review (18 studies) | Overreliance on AI^a correlated with reduced problem-solving ability (effect size = −0.41) and increased cognitive passivity in 78% of the studies | Supports cognitive autonomy risks with unsupervised use |
| Gonsalves [26] | Theoretical analysis | GenAI^b use disproportionately benefits lower-order Bloom’s skills (recall/comprehension) while weakening evaluation/creation without guided reflection | Explains why critical thinking erodes without design safeguards |
| Zhou et al [15] | Mixed methods, 325 graduate students | AI self-regulation (eg, bias-checking prompts) mediated 29% of the critical thinking gains; no gains occurred without this training | Validates digital literacy as a prerequisite for AI integration |
| Gerlich [22] | Cross-sectional survey, 712 high school and early university students | Significant negative correlation between GenAI usage frequency and critical thinking scores; cognitive offloading was a mediating factor | Demonstrates how habitual GenAI use undermines critical thinking via cognitive offloading, especially in younger learners |

^a AI: artificial intelligence.

^b GenAI: generative artificial intelligence.


Although the current literature remains limited, recent empirical studies offer a more nuanced understanding of GenAI’s measurable cognitive effects in real-world medical education settings. Roos et al [27] conducted a large-scale study among German medical students, comparing the performance of GPT-4, Bing, and GPT-3.5 on items from the 2022 German Medical State Examinations. GPT-4 and Bing significantly outperformed students, achieving correct response rates of 88.1% and 86%, respectively, compared to 74.6% among the students. These results suggest considerable potential for AI-assisted preparation tools, although the study did not assess higher-order cognitive outcomes.

Regarding learning outcomes, Sakelaris et al [28] found no statistically significant differences in the exam scores between students who used GenAI tools (primarily ChatGPT) for studying and those who did not, indicating a limited direct impact on knowledge acquisition and academic performance in preclinical settings. Nonetheless, broader concerns regarding cognitive offloading and critical thinking persist.

In a separate study, Güvel et al [29] compared the performance of GenAI tools—including ChatGPT-4o, Gemini, and Claude—with that of human experts in generating case-based rational pharmacotherapy questions. Although AI-generated items showed comparable discrimination indices and correct answer rates (with Claude producing the highest proportion of error-free items—12 out of 20—and ChatGPT generating the fewest unusable items—5 out of 20), expert validation remained essential to eliminate flawed or unsuitable content.

More instructively, 2 recent randomized controlled trials clarify how instructional design mediates GenAI’s impact on critical thinking and engagement. Shalong et al [30] demonstrated that LearnGuide, a ChatGPT-based facilitator, improved scores on the Cornell Critical Thinking Test (+7.11; P<.001), self-directed learning (+4.15; P=.01), and cognitive flow in problem-based learning environments. These gains were sustained at a 14-week follow-up, highlighting the potential of structured, reflective GenAI use.

Conversely, Çiçek et al [31] found that ChatGPT-generated feedback did not significantly improve critical thinking and was inferior to expert-written feedback in managing complex diagnostic tasks. Nonetheless, students who later learned of the AI involvement exhibited heightened critical awareness, suggesting that transparency and reflection can positively shape digital literacy attitudes (P<.001).

Taken together, these findings underscore the context-dependent nature of GenAI’s impact and reinforce the importance of pedagogical scaffolding, expert supervision, and reflective engagement to preserve cognitive autonomy in medical education settings (Table 2).

Table 2. Comparative summary of generative artificial intelligence–assisted educational interventions and their impact on learning and critical thinking in medical education.
| Study | Study design | Population | GenAI^a tool assessed | Main outcome | Implication |
| --- | --- | --- | --- | --- | --- |
| Roos et al [27], 2023 | Comparative performance study | German medical students | GPT-4, Bing, GPT-3.5 | GPT-4 and Bing outperformed students in knowledge recall | High potential for AI^b-assisted test preparation |
| Sakelaris et al [28], 2025 | Preclinical study (observational) | Medical students (n=38) | ChatGPT | No significant difference in exam scores with AI usage | Limited direct academic benefit in preclinical context |
| Güvel et al [29], 2025 | Tool validation with expert comparison | Medical students (n=103) | ChatGPT-4o, Gemini, Claude | AI-generated items needed expert validation | AI tools can assist item generation but require expert oversight |
| Çiçek et al [31], 2025 | RCT^c | Medical students (n=129) | ChatGPT-generated feedback | Expert feedback outperformed AI in complex cases; AI raised critical awareness | Transparency and complexity shape AI’s effectiveness |
| Shalong et al [30], 2025 | RCT | Medical students (n=103) | LearnGuide (ChatGPT-based tool) | Improved self-directed learning, critical thinking, and engagement with AI tool | Structured integration enhances learning outcomes |

^a GenAI: generative artificial intelligence.

^b AI: artificial intelligence.

^c RCT: randomized controlled trial.


In response to the cognitive limitations of GenAI, reasoning AI has emerged as a complementary paradigm [22]. While GenAI is known for its linguistic fluency and pattern recognition capabilities, reasoning AI systems are designed to emulate structured, goal-directed thinking by integrating logical inference, sequential problem-solving, and cognitive reasoning frameworks into their architecture [32]. These systems are typically built on neurosymbolic models that combine deep learning with symbolic reasoning, enabling them to construct traceable logical chains rather than merely predicting the next plausible output [33].

Although GenAI can generate broad lists of potential diagnoses, it often fails to establish step-by-step reasoning that accurately links clinical signs and laboratory findings. In contrast, a reasoning AI system could model diagnostic probabilities using Bayesian networks, for example, correlating an elevated serum potassium level, peaked T waves, and angiotensin-converting enzyme inhibitor use to support a diagnosis of drug-induced hyperkalemia [34].
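A minimal sketch of the kind of probabilistic update such a Bayesian diagnostic model performs is shown below; the prior and likelihoods are invented for illustration and would, in a real system, be estimated from clinical data rather than set by hand.

```python
def bayes_update(prior, likelihoods):
    """Naive Bayes update: combine independent findings into a posterior.

    prior: P(diagnosis) before seeing any evidence.
    likelihoods: list of (P(finding | diagnosis), P(finding | no diagnosis)).
    """
    odds = prior / (1 - prior)
    for p_given_dx, p_given_not_dx in likelihoods:
        odds *= p_given_dx / p_given_not_dx  # multiply in each likelihood ratio
    return odds / (1 + odds)

# Illustrative (made-up) numbers for ACE inhibitor-induced hyperkalemia:
prior = 0.05                  # assumed baseline prevalence in this setting
evidence = [
    (0.90, 0.05),  # elevated serum potassium
    (0.60, 0.02),  # peaked T waves on ECG
    (0.70, 0.20),  # patient is taking an ACE inhibitor
]
print(f"posterior = {bayes_update(prior, evidence):.3f}")
```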

Reasoning AI tools such as neurosymbolic workstations or diagnostic decision-support systems based on graphical models can be integrated into emerging educational approaches such as problem-based learning. In this context, they guide students through structured reasoning processes. For example, “Given this patient’s vital signs and laboratory values, propagate the data through an acid–base physiological model, then adjust the anion gap calculation step by step.” This scaffolding reinforces explicit clinical inference across a range of instructional scenarios [34,35].
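To make this scaffolding concrete, the following sketch walks through the anion gap step that the example prompt refers to; the formulas are the standard teaching ones (serum anion gap and albumin correction), while the laboratory values and the >12 mEq/L cutoff are illustrative teaching values, not clinical guidance.

```python
def anion_gap(na, cl, hco3):
    """Standard serum anion gap: Na+ - (Cl- + HCO3-), in mEq/L."""
    return na - (cl + hco3)

def albumin_corrected_gap(gap, albumin_g_dl):
    """Adjust the gap upward by ~2.5 mEq/L per 1 g/dL of albumin below 4."""
    return gap + 2.5 * (4.0 - albumin_g_dl)

# Worked example with illustrative labs:
na, cl, hco3, albumin = 138, 100, 14, 2.8
gap = anion_gap(na, cl, hco3)                    # step 1: raw gap = 24
corrected = albumin_corrected_gap(gap, albumin)  # step 2: correct for hypoalbuminemia = 27.0
print(f"raw gap = {gap}, corrected = {corrected}")
print("high anion gap acidosis" if corrected > 12 else "normal gap")
```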

Although GenAI may stimulate early stages of critical thinking by proposing hypotheses or generating broad explanations, reasoning AI requires disciplined, verifiable reasoning pathways—bringing learners closer to the structured deductive logic required in clinical practice. In curricular terms, an integrated model might begin with GenAI-driven brainstorming (eg, list all possible causes of chest pain) and progress to decision-analysis modules powered by reasoning AI (eg, apply a decision-tree model to differentiate myocardial infarction from pulmonary embolism) [35]. This would ultimately support a complementary AI integration model aimed at strengthening logical, structured reasoning in the service of more reflective, accurate clinical decision-making.
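As a hedged illustration of such a decision-analysis module, the toy decision tree below separates the two diagnoses with explicit, inspectable branches; the features, thresholds, and rules are fabricated for teaching purposes and are not clinical criteria.

```python
from dataclasses import dataclass

@dataclass
class ChestPainCase:
    st_elevation: bool        # ECG ST-segment elevation
    troponin_elevated: bool   # cardiac biomarker
    d_dimer_elevated: bool    # thrombosis marker
    hypoxia: bool             # low oxygen saturation

def differentiate(case: ChestPainCase) -> str:
    """Toy decision tree separating MI from PE; each branch is an explicit,
    inspectable reasoning step, unlike a free-text GenAI answer."""
    if case.st_elevation or case.troponin_elevated:
        return "favor myocardial infarction: ischemic ECG/biomarker evidence"
    if case.d_dimer_elevated and case.hypoxia:
        return "favor pulmonary embolism: thrombosis marker plus hypoxia"
    return "indeterminate: escalate to imaging and expert review"

print(differentiate(ChestPainCase(False, False, True, True)))
```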


Barriers to Safe Integration

Critical thinking is not simply a function of information access; it stems from confronting uncertainty, evaluating conflicting hypotheses, and making reasoned decisions, often under pressure [36]. In medical training, these are not optional skills but core competencies [37,38]. The concern, then, is not that medical students are using GenAI as part of their learning process but that they may delegate essential cognitive tasks to these tools without accompanying reflective processing [3].

This risk is especially acute in environments where students are not trained to critically engage with AI-generated outputs [21,23]. When AI becomes a substitute for expertise rather than a complement, students may forgo the processes that foster critical reasoning skills foundational to safe clinical practice [39]. If today’s and tomorrow’s medical students are not equipped to question information—regardless of its source—the integrity of their clinical judgment will be compromised [40,41].

Principles for Ethical Implementation

Although some GenAI tools have shown promise in guided reasoning tasks, others still lack the contextual sensitivity and ethical awareness necessary to manage complex, atypical, or morally nuanced clinical scenarios [32,42]. Overreliance on such tools can result in the overestimation of their accuracy and reliability, particularly when design features are opaque or fail to include safeguards that alert users—such as medical students—to potential or common errors [26]. In a systematic review, Zhai et al [21] reported that excessive dependence on AI was associated with a significant reduction in problem-solving ability (effect size = −0.41) and cognitive passivity in 78% of the studies analyzed.

The ethical integration of GenAI into medical education demands a multidimensional approach that includes pedagogical scaffolding, algorithmic transparency, and robust institutional oversight. Đerić et al [43] highlight critical ethical domains for GenAI use in higher education, including copyright and authorship, transparency, user accountability, and academic integrity. These areas must be explicitly incorporated into curriculum design and professional conduct training to prevent the normalization of ethically ambiguous practices.

Faculty should play an active role in cultivating students’ critical engagement with GenAI. This involves teaching students to scrutinize AI-generated outputs, identify algorithmic biases (eg, through prompt engineering tasks such as “list the limitations of this AI-generated differential diagnosis”), and cross-check information against peer-reviewed sources [44]. Equally essential is the promotion of ethical literacy, especially among undergraduate medical students, whose understanding of academic responsibility and professional standards has been shown to be lower than that of faculty and researchers within higher education settings [43].
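As one way to operationalize such a prompt engineering task in a classroom exercise, the following hypothetical Python script uses the OpenAI client to have a model critique an AI-generated differential; the model choice, prompts, and three-step workflow are assumptions for illustration, not a validated teaching tool.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

differential = "1. Acute coronary syndrome 2. GERD 3. Panic attack"

# Step 1: students obtain an AI-generated artifact (here, a canned differential).
# Step 2: the same system is asked to critique its own output.
audit_prompt = (
    "List the limitations, likely algorithmic biases, and missing "
    f"diagnoses in this AI-generated differential for chest pain:\n{differential}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": audit_prompt}],
)

# Step 3: students cross-check each claimed limitation against
# peer-reviewed sources before accepting or rejecting it.
print(response.choices[0].message.content)
```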


Curriculum Innovation

Although the evidence remains mixed regarding its influence on higher-order cognitive skills such as critical thinking, the integration of technology into modern curricula appears both inevitable and essential to prepare students for the demands of 21st-century education [45]. Faculty members must go beyond basic familiarity and proactively embed GenAI into their teaching strategies. This involves promoting active engagement with the tools [39,46,47], encouraging students to question outputs, identify biases, validate sources, and reflect critically on AI-generated information, so that integration strengthens, rather than diminishes, cognitive autonomy [48]. This multidimensional relationship is summarized in Figure 1, which outlines the key variables that mediate whether GenAI enhances or undermines cognitive autonomy in medical education.

Figure 1. Conceptual model of the impact of GenAI on critical thinking in medical education. (A) Potential cognitive enhancer: GenAI can provide rapid scientific summaries, synthesis of complex information, instant support for inquiries, assistance in academic writing, and formative feedback. These applications can enhance reasoning AI by supporting clinical reasoning and, when guided by faculty and institutional oversight, contribute to curriculum integration. (B) Threat to cognitive autonomy: overreliance on GenAI may foster mental passivity, weak reasoning structures, and exposure to biased or false information. This creates barriers to safe integration, including compromised autonomy, absent reflective processing, judgment substitution, intellectual passivity, and hidden risks, and raises ethical concerns such as the need for algorithmic transparency, pedagogical scaffolding, institutional oversight, cross-validation, bias identification, and digital literacy. (C) Future directions: the safe and effective adoption of GenAI in medical education requires the integration of immersive technologies, development of digital literacy as a core competency, and promotion of a new educational paradigm firmly rooted in human judgment. AI: artificial intelligence; GenAI: generative artificial intelligence.

Universities bear the institutional responsibility of evaluating the quality, accuracy, and ethical implications of GenAI tools before their widespread adoption. As personalized tutors or virtual classroom assistants, GenAI systems can support adaptive learning paths and formative assessment. However, this requires clear transparency in model design, strong data governance protocols, and continuous oversight by academic staff to avoid misinformation or algorithmic bias [49]. Zhou et al [15] demonstrated that 29% of AI’s critical thinking benefits depend on self-regulation training—validating digital literacy as a curricular imperative.

Emerging Synergies: GenAI + Immersive Technology

Crucially, the future of GenAI in medical education must also embrace the expanding potential of immersive technologies. Virtual learning environments, augmented reality, and extended reality platforms, when combined with GenAI, can simulate clinical encounters, anatomy labs, or public health scenarios—offering experiential, safe, and scalable training modalities. These immersive spaces enhance engagement, reduce learning gaps, and prepare students for complex decision-making under uncertainty [50,51].

To ensure ethical and effective use, digital literacy should become a foundational component of medical training [52]. This includes prompt engineering, AI output interpretation, critical appraisal of algorithmic content, and bias recognition. Without these competencies, students risk becoming passive consumers of AI output rather than critical users capable of navigating complex health information landscapes [53,54].

Ultimately, critical thinking will not emerge from an algorithmic echo chamber but through deliberate practice, intellectual curiosity, and a curriculum designed to blend human judgment with machine efficiency. The opportunity lies not in resisting AI integration but in designing an educational paradigm in which GenAI enhances, rather than replaces, clinical reasoning and ethical decision-making.


Limitations

This manuscript presents a theoretical perspective and does not include primary data collection, code generation, or statistical modeling. It is also important to note that the development of this manuscript did not follow a systematic search strategy, which introduces a potential risk of selection bias and the omission of relevant studies. Nevertheless, the analysis is grounded in a critical synthesis of publicly available scientific literature, drawing exclusively from peer-reviewed studies indexed in major academic databases such as PubMed, Scopus, and Web of Science, intentionally selected for their relevance and impact.

Another important limitation is that the cognitive effects of the GenAI tools analyzed are highly dependent on the educational contexts in which they were evaluated. Therefore, the conclusions should be interpreted with caution. The generalizability of the findings is constrained by the heterogeneity of learner populations, curricular frameworks, and levels of pedagogical oversight across the reviewed studies.

Consequently, we propose several future research directions. First, controlled comparative trials are needed to assess the differential impact of GenAI versus reasoning AI systems on the development of critical thinking. Second, it is essential to design and validate instruments capable of measuring cognitive autonomy in AI-mediated learning environments. Finally, ethical and digital literacy interventions should be developed and implemented for both students and educators, aiming to mitigate risks of overreliance and to promote reflective, context-sensitive use of these technologies in medical education.


Conclusions

The emergence of GenAI in medical education represents a profound inflection point—rich in potential, yet fraught with risk. These tools can support overwhelmed learners and educators, personalize instruction, and facilitate the development of cognitive skills. However, they are not pedagogically neutral. Without intentional design and critical engagement, GenAI may erode the very attributes of clinical judgment, ethical reasoning, and intellectual autonomy that define competent physicians.

The future of medical education lies not in rejection or blind adoption, but in thoughtful, ethically grounded integration. This requires digital literacy training, faculty-mediated scaffolding, and curricular frameworks that reinforce reflective reasoning. GenAI should be viewed not as a threat or a panacea but as a catalyst that exposes both the strengths and vulnerabilities of the current educational models. The challenge ahead is to prepare physicians who are not only technologically fluent but also critically empowered. That—more than any algorithm—will shape the future of care.

Authors' Contributions

Conceptualization: JSIC

Methodology: JSIC, MAI, ATDlT

Resources: JSIC, MAI, ATDlT, FB, EOP

Software: JSIC, MAI, EOP

Validation: JSIC, FB, EOP

Analysis: JSIC, MAI

Visualization: JSIC, MAI, ATDlT, EOP

Writing—original draft: JSIC, MAI, ATDlT

Writing—review and editing: JSIC, FB, EOP

All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest

None declared.

  1. Araújo B, Gomes SF, Ribeiro L. Critical thinking pedagogical practices in medical education: a systematic review. Front Med (Lausanne). 2024;11:1358444. [FREE Full text] [CrossRef] [Medline]
  2. Châlon B, Lutaud R. Enhancing critical thinking in medical education: a narrative review of current practices, challenges, and future perspectives in context of infodemics. La Presse Médicale Open. 2024;5:100047. [CrossRef]
  3. Kaur M, Mahajan R. Inculcating critical thinking skills in medical students: ways and means. Int J Appl Basic Med Res. 2023;13(2):57-58. [FREE Full text] [CrossRef] [Medline]
  4. Lewis K, Popov V, Fatima S. From static web to metaverse: reinventing medical education in the post-pandemic era. Annals of Medicine. Jan 24, 2024;56(1):1-20. [CrossRef]
  5. Ruiz JG, Mintzer MJ, Leipzig RM. The impact of e-learning in medical education. Acad Med. Mar 2006;81(3):207-212. [CrossRef] [Medline]
  6. Izquierdo-Condoy JS, Arias-Intriago M, Nati-Castillo HA, Gollini-Mihalopoulos R, Cardozo-Espínola CD, Loaiza-Guevara V, et al. Exploring smartphone use and its applicability in academic training of medical students in Latin America: a multicenter cross-sectional study. BMC Med Educ. Nov 30, 2024;24(1):1401. [FREE Full text] [CrossRef] [Medline]
  7. Gehrman E. How generative AI is transforming medical education. Harvard Medicine. 2024. URL: https://magazine.hms.harvard.edu/articles/how-generative-ai-transforming-medical-education [accessed 2025-04-19]
  8. Boubker O. From chatting to self-educating: can AI tools boost student learning outcomes? Expert Systems with Applications. Mar 15, 2024;238:121820. [CrossRef]
  9. Busch F, Hoffmann L, Truhn D, Ortiz-Prado E, Makowski MR, Bressem KK, et al. COMFORT Consortium. Global cross-sectional student survey on AI in medical, dental, and veterinary education and practice at 192 faculties. BMC Med Educ. Sep 28, 2024;24(1):1066. [FREE Full text] [CrossRef] [Medline]
  10. Aydin O, Karaarslan E, Erenay F, Bacanin N. Generative AI in academic writing: a comparison of DeepSeek, Qwen, ChatGPT, Gemini, Llama, Mistral, and Gemma. SSRN. Jan 04, 2025:1-24. [CrossRef]
  11. Li J, Zhang M, Li N, Weyns D, Jin Z, Tei K. Generative AI for self-adaptive systems: state of the art and research roadmap. ACM Trans Auton Adapt Syst. Sep 30, 2024;19(3):1-60. [CrossRef]
  12. Janumpally R, Nanua S, Ngo A, Youens K. Generative artificial intelligence in graduate medical education. Front Med (Lausanne). 2024;11:1525604. [FREE Full text] [CrossRef] [Medline]
  13. Bahroun Z, Anane C, Ahmed V, Zacca A. Transforming education: a comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability. Aug 17, 2023:1-40. [CrossRef]
  14. Liu Y, Park J, McMinn S. Using generative artificial intelligence/ChatGPT for academic communication: students' perspectives. Int J App Linguistics. Jun 27, 2024;34(4):1437-1461. [CrossRef]
  15. Zhou X, Teng D, Al-Samarraie H. The mediating role of generative AI self-regulation on students' critical thinking and problem-solving. Education Sciences. Nov 27, 2024:1-14. [FREE Full text] [CrossRef]
  16. Essien A, Bukoye OT, O’Dea X, Kremantzis M. The influence of AI text generators on critical thinking skills in UK business schools. Studies in Higher Education. Feb 17, 2024;49(5):865-882. [CrossRef]
  17. Sardi J, Candra O, Yuliana D, Yanto D, Eliza F. How generative AI influences students’ self-regulated learning and critical thinking skills? a systematic review. International Journal of Engineering Pedagogy. Jan 10, 2025:94-108. [FREE Full text] [CrossRef]
  18. Kaur A, Noman M, Zhang S, Baafi MA. Examining the psychological constructs for independent learning of undergraduates in sino-foreign universities in China. International Journal of Chinese Education. Jun 10, 2025;14(2):2212585X251350418. [CrossRef]
  19. Panadero E. A review of self-regulated learning: six models and four directions for research. Front Psychol. 2017;8:422. [FREE Full text] [CrossRef] [Medline]
  20. Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Info Mgmt. Aug 2023;71:102642. [CrossRef]
  21. Zhai C, Wibowo S, Li LD. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn Environ. Jun 18, 2024;11(1):28. [CrossRef]
  22. Gerlich M. AI tools in society: impacts on cognitive offloading and the future of critical thinking. Societies. Jan 03, 2025;15(1):6. [CrossRef]
  23. Sauder M, Tritsch T, Rajput V, Schwartz G, Shoja M. Exploring generative artificial intelligence-assisted medical education: assessing case-based learning for medical students. Cureus. Jan 2024;16(1):e51961. [FREE Full text] [CrossRef] [Medline]
  24. Duah JE, McGivern P. How generative artificial intelligence has blurred notions of authorial identity and academic norms in higher education, necessitating clear university usage policies. IJILT. Feb 19, 2024;41(2):180-193. [CrossRef]
  25. Izquierdo-Condoy J, Vásconez-González J, Ortiz-Prado E. “AI et al.” The perils of overreliance on artificial intelligence by authors in scientific research. Clinical eHealth. Sep 17, 2024:133-135. [FREE Full text] [CrossRef]
  26. Gonsalves C. Generative AI’s impact on critical thinking: revisiting Bloom’s taxonomy. Journal of Marketing Education. Nov 23, 2024:1-16. [CrossRef]
  27. Roos J, Kasapovic A, Jansen T, Kaczmarczyk R. Artificial intelligence in medical education: comparative analysis of ChatGPT, Bing, and medical students in Germany. JMIR Med Educ. Sep 04, 2023;9:e46482. [FREE Full text] [CrossRef] [Medline]
  28. Sakelaris PG, Novotny KV, Borvick MS, Lagasca GG, Simanton EG. Evaluating the use of artificial intelligence as a study tool for preclinical medical school exams. J Med Educ Curric Dev. 2025;12:23821205251320150. [FREE Full text] [CrossRef] [Medline]
  29. Güvel MC, Kıyak YS, Varan HD, Sezenöz B, Coşkun, Uluoğlu C. Generative AI vs. human expertise: a comparative analysis of case-based rational pharmacotherapy question generation. Eur J Clin Pharmacol. Jun 2025;81(6):875-883. [CrossRef] [Medline]
  30. Shalong W, Yi Z, Bin Z, Ganglei L, Jinyu Z, Yanwen Z, et al. Enhancing self-directed learning with custom GPT AI facilitation among medical students: a randomized controlled trial. Med Teach. Jul 2025;47(7):1126-1133. [CrossRef] [Medline]
  31. Çiçek FE, Ülker M, Özer M, Kıyak YS. ChatGPT versus expert feedback on clinical reasoning questions and their effect on learning: a randomized controlled trial. Postgrad Med J. Apr 22, 2025;101(1195):458-463. [CrossRef] [Medline]
  32. Rodman A, Topol EJ. Is generative artificial intelligence capable of clinical reasoning? The Lancet. Mar 2025;405(10480):689. [CrossRef]
  33. Schwartzstein RM. Clinical reasoning and artificial intelligence: can AI really think? Trans Am Clin Climatol Assoc. 2024;134:133-145. [Medline]
  34. Potter L, Jefferies C. Enhancing communication and clinical reasoning in medical education: building virtual patients with generative AI. Future Healthcare Journal. Apr 2024;11:100043. [CrossRef]
  35. Koumpis A, Graefe ASL. Considerations on the basis of medical reasoning for the use in AI applications. Front Med (Lausanne). 2024;11:1451649. [FREE Full text] [CrossRef] [Medline]
  36. Southworth J. Bridging critical thinking and transformative learning: the role of perspective-taking. Theory and Research in Education. Apr 27, 2022;20(1):44-63. [CrossRef]
  37. Scott IA, Hubbard RE, Crock C, Campbell T, Perera M. Developing critical thinking skills for delivering optimal care. Intern Med J. Apr 2021;51(4):488-493. [CrossRef] [Medline]
  38. Stark M, Fins J. The ethical imperative to think about thinking: diagnostics, metacognition, and medical professionalism. Cambridge Quarterly of Healthcare Ethics. Jul 17, 2014:386-396. [CrossRef]
  39. Çela E, Fonkam M, Potluri R. Risks of AI-assisted learning on student critical thinking: a case study of Albania. International Journal of Risk and Contingency Management. Aug 2024;12(1):1-19. [CrossRef]
  40. Zayapragassarazan Z, Menon V, Kar S, Batmanabane G. Understanding critical thinking to create better doctors. Journal of Advances in Medical Education and Research. Apr 2016. URL: https://files.eric.ed.gov/fulltext/ED572834.pdf [accessed 2025-10-29]
  41. He Y, Du X, Toft E, Zhang X, Qu B, Shi J, et al. A comparison between the effectiveness of PBL and LBL on improving problem-solving abilities of medical students using questioning. Innovations in Education and Teaching International. Feb 10, 2017;55(1):44-54. [CrossRef]
  42. Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: a scoping review. International Journal of Medical Informatics. Mar 15, 2022;161:1-19. [CrossRef]
  43. Đerić E, Frank D, Vuković D. Exploring the ethical implications of using generative AI tools in higher education. Informatics. Apr 07, 2025;12(2):36. [CrossRef]
  44. Alam F, Lim MA, Zulkipli IN. Integrating AI in medical education: embracing ethical usage and critical understanding. Front Med (Lausanne). 2023;10:1279707. [FREE Full text] [CrossRef] [Medline]
  45. Md Sabri S, Ismail I, Annuar N, Abdul Rahman NR, Abd Hamid NZ, Abd Mutalib H. A conceptual analysis of technology integration in classroom instruction towards enhancing student engagement and learning outcomes. IJEPC. Sep 30, 2024;9(55):750-769. [CrossRef]
  46. Rajabi P, Taghipour P, Cukierman D, Doleck T. Unleashing ChatGPT's impact in higher education: student and faculty perspectives. Computers in Human Behavior: Artificial Humans. Aug 2024;2(2):100090. [CrossRef]
  47. Saenz AD, Mass General Brigham AI Governance Committee, Centi A, Ting D, You JG, Landman A, et al. Establishing responsible use of AI guidelines: a comprehensive case study for healthcare institutions. NPJ Digit Med. Nov 30, 2024;7(1):348. [FREE Full text] [CrossRef] [Medline]
  48. Choudhury A, Chaudhry Z. Large language models and user trust: consequence of self-referential learning loop and the deskilling of health care professionals. J Med Internet Res. Apr 25, 2024;26:e56764. [FREE Full text] [CrossRef] [Medline]
  49. Felzmann H, Fosch-Villaronga E, Lutz C, Tamò-Larrieux A. Towards transparency by design for artificial intelligence. Sci Eng Ethics. Dec 2020;26(6):3333-3361. [FREE Full text] [CrossRef] [Medline]
  50. Rashidian N, Giglio MC, Van Herzeele I, Smeets P, Morise Z, Alseidi A, et al. Effectiveness of an immersive virtual reality environment on curricular training for complex cognitive skills in liver surgery: a multicentric crossover randomized trial. HPB (Oxford). Dec 2022;24(12):2086-2095. [FREE Full text] [CrossRef] [Medline]
  51. Hemminki-Reijonen U, Hassan NMAM, Huotilainen M, Koivisto J, Cowley BU. Design of generative AI-powered pedagogy for virtual reality environments in higher education. NPJ Sci Learn. May 23, 2025;10(1):31. [FREE Full text] [CrossRef] [Medline]
  52. McDonald N, Johri A, Ali A, Collier AH. Generative artificial intelligence in higher education: evidence from an analysis of institutional policies and guidelines. Computers in Human Behavior: Artificial Humans. Mar 2025;3:100121. [CrossRef]
  53. Rincón EHH, Jimenez D, Aguilar LAC, Flórez JMP, Tapia, Peñuela CLJ. Mapping the use of artificial intelligence in medical education: a scoping review. BMC Med Educ. Apr 12, 2025;25(1):526. [FREE Full text] [CrossRef] [Medline]
  54. Gordon M, Daniel M, Ajiboye A, Uraiby H, Xu NY, Bartlett R, et al. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Medical Teacher. Feb 29, 2024;46(4):446-470. [CrossRef]


AI: artificial intelligence
GenAI: generative artificial intelligence


Edited by A Coristine, T de Azevedo Cardoso; submitted 21.Apr.2025; peer-reviewed by V Jawa, Z Yu, KA Swygert, B Tasci; comments to author 22.May.2025; revised version received 03.Jun.2025; accepted 23.Sep.2025; published 03.Nov.2025.

Copyright

©Juan S Izquierdo-Condoy, Marlon Arias-Intriago, Andrea Tello-De-la-Torre, Felix Busch, Esteban Ortiz-Prado. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 03.Nov.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.