Review
Abstract
Background: Virtual patients are interactive digital simulations of clinical scenarios for the purpose of health professions education. There is no current collated evidence on the effectiveness of this form of education.
Objective: The goal of this study was to evaluate the effectiveness of virtual patients compared with traditional education, of virtual patients blended with traditional education, of virtual patients compared with other types of digital education, and of different design variants of virtual patients in health professions education. The outcomes of interest were knowledge, skills, attitudes, and satisfaction.
Methods: We performed a systematic review on the effectiveness of virtual patient simulations in pre- and postregistration health professions education, following Cochrane methodology. We searched 7 databases from the year 1990 up to September 2018 and applied no language restrictions. We included randomized controlled trials and cluster randomized trials. Two review authors independently selected studies, extracted data, and assessed risk of bias, and then compared the information in pairs. We contacted study authors for additional information where necessary. All pooled analyses were based on random-effects models.
Results: A total of 51 trials involving 4696 participants met our inclusion criteria. Of these, 25 studies compared virtual patients with traditional education, 11 investigated virtual patients as blended learning, 5 compared virtual patients with different forms of digital education, and 10 compared different design variants. The pooled analysis of studies comparing virtual patients with traditional education showed similar results for knowledge (standardized mean difference [SMD]=0.11, 95% CI −0.17 to 0.39, I²=74%, n=927) and favored virtual patients for skills (SMD=0.90, 95% CI 0.49 to 1.32, I²=88%, n=897). Studies measuring attitudes and satisfaction predominantly used surveys with item-by-item comparison. Trials comparing virtual patients with different forms of digital education and with design variants were too few to yield clear recommendations. Several methodological limitations in the included studies and heterogeneity contributed to a generally low quality of evidence.
Conclusions: Low to modest and mixed evidence suggests that when compared with traditional education, virtual patients can more effectively improve skills, and at least as effectively improve knowledge. The skills that improved were clinical reasoning, procedural skills, and a mix of procedural and team skills. We found evidence of effectiveness in both high-income and low- and middle-income countries, demonstrating the global applicability of virtual patients. Further research should explore the utility of different design variants of virtual patients.
doi:10.2196/14676
Keywords
Introduction
Background
Health care education is confronted with many global challenges. Shorter hospital stays, specialization of care, higher patient safety measures, and shortage of clinical teachers all diminish the traditional opportunities for the training of health professions through direct patient contact [ , ]. Early health professions education is often dominated by theoretical presentations with insufficient connection to clinical practice [ ]. The need to increase the numbers and quality of the health workforce is especially visible in low-and-middle-income countries, where the need to scale up high-quality health education and introduce educational innovations is most pressing [ ]. Therefore, the global medical education community is perpetually searching for methods that can be applied to improve the relevance, increase the spread, and accelerate the educational process for health professions [ ].

Digital education (often referred to as e-learning) is “the act of teaching and learning by means of digital technologies” [ ]. It encompasses a multitude of educational concepts, approaches, methods, and technologies. Digital health education comprises, for example, offline learning, mobile learning, serious games, or virtual reality environments. We have conducted this systematic review as part of a review series on digital health education [ - ] and focused it on the simulation modality called virtual patients.

Virtual patients are defined as interactive computer simulations of real-life clinical scenarios for the purpose of health professions training, education, or assessment [ ]. This broad definition encompasses a variety of systems that use different technologies and address various learning needs [ ]. The learner is cast in the role of a health care provider who makes decisions about the type and order of clinical information acquired, the differential diagnosis, and the management and follow-up of the patient. Virtual patients are hypothesized to primarily address learning needs in clinical reasoning [ , ]. However, an influence of the use of virtual patients on other educational outcomes has been reported in previous literature [ , ].

The educational use of virtual patients may be understood through experiential learning theory [ , ]. Following this theoretical model of action and reflection, virtual patients expose learners to simulated clinical experiences, providing mechanisms for information gathering and clinical decision making in a safe environment [ ]. Exposing the learner to many simulated clinical scenarios supports learning diagnostic processes [ ] while acquainting learners with a standardized set of clinical conditions that are common in the population but rare or inaccessible in highly specialized teaching hospitals [ ].

Some concerns have been raised about the educational use of virtual patients. Virtual patients should not replace but complement contact with real patients [ ]. There are concerns that the use of virtual patients may result in less empathic learners [ ]. The use of unfamiliar technology as part of virtual patient education can represent a barrier to learning, even for younger generations [ , ]. Virtual patients may also prove ineffective when teaching is driven by technological objectives instead of being motivated by learning needs [ ].

This virtual patient simulation review has been preceded by several narrative reviews [ , - ] and 2 systematic reviews with meta-analyses [ , ]. Our preliminary literature analysis showed that the number of studies including the term virtual patient or virtual patients in the MEDLINE database has more than doubled in comparison with the evidence available to the previous systematic reviews (searches run in February 2009 [ ] and July 2010 [ ]). Thus, our review updates the evidence base with studies not included in previous analyses.

Objectives
The objective of this review was to evaluate the effectiveness of virtual patient simulation for delivering pre- and postregistration health care professions education using the following comparisons:
- Virtual patient versus traditional education
- Virtual patient blended learning versus traditional education
- Virtual patient versus other types of digital education
- Virtual patient design comparison
By traditional education, we mean all nondigital educational methods. This includes lectures, reading exercises, group discussions in the classroom, and nondigital simulation such as standardized patients or mannequin-based training. Virtual patient blended learning is the addition of virtual patients as a supplement to traditional education, when the control intervention uses nondigital education methods only. Other types of digital education may include interventions such as video recordings, Web-based tutorials, or virtual classrooms.
We assessed the impact of virtual patient interventions on learners’ knowledge, skills, attitudes, and satisfaction. Our secondary objective was to assess the cost-effectiveness, patient outcomes, and adverse effects of these interventions.
Methods
Protocol and Registration
While conducting the review, we adhered to the Cochrane methodology [ ], followed a published protocol [ ], and presented results following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines [ ].

Eligibility Criteria
We included randomized controlled trials (RCTs) and cluster RCTs (cRCTs). We excluded crossover trials because of the high likelihood of carryover effect.
Participants in the included studies had to be enrolled in a pre- or postregistration health-related education or training program (see glossary in ). This included students from disciplines such as medicine, dentistry, nursing and midwifery, medical diagnostic and treatment technology, physiotherapy and rehabilitation, and pharmacy.

This review focused on screen-based virtual patient simulations that form a computerized, dynamically unfolding representation of patient cases. A virtual patient simulation is introduced by a case description and might contain answers given by the patient, clinical data (eg, laboratory results, medical images), and descriptions of patients’ signs and symptoms. Only representations of the patient as a whole were of interest, rather than studies that focused on single parts of the body. As a matter of policy followed in the Digital Health Education Collaboration [ ], and to avoid duplication of reviews, we deliberately excluded virtual patients in 3-dimensional (3D) virtual learning environments from this study. We judged that the higher level of immersion of learners in 3D virtual environments, combined with potential technical challenges (eg, difficulties in navigating such environments, lags because of increased computational time or limited internet bandwidth), was likely to influence the educational outcomes and therefore merited a separate analysis, which is already covered by the virtual reality review [ ] of this Digital Health Education Collaboration series. We also excluded virtual patient interventions that require nonstandard equipment (eg, haptic devices, mannequins) and virtual patients that are human controlled (eg, simulated email correspondence or chat room conversations). We excluded studies in which virtual patients were just a small part of the intervention and those in which the influence of virtual patients was not evaluated separately.

Furthermore, 2-arm RCTs comparing virtual patients with a control group not involved in any type of subject-related learning activity were not considered eligible, as previous meta-analyses have already shown a large positive effect when virtual patients are compared with no intervention [ ].

We introduced the comparison of virtual patient blended learning with traditional education as a consequence of the discussion in the community on the need to eliminate traditional types of learning activities to make space for virtual patients. For instance, Berman et al [ ] noticed that students’ subjective perceptions of learning effect and satisfaction with integration were lower at universities that increased the workload of students by adding virtual patients without releasing time resources in the curriculum. As most health professions education is conducted on campus, an integrated effect of virtual patients is possible. Blending virtual patients with traditional education is challenging and qualitatively different from a comparison against a nonintervention control group.

Eligible primary outcomes were students’ (1) knowledge, (2) skills, (3) attitudes, and (4) satisfaction, together representing clinical competencies measured post intervention with validated or nonvalidated instruments. Secondary outcomes were (1) economic cost and cost-effectiveness, (2) patient outcomes, and (3) observed adverse effects.
Search Methods for Identification of Studies
We searched the following 7 databases: MEDLINE (via Ovid), EMBASE (via Elsevier), The Cochrane Library (via Wiley), PsycINFO (via Ovid), Educational Resource Information Centre (ERIC; via Ovid), Cumulative Index to Nursing and Allied Health Literature (CINAHL; via EBSCO), and Web of Science Core Collection (via Thomson Reuters). We adapted the MEDLINE strategy and keywords presented in for use with each of the databases above. We searched the databases from the year 1990 to September 20, 2018, to highlight recent developments, and did not apply language restrictions. For all included studies, we searched reference lists and conducted author and citation searches. We also searched the lists of references of other relevant systematic reviews identified while running our electronic searches.

Data Collection and Analysis
Data Selection, Extraction, and Management
The search results were combined in a single EndNote library (version X7; Thomson Reuters) [ ]. Overall, 2 authors independently screened titles and abstracts to identify potentially eligible studies. In the next phase, full-text versions of these papers were retrieved, and 2 review authors independently assessed them against the eligibility criteria. We piloted data extraction to maximize consistency in the information extracted. Disagreements were resolved through discussion, and a third review author was consulted to arbitrate when differences in opinion arose. All relevant data were extracted using a structured form in Microsoft Excel. We contacted study authors for crucial missing information, particularly when it was required to judge inclusion criteria and study outcomes.

Data Items
Information was extracted from each included study on (1) the characteristics of study participants (field of study; stage of education: pre-/postregistration; year of study; and country where the study was conducted and its World Bank income category: high-income/low-and-middle-income country), (2) the type of outcome measure (type of tool used to measure the outcome and information on whether the tool was validated), (3) the type of virtual patient intervention (topic and language of the presented virtual patient simulations; information on whether the language of the virtual patients was native to the majority of participants; source of virtual patient simulations: internal/external; whether the study was an individual or group assignment, and in the case of group assignments, the number of students in a group; whether access to the virtual patient simulation was from home or in a computer laboratory; number of virtual patient cases presented; time when the virtual patients were available; and duration of use of virtual patients), and (4) the type of virtual patient system (name of the system; navigation scheme: linear, branched, or free access; control mechanism: menu-based, keyboard, or speech recognition; feedback delivery and timing; and whether video clips were included in virtual patient cases). A glossary of the terms used in the review may be found in .

Measures of Treatment Effect
We reported the treatment effects for continuous outcomes as mean values and SDs post intervention in each intervention group, along with the number of participants and P values. As the studies presented data measured with different tools, mean differences were recalculated into standardized mean differences (SMDs). We interpreted effect sizes as small (SMD=0.2), moderate (SMD=0.5), and large (SMD=0.8) [ ]. If studies had multiple arms and no clear main comparison, we compared the virtual patient intervention arm with the most common control arm, excluding nonintervention and mixed-intervention controls. If that was impossible to decide, we selected the least active control arm. If multiple outcomes in the same category (knowledge, skills, attitudes, and satisfaction) were reported, we selected the primary measure, and if that was impossible, we calculated the mean value of all measures. For papers that reported the median and range of outcomes, we converted these to mean and SD using the methods described by Wan [ ]. If a study did not report the SD but provided CIs, we estimated the SD from those using a method described in previous literature [ ].
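As an illustration of these computations, the following R sketch implements the standard formulas. The function names are ours, the estimator shown is one of several proposed by Wan et al, and the CI-to-SD conversion follows the Cochrane Handbook; this is a minimal sketch, not the review's actual analysis script.

```r
# Sketch of the effect size computations described above; names are illustrative.
pooled_sd <- function(n1, s1, n2, s2) {
  sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2))
}

# Cohen's d; meta-analysis software typically adds Hedges' small-sample correction.
smd <- function(m1, s1, n1, m2, s2, n2) {
  (m1 - m2) / pooled_sd(n1, s1, n2, s2)
}

# Wan et al: mean and SD estimated from median m and range [a, b] with sample size n.
mean_from_median <- function(a, m, b) (a + 2 * m + b) / 4
sd_from_range <- function(a, b, n) (b - a) / (2 * qnorm((n - 0.75) / (n + 0.25)))

# SD back-calculated from a 95% CI around a mean (Cochrane Handbook).
sd_from_ci <- function(lower, upper, n) sqrt(n) * (upper - lower) / 3.92
```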
Data Synthesis and Analysis

Owing to the significant differences between studies, we employed a random-effects model in the meta-analysis using Review Manager (version 5.3; The Nordic Cochrane Centre) [ ]. We displayed the results of the meta-analysis in forest plots and evaluated heterogeneity numerically using the I² statistic. For comparisons with more than 10 outcomes in the meta-analysis, we attempted a subgroup analysis. As the 15 subgroup analyses planned in the protocol [ ] did not explain the heterogeneity, we visualized the outcomes using albatross plots [ ]. These plots were implemented using a script created for the purpose of the study by one of the review authors (AK) in the statistical package R (version 3.4.3; R Foundation for Statistical Computing) [ ]. This explorative approach resulted in a new subgroup analysis in which we divided the control interventions into active (group discussion, mannequin-based simulation) and passive (lectures, reading assignments). Findings unsuitable for inclusion in a meta-analysis (eg, comparisons of individual items in surveys) were presented using a narrative synthesis.
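For readers who want to reproduce a pooled estimate of this kind outside Review Manager, a minimal sketch using the metafor package in R is shown below. The data frame trials and its column names are hypothetical, and method = "DL" (DerSimonian-Laird) is chosen because it mirrors the random-effects estimator used by Review Manager.

```r
library(metafor)

# Hypothetical per-study summary data: means, SDs, and group sizes post intervention.
trials <- data.frame(
  m1 = c(14.2, 9.8), sd1 = c(3.1, 2.4), n1 = c(40, 55),  # virtual patient arm
  m2 = c(12.9, 9.9), sd2 = c(3.0, 2.6), n2 = c(38, 57)   # traditional education arm
)

# Standardized mean differences (Hedges g) and their sampling variances.
dat <- escalc(measure = "SMD", m1i = m1, sd1i = sd1, n1i = n1,
              m2i = m2, sd2i = sd2, n2i = n2, data = trials)

# Random-effects pooling; DerSimonian-Laird mirrors Review Manager's estimator.
res <- rma(yi, vi, data = dat, method = "DL")
summary(res)  # pooled SMD, 95% CI, and I2 heterogeneity statistic
forest(res)   # forest plot of study-level and pooled effects
```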
Assessment of Risk of Bias

Two authors independently assessed the risk of bias using the Cochrane tool [ ]. We considered the following domains: random sequence generation, allocation sequence concealment, blinding of participants or personnel, blinding of outcome assessment, completeness of outcome data, selective outcome reporting, and other sources of bias (eg, differences in baseline evaluation, volunteer bias, commercial grants). For cRCTs, we also assessed the risk of the following additional biases: recruitment bias, baseline imbalance, loss of clusters, incorrect analysis, and comparability with individually randomized trials. Publication bias was difficult to investigate in a formal way because of the high levels of heterogeneity, which limit the interpretability of funnel plots.
Summary of Findings Tables

We prepared summary of findings tables to present the results of the meta-analysis [ ]. We presented the results for the major comparisons of the review and for each of the major primary outcomes. We applied the Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria to assess the quality of the evidence and downgraded the quality where appropriate [ ].

Results
Included Studies
Our searches yielded a total of 44,054 citations, and 51 studies with 4696 participants were included ( ). Overall, 2 reports described results already included in the review [ , ].

Types of Studies
All included studies were published in peer-reviewed journals. All included studies had an RCT design, with the exception of 3 cRCTs [ - ].

Types of Comparisons
A total of 25 studies compared virtual patients with traditional education [ - ], 11 compared a blend of virtual patients and traditional education with traditional education alone [ , , - ], 5 compared virtual patients with different forms of digital health education [ , - ], and 10 compared different types of virtual patient interventions [ - ].

The traditional education control group involved a reading assignment in 6 studies [ , , , - ]; a lecture [ , , , ], a group assignment [ , , , ], and mannequin-based training [ , , , ] in 4 studies each; and standardized patients [ ] and ward-based education [ ] in 1 study each. In 5 studies, the control intervention was a mix of different forms of traditional education (eg, lecture, small group assignment, and mannequin-based training) [ - , ].

The digital education control group was a Web tutorial or course in 2 studies [ , ], a video recording in 2 studies [ , ], and, in 1 study, a mix of traditional lectures and Web materials including video clips [ ].

Studies comparing different types of virtual patients contrasted a narrative with a problem-solving structure of virtual patients [ ]; virtual patients with and without usability enhancements [ ]; different forms of feedback in virtual patients [ , ]; worked versus unworked versions of virtual patients [ ]; self-determined versus mandatory access to virtual patients [ ]; virtual patient collections in which all cases were presented at once versus collections automatically activated and spaced in time [ ]; virtual patient solving versus virtual patient construction exercises [ ]; linear versus branched design of virtual patients [ ]; and, finally, the addition of representation scaffolding (see glossary in ) to virtual patients [ ].

Furthermore, 41 studies had 2 study arms (see the first table in ), 7 studies had 3 arms [ , , , - ], and 3 studies had 4 arms [ , , ].

Types of Participants
In total, 41 studies involved preregistered professionals (see the first table in ), 8 studies focused on postregistered participants [ , , , , , , , ], and 2 studies involved both pre- and postregistered participants [ , ].

In 37 out of 51 studies, participants were from the field of medicine. The studies from fields other than medicine were as follows: 6 studies in nursing [ , , , , , ]; 2 in pharmacy [ , ]; and 1 each in physical therapy [ ], osteopathic medicine [ ], and dentistry [ ]. In addition, 3 studies involved interprofessional education [ , , ].

A total of 44 out of 51 studies were conducted in high-income countries: 19 were from the United States (see the first table in ); 5 from Germany [ , , , , ]; 3 each from Australia [ , , ] and Sweden [ , , ]; 2 each from Canada [ , ], the Netherlands [ , ], and the United Kingdom [ , ]; and 1 study each was conducted in Belgium and Switzerland [ ], Denmark [ ], France [ ], Hong Kong [ ], Japan [ ], Poland [ ], Singapore [ ], and Slovenia [ ]. Of the 7 studies conducted in low-and-middle-income countries, 3 were from China [ , , ], 2 were from Colombia [ , ], and 1 each was from the Republic of South Africa [ ] and Iran [ ].

The technical characteristics of the virtual patient systems, the topics of the educational content presented, the applied instructional design methods, the settings of use, information on the validity of outcome measurement, and the educational theories applied in the included studies are presented in . The reasons for excluding studies following review of their full-text versions are summarized in .

Effects of Interventions
Knowledge
In total, 33 studies assessed knowledge outcomes. In all of these studies, knowledge was measured using paper-based tests (see the second table in ). In 19 studies, the test consisted of multiple-choice questions (MCQs). Other knowledge test designs contained multiple-response questions [ ], true/false questions [ ], and key feature format questions [ ]. In 4 studies, the participants had to formulate free-text answers [ , , , ]. In 4 studies [ , , , ], the knowledge tests comprised a mix of different formats. Li et al [ ] used a combination of multiple-choice and short answer questions; Miedzybrodzka et al [ ] used MCQs and modified essays; Harris et al [ ] applied MCQs with confidence levels combined with script concordance testing questions; and Braun et al [ ] used a test consisting of multiple-choice items, key feature problems, and problem-solving tasks. Secomb et al [ ] measured cognitive growth using a survey requiring selection of the most significant items regarding learning environment preferences. In 3 studies [ , , ], the nature of the knowledge test was unclear. In the case of MCQs in which the nature of the items was unclear or mixed, we classified the outcome as knowledge instead of, for example, clinical reasoning skills, although the borderline between the two was sometimes blurred. Meta-knowledge (eg, knowledge about the clinical reasoning process itself) was classified as a knowledge outcome following the framework by Kraiger [ ].

The effects of interventions on knowledge outcomes are summarized in the second table in .

Virtual Patient Versus Traditional Education
In 4 [ , , , ] of 18 studies comparing virtual patients with traditional education, the intervention resulted in more positive knowledge outcomes. In 2 of these, the control group attended a lecture [ , ], whereas in the remaining 2, students participated in a reading exercise [ , ]. In 1 study [ ], the control intervention arm (problem-based learning [PBL] small group discussion) had significantly better results than the virtual patient intervention (SMD=−0.65, 95% CI −1.02 to −0.28, P=.001). In the remaining 13 studies, the difference did not reach statistical significance (see the second table in ).

We excluded 2 studies [ , ] from our meta-analysis because of missing crucial outcome data. Jeimy et al [ ] presented the outcomes of a knowledge test compared item-by-item, and the study was therefore excluded from the meta-analysis. We also excluded 1 study [ ] owing to its outlier value of SMD=12.5, most likely the result of a reporting error, and excluded another study [ ] as we regarded meta-knowledge as very different from the other types of core knowledge outcomes.

The pooled effect for knowledge outcomes (SMD=0.11, 95% CI −0.17 to 0.39, I²=74%, n=927; ) suggests that virtual patient interventions are as effective as traditional education.

Virtual Patient Blended Learning Versus Traditional Education
In 4 [ , , , ] of the 5 studies comparing virtual patients as a supplement to traditional education in the domain of knowledge, the group with the additional resource scored better than the control group. Only in 1 study [ ] did the addition of virtual patients not lead to a statistically significant difference in knowledge outcomes (P=.11).

The pooled effect for knowledge outcomes (SMD=0.73, 95% CI 0.24 to 1.22, I²=81%, n=439; ) suggests a moderate effect favoring the blend of virtual patients with traditional education over traditional education alone.

Virtual Patient Versus Other Types of Digital Education
A total of 2 studies compared the difference in knowledge outcomes between virtual patients and other digital health education interventions. Courteille et al [ ] compared virtual patients with a video-recorded lecture, whereas Trudeau et al [ ] compared them with a static Web course. Neither comparison showed significant differences in knowledge outcomes.

Virtual Patient Design Comparison

In total, 8 studies focused on detecting differences between variants of virtual patient design in the domain of knowledge. Only in 1 study, by Friedman et al [ ], were the differences statistically significant. In this study, the pedagogic design of virtual patients was better than the problem-solving and high-fidelity designs (P<.01). Comparing linear and branched virtual patients [ ], scaffolded versus nonscaffolded [ ], worked versus unworked examples [ ], virtual patients with and without usability extensions [ ], self-determined versus mandatory integration [ ], spaced versus nonspaced release of cases [ ], and virtual patient solving versus virtual patient design exercises [ ] resulted in no significant differences in knowledge outcomes.

Skills
A total of 28 studies assessed skills outcomes (see the third table in ). Skills were assessed by performance on a mannequin in 9 studies [ , , , , , , , , ]; by performance on a live standardized patient in 8 studies [ , , , , , , , ]; and by performance on virtual patients [ ] and on real patients [ ] in 1 study each. In 6 studies, outcomes were measured by a written assignment involving the description of photographed clinical cases [ ] or radiographs [ ], carrying out and structuring a mental state examination based on videotaped material [ ], solving paper cases [ , ], and a modular paper-based test [ ]. In 2 studies [ , ], outcomes were measured by a mix of paper cases and virtual patients. Kumta et al [ ] combined computer-based assessment, an objective structured clinical examination (OSCE), and clinical examination of patients in the ward into 1 score.

The effects of interventions on skills outcomes are summarized in the third table in .

Virtual Patient Versus Traditional Education
In 9 of 14 studies comparing virtual patients with traditional education, the intervention resulted in better skills outcomes (see the third table in ). The virtual patient intervention showed larger effects than lectures [ , ], reading exercises [ - ], group discussions [ ], and activities comprising traditional methods, including lectures or hands-on training with mannequins [ - ]. The skills that improved were clinical reasoning [ , , , ], procedural skills [ , - ], and a mix of procedural and team skills [ ].

We did not include in the meta-analysis 2 studies with incompletely reported data [ , ]. We also excluded skills outcomes from the study by Haerling [ ], as these were available for a randomly selected subgroup only, and from Wang et al [ ], as they were measured for teams of students only and not individually.

The pooled effect on skills outcomes was SMD=0.90 (95% CI 0.49 to 1.32, I²=88%, n=897; ). Overall, this suggests that virtual patients have moderate to large positive effects on the investigated types of skills in comparison with traditional education.

Virtual Patient Blended Learning Versus Traditional Education
In 3 [ , , ] out of 7 studies, the groups using virtual patient blended learning scored better than the control group in the skills domain. Lehmann et al [ ] demonstrated significantly improved procedural skills (P<.001), whereas Weverling et al [ ] reported improved clinical reasoning skills (P<.001) and Schittek Janda et al [ ] improved communication skills (P<.01). Furthermore, 2 studies [ , ] involving nursing students showed no significant difference: the study by Bryant et al [ ] evaluated communication skills (P=.38), whereas Gu et al [ ] measured procedural skills (P>.05). We excluded 3 studies [ , , ] from the meta-analysis because of insufficient data provided in the report or item-by-item comparison of a skills checklist.

The pooled effect for skills outcomes (SMD=0.60, 95% CI −0.07 to 1.27, I²=83%, n=247; ) suggests that virtual patients blended with traditional education have moderate positive effects in comparison with traditional education alone, although the CI includes the possibility of no effect.

Virtual Patient Versus Other Types of Digital Education
Out of the 3 studies comparing skills outcomes between virtual patients and other types of digital education, only Kumta et al [ ] showed a significant difference (P<.001). In this study, virtual patients were better than a range of different traditional teaching methods supplemented by Web content that included video clips, PowerPoint presentations, digital notes, and handouts. The target outcomes were clinical skills assessed by OSCE stations and examination of patients in the wards. In the study by Dankbaar et al [ ], virtual patients were not significantly better at teaching a procedural skill than an electronic module alone (P>.05). Finally, in the study by Foster et al [ ], virtual patients showed no significant difference when compared with video recordings in teaching communication skills (P>.05).

Virtual Patient Design Comparison

Of the 4 studies that compared the influence of different virtual patient designs on skills outcomes, Foster et al [ ] showed that virtual patients with empathic feedback were significantly better at training communication skills than virtual patients without feedback (P<.03). In the study by Bearman et al [ ], narrative virtual patients were significantly better than problem-solving virtual patients in conveying communication skills (P=.03). In a study by Braun et al [ ], the addition of representational scaffolding to a virtual patient intervention significantly improved diagnostic efficiency (P=.045). Finally, in the study by Tolsgaard et al [ ], there was no significant difference in integrated clinical performance between students who constructed and students who solved virtual patients (P=.54).

Attitudes
A total of 11 studies reported attitudinal outcomes (see the fourth table in ). The attitudes related to confidence, preparedness, comfort, self-efficacy, and perceived ability in topics such as history taking and clinical breast examination [ ], diagnostic and management abilities [ ], contrast reaction management and teamwork [ ], ethical, legal, and communication issues [ , ], opioid therapy [ ], cultural competence [ ], procedural knowledge in pediatric basic life support [ ], performing pharmacy triage [ ], caring for patients with distress disorders [ ], and anxiety [ ].

The effects of interventions on attitudinal outcomes are presented in the fourth table in . Furthermore, 3 studies presented pooled scores on students’ self-assessment. In the study by Lehmann et al [ ], students felt more confident in their knowledge and skills in performing pediatric basic life support with additional access to virtual patients that supplemented their traditional course (P<.001). There were no significant differences in the remaining 2 studies, which focused on communication-related self-efficacy [ ] and attitudes related to opioid therapy [ ].

In the study by Williams et al [ ], more of the items related to self-assessment of competences (in dealing with ethical aspects and managing anxiety) were scored lower in the virtual patient group than in the traditional education groups. There were no differences in the analyzed items related to attitudes in 4 studies [ , , , ]. In the study by Smith et al [ ], the results regarding attitudes toward clinical cultural competence were presented separately for bilingual and English-speaking students, which makes them difficult to aggregate without knowing the number of bilingual students in each study group. However, the authors’ descriptive conclusion was that general cultural competence measures were the same for the virtual patient and control groups. In 2 studies [ , ], the results were compared item-by-item and only within the groups (pre-/posttest), not between the study groups.

Satisfaction
In total, 17 studies measured satisfaction with an intervention (see the fifth table in ). All outcomes in this category were measured by satisfaction questionnaires. Different facets of satisfaction were measured, which we classified into the following 5 dimensions: general impression (global score or willingness to recommend), comfort in use (learning style preference, engagement or motivation, positive climate or safety, and enjoyment or pleasure), integration in curriculum (time constraints, relevance, and level of difficulty), academic factors (feedback quality, structure, and clarity), and satisfaction with technical features (usability and information technology readiness).

In 4 out of the 17 studies evaluating the satisfaction of students receiving a virtual patient intervention, the result was presented as 1 aggregated score of several items. Furthermore, 3 of those studies compared different design variants of virtual patients. In the study by Friedman et al [ ], the pedagogic format (menus, guided) resulted in higher satisfaction scores than the high-fidelity (free text, unguided) format (P<.01). There was no statistically significant difference between virtual patients with and without usability enhancements [ ] (P=.13) or between solving and constructing virtual patients [ ] (P=.46). One study [ ] presented a comparison of virtual patients with mannequin-based training using a single score for student satisfaction and self-confidence in learning, showing no difference between the simulation modalities (P=.11).

In the remaining 13 out of 17 studies, the survey responses were compared item-by-item. In 4 studies, the majority of items indicated a preference for the virtual patient intervention in comparison with a lecture [ ], a reading assignment [ ], video-based learning [ ], and a Web tutorial [ ]. In 7 studies, most items were indifferent between the groups [ , , , , , , ]. In 1 study [ ], most items (5 out of 6) in a satisfaction survey were rated better in the mannequin-based training group than in the virtual patient group.

Secondary Outcomes
One study had cost-effectiveness as an outcome [ ]. In 9 studies, statements were made regarding the cost of the intervention, either monetary or in development time [ , , , - , , ]. Only 1 study provided numerical data for both types of intervention [ ]. The comparison was qualitative in 3 studies [ , , ]. In 5 studies, estimations of costs were made for the virtual patient group without contrasting them with the cost of the control intervention [ , , , , ]. None of the included studies had patient outcomes or adverse effects as the main outcome measure. Even though none of the studies reported direct patient outcomes, in 2 studies the participants were observed by raters while performing tasks on real patients as an outcome assessment [ , ]. In the study by Kumta et al [ ], this score was included in a more complex assessment (including MCQ tests and an OSCE examination), and the patient-related outcome was not explicitly reported. In the study by Schittek Janda et al [ ], first-year dentistry students were asked to perform history taking with real patients and were rated by an instructor; the patients’ perspective was, however, not considered. Even though none of the studies had adverse effects as a major outcome, 6 studies [ , , , , , ] reported findings related to unexpected effects of the intervention.

Cost
Haerling [ ] showed a better cost-utility ratio of US $1.08 for virtual patients versus US $3.62 for mannequin-based training. Foster et al [ ] compared the cost of human-provided (Mechanical Turk) feedback with that of backstory video feedback; the cost of human answers was US $0.05 per question assisted, whereas the videos required 4 hours of development time and the license cost of a video game (Sims 3 by Electronic Arts). This does not provide a direct answer to the question of which method was more cost-efficient, as that depends on the number of participants and the time of use. It is also important to note that the human-generated feedback in virtual patients showed positive effects on communication skills outcomes, whereas the backstory video did not. Bryant et al [ ] estimated, without providing numerical evidence, that the cost of a virtual patient was similar to that of a course text that was eliminated by the new intervention. Liaw et al [ ], also without providing concrete numbers, noticed that despite “initial startup costs for developing the virtual patient simulation, its implementation was less resource intensive than the mannequin-based simulation.” The cost savings were attributed to reduced instructor time and less use of expensive equipment and simulation facilities. Maleck et al [ ] saw cost savings in the virtual patient group because of saved radiograph printouts. The cost of the virtual patient intervention was expressed in hours of work: in 2 cases, the cost was 12 to 15 hours per virtual patient [ , ]; in 1 case, it was 15 to 30 hours [ ]; and in another, 100 hours [ ]. The cost expressed in money was estimated at US $500 for content development and technical implementation [ ] and US $4800 for a total clerkship restructuring that included adding virtual patients [ ]. It is worth noting that in both cases the virtual patients were developed by students. Deladisma et al [ ] used a virtual patient system that involved a speech recognition engine, tracked the user’s body movements, and projected a life-sized avatar on the wall. The cost of the technology used in the pilot study (including 2 networked personal computers, 1 data projector, and 2 Web cameras) was estimated in 2006 to be less than US $7000 [ ].

Patient Outcomes
In the study by Schittek Janda et al [ ], an experienced clinician rated the professional behavior (language precision, order of questions, and empathy) of first-year dentistry students toward real patients as significantly higher (P<.01) in the group having access to a supplementary virtual patient case than in the group that underwent standard instruction.

Adverse Effects
Dankbaar et al [ ] hypothesize, based on their study results, that high-fidelity virtual patients may increase motivation but at the same time be more distracting for novice students and thereby impede learning. The authors of 2 studies [ , ] observe that the language of virtual patients might be a significant factor, with greater effects on nonnative English-speaking and bilingual learners than on native English speakers. In the study by Qayumi et al [ ], it was observed that lower-achieving students benefit more from virtual patients than high performers. In the study by Botezatu et al [ ], students who knew they could be assessed with virtual patients opposed being tested with paper cases. In the study by Al-Dahir et al [ ], it was observed that analysis of individual learner traces in the virtual patient system negates the benefits of social learning.

Subgroup Analysis
None of the initially planned subgroup analyses explained the heterogeneity of the results.
Among the many aspects analyzed, we looked into differences in the efficiency of learning with virtual patients between the health professions disciplines. Most of the located studies involved students of medicine as participants. For instance, when comparing virtual patients with traditional education in the domain of skills, out of the 12 outcomes included for subgroup analyses, only 2 were from health profession disciplines other than medicine (ie, studies from nursing [ , ]). When analyzing knowledge outcomes, out of the 12 included studies, 4 were nonmedical but represented 3 very different disciplines: nursing [ , ], pharmacy [ ], and physiotherapy [ ]. The conducted subgroup analyses showed no significant differences between the subgroups and high heterogeneity.

While analyzing the aspects of instructional design implemented in the virtual patient scenarios, we located a balanced number of studies implementing the narrative and problem-solving designs [ ] in the domain of knowledge outcomes (6 studies in each branch). Yet, the pooled results showed no difference (narrative: SMD=0.12, 95% CI −0.41 to 0.64, I²=85%, n=525 versus problem solving: SMD=0.11, 95% CI −0.17 to 0.38, I²=51%, n=520; subgroup differences P=.97). Interestingly, when looking into the domain of skills outcomes, all studies had either a problem-solving or an unclear design (in 2 cases). This might be an indication that narrative (linear, branched) virtual patients are seen as better suited for knowledge outcomes than for skills.

Finally, we were unable to see any pattern in efficiency when analyzing the timing of feedback as being either during or post activity. However, in almost half of the studies, we were unable to decide, based on the description of the intervention, which model of feedback was implemented or whether the study had a mixed (during/post activity) mode of providing feedback.
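For transparency, a subgroup comparison of this kind can be expressed as a moderator test. The sketch below again uses the metafor package in R; the data frame dat and its design column are hypothetical (in practice, yi and vi would come from escalc as in the earlier sketch).

```r
library(metafor)

# Hypothetical study-level data: yi = SMD, vi = its variance, design = subgroup label.
dat <- data.frame(
  yi = c(0.25, -0.10, 0.40, 0.05, 0.18, 0.02),
  vi = c(0.04, 0.03, 0.06, 0.02, 0.05, 0.03),
  design = factor(c("narrative", "narrative", "narrative",
                    "problem-solving", "problem-solving", "problem-solving"))
)

# Mixed-effects model with design as moderator; the QM test P value plays the
# role of the "subgroup differences" P value reported in the text.
res <- rma(yi, vi, mods = ~ design, data = dat, method = "DL")
res
```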
To further explore the reasons for heterogeneity, we visualized the knowledge and skills outcomes of the virtual patient versus traditional education comparisons in the form of albatross plots, presented separately for knowledge and for skills outcomes. Comparisons of virtual patients with passive forms of learning (reading exercises and lectures) tended to display large positive effect sizes, whereas those comparing virtual patients with active learning (group discussion or mannequin-based learning) showed small or even negative effects (left-hand side of the plots).
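Because albatross plots are less common than forest plots, a minimal sketch of the idea in base R follows. The contour approximation SE(SMD) ≈ 2/√n and all inputs are illustrative assumptions; this is not the authors' actual plotting script.

```r
# Minimal albatross-style plot: studies appear as points with the two-sided P value
# (mirrored by effect direction) on the x-axis and total sample size on the y-axis.
albatross_plot <- function(p, n, direction) {
  x <- ifelse(direction > 0, -log10(p), log10(p))
  plot(x, n, pch = 19, xlim = c(-5, 5),
       xlab = "signed -log10(P value)", ylab = "Total sample size")
  abline(v = 0, lty = 2)  # null line separating negative from positive effects
  # Effect-size contours for small/moderate/large SMDs, using SE(SMD) ~ 2/sqrt(n).
  for (d in c(0.2, 0.5, 0.8)) {
    nn <- seq(10, max(n) * 1.2, length.out = 200)
    pp <- 2 * pnorm(-d * sqrt(nn) / 2)  # P value implied by an SMD of d at size nn
    lines(-log10(pp), nn, col = "grey")
    lines(log10(pp), nn, col = "grey")
  }
}

# Illustrative data: P values, sample sizes, and effect directions (+1/-1).
albatross_plot(p = c(0.001, 0.20, 0.03, 0.60), n = c(120, 60, 200, 80),
               direction = c(1, -1, 1, -1))
```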
Risk of Bias

Following the Cochrane methodology [ ], we assessed the risk of bias in all included studies. The results of the analysis are summarized in .

Overall, we do not consider allocation bias a significant issue in the review: most of the studies either described an adequate randomization method (17 of 51 studies) or, even when the description was unclear (31 of 51), it was judged unlikely that the randomization was seriously flawed. Performance bias in comparisons with traditional education is an issue but is at the same time impossible to avoid in this type of research. The blinding of participants in virtual patient design comparisons is possible, but those studies are still relatively uncommon (n=10). The risk of assessor bias was avoided in many studies by using automated or formalized assessment instruments; consequently, we assessed this risk as low in 42 of 51 studies. However, it is often unclear whether the instruments (eg, MCQ tests, assessment rubrics) were properly validated. We felt that in the majority of studies, attrition bias was within acceptable levels (low risk in 36 of 51 studies). This does not exclude volunteer bias, which is likely to be common, but its influence is difficult to estimate. As there is little tradition of publishing protocols in medical education research, it was problematic to assess selective reporting bias, but we judged the risk as low in 35 out of 51 studies. We were unable to reliably assess publication bias considering the high heterogeneity of the studies. None of the cRCT studies considered in the statistical analysis had corrected for clustering, but we compensated for this by decreasing the number of participants in those studies using a method from the Cochrane Handbook. We present more details of the risk of bias analysis in .
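The Cochrane Handbook correction mentioned above reduces a cluster trial's sample size by the design effect; a minimal sketch follows, in which the intracluster correlation coefficient (ICC) value is purely illustrative.

```r
# Effective sample size for a cluster randomized trial (Cochrane Handbook approach):
# divide the raw sample size by the design effect 1 + (m - 1) * ICC,
# where m is the average cluster size.
effective_n <- function(n_total, avg_cluster_size, icc) {
  design_effect <- 1 + (avg_cluster_size - 1) * icc
  n_total / design_effect
}

# Illustrative only: 120 participants in clusters of 10 with an assumed ICC of 0.05.
effective_n(120, 10, 0.05)  # ~82.8 participants after correction
```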
We rated down the quality of evidence for knowledge and skills outcomes in the virtual patient versus traditional education comparison because of the high heterogeneity of the included studies and limitations in study design (lack of participant blinding, nonvalidated instruments, and potential volunteer bias). For attitudinal and satisfaction outcomes, and for the other types of comparisons, we rated down the quality further because the outcomes were presented as independent questionnaire items not amenable to statistical analysis, or because the analyses contained just a handful of studies and the CIs were wide. The summary of findings (GRADE) tables are presented in .

Discussion
Principal Findings
The aim of this review was to evaluate the effectiveness of virtual patients in comparison with other existing educational methods.
There is low-quality evidence that virtual patients are at least as effective as traditional education for knowledge outcomes and more effective for skills outcomes. On the basis of the visual analysis of albatross plots, we hypothesize that replacing passive forms of traditional education with virtual patients brings more benefit than replacing active learning methods. We collected positive evidence of effectiveness from both high-income and low-and-middle-income countries, demonstrating the global applicability of virtual patients. Students were generally satisfied with the use of virtual patients, but we also located studies in which the use of virtual patients was connected with diminished confidence.
The strength of our systematic review is its broad perspective, which maps the landscape of RCTs in the domain of virtual patients. Our systematic review updates the evidence on virtual patient effectiveness, which was last summarized in a meta-analysis almost a decade ago.
Limitations
A limitation of our work is that the wide scope of the review does not allow the nuances of individual studies to be explored in detail. We were unable to make a firm assessment of publication bias. The high heterogeneity of the results leads to the conclusion that, without further consideration of needs and implementation details, we cannot expect the introduction of virtual patients to always lead to detectable positive outcomes. Evidence to determine the effective design factors is sparse, represented by only 10 studies in our review with very diverse research questions.
Our review is also limited by the decision to exclude crossover design studies; this is discussed in detail in the potential biases in the review process section in . We excluded studies published before 1991, as we consider the technology available before the World Wide Web to be materially different from that currently available. Finally, we are limited by the sparse description of the interventions in some of the papers, which occasionally might have led to misclassification of studies.

Comparison With Prior Work
Extending the results of the meta-analysis by Cook et al [ ], and in agreement with the one by Consorti et al [ ], our review shows that virtual patients have an overall positive pooled effect when compared with some other types of traditional educational methods. Our observations regarding the influence of the type of outcome (knowledge/skills) and comparison (active/passive traditional learning) supplement the evidence in the previous reviews [ , ], which included studies until 2010. This time point divides the collected evidence into 2 parts: (1) the period already covered by previous reviews (1991-2010) and (2) the period not included in the previous reviews (2011-2018). It is interesting to note that, though the former timeframe spans 20 years compared with 8 years in the latter, more studies were included from the latter period: 22 studies (until 2010) versus 29 studies (after 2010). This demonstrates increased interest in virtual patients and in medical education research in general. The research community around digital health education has long been criticized for publishing media-comparative studies [ , ]. Media-comparative research aims to make comparisons between different media formats such as paper, face-to-face, and digital education [ ]. Both Friedman and Cook argue [ , ] that the limitations of this type of comparison boil down to the inability to produce an adequate control group, as the interventions are bound to be influenced by too many confounding factors to be generalizable. Even though there are still many media-comparative studies, the number of studies comparing different forms of digital education seems to be increasing: 3/22 (14%) until 2010 versus 11/29 (38%) after 2010. The number of studies in which students worked from home as part of the intervention has also increased: before 2011, there was just 1/16 (6%; in 6 studies it was unclear), whereas after 2011, it was 11/22 (50%; in 7 studies it was unclear). However, this potentially raises concerns about how controlled the interventions and measures were, and thus about the validity of the conclusions.

Our observation that virtual patient simulations predominantly affect skills rather than knowledge outcomes can be interpreted as an indication that for the lower levels of Bloom's taxonomy [ ] (remember, understand), there is little added value in introducing virtual patients when compared with traditional methods of education. Virtual patients can have greater impact when knowledge is combined with skills and applied in problem solving, and when direct patient contact is not yet possible. We found little evidence to support the use of virtual patients at higher levels of the taxonomy. We also warn against using our results to justify diminished hours of bedside teaching, as this was investigated in just 1 study [ ] and did not show positive outcomes. Consequently, virtual patients can be regarded as a modality in which learners actively use and train their clinical reasoning and critical thinking abilities before bedside learning, as was previously suggested in the critical literature review by Cook and Triola [ ].

The perceptions of students toward studying with virtual patients are generally positive. However, some exceptions can be noted. In 1 study [ ], students were less confident in their skills when compared with facilitated group discussion and lecture. This contrasts with the absence of observable differences, or even better performance, in the virtual patient group when considering the objective outcomes in those studies. This could be explained by disbelief in the effectiveness of new computer-based methods of learning or by anxiety about losing direct patient contact.

The results of our subgroup analysis, though inconsistent, encourage the introduction of more active forms of education. Yet, we note that the range from active to passive learning forms a continuum, and the decision on how to classify each intervention is hampered by sparse descriptions in the reports. Nevertheless, questioning the utility of passive learning is not a new finding and is observed elsewhere, for instance, in the literature on the flipped-classroom learning approach [ ]. As the effects of comparing virtual patients with other forms of active learning were small and we could not detect any other variables explaining the heterogeneity, it seems reasonable to individually consider other factors such as cost of use, time flexibility, personnel shortage, and availability in different settings (eg, students’ homes or locations remote from academic centers) when determining which methods to use.

The need for more guidance within virtual patient simulations is apparent in the studies differing by instructional method, where a narrative virtual patient design was better than more autonomous problem-oriented designs [ ]. Feedback given by humans at a distance in a virtual patient system was better at increasing empathy than an animated backstory [ ], whereas constructing virtual patients, a more active task with more time on task but no feedback, had no more positive effect on the outcomes than learning from a virtual patient scenario [ ]. This reminds us that presenting realistic patient scenarios with a great degree of freedom cannot be an excuse for neglecting guidance in relation to learning objectives [ , ].

Outlook
We join the plea of Friedman [ ] and Cook [ ] to abandon media-comparative research, as it is difficult to interpret, and we instead encourage greater focus on exploring the utility of different design variants of virtual patient simulations. The current knowledge on the influence of these factors is sparse. Carefully planned studies grounded in sound educational theory should provide many valuable research opportunities. However, sufficiently powered samples are needed, as the effects are likely to be small. A second consideration pertains to the need to use previously validated measurement tools that are well aligned with the learning objectives. Comparing outcomes on an item-by-item basis is methodologically questionable and makes the aggregation of results in systematic reviews difficult. We also call for more studies in health professions disciplines other than medicine, as our subgroup analysis showed that evidence of virtual patient effectiveness in programs such as nursing, physiotherapy, or pharmacy is underrepresented. Patient outcomes and the cost-effectiveness of virtual patients have not yet been explored directly and form a key avenue for future efforts.

Conclusions
Low to modest and mixed evidence suggests that virtual patients can improve skills more effectively than, and knowledge at least as effectively as, traditional education. Education with virtual patients provides an active form of learning that is beneficial for clinical reasoning skills. Implementations vary and are likely to be broad across pre- and postregistration education, although current studies do not provide clear guidance on when to use virtual patients. We recommend that further research focus on exploring the utility of different design variants of virtual patients.
Acknowledgments
The authors wish to acknowledge the help of Jayne Alfredsson, Carina Georg, Anneliese Lilienthal, Italo Masiello, and Monika Semwal at the stage of protocol writing. The authors would like to thank Jim Campbell for his contribution to coordination and conceptual formation of the Digital Health Education Collaboration. The authors would also like to thank Carl Gornitzki, GunBrit Knutssön, and Klas Moberg from the University Library, Karolinska Institute, for developing the search strategy and Ram Chandra Bajpai from Nanyang Technological University for statistical consulting and Pawel Posadzki for his supportive feedback on the manuscript.
At the time of the study, SE was affiliated with the Karolinska Institutet and Linköping University, and N Saxena was affiliated with Health Services and Outcomes Research, National Healthcare Group, Singapore, Singapore.
Authors' Contributions
JC conceived the idea for the review. AK drafted the protocol with substantial contributions from LW, SE, NS, DD, NSA, LTC, and NZ. The Digital Health Education Collaboration developed the search strategy, obtained copies of studies, and screened the studies. JC and JCD contributed to the coordination and conceptual formation of the Digital Health Education Collaboration. AK, LW, NS, and DD extracted data from studies and conducted the risk of bias assessment. LW contacted the authors in case of missing data. AK carried out the analysis of collected data. LTC provided methodological guidance. AK, LW, SE, NS, DD, LTC, and NZ interpreted the analysis and contributed to the discussion. AK drafted the final review with substantial contributions from all authors. All authors revised and approved the final version of review.
Conflicts of Interest
None declared.
Multimedia Appendix 4
Summary of technical and educational features of included studies.
DOC File, 458 KB
References
- Ramani S, Leinster S. AMEE Guide no. 34: teaching in the clinical environment. Med Teach 2008 Jan;30(4):347-364. [CrossRef] [Medline]
- Moalem J, Salzman P, Ruan DT, Cherr GS, Freiburg CB, Farkas RL, et al. Should all duty hours be the same? Results of a national survey of surgical trainees. J Am Coll Surg 2009 Jul;209(1):47-54, 54.e1-2. [CrossRef] [Medline]
- Dev P, Schleyer T. Computers in health care education. In: Shortliffe E, Cimino J, editors. Biomedical Informatics. London: Springer; 2014:675-693.
- Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet 2010 Dec 4;376(9756):1923-1958. [CrossRef] [Medline]
- Crisp N, Gawanas B, Sharp I, Task Force for Scaling Up Education and Training for Health Workers. Training the health workforce: scaling up, saving lives. Lancet 2008 Feb 23;371(9613):689-691. [CrossRef] [Medline]
- Car J, Carlstedt-Duke J, Tudor Car L, Posadzki P, Whiting P, Zary N, Digital Health Education Collaboration. Digital education in health professions: the need for overarching evidence synthesis. J Med Internet Res 2019 Feb 14;21(2):e12913 [FREE Full text] [CrossRef] [Medline]
- Bajpai S, Semwal M, Bajpai R, Car J, Ho AH. Health professions' digital education: review of learning theories in randomized controlled trials by the Digital Health Education Collaboration. J Med Internet Res 2019 Mar 12;21(3):e12912 [FREE Full text] [CrossRef] [Medline]
- Dunleavy G, Nikolaou CK, Nifakos S, Atun R, Law GC, Tudor Car L. Mobile digital education for health professions: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res 2019 Feb 12;21(2):e12937 [FREE Full text] [CrossRef] [Medline]
- Gentry SV, Gauthier A, L'Estrade Ehrstrom B, Wortley D, Lilienthal A, Tudor Car L, et al. Serious gaming and gamification education in health professions: systematic review. J Med Internet Res 2019 Mar 28;21(3):e12994 [FREE Full text] [CrossRef] [Medline]
- George PP, Zhabenko O, Kyaw BM, Antoniou P, Posadzki P, Saxena N, et al. Online digital education for postregistration training of medical doctors: systematic review by the Digital Health Education Collaboration. J Med Internet Res 2019 Feb 25;21(2):e13269 [FREE Full text] [CrossRef] [Medline]
- Huang Z, Semwal M, Lee SY, Tee M, Ong W, Tan WS, et al. Digital health professions education on diabetes management: systematic review by the Digital Health Education Collaboration. J Med Internet Res 2019 Feb 21;21(2):e12997 [FREE Full text] [CrossRef] [Medline]
- Kyaw BM, Posadzki P, Dunleavy G, Semwal M, Divakar U, Hervatis V, et al. Offline digital education for medical students: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res 2019 Mar 25;21(3):e13165 [FREE Full text] [CrossRef] [Medline]
- Kyaw BM, Saxena N, Posadzki P, Vseteckova J, Nikolaou CK, George PP, et al. Virtual reality for health professions education: systematic review and meta-analysis by the Digital Health Education collaboration. J Med Internet Res 2019 Jan 22;21(1):e12959 [FREE Full text] [CrossRef] [Medline]
- Lall P, Rees R, Law GC, Dunleavy G, Cotic Ž, Car J. Influences on the implementation of mobile learning for medical and nursing education: qualitative systematic review by the Digital Health Education Collaboration. J Med Internet Res 2019 Feb 28;21(2):e12895 [FREE Full text] [CrossRef] [Medline]
- Martinengo L, Yeo NJY, Tang ZQ, Markandran KD, Kyaw BM, Tudor Car L. Digital education for the management of chronic wounds in health care professionals: protocol for a systematic review by the Digital Health Education Collaboration. JMIR Res Protoc 2019 Mar 25;8(3):e12488 [FREE Full text] [CrossRef] [Medline]
- Posadzki P, Bala MM, Kyaw BM, Semwal M, Divakar U, Koperny M, et al. Offline digital education for postregistration health professions: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res 2019 Apr 24;21(4):e12968 [FREE Full text] [CrossRef] [Medline]
- Semwal M, Whiting P, Bajpai R, Bajpai S, Kyaw BM, Tudor Car L. Digital education for health professions on smoking cessation management: systematic review by the Digital Health Education Collaboration. J Med Internet Res 2019 Mar 4;21(3):e13000 [FREE Full text] [CrossRef] [Medline]
- Tudor Car L, Kyaw BM, Dunleavy G, Smart NA, Semwal M, Rotgans JI, et al. Digital problem-based learning in health professions: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res 2019 Feb 28;21(2):e12945 [FREE Full text] [CrossRef] [Medline]
- Wahabi HA, Esmaeil SA, Bahkali KH, Titi MA, Amer YS, Fayed AA, et al. Medical doctors' offline computer-assisted digital education: systematic review by the Digital Health Education Collaboration. J Med Internet Res 2019 Mar 1;21(3):e12998 [FREE Full text] [CrossRef] [Medline]
- Ellaway R, Candler C, Greene P, Smothers V. An Architectural Model for MedBiquitous Virtual Patients. MedBiquitous. URL: http://tinyurl.com/jpewpbt [accessed 2019-05-10] [WebCite Cache]
- Kononowicz AA, Zary N, Edelbring S, Corral J, Hege I. Virtual patients - what are we talking about? A framework to classify the meanings of the term in healthcare education. BMC Med Educ 2015;15:11 [FREE Full text] [CrossRef] [Medline]
- Cook DA, Triola MM. Virtual patients: a critical literature review and proposed next steps. Med Educ 2009 Apr;43(4):303-311. [CrossRef] [Medline]
- Posel N, McGee JB, Fleiszer DM. Twelve tips to support the development of clinical reasoning skills using virtual patient cases. Med Teach 2014 Dec 19:1-6. [CrossRef] [Medline]
- Berman NB, Durning SJ, Fischer MR, Huwendiek S, Triola MM. The role for virtual patients in the future of medical education. Acad Med 2016 Sep;91(9):1217-1222. [CrossRef] [Medline]
- Kolb DA. Experiential Learning: Experience As The Source Of Learning And Development. Upper Saddle River, New Jersey: Prentice Hall; 1984.
- Yardley S, Teunissen PW, Dornan T. Experiential learning: AMEE Guide No. 63. Med Teach 2012;34(2):e102-e115. [CrossRef] [Medline]
- Edelbring S, Dastmalchi M, Hult H, Lundberg IE, Dahlgren LO. Experiencing virtual patients in clinical learning: a phenomenological study. Adv Health Sci Educ Theory Pract 2011 Aug;16(3):331-345. [CrossRef] [Medline]
- Norman G. Research in clinical reasoning: past history and current trends. Med Educ 2005 Apr;39(4):418-427. [CrossRef] [Medline]
- Berman N, Fall LH, Smith S, Levine DA, Maloney CG, Potts M, et al. Integration strategies for using virtual patients in clinical clerkships. Acad Med 2009 Jul;84(7):942-949. [CrossRef] [Medline]
- Kenny NP, Beagan BL. The patient as text: a challenge for problem-based learning. Med Educ 2004 Oct;38(10):1071-1079. [CrossRef] [Medline]
- Button D, Harrington A, Belan I. E-learning & information communication technology (ICT) in nursing education: a review of the literature. Nurse Educ Today 2014 Oct;34(10):1311-1323. [CrossRef] [Medline]
- Watson JA, Pecchioni LL. Digital natives and digital media in the college classroom: assignment design and impacts on student learning. Educ Media Int 2011 Dec;48(4):307-320. [CrossRef]
- Schifferdecker KE, Berman NB, Fall LH, Fischer MR. Adoption of computer-assisted learning in medical education: the educators' perspective. Med Educ 2012 Nov;46(11):1063-1073. [CrossRef] [Medline]
- Baumann-Birkbeck L, Florentina F, Karatas O, Sun J, Tang T, Thaung V, et al. Appraising the role of the virtual patient for therapeutics health education. Curr Pharm Teach Learn 2017 Dec;9(5):934-944. [CrossRef] [Medline]
- Cendan J, Lok B. The use of virtual patients in medical school curricula. Adv Physiol Educ 2012 Mar;36(1):48-53 [FREE Full text] [CrossRef] [Medline]
- Poulton T, Balasubramaniam C. Virtual patients: a year of change. Med Teach 2011 Jan;33(11):933-937. [CrossRef] [Medline]
- Saleh N. The value of virtual patients in medical education. Ann Behav Sci Med Educ 2010 Oct 16;16(2):29-31. [CrossRef]
- Cook DA, Erwin PJ, Triola MM. Computerized virtual patients in health professions education: a systematic review and meta-analysis. Acad Med 2010 Oct;85(10):1589-1602. [CrossRef] [Medline]
- Consorti F, Mancuso R, Nocioni M, Piccolo A. Efficacy of virtual patients in medical education: a meta-analysis of randomized studies. Comput Educ 2012 Nov;59(3):1001-1008. [CrossRef]
- Higgins J, Green S. Cochrane Handbook For Systematic Reviews Of Interventions. Chichester, England: John Wiley & Sons; 2008.
- Kononowicz AA, Woodham L, Georg C, Edelbring S, Stathakarou N, Davies D, et al. Virtual patient simulations for health professional education. Cochrane Database Syst Rev 2016 May 19;(5):CD012194. [CrossRef]
- Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 2009 Jul 21;6(7):e1000100 [FREE Full text] [CrossRef] [Medline]
- EndNote X7. Philadelphia, PA: Thomson Reuters; 2016. URL: https://endnote.com/ [accessed 2019-05-10]
- Wan X, Wang W, Liu J, Tong T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med Res Methodol 2014 Dec 19;14:135 [FREE Full text] [CrossRef] [Medline]
- Cochrane Community. Review Manager (RevMan) Version 5.3. Copenhagen, Denmark: The Nordic Cochrane Centre; 2014. URL: https://community.cochrane.org/help/tools-and-software/revman-5 [accessed 2019-05-10]
- Harrison S, Jones HE, Martin RM, Lewis SJ, Higgins JP. The albatross plot: a novel graphical tool for presenting results of diversely reported studies in a systematic review. Res Synth Methods 2017 Sep;8(3):281-289 [FREE Full text] [CrossRef] [Medline]
- R: A Language and Environment for Statistical Computing. Vienna, Austria: R Core Team; 2017. URL: https://www.r-project.org/ [accessed 2019-05-10]
- Kurihara Y, Kuramoto S, Matsuura K, Miki Y, Oda K, Seo H, et al. Academic performance and comparative effectiveness of computer- and textbook-based self-instruction. Stud Health Technol Inform 2004;107(Pt 2):894-897. [Medline]
- Succar T, Grigg J. A new vision for teaching ophthalmology in the medical curriculum: The Virtual Ophthalmology clinic. 2010 Presented at: ASCILITE Conference; December 5-8, 2010; Sydney, Australia p. 944-947. [CrossRef]
- Kononowicz AA, Krawczyk P, Cebula G, Dembkowska M, Drab E, Fraczek B, et al. Effects of introducing a voluntary virtual patient module to a basic life support with an automated external defibrillator course: a randomised trial. BMC Med Educ 2012 Jun 18;12:41 [FREE Full text] [CrossRef] [Medline]
- Kumta SM, Tsang PL, Hung LK, Cheng JCY. Fostering critical thinking skills through a web-based tutorial programme for final year medical students--A randomized controlled study. J Educ Multimed Hypermedia 2003;12(3):267-273.
- Succar T, Zebington G, Billson F, Byth K, Barrie S, McCluskey P, et al. The impact of the Virtual Ophthalmology Clinic on medical students' learning: a randomised controlled trial. Eye (Lond) 2013 Oct;27(10):1151-1157. [CrossRef] [Medline]
- Al-Dahir S, Bryant K, Kennedy KB, Robinson DS. Online virtual-patient cases versus traditional problem-based learning in advanced pharmacy practice experiences. Am J Pharm Educ 2014 May 15;78(4):76 [FREE Full text] [CrossRef] [Medline]
- Bonnetain E, Boucheix JM, Hamet M, Freysz M. Benefits of computer screen-based simulation in learning cardiac arrest procedures. Med Educ 2010 Jul;44(7):716-722. [CrossRef] [Medline]
- Botezatu M, Hult H, Tessma MK, Fors UGH. Virtual patient simulation for learning and assessment: superior results in comparison with regular course exams. Med Teach 2010;32(10):845-850. [CrossRef] [Medline]
- Botezatu M, Hult H, Tessma MK, Fors U. Virtual patient simulation: knowledge gain or knowledge loss? Med Teach 2010;32(7):562-568. [CrossRef] [Medline]
- Fleetwood J, Vaught W, Feldman D, Gracely E, Kassutto Z, Novack D. MedEthEx Online: a computer-based learning program in medical ethics and communication skills. Teach Learn Med 2000;12(2):96-104. [CrossRef] [Medline]
- Haerling KA. Cost-utility analysis of virtual and mannequin-based simulation. Simul Healthc 2018 Feb;13(1):33-40. [CrossRef] [Medline]
- Jeimy S, Wang JY, Richardson L. Evaluation of virtual patient cases for teaching diagnostic and management skills in internal medicine: a mixed methods study. BMC Res Notes 2018 Jun 5;11(1):357 [FREE Full text] [CrossRef] [Medline]
- Kandasamy T, Fung K. Interactive internet-based cases for undergraduate otolaryngology education. Otolaryngol Head Neck Surg 2009 Mar;140(3):398-402. [CrossRef] [Medline]
- Kinney P, Keskula DR, Perry JF. The effect of a computer assisted instructional program on physical therapy students. J Allied Health 1997;26(2):57-61. [Medline]
- Leong SL, Baldwin CD, Adelman AM. Integrating web-based computer cases into a required clerkship: development and evaluation. Acad Med 2003 Mar;78(3):295-301. [Medline]
- Li J, Li QL, Li J, Chen ML, Xie HF, Li YP, et al. Comparison of three problem-based learning conditions (real patients, digital and paper) with lecture-based learning in a dermatology course: a prospective randomized study from China. Med Teach 2013;35(2):e963-e970. [CrossRef] [Medline]
- Liaw SY, Chan SW, Chen F, Hooi SC, Siau C. Comparison of virtual patient simulation with mannequin-based simulation for improving clinical performances in assessing and managing clinical deterioration: randomized controlled trial. J Med Internet Res 2014;16(9):e214 [FREE Full text] [CrossRef] [Medline]
- Maleck M, Fischer MR, Kammer B, Zeiler C, Mangel E, Schenk F, et al. Do computers teach better? A media comparison study for case-based teaching in radiology. Radiographics 2001;21(4):1025-1032. [CrossRef] [Medline]
- Miedzybrodzka Z, Hamilton NM, Gregory H, Milner B, Frade I, Sinclair T, et al. Teaching undergraduates about familial breast cancer: comparison of a computer assisted learning (CAL) package with a traditional tutorial approach. Eur J Hum Genet 2001 Dec;9(12):953-956 [FREE Full text] [CrossRef] [Medline]
- Qayumi AK, Kurihara Y, Imai M, Pachev G, Seo H, Hoshino Y, et al. Comparison of computer-assisted instruction (CAI) versus traditional textbook methods for training in abdominal examination (Japanese experience). Med Educ 2004 Oct;38(10):1080-1088. [CrossRef] [Medline]
- Schwid HA, Rooke GA, Ross BK, Sivarajan M. Use of a computerized advanced cardiac life support simulator improves retention of advanced cardiac life support guidelines better than a textbook review. Crit Care Med 1999 Apr;27(4):821-824. [Medline]
- Schwid HA, Rooke GA, Michalowski P, Ross BK. Screen-based anesthesia simulation with debriefing improves performance in a mannequin-based anesthesia simulator. Teach Learn Med 2001;13(2):92-96. [CrossRef] [Medline]
- Secomb J, McKenna L, Smith C. The effectiveness of simulation activities on the cognitive abilities of undergraduate third-year nursing students: a randomised control trial. J Clin Nurs 2012 Dec;21(23-24):3475-3484. [CrossRef] [Medline]
- Sobocan M, Turk N, Dinevski D, Hojs R, Balon BP. Problem-based learning in internal medicine: virtual patients or paper-based problems? Intern Med J 2017 Jan;47(1):99-103. [CrossRef] [Medline]
- Subramanian A, Timberlake M, Mittakanti H, Lara M, Brandt ML. Novel educational approach for medical students: improved retention rates using interactive medical software compared with traditional lecture-based format. J Surg Educ 2012;69(2):253-256. [CrossRef] [Medline]
- Tao H. Computer-based simulative training system—a new approach to teaching pre-hospital trauma care. J Med Coll PLA 2011 Dec;26(6):335-344. [CrossRef]
- Triola M, Feldman H, Kalet AL, Zabar S, Kachur EK, Gillespie C, et al. A randomized trial of teaching clinical skills using virtual and live standardized patients. J Gen Intern Med 2006 May;21(5):424-429 [FREE Full text] [CrossRef] [Medline]
- Vash JH, Yunesian M, Shariati M, Keshvari A, Harirchi I. Virtual patients in undergraduate surgery education: a randomized controlled study. ANZ J Surg 2007;77(1-2):54-59. [CrossRef] [Medline]
- Wang CL, Chinnugounder S, Hippe DS, Zaidi S, O'Malley RB, Bhargava P, et al. Comparative effectiveness of hands-on versus computer simulation-based training for contrast media reactions and teamwork skills. J Am Coll Radiol 2017 Jan;14(1):103-10.e3. [CrossRef] [Medline]
- Williams C, Aubin S, Harkin P, Cottrell D. A randomized, controlled, single-blind trial of teaching provided by a computer-based multimedia package versus lecture. Med Educ 2001 Sep;35(9):847-854. [Medline]
- Bryant R, Miller CL, Henderson D. Virtual clinical simulations in an online advanced health appraisal course. Clin Simul Nurs 2015 Oct;11(10):437-444. [CrossRef]
- Deladisma AM, Gupta M, Kotranza A, Bittner JG, Imam T, Swinson D, et al. A pilot study to integrate an immersive virtual patient with a breast complaint and breast examination simulator into a surgery clerkship. Am J Surg 2009 Jan;197(1):102-106. [CrossRef] [Medline]
- Gu Y, Zou Z, Chen X. The Effects of vSIM for Nursing™ as a teaching strategy on fundamentals of nursing education in undergraduates. Clin Simul Nurs 2017 Apr;13(4):194-197. [CrossRef]
- Kaltman S, Talisman N, Pennestri S, Syverson E, Arthur P, Vovides Y. Using technology to enhance teaching of patient-centered interviewing for early medical students. Simul Healthc 2018 Jun;13(3):188-194. [CrossRef] [Medline]
- Lehmann R, Thiessen C, Frick B, Bosse HM, Nikendei C, Hoffmann GF, et al. Improving pediatric basic life support performance through blended learning with web-based virtual patients: randomized controlled trial. J Med Internet Res 2015 Jul 2;17(7):e162 [FREE Full text] [CrossRef] [Medline]
- Schittek Janda M, Mattheos N, Nattestad A, Wagner A, Nebel D, Färbom C, et al. Simulation of patient encounters using a virtual patient in periodontology instruction of dental students: design, usability, and learning effect in history-taking skills. Eur J Dent Educ 2004 Aug;8(3):111-119. [CrossRef] [Medline]
- Smith BD, Silk K. Cultural competence clinic: an online, interactive, simulation for working effectively with Arab American Muslim patients. Acad Psychiatry 2011;35(5):312-316. [CrossRef] [Medline]
- Wahlgren CF, Edelbring S, Fors U, Hindbeck H, Ståhle M. Evaluation of an interactive case simulation system in dermatology and venereology for medical students. BMC Med Educ 2006;6:40 [FREE Full text] [CrossRef] [Medline]
- Weverling GJ, Stam J, ten Cate TJ, van Crevel H. [Computer-assisted education in problem-solving in neurology; a randomized educational study]. Ned Tijdschr Geneeskd 1996 Feb 24;140(8):440-443. [Medline]
- Courteille O, Fahlstedt M, Ho J, Hedman L, Fors U, von Holst H, et al. Learning through a virtual patient vs recorded lecture: a comparison of knowledge retention in a trauma case. Int J Med Educ 2018 Mar 28;9:86-92 [FREE Full text] [CrossRef] [Medline]
- Dankbaar ME, Alsma J, Jansen EE, van Merrienboer JJ, van Saase JL, Schuit SC. An experimental study on the effects of a simulation game on students' clinical cognitive skills and motivation. Adv Health Sci Educ Theory Pract 2016 Aug;21(3):505-521. [CrossRef] [Medline]
- Foster A, Chaudhary N, Murphy J, Lok B, Waller J, Buckley PF. The use of simulation to teach suicide risk assessment to health profession trainees—rationale, methodology, and a proof of concept demonstration with a virtual patient. Acad Psychiatry 2015 Dec;39(6):620-629. [CrossRef] [Medline]
- Trudeau KJ, Hildebrand C, Garg P, Chiauzzi E, Zacharoff KL. A randomized controlled trial of the effects of online pain management education on primary care providers. Pain Med 2017 Dec 1;18(4):680-692. [CrossRef] [Medline]
- Bearman M, Cesnik B, Liddell M. Random comparison of 'virtual patient' models in the context of teaching clinical communication skills. Med Educ 2001 Sep;35(9):824-832. [Medline]
- Berger J, Bawab N, de Mooij J, Widmer DS, Szilas N, de Vriese C, et al. An open randomized controlled study comparing an online text-based scenario and a serious game by Belgian and Swiss pharmacy students. Curr Pharm Teach Learn 2018 Dec;10(3):267-276. [CrossRef] [Medline]
- Braun LT, Zottmann JM, Adolf C, Lottspeich C, Then C, Wirth S, et al. Representation scaffolds improve diagnostic efficiency in medical students. Med Educ 2017 Nov;51(11):1118-1126. [CrossRef] [Medline]
- Davids MR, Chikte UM, Halperin ML. Effect of improving the usability of an e-learning resource: a randomized trial. Adv Physiol Educ 2014 Jun;38(2):155-160 [FREE Full text] [CrossRef] [Medline]
- Foster A, Chaudhary N, Kim T, Waller JL, Wong J, Borish M, et al. Using virtual patients to teach empathy: a randomized controlled study to enhance medical students' empathic communication. Simul Healthc 2016 Jun;11(3):181-189. [CrossRef] [Medline]
- Friedman CP, France CL, Drossman DD. A randomized comparison of alternative formats for clinical simulations. Med Decis Making 1991;11(4):265-272. [CrossRef] [Medline]
- Harris JM, Sun H. A randomized trial of two e-learning strategies for teaching substance abuse management skills to physicians. Acad Med 2013 Sep;88(9):1357-1362 [FREE Full text] [CrossRef] [Medline]
- Mahnken AH, Baumann M, Meister M, Schmitt V, Fischer MR. Blended learning in radiology: is self-determined learning really more effective? Eur J Radiol 2011 Jun;78(3):384-387. [CrossRef] [Medline]
- Maier EM, Hege I, Muntau AC, Huber J, Fischer MR. What are effects of a spaced activation of virtual patients in a pediatric course? BMC Med Educ 2013 Mar 28;13:45 [FREE Full text] [CrossRef] [Medline]
- Tolsgaard MG, Jepsen RMHG, Rasmussen MB, Kayser L, Fors U, Laursen LC, et al. The effect of constructing versus solving virtual patient cases on transfer of learning: a randomized trial. Perspect Med Educ 2016 Feb;5(1):33-38 [FREE Full text] [CrossRef] [Medline]
- Kraiger K, Ford JK, Salas E. Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. J Appl Psychol 1993;78(2):311-328. [CrossRef] [Medline]
- Stevens A, Hernandez J, Johnsen K, Dickerson R, Raij A, Harrison C, et al. The use of virtual patients to teach medical students history taking and communication skills. Am J Surg 2006 Jun;191(6):806-811. [CrossRef] [Medline]
- Friedman CP. The research we should be doing. Acad Med 1994 Jun;69(6):455-457. [Medline]
- Cook DA. The research we still are not doing: an agenda for the study of computer-based learning. Acad Med 2005 Jun;80(6):541-548. [Medline]
- Krathwohl DR. A revision of Bloom's taxonomy: an overview. Theory Pract 2002 Nov;41(4):212-218. [CrossRef]
- Chen F, Lui AM, Martinelli SM. A systematic review of the effectiveness of flipped classrooms in medical education. Med Educ 2017 Jun;51(6):585-597. [CrossRef] [Medline]
- Kirschner PA, Sweller J, Clark RE. Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educ Psychol 2006 Jun;41(2):75-86. [CrossRef]
- Edelbring S, Wahlström R. Dynamics of study strategies and teacher regulation in virtual patient learning activities: a cross sectional survey. BMC Med Educ 2016 Apr 23;16:122 [FREE Full text] [CrossRef] [Medline]
Abbreviations
3D: 3-dimensional
cRCT: cluster randomized controlled trial
GRADE: Grading of Recommendations Assessment, Development and Evaluation
MCQ: multiple-choice question
OSCE: objective structured clinical examination
RCT: randomized controlled trial
SMD: standardized mean difference
Edited by G Eysenbach, A Marusic; submitted 10.05.19; peer-reviewed by SY Liaw, M Sobocan; comments to author 31.05.19; revised version received 06.06.19; accepted 08.06.19; published 02.07.19
Copyright©Andrzej A Kononowicz, Luke A Woodham, Samuel Edelbring, Natalia Stathakarou, David Davies, Nakul Saxena, Lorainne Tudor Car, Jan Carlstedt-Duke, Josip Car, Nabil Zary. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 02.07.2019.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.