Published on 10.08.2020 in Vol 22, No 8 (2020): August

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/16504.
Blended Learning Compared to Traditional Learning in Medical Education: Systematic Review and Meta-Analysis

Review

1Diagnosis and Therapeutic Center, Hypertension and Cardiovascular Prevention Unit, Hôtel-Dieu Hospital, Assistance Publique–Hôpitaux de Paris, Paris-Descartes University, Paris, France

2Cochin University Hospital, Paris, France

Corresponding Author:

Alexandre Vallée, MD, PhD

Diagnosis and Therapeutic Center, Hypertension and Cardiovascular Prevention Unit

Hôtel-Dieu Hospital, Assistance Publique–Hôpitaux de Paris

Paris-Descartes University

Paris,

France

Phone: 33 142348587

Email: alexandre.g.vallee@gmail.com


Background: Blended learning, which combines face-to-face learning and e-learning, has grown rapidly and is now commonly used in education. Nevertheless, the effectiveness of this learning approach has not been fully synthesized and evaluated quantitatively using knowledge outcomes in health education.

Objective: The aim of this study was to assess the effectiveness of blended learning compared to that of traditional learning in health education.

Methods: We performed a systematic review of blended learning in health education in MEDLINE from January 1990 to July 2019. We independently selected studies, extracted data, assessed risk of bias, and compared overall blended learning versus traditional learning, offline blended learning versus traditional learning, online blended learning versus traditional learning, digital blended learning versus traditional learning, computer-aided instruction blended learning versus traditional learning, and virtual patient blended learning versus traditional learning. All pooled analyses were based on random-effects models, and the I2 statistic was used to quantify heterogeneity across studies.

Results: A total of 56 studies (N=9943 participants) assessing several types of learning support in blended learning met our inclusion criteria; 3 studies investigated offline support, 7 studies investigated digital support, 34 studies investigated online support, 8 studies investigated computer-assisted instruction support, and 5 studies used virtual patient support for blended learning. The pooled analysis comparing all blended learning to traditional learning showed significantly better knowledge outcomes for blended learning (standardized mean difference 1.07, 95% CI 0.85 to 1.28, I2=94.3%). Similar results were observed for online (standardized mean difference 0.73, 95% CI 0.60 to 0.86, I2=94.9%), computer-assisted instruction (standardized mean difference 1.13, 95% CI 0.47 to 1.79, I2=78.0%), and virtual patient (standardized mean difference 0.62, 95% CI 0.18 to 1.06, I2=78.4%) learning support, but results for offline learning support (standardized mean difference 0.08, 95% CI –0.63 to 0.79, I2=87.9%) and digital learning support (standardized mean difference 0.04, 95% CI –0.45 to 0.52, I2=93.4%) were not significant.

Conclusions: From this review, blended learning demonstrated consistently better effects on knowledge outcomes when compared with traditional learning in health education. Further studies are needed to confirm these results and to explore the utility of different design variants of blended learning.

J Med Internet Res 2020;22(8):e16504

doi:10.2196/16504




Background

New types of learning, such as e-learning, have become popular in medical education [1,2] since the emergence of the internet [2]. These new models allow learning to transcend boundaries of space and time; they improve the effectiveness of collaborative and individualized learning and are more convenient [3-5]. Nevertheless, e-learning has some disadvantages, including the high cost of multimedia materials, high platform maintenance costs, and the frequent need for user training. In parallel, traditional learning presents several limitations, including requiring the physical presence of students and teachers at a specific time and place [6].

Blended learning is characterized by the combination of traditional face-to-face learning and asynchronous or synchronous e-learning [7]. Blended learning is a promising alternative for medical education because of its advantages over traditional learning. In academia, this learning format has grown rapidly and is now widely used [8].

Increased research on blended learning has been reported since the 1990s [9-11]. Synthesis of these studies may inform students and teachers on the effectiveness of blended learning [12]. Previous systematic reviews have reported that blended learning has the potential to improve clinical training among medical students [13] and undergraduate nursing education [14]. In parallel, many reviews have summarized the potential of blended learning in medical education [15,16]. A meta-analysis [12] showed that blended learning was more effective than nonblended learning but with a high level of heterogeneity.

Nevertheless, these reviews were limited to only some areas of health education, and few have used quantitative synthesis in the evaluation of the effectiveness of blended learning; therefore, the purpose of this study was to quantitatively synthesize the studies that evaluated the efficacy (using knowledge outcomes) of blended learning for health education (with students, postgraduate trainees, or practitioners).

Objective

The objective of this review was to evaluate the effectiveness of blended learning for health education on knowledge outcomes assessed with subjective (eg, learner self-report) or objective evaluations (eg, multiple-choice question knowledge test) of learners’ factual or conceptual understanding of the course in studies where blended learning was compared with traditional learning.


Comparison Categories and Definitions

Blended learning was compared with traditional learning overall and, after stratification by type of learning support, in the following comparisons: offline blended learning versus traditional learning, online blended learning versus traditional learning, digital blended learning versus traditional learning, computer-aided instruction blended learning versus traditional learning, and virtual patient blended learning versus traditional learning.

Offline learning was defined as the use of personal computers or laptops to assist in delivering stand-alone multimedia materials without the need for internet or local area network connections [17]. These could be supplemented by videoconferences, emails, and audio-visual learning materials stored on physical media (CD-ROMs, floppy disks, flash memory, multimedia cards, or external hard disks), as long as the learning activities did not rely on an internet or network connection [18].

Online support was defined as all online materials used in learning courses.

Digital education was a broad construct describing a wide range of teaching and learning strategies that were exclusively based on the use of electronic media and devices as training, communication, and interaction tools [19]. These aspects could pertain to educational approaches, concepts, methods, or technologies. Moreover, these concepts facilitated remote learning, which could help address the shortage of health professionals in settings with limited resources by reducing the time constraints and geographic barriers to training.

Computer-assisted instruction was defined as the use of interactive CD-ROMs, multimedia software, or audio-visual material to augment instruction, including multimedia presentations, live synchronous virtual sessions offered via a web-based learning platform, presentations with audio-visuals, and synchronous or asynchronous discussion forums to enhance participation and increase engagement [20,21].

Virtual patients were defined as interactive computer simulations of real-life clinical scenarios for health professional training, education, or assessment. This broad definition encompassed a variety of systems that used different technologies and addressed various learning needs [22].

Traditional learning, in this paper, was used to describe all nonblended learning: methods that were neither digital nor online, as well as single-support educational methods such as online-only or e-learning-only courses, lectures, face-to-face teaching, reading exercises, and group discussion in the classroom.

Reporting Standards

We conducted and reported our study according to PRISMA guidelines [23] and Cochrane systematic review guidelines [24].

Eligibility Criteria

Inclusion criteria for studies were based on the PICOS (population, intervention, comparison, outcome, and study design) framework.

Studies were included if they were conducted among health learners, used a blended learning intervention in the experimental group, involved a comparison of blended learning with traditional learning, included quantitative outcomes with respect to knowledge assessed with either subjective or objective evaluations, and were randomized controlled trials or nonrandomized studies (which are widely used in health education). Only studies published in English were included.

Data Sources

To identify relevant studies, we conducted a search of citations published in MEDLINE between January 1990 and July 2019. Key search terms included delivery concepts (blended, hybrid, integrated, computer-aided, computer assisted, virtual patient, learning, training, education, instruction, teaching, course), participant characteristics (physician, medic*, nurs*, pharmac*, dent*, health*), and study design concepts (compar*, trial*, evaluat*, assess*, effect*, pretest*, pre-test, posttest*, post-test, preintervention, pre-intervention, postintervention, post-intervention). Asterisks were used as a truncation symbol for searching. Multimedia Appendix 1 describes the complete research strategy.
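As an illustration of how such a strategy can be assembled, the sketch below combines the three concept groups described above into a single boolean query, with the asterisk as the truncation symbol. The term lists and grouping are assumptions inferred from this description, not the authors' exact MEDLINE syntax; the complete, validated strategy is in Multimedia Appendix 1.

```python
# Illustrative only: assemble the three concept groups into one boolean query.
delivery = ["blended", "hybrid", "integrated", "computer-aided",
            "computer assisted", "virtual patient", "learning", "training",
            "education", "instruction", "teaching", "course"]
participants = ["physician", "medic*", "nurs*", "pharmac*", "dent*", "health*"]
design = ["compar*", "trial*", "evaluat*", "assess*", "effect*",
          "pretest*", "pre-test", "posttest*", "post-test",
          "preintervention", "pre-intervention",
          "postintervention", "post-intervention"]

def or_block(terms):
    """Join one concept group into a parenthesized OR block, quoting phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# The three groups are intersected with AND; date limits (1990-2019) would be
# applied in the database interface rather than in the query string itself.
query = " AND ".join(or_block(group) for group in (delivery, participants, design))
print(query)
```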

Study Selection

Using the eligibility criteria, AV and ES independently screened all articles and abstracts and reviewed the full texts of those that were potentially eligible.


Data Extraction

AV and ES independently extracted relevant characteristics related to participants, intervention, comparators, outcome measures, and results from the studies that were found to be eligible using a standard data collection form. Any disagreements were resolved through discussion with a third research team member until agreement was reached.

Risk of Bias Assessment

During the data extraction process, researchers independently assessed the risk of bias for each study using the Cochrane Collaboration’s risk of bias tool [25]. Evaluation criteria included the following: random sequence generation, allocation concealment, blinding of students and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other bias, which included publication bias. Funnel plots were used to evaluate publication bias. Risk of bias for each criterion was rated as low, high, or unclear according to the Cochrane risk of bias instructions.

Data Synthesis

Analyses were performed for knowledge outcomes using SAS software (version 9.4; SAS Institute). The standardized mean difference (standard mean difference; Hedges g effect size), computed from the mean and standard deviation of each study, was used [15]. When the mean was available but the standard deviation was not, we imputed the missing standard deviation with the average standard deviation of all other studies; because the overall scores of the included studies were not on a common scale and the standard mean difference removes the effect of absolute values, this adjustment allowed such studies to be retained. We employed a random-effects model for the meta-analysis (statistically significant if P<.05).
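As a minimal sketch of this step (not the authors' SAS code; all numeric values below are hypothetical), the standard Hedges g formulas and the average-SD imputation described above can be written as follows.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Return Hedges g (bias-corrected standardized mean difference) and its variance."""
    # Pooled standard deviation across the two study arms
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    var_g = (n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2))
    return g, var_g

# Missing-SD imputation as described above: replace an unreported standard
# deviation with the mean of the standard deviations reported by the other
# studies (all values here are hypothetical).
reported_sds = [8.2, 10.5, 9.1]
imputed_sd = sum(reported_sds) / len(reported_sds)

g, var_g = hedges_g(m1=78.0, sd1=imputed_sd, n1=60, m2=70.0, sd2=9.0, n2=58)
print(f"SMD (Hedges g) = {g:.2f}, variance = {var_g:.3f}")
```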

The I2 statistic was used to quantify heterogeneity across studies [26]; an estimated I2 equal to or greater than 50% indicated a large amount of heterogeneity. As the studies were functionally different and involved different study designs, participants, interventions, and settings, a random-effects model, which allows for more heterogeneity, was used. Forest plots were created to display the meta-analysis findings. To explore publication bias, funnel plots were created and Begg tests were performed (statistically significant if P<.05). To explore potential sources of heterogeneity, multiple meta-regression and subgroup analyses based on study design were performed. Sensitivity analyses were also performed to test the robustness of the findings.
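The pooling itself follows the standard DerSimonian-Laird random-effects approach. The sketch below is a simplified Python illustration of how the pooled standard mean difference, its 95% CI, and I2 are obtained; it is not the authors' SAS implementation, and the per-study effects and variances are hypothetical.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling with Cochran Q, I2, and a 95% CI."""
    w = [1 / v for v in variances]                                     # fixed-effect weights
    fe_mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fe_mean) ** 2 for wi, yi in zip(w, effects))    # Cochran Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                      # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0                # heterogeneity (%)
    w_re = [1 / (v + tau2) for v in variances]                         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study Hedges g values and their variances
smd, ci, i2 = random_effects_pool([0.9, 1.2, 0.4, 1.5], [0.04, 0.06, 0.05, 0.09])
print(f"Pooled SMD {smd:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I2 = {i2:.1f}%")
```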


Study Selection

The search strategy identified 3389 articles from MEDLINE. After scanning the titles and abstracts, 93 articles were found to be potentially eligible, and their full texts were read for further assessment. Of these, 56 articles were included [9-11,22,27-78] (Figure 1). All included articles had been published in peer-reviewed journals.

Figure 1. Study flow diagram.

Type of Participants

In the 56 articles, 9943 participants were included. In 30 out of 56 participant subgroups, participants were from the field of medicine [11,31-34,36,38,39,41,43,44,46-50,53,64-67,69-74,76,78,79]. The participant subgroups from fields other than medicine were as follows: 16 studies in nursing [9,10,12,27,29,35,37,40,51,52,57,58,61-63,75], 1 in pharmacy [37], 3 in physiotherapy [12,30,45], 5 in dentistry [10,42,54,55,59], and 4 in interprofessional education [56,60,68,77].

Of the 56 studies, 47 were conducted in high-income countries: 14 were from the United States [9,10,29,31,36,38,43,50,53-55,59,61,73], 2 from Canada [47,58], 5 from Germany [39,41,46,57,76], 3 from the United Kingdom [42,56,75], 3 from Spain [30,45,48], 1 from France [74], 1 from Greece [34], 1 from Sweden [67], 1 from the Netherlands [37], 1 from Korea [40], 1 from Poland [79], 1 from Serbia [70], 1 from Croatia [64], 1 from Turkey [32], 2 from Taiwan [28,51], 1 from Japan [69], and 7 from Australia [44,49,63,65,66,68,78]. The remaining 9 studies were conducted in low- or middle-income countries: 2 from Thailand [52,62], 1 from China [77], 1 from Malaysia [72], 2 from Iran [27,71], 1 from Jordan [35], 1 from South Africa [11], and 1 from Uruguay [60].

The technical characteristics of the blended learning systems, topics of educational content, applied design methods, and other information on the validity of outcome measurements can be found in Multimedia Appendix 1.

Effects of Interventions

Blended Learning Versus Traditional Learning

The pooled effect size reflected a significantly large effect on knowledge outcomes (standard mean difference 1.07, 95% CI 0.85 to 1.28, z=9.72, n=9943, P<.001). Significant heterogeneity was observed among studies (I2=94.3%). Figure 2 shows details of the main analysis. The funnel plot asymmetry test (Figure 3) indicated publication bias among studies (Begg test P=.01). The trim and fill method indicated that the effect size changed to 0.41 (95% CI 0.16 to 0.66, P<.001) after adjusting for publication bias, suggesting that blended learning remained more effective than traditional learning.

Figure 2. Forest plot of blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval; SMD: standard mean difference.
Figure 3. Funnel plot of blended learning versus traditional learning.
Offline Blended Learning Versus Traditional Learning

Of the 3 studies [27-29] comparing offline blended learning to traditional learning, in 2 studies [27,28], the groups with blended resources scored better than their corresponding control groups with significant positive standard mean differences. The other study did not show a statistically significant difference in knowledge outcome (standard mean difference 0.08, 95% CI –0.63 to 0.79) [29]. The pooled effect for knowledge outcomes suggested no significant effects from offline blended learning over traditional education alone (standard mean difference 0.67, 95% CI –0.50 to 1.84, I2=87.9%, n=327) (Figure 4).

Figure 4. Forest plot of offline blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval.
Online Blended Learning Versus Traditional Learning

In studies comparing online blended learning to traditional learning, 26 [34-37,39-41,43,46,50,53-55,58,60,64,69-72,74,75,77,78] of the 41 studies [34-47,50,51,53-55,57,58,60,62,64,68-72,74,75,77,78] showed that groups with blended learning had better scores than those of their corresponding control groups. The pooled effect for knowledge outcomes was a standard mean difference of 0.73 (95% CI 0.60 to 0.86, n=6976) (Figure 5). There was a substantial amount of heterogeneity in the pooled analysis (I2=94.9%).

Figure 5. Forest plot of online blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval; SMD: standard mean difference.
Digital Learning Versus Traditional Learning

Only 3 [32,33,63] of the 7 studies [30-33,56,63,76] comparing digital learning to traditional learning showed better scores than their control groups. The pooled effect for knowledge outcomes suggested no significant difference between digital blended learning and traditional learning (standard mean difference 0.04, 95% CI –0.45 to 0.52, I2=93.4%, n=1093) (Figure 6).

Figure 6. Forest plot of digital blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval.
Computer-Assisted Instruction Blended Learning Versus Traditional Learning

Of the studies focusing on computer-assisted instruction blended learning, 5 [10,11,48,49,73] of the 8 studies [9-11,38,48,49,52,73] showed significantly higher scores than those with traditional learning. Only 1 study [38] showed a significant negative effect compared to traditional learning (standard mean difference –0.68, 95% CI –1.32 to –0.04). The other studies showed no significant difference [9,52]. The pooled effect for knowledge outcomes suggested a significant improvement of computer-assisted instruction blended with traditional education over traditional education alone (standard mean difference 1.13, 95% CI 0.47 to 1.79, I2=78.0%, n=926) (Figure 7).

Figure 7. Forest plot of computer-assisted instruction blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval.
Virtual Patient Blended Learning Versus Traditional Learning

In 4 [59,65,66,79] of the 5 studies [59,65-67,79] on knowledge outcomes when using virtual patients as a supplement to traditional learning, the groups with supplementary virtual patient learning support scored better than their corresponding control groups. Only 1 study with virtual patients did not show a statistically significant difference in knowledge outcomes (standard mean difference 0.13, 95% CI –0.30 to 0.56) [67]. The pooled effect for knowledge outcomes suggested significant effects for virtual patient blended learning (standard mean difference 0.62, 95% CI 0.18 to 1.06, I2=78.4%, n=621) (Figure 8).

Figure 8. Forest plot of virtual patient blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval.

Sensitivity Analyses

None of the subgroup analyses that were initially planned explained the heterogeneity of the results. Among the many aspects analyzed, we considered differences in the effectiveness of blended learning across health professions disciplines. Most of the studies involved students of medicine as participants (30/56, 54%).

When analyzing knowledge outcomes in medicine, nursing, and dentistry, some differences were apparent. The pooled effect of medicine studies showed a standard mean difference of 0.91 (95% CI 0.65 to 1.17, z=6.77, I2=95.8%, n=3418, P<.001) (Figure 9), nursing studies showed a standard mean difference of 0.75 (95% CI 0.26 to 1.24, z=2.99, I2=94.9%, n=1590, P=.008) (Figure 10), and dentistry studies showed a standard mean difference of 0.35 (95% CI 0.17 to 0.53, z=3.78, I2=37.6%, n=1130, P<.001) (Figure 11). Dentistry studies included 3 online blended learning studies (standard mean difference 0.37, 95% CI 0.14 to 0.64, z=2.63, I2=58.3%, n=879), 1 virtual patient learning study, and 1 computer-assisted instruction learning study.

In nursing, an additional benefit over traditional learning was observed for offline blended learning (standard mean difference 1.28, 95% CI 0.25 to 2.31, z=2.43, I2=86.2%, n=249) and for computer-assisted instruction blended learning (standard mean difference 0.53, 95% CI 0.17 to 0.90, z=2.84, I2=23.9%, n=174), but not for online blended learning (standard mean difference 0.68, 95% CI –0.07 to 1.45, z=1.76, I2=96.7%, n=1091).

In medicine, an additional benefit over traditional learning was observed for digital blended learning (standard mean difference 0.26, 95% CI 0.07 to 0.45, z=2.71, I2=95.6%, n=417) [31-33,76], virtual patient blended learning (standard mean difference 0.71, 95% CI 0.14 to 1.28, z=2.45, I2=85.8%, n=416) [65-67,79], online blended learning (standard mean difference 1.26, 95% CI 0.81 to 1.71, z=5.49, I2=96.1%, n=1879) [34,36,38,39,41,43,44,46,47,50,53,64,69-72,74,78], and computer-aided instruction blended learning (standard mean difference 2.1, 95% CI 0.68 to 3.44, z=2.91, I2=97.9%, n=706) [11,38,48,49,73], suggesting more positive effects of blended learning over traditional learning alone in medical education.

Figure 9. Forest plot of blended learning to traditional learning comparison for knowledge outcomes for medical students. df: degree of freedom; CI: confidence interval.
Figure 10. Forest plot of blended learning to traditional learning comparison for knowledge outcomes for nurses as students. df: degree of freedom; CI: confidence interval.
Figure 11. Forest plot of blended learning to traditional learning comparison for knowledge outcomes for dentistry students. df: degree of freedom; CI: confidence interval.

Risk of Bias

Risk of bias is shown in Figure 12. Evaluator bias was avoided in several studies by using automated assessment instruments; thus, we rated this risk as low in 50 of the 56 studies. Nevertheless, it was still unclear whether the instruments had been correctly validated. Attrition bias was within acceptable levels in some studies (low risk in 24 of the 56 studies), but this did not exclude voluntary bias and its influence on the estimated effect. Reporting bias was considered low in 28 of the 56 studies.

We did not consider allocation bias to be a significant problem in this review because, when studies described an adequate randomization method or gave an unclear description, randomization was assumed to be unlikely to be defective. Performance bias related to traditional learning may be a problem, but it is impossible to avoid in this type of research. It is possible to blind participants in comparisons of blended learning designs, but such studies are still rare in the literature. We could not reliably estimate publication bias given the high degree of heterogeneity of the included studies.

Figure 12. Risk of bias summary (+ low risk of bias; - high risk of bias; ? unclear risk of bias).

Principal Findings

This meta-analysis provided several findings. First, blended learning had a large consistent positive effect (standard mean difference 1.07, 95% CI 0.85 to 1.28) on knowledge acquisition in comparison to traditional learning in health professions. A possible explanation could be that, compared to traditional learning, blended learning allowed students to review electronic materials as often as necessary and at their own pace, and this likely enhanced learning performance [80]. The trim and fill method showed that the pooled effect size changed to 0.41 (95% CI 0.16 to 0.66), meaning that blended learning remained more effective than traditional learning. The strength of this meta-analysis was that it reinforced previous results [12]; however, a large heterogeneity was observed across the studies. The participant subgroup analyses partially explained these differences.

The effectiveness of blended learning is complex and depends on how well the evaluation fits, since evaluation occurs before the implementation of any innovation and allows planners to determine needs, consider participant characteristics, analyze contextual matters, and gather baseline information [81]. Some interventional studies have highlighted the potential of blended learning to increase course completion rates, improve retention, and increase student satisfaction [82]. Nevertheless, when blended learning environments have been compared with respect to disparities in academic achievement or grade dispersion, no significant differences were observed [83]. The effectiveness of blended learning may depend on student characteristics, design features, and learning outcomes. Learner success depends on the ability to cope with technical difficulty, as well as on technical skills and knowledge of computer operations and internet navigation. Thus, the success of blended learning is highly dependent on experience with the internet and computer apps. Some studies have observed that the success of blended learning was largely associated with the capability to participate in blended courses. A previous study [84] showed that high motivation among blended learners led to persistence in their courses. Moreover, time management is a crucial factor for successful online learning [85].

Second, offline blended learning did not show a positive pooled effect compared to traditional learning; however, 2 of the 3 studies were in nursing. These results were consistent with a previous meta-analysis on offline digital education [86]. Nevertheless, potential benefits of offline education such as unrestrained knowledge transfer and enriched accessibility of health education have previously been suggested [87]. These interventions could be focused on an interactive, associative, and perceptual learning experience by text, images, audio-video, or other components [88,89].

Third, the effect of digital learning on knowledge outcomes was inconsistent across the group and subgroup analyses. Overall, the 7 digital blended learning studies showed a nonsignificant effect compared to traditional learning, whereas in the medicine subgroup, digital learning had a positive effect (standard mean difference 0.26, 95% CI 0.07 to 0.45). Previous studies [18,90] have shown similar results. Nevertheless, George et al [18] showed the effectiveness of digital learning for undergraduate health professionals compared to traditional learning.

Fourth, in the 8 studies related to computer-assisted instruction, we observed a significant difference in knowledge acquisition outcomes. Furthermore, the difference was higher in the medicine subgroup. This finding must be interpreted with caution because of the high level of heterogeneity (all computer-assisted instruction: I2=78.0%; medicine computer-assisted instruction: I2=97.9%). Previous studies showed that computer-assisted instruction was equally as effective as traditional learning [91]. Nevertheless, the results of these studies also had high levels of heterogeneity and require cautious interpretation. We believe that a comparative approach focusing on the differences in intervention design, sample characteristics, and context of learning is needed to better understand the effectiveness of computer-assisted instruction. Computer-assisted instruction could also be perceived negatively by some students, which may impact outcomes.

Fifth, the participants in Al-Riyami et al’s study [92] reported difficulties accessing the course because of network problems with the university’s server and internet connection; therefore, the asynchronous features of the discussion boards were not used to their full potential in this study. Both problems could have emerged regardless of the online course: in traditional learning, students may choose not to engage in discussions, and internet connectivity issues can happen anywhere. This supports the contention above that local conditions, rather than a general effect, may render one mode of instruction preferable to the other.

Sixth, virtual patient blended learning had an overall positive pooled effect on knowledge outcomes when compared to traditional learning; this was also found in a similar meta-analysis [93]. Our observations also supplement the evidence in previous reviews [94,95], which included studies since 2010. Nevertheless, virtual patient simulations predominantly affect skill rather than knowledge outcomes. This could explain the low number of studies and the low added value of virtual patients in comparison to traditional learning. Virtual patients have greater impact in skills training, in applying problem solving, and when direct patient contact is not possible [93]. As proposed by Cook and Triola [96], virtual patients can be considered a learning modality in which learners actively use and train their clinical reasoning and critical thinking abilities before bedside learning. Nevertheless, some exceptions can be noted. A need for more guidance within virtual patient simulations may appear in studies comparing different instructional methods, where a narrative virtual patient design performed better than more autonomous problem-oriented designs [97]. Feedback given by humans in a virtual patient system was better than an animated backstory in increasing empathy [98], but providing no feedback had no more positive effect on outcomes than learning from a virtual patient scenario alone [99]. This reminds us that presenting realistic patient scenarios with a great degree of freedom is not an excuse for neglecting guidance in relation to learning objectives [100].

Strengths and Limitations

This meta-analysis had numerous strengths. An evaluation of the effectiveness of blended learning for health professions is timely and very important for both health educators and learners. We intentionally kept our scope broad in terms of learning topic and included all studies with learners from health professions.

The samples used in this study consisted of various health professionals (in medicine, nursing, dentistry, and others) across a wide variety of health care disciplines. Although these differences could explain the high level of heterogeneity, we found moderate or large pooled effect sizes in almost all subgroup analyses exploring variations in participant types. Thus, these results suggest that blended learning could be used across the various disciplines of health education.

However, some limitations must be considered. The systematic literature search encompassed only one database (MEDLINE) and applied few exclusion criteria. The quality of the analyses was dependent on the quality of data from the included studies. Because the standard deviation of some interventions was not available due to poor reporting, we used the average standard deviation of the other studies and imputed effect sizes, with concomitant potential for error. Results of subgroup analyses should be interpreted with caution because of the absence of a priori hypotheses in some cases, such as study design, country socioeconomic status, and outcome assessment. In addition, since the variability of study interventions, assessment instruments, and circumstances was not assessed and could be a potential source of heterogeneity, this should also prompt cautious interpretation of the results. Finally, publication bias was also found.

Conclusions

This study has implications for research on blended learning in health professions. Even though conclusions could be weakened by heterogeneity across studies, the results of this synthesis reinforce that blended learning may have a positive effect on knowledge acquisition in the health professions. Blended learning could be promising and worthwhile for further application in health professions. The difference in effects across subgroup analyses of health populations indicated that different methods of conducting blended courses could demonstrate differing effectiveness. Therefore, researchers and educators should pay attention to how to implement a blended course effectively. This question may be answered through studies directly comparing different blended instructional methods.

Authors' Contributions

AV designed the study and conceived the original idea. AV and ES read and selected the articles. AV performed the statistical analyses. AV wrote the manuscript. ES, AC, and JB participated in the writing of the manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Supplemental appendix.

DOCX File, 82 KB

  1. Bediang G, Stoll B, Geissbuhler A, Klohn AM, Stuckelberger A, Nko'o S, et al. Computer literacy and e-learning perception in Cameroon: the case of Yaounde Faculty of Medicine and Biomedical Sciences. BMC Med Educ 2013;13:57 [FREE Full text] [CrossRef] [Medline]
  2. Choules AP. The use of elearning in medical education: a review of the current situation. Postgrad Med J 2007 Apr;83(978):212-216 [FREE Full text] [CrossRef] [Medline]
  3. Yu S, Yang K. Attitudes toward web-based distance learning among public health nurses in Taiwan: a questionnaire survey. Int J Nurs Stud 2006 Aug;43(6):767-774. [CrossRef] [Medline]
  4. Peng Y, Wu X, Atkins S, Zwarentein M, Zhu M, Zhan XX, et al. Internet-based health education in China: a content analysis of websites. BMC Med Educ 2014;14:16 [FREE Full text] [CrossRef] [Medline]
  5. Moreira IC, Ventura SR, Ramos I, Rodrigues PP. Development and assessment of an e-learning course on breast imaging for radiographers: a stratified randomized controlled trial. J Med Internet Res 2015;17(1):e3 [FREE Full text] [CrossRef] [Medline]
  6. Kemp N, Grieve R. Face-to-face or face-to-screen? Undergraduates' opinions and test performance in classroom vs. online learning. Front Psychol 2014;5:1278 [FREE Full text] [CrossRef] [Medline]
  7. Kim K, Bonk CJ, Oh E. The present and future state of blended learning in workplace learning settings in the United States. Perf. Improv 2008 Sep;47(8):5-16. [CrossRef]
  8. Dziuban C, Graham CR, Moskal PD, Norberg A, Sicilia N. Blended learning: the new normal and emerging technologies. Int J Educ Technol High Educ 2018 Feb 15;15(1):15-13. [CrossRef]
  9. Mangione S, Nieman LZ, Greenspon LW, Margulies H. A comparison of computer-assisted instruction and small-group teaching of cardiac auscultation to medical students. Med Educ 1991 Sep;25(5):389-395. [Medline]
  10. Rouse DP. The effectiveness of computer-assisted instruction in teaching nursing students about congenital heart disease. Comput Nurs 2000;18(6):282-287. [Medline]
  11. Mars M, McLean M. Students' perceptions of a multimedia computer-aided instruction resource in histology. S Afr Med J 1996 Sep;86(9):1098-1102. [Medline]
  12. Liu Q, Peng W, Zhang F, Hu R, Li Y, Yan W. The effectiveness of blended learning in health professions: systematic review and meta-analysis. J Med Internet Res 2016;18(1):e2 [FREE Full text] [CrossRef] [Medline]
  13. Rowe M, Frantz J, Bozalek V. The role of blended learning in the clinical education of healthcare students: a systematic review. Med Teach 2012;34(4):e216-e221. [CrossRef] [Medline]
  14. McCutcheon K, Lohan M, Traynor M, Martin D. A systematic review evaluating the impact of online or blended learning vs. face-to-face learning of clinical skills in undergraduate nurse education. J Adv Nurs 2015 Feb;71(2):255-270. [CrossRef] [Medline]
  15. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: a meta-analysis. JAMA 2008 Sep 10;300(10):1181-1196. [CrossRef] [Medline]
  16. Jwayyed S, Stiffler KA, Wilber ST, Southern A, Weigand J, Bare R, et al. Technology-assisted education in graduate medical education: a review of the literature. Int J Emerg Med 2011;4:51 [FREE Full text] [CrossRef] [Medline]
  17. Rasmussen K, Belisario JM, Wark PA, Molina JA, Loong SL, Cotic Z, et al. Offline eLearning for undergraduates in health professions: a systematic review of the impact on knowledge, skills, attitudes and satisfaction. J Glob Health 2014 Jun;4(1):010405 [FREE Full text] [CrossRef] [Medline]
  18. George PP, Papachristou N, Belisario JM, Wang W, Wark PA, Cotic Z, et al. Online eLearning for undergraduates in health professions: a systematic review of the impact on knowledge, skills, attitudes and satisfaction. J Glob Health 2014 Jun;4(1):010406 [FREE Full text] [CrossRef] [Medline]
  19. Martinengo L, Yeo NJY, Tang ZQ, Markandran KD, Kyaw BM, Tudor Car L. Digital education for the management of chronic wounds in health care professionals: protocol for a systematic review by the Digital Health Education Collaboration. JMIR Res Protoc 2019 Mar 25;8(3):e12488 [FREE Full text] [CrossRef] [Medline]
  20. Bandla H, Franco RA, Simpson D, Brennan K, McKanry J, Bragg D. Assessing learning outcomes and cost effectiveness of an online sleep curriculum for medical students. J Clin Sleep Med 2012 Aug 15;8(4):439-443 [FREE Full text] [CrossRef] [Medline]
  21. Viljoen CA, Scott Millar R, Engel ME, Shelton M, Burch V. Is computer-assisted instruction more effective than other educational methods in achieving ECG competence among medical students and residents? Protocol for a systematic review and meta-analysis. BMJ Open 2017 Dec 26;7(12):e018811 [FREE Full text] [CrossRef] [Medline]
  22. Kononowicz AA, Zary N, Edelbring S, Corral J, Hege I. Virtual patients - what are we talking about? a framework to classify the meanings of the term in healthcare education. BMC Med Educ 2015;15:11 [FREE Full text] [CrossRef] [Medline]
  23. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol 2009 Oct;62(10):e1-34 [FREE Full text] [CrossRef] [Medline]
  24. Higgins J, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Chichester, UK: John Wiley & Sons, Ltd; 2008.
  25. Higgins JPT, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928 [FREE Full text] [Medline]
  26. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ 2003 Sep 6;327(7414):557-560 [FREE Full text] [CrossRef] [Medline]
  27. Ebadi A, Yousefi S, Khaghanizade M, Saeid Y. Assessment competency of nurses in biological incidents. Trauma Mon 2015 Nov;20(4):e25607 [FREE Full text] [CrossRef] [Medline]
  28. Liu W, Chu K, Chen S. The development and preliminary effectiveness of a nursing case management e-learning program. Comput Inform Nurs 2014 Jul;32(7):343-352. [CrossRef] [Medline]
  29. Bayne T, Bindler R. Effectiveness of medication calculation enhancement methods with nurses. J Nurs Staff Dev 1997;13(6):293-301. [Medline]
  30. Fernández-Lao C, Cantarero-Villanueva I, Galiano-Castillo N, Caro-Morán E, Díaz-Rodríguez L, Arroyo-Morales M. The effectiveness of a mobile application for the development of palpation and ultrasound imaging skills to supplement the traditional learning of physiotherapy students. BMC Med Educ 2016 Oct 19;16(1):274 [FREE Full text] [CrossRef] [Medline]
  31. Johnston R, Hepworth J, Goldsmith M, Lacasse C. Use of iPod™ technology in medical-surgical nursing courses: effect on grades. Int J Nurs Educ Scholarsh 2010;7:Article43. [CrossRef] [Medline]
  32. Küçük S, Kapakin S, Göktaş Y. Learning anatomy via mobile augmented reality: effects on achievement and cognitive load. Anat Sci Educ 2016 Oct;9(5):411-421. [CrossRef] [Medline]
  33. Kulier R, Gülmezoglu AM, Zamora J, Plana MN, Carroli G, Cecatti JG, et al. Effectiveness of a clinically integrated e-learning course in evidence-based medicine for reproductive health training: a randomized trial. JAMA 2012 Dec 5;308(21):2218-2225. [CrossRef] [Medline]
  34. Kavadella A, Tsiklakis K, Vougiouklakis G, Lionarakis A. Evaluation of a blended learning course for teaching oral radiology to undergraduate dental students. Eur J Dent Educ 2012 Feb;16(1):e88-e95. [CrossRef] [Medline]
  35. Sowan AK, Jenkins LS. Use of the seven principles of effective teaching to design and deliver an interactive hybrid nursing research course. Nurs Educ Perspect 2013;34(5):315-322. [Medline]
  36. Lancaster JW, Wong A, Roberts SJ. 'Tech' versus 'talk': a comparison study of two different lecture styles within a Master of Science nurse practitioner course. Nurse Educ Today 2012 Jul;32(5):e14-e18. [CrossRef] [Medline]
  37. Dankbaar MEW, Storm DJ, Teeuwen IC, Schuit SCE. A blended design in acute care training: similar learning results, less training costs compared with a traditional format. Perspect Med Educ 2014 Sep;3(4):289-299 [FREE Full text] [CrossRef] [Medline]
  38. Shomaker TS, Ricks DJ, Hale DC. A prospective, randomized controlled study of computer-assisted learning in parasitology. Acad Med 2002 May;77(5):446-449. [Medline]
  39. Mahnken AH, Baumann M, Meister M, Schmitt V, Fischer MR. Blended learning in radiology: is self-determined learning really more effective? Eur J Radiol 2011 Jun;78(3):384-387. [CrossRef] [Medline]
  40. Sung YH, Kwon IG, Ryu E. Blended learning on medication administration for new nurses: integration of e-learning and face-to-face instruction in the classroom. Nurse Educ Today 2008 Nov;28(8):943-952. [CrossRef] [Medline]
  41. Woltering V, Herrler A, Spitzer K, Spreckelsen C. Blended learning positively affects students' satisfaction and the role of the tutor in the problem-based learning process: results of a mixed-method evaluation. Adv Health Sci Educ Theory Pract 2009 Dec;14(5):725-738. [CrossRef] [Medline]
  42. Lowe CI, Wright JL, Bearn DR. Computer-aided Learning (CAL): an effective way to teach the Index of Orthodontic Treatment Need (IOTN)? J Orthod 2001 Dec;28(4):307-311. [CrossRef] [Medline]
  43. Hilger AE, Hamrick HJ, Denny FW. Computer instruction in learning concepts of streptococcal pharyngitis. Arch Pediatr Adolesc Med 1996 Jun;150(6):629-631. [Medline]
  44. Ilic D, Hart W, Fiddes P, Misso M, Villanueva E. Adopting a blended learning approach to teaching evidence based medicine: a mixed methods study. BMC Med Educ 2013;13:169 [FREE Full text] [CrossRef] [Medline]
  45. Arroyo-Morales M, Cantarero-Villanueva I, Fernández-Lao C, Guirao-Piñeyro M, Castro-Martín E, Díaz-Rodríguez L. A blended learning approach to palpation and ultrasound imaging skills through supplementation of traditional classroom teaching with an e-learning package. Man Ther 2012 Oct;17(5):474-478. [CrossRef] [Medline]
  46. Raupach T, Münscher C, Pukrop T, Anders S, Harendza S. Significant increase in factual knowledge with web-assisted problem-based learning as part of an undergraduate cardio-respiratory curriculum. Adv Health Sci Educ Theory Pract 2010 Aug;15(3):349-356 [FREE Full text] [CrossRef] [Medline]
  47. Carbonaro M, King S, Taylor E, Satzinger F, Snart F, Drummond J. Integration of e-learning technologies in an interprofessional health science course. Med Teach 2008 Feb;30(1):25-33. [CrossRef] [Medline]
  48. Pereira J, Palacios M, Collin T, Wedel R, Galloway L, Murray A, et al. The impact of a hybrid online and classroom-based course on palliative care competencies of family medicine residents. Palliat Med 2008 Dec;22(8):929-937. [CrossRef] [Medline]
  49. Devitt P, Smith JR, Palmer E. Improved student learning in ophthalmology with computer-aided instruction. Eye (Lond) 2001 Oct;15(Pt 5):635-639. [CrossRef] [Medline]
  50. Kiviniemi MT. Effects of a blended learning approach on student outcomes in a graduate-level public health course. BMC Med Educ 2014;14:47 [FREE Full text] [CrossRef] [Medline]
  51. Hsu L, Hsieh S. Effects of a blended learning module on self-reported learning performances in baccalaureate nursing students. J Adv Nurs 2011 Nov;67(11):2435-2444. [CrossRef] [Medline]
  52. Kaveevivitchai C, Chuengkriankrai B, Luecha Y, Thanooruk R, Panijpan B, Ruenwongsa P. Enhancing nursing students' skills in vital signs assessment by using multimedia computer-assisted learning with integrated content of anatomy and physiology. Nurse Educ Today 2009 Jan;29(1):65-72. [CrossRef] [Medline]
  53. Kumrow DE. Evidence-based strategies of graduate students to achieve success in a hybrid web-based course. J Nurs Educ 2007 Mar;46(3):140-145. [Medline]
  54. Howerton WB, Enrique PRT, Ludlow JB, Tyndall DA. Interactive computer-assisted instruction vs. lecture format in dental education. J Dent Hyg 2004;78(4):10. [Medline]
  55. Gadbury-Amyot CC, Singh AH, Overman PR. Teaching with technology: learning outcomes for a combined dental and dental hygiene online hybrid oral histology course. J Dent Educ 2013 Jun;77(6):732-743 [FREE Full text] [Medline]
  56. Perkins GD, Fullerton JN, Davis-Gomez N, Davies RP, Baldock C, Stevens H, et al. The effect of pre-course e-learning prior to advanced life support training: a randomised controlled trial. Resuscitation 2010 Jul;81(7):877-881. [CrossRef] [Medline]
  57. Strickland S. The effectiveness of blended learning environments for the delivery of respiratory care education. J Allied Health 2009;38(1):E11-E16. [Medline]
  58. Gagnon M, Gagnon J, Desmartis M, Njoya M. The impact of blended teaching on knowledge, satisfaction, and self-directed learning in nursing undergraduates: a randomized, controlled trial. Nurs Educ Perspect 2013;34(6):377-382. [Medline]
  59. Boynton JR, Green TG, Johnson LA, Nainar SMH, Straffon LH. The virtual child: evaluation of an internet-based pediatric behavior management simulation. J Dent Educ 2007 Sep;71(9):1187-1193 [FREE Full text] [Medline]
  60. Llambí L, Esteves E, Martinez E, Forster T, García S, Miranda N, et al. Teaching tobacco cessation skills to Uruguayan physicians using information and communication technologies. J Contin Educ Health Prof 2011;31(1):43-48. [CrossRef] [Medline]
  61. Sherman H, Comer L, Putnam L, Freeman H. Blended versus lecture learning: outcomes for staff development. J Nurses Staff Dev 2012 Jul;28(4):186-190. [CrossRef] [Medline]
  62. Gerdprasert S, Pruksacheva T, Panijpan B, Ruenwongsa P. Development of a web-based learning medium on mechanism of labour for nursing students. Nurse Educ Today 2010 Jul;30(5):464-469. [CrossRef] [Medline]
  63. Farrell MJ, Rose L. Use of mobile handheld computers in clinical nursing education. J Nurs Educ 2008 Jan;47(1):13-19. [Medline]
  64. Taradi SK, Taradi M, Radic K, Pokrajac N. Blending problem-based learning with web technology positively impacts student learning outcomes in acid-base physiology. Adv Physiol Educ 2005 Mar;29(1):35-39 [FREE Full text] [CrossRef] [Medline]
  65. Lehmann R, Thiessen C, Frick B, Bosse HM, Nikendei C, Hoffmann GF, et al. Improving pediatric basic life support performance through blended learning with web-based virtual patients: randomized controlled trial. J Med Internet Res 2015 Jul 02;17(7):e162 [FREE Full text] [CrossRef] [Medline]
  66. Succar T, Zebington G, Billson F, Byth K, Barrie S, McCluskey P, et al. The impact of the Virtual Ophthalmology Clinic on medical students' learning: a randomised controlled trial. Eye (Lond) 2013 Oct;27(10):1151-1157. [CrossRef] [Medline]
  67. Wahlgren C, Edelbring S, Fors U, Hindbeck H, Ståhle M. Evaluation of an interactive case simulation system in dermatology and venereology for medical students. BMC Med Educ 2006;6:40 [FREE Full text] [CrossRef] [Medline]
  68. Ilic D, Nordin RB, Glasziou P, Tilson JK, Villanueva E. A randomised controlled trial of a blended learning education intervention for teaching evidence-based medicine. BMC Med Educ 2015 Mar 10;15:39 [FREE Full text] [CrossRef] [Medline]
  69. Shimizu I, Nakazawa H, Sato Y, Wolfhagen IHAP, Könings KD. Does blended problem-based learning make Asian medical students active learners?: a prospective comparative study. BMC Med Educ 2019 May 15;19(1):147 [FREE Full text] [CrossRef] [Medline]
  70. Milic NM, Trajkovic GZ, Bukumiric ZM, Cirkovic A, Nikolic IM, Milin JS, et al. Improving education in medical statistics: implementing a blended learning model in the existing curriculum. PLoS One 2016;11(2):e0148882 [FREE Full text] [CrossRef] [Medline]
  71. Sadeghi R, Sedaghat MM, Sha Ahmadi F. Comparison of the effect of lecture and blended teaching methods on students' learning and satisfaction. J Adv Med Educ Prof 2014 Oct;2(4):146-150 [FREE Full text] [Medline]
  72. Kho MHT, Chew KS, Azhar MN, Hamzah ML, Chuah KM, Bustam A, et al. Implementing blended learning in emergency airway management training: a randomized controlled trial. BMC Emerg Med 2018 Jan 15;18(1):1 [FREE Full text] [CrossRef] [Medline]
  73. Jarrett-Thelwell FD, Burke JR, Poirier J, Petrocco-Napuli K. A comparison of student performance and satisfaction between a traditional and integrative approach to teaching an introductory radiology course on the extremities. J Chiropr Educ 2019 Mar;33(1):21-29 [FREE Full text] [CrossRef] [Medline]
  74. Marchalot A, Dureuil B, Veber B, Fellahi J, Hanouz J, Dupont H, et al. Effectiveness of a blended learning course and flipped classroom in first year anaesthesia training. Anaesth Crit Care Pain Med 2018 Oct;37(5):411-415. [CrossRef] [Medline]
  75. McCutcheon K, O'Halloran P, Lohan M. Online learning versus blended learning of clinical supervisee skills with pre-registration nursing students: A randomised controlled trial. Int J Nurs Stud 2018 Jun;82:30-39 [FREE Full text] [CrossRef] [Medline]
  76. Noll C, von Jan U, Raap U, Albrecht U. Mobile augmented reality as a feature for self-oriented, blended learning in medicine: randomized controlled trial. JMIR Mhealth Uhealth 2017 Sep 14;5(9):e139 [FREE Full text] [CrossRef] [Medline]
  77. Zhan X, Zhang Z, Sun F, Liu Q, Peng W, Zhang H, et al. Effects of improving primary health care workers' knowledge about public health services in rural China: a comparative study of blended learning and pure e-learning. J Med Internet Res 2017 May 01;19(5):e116 [FREE Full text] [CrossRef] [Medline]
  78. Stewart A, Inglis G, Jardine L, Koorts P, Davies MW. A randomised controlled trial of blended learning to improve the newborn examination skills of medical students. Arch Dis Child Fetal Neonatal Ed 2013 Mar;98(2):F141-F144. [CrossRef] [Medline]
  79. Kononowicz AA, Krawczyk P, Cebula G, Dembkowska M, Drab E, Frączek B, et al. Effects of introducing a voluntary virtual patient module to a basic life support with an automated external defibrillator course: a randomised trial. BMC Med Educ 2012 Jun 18;12:41 [FREE Full text] [CrossRef] [Medline]
  80. Wu J, Tennyson RD, Hsia T. A study of student satisfaction in a blended e-learning system environment. Computers & Education 2010 Aug;55(1):155-164. [CrossRef]
  81. Kintu MJ, Zhu C, Kagambe E. Blended learning effectiveness: the relationship between student characteristics, design features and outcomes. Int J Educ Technol High Educ 2017 Feb 6;14(1):1-20. [CrossRef]
  82. Garrison D, Kanuka H. Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education 2004 Apr;7(2):95-105. [CrossRef]
  83. Kazu I, Demirkol M. Effect of blended learning environment model on high school students' academic achievement. Turk Online J Educ Technol 2014 Jan 01;13(1):78-87 [FREE Full text]
  84. Lim DH, Kim H. Motivation and learner characteristics affecting online learning and learning application. Journal of Educational Technology Systems 2016 Jul 21;31(4):423-439. [CrossRef]
  85. Song L, Singleton ES, Hill JR, Koh MH. Improving online learning: student perceptions of useful and challenging characteristics. The Internet and Higher Education 2004 Jan;7(1):59-70. [CrossRef]
  86. Posadzki P, Bala MM, Kyaw BM, Semwal M, Divakar U, Koperny M, et al. Offline digital education for postregistration health professions: systematic review and meta-analysis by the digital health education collaboration. J Med Internet Res 2019 Apr 24;21(4):e12968 [FREE Full text] [CrossRef] [Medline]
  87. Greenhalgh T. Computer assisted learning in undergraduate medical education. BMJ 2001 Jan 6;322(7277):40-44 [FREE Full text] [Medline]
  88. Adams AM. Pedagogical underpinnings of computer-based learning. J Adv Nurs 2004 Apr;46(1):5-12. [CrossRef] [Medline]
  89. Brandt MG, Davies ET. Visual-spatial ability, learning modality and surgical knot tying. Can J Surg 2006 Dec;49(6):412-416 [FREE Full text] [Medline]
  90. Dunleavy G, Nikolaou CK, Nifakos S, Atun R, Law GCY, Tudor Car L. Mobile digital education for health professions: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res 2019 Feb 12;21(2):e12937 [FREE Full text] [CrossRef] [Medline]
  91. Tomesko J, Touger-Decker R, Dreker M, Zelig R, Parrott JS. The effectiveness of computer-assisted instruction to teach physical examination to students and trainees in the health sciences professions: a systematic review and meta-analysis. J Med Educ Curric Dev 2017;4:2382120517720428 [FREE Full text] [CrossRef] [Medline]
  92. Al-Riyami S, Moles DR, Leeson R, Cunningham SJ. Comparison of the instructional efficacy of an internet-based temporomandibular joint (TMJ) tutorial with a traditional seminar. Br Dent J 2010 Dec 11;209(11):571-576. [CrossRef] [Medline]
  93. Kononowicz AA, Woodham LA, Edelbring S, Stathakarou N, Davies D, Saxena N, et al. Virtual patient simulations in health professions education: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res 2019 Jul 02;21(7):e14676 [FREE Full text] [CrossRef] [Medline]
  94. Cook DA, Erwin PJ, Triola MM. Computerized virtual patients in health professions education: a systematic review and meta-analysis. Acad Med 2010 Oct;85(10):1589-1602. [CrossRef] [Medline]
  95. Xiao Z, Wang C, Li L, Tang X, Li N, Li J, et al. Clinical efficacy and safety of aidi injection plus docetaxel-based chemotherapy in advanced nonsmall cell lung cancer: a meta-analysis of 36 randomized controlled trials. Evid Based Complement Alternat Med 2018;2018:7918258 [FREE Full text] [CrossRef] [Medline]
  96. Cook DA, Triola MM. Virtual patients: a critical literature review and proposed next steps. Med Educ 2009 Apr;43(4):303-311. [CrossRef] [Medline]
  97. Bearman M, Cesnik B, Liddell M. Random comparison of 'virtual patient' models in the context of teaching clinical communication skills. Med Educ 2001 Sep;35(9):824-832. [Medline]
  98. Foster A, Chaudhary N, Kim T, Waller JL, Wong J, Borish M, et al. Using virtual patients to teach empathy: a randomized controlled study to enhance medical students' empathic communication. Simul Healthc 2016 Jun;11(3):181-189. [CrossRef] [Medline]
  99. Tolsgaard MG, Jepsen RMHG, Rasmussen MB, Kayser L, Fors U, Laursen LC, et al. The effect of constructing versus solving virtual patient cases on transfer of learning: a randomized trial. Perspect Med Educ 2016 Feb;5(1):33-38 [FREE Full text] [CrossRef] [Medline]
  100. Edelbring S, Wahlström R. Dynamics of study strategies and teacher regulation in virtual patient learning activities: a cross sectional survey. BMC Med Educ 2016 Apr 23;16:122 [FREE Full text] [CrossRef] [Medline]

Edited by G Eysenbach; submitted 04.10.19; peer-reviewed by Y Lecarpentier, M Grasl; comments to author 30.10.19; revised version received 07.11.19; accepted 13.11.19; published 10.08.20

Copyright

©Alexandre Vallée, Jacques Blacher, Alain Cariou, Emmanuel Sorbets. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.08.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.