Published in Vol 18, No 1 (2016): January

The Effectiveness of Blended Learning in Health Professions: Systematic Review and Meta-Analysis

Authors of this article:

Qian Liu1; Weijun Peng1; Fan Zhang1; Rong Hu1; Yingxue Li1; Weirong Yan1

Original Paper

Department of Epidemiology and Biostatistics, School of Public Health, Tongji Medical College of Huazhong University of Science & Technology, Wuhan, China

Corresponding Author:

Weirong Yan, PhD

Department of Epidemiology and Biostatistics

School of Public Health

Tongji Medical College of Huazhong University of Science & Technology


No.13 Hangkong Road

Wuhan, China

Phone: 86 (027)83650713

Fax: 86 (027)83650713

Email: weirong.yan@hust.edu.cn


Background: Blended learning, defined as the combination of traditional face-to-face learning and asynchronous or synchronous e-learning, has grown rapidly and is now widely used in education. Concerns about the effectiveness of blended learning have led to an increasing number of studies on this topic. However, there has yet to be a quantitative synthesis evaluating the effectiveness of blended learning on knowledge acquisition in health professions.

Objective: We aimed to assess the effectiveness of blended learning for health professional learners compared with no intervention and with nonblended learning. We also aimed to explore factors that could explain differences in learning effects across study designs, participants, country socioeconomic status, intervention durations, randomization, and quality score for each of these questions.

Methods: We conducted a search of citations in Medline, CINAHL, Science Direct, Ovid Embase, Web of Science, CENTRAL, and ERIC through September 2014. Studies in any language that compared blended learning with no intervention or nonblended learning among health professional learners and assessed knowledge acquisition were included. Two reviewers independently evaluated study quality and abstracted information including characteristics of learners and intervention (study design, exercises, interactivity, peer discussion, and outcome assessment).

Results: We identified 56 eligible articles. Heterogeneity across studies was large (I² ≥93.3%) in all analyses. For studies comparing knowledge gained from blended learning versus no intervention, the pooled effect size was 1.40 (95% CI 1.04-1.77; P<.001; n=20 interventions) with no significant publication bias, and exclusion of any single study did not change the overall result. For studies comparing blended learning with nonblended learning (pure e-learning or pure traditional face-to-face learning), the pooled effect size was 0.81 (95% CI 0.57-1.05; P<.001; n=56 interventions), and exclusion of any single study did not change the overall result. Although significant publication bias was found, the trim and fill method showed that the effect size changed to 0.26 (95% CI -0.01 to 0.54) after adjustment. In the subgroup analyses, pre-posttest study design, presence of exercises, and objective outcome assessment yielded larger effect sizes.

Conclusions: Blended learning appears to have a consistent positive effect in comparison with no intervention, and to be more effective than or at least as effective as nonblended instruction for knowledge acquisition in health professions. Owing to the large heterogeneity, however, these conclusions should be treated with caution.

J Med Internet Res 2016;18(1):e2

doi:10.2196/jmir.4807



Introduction

Electronic learning (e-learning) has quickly become popular for health education [1-3], especially since the emergence of the Internet has allowed its potential to be realized [4]. E-learning can not only transcend space and time boundaries and improve convenience and effectiveness for individualized and collaborative learning, but also provide reusable and up-to-date information through the use of interactive multimedia [3,5-9]. However, it also suffers from disadvantages such as high costs for preparing multimedia materials, continuous costs for platform maintenance and updating, as well as learners’ feelings of isolation in virtual environments [8,10,11]. Traditional learning must be conducted at a specific time and place and is considered vital in building a sense of community [12,13]. Blended learning, defined as the combination of traditional face-to-face learning and asynchronous or synchronous e-learning [14], has been presented as a promising alternative approach for health education because it is characterized as synthesizing the advantages of both traditional learning and e-learning [8,15,16]. Moreover, blended learning has shown rapid growth and is now widely used in education [17,18].

With the introduction of blended learning, increasing research has focused on concerns about its effectiveness. Three original research articles reporting on quantitative evaluations of blended learning were published in the 1990s [19-21], and then many were published after 2000 [16,22-29]. A quantitative synthesis of these studies could inform educators and students about evidence for, and factors influencing, the effectiveness of blended learning.

Rowe et al’s systematic review reported that blended learning has the potential to improve clinical competencies among health students [30]. In another systematic review, McCutcheon et al suggested a lack of evaluation of blended learning in undergraduate nursing education [31]. Several reviews have also summarized the evaluation of e-learning in medical education, but none separated blended learning from pure e-learning [32-34]. Furthermore, these systematic reviews were limited to only some areas or branches of health education; there has been no quantitative synthesis to evaluate the effectiveness of blended learning in all professions directly related to human and animal health.

Therefore, our study aimed to identify and quantitatively synthesize all studies evaluating the effectiveness of blended learning for health professional learners who were students, postgraduate trainees, or practitioners in a profession directly related to human or animal health. We conducted two meta-analyses: the first summarized studies comparing blended learning with no intervention, and the second explored blended learning compared with nonblended learning (including pure e-learning and traditional face-to-face learning). We also aimed to explore factors that could explain differences in learning effectiveness across characteristics of participants, interventions, and study designs. Based on previous research, we hypothesized that learning outcomes would be improved through exercises, cognitive interactivity, and peer discussion [35-38]. Exercises include cases, quizzes, self-assessment tests, and other activities requiring learners to apply knowledge acquired from the course [33]. Cognitive interactivity reflects the cognitive engagement required for course participation; multiple practice exercises, essays, and group collaborative projects count as high interactivity [38]. Peer discussion includes instructor-student or peer-peer face-to-face discussion that might arise in a typical lecture, and synchronous or asynchronous online communication such as discussion boards, email, chat, or Internet conferencing [33].


Methods

Reporting Standards

We conducted and reported our study according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [39] (see e-Table 7 in Multimedia Appendix 1) and the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) guidelines [40].

Eligibility Criteria

Inclusion criteria for studies were based on the PICOS (population, intervention, comparison, outcome, and study design) framework [39]. Studies were included only if they (1) were conducted among health professional learners, (2) used a blended learning intervention in the experimental group, (3) involved a comparison of blended learning with no intervention or nonblended learning, (4) included quantitative outcomes with respect to knowledge assessed with subjective (eg, learner self-report) or objective assessments (eg, multiple-choice question knowledge test) of learners’ factual or conceptual understanding of the course, and (5) were randomized controlled trials (RCTs) or nonrandomized studies (NRSs), which are widely used in health profession education [33]. Studies in any language and of any publication type were included. Gray literature was searched in CENTRAL and ERIC.

Studies were excluded if they did not compare blended learning with nonblended learning or no intervention, did not report quantitative outcomes with respect to knowledge, used a single-group posttest-only design, were not conducted with health professional learners, evaluated pure e-learning instead of blended learning, or used the computer only for administrative purposes. Reviews, editorials, or meeting abstracts without original data were also excluded.

Data Sources

To identify relevant studies, we conducted a search of citations in Medline, CINAHL, Science Direct, Ovid Embase, Web of Science, CENTRAL, and ERIC. Key search terms included delivery concepts (eg, blended, hybrid, integrated, computer-aided, computer-assisted; learning, training, education, instruction, teaching, course), participants’ characteristics (eg, physician*, medic*, nurs*, pharmac*, dent*, cme, health*), and study design concepts (eg, compar*, trial*, evaluat*, assess*, effect*, pretest*, pre-test, posttest*, post-test, preintervention, pre-intervention, postintervention, post-intervention). The asterisk (*) was used as a truncation symbol for searching; for instance, evaluat* retrieved entries containing the words evaluate, evaluation, evaluative, and so on. E-Table 1 in Multimedia Appendix 1 describes the complete search strategy for each database. The last date of search was September 25, 2014. In addition, all references of included studies were screened for any relevant articles.

Study Selection

Using these criteria, QL and FZ independently screened all titles and abstracts and reviewed the full text of all potentially eligible abstracts. Conflicts between these reviewers were resolved through discussion with other members of the research group until a consensus was obtained.

Data Extraction

QL and FZ developed a form (based on the Cochrane Consumers and Communication Review Group’s data extraction template), pilot-tested it on 10 randomly selected included publications, and refined it accordingly. Using the same form, data related to the following issues were extracted independently by QL and FZ: first author’s name, year of publication, country where the intervention was conducted, study design, study subjects, sample size, specific health profession of the intervention, comparison intervention, intervention duration, exercises, interactivity, peer discussion, outcome assessment, conflict of interest (whether there was a conflict of interest), and funding from company (whether funding was obtained from a source that had a direct interest in the results). Disagreements were resolved through discussion with another research team member until agreement was reached. If the required data for the meta-analyses were missing from the original report, attempts were made to obtain the information by contacting the corresponding authors by email.

Quality Assessment

Recognizing that many nonrandomized and observational studies would be included, we evaluated the methodological quality of the studies using a modified Newcastle-Ottawa Scale (also called the Newcastle-Ottawa Scale-Education), an instrument for appraising the methodological quality of original medical education research [33,41-43]. Each study could receive up to 6 points and was rated in the following five domains (a scoring sketch follows the list):

  • Representativeness: the intervention group was “truly” or “somewhat” representative of the average learner in this community (1 point).
  • Selection: the comparison group was drawn from the same community as the experimental cohort (1 point).
  • Comparability of cohorts (up to 2 points): for nonrandomized two-cohort studies, 1 point if the study controlled for baseline learning outcome (eg, adjusted for knowledge pretest scores) and 1 point if it controlled for other baseline characteristics; for randomized studies, 1 point for randomization and 1 point for allocation concealment.
  • Blinding: outcome assessment was blinded (1 point), judged as follows: (1) an assessment was blinded if the assessor could not be influenced by group assignment; (2) assessments that do not require human judgment (eg, multiple-choice tests or computer-scored performance) were considered blinded; (3) one-group studies were not considered blinded unless scoring required no judgment or the authors described a plausible method of hiding the timing of assessment; and (4) participant-reported outcomes were never considered blinded.
  • Follow-up: subjects lost to follow-up were unlikely to introduce bias; small number lost (75% or greater follow-up) or description provided of those lost (1 point).
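To make the 6-point tally concrete, here is a minimal sketch in R (the language reported for the analyses); the function and its 0/1 indicator arguments are illustrative assumptions, not part of the published instrument.

```r
# Illustrative tally of the modified NOS-E: six 0/1 domain indicators summed
# to a 0-6 quality score (argument names are hypothetical).
nos_score <- function(representative, same_community,
                      comparability_baseline, comparability_other,
                      blinded_assessment, adequate_followup) {
  representative + same_community +
    comparability_baseline + comparability_other +
    blinded_assessment + adequate_followup
}

# Example: a randomized study with concealed allocation (2 comparability points),
# a blinded objective outcome, adequate follow-up, and typical learners drawn
# from one community would score the maximum of 6.
nos_score(1, 1, 1, 1, 1, 1)
```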

In addition, we evaluated the quality of evidence with the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) instrument [44-53]. GRADE identifies five factors that may decrease the quality of evidence of studies and three factors that may increase it. RCTs start with a high rating and observational studies with a low rating. Ratings are modified downward due to (1) study limitations (risk of bias) [47], (2) inconsistency of results [50], (3) indirectness of evidence [51], (4) imprecision [49], and (5) likely publication bias [48]. Ratings are modified upward due to (1) large magnitude of effect, (2) dose response, and (3) confounders likely to minimize the effect. Evaluating these elements, we determined the quality of evidence as “high” (ie, further research is very unlikely to change our confidence in the estimate of effect), “moderate” (ie, further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate), “low” (ie, further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate), or “very low” (ie, we are very uncertain about the estimate).

Data Synthesis

Analyses of knowledge outcomes were carried out using Stata version 12.0 and R 3.1.2. The standardized mean difference (SMD; Hedges g effect size), computed from the means and standard deviations of each study, was used [33,54]. When the mean was available but the standard deviation (SD) was not, we used the mean SD of all other included studies. Because the maximum scores of the included studies differed and the SMD eliminates the effect of absolute scale, we rescaled the means and SDs so that the average SD could replace a missing SD.
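As a minimal sketch of this step (assuming the R metafor package, which the paper does not name, and hypothetical study values), Hedges g can be computed from group means and SDs, with a missing SD imputed as the mean SD of the other included studies:

```r
library(metafor)

# Hypothetical per-study summaries: group 1 is blended learning, group 2 the comparison.
dat <- data.frame(
  study = c("A", "B", "C"),
  m1i = c(78, 65, 82), sd1i = c(10, NA, 9),  n1i = c(40, 30, 55),
  m2i = c(70, 60, 74), sd2i = c(11, 12, 10), n2i = c(38, 32, 50)
)

# Impute a missing SD with the mean SD of the other included studies.
dat$sd1i[is.na(dat$sd1i)] <- mean(dat$sd1i, na.rm = TRUE)

# Hedges g (bias-corrected SMD, yi) and its sampling variance (vi).
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)
dat[, c("study", "yi", "vi")]
```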

The I² statistic was used to quantify heterogeneity across studies [55]; an estimated I² of 50% or greater indicated large heterogeneity. Because the included studies were functionally different, involving different study designs, participants, interventions, and settings, a random-effects model, which allows for such heterogeneity, was used. Meta-analyses were conducted and forest plots were created. To explore publication bias, funnel plots were created and Begg’s tests were performed. To explore potential sources of heterogeneity, we performed multiple meta-regression and subgroup analyses based on factors selected in advance: study design, country socioeconomic status, participant type, duration of intervention, randomization, quality score, exercises, interactivity, peer discussion, outcome assessment, and intervention of the control group. Moreover, we performed sensitivity analyses to test the robustness of the findings.
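A sketch of this workflow under the same assumptions (the metafor package, the hypothetical dat object from the previous example, and hypothetical moderator columns such as design and assessment):

```r
# Random-effects pooling; the summary reports the pooled SMD and I².
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)

forest(res)   # forest plot of individual and pooled effects
funnel(res)   # funnel plot for visual inspection of publication bias

# Begg and Mazumdar rank correlation test for funnel plot asymmetry.
ranktest(res)

# Mixed-effects meta-regression with prespecified moderators
# (design and assessment are hypothetical columns added to dat).
rma(yi, vi, mods = ~ design + assessment, data = dat)

# Subgroup analysis: pool within one level of a moderator.
rma(yi, vi, data = dat, subset = (assessment == "objective"))
```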


Results

Study Selection

The search strategy identified 4815 citations from the databases, and 642 duplicates were removed. After titles and abstracts were screened, 225 citations were deemed potentially eligible. Full texts were then read for further assessment, and 62 articles remained. For 12 articles without accessible full texts and 6 without sufficient quantitative data (mean knowledge scores), we contacted the authors by email but received no reply. Thus, 56 publications were included, among which one publication compared blended learning with both no intervention and nonblended instruction (Figure 1). No additional relevant articles were identified by reviewing the references of the included articles. E-Table 2 in Multimedia Appendix 1 lists the references of articles excluded based on full text (n=163) and insufficient quantitative data (n=6).

Figure 1. Study selection process.

Study Characteristics

In the meta-analysis, we included 13 publications representing 20 interventions published from 2004 to 2014, which compared blended learning with no intervention and included 2238 health professional participants [22-24,56-65]. The number of participants ranged from 6 [61] to 817 [62], and the duration of the intervention ranged from 24 hours [63] to one semester [58].

We included 44 publications representing 56 interventions comparing blended learning with nonblended learning, published from 1991 to 2014 and covering 6110 health professional participants [16,19-21,25,26,28,29,63,66-100]. There were 1 pre-posttest one-group intervention, 27 posttest-only two-group interventions, and 28 pre-posttest two-group interventions. The number of participants ranged from 14 [72] to 609 [84], and the duration ranged from 1 hour [101] to 1 year [77].

Components or features of the study interventions were mostly “Web-based + face-to-face”, “e-learning + class session”, and “Web-based online instruction + offline instruction (review of the core contents of the online program, case analysis, small group discussion, and miscellaneous activities)”. Modality or technology varied, for example, “Moodle, on-site workshops” or “asynchronous discussion forums, a live audio and text-based online synchronous session (Centra), and online modules (Macromedia Breeze)”. More than 80% of the interventions were measured using objective assessments, which included multiple-choice questions, true-or-false questions, matching questions, and essays. For most studies, there was no delay between the end of the intervention and the posttest. Table 1 summarizes the key features, and e-Table 3 in Multimedia Appendix 1 provides detailed information.

Table 1. Summary description of included studies.

| Study characteristic | No intervention comparison: interventions, n (%) (N=20) | No intervention comparison: participants, n (N=2238) | Nonblended learning comparison: interventions, n (%) (N=56) | Nonblended learning comparison: participants, n (N=6110) |
| --- | --- | --- | --- | --- |
| Study design | | | | |
| Pre-posttest 1-group | 17 (85.0) | 1656 | 1 (1.8) | 97 |
| Posttest 2-group | 2 (10.0) | 130 | 27 (48.2) | 3468 |
| Pre-posttest 2-group | 1 (5.0) | 452 | 28 (50.0) | 2545 |
| RCT/NRS | | | | |
| RCT | 2 (10.0) | 130 | 31 (55.4) | 2919 |
| NRS | 18 (90.0) | 2108 | 25 (44.6) | 3191 |
| Country | | | | |
| Developed | 14 (70.0) | 1673 | 44 (78.6) | 4489 |
| Developing | 6 (30.0) | 565 | 12 (21.4) | 1621 |
| Participant | | | | |
| Medical students | 9 (45.0) | 887 | 37 (66.1) | 4593 |
| Nursing students | 1 (5.0) | 69 | 9 (16.1) | 870 |
| Nurses | 2 (10.0) | 103 | 5 (8.9) | 259 |
| Physicians | 6 (30.0) | 137 | 2 (3.6) | 256 |
| Public health workers | 1 (5.0) | 817 | 1 (1.8) | 66 |
| Others | 1 (5.0) | 225 | 1 (1.8) | 66 |
| Intervention duration | | | | |
| <1 semester | 17 (85.0) | 2038 | 43 (76.8) | 4578 |
| ≥1 semester | 3 (15.0) | 200 | 13 (23.2) | 1532 |
| Exercises | | | | |
| Present | 15 (75.0) | 1273 | 41 (73.2) | 4526 |
| Absent | 5 (25.0) | 965 | 15 (26.8) | 1584 |
| Interactivity | | | | |
| High | 15 (75.0) | 1559 | 35 (62.5) | 4460 |
| Low | 5 (25.0) | 679 | 21 (37.5) | 1650 |
| Peer discussion | | | | |
| Present | 10 (50.0) | 1456 | 28 (50.0) | 3369 |
| Absent | 10 (50.0) | 782 | 28 (50.0) | 2741 |
| Outcome assessment | | | | |
| Objective | 16 (80.0) | 1833 | 53 (93.6) | 5832 |
| Subjective | 4 (20.0) | 405 | 3 (6.4) | 278 |
| Comparison intervention | | | | |
| E-learning | NA | NA | 5 (8.9) | 205 |
| Traditional learning | NA | NA | 51 (91.1) | 5905 |
| Conflict of interest | | | | |
| Yes | 0 | 0 | 2 (3.6) | 612 |
| No | 20 (100.0) | 2238 | 54 (96.4) | 5498 |
| Quality score | | | | |
| ≥4 | 5 (25.0) | 730 | 47 (83.9) | 4965 |
| <4 | 15 (75.0) | 1508 | 9 (16.1) | 1145 |

Study Quality

All of the intervention groups in the included studies were representative of average learners. Ten percent (2/20) of no-intervention controlled studies and 98% (55/56) of nonblended learning controlled studies selected the control group from the same community as the experimental group. Nearly a third (30%, 6/20) of the no-intervention controlled studies and 46% (26/56) of nonblended learning controlled studies reported blinded outcome assessment. All of the no-intervention controlled studies (100%) and 96% (54/56) of nonblended learning controlled studies reported completeness of follow-up. The mean (SD) quality score was 3.40 (0.82) for no-intervention controlled studies, and 4.45 (0.78) for nonblended learning controlled studies. The results of the quality assessment are shown in e-Table 4 in Multimedia Appendix 1.

Quantitative Data Synthesis

Comparisons With No Intervention

As effect sizes larger than 0.8 are considered large [102], the pooled effect size (SMD 1.40; 95% CI 1.04-1.77; Z=7.52, P<.001) suggests a significantly large effect. However, significant heterogeneity was observed among studies (P<.001, I²=94.8%, 95% CI 93.1-96.0), and individual effect sizes ranged from -0.12 to 4.24. Figure 2 shows detailed results of the meta-analysis. The funnel plot test (Figure 3) indicated no significant publication bias among studies (Begg’s test P=.587). Based on risk of bias and large effect, we graded the quality of evidence as moderate. E-Table 5 in Multimedia Appendix 1 provides the GRADE evidence profile. E-Table 6 in Multimedia Appendix 1 contains the mean, standard deviation, and number of participants for both blended learning and no intervention/nonblended learning.

Figure 2. Forest plot of blended learning versus no intervention.

Figure 3. Funnel plot of blended learning versus no intervention.
Meta-Regression and Subgroup Analysis

We fitted a multiple meta-regression model including each possible source of heterogeneity (residual I²=85.33%; adjusted R²=48.89%) and found that outcome assessment (P=.03) was a potential source of heterogeneity (Table 2). Studies with objective outcome assessments had larger pooled effect sizes. Furthermore, subgroup analyses were performed to evaluate the sources of heterogeneity. A statistically significant interaction favoring pre-posttest two-group and pre-posttest one-group designs was found (P for interaction<.001), consistent with the meta-regression result. Statistical differences existed between the groups of participants (P for interaction<.001). Nonrandomized studies had larger effects than randomized ones (P for interaction=.01). The effect size was significantly larger for blended learning with objective assessment than with subjective assessment (P for interaction=.005). However, we did not find support for the hypothesized subgroup interaction across levels of exercises (P for interaction=.92).

Sensitivity Analyses

Exclusion of any single study did not change the overall result, which ranged from 1.24 (95% CI 0.91-1.57) to 1.48 (95% CI 1.14-1.83).
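A leave-one-out analysis of this kind could be run as follows, again assuming the metafor model res sketched in the Methods example:

```r
# Refit the random-effects model omitting each study in turn and inspect
# the range of the pooled estimates (reported above as 1.24 to 1.48).
l1o <- leave1out(res)
range(l1o$estimate)
```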

Comparisons With Nonblended Learning

The pooled effect size (SMD 0.81; 95% CI 0.57-1.05; Z=6.59, P<.001) indicated a significantly large effect, and significant heterogeneity was observed among studies (P<.001, I²=94.6%, 95% CI 93.7-95.5). Figure 4 shows detailed results of the main analysis. The funnel plot asymmetry test (Figure 5) indicated publication bias among studies (Begg’s test P=.01), possibly toward larger studies with generally large magnitudes of effect. The trim and fill method indicated that the effect size changed to 0.26 (95% CI -0.01 to 0.54) after adjusting for publication bias, which suggests that blended learning is at least as effective as nonblended learning. Based on risk of bias, publication bias, and large effect, we graded the quality of evidence as low. E-Table 5 in Multimedia Appendix 1 provides the GRADE evidence profile.
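Under the same assumptions, the trim and fill adjustment could be sketched as follows; the adjusted estimate corresponds to the 0.26 (95% CI -0.01 to 0.54) reported above:

```r
# Trim and fill: impute the studies presumed missing due to funnel plot
# asymmetry and re-estimate the pooled effect.
tf <- trimfill(res)
summary(tf)   # adjusted pooled SMD and CI
funnel(tf)    # funnel plot showing filled-in (imputed) studies
```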

Table 2. Subgroup analysis of blended learning versus no intervention.

| Subgroup | Interventions, n | Pooled effect size (95% CI) | Heterogeneity I² (95% CI), P value | P for interaction^a | Meta-regression coefficient | Meta-regression P value |
| --- | --- | --- | --- | --- | --- | --- |
| All interventions | 20 | 1.40 (1.04-1.77) | 94.8% (93.1-96.0), P<.001 | | | |
| Study design | | | | <.001 | .27 | .81 |
| Posttest 2-group | 2 | 0.59 (0.00-1.18) | 57.0%, P=.13 | | | |
| Pre-posttest 1-group | 17 | 1.47 (1.05-1.88) | 95.0% (93.3-96.3), P<.001 | | | |
| Pre-posttest 2-group | 1 | 1.87 (1.62-2.13) | 0 | | | |
| Country | | | | .23 | -.22 | .90 |
| Developed | 14 | 1.29 (0.83-1.75) | 96.0% (94.6-97.1), P<.001 | | | |
| Developing | 6 | 1.71 (1.20-2.22) | 76.5% (47.4-89.5), P=.001 | | | |
| Participant | | | | <.001 | .05 | .82 |
| Medical students | 9 | 1.13 (0.32-1.94) | 96.8% (95.4-97.8), P<.001 | | | |
| Nursing students | 1 | 2.14 (1.72-2.56) | 0 | | | |
| Nurses | 2 | 1.05 (0.79-1.91) | 0.0%, P=.56 | | | |
| Physicians | 6 | 1.84 (1.14-2.54) | 81.2% (59.7-91.2), P<.001 | | | |
| Public health workers | 1 | 1.72 (1.60-1.83) | 0 | | | |
| Others | 1 | 1.37 (1.17-1.58) | 0 | | | |
| Intervention duration | | | | .97 | -.33 | .69 |
| <1 semester | 17 | 1.39 (1.10-1.18) | 89.2% (84.2-92.6), P<.001 | | | |
| ≥1 semester | 3 | 1.43 (-0.82 to 3.68) | 98.9% (98.1-99.3), P<.001 | | | |
| Randomization | | | | .01 | .67 | .45 |
| Randomized | 2 | 0.59 (0.001-1.64) | 57.0%, P=.13 | | | |
| Nonrandomized | 18 | 1.49 (1.11-1.87) | 94.9% (93.2-96.2), P<.001 | | | |
| Quality score | | | | .63 | -1.05 | .29 |
| ≥4 | 5 | 1.89 (1.13-2.66) | 96.2% (93.4-97.8), P<.001 | | | |
| <4 | 15 | 1.23 (0.77-1.69) | 94.3% (92.1-95.9), P<.001 | | | |
| Exercises | | | | .92 | -.21 | .75 |
| Present | 10 | 1.28 (0.64-1.90) | 95.1% (93.2-96.4), P<.001 | | | |
| Absent | 10 | 1.53 (1.08-1.99) | 89.5% (88.7-96.7), P<.001 | | | |
| Interactivity | | | | .20 | -1.25 | .41 |
| High | 15 | 1.54 (1.07-2.00) | 95.6% (94.0-96.7), P<.001 | | | |
| Low | 5 | 1.05 (0.44-1.65) | 90.9% (81.7-95.5), P<.001 | | | |
| Peer discussion | | | | .11 | -.07 | .97 |
| Present | 10 | 1.25 (0.70-1.79) | 96.2% (94.2-97.2), P<.001 | | | |
| Absent | 10 | 1.87 (1.21-2.53) | 93.1% (88.6-95.3), P<.001 | | | |
| Outcome assessment | | | | .005 | -2.02 | .03 |
| Objective | 16 | 1.66 (1.29-2.04) | 91.9% (88.4-94.3), P<.001 | | | |
| Subjective | 4 | 0.46 (-0.30 to 1.22) | 95.8% (92.1-97.8), P<.001 | | | |
| Funding from company | | | | .61 | -.93 | .37 |
| Yes | 2 | 2.29 (-1.53 to 6.11) | 99.2%, P<.001 | | | |
| No | 18 | 1.30 (0.97-1.62) | 92.7% (88.9-94.7), P<.001 | | | |

^a P for interaction is the P value for heterogeneity between subgroups.

Figure 4. Forest plot of blended learning versus nonblended learning.
Meta-Regression and Subgroup Analysis

A multiple meta-regression model including each possible source of heterogeneity was fitted (residual I²=94.59%; adjusted R²=-26.38%), and no significant source of heterogeneity was found (Table 3). Furthermore, subgroup analyses were performed to evaluate the sources of heterogeneity. Both pre-posttest two-group and pre-posttest one-group studies showed larger effects than posttest-only studies (P for interaction<.001). The presence of exercises was associated with a numerically larger SMD, although the interaction was not statistically significant (P for interaction=.49). Studies with objective assessments yielded a larger effect than studies with subjective assessments (P for interaction=.01). Studies without conflicts of interest yielded a larger effect than those with conflicts of interest (P for interaction<.001). However, high interactivity and the presence of peer discussion did not yield larger effect sizes (P for interaction>.85).

Figure 5. Funnel plot of blended learning versus nonblended learning.
Sensitivity Analyses

Exclusion of any single study did not change the overall result, which ranged from 0.70 (95% CI 0.48-0.92) to 0.86 (95% CI 0.63-1.10).


Discussion

Principal Findings

This meta-analysis shows that blended learning has a large, consistent positive effect (SMD 1.40, 95% CI 1.04-1.77) on knowledge acquisition compared with no intervention, suggesting that blended learning is effective and educationally beneficial in the health professions. Moreover, we found that blended learning had a large effect (SMD 0.81, 95% CI 0.57-1.05) in comparison with nonblended learning, which means that blended learning may be more effective than nonblended learning, including both traditional face-to-face learning and pure e-learning. Possible explanations are as follows: (1) compared with traditional learning, blended learning allows students to review electronic materials as often as necessary and at their own pace, which likely enhances learning performance [8,16], and (2) compared with e-learning, learners in blended courses are less likely to experience feelings of isolation or reduced interest in the subject matter [8,11,103]. However, publication bias was found in the nonblended learning comparison, and with the trim and fill method the pooled effect size changed to 0.26 (95% CI -0.01 to 0.54), which means blended learning is at least as effective as nonblended learning. To the best of our knowledge, this may be the first meta-analysis to evaluate the effectiveness of blended learning for knowledge acquisition across all professions directly related to human and animal health.

However, large heterogeneity was found across studies in both the no-intervention and nonblended comparisons, and the subgroup comparisons only partially explained these differences. The heterogeneity may be due to variations in study design, outcome assessment, exercises, conflict of interest, randomization, and type of participants. We found that effect sizes were significantly higher for studies using pre-posttest designs than for posttest-only designs, suggesting that courses incorporating pretests were associated with better learning outcomes. Because pretests can inform instructors about the knowledge learners have acquired before the course, considered one of the most important factors influencing education [104], they allow instructors to set learning objectives and prepare course materials accordingly [105]. Therefore, it is advisable for educators to administer pretests to learners in order to prepare well for courses. We also found that studies with objective assessments yielded a larger effect than those with subjective assessments. In contrast, Cook et al reported no difference in knowledge scores between objective and subjective assessments [33]. This discrepancy is probably due to differences in learners’ personality traits, as people with greater confidence tend to give higher ratings on subjective assessments than people who are less confident [106]. Thus, educators should assess learners objectively rather than rely on subjective evaluations.

Table 3. Subgroup analysis of blended learning versus nonblended learning.

| Subgroup | Interventions, n | Pooled effect size (95% CI) | Heterogeneity I² (95% CI), P value | P for interaction | Meta-regression coefficient | Meta-regression P value |
| --- | --- | --- | --- | --- | --- | --- |
| All interventions | 56 | 0.81 (0.57-1.05) | 94.6% (93.7-95.5), P<.001 | | | |
| Study design | | | | <.001 | -.001 | .99 |
| Posttest 2-group | 27 | 0.70 (0.32-1.07) | 94.0% (92.3-95.3), P<.001 | | | |
| Pre-posttest 2-group | 28 | 0.89 (0.58-1.19) | 94.5% (93.0-95.6), P<.001 | | | |
| Pre-posttest 1-group | 1 | 1.97 (1.63-2.32) | 0 | | | |
| Country | | | | .83 | .13 | .86 |
| Developed | 44 | 0.80 (0.54-1.01) | 93.2% (91.7-94.4), P<.001 | | | |
| Developing | 12 | 0.87 (0.22-1.53) | 97.2% (96.2-97.9), P<.001 | | | |
| Participant | | | | .03 | -.17 | .61 |
| Medical students | 38 | 0.88 (0.60-1.17) | 94.8% (93.6-95.7), P<.001 | | | |
| Nursing students | 9 | 0.42 (-0.32 to 1.16) | 96.0% (94.0-97.3), P<.001 | | | |
| Nurses | 5 | 0.87 (0.09-1.65) | 87.7% (73.8-94.2), P<.001 | | | |
| Physicians | 2 | 1.33 (1.05-1.60) | 0.0%, P=.996 | | | |
| Public health workers | 1 | 0.57 (0.08-1.07) | 0 | | | |
| Others | 1 | 0.66 (0.16-1.15) | 0 | | | |
| Intervention duration | | | | .17 | -.29 | .68 |
| <1 semester | 43 | 0.73 (0.45-1.00) | 94.5% (93.3-95.5), P<.001 | | | |
| ≥1 semester | 13 | 1.10 (0.63-1.59) | 93.9% (91.3-95.8), P<.001 | | | |
| Randomization | | | | .63 | .29 | .69 |
| Randomized | 31 | 0.75 (0.38-1.12) | 95.1% (94.0-96.1), P<.001 | | | |
| Nonrandomized | 25 | 0.87 (0.56-1.05) | 94.1% (92.3-95.4), P<.001 | | | |
| Quality score | | | | .99 | -.27 | .78 |
| ≥4 | 47 | 0.82 (0.55-1.09) | 94.9% (93.9-95.8), P<.001 | | | |
| <4 | 9 | 0.83 (0.39-1.26) | 90.4% (84.1-94.2), P<.001 | | | |
| Exercises | | | | .49 | -.51 | .51 |
| Present | 41 | 0.93 (0.63-1.25) | 95.7% (94.9-96.4), P<.001 | | | |
| Absent | 15 | 0.53 (0.26-0.80) | 82.5% (72.2-88.9), P=.011 | | | |
| Interactivity | | | | .85 | .48 | .60 |
| High | 37 | 0.84 (0.55-1.13) | 95.2% (94.2-96.1), P<.001 | | | |
| Low | 19 | 0.78 (0.35-1.23) | 93.4% (91.2-95.1), P<.001 | | | |
| Peer discussion | | | | .93 | -.43 | .96 |
| Present | 28 | 0.82 (0.46-1.18) | 95.9% (94.9-96.7), P<.001 | | | |
| Absent | 28 | 0.80 (0.48-1.12) | 92.7% (90.6-94.4), P<.001 | | | |
| Outcome assessment | | | | .01 | -.91 | .47 |
| Objective | 53 | 0.85 (0.61-1.10) | 94.8% (93.8-95.6), P<.001 | | | |
| Subjective | 3 | 0.07 (-0.46 to 0.60) | 68.6% (0-90.9), P=.04 | | | |
| Comparison intervention | | | | .17 | .69 | .52 |
| E-learning | 5 | 0.40 (-0.21 to 1.01) | 77.5% (34.8-87.8), P=.23 | | | |
| Traditional learning | 51 | 0.85 (0.60-1.11) | 95.0% (94.1-95.8), P<.001 | | | |
| Conflict of interest | | | | <.001 | 1.17 | .44 |
| Yes | 2 | -0.06 (-0.21 to 0.10) | 0.0% | | | |
| No | 54 | 0.85 (0.60-1.10) | 94.5% (93.5-95.4), P<.001 | | | |

Additionally, the effect size was larger for blended courses with exercises than for those without, consistent with a previous study by Cook et al in 2006, which found higher test scores in continuity clinics when a question format was used compared with a standard format [37]. Thus, it is advisable for educators to include exercises in their teaching, such as cases and self-assessment questions. However, we failed to confirm our hypothesis that the presence of peer discussion and high interactivity would yield larger effect sizes. Although we found statistical differences between the RCTs and NRSs in the no-intervention comparison, this could be due to chance, as only two RCTs (130 participants) were included. Differences between studies with and without conflicts of interest in the nonblended comparison could also be due to chance, as only two studies with conflicts of interest (612 participants) were included. The remaining heterogeneity may arise from other characteristics, such as individual learning styles, study intervention, assessment instrument, and ongoing access to learning materials [33,107,108], for which detailed information was not available in the included studies. As Wong et al noted in their review, different modes of course delivery suit different learners in different environments [109].

Our samples consisted of various health professional learners (nurses, medical students, nursing students, physicians, public health workers, and other health professionals) across a wide variety of health care disciplines, such as medicine, nursing, ethics, health policy, pharmacy, radiology, genetics, histology, and emergency preparedness. Moreover, we found medium or large pooled effect sizes in almost all subgroup analyses exploring variations in study design, participant type, randomization, quality scores, exercises, interactivity, and peer discussion. Thus, our results suggest that health care educators should use blended learning as a teaching component in various disciplines and course settings.

Strengths and Limitations

Our meta-analysis has several strengths. Evaluating the effectiveness of blended learning for the health professions is timely and important for both medical educators and learners. We intentionally kept our scope broad and included all studies with learners from the health professions. Our systematic literature search encompassed multiple research databases through September 2014 and had few exclusion criteria, and we conducted all aspects of the review process in duplicate.

However, there are limitations to consider. First, although we searched gray literature in two databases (CENTRAL and ERIC), gray literature indexed by other databases may have been missed, which could explain the observed publication bias. Second, the quality of a meta-analysis depends on the quality of data from the included studies. Because the standard deviation was unavailable for eight interventions due to poor reporting, we used the average standard deviation of the other included studies and imputed effect sizes, with concomitant potential for error. Third, despite conducting the review and extraction independently and in duplicate, the process was subjective and relied on the descriptions in the included articles rather than direct evaluation of the interventions. Fourth, although the modified Newcastle-Ottawa Scale is a useful and reliable tool for appraising the methodological quality of medical education research and offers flexibility across study designs, it carries a risk of reviewer error or bias owing to a degree of rater subjectivity. Fifth, the results of the subgroup analyses should be interpreted with caution because of the absence of a priori hypotheses in some cases, such as study design, country socioeconomic status, and outcome assessment. Moreover, although the subgroup analyses showed that variability in participant type, country socioeconomic status, intervention duration, interactivity, peer discussion, and study design (RCT or NRS) did not alter the overall results, the large clinical heterogeneity and inconsistent magnitude of effects across studies make it difficult to generalize the conclusions. In addition, because unassessed variability in study interventions, assessment instruments, circumstances, and so on could be a source of heterogeneity, the results of both meta-analyses should be treated with caution. Furthermore, publication bias was found in the meta-analysis with the nonblended comparison; although we used the trim and fill method for adjustment, the results should be treated with caution.

Implications

Our study has implications for both research on blended learning and education in the health professions. Although the conclusions are weakened by heterogeneity across studies, the results of our quantitative synthesis demonstrate that blended learning may have a positive effect on knowledge acquisition across a wide range of learners and disciplines directly related to the health professions. In summary, blended learning appears promising and worthwhile for further application in the health professions. The differences in effects across subgroup analyses indicate that different methods of conducting blended courses may differ in effectiveness. Researchers and educators should therefore pay attention to how to implement a blended course effectively, and studies directly comparing different blended instructional methods are of critical importance.

Studies comparing blended learning with no intervention suggested that blended learning in the health professions is consistently effective. However, although observational studies yielded a large effect size, the quality of their evidence was lower owing to inherent limitations of the study design. Additionally, owing to the small number of RCTs, the meta-analysis did not reach the optimal information size (imprecision), and the quality of evidence was therefore ranked lower. Thus, despite the consistency of effect and the absence of significant reporting bias, the evidence for the no-intervention comparison was of moderate quality, meaning that further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate; there is thus still great value in further research comparing blended learning with no intervention, and RCTs with large samples may modify the estimates. For the nonblended comparison, pooled estimates showed that blended learning is more effective than or at least as effective as pure e-learning and pure traditional learning. However, because of publication bias toward larger studies with generally large magnitudes of effect, the evidence was of low quality, meaning that further research is very likely to change our estimate. Furthermore, only four studies using e-learning as the comparison were included. Therefore, the effect of blended learning, especially in comparison with e-learning, should be evaluated in future research, and studies with small magnitudes of effect should also merit publication.

Conclusions

Blended learning appears to have a consistent positive effect in comparison with no intervention and appears to be more effective than, or at least as effective as, nonblended instruction for knowledge acquisition in the health professions. Moreover, pre-posttest study design, presence of exercises, and objective outcome assessment in blended courses were associated with larger effects on health care learners’ knowledge acquisition. Owing to the large heterogeneity, however, these conclusions should be treated with caution.

Acknowledgments

The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement number 281930, ARCADE RSDH. Our research was partly supported by the project “Strengthening Primary Healthcare Workers’ Competence by Using an Internet-based Interactive Platform in Rural China” funded by the Ministry of Science and Technology, China.

Authors' Contributions

WRY conceptualized and designed the study. QL and FZ performed the review, extraction, and data analysis. QL prepared the first draft of the paper. WRY, WJP, RH, YXL, and FZ contributed to the revision of the manuscript. All authors have read and approved the final manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

E-tables 1-7.

PDF File (Adobe PDF File), 228KB

  1. Bediang G, Stoll B, Geissbuhler A, Klohn AM, Stuckelberger A, Nko'o S, et al. Computer literacy and E-learning perception in Cameroon: the case of Yaounde Faculty of Medicine and Biomedical Sciences. BMC Med Educ 2013;13:57 [FREE Full text] [CrossRef] [Medline]
  2. Sun P, Tsai RJ, Finger G, Chen Y, Yeh D. What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education 2008 May;50(4):1183-1202. [CrossRef]
  3. Choules AP. The use of elearning in medical education: a review of the current situation. Postgrad Med J 2007 Apr;83(978):212-216 [FREE Full text] [CrossRef] [Medline]
  4. Berners-Lee T, Cailliau R, Luotonen A, Nielsen HF, Secret A. The World-Wide Web. Communications of the ACM 1994 Aug;37(8):76-82 [FREE Full text]
  5. Cook DA. Web-based learning: pros, cons and controversies. Clinical Medicine 2007 Jan 01;7(1):37-42. [CrossRef]
  6. Yu S, Yang K. Attitudes toward Web-based distance learning among public health nurses in Taiwan: a questionnaire survey. Int J Nurs Stud 2006 Aug;43(6):767-774. [CrossRef] [Medline]
  7. Peng Y, Wu X, Atkins S, Zwarentein M, Zhu M, Zhan XX, et al. Internet-based health education in China: a content analysis of websites. BMC Med Educ 2014;14:16 [FREE Full text] [CrossRef] [Medline]
  8. Wu J, Tennyson RD, Hsia T. A study of student satisfaction in a blended e-learning system environment. Computers & Education 2010 Aug;55(1):155-164. [CrossRef]
  9. Moreira IC, Ventura SR, Ramos I, Rodrigues PP. Development and assessment of an e-learning course on breast imaging for radiographers: a stratified randomized controlled trial. J Med Internet Res 2015;17(1):e3 [FREE Full text] [CrossRef] [Medline]
  10. Wu J, Tennyson RD, Hsia T, Liao Y. Analysis of E-learning innovation and core capability using a hypercube model. Computers in Human Behavior 2008 Sep;24(5):1851-1866. [CrossRef]
  11. Hara N. Student distress in a Web-based distance education course. Information, Communication & Society 2000 Jan;3(4):557-579. [CrossRef]
  12. Kemp N, Grieve R. Face-to-face or face-to-screen? Undergraduates' opinions and test performance in classroom vs. online learning. Front Psychol 2014;5:1278 [FREE Full text] [CrossRef] [Medline]
  13. Conole G, de Laat M, Dillon T, Darby J. ‘Disruptive technologies’, ‘pedagogical innovation’: What’s new? Findings from an in-depth study of students’ use and perception of technology. Computers & Education 2008 Feb;50(2):511-524. [CrossRef]
  14. Bonk CJ, Graham CE. The handbook of blended learning: global perspectives, local designs. San Francisco, CA: Pfeiffer; 2006.
  15. Thakore H, McMahon T. Virtually there: e-learning in medical education. Clinical Teacher 2006 Dec;3(4):225-228. [CrossRef]
  16. Makhdoom N, Khoshhal KI, Algaidi S, Heissam K, Zolaly MA. ‘Blended learning’ as an effective teaching and learning strategy in clinical medicine: a comparative cross-sectional university-based study. Journal of Taibah University Medical Sciences 2013 Apr;8(1):12-17. [CrossRef]
  17. Moore M. Emerging practice and research in blended learning. In: Handbook of distance education. New York: Routledge; 2012.
  18. Norberg A, Dziuban CD, Moskal PD. A time‐based blended learning model. On the Horizon 2011 Aug 16;19(3):207-216. [CrossRef]
  19. Mars M, McLean M. Students' perceptions of a multimedia computer-aided instruction resource in histology. S Afr Med J 1996 Sep;86(9):1098-1102. [Medline]
  20. Rouse DP. The effectiveness of computer-assisted instruction in teaching nursing students about congenital heart disease. Comput Nurs 2000;18(6):282-287. [Medline]
  21. Mangione S, Nieman LZ, Greenspon LW, Margulies H. A comparison of computer-assisted instruction and small-group teaching of cardiac auscultation to medical students. Med Educ 1991 Sep;25(5):389-395. [Medline]
  22. Cho K, Shin G. Operational effectiveness of blended e-learning program for nursing research ethics. Nurs Ethics 2013 Nov 19;21(4):484-495. [CrossRef] [Medline]
  23. Riesen E, Morley M, Clendinneng D, Ogilvie S, Ann MM. Improving interprofessional competence in undergraduate students using a novel blended learning approach. J Interprof Care 2012 Jul;26(4):312-318. [CrossRef] [Medline]
  24. Wallen GR, Cusack G, Parada S, Miller-Davis C, Cartledge T, Yates J. Evaluating a hybrid web-based basic genetics course for health professionals. Nurse Educ Today 2011 Aug;31(6):638-642 [FREE Full text] [CrossRef] [Medline]
  25. Sung YH, Kwon IG, Ryu E. Blended learning on medication administration for new nurses: integration of e-learning and face-to-face instruction in the classroom. Nurse Educ Today 2008 Nov;28(8):943-952. [CrossRef] [Medline]
  26. Mukti MA, Razali D, Ramli MF, Zaman HB, Ahmad A. Hybrid learning and online collaborative enhance students performance. IEEE; 2005. Presented at: 5th IEEE International Conference on Advanced Learning Technologies; July 5-8, 2005; Taiwan. p. 481-483. [CrossRef]
  27. Seabra D, Srougi M, Baptista R, Nesrallah LJ, Ortiz V, Sigulem D. Computer aided learning versus standard lecture for undergraduate education in urology. J Urol 2004 Mar;171(3):1220-1222. [CrossRef] [Medline]
  28. Shomaker TS, Ricks DJ, Hale DC. A prospective, randomized controlled study of computer-assisted learning in parasitology. Acad Med 2002 May;77(5):446-449. [Medline]
  29. Fleetwood J, Vaught W, Feldman D, Gracely E, Kassutto Z, Novack D. MedEthEx Online: a computer-based learning program in medical ethics and communication skills. Teach Learn Med 2000;12(2):96-104. [CrossRef] [Medline]
  30. Rowe M, Frantz J, Bozalek V. The role of blended learning in the clinical education of healthcare students: a systematic review. Med Teach 2012;34(4):e216-e221. [CrossRef] [Medline]
  31. McCutcheon K, Lohan M, Traynor M, Martin D. A systematic review evaluating the impact of online or blended learning vs. face-to-face learning of clinical skills in undergraduate nurse education. J Adv Nurs 2015 Feb;71(2):255-270. [CrossRef] [Medline]
  32. Jwayyed S, Stiffler KA, Wilber ST, Southern A, Weigand J, Bare R, et al. Technology-assisted education in graduate medical education: a review of the literature. Int J Emerg Med 2011;4:51 [FREE Full text] [CrossRef] [Medline]
  33. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: a meta-analysis. JAMA 2008 Sep 10;300(10):1181-1196. [CrossRef] [Medline]
  34. Lahti M, Välimäki M. Is computer assisted learning among nurses or nursing students more effective than traditional learning? - A mini-review. Stud Health Technol Inform 2009;146:842. [Medline]
  35. Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999 Sep 1;282(9):867-874. [Medline]
  36. Mayer RE. The Cambridge handbook of multimedia learning. New York: Cambridge University Press; 2005.
  37. Cook DA, Thompson WG, Thomas KG, Thomas MR, Pankratz VS. Impact of self-assessment questions and learning styles in Web-based learning: a randomized, controlled, crossover trial. Acad Med 2006 Mar;81(3):231-238. [Medline]
  38. Cook DA, McDonald FS. E-learning: is there anything special about the "E"? Perspect Biol Med 2008;51(1):5-21. [CrossRef] [Medline]
  39. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol 2009 Oct;62(10):e1-34 [FREE Full text] [CrossRef] [Medline]
  40. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000 Apr 19;283(15):2008-2012. [Medline]
  41. Cook DA, Levinson AJ, Garside S. Method and reporting quality in health professions education research: a systematic review. Med Educ 2011 Mar;45(3):227-238. [CrossRef] [Medline]
  42. Cook DA, Reed DA. Appraising the quality of medical education research methods: the Medical Education Research Study Quality Instrument and the Newcastle-Ottawa Scale-Education. Acad Med 2015 Aug;90(8):1067-1076. [CrossRef] [Medline]
  43. Wells G, Shea B, Petersen J, Welch V, Losos M. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomized studies in meta-analyses. Department of Epidemiology and Community Medicine, University of Ottawa, Canada; 2015. URL: http://www.medicine.mcgill.ca/rtamblyn/Readings/The%20Newcastle%20-%20Scale%20for%20assessing%20the%20quality%20of%20nonrandomised%20studies%20in%20meta-analyses.pdf [accessed 2015-05-30] [WebCite Cache]
  44. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol 2011 Apr;64(4):383-394. [CrossRef] [Medline]
  45. Guyatt GH, Oxman AD, Kunz R, Atkins D, Brozek J, Vist G, et al. GRADE guidelines: 2. Framing the question and deciding on important outcomes. J Clin Epidemiol 2011 Apr;64(4):395-400. [CrossRef] [Medline]
  46. Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol 2011 Apr;64(4):401-406. [CrossRef] [Medline]
  47. Guyatt GH, Oxman AD, Vist G, Kunz R, Brozek J, Alonso-Coello P, et al. GRADE guidelines: 4. Rating the quality of evidence--study limitations (risk of bias). J Clin Epidemiol 2011 Apr;64(4):407-415. [CrossRef] [Medline]
  48. Guyatt GH, Oxman AD, Montori V, Vist G, Kunz R, Brozek J, et al. GRADE guidelines: 5. Rating the quality of evidence--publication bias. J Clin Epidemiol 2011 Dec;64(12):1277-1282. [CrossRef] [Medline]
  49. Guyatt GH, Oxman AD, Kunz R, Brozek J, Alonso-Coello P, Rind D, et al. GRADE guidelines 6. Rating the quality of evidence--imprecision. J Clin Epidemiol 2011 Dec;64(12):1283-1293. [CrossRef] [Medline]
  50. Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, et al. GRADE guidelines: 7. Rating the quality of evidence--inconsistency. J Clin Epidemiol 2011 Dec;64(12):1294-1302. [CrossRef] [Medline]
  51. Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, et al. GRADE guidelines: 8. Rating the quality of evidence--indirectness. J Clin Epidemiol 2011 Dec;64(12):1303-1310. [CrossRef] [Medline]
  52. Guyatt GH, Oxman AD, Sultan S, Glasziou P, Akl EA, Alonso-Coello P, et al. GRADE guidelines: 9. Rating up the quality of evidence. J Clin Epidemiol 2011 Dec;64(12):1311-1316. [CrossRef] [Medline]
  53. Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ 2004 Jun 19;328(7454):1490 [FREE Full text] [CrossRef] [Medline]
  54. Dunlap WP, Cortina JM, Vaslow JB, Burke MJ. Meta-analysis of experiments with matched groups or repeated measures designs. Psychological Methods 1996;1(2):170-177. [CrossRef]
  55. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ 2003 Sep 6;327(7414):557-560 [FREE Full text] [CrossRef] [Medline]
  56. Flys T, González R, Sued O, Suarez CJ, Kestler E, Sosa N, et al. A novel educational strategy targeting health care workers in underserved communities in Central America to integrate HIV into primary medical care. PLoS One 2012;7(10):e46426 [FREE Full text] [CrossRef] [Medline]
  57. Puri R, Bell C, Evers WD. Dietetics students' ability to choose appropriate communication and counseling methods is improved by teaching behavior-change strategies in computer-assisted instruction. J Am Diet Assoc 2010 Jun;110(6):892-897. [CrossRef] [Medline]
  58. Buchowski MS, Plaisted C, Fort J, Zeisel SH. Computer-assisted teaching of nutritional anemias and diabetes to first-year medical students. Am J Clin Nutr 2002 Jan;75(1):154-161 [FREE Full text] [Medline]
  59. Weaver SB, Oji V, Ettienne E, Stolpe S, Maneno M. Hybrid e-learning approach to health policy. Currents in Pharmacy Teaching and Learning 2014 Mar;6(2):313-322. [CrossRef]
  60. Pereira J, Palacios M, Collin T, Wedel R, Galloway L, Murray A, et al. The impact of a hybrid online and classroom-based course on palliative care competencies of family medicine residents. Palliat Med 2008 Dec;22(8):929-937. [CrossRef] [Medline]
  61. Karamizadeh Z, Zarifsanayei N, Faghihi AA, Mohammadi H, Habibi M. The study of effectiveness of blended learning approach for medical training courses. Iran Red Crescent Med J 2012 Jan;14(1):41-44 [FREE Full text] [Medline]
  62. Chandler T, Qureshi K, Gebbie KM, Morse SS. Teaching emergency preparedness to public health workers: use of blended learning in web-based training. Public Health Rep 2008;123(5):676-680 [FREE Full text] [Medline]
  63. Karaksha A, Grant G, Davey A, Anoopkumar-Dukie S. Development and evaluation of computer-assisted learning (CAL) teaching tools compared to the conventional didactic lecture in pharmacology education. In: Proceedings of EDULEARN11 Conference. 2011 Presented at: EDULEARN11 Conference; July 4-6, 2011; Spain p. 3580-3589.
  64. Cragun DL, Couch SC, Prows CA, Warren NS, Christianson CA. Success of a genetics educational intervention for nursing and dietetic students: a model for incorporating genetics into nursing and allied health curricula. J Allied Health 2005;34(2):90-96. [Medline]
  65. Baumlin KM, Bessette MJ, Lewis C, Richardson LD. EMCyberSchool: an evaluation of computer-assisted instruction on the Internet. Acad Emerg Med 2000 Aug;7(8):959-962 [FREE Full text] [Medline]
  66. Mahnken AH, Baumann M, Meister M, Schmitt V, Fischer MR. Blended learning in radiology: is self-determined learning really more effective? Eur J Radiol 2011 Jun;78(3):384-387. [CrossRef] [Medline]
  67. Raupach T, Münscher C, Pukrop T, Anders S, Harendza S. Significant increase in factual knowledge with web-assisted problem-based learning as part of an undergraduate cardio-respiratory curriculum. Adv Health Sci Educ Theory Pract 2010 Aug;15(3):349-356 [FREE Full text] [CrossRef] [Medline]
  68. Devitt P, Smith JR, Palmer E. Improved student learning in ophthalmology with computer-aided instruction. Eye (Lond) 2001 Oct;15(Pt 5):635-639. [CrossRef] [Medline]
  69. Hsu L, Hsieh S. Effects of a blended learning module on self-reported learning performances in baccalaureate nursing students. J Adv Nurs 2011 Nov;67(11):2435-2444. [CrossRef] [Medline]
  70. Kaveevivitchai C, Chuengkriankrai B, Luecha Y, Thanooruk R, Panijpan B, Ruenwongsa P. Enhancing nursing students' skills in vital signs assessment by using multimedia computer-assisted learning with integrated content of anatomy and physiology. Nurse Educ Today 2009 Jan;29(1):65-72. [CrossRef] [Medline]
  71. Howerton WB, Enrique PRT, Ludlow JB, Tyndall DA. Interactive computer-assisted instruction vs. lecture format in dental education. J Dent Hyg 2004;78(4):10. [Medline]
  72. Strickland S. The effectiveness of blended learning environments for the delivery of respiratory care education. J Allied Health 2009;38(1):E11-E16. [Medline]
  73. Llambí L, Esteves E, Martinez E, Forster T, García S, Miranda N, et al. Teaching tobacco cessation skills to Uruguayan physicians using information and communication technologies. J Contin Educ Health Prof 2011;31(1):43-48. [CrossRef] [Medline]
  74. Perkins GD, Fullerton JN, Davis-Gomez N, Davies RP, Baldock C, Stevens H, et al. The effect of pre-course e-learning prior to advanced life support training: a randomised controlled trial. Resuscitation 2010 Jul;81(7):877-881. [CrossRef] [Medline]
  75. Lancaster JW, McQueeney ML, Van Amburgh JA. Online lecture delivery paired with in class problem-based learning … Does it enhance student learning? Curr Pharm Teach Learn 2011 Jan;3(1):23-29. [CrossRef]
  76. Sowan AK, Jenkins LS. Use of the seven principles of effective teaching to design and deliver an interactive hybrid nursing research course. Nurs Educ Perspect 2013;34(5):315-322. [Medline]
  77. Lancaster JW, Wong A, Roberts SJ. 'Tech' versus 'talk': a comparison study of two different lecture styles within a Master of Science nurse practitioner course. Nurse Educ Today 2012 Jul;32(5):e14-e18. [CrossRef] [Medline]
  78. Dankbaar MW, Storm DJ, Teeuwen IC, Schuit SE. A blended design in acute care training: similar learning results, less training costs compared with a traditional format. Perspect Med Educ 2014 Sep;3(4):289-299 [FREE Full text] [CrossRef] [Medline]
  79. Stewart A, Inglis G, Jardine L, Koorts P, Davies MW. A randomised controlled trial of blended learning to improve the newborn examination skills of medical students. Arch Dis Child Fetal Neonatal Ed 2013 Mar;98(2):F141-F144. [CrossRef] [Medline]
  80. Lowe CI, Wright JL, Bearn DR. Computer-aided Learning (CAL): an effective way to teach the Index of Orthodontic Treatment Need (IOTN)? J Orthod 2001 Dec;28(4):307-311. [CrossRef] [Medline]
  81. Arroyo-Morales M, Cantarero-Villanueva I, Fernández-Lao C, Guirao-Piñeyro M, Castro-Martín E, Díaz-Rodríguez L. A blended learning approach to palpation and ultrasound imaging skills through supplementation of traditional classroom teaching with an e-learning package. Man Ther 2012 Oct;17(5):474-478. [CrossRef] [Medline]
  82. Kiviniemi MT. Effects of a blended learning approach on student outcomes in a graduate-level public health course. BMC Med Educ 2014;14:47 [FREE Full text] [CrossRef] [Medline]
  83. Kumrow DE. Evidence-based strategies of graduate students to achieve success in a hybrid Web-based course. J Nurs Educ 2007 Mar;46(3):140-145. [Medline]
  84. Gadbury-Amyot CC, Singh AH, Overman PR. Teaching with technology: learning outcomes for a combined dental and dental hygiene online hybrid oral histology course. J Dent Educ 2013 Jun;77(6):732-743 [FREE Full text] [Medline]
  85. Gagnon M, Gagnon J, Desmartis M, Njoya M. The impact of blended teaching on knowledge, satisfaction, and self-directed learning in nursing undergraduates: a randomized, controlled trial. Nurs Educ Perspect 2013;34(6):377-382. [Medline]
  86. Boynton JR, Green TG, Johnson LA, Nainar SMH, Straffon LH. The virtual child: evaluation of an internet-based pediatric behavior management simulation. J Dent Educ 2007 Sep;71(9):1187-1193 [FREE Full text] [Medline]
  87. Kulier R, Gülmezoglu AM, Zamora J, Plana MN, Carroli G, Cecatti JG, et al. Effectiveness of a clinically integrated e-learning course in evidence-based medicine for reproductive health training: a randomized trial. JAMA 2012 Dec 5;308(21):2218-2225. [CrossRef] [Medline]
  88. Kavadella A, Tsiklakis K, Vougiouklakis G, Lionarakis A. Evaluation of a blended learning course for teaching oral radiology to undergraduate dental students. Eur J Dent Educ 2012 Feb;16(1):e88-e95. [CrossRef] [Medline]
  89. Woltering V, Herrler A, Spitzer K, Spreckelsen C. Blended learning positively affects students' satisfaction and the role of the tutor in the problem-based learning process: results of a mixed-method evaluation. Adv Health Sci Educ Theory Pract 2009 Dec;14(5):725-738. [CrossRef] [Medline]
  90. Ilic D, Hart W, Fiddes P, Misso M, Villanueva E. Adopting a blended learning approach to teaching evidence based medicine: a mixed methods study. BMC Med Educ 2013;13:169 [FREE Full text] [CrossRef] [Medline]
  91. Carbonaro M, King S, Taylor E, Satzinger F, Snart F, Drummond J. Integration of e-learning technologies in an interprofessional health science course. Med Teach 2008 Feb;30(1):25-33. [CrossRef] [Medline]
  92. Pereira JA, Pleguezuelos E, Merí A, Molina-Ros A, Molina-Tomás MC, Masdeu C. Effectiveness of using blended learning strategies for teaching and learning human anatomy. Med Educ 2007 Feb;41(2):189-195. [CrossRef] [Medline]
  93. Raupach T, Muenscher C, Anders S, Steinbach R, Pukrop T, Hege I, et al. Web-based collaborative training of clinical reasoning: a randomized trial. Med Teach 2009 Sep;31(9):e431-e437. [Medline]
  94. Sherman H, Comer L, Putnam L, Freeman H. Blended versus lecture learning: outcomes for staff development. J Nurses Staff Dev 2012 Jul;28(4):186-190. [CrossRef] [Medline]
  95. Gerdprasert S, Pruksacheva T, Panijpan B, Ruenwongsa P. Development of a web-based learning medium on mechanism of labour for nursing students. Nurse Educ Today 2010 Jul;30(5):464-469. [CrossRef] [Medline]
  96. Wahlgren C, Edelbring S, Fors U, Hindbeck H, Ståhle M. Evaluation of an interactive case simulation system in dermatology and venereology for medical students. BMC Med Educ 2006;6:40 [FREE Full text] [CrossRef] [Medline]
  97. Farrell MJ, Rose L. Use of mobile handheld computers in clinical nursing education. J Nurs Educ 2008 Jan;47(1):13-19. [Medline]
  98. Taradi SK, Taradi M, Radic K, Pokrajac N. Blending problem-based learning with Web technology positively impacts student learning outcomes in acid-base physiology. Adv Physiol Educ 2005 Mar;29(1):35-39 [FREE Full text] [CrossRef] [Medline]
  99. de Sousa EE, de Arruda MM, Ferreira MJ. Oral health promotion through an online training program for medical students. J Dent Educ 2011 May;75(5):672-678 [FREE Full text] [Medline]
  100. Hilger AE, Hamrick HJ, Denny FW. Computer instruction in learning concepts of streptococcal pharyngitis. Arch Pediatr Adolesc Med 1996 Jun;150(6):629-631. [Medline]
  101. Teeraporn C. Computer-aided learning for medical chart review instructions. Afr J Pharm Pharmacol 2012 Jul 22;6(27):2061-2067. [CrossRef]
  102. Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, NJ: L. Erlbaum Associates; 1988.
  103. Maki RH, Maki WS, Patterson M, Whittaker PD. Evaluation of a Web-based introductory psychology course: I. Learning and satisfaction in on-line versus lecture courses. Behav Res Methods Instrum Comput 2000 May;32(2):230-239. [Medline]
  104. Cruickshank DR, Jenkins DB, Metcalf KK. The act of teaching. Boston: McGraw-Hill; 2006.
  105. Hartley J, Davies IK. Preinstructional strategies: the role of pretests, behavioral objectives, overviews and advance organizers. Rev Educ Res 1976;46(2):239-265. [CrossRef]
  106. Wilson JD. Subjective assessment. In: Evaluation of human work: a practical ergonomics methodology. Washington: Taylor & Francis; 1995.
  107. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Instructional design variations in internet-based learning for health professions education: a systematic review and meta-analysis. Acad Med 2010 May;85(5):909-922. [CrossRef] [Medline]
  108. Manochehr NN. Computers in Higher Education Economics Review. 2006. The influence of learning styles on learners in e-learning environments: an empirical study URL: http://www.webcitation.org/6Yug8S5O3 [accessed 2015-05-30]
  109. Wong G, Greenhalgh T, Pawson R. Internet-based medical education: a realist review of what works, for whom and in what circumstances. BMC Med Educ 2010;10:12 [FREE Full text] [CrossRef] [Medline]


Abbreviations

GRADE: Grading of Recommendations Assessment, Development and Evaluation
PICOS: population, intervention, comparison, outcome, and study design
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
SD: standard deviation
SMD: standardized mean difference
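
As a brief note on the SMD abbreviation above: a minimal sketch of the conventional standardized mean difference computation (following Cohen [102]) is given below. The symbols are illustrative, not taken from any specific study in this review: x̄₁ and x̄₂ denote the two group mean scores, SD₁ and SD₂ their standard deviations, and n₁ and n₂ their sample sizes.

\[
\mathrm{SMD} = \frac{\bar{x}_1 - \bar{x}_2}{SD_{\mathrm{pooled}}},
\qquad
SD_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,SD_1^2 + (n_2 - 1)\,SD_2^2}{n_1 + n_2 - 2}}
\]

Dividing the raw mean difference by the pooled standard deviation places outcomes measured on different knowledge-test scales onto a common, unitless metric, which is what allows effect sizes from heterogeneous studies to be pooled.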


Edited by P Bamidis; submitted 24.06.15; peer-reviewed by S Kitsiou, T Raupach, A Sowan; comments to author 18.07.15; revised version received 28.08.15; accepted 07.10.15; published 04.01.16

Copyright

©Qian Liu, Weijun Peng, Fan Zhang, Rong Hu, Yingxue Li, Weirong Yan. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.01.2016.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.