Published in Vol 21, No 10 (2019): October

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/15118.
Effects of E-Learning in a Continuing Education Context on Nursing Care: Systematic Review of Systematic Qualitative, Quantitative, and Mixed-Studies Reviews


Review

1Faculty of Nursing, Université Laval, Quebec, QC, Canada

2University of Montreal Hospital Research Centre, Montreal, QC, Canada

3Centre de Recherche sur les Soins et les Services de Première Ligne de l'Université Laval, Quebec, QC, Canada

4Faculty of Nursing, Université de Montréal, Montreal, QC, Canada

5School of Nursing, McGill University, Montreal, QC, Canada

6Public Health Research Institute, Université de Montréal, Montreal, QC, Canada

7Department of Management, Evaluation and Health Policy, School of Public Health, Université de Montréal, Montreal, QC, Canada

8Education and Health Practices Laboratory, Paris 13 University, Sorbonne Paris Cité University, Paris, France

9Department of Education for Non-Medical Personnel, French Military Health Service Academy, École du Val-de-Grâce, Paris, France

Corresponding Author:

Marie-Pierre Gagnon, PhD

Faculty of Nursing

Université Laval

Pavillon Ferdinand-Vandry 1050 Avenue de la Médecine

Quebec, QC, G1V 0A6

Canada

Phone: 1 418 656 2131 ext 407576

Email: marie-pierre.gagnon@fsi.ulaval.ca

Abstract
Background: E-learning is rapidly growing as an alternative way of delivering education in nursing. Two contexts regarding the use of e-learning in nursing are discussed in the literature: (1) education among nursing students and (2) nurses’ continuing education within a life-long learning perspective. A systematic review of systematic reviews on e-learning for nursing and health professional students in an academic context has been published previously; however, no such review exists regarding e-learning for registered nurses in a continuing education context.

Objective: We aimed to systematically summarize the qualitative and quantitative evidence regarding the effects of e-learning on nursing care among nurses in a continuing education context.

Methods: We conducted a systematic review of systematic qualitative, quantitative, and mixed-studies reviews, searching within four bibliographic databases. The eligibility criteria were formulated using the population, interventions, comparisons, outcomes, and study design (PICOS) format. The included population was registered nurses. E-learning interventions were included and compared with face-to-face and any other e-learning interventions, as well as blended learning. The outcomes of interest were derived from two models: nursing-sensitive indicators from the Nursing Care Performance Framework (eg, teaching and collaboration) and the levels of evaluation from the Kirkpatrick model (ie, reaction, learning, behavior, and results).

Results: We identified a total of 12,906 records. We retrieved 222 full-text papers for detailed evaluation, from which 22 systematic reviews published between 2008 and 2018 met the eligibility criteria. The effects of e-learning on nursing care were grouped under Kirkpatrick’s levels of evaluation: (1) nurse reactions to e-learning, (2) nurse learning, (3) behavior, and (4) results. Level 2, nurse learning, was divided into three subthemes: knowledge; skills; and attitude and self-efficacy. Level 4, results, was divided into patient outcomes and costs. Most of the outcomes were reported in a positive way. For instance, nurses were satisfied with the use of e-learning and they improved their knowledge. The most common topics covered by the e-learning interventions were medication calculation, preparation, and administration.

Conclusions: The effects of e-learning are mainly reported in terms of nurse reactions, knowledge, and skills (ie, the first two levels of the Kirkpatrick model). The effectiveness of e-learning interventions for nurses in a continuing education context remains unknown regarding how the learning can be transferred to change practice and affect patient outcomes. Further scientific, methodological, theoretical, and practice-based breakthroughs are needed in the fast-growing field of e-learning in nursing education, especially in a life-long learning perspective.

Trial Registration: International Prospective Register of Systematic Reviews (PROSPERO) CRD42016050714; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=50714

J Med Internet Res 2019;21(10):e15118

doi:10.2196/15118



Introduction
Background

E-learning is rapidly growing as an alternative way of delivering education [1,2]. Nicoll et al [3] used the term technology-enhanced learning and stated that “it is a means by which learners can be provided with enhanced or transformed educational experiences.” Many other terms have been used synonymously and interchangeably to designate e-learning, such as computer-assisted learning, online learning, or Web-based learning [4]. For the purpose of this paper, we will use e-learning as an umbrella term covering a variety of electronic, digital, or mobile devices used to support learning [5]. Clark and Mayer [6] specify elements about the what, how, and why of e-learning. The what includes content and instructional methods. The how encompasses elements such as the format (eg, asynchronous and webinars) and the use of multimedia (eg, video, animation, and printed words). The why is about, for instance, the achievement of learning objectives and/or the performance of skills applied in a workplace context.

In the literature targeting the use of e-learning in nursing, two populations and contexts are discussed. The first one is education among nursing students (eg, in Voutilainen et al [7]) who participate in educational programs mainly offered in academic settings. For instance, undergraduate nursing students have to develop entry-level competencies to meet the practice expectations required to obtain their registered nurse (RN) licensure in order to “provide safe, competent, compassionate, and ethical nursing care in a variety of practice settings” [8]. The second context is continuing education (CE), also called continuing professional development [9] or continuing competency [8], targeting a life-long learning perspective and staff development (eg, Knapp and Byers [10]). RNs have to meet CE expectations to be eligible to renew their licensure and registration each year, with the goals of acquiring new competencies, maintaining the acquired ones, enhancing their practice, and keeping their skills relevant and up-to-date [8]. We refer here to CE programs that are applicable in workplace settings. This review focuses on the use of e-learning by nurses in a CE context [11] for two main reasons: much more attention has been given to nursing students than to RNs [5,12], and findings from the CE context will lay the groundwork for a wider research project.

A previous systematic review of systematic reviews (SRSRs) of e-learning for nursing and health professional students in an academic context has been conducted [5,12]. The findings show that e-learning is equivalent to traditional learning. However, e-learning has proven to have large effects compared to no intervention in health professions [13]. To the best of our knowledge, we found no SRSRs about e-learning used by RNs in a CE context.

Objective

We aimed to systematically summarize the qualitative and quantitative evidence that comes from systematic qualitative, quantitative, and mixed-studies reviews regarding the effects of e-learning on nursing care among nurses in a CE context.

Methods
The protocol was registered in the International Prospective Register of Systematic Reviews (PROSPERO) (number: CRD42016050714) and was published elsewhere [11].

Design

We conducted a systematic review (SR) of systematic qualitative, quantitative, and mixed-studies reviews with the intent of bringing together, summarizing, and enhancing accessibility of existing evidence [14]. We combined outcomes from various types of SRs and synthesized qualitative and quantitative evidence. This type of synthesis is useful in identifying existing e-learning interventions used by RNs in their workplace settings and in describing the range of outcomes of interest measured, documented, and informed by the Nursing Care Performance Framework (NCPF).

We used the Cochrane Collaboration methodology [15] and other relevant works in this domain [16,17] to structure and report the SRSRs.

Nursing Care Performance Framework

We planned to use the NCPF to guide our synthesis. The NCPF is based on Henriksen et al’s work [18], which depicts a conception of nursing care as a complex, whole entity; this entity is encompassed by many interrelated and interdependent subsystems and components that are logically coordinated and oriented toward the achievement of optimal outcomes for patients [19]. The NCPF is a systemic and organizational model aimed to measure the performance of nurses in the health care system through a set of indicators sensitive to various aspects of nursing care. The rationale for using the NCPF was based on our previous work [20]. We conducted an SRSRs of the effects of information and communications technologies on nursing care. We then categorized these effects based on the following nursing care subsystems pertaining to the NCPF: nursing care structure (eg, nursing staff supply and profiles, work conditions, and nursing staff stability), nursing services (eg, professional practice environment, nursing processes, and interventions), and patient outcomes (eg, patient functional status and care safety). Our first intent in this current SRSRs was to use the NCPF for guidance and as a starting point for data extraction and analysis, while remaining open to the emergence of new data (ie, outcomes) that were not part of the framework. We expected to get a comprehensive portrait of how dimensions and indicators of nursing care, as developed in the NCPF, could be influenced by the use of e-learning interventions in a nursing CE context. In other words, we identified, from the NCPF, a pre-established range of possible outcomes and indicators related to nursing care, for which data would be sought in this SRSRs [11].

Eligibility Criteria

The scope was formulated using the population, intervention, comparison, outcomes, and study design (PICOS) format [15,21]. Eligibility criteria are presented in Table 1.

Table 1. Eligibility criteria.
SRa component | Inclusion criteria | Exclusion criteria
Population | RNsb, according to the professional legislation of each country | Undergraduate nursing students in an academic context
Intervention | E-learning (ie, use of electronic, digital, or mobile devices to support learning) used in a continuing education context | Any type of simulation with a physical mannequin
Comparison | Face-to-face learning, any other e-learning intervention, or blended learning | N/Ac
Outcomes | Primary outcomes: effects of e-learning on nursing care, including (1) nursing resources (eg, working conditions, nursing staff supply, and staff maintenance) and (2) nursing services (eg, nurses’ practice environments, nursing processes and interventions, and professional satisfaction); secondary outcomes: effects of e-learning on patient outcomes (eg, patients’ empowerment, comfort, and quality of life) | N/A
Study design | Systematic qualitative, quantitative, and mixed-studies reviews published in French, English, or Spanish | Grey literature and non-SRs, such as literature reviews

aSR: systematic review.

bRN: registered nurse.

cN/A: not applicable.

Search Strategy and Selection Criteria

We searched for articles that were published between January 1, 2006, and January 26, 2017, in the following electronic databases: PubMed, Embase, Cumulative Index of Nursing and Allied Health Literature (CINAHL), and Joanna Briggs Institute (JBI). We updated the search strategy to include articles published between January 1, 2017, and November 11, 2018. The search strategy time frame was partially informed by the work of de Caro et al [5,12], who conducted an SRSRs on a similar topic. They performed search strategies on articles published between 2003 and 2013. Included SRs were published between 2008 and 2013. We extended our search strategy to include articles published from 2006 onward to capture SRs that could have been missed in previous work.

We developed a structured search strategy that was validated by a health information specialist. We used the thesaurus terms from each database and used free text to target the title and abstract fields. An example of the search strategy in PubMed has been presented elsewhere [11]. For the JBI database, we hand searched the whole database for relevant literature. We collected the results of each database search in EndNote reference manager, version X7.7.1 (Clarivate Analytics) and we removed duplicate citations. Furthermore, we hand searched for relevant SRs, contacted authors to find other relevant works in this domain, and consulted the reference lists of included SRs.
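
For illustration only, the sketch below shows how a date-limited PubMed query of this kind could be run programmatically with Biopython's Entrez utilities; the search terms and email address are hypothetical placeholders and do not reproduce our validated strategy, which is presented in the protocol [11].

    # Illustrative sketch only: a simplified, date-limited PubMed query run through
    # Biopython's Entrez utilities. The terms below are placeholders and do not
    # reproduce the validated search strategy presented in the protocol [11].
    from Bio import Entrez

    Entrez.email = "reviewer@example.org"  # hypothetical address; NCBI requires one

    query = (
        '("education, distance"[MeSH Terms] OR "e-learning"[Title/Abstract] '
        'OR "web-based learning"[Title/Abstract]) '
        'AND ("nurses"[MeSH Terms] OR "nursing staff"[Title/Abstract]) '
        'AND (systematic[sb] OR "systematic review"[Title/Abstract])'
    )

    handle = Entrez.esearch(
        db="pubmed",
        term=query,
        datetype="pdat",      # limit by publication date
        mindate="2006/01/01",
        maxdate="2018/11/11",
        retmax=5000,
    )
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"], "records retrieved; first PMIDs:", record["IdList"][:5])

Exported results from each database would then be pooled and deduplicated in a reference manager, as described above.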

Selection of Systematic Reviews

We used DistillerSR (Evidence Partners), a Web-based SR software, to perform the screening and selection of SRs as well as the data extraction. A group of three reviewers (GR, JPG, and EH) independently screened the titles and abstracts of the papers in order to assess their eligibility. Each paper was reviewed twice, by two of the three reviewers. When consensus was not reached, arbitration was done by the third reviewer, who was not involved in the selection of that specific SR. After the first round of screening, we retrieved full-text copies of publications that met the pre-established inclusion criteria and we assessed them twice.

Methodological Quality Assessment of Included Systematic Reviews

The methodological quality assessment is defined as the critical appraisal of each SR and the extent to which authors of each SR met the highest possible standards in conducting and reporting their research process. Methodological quality also refers to risk of bias (ie, deviations of findings from the truth); flaws in design, conduct, analysis, and/or reporting can be the cause of these deviations [17].

Methodological quality assessment was done independently by two reviewers (GR and JBP) using two critical appraisal tools in order to cover a wider and complementary range of criteria: Assessment of Multiple Systematic Reviews (AMSTAR) 2 [22] and Risk Of Bias In Systematic Reviews (ROBIS) [23]. AMSTAR 2 is a 16-item instrument that provides a detailed and comprehensive assessment of SRs that include randomized or nonrandomized studies of health care interventions. ROBIS contains 21 signaling questions divided into three phases:

  1. The assessment of relevance (optional).
  2. The identification of concerns with the review process, in which bias can be introduced from within four domains:
    1. Study eligibility criteria.
    2. Identification and selection of studies.
    3. Data collection and study appraisal.
    4. Synthesis and findings.
  3. Overall judgment about risk.

These tools are best suited for quantitative SRs and were not designed for systematic qualitative and mixed-studies reviews. However, because there was no consensus on how to assess methodological quality of qualitative and mixed-studies reviews at the time we began the SRSRs, we used both tools: AMSTAR 2 and ROBIS. Any disagreements that arose between the reviewers during the methodological quality assessment process were resolved through discussion.

Data Extraction

A team of three authors (GR, JPG, and EH) conducted data extraction. Each paper was extracted independently by two of the three reviewers. We extracted the following data: (1) general characteristics of the SRs (eg, purpose, type of SR, number of primary studies included, populations, and settings); (2) details about e-learning interventions, comparisons, and the use of theories (ie, in the development and evaluation processes); and (3) outcomes, including their nature (ie, qualitative and/or quantitative) and direction (ie, positive, negative, or no effect). We used the adapted version of the NCPF [19] as a guide to extract outcomes. The dual extraction of outcomes data is particularly important, since these data are directly used in synthesizing the evidence that informs the conclusions of the review [23].

As recommended by Higgins et al [24], all the steps mentioned before (ie, selection of SRs, methodological quality assessment, and data extraction) were done by two authors working independently in order to minimize the risk of making mistakes and of being influenced by a single person’s biases. In total, four authors were involved in performing these steps (GR, JPG, EH, and JBP).

Data Synthesis

The first author (GR) performed a qualitative thematic synthesis using a data-based convergent synthesis design [25,26]. We converted the quantitative data into narrative form to describe the effects of e-learning on nursing care, transforming the numerical data into specific themes and subthemes. This approach to data synthesis was chosen considering the mixed nature of the evidence (ie, qualitative and quantitative) and the exploratory lens of this SRSRs. Two reviewers (EH and JPG) were involved in validating the data synthesis.

Overlap in Systematic Reviews

As mentioned in the protocol [11], one of the challenges encountered when conducting SRSRs is the identification of overlap in SRs [27-29] (ie, “when the primary studies included within the SRs had multiple related publications that were referenced differently across SRs” [29]). Authors of SRSRs need to closely examine the content of the SRs and their included primary studies to accurately assess the extent and nature of the overlap. The ways of managing overlap in SRs depend on the purpose of the SRSRs and the method of data analysis [17]. Pollock et al [17] suggest that when the purpose of the SRSRs is to present and describe the body of knowledge that comes from SRs, it may be appropriate to include the results of all relevant SRs, regardless of the overlap across primary studies. Considering the exploratory lens of our SRSRs and our intent to draw a broad picture of the effects of e-learning interventions used by nurses in a CE context, we created a citation matrix [17] (see Multimedia Appendix 1) to visually represent the overlap between primary studies within included SRs. These overlaps do not have consequences such as overestimating the effects of e-learning interventions, which could bias recommendations to use a specific intervention over the others. The purpose of this SRSRs is not prescriptive and is not intended to inform or guide decision making, policy, or practice recommendations.

Deviation From the Protocol

As previously mentioned, we used the NCPF to extract and classify the outcomes but we did not use it, as we had planned, to synthesize data regarding the effects of e-learning on nursing care. Indeed, this framework offers a broad perspective of the nursing care system, considering the diversity of nursing-sensitive indicators that are centered on structure and resources, nursing services and processes, and patient outcomes. However, most of these indicators do not reflect the current state of knowledge deriving from the effects of e-learning reported in the literature. These effects are rather circumscribed around nurses’ level of satisfaction, knowledge, or skills acquisition, which fit more with the Kirkpatrick model [30]. This model proposes four distinct levels as a sequence of ways to evaluate the effectiveness of an educational program: (1) reactions, (2) learning, (3) behavior, and (4) results. Level 1 (ie, reactions) is about nurses’ satisfaction with e-learning interventions. Level 2 (ie, learning) refers to the extent to which nurses change or improve attitudes, knowledge, skills, and/or self-efficacy as a result of attending the e-learning interventions [30]. Level 3 (ie, behavior) is the extent to which nurses’ learning has been translated into their postlearning behavior or their clinical performance [9]. Level 4 (ie, results) can be seen as patients’ health outcomes resulting from the influence of e-learning interventions on nurses’ behavior changes, which was adapted from Légaré et al [9]; this level can also be seen as other outcomes, such as costs [30]. In Figure 1, the Kirkpatrick model is presented, supported by some concrete examples provided by Shen et al [31].

Figure 1. The Kirkpatrick model.

Other frameworks could have been selected to extract and synthesize data, including the Expanded Outcomes Framework [32] or the Jeffries simulation model [33], since the outcomes component of the latter model can be potentially applicable to e-learning. However, we chose the Kirkpatrick model because it is well documented and extensively used in many educational contexts, including e-learning in the CE context [31,34].

Once we performed the first extraction using the NCPF, the data were read several times by three authors (GR, EH, and JPG). The first author built a thematic tree based on the reading of all material through line-by-line coding (ie, the inductive part) and based on existing works [19,30] (ie, the deductive part). This SRSRs was then guided by these two models [19,30] at different points in time: the use of the NCPF [19] was preplanned, and the use of the Kirkpatrick model [30] was decided during the process of data analysis and synthesis. The presentation of findings is supported by the four levels of evaluation [30].

Finally, we did not calculate the corrected covered area [27] in order to measure the actual degree of overlap in the SRSRs. We simply illustrated the overlap in a matrix (see Multimedia Appendix 1), as explained earlier.
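
For reference, the corrected covered area can be computed directly from a citation matrix like the one in Multimedia Appendix 1; the sketch below uses a small hypothetical matrix to illustrate the calculation proposed by Pieper et al [27], which we did not perform.

    # Sketch of the corrected covered area (CCA) described by Pieper et al [27],
    # computed from a hypothetical citation matrix (rows = primary studies,
    # columns = systematic reviews; 1 = the study is included in that review).
    # This calculation was not performed in the present SRSRs.
    citation_matrix = [
        [1, 0, 1],  # primary study A is included in SR1 and SR3
        [1, 1, 0],  # primary study B is included in SR1 and SR2
        [0, 0, 1],  # primary study C is included in SR3 only
    ]

    N = sum(sum(row) for row in citation_matrix)  # inclusions counted with multiplicity
    r = len(citation_matrix)                      # number of unique primary studies
    c = len(citation_matrix[0])                   # number of systematic reviews

    cca = (N - r) / (r * c - r)  # 0 = no overlap between reviews, 1 = complete overlap
    print("CCA =", round(cca, 2))  # (5 - 3) / (9 - 3) = 0.33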

Results
Search Results and Eligibility of Systematic Reviews

The overall process of SR selection is illustrated with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram [35] (see Figure 2). We identified a total of 12,906 records. After removing duplicate references, we assessed 11,775 records for eligibility. We retrieved 222 full-text papers for detailed evaluation, from which 22 SRs published between 2008 and 2018 met the eligibility criteria. The list of included SRs is presented in Multimedia Appendix 2. In Multimedia Appendix 3, we provide the references of excluded papers as well as the reasons for exclusion.

Figure 2. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart. CE: continuing education.

Methodological Quality Assessment Results

We did not exclude papers based on methodological grounds, considering the scope of the SRSRs, which was not intended to inform action or decision making in terms of the most effective e-learning to impact nursing care. The assessment of methodological quality is presented individually for each SR (see Table 2) and globally (ie, all included SRs) using ROBIS (see Figure 3) and AMSTAR 2 (see Figure 4). Out of 22 SRs, 9 (41%) were at low risk of bias, 8 (36%) were at high risk of bias, and 5 (23%) had an unclear risk of bias. The assessment with AMSTAR 2 yielded the following results: out of 22 SRs, 6 (27%) had a high level of confidence, 4 (18%) had a moderate level of confidence, 10 (45%) had a low level of confidence, and 2 (9%) had a critically low level of confidence. The findings regarding the risk of bias and the level of confidence for the same SR were consistent across the two tools. For example, an SR [36] at high risk of bias according to the ROBIS tool was rated as having a low level of confidence using the AMSTAR 2.

Table 2. Methodological quality assessment for each individual SRa included in this study using a combination of the ROBISb tool and the AMSTARc 2.
ROBIS Phase 2 (identifying concerns with the review process) covers four domains of bias; Phase 3 is the judgment of overall risk of bias in the review.
Author, year (type of SR) | ROBIS Phase 2d: study eligibility criteria | ROBIS Phase 2: identification and selection of studies | ROBIS Phase 2: data collection and study appraisal | ROBIS Phase 2: synthesis and findings | ROBIS Phase 3: overall risk of bias | AMSTAR 2: level of confidence (final judgment)
Bloomfield, 2008 [36] (QTe) | Low | High | High | High | High | Low
Brunero, 2012 [37] (MSRf) | Low | Unclear | Unclear | High | Unclear | Low
Byrne, 2008 [38] (QT) | High | Unclear | High | High | Unclear | Low
Carroll, 2009 [39] (MSR) | High | Unclear | High | High | High | Critically low
Chipps, 2012 [40] (QT) | Low | Low | Low | Unclear | Low | Moderate
Coyne, 2018 [41] (MSR) | Low | Low | Low | Unclear | Unclear | Moderate
Du, 2013 [42] (QT) | Low | Low | Low | Low | Low | High
Feng, 2013 [43] (QT) | Low | Unclear | Low | Low | Low | High
Freire, 2013 [44] (MSR) | Unclear | Low | High | High | High | Low
Harkanen, 2016 [45] (QT) | Low | Low | Low | Low | Low | High
Hegland, 2017 [46] (QT) | Low | Low | Low | Low | Low | High
Hines, 2015 [47] (QT) | Low | Low | Unclear | Low | Low | Moderate
Kakushi, 2016 [48] (MSR) | Unclear | Unclear | High | High | High | Low
Kang, 2017 [49] (QT) | Low | Unclear | Low | Low | Low | High
Knapp, 2008 [10] (MSR) | High | High | High | High | High | Critically low
Lahti, 2014 [1] (QT) | Low | Unclear | Low | Low | Low | Moderate
Lam-Antoniades, 2009 [50] (QT) | Low | Unclear | Unclear | High | High | Low
Lawn, 2017 [51] (MSR) | Unclear | High | Low | High | Unclear | Low
Nicoll, 2018 [3] (MSR) | Low | Unclear | Unclear | High | High | Low
Philips, 2012 [52] (MSR) | Unclear | Unclear | Unclear | Unclear | Unclear | Low
Sinclair, 2016 [4] (QT) | Low | Low | Low | Low | Low | High
Tomlinson, 2013 [53] (QT) | Low | Unclear | High | High | High | Low

aSR: systematic review.

bROBIS: Risk Of Bias In Systematic Reviews.

cAMSTAR: Assessment of Multiple Systematic Reviews.

dPhase 1 is optional and consists of assessing the relevance of SRs. It was not performed nor described.

eQT: quantitative review.

fMSR: mixed-studies review.

Figure 3. Methodological quality using the Risk Of Bias In Systematic Reviews (ROBIS) tool. The total risk of bias and the four domains of bias are shown. The numbers within the bars represent the number of systematic reviews.
Figure 4. Methodological quality using the Assessment of Multiple Systematic Reviews (AMSTAR) 2. The numbers within the bars represent the number of systematic reviews.

General Characteristics of Systematic Reviews and Participants

General characteristics of included SRs are shown in Multimedia Appendix 4. The first authors of the included SRs were from various countries: Australia (n=7), United Kingdom (n=4), Brazil (n=2), South Africa (n=1), China (n=1), Taiwan (n=1), Korea (n=1), Finland (n=2), Norway (n=1), United States (n=1), and Canada (n=1). In 8 SRs out of 22 (36%), there was an overlap regarding primary studies (see Multimedia Appendix 1).

We included any SRs that contained one or many primary studies focusing on RNs using e-learning interventions in a CE context, which means that SRs with populations other than RNs (eg, nursing students and other health care providers) were included as long as information about RNs was clearly retrievable. The ratio of primary studies targeting nurses in a CE context to the total number of primary studies pertaining to an SR was very low. For example, from Brunero et al [37], we extracted only 2 out of the 25 (8%) primary studies that met all of the eligibility criteria. Only 1 SR [10] included all primary studies (n=5) that concerned the population of interest (ie, RNs), the e-learning intervention, the outcomes of interest, and the CE context. When reported, the number of RNs across the SRs varied from 15 [53] to 658 [3]. RNs had different job titles (eg, nurse specialists, practice nurses, community nurses, and school nurses) and worked in different settings (eg, intensive care units, emergency departments, coronary critical care, medical-surgical, pediatrics, mental health, palliative care, geriatric hospitals, and primary care).

E-Learning Interventions and Comparison Groups

There were a variety of e-learning interventions targeting nurses in a CE context (see Multimedia Appendix 5). Some examples are an online learning module regarding the use of brief motivational interviewing as a communication style to influence health behavior change [51] and online and interactive CD-ROM programs on medication administration skills and safety [45]. Other e-learning interventions were presented in terms of configuration, such as computer-assisted instructions [36], computer-based simulation [38], videoconferencing [53,52], and situated e-learning [43], while few had details on the instructional method, such as case-based learning [37].

Examples of comparison interventions included the following: electronic intervention, face-to-face intervention, no intervention, and blended learning. In four SRs [3,40,47,50], information about the theories or models used was reported regarding the engagement of stakeholders and the development or evaluation of e-learning interventions. These theories and models included engagement models [54,55], adult learning theory [56], the Kirkpatrick model [30], and the effects of information systems quality on nurses’ acceptance of the e-learning system [57].

Effects of E-Learning

Overview

The outcomes are presented under different formats. First, the findings are grouped per systematic review along with a description of the interventions and the comparisons (see Multimedia Appendix 5). Second, they are described under a frequency table (see Table 3). Overall, positive outcomes (ie, effects reported in favor of e-learning interventions) are overrepresented compared to negative outcomes and those with no effect. Finally, the outcomes are synthesized and presented narratively under four main themes, informed by the Kirkpatrick model [30].
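
As a minimal sketch of the bookkeeping behind this frequency count (the outcome records below are hypothetical examples, not our extracted data), outcomes can be tallied by Kirkpatrick level and direction of effect:

    # Minimal sketch with hypothetical records: tally extracted outcomes by
    # Kirkpatrick level and direction of effect, as summarized in Table 3.
    from collections import Counter

    # Each extracted outcome: (Kirkpatrick level, subtheme, direction of effect)
    extracted_outcomes = [
        ("1. Reactions", "Technology characteristics", "positive"),
        ("1. Reactions", "Technology characteristics", "negative"),
        ("2. Learning", "Knowledge: medication calculation", "positive"),
        ("2. Learning", "Skills: medication preparation and administration", "no effect"),
        ("4. Results", "Cost", "positive"),
    ]

    tally = Counter((level, direction) for level, _subtheme, direction in extracted_outcomes)

    for (level, direction), count in sorted(tally.items()):
        print(level, "-", direction, ":", count)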

Table 3. Frequency and direction of outcomes.
Numbers are documented outcomes from primary studies, by direction of the effect.
Levels of evaluation from the Kirkpatrick model and subthemes | Negative | No effect | Positive | Total
1. Nurse reactions with e-learning (n=11 SRsa), total | 7 | 0 | 27 | 34
  General | 0 | 0 | 9 | 9
  Anonymity | 0 | 0 | 1 | 1
  Authentic scenario | 0 | 0 | 1 | 1
  Computer and internet experience | 1 | 0 | 0 | 1
  Confidence in e-learning | 0 | 0 | 1 | 1
  Content | 0 | 0 | 1 | 1
  Discussion | 0 | 0 | 1 | 1
  Information sharing | 1 | 0 | 0 | 1
  Interactions | 2 | 0 | 2 | 4
  Learners' experience | 0 | 0 | 1 | 1
  Overall satisfaction | 0 | 0 | 1 | 1
  Person-centered approach | 0 | 0 | 1 | 1
  Satisfaction with interactive case studies | 0 | 0 | 2 | 2
  Scope of reflection | 0 | 0 | 1 | 1
  Sense of belonging | 0 | 0 | 1 | 1
  Technical support | 0 | 0 | 1 | 1
  Technology characteristics | 3 | 0 | 3 | 6
2. Nurse learning (n=18 SRs), total | 3 | 10 | 40 | 53
  Knowledge (n=13 SRs), total | 1 | 5 | 18 | 24
    General | 0 | 3 | 6 | 9
    Acute Physiology and Chronic Health Evaluation III scoring system | 0 | 0 | 1 | 1
    Arterial blood gas interpretation | 0 | 0 | 1 | 1
    Assessment (ability of neurological function) | 0 | 0 | 1 | 1
    Assessment (general) | 0 | 0 | 1 | 1
    Emergency preparedness | 0 | 0 | 1 | 1
    Hospital quality | 0 | 0 | 1 | 1
    Intravenous injections | 0 | 0 | 2 | 2
    Medication administration | 0 | 0 | 1 | 1
    Medication calculation | 1 | 0 | 1 | 2
    Neonatal care | 0 | 0 | 1 | 1
    Pain, physical and psychological symptoms, and loss | 0 | 1 | 0 | 1
    Palliative care | 0 | 1 | 1 | 2
  Attitude and self-efficacy (n=3 SRs), total | 0 | 0 | 4 | 4
    Confidence postintervention | 0 | 0 | 1 | 1
    Perceived effectiveness of e-learning | 0 | 0 | 1 | 1
    Personal and professional development | 0 | 0 | 1 | 1
    Stress in nurse-patient relationship | 0 | 0 | 1 | 1
    Self-efficacy (general) | 0 | 0 | 1 | 1
  Skills (n=10 SRs), total | 2 | 5 | 18 | 25
    General | 0 | 0 | 4 | 4
    Assessment (depression) | 0 | 0 | 1 | 1
    Cannulation | 1 | 0 | 0 | 1
    Cardiopulmonary resuscitation-defibrillation | 1 | 0 | 0 | 1
    Care practice changes | 0 | 0 | 1 | 1
    Child abuse detection | 0 | 0 | 1 | 1
    Communication | 0 | 0 | 1 | 1
    Critical appraisal of research literature | 0 | 1 | 1 | 2
    Emergency preparedness skills performance | 0 | 0 | 1 | 1
    Intravenous injections | 0 | 1 | 0 | 1
    Medication preparation and administration | 0 | 3 | 3 | 6
    Motivational interviewing | 0 | 0 | 1 | 1
    Monitoring | 0 | 0 | 1 | 1
    Neonatal care | 0 | 0 | 1 | 1
    Universal precautions-related behaviors | 0 | 0 | 1 | 1
    Scheduling activities | 0 | 0 | 1 | 1
3. Behavior (change in practice) (n=0 SRs) | 0 | 0 | 0 | 0
4. Results (n=2 SRs), total | 0 | 0 | 3 | 3
  Patient outcomes (n=1 SR)
    Nurses’ perceptions of care for older adults | 0 | 0 | 1 | 1
  Cost (n=2 SRs) | 0 | 0 | 2 | 2
Total | 10 | 10 | 70 | 90

aSR: systematic review.

Level 1: Nurse Reactions With E-Learning Interventions

Nurse reactions with e-learning interventions have been described in 11 of the 22 SRs (50%) [3,10,36,39-42,44,50,51,52].

Positive outcomes were described in 8 out of 22 SRs (36%) [10,36,39,40,42,44,48,51], mostly in terms of nurse satisfaction with using e-learning for the following reasons: quality of content [44], importance of social interactions [40,48], active learning [48], flexibility [10,51], effectiveness and convenience of the technology, as well as quality of support received [10]. Other sources of nurse satisfaction were reported as follows: patient-centered approach, time-saving, and self-directed learning. Nurses stressed the importance of authentic scenarios and of practicing skills in the work context [51]. Nurses reported higher satisfaction with e-learning than with videotaped courses [42], while in the SR by Lam-Antoniades et al [50], nurses found that there were advantages of e-continuing education over lecture courses. In other cases, nurses felt satisfied with both e-learning programs and traditional in-classroom programs [3].

In 3 out of 22 SRs (14%), nurse dissatisfaction with e-learning interventions was explained by the following reasons: technical difficulties [10,40], a lack of computer experience and internet literacy, slower information exchange [10], and a preference for face-to-face format [52]. In one SR [51], nurses identified access, navigation, and time as challenges.

Level 2: Nurse Learning
Overview

Nurse learning outcomes were reported in 18 of 22 SRs (82%) [1,3,4,10,36-38,40-47,49,51]. We divided learning into three subthemes: knowledge, attitude and self-efficacy, and skills.

Knowledge

In 13 SRs out of 22 (59%) [1,3,10,37,40-45,49,52,53], nurses improved their knowledge with the help of e-learning interventions on many topics, including assessment of ability of neurological function [42], medication administration and calculation [45], the Acute Physiology and Chronic Health Evaluation III scoring system [10], arterial blood gas interpretation, an intervention focusing on a rare disease [3], and palliative care [52]. With the help of e-learning, nurses improved their knowledge compared to no intervention [43]. However, nurse acquisition of knowledge in a classroom was superior to e-learning for drug dose calculations [45].

In 7 SRs out of 22 (32%) [10,36,40,46,49,52,53], no effects were reported on nurse knowledge. There were nonsignificant differences in knowledge scores on drug dose calculation between groups [46] and in learning effectiveness outcomes between the face-to-face and videoconference formats [52,53]. The effect size differences reported were not significant in 2 SRs [10,49]. There were no significant outcomes related to learning on the topics of intravenous (IV) injections and medication administration and preparation [36]. No significant change was found in nurse knowledge related to pain, physical and psychological symptoms, and loss [40].

Attitude and Self-Efficacy

Higher self-efficacy and performance scores were generally found among nurses using the e-learning intervention [42]. Nurses had positive attitudes toward effectiveness of online learning modules for motivational interviewing [51]. They perceived benefits of e-learning on their personal and professional development [52]. Other nurses improved their confidence in reducing stress in the nurse-patient relationship [37].

Skills

In 9 SRs out of 22 (41%) [3,4,36,37,42-44,46,47], positive outcomes were documented related to the increase of skills following nurses’ participation in e-learning.

Nurses had better performance outcomes with e-learning compared to no intervention [42,43]. Nurses improved their skills after attending a 1-hour, e-learning-based, mental health education program on self-harm, demanding behavior, manipulation, and splitting and attention-seeking behavior [37]. These nurses also had positive comments regarding assessment, monitoring, communication, and interventions such as scheduling pleasant activities [37]. Furthermore, they rated items highly that were related to the extent to which training changed their care practices [37].

Nurses using e-learning interventions experienced positive outcomes related to universal precautions, IV injections, and medication administration [36,45]. The meta-analysis on computer-based simulation compared to other learning strategies showed significant effect in favor of e-learning for medication administration and preparation [46]. Nurses’ perceived skills in performing, and clinical use of, brief motivational interviewing were more favorable postintervention [3]. Nurses had better emergency preparedness as a result of e-learning than with no intervention and they improved child abuse detection with e-learning compared to no intervention [4]. An increase in nurses’ skills scores with e-learning related to neonatal care has been reported [44]. In terms of cognitive skills, nurses self-assessed their critical appraisal competencies positively regarding research literacy [47].

Negative outcomes were reported in 2 out of 22 SRs (9%). Cardiopulmonary resuscitation and defibrillation performance was worse among nurses using long-distance learning than among those in the control group [42]. Nurses using the computer-based simulation to cannulate a real patient with force feedback had a lower success rate at the first attempt.

Finally, 2 SRs out of 22 (9%) reported no effect of e-learning on skills. Nurses showed no improvement in one critical appraisal competency related to research literacy: the identification of the sample [47]. Core 2 errors related to the preparation and administration of medication increased, but this was not statistically significant, as underlined in Bloomfield et al’s SR [36].

Level 3: Behavior

No outcomes related to nurses’ changes in practice were reported.

Level 4: Results
Patient Outcomes

In 1 of 22 SRs (5%), a positive outcome related to nurses’ perceptions of care outcomes for older adults was reported [37].

Cost

In 2 SRs out of 22 (9%) [10,37], positive outcomes were reported in terms of using intranet- and CD-ROM-based education as a low-cost method of providing education for nursing staff.

Discussion
Principal Findings

Our SRSRs aimed to synthesize the qualitative and quantitative evidence regarding the effects of e-learning interventions on nursing care in a CE context. To the best of our knowledge, this is the first broad synthesis on the impact of e-learning on nurses in a CE context. As we expected, heterogeneity was found between populations (ie, RNs and workplace settings), interventions, comparisons, outcomes, types of SRs, and corresponding evidence. Given this heterogeneity, conducting a meta-analysis was not the purpose of this SRSRs.

Main Outcomes: Four Levels of Evaluation

The most reported outcomes were learning (18/22, 82%), corresponding to Kirkpatrick’s evaluation level 2. Nurse skills were the most frequently reported, followed by knowledge. Outcomes related to evaluation level 1 (ie, nurses’ reactions with e-learning) were found in 11 out of the 22 SRs (50%). Authors of SRs described these reactions mainly with respect to technology characteristics, including perceived advantages and disadvantages (eg, navigability, technical difficulties, access, and flexibility). We found no SRs that reported outcomes regarding the translation of the content of e-learning interventions into nurses’ practice and behavior (ie, evaluation level 3). This finding does not mean that e-learning had no effect on practice. During the data analysis and interpretation, we used a conservative approach to classify the outcomes. Limited granularity of reported details is a well-known issue for authors of SRSRs and was observed in the included SRs. Therefore, it was difficult to know whether, for example, an outcome reported as a skill in medication administration and preparation reflected improved knowledge or an actual change in nurses’ practice. Only 1 SR included nurses’ perceptions of patient outcomes (ie, evaluation level 4) regarding care of older adults; 2 outcomes about costs were also reported. Overall, most reported outcomes were positive (n=70) as compared to negative (n=10) and neutral ones (n=10). This could indicate the presence of a reporting bias at the level of primary studies and SRs because of the disproportionate number of positive results [58].

Our findings related to the overrepresentation of the effects of e-learning interventions on reactions and learning, as well as the underrepresentation on practice and patient outcomes, are similar to those found in the literature among health care students, including nursing students [7,59]; physicians [9,60,61]; allied health practitioners [62]; and various health care providers [2]. However, Militello et al [34] conducted an SRSRs on the efficacy of computer-mediated continuing education for health care providers, including nurses, and they performed a meta-analysis. They classified their outcomes according to the Kirkpatrick model [30]. They found that 8 of the 11 SRs included measures of learner satisfaction (Level 1), 10 SRs included learning outcomes (Level 2), 9 included outcomes on provider behavior or performance (Level 3), and 5 included health and patient outcomes (Level 4). We can suppose that Militello et al [34] were more inclusive in their way of classifying outcomes related to practice change than we were in our data analysis and synthesis. Furthermore, many authors (eg, Légaré et al [9], Légaré et al [63], and Kitto et al [64]) are interested in this transition from Level 2 to Level 3 that can occur as a result of changes promoted by the content and format of continuing professional development activities, as well as how competencies are acquired and assessed. This transition not only depends on the acquisition of knowledge and skills, but also on a myriad of other elements related, for instance, to the intervention (eg, relative advantage), the outer context (eg, resources), the inner context (eg, organizational culture), individual characteristics (eg, learning style), and process (eg, planning) [65].

Methodological Quality

The methodological quality of SRs varied greatly: 59% of SRs (13/22) had an overall high or unclear risk of bias, while 55% (12/22) had a low or critically low level of confidence. Only 41% (9/22) of SRs were assessed with low risk of bias, while 45% (10/22) had a moderate or high level of confidence. Our results are different from those of Militello et al [34], who synthesized the methodological quality of SRs (n=11) on computer-mediated CE for health care providers. They used 11 items from the AMSTAR [66]. Out of 11 SRs, 5 were of moderate quality and 6 were of high quality. The authors only included quantitative SRs and meta-analyses.

Several reasons might explain these findings. We used two tools that were designed to assess systematic quantitative reviews. When we started this SRSRs in 2017 [11], no tool was available to appraise the quality of qualitative and mixed-studies reviews. Some criteria from the ROBIS tool and the AMSTAR 2, as well as their corresponding vocabulary (eg, meta-analysis, heterogeneity, and risk of bias), were not adapted to the specificities of qualitative and mixed-studies reviews. Furthermore, the systematic methodology of some included SRs was not obvious. Some authors (eg, Knapp and Byers [10] and Carroll et al [39]) mentioned the word systematic in their paper but did not provide enough detail to fully explain the systematic nature of their work. It is important to highlight that methodological quality is only one of three dimensions of quality and only one dimension of critical appraisal; it is centered on how the SR is conducted [67]. It does not capture other concepts, such as the social relevance of findings and the applicability and transferability of findings to other contexts. The dimension of conceptual clarity can also be appraised; it relates to insightfulness, including the clarity, richness, and depth of description of a phenomenon [68]. Campbell et al [69] observed that methodological and conceptual quality can be inversely correlated: papers appraised with a low methodological quality score are usually those providing good conceptual insight. This can be partially explained by inadequate reporting of qualitative research methods. We recommend appreciating the richness of our findings as a means to get a broad picture of the effects of e-learning interventions on nursing care. However, our results must be interpreted with caution and are not meant to guide or inform practice, nor are they meant to determine which e-learning interventions are better in supporting CE for nurses.

Strengths and Potential Biases in the Systematic Review of Systematic Reviews Process

We used a comprehensive and systematic process throughout all stages of this SRSRs. In the search strategy, we used general keywords to explore the e-learning concept as an umbrella term, such as virtual learning environment, distance learning, Web-based learning, e-learning, and m-learning, among others. However, we did not use all specific key terms representing all forms of digital education, such as serious games and gamification interventions, massive open online courses, virtual reality, and virtual patients [70]. Recent publications focused, for example, on serious games [59,71] and virtual reality [72], either in the context of preregistration training of health students or of postregistration training among health care professionals such as nurses. Nonetheless, the use of general key terms allowed for the coverage of a wide range of potentially relevant references, considering the initial 12,906 records screened.

During the screening of titles, abstracts, and full texts, we observed that information regarding the population was sometimes misleading or incomplete, such as a population of “health students” (eg, Coyne et al [41]). In such cases, instead of presuming that the abstract was not eligible based on the population, we decided to retrieve the full text. We discovered that nurses were targeted in some of these papers. Even though we were inclusive during the screening process, we may have excluded some references based on limited information provided in titles and/or abstracts. In order to limit the risk of excluding potentially relevant papers, we conducted the screening process as a team of three reviewers.

Future Research

Our SRSRs targeted specific questions about the effects of e-learning interventions on nursing care in a CE context. Few details were provided regarding RN characteristics (eg, age and educational background) and interventions, including their instructional designs. The lack of information granularity provided by the authors of SRs [69,70] is a limitation of conducting SRSRs. Cook [73] argued that these instructional designs can have an impact on the outcomes. Furthermore, few theoretical cues were given about the active ingredients of the interventions that predict or explain professional or behavior change.

We would recommend using other types of knowledge synthesis to explore complementary and broader research questions. The following are some examples:

  1. What are the contexts and mechanisms through which nurses and nursing students translate knowledge and skills from e-learning interventions to their practice and, consequently, how could they lead to specific outcomes among patients? How does it work? In that case, a realist review could be performed in a digital-based nursing education and CE context, with a lens similar to the work conducted by Wong et al [74].
  2. How do nurses experience e-learning interventions in their work setting? How do they describe their impact on their practice or in their environment? A meta-synthesis of qualitative studies could be done to answer these questions.

It would be useful if authors of primary studies provided enough information regarding the intervention, the context, and mechanisms, including theoretical underpinnings, which could allow researchers to understand the components that can affect outcomes.

We would also suggest exploring other types of outcomes that can be related to having e-learning interventions in workplace settings. We are in agreement with Bernt et al [62], in that the relationship between access to continuing professional development and workforce retention is unknown. Other works could be done to investigate the influence of e-learning on nursing resources or structures [19], for instance, on nurse retention and working conditions.

Furthermore, most outcomes found in the literature focus on reactions and nurses’ satisfaction, learning, and change in practice. Change in knowledge and learning can be seen under a cognitivist learning approach. This approach targets the work of single individuals versus, for example, the social interactions that contribute to the learning experience of learners, seen under a social constructivist lens [3]. We would benefit from using a diversity of theoretical underpinnings, educational learning theories [75], and critical [76,77] and complexity theories [78] that have the potential to shed light on many perspectives (eg, individual, interpersonal, organizational, and sociopolitical) of envisioning education, professional development, and learners’ experience.

Conclusions

The findings of this SRSRs show that the effects of e-learning are mainly reported in terms of reactions, knowledge, attitude, self-efficacy, and skills (ie, the first two evaluation levels from the Kirkpatrick model). The effectiveness of e-learning interventions used by nurses in a CE context remains unknown regarding how the learning can be transferred to change practice and affect patient outcomes. Further scientific, methodological, theoretical, and practice-based breakthroughs are needed to feed the fast-growing field of e-learning in nursing education, especially in a life-long learning perspective.

Acknowledgments

This work was supported by the Quebec Network on Nursing Intervention Research with funds from the Fonds de recherche du Québec Santé (FRQS)—Ministère de la Santé et des Services sociaux (MSSS) (grant number: 26674). GR received funding for her doctoral studies from the Canadian Institutes of Health Research (CIHR), the FRQS, the Quebec Strategy for Patient-Oriented Research—Support for People and Patient-Oriented Research and Trials (SPOR-SUPPORT) Unit, and the Quebec Network on Nursing Intervention Research.

Special thanks to Tripti Pande, who performed the search strategy, and Frédéric Bergeron, health information specialist at Université Laval, who gave valuable advice in developing the search strategy. Thank you to Gisèle Irène Claudine Mmemba, who updated the search strategy.

Authors' Contributions

GR conceived and designed the SRSRs with input from MPG and JC. GR informed the search strategy. GR, JPG, and EH were responsible for data extraction. GR and JBP assessed the methodological quality of the SRs. JPG and EH were involved in the interpretation of results. GR, MPG, JC, JPG, EH, CAD, and JBP were engaged in the drafting of this manuscript and they all read and approved the final version.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Overlap between primary studies within included systematic reviews.

XLSX File (Microsoft Excel File), 19 KB

Multimedia Appendix 2

List of included systematic reviews.

PDF File (Adobe PDF File), 77 KB

Multimedia Appendix 3

List of excluded papers.

PDF File (Adobe PDF File), 329 KB

Multimedia Appendix 4

General characteristics of included systematic reviews.

PDF File (Adobe PDF File), 325 KB

Multimedia Appendix 5

Interventions, comparisons, and outcomes from the studies.

PDF File (Adobe PDF File), 327 KB

  1. Lahti M, Hätönen H, Välimäki M. Impact of e-learning on nurses' and student nurses knowledge, skills, and satisfaction: A systematic review and meta-analysis. Int J Nurs Stud 2014 Jan;51(1):136-149. [CrossRef] [Medline]
  2. Vaona A, Banzi R, Kwag KH, Rigon G, Cereda D, Pecoraro V, et al. E-learning for health professionals. Cochrane Database Syst Rev 2018 Jan 21;1:CD011736 [FREE Full text] [CrossRef] [Medline]
  3. Nicoll P, MacRury S, van Woerden HC, Smyth K. Evaluation of technology-enhanced learning programs for health care professionals: Systematic review. J Med Internet Res 2018 Apr 11;20(4):e131 [FREE Full text] [CrossRef] [Medline]
  4. Sinclair PM, Kable A, Levett-Jones T, Booth D. The effectiveness of Internet-based e-learning on clinician behaviour and patient outcomes: A systematic review. Int J Nurs Stud 2016 May;57:70-81. [CrossRef] [Medline]
  5. De Caro W, Marucci AR, Lancia L, Sansoni J. Case study in nursing. In: Biondi-Zoccai G, editor. Umbrella Reviews: Evidence Synthesis with Overviews of Reviews and Meta-Epidemiologic Studies. Cham, Switzerland: Springer International Publishing; 2016:273-303.
  6. Clark RC, Mayer RE. E-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning. 4th edition. Hoboken, NJ: John Wiley & Sons; 2016.
  7. Voutilainen A, Saaranen T, Sormunen M. Conventional vs e-learning in nursing education: A systematic review and meta-analysis. Nurse Educ Today 2017 Mar;50:97-103. [CrossRef] [Medline]
  8. Canadian Nurses Association. Framework for the Practice of Registered Nurses in Canada. Ottawa, ON: Canadian Nurses Association; 2015.   URL: https://www.cna-aiic.ca/~/media/cna/page-content/pdf-en/framework-for-the-pracice-of-registered-nurses-in-canada.pdf?la=en [accessed 2019-06-06]
  9. Légaré F, Freitas A, Thompson-Leduc P, Borduas F, Luconi F, Boucher A, et al. The majority of accredited continuing professional development activities do not target clinical behavior change. Acad Med 2015 Feb;90(2):197-202. [CrossRef] [Medline]
  10. Knapp SJ, Byers JF. Use of the Internet in staff development and its application in helping critical care nurses to lower family stress. J Nurses Staff Dev 2008;24(1):E1-E8. [CrossRef] [Medline]
  11. Rouleau G, Gagnon MP, Côté J, Payne-Gagnon J, Hudson E, Bouix-Picasso J, et al. Effects of e-learning in a continuing education context on nursing care: A review of systematic qualitative, quantitative and mixed studies reviews (protocol). BMJ Open 2017 Oct 16;7(10):e018441 [FREE Full text] [CrossRef] [Medline]
  12. De Caro W, Marucci AR, Giordani M, Sansoni J. E-learning and university nursing education: An overview of reviews [Article in Italian]. Prof Inferm 2014;67(2):107-116. [CrossRef] [Medline]
  13. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: A meta-analysis. JAMA 2008 Sep 10;300(10):1181-1196. [CrossRef] [Medline]
  14. Hunt H, Pollock A, Campbell P, Estcourt L, Brunton G. An introduction to overviews of reviews: Planning a relevant research question and objective for an overview. Syst Rev 2018 Mar 01;7(1):39 [FREE Full text] [CrossRef] [Medline]
  15. O'Connor D, Green S, Higgins JPT. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0 (updated March 2011). London, UK: The Cochrane Collaboration; 2011. Defining the review question and developing criteria for including studies   URL: https://handbook-5-1.cochrane.org/chapter_5/5_defining_the_review_question_and_developing_criteria_for.htm [accessed 2019-05-07]
  16. Pollock M, Fernandes RM, Becker LA, Featherstone R, Hartling L. What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary. Syst Rev 2016 Nov 14;5(1):190 [FREE Full text] [CrossRef] [Medline]
  17. Pollock M, Fernandes R, Becker L, Pieper D, Hartling L. Chapter V: Overviews of reviews. Draft version (October 8, 2018). In: Cochrane Handbook for Systematic Reviews of Interventions. London, UK: Cochrane Collaboration; 2018.
  18. Henriksen K, Dayton E, Keyes M, Carayon P, Hughes R. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville, MD: AHRQ Publication; 2008. Understanding adverse events: A human factors framework   URL: https://www.umbrellacaremanagement.com/wp-content/uploads/2018/02/nurseshdbk.pdf [accessed 2019-07-30]
  19. Dubois CA, D'Amour D, Pomey M, Girard F, Brault I. Conceptualizing performance of nursing care as a prerequisite for better measurement: A systematic and interpretive review. BMC Nurs 2013 Mar 07;12:7 [FREE Full text] [CrossRef] [Medline]
  20. Rouleau G, Gagnon MP, Côté J, Payne-Gagnon J, Hudson E, Dubois CA. Impact of information and communication technologies on nursing care: Results of an overview of systematic reviews. J Med Internet Res 2017 Apr 25;19(4):e122 [FREE Full text] [CrossRef] [Medline]
  21. Systematic Reviews: CRD's Guidance for Undertaking Reviews in Health Care. York, UK: Centre for Reviews and Dissemination, University of York; 2009 Jan.   URL: https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf [accessed 2017-05-07]
  22. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017 Sep 21;358:j4008 [FREE Full text] [CrossRef] [Medline]
  23. Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, ROBIS group. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol 2016 Jan;69:225-234 [FREE Full text] [CrossRef] [Medline]
  24. Higgins JPT, Lasserson T, Chandler J, Tovey D, Churchill R. Methodological Expectations of Cochrane Intervention Reviews. London, UK: Cochrane; 2019 Jul. Standards for the conduct of new Cochrane Intervention Reviews   URL: https://community.cochrane.org/book_pdf/550 [accessed 2019-07-24]
  25. Pluye P, Hong QN. Combining the power of stories and the power of numbers: Mixed methods research and mixed studies reviews. Annu Rev Public Health 2014;35:29-45. [CrossRef] [Medline]
  26. Creswell JW. Research Design: Qualitative, Quantitative, and Mixed-Methods Approaches. 4th edition. Thousand Oaks, CA: SAGE Publications; 2014.
  27. Pieper D, Antoine S, Mathes T, Neugebauer EA, Eikermann M. Systematic review finds overlapping reviews were not mentioned in every other overview. J Clin Epidemiol 2014 Apr;67(4):368-375. [CrossRef] [Medline]
  28. Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. A decision tool to help researchers make decisions about including systematic reviews in overviews of reviews of healthcare interventions. Syst Rev 2019 Jan 22;8(1):29 [FREE Full text] [CrossRef] [Medline]
  29. Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. The impact of different inclusion decisions on the comprehensiveness and complexity of overviews of reviews of healthcare interventions. Syst Rev 2019 Jan 11;8(1):18 [FREE Full text] [CrossRef] [Medline]
  30. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. 3rd edition. San Francisco, CA: Berrett-Koehler Publishers; 2006.
  31. Shen N, Yufe S, Saadatfard O, Sockalingam S, Wiljer D. Rebooting Kirkpatrick: Integrating information system theory into the evaluation of Web-based continuing professional development interventions for interprofessional education. J Contin Educ Health Prof 2017;37(2):137-146. [CrossRef] [Medline]
  32. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: Integrating planning and assessment throughout learning activities. J Contin Educ Health Prof 2009;29(1):1-15. [CrossRef] [Medline]
  33. Jeffries PR. A framework for designing, implementing, and evaluating simulations used as teaching strategies in nursing. Nurs Educ Perspect 2005;26(2):96-103. [Medline]
  34. Militello LK, Gance-Cleveland B, Aldrich H, Kamal R. A methodological quality synthesis of systematic reviews on computer-mediated continuing education for healthcare providers. Worldviews Evid Based Nurs 2014 Jun;11(3):177-186. [CrossRef] [Medline]
  35. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Med 2009 Jul 21;6(7):e1000100 [FREE Full text] [CrossRef] [Medline]
  36. Bloomfield JG, While AE, Roberts JD. Using computer assisted learning for clinical skills education in nursing: Integrative review. J Adv Nurs 2008 Aug;63(3):222-235. [CrossRef] [Medline]
  37. Brunero S, Jeon Y, Foster K. Mental health education programmes for generalist health professionals: An integrative review. Int J Ment Health Nurs 2012 Oct;21(5):428-444. [CrossRef] [Medline]
  38. Byrne AJ, Pugsley L, Hashem MA. Review of comparative studies of clinical skills training. Med Teach 2008;30(8):764-767. [CrossRef] [Medline]
  39. Carroll C, Booth A, Papaioannou D, Sutton A, Wong R. UK health-care professionals' experience of on-line learning techniques: A systematic review of qualitative data. J Contin Educ Health Prof 2009;29(4):235-241. [CrossRef] [Medline]
  40. Chipps J, Brysiewicz P, Mars M. A systematic review of the effectiveness of videoconference-based tele-education for medical and nursing education. Worldviews Evid Based Nurs 2012 Apr;9(2):78-87. [CrossRef] [Medline]
  41. Coyne E, Rands H, Frommolt V, Kain V, Plugge M, Mitchell M. Investigation of blended learning video resources to teach health students clinical skills: An integrative review. Nurse Educ Today 2018 Apr;63:101-107. [CrossRef] [Medline]
  42. Du S, Liu Z, Liu S, Yin H, Xu G, Zhang H, et al. Web-based distance learning for nurse education: A systematic review. Int Nurs Rev 2013 Jun;60(2):167-177. [CrossRef] [Medline]
  43. Feng J, Chang Y, Chang H, Erdley WS, Lin C, Chang Y. Systematic review of effectiveness of situated e-learning on medical and nursing education. Worldviews Evid Based Nurs 2013 Aug;10(3):174-183. [CrossRef] [Medline]
  44. Freire LM, Paula MA, Duarte ED, Bueno M. Distance education in neonatal nursing scenarios: A systematic review [Article in Portuguese]. Rev Esc Enferm USP 2015 Jun;49(3):515-521 [FREE Full text] [CrossRef] [Medline]
  45. Härkänen M, Voutilainen A, Turunen E, Vehviläinen-Julkunen K. Systematic review and meta-analysis of educational interventions designed to improve medication administration skills and safety of registered nurses. Nurse Educ Today 2016 Jun;41:36-43. [CrossRef] [Medline]
  46. Hegland PA, Aarlie H, Strømme H, Jamtvedt G. Simulation-based training for nurses: Systematic review and meta-analysis. Nurse Educ Today 2017 Jul;54:6-20. [CrossRef] [Medline]
  47. Hines S, Ramsbotham J, Coyer F. The effectiveness of interventions for improving the research literacy of nurses: A systematic review. Worldviews Evid Based Nurs 2015 Oct;12(5):265-272. [CrossRef] [Medline]
  48. Kakushi LE, Évora YD. Social networking in nursing education: Integrative literature review. Rev Lat Am Enfermagem 2016;24:e2709 [FREE Full text] [CrossRef] [Medline]
  49. Kang J, Seomun G. Evaluating Web-based nursing education's effects: A systematic review and meta-analysis. West J Nurs Res 2018 Nov;40(11):1677-1697. [CrossRef] [Medline]
  50. Lam-Antoniades M, Ratnapalan S, Tait G. Electronic continuing education in the health professions: An update on evidence from RCTs. J Contin Educ Health Prof 2009;29(1):44-51. [CrossRef] [Medline]
  51. Lawn S, Zhi X, Morello A. An integrative review of e-learning in the delivery of self-management support training for health professionals. BMC Med Educ 2017 Oct 10;17(1):183 [FREE Full text] [CrossRef] [Medline]
  52. Phillips JL, Piza M, Ingham J. Continuing professional development programmes for rural nurses involved in palliative care delivery: An integrative review. Nurse Educ Today 2012 May;32(4):385-392. [CrossRef] [Medline]
  53. Tomlinson J, Shaw T, Munro A, Johnson R, Madden DL, Phillips R, et al. How does tele-learning compare with other forms of education delivery? A systematic review of tele-learning educational outcomes for health professionals. N S W Public Health Bull 2013 Nov;24(2):70-75 [FREE Full text] [CrossRef] [Medline]
  54. Seibert DC, Guthrie JT, Adamo G. Improving learning outcomes: Integration of standardized patients & telemedicine technology. Nurs Educ Perspect 2004;25(5):232-237. [Medline]
  55. Stain SC, Mitchell M, Belue R, Mosley V, Wherry S, Adams CZ, et al. Objective assessment of videoconferenced lectures in a surgical clerkship. Am J Surg 2005 Jan;189(1):81-84. [CrossRef] [Medline]
  56. Conole G, Dyke M, Oliver M, Seale J. Mapping pedagogy and tools for effective learning design. Comput Educ 2004 Aug;43(1-2):17-33. [CrossRef]
  57. Cheng Y. The effects of information systems quality on nurses' acceptance of the electronic learning system. J Nurs Res 2012 Mar;20(1):19-30. [CrossRef] [Medline]
  58. Mlinarić A, Horvat M, Šupak Smolčić V. Dealing with the positive publication bias: Why you should really publish your negative results. Biochem Med (Zagreb) 2017 Oct 15;27(3):030201 [FREE Full text] [CrossRef] [Medline]
  59. Gentry SV, Gauthier A, L'Estrade Ehrstrom B, Wortley D, Lilienthal A, Tudor Car L, et al. Serious gaming and gamification education in health professions: Systematic review. J Med Internet Res 2019 Mar 28;21(3):e12994 [FREE Full text] [CrossRef] [Medline]
  60. Cho D, Cosimini M, Espinoza J. Podcasting in medical education: A review of the literature. Korean J Med Educ 2017 Dec;29(4):229-239 [FREE Full text] [CrossRef] [Medline]
  61. Chen F, Lui AM, Martinelli SM. A systematic review of the effectiveness of flipped classrooms in medical education. Med Educ 2017 Jun;51(6):585-597. [CrossRef] [Medline]
  62. Berndt A, Murray CM, Kennedy K, Stanley MJ, Gilbert-Hunt S. Effectiveness of distance learning strategies for continuing professional development (CPD) for rural allied health practitioners: A systematic review. BMC Med Educ 2017 Jul 12;17(1):117 [FREE Full text] [CrossRef] [Medline]
  63. Légaré F, Borduas F, Freitas A, Jacques A, Godin G, Luconi F, CPD-KT team. Development of a simple 12-item theory-based instrument to assess the impact of continuing professional development on clinical behavioral intentions. PLoS One 2014;9(3):e91013 [FREE Full text] [CrossRef] [Medline]
  64. Kitto S, Danilovich N, Delva D, Meuser J, Presseau J, Grimshaw J, et al. Uncharted territory: Knowledge translation of competency-based continuing professional development in family medicine. Can Fam Physician 2018 Apr;64(4):250-253 [FREE Full text] [Medline]
  65. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci 2009 Aug 07;4:50 [FREE Full text] [CrossRef] [Medline]
  66. Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, et al. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol 2009 Oct;62(10):1013-1020. [CrossRef] [Medline]
  67. Hong QN, Pluye P. A conceptual framework for critical appraisal in systematic mixed studies reviews. J Mix Methods Res 2018 Apr 21:1. [CrossRef]
  68. Toye F, Seers K, Allcock N, Briggs M, Carr E, Andrews J, et al. 'Trying to pin down jelly': Exploring intuitive processes in quality assessment for meta-ethnography. BMC Med Res Methodol 2013 Mar 21;13:46 [FREE Full text] [CrossRef] [Medline]
  69. Campbell R, Pound P, Morgan M, Daker-White G, Britten N, Pill R, et al. Evaluating meta-ethnography: Systematic analysis and synthesis of qualitative research. Health Technol Assess 2011 Dec;15(43):1-164 [FREE Full text] [CrossRef] [Medline]
  70. Car J, Carlstedt-Duke J, Tudor Car L, Posadzki P, Whiting P, Zary N, Digital Health Education Collaboration. Digital education in health professions: The need for overarching evidence synthesis. J Med Internet Res 2019 Feb 14;21(2):e12913 [FREE Full text] [CrossRef] [Medline]
  71. Ijaz A, Khan M, Ali S, Qadir J, Boulos M. Serious games for healthcare professional training: A systematic review. Eur J Biomed Inform 2019;15:12-28 [FREE Full text]
  72. Kyaw BM, Saxena N, Posadzki P, Vseteckova J, Nikolaou CK, George PP, et al. Virtual reality for health professions education: Systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res 2019 Jan 22;21(1):e12959 [FREE Full text] [CrossRef] [Medline]
  73. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Instructional design variations in Internet-based learning for health professions education: A systematic review and meta-analysis. Acad Med 2010 May;85(5):909-922. [CrossRef] [Medline]
  74. Wong G, Greenhalgh T, Pawson R. Internet-based medical education: A realist review of what works, for whom and in what circumstances. BMC Med Educ 2010 Feb 02;10:12 [FREE Full text] [CrossRef] [Medline]
  75. Lavoie P, Michaud C, Bélisle M, Boyer L, Gosselin E, Grondin M, et al. Learning theories and tools for the assessment of core nursing competencies in simulation: A theoretical review. J Adv Nurs 2018 Feb;74(2):239-250. [CrossRef] [Medline]
  76. Mooney M, Nolan L. A critique of Freire's perspective on critical social theory in nursing education. Nurse Educ Today 2006 Apr;26(3):240-244. [CrossRef] [Medline]
  77. Josephsen J. Critically reflexive theory: A proposal for nursing education. Adv Nurs 2014;2014:1-7. [CrossRef]
  78. Opfer VD, Pedder D. Conceptualizing teacher professional learning. Rev Educ Res 2011 Sep;81(3):376-407. [CrossRef]


Abbreviations

AMSTAR: Assessment of Multiple Systematic Reviews
CE: continuing education
CIHR: Canadian Institutes of Health Research
CINAHL: Cumulative Index of Nursing and Allied Health Literature
FRQS: Fonds de recherche du Québec Santé
IV: intravenous
JBI: Joanna Briggs Institute
MSR: mixed-studies review
MSSS: Ministère de la Santé et des Services sociaux
NCPF: Nursing Care Performance Framework
PICOS: population, intervention, comparison, outcomes, and study design
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PROSPERO: International Prospective Register of Systematic Reviews
QT: quantitative review
RN: registered nurse
ROBIS: Risk Of Bias In Systematic Reviews
SPOR-SUPPORT: Strategy for Patient-Oriented Research—Support for People and Patient-Oriented Research and Trials
SR: systematic review
SRSRs: systematic review(s) of systematic reviews


Edited by G Eysenbach; submitted 24.06.19; peer-reviewed by I Paraskakis, S Sarbadhikari, E Ray Chaudhuri, B Worm, T Taveira-Gomes; comments to author 16.07.19; revised version received 30.07.19; accepted 31.07.19; published 02.10.19

Copyright

©Geneviève Rouleau, Marie-Pierre Gagnon, José Côté, Julie Payne-Gagnon, Emilie Hudson, Carl-Ardy Dubois, Julien Bouix-Picasso. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 02.10.2019

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.