Published in Vol 23, No 12 (2021): December

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/25012.
Establishing a Working Definition of User Experience for eHealth Interventions of Self-reported User Experience Measures With eHealth Researchers and Adolescents: Scoping Review


Review

1Department of Pediatrics, University of Alberta, Edmonton, AB, Canada

2School of Psychology and Counselling, Centre for Health Research, University of Southern Queensland, Springfield Central, Australia

3Knowledge Institute for Child and Youth Mental Health and Addictions, Ottawa, ON, Canada

4CHEO (Children’s Hospital of Eastern Ontario) Research Institute, Ottawa, ON, Canada

Corresponding Author:

Amanda S Newton, PhD

Department of Pediatrics

University of Alberta

3-526 Edmonton Clinic Health Academy

11405-87 Avenue

Edmonton, AB, T6G 1C9

Canada

Phone: 1 7802485581

Email: mandi.newton@ualberta.ca


Background: Across eHealth intervention studies involving children, adolescents, and their parents, researchers have measured user experience to assist with intervention development, refinement, and evaluation. To date, no widely accepted definitions or measures of user experience exist to support a standardized approach for evaluation and comparison within or across interventions.

Objective: We conducted a scoping review with a subsequent Delphi consultation to identify how user experience is defined and measured in eHealth research studies, characterize the measurement tools used, and establish working definitions for domains of user experience that could be used in future eHealth evaluations.

Methods: We systematically searched electronic databases for published and gray literature available from January 1, 2005, to April 11, 2019. We included studies assessing an eHealth intervention that targeted any health condition and was designed for use by children, adolescents, and their parents. eHealth interventions needed to be web-, computer-, or mobile-based, mediated by the internet, and include some degree of interactivity. We required studies to report the measurement of user experience as first-person experiences, involving cognitive and behavioral factors, reported by intervention users. We appraised the quality of user experience measures in included studies using published criteria: well-established, approaching well-established, promising, or not yet established. We conducted a descriptive analysis of how user experience was defined and measured in each study. Review findings subsequently informed the survey questions used in the Delphi consultations with eHealth researchers and adolescent users on how user experience should be defined and measured.

Results: Of the 8634 articles screened for eligibility, 129 articles and 1 erratum were included in the review. A total of 30 eHealth researchers and 27 adolescents participated in the Delphi consultations. On the basis of the literature and consultations, we proposed working definitions for 6 main user experience domains: acceptability, satisfaction, credibility, usability, user-reported adherence, and perceived impact. Although most studies incorporated a study-specific measure, we identified 10 well-established measures to quantify 5 of the 6 domains of user experience (all except user-reported adherence). Our adolescent and researcher participants ranked perceived impact as one of the most important domains of user experience and usability as one of the least important domains. Rankings between adolescents and researchers diverged for other domains.

Conclusions: Findings highlight the various ways in which user experience has been defined and measured across studies and what aspects are most valued by researchers and adolescent users. We propose incorporating the working definitions and available measures of user experience to support consistent evaluation and reporting of outcomes across studies. Future studies can refine the definitions and measurement of user experience, explore how user experience relates to other eHealth outcomes, and inform the design and use of human-centered eHealth interventions.

J Med Internet Res 2021;23(12):e25012

doi:10.2196/25012




Introduction

Background

Over the past 15 years, the number of eHealth interventions available for use by children, adolescents, and their parents has grown considerably. A commonly used approach to eHealth intervention development involves human-centered design (also known as patient- or user-centered design) [1,2]. This approach includes the active participation of intervention users (children, adolescents, and parents) in the intervention design and development process. Including user perspectives and input in intervention design improves the likelihood that an intervention will be easy to use, compatible with the user and their individual context, and therefore deemed useful [3-5]. More recently, the importance of users’ involvement in intervention evaluation has been recognized, with measures of user experience included in evaluations to identify whether and how an eHealth intervention meets the preferences and needs of its users. Understanding user experience can, in turn, indicate how well an intervention embodies human-centered design principles.

The term user experience initially arose in the field of human-computer interaction and technology design and was broadly defined as “a person’s perception and responses that result from the use or anticipated use of a product, system or service” [6]. To date, a wide range of definitions and concepts (eg, satisfaction, acceptability, adherence, engagement, and usability) have been used across eHealth studies to evaluate user experience with an eHealth intervention [7-14]. Data collection methods have varied as well, including self-report questionnaires, in-person or telephone-based interview guides, and different types of automatic data capture of users’ interactions with an intervention [15]. These variations suggest that user experience may be a multidimensional concept with several important constructs to define and measure within an eHealth intervention, and a consensus among researchers is yet to be reached. Similar to the mounting need in recent years to define, standardize, and measure adherence [16,17], the need to converge on a common understanding of user experience is becoming more apparent. A set of accepted domains, definitions, and evaluation measures used in eHealth intervention development and evaluation would benefit children, adolescents, and parents by allowing them to compare user experiences across multiple interventions and inform decisions about their own eHealth intervention use. These accepted approaches would also provide guidance to researchers in the eHealth field and allow for continued advancement and improvement of the study of user experience and other eHealth outcomes (eg, intervention effectiveness, user safety, and user empowerment), intervention design components (eg, content and technological features), and factors that can influence the intervention experience of users (eg, context of use and user expectations).

This Study

This study involved 2 phases: a scoping review and Delphi consultations. Our decision to conduct a review plus consultation reflects a hermeneutic position that user experience cannot be fully understood without examining both its current context (the existing literature) and the meanings attributed to it (the Delphi consultations). The scoping review included diverse literature to identify how user experience has been defined and measured in eHealth research studies of children, adolescents, and parents. These findings subsequently informed the development of the surveys used in the Delphi consultations with eHealth researchers and adolescent users of an eHealth intervention, which focused on establishing a working definition of user experience (and the domains that it may encompass) and developing recommendations for measuring user experience in future evaluations of eHealth interventions.


Methods

Study Design

We followed the scoping review framework proposed by Arksey and O’Malley [18] with a Delphi consultation recommended by Levac et al [19]. Reporting of the review adheres to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Extension for Scoping Reviews checklist [20]. The consultation exercise followed synthesis of findings from the literature. Approvals from the Health Research Ethics Board at the University of Alberta and Human Research Ethics Committee at the University of Southern Queensland were received for the Delphi consultation. The approved study protocol is available upon request.

Development of the Search Strategy

The search strategy (Multimedia Appendix 1) was developed using an iterative process. First, we developed a list of search terms using key concepts and terms from a convenience sample of indexed studies published in various years that examined user experience with an eHealth intervention. A research librarian provided input on appropriate filters, such as Medical Subject Headings terms, and modified these terms to comply with different databases. For this review, we were interested in identifying both published and unpublished English language studies of user experiences, and we sought to include studies made available between January 1, 2005, and April 11, 2019. We included the literature published since 2005 to focus on contemporary studies of eHealth technologies (eg, mobile apps, desktop-based, and multisession or single-session interventions). The search terms and parameters were tested for sensitivity, determined by whether the search strategy successfully retrieved the 63 citations that were manually selected a priori (see Multimedia Appendix 2 for the list of test citations). We then conducted 2 rounds of preliminary screening to further refine the strategy. The finalized search strategy was peer-reviewed before implementation.
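In computational terms, this kind of sensitivity test amounts to a recall check of the search output against the a priori test set. The following is a minimal Python sketch of such a check; the citation identifiers and the retrieved set are hypothetical placeholders, not the review’s actual data.

```python
# Minimal sketch of a search-sensitivity (recall) check: what fraction of the
# a priori gold-standard citations does a candidate search strategy retrieve?
# All identifiers below are hypothetical placeholders.
gold_citations = {"pmid:1001", "pmid:1002", "pmid:1003"}  # in practice, the 63 test citations
retrieved_by_strategy = {"pmid:1001", "pmid:1003", "pmid:2004"}

found = gold_citations & retrieved_by_strategy
sensitivity = len(found) / len(gold_citations)
missed = sorted(gold_citations - retrieved_by_strategy)
print(f"sensitivity = {sensitivity:.0%}; missed citations: {missed}")
```

In practice, the strategy would be refined until it retrieves the full test set.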

Search Strategy

We identified studies from the following databases: Ovid MEDLINE, PsycINFO, CINAHL, EBM Reviews (Cochrane Database of Systematic Reviews, ACP Journal Club, Database of Abstracts of Reviews of Effects, Cochrane Central Register of Controlled Trials, Cochrane Methodology Register, Health Technology Assessment, and NHS Economic Evaluation Database), the Cochrane Central Register of Controlled Trials, and ClinicalTrials.gov. We searched Google Scholar from January 2005 to April 2019 and conference proceedings of the International Society for Research on Internet Interventions from January 2016 to April 2019, as there are no archives of International Society for Research on Internet Interventions proceedings prior to 2016. We reviewed the reference lists of reviews (systematic, narrative, etc) to identify additional potentially relevant studies.

Criteria for Considering Studies for the Scoping Review

We included studies of any design that assessed user experience with an eHealth treatment or prevention intervention designed for children or adolescents (aged ≤19 years). Studies with a sample that contained young adults could be included in the review if the mean age of participants was reported to be ≤19 years. eHealth interventions could target any health condition but needed to be web-, computer-, or mobile-based, mediated by the internet, and include some degree of interactivity. Studies of telehealth interventions were not included. Studies could assess the eHealth user experience of children, adolescents, or parents. Given the wide range of pre-existing definitions and measurement approaches used to evaluate eHealth intervention user experience, multiple domains could be included in the user experience evaluation, such as cognitive factors (ie, beliefs, attitudes, and intentions, such as satisfaction with and acceptability of the intervention) or behavioral factors (ie, how the intervention was used, such as self-reported adherence to and engagement with the intervention). To be included, studies needed to report the measurement of user experience as first-person experiences reported by eHealth intervention users (parents, children, and adolescents). Studies that only reported indirect user data, which do not reflect the user’s subjective experience (eg, proxy reports by a parent whose child used a program or intervention metadata such as the number of sessions completed), were excluded. Studies also needed to detail the evaluation measures (eg, tool, instrument, or interview questions) used to collect user experience data so that we could identify how user experience was defined and measured. Studies that did not detail evaluation questions but referenced an original publication of the evaluation measure were included if we could obtain the referenced publication to extract information.

Screening for Article Eligibility

We organized and screened identified studies using EndNote (Clarivate Analytics) bibliographic management software. In pairs, 3 reviewers (authors NDG and AKR and review contributor Marcus O’Neill) independently screened the titles and abstracts of articles, classifying each as relevant, irrelevant, or unclear using the predetermined inclusion and exclusion criteria. To assess how consistently the reviewers applied the criteria during screening, we calculated interrater agreement for the screening outcomes of the first 100 articles using the κ statistic [21]. The agreement was substantial (κ=0.61). Because we wanted interrater agreement to be ≥0.80, indicating almost perfect agreement [22], the reviewers met to review the screening criteria alongside the articles on which they disagreed and sought consensus on the screening outcome for each article. Agreement increased to almost perfect (κ=0.81) for the next set of 100 articles, so screening progressed. At this point, the reviewers divided the remaining articles to be screened. Articles screened as relevant or unclear underwent independent screening and discussion by each reviewer pair to determine study inclusion or exclusion. The reviewers contacted the primary authors of 9 articles when additional information was needed to determine eligibility. The reviewers documented the articles that were excluded after full-text review to ensure transparency and replicability.
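For readers who wish to reproduce this type of agreement check, the following is a minimal Python sketch of Cohen’s κ for 2 screeners; the labels mirror the relevant/irrelevant/unclear scheme described above, but the example data are illustrative only and do not correspond to the review’s screening records.

```python
# Minimal sketch of Cohen's kappa for two reviewers' screening decisions.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical labels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical screening outcomes for a batch of 100 title/abstract records.
rater_1 = ["relevant"] * 40 + ["irrelevant"] * 50 + ["unclear"] * 10
rater_2 = ["relevant"] * 35 + ["irrelevant"] * 55 + ["unclear"] * 10
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.91 for this example
```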

Data Extraction

Process

Data were extracted into a standardized spreadsheet. The spreadsheet underwent pilot testing with 3 independent reviewers (authors NDG and AKR and review contributor MO) who extracted data from the first 5 included studies to ensure that the spreadsheet was adequate in scope and that consensus was achieved on data categorization. Subsequently, each reviewer extracted data from one-third of the remaining included studies. Each reviewer verified the accuracy and completeness of the other reviewers’ respective thirds. Data extraction discrepancies were resolved through consensus and third-party consultation (author ASN). Corresponding authors were contacted when reporting was unclear or details were lacking in the article.

Data Extracted for Analysis

We extracted the following information from the studies (a structured sketch of these fields is shown after the list):

1. General information on participants (age [range] and intervention [user or respondent]) and the eHealth intervention (name, mode of delivery, target population or health condition and duration or frequency).

2. How first-person user experience was defined in studies, which included looking for definitions and terms of user-reported experiences as well as extracting the individual questions used to measure user experience. We then compared author-reported user experience definitions with a priori definitions, and the identified measures were categorized into 6 domains: satisfaction, acceptability, credibility, impact, adherence, and use. The domains were based on a preliminary literature review of user experiences in eHealth studies conducted by one of the authors [23]. All tools fit into one or more of the 6 domains. The original working definitions for the 6 domains are presented in Multimedia Appendix 3.

3. Major characteristics of the evaluation measures used to assess user experience, including its purpose and scope, delivery time points, type of respondent (child, adolescent, or parent), administration approach (web-, telephone-, or paper-based or face-to-face interview), the number of items and item-response format (eg, Likert scale or open-ended questions), and any notations by study authors regarding limitations of the evaluation measures and recommendations for future measurement or evaluation.

4. Information on the measure’s psychometrics was extracted, if available, including information on measure validity (face, content, construct, or criterion), reliability (internal consistency, interrater, or test-retest), and findings from a factor analysis. In studies where an evaluation measure was referenced, the original reference was reviewed and psychometric data were extracted, if available.
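To make the extraction fields listed above concrete, the sketch below shows one way such records could be represented programmatically. The field names are our own illustration and do not reproduce the authors’ actual extraction spreadsheet.

```python
# Minimal sketch of a structured extraction record covering the fields
# described in the list above. Field names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

A_PRIORI_DOMAINS = {"satisfaction", "acceptability", "credibility",
                    "impact", "adherence", "use"}

@dataclass
class ExtractionRecord:
    study_id: str
    participant_age_range: str             # eg, "10-19 years"
    intervention_name: str
    delivery_mode: str                     # web-, computer-, or mobile-based
    measure_name: str
    domains: List[str]                     # one or more of the 6 a priori domains
    n_items: Optional[int] = None
    response_format: Optional[str] = None  # eg, "5-point Likert"
    validity: Optional[str] = None         # face, content, construct, or criterion
    reliability: Optional[str] = None      # internal consistency, interrater, test-retest

    def __post_init__(self):
        unknown = set(self.domains) - A_PRIORI_DOMAINS
        if unknown:
            raise ValueError(f"unrecognized domains: {sorted(unknown)}")

# Hypothetical example record.
record = ExtractionRecord(
    study_id="example2018", participant_age_range="10-19 years",
    intervention_name="ExampleProgram", delivery_mode="web-based",
    measure_name="author-developed questionnaire",
    domains=["satisfaction", "acceptability"],
    n_items=12, response_format="5-point Likert",
)
```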

Quality Assessment

Two independent reviewers (NDG and AKR) assessed the quality of the evaluation measures reported in the studies and met to resolve discrepancies through consensus. The reviewers used 3 criteria developed by the Society for Pediatric Psychology Assessment Task Force [24]. The first criterion was the availability of sufficient detail on the instrument or measure to allow evaluation and replication; this involved the reviewers confirming whether a measure was available for review in the published or gray literature (study authors could also have included their measure as supplementary material to their publication). The second criterion concerned the availability of reliability and validity data for the instrument or measure, which could include psychometric data (eg, for surveys or rating scales) and data from pilot testing (eg, face and content validity or interview guide reliability for author-developed interviews). The third criterion, use of the instrument or measure by multiple independent investigative teams as described in peer-reviewed articles, required that the measure or tool (including a qualitative interview guide) had been used by more than one group.

Using the abovementioned criteria, we classified measures into 4 categories: well-established, approaching well-established, promising, or not yet established. We rated a measure as well-established if we could identify 2 peer-reviewed articles with very good detail of the measure and good-to-strong or excellent published information on both validity and reliability. We rated a measure as approaching well-established if we could identify 2 peer-reviewed articles with very good detail of the measure but with published information on validity or reliability either missing or presented with vague or poor-to-moderate values. We rated a measure as promising if we could identify 1 peer-reviewed article with sufficient detail of the measure (eg, some, but not all, of the questions present) and with published information on validity or reliability either missing or presented with vague or poor-to-moderate values. Although not included in the original task force rating scheme, we rated a measure as not yet established if we could identify 1 peer-reviewed article with sufficient detail of the measure but no published validity or reliability information. The quality of validity and reliability data was interpreted using a guide presented by Phillips et al [25] (Multimedia Appendix 4).
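Viewed as a decision rule, this rating scheme can be summarized compactly in code. The sketch below is our simplified reading of the criteria described above, not the task force’s official algorithm; the input encodings are assumptions made for illustration.

```python
# Minimal sketch of the 4-category rating scheme described above.
# Simplified inputs (our assumptions): detail is "very good" or "sufficient";
# psychometrics is "good" (good-to-strong or excellent validity AND reliability),
# "partial" (missing, vague, or poor-to-moderate), or "none".
def rate_measure(n_peer_reviewed_articles: int, detail: str, psychometrics: str) -> str:
    if n_peer_reviewed_articles >= 2 and detail == "very good":
        if psychometrics == "good":
            return "well-established"
        return "approaching well-established"
    if n_peer_reviewed_articles >= 1 and psychometrics == "partial":
        return "promising"
    return "not yet established"

print(rate_measure(2, "very good", "good"))   # well-established
print(rate_measure(1, "sufficient", "none"))  # not yet established
```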

Data Analysis

Evidence tables and a bar graph were developed to aggregate findings into descriptive and thematic summaries [5]. The descriptive summaries include information on study, participant, and eHealth intervention characteristics, user experience measures, and related psychometric statistics. The thematic summaries group measures according to the quality assessment categories: well-established, approaching well-established, promising, or not yet established.

Delphi Consultation Process

The Delphi consultation phase of the scoping review was a stepwise process involving multiple, structured rounds of surveys to gain consensus [26-28] on how user experience should be defined and measured.

Participants

We sought input from 2 groups: researchers who had published the eHealth intervention user experience evaluations included in our review, and adolescents (aged 16-18 years) currently using an eHealth intervention.

We identified and contacted researcher participants using the published email contact information of lead or corresponding authors of studies included in the scoping review. We used snowball sampling so that contacted authors could also recommend colleagues with relevant expertise who may be interested in participating [26]. Each potential participant received an email invitation to participate along with an information sheet describing the Delphi consultation; those who completed the survey were considered to have given implied consent. The Delphi consultation with researchers was held over 7 months between September 2019 and March 2020.

For practical and feasibility reasons, we recruited adolescent participants among current users of an evidence-based eHealth intervention, the web-based BRAVE Self-Help program [12,29], who had previously consented to be contacted for future research studies. Recruitment involved a pop-up invitation that appeared when users logged into the BRAVE Self-Help program throughout a 6-month period from June 7, 2019, to December 10, 2019. Interested participants were directed to an external site and invited to read a separate information sheet regarding the study and provide informed consent to participate in the Delphi consultation. After providing consent on the web, participants completed a survey that was used for the Delphi consultation.

Process

The process that we followed with each participant group along with the response and participation rates is presented in Figure 1. Broadly, each survey included questions that sought consensus on participants’ opinions on the importance of the user experience domains and definitions used in the scoping review (Multimedia Appendix 3), additional domains for consideration, and the appropriateness of measures used across eHealth studies of user experience. Consensus on responses to each survey question was defined as having ≥80% agreement [26].

Figure 1. Delphi consultation process. REDCap: Research Electronic Data Capture.
Survey Development and Scaling

Our team pilot-tested the surveys used in round 1 for both participant groups. We reviewed each survey for face validity, clarity, cohesiveness, flow, and completion time and made changes as needed (eg, modifying the wording of questions and changing the order of questions).

Researcher participants were first asked questions regarding their demography (age, gender, professional position, country of residence, and experience in developing and measuring user experience), followed by a series of questions on their level of agreement with the preliminary definitions of the proposed user experience domains (satisfaction, acceptability, credibility, impact, adherence, and use); the initial definitions were based on the preliminary literature review that informed this scoping review. We used a 6-point Likert scale ranging from strongly agree to strongly disagree to measure participants’ level of agreement. In round 1, we also included an open text box for respondents to support their answers and to provide suggestions and comments regarding the proposed definitions of the identified domains and any domains not identified in the survey. For proposed domains that could be measured at multiple time points, we also asked participants to indicate their preferred timing of measurement (before, during, or after the intervention). Researcher participants were then asked to rank the importance of the domains relative to one another using a 6-point scale ranging from most important to least important. In both rounds, participants were also asked to indicate how important it is for studies to measure user experience. We had originally intended to also survey participants on the appropriateness of the user experience measures used across eHealth studies, as well as on findings from the review itself, but we realized that the definitions would need to be determined before this could occur.

Adolescent participants were asked questions regarding their demography (age, gender, and frequency of using web-based health interventions), followed by questions on how important it is for researchers to ask them about each of the 6 user experience domains when using an eHealth intervention. Given the complexity and technical nature of the domains, adolescents were provided with lay descriptions of each domain and asked to rate how important they felt each domain was, rather than commenting on the definition (as we did with the expert researcher sample). In this way, adolescents were able to provide input on the domains that were most important from their perspective. We used a 6-point Likert scale ranging from extremely important to not at all important to measure these ratings and also included an open text box in round 1 so that participants could identify reasons why they felt a particular domain was or was not important. Adolescent participants were then asked to rank the importance of the domains relative to one another from most important to least important. In round 1, an open text box was also available for adolescents to add any other comments on the presented domains and any other aspects of user experience they thought were important or missing.

Consultation Rounds

We conducted independent Delphi consultations with each participant group—2 rounds with researcher participants and 2 rounds with adolescent participants—using a survey tailored to each group. The same participants participated in both rounds as the intent was to gain consensus (agreement) within the 2 participant groups. Given the nature of the questions being asked, we felt that 2 rounds were sufficient to capture participants’ opinions. In round 2, we presented the summary of responses for survey questions where consensus was not achieved in round 1 or where comments and feedback for a survey question necessitated further clarification. This approach allowed each participant in round 2 to express their opinion after observing and reflecting on the opinions of other researchers or adolescents. The purpose of this approach was to decrease the degree of dispersion or increase the degree of consensus in participants’ answers. The consultation exercise concluded after 2 rounds irrespective of whether consensus was reached on all survey items.

We used REDCap (Research Electronic Data Capture), a secure web-based platform [30], to administer the electronic surveys to researcher participants and the University of Southern Queensland Survey Tool, which is based on LimeSurvey and hosted on secure University of Southern Queensland servers, for adolescent participants. Completing the survey in each round took 15-30 minutes for researchers and 10-15 minutes for adolescents. To maximize the response rate for each round, nonrespondents were sent an email reminder every 7 days until 3 contact attempts had been made. Participants answered questions anonymously so that individual opinions did not influence other participants’ opinions [26]. Adolescent participants who completed the surveys could enter a draw to win 1 of 10 vouchers valued at Aus $40 (US $28.92), drawn at completion of the study. Researchers who completed the surveys could enter a draw to win a CAD $150 (US $118.14) electronic gift card.

Data Analysis

In round 1, we calculated the response rate for researcher participants only, as we were not able to determine the number of adolescent invitees from the open invitation to participate. In round 2, we calculated the participation rate for both researchers and adolescents using the denominator from round 1. For both rounds, we generated descriptive statistics (frequencies and percentages) to determine the level of agreement (consensus) among participants for each survey question. This involved grouping the responses at each end of the Likert scales (eg, grouping strongly agree with agree and disagree with strongly disagree). We also reviewed the responses to the middle Likert categories (eg, slightly agree and slightly disagree) to identify the range of opinions. Among researcher participants, following round 1, we collated the open-text answers and feedback on definitions and used this text to revise the definitions for the domains of user experience. We revised each domain definition using suggestions from participants irrespective of whether consensus was reached on the definition in round 1; the rationale was that the suggestions added critical details and improvements to each definition. The revised definitions were then presented for consensus in round 2, during which participants were asked to indicate their level of agreement with the revised definitions. We used IBM SPSS (version 26) for all analyses.
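As an illustration, the consensus calculation reduces to grouping the endpoint categories and comparing the resulting share with the 80% threshold. The sketch below uses illustrative responses, not study data.

```python
# Minimal sketch of the consensus check: group the endpoint categories of a
# 6-point agreement scale and test the >=80% threshold.
TOP_GROUP = {"strongly agree", "agree"}

def agreement_share(responses, group=TOP_GROUP):
    """Fraction of responses falling within the grouped categories."""
    return sum(r in group for r in responses) / len(responses)

responses = ["strongly agree"] * 18 + ["agree"] * 7 + ["slightly disagree"] * 5
share = agreement_share(responses)
print(f"{share:.0%} agreement; consensus reached: {share >= 0.80}")  # 83%; True
```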


Results

Literature Search and Selection

The search strategy identified 8634 unique citations. Of these citations, 1087 were considered potentially relevant based on their title and abstract (Figure 2). After full-text review, 129 articles and 1 erratum met inclusion criteria and were included in the review.

Figure 2. Literature search flow diagram.

Description of Included Studies

A summary of the general characteristics of 129 eHealth studies that evaluated user experience is presented in Table 1. Most studies were published in 2018 (30/129, 23.3%), 2017 (25/129, 19.4%), and 2015 (24/129, 18.6%). The most commonly measured user experience domains were acceptability, usability, and satisfaction; this trend occurred across years of publication (Figure 3). Additional details for each study are presented in Multimedia Appendix 5 [7-14,31-152], grouped by year of publication to examine trends in the domains measured over time.

Table 1. Summary of the eHealth studies that measured user experience (N=129). Values are the number (%) of studies within the scoping review.

eHealth user^a
    Children aged ≤9 years: 26 (20.2)
    Adolescents aged 10-19 years: 118 (91.5)
    Young adults up to 24 years: 25 (19.4)
    Parents: 33 (25.6)

Type of eHealth intervention
    Web-based: 85 (65.9)
    Mobile-based: 44 (34.1)
    Tablet-based: 11 (8.5)

User experience domain that was measured
    Satisfaction: 86 (66.7)
    Acceptability: 77 (59.7)
    Credibility: 17 (13.2)
    Perceived impact: 66 (51.2)
    User-reported adherence: 11 (8.5)
    Usability: 74 (57.4)

^aAge categories defined using World Health Organization definitions [153].

Figure 3. The type and frequency of user experience domains measured across eHealth studies over time. The domains named in this figure reflect the agreed-on terminology that resulted from the Delphi consultation with researchers.

User Experience Measures

Research teams used 128 unique user experience evaluation measures across the 129 published studies included in this review. Details of the quality assessment outcomes using the Society for Pediatric Psychology Assessment Task Force [24] criteria are provided in Multimedia Appendix 6 [7-14,31-111,113-121,123-152,154,155]. Of the 128 measures, 10 (7.8%) were assessed as well-established (Table 2) and 5 (3.9%) as approaching well-established (Table 3). These measures were primarily used in research studies to capture user experiences related to satisfaction with and acceptability of eHealth interventions, program usability, and the perceived impact of an intervention. Measures assessed as promising (13/128, 10.2%) are presented in Multimedia Appendix 7 [75,90,97,99,107,109,132,136,139,152]. The remaining 100 measures were assessed as not yet established and primarily represented studies in which author-developed evaluation questions were used. Information on the measures assessed as not yet established is available upon request from the corresponding author.

Table 2. Well-established evaluation measures of user experience.

SUS^a; usability
    Format and administration: 10 items; 4- and 5-point Likert scales; paper- and web-based and telephonic administration
    Validity: two-factor scale [156]; usable (8 items), α=.91; learnable (2 items), α=.70; overall α=.92
    Reliability: internal consistency; 10 years of SUS samples: α=.91 [157]; eHealth study sample: α=.86 [31], α=.95 [32]
    eHealth studies: [14,31-39]

SUS (Portuguese version); usability
    Format and administration: 10 items; 5-point Likert scale; paper-based administration
    Validity: construct validity with the PSSUQ^b [158]: r=0.70
    Reliability: interrater reliability; Portuguese validation sample [158]: intraclass correlation coefficient=0.36 with modest agreement between ratings (76.67%)
    eHealth studies: [40]

Client Satisfaction Questionnaire 8; acceptability, satisfaction, and usability
    Format and administration: 8 items; 4-point Likert scale; paper- and web-based administration
    Validity: criterion-related validity; other measures of satisfaction [159]: r=0.60-0.80
    Reliability: internal consistency across 9 studies [159]: α=.83-.93; eHealth study sample: α=.92 [41]
    eHealth studies: [41-44]

CEQ^c; credibility, perceived impact, and satisfaction
    Format and administration: 6 items; 9-point Likert scale and 0%-100% scale; administration not reported
    Validity: two-factor scale [160]: expectancy (3 items), eigenvalue=3.42; credibility (3 items), eigenvalue=1.53; 2 factors accounted for 82.46% of the total variance
    Reliability: internal consistency; CEQ validation across 3 studies [160]: expectancy, α=.79-.90; credibility, α=.81-.86; overall, α=.84-.85; test-retest reliability, CEQ validation across 3 studies [160]: expectancy, α=.82; credibility, α=.75
    eHealth studies: [41]

GEQ^d; acceptability and satisfaction
    Format and administration: 33 items; 5-point Likert scale; web-based administration
    Validity: five-factor scale; GEQ validation study [161]: factor 1 (5 items); factor 2 (7 items); factor 3 (4 items); factor 4 (5 items); factor 5 (4 items); all correlation coefficients >0.30
    Reliability: internal consistency; GEQ development sample [162]: α=.81
    eHealth studies: [7,10]

PSSUQ, also known as the Computer Systems Usability Questionnaire^e; acceptability, perceived impact, satisfaction, and usability
    Format and administration: 19 items; 7-point Likert scale; web-based administration
    Validity: three-factor scale; correlation coefficients on 5 years of PSSUQ samples [154]: system usefulness and informational quality, r=0.70; system usefulness and interface quality, r=0.70; informational quality and interface quality, r=0.60; factor scores shared 36% to 50% of the variance
    Reliability: internal consistency; 5 years of PSSUQ samples [154]: system usefulness, α=.96; informational quality, α=.92; interface quality, α=.83; overall, α=.96
    eHealth studies: [45,46]

SSS^f; satisfaction
    Format and administration: 5 items; 4-point Likert scale and open ended; telephonic administration
    Validity: one-factor scale; factor loadings^g [163]: Youth version, α=.77-.90; Parent version, α=.71-.83
    Reliability: internal consistency; SSS development samples [163]: youth, α=.86; parent, α=.85; eHealth study sample across different time points [47]: α=.77-.95
    eHealth studies: [47]

TEI-SF^h; acceptability, perceived impact, and satisfaction
    Format and administration: 9 items; 5-point Likert scale; paper-based administration
    Validity: two-factor scale [164]: acceptability (8 items), α=.49-.93, 57% of total item variance; discomfort (1 item), α=.82, 12% of total item variance
    Reliability: internal consistency; TEI-SF development samples: α=.94 [165], α=.85 [164]
    eHealth studies: [48]

USE^i questionnaire; acceptability, satisfaction, and usability
    Format and administration: 19 items (4 subscales); 7-point Likert scale; web-based administration
    Validity: criterion-related validity [166]; compared with SUS (2 evaluations); usefulness: r1=0.60, r2=0.69; ease of learning: r1=0.71, r2=0.78; ease of use: r1=0.78, r2=0.81; satisfaction: r1=0.66, r2=0.71
    Reliability: internal consistency; USE development sample [166]: α=.98; eHealth study sample [32]: usefulness, α=.90; ease of learning, α=.98; ease of use, α=.95; satisfaction, α=.96
    eHealth studies: [32]

WAI-SR^j; credibility and perceived impact
    Format and administration: 12 items; 5-point Likert scale; web-based administration
    Validity: three-factor scale; correlation with WAI-SR within 2 samples (S1 and S2) [167]: goal (4 items), α=.89 (S1), α=.87 (S2); task (4 items), α=.87 (S1), α=.90 (S2); bond (4 items), α=.86 (S1), α=.84 (S2); overall, α=.95 (S1), α=.94 (S2)
    Reliability: internal consistency; WAI-SR development within 2 samples [167]: goal, α=.87 (S1), α=.85 (S2); task, α=.85 (S1), α=.87 (S2); bond, α=.90 (S1), α=.85 (S2); overall, α=.91 (S1), α=.92 (S2); eHealth study sample [41]: α=.95
    eHealth studies: [41,49]

^aSUS: System Usability Scale.

^bPSSUQ: Poststudy System Usability Questionnaire.

^cCEQ: Credibility Expectancy Questionnaire.

^dGEQ: Game Experience Questionnaire.

^eThe Computer Systems Usability Questionnaire and PSSUQ are the same questionnaire; the only difference is that the Computer Systems Usability Questionnaire wording is appropriate for use in field settings or surveys rather than in a scenario-based usability evaluation [154,155].

^fSSS: Satisfaction with Services Scale.

^gFactor loading: correlation coefficient for the variable and factor.

^hTEI-SF: Treatment Evaluation Inventory-Short Form.

^iUSE: Usefulness, Satisfaction, and Ease of use.

^jWAI-SR: Working Alliance Inventory-Short Revised.

Table 3. Evaluation measures assessed to be approaching well-established.

Client Satisfaction Scale; perceived impact and satisfaction
    Format and administration: 10 items; 5-point Likert scale; administration not reported
    Validity: not reported
    Reliability: internal consistency; eHealth study sample [50]: Child scale, α=.75; Parent scale, α=.85
    eHealth studies: [50-52]

Standardized SUMI^a; acceptability, satisfaction, and usability
    Format and administration: 55 items; 3-point Likert scale and open ended; paper-based administration
    Validity: not reported
    Reliability: internal consistency; SUMI development sample [168]: global subscale, α=.92; efficiency subscale, α=.81; affect subscale, α=.85; helpfulness subscale, α=.83; control subscale, α=.71; learnability subscale, α=.82
    eHealth studies: [53]

WAMMI^b; acceptability and usability
    Format and administration: 20 items; 5-point Likert scale; administration not reported
    Validity: not reported
    Reliability: internal consistency; WAMMI development sample [169]: α=.96
    eHealth studies: [54]

Author-adapted TEI-SF^c; acceptability and satisfaction
    Format and administration: 11 items; 5-point Likert scale; paper-based administration
    Validity: not reported for the author adaptation of the TEI-SF
    Reliability: internal consistency; eHealth study sample [55]: Child scale, α=.82; Parent scale, α=.81
    eHealth studies: [55,56]

Author-developed questionnaire; acceptability and perceived impact
    Format and administration: 7 items; 5-point Likert scale; web-based administration
    Validity: one-factor scale [57]: 69% of total item variance
    Reliability: internal consistency; eHealth study sample [57]: α=.94
    eHealth studies: [57]

Author-developed questionnaire; perceived impact and usability
    Format and administration: >14 items; 5- and 10-point Likert scales; administration not reported
    Validity: not reported
    Reliability: internal consistency; eHealth study sample [58]: perceived benefits of intervention content, α=.92; perceived benefits of the interpersonal principles in the intervention, α=.85; ease of use, α=.94; ease of understanding, α=.96; ease of reading, α=.97; internal rationale, α=.96; identification/relevance, α=.96
    eHealth studies: [58,59]

Author-developed questionnaire; perceived impact, satisfaction, and usability
    Format and administration: 7 items; 4-point Likert scale; administration not reported
    Validity: two-factor scale: utility of program (5 items), 45% of total item variance; user friendliness of program (2 items), 21% of total item variance
    Reliability: internal consistency; eHealth study sample [60]: user friendliness, α=.71; utility items, α=.84
    eHealth studies: [60]

Author-developed questionnaire; acceptability, satisfaction, and perceived impact
    Format and administration: 17 items; scale type not reported; administration not reported
    Validity: three-factor scale; eHealth study sample [61]: program evaluation (7 items); program benefits (7 items); overall satisfaction (3 items); subscales ranged from 1 (negative evaluation) to 10 (positive evaluation)
    Reliability: internal consistency; eHealth study sample [61]: subscales, α=.90 or higher
    eHealth studies: [61]

^aSUMI: Software Usability Measurement Inventory.

^bWAMMI: Website Analysis and Measurement Inventory.

^cTEI-SF: Treatment Evaluation Inventory-Short Form.

Delphi Consultation

Overview

In round 1, 30 researchers and 27 adolescents participated. In round 2, participation decreased to 23 researchers (23/30, 77% participation rate) and 6 adolescents (6/27, 22% participation rate; Figure 1). The demographic characteristics of the researcher and adolescent participants are presented in Table 4. Researcher participants were mainly women employed in academic positions, and all had used a measure of user experience in their work. Most adolescent participants were female, and most had limited prior experience using eHealth interventions (despite being currently enrolled in an eHealth intervention for anxiety).

Table 4. Demographic information about the participants. Values are shown as round 1 | round 2.

Researcher participants, n (%): 30 (100) | 23 (77)
    Age (years), mean (SD): 42.6 (1.4) | 43.7 (1.7)
    Sex, n (%)
        Female: 26 (86.7) | 20 (87)
        Male: 4 (13.3) | 3 (13)
    Primary role or position, n (%)
        Academic (professor and lecturer): 19 (63.3) | 13 (56.5)
        Scientist (researcher and research fellow): 5 (16.7) | 5 (21.7)
        Clinician: 4 (13.3) | 4 (17.4)
        Trainee (PhD candidate and postdoctoral fellow): 2 (6.7) | 1 (4.3)
    Country, n (%)
        Australia: 4 (12.9) | 3 (13)
        Canada: 3 (9.7) | 2 (8.7)
        Finland: 1 (3.2) | 1 (4.3)
        Ireland: 2 (6.5) | 1 (4.3)
        Italy: 1 (3.2) | 1 (4.3)
        Korea: 1 (3.2) | 1 (4.3)
        New Zealand: 4 (12.9) | 3 (13)
        Sweden: 2 (6.5) | 2 (8.7)
        United Kingdom: 1 (3.2) | 1 (4.3)
        United States: 11 (35.5) | 8 (34.8)
    Has measured user experience, n (%): 30 (100) | 23 (100)
    Has developed a user experience measure, n (%): 7 (23.3) | 6 (26.1)

Adolescent participants, n (%): 27 (100) | 6 (22)
    Age (years), mean (SD): 16.44 (0.6) | N/A^a,b
    Sex, n (%)
        Female: 20 (74.1) | N/A
        Male: 5 (18.5) | N/A
        Other: 2 (7.4) | N/A
    Use of eHealth programs, n (%)
        Never used until the day of the survey: 14 (51.9) | N/A
        <once per week: 7 (25.9) | N/A
        1-2 times per week: 4 (14.8) | N/A
        3-4 times per week: 0 (0) | N/A
        5-6 times per week: 1 (3.7) | N/A
        ≥7 times per week: 1 (3.7) | N/A

^aN/A: not applicable.

^bDemographics for adolescent participants in round 2 were not collected to ensure anonymity as per the research ethics board’s requirements.

Researcher Participants

Over 2 rounds of consultation, researcher participants made several suggestions for refining the original definition of each user experience domain. The revised definitions achieved by round 2 are presented in Table 5 along with the consensus scores for each definition (the percentage of participants who strongly agreed or agreed with the definition). Researchers met or surpassed the threshold for agreement on all definitions except that for perceived impact. Regarding the importance of each domain to the overall assessment of user experience, in round 1 researchers achieved consensus in ranking credibility as less important to measure relative to the other 5 domains (80%; Table 6); no other rankings achieved consensus in this round. Perceived impact was ranked in both rounds as more important than the other domains, but consensus (≥80% agreement) was not achieved. Regarding when user experience should be assessed, responses differed across domains. By round 2, researchers agreed that eHealth intervention acceptability (87% consensus) and satisfaction (97% consensus) should be measured after intervention completion. Researchers were divided on when credibility should be measured, with similar proportions indicating that credibility should be measured at each of the 3 time points (before the intervention: 61%; during the intervention: 65%; and after the intervention: 61%).

Table 5. Working definitions of user experience domains developed with researcher participants.^a

Acceptability (definition consensus: 100%). Acceptability refers to whether the intervention content, features, and delivery meet user expectations (eg, relevance, convenience, accessibility, feasibility, appropriateness, appeal [fun, interesting, and likable], value, engaging, and privacy). These aspects may be different depending on the user (ie, child, adolescent, or parent).

Satisfaction (definition consensus: 96%). Satisfaction refers to the user’s overall impression of the intervention and whether it meets their needs (eg, global satisfaction rating, value for money or time, helpfulness, whether they would use it again or recommend it to a friend, and the ratio between expectations and results).

Credibility (definition consensus: 96%). Credibility refers to the extent to which the user perceives the intervention to be trustworthy and to have the potential to work (eg, perceived accuracy and quality of information in the intervention and/or evidence base supporting the intervention).

Usability^b (definition consensus: 87%). Usability refers to the user’s perceived ease of use of the intervention based on technical factors (eg, interface/equipment/reminder features or problems) and environmental/personal factors (eg, content, time, and competing priorities) that impact the individual’s use of the intervention (eg, frequency).

User-reported adherence^c (definition consensus: 83%). User-reported adherence refers to how and why the user did or did not follow the intervention or research protocol (eg, completing outcome measures and content) as recommended.^d

Perceived impact (definition consensus: 78%). Perceived impact refers to the extent to which the user perceives the effect of the intervention’s impacts (eg, impressions of change in symptom levels and skills and perception of overall effectiveness).

^aItalics represent additions or changes to the original definition. Domains are listed in descending order of consensus.

^bUsability should also be measured in conjunction with objective measures of use (ie, intervention metadata).

^cUser-reported adherence should also be measured in conjunction with objective measures of adherence (ie, intervention metadata) and clinician expectations and adherence to the protocol (if relevant).

^dContent removed from the definition.

Table 6. The relative rankings among researcher participants for the importance of the user experience domains across 2 rounds of consultation. Values are the percentage of participants ranking each domain as more important / less important.

Acceptability: round 1, 73/27; round 2, 48/52
Satisfaction: round 1, 43/57; round 2, 30/70
Credibility: round 1, 20/80^a
Usability: round 1, 47/53; round 2, 26/74
User-reported adherence: round 1, 47/53; round 2, 26/74
Perceived impact: round 1, 70/30; round 2, 65/35

^aRound 2 not conducted as consensus was achieved in round 1.

Researcher participants’ opinions varied greatly on whether a universal measure of user experience was needed to enable direct comparisons between studies of eHealth interventions. By round 2, a total of 70% (16/23) stated that a universal measure was extremely or quite important, 22% (5/23) stated that it was slightly important, and 9% (2/23) stated it was slightly unimportant. In round 1, a total of 37% (11/30) of the participants also provided comments. Most stated that although a universal measure may be impractical given variability across users’ developmental stage, technology types, and language, a core set of user experience items would be a valuable addition to the eHealth field. Participants suggested that other measures could be added to this core set to obtain intervention-specific feedback as needed. Moreover, 1 participant pointed out that accepted definitions of user experience domains are also important to ensure that the domains are not used differently from study to study even if different measures are used to assess them.

Adolescent Participants

Adolescents were asked to indicate which user experience domains were important for a researcher to ask them about. In round 1, adolescents reached consensus that acceptability was important for a researcher to ask about (81% consensus); satisfaction and perceived impact were also indicated as important, but did not reach consensus in round 1 (78% consensus for both). After the second round, adolescents achieved consensus on satisfaction, credibility, and perceived impact (100% consensus) and usability (83% consensus) as important domains for researchers to ask about when measuring adolescents’ user experience. After the 2 survey rounds, adolescents remained divided on the importance of measuring user-reported adherence, with 50% (3/6) of the participants rating it as more important and 50% (3/6) rating it as less important.

When asked to rank the domains in order of importance relative to one another, perceived impact was considered the most important domain to measure (83% consensus) based on 2 rounds of consultation. Credibility and acceptability were also ranked highly and were considered more important than the other domains, but they did not meet the 80% threshold for consensus. User-reported adherence was ranked the least important domain relative to the others (83% consensus). Usability (76%) and satisfaction (67%) were also ranked as less important to measure, although consensus was not reached for these 2 domains.


Discussion

Summary of Principal Findings

To our knowledge, this is the first study to review how eHealth intervention studies have defined and measured user experience, consult with experts (researchers) on how user experience could be defined and measured, and consult with users (adolescents) on which domains they consider important for examining their user experience. In the scoping review, we made 2 important discoveries: (1) several well-established measures are available to quantify user experiences, and (2) a large proportion of published eHealth studies did not use a well-established measure, with authors instead developing user experience questions specific to their eHealth intervention. Key findings from our Delphi consultation are the areas of alignment and divergence between researcher and adolescent relative rankings of the user experience domains and the refinement of definitions for the 6 proposed domains of user experience.

Discussion of Principal Findings

We identified 10 well-established measures available in the current literature to quantify 5 of the 6 proposed domains of user experience (satisfaction, acceptability, credibility, perceived impact, and usability). Therefore, we recommend that eHealth researchers use an available, well-established measure instead of developing their own measure to assess these 5 user experience domains. This approach will allow for between-study comparisons of user experiences, including whether and how experiences with similar eHealth interventions are the same or different among users. Such comparisons could not only help parents and their children decide which eHealth intervention to use but also allow researchers to identify and understand the relationship between user experience and other eHealth outcomes (ie, intervention effectiveness and program adoption or use).

To date, no well-established measure exists for studying user-reported adherence. This is not surprising, as adherence is typically measured using intervention metadata, such as the number of sessions completed and time spent per session. We propose that, in addition to such objective metrics, researchers measure how and why a user did or did not follow the intervention or research protocol (eg, completing outcome measures and content) as recommended or as encouraged by intervention design. This subjective information can expand the understanding of metadata, for example, why the user completed the number of sessions or outcome measures that they did. These data could be collected using a set of open-ended questions designed to be used across studies and tailored, when needed, to specific interventions, studies, or user profiles.

Definitions of the 6 domains of user experience that resulted from our international consultation with eHealth researchers offer a guidepost for new studies of user experience. Although the proposed definitions may continue to be refined as the eHealth field advances, we see them as an important cornerstone of user experience measurement. A set of commonly accepted domains, similar to the efforts to define, standardize, and measure eHealth adherence [16,17] and engagement [170,171], can introduce a taxonomy that can be applied across eHealth studies even when different populations, interventions, and measures are used. However, it remains unclear whether a universal measure of user experience would be useful for researchers. Although 16 of the 23 researchers in this study responded that a universal measure is important, several concerns were raised regarding the challenges of implementing such an approach. As a follow-up to the results reported here, further investigation of the utility and feasibility of a core set of items, and of what this core set should be, is needed. Consideration may also need to be given to how measures can be adapted for different respondents (eg, parents, adolescents, and clinicians) or different intervention contexts (eg, open access vs therapist supported). For example, the findings of this study showed that perceived impact was rated as important by both researchers and adolescents; however, it is entirely possible that each would describe the desired impact of an intervention in different ways. Further investigation is necessary to examine how these constructs can best be assessed across different respondents and contexts.

The Delphi consultations we conducted provide insight into the value that researchers and adolescents place on the different domains of a user’s experience with an eHealth intervention. Both adolescent and researcher respondents ranked perceived impact as one of the most important aspects of a user’s eHealth experience, indicating that whether the user perceives the intervention to have had an impact on their health is a central component of user experience. There were also divergent perspectives, one of the most notable being researchers’ ranking of satisfaction among the most important domains compared with adolescents’ ranking of it as less important. At the core of the definition of satisfaction is an emphasis on whether an intervention meets a user’s expectations and needs. From the adolescent point of view, the definition of perceived impact, which focuses on impressions of change in symptom levels and skills or on perceptions of overall effectiveness, may have been a more meaningful way to identify whether their needs would be met, as perceived impact directly links to an observable or tangible change in their symptoms or behavior resulting from the eHealth program. The ranking of credibility also differed between researchers (lower ranking) and adolescents (higher ranking). With the choice of eHealth interventions increasing, it is not surprising that adolescents, as consumers, would place greater priority on credibility as an important aspect of selecting and using an eHealth intervention. Researchers should consider this prioritization when developing and marketing their interventions.

Usability was considered one of the least important domains of user experience by both adolescent and researcher respondents. This ranking may reflect the discrepancy between the more commonly used definitions of usability and the one proposed in our Delphi consultation. Originating in the field of computer science, usability is considered a main design component (design heuristic), similar to aesthetics, user safety, or data privacy, meaning that a certain degree of technical or functional usability is fundamental to, or expected with, the use of an eHealth intervention. Although an intervention may be usable, this does not mean that it will be used or perceived as useful by adolescents [172]. Therefore, if usability is understood as a measure of the technical ease of use or functionality of an intervention, then examining it may add little richness to the understanding of how users describe their experience with an eHealth intervention. However, if usability comes to be associated with the conditions of use (barriers or constraints, facilitators, and context) and the reasons for usefulness (meeting the needs and preferences of users), then usability may be considered an important indicator of the user experience.

Future Directions

This scoping review with Delphi consultations has provided a broad overview of the current state of user experience measurement in the eHealth field, along with expert (researcher) and user (adolescent) input into how user experience could be defined and measured for an eHealth intervention. An extension of this work may include investigating whether a core set of items used to measure the various domains of user experience would add value to the field and could be feasibly applied across different interventions and user populations [23]. Future research could benefit from qualitative investigations with adolescents to further define their understanding of the user experience domains within different eHealth contexts and to test the feasibility of core assessment items for these domains from their perspective. It would then be beneficial to validate their definition of user experience (and the associated domains) and test the feasibility of core measurement item sets across large numbers of adolescent users of eHealth interventions internationally. With greater awareness of, and emphasis on, patient-oriented research and improving outcomes important to patients, assessment of user experience can become an important part of patient-centered treatment planning.

Further attention could also be directed toward the definition and measurement of self-reported adherence. In our study, although a definition of self-reported adherence was achieved through consensus, its importance in measurement was not established among researcher or adolescent participants. Although a wealth of measurement studies exists for understanding adherence from an objective standpoint [16,173,174], few explanatory studies have explored how and why users did or did not follow the intervention and research protocol as recommended [23]. This "why" component is critical for meaningfully improving eHealth interventions to increase program adherence and thereby achieve the related health benefits [175].

Future studies may also examine how to apply both objective and subjective measures of intervention outcomes or user experience to improve the validity of eHealth evaluations. For example, adolescents and researchers in our study reported perceived impact to be an important aspect of the user experience. Objective measures of intervention impact, such as changes in diagnostic severity, global functioning, symptom checklists [176], or self-reported minimal clinically important difference [177], could reinforce or complement the findings generated by more subjective measures [23]. In this way, we could better understand how various measures converge or diverge on similar user experience concepts, begin to develop more psychometrically and theoretically robust assessment measures, and establish indicators of clinically meaningful outcomes based on users’ perspectives.

Limitations

Although this scoping review and the associated Delphi consultations were conducted according to published guidelines, this study is not without limitations. First, we focused on eHealth interventions that were web-, computer-, or mobile-based and mediated by the internet; we excluded studies that did not primarily include these features, and therefore, our results are not representative of all technologies for which user experience may be measured. In addition, our review did not capture eHealth interventions that are used and evaluated in health care systems but have not been scientifically investigated and reported in the published or gray literature. Our scoping review also required studies to describe the measure or measures used to collect user experience data; this requirement resulted in the exclusion of 512 studies. Given that this was a review of the definition and measurement of user experience, such details were essential to understanding the current state of the eHealth field. Our approach to classifying user experience domains was systematic in that we applied the same working definitions to each study; however, it may have resulted in the classification of a domain of user experience that differed from what study investigators intended. The challenge in grouping some of the studies confirms that agreement on definitions of user experience domains would be of value to the eHealth field. The proposed domains and definitions are not intended to be static, and we expect that they will be refined to reflect advances in the eHealth field. Finally, although we present the results of the first international Delphi consultations on this topic, our sample size was limited, particularly in the representation of adolescent users.

Conclusions

eHealth interventions are now widely available for use by children, adolescents, and parents, and positive user experiences are generally reported across individual studies. The outcomes of this review and Delphi process highlight the various ways in which user experience has been defined and measured across studies, with a large proportion of research studies using study-specific, nonstandardized instruments. Through this study, we propose definitions, informed by the empirical literature and agreed upon by eHealth researchers and adolescent users, for 6 user experience domains: acceptability, satisfaction, credibility, usability, user-reported adherence, and perceived impact. Findings revealed 10 well-established measures that assess 5 of the 6 user experience domains (satisfaction, acceptability, credibility, perceived impact, and usability), and we recommend that eHealth researchers use an available, well-established measure rather than developing their own to assess these domains. The proposed working definitions and the importance rankings from researcher and adolescent participants can be used to inform future eHealth user experience research, encourage consistency in reporting, and guide the development of measurement tools. Future studies should examine whether a core set of items used to measure the various domains of user experience would add value to the field and could be feasibly applied across different interventions and user populations. Closing these gaps has the potential to enable comparisons across the user experience literature and to improve understanding of how user experience relates to other outcomes, such as effectiveness or objective measures of adherence, in eHealth interventions.

Acknowledgments

This work was supported by the Alberta Strategy for Support for People and Patient-Oriented Research and Trials (SUPPORT) Unit Knowledge Translation Platform, which is funded by Alberta Innovates and the Canadian Institutes of Health Research. The authors would like to acknowledge the contributions of Robin Featherstone, MLIS, who developed and conducted the initial search strategy; Tara Landry, MLIS, who peer-reviewed the search strategy and conducted an updated search; and Marcus O’Neill, who assisted with article screening for eligibility and data abstraction.

Conflicts of Interest

Although SM does not own the BRAVE Program, it is possible that at some point in the future, she may receive royalties from future commercialization of the program.

Multimedia Appendix 1

Search strategy for Ovid MEDLINE Epub ahead of print, in-process, and other nonindexed citations; Ovid MEDLINE Daily; and Ovid MEDLINE.

DOCX File , 52 KB

Multimedia Appendix 2

The list of 63 test citations that were used to develop search terms for the search strategy used to identify studies on user experiences.

DOCX File , 57 KB

Multimedia Appendix 3

The definitions of user experience domains used for data extraction in the scoping review.

DOCX File , 51 KB

Multimedia Appendix 4

The parameters used for reviewing and interpreting psychometric data during the quality assessment of evaluation measures (adapted from a published table [25]).

DOCX File , 51 KB

Multimedia Appendix 5

eHealth studies that measured user experience published from 2008 to 2019.

DOCX File , 74 KB

Multimedia Appendix 6

Results from the quality assessment of user experience measures used in the 129 studies included in the review.

DOCX File , 57 KB

Multimedia Appendix 7

Evaluation measures used in the eHealth studies assessed to be promising.

DOCX File , 54 KB

  1. Zhang T, Dong H. Human-Centred Design: An Emergent Conceptual Model. London, UK: Royal College of Art; 2009:8-10.
  2. Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems. British Standards Document BS EN ISO 9241-210. 2019.   URL: https://landingpage.bsigroup.com/LandingPage/Undated?UPI=000000000030388991 [accessed 2021-11-21]
  3. McCurdie T, Taneva S, Casselman M, Yeung M, McDaniel C, Ho W, et al. mHealth consumer apps: the case for user-centered design. Biomed Instrum Technol 2012;Suppl:49-56. [CrossRef] [Medline]
  4. Poole ES. HCI and mobile health interventions: how human-computer interaction can contribute to successful mobile health interventions. Transl Behav Med 2013 Dec;3(4):402-405 [FREE Full text] [CrossRef] [Medline]
  5. Wilson E. Patient-Centered e-Health. Hershey, PA: IGI Global; 2009:10-25.
  6. Schulze K, Kromker H. A framework to measure user experience of interactive online products. In: Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research. 2010 Presented at: 7th International Conference on Methods and Techniques in Behavioral Research; August 24-27, 2010; Eindhoven, The Netherlands p. 1-5. [CrossRef]
  7. Coulter RW, Sang JM, Louth-Marquez W, Henderson ER, Espelage D, Hunter SC, et al. Pilot testing the feasibility of a game intervention aimed at improving help seeking and coping among sexual and gender minority youth: protocol for a randomized controlled trial. JMIR Res Protoc 2019 Feb 15;8(2):e12164 [FREE Full text] [CrossRef] [Medline]
  8. Guagliano JM, Brown HE, Coombes E, Hughes C, Jones AP, Morton KL, et al. The development and feasibility of a randomised family-based physical activity promotion intervention: the Families Reporting Every Step to Health (FRESH) study. Pilot Feasibility Stud 2019;5:21 [FREE Full text] [CrossRef] [Medline]
  9. Bruggers CS, Baranowski S, Beseris M, Leonard R, Long D, Schulte E, et al. A prototype exercise-empowerment mobile video game for children with cancer, and its usability assessment: developing digital empowerment interventions for pediatric diseases. Front Pediatr 2018;6:69 [FREE Full text] [CrossRef] [Medline]
  10. De Cock N, Van Lippevelde W, Vangeel J, Notebaert M, Beullens K, Eggermont S, et al. Feasibility and impact study of a reward-based mobile application to improve adolescents' snacking habits. Public Health Nutr 2018 Aug;21(12):2329-2344. [CrossRef] [Medline]
  11. Leung MM, Mateo KF, Verdaguer S, Wyka K. Testing a web-based interactive comic tool to decrease obesity risk among minority preadolescents: protocol for a pilot randomized control trial. JMIR Res Protoc 2018 Nov 09;7(11):e10682 [FREE Full text] [CrossRef] [Medline]
  12. March S, Spence SH, Donovan CL, Kenardy JA. Large-scale dissemination of internet-based cognitive behavioral therapy for youth anxiety: feasibility and acceptability study. J Med Internet Res 2018 Jul 04;20(7):e234. [CrossRef] [Medline]
  13. Turner T, Hingle M. Evaluation of a mindfulness-based mobile app aimed at promoting awareness of weight-related behaviors in adolescents: a pilot study. JMIR Res Protoc 2017 Apr 26;6(4):e67 [FREE Full text] [CrossRef] [Medline]
  14. Dexheimer JW, Kurowski BG, Anders SH, McClanahan N, Wade SL, Babcock L. Usability evaluation of the SMART application for youth with mTBI. Int J Med Inform 2017 Dec;97:163-170 [FREE Full text] [CrossRef] [Medline]
  15. Feather JS, Howson M, Ritchie L, Carter PD, Parry DT, Koziol-McLain J. Evaluation methods for assessing users' psychological experiences of web-based psychosocial interventions: a systematic review. J Med Internet Res 2016;18(6):e181 [FREE Full text] [CrossRef] [Medline]
  16. Sieverink F, Kelders SM, van Gemert-Pijnen JE. Clarifying the concept of adherence to eHealth technology: systematic review on when usage becomes adherence. J Med Internet Res 2017 Dec 06;19(12):e402 [FREE Full text] [CrossRef] [Medline]
  17. Beatty L, Binnion C. A systematic review of predictors of, and reasons for, adherence to online psychological interventions. Int J Behav Med 2016 Dec;23(6):776-794. [CrossRef] [Medline]
  18. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005 Feb;8(1):19-32. [CrossRef]
  19. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010;5:69 [FREE Full text] [CrossRef] [Medline]
  20. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018 Oct 02;169(7):467-473. [CrossRef] [Medline]
  21. Altman DG. Practical Statistics for Medical Research. London, UK: Chapman and Hall/CRC Press; 1990:406-407.
  22. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med 2005 May;37(5):360-363 [FREE Full text] [Medline]
  23. Radomski AD, Bagnell A, Curtis S, Hartling L, Newton AS. Examining the usage, user experience, and perceived impact of an internet-based cognitive behavioral therapy program for adolescents with anxiety: randomized controlled trial. JMIR Ment Health 2020 Feb 07;7(2):e15795 [FREE Full text] [CrossRef] [Medline]
  24. Cohen LL, La Greca AM, Blount RL, Kazak AE, Holmbeck GN, Lemanek KL. Introduction to special issue: Evidence-based assessment in pediatric psychology. J Pediatr Psychol 2008 Oct;33(9):911-915 [FREE Full text] [CrossRef] [Medline]
  25. Phillips NM, Street M, Haesler E. A systematic review of reliable and valid tools for the measurement of patient participation in healthcare. BMJ Qual Saf 2016 Feb;25(2):110-117. [CrossRef] [Medline]
  26. Hsu CC, Sandford BA. The Delphi technique: making sense of consensus. Pract Assess Res Eval 2007;12:1-8. [CrossRef]
  27. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol 2014 Apr;67(4):401-409. [CrossRef] [Medline]
  28. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs 2000 Oct;32(4):1008-1015. [Medline]
  29. Spence SH, Donovan CL, March S, Gamble A, Anderson R, Prosser S, et al. Online CBT in the treatment of child and adolescent anxiety disorders: issues in the development of BRAVE–ONLINE and two case illustrations. Behav Cogn Psychother 2008 Jul 01;36(4):411-430. [CrossRef]
  30. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009 Apr;42(2):377-381 [FREE Full text] [CrossRef] [Medline]
  31. Kobak KA, Mundt JC, Kennard B. Integrating technology into cognitive behavior therapy for adolescent depression: a pilot study. Ann Gen Psychiatry 2015;14:37 [FREE Full text] [CrossRef] [Medline]
  32. Lattie EG, Ho J, Sargent E, Tomasino KN, Smith JD, Brown CH, et al. Teens engaged in collaborative health: the feasibility and acceptability of an online skill-building intervention for adolescents at risk for depression. Internet Interv 2017 Jun;8:15-26. [CrossRef] [Medline]
  33. Jacobson AE, Vesely SK, Haamid F, Christian-Rancy M, O'Brien SH. Mobile application vs paper pictorial blood assessment chart to track menses in young women: a randomized cross-over design. J Pediatr Adolesc Gynecol 2018 Apr;31(2):84-88. [CrossRef] [Medline]
  34. Moulos I, Maramis C, Ioakimidis I, van den Boer J, Nolstam J, Mars M, et al. Objective and subjective meal registration via a smartphone application. In: Proceedings from the 18th International Conference on Image Analysis and Processing. 2015 Presented at: 18th International Conference on Image Analysis and Processing; September 7-11, 2015; Genoa, Italy. [CrossRef]
  35. Thabrew H, Stasiak K, Merry S. Protocol for co-design, development, and open trial of a prototype game-based ehealth intervention to treat anxiety in young people with long-term physical conditions. JMIR Res Protoc 2017 Sep 22;6(9):e171 [FREE Full text] [CrossRef] [Medline]
  36. Bauermeister JA, Golinkoff JM, Horvath KJ, Hightow-Weidman LB, Sullivan PS, Stephenson R. A multilevel tailored web app-based intervention for linking young men who have sex with men to quality care (get connected): protocol for a randomized controlled trial. JMIR Res Protoc 2018 Aug 02;7(8):e10444 [FREE Full text] [CrossRef] [Medline]
  37. Bernier A, Fedele D, Guo Y, Chavez S, Smith MD, Warnick J, et al. New-onset diabetes educator to educate children and their caregivers about diabetes at the time of diagnosis: usability study. JMIR Diabetes 2018 Jun 06;3(2):e10 [FREE Full text] [CrossRef] [Medline]
  38. Liu A, Coleman K, Bojan K, Serrano PA, Oyedele T, Garcia A, et al. Developing a mobile app (LYNX) to support linkage to HIV/sexually transmitted infection testing and pre-exposure prophylaxis for young men who have sex with men: protocol for a randomized controlled trial. JMIR Res Protoc 2019 Jan 25;8(1):e10659 [FREE Full text] [CrossRef] [Medline]
  39. Nitsch M, Adamcik T, Kuso S, Zeiler M, Waldherr K. Usability and engagement evaluation of an unguided online program for promoting a healthy lifestyle and reducing the risk for eating disorders and obesity in the school setting. Nutrients 2019 Mar 27;11(4):713 [FREE Full text] [CrossRef] [Medline]
  40. Madeira RN, Mestre V, Ferreirinha T. Phonological Disorders in Children? Design and user experience evaluation of a mobile serious game approach. Procedia Comput Sci 2017;113:416-421. [CrossRef]
  41. Bjureberg J, Sahlin H, Hedman-Lagerlöf E, Gratz KL, Tull MT, Jokinen J, et al. Extending research on Emotion Regulation Individual Therapy for Adolescents (ERITA) with nonsuicidal self-injury disorder: open pilot trial and mediation analysis of a novel online version. BMC Psychiatry 2018 Oct 11;18(1):326 [FREE Full text] [CrossRef] [Medline]
  42. Wilansky P, Eklund JM, Milner T, Kreindler D, Cheung A, Kovacs T, et al. Cognitive behavior therapy for anxious and depressed youth: improving homework adherence through mobile technology. JMIR Res Protoc 2016 Nov 10;5(4):e209 [FREE Full text] [CrossRef] [Medline]
  43. Jensen CD, Duncombe KM, Lott MA, Hunsaker SL, Duraccio KM, Woolford SJ. An evaluation of a smartphone-assisted behavioral weight control intervention for adolescents: pilot study. JMIR Mhealth Uhealth 2016 Aug 23;4(3):e102 [FREE Full text] [CrossRef] [Medline]
  44. Lalouni M, Ljótsson B, Bonnert M, Hedman-Lagerlöf E, Högström J, Serlachius E, et al. Internet-delivered cognitive behavioral therapy for children with pain-related functional gastrointestinal disorders: feasibility study. JMIR Ment Health 2017 Aug 10;4(3):e32 [FREE Full text] [CrossRef] [Medline]
  45. Cho H, Powell D, Pichon A, Thai J, Bruce J, Kuhns LM, et al. A mobile health intervention for HIV prevention among racially and ethnically diverse young men: usability evaluation. JMIR Mhealth Uhealth 2018 Sep 07;6(9):e11450 [FREE Full text] [CrossRef] [Medline]
  46. Sun T, Dunsmuir D, Miao I, Devoy GM, West NC, Görges M, et al. In-hospital usability and feasibility evaluation of Panda, an app for the management of pain in children at home. Paediatr Anaesth 2018 Oct;28(10):897-905. [CrossRef] [Medline]
  47. Hill RM, Pettit JW. Pilot randomized controlled trial of LEAP: a selective preventive intervention to reduce adolescents' perceived burdensomeness. J Clin Child Adolesc Psychol 2019;48(sup1):45-56. [CrossRef] [Medline]
  48. Law EF, Beals-Erickson SE, Noel M, Claar R, Palermo TM. Pilot randomized controlled trial of internet-delivered cognitive-behavioral treatment for pediatric headache. Headache 2015;55(10):1410-1425 [FREE Full text] [CrossRef] [Medline]
  49. Richards D, Caldwell P. Improving health outcomes sooner rather than later via an interactive website and virtual specialist. IEEE J Biomed Health Inform 2018 Sep;22(5):1699-1706. [CrossRef] [Medline]
  50. Vigerland S, Ljótsson B, Thulin U, Öst L, Andersson G, Serlachius E. Internet-delivered cognitive behavioural therapy for children with anxiety disorders: a randomised controlled trial. Behav Res Ther 2016 Jan;76:47-56 [FREE Full text] [CrossRef] [Medline]
  51. Vigerland S, Thulin U, Ljótsson B, Svirsky L, Ost L, Lindefors N, et al. Internet-delivered CBT for children with specific phobia: a pilot study. Cogn Behav Ther 2013;42(4):303-314. [CrossRef] [Medline]
  52. Jolstedt M, Ljótsson B, Fredlander S, Tedgård T, Hallberg A, Ekeljung A, et al. Implementation of internet-delivered CBT for children with anxiety disorders in a rural area: a feasibility trial. Internet Interv 2018 Jun;12:121-129 [FREE Full text] [CrossRef] [Medline]
  53. O'Malley G, Dowdall G, Burls A, Perry IJ, Curran N. Exploring the usability of a mobile app for adolescent obesity management. JMIR Mhealth Uhealth 2014 Jun 27;2(2):e29 [FREE Full text] [CrossRef] [Medline]
  54. Danielson CK, McCauley JL, Gros KS, Jones AM, Barr SC, Borkman AL, et al. SiHLEWeb.com: Development and usability testing of an evidence-based HIV prevention website for female African-American adolescents. Health Informatics J 2016 Jun;22(2):194-208 [FREE Full text] [CrossRef] [Medline]
  55. Palermo TM, Law EF, Fales J, Bromberg MH, Jessen-Fiddick T, Tai G. Internet-delivered cognitive-behavioral treatment for adolescents with chronic pain and their parents: a randomized controlled multicenter trial. Pain 2016 Jan;157(1):174-185 [FREE Full text] [CrossRef] [Medline]
  56. Palermo TM, Wilson AC, Peters M, Lewandowski A, Somhegyi H. Randomized controlled trial of an internet-delivered family cognitive-behavioral therapy intervention for children and adolescents with chronic pain. Pain 2009 Nov;146(1-2):205-213 [FREE Full text] [CrossRef] [Medline]
  57. Carrasco AE. Acceptability of an adventure video game in the treatment of female adolescents with symptoms of depression. ResPsy 2016 Apr 18;19(1):10-18. [CrossRef]
  58. Gladstone T, Marko-Holguin M, Henry J, Fogel J, Diehl A, Van Voorhees BW. Understanding adolescent response to a technology-based depression prevention program. J Clin Child Adolesc Psychol 2014;43(1):102-114. [CrossRef] [Medline]
  59. Gladstone TG, Marko-Holguin M, Rothberg P, Nidetz J, Diehl A, DeFrino DT, et al. An internet-based adolescent depression preventive intervention: study protocol for a randomized control trial. Trials 2015 May 01;16:203 [FREE Full text] [CrossRef] [Medline]
  60. Doumas DM. Web-based personalized feedback: is this an appropriate approach for reducing drinking among high school students? J Subst Abuse Treat 2015 Mar;50:76-80. [CrossRef] [Medline]
  61. Wade SL, Cassedy AE, Sklut M, Taylor HG, McNally KA, Kirkwood MW, et al. The relationship of adolescent and parent preferences for treatment modality with satisfaction, attrition, adherence, and efficacy: The Coping With Head Injury Through Problem-Solving (CHIPS) Study. J Pediatr Psychol 2019 Apr 01;44(3):388-401. [CrossRef] [Medline]
  62. Bauer S, Bilić S, Reetz C, Ozer F, Becker K, Eschenbeck H, ProHEAD Consortium. Efficacy and cost-effectiveness of Internet-based selective eating disorder prevention: study protocol for a randomized controlled trial within the ProHEAD Consortium. Trials 2019 Jan 30;20(1):91 [FREE Full text] [CrossRef] [Medline]
  63. Nieto R, Boixadós M, Hernández E, Beneitez I, Huguet A, McGrath P. Quantitative and qualitative testing of DARWeb: an online self-guided intervention for children with functional abdominal pain and their parents. Health Informatics J 2019 Dec;25(4):1511-1527 [FREE Full text] [CrossRef] [Medline]
  64. Ahmed B, Monroe P, Hair A, Tan CT, Gutierrez-Osuna R, Ballard KJ. Speech-driven mobile games for speech therapy: user experiences and feasibility. Int J Speech Lang Pathol 2018 Oct 09:1-15. [CrossRef] [Medline]
  65. Anderson LM, Leonard S, Jonassaint J, Lunyera J, Bonner M, Shah N. Mobile health intervention for youth with sickle cell disease: impact on adherence, disease knowledge, and quality of life. Pediatr Blood Cancer 2018 Aug;65(8):e27081. [CrossRef] [Medline]
  66. Benyakorn S, Calub CA, Riley SJ, Schneider A, Iosif A, Solomon M, et al. Computerized cognitive training in children with autism and intellectual disabilities: feasibility and satisfaction study. JMIR Ment Health 2018 May 25;5(2):e40 [FREE Full text] [CrossRef] [Medline]
  67. Braciszewski JM, Wernette GK, Moore RS, Bock BC, Stout RL, Chamberlain P. A pilot randomized controlled trial of a technology-based substance use intervention for youth exiting foster care. Child Youth Serv Rev 2018 Nov;94:466-476 [FREE Full text] [CrossRef] [Medline]
  68. de la Vega R, Roset R, Galán S, Miró J. Fibroline: a mobile app for improving the quality of life of young people with fibromyalgia. J Health Psychol 2018 Jan;23(1):67-78. [CrossRef] [Medline]
  69. Grist R, Porter J, Stallard P. Acceptability, use, and safety of a mobile phone app (BlueIce) for young people who self-harm: qualitative study of service users' experience. JMIR Ment Health 2018 Feb 23;5(1):e16 [FREE Full text] [CrossRef] [Medline]
  70. Hieftje K, Pendergrass T, Montanaro E, Kyriakides T, Florsheim O, Fiellin L. "But do they like it?" Participant satisfaction and gameplay experience of a public health videogame intervention in adolescents. In: Proceedings of the IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH). 2018 Presented at: IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH); May 16-18, 2018; Vienna, Austria. [CrossRef]
  71. Husted GR, Weis J, Teilmann G, Castensøe-Seidenfaden P. Exploring the influence of a smartphone app (Young with Diabetes) on young people's self-management: qualitative study. JMIR Mhealth Uhealth 2018 Feb 28;6(2):e43 [FREE Full text] [CrossRef] [Medline]
  72. Jolstedt M, Wahlund T, Lenhard F, Ljótsson B, Mataix-Cols D, Nord M, et al. Efficacy and cost-effectiveness of therapist-guided internet cognitive behavioural therapy for paediatric anxiety disorders: a single-centre, single-blind, randomised controlled trial. Lancet Child Adolesc Health 2018 Dec;2(11):792-801. [CrossRef] [Medline]
  73. Kapitány-Fövény M, Vagdalt E, Ruttkay Z, Urbán R, Richman MJ, Demetrovics Z. Potential of an interactive drug prevention mobile phone app (Once Upon a High): questionnaire study among students. JMIR Serious Games 2018 Dec 04;6(4):e19 [FREE Full text] [CrossRef] [Medline]
  74. Klee P, Bussien C, Castellsague M, Combescure C, Dirlewanger M, Girardin C, et al. An intervention by a patient-designed do-it-yourself mobile device app reduces HbA1c in children and adolescents with type 1 diabetes: a randomized double-crossover study. Diabetes Technol Ther 2018 Dec;20(12):797-805. [CrossRef] [Medline]
  75. Leonard NR, Casarjian B, Fletcher RR, Praia C, Sherpa D, Kelemen A, et al. Theoretically-based emotion regulation strategies using a mobile app and wearable sensor among homeless adolescent mothers: acceptability and feasibility study. JMIR Pediatr Parent 2018;1(1):e1 [FREE Full text] [CrossRef] [Medline]
  76. Lorusso ML, Biffi E, Molteni M, Reni G. Exploring the learnability and usability of a near field communication-based application for semantic enrichment in children with language disorders. Assist Technol 2018;30(1):39-50. [CrossRef] [Medline]
  77. Parisod H, Pakarinen A, Axelin A, Löyttyniemi E, Smed J, Salanterä S. Feasibility of mobile health game "Fume" in supporting tobacco-related health literacy among early adolescents: a three-armed cluster randomized design. Int J Med Inform 2018 May;113:26-37. [CrossRef] [Medline]
  78. Parisod H, Pakarinen A, Axelin A, Loyttyniemi E, Smed J, Salantera S. Acceptability of and demand for a tobacco-related health game application in early adolescents with different gender, age and gaming habits. In: Proceedings of the 1st International GamiFIN Conference. 2017 Presented at: 1st International GamiFIN Conference; May 9-10, 2017; Pori, Finland p. 135-140   URL: https://tinyurl.com/5wnan8p7
  79. Ramsey RR, Holbein CE, Powers SW, Hershey AD, Kabbouche MA, O'Brien HL, et al. A pilot investigation of a mobile phone application and progressive reminder system to improve adherence to daily prevention treatment in adolescents and young adults with migraine. Cephalalgia 2018 Dec;38(14):2035-2044. [CrossRef] [Medline]
  80. Verberg FL, Helmond P, Overbeek G. Study protocol: a randomized controlled trial testing the effectiveness of an online mindset intervention in adolescents with intellectual disabilities. BMC Psychiatry 2018 Dec 04;18(1):377 [FREE Full text] [CrossRef] [Medline]
  81. Verdaguer S, Mateo KF, Wyka K, Dennis-Tiwary TA, Leung MM. A web-based interactive tool to reduce childhood obesity risk in urban minority youth: usability testing study. JMIR Form Res 2018 Nov 01;2(2):e21 [FREE Full text] [CrossRef] [Medline]
  82. Wade SL, Bedell G, King JA, Jacquin M, Turkstra LS, Haarbauer-Krupa J, et al. Social Participation and Navigation (SPAN) program for adolescents with acquired brain injury: pilot findings. Rehabil Psychol 2018 Aug;63(3):327-337 [FREE Full text] [CrossRef] [Medline]
  83. Cai RA, Beste D, Chaplin H, Varakliotis S, Suffield L, Josephs F, et al. Developing and evaluating JIApp: acceptability and usability of a smartphone app system to improve self-management in young people with juvenile idiopathic arthritis. JMIR Mhealth Uhealth 2017 Aug 15;5(8):e121 [FREE Full text] [CrossRef] [Medline]
  84. Castensøe-Seidenfaden P, Reventlov HG, Teilmann G, Hommel E, Olsen BS, Kensing F. Designing a self-management app for young people with type 1 diabetes: methodological challenges, experiences, and recommendations. JMIR Mhealth Uhealth 2017 Oct 23;5(10):e124 [FREE Full text] [CrossRef] [Medline]
  85. Goyal S, Nunn CA, Rotondi M, Couperthwaite AB, Reiser S, Simone A, et al. A mobile app for the self-management of type 1 diabetes among adolescents: a randomized controlled trial. JMIR Mhealth Uhealth 2017 Jun 19;5(6):e82 [FREE Full text] [CrossRef] [Medline]
  86. Hatfield M, Murray N, Ciccarelli M, Falkmer T, Falkmer M. Pilot of the BOOST-A™: an online transition planning program for adolescents with autism. Aust Occup Ther J 2017 Dec;64(6):448-456. [CrossRef] [Medline]
  87. Haug S, Castro RP, Meyer C, Filler A, Kowatsch T, Schaub MP. A mobile phone-based life skills training program for substance use prevention among adolescents: pre-post study on the acceptance and potential effectiveness of the program, Ready4life. JMIR Mhealth Uhealth 2017 Oct 04;5(10):e143 [FREE Full text] [CrossRef] [Medline]
  88. Holtz BE, Murray KM, Hershey DD, Richman J, Dunneback JK, Vyas A, et al. The design and development of MyT1DHero: a mobile app for adolescents with type 1 diabetes and their parents. J Telemed Telecare 2017 Jan 01:172-180. [CrossRef] [Medline]
  89. Jibb LA, Stevens BJ, Nathan PC, Seto E, Cafazzo JA, Johnston DL, et al. Implementation and preliminary effectiveness of a real-time pain management smartphone app for adolescents with cancer: a multicenter pilot clinical study. Pediatr Blood Cancer 2017 Oct;64(10):e26554. [CrossRef] [Medline]
  90. Khalil GE, Wang H, Calabro KS, Mitra N, Shegog R, Prokhorov AV. From the experience of interactivity and entertainment to lower intention to smoke: a randomized controlled trial and path analysis of a web-based smoking prevention program for adolescents. J Med Internet Res 2017 Feb 16;19(2):e44 [FREE Full text] [CrossRef] [Medline]
  91. Kong G, Goldberg AL, Dallery J, Krishnan-Sarin S. An open-label pilot study of an intervention using mobile phones to deliver contingency management of tobacco abstinence to high school students. Exp Clin Psychopharmacol 2017 Oct;25(5):333-337 [FREE Full text] [CrossRef] [Medline]
  92. Kuosmanen T, Fleming TM, Newell J, Barry MM. A pilot evaluation of the SPARX-R gaming intervention for preventing depression and improving wellbeing among adolescents in alternative education. Internet Interv 2017 Jun;8:40-47 [FREE Full text] [CrossRef] [Medline]
  93. Lee J, Song S, Ahn JS, Kim Y, Lee JE. Use of a mobile application for self-monitoring dietary intake: feasibility test and an intervention study. Nutrients 2017 Jul 13;9(7):13 [FREE Full text] [CrossRef] [Medline]
  94. Lyles AA, Amresh A, Huberty J, Todd M, Lee RE. A mobile, avatar-based app for improving body perceptions among adolescents: a pilot test. JMIR Serious Games 2017 Mar 02;5(1):e4 [FREE Full text] [CrossRef] [Medline]
  95. McManama O'Brien KH, LeCloux M, Ross A, Gironda C, Wharff EA. A pilot study of the acceptability and usability of a smartphone application intervention for suicidal adolescents and their parents. Arch Suicide Res 2016 May 2:1-11. [CrossRef] [Medline]
  96. Narad ME, Bedell G, King JA, Johnson J, Turkstra LS, Haarbauer-Krupa J, et al. Social Participation and Navigation (SPAN): description and usability of app-based coaching intervention for adolescents with TBI. Dev Neurorehabil 2017 Aug 01:1-10. [CrossRef] [Medline]
  97. Navarro J, Escobar P, Cebolla A, Lison J, Pitti J, Guizerres J, et al. Exergames on line for childhood obesity: using a web platform as an ambulatory program to increase the acceptance and adherence to physical activity (PA). In: Proceedings of the International Summit on eHealth - eHealth 360°. 2016 Presented at: International Summit on eHealth - eHealth 360°; June 14-16, 2016; Budapest, Hungary p. 127-134. [CrossRef]
  98. Newton AS, Dow N, Dong K, Fitzpatrick E, Wild TC, Johnson DW, Pediatric Emergency Research Canada. A randomised controlled pilot trial evaluating feasibility and acceptability of a computer-based tool to identify and reduce harmful and hazardous drinking among adolescents with alcohol-related presentations in Canadian pediatric emergency departments. BMJ Open 2017 Aug 11;7(8):e015423 [FREE Full text] [CrossRef] [Medline]
  99. Stoll R, Pina A, Gary K, Amresh A. Usability of a smartphone application to support the prevention and early intervention of anxiety in youth. Cogn Behav Pract 2017 Nov;24(4):393-404 [FREE Full text] [CrossRef] [Medline]
  100. Widman L, Golin CE, Kamke K, Massey J, Prinstein MJ. Feasibility and acceptability of a web-based HIV/STD prevention program for adolescent girls targeting sexual communication skills. Health Educ Res 2017 Aug 01;32(4):343-352 [FREE Full text] [CrossRef] [Medline]
  101. Chapman R, Loades M, O'Reilly G, Coyle D, Patterson M, Salkovskis P. ‘Pesky gNATs’: investigating the feasibility of a novel computerized CBT intervention for adolescents with anxiety and/or depression in a Tier 3 CAMHS setting. Cogn Behav Therap 2016 Dec 01;9:e35. [CrossRef]
  102. Glynn P, Eom S, Zelko F, Koh S. Feasibility of a mobile cognitive intervention in childhood absence epilepsy. Front Hum Neurosci 2016;10:575 [FREE Full text] [CrossRef] [Medline]
  103. Kurowski BG, Wade SL, Dexheimer JW, Dyas J, Zhang N, Babcock L. Feasibility and potential benefits of a web-based intervention delivered acutely after mild traumatic brain injury in adolescents: a pilot study. J Head Trauma Rehabil 2016;31(6):369-378 [FREE Full text] [CrossRef] [Medline]
  104. Svensson A, Magnusson M, Larsson C. Overcoming barriers: adolescents' experiences using a mobile phone dietary assessment app. JMIR Mhealth Uhealth 2016 Jul 29;4(3):e92 [FREE Full text] [CrossRef] [Medline]
  105. Armbrust W, Bos JJ, Cappon J, van Rossum MA, Sauer PJ, Wulffraat N, et al. Design and acceptance of Rheumates@Work, a combined internet-based and in person instruction model, an interactive, educational, and cognitive behavioral program for children with juvenile idiopathic arthritis. Pediatr Rheumatol Online J 2015 Jul 23;13:31 [FREE Full text] [CrossRef] [Medline]
  106. Blackman KC, Zoellner J, Kadir A, Dockery B, Johnson SB, Almeida FA, et al. Examining the feasibility of smartphone game applications for physical activity promotion in middle school students. Games Health J 2015 Oct;4(5):409-419. [CrossRef] [Medline]
  107. Brady SS, Sieving RE, Terveen LG, Rosser BR, Kodet AJ, Rothberg VD. An interactive website to reduce sexual risk behavior: process evaluation of TeensTalkHealth. JMIR Res Protoc 2015 Sep 02;4(3):e106 [FREE Full text] [CrossRef] [Medline]
  108. Bul KC, Franken IH, Van der Oord S, Kato PM, Danckaerts M, Vreeke LJ, et al. Development and user satisfaction of "Plan-It Commander," a serious game for children with ADHD. Games Health J 2015 Dec;4(6):502-512. [CrossRef] [Medline]
  109. Cox LE, Ashford JM, Clark KN, Martin-Elbahesh K, Hardy KK, Merchant TE, et al. Feasibility and acceptability of a remotely administered computerized intervention to address cognitive late effects among childhood cancer survivors. Neurooncol Pract 2015 Jun;2(2):78-87 [FREE Full text] [CrossRef] [Medline]
  110. Enah C, Piper K, Moneyham L. Qualitative evaluation of the relevance and acceptability of a web-based HIV prevention game for rural adolescents. J Pediatr Nurs 2015 Apr;30(2):321-328. [CrossRef] [Medline]
  111. Kenny R, Dooley B, Fitzgerald A. Feasibility of "CopeSmart": a telemental health app for adolescents. JMIR Ment Health 2015 Aug 10;2(3):e22 [FREE Full text] [CrossRef] [Medline]
  112. Kobak KA, Mundt JC, Kennard B. Erratum to: integrating technology into cognitive behavior therapy for adolescent depression: a pilot study. Ann Gen Psychiatry 2016;15:2 [FREE Full text] [CrossRef] [Medline]
  113. Marsac ML, Winston FK, Hildenbrand AK, Kohser KL, March S, Kenardy J, et al. Systematic, theoretically-grounded development and feasibility testing of an innovative, preventive web-based game for children exposed to acute trauma. Clin Pract Pediatr Psychol 2015;3(1):12-24 [FREE Full text] [CrossRef] [Medline]
  114. Marsch LA, Guarino H, Grabinski MJ, Syckes C, Dillingham ET, Xie H, et al. Comparative effectiveness of web-based vs. educator-delivered HIV prevention for adolescent substance users: a randomized controlled trial. J Subst Abuse Treat 2015 Dec;59:30-37 [FREE Full text] [CrossRef] [Medline]
  115. Mustanski B, Greene GJ, Ryan D, Whitton SW. Feasibility, acceptability, and initial efficacy of an online sexual health promotion program for LGBT youth: the Queer Sex Ed intervention. J Sex Res 2015;52(2):220-230. [CrossRef] [Medline]
  116. Pretlow RA, Stock CM, Allison S, Roeger L. Treatment of child/adolescent obesity using the addiction model: a smartphone app pilot study. Child Obes 2015 Jun;11(3):248-259 [FREE Full text] [CrossRef] [Medline]
  117. Sousa P, Fonseca H, Gaspar P, Gaspar F. Controlled trial of an internet-based intervention for overweight teens (Next.Step): effectiveness analysis. Eur J Pediatr 2015 Sep;174(9):1143-1157 [FREE Full text] [CrossRef] [Medline]
  118. Sousa P, Fonseca H, Gaspar P, Gaspar F. Usability of an internet-based platform (Next.Step) for adolescent weight management. J Pediatr (Rio J) 2015;91(1):68-74 [FREE Full text] [CrossRef] [Medline]
  119. Sze YY, Daniel TO, Kilanowski CK, Collins RL, Epstein LH. Web-based and mobile delivery of an episodic future thinking intervention for overweight and obese families: a feasibility study. JMIR Mhealth Uhealth 2015;3(4):e97 [FREE Full text] [CrossRef] [Medline]
  120. Voerman JS, Remerie S, Westendorp T, Timman R, Busschbach JJ, Passchier J, et al. Effects of a guided internet-delivered self-help intervention for adolescents with chronic pain. J Pain 2015 Nov;16(11):1115-1126. [CrossRef] [Medline]
  121. Williamson H, Griffiths C, Harcourt D. Developing young person's Face IT: online psychosocial support for adolescents struggling with conditions or injuries affecting their appearance. Health Psychol Open 2015 Jul;2(2):2055102915619092 [FREE Full text] [CrossRef] [Medline]
  122. Williamson H, Hamlet C, White P, Marques EMR, Cadogan J, Perera R, et al. Study protocol of the YP Face IT feasibility study: comparing an online psychosocial intervention versus treatment as usual for adolescents distressed by appearance-altering conditions/injuries. BMJ Open 2016 Dec 03;6(10):e012423 [FREE Full text] [CrossRef] [Medline]
  123. Wozney L, Baxter P, Newton AS. Usability evaluation with mental health professionals and young people to develop an internet-based cognitive-behaviour therapy program for adolescents with anxiety disorders. BMC Pediatr 2015;15:213 [FREE Full text] [CrossRef] [Medline]
  124. Bannink R, Broeren S, van Zwanenburg E, van As E, van de Looij-Jansen P, Raat H. Use and appreciation of a web-based, tailored intervention (E-health4Uth) combined with counseling to promote adolescents' health in preventive youth health care: survey and log-file analysis. JMIR Res Protoc 2014 Jan 06;3(1):e3 [FREE Full text] [CrossRef] [Medline]
  125. Breakey VR, Ignas DM, Warias AV, White M, Blanchette VS, Stinson JN. A pilot randomized control trial to evaluate the feasibility of an internet-based self-management and transitional care program for youth with haemophilia. Haemophilia 2014 Nov;20(6):784-793. [CrossRef] [Medline]
  126. Comer JS, Furr JM, Cooper-Vince CE, Kerns CE, Chan PT, Edson AL, et al. Internet-delivered, family-based treatment for early-onset OCD: a preliminary case series. J Clin Child Adolesc Psychol 2014;43(1):74-87 [FREE Full text] [CrossRef] [Medline]
  127. Hetrick S, Yuen HP, Cox G, Bendall S, Yung A, Pirkis J, et al. Does cognitive behavioural therapy have a role in improving problem solving and coping in adolescents with suicidal ideation? Cogn Behav Therap 2014 Aug 07;7:e13. [CrossRef]
  128. Lubans DR, Smith JJ, Skinner G, Morgan PJ. Development and implementation of a smartphone application to promote physical activity and reduce screen-time in adolescent boys. Front Public Health 2014;2:42 [FREE Full text] [CrossRef] [Medline]
  129. Manicavasagar V, Horswood D, Burckhardt R, Lum A, Hadzi-Pavlovic D, Parker G. Feasibility and effectiveness of a web-based positive psychology program for youth mental health: randomized controlled trial. J Med Internet Res 2014;16(6):e140 [FREE Full text] [CrossRef] [Medline]
  130. Newton NC, Conrod PJ, Rodriguez DM, Teesson M. A pilot study of an online universal school-based intervention to prevent alcohol and cannabis use in the UK. BMJ Open 2014 May 19;4(5):e004750 [FREE Full text] [CrossRef] [Medline]
  131. Ybarra ML, Bull SS, Prescott TL, Birungi R. Acceptability and feasibility of CyberSenga: an internet-based HIV-prevention program for adolescents in Mbarara, Uganda. AIDS Care 2014 Apr;26(4):441-447 [FREE Full text] [CrossRef] [Medline]
  132. Ezendam NP, Noordegraaf VS, Kroeze W, Brug J, Oenema A. Process evaluation of FATaintPHAT, a computer-tailored intervention to prevent excessive weight gain among Dutch adolescents. Health Promot Int 2013 Mar;28(1):26-35. [CrossRef] [Medline]
  133. Haug S, Schaub MP, Venzin V, Meyer C, John U, Gmel G. A pre-post study on the appropriateness and effectiveness of a web- and text messaging-based intervention to reduce problem drinking in emerging adults. J Med Internet Res 2013;15(9):e196 [FREE Full text] [CrossRef] [Medline]
  134. Hardy KK, Willard VW, Allen TM, Bonner MJ. Working memory training in survivors of pediatric cancer: a randomized pilot study. Psychooncology 2013 Aug;22(8):1856-1865 [FREE Full text] [CrossRef] [Medline]
  135. Harrell W, Eack S, Hooper SR, Keshavan MS, Bonner MS, Schoch K, et al. Feasibility and preliminary efficacy data from a computerized cognitive intervention in children with chromosome 22q11.2 deletion syndrome. Res Dev Disabil 2013 Sep;34(9):2606-2613 [FREE Full text] [CrossRef] [Medline]
  136. Ritterband LM, Thorndike FP, Lord HR, Borowitz S, Walker LS, Ingersoll KS, et al. An RCT of an internet intervention for pediatric encopresis with one year follow-up. Clin Pract Pediatr Psychol 2013 Mar;1(1):68-80 [FREE Full text] [CrossRef] [Medline]
  137. Spook JE, Paulussen T, Kok G, Van Empelen P. Monitoring dietary intake and physical activity electronically: feasibility, usability, and ecological validity of a mobile-based Ecological Momentary Assessment tool. J Med Internet Res 2013;15(9):e214 [FREE Full text] [CrossRef] [Medline]
  138. Bradley KL, Robinson LM, Brannen CL. Adolescent help-seeking for psychological distress, depression, and anxiety using an Internet program. Int J Ment Health Prom 2012 Feb;14(1):23-34. [CrossRef]
  139. Lau EY, Lau PW, Chung P, Ransdell LB, Archer E. Evaluation of an internet-short message service-based intervention for promoting physical activity in Hong Kong Chinese adolescent school children: a pilot study. Cyberpsychol Behav Soc Netw 2012 Aug;15(8):425-434. [CrossRef] [Medline]
  140. Shegog R, Markham CM, Leonard AD, Bui TC, Paul ME. "+CLICK": pilot of a web-based training program to enhance ART adherence among HIV-positive youth. AIDS Care 2012;24(3):310-318. [CrossRef] [Medline]
  141. Whittaker R, Merry S, Stasiak K, McDowell H, Doherty I, Shepherd M, et al. MEMO--a mobile phone depression prevention intervention for adolescents: development process and postprogram findings on acceptability from a randomized controlled trial. J Med Internet Res 2012;14(1):e13 [FREE Full text] [CrossRef] [Medline]
  142. Coyle D, McGlade N, Doherty G, O'Reilly G. Exploratory evaluations of a computer game supporting cognitive behavioural therapy for adolescents. In: Proceedings of the 29th Annual CHI Conference on Human Factors in Computing Systems. 2011 Presented at: CHI '11: CHI Conference on Human Factors in Computing Systems; May 7-12, 2011; Vancouver, BC, Canada p. 2937-2946. [CrossRef]
  143. Iloabachie C, Wells C, Goodwin B, Baldwin M, Vanderplough-Booth K, Gladstone T, et al. Adolescent and parent experiences with a primary care/Internet-based depression prevention intervention (CATCH-IT). Gen Hosp Psychiatry 2011;33(6):543-555. [CrossRef] [Medline]
  144. Marsch LA, Grabinski MJ, Bickel WK, Desrosiers A, Guarino H, Muehlbach B, et al. Computer-assisted HIV prevention for youth with substance use disorders. Subst Use Misuse 2011;46(1):46-56 [FREE Full text] [CrossRef] [Medline]
  145. Tillfors M, Andersson G, Ekselius L, Furmark T, Lewenhaupt S, Karlsson A, et al. A randomized trial of internet-delivered treatment for social anxiety disorder in high school students. Cogn Behav Ther 2011;40(2):147-157. [CrossRef] [Medline]
  146. Newton NC, Teesson M, Vogl LE, Andrews G. Internet-based prevention for alcohol and cannabis use: final results of the Climate Schools course. Addiction 2010 Apr;105(4):749-759. [CrossRef] [Medline]
  147. Stinson J, McGrath P, Hodnett E, Feldman B, Duffy C, Huber A, et al. Usability testing of an online self-management program for adolescents with juvenile idiopathic arthritis. J Med Internet Res 2010;12(3):e30 [FREE Full text] [CrossRef] [Medline]
  148. Markham CM, Shegog R, Leonard AD, Bui TC, Paul ME. +CLICK: harnessing web-based training to reduce secondary transmission among HIV-positive youth. AIDS Care 2009 May;21(5):622-631 [FREE Full text] [CrossRef] [Medline]
  149. Landback J, Prochaska M, Ellis J, Dmochowska K, Kuwabara SA, Gladstone T, et al. From prototype to product: development of a primary care/internet based depression prevention intervention for adolescents (CATCH-IT). Community Ment Health J 2009 Oct;45(5):349-354. [CrossRef] [Medline]
  150. Long AC, Palermo TM. Brief report: web-based management of adolescent chronic pain: development and usability testing of an online family cognitive behavioral therapy program. J Pediatr Psychol 2009 Jun;34(5):511-516 [FREE Full text] [CrossRef] [Medline]
  151. O'Conner-Von S. Preparation of adolescents for outpatient surgery: using an internet program. AORN J 2008 Feb;87(2):374-398. [CrossRef] [Medline]
  152. Ritterband LM, Ardalan K, Thorndike FP, Magee JC, Saylor DK, Cox DJ, et al. Real world use of an internet intervention for pediatric encopresis. J Med Internet Res 2008 Jun 30;10(2):e16 [FREE Full text] [CrossRef] [Medline]
  153. World Health Organization. Young people's health--a challenge for society. Report of a WHO Study Group on young people and "Health for All by the Year 2000". World Health Organ Tech Rep Ser 1986;731:1-117. [Medline]
  154. Lewis JR. Psychometric evaluation of the PSSUQ using data from five years of usability studies. Int J Hum Comput Interact 2002 Sep;14(3-4):463-488. [CrossRef]
  155. Lewis JR. IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. Int J Hum Comput Interact 1995 Jan;7(1):57-78. [CrossRef]
  156. Lewis J, Sauro J. The factor structure of the system usability scale. In: Proceedings of the International Conference on Human Centered Design. 2009 Presented at: International Conference on Human Centered Design; July 19-24, 2009; San Diego, CA, USA p. 94-103. [CrossRef]
  157. Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int J Hum Comput Interact 2008 Jul 30;24(6):574-594. [CrossRef]
  158. Martins AI, Rosa AF, Queirós A, Silva A, Rocha NP. European Portuguese validation of the System Usability Scale (SUS). Procedia Comput Sci 2015;67:293-300. [CrossRef]
  159. Greenfield T, Attkisson C. The Service Satisfaction Scale-30. In: Maruish ME, editor. The Use of Psychological Testing for Treatment Planning and Outcome Assessment. Mahwah, NJ: Lawrence Erlbaum Associates; 2004:799-811.
  160. Devilly GJ, Borkovec TD. Psychometric properties of the credibility/expectancy questionnaire. J Behav Ther Exp Psychiatry 2000 Jun;31(2):73-86. [CrossRef] [Medline]
  161. Johnson D, Gardner MJ, Perry R. Validation of two game experience scales: The Player Experience of Need Satisfaction (PENS) and Game Experience Questionnaire (GEQ). Int J Hum Comput Stud 2018 Oct;118:38-46. [CrossRef]
  162. Poels K, de Kort YA, IJsselsteijn WA. D3.3 : Game Experience Questionnaire: development of a self-report measure to assess the psychological impact of digital games. In: European Community - New and Emerging Science and Technology (NEST) Programme. Eindhoven: Technische Universiteit Eindhoven; 2007:1-46.
  163. Athay MM, Bickman L. Development and psychometric evaluation of the youth and caregiver Service Satisfaction Scale. Adm Policy Ment Health 2012 Mar;39(1-2):71-77 [FREE Full text] [CrossRef] [Medline]
  164. Kelley ML, Heffer RW, Gresham FM, Elliott SN. Development of a modified treatment evaluation inventory. J Psychopathol Behav Assess 1989 Sep;11(3):235-247. [CrossRef]
  165. Newton JT, Sturmey P. Development of a short form of the Treatment Evaluation Inventory for acceptability of psychological interventions. Psychol Rep 2004 Apr;94(2):475-481. [CrossRef] [Medline]
  166. Gao M, Kortum P, Oswald F. Psychometric evaluation of the USE (Usefulness, Satisfaction, and Ease of use) Questionnaire for reliability and validity. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2018 Sep 27;62(1):1414-1418. [CrossRef]
  167. Hatcher RL, Gillaspy JA. Development and validation of a revised short version of the working alliance inventory. Psychother Res 2006 Jan;16(1):12-25. [CrossRef]
  168. Kirakowski J. The use of questionnaire methods for usability assessment. Background Notes on the SUMI Questionnaire. 1994.   URL: https:/​/www.​researchgate.net/​profile/​Jurek_Kirakowski/​project/​SUMI-Questionnaire/​attachment/​5ae302e1b53d2f63c3c86f11/​AS:619982895456256@1524826849051/​download/​SUMIPAPP.​pdf?context=ProjectUpdatesLog [accessed 2021-11-21]
  169. Kirakowski J, Claridge N, Whitehand R. Human centered measures of success in web site design. In: Proceedings of the 4th Conference on Human Factors & the Web. 1998 Presented at: 4th Conference on Human Factors & the Web; June 5, 1998; Basking Ridge, NJ, USA   URL: https:/​/scholar.​google.com/​scholar_lookup?hl=en&publication_year=1998&author=J.​+Kirakowski&author=N+Claridge&author=R+Whitehand&title=Human+Centered+Measures+of+Success+in+Web+Site+Design
  170. Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med 2016 Dec 13:254-267. [CrossRef] [Medline]
  171. Yeager CM, Benight CC. If we build it, will they come? Issues of engagement with digital health interventions for trauma recovery. Mhealth 2018;4:37 [FREE Full text] [CrossRef] [Medline]
  172. Baumel A, Muench F. Heuristic evaluation of eHealth interventions: establishing standards that relate to the therapeutic process perspective. JMIR Ment Health 2016 Jan 13;3(1):e5 [FREE Full text] [CrossRef] [Medline]
  173. Beintner I, Vollert B, Zarski A, Bolinski F, Musiat P, Görlich D, et al. Adherence reporting in randomized controlled trials examining manualized multisession online interventions: systematic review of practices and proposal for reporting standards. J Med Internet Res 2019 Aug 15;21(8):e14181 [FREE Full text] [CrossRef] [Medline]
  174. Arsenijevic J, Tummers L, Bosma N. Adherence to electronic health tools among vulnerable groups: systematic literature review and meta-analysis. J Med Internet Res 2020 Feb 06;22(2):e11613 [FREE Full text] [CrossRef] [Medline]
  175. Donkin L, Christensen H, Naismith SL, Neal B, Hickie IB, Glozier N. A systematic review of the impact of adherence on the effectiveness of e-therapies. J Med Internet Res 2011;13(3):e52 [FREE Full text] [CrossRef] [Medline]
  176. Stinson J, Wilson R, Gill N, Yamada J, Holt J. A systematic review of internet-based self-management interventions for youth with health conditions. J Pediatr Psychol 2009 Jun;34(5):495-510 [FREE Full text] [CrossRef] [Medline]
  177. Jaeschke R, Singer J, Guyatt GH. Measurement of health status. Ascertaining the minimal clinically important difference. Control Clin Trials 1989 Dec;10(4):407-415. [CrossRef] [Medline]


PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
REDCap: Research Electronic Data Capture


Edited by R Kukafka; submitted 14.10.20; peer-reviewed by B Holtz, K Mateo, B Sindelar, M Bestek; comments to author 13.03.21; revised version received 27.03.21; accepted 23.09.21; published 02.12.21

Copyright

©Amanda S Newton, Sonja March, Nicole D Gehring, Arlen K Rowe, Ashley D Radomski. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 02.12.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.