Viewpoint
Abstract
Engagement in electronic health (eHealth) and mobile health (mHealth) behavior change interventions is thought to be important for intervention effectiveness, though what constitutes engagement and how it enhances efficacy has been somewhat unclear in the literature. Recently published detailed definitions and conceptual models of engagement have helped to build consensus around a definition of engagement and to improve our understanding of how engagement may influence effectiveness. This work has helped to establish a clearer research agenda. However, to test the hypotheses generated by the conceptual models, we need to know how to measure engagement in a valid and reliable way. The aim of this viewpoint is to provide an overview of engagement measurement options that can be employed in eHealth and mHealth behavior change intervention evaluations, to discuss methodological considerations, and to provide direction for future research. To identify measures, we used snowball sampling, starting from systematic reviews of engagement research as well as from measures utilized in studies known to the authors. A wide range of methods to measure engagement were identified, including qualitative measures, self-report questionnaires, ecological momentary assessments, system usage data, sensor data, social media data, and psychophysiological measures. Each measurement method is appraised, and examples are provided to illustrate possible use in eHealth and mHealth behavior change research. Recommendations for future research are provided, based on the limitations of current methods and the heavy reliance on system usage data as the sole assessment of engagement. The validation and adoption of a wider range of engagement measurements and their thoughtful application to the study of engagement are encouraged.
J Med Internet Res 2018;20(11):e292. doi: 10.2196/jmir.9397
Introduction
Electronic health (eHealth) and mobile health (mHealth) behavioral interventions offer wide-reaching support at a low cost, while retaining the capacity to provide comprehensive, ongoing, tailored, and interactive support necessary for improving public health [
, ]. Although there is evidence that eHealth and mHealth behavior change interventions can be effective, low levels of adherence and high levels of attrition have been commonly reported [ - ]. In response, there have been calls to design and implement more engaging interventions to address these concerns [ - ].

It is generally agreed that a certain level of engagement is necessary for intervention effectiveness. However, there is a lack of clarity on how to conceptualize engagement. Some researchers have defined engagement solely as a psychological process relating to user perceptions and experience, whereas others consider engagement a purely behavioral construct, synonymous with intervention usage [
, ]. Consequently, it is often confused with adherence, which refers to whether the intervention is used as intended by the developers [ , , ]. There have also been interdisciplinary differences. Behavioral scientists tend to characterize good engagement as high acceptability, satisfaction, or intervention adherence, whereas computer scientists tend to consider high engagement as a mental state associated with increased attention and enjoyment [ ]. To consolidate these viewpoints and provide a less fragmented foundation for future research, 2 new conceptual models of engagement have been proposed [ , ].

Using a process of expert consensus, Yardley et al [
] proposed distinguishing between micro- and macrolevel engagement when examining the relationships between the user experience, usage, and behavior change. Microlevel engagement refers to the moment-to-moment engagement with the intervention, including the extent of use of the intervention (eg, number of activities completed) and the user experience (eg, level of user interest and attention when completing activities). Macrolevel engagement is defined as the depth of involvement with the behavior change process (eg, extent of motivation for changing behavior) and is linked to the behavioral goals of the intervention. The timing and relationship between micro and macro forms of engagement depend on the intervention, the user, and the broader context. Yardley’s model suggests that after a period of effective engagement at the microlevel, the user may disengage from the platform but still be immersed in the behavior change process. Perski et al [ ] offer a similar but more extensive framework based on a systematic review. Similar to Yardley et al, they define engagement as both the extent of usage and a subjective experience but refine this further by characterizing the subjective experience as being related specifically to attention, interest, and affect. These constructs are said to capture the cognitive and emotional aspects of engagement as they are described in computer science disciplines (eg, flow, immersion, and presence), all of which relate to a level of absorption and preoccupation (see the table of definitions below). According to Perski et al [ ], high engagement influences behavior change through its influence on the determinants of behavior (similar to macroengagement, as described by Yardley et al). Engagement itself is hypothesized to be influenced by intervention features such as content and mode of delivery, contextual features such as the physical environment (eg, internet access), and individual characteristics (eg, internet self-efficacy).

Both Perski et al and Yardley et al extend previous models [
, - ] by considering the interaction between usage and psychological processes. By doing so, both models suggest that intervention usage may be a useful indicator of overall engagement with the intervention but is not a valid indicator of engagement in the behavior change process per se. Perski et al also highlight potential moderators and mediators of the engagement process and outline possible pathways in which engagement can influence overall intervention efficacy. These models serve as useful tools to refine and test hypotheses about how to influence engagement and how engagement impacts efficacy, which is necessary if we are to advance eHealth and mHealth behavioral science. However, an understanding of how to measure engagement is needed to test these models.

Basic overviews of the types of measures to assess engagement in eHealth and mHealth interventions have been provided by Yardley et al [
] as well as Perski et al [ ]. Yardley et al briefly described the potential usefulness of different measurement types, including qualitative measures, self-report questionnaires, ecological momentary assessment, system usage data, sensor data, and psychophysiological measures. Perski et al identified over 100 studies related to engagement and noted the data collection methods used (eg, survey, website logs, and face-to-face interviews) in each study. Our aim is to extend their work by providing a comprehensive overview of the measurement options currently available. Our overall goal is to summarize and appraise measures of engagement used in eHealth and mHealth research and to highlight future areas of research when evaluating engagement in eHealth and mHealth behavior change interventions. We anticipate this will serve as a useful primer for those interested in the study of engagement and help to advance the field of eHealth and mHealth and behavior change by facilitating the use and validation of a wider range of engagement measurements and their thoughtful application to the study of engagement.

Overview of Methods Used to Identify and Assess Engagement Measures
We used a snowballing approach to identify relevant engagement measures. To begin, we extracted measures identified by Perski et al [
] as well as other systematic reviews and published articles known to us through our former work in the field [ - ]. A data extraction table (see Multimedia Appendix 1) focusing on measurement type, engagement domain, and validity information was used to extract, sort, and explore measurement information to aid synthesis. During the writing and revision process, we searched for additional articles using Google Scholar and reran Perski’s [ ] original search strategy on MEDLINE and PsycINFO to identify more recent relevant literature. Readers should, therefore, consider this a comprehensive, but not exhaustive, overview of the literature.

In line with Yardley et al’s suggestions [
], our overview focuses on a wide range of methods to measure engagement. These include qualitative measures, self-report questionnaires, ecological momentary assessment, and psychophysiological measures, as well as the analysis of system usage data, sensor data, and social media data. Methods that capture microlevel constructs were included in our synthesis if they were related to emotional, cognitive, or behavioral aspects of the user experience that could be characterized as interest, attention, affect, or intervention usage. This includes the constructs of flow, cognitive absorption, presence, and immersion, which have been commonly used in other disciplines. An overview of definitions for each of these constructs is provided in the table below. Macrolevel measures were included if they related specifically to engagement in the behavior change process because of the digital intervention or its features. A single author initially drafted each section below, with all other authors providing a critical review.

Construct | Description
Interest | Individual interest is an enduring preference for certain topics and activities. It is impacted by pre-existing knowledge, personal experiences, and emotions. Situational interest is an emotional state brought about by situational stimuli (eg, the unexpectedness of information). It is evoked spontaneously and is presumed to be transitory. Both types of interest are related to liking and willful engagement in a cognitive activity that affects the use of specific learning strategies and how we allocate attention [ , ].
Attention | A state of focused awareness of specific perceptual information [ ]. Focalization and concentration of consciousness are the essence of attention. Paying attention implies withdrawal from some perceptual information to deal effectively with others [ ].
Affect | Affect is an intrinsic part of the sensory experience. It represents how an object or situation impacts how a person feels. It can be described by 2 psychological properties: hedonic valence (pleasure/displeasure) and arousal (activation/sleepy). It can be a central or background feature of consciousness, depending on where and how attention is applied [ , ].
Flow | Flow refers to an optimal state that arises when an individual is deeply absorbed in a task. It is characterized by enjoyment, focused attention, absorption, and distorted time perception and is considered intrinsically rewarding. It assumes the complete absence of negative affect [ ].
Cognitive absorption | Cognitive absorption is a state of deep involvement, similar to flow, though it does not assume intrinsic motivation or the complete absence of negative affect. Cognitive absorption may still occur when a user is frustrated (and, therefore, the experience is not optimal) or extrinsically motivated (eg, by winning a competition with friends) [ ].
Immersion | Immersion is also similar to cognitive absorption and flow, though it is often used to describe a less extreme experience of engagement, one where one may still have some awareness of one’s surroundings [ , ].
Presence | The term presence has been popular since the development of virtual reality technologies. Definitional consensus for presence is still emerging, though it is often described as the psychological sense of being there [ ].
Intervention usage | The extent to which the intervention has been observed or interacted with by the user. It is made up of several components, including frequency of use, time spent on the intervention, and the type of interaction participated in. This is distinct from intended usage, which is the way in which users should utilize the intervention to derive the minimum benefit, as defined by the intervention developers [ ].
Overview of Engagement Measures
Qualitative Methods
Focus Areas
Qualitative measures enable evaluation of micro- and macrolevel engagement and include methods such as focus groups, observations, interviews, and think-aloud activities
(see the table below). At the microlevel, they allow for an in-depth account of the users’ experience of the intervention. At the macrolevel, they can be used to explore the users’ perceptions of how the intervention has helped them to engage in the behavior change process.

Current Use and Future Directions
Qualitative methodologies are commonly employed in the digital health setting to inform the development of interventions (ie, usability testing) and as an evaluation measure (eg, [
- ]). In most cases, the focus of the evaluation has been on perceptions of usability and acceptability, rather than engagement. However, there are some notable exceptions. For example, some studies have used think-aloud measures to understand cognitive processes and emotional reactions when navigating the intervention and viewing intervention content in real time [ - ]. Others have explored users’ flow experiences, adherence, and lived experience of technology using qualitative interviews [ - ], focus groups [ ], or a combination of think-aloud and interview methods [ ].

Along with exploring the direct user experience, qualitative measures are also often used to probe the perceived usefulness of the intervention experience. Although this can relate to macroengagement (eg, by providing insights into how the intervention may have helped the user to achieve behavioral goals), efforts to explore the users’ experience of the behavior change process in more depth are recommended. For example, researchers could explore how certain intervention features impact intentions and self-efficacy and how the relationship between intervention features and changes in psychosocial factors relate to use or disuse. This could be achieved using simple methods such as open-ended items in a questionnaire or more elaborate methods such as postintervention focus groups, which may help users to reflect on how the intervention has or has not engaged them in the behavior change process in more detail. Assessing these constructs at different time points may be particularly fruitful, especially given the cyclical nature of behavior change [
]. Exploring users’ real-time engagement in the behavior change process was achieved in 1 recent study by thematically analyzing participant responses to intervention text messages [ ]. By doing so, the authors were able to demonstrate that the study participants frequently gained positive cognitive and behavioral benefits from the text messages.

Considerations
A limitation of qualitative measures is that the results can be difficult to compare between studies. Results are also often not generalizable, mostly due to sampling bias. Qualitative measures are often used to collect rich data rather than representative data. For this reason, qualitative methods may be particularly suited to help generate hypotheses about engagement, including how engagement relates to efficacy and effectiveness. They may also be useful for exploring hypotheses, especially when the focus is on understanding engagement on an individual level such as in n-of-1 studies [
]. In instances where representative data can be collected, such as in the text messaging study described above [ ], hypothesis testing at the group level may be possible. However, the time and expertise needed to analyze data, which would ideally involve more than 1 person, is a barrier. This may be overcome in the future using machine learning tools to automate the coding of qualitative data [ ]; a minimal sketch of what such automation might look like follows the table below.

Qualitative approach | Description | Example items | Considerations (pros/cons)
Semistructured interviews | Provide an opportunity for sharing of lived experiences and feelings to uncover concealed perceptions related to the digital health intervention or the technology; includes informal conversational interviews (spontaneous; suited to ethnographic research), semistructured interviews (interview guide used to steer otherwise spontaneous conversation), or standardized open-ended interviews (worded questions used for all participants). | Microlevel: Tell us what you think about the content; How did completing that module make you feel?; Please explain your pattern of use; Why did you log on when you did? Macrolevel: Did you notice any change to your thinking as a result of using the ... (“app”)?; What impact did using the ... (“website”) have on how you are going about changing your behavior? | Pros: inform modifications to increase acceptability, interactivity, and tailoring to end-user needs; identify a range of issues associated with use (both short and long term); augment interpretation of quantitative evaluation; generally small sample sizes. Cons: subject to bias (eg, recall and social desirability), especially if leading questions are asked; time consuming to collect and transcribe; time consuming to analyze and often requires more than 1 person to decide on and confirm themes.
Think aloud | Aims to capture the experience of using the technology in real time. The user is provided with a specific task to complete and is observed while they perform the task. The user is prompted to think aloud throughout the process. | Microlevel: Tell me what you are thinking; What are you looking at?; What’s on your mind?; How are you feeling?; Why did you click on this?; Why did you frown/smile/sigh? Macrolevel: Are you learning anything new? | Pros: can be used at various stages of development and implementation to understand how intervention features impact on engagement; occurs in real time, so less subject to recall bias. Cons: subject to observer bias; can be cognitively difficult for participants and requires practice; may require additional resources such as video or sound recording equipment to obtain a comprehensive picture; acquired data can be time consuming and complex to analyze; may be most useful for exploring microlevel engagement.
Focus groups | Used to identify the social and contextual factors in specific population subgroups that influence engagement with the digital health intervention, as well as needs for technological characteristics and operations that promote user alignment and functional utility. | Microlevel: What did you think of the intervention?; Which components caught your attention the most?; What about them caught your attention?; Were there any components that caused frustration?; Did any aspects make you feel guilty? Macrolevel: How often did you think of the intervention during the week?; Was the intervention in the back of your mind?; How did the intervention help or hinder you in reaching your goals? | Pros: allow for spontaneous discussion of topics and subsequent voicing of ideas and perceptions that may go unnoticed in semistructured or structured interviews; can obtain rich data from multiple people at the same time. Cons: subject to group or social desirability bias; some participants may not express themselves as fully in a group situation; requires practice to manage group discussion; can take a long time to transcribe due to interruptions and overlapping speech; time consuming to analyze and often requires more than 1 person to decide on and confirm themes.
To facilitate the use of qualitative measures in the future, a brief overview of example questions by qualitative method type, as well as key considerations, is provided in the table above.
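As noted in the considerations above, machine learning tools may eventually help automate the coding of qualitative data. The sketch below illustrates one way this might look, using scikit-learn's TfidfVectorizer and NMF to surface candidate themes from interview excerpts; the library choice, the excerpts, and the number of themes are illustrative assumptions rather than tools or data drawn from the studies cited, and any themes surfaced this way would still require review by human coders.

```python
# A minimal sketch of machine-assisted coding of qualitative engagement data:
# cluster interview excerpts into candidate themes with TF-IDF + NMF.
# The excerpts are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

excerpts = [
    "The reminders kept me coming back to the app every day",
    "I lost interest once the content started repeating itself",
    "Logging my steps made me think about my goals at work",
    "The quizzes were fun but the videos felt too long",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(excerpts)

# Factorize into a small number of candidate "themes" for human review.
nmf = NMF(n_components=2, random_state=0)
nmf.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top_terms = [terms[j] for j in component.argsort()[-3:][::-1]]
    print(f"Candidate theme {i}: {', '.join(top_terms)}")
```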
Self-Report Questionnaires
Focus Areas
Questionnaires can be used to assess both experiential and behavioral aspects of microlevel engagement as well as aspects of macrolevel engagement.
Current Use and Future Directions
Self-report questionnaires have most often been used to gain insight into users’ subjective experience of digital platforms. Although questionnaire items have often been purpose-built and not subjected to psychometric testing (see Multimedia Appendix 1
), there are a number of more rigorously developed scales. An overview of scales identified by our search [ , - ] is presented in Multimedia Appendix 2. In brief, most scales have been developed to assess subjective experiential engagement with e-commerce websites or video games. Only 2 scales developed specifically for the eHealth and mHealth setting were identified (ie, the eHealth Engagement Scale [ ] and the Digital Behavior Change Intervention Engagement Scale [ ]), and only 1 of these has been validated [ ], whereas validation of the other is currently underway [ ]. Of note, some of the available scales assess attributes posited to predict engagement (eg, aesthetic appeal and usability experience [ - ]) as well as attributes considered to be a part of engagement (interest, attention, and affect). This is particularly the case for scales developed in the e-commerce setting and raises some validity concerns. Several of the scales are also quite long, which may place an undue burden on participants. The development and evaluation of high-quality short questionnaires relevant to eHealth and mHealth are therefore encouraged.

Questionnaires have also been used to assess behavioral aspects of engagement (ie, intervention usage). Although objective behavioral data are often available (see usage data below), questionnaires have been used when this is not the case. For example, a study comparing the relative efficacy of 2 off-the-shelf apps used questionnaires to assess the frequency and time of app use [
]. Although there are several scales with reasonable psychometric properties available for assessing the users’ subjective experience (Multimedia Appendix 2), scales for assessing behavioral aspects of engagement in eHealth and mHealth interventions are lacking. Perski et al’s self-report measure [ ], which includes 2 items on behavioral engagement, is an exception. However, the validity of the measure is still being investigated. Perski’s items and the purpose-built items used by other researchers usually have reasonable face validity (eg, “how many times per week did you use the app?”) but might lead to over- or underreporting depending on how items are phrased [ , ]. The validity of the chosen scale should be considered when interpreting the findings of self-reported behavioral data, and we recommend efforts to test the psychometric properties of developed items before use, if not yet available. This could be achieved by comparing the self-reported data with objectively collected data in a controlled setting (eg, [ ]). The development of self-reported usage questionnaires that complement and provide useful context for objective usage measures should be considered. For example, if time on site or using an app is of interest, questionnaire data may identify cases where the user has left the program running in the background but has not been actively engaged. Likewise, information on behavioral cues at the point of engagement (eg, “what were you doing before you logged your steps using the app?”) may complement usage data and provide a more comprehensive measure of usage patterns. Lessons may be gleaned from the scales developed to assess social networking intensity [ ].

The third use of questionnaires relevant to the study of engagement at the macrolevel is the repeated assessment of psychological mechanisms hypothesized to account for behavioral changes (eg, self-efficacy). The assessment of change in these mechanisms and the conduct of formal mediation analyses have been increasingly encouraged in the behavioral sciences [
, ] to investigate whether interventions are working as intended (ie, that the selected eHealth and mHealth strategies are indeed influencing determinants and changes in determinants are influencing behavior; eg, [ ]). This methodology can be adopted to study engagement. Arguably, a user who demonstrates favorable changes in 1 or more of these determinants can be considered engaged in the behavior change process (eg, self-efficacy significantly increases over time). Furthermore, someone demonstrating changes at a prespecified cut point or whose changes are associated with behavioral outcomes could be said to be engaged effectively. There are a number of pre-existing scales that can be used to assess changes in psychological determinants of behavior (eg, [ - ]) as well as guides for constructing purpose-built questions if existing scales are not suitable (eg, [ , ]). Decisions regarding which psychological constructs to assess should be based on the theoretical underpinning of the intervention and the key intervention objectives and strategies used to achieve them.

Considerations
Overall, questionnaires can be a useful tool for measuring various aspects of engagement in a systematic, standardized, and convenient way. This can allow for easy comparison across studies and between experimental arms [
]. Limitations include questionnaire length (and, therefore, duration of completion); a lack of experiential measures designed and tested within a health context; a lack of focus on the behavioral aspects of engagement; and in some cases, the inclusion of items that measure predictors of engagement within engagement scales.

To select an appropriate scale, an understanding of the different constructs used to describe engagement across disciplines will be necessary (see
the table of construct definitions above). Reviewing the wording of the items and assessing how they will fit within the context of one’s project may further help with scale selection. To this end, example items for each scale summarized above are provided in Multimedia Appendix 3. Most items will need to be adapted for a health setting, and not all scales will be applicable across study types or useful for assessing all aspects of engagement (ie, interest, attention, affect, intervention usage, and involvement in the behavior change process). In some cases, it may be necessary to generate completely new items or a completely new scale. In such cases, researchers are encouraged to report a measure of internal consistency (preferably McDonald omega) and present factor-analytic evidence confirming the dimensionality of the scale [ ]; a minimal example of such reporting is sketched below. Attention should also be given to the length of the scale. This will likely be necessary to minimize missing data. The perceptions of those who drop out of the study are currently often not captured in evaluations of eHealth and mHealth interventions, which is problematic as those who drop out are usually those who have used the intervention the least. Ecological momentary assessments (EMAs; described in more detail below) may be useful to assess relevant engagement parameters regularly during the intervention and give a better impression of engagement throughout use [ ]. Alternatively, selecting a representative subsample to administer surveys to and reimbursing them for their time might be a viable solution.
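To make the reporting recommendation concrete, the sketch below estimates McDonald omega (total) for a hypothetical 5-item engagement scale using a one-factor model. It assumes the third-party factor_analyzer Python package and simulated item responses; neither is prescribed by this article, and a real analysis would also report the factor-analytic evidence itself.

```python
# A minimal sketch of computing McDonald omega (total) from a one-factor
# model. Item responses are simulated from a single latent "engagement"
# factor purely for illustration.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = pd.DataFrame(
    {f"item{i}": 0.7 * latent + rng.normal(scale=0.7, size=500)
     for i in range(1, 6)}
)

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
loadings = fa.loadings_.flatten()          # standardized loadings
uniqueness = 1 - loadings ** 2             # item error variances

omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + uniqueness.sum())
print(f"McDonald omega (total): {omega:.2f}")
```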
Ecological Momentary Assessments

Focus Area
EMAs can be used to assess both experiential and behavioral aspects of microlevel engagement as well as aspects of macrolevel engagement. The main objective of EMAs is to assess behaviors, perceptions, or experiences in real time and as they occur in their natural setting [
]. By prompting users to self-report data at varying times per day, EMAs allow these phenomena to be studied in different contexts and times.

Current Use and Future Directions
In EMAs, short surveys can either be accessed by the user on demand (eg, when logging a recent behavior), sent at specific or random intervals (eg, every 2 hours per day: time-based sampling), or they can be triggered by a certain event (eg, only when an activity tracker indicates the user is performing moderate to vigorous physical activity: event-based sampling). The latter is especially useful to capture rare behaviors, perceptions, or experiences. EMAs are often conducted on smartphone screens, but wearable devices can also be used (eg, CamNtech ProDiary, Philips Actiwatch Spectrum Plus, or Samsung Gear Life) [
].

EMAs have mostly been applied in eHealth and mHealth studies to measure health behavior and determinants (eg, [
, ]). We identified 1 study from previous reviews that used EMA to measure user engagement. This study [ ] used event-based sampling to assess the breaks in levels of presence with a shooter game (not intended to improve health). The events that were sampled consisted of several parts of game play. No validity or reliability information for the slider was explicitly provided.

Despite the limited application of EMAs to measure engagement so far, EMAs may be well suited to study moment-to-moment or microlevel engagement with an intervention [
]. EMAs could provide data-driven insights into reasons for low adherence or dropout. EMAs are usually conducted over a short period with regular measurements over the day or week. However, it is also possible to adjust the timing and measurement intervals to collect longer-term insights into engagement. Contextual data and determinant data provided in EMA may enrich intervention usage data obtained from other sources to provide further insights into reasons for dropout. A minimal sketch of a time-based sampling schedule is shown below.
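To make the sampling options concrete, the sketch below generates a simple time-based schedule with one prompt at a random moment within each of three daily windows; the window boundaries, 7-day period, and random seed are illustrative assumptions. An event-based variant would instead tie prompts to sensor or usage events.

```python
# A minimal sketch of a time-based EMA sampling schedule: one prompt at a
# random moment within each predefined window of the day.
import random
from datetime import datetime, timedelta

WINDOWS = [(9, 12), (12, 17), (17, 21)]   # morning, afternoon, evening

def daily_prompts(day, rng):
    """Draw one random prompt time per window for a given day."""
    prompts = []
    for start_h, end_h in WINDOWS:
        offset = rng.randrange((end_h - start_h) * 60)   # minutes into window
        prompts.append(day.replace(hour=start_h, minute=0)
                       + timedelta(minutes=offset))
    return prompts

rng = random.Random(42)
start = datetime(2018, 1, 1)
schedule = [t for d in range(7)
            for t in daily_prompts(start + timedelta(days=d), rng)]
print(f"{len(schedule)} prompts scheduled; first: "
      f"{schedule[0].strftime('%a %H:%M')}")
```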
Considerations

EMA surveys are intended to be very brief, because the purpose is to capture experiences in the moment and often to collect many data points over time, which can pose a burden to users [
]. Ensuring measures are brief is, therefore, important both for validity and for promoting adherence to the EMA protocol. Recent reviews of adherence to EMA protocols in health settings [ , ] suggest that compliance rates (proportion of EMAs completed) are reasonable (>70%), especially when sampling protocols are easy to follow. This speaks to the feasibility of utilizing this measurement approach; however, data analysis can be challenging for those unfamiliar with intensive longitudinal datasets (for a discussion regarding the challenges of EMA and example analysis approaches, see [ - ]). Advantages of EMAs include less recall bias than retrospective self-reports and potential for high ecological validity, as they study behavior or effects in real-world contexts [ , ].

System Usage Data
Focus Area
System usage data quantitatively capture how the intervention is physically used by each participant. This relates to the behavioral component of microlevel engagement. When paired with other data sources, system usage data can provide insights into how usage patterns, intervention dose, and different adherence rates relate to other aspects of engagement (eg, interest, attention, affect, and changes in determinants) and efficacy and effectiveness outcomes (eg, [
- ]).

Current Use and Future Directions
System usage data are the most commonly collected and reported measures of engagement in eHealth and mHealth interventions [
]. Although the focus has predominantly been on nonusage attrition and overall adherence to the intervention [ , ], more recent studies have begun to explore the multidimensional nature of usage data [ - ], focusing on the depth and type of engagement as well as frequency measures. As the field progresses, it would be helpful to have shared ways of conceptualizing these data, as recent reviews have tended to categorize types of usage data differently using an inductive approach [ , ]. The FITT acronym [ ], which stands for frequency, intensity, time, and type, and is commonly used in physical activity research, might be a useful tool in this sense, especially for considering usage data as an engagement measure a priori. Specific examples of how usage data could be categorized using this principle are given in the table below. Frequency provides information on how often a participant visits the intervention site or uses the app. Intensity measures the strength or depth of engagement with the intervention, for example, the proportion of the intervention site or app features used out of the total available features [ ]. Type refers to the type of engagement; for example, engagement could be categorized as reflective (eg, self-reporting behavior change), altruistic (eg, helping others), or gamified (eg, participating in a challenge) in nature. Type can also be divided into “active” (eg, active input such as when responding to a quiz, self-monitoring, or writing an action plan) or “passive” (eg, an individual can view the intervention without having to interact with it) categories. Time is a measure of the duration of engagement during any single visit or a measure to assess level of exposure as an aggregate over the intervention period.

Examining usage data by aggregating data across the FITT categories can provide greater insights into engagement than focusing on any one domain [
, , ]. For example, although the total time on site for users may appear similar (time data), their intensity data could be meaningfully different, which could lead to differences in engagement profiles (eg, attention, elaboration, and experience [ , ]). Separating users with similar data for time on site but markedly different patterns of use in terms of the type of activities may be helpful for identifying what aspects of the intervention are more engaging than others [ ], what aspects may be more influential for achieving behavior change, and, in addition, whether this is moderated by user profiles (eg, [ ]). The insight obtained from careful examination of system usage data in this way can assist intervention developers with data-driven solutions to encourage engagement [ ]. A minimal sketch of deriving FITT-style metrics from a usage log follows the table below.

FITT principles and example applications:
Frequency of engagement with the intervention [ - ]
- Log-ins (number of log-ins recorded per participant; average log-ins per unit of time or total for intervention duration)
- Visits to the site (number of visits/hits per participant; average per unit of time or total)
Intensity of engagement [ - ]
- Pages viewed (number)
- Lessons or modules viewed (total number, % of prescribed)
- Posts viewed (eg, lurking)
- Number of emails sent
- Number of posts written
- Accessed “Expert forum” (Ask the Expert) to pose a question or seek advice (number)
- Action plan created
- Number of quizzes attempted
Time or duration of engagement with the program [ , ]
- Amount of time spent at each visit per participant (average and total minutes)
- Number of days between first and last log-in (duration or intervention stickiness)
Type of engagement [ , ]
- Reflective (eg, participant recording of behavior or health status)
- Gamified (eg, accepting challenges and sending gifts)
- Altruistic (eg, helping others) or malevolent (eg, trolling others)
- Didactic (eg, reading posts and taking quizzes)
- Active (eg, recording behavior) versus passive (eg, reading posts)
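As flagged above, the sketch below illustrates how FITT-style metrics might be derived from a raw event log using pandas. The log schema and the set of "active" event types are illustrative assumptions that would be defined a priori for a given intervention.

```python
# A minimal sketch of deriving FITT-style engagement metrics per user
# from a raw usage log (invented data).
import pandas as pd

log = pd.DataFrame({
    "user": ["u1", "u1", "u1", "u2", "u2"],
    "timestamp": pd.to_datetime([
        "2018-01-01 10:00", "2018-01-01 10:12", "2018-01-08 09:30",
        "2018-01-02 20:00", "2018-01-02 20:03"]),
    "event": ["login", "quiz_completed", "login", "login", "page_view"],
})

ACTIVE_EVENTS = {"quiz_completed", "action_plan_created", "post_written"}

fitt = log.groupby("user").apply(lambda g: pd.Series({
    "frequency_logins": int((g["event"] == "login").sum()),          # F
    "intensity_distinct_features": g["event"].nunique(),             # I
    "time_days_first_to_last": (g["timestamp"].max()
                                - g["timestamp"].min()).days,        # T
    "type_active_share": g["event"].isin(ACTIVE_EVENTS).mean(),      # T
}))
print(fitt)
```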
Considerations
User behavior in digital health interventions can be tracked by embedding programming code as part of the development process or by using third-party services. For both methods, it is important during software design (or selection) to consider the type of data desired or needed to track behavioral engagement and to ensure the data are adequately captured and can be extracted easily. The most commonly used third-party service is Google Analytics, which can be implemented by connecting to the Google Analytics application programming interface. Google Analytics can be used to collect information on the users’ environment (location, browser, and connection speed) and the users’ behavior (eg, number of page visits, time on site, where users came from, and which page they visited last before exiting [
]). Capturing usage data more specific to the intervention platform, such as participation in a quiz or the percentage of answers correct, requires intentional programming and capture at the level of the software, just as it does in Google Analytics. Before programming, considerable thought should be given to how the usage data will be analyzed, as good tracking generates a large amount of data (ie, every navigational move that every participant has ever made and even the moves they did not make) that can be hard to make sense of; therefore, an a priori analysis plan is recommended. Visualization tools [ ] and engagement indices such as those discussed by Baltierra et al [ ] and Couper et al [ ], or consideration of new data analysis techniques, may be useful to gain insights into the data [ , ]. Although system usage data are often considered objective and reliable, some caution in interpreting the data is recommended. The increasing use of dynamic internet protocol (IP) addresses and virtual private networks (which change or hide your IP address), the use of IP addresses shared by multiple users (eg, via the family computer and internet cafes), and typical browsing behavior (eg, leaving multiple tabs open) may obscure usage data, especially for applications that do not require a unique log-in. This may be less of an issue for mobile apps compared with websites.

Intervention developers should, wherever possible, collect and analyze system usage data. Compared with the usage of other behavioral interventions (eg, a printed booklet), these data can be easily collected with early planning and good data capture techniques. Although usage data do not provide direct information on the psychological form of user engagement [
, ], they can provide some information to help us understand what is engaging about an intervention, and what is not, in an unobtrusive way. There is also some evidence of predictive validity, with technology usage generally correlating with positive behavior change or health outcomes [ , , , ]. However, more research to establish the predictive validity of system usage data is needed, especially given that most analyses to date have lacked a suitable control group.

As with analyzing intensive longitudinal EMA data, the analysis of system usage data can be challenging. This is due to the intensive longitudinal and multidimensional nature of the data as well as the pattern of missingness (which tends to be nonrandom and nonignorable). Recognizing this, a comprehensive analysis plan should be developed before the commencement of the study. Exploration of the data visualization tools, composite engagement metrics, and analysis approaches referenced above might assist with the development of this plan.
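For illustration, the sketch below shows one minimal way purpose-built event capture might be implemented, appending each interaction to a flat file so that FITT-style metrics can be derived later. The schema and storage choice are assumptions made for the example; a production system would more likely log to a database or an analytics service.

```python
# A minimal sketch of purpose-built usage tracking: append each
# interaction as a timestamped event with a schema fixed before launch.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("usage_events.csv")

def log_event(user_id, event, detail=""):
    """Record one usage event for later engagement analysis."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as fh:
        writer = csv.writer(fh)
        if is_new:
            writer.writerow(["utc_time", "user_id", "event", "detail"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         user_id, event, detail])

log_event("u1", "module_completed", "module=3")
log_event("u1", "quiz_answered", "correct=4/5")
```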
It is also recommended that developers consider and outline the intended usage of the intervention. Intended usage is the way in which individuals should experience the intervention to derive maximum benefit, based on the conceptual framework informing intervention design (ie, the developers’ views on how the intervention should work best and for whom). Notably, intended usage may not be the same for all individuals (eg, in adaptive interventions [
, ]). By specifying intended usage a priori and comparing this with observed usage, we can establish whether individuals have adhered to the intervention and, in turn, the impact of adherence on efficacy [ ].
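A simple version of this comparison is sketched below. The intended usage definition (at least 2 log-ins per week over 4 weeks) is invented for the example; the point is that the definition is fixed a priori and observed usage is then scored against it.

```python
# A minimal sketch of scoring observed usage against an a priori
# intended usage specification (invented numbers).
INTENDED_LOGINS_PER_WEEK = 2
N_WEEKS = 4

observed_logins = {"u1": [2, 2, 1, 0], "u2": [3, 2, 2, 2]}

for user, weekly in observed_logins.items():
    adherent_weeks = sum(n >= INTENDED_LOGINS_PER_WEEK for n in weekly)
    print(f"{user}: adherent in {adherent_weeks}/{N_WEEKS} weeks "
          f"({adherent_weeks / N_WEEKS:.0%})")
```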
Sensor Data

Focus Area
Sensors such as global positioning systems (GPS), cameras (eg, facilitating eye tracking analyses), microphones, and accelerometers can unobtrusively monitor users’ behavior and the physical context in which this behavior takes place. They can be provided by the investigator, but many of them are embedded in smartphones or trackers. This relates to the behavioral component of microlevel (eg, information on intervention fidelity) and macrolevel (eg, tracking behavior in real-life settings) engagement.
Current Use and Future Directions
Analyzing sensor data presents an unobtrusive way of measuring engagement that requires no additional time effort from users other than the time spent engaging with the program. The value of sensors lies in being able to track the behavior of many users [
] and to enrich usage information in real-life situations or to combine it with other user engagement measures such as EMA. There are calls for a different evaluation of eHealth and mHealth behavior change interventions than traditional interventions, to more nimbly respond to rapidly changing technologies and user preferences for functionalities [ - ]. Adaptations to eHealth and mHealth interventions are likely to be needed soon after first design and again after first implementation. Information from sensors that automatically track usage in real-life situations can help in measuring engagement with these interventions and in distinguishing between successful mastery of intervention goals and the need for continued engagement [ ]. For example, in physical activity interventions, accelerometer information could continuously monitor the current activity level and indicate whether lower adherence to the intervention should be considered a successful completion or disengagement. Sensor data paired with usage information may thus provide insights into macrolevel engagement as a mediator of positive intervention outcomes. In a similar vein, GPS information can enrich macrolevel engagement measures. GPS gives information on where people use the intervention and where it is less often used. For example, an app designed to facilitate healthy food choices may be used at home or at grocery stores but show lower usage in restaurants. The GPS data give further insight into offline engagement with the intervention goals.

Sensors can also provide an indication of intervention fidelity. For example, distance traveled as measured by GPS and phone cameras taking pictures of meals can indicate whether the intervention is used in the appropriate manner and context [
]. The combination of usage data and commonly included sensors can provide more detailed measures of real-life user engagement than usage information by itself. Sensor data can, moreover, trigger the event-based form of EMA, as sketched below. For example, users may be prompted to indicate their engagement with the intervention when the accelerometer shows the person is physically inactive, or user engagement may be assessed when GPS data show the person is in a certain physical context (eg, at a bar, where there is a personal risk of smoking or alcohol consumption).
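The sketch below illustrates the sensor-triggered idea in its simplest form: an accelerometer stream is monitored and an EMA prompt fires only after a sustained period of inactivity. The threshold, window length, and simulated stream are illustrative assumptions.

```python
# A minimal sketch of event-based (sensor-triggered) EMA: prompt only
# after a full window of low accelerometer counts.
INACTIVITY_THRESHOLD = 50   # counts per minute, illustrative cutoff
WINDOW_MINUTES = 30

def should_prompt(counts_per_minute):
    """Trigger an EMA prompt after a sustained inactive window."""
    recent = counts_per_minute[-WINDOW_MINUTES:]
    return (len(recent) == WINDOW_MINUTES
            and all(c < INACTIVITY_THRESHOLD for c in recent))

stream = [120] * 10 + [20] * 30   # simulated: active, then sedentary
if should_prompt(stream):
    print("Prompt: How engaged do you feel with the program right now?")
```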
Considerations

A challenge of using GPS data for this purpose is the time-intensive nature of GPS data preparation and analysis. This will likely get easier in the future as new analysis packages become available to facilitate automation. Sensors, moreover, have the advantage of presenting a low level of respondent burden. However, especially with context-aware sensing using GPS, users are concerned about privacy issues [
, ]. In addition, sensors integrated in smartphones tend to negatively impact the battery life of the mobile device, and users may, therefore, be less compliant with running these sensors on their phones. This may especially be the case when users are skeptical toward the accuracy and relevance of context-aware smartphone sensing [ ]. Therefore, communicating research findings about the validity of such measures [ , ] and conducting pilot tests and validity studies of new measures may be necessary to increase their use in future interventions and optimize uptake among participants.

Social Media
Focus Area
Another unobtrusive, low-burden approach to capturing engagement with the intervention is to analyze users’ social media patterns. In social media, users create online communities (eg, social networking sites) via which they share information, opinions, personal messages, or visual material. Despite the interest of behavior change professionals in using social media to increase intervention effectiveness (see eg, [
]), to our knowledge, little research is available on the use of social media to measure engagement with eHealth and mHealth. The available resources mostly come from marketing and media audience research [ , ]. Social media message threads may provide useful information on user experience (microlevel engagement with the intervention) but might also provide insights into macrolevel engagement (eg, wall posts on behavioral achievements).

Current Use and Future Directions
One study examined the number of wall posts made over time as an indication of engagement with a social networking physical activity intervention [
]. An approach to reduce the burden of analysis is to use markers: previously nonexistent words launched exclusively within the intervention [ ]. These markers are used to trace any conversation that takes place on social media in relation to the intervention and are a way to measure the social proliferation associated with the intervention content. An example comes from a video intervention on cognitive problems that may result from being a victim of violence [ ]. To clearly identify all conversations and mentions on social media that would result from this topic, the developers launched the word falterhead to describe how the main character experienced the negative effects on his brain functioning after being violently attacked. This marker allowed quick identification of all social media content related to the program, as this nonexistent word is unlikely to occur in content unrelated to the intervention. Several social media sources are then searched with text- and data-mining tools (eg, HowardsHome Finchline) for the occurrence and content of messages that contain these markers; a minimal sketch of this kind of marker tracing is shown below. The messages are next analyzed in terms of quantity (eg, Is the intervention being talked about? What are the patterns of social proliferation over time?) and quality (eg, How is the topic mentioned or discussed? Is this how we wished viewers would think and talk about the intervention?). Social media messages relating to the eHealth and mHealth intervention might also be analyzed for the occurrence of certain social media engagement profiles. On a continuum from passive and uninterested to more active and engaged, profiles of lurkers, casuals, actives, committed, and loyalists can be distinguished. Although, to our knowledge, this has not yet been applied to analyze engagement with eHealth and mHealth behavior change interventions, interventions attracting more actives, committed, and loyalists on social media might indicate higher user engagement than those attracting more lurkers and casuals [ ]. This might especially be useful to assess comments on engagement in behavior change programs in real-life settings.
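The sketch below illustrates marker-based tracing in its simplest form, counting dated mentions of the falterhead marker in a handful of simulated messages; a real analysis would pull messages from social media platforms or a commercial monitoring tool.

```python
# A minimal sketch of marker-based social media tracing: count dated
# mentions of an intervention-specific marker word (simulated messages).
import re
from collections import Counter

MARKER = re.compile(r"\bfalterhead\b", re.IGNORECASE)

messages = [
    ("2018-03-01", "Just watched the falterhead video, hit close to home"),
    ("2018-03-02", "No idea what a falterhead is but that clip was powerful"),
    ("2018-03-02", "Great night out with friends"),
]

mentions_per_day = Counter(date for date, text in messages
                           if MARKER.search(text))
for day, n in sorted(mentions_per_day.items()):
    print(f"{day}: {n} marker mention(s)")
```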
Considerations

The vast amount of social media content may make it difficult to extract what is relevant to the intervention. The markers mentioned earlier and audit tools are useful to facilitate such social media analyses. Examples of free audit tools to analyze social media are Sprout Social, Simply Measured, Instagram Insights, and Union Metrics. The free statistical software program R also has many packages to analyze social media data. The analysis of these social media patterns requires a combination of qualitative techniques, to assess discussion or post sentiment and topic, and quantitative methods, for example, to assess reach by combining the number of followers for each mention on social media [
]. Text analytic tools available in many statistical packages such as R and SAS may also be useful here.

Psychophysiological Measures
Focus Area
Psychophysiological methods of measurement are used to examine the relationship between physiology and overt behavior or cognitive processes and variables. Psychophysiological measures are operationalizations of cognitive processes or variables, just as self-reported questionnaires are used to measure processes or variables derived from theory [
]. They have been shown to be valuable approaches for measuring the experiential aspects of microengagement [ ].

Current Use and Future Directions
There are several types of psychophysiological measures used to study cognitive and affective processes (for a comprehensive overview of measures used in human-computer interaction and user experience research, see [
- ]). We describe the 2 most common methods with a strong temporal resolution (ie, electroencephalography [EEG] and eye-tracking). A strong temporal resolution (ie, precision of measurement with respect to time) is warranted to investigate engagement over time. It needs to be stressed, however, that other methods show promising results as well [ - ]. For example, predicting engagement using a novel visual analysis approach to recognize affect performed significantly better than or on par with using self-reports [ ]. The methods presented here are noninvasive but obtrusive in comparison with, for example, most measurements of system usage data. These methods are mostly used in laboratory settings and during intervention development (eg, pretesting of a website), but the opportunities to use them in field settings are increasing (eg, [ ]). Moreover, it is also possible to use these methods in parallel with a trial or afterward to gain more insight into user engagement and, thereby, shed more light on trial findings.

EEG records electrical activity in the brain using small, flat metal discs (electrodes) attached to a person’s scalp. Using this method requires adequate expertise, both in terms of measurement [
] and analysis [ ] of data. Event-related potentials (ERPs) are the average changes in the EEG signal in response to a stimulus, and characteristic ERP responses are referred to as components [ ]. For example, Leiker et al [ ], in a study on motion-controlled video games, focused on the amplitude of a specific component (labeled eP3a), which is a reliable index of attentional reserve [ , ]. This study revealed that participants who reported higher levels of engagement (as measured by the Intrinsic Motivation Inventory) showed a smaller eP3a, which is indicative of paying more attention to the primary task (eg, playing the game). Another study revealed that late negative slow wave components of the ERP were indicative of attention, which was partly confirmed by findings from self-reports (ie, the Immersive Experience Questionnaire) [ ].

Eye-tracking is based on the strong association between eye movements and attention [
]. It is a suitable method to assess the course of attention over time [ ]. For example, fixation data from an experimental study revealed that participants’ eye movements in the immersive condition decreased over time, which is indicative of increased attention [ ]. Another example is a study comparing a video with a text condition of a physical activity intervention. This study revealed that participants in the video condition displayed greater attention to the physical activity feedback in terms of gaze duration, total fixation duration, and focusing on feedback [ ]. Another study using eye-tracking found that participants focused more on certain experimentally manipulated aspects of a health-related website (ie, in terms of frequency and duration), but this did not affect usage data (ie, the number of pages visited or the time on the website) [ ]. It might be that these aspects attract attention, but there is a trade-off in the sense that participants then focus less on other aspects of the website. However, it could also be that attention only partly predicts engagement. A minimal sketch of summarizing fixation data into attention metrics is shown below.
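As flagged above, the sketch below reduces raw fixation data to two common attention metrics (fixation count and total fixation duration) for a single area of interest (AOI), such as a feedback panel. The AOI bounds and fixation records are invented for the example; real studies would export these from eye-tracking software.

```python
# A minimal sketch of summarizing eye-tracking fixations for one area of
# interest (AOI); coordinates and durations are invented.
AOI = {"x": (100, 400), "y": (50, 250)}   # pixel bounds of a feedback panel

# (x, y, duration_ms) fixation records from one viewing session
fixations = [(150, 120, 310), (520, 300, 180), (230, 90, 450), (160, 200, 220)]

def in_aoi(x, y):
    return (AOI["x"][0] <= x <= AOI["x"][1]
            and AOI["y"][0] <= y <= AOI["y"][1])

aoi_fixations = [f for f in fixations if in_aoi(f[0], f[1])]
total_dwell_ms = sum(duration for _, _, duration in aoi_fixations)

print(f"Fixations on AOI: {len(aoi_fixations)} of {len(fixations)}")
print(f"Total fixation duration on AOI: {total_dwell_ms} ms")
```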
Considerations

With regard to both EEG and eye-tracking, it is important to note that attention is only the first appraisal in the process of engagement [
]. There are other psychophysiological methods besides EEG and eye-tracking that are mostly focused on measuring arousal. A previous study, for example, recorded electrodermal activity (EDA) and facial muscle activity (electromyography [EMG]) in addition to a Game Experience Questionnaire [ ]. The association between these measures, however, was not straightforward. For example, EMG orbicularis oculi (periocular) activity is usually used to indicate positive emotions and high arousal but was negatively correlated with competence (which is a positive dimension of the Game Experience Questionnaire). Another study measured engagement in 5 different ways: self-reports using 4 dimensions of the Temple Presence Inventory, content analyses of user videos, EDA, mouse movements, and click logs (the latter 2 are measurements of usage data) [ ]. These 5 measures correlated in limited ways. The authors concluded that “engagements as a construct is more complex than is captured in any of these measures individually and that using multiple methods to assess engagement can illuminate aspects of engagement not detectable by a single method of measurement” [ ].

This is indicative of the complexity of engagement as a construct and reflects recent calls from the human-computer interaction field for future studies to identify valid combinations of psychophysiological measures that more fully capture the multidimensional nature of engagement [
].

Discussion
It is generally agreed that some form of engagement is necessary for eHealth and mHealth behavior change interventions to be effective. However, cohesive and in-depth knowledge about how to develop engaging interventions, and about the pathways between engagement and efficacy, is lacking. Several models of engagement have been proposed in the literature to address this deficit, but little testing of the models has been conducted. To support research in this area and progress the science of user engagement, we aimed to provide a comprehensive overview of the measurement options available to assess engagement in an eHealth and mHealth behavioral intervention setting. The overview should not be treated as exhaustive; however, it should serve as a useful point of reference when considering engagement measures for behavioral eHealth and mHealth research.
The best measurement approach will likely depend on the stage of research and the specific research context, although there are benefits from using multiple methods and pairing the data (eg, self-report data relating to interest, attention, and affect combined with system usage data). It is also important to check, before data collection, whether the expertise needed for the chosen methods (eg, EEG) is available. Given the complexity of engagement as a construct, using multiple methods may be necessary to illuminate it fully [
, ]. At present, most studies in the eHealth and mHealth behavioral intervention space rely on system usage data only. Although system usage data are undoubtedly a valuable engagement marker, they are not considered a valid measure of micro- or macroengagement on their own [ , ]. Greater efforts are needed to also assess the psychological aspects of engagement to better understand the interplay between perceptions, usage, and efficacy.

Questionnaires are perhaps the most accessible way to assess microlevel engagement in terms of cost. However, there is currently a lack of validated self-report questionnaires specific to the eHealth and mHealth behavior change intervention context. This is reflected in the large number of purpose-built questionnaires (ie, questionnaires designed for a specific study) that have been used to date [
]. As the main benefit of questionnaires is that they allow for the collection of subjective data in a standardized way, greater efforts are needed to develop and implement standard items. Although not yet validated, the questionnaire developed by Perski et al [ ] is promising in this regard, as it includes constructs related to both psychological and behavioral aspects of engagement and only focuses on engagement constructs. The other questionnaires identified focus only on the psychological aspects of engagement, and some include constructs more aligned with standard acceptability items (eg, perceived credibility), rather than the constructs of interest, attention, and affect. It may be best to avoid these questionnaires when testing models that hypothesize that acceptability markers influence engagement parameters.

There are several other measures of engagement that may also be used to test engagement models (eg, sensors, social media data, EMA, and psychophysiological measures). Despite their potential advantages, little research has been conducted exploring their use (and validity) in the digital behavior change setting. This is likely due to higher cost, time, and data analysis requirements relative to other measures. To mitigate this, behavioral researchers are increasingly drawing on expertise across other relevant disciplines (eg, informatics, human-computer interaction, and experimental and cognitive psychology). It is hoped that this paper will help to facilitate this research, especially research establishing the criterion validity, as well as the divergent and predictive validity, of these measures.
Overall, establishing the validity of engagement measures across multiple settings and learning how to triangulate measures in a complementary way are necessary next steps to advance the field. This will allow us to thoroughly test contemporary models of user engagement and, hence, deepen our understanding of the interplay between intervention perceptions, usage, and efficacy across different settings.
Acknowledgments
The authors would like to thank Celine Chong for her assistance in extracting data from the literature and reviewing engagement measures. Celine was supported by a Freemasons Foundation Centre for Men's Health summer scholarship. CES was supported by a National Health and Medical Research Council (NHMRC) Early Career Research fellowship (ID 1090517). LP is funded by the Research Foundation Flanders (FWO). CV is funded through a Future Leader Fellowship (ID 100427) from the National Heart Foundation of Australia. CM is supported by an NHMRC Career Development fellowship (ID 1125913). AD is supported by Research Foundation Flanders grants (FWO16/PDO/060, 12H6717N).
Authors' Contributions
CES conceived of the idea for this viewpoint. CES, AD, RC, CW, and SLW defined the scope of the manuscript and drafted the initial sections and revisions. CM, AMM, AM, PAW, CV, LP, and MDH provided critical review, refined the scope, and contributed to redrafting and editing of the manuscript.
Conflicts of Interest
None declared.
Multimedia Appendix 1: Initial data extraction table.
XLSX File (Microsoft Excel File), 35 KB
Multimedia Appendix 2: Self-report questionnaires for measuring microlevel engagement.
PDF File (Adobe PDF File), 78 KB
Multimedia Appendix 3: Example items in self-report questionnaires for measuring microlevel engagement.
PDF File (Adobe PDF File), 51 KB
References
- Vandelanotte C, Müller AM, Short CE, Hingle M, Nathan N, Williams SL, et al. Past, present, and future of eHealth and mHealth research to improve physical activity and dietary behaviors. J Nutr Educ Behav 2016 Mar;48(3):219-228.e1. [CrossRef] [Medline]
- Michie S, Yardley L, West R, Patrick K, Greaves F. Developing and evaluating digital interventions to promote behavior change in health and health care: recommendations resulting from an international workshop. J Med Internet Res 2017 Jun 29;19(6):e232 [FREE Full text] [CrossRef] [Medline]
- Kelders SM, Kok RN, Ossebaard HC, Van Gemert-Pijnen JE. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J Med Internet Res 2012 Nov 14;14(6):e152 [FREE Full text] [CrossRef] [Medline]
- Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med 2017 Dec;7(2):254-267 [FREE Full text] [CrossRef] [Medline]
- Yardley L, Spring BJ, Riper H, Morrison LG, Crane DH, Curtis K, et al. Understanding and promoting effective engagement with digital behavior change interventions. Am J Prev Med 2016 Dec;51(5):833-842. [CrossRef] [Medline]
- Short CE, Rebar AL, Plotnikoff RC, Vandelanotte C. Designing engaging online behaviour change interventions: a proposed model of user engagement. Eur Health Psychol 2015;17(1):32-38 [FREE Full text]
- Walton H, Spector A, Tombor I, Michie S. Measures of fidelity of delivery of, and engagement with, complex, face-to-face health behaviour change interventions: a systematic review of measure quality. Br J Health Psychol 2017 Dec;22(4):872-903 [FREE Full text] [CrossRef] [Medline]
- Sieverink F, Kelders SM, van Gemert-Pijnen JE. Clarifying the concept of adherence to eHealth technology: systematic review on when usage becomes adherence. J Med Internet Res 2017 Dec 06;19(12):e402 [FREE Full text] [CrossRef] [Medline]
- Ryan C, Bergin M, Wells JS. Theoretical perspectives of adherence to web-based interventions: a scoping review. Int J Behav Med 2018 Dec;25(1):17-29. [CrossRef] [Medline]
- O'Brien H, Toms EG. What is user engagement? A conceptual framework for defining user engagement with technology. J Am Soc Inf Sci Technol 2008 Apr;59(6):938-955. [CrossRef]
- Crutzen R, Ruiter R. Interest in behavior change interventions: a conceptual model. The European Health Psychologist 2015;17(1):a-11.
- Saket B, Stasko J. Beyond usability and performance: a review of user experience-focused evaluations in visualization. In: Proceedings of the Sixth Workshop on Beyond Time and Errors on Novel Evaluation Methods for Visualization. 2016 Presented at: BELIV '16; October 24, 2016; Baltimore, MD, USA URL: http://bahadorsaket.com/publication/BELIV2016.pdf
- Denisova A, Nordin AI, Cairns P. The Convergence of Player Experience Questionnaires. In: Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. 2016 Presented at: CHI PLAY '16; October 16 - 19, 2016; Austin, Texas, USA p. 33-37.
- Boyle EA, Connolly TM, Hainey T, Boyle JM. Engagement in digital entertainment games: a systematic review. Comput Human Behav 2012 May;28(3):771-780. [CrossRef]
- Schiefele U. Interest, learning, and motivation. Educ Psychol 1991 Jun;26(3-4):299-323. [CrossRef]
- Schraw G, Lehman S. Situational interest: a review of the literature and directions for future research. Educ Psychol Rev 2001;13(1):23-52. [CrossRef]
- Gerrig R, Zimbardo P. Psychology and Life. CA, United States: Pearson; 2012.
- James W. In: Miller G, editor. The Principles of Psychology. New York: Dover Publications; 1950.
- Duncan S, Barrett LF. Affect is a form of cognition: a neurobiological analysis. Cogn Emot 2007 Sep;21(6):1184-1211 [FREE Full text] [CrossRef] [Medline]
- Barrett LF, Russell JA. The structure of current affect: controversies and emerging consensus. Curr Dir Psychol Sci 1999;8(1):10-14. [CrossRef]
- Csikszentmihalyi M. Flow: The Psychology of Optimal Experience. New York: Harper Perennial; 1990.
- Brockmyer JH, Fox CM, Curtiss KA, McBroom E, Burkhart KM, Pidruzny JN. The development of the Game Engagement Questionnaire: a measure of engagement in video game-playing. J Exp Soc Psychol 2009 Jul;45(4):624-634. [CrossRef]
- Baños RM, Botella C, Alcañiz M, Liaño V, Guerrero B, Rey B. Immersion and emotion: their impact on the sense of presence. Cyberpsychol Behav 2004 Dec;7(6):734-741. [CrossRef] [Medline]
- Lombard M, Ditton T. At the heart of it all: the concept of presence. J Comput Mediat Commun 1997 Sep 1;3(2). [CrossRef]
- Dennison L, Morrison L, Conway G, Yardley L. Opportunities and challenges for smartphone applications in supporting health behavior change: qualitative study. J Med Internet Res 2013 Apr 18;15(4):e86 [FREE Full text] [CrossRef] [Medline]
- Milward J, Khadjesari Z, Fincham-Campbell S, Deluca P, Watson R, Drummond C. User preferences for content, features, and style for an app to reduce harmful drinking in young adults: analysis of user feedback in app stores and focus group interviews. JMIR Mhealth Uhealth 2016 May 24;4(2):e47 [FREE Full text] [CrossRef] [Medline]
- Kernot J, Olds T, Lewis LK, Maher C. Usability testing and piloting of the Mums Step It Up program--a team-based social networking physical activity intervention for women with young children. PLoS One 2014;9(10):e108842 [FREE Full text] [CrossRef] [Medline]
- Short C, James EL, Rebar AL, Duncan MJ, Courneya KS, Plotnikoff RC, et al. Designing more engaging computer-tailored physical activity behaviour change interventions for breast cancer survivors: lessons from the iMove More for Life study. Support Care Cancer 2017 Dec;25(11):3569-3585. [CrossRef] [Medline]
- Kirwan M, Duncan MJ, Vandelanotte C, Mummery WK. Design, development, and formative evaluation of a smartphone application for recording and monitoring physical activity levels: the 10,000 Steps “iStepLog”. Health Educ Behav 2013 Apr;40(2):140-151. [CrossRef] [Medline]
- Perski O, Blandford A, Ubhi HK, West R, Michie S. Smokers' and drinkers' choice of smartphone applications and expectations of engagement: a think aloud and interview study. BMC Med Inform Decis Mak 2017 Dec 28;17(1):25 [FREE Full text] [CrossRef] [Medline]
- Bradbury K, Morton K, Band R, van Woezik A, Grist R, McManus RJ, et al. Using the person-based approach to optimise a digital intervention for the management of hypertension. PLoS One 2018;13(5):e0196868 [FREE Full text] [CrossRef] [Medline]
- Crane D, Garnett C, Brown J, West R, Michie S. Factors influencing usability of a smartphone app to reduce excessive alcohol consumption: Think Aloud and Interview Studies. Front Public Health 2017;5:39 [FREE Full text] [CrossRef] [Medline]
- Alkhaldi G, Modrow K, Hamilton F, Pal K, Ross J, Murray E. Promoting engagement with a digital health intervention (HeLP-Diabetes) using email and text message prompts: mixed-methods study. Interact J Med Res 2017 Aug 22;6(2):e14 [FREE Full text] [CrossRef] [Medline]
- El-Hilly A, Iqbal SS, Ahmed M, Sherwani Y, Muntasir M, Siddiqui S, et al. Game On? Smoking cessation through the gamification of mHealth: a longitudinal qualitative study. JMIR Serious Games 2016 Oct 24;4(2):e18 [FREE Full text] [CrossRef] [Medline]
- Hwang M, Hong J, Hao Y, Jong J. Elders' usability, dependability, and flow experiences on embodied interactive video games. Educ Gerontol 2011 Aug;37(8):715-731. [CrossRef]
- Morrison L, Moss-Morris R, Michie S, Yardley L. Optimizing engagement with Internet-based health behaviour change interventions: comparison of self-assessment with and without tailored feedback using a mixed methods approach. Br J Health Psychol 2014 Nov;19(4):839-855 [FREE Full text] [CrossRef] [Medline]
- Horsch C, Lancee J, Beun RJ, Neerincx MA, Brinkman WP. Adherence to technology-mediated insomnia treatment: a meta-analysis, interviews, and focus groups. J Med Internet Res 2015 Sep 04;17(9):e214 [FREE Full text] [CrossRef] [Medline]
- Ritterband LM, Thorndike FP, Cox DJ, Kovatchev BP, Gonder-Frederick LA. A behavior change model for internet interventions. Ann Behav Med 2009 Aug;38(1):18-27 [FREE Full text] [CrossRef] [Medline]
- Irvine L, Melson AJ, Williams B, Sniehotta FF, McKenzie A, Jones C, et al. Real time monitoring of engagement with a text message intervention to reduce binge drinking among men living in socially disadvantaged areas of Scotland. Int J Behav Med 2017 Dec;24(5):713-721 [FREE Full text] [CrossRef] [Medline]
- McDonald S, Quinn F, Vieira R, O'Brien N, White M, Johnston DW, et al. The state of the art and future opportunities for using longitudinal n-of-1 methods in health behaviour research: a systematic literature overview. Health Psychol Rev 2017 Dec;11(4):307-323. [CrossRef] [Medline]
- Crowston K, Liu X, Allen EE. Machine learning and rule-based automated coding of qualitative data. 2010 Presented at: ASIS&T '10 Proceedings of the 73rd ASIS&T Annual Meeting on Navigating Streams in an Information Ecosystem; October 22-27, 2010; Pittsburgh, Pennsylvania p. 1-2 URL: https://pdfs.semanticscholar.org/2f4a/60cb9abf8da062f362c4c47819cc16471bcb.pdf
- Lefebvre C, Tada Y, Hilfiker SW, Baur C. The assessment of user engagement with eHealth content: the eHealth engagement scale. J Comput Mediat Commun 2010;15(4):666-681. [CrossRef]
- Perski O. Osf. 2017. Study protocol: Development and psychometric evaluation of a self-report instrument to measure engagement with digital behaviour change interventions URL: https://osf.io/cj9y7/ [accessed 2018-07-13] [WebCite Cache]
- O'Brien HL, Toms EG. The development and evaluation of a survey to measure user engagement. J Am Soc Inf Sci 2009 Oct 19;61(1):50-69. [CrossRef]
- Laugwitz B, Held T, Schrepp M. Construction and Evaluation of a User Experience Questionnaire. 2008 Presented at: Symposium of the Austrian HCI and Usability Engineering Group USAB 2008: HCI and Usability for Education and Work; November 20-21, 2008; Graz, Austria p. 63-76. [CrossRef]
- Jackson S, Marsh HW. Development and validation of a scale to measure optimal experience: the flow state scale. J Sport Exerc Psychol 1996 Mar;18(1):17-35. [CrossRef]
- Direito A, Jiang Y, Whittaker R, Maddison R. Apps for IMproving FITness and increasing physical activity among young people: the AIMFIT pragmatic randomized controlled trial. J Med Internet Res 2015 Aug 27;17(8):e210 [FREE Full text] [CrossRef] [Medline]
- Boase J, Ling R. Measuring mobile phone use: self-report versus log data. J Comput-Mediat Comm 2013 Jun 10;18(4):508-519. [CrossRef]
- Scharkow M. The accuracy of self-reported internet use—a validation study using client log data. Commun Methods Meas 2016 Mar 24;10(1):13-27. [CrossRef]
- Sigerson L, Cheng C. Scales for measuring user engagement with social network sites: A systematic review of psychometric properties. Comput Human Behav 2018 Jun;83:87-105. [CrossRef]
- MacKinnon D, Fairchild AJ, Fritz MS. Mediation analysis. Annu Rev Psychol 2007;58:593-614 [FREE Full text] [CrossRef] [Medline]
- Murray J, Brennan SF, French DP, Patterson CC, Kee F, Hunter RF. Mediators of behavior change maintenance in physical activity interventions for young and middle-aged adults: a systematic review. Ann Behav Med 2018 May 18;52(6):513-529. [CrossRef] [Medline]
- Rhodes R, Pfaeffli LA. Mediators of physical activity behaviour change among adult non-clinical populations: a review update. Int J Behav Nutr Phys Act 2010 May 11;7:37 [FREE Full text] [CrossRef] [Medline]
- Dewar DL, Lubans DR, Morgan PJ, Plotnikoff RC. Development and evaluation of social cognitive measures related to adolescent physical activity. J Phys Act Health 2013 May;10(4):544-555. [CrossRef] [Medline]
- Rhodes R, Hunt Matheson D, Mark R. Evaluation of social cognitive scaling response options in the physical activity domain. Meas Phys Educ Exerc Sci 2010 Jul 28;14(3):137-150. [CrossRef]
- Hall E, Chai W, Koszewski W, Albrecht J. Development and validation of a social cognitive theory-based survey for elementary nutrition education program. Int J Behav Nutr Phys Act 2015 Apr 09;12:47 [FREE Full text] [CrossRef] [Medline]
- Francis J, Johnston M, Eccles M, Walker A, Grimshaw JM, Foy R, et al. Constructing questionnaires based on the theory of planned behaviour: a manual for Health Services Researchers. Newcastle upon Tyne, UK: Centre for Health Service Research; 2004. URL: http://openaccess.city.ac.uk/1735/1/TPB%20Manual%20FINAL%20May2004.pdf [WebCite Cache]
- Bandura A. Guide for constructing self-efficacy scales. In: Pajares F, Urdan T, editors. Self-Efficacy Beliefs of Adolescents. Greenwich, CT: Information Age Publishing; 2006:307-337.
- Crutzen R, Peters GJ. Scale quality: alpha is an inadequate estimate and factor-analytic evidence is needed first of all. Health Psychol Rev 2017 Dec;11(3):242-247. [CrossRef] [Medline]
- Doherty K, Doherty G. The construal of experience in HCI: understanding self-reports. Int J Hum Comput Stud 2018 Feb;110:63-74. [CrossRef]
- Reis HT. Why Researchers Should Think “Real World”: A Conceptual Rationale. In: Mehl MR, Conner TS, editors. Handbook of Research Methods for Studying Daily Life. New York: The Guilford Press; 2013:3-22.
- Hernandez J, McDuff D, Infante C, Maes P, Quigley K, Picard R. Wearable ESM: differences in the experience sampling method across wearable devices. 2016 Presented at: MobileHCI '16 Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services; September 6-9, 2016; Florence, Italy p. 195-205.
- Dunton GF, Liao Y, Intille SS, Spruijt-Metz D, Pentz M. Investigating children's physical activity and sedentary behavior using ecological momentary assessment with mobile phones. Obesity (Silver Spring) 2011 Jun;19(6):1205-1212 [FREE Full text] [CrossRef] [Medline]
- Fanning J, Mackenzie M, Roberts S, Crato I, Ehlers D, McAuley E. Physical activity, mind wandering, affect, and sleep: an ecological momentary assessment. JMIR Mhealth Uhealth 2016 Aug 31;4(3):e104 [FREE Full text] [CrossRef] [Medline]
- Chung J, Gardner HJ. Temporal presence variation in immersive computer games. Int J Hum Comput Interact 2012;28(8):511-529. [CrossRef]
- Wen C, Schneider S, Stone AA, Spruijt-Metz D. Compliance with mobile ecological momentary assessment protocols in children and adolescents: a systematic review and meta-analysis. J Med Internet Res 2017 Dec 26;19(4):e132 [FREE Full text] [CrossRef] [Medline]
- Cain AE, Depp CA, Jeste DV. Ecological momentary assessment in aging research: a critical review. J Psychiatr Res 2009 Jul;43(11):987-996 [FREE Full text] [CrossRef] [Medline]
- Modecki KL, Mazza GL. Are we making the most of ecological momentary assessment data? A comment on Richardson, Fuller-Tyszkiewicz, O'Donnell, Ling, & Staiger, 2017. Health Psychol Rev 2017 Dec;11(3):295-297. [CrossRef] [Medline]
- Richardson B, Fuller-Tyszkiewicz M, O'Donnell R, Ling M, Staiger PK. Regression tree analysis of ecological momentary assessment data. Health Psychol Rev 2017 Dec;11(3):235-241. [CrossRef] [Medline]
- Ginexi EM, Riley W, Atienza AA, Mabry PL. The promise of intensive longitudinal data capture for behavioral health research. Nicotine Tob Res 2014 May;16 Suppl 2:S73-S75 [FREE Full text] [CrossRef] [Medline]
- Hamaker EL, Wichers M. No time like the present: discovering the hidden dynamics in intensive longitudinal data. Curr Dir Psychol Sci 2017 Feb 08;26(1):10-15. [CrossRef]
- Burke L, Shiffman S, Music E, Styn MA, Kriska A, Smailagic A, et al. Ecological momentary assessment in behavioral research: addressing technological and human participant challenges. J Med Internet Res 2017 Dec 15;19(3):e77 [FREE Full text] [CrossRef] [Medline]
- Kelders S, Van Gemert-Pijnen JE, Werkman A, Nijland N, Seydel ER. Effectiveness of a Web-based intervention aimed at healthy dietary and physical activity behavior: a randomized controlled trial about users and usage. J Med Internet Res 2011 Apr 14;13(2):e32 [FREE Full text] [CrossRef] [Medline]
- Danaher BG, Boles SM, Akers L, Gordon JS, Severson HH. Defining participant exposure measures in Web-based health behavior change programs. J Med Internet Res 2006 Aug 30;8(3):e15 [FREE Full text] [CrossRef] [Medline]
- Graham ML, Strawderman MS, Demment M, Olson CM. Does usage of an eHealth intervention reduce the risk of excessive gestational weight gain? Secondary analysis from a randomized controlled trial. J Med Internet Res 2017 Dec 09;19(1):e6 [FREE Full text] [CrossRef] [Medline]
- Mattila E, Lappalainen R, Välkkynen P, Sairanen E, Lappalainen P, Karhunen L, et al. Usage and dose response of a mobile acceptance and commitment therapy app: secondary analysis of the intervention arm of a randomized controlled trial. JMIR Mhealth Uhealth 2016 Jul 28;4(3):e90 [FREE Full text] [CrossRef] [Medline]
- Taki S, Lymer S, Russell CG, Campbell K, Laws R, Ong K, et al. Assessing user engagement of an mHealth intervention: development and implementation of the growing healthy app engagement index. JMIR Mhealth Uhealth 2017 Jun 29;5(6):e89 [FREE Full text] [CrossRef] [Medline]
- McCallum C, Rooksby J, Gray CM. Evaluating the impact of physical activity apps and wearables: interdisciplinary review. JMIR Mhealth Uhealth 2018 Mar 23;6(3):e58 [FREE Full text] [CrossRef] [Medline]
- Baltierra NB, Muessig KE, Pike EC, LeGrand S, Bull SS, Hightow-Weidman LB. More than just tracking time: Complex measures of user engagement with an internet-based health promotion intervention. J Biomed Inform 2016 Feb;59:299-307 [FREE Full text] [CrossRef] [Medline]
- Barisic A, Leatherdale ST, Kreiger N. Importance of frequency, intensity, time and type (FITT) in physical activity assessment for epidemiological research. Can J Public Health 2011;102(3):174-175. [Medline]
- Funk KL, Stevens VJ, Appel LJ, Bauck A, Brantley PJ, Champagne CM, et al. Associations of internet website use with weight change in a long-term weight loss maintenance program. J Med Internet Res 2010 Jul 27;12(3):e29 [FREE Full text] [CrossRef] [Medline]
- Arden-Close EJ, Smith E, Bradbury K, Morrison L, Dennison L, Michaelides D, et al. A visualization tool to analyse usage of web-based interventions: the example of positive online weight reduction (POWeR). JMIR Hum Factors 2015 May 19;2(1):e8 [FREE Full text] [CrossRef] [Medline]
- Manwaring JL, Bryson SW, Goldschmidt AB, Winzelberg AJ, Luce KH, Cunning D, et al. Do adherence variables predict outcome in an online program for the prevention of eating disorders? J Consult Clin Psychol 2008 Apr;76(2):341-346. [CrossRef] [Medline]
- van Mierlo T. The 1% rule in four digital health social networks: an observational study. J Med Internet Res 2014 Feb 04;16(2):e33 [FREE Full text] [CrossRef] [Medline]
- Forbes C, Blanchard CM, Mummery WK, Courneya KS. Feasibility and preliminary efficacy of an online intervention to increase physical activity in Nova Scotian cancer survivors: a randomized controlled trial. JMIR Cancer 2015 Nov 23;1(2):e12 [FREE Full text] [CrossRef] [Medline]
- Short CE, Rebar A, James EL, Duncan MJ, Courneya KS, Plotnikoff RC, et al. How do different delivery schedules of tailored web-based physical activity advice for breast cancer survivors influence intervention use and efficacy? J Cancer Surviv 2017 Feb;11(1):80-91. [CrossRef] [Medline]
- Hales SB, Davidson C, Turner-McGrievy GM. Varying social media post types differentially impacts engagement in a behavioral weight loss intervention. Transl Behav Med 2014 Dec;4(4):355-362 [FREE Full text] [CrossRef] [Medline]
- Couper MP, Alexander GL, Zhang N, Little RJ, Maddy N, Nowak MA, et al. Engagement and retention: measuring breadth and depth of participant use of an online intervention. J Med Internet Res 2010 Nov 18;12(4):e52 [FREE Full text] [CrossRef] [Medline]
- Mohr DC, Duffecy J, Ho J, Kwasny M, Cai X, Burns MN, et al. A randomized controlled trial evaluating a manualized TeleCoaching protocol for improving adherence to a web-based intervention for the treatment of depression. PLoS One 2013;8(8):e70086 [FREE Full text] [CrossRef] [Medline]
- Cussler EC, Teixeira PJ, Going SB, Houtkooper LB, Metcalfe LL, Blew RM, et al. Maintenance of weight loss in overweight middle-aged women through the Internet. Obesity (Silver Spring) 2008 May;16(5):1052-1060 [FREE Full text] [CrossRef] [Medline]
- Glasgow RE, Christiansen SM, Kurz D, King DK, Woolley T, Faber AJ, et al. Engagement in a diabetes self-management website: usage patterns and generalizability of program use. J Med Internet Res 2011 Jan 25;13(1):e9 [FREE Full text] [CrossRef] [Medline]
- Davies C, Corry K, Van Itallie A, Vandelanotte C, Caperchione C, Mummery WK. Prospective associations between intervention components and website engagement in a publicly available physical activity website: the case of 10,000 Steps Australia. J Med Internet Res 2012 Jan 11;14(1):e4 [FREE Full text] [CrossRef] [Medline]
- Kim JY, Wineinger NE, Taitel M, Radin JM, Akinbosoye O, Jiang J, et al. Self-monitoring utilization patterns among individuals in an incentivized program for healthy behaviors. J Med Internet Res 2016 Dec 17;18(11):e292 [FREE Full text] [CrossRef] [Medline]
- Crutzen R, Roosjen JL, Poelman J. Using Google Analytics as a process evaluation method for Internet-delivered interventions: an example on sexual health. Health Promot Int 2013 Mar;28(1):36-42. [CrossRef] [Medline]
- Scherer EA, Ben-Zeev D, Li Z, Kane JM. Analyzing mHealth Engagement: joint models for intensively collected user engagement data. JMIR Mhealth Uhealth 2017 Jan 12;5(1):e1 [FREE Full text] [CrossRef] [Medline]
- Fan W, Bifet A. Mining big data. SIGKDD Explor Newsl 2013 Apr 30;14(2):1-5. [CrossRef]
- Cugelman B, Thelwall M, Dawes P. Online interventions for social marketing health behavior change campaigns: a meta-analysis of psychological architectures and adherence factors. J Med Internet Res 2011 Feb 14;13(1):e17 [FREE Full text] [CrossRef] [Medline]
- Donkin L, Christensen H, Naismith SL, Neal B, Hickie IB, Glozier N. A systematic review of the impact of adherence on the effectiveness of e-therapies. J Med Internet Res 2011 Aug 05;13(3):e52 [FREE Full text] [CrossRef] [Medline]
- Kidwell K, Hyde LW. Adaptive interventions and SMART designs: application to child behavior research in a community setting. Am J Eval 2016 Sep;37(3):344-363 [FREE Full text] [CrossRef] [Medline]
- Collins LM, Murphy SA, Bierman KL. A conceptual framework for adaptive preventive interventions. Prev Sci 2004 Sep;5(3):185-196 [FREE Full text] [CrossRef] [Medline]
- Althoff T, Sosič R, Hicks JL, King AC, Delp SL, Leskovec J. Large-scale physical activity data reveal worldwide activity inequality. Nature 2017 Dec 20;547(7663):336-339 [FREE Full text] [CrossRef] [Medline]
- Mohr D, Schueller SM, Riley WT, Brown CH, Cuijpers P, Duan N, et al. Trials of intervention principles: evaluation methods for evolving behavioral intervention technologies. J Med Internet Res 2015 Jul 08;17(7):e166 [FREE Full text] [CrossRef] [Medline]
- Jacobs M, Graham A. Iterative development and evaluation methods of mHealth behavior change interventions. Curr Opin Psychol 2016 Jun;9:33-37. [CrossRef]
- Vandelanotte C, Duncan MJ, Kolt GS, Caperchione CM, Savage TN, Van Itallie A, et al. More real-world trials are needed to establish if web-based physical activity interventions are effective. Br J Sports Med 2018 Jul 03 Epub ahead of print. [CrossRef] [Medline]
- Collins L, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med 2007 May;32(5 Suppl):S112-S118 [FREE Full text] [CrossRef] [Medline]
- Shaw R, Steinberg DM, Zullig LL, Bosworth HB, Johnson CM, Davis LL. mHealth interventions for weight loss: a guide for achieving treatment fidelity. J Am Med Inform Assoc 2014;21(6):959-963 [FREE Full text] [CrossRef] [Medline]
- Cornet VP, Holden RJ. Systematic review of smartphone-based passive sensing for health and wellbeing. J Biomed Inform 2018 Jan;77:120-132. [CrossRef] [Medline]
- Barkhuus L, Dey A. Location-Based Services for Mobile Telephony: a Study of Users' Privacy Concerns (2003). 2003 Presented at: 9TH IFIP TC13 International Conference On Human-Computer Interaction, Interact 2003; 1st-5th September, 2003; Zürich, Switzerland.
- Case MA, Burwick HA, Volpp KG, Patel MS. Accuracy of smartphone applications and wearable devices for tracking physical activity data. J Am Med Assoc 2015 Feb 10;313(6):625-626. [CrossRef] [Medline]
- Gordon BA, Bruce L, Benson AC. Physical activity intensity can be accurately monitored by smartphone global positioning system 'app'. Eur J Sport Sci 2016 Aug;16(5):624-631. [CrossRef] [Medline]
- Maher C, Lewis LK, Ferrar K, Marshall S, De Bourdeaudhuij I, Vandelanotte C. Are health behavior change interventions that use online social networks effective? A systematic review. J Med Internet Res 2014 Feb 14;16(2):e40 [FREE Full text] [CrossRef] [Medline]
- Hale TM, Pathipati AS, Zan S, Jethwani K. Representation of health conditions on Facebook: content analysis and evaluation of user engagement. J Med Internet Res 2014 Aug 04;16(8):e182 [FREE Full text] [CrossRef] [Medline]
- D'heer E, Verdegem P. What social media data mean for audience studies: a multidimensional investigation of Twitter use during a current affairs TV programme. Inform Comm Soc 2014 Sep 22;18(2):221-234. [CrossRef]
- Ryan J, Edney S, Maher C. Engagement, compliance and retention with a gamified online social networking physical activity intervention. Transl Behav Med 2017 Dec;7(4):702-708 [FREE Full text] [CrossRef] [Medline]
- Bouman M, Drossaert CH, Pieterse ME. Mark My Words: the design of an innovative methodology to detect and analyze interpersonal health conversations in web and social media. J Technol Hum Serv 2012 Jul;30(3-4):312-326. [CrossRef]
- Delahaye Paine K. Measure What Matters: Online Tools for Understanding Customers, Social Media, Engagement, and Key Relationships. United Kingdom: John Wiley and Sons Ltd; 2018.
- Murdough C. Social Media Measurement. J Interact Advert 2009 Sep;10(1):94-99. [CrossRef]
- Peters GY, Crutzen R. Pragmatic nihilism: how a Theory of Nothing can help health psychology progress. Health Psychol Rev 2017 Dec;11(2):103-121. [CrossRef] [Medline]
- Dirican A, Göktürk M. Psychophysiological measures of human cognitive states applied in human computer interaction. Procedia Comp Sci 2011;3:1361-1367. [CrossRef]
- Cowley B, Filetti M, Lukander K, Torniainen J, Henelius A, Ahonen L, et al. The Psychophysiology Primer: a guide to methods and a broad review with a focus on human-computer interaction. Found Trends Hum Comp Interact 2015;9(3-4):151-308.
- Ganglbauer E, Schrammel J, Deutsch S, Tscheligi M. Applying psychophysiological methods for measuring user experience: possibilities, challenges and feasibility. 2009 Presented at: User Experience Evaluation Methods in Product Development (UXEM'09) in conjunction with Interact'09; August 24-28, 2009; Sweden.
- Harmat L, de Manzano Ö, Theorell T, Högman L, Fischer H, Ullén F. Physiological correlates of the flow experience during computer game playing. Int J Psychophysiol 2015 Jul;97(1):1-7. [CrossRef] [Medline]
- Burns C, Fairclough SH. Use of auditory event-related potentials to measure immersion during a computer game. Int J Hum Comput Stud 2015 Jan;73(Supplement C):107-114. [CrossRef]
- Martey R, Kenski K, Folkestad J, Feldman L, Gordis E, Shaw A, et al. Measuring Game Engagement. Simul Gaming 2014 Nov 04;45(4-5):528-547. [CrossRef]
- Dhamija S, Boult TE. Automated mood-aware engagement prediction. 2017 Presented at: Seventh International Conference on Affective Computing Intelligent Interaction (ACII); 23-26 October, 2017; San Antonio, TX, USA.
- Huynh S, Kim S, Ko J, Balan RK, Lee Y. EngageMon: multi-modal engagement sensing for mobile games. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018 Mar 26;2(1):1-27. [CrossRef]
- Bevilacqua F, Engström H, Backlund P. Changes in heart rate and facial actions during a gaming session with provoked boredom and stress. Entertainment Computing 2018 Jan;24:10-20. [CrossRef]
- Reinecke K, Cordes M, Lerch C, Koutsandréou F, Schubert M, Weiss M, et al. From lab to field conditions: a pilot study on EEG methodology in applied sports sciences. Appl Psychophysiol Biofeedback 2011 Dec;36(4):265-271. [CrossRef] [Medline]
- Kayser J, Tenke CE. Issues and considerations for using the scalp surface Laplacian in EEG/ERP research: A tutorial review. Int J Psychophysiol 2015 Sep;97(3):189-209 [FREE Full text] [CrossRef] [Medline]
- Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 2004 Mar 15;134(1):9-21. [CrossRef] [Medline]
- Luck SJ. An Introduction to the Event-Related Potential Technique. Cambridge, MA: MIT Press; 2005.
- Leiker A, Miller M, Brewer L, Nelson M, Siow M, Lohse K. The relationship between engagement and neurophysiological measures of attention in motion-controlled video games: a randomized controlled trial. JMIR Serious Games 2016 Apr 21;4(1):e4 [FREE Full text] [CrossRef] [Medline]
- Dyke FB, Leiker AM, Grand KF, Godwin MM, Thompson AG, Rietschel JC, et al. The efficacy of auditory probes in indexing cognitive workload is dependent on stimulus complexity. Int J Psychophysiol 2015 Jan;95(1):56-62. [CrossRef] [Medline]
- Takeda Y, Okuma T, Kimura M, Kurata T, Takenaka T, Iwaki S. Electrophysiological measurement of interest during walking in a simulated environment. Int J Psychophysiol 2014 Sep;93(3):363-370. [CrossRef] [Medline]
- Rayner K. Eye movements in reading and information processing: 20 years of research. Psychol Bull 1998 Nov;124(3):372-422. [Medline]
- Hermans D, Vansteenwegen D, Eelen P. Eye movement registration as a continuous index of attention deployment: data from a group of spider anxious students. Cognition & Emotion 1999 Jul;13(4):419-434. [CrossRef]
- Jennett C, Cox A, Cairns P, Dhoparee S, Epps A, Tijs T, et al. Measuring and defining the experience of immersion in games. Int J Hum Comput Stud 2008 Sep;66(9):641-661. [CrossRef]
- Alley S, Jennings C, Persaud N, Plotnikoff RC, Horsley M, Vandelanotte C. Do personally tailored videos in a web-based physical activity intervention lead to higher attention and recall? - An eye-tracking study. Front Public Health 2014;2:13 [FREE Full text] [CrossRef] [Medline]
- Crutzen R, Cyr D, Larios H, Ruiter RA, de Vries NK. Social presence and use of internet-delivered interventions: a multi-method approach. PLoS One 2013;8(2):e57067 [FREE Full text] [CrossRef] [Medline]
- Nacke L, Grimshaw MN, Lindley CA. More than a feeling: measurement of sonic user experience and psychophysiology in a first-person shooter game. Interact Comput 2010 Sep;22(5):336-343. [CrossRef]
Abbreviations
EDA: electrodermal activity
EEG: electroencephalography
eHealth: electronic health
EMA: ecological momentary assessment
EMG: electromyography
ERP: event-related potentials
FITT: frequency, intensity, time, and type
IP: internet protocol
mHealth: mobile health
NHMRC: National Health and Medical Research Council
Edited by G Eysenbach; submitted 15.11.17; peer-reviewed by O Perski, V Pekurinen, K Frie, C Yeager; comments to author 15.03.18; revised version received 01.08.18; accepted 10.09.18; published 16.11.18
©Camille E Short, Ann DeSmet, Catherine Woods, Susan L Williams, Carol Maher, Anouk Middelweerd, Andre Matthias Müller, Petra A Wark, Corneel Vandelanotte, Louise Poppe, Melanie D Hingle, Rik Crutzen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.11.2018.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.