Published in Vol 23, No 12 (2021): December

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/29071.
Assessing the Implementation and Effectiveness of the Electronic Patient-Reported Outcome Tool for Older Adults With Complex Care Needs: Mixed Methods Study


Original Paper

1Bridgepoint Collaboratory for Research and Innovation, Lunenfeld-Tanenbaum Research Institute, Sinai Health, Toronto, ON, Canada

2Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada

3Logibec Inc (QoC Health Inc), Toronto, ON, Canada

4Ray D Wolfe Department of Family Medicine, Mount Sinai Hospital, Toronto, ON, Canada

5Department of Family and Community Medicine, Faculty of Medicine, University of Toronto, Toronto, ON, Canada

6Usher Institute, College of Medicine and Veterinary Medicine, University of Edinburgh, Edinburgh, United Kingdom

7Institute for Better Health, Trillium Health Partners, Mississauga, ON, Canada

8Institute for Health Research, Kaiser Permanente Colorado, Denver, CO, United States

9Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada

10School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada

Corresponding Author:

Carolyn Steele Gray, MSc, PhD

Bridgepoint Collaboratory for Research and Innovation

Lunenfeld-Tanenbaum Research Institute

Sinai Health

1 Bridgepoint Drive

Toronto, ON, M4M 2B5

Canada

Phone: 1 4168047100

Email: Carolyn.SteeleGray@sinaihealth.ca


Background: Goal-oriented care is being adopted to deliver person-centered primary care to older adults with multimorbidity and complex care needs. Although this model holds promise, its implementation remains a challenge. Digital health solutions may enable processes to improve adoption; however, they require evaluation to determine feasibility and impact.

Objective: This study aims to evaluate the implementation and effectiveness of the electronic Patient-Reported Outcome (ePRO) mobile app and portal system, designed to enable goal-oriented care delivery in interprofessional primary care practices. The research questions driving this study are as follows: Does ePRO improve quality of life and self-management in older adults with complex needs? What mechanisms are likely driving observed outcomes?

Methods: A multimethod, pragmatic randomized controlled trial using a stepped-wedge design and ethnographic case studies was conducted over a 15-month period in 6 comprehensive primary care practices across Ontario with a target enrollment of 176 patients. The 6 practices were randomized into either early (3-month control period; 12-month intervention) or late (6-month control period; 9-month intervention) groups. The primary outcome measure of interest was the Assessment of Quality of Life-4D (AQoL-4D). Data were collected at baseline and at 3 monthly intervals for the duration of the trial. Ethnographic data included observations and interviews with patients and providers at the midpoint and end of the intervention. Outcome data were analyzed using linear models conducted at the individual level, accounting for cluster effects at the practice level, and ethnographic data were analyzed using qualitative description and framework analysis methods.

Results: Recruitment challenges resulted in fewer sites and participants than expected; of the 176 target, only 142 (80.6%) patients were identified as eligible to participate because of lower-than-expected provider participation and fewer-than-expected patients willing to participate or perceived as ready to engage in goal-setting. Of the 142 patients approached, 45 (32%) participated. Patients set a variety of goals related to self-management, mental health, social health, and overall well-being. Owing to underpowering, the impact of ePRO on quality of life could not be definitively assessed; however, the intervention group, ePRO plus usual care (mean 15.28, SD 18.60) demonstrated a nonsignificant decrease in quality of life (t24=−1.20; P=.24) when compared with usual care only (mean 21.76, SD 2.17). The ethnographic data reveal a complex implementation process in which the meaningfulness (or coherence) of the technology to individuals’ lives and work acted as a key driver of adoption and tool appraisal.

Conclusions: This trial experienced many unexpected and significant implementation challenges related to recruitment and engagement. Future studies could be improved through better alignment of the research methods and intervention to the complex and diverse clinical settings, dynamic goal-oriented care process, and readiness of provider and patient participants.

Trial Registration: ClinicalTrials.gov NCT02917954; https://clinicaltrials.gov/ct2/show/NCT02917954

J Med Internet Res 2021;23(12):e29071

doi:10.2196/29071


Background

The rising population of older adults with multimorbidity and complex care needs requires that health systems adjust to meet this new demand [1,2]. Complex care needs go beyond multimorbidity alone, as these individuals experience biopsychosocial challenges and barriers that make it more difficult for them to manage their multiple chronic physical and mental illnesses [3,4]. Increasingly, digital health solutions are being adopted to support this patient population through tools that enable medication management [5], information sharing [6], care planning [7], chronic disease management and monitoring [8,9], and virtual care, particularly since the onset of the COVID-19 pandemic [10]. Of particular use to older adults with complex care needs are solutions that enable person-centered and holistic care delivery to better address their multiple health and social care needs [3,11-17]. Although a person-centered approach has been identified as a priority [16], organizations and providers continue to struggle with how to put the vision of person-centered care into practice [18].

Person-centered care may be operationalized by adopting a goal-oriented care (GOC) approach that involves eliciting patient-identified goals to drive care planning and decision-making [13,14,19,20]. Effectively, this model of care shifts from asking a patient “What is the matter with you?” to “What matters to you?” [21] From a patient perspective, GOC represents a more meaningful and holistic approach to care and decision-making [22]. Emerging studies of GOC report reduced treatment burden for patients with multiple chronic conditions [23] and reductions in acute inpatient days and mortality [24]. The pragmatic trial of the Health TAPESTRY program, a digitally enabled, community-based GOC program, evaluated the program’s impact on goal attainment, self-efficacy, quality of life, optimal aging, social support, empowerment, physical activity, falls, and access. The Health TAPESTRY trial demonstrated a shift from reactive to proactive care [25]; however, similar to many other studies of person-centered care [26,27], Health TAPESTRY did not demonstrate an impact on patient outcomes.

Among the challenges in evaluating an approach such as GOC, in particular a digitally enabled GOC model, is that it is a complex intervention that is delivered to a complex patient population within a complex system. Conventional methods, such as randomized controlled trials, that apply rigid methods and rely on assumptions of linear cause-effect perspectives [28] may result in controlling for the variables that we need to capture [29]. Greenhalgh and Papoutsi [28] instead suggest methods that adopt a systems mindset that allows for adaptability, iteration, and design-thinking better suited to capturing “changing interrelationships between parts of the system.” The evaluation presented in this paper adopts this systems mindset to evaluate the electronic Patient-Reported Outcome (ePRO) tool, a novel mobile device and a linked portal system that enables GOC delivery to older adults with complex care needs receiving care in interdisciplinary primary care practices. This evaluation is the latest iteration of a multiphase developmental evaluation of ePRO that took place in Ontario, Canada, from April 2018 to June 2019.

Objective and Research Questions

This developmental evaluation incorporates a pragmatic, stepped-wedge, cluster trial with embedded ethnographic case studies, building on previous stages of design, development, and testing [30-33] (see Figure 1 for a visual representation of how this work builds on previous stages). This study expands the findings from our exploratory trial [33] as a means to engage in what Tsoukas terms “conjunctive theorizing to generate rich pictures of complex phenomena by drawing together different kinds of data from multiple sources” [34]. This work was guided by the following research questions:

  1. Does ePRO improve quality of life and self-management in older adults with complex needs?
  2. What mechanisms are likely driving observed outcomes?

Regarding the first research question, it is hypothesized that the ePRO tool will have a positive impact on quality of life and patients’ ability to self-manage.

Figure 1. electronic Patient-Reported Outcome (ePRO) co-design steps.

Design

Aligned with Medical Research Council guidelines for evaluating complex interventions [35], a developmental evaluation approach was applied to collect outcome, process, and context measures to support decision-making and technology modifications [36]. A pragmatic, stepped-wedge, cluster randomized trial design was used to assess the effectiveness of the ePRO tool [37]. The Pragmatic Explanatory Continuum Indicator Summary (PRECIS-2) wheel in Figure 2 (see Multimedia Appendix 1 for a description of the wheel domains as related to this trial) describes the degree to which the trial represents a pragmatic design. This design was the most feasible and appropriate approach given the nature of the intervention, the time and resources available [38], and the desire to conduct the evaluation in a real-world setting [39]. An embedded ethnographic case study was included, aligned with the methods outlined by Greenhalgh and Swinglehurst [40] for evaluating complex technological innovations. The case studies offer rich contextual and process information that accounts for complex interrelationships between variables that would be missed by looking at outcome data alone.

Figure 2. PRECIS-2 (Pragmatic Explanatory Continuum Indicator Summary) Wheel for electronic Patient-Reported Outcome (ePRO) trial.

The trial was conducted in 6 comprehensive primary care practices called Family Health Teams (FHTs) across Ontario, Canada, over a 15-month period. Following the stepped-wedge design, all FHT sites started in the control period where all recruited patients received usual care (no change to their care delivery from the primary care team). A random number generator assigned sites to either the early intervention (n=3 sites) or late intervention (n=3 sites) groups. The early intervention group (group 1) was assigned to the intervention for 12 months after the initial 3-month control period. The FHTs in group 2 were switched to the intervention group for 9 months after a 6-month control period. Figure 3 shows a diagram of the stepped-wedge design.

Figure 3. Stepped-wedge design for electronic Patient-Reported Outcome (ePRO) evaluation.
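The stepped-wedge allocation described above can be sketched in code. This is a minimal illustration, not the trial's actual allocation procedure; the site labels and helper function are assumptions.

```python
import random

# Sketch of the allocation: 6 FHT sites randomly split into early and late
# groups, yielding a month-by-month exposure schedule over the 15-month
# trial (0 = usual-care control, 1 = ePRO intervention).
def assign_stepped_wedge(sites, seed=None):
    rng = random.Random(seed)
    shuffled = sites[:]
    rng.shuffle(shuffled)
    early, late = shuffled[:3], shuffled[3:]
    schedule = {}
    for site in sites:
        crossover = 3 if site in early else 6  # control months before switch
        schedule[site] = [0] * crossover + [1] * (15 - crossover)
    return early, late, schedule

early, late, schedule = assign_stepped_wedge(["A", "B", "C", "D", "E", "F"], seed=1)
# Early sites receive 12 intervention months; late sites receive 9.
```

Every site contributes all 15 monthly periods, which is what allows each cluster to act as its own control.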

Intervention: The ePRO Tool

The ePRO tool was developed via multiphase user-centered co-design methods and represents an important divergence from many available systems that are focused on a single disease or are built to enhance existing provider-led models of care. The tool is designed to encourage a shift in the care process toward a person-centered model by enabling the full GOC process, including goal elicitation, ongoing monitoring, and goal modification [41]. Consistent with co-design methods, the tool was iteratively developed with input from patients with complex care needs, caregivers, and a multidisciplinary primary care team [30,31]. The tool has undergone usability testing [32] and an exploratory trial [33]. Findings from these studies were used to update and adapt the tool to user needs and different primary care settings. At the time of the trial, the ePRO tool did not connect to other existing technology systems, such as electronic medical records (EMRs) or other available platforms; however, the system was built so interoperability would be possible (see Multimedia Appendix 2 for wireframes).

Population and Setting

A 2-stage sampling strategy was implemented, first recruiting FHTs, followed by complex patients within each FHT. FHTs in Ontario are similar to Patient-Centered Medical Homes in the United States in that they both seek to provide comprehensive primary care services through a physician-led multidisciplinary team [42]. Working in collaboration with the project’s decision-making partner, the Association of Family Health Teams of Ontario (AFHTO; representing all 184 FHTs in Ontario), a multipronged FHT recruitment strategy was pursued, including (1) email invitations sent to AFHTO member sites, (2) a webinar session with AFHTO quality improvement specialists who could identify eligible sites, and (3) an information booth at the annual AFHTO conference (October 2016 in Toronto, Ontario) where study information was shared with delegates. From these avenues, 29 sites expressed interest and were assessed for eligibility, with 6 FHTs agreeing to participate (see the Figure 4 CONSORT flow diagram of site recruitment). As FHTs are geographically diverse, there was no chance of cross-contamination of providers across different sites. The characteristics of the participating sites, as compared with FHTs across Ontario, are summarized in Table 1, and the population densities of the regions are depicted in Figure 5. As can be seen in Figure 5, sites A and F were in rural settings, sites D and E were in urban settings, and sites B and C were in medium urban settings, consistent with Statistics Canada definitions of rurality [43]. Approximately 36% (59/165) of FHTs are located in rural settings.

Providers eligible to participate in the study had to provide care to patients rostered at the FHT. Providers could be employed full-time, part-time, or on a casual basis.

Figure 4. CONSORT (Consolidated Standard of Reporting Trials) flow diagram–Family Health Team (FHT) recruitment.
Figure 5. Population density across the regions (persons per square km). Source: Statistics Canada 2016 Census [43].
Table 1. Family Health Team characteristics^a

| Providers enrolled in the ePRO^c study | Group 1: Site A | Group 1: Site D | Group 1: Site E | Group 2: Site B | Group 2: Site C | Group 2: Site F | Ontario FHTs^b, mean (SD) |
|---|---|---|---|---|---|---|---|
| Total number of providers, n (%) of N | 9 (69) of 13 | 4 (22) of 18 | 6 (27) of 22 | 1 (6) of 17 | 2 (17) of 12 | 7 (54) of 13 | N/A^d |
| GPs^e | 0 (0) of 4 | 1 (18) of 12 | 1 (17) of 14 | 1 (19) of 11 | 2 (22) of 9 | 6 (75) of 8 | 13.53 (17.72)^f |
| NPs^g | 8 (100) of 8 | 1 (33) of 3 | 4 (67) of 6 | 0 (0) of 4 | 0 (0) of 2 | 1 (0) of —^h | 2.65 (3.04) |
| RD^i | 1 (100) of 1 | 2 (100) of 2 | 1 (100) of 1 | 0 (0) of 1 | 0 (0) of 1 | 0 (0) of 0 | 1.19 (1.63) |
| Pharmacist | 0 (0) of 0 | 0 (0) of 1 | 0 (0) of 1 | 0 (0) of 1 | 0 (0) of 1 | 0 (0) of 0 | 0.63 (0.99) |

^a Site names were assigned based on the timing of recruitment. Ontario FHT data available from 165 FHT sites.

^b FHT: Family Health Team.

^c ePRO: electronic Patient-Reported Outcome.

^d N/A: not applicable.

^e GPs: general practitioners.

^f Information available from 165 FHTs across Ontario.

^g NP: nurse practitioner.

^h Not available.

^i RD: registered dietitian.

Patient Recruitment and Eligibility Criteria

Patient recruitment followed exploratory trial procedures, using practice EMRs to identify patients aged ≥60 years with ≥10 visits to the FHT within the previous 12 months. This number of visits has been identified as an indicator of complexity in previous studies [44] and guided the recruitment strategy for the exploratory trial [33]. Age 60 years was chosen as a cut-off over 65 years because the study’s site leads and primary care knowledge user partners identified that many individuals, particularly those living in rural settings, experience complex care needs at an earlier age. EMR-generated patient lists were given to providers to assess whether these individuals met the additional eligibility criteria: (1) perceived willingness to engage in GOC conversations, (2) ability to use a smartphone or tablet in English or access to a caregiver who could do so on their behalf, (3) capacity to provide consent to participate, and (4) willingness to complete surveys at baseline and every 3 months thereafter until the trial concluded. Previous studies have identified that provider knowledge of the patient is often necessary to identify complexity given the high degree of patient variability [38].
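The EMR screen described above (aged ≥60 years with ≥10 visits in the previous 12 months) can be sketched as a simple query. The column names and data layout below are illustrative assumptions, not the FHTs' actual EMR schema.

```python
import pandas as pd

# Minimal sketch of the screening query: flag rostered patients aged >=60
# years with >=10 visits in the 12 months before a reference date.
def flag_candidates(patients, visits, as_of):
    as_of = pd.Timestamp(as_of)
    window = visits[(visits["visit_date"] > as_of - pd.DateOffset(months=12))
                    & (visits["visit_date"] <= as_of)]
    counts = (window.groupby("patient_id").size()
              .rename("visits_12mo").reset_index())
    merged = patients.merge(counts, on="patient_id", how="left")
    merged["visits_12mo"] = merged["visits_12mo"].fillna(0)
    return merged[(merged["age"] >= 60) & (merged["visits_12mo"] >= 10)]

# Toy data: patient 1 meets both criteria; patient 2 has enough visits but
# is under 60; patient 3 is over 60 but has too few visits.
patients = pd.DataFrame({"patient_id": [1, 2, 3], "age": [72, 58, 65]})
visits = pd.DataFrame({
    "patient_id": [1] * 10 + [2] * 12 + [3] * 5,
    "visit_date": (list(pd.date_range("2017-06-01", periods=10, freq="7D"))
                   + list(pd.date_range("2017-06-01", periods=12, freq="7D"))
                   + list(pd.date_range("2017-06-01", periods=5, freq="7D"))),
})
candidates = flag_candidates(patients, visits, "2018-04-01")
```

In practice, the resulting list was then reviewed by providers against the perception-based criteria, which no query can capture.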

Posters describing the study were hung in waiting rooms at the sites, and, to enable patient self-identification, the study was presented at chronic disease management programs targeting patients with chronic disease and complex care needs. Some patients self-identified as eligible after these presentations, but none came to the study via the posters. Recruitment materials and processes built on lessons from the exploratory trial and were reviewed and modified by the project’s patient partner. Recruitment was conducted by a research coordinator assigned to each site, either during a scheduled office visit or by phone. Patient and provider consent was obtained before randomization.

The minimum sample size required for the recruitment of sites and patients was determined using closed-form analytic formulas with a power of 80%, based on a minimal clinically important difference of 0.06 on our core measure of quality of life (the AQoL-4D) [45], an expected SD in assessment of quality of life (AQoL) of 0.22 [46], an expected intraclass correlation coefficient (ICC) of 0.01 (calculated based on total primary care use over a 1-year period among a 10% sample of the Ontario population, which served here as a proxy measure for patient outcomes), and an expected attrition rate of 10% (based on previous studies in similar population groups using similar technology [47]). A minimum sample size of 176 patients was calculated, with a target of recruiting 29 patients per site.
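As a rough illustration of how these ingredients combine, the sketch below runs a conventional parallel-arm calculation inflated by a cluster design effect and attrition. It is not the closed-form stepped-wedge formula the study used; stepped-wedge designs exploit within-cluster crossover and can justify a much smaller total (176 here).

```python
from math import ceil
from statistics import NormalDist

def parallel_arm_n(delta, sd, alpha=0.05, power=0.80):
    """Per-arm n for a two-sided two-sample comparison of means."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return 2 * (z_a + z_b) ** 2 * (sd / delta) ** 2

# Trial inputs: MCID of 0.06 on the AQoL-4D, SD of 0.22, ICC of 0.01,
# ~29 patients per cluster, and 10% expected attrition.
delta, sd, icc, cluster_size, attrition = 0.06, 0.22, 0.01, 29, 0.10
deff = 1 + (cluster_size - 1) * icc  # cluster design effect = 1.28
n_total = ceil(2 * parallel_arm_n(delta, sd) * deff / (1 - attrition))
```

The gap between this naive total and the trial's 176 shows why the stepped-wedge closed-form approach was the feasible choice here.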

Technology Training

Providers and patients were trained on the tool during an onboarding session before switching from the control to the intervention period. Provider training was done at the clinic level, where a research team member presented the technology to groups of providers, walking through it by setting up goals for a mock patient. Patients were trained one-on-one with a research team member on how to use the mobile device and platform just before their onboarding visit with the provider. Providers and patients were also given user manuals [48] and contact information for the research team for technology support.

Data Collection

Context, process, and outcome data were collected via patient-reported surveys, interviews, ethnographic observations, and chart audits. Survey and chart audit data were collected across all 6 sites, whereas qualitative data were collected at the 3 case sites (sites A, E, and F). In total, 4 of the 6 sites agreed to participate as case sites; 1 later dropped out because of low patient recruitment, leaving the 3 case sites.

Patient-Reported Surveys

The primary outcome for this study was health-related quality of life measured using the Assessment of Quality of Life–4 Dimensions (AQoL-4D) [49]. The AQoL-4D is a 12-item questionnaire that addresses the activities of daily living, mental health, relationships with others, and physical aspects of a patient’s quality of life. AQoL-4D responses were aggregated to generate a raw, continuous score with higher scores indicating a greater quality of life. The secondary outcome, self-management, was measured using the 13-item Patient Activation Measure (PAM), which measures the extent to which a patient is activated in their own care [50]. PAM is considered a valid and reliable metric of patient self-management for older adults with multimorbidity [50-52]. PAM generates a score from 0 to 100, with higher scores indicating greater activation (patients are better able to manage their care) [53]. Outcome data were collected at baseline and every subsequent 3 months until the end of the trial. For 3 patients in group 2 who were enrolled in the study late, outcome data were collected between October 2018 and June 2019 (Multimedia Appendix 3).
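The shape of these two scores can be mimicked in a short sketch. This is purely illustrative: it is NOT the licensed AQoL-4D utility algorithm or the proprietary PAM scoring, only a stand-in showing item aggregation where higher aggregate values mean better quality of life, and a raw activation sum rescaled onto 0-100.

```python
# Illustrative scoring sketch only -- not the published AQoL-4D or PAM
# scoring algorithms.
def aqol4d_raw(items, levels=4):
    """Reverse-score 12 items (1 = best, `levels` = worst) and sum them,
    so that higher totals indicate greater quality of life."""
    if len(items) != 12:
        raise ValueError("AQoL-4D has 12 items")
    return sum((levels + 1) - i for i in items)

def rescale_0_100(raw_sum, min_raw=13, max_raw=52):
    """Map a 13-item raw sum (items scored 1-4) onto a 0-100 scale,
    mirroring how PAM results are reported."""
    return 100 * (raw_sum - min_raw) / (max_raw - min_raw)
```

Analyses in the trial used the instruments' own scoring, which this stand-in does not reproduce.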

Patient and provider demographic information was collected at baseline. A chart review was conducted posttrial to collect data missing from the patient demographic forms, particularly the number and types of chronic conditions and medications.

Interviews

Semistructured interviews were conducted with patients, providers, and managers at case sites at the midpoint (6 months for sites A and E and 4.5 months for site F) and end of the trial. Interviews were conducted by research team members trained in qualitative data collection, with initial interviews conducted in pairs to ensure consistency in the approach. The interview guide was developed to capture the experiences of patients, providers, and managers using the tool or engaged with the trial. Probes were used to delve into implementation factors found to be important to the intervention in the exploratory trial, for example, patient-provider relationships and team environment [33] (Multimedia Appendix 4). Interviews were conducted in person (with one follow-up midpoint interview conducted over the phone), lasted between 20 and 60 minutes, and were audio-recorded and transcribed verbatim.

Ethnographic Observation

Ethnographic observations of case sites occurred at multiple points throughout the study, mainly when conducting other activities such as training, patient onboarding, and interviews. At these points, a member of the research team would observe the clinic visits between the patient and provider. Providers were also encouraged to inform the team when patients were coming in for visits so that additional ad hoc observations could be conducted; however, no such invitations occurred. Field memos were taken during and immediately after the observation periods. Field note guides helped research staff attend to contexts and processes anticipated to be relevant based on findings from the exploratory trial. Observations were conducted by research coordinators who had graduate training in qualitative health services methods or were provided training by the project lead in the approach. For coordinators newer to the method, observation debriefs and field memo reviews were conducted by the lead to provide ongoing training and skill building.

Use Logs

Use logs from the ePRO tool were used to track tool use and the types of goals set by the participants. Goals were categorized into types using qualitative content analysis. Tool use was determined by the number of interactions, defined as any log-in or data entry into the system; participants completing at least one interaction in a given month were considered active that month. The total number of active participants was calculated monthly to categorize participants as long-term users (using the app for 3 or more months), short-term users (using the app for <3 months), or nonusers (participants who did not use the app after initial onboarding). The 3-month cut-off was consistent with previous mobile health (mHealth) clinical trials [54]. Use categories helped to interpret the qualitative and quantitative findings.
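The categorization rule above can be sketched directly. The data layout (participant IDs paired with 'YYYY-MM' interaction months) is an assumption for illustration.

```python
from collections import defaultdict

# Sketch of the use-log categorization: an interaction is any log-in or
# data entry; a participant is active in a month with at least one
# interaction. Long-term = 3+ active months, short-term = 1-2 active
# months, nonuser = no use after onboarding.
def classify_users(participants, interactions):
    """interactions: iterable of (participant_id, 'YYYY-MM') pairs."""
    active = defaultdict(set)
    for pid, month in interactions:
        active[pid].add(month)

    def label(n_months):
        if n_months >= 3:
            return "long-term"
        return "short-term" if n_months >= 1 else "nonuser"

    return {pid: label(len(active[pid])) for pid in participants}

logs = [(1, "2018-05"), (1, "2018-06"), (1, "2018-07"), (2, "2018-05")]
groups = classify_users([1, 2, 3], logs)
```

Counting distinct active months (rather than raw interactions) matches the rule that one interaction in a month makes that month active.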

Data Analyses

Statistical Analysis

Descriptive statistics were calculated for the cohort stratified by groups of FHTs using counts and mean (SD) values for categorical and continuous variables, respectively. To estimate the degree to which the ePRO tool plus usual care affects health-related quality of life and self-management relative to usual care alone, linear models were fitted with exposure identified by a fixed-effect ordinal variable of calendar time (accounting for staggered implementation) and adjusting for clustering at the FHT site level [55]. The primary effect estimates are summarized as mean differences for continuous outcomes. Each comparison was evaluated using a 2-sided test at a nominal significance level of α=.05. Statistical analyses of outcome data were performed using the intention-to-treat principle. All descriptive analyses and multilevel modeling were completed using Stata 15.1 statistical software (StataCorp LLC). Owing to the size of the data set, mixed effects models including covariates such as age, sex, income level, rurality, chronic disease management, and number of chronic conditions could not be fitted.
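The structure of this analytic model can be sketched with simulated data: a linear model with an intervention indicator plus calendar-period fixed effects, and sandwich standard errors clustered at the site level. The variable names and data are assumptions (the trial used Stata, not this code).

```python
import numpy as np

# OLS with cluster-robust (sandwich) standard errors at the cluster level.
def ols_cluster(y, X, clusters):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        score = X[clusters == g].T @ resid[clusters == g]
        meat += np.outer(score, score)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

# Simulated stepped-wedge data: 6 site clusters, 5 collection waves,
# early sites (0-2) switch at wave 1, late sites (3-5) at wave 2.
rng = np.random.default_rng(0)
n = 600
site = rng.integers(0, 6, n)
period = rng.integers(0, 5, n)
treated = np.where(site < 3, period >= 1, period >= 2).astype(float)
y = 20 + 2.0 * treated + 0.5 * period + rng.normal(0, 0.5, n)

# Design matrix: intercept, intervention indicator, period fixed effects.
X = np.column_stack([np.ones(n), treated, np.eye(5)[period][:, 1:]])
beta, se = ols_cluster(y, X, site)  # beta[1] estimates the true effect (2.0)
```

Because exposure varies within period (early vs late sites), the treatment effect remains identifiable alongside the calendar-time fixed effects.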

While missingness in cluster randomized trials in primary care can be handled via multiple imputation methods, using any such imputation to estimate the absence of data points in the cohort was deemed inappropriate owing to the high degree of missingness [56,57]. Aligned with the intention-to-treat approach, individuals were not excluded from the analysis based on their nonresponses to the survey; only variables were excluded, not individuals.
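A per-variable missingness check of the kind that informs such a decision can be sketched as follows; the column names and the 50% threshold are illustrative assumptions, not the study's actual rule.

```python
import pandas as pd

# Sketch: compute the fraction of missing responses per survey variable
# and flag variables too sparse to analyze or impute credibly.
surveys = pd.DataFrame({
    "aqol_baseline": [0.61, None, 0.72, 0.55],
    "aqol_3mo": [0.58, None, None, None],
    "pam_baseline": [55.0, 62.1, None, 48.9],
})
missing_frac = surveys.isna().mean()            # per-column missing fraction
too_sparse = missing_frac[missing_frac > 0.5].index.tolist()
```

Dropping variables (columns) rather than individuals (rows) is what keeps the analysis aligned with the intention-to-treat principle described above.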

Interview and Observation Data

Interview and observational data were used to address the second research question and were analyzed using inductive qualitative descriptive [58] and narrative descriptive methods [59], with separate analyses conducted for patient and provider interviews. Manager interviews were coded with provider data, as managers were asked similar questions and addressed many of the same implementation constructs. Consistent with this method, codes that described the dominant themes within participant groups were identified. Coding was conducted by pairs of researchers trained in qualitative methods to support validation. Observational memos were coded with patient interviews and were also reviewed as part of the analytic process to provide contextual information, where appropriate, to guide interpretation. All team members involved in qualitative data collection and analysis were trained to attend to reflexivity in their approach, and all kept fulsome analytic memos to track their own positionality with regard to the qualitative analysis.

To support directed analysis for the purposes of this evaluation, a deductive approach was used to map descriptive codes to Normalization Process Theory (NPT) [60] to understand implementation mechanisms. Exploratory trial findings suggest that NPT is a likely theory of change that underpins this intervention [33]. NPT suggests that new processes become embedded as part of actors’ routines through the social production of work, enabled through 4 generative mechanisms: coherence, cognitive participation, collective action, and reflexive monitoring (Multimedia Appendix 5 offers descriptions of the concepts). These 4 NPT constructs were applied to descriptively coded patient, provider, and manager interviews and observational data, and cross-referenced with patient user groups (long, short, and non) and case site characteristics to generate insights regarding factors that drove implementation and outcomes. Data coded to relevant themes were extracted and organized using tables using a framework analysis approach [61] to identify patterns and trends. The research team reviewed the tables as part of the qualitative validation (supporting credibility and trustworthiness). NVivo 11 software (QSR International, version 11, 2015) was used to manage data in the initial coding phase, and Microsoft Excel and Word files were used to organize data for the framework analysis.

Integrating Quantitative and Qualitative Data

Integration of quantitative and qualitative data sets followed a convergent design that involves collecting all sources of data concurrently, separately analyzing data, and then comparing results through interpretation and discussion of findings [62,63].

Ethics

Research ethics approval was granted by the University of Toronto’s Health Sciences Research Ethics Board (#33944) and the ethics committees of all participating practices.


Participant Recruitment

Although the study design target was 176 patients, only 142 (80.6%) were identified as eligible and approached. Of the 142 approached, 46 (32.4%) consented to participate. One participant withdrew before any data collection, leaving 45 (31.7%) participants. This relatively low acceptance rate was an additional challenge. Patient-reported reasons for not participating included perceiving that they did not have complex or chronic conditions, lack of time, perceived conflicts with other life responsibilities (eg, planned vacations and travel), feeling as though they did not have a goal to work on, or a lack of interest in the research. In total, 7% (3/45) of patients dropped out of the study because of (1) a decline in health condition, making it difficult to participate, or (2) loss of interest in participating. Figure 6 shows the CONSORT flow diagram depicting patient recruitment. Figure 7 offers a summary of the number of patient and provider participants per site.

Figure 6. CONSORT (Consolidated Standard of Reporting Trials) flow diagram of patient recruitment arranged by group.
View this figure
Figure 7. Number of providers and patients participating at each site.
View this figure

Participant Characteristics and Goals Set

Patient demographics are summarized in Table 2 and presented by randomized groups to allow for between-group comparisons. There was a statistically significant difference in the rurality and socioeconomic status between the groups.

Patients set a variety of goals related to self-management, mental health, social health, and overall well-being and self-care (Multimedia Appendix 6). Patient-provider pairs varied in terms of the degree of specificity of the goals they set, ranging from highly specific goals (eg, walking 20 minutes 3 times per week or losing 10 pounds) to more general goals (eg, reducing meat consumption or getting more sleep).

Table 2. Baseline characteristics of the cohort of Family Health Team patients with complex chronic diseases and disabilities (n=44)a.
Characteristics | Group 1 (site A, site D, and site E; n=23) | Group 2 (site B, site C, and site F; n=21) | P value
Age (years), mean (SD) | 68.65 (7.10) | 71.98 (6.20) | .08
Sex, n (%) | | | .07
  Female | 15 (65.22) | 7 (33.33) |
  Male | 8 (34.78) | 14 (66.67) |
Place of residence, n (%) | | | .08
  Urban | 9 (39.13) | 14 (66.67) |
  Rural | 14 (60.87) | 7 (33.33) |
Living alone, n (%) | | | .36
  Yes | 10 (43.48) | 6 (28.57) |
  No | 13 (56.52) | 15 (71.43) |
Born in Canada, n (%) | | | .17
  Yes | 19 (82.62) | 13 (61.90) |
  No | 4 (17.39) | 8 (38.10) |
Family income [CAD $ (US $)]b, n (%) | | | .04
  0-29,000 (0-24,199) | 7 (30.43) | 1 (4.76) |
  30,000-59,000 (24,200-48,398) | 7 (30.43) | 5 (23.81) |
  60,000-89,000 (48,399-72,598) | 2 (8.70) | 8 (38.10) |
  >90,000 (>72,599) | 7 (30.43) | 7 (33.33) |
Educationc, n (%) | | | .02
  Less than high school | 4 (17.39) | 1 (4.76) |
  High school | 4 (17.39) | 1 (4.76) |
  Some college or university | 9 (39.13) | 4 (19.05) |
  University (undergraduate or graduate) | 6 (26.09) | 15 (71.43) |
Ethnicity, n (%) | | | .43
  East Asian | 0 (0.00) | 1 (4.76) |
  South Asian | 1 (4.35) | 0 (0.00) |
  Metis | 0 (0.00) | 1 (4.76) |
  White (North American or European) | 21 (91.30) | 17 (80.95) |
  Mixed heritage | 1 (4.35) | 2 (9.52) |
Chronic disease management program, n (%) | | | >.99
  Yes | 6 (26.09) | 2 (9.52) |
  No | 1 (4.40) | 1 (5.00) |
  Missing | 16 (69.57) | 18 (85.71) |
Total number of chronic conditions, mean (SD) | 4.21 (2.00) | 3.20 (2.00) | <.001
Chronic conditions diagnoses, n (%) | | | d
  Arthritis | 7 (30.43) | 2 (9.52) |
  Asthma | 5 (21.74) | 3 (14.30) |
  Atrial fibrillation | 1 (4.40) | 2 (9.52) |
  Cancer | 8 (35.00) | 3 (14.30) |
  Chronic obstructive pulmonary disease | 10 (44.00) | 2 (9.52) |
  Congestive heart failure | 0 (0.00) | 0 (0.00) |
  Diabetes | 10 (44.00) | 3 (14.30) |
  Enlarged prostate | 0 (0.00) | 6 (29.00) |
  Epilepsy | 1 (4.40) | 0 (0.00) |
  Gastroparesis | 1 (4.40) | 0 (0.00) |
  Hypercholesterolemia | 13 (56.52) | 4 (19.04) |
  Hypertension | 15 (65.22) | 8 (38.10) |
  Hypothyroidism | 3 (13.04) | 0 (0.00) |
  Ischemic heart disease | 0 (0.00) | 2 (9.52) |
  Kidney failure | 2 (9.00) | 1 (5.00) |
  Macular degeneration | 1 (4.40) | 0 (0.00) |
  Mental health conditions | 1 (4.40) | 0 (0.00) |
  Pain | 6 (26.10) | 6 (29.00) |
  Sleep apnea | 2 (9.00) | 3 (14.30) |
  Stroke | 4 (17.40) | 3 (14.30) |
  Urinary retention | 0 (0.00) | 0 (0.00) |
  Othere | 5 (21.74) | 6 (29.00) |

aBalance in the distribution of covariates between group 1 and group 2 family health team sites was assessed using the Kruskal-Wallis and Fisher exact tests. Percentages may not total 100% because of rounding.

bFamily income before taxes in CAD $. US $1=CAD $1.3.

cUniversity indicates individual has either completed a degree or is currently an undergraduate or graduate student.

dMissing data not applicable as there were not enough data per individual chronic illness to generate a meaningful P value.

eMood disorders (anxiety or depression), multiple sclerosis, acute myocardial infarction, peripheral vascular disease, peripheral neuropathy, and osteoporosis.
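As footnote a to Table 2 notes, between-group balance was checked with the Kruskal-Wallis test (continuous variables) and the Fisher exact test (categorical variables). The sketch below is illustrative only and is not the study's analysis code: the ages are hypothetical, while the 2x2 residence counts mirror Table 2.

```python
# Illustrative balance checks for baseline covariates (hypothetical data).
from scipy import stats

# Hypothetical ages for two site groups (continuous covariate).
group1_ages = [62, 65, 70, 74, 68, 61, 77, 69]
group2_ages = [71, 75, 66, 78, 73, 70, 80, 72]
h_stat, p_age = stats.kruskal(group1_ages, group2_ages)

# 2x2 table of rural vs urban residence by group, taken from Table 2.
residence = [[14, 9],   # group 1: rural, urban
             [7, 14]]   # group 2: rural, urban
odds_ratio, p_residence = stats.fisher_exact(residence)

print(f"Age: H={h_stat:.2f}, P={p_age:.2f}")
print(f"Residence: OR={odds_ratio:.2f}, P={p_residence:.2f}")
```

With the Table 2 residence counts, the Fisher exact P value is close to the .08 reported for place of residence.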

Intervention Impact on Quality of Life and Self-management

Missing survey data were substantial, ranging from 14% to 91%, mainly because of nonresponse rather than attrition. Two individuals withdrew during the trial, for a loss to follow-up of 4.5%.

Tables 3 and 4 present the descriptive statistics of the AQoL-4D and PAM scores from the sites at each data collection time point, where the italicized values represent the control periods. Raw AQoL scores over time suggest that most patients (with the exception of those at site B) started and remained relatively healthy over the course of the study (with notably wide SDs).

After adjusting for the covariate of time in the model, patients with ePRO combined with usual care (mean 15.28, SD 18.60) demonstrated a nonsignificant decrease in quality of life (t24=−1.20; P=.24) compared with usual care only (mean 21.76, SD 2.17). With regard to patient engagement, ePRO combined with usual care (mean 66.5, SD 17.3) demonstrated a nonsignificant decrease in patient activation (t27=−1.41; P=.17) compared with usual care (mean 59.49, SD 9.60).

No patterns were evident when exploring descriptive trends in outcomes by ePRO use intensity (eg, those who used the tool regularly versus those who rarely used it or abandoned it altogether). There were fewer completed follow-up surveys in the short-term and nonuser groups.

Table 3. Mean (SD) of patient health-related quality of life at each discrete time pointa,b.
Calendar time | Group 1: Site A (n=17) | Group 1: Site D (n=5) | Group 1: Site E (n=1) | Group 2: Site B (n=13) | Group 2: Site C (n=7) | Group 2: Site F (n=1)
Baseline (January 2018) | 19.30 (10.10)c | 22.22 (20.03) | 6.00 | 10.61 (6.78) | 28.00 (16.78) | 17.00
Period 1: April-July 2018 | 20.94 (7.32) | 28.47 (10.72) | 6.00 | 11.11 (11.50) | 31.00 (35.40) | 6.00
Period 2: July-October 2018 | 15.83 (7.64) | 28.47 (22.00) | d | 10.00 (5.74) | 37.04 (23.62) | 11.11
Period 3: October 2018-January 2019 | 18.00 (10.00) | 20.83 (25.53) | | 8.00 (8.19) | 42.00 (35.40) | 17.00
Period 4: January-April 2019 | 22.83 (12.94) | 28.70 (13.13) | | 10.42 (9.00) | 8.33 (8.00)e | 22.22
Period 5: April-July 2019 | 11.11 (3.00) | 36.11 | | | |

aAQoL scores range from 0 to 45, with 45 being the worst possible health.

bMean (SD) quality-of-life scores could not always be calculated for each site and period because of missingness or lack of variability in the questionnaire responses.

cItalicization represents the usual care (control) period of the intervention.

dMissing data.

eFor this site, there were only 2 respondents in periods 3 and 4. The 2 respondents in period 3 had a wide spread between scores (16.67 and 66.06), and the 2 respondents in period 4 both scored lower overall (2.77 and 13.89).

Table 4. Mean (SD) of patient self-activation scores at each discrete time point during the triala.
Calendar time | Group 1: Site A (n=17) | Group 1: Site D (n=5) | Group 1: Site E (n=1) | Group 2: Site B (n=13) | Group 2: Site C (n=7) | Group 2: Site F (n=1)
Baseline (January 2018) | 60.42 (15.00)b | 61.10 (9.00) | 53.20 | 63.10 (15.00) | 55.00 (11.30) | 58.10
Period 1: April-July 2018 | 60.01 (10.59) | 58.80 (8.26) | 56.00 | 72.00 (19.12) | 52.10 (1.60) | 66.00
Period 2: July-October 2018 | 70.00 (14.92) | 63.10 (10.00) | 56.00 | 69.40 (19.04) | 59.20 (9.00) | 58.10
Period 3: October 2018-January 2019 | 71.79 (20.31) | 53.00 (15.00) | c | 68.00 (25.82) | 56.30 (13.10) | 56.00
Period 4: January-April 2019 | 68.57 (19.41) | 63.00 (4.20) | | 98.00 (5.00) | 56.00 (7.00) | 61.00
Period 5: April-July 2019 | 73.57 (13.94) | 48.90 | | 76.93 (20.54) | | 66.00

aMean (SD) patient self-activation scores could not always be calculated for each site and period because of missingness or lack of variability in the questionnaire responses.

bItalicization represents the usual care (control) period of the intervention.

cMissing data.

Mechanisms Likely Driving Outcomes: Selected Findings From Ethnographic Case Studies

Use log data revealed significant attrition for both long- and short-term user groups: 46% (21/46) of patients used the tool as intended, 15% (7/46) discontinued use after 3 months, and 36% (17/46) abandoned the app after initial training. Data from the ethnographic case studies were analyzed to provide insights into factors that may drive use and potentially influence outcomes.

Table 5 presents a summary of the data sources, and Multimedia Appendix 5 offers a summary of NPT constructs and analysis.

Table 5. Ethnographic data sources.
Case sites | Patient interviews (n=24) | Provider interviews (n=22) | Observations (n=21)
Site A | 6 midterm; 5 end of project | 6 midterm; 5 end of project | 1 onboarding; 5 ad hoc
Site B | 3 midterm; 3 end of project | 4 midterm; 3 end of project | 1 onboarding; 9 ad hoc
Site C | 2 midterm; 2 end of project | 2 midterm; 2 end of project | 2 onboarding; 3 ad hoc
The Role of Coherence, Cognitive Participation, Collective Action, and Reflexive Monitoring

Consistent with findings from the exploratory trial, the meaningfulness of the ePRO tool to patients and providers had a significant influence on how and when it was used. This analysis revealed that the meaningfulness of the ePRO tool (its coherence to participants) changed over time and relied on (1) alignment with previously held notions of chronic disease management by providers and patients; (2) alignment with daily work and life activities (enabling cognitive participation); (3) strong relationships between patients and providers (enabling collective action); and (4) consistent positive assessments of the tool’s utility (regular reflexive monitoring). An additional challenge is the interactional aspect of the ePRO tool, which means that both individual and collective coherence need to be aligned with the tool, as depicted in Figure 8.

Figure 8. Visual depiction of the normalization process of the electronic Patient-Reported Outcome tool.

The figure offers a simplified illustration of a complex ongoing process, highlighting 2 key drivers of adoption in this study. The first is the need for alignment between how individuals within a shared process understand that process (coherence) and then act on it (cognitive participation and collective action), depicted by gray arrows in the figure. Collective action (the actual use of the tool) proved to be highly influenced by this individual and shared coherence but differed depending on where users were in the process of GOC. For example, the data demonstrate that collective action occurred more toward the beginning of the intervention, during goal-setting, when there was better alignment between individual and communal coherence of the intervention. This important time variable is indicated by the orange arrows. Second, the evaluation and assessment of the tool (reflexive monitoring) is continuous and interactive rather than a demonstration of normalization, as originally theorized, depicted in Figure 8 as black bidirectional arrows. As participants moved through participation and action in using ePRO, they consistently reflected on their individual and collective coherence, assessing whether it was worth continuing. Our data suggest that when alignment is high between individual- and group-level coherence, there is a greater likelihood of ongoing collective action; in this case, the use of the ePRO tool. This relationship is not currently depicted in the figure, as it will need to be tested in future studies.


Participants and Study Implementation

Of the total 176 patients identified, only 142 were eligible; the minimum recruitment target could not be reached owing to 3 challenges. First, some sites had few provider participants join the study. The usability study and exploratory trial suggested that providers who were just starting with the ePRO tool should manage a maximum of 5 patients at a time to reduce burden. The recruitment strategy required 6 to 8 providers to identify 5 to 10 patients each whom they could manage for the duration of the trial. As such, sites with fewer provider participants identified fewer patients for the study (sites B and C in particular). Second, the requirement that patients be ready to engage in goal-setting proved a more significant barrier than at previous stages of ePRO testing. Practice EMRs identified many potential patient participants, but when reviewed by providers, few were deemed ready. Related to this point, the stepped-wedge design requires all participants to start the intervention at the same time; this rigid timing created an unanticipated third challenge.

The patient participants also likely represent a healthier group overall. Compared with similar patients in Canada, the United States, Australia, and the United Kingdom, patient participants reported fewer chronic illnesses and higher levels of education [64-67]. Patients’ AQoL scores aligned with previously published population norms [68]. Participants’ standardized PAM scores demonstrate slightly higher activation levels compared with similar multimorbid populations; for instance, a validation study of the PAM found a mean score of 56.6 (SD 12.9) [51].

Another important point to highlight is the relatively large number of registered dietitians and nurse practitioners who participated. One systematic review found several examples of dietitian-supported diabetes prevention programs [69], and nurse practitioners have been shown to successfully support digitally enabled chronic disease management programs in outpatient settings [70]. These examples, along with findings from this study, suggest an important role for allied health professionals in the implementation of digitally enabled health interventions for chronic disease populations in primary care settings.

Finally, the nature of the research process itself influenced trial implementation and outcomes, as it conflicted with the natural process of delivering GOC. First, providers were exasperated by recruitment challenges, which delayed the trial start date. Second, while having providers manage few patients reduced burden, it also meant providers had fewer opportunities to engage with the tool; as time went on, providers began to forget about the tool and why they valued it in the first place. Finally, the stepped-wedge design required a time-bounded window for patient onboarding. GOC, however, is a fluid approach in which goal-setting needs to occur when patients are ready (as noted in the provider data around coherence). Providers expressed frustration that study parameters limited their ability to onboard patients who were later identified as individuals who could benefit from the tool.

Principal Findings

The recruitment challenges described above resulted in an underpowered study; as such, no conclusion regarding the effectiveness of the ePRO tool can be drawn. Analysis of the ethnographic data revealed interrelationships between use patterns, outcome trends, and patient and provider contexts, illuminating the underlying mechanisms driving this complex intervention. Many patients and providers perceived the ePRO tool as valuable, with the potential to improve engagement and healthy behaviors; however, over time, this excitement waned. Providers reverted to old ways of working, as did some patients. Waning engagement is not unique to digital health and has occurred in other behavior change interventions. Other patients, for whom the tool aligned well with their values and aims in managing their health, demonstrated long-term adherence. However, even high users whose coherence of the tool was tied to their interaction and relationship with their provider began to fall away from the intervention as providers became less involved.

These findings uncover 2 tensions that have implications for digital health interventions for patients with complex care needs and multimorbidity in a primary care setting.

Challenge 1: Supporting Engagement in the Intervention Over Time

Engagement with an intervention is a well-documented challenge in primary care. Studies of medication adherence show similar ranges of adherence (40%-60%) [71,72] for chronic disease populations in primary care settings (ePRO adherence was 44%). Nonadherence reduces exposure and can lessen the effect of the intervention [73,74]. However, this framing suggests that it is the patient’s fault for not doing as they are told, rather than placing a critical lens on the intervention itself. Perhaps a more useful lens is to consider engagement as both “(1) the extent (eg, amount, frequency, duration, and depth) of use and (2) a subjective experience characterized by attention, interest, and affect” [75].

The ePRO tool experienced low retention rates typical of many mHealth interventions [76], which are connected to both use and subjective experience. The ethnographic findings suggest that subjective experience is linked to patient coherence and the meaningfulness of the tool. This finding is consistent with other studies showing that psychological factors such as motivation, expectations of the app, mental health, cognitive burden, and personal relevance influence patient engagement [75]. Usability of the technology and tech savviness of users can often act as barriers to ongoing use [77]. The usability analysis for this trial was too extensive to be included in this study. One key finding from the usability analysis, presented in another paper, is that tech savviness and usability issues were moderated by the patient-provider relationship: patients with stronger regular connections to their providers were more likely to troubleshoot and work through technology challenges regardless of reported savviness [78].

Importantly, in this trial, app burnout occurred for patients and providers, for whom attrition was similarly linked to reduced use and subjective experience. Primary care providers have also demonstrated declining engagement with interventions over time, an issue identified in the literature as clinical inertia [79]. With continuous interventions, such as GOC, the ePRO study findings suggest that tapping into coherence consistently may improve engagement by both patients and providers.

Challenge 2: Meaningfulness for the Individual Versus the Group

Alignment of the ePRO tool with what was important and meaningful to patients and providers (coherence) was foundational. The study findings not only lend support to the importance of meaningfulness in technology [80] but also demonstrate how meaningfulness is constructed at both the individual and group levels, as suggested by NPT [60]. The disconnect between how providers and patients approached GOC likely contributed to abandonment of the tool by those patients who sat somewhere between the strongly self-motivated super users and the somewhat indifferent nonusers. For patients, GOC was a way to stay motivated and feel accountable for goals co-constructed with their providers. For many providers, however, GOC was an approach to support patient self-management, which did not require the same amount of ongoing connection and feedback expected by patients. This view is well represented by the quote from the physical therapist at site A, shown in Multimedia Appendix 5 under the cognitive participation domain.

This approach to self-management suggests that the provider plays the role of a consultant, guiding patients through the management of their illnesses [81]. Although there is room for collaborative care and goal-setting in this model, the emphasis is on setting patients up to succeed and then sending them off. The qualitative data from this study suggest that patients wanted more of a coaching approach, with more touch points and interactions to maintain momentum, particularly those who started strong but then fizzled out. Theories of volition and self-regulation suggest that “feedback focused on the immediate benefits of behavior may be optimal during the early stages of behavior change” but can be reduced as individuals become more intrinsically motivated and confident [82]. Perhaps the patients who fizzled out were still at an early stage of behavior change and, as such, required more engagement to keep moving forward. This finding suggests the need to better calibrate coherence when implementing digital health solutions with diverse user groups over time. Future studies should also probe how the variation in goal specificity found in this study may influence patients at different stages of behavior change.

Strengths, Limitations, and Future Research

Similar to comparable studies of digital health interventions in primary care [83], both site and patient recruitment challenges were experienced. In addition, some values in the sample size calculation, such as attrition, were underestimated. Underpowering meant that not all confounding variables collected via demographic baseline surveys and chart reviews could be included in the modeling. In addition, some context data (notably participation in chronic disease management programs) may have changed over time but were collected only once at baseline. The smaller than anticipated sample size did allow for a more robust approach to ethnographic data collection, resulting in a rich qualitative data set, which is a strength of the study. Future studies in primary care settings should consider both the setting context and the nature of the intervention being tested to better align trial methods with real-world implementation. More flexible adaptive trials or the application of an interrupted time series within the clusters may be more appropriate in these dynamic environments. Further exploration of why some providers were more successful than others in identifying eligible patients is another potential area of study to better understand this implementation challenge.

The findings may not be widely generalizable to older adults with complex needs, as patients in this study were generally healthier and more educated. However, the high proportion of complex older women living in rural environments in this project addresses a notable gap in the evidence on interventions for this population [84,85]. Relying on provider screening may have led to selection bias, which can reduce generalizability. However, as there is a lack of consensus on the definition of patient complexity, reliance on physician expertise and self-identification has been found to be a viable approach to identify this patient population [86].

Another important limitation is that this study lacks additional data on provider characteristics, such as age, years of experience, and employment status (eg, full-time or part-time). While a baseline demographic survey was deployed to all providers, few returned it despite multiple attempts to collect the data via email, phone, or in person. While some key variables, such as comfort with technology, were collected via interviews, the other demographic variables would have aided in interpreting the data and supported generalizability to other similar provider groups.

While this study offers a multimethod view of the effectiveness of the ePRO tool, the findings presented here focus on the major themes that emerged in the analysis. Further analyses will explore the interrelationships between NPT constructs and other context variables, in particular how these concepts relate to professional identities, organizational culture, and notions of how best to engage in chronic disease management. An important lesson from this trial is how the nature of GOC and chronic disease management in primary care settings is a fluid and complex process that is often unique to particular settings and provider-patient pairs. Highly adaptive trial designs, which allow the study to align to these contexts more closely, may have greater success in engagement for longer interventions.

Conclusions

Although this study is unable to provide a definitive answer to the effectiveness of the ePRO tool, it did generate novel insights regarding the implementation of digital health technologies in primary care settings. The findings demonstrate the critical role of coherence, or meaningfulness, of an intervention, and the great challenge of aligning coherence across diverse user groups over time. Future work in this area should pay careful attention to how chronic disease management, GOC, and self-management are understood and pursued when implementing digital health technologies to advance these models of care.

Acknowledgments

Funding support for this project was obtained from the Canadian Institutes of Health Research eHealth Innovation Partnership Program operating grant (FN-143559). The co-authors would like to acknowledge and thank the providers, administrators, and patients who participated in this study. The authors are grateful to each of them who gave their time, enthusiasm, and energy. The authors would also like to thank and acknowledge research support from Jasvinei Srinatharan, Tujuanna Austin, Karen Paik, Candis LePage, Janelle Gravesande, Zaki Hyatt-Shaw, and Parminder Hans for their coordination support throughout this study. They also acknowledge the contributions of collaborators Dr Cheryl Cott and Dr Renee Lyons, who offered early guidance in this work.

Conflicts of Interest

Funding for this study was obtained from a Canadian Federal Grant (Canadian Institutes of Health Research, FN-143559), with most co-authors’ salaries being funded through their academic and scientific positions at their respective institutions. One co-author, SH, is the co-founder and co-owner of the technology company that hosts the electronic Patient-Reported Outcome (ePRO) tool (QoC Health Inc). While QoC Health Inc owns the platform on which the ePRO tool sits, the tool itself is the intellectual property of the lead author (CSG) in partnership with the Health System Performance Network at the University of Toronto, which is led by WPW (senior author). Any future aims to commercialize the ePRO tool would be done on the foundation of building a research-supporting not-for-profit that would seek to advance the development and evaluation of technologies that support care delivery for patients with complex care needs.

Multimedia Appendix 1

PRECIS-2 (Pragmatic Explanatory Continuum Indicator Summary) Wheel domain description.

DOCX File , 20 KB

Multimedia Appendix 2

Electronic Patient-Reported Outcome wireframes.

PDF File (Adobe PDF File), 884 KB

Multimedia Appendix 3

Data collection schedule.

DOCX File , 20 KB

Multimedia Appendix 4

Sample interview guide questions.

DOCX File , 17 KB

Multimedia Appendix 5

Summary of how patients and providers understood, engaged with, and reflected on the adoption of the electronic Patient-Reported Outcome tool as aligned with Normalization Process Theory constructs.

DOCX File , 28 KB

Multimedia Appendix 6

Types of goals and tasks set during the trial.

DOCX File , 24 KB

  1. Hajat C, Stein E. The global burden of multiple chronic conditions: a narrative review. Prev Med Rep 2018 Dec;12:284-293 [FREE Full text] [CrossRef] [Medline]
  2. Dobrow M. Caring for people with chronic conditions: a health system perspective. Int J Integr Care 2009 Mar 16;9(1):298. [CrossRef]
  3. Schaink AK, Kuluski K, Lyons RF, Fortin M, Jadad AR, Upshur R, et al. A scoping review and thematic classification of patient complexity: offering a unifying framework. J Comorb 2012 Oct 10;2(1):1-9 [FREE Full text] [CrossRef] [Medline]
  4. Bayliss E, Bosworth H, Noel P, Wolff J, Damush T, Mciver L. Supporting self-management for patients with complex medical needs: recommendations of a working group. Chronic Illn 2007 Jun 29;3(2):167-175. [CrossRef] [Medline]
  5. Grindrod KA, Li M, Gates A. Evaluating user perceptions of mobile medication management applications with older adults: a usability study. JMIR Mhealth Uhealth 2014 Mar 14;2(1):e11 [FREE Full text] [CrossRef] [Medline]
  6. Sakaguchi-Tang DK, Bosold AL, Choi YK, Turner AM. Patient portal use and experience among older adults: systematic review. JMIR Med Inform 2017 Oct 16;5(4):e38 [FREE Full text] [CrossRef] [Medline]
  7. Berntsen G, Strisland F, Malm-Nicolaisen K, Smaradottir B, Fensli R, Røhne M. The evidence base for an ideal care pathway for frail multimorbid elderly: combined scoping and systematic intervention review. J Med Internet Res 2019 Apr 22;21(4):e12517 [FREE Full text] [CrossRef] [Medline]
  8. Matthew-Maich N, Harris L, Ploeg J, Markle-Reid M, Valaitis R, Ibrahim S, et al. Designing, implementing, and evaluating mobile health technologies for managing chronic conditions in older adults: a scoping review. JMIR Mhealth Uhealth 2016 Jun 09;4(2):e29 [FREE Full text] [CrossRef] [Medline]
  9. Evangelista L, Steinhubl SR, Topol EJ. Digital health care for older adults. Lancet 2019 Apr;393(10180):1493. [CrossRef]
  10. Herzer KR, Pronovost PJ. Ensuring quality in the era of virtual care. J Am Med Assoc 2021 Feb 02;325(5):429-430. [CrossRef] [Medline]
  11. Chehade MJ, Yadav L, Jayatilaka A, Gill TK, Palmer E. Personal digital health hubs for multiple conditions. Bull World Health Organ 2020 Jun 02;98(8):569-575. [CrossRef]
  12. Sinnott C, Mc Hugh S, Browne J, Bradley C. GPs' perspectives on the management of patients with multimorbidity: systematic review and synthesis of qualitative research. BMJ Open 2013 Sep 13;3(9):e003610 [FREE Full text] [CrossRef] [Medline]
  13. Tinetti ME, Naik AD, Dodson JA. Moving from disease-centered to patient goals-directed care for patients with multiple chronic conditions: patient value-based care. JAMA Cardiol 2016 Apr 01;1(1):9-10 [FREE Full text] [CrossRef] [Medline]
  14. Reuben DB, Tinetti ME. Goal-oriented patient care — an alternative health outcomes paradigm. N Engl J Med 2012 Mar;366(9):777-779. [CrossRef]
  15. Tinetti ME, Fried TR, Boyd CM. Designing health care for the most common chronic condition--multimorbidity. J Am Med Assoc 2012 Jun 20;307(23):2493-2494 [FREE Full text] [CrossRef] [Medline]
  16. World Health Organization. Regional Office for the Western Pacific. People-Centred Health Care : A Policy Framework. Manila: WHO Regional Office for the Western Pacific; 2007:1-20.
  17. Coulter A, Entwistle VA, Eccles A, Ryan S, Shepperd S, Perera R. Personalised care planning for adults with chronic or long-term health conditions. Cochrane Database Syst Rev 2015 Mar 03;2015(3):CD010523 [FREE Full text] [CrossRef] [Medline]
  18. Ahmad N, Ellins J, Krelle H, Lawrie M. Person-centred care: from ideas to action. The Health Foundation. 2014.   URL: https://www.health.org.uk/publications/person-centred-care-from-ideas-to-action [accessed 2021-10-09]
  19. Mold JW, Blake GH, Becker LA. Goal-oriented medical care. Fam Med 1991 Jan;23(1):46-51. [Medline]
  20. Secunda K, Wirpsa MJ, Neely KJ, Szmuilowicz E, Wood GJ, Panozzo E, et al. Use and meaning of "Goals of Care" in the healthcare literature: a systematic review and qualitative discourse analysis. J Gen Intern Med 2020 May 21;35(5):1559-1566 [FREE Full text] [CrossRef] [Medline]
  21. Bisognano M, Schummers D. Flipping healthcare: an essay by Maureen Bisognano and Dan Schummers. Br Med J 2014 Oct 03;349(oct03 3):g5852. [CrossRef] [Medline]
  22. Schellinger SE, Anderson EW, Frazer MS, Cain CL. Patient self-defined goals: essentials of person-centered care for serious illness. Am J Hosp Palliat Care 2018 Jan 23;35(1):159-165 [FREE Full text] [CrossRef] [Medline]
  23. Tinetti ME, Naik AD, Dindo L, Costello DM, Esterson J, Geda M, et al. Association of patient priorities-aligned decision-making with patient outcomes and ambulatory health care burden among older adults with multiple chronic conditions: a nonrandomized clinical trial. JAMA Intern Med 2019 Oct 07;179(12):1688 [FREE Full text] [CrossRef] [Medline]
  24. Berntsen GK, Dalbakk M, Hurley JS, Bergmo T, Solbakken B, Spansvoll L, et al. Person-centred, integrated and pro-active care for multi-morbid elderly with advanced care needs: a propensity score-matched controlled trial. BMC Health Serv Res 2019 Oct 03;19(1):682 [FREE Full text] [CrossRef] [Medline]
  25. Dolovich L, Oliver D, Lamarche L, Thabane L, Valaitis R, Agarwal G, et al. Combining volunteers and primary care teamwork to support health goals and needs of older adults: a pragmatic randomized controlled trial. Can Med Asso J 2019 May 06;191(18):491-500 [FREE Full text] [CrossRef] [Medline]
  26. Pirhonen L, Olofsson EH, Fors A, Ekman I, Bolin K. Effects of person-centred care on health outcomes-A randomized controlled trial in patients with acute coronary syndrome. Health Policy 2017 Feb;121(2):169-179. [CrossRef] [Medline]
  27. Kinmonth AL, Woodcock A, Griffin S, Spiegal N, Campbell MJ. Randomised controlled trial of patient centred care of diabetes in general practice: impact on current wellbeing and future disease risk. The Diabetes Care From Diagnosis Research Team. Br Med J 1998 Oct 31;317(7167):1202-1208 [FREE Full text] [CrossRef] [Medline]
  28. Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med 2018 Jun 20;16(1):95 [FREE Full text] [CrossRef] [Medline]
  29. Steele Gray C, Shaw J. From summative to developmental: incorporating design-thinking into evaluations of complex interventions. J Integrat Care 2019 Jun 20;27(3):241-248. [CrossRef]
  30. Steele Gray C, Miller D, Kuluski K, Cott C. Tying ehealth tools to patient needs: exploring the use of eHealth for community-dwelling patients with complex chronic disease and disability. JMIR Res Protoc 2014 Nov 26;3(4):e67 [FREE Full text] [CrossRef] [Medline]
  31. Steele Gray C, Khan AI, Kuluski K, McKillop I, Sharpe S, Bierman AS, et al. Improving patient experience and primary care quality for patients with complex chronic disease using the electronic patient-reported outcomes tool: adopting qualitative methods into a user-centered design approach. JMIR Res Protoc 2016 Feb 18;5(1):e28 [FREE Full text] [CrossRef] [Medline]
  32. Steele Gray C, Gill A, Khan AI, Hans PK, Kuluski K, Cott C. The electronic patient reported outcome tool: testing usability and feasibility of a mobile app and portal to support care for patients with complex chronic disease and disability in primary care settings. JMIR Mhealth Uhealth 2016 Jun 02;4(2):e58 [FREE Full text] [CrossRef] [Medline]
  33. Steele Gray C, Gravesande J, Hans PK, Nie JX, Sharpe S, Loganathan M, et al. Using exploratory trials to identify relevant contexts and mechanisms in complex electronic health interventions: evaluating the electronic patient-reported outcome tool. JMIR Form Res 2019 Feb 27;3(1):e11950 [FREE Full text] [CrossRef] [Medline]
  34. Tsoukas H. Don't simplify, complexify: from disjunctive to conjunctive theorizing in organization and management studies. J Manag Stud 2016 Jul 11;54(2):132-153. [CrossRef]
  35. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, Medical Research Council Guidance. Developing and evaluating complex interventions: the new Medical Research Council guidance. Br Med J 2008 Sep 29;337:a1655 [FREE Full text] [CrossRef] [Medline]
  36. Patton MQ. Utilization-Focused Evaluation. 4th ed. Thousand Oaks, CA: Sage Publications Inc; 2008:1-688.
  37. Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. Br Med J 2015 Feb 06;350:h391 [FREE Full text] [CrossRef] [Medline]
  38. Hemming K, Taljaard M. Reflection on modern methods: when is a stepped-wedge cluster randomized trial a good study design choice? Int J Epidemiol 2020 Jun 01;49(3):1043-1052 [FREE Full text] [CrossRef] [Medline]
  39. Ford I, Norrie J. Pragmatic trials. N Engl J Med 2016 Aug 04;375(5):454-463. [CrossRef]
  40. Greenhalgh T, Swinglehurst D. Studying technology use as social practice: the untapped potential of ethnography. BMC Med 2011 Apr 27;9(1):45 [FREE Full text] [CrossRef] [Medline]
  41. Gray CS. Getting to goals: using the Electronic Patient Reported Outcome (ePRO) mobile app to support complex patients in primary care settings. In: Proceedings of the Annual Canadian Association of Health Services and Policy Research (CAHSPR) Conference. 2017 Presented at: Annual Canadian Association of Health Services and Policy Research (CAHSPR) Conference; May 2017; Toronto, Ontario p. 177   URL: https://cahspr.ca/wp-content/uploads/2020/11/Book-of-Abstracts-CAHSPR-2017.pdf
  42. Glazier RH, Redelmeier DA. Building the patient-centered medical home in Ontario. J Am Med Assoc 2010 Jun 02;303(21):2186-2187. [CrossRef] [Medline]
  43. Focus on geography series, 2016 census. Statistics Canada. 2016.   URL: https://www12.statcan.gc.ca/census-recensement/2016/as-sa/fogs-spg/Index-eng.cfm [accessed 2020-09-10]
  44. Upshur R, Wang L, Moineddin R, Nie J, Tracy C. The complexity score: towards a clinically-relevant, clinician-friendly measure of patient multi-morbidity. Int J Person Cent Med 2012;2(4):315 [FREE Full text]
  45. Hawthorne G, Korn S, Richardson J. Population norms for the AQoL derived from the 2007 Australian National Survey of Mental Health and Wellbeing. Aust N Z J Public Health 2013 Feb;37(1):7-16. [CrossRef] [Medline]
  46. Osborne R, Hawthorne G, Lew EA, Gray LC. Quality of life assessment in the community-dwelling elderly: validation of the Assessment of Quality of Life (AQoL) instrument and comparison with the SF-36. J Clin Epidemiol 2003 Feb;56(2):138-147. [CrossRef]
  47. Partners Advancing Transitions in Healthcare (PATH): Northumberland. The Change Foundation. 2015.   URL: https://nhh.ca/document/path-news-release-pdf [accessed 2021-10-21]
  48. Resources. ePRO.   URL: https://www.eprobridgepoint.com/resources [accessed 2021-10-24]
  49. AQoL instruments. Assessment of Quality of Life. 2020.   URL: http://aqol.com.au/index.php/aqolinstruments [accessed 2020-12-03]
  50. Hibbard J, Stockard J, Mahoney ER, Tusler M. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res 2004 Aug;39(4 Pt 1):1005-1026 [FREE Full text] [CrossRef] [Medline]
  51. Skolasky R, Green AF, Scharfstein D, Boult C, Reider L, Wegener ST. Psychometric properties of the patient activation measure among multimorbid older adults. Health Serv Res 2011 Apr;46(2):457-478 [FREE Full text] [CrossRef] [Medline]
  52. Barker I, Steventon A, Williamson R, Deeny SR. Self-management capability in patients with long-term conditions is associated with reduced healthcare utilisation across a whole health economy: cross-sectional analysis of electronic health records. BMJ Qual Saf 2018 Dec 23;27(12):989-999 [FREE Full text] [CrossRef] [Medline]
  53. Hibbard JH, Mahoney ER, Stockard J, Tusler M. Development and testing of a short form of the patient activation measure. Health Serv Res 2005 Dec;40(6 Pt 1):1918-1930 [FREE Full text] [CrossRef] [Medline]
  54. Bradway M, Pfuhl G, Joakimsen R, Ribu L, Grøttland A, Årsand E. Analysing mHealth usage logs in RCTs: explaining participants' interactions with type 2 diabetes self-management tools. PLoS One 2018 Aug 30;13(8):e0203202 [FREE Full text] [CrossRef] [Medline]
  55. Davey C, Hargreaves J, Thompson JA, Copas AJ, Beard E, Lewis JJ, et al. Analysis and reporting of stepped wedge randomised controlled trials: synthesis and critical appraisal of published studies, 2010 to 2014. Trials 2015 Aug 17;16(1):358 [FREE Full text] [CrossRef] [Medline]
  56. Eikelenboom N, van Lieshout J, Jacobs A, Verhulst F, Lacroix J, van Halteren A, et al. Effectiveness of personalised support for self-management in primary care: a cluster randomised controlled trial. Br J Gen Pract 2016 Apr 14;66(646):354-361. [CrossRef]
  57. Jakobsen JC, Gluud C, Wetterslev J, Winkel P. When and how should multiple imputation be used for handling missing data in randomised clinical trials - a practical guide with flowcharts. BMC Med Res Methodol 2017 Dec 06;17(1):162 [FREE Full text] [CrossRef] [Medline]
  58. Sandelowski M. Whatever happened to qualitative description? Res Nurs Health 2000 Aug;23(4):334-340. [CrossRef]
  59. Sandelowski M. Telling stories: narrative approaches in qualitative research. Image J Nurs Sch 1991;23(3):161-166. [CrossRef] [Medline]
  60. May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: Normalization Process Theory. Implement Sci 2009 May 21;4(1):29 [FREE Full text] [CrossRef] [Medline]
  61. Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol 2013 Sep 18;13(1):117 [FREE Full text] [CrossRef] [Medline]
  62. Creswell JW, Poth CN. Qualitative Inquiry and Research Design: Choosing Among Five Approaches. 4th ed. Thousand Oaks, CA: Sage Publications Inc; 2017:1-488.
  63. Cresswell K, Sheikh A. Organizational issues in the implementation and adoption of health information technology innovations: an interpretative review. Int J Med Inform 2013 May;82(5):e73-e86. [CrossRef] [Medline]
  64. Salisbury C, Man M, Bower P, Guthrie B, Chaplin K, Gaunt DM, et al. Management of multimorbidity using a patient-centred care model: a pragmatic cluster-randomised trial of the 3D approach. Lancet 2018 Jul;392(10141):41-50. [CrossRef]
  65. Tracy CS, Bell SH, Nickell LA, Charles J, Upshur RE. The IMPACT clinic: innovative model of interprofessional primary care for elderly patients with complex health care needs. Can Fam Physician 2013 Mar;59(3):148-155 [FREE Full text] [Medline]
  66. Ploeg J, Matthew-Maich N, Fraser K, Dufour S, McAiney C, Kaasalainen S, et al. Managing multiple chronic conditions in the community: a Canadian qualitative study of the experiences of older adults, family caregivers and healthcare providers. BMC Geriatr 2017 Jan 31;17(1):40 [FREE Full text] [CrossRef] [Medline]
  67. Harrison C, Britt H, Miller G, Henderson J. Examining different measures of multimorbidity, using a large prospective cross-sectional study in Australian general practice. BMJ Open 2014 Jul 11;4(7):e004694 [FREE Full text] [CrossRef] [Medline]
  68. Hawthorne G, Richardson J, Day N. Using the Assessment of Quality of Life (AQoL), version 1. Technical Report 12 - Centre for Health Program Evaluation, Australia. 2009.   URL: https://www.aqol.com.au/papers/techreport12.pdf [accessed 2021-10-10]
  69. Joiner KL, Nam S, Whittemore R. Lifestyle interventions based on the diabetes prevention program delivered via eHealth: a systematic review and meta-analysis. Prev Med 2017 Jul;100:194-207 [FREE Full text] [CrossRef] [Medline]
  70. Gordon K, Steele Gray C, Dainty KN, DeLacy J, Ware P, Seto E. Exploring an innovative care model and telemonitoring for the management of patients with complex chronic needs: qualitative description study. JMIR Nurs 2020 Mar 6;3(1):e15691 [FREE Full text] [CrossRef] [Medline]
  71. Fernandez-Lazaro CI, García-González JM, Adams DP, Fernandez-Lazaro D, Mielgo-Ayuso J, Caballero-Garcia A, et al. Adherence to treatment and related factors among patients with chronic conditions in primary care: a cross-sectional study. BMC Fam Pract 2019 Sep 14;20(1):132 [FREE Full text] [CrossRef] [Medline]
  72. Osterberg L, Blaschke T. Adherence to medication. N Engl J Med 2005 Aug 04;353(5):487-497. [CrossRef]
  73. Donkin L, Christensen H, Naismith SL, Neal B, Hickie IB, Glozier N. A systematic review of the impact of adherence on the effectiveness of e-therapies. J Med Internet Res 2011 Aug 05;13(3):e52 [FREE Full text] [CrossRef] [Medline]
  74. Manwaring JL, Bryson SW, Goldschmidt AB, Winzelberg AJ, Luce KH, Cunning D, et al. Do adherence variables predict outcome in an online program for the prevention of eating disorders? J Consult Clin Psychol 2008;76(2):341-346. [CrossRef]
  75. Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med 2017 Jun 13;7(2):254-267 [FREE Full text] [CrossRef] [Medline]
  76. Lee K, Kwon H, Lee B, Lee G, Lee JH, Park YR, et al. Effect of self-monitoring on long-term patient engagement with mobile health applications. PLoS One 2018 Jul 26;13(7):e0201166 [FREE Full text] [CrossRef] [Medline]
  77. Ross J, Stevenson F, Lau R, Murray E. Factors that influence the implementation of e-health: a systematic review of systematic reviews (an update). Implement Sci 2016 Oct 26;11(1):146 [FREE Full text] [CrossRef] [Medline]
  78. Tahsin F, Tracy S, Chau E, Harvey S, Loganathan M, McKinstry B, et al. Exploring the relationship between the usability of a goal-oriented mobile health application and non-usage attrition in patients with multimorbidity: a blended data analysis approach. Digit Health 2021 Oct 05;7:205520762110455. [CrossRef]
  79. Ziemer DC, Doyle JP, Barnes CS, Branch WT, Cook CB, El-Kebbi IM, et al. An intervention to overcome clinical inertia and improve diabetes mellitus control in a primary care setting: Improving Primary Care of African Americans with Diabetes (IPCAAD) 8. Arch Intern Med 2006 Mar 13;166(5):507-513. [CrossRef] [Medline]
  80. Steele Gray C. Seeking meaningful innovation: lessons learned developing, evaluating, and implementing the electronic patient-reported outcome tool. J Med Internet Res 2020 Jul 29;22(7):e17987 [FREE Full text] [CrossRef] [Medline]
  81. Bodenheimer T, Lorig K, Holman H, Grumbach K. Patient self-management of chronic disease in primary care. J Am Med Assoc 2002 Nov 20;288(19):2469-2475. [CrossRef] [Medline]
  82. Morrison LG. Theory-based strategies for enhancing the impact and usage of digital health behaviour change interventions: a review. Digit Health 2015 Jul 17;1:2055207615595335 [FREE Full text] [CrossRef] [Medline]
  83. O'Connor S, Hanlon P, O'Donnell CA, Garcia S, Glanville J, Mair FS. Understanding factors affecting patient and public engagement and recruitment to digital health interventions: a systematic review of qualitative studies. BMC Med Inform Decis Mak 2016 Sep 15;16(1):120 [FREE Full text] [CrossRef] [Medline]
  84. Liu KA, Mager NA. Women’s involvement in clinical trials: historical perspective and future implications. Pharm Pract (Granada) 2016 Mar 06;14(1):708. [CrossRef]
  85. Virani S, Burke L, Remick SC, Abraham J. Barriers to recruitment of rural patients in cancer clinical trials. J Oncol Pract 2011 May;7(3):172-177. [CrossRef]
  86. Grant RW, Ashburner JM, Hong CS, Hong CC, Chang Y, Barry MJ, et al. Defining patient complexity from the primary care physician's perspective: a cohort study. Ann Intern Med 2011 Dec 20;155(12):797-804. [CrossRef] [Medline]


AFHTO: Association of Family Health Teams of Ontario
AQoL-4D: Assessment of Quality of Life–4 Dimensions
CIHR: Canadian Institutes of Health Research
eHIPP: eHealth Innovation Partnership Program
EMR: electronic medical record
ePRO: electronic Patient-Reported Outcome
FHT: Family Health Team
GOC: goal-oriented care
GP: general practitioner
mHealth: mobile health
NP: nurse practitioner
NPT: Normalization Process Theory
PAM-13: Patient Activation Measure–13 item
RD: registered dietitian
RN: registered nurse
SMART: specific, measurable, attainable, realistic, timely


Edited by R Kukafka; submitted 24.03.21; peer-reviewed by M Khetani, MF Alwashmi, T Patel, A Garcia; comments to author 15.06.21; revised version received 19.07.21; accepted 12.09.21; published 02.12.21

Copyright

©Carolyn Steele Gray, Edward Chau, Farah Tahsin, Sarah Harvey, Mayura Loganathan, Brian McKinstry, Stewart W Mercer, Jason Xin Nie, Ted E Palen, Tim Ramsay, Kednapa Thavorn, Ross Upshur, Walter P Wodchis. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 02.12.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.