Published in Vol 24, No 1 (2022): January

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/26652.
Usability, Acceptability, and Satisfaction of a Wearable Activity Tracker in Older Adults: Observational Study in a Real-Life Context in Northern Portugal

Original Paper

1Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal

2ICVS/3B's, PT Government Associate Laboratory, Braga/Guimarães, Portugal

3iCognitus4ALL – IT Solutions, Braga, Portugal

4Clinical Academic Center – 2CA-B, Braga, Portugal

5Associação Centro de Medicina P5, School of Medicine, University of Minho, Braga, Portugal

*these authors contributed equally

Corresponding Author:

José Miguel Pêgo, MD, PhD

Life and Health Sciences Research Institute (ICVS)

School of Medicine

University of Minho

Largo do Paço

Braga, 4710-057

Portugal

Phone: 351 253 604 800

Email: jmpego@med.uminho.pt


Background: The use of activity trackers has significantly increased over the last few years. This technology has the potential to improve the levels of physical activity and health-related behaviors in older adults. However, despite the potential benefits, the rate of adoption remains low among older adults. Therefore, understanding how technology is perceived may potentially offer insight to promote its use.

Objective: This study aimed to (1) assess acceptability, usability, and user satisfaction with the Xiaomi Mi Band 2 in Portuguese community-dwelling older adults in a real-world context; (2) explore the mediating effect of the usability on the relationship between user characteristics and satisfaction; and (3) examine the moderating effect of user characteristics on the relationship between usability and user satisfaction.

Methods: Older adults used the Xiaomi Mi Band 2 over 15 days. The user experience was evaluated through the Technology Acceptance Model 3, System Usability Scale, and User Satisfaction Evaluation Questionnaire. An integrated framework for usability and user satisfaction was used to explore user experience. Statistical data analysis included descriptive data analysis, reliability analysis, confirmatory factor analysis, and mediation and moderation analyses.

Results: A sample of 110 older adults with an average age of 68.41 years (SD 3.11) completed the user experience questionnaires. Mean user acceptance was very high: perceived ease of use, 6.45 (SD 0.78); perceptions of external control, 6.74 (SD 0.55); computer anxiety, 6.85 (SD 0.47); and behavioral intention, 6.60 (SD 0.97). Usability was excellent, with an average score of 92.70 (SD 10.73), and user satisfaction was classified as a good experience, with a mean score of 23.30 (SD 2.40). The mediation analysis confirmed the direct positive effect of usability on satisfaction (β=.530; P<.01) and the direct negative effect of depression on usability (β=–.369; P<.01). Lastly, the indirect effect of usability on user satisfaction was higher in individuals with lower Geriatric Depression Scale levels.

Conclusions: Findings demonstrate that the Xiaomi Mi Band 2 is suitable for older adults. Furthermore, the results confirmed usability as a determinant of satisfaction with the technology and extended the existing knowledge about wearable activity trackers in older adults.

J Med Internet Res 2022;24(1):e26652

doi:10.2196/26652


Background

Wearable devices are electronic devices that allow users to automatically track and monitor their physical fitness metrics, including number of steps, level of activity, walking distance, calories burned, heart rate, and sleep patterns [1-4]. Over the last few years, these devices have also become increasingly popular among researchers interested in assessing and intervening on physical activity (PA)–related behaviors in real-world contexts. Wearable devices offer the opportunity to collect objective PA data in a less intrusive and inexpensive manner and provide tailored and personalized interventions in real-time [3,5,6]. In fact, overall, academic and industry research has shown that their use can increase PA levels and promote a healthier lifestyle through real-time self-monitoring of health-related behaviors [3,5,7-10]. However, despite these potential benefits, older adults still show slow technology adoption rates [10,11], possibly because these technologies are mainly developed for a younger target group, without considering health psychology or gerontology theories [7]. Consequently, older adult users may have usability barriers to technology adoption [4,12]. Furthermore, factors associated with normal aging, such as physical and cognitive decline, could limit the ability to use the technology [11].

A better understanding of older adults’ intentions to use activity trackers, and of their actual usage behavior, is becoming increasingly relevant; however, only a few studies have examined older adults’ perceptions [7,10,13,14]. Therefore, this study aimed to understand the user experience and acceptability of an activity tracker (Xiaomi Mi Band 2), throughout daily life activities, in a cohort of community-dwelling older adults.

Theoretical Framework

After a literature search, 3 key concepts were identified regarding user experience and technology adoption: technology acceptance, usability, and user satisfaction. To develop our model, variables reflecting user characteristics that may significantly influence user experience, such as cognitive function, mood, and education, were also selected. Thus, the theoretical framework was designed to explore older adults’ user experience with the Xiaomi Mi Band 2 by combining different theories, as described next, while also enabling examination of the impact of usability and individual characteristics on user satisfaction with the technology.

Technology Acceptance Model

Technology acceptance is an important factor in determining the long-term adoption of activity trackers [3]. The Technology Acceptance Model (TAM) is the most widely applied theoretical model for evaluating or predicting users’ acceptance of new technologies. The TAM was adapted from the Theory of Reasoned Action [15] and was initially developed by Davis [16]. This model assumes that perceived ease of use (PEOU) and perceived usefulness (PU) are the primary factors influencing an individual’s intention to use new technology [3,12,16]. PEOU refers to the degree to which a person perceives how easy it is to use the technology, and PU refers to how using the technology will improve performance [16]. Moreover, PEOU and PU can be influenced by various external factors, including both device and user characteristics [3,16,17]. Regarding device characteristics, usability appears to be predictive of acceptance because it relates directly to PEOU and PU and may moderate attitudes and behavioral intentions (BIs) to use a system [3].

The original TAM was extended to TAM 2 by Venkatesh and Davis [18] to explain PU and usage intentions in terms of social influence and cognitive instrumental determinants. Later, Venkatesh and Bala [19] updated the model, including other variables affecting PEOU, such as individual differences (computer self-efficacy, computer anxiety [CANX], and computer playfulness), perceptions of external control (PEC), and system characteristics–related adjustments (perceived enjoyment and objective usability).

System Usability Scale

Initially proposed by John Brooke in 1986, the System Usability Scale (SUS) is the most widely used standardized questionnaire to measure perceived usability [8,17,20,21]. Recent literature shows that several studies extend the TAM by incorporating the SUS [17,22,23]. Although the SUS has been assumed to be unidimensional, recent research reveals that the SUS has 2 subscales—usability and learnability—with items 4 and 10 providing the learnability dimension and the other 8 items the usability dimension [24,25].

According to ISO 9241-11 [26], usability refers to the effectiveness, efficiency, and satisfaction with which a product can be used by a particular user for a particular purpose in a specific environment. More precisely, effectiveness refers to the extent to which the system’s intended goals can be achieved; efficiency is the effort required for a user to achieve those goals; and satisfaction depends on how comfortable the user feels using the system [8,21,27]. Therefore, usability is a critical factor that directly affects the use and adoption of technology by older adults.

User Satisfaction Evaluation Questionnaire

The literature on technology acceptance has included many model variants and extensions, including user satisfaction as a key indicator of user acceptance [28-34]. Moreover, satisfaction has been described as a predictor of behavior intention [29]. The User Satisfaction Evaluation Questionnaire (USEQ) was initially designed by Gil-Gómez et al [35] to evaluate the satisfaction of the users with virtual rehabilitation systems. Recently, the USEQ was adapted and validated into European Portuguese by Domingos et al [36] to evaluate an activity tracker (Xiaomi Mi Band 2) in older adults, showing psychometric properties consistent with the original version.

User Characteristics

In a theoretical framework developed by Venkatesh and Bala [19], individual differences, such as personality and demographics (eg, traits or individuals’ states, gender, and age), were suggested to influence individuals’ perceptions of PU and PEOU. Specifically, personality is related to individual differences in cognitive, emotional, and motivational aspects of mental states that result in stable behavioral action [37]. Moreover, personality has been found to affect technology perceptions and acceptance [3,38].

Additionally, older individuals may show age-related declines, including in attention, memory, and processing speed, which may further impact how they interact with technology [3]. The aging process is also associated with a decline in visual faculties, that is, visuospatial functioning, visual acuity, color discrimination, and contrast sensitivity, which are crucial for learning new information and executing technology-based tasks [39]. Thus, researchers have focused on the impact of cognitive abilities, self-efficacy, and technology-related anxiety on technology acceptance [11]. Lastly, compared with younger adults, the senior population may be more resistant to adopting new technologies due to cultural factors, education, and experience [3].

Research Framework and Hypotheses

This study uses a model based on the SUS to measure usability and the USEQ to measure user satisfaction, incorporating individual characteristics such as education, mood, and cognitive performance (Figure 1). The design was grounded in the theory reviewed above to provide clear causal relationships between the independent (exogenous) and dependent (endogenous) variables. The model has 5 variables exploring the user experience with the Xiaomi Mi Band 2 in older adults.

The following hypotheses were formulated:

H1: Usability has a positive effect on satisfaction.

H2: Education has a positive effect on satisfaction.

H3: Education has a positive effect on usability.

H4: Cognition has a positive effect on satisfaction.

H5: Cognition has a positive effect on usability.

H6: Depression has a negative effect on satisfaction.

H7: Depression has a negative effect on usability.

Additionally, user characteristics’ potential moderating effect on the direct effect between usability and satisfaction was tested separately for each variable (Figure 2).

Figure 1. Research hypothesis framework.
Figure 2. Moderating effect of user characteristics.

Participants and Research Ethics

A priori sample size calculation and power analysis were performed using G*Power version 3.1.9.3 (Heinrich-Heine-Universität Düsseldorf). Because the study is part of a larger project, which used a wearable device to measure and quantify free-living PA in older adults, a sample of 120 participants was determined assuming an effect size of 0.32 [40-42], an α of .05, a power of 0.95, and a dropout rate of 23%. A power analysis for this user experience study, conducted with the previously calculated sample size and a medium effect size [23], confirmed a power of 0.92. Moreover, the rule of thumb for determining sample size in multiple regression analyses confirmed that the minimum sampling requirements for the analysis were met [43]. Therefore, a total of 120 participants, representative of the general older Portuguese population living in the community within the age group 65-74 years, were recruited from health centers and local gyms in Northern Portugal. Older adults were defined according to the World Health Organization, which considers older people, in developed economies, to be those aged 65 years or older. To reduce variability due to age effects, we used the first 10-year age group, as in the Eurostat publication Ageing Europe [44].

The applied exclusion criteria comprised inability to understand informed consent; diagnosed neuropsychiatric and neurodegenerative disorders; or disability that limited independent walking, visual, auditory, or fine motor skills. Participants having previous experience with other wearable activity trackers were not excluded from the study. A final sample of 110 participants was enrolled in the study. The study ran from April 2018 to July 2019.

The study was conducted according to the Helsinki Declaration and approved by the local ethics committee (Approval Number 42-2018), developed in compliance with the new General Data Protection Regulation, and approved by the Portuguese Data Protection Authority (Approval Number 11286/2017). Study goals and assessments were explained during screening procedures. All participants provided written informed consent before study enrollment, which included consent to their data processing.

Data Collection and Instruments

A baseline characterization was performed through a sociodemographic questionnaire, and a neuropsychological evaluation to obtain mood (Geriatric Depression Scale [GDS]) [45] and global cognitive profiles (Mini-Mental State Examination [MMSE]) [46]. For screening “cognitive impairment” via the MMSE, the following cutoff values were used: individuals with no education, <15 points; 1-11 years of school completed, <22 points; and >11 years of school completed, <27 points [47]. For assessment of the presence of depressive symptomatology via the GDS, the cutoff value considered was a total number of depressive symptoms over 11 [48].

To assess user experience, participants were provided with the Xiaomi Mi Band 2, to be worn continuously for 15 days while performing their normal daily activities. The wearable was returned after the testing period for data analysis. In other studies, testing periods range from 3 to 7 days [13,49-51]; because a 7-day testing period corresponds to a short-term user experience, we decided to extend this period to 15 days. Upon completing the usage period, participants were asked to provide information about their user experience. The TAM 3 [19] was used to collect information about technology acceptance, the SUS [25] for perceived usability, and the USEQ [35,36] for user satisfaction.

Xiaomi Mi Band 2

The selection of the wearable activity tracker was based on a review of several different commercially available devices on the market [8,52,53]. The selection criteria included popularity in the health tracking device market, availability, continuous monitoring of PA without a smartphone, price, battery life, the variety of data captured via sensors, and the ability to export data. The Xiaomi Mi Band 2 was selected because, at the time of the study, it offered the best price-quality ratio, had an estimated battery life of almost 30 days, was ergonomic, accessible, and easy to operate, and did not require continuous communication with a smartphone. The system combines sensors that allow the objective assessment of daily free-living PA, with its algorithms calculating steps, intensity, energy expenditure, and distance traveled [49,53,54].

Technology Acceptance Model

The TAM 3 was adapted to the context of the use of activity tracking technologies by older adults, and the key dimensions of acceptance were investigated using the following constructs: PEOU, PEC, CANX, BI, and USE. PEOU was measured using all 4 items adapted from the TAM 3, PEC using 2 items, and CANX using 3 items; BI and USE were each measured using the single item from the original scale. Multimedia Appendix 1 presents a list of items for all the constructs. TAM items were measured on a 7-point Likert scale, from “1=strongly disagree” to “7=strongly agree”. The mean score of each item was computed, and the mean of the item means for each construct was calculated and used in the statistical analysis [19].
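The mean-of-means scoring described above can be sketched as a short function; the function name and sample responses are illustrative, not taken from the study data.

```python
import statistics

def construct_score(items):
    """Mean-of-means score for one TAM 3 construct.

    `items` maps each item name to its list of 7-point Likert responses
    across participants; the construct score is the mean of the item means.
    """
    return statistics.mean(statistics.mean(r) for r in items.values())

# Illustrative responses for a 2-item construct (not the study data):
# item means are 6.0 and 7.0, so the construct score is 6.5.
print(construct_score({"peou1": [6, 7, 5], "peou2": [7, 7, 7]}))  # 6.5
```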

System Usability Scale

The SUS is a 10-item questionnaire, consisting of 5 positive and 5 negative statements, with 5 response options for each statement ranging from “1=strongly disagree” to “5=strongly agree” (Multimedia Appendix 2). The SUS score is calculated by subtracting 1 from the score of each odd-numbered item and subtracting the score of each even-numbered item from 5. The sum of these adjusted scores is then multiplied by 2.5 to give an overall SUS score, which ranges from 0 (extremely poor usability) to 100 (excellent usability) [21,25]. A value of 68 is considered the average SUS score; scores above or below 68 are considered above average or below average, respectively [55,56]. The grade rankings of scores proposed by Bangor et al [56] were used here to provide a more meaningful basis for interpreting SUS scores.
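This standard SUS scoring rule can be expressed as a small function (a sketch; the function name is illustrative):

```python
def sus_score(responses):
    """Overall SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items (positive statements): response - 1.
    Even-numbered items (negative statements): 5 - response.
    The sum of adjusted scores is multiplied by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("the SUS has exactly 10 items")
    adjusted = [(r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses)]  # index 0 -> item 1 (odd)
    return sum(adjusted) * 2.5

# A neutral response pattern (all 3s) lands exactly at the scale midpoint.
print(sus_score([3] * 10))  # 50.0
```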

User Satisfaction Evaluation Questionnaire

The USEQ is a 6-item questionnaire with a 5-point Likert scale (Multimedia Appendix 3). The total score ranges from 6 (poor satisfaction) to 30 (excellent satisfaction). All items are affirmative except item 5, which is negative. For affirmative items, the numerical value of the response is added to the score; for the negative item, the response value is subtracted from 6 before being added to the total score. The USEQ score is evaluated using the following classification: poor (0-5), fair (5-10), good (10-15), very good (15-20), or excellent (20-25) satisfaction [35,36].
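The USEQ scoring rule, with item 5 reverse-keyed, can be sketched as follows (the function name is illustrative):

```python
def useq_score(responses):
    """Total USEQ score (6-30) from six 1-5 responses; item 5 is reverse-keyed."""
    if len(responses) != 6:
        raise ValueError("the USEQ has exactly 6 items")
    return sum((6 - r) if item == 5 else r
               for item, r in enumerate(responses, start=1))

# Most satisfied pattern: 5 on the affirmative items, 1 on negative item 5.
print(useq_score([5, 5, 5, 5, 1, 5]))  # 30
```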

Statistical Analysis

Overview

The statistical analysis was organized to address the following aims: (1) explore the mediating effect of usability on the relationship between user characteristics and satisfaction; and (2) examine the moderating effect of user characteristics on the relationship between usability and user satisfaction. Briefly, the statistical analysis was performed according to the following steps: (1) descriptive statistics; (2) instruments’ psychometric properties; (3) structural equation modeling (SEM); and (4) moderation analysis.

Descriptive Statistics

Descriptive data analysis was performed using IBM SPSS Statistics (version 26) to depict the characteristics of the study sample. Descriptive statistics, including frequency, percentage, mean, standard deviation, minimum, maximum, skewness, and kurtosis, were calculated for each variable. Normality was considered adequate if absolute values of skewness and kurtosis did not exceed 3.0 and 10.0, respectively [57,58]. The percentage of missing values across the variables was analyzed. Methods for handling missing data were not applied because there were no missing data.

Instruments’ Validation

Before structural modeling, the measurement models of the latent variables were assessed for dimensionality/structure and reliability.

Confirmatory factor analysis (CFA) was conducted using JASP (version 0.11.1; JASP Team, University of Amsterdam) to examine the structure of the SUS (used to measure usability) and USEQ (used to measure user satisfaction). Variables with factor loadings above 0.4 were included. To assess the goodness of fit of the model, the following indices and thresholds were applied: chi-square (χ2, P>.05), χ2/degrees of freedom (df) ratio (≤3), Comparative Fit Index (CFI ≥0.90), Tucker–Lewis Index (TLI ≥0.90), Goodness-of-Fit Index (GFI ≥0.90), root mean squared error of approximation (RMSEA <0.08), and standardized root mean squared residual (SRMR ≤0.08) [59-61].

Reliability analysis was performed using IBM SPSS Statistics (version 26) to analyze the internal consistency of item responses of the SUS and USEQ instruments. Reliability was estimated using the McDonald omega (ωt) coefficient [62,63]. Given ordinal response format items, the McDonald omega coefficient (ωt) provides more accurate estimates of reliability than Cronbach α [62,64,65]. Coefficients values over 0.70 are considered indicators of satisfactory item homogeneity [65,66].
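For a unidimensional scale, ω_t can be computed from a one-factor solution’s standardized loadings and error variances as (Σλ)² / ((Σλ)² + Σθ). A minimal sketch with illustrative values, not the fitted estimates from this study:

```python
def omega_total(loadings, error_variances):
    """McDonald's omega total for a one-factor model:
    omega_t = (sum(lambda))**2 / ((sum(lambda))**2 + sum(theta)),
    with standardized loadings lambda and error variances theta.
    """
    common = sum(loadings) ** 2
    return common / (common + sum(error_variances))

# Six items with standardized loadings of 0.7 (error variance 1 - 0.7**2 each)
print(round(omega_total([0.7] * 6, [1 - 0.49] * 6), 3))  # 0.852
```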

Structural Equation Modeling

SEM was applied to test the hypothesized relationships between the proposed factors that directly and indirectly influence older adults’ satisfaction with the technology (structural model). SEM allows analysis of the structural relationships between measured variables and latent variables. The derived scores for usability and user satisfaction were supported by the CFA.

Data were analyzed using IBM SPSS AMOS (version 25) and the parameters were estimated by the maximum likelihood method. The significance level of 5% was used as a threshold for the research proposition testing. To determine whether the model was reasonable and acceptable, the following indices were considered: χ2, χ2/df ratio, CFI, TLI, GFI, and RMSEA. The criteria for an acceptable model fit were the same as those reported for the CFA.

To assess multicollinearity, the correlation matrix of the predictor variables (education, MMSE, GDS, and usability) was inspected, and the variance inflation factor (VIF) and tolerance were analyzed (IBM SPSS Statistics, version 26). Tolerance values close to 1 were considered indicators of low multicollinearity, whereas a value close to 0 was considered a potential indicator of a collinearity problem [67,68]. Moreover, VIF=1 was considered an indicator that the independent variables are not correlated, and 1<VIF<5 an indicator that the variables are moderately correlated with each other [67].
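The tolerance and VIF diagnostics described above can be reproduced from first principles by regressing each predictor on the remaining ones. A minimal NumPy sketch with illustrative data, not the study variables:

```python
import numpy as np

def vif_and_tolerance(X):
    """Tolerance (1 - R^2_j) and VIF (1 / tolerance) for each predictor,
    where R^2_j comes from regressing predictor j on the other predictors
    (with an intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    results = []
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        tolerance = 1.0 - r2
        results.append((1.0 / tolerance, tolerance))
    return results

# Two moderately correlated predictors: each gets VIF = 2, tolerance = 0.5.
print(vif_and_tolerance([[1, 1], [0, 1], [-1, -1], [0, -1]]))
```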

Moderation Analysis

Moderation analysis was performed to examine whether the relationship between usability (predictor) and user satisfaction (outcome variable) depended on user characteristics (moderators). The analysis was performed using the MedMod package in jamovi (version 1.2.27; The jamovi Project). The significance of the interaction term of usability on user satisfaction at specific values (–1 SD, mean, +1 SD) of the GDS, education, and MMSE (moderators) was assessed, exploring whether the effect of usability on user satisfaction depends on the level of the moderating variable.
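A simple-slopes analysis of this kind fits a regression with a usability × moderator interaction and evaluates the usability slope at –1 SD, the mean, and +1 SD of the moderator. A minimal OLS sketch (illustrative, not the MedMod implementation):

```python
import numpy as np

def simple_slopes(usability, satisfaction, moderator):
    """Fit satisfaction ~ usability + moderator + usability:moderator (OLS,
    predictors mean-centered) and return the usability slope at -1 SD,
    the mean, and +1 SD of the moderator."""
    x = np.asarray(usability, float)
    m = np.asarray(moderator, float)
    y = np.asarray(satisfaction, float)
    x = x - x.mean()
    m = m - m.mean()
    A = np.column_stack([np.ones_like(x), x, m, x * m])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    sd = m.std(ddof=1)
    # Conditional slope of usability: b_usability + b_interaction * moderator
    return {"-1 SD": b[1] - b[3] * sd, "mean": b[1], "+1 SD": b[1] + b[3] * sd}

# Synthetic grid with a negative interaction: the usability slope
# shrinks at higher moderator values, as in a buffering effect.
x = [-2, -1, 0, 1, 2] * 5
m = [v for v in (-2, -1, 0, 1, 2) for _ in range(5)]
y = [2 + 0.5 * xi + 0.1 * mi - 0.02 * xi * mi for xi, mi in zip(x, m)]
print(simple_slopes(x, y, m))
```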


Study Participants

A total of 110 participants completed the final assessment after the testing period. Table 1 summarizes the demographic, mood, and global cognitive characteristics of the sample. Participants had a mean age of 68.41 (SD 3.11) years, and 45.5% (50/110) were identified as males. The mean years of formal education were 7.95 (SD 5.38).

Table 1. Characteristics of the study participants (N=110).

  Characteristics                     Values
  Gender
    Male, n (%)                       50 (45.5)
  Age (years), mean (SD)              68.41 (3.11)
    64-70, n (%)                      73 (66.4)
    ≥70, n (%)                        37 (33.6)
  Education (years), mean (SD)        7.95 (5.38)
    1-4, n (%)                        58 (52.7)
    5-11, n (%)                       24 (21.8)
    ≥12, n (%)                        28 (25.5)
  MMSE^a (total score), mean (SD)     26.95 (2.00)
    22-27, n (%)                      41 (37.3)
    ≥27, n (%)                        69 (62.7)
  GDS^b (total score), mean (SD)      6.05 (4.58)
    >11, n (%)                        17 (15.5)

^a MMSE: Mini-Mental State Examination.
^b GDS: Geriatric Depression Scale.

Instruments’ Descriptive Statistics

The results of descriptive statistics for the instruments (TAM 3, SUS, and USEQ) are presented in Tables 2-4, respectively. The skewness and kurtosis values indicate some degree of non-normality. In reality, most behavioral research data do not follow univariate normal distributions [69,70]. Moreover, the results reveal a severe violation of normality for the following items and constructs: USEQ 1, SUS 1, SUS 3, SUS 5, SUS 9, PEOU 3, PEC, and CANX. Thus, these were excluded from further path analysis.

Table 2. Descriptive statistics for Technology Acceptance Model 3 items.

  Items            Range       Mean (SD)      Skewness   Kurtosis
  Perceived ease of use (PEOU)
    PEOU 1         2-7         6.28 (1.08)    –1.62      2.25
    PEOU 2         1-7         6.06 (2.01)    –1.97      2.22
    PEOU 3         3-7         6.84 (0.60)    –4.37      20.92
    PEOU 4         3-7         6.60 (0.92)    –2.42      5.11
    PEOU score     3.50-7.00   6.45 (0.78)    –1.48      1.54
    PEOU final     3.67-7.00   6.31 (0.94)    –1.25      0.24
  Perceptions of external control (PEC)
    PEC 1          3-7         6.55 (0.97)    –2.38      5.19
    PEC 2          4-7         6.94 (0.41)    –6.84      46.91
    PEC score      4.00-7.00   6.74 (0.55)    –2.62      7.34
  Computer anxiety (CANX)
    CANX 1         6-7         6.99 (0.10)    –10.49     110.00
    CANX 2         1-7         6.86 (0.83)    –6.72      45.64
    CANX 3         2-7         6.71 (0.97)    –3.49      11.44
    CANX score     4.33-7.00   6.85 (0.47)    –3.55      12.72
  Behavioral intention (BI)
    BI             1-7         6.60 (0.97)    –3.00      10.84
  USE (hours)      13-24       23.85 (1.12)   –8.99      85.16
Table 3. Descriptive statistics for System Usability Scale items.

  Items                          Range    Mean (SD)       Skewness   Kurtosis
  1                              1-5      4.71 (0.65)     –2.82      9.87
  2                              1-5      1.44 (1.03)     2.40       4.70
  3                              2-5      4.92 (0.36)     –5.81      40.38
  4                              1-5      1.38 (1.04)     2.51       4.75
  5                              1-5      4.85 (0.56)     –4.47      23.01
  6                              1-5      1.28 (0.83)     3.12       9.26
  7                              1-5      3.58 (0.78)     –2.15      4.88
  8                              1-5      1.30 (0.92)     3.06       8.16
  9                              3-5      4.86 (0.46)     –3.42      10.71
  10                             1-5      1.41 (1.08)     2.52       4.95
  System Usability Scale score   55-100   92.70 (10.73)   –1.61      1.77
Table 4. Descriptive statistics for User Satisfaction Evaluation Questionnaire items.

  Items                                          Range   Mean (SD)      Skewness   Kurtosis
  2                                              3-5     4.82 (0.47)    –2.66      6.46
  3                                              2-5     4.65 (0.71)    –2.21      4.50
  4                                              2-5     4.47 (0.75)    –1.30      0.98
  5                                              1-5     4.65 (0.93)    –2.71      6.15
  6                                              1-5     1.28 (0.83)    3.12       9.26
  User Satisfaction Evaluation Questionnaire score   14-25   23.30 (2.40)   –1.81      2.99

Instruments’ Psychometric Proprieties

As reported by Domingos et al [36], the CFA supported the conceptual unidimensionality of the USEQ (χ²=7.31, df=4, P=.12; χ²/df=1.83; CFI=0.973; TLI=0.931; GFI=0.977; RMSEA=0.087; SRMR=0.038). Furthermore, the CFA for the SUS showed satisfactory values for the following indexes: CFI=0.816, GFI=0.928, and SRMR=0.074. The fit indices for the model are presented in Table 5.

Regarding internal consistency, reliability for the SUS was calculated only with the items included in the path analysis (SUS 2, SUS 4, SUS 6, SUS 7, SUS 8, and SUS 10). Moreover, the USEQ showed acceptable reliability (Cronbach α=.677; McDonald ω=0.722), as reported by Domingos et al [36]. The McDonald ω coefficients showed acceptable values for the SUS and USEQ questionnaires (0.712 and 0.722, respectively).

Table 5. Confirmatory factor analysis for instruments.

  Fit indices                                User Satisfaction Evaluation Questionnaire   System Usability Scale
  χ²                                         7.313                                        30.074
  df                                         4                                            9
  χ²/df                                      1.83                                         3.34
  P value                                    .120                                         <.001
  Comparative Fit Index                      0.973                                        0.816
  Tucker–Lewis Index                         0.931                                        0.694
  Goodness-of-Fit Index                      0.977                                        0.928
  Root mean squared error of approximation   0.087                                        0.146
  Standardized root mean squared residual    0.038                                        0.074

Users’ Experience

The high TAM 3 ratings indicate excellent technology acceptance by the participants. Overall, the average ratings for user experience with the Xiaomi Mi Band 2 were 6.45 (SD 0.78) for PEOU, 6.74 (SD 0.55) for PEC, 6.85 (SD 0.47) for CANX, and 6.60 (SD 0.97) for BI. Furthermore, the participants reported an average of 23.85 (SD 1.12) hours of use per day (Table 2). These results indicate that participants found the Xiaomi Mi Band 2 easy to use and easy to control, potentially perceiving its usefulness regarding health benefits and intending to use it in the future.

Regarding usability, the overall SUS score ranged from 55 to 100 (mean 92.70, SD 10.73), with 96.4% (106/110) of the participants reporting a score above the SUS acceptability baseline. Moreover, 45.5% (50/110) of the participants classified the activity tracker as best imaginable (Table 6). Thus, these results suggest that the Xiaomi Mi Band 2 is a usable wearable activity tracker for older adults.

Finally, all participants reported a user satisfaction experience above the USEQ baseline value defined as a good experience, with a mean USEQ score of 23.30 (SD 2.40; Table 4). Moreover, 85.5% (94/110) of the participants rated the satisfaction with the Xiaomi Mi Band 2 as excellent (Table 6). Still, despite older adults reporting good satisfaction with the device, concerns were noted regarding the clarity of the technology’s information.

Table 6. User experience classification for usability and satisfaction (N=110).

  Classification                                              Value, n (%)
  Usability (System Usability Scale)
    Ok                                                        8 (7.3)
    Good                                                      16 (14.5)
    Excellent                                                 36 (32.7)
    Best imaginable                                           50 (45.5)
  Satisfaction (User Satisfaction Evaluation Questionnaire)
    Good                                                      2 (1.8)
    Very good                                                 14 (12.7)
    Excellent                                                 94 (85.5)

The Structural Equation Modeling for User Satisfaction

Table 7 shows the fit indexes for the structural model, with acceptable values for the χ²/df (1.67) and RMSEA (0.079) indexes and values slightly below the threshold for a good model fit for the GFI (0.880), TLI (0.818), and CFI (0.868). Based on these indexes, the model has a moderately acceptable fit.

The path diagram of the model is presented in Figure 3. Coefficients within paths are standardized regression coefficients. Table 8 summarizes the results of hypothesis testing, including standardized coefficients and significance levels. Specifically, results show that usability was significantly and positively associated with user satisfaction (β=.530; P<.01), thereby supporting Hypothesis 1. By contrast, depression was significantly and negatively associated with usability (β=–.369; P<.01), supporting Hypothesis 7.

Table 7. Fit indices for the hypothesized model.

  Model fit index                            Value
  χ²                                         110.475
  df                                         66
  χ²/df                                      1.67
  P value                                    <.001
  Goodness-of-Fit Index                      0.880
  Tucker–Lewis Index                         0.818
  Comparative Fit Index                      0.868
  Root mean squared error of approximation   0.079
Figure 3. Path diagram for the research model. GDS: Geriatric Depression Scale; MMSE: Mini-Mental State Examination; SUS: System Usability Scale; USEQ: User Satisfaction Evaluation Questionnaire.

Individual characteristics (education, cognition, and depression) collectively explained 16.8% of the variance in usability. Furthermore, individual characteristics and usability collectively explained 39.1% of the variance in satisfaction. Specifically, depression negatively impacted usability and satisfaction, with a significant effect on usability (β=–.369; P<.01), whereas education had a positive but nonsignificant effect on both satisfaction (β=.121; P=.23) and usability (β=.130; P=.25). Although the theoretical model showed adequate goodness of fit, most research hypotheses were not statistically supported. Nonetheless, usability appears to be a strong predictor of user satisfaction.

Considering possible multicollinearity issues in the SEM, the absolute values of the correlation coefficients were calculated and ranged from 0.009 to 0.45. The tolerance values ranged from 0.82 to 0.87 and the VIF values from 1.15 to 1.22, indicating that no independent variable is a perfect linear function of any other independent variable.

Table 8. Results of hypothesis testing based on standardized path coefficients for the research model.

  Hypothesis                      Estimate   Standard error   Critical ratio   P value
  H1: Usability > Satisfaction    0.530      0.089            3.008            .003
  H2: Education > Satisfaction    0.121      0.005            1.194            .23
  H3: Education > Usability       0.130      0.011            1.147            .25
  H4: Cognition > Satisfaction    –0.098     0.013            –0.999           .32
  H5: Cognition > Usability       0.011      0.029            0.104            .92
  H6: Depression > Satisfaction   –0.140     0.006            –1.376           .17
  H7: Depression > Usability      –0.369     0.014            –3.010           .003

Moderation Analysis

Moderating Effect of the GDS

Usability significantly predicted satisfaction (β=.43; P<.001; Table 9). The interaction effect of usability × GDS was not significant (β=–.028; P=.06); however, because the P value approached .05, the effect of usability on satisfaction may depend on GDS levels. The simple slopes of the interaction at –1 SD, the mean, and +1 SD of the GDS are plotted in Figure 4. Results indicate significant associations in the same direction at both high (β=.30; P<.001) and low (β=.56; P<.001) GDS levels (Table 10). Moreover, the effect of usability on user satisfaction was stronger in individuals with lower GDS levels.

Table 9. Estimates for the moderating effect of the GDSa and usability in the prediction of user satisfaction.
Variable | Estimate | Standard error | Z | P value
Usability | 0.43 | 0.082 | 5.28 | <.001
GDS | –0.012 | 0.009 | –1.34 | .18
H1: Usability × GDS | –0.028 | 0.015 | –1.90 | .06

aGDS: Geriatric Depression Scale.

Figure 4. Simple slope plot for the moderating effect of Geriatric Depression Scale (GDS) and usability in the prediction of user satisfaction.
Table 10. Effect of usability on satisfaction at different levels of the GDSa.
Effect | Estimate | Standard error | Z | P value
Average | 0.43 | 0.083 | 5.22 | <.001
Low (–1 SD) | 0.56 | 0.135 | 4.16 | <.001
High (+1 SD) | 0.30 | 0.070 | 4.36 | <.001

aGDS: Geriatric Depression Scale.
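The moderated-regression and simple-slope procedure behind Tables 9 and 10 can be sketched as follows. The simulated coefficients merely mirror the reported estimates for illustration; they are not the study data, and variable names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Simulated standardized predictors; coefficients echo Table 9 for illustration.
usability = rng.normal(size=n)
gds = rng.normal(size=n)
satisfaction = (0.43 * usability - 0.012 * gds
                - 0.028 * usability * gds + 0.1 * rng.normal(size=n))

# Moderated regression: satisfaction ~ usability + GDS + usability*GDS
X = np.column_stack([np.ones(n), usability, gds, usability * gds])
b0, b_use, b_gds, b_int = np.linalg.lstsq(X, satisfaction, rcond=None)[0]

def simple_slope(gds0):
    """Simple slope of usability on satisfaction at a given GDS level."""
    return b_use + b_int * gds0

# Slopes at -1 SD and +1 SD of the moderator, as in the simple-slope plot.
lo, hi = -gds.std(), gds.std()
print(round(simple_slope(lo), 2), round(simple_slope(hi), 2))
```

Because the interaction coefficient is negative, the slope at –1 SD of GDS exceeds the slope at +1 SD, reproducing the pattern in Table 10 (a stronger usability–satisfaction link at lower depression scores).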

Moderating Effect of Education

Usability significantly predicted satisfaction (β=.36; P<.001; Table 11). However, the interaction effect of usability × education in the direct path between usability and user satisfaction was not significant (β=3.63 × 10–4; P=.98). Simple slope estimates for the effect of usability on satisfaction indicated that education did not moderate the relationship between these variables (Table 12). Moreover, the interaction plot (Figure 5) showed no difference in simple slopes at –1 SD, the mean, and +1 SD.

Table 11. Estimates for the moderating effect of education and usability in the prediction of user satisfaction.
Variable | Estimate | Standard error | Z | P value
Usability | 0.36 | 0.070 | 5.23 | <.001
Education | 0.004 | 0.008 | 0.522 | .60
H2: Usability × Education | 3.63 × 10–4 | 0.017 | 0.022 | .98
Table 12. Effect of usability on satisfaction at different levels of education.
Effect | Estimate | Standard error | Z | P value
Average | 0.364 | 0.067 | 5.23 | <.001
Low (–1 SD) | 0.362 | 0.098 | 3.71 | <.001
High (+1 SD) | 0.366 | 0.128 | 2.85 | .004
Figure 5. Simple slope plot for the moderating effect of education and usability in the prediction of user satisfaction.
Moderating Effect of the MMSE

The interaction effect of usability × MMSE in the direct path between usability and user satisfaction was not significant (β=.014; P=.66; Table 13). The simple slopes of the interaction at –1 SD, the mean, and +1 SD of the MMSE are plotted in Figure 6. Results indicate a positive relationship between usability and satisfaction at both low (β=.352; P<.001) and high (β=.407; P<.001) MMSE levels (Table 14). Additionally, the results suggest that the effect of usability on user satisfaction was stronger for individuals with higher MMSE scores.

Table 13. Estimates for the moderating effect of the MMSEa and usability in the prediction of user satisfaction.
Variable | Estimate | Standard error | Z | P value
Usability | 0.380 | 0.070 | 5.46 | <.001
MMSE | –0.008 | 0.020 | –0.387 | .70
H3: Usability × MMSE | 0.014 | 0.031 | 0.445 | .66

aMMSE: Mini-Mental State Examination.

Figure 6. Simple slope plot for moderating effect of Mini-Mental State Examination (MMSE) and usability in the prediction of user satisfaction.
Table 14. Effect of usability on satisfaction at different levels of the Mini-Mental State Examination (MMSE).
Effect | Estimate | Standard error | Z | P value
Average | 0.380 | 0.070 | 5.45 | <.001
Low (–1 SD) | 0.352 | 0.079 | 4.44 | <.001
High (+1 SD) | 0.407 | 0.105 | 3.87 | <.001

Principal Findings

In recent years, wearable activity trackers have become part of a rapidly growing trend in biomedical research and medicine [13,71,72]. These devices have been employed in behavior change interventions because of their potential to motivate individuals to comply with a daily activity goal [13,71]. Because most older adults have insufficient levels of PA, these technologies may be especially beneficial in middle-aged and older age groups. Nonetheless, only 16% of activity tracker owners are reported to be 55-64 years of age and 7% over 65 [4], suggesting that these devices may not be feasible or acceptable to older adults [73]. It is therefore necessary to understand how older adults perceive these new technologies. Thus, to better understand potential barriers to using these technologies, the acceptability, usability, and user satisfaction experience with the Xiaomi Mi Band 2 were examined in a population of older adults.

Results from users’ experience indicate excellent technology acceptance, with high ratings on all TAM 3 constructs, including PEOU, PEC, CANX, and BI. Previously, Puri et al [49] found a moderate level of acceptance (65%) for the Xiaomi Mi Band 2 among Canadian community-dwelling older adults; interestingly, participants reported a significantly higher acceptance rate for the Xiaomi Mi Band 2 than for the Microsoft Band. In our study, no other devices were tested, precluding any comparison between wearable devices.

Concerning usability, all participants scored their experience above the acceptability baseline for the SUS. These results indicate that the Xiaomi Mi Band 2 has excellent usability for older adults in this specific context. Participants also reported a user satisfaction experience above the USEQ baseline value defined as a good experience, suggesting excellent user satisfaction. Nonetheless, such a high mean SUS score, 92.70 (SD 10.73), was surprising. In the study by Liang et al [8], the Xiaomi Mi Band 2 achieved one of the highest scores among several wearable devices with distinct market performance, but its mean SUS score was 65.12 (SD 14.73). Possible explanations include the intrinsic motivation to use the device and the way the device is supplied; such aspects should be evaluated in future studies.
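For readers comparing these SUS means, the scores follow Brooke's standard scoring rule. The sketch below implements that rule; the response pattern in the example is hypothetical, not taken from the study.

```python
def sus_score(responses):
    """Standard SUS scoring (Brooke, 1996).

    `responses` is a list of 10 item ratings on a 1-5 scale in
    questionnaire order. Odd-numbered items contribute (rating - 1),
    even-numbered items contribute (5 - rating); the sum is scaled by
    2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten ratings between 1 and 5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# A hypothetical, strongly favorable response pattern:
print(sus_score([5, 1, 5, 1, 5, 2, 5, 1, 5, 1]))  # 97.5
```

The alternating reversal is the usual source of scoring errors when SUS data are processed by hand, which is one reason published means are worth recomputing.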

This study also examined factors influencing user satisfaction with the Xiaomi Mi Band 2, based on the proposed theoretical framework. The hypothesized model was supported by moderately acceptable fit index values (χ2/df=1.67, GFI=0.880, TLI=0.818, CFI=0.868, and RMSEA=0.079), and 2 of the tested hypotheses were supported. Overall, results indicate that usability is a significant predictor of user satisfaction (β=.530; P<.01) and that usability, in turn, was negatively affected by depressive symptoms (β=–.369; P<.01). The model shows that individual characteristics explain 16.8% of the variance in usability and, together with usability, 39.1% of the variance in satisfaction.

Additionally, the potential moderating effect of user characteristics on the relationship between usability and user satisfaction was examined. Results suggest that the GDS moderates the effect of usability on user satisfaction, with a stronger effect in individuals with lower GDS levels. However, contrary to our expectations, we did not observe significant moderating effects of education (β=3.63 × 10–4; P=.98) or the MMSE (β=.014; P=.66). Future research should explore additional moderating effects of user characteristics, including personality traits as well as motivational and cultural aspects, to enable a better understanding of the factors that may influence user satisfaction and thereby facilitate technology adoption.

Overall, our results align with a recent study investigating the impact of depressive symptoms on web user experience measures, indicating that mood may influence technology usability [74]. Additionally, recent research on the relationship between user perceptions and user characteristics has shown that older adults demonstrate positive attitudes toward mobile technologies while also reporting their complexity. User characteristics such as age, processing speed, and attention significantly influence older adults’ usage behavior. Furthermore, education level was found to be positively correlated with diversity of use, possibly because individuals with higher education levels are more motivated to accept new concepts. The authors also noted that usability problems could be attributed to poor memory, decreased vision, and poor literacy, leading older adults to perceive the technologies as difficult to use [39].

Beyond the proposed research framework, we aimed to integrate the TAM with user satisfaction, similar to other studies [29,31,34]. However, because of the severe violation of normality observed in the TAM 3 constructs, we could not include the TAM in the path analysis. Nonetheless, previous research has shown a significant influence of PEOU on user satisfaction, with the latter proposed as a key predictor of BI [32-34]. Additionally, Chao [29] showed that perceived enjoyment, effort expectancy, and performance expectancy have significantly positive effects on satisfaction; thus, it would have been relevant to include these variables to predict satisfaction.

Regarding usability, Venkatesh et al [19,75] theorized that PEOU is affected by the objective usability of a specific system only after direct experience with the system, whereby perceptions of ease of use are determined by usability features, which in turn form the basis for acceptance or rejection. Moreover, higher objective usability implies a system that is easier to use. Several studies have suggested that usability is a determinant of PEOU [19,23,75].

Regarding study limitations, our sample is not representative of the entire older population because we used a convenience sample; therefore, findings cannot be widely generalized. Moreover, the sample is more homogeneous than the wider population on common factors, possibly leading to attenuated or erroneous correlations among variables [76,77]. Although we had a minimum sample size adequate for the estimation method (>100 participants), SEM is a large-sample technique [78]. Therefore, future studies should use a larger and more heterogeneous sample to obtain sufficiently accurate estimates, although our study had a larger sample size than previous ones [13,49-51]. A further limitation is that user experience was assessed for a specific wearable activity tracker (Xiaomi Mi Band 2) and is therefore not representative of the full range of devices currently on the market. Moreover, the testing period was limited to 15 days. Short-term technology acceptance may not be indicative of long-term acceptance, as research indicates that use of activity trackers tends to drop after the first few weeks [1,49]. Short timeframes also make it difficult to determine the impact of the novelty effect, defined as a person’s subjective “first responses to a technology, not the patterns of usage that will persist over time as the product ceases to be new” [79]; indeed, research suggests that the declining novelty effect could be a reason many activity tracker users discontinue use. Recently, Shin et al [80] explored the effect of novelty in the early stages (<3 months) of activity tracker adoption, as well as the motivational factors for sustained use in the long term (>6 months); their findings reveal that use beyond the novelty period is determined by intrinsic and extrinsic motivations.
Finally, we selected the SUS for the usability evaluation because it is the most widely used questionnaire for measuring perceived usability; however, this instrument does not cover all concepts of usability. For instance, there are several standards (eg, ISO 9241-11 [26], ISO/IEC 9126 [81]) and conceptual models for evaluating usability. Shackel [82] described 4 important characteristics of usability, namely, effectiveness, learnability, flexibility, and attitude, and the Nielsen model (1993) [83] gave 5 subattributes of usability, namely, learnability, efficiency, memorability, errors, and satisfaction [84,85]. Therefore, future studies should evaluate these key dimensions of usability.

Conclusions

In conclusion, while there is a pressing need for studies that include other devices currently on the market and evaluate longer-term use, our study extended existing research by providing valuable insight into the use of wearable activity trackers among older adults. First, a significant contribution of this work was to demonstrate the relevance of usability as an important factor influencing user satisfaction, which likely affects technology acceptance and the intention to use activity trackers, although we were not able to predict BI in our structural model. Our results also emphasize the need to consider strategies to minimize usability barriers to technology adoption in older adults: system designers should provide systems that address these concerns, and researchers must ensure that selected systems adequately address usability issues before they are implemented in clinical and research settings. Second, our study investigated user characteristics as moderating factors in the relationship between usability and user satisfaction and found that depressive symptoms significantly influence older adults’ perception of using technology. Other individual user characteristics should be examined, and the identified moderating effects should be taken into consideration when implementing strategies to promote technology adoption. Finally, our results suggest that the Xiaomi Mi Band 2 is a suitable wearable activity tracker for older adults to use in a real-life context.

Acknowledgments

Financial support for this work was provided by FEDER funds through the Operational Programme Competitiveness Factors – COMPETE and National Funds through FCT under the project POCI-01-0145-FEDER-007038 (UIDB/50026/2020, and UIDP/50026/2020), by the projects NORTE-01-0145-FEDER-000013 and NORTE-01-0145-FEDER-000023 (supported by the North Portugal Regional Operational Programme [NORTE 2020], under the Portugal 2020 [P2020] Partnership Agreement, through the European Regional Development Fund [FEDER]), by POCI-01-0145-FEDER-016428 (supported by the Operational Programme Competitiveness and Internationalization [COMPETE 2020] and the Regional Operational Program of Lisbon and National Funding through Portuguese Foundation for Science and Technology [FCT, Portugal]), and by the Portuguese North Regional Operational Programme (ON.2 – O Novo Norte, under the National Strategic Reference Framework [QREN], through FEDER). The work was also developed under the scope of the 2CA-Braga Grant of the 2017 Clinical Research Projects. CD was supported by a combined PhD scholarship from FCT and the company iCognitus4ALL - IT Solutions, Lda, Braga, Portugal (Grant number PD/BDE/127831/2016).

Authors' Contributions

CD was responsible for conceptualization, data curation, formal analysis, investigation, methodology, and writing (original draft, review, and editing). PSC was responsible for formal analysis, methodology, and writing (review and editing). NCS and JMP were responsible for funding acquisition, supervision, and writing (review and editing). All authors reviewed and approved the final version of the manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Measurement items of TAM 3. TAM: Technology Acceptance Model.

DOCX File , 14 KB

Multimedia Appendix 2

Measurement items of USEQ. USEQ: User Satisfaction Evaluation Questionnaire.

DOCX File , 13 KB

Multimedia Appendix 3

Measurement items of SUS. SUS: System Usability Scale.

DOCX File , 13 KB

  1. Shin G, Jarrahi MH, Fei Y, Karami A, Gafinowitz N, Byun A, et al. Wearable activity trackers, accuracy, adoption, acceptance and health impact: A systematic literature review. J Biomed Inform 2019 May;93:103153 [FREE Full text] [CrossRef] [Medline]
  2. Shih P, Han K, Poole E, Rosson M, Carroll J. Use and adoption challenges of wearable activity trackers. In: Proceedings of the iConference. 2015 Presented at: iConference; March 15, 2015; Newport Beach, CA p. 1-12   URL: https:/​/www.​researchgate.net/​publication/​268746784_Use_and_adoption_challenges_of_wearable_activity_trackers
  3. Rupp MA, Michaelis JR, McConnell DS, Smither JA. The role of individual differences on perceptions of wearable fitness device trust, usability, and motivational impact. Appl Ergon 2018 Jul;70:77-87. [CrossRef] [Medline]
  4. Steinert A, Haesner M, Steinhagen-Thiessen E. Activity-tracking devices for older adults: comparison and preferences. Univ Access Inf Soc 2017 Apr 8;17(2):411-419. [CrossRef]
  5. Turner-McGrievy G, Jake-Schoffman DE, Singletary C, Wright M, Crimarco A, Wirth MD, et al. Using Commercial Physical Activity Trackers for Health Promotion Research: Four Case Studies. Health Promot Pract 2019 May;20(3):381-389 [FREE Full text] [CrossRef] [Medline]
  6. Alley S, van Uffelen JG, Schoeppe S, Parkinson L, Hunt S, Power D, et al. Efficacy of a computer-tailored web-based physical activity intervention using Fitbits for older adults: a randomised controlled trial protocol. BMJ Open 2019 Dec 23;9(12):e033305 [FREE Full text] [CrossRef] [Medline]
  7. Seifert A, Schlomann A, Rietz C, Schelling HR. The use of mobile devices for physical activity tracking in older adults' everyday life. Digit Health 2017;3:2055207617740088 [FREE Full text] [CrossRef] [Medline]
  8. Liang J, Xian D, Liu X, Fu J, Zhang X, Tang B, et al. Usability Study of Mainstream Wearable Fitness Devices: Feature Analysis and System Usability Scale Evaluation. JMIR Mhealth Uhealth 2018 Nov 08;6(11):e11066 [FREE Full text] [CrossRef] [Medline]
  9. Maher C, Ryan J, Ambrosi C, Edney S. Users' experiences of wearable activity trackers: a cross-sectional study. BMC Public Health 2017 Nov 15;17(1):880 [FREE Full text] [CrossRef] [Medline]
  10. Kononova A, Li L, Kamp K, Bowen M, Rikard R, Cotten S, et al. The Use of Wearable Activity Trackers Among Older Adults: Focus Group Study of Tracker Perceptions, Motivators, and Barriers in the Maintenance Stage of Behavior Change. JMIR Mhealth Uhealth 2019 Apr 05;7(4):e9832 [FREE Full text] [CrossRef] [Medline]
  11. Berkowsky R, Sharit J, Czaja SJ. Factors Predicting Decisions About Technology Adoption Among Older Adults. Innov Aging 2018 Jan;2(1):igy002 [FREE Full text] [CrossRef] [Medline]
  12. Preusse KC, Mitzner TL, Fausset CB, Rogers WA. Older Adults' Acceptance of Activity Trackers. J Appl Gerontol 2017 Feb;36(2):127-155 [FREE Full text] [CrossRef] [Medline]
  13. Mercer K, Giangregorio L, Schneider E, Chilana P, Li M, Grindrod K. Acceptance of Commercially Available Wearable Activity Trackers Among Adults Aged Over 50 and With Chronic Illness: A Mixed-Methods Evaluation. JMIR Mhealth Uhealth 2016 Jan 27;4(1):e7 [FREE Full text] [CrossRef] [Medline]
  14. Schlomann A, Seifert A, Rietz C. Relevance of Activity Tracking With Mobile Devices in the Relationship Between Physical Activity Levels and Satisfaction With Physical Fitness in Older Adults: Representative Survey. JMIR Aging 2019 Mar 06;2(1):e12303 [FREE Full text] [CrossRef] [Medline]
  15. LaCaille L. Theory of Reasoned Action. In: Gellman MD, Turner JR, editors. Encyclopedia of Behavioral Medicine. New York, NY: Springer; 2013:1964-1967.
  16. Davis FD. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly 1989 Sep;13(3):319-340. [CrossRef]
  17. Revythi A, Tselios N. Extension of technology acceptance model by using system usability scale to assess behavioral intention to use e-learning. Educ Inf Technol 2019 Feb 1;24(4):2341-2355. [CrossRef]
  18. Venkatesh V, Davis FD. A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science 2000 Feb;46(2):186-204. [CrossRef]
  19. Venkatesh V, Bala H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decision Sciences 2008 May;39(2):273-315. [CrossRef]
  20. Lewis JR. The System Usability Scale: Past, Present, and Future. International Journal of Human–Computer Interaction 2018 Mar 30;34(7):577-590. [CrossRef]
  21. Brooke J. SUS - A quick and dirty usability scale. Usability Evaluation in Industry. 1996.   URL: https://www.researchgate.net/publication/228593520_SUS_A_quick_and_dirty_usability_scale [accessed 2022-01-11]
  22. Pande T, Saravu K, Temesgen Z, Seyoum A, Rai S, Rao R, et al. Evaluating clinicians' user experience and acceptability of , a smartphone application for tuberculosis in India. Mhealth 2017 Jul 27;3:30-30 [FREE Full text] [CrossRef] [Medline]
  23. Scholtz B, Mahmud I, Ramayah T. Does Usability Matter? An Analysis of the Impact of Usability on Technology Acceptance in ERP Settings. IJIKM 2016;11:309-330. [CrossRef]
  24. Lewis JR, Sauro J. The factor structure of the System Usability Scale. In: Human Centered Design. Berlin, Heidelberg: Springer; 2009:94-103.
  25. Martins AI, Rosa AF, Queirós A, Silva A, Rocha NP. European Portuguese Validation of the System Usability Scale (SUS). Procedia Computer Science 2015;67:293-300. [CrossRef]
  26. International Organization for Standardization. Ergonomics of Human-System Interaction — Part 11: Usability: Definitions and Concepts. 2018.   URL: https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en [accessed 2022-01-11]
  27. Bevan N, Carter J, Earthy J, Geis T, Harker S. New ISO standards for usability, usability reports and usability measures. In: Human-Computer Interaction. Theory, Design, Development and Practice. Cham, Switzerland: Springer; 2016:268-278.
  28. Dalcher I, Shine J. Extending the New Technology Acceptance Model to Measure the End User Information Systems Satisfaction in a Mandatory Environment: A Bank's Treasury. Technology Analysis & Strategic Management 2003 Dec;15(4):441-455. [CrossRef]
  29. Chao C. Factors Determining the Behavioral Intention to Use Mobile Learning: An Application and Extension of the UTAUT Model. Front Psychol 2019 Jul 16;10:1652 [FREE Full text] [CrossRef] [Medline]
  30. Wixom BH, Todd PA. A Theoretical Integration of User Satisfaction and Technology Acceptance. Information Systems Research 2005 Mar;16(1):85-102. [CrossRef]
  31. Mather D, Caputi P, Jayasuriya R. Is the technology acceptance model a valid model of user satisfaction of information technology in environments where usage is mandatory? ACIS 2002 Proceedings. 2002.   URL: https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1149&context=acis2002 [accessed 2022-01-11]
  32. Ghobakhloo M, Zulkifli B, Aziz F. The interactive model of user information technology acceptance and satisfaction in small and medium-sized enterprises. European Journal of Economics, Finance and Administrative Sciences 2010;19(1):7-27 [FREE Full text]
  33. Ho K, Ho C, Chung M. Theoretical integration of user satisfaction and technology acceptance of the nursing process information system. PLoS One 2019 Jun 4;14(6):e0217622 [FREE Full text] [CrossRef] [Medline]
  34. He L, Kim H, Gong H. The Influence of Consumer and Product Characteristics on Intention to Repurchase of Smart band. International Journal of Asia Digital Art and Design Association 2017;21(1):13-18. [CrossRef]
  35. Gil-Gómez JA, Manzano-Hernández P, Albiol-Pérez S, Aula-Valero C, Gil-Gómez H, Lozano-Quilis J. USEQ: A Short Questionnaire for Satisfaction Evaluation of Virtual Rehabilitation Systems. Sensors (Basel) 2017 Jul 07;17(7):1589 [FREE Full text] [CrossRef] [Medline]
  36. Domingos C, Costa PS, Santos NS, Pêgo JM. European Portuguese Version of the User Satisfaction Evaluation Questionnaire (USEQ): Transcultural Adaptation and Validation Study. JMIR Mhealth Uhealth 2021 Jun 29;9(6):e19245 [FREE Full text] [CrossRef] [Medline]
  37. Montag C, Panksepp J. Primary Emotional Systems and Personality: An Evolutionary Perspective. Front Psychol 2017;8:464 [FREE Full text] [CrossRef] [Medline]
  38. Svendsen GB, Johnsen JK, Almås-Sørensen L, Vittersø J. Personality and technology acceptance: the influence of personality factors on the core constructs of the Technology Acceptance Model. Behaviour & Information Technology 2013 Apr;32(4):323-334. [CrossRef]
  39. Li Q, Luximon Y. Understanding Older Adults Post-Adoption Usage Behavior and Perceptions of Mobile Technology. International Journal of Design 2018;12(3):93-110 [FREE Full text]
  40. Hogan CL, Mata J, Carstensen LL. Exercise holds immediate benefits for affect and cognition in younger and older adults. Psychol Aging 2013 Jun;28(2):587-594 [FREE Full text] [CrossRef] [Medline]
  41. Gajewski PD, Falkenstein M. Physical activity and neurocognitive functioning in aging - a condensed updated review. Eur Rev Aging Phys Act 2016 Jan 21;13(1):1 [FREE Full text] [CrossRef] [Medline]
  42. Lautenschlager NT, Cox K, Cyarto EV. The influence of exercise on brain aging and dementia. Biochim Biophys Acta 2012 Mar;1822(3):474-481 [FREE Full text] [CrossRef] [Medline]
  43. Green SB. How Many Subjects Does It Take To Do A Regression Analysis. Multivariate Behav Res 1991 Jul 01;26(3):499-510. [CrossRef] [Medline]
  44. Eurostat. Ageing Europe. Looking at the Lives of Older People in the EU. 2020.   URL: https:/​/ec.​europa.eu/​eurostat/​documents/​3217494/​11478057/​KS-02-20-655-EN-N.pdf/​9b09606c-d4e8-4c33-63d2-3b20d5c19c91?t=1604055531000 [accessed 2022-01-11]
  45. Yesavage JA, Brink T, Rose TL, Lum O, Huang V, Adey M, et al. Development and validation of a geriatric depression screening scale: A preliminary report. Journal of Psychiatric Research 1982 Jan;17(1):37-49. [CrossRef]
  46. Guerreiro M, Silva AA, Botelho MF, Leitão O, Castro-Caldas A, Garcia C. Adaptação à população portuguesa da tradução do Mini Mental State Examination (MMSE). Revista Portuguesa de Neurologia 1994;1(9):9-10.
  47. Santana I, Duro D, Lemos R, Costa V, Pereira M, Simões MR, et al. [Mini-Mental State Examination: Screening and Diagnosis of Cognitive Decline, Using New Normative Data]. Acta Med Port 2016 Apr;29(4):240-248 [FREE Full text] [CrossRef] [Medline]
  48. Pocinho M, Farate C, Dias C, Lee T, Yesavage J. Clinical and Psychometric Validation of the Geriatric Depression Scale (GDS) for Portuguese Elders. Clinical Gerontologist 2009 Feb 23;32(2):223-236. [CrossRef]
  49. Puri A, Kim B, Nguyen O, Stolee P, Tung J, Lee J. User Acceptance of Wrist-Worn Activity Trackers Among Community-Dwelling Older Adults: Mixed Method Study. JMIR Mhealth Uhealth 2017 Nov 15;5(11):e173 [FREE Full text] [CrossRef] [Medline]
  50. Vooijs M, Alpay LL, Snoeck-Stroband JB, Beerthuizen T, Siemonsma PC, Abbink JJ, et al. Validity and usability of low-cost accelerometers for internet-based self-monitoring of physical activity in patients with chronic obstructive pulmonary disease. Interact J Med Res 2014 Oct 27;3(4):e14 [FREE Full text] [CrossRef] [Medline]
  51. Farina N, Lowry RG. Older adults' satisfaction of wearing consumer-level activity monitors. J Rehabil Assist Technol Eng 2017 Oct 31;4:2055668317733258 [FREE Full text] [CrossRef] [Medline]
  52. Xie J, Wen D, Liang L, Jia Y, Gao L, Lei J. Evaluating the Validity of Current Mainstream Wearable Devices in Fitness Tracking Under Various Physical Activities: Comparative Study. JMIR Mhealth Uhealth 2018 Apr 12;6(4):e94 [FREE Full text] [CrossRef] [Medline]
  53. El-Amrawy F, Nounou MI. Are Currently Available Wearable Devices for Activity Tracking and Heart Rate Monitoring Accurate, Precise, and Medically Beneficial? Healthc Inform Res 2015 Oct;21(4):315-320 [FREE Full text] [CrossRef] [Medline]
  54. Mičková E, Machová K, Daďová K, Svobodová I. Does Dog Ownership Affect Physical Activity, Sleep, and Self-Reported Health in Older Adults? Int J Environ Res Public Health 2019 Sep 11;16(18):3355 [FREE Full text] [CrossRef] [Medline]
  55. Brooke J. SUS: a retrospective. Journal of Usability Studies 2013;8(2):29-40 [FREE Full text]
  56. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. The Journal of Usability Studies 2009;4(3):114-123 [FREE Full text]
  57. Kline RB. Principles and Practice of Structural Equation Modeling. New York, NY: Guilford Publications; 2015.
  58. Diop EB, Zhao S, Duy TV. An extension of the technology acceptance model for understanding travelers' adoption of variable message signs. PLoS One 2019 Apr 25;14(4):e0216007 [FREE Full text] [CrossRef] [Medline]
  59. Hooper D, Coughlan J, Mullen M. Structural equation modelling: guidelines for determining model fit. Electronic Journal on Business Research Methods 2008;6(1):53-60 [FREE Full text]
  60. Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research 2003;8(2):23-74 [FREE Full text]
  61. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal 1999 Jan;6(1):1-55. [CrossRef]
  62. Şimşek GG, Noyan F. McDonald's ωt, Cronbach's α, and Generalized θ for Composite Reliability of Common Factors Structures. Communications in Statistics - Simulation and Computation 2013 Oct;42(9):2008-2025. [CrossRef]
  63. Trizano-Hermosilla I, Alvarado JM. Best Alternatives to Cronbach's Alpha Reliability in Realistic Conditions: Congeneric and Asymmetrical Measurements. Front Psychol 2016 May 26;7:769 [FREE Full text] [CrossRef] [Medline]
  64. Costello A, Osborne J. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Practical Assessment 2005;10(7):1-9 [FREE Full text] [CrossRef]
  65. Gadermann AM, Guhn M, Zumbo BD. Estimating ordinal reliability for Likert-type and ordinal item response data: A conceptual, empirical, and practical guide. Practical Assessment, Research, and Evaluation 2012;17(3):1-13 [FREE Full text]
  66. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika 1951 Sep;16(3):297-334. [CrossRef]
  67. Shrestha N. Detecting Multicollinearity in Regression Analysis. AJAMS 2020 Jan 15;8(2):39-42. [CrossRef]
  68. Senaviratna NAMR, Cooray TMJ. Diagnosing Multicollinearity of Logistic Regression Model. AJPAS 2019 Oct 01:1-9. [CrossRef]
  69. Curran PJ, West SG, Finch JF. The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological Methods 1996 Mar;1(1):16-29. [CrossRef]
  70. Norman G. Likert scales, levels of measurement and the "laws" of statistics. Adv Health Sci Educ Theory Pract 2010 Dec 10;15(5):625-632. [CrossRef] [Medline]
  71. Bassett DR, Freedson PS, John D. Wearable Activity Trackers in Clinical Research and Practice. Kinesiology Review 2019 Feb;8(1):11-15. [CrossRef]
  72. Keogh A, Dorn JF, Walsh L, Calvo F, Caulfield B. Comparing the Usability and Acceptability of Wearable Sensors Among Older Irish Adults in a Real-World Context: Observational Study. JMIR Mhealth Uhealth 2020 Apr 20;8(4):e15704 [FREE Full text] [CrossRef] [Medline]
  73. Lyons EJ, Swartz MC, Lewis ZH, Martinez E, Jennings K. Feasibility and Acceptability of a Wearable Technology Physical Activity Intervention With Telephone Counseling for Mid-Aged and Older Adults: A Randomized Controlled Pilot Trial. JMIR Mhealth Uhealth 2017 Mar 06;5(3):e28 [FREE Full text] [CrossRef] [Medline]
  74. Thielsch M, Thielsch C. Depressive symptoms and web user experience. PeerJ 2018;6:e4439 [FREE Full text] [CrossRef] [Medline]
  75. Venkatesh V. Determinants of Perceived Ease of Use: Integrating Control, Intrinsic Motivation, and Emotion into the Technology Acceptance Model. Information Systems Research 2000 Dec;11(4):342-365. [CrossRef]
  76. Fabrigar LR, Wegener DT, MacCallum RC, Strahan EJ. Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods 1999 Sep;4(3):272-299. [CrossRef]
  77. Wolf EJ, Harrington KM, Clark SL, Miller MW. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety. Educ Psychol Meas 2013 Dec 09;73(6):913-934 [FREE Full text] [CrossRef] [Medline]
  78. Kyriazos TA. Applied Psychometrics: Sample Size and Sample Power Considerations in Factor Analysis (EFA, CFA) and SEM in General. PSYCH 2018;9(8):2207-2230. [CrossRef]
  79. Sung J, Christensen HI, Grinter RE. Robots in the wild: Understanding long-term use. In: HRI '09: Proceedings of the 4th ACM/IEEE international conference on Human robot interaction. 2009 Presented at: HRI09: International Conference on Human Robot Interaction; March 9-13, 2009; La Jolla, CA p. 45-52. [CrossRef]
  80. Shin G, Feng Y, Jarrahi MH, Gafinowitz N. Beyond novelty effect: a mixed-methods exploration into the motivation for long-term activity tracker use. JAMIA Open 2019 Apr;2(1):62-72 [FREE Full text] [CrossRef] [Medline]
  81. International Organization for Standardization. ISO/IEC 9126-1:2001 Software Engineering — Product Quality — Part 1: Quality Model. 2001.   URL: https://www.iso.org/standard/22749.html [accessed 2022-01-11]
  82. Shackel B. Usability—context, framework, definition, design, and evaluation. In: Shackel B, Richardson S, editors. Human Factors for Informatics Usability. Cambridge, UK: Cambridge University Press; 1991:21-38.
  83. Nielsen J. Usability Engineering. San Diego, CA: Academic Press; 1993.
  84. Weichbroth P. Usability attributes revisited: a time-framed knowledge map. In: Proceedings of the Federated Conference on Computer Science and Information Systems. New York, NY: IEEE; 2018 Presented at: Federated Conference on Computer Science and Information Systems; September 9-12, 2018; Poznań, Poland p. 1005-1008   URL: https://annals-csis.org/Volume_15/drp/pdf/137.pdf [CrossRef]
  85. Madan A, Dubey SK. Usability evaluation methods: a literature review. International Journal of Engineering Science and Technology 2012 Feb;4(2):590-599 [FREE Full text]


BI: behavioral intention
CANX: computer anxiety
CFA: confirmatory factor analysis
CFI: Comparative Fit Index
GDS: Geriatric Depression Scale
GFI: Goodness of Fit Index
MMSE: Mini-Mental State Examination
PA: physical activity
PEC: perceptions of external control
PEOU: perceived ease of use
PU: perceived usefulness
RMSEA: Root Mean Squared Error of Approximation
SEM: structural equation modeling
SRMR: Standardized Root Mean Squared Residual
SUS: System Usability Scale
TAM: Technology Acceptance Model
TLI: Tucker–Lewis Index
USEQ: User Satisfaction Evaluation Questionnaire
VIF: variance inflation factor


Edited by R Kukafka; submitted 19.12.20; peer-reviewed by N Georgi, S Mukherjee, T Russell-Rose; comments to author 15.02.21; revised version received 24.03.21; accepted 05.07.21; published 26.01.22

Copyright

©Célia Domingos, Patrício Costa, Nadine Correia Santos, José Miguel Pêgo. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 26.01.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.