Published in Vol 19, No 10 (2017): October

Reliability and Validity of the Telephone-Based eHealth Literacy Scale Among Older Adults: Cross-Sectional Survey


Original Paper

1Department of Health Education and Promotion, East Carolina University, Greenville, NC, United States

2Department of Health Education & Behavior, University of Florida, Gainesville, FL, United States

3ICF, Fairfax, VA, United States

4Kinesiology Department, California Polytechnic State University, San Luis Obispo, CA, United States

5Department of Community Health and Prevention, Drexel University, Philadelphia, PA, United States

Corresponding Author:

Michael Stellefson, PhD

Department of Health Education and Promotion

East Carolina University

3104 Carol Belk Building

Greenville, NC, 27858

United States

Phone: 1 252 328 2105

Fax: 1 252 328 1285

Email: stellefsonm17@ecu.edu


Background: Only a handful of studies have examined reliability and validity evidence of scores produced by the 8-item eHealth Literacy Scale (eHEALS) among older adults. Older adults are generally more comfortable responding to survey items when asked by a real person rather than by completing self-administered paper-and-pencil or online questionnaires. However, no studies have explored the psychometrics of this scale when administered to older adults over the telephone.

Objective: The objective of our study was to examine the reliability and internal structure of eHEALS data collected from older adults aged 50 years or older responding to items over the telephone.

Methods: Respondents (N=283) completed eHEALS as part of a cross-sectional landline telephone survey. Exploratory structural equation modeling (E-SEM) analyses examined model fit of eHEALS scores with 1-, 2-, and 3-factor structures. Subsequent analyses based on the partial credit model explored the internal structure of eHEALS data.

Results: Compared with 1- and 2-factor models, the 3-factor eHEALS structure showed the best global E-SEM model fit indices (root mean square error of approximation=.07; comparative fit index=1.0; Tucker-Lewis index=1.0). Nonetheless, the 3 factors were highly correlated (r range .36 to .65). Item analyses revealed that eHEALS items 2 through 5 were overfit to a minor degree (mean square infit/outfit values <1.0; t statistics less than –2.0), but the internal structure of Likert scale response options functioned as expected. Overfitting eHEALS items (2-5) displayed a similar degree of information for respondents at similar points on the latent continuum. Test information curves suggested that eHEALS may capture more information about older adults at the higher end of the latent continuum (ie, those with high eHealth literacy) than at the lower end of the continuum (ie, those with low eHealth literacy). Item reliability (value=.92) and item separation (value=11.31) estimates indicated that eHEALS responses were reliable and stable.

Conclusions: Results support administering eHEALS over the telephone when surveying older adults regarding their use of the Internet for health information. eHEALS scores best captured 3 factors (or subscales) to measure eHealth literacy in older adults; however, statistically significant correlations between these 3 factors suggest an overarching unidimensional structure with 3 underlying dimensions. As older adults continue to use the Internet more frequently to find and evaluate health information, it will be important to consider modifying the original eHEALS to adequately measure societal shifts in online health information seeking among aging populations.

J Med Internet Res 2017;19(10):e362

doi:10.2196/jmir.8481

Introduction

The increasing amount of online health information available to the public [1,2], coupled with the popularity of health-related Internet searches [3,4], has greatly increased Internet use for health-related purposes. With this increased use come both benefits and challenges. Greater Internet adoption has increased the availability of health information for consumers, yet disparities in access to relevant online health information persist, especially among users with insufficient skills to discriminate between credible and fraudulent online health information. The broad reach of the Internet has potential to increase health knowledge and to build self-efficacy to carry out protective health behaviors, yet the large volume of health information on the Internet often lacks quality, relevance, and veracity [5,6]. Online health information seeking is also generally an independent, goal-driven activity that puts the user in control of sifting through an abundant amount of health information. To do this effectively, users must possess skills to identify reliable sources, appraise the relevance of online health information, and translate knowledge gained into meaningful action that addresses a health-related concern.

Older Adults and Online Health Information Seeking

Proficiency in carrying out online health information-seeking behaviors varies by sociodemographic factors, including age [7]. For example, greater adoption of the Internet by older adults has increased the accessibility of health information to this subset of the population [8,9]. One recent study in the United States showed that Web adoption among older adults is climbing, with 67% of people over 65 years of age using the Internet and more than 40% using smartphones [10]. Over 50% of US adults aged 35 to 60 years reported searching for online health information, while only 31% over the age of 60 years reported doing so [11]. Older adults need high-quality, relevant, and accurate health information regarding age-related physical conditions and ailments that require regular and consistent medical attention [12,13]. However, research suggests that most older adults do not access high-quality health information that addresses their health concerns [14].

There are several reasons why older adults may be unable to benefit from increased access to online health information. Older adult populations report high computer anxiety, which compromises their ability to carry out functional tasks using Internet-based technologies [15]. Only 26% of older adult Internet users reported feeling confident when using the Internet to complete daily tasks [10]. This lack of confidence using digital devices often leads to lack of Internet use for health information among older adult populations [10,16-18]. Nevertheless, older adults who overcome anxiety toward using health information technology demonstrate greater patient activation (ie, enhanced knowledge, skills, and confidence a person has in managing their own health and health care) and are more satisfied after talking with their provider about their own medical questions [19,20].

Moreover, routine online health information seeking has the potential to motivate older adults living with chronic disease to become more proactive in their health care decision making [21,22]. Because it is very likely that older adults will increasingly use the Internet to access health information to improve their health, it is important to measure the extent to which they have the capacity to search for, retrieve, and evaluate health-related resources that they come across online (ie, eHealth literacy).

Measurement of eHealth Literacy

eHealth literacy was originally defined by Norman and Skinner [23] as “the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem.” To conceptualize eHealth literacy, Norman and Skinner [23] used the metaphor of a lily flower with 6 discrete petals (literacies) feeding into a core pistil. They categorized the core literacies proposed to contribute to eHealth literacy as being either context specific (ie, health, computer, and science literacies) or analytic specific (ie, traditional and numeracy, information, and media literacies). The concept of eHealth literacy is dynamic and evolving, meaning that it varies according to a variety of individual and contextual factors, including an individual’s health status, their purpose(s) for seeking health information, and the technology they select to access health information. Recent research suggests that people with greater eHealth literacy are more informed health decision makers [24], which ultimately increases their capacity to engage in health protective behaviors [25] and improve their quality of life [26]. While several studies have examined eHealth literacy, rigorous measurement of the 6 constituent eHealth literacies is underdeveloped and presents an ongoing challenge for health promotion researchers.

In 2006, Norman and Skinner [27] developed the eHealth Literacy Scale (eHEALS), an 8-item rating scale that measures consumers’ knowledge of and perceived confidence in their ability to seek, understand, and evaluate health information obtained from the Internet to address health-related concerns. Scores from eHEALS have supported its reliability as a unidimensional scale in diverse populations, including adolescents [27], college students [28], adults in the general US population [28], older adults recruited on the Internet [29], and people living with chronic disease [30,31]. eHEALS has been translated into many different languages and administered in countries around the world (eg, Germany, Italy, the Netherlands, Israel, and China).

Several studies have explored the dimensionality of data produced by eHEALS, reporting varied results. This literature describes some potential problems related to the internal structure of the eHEALS. Specifically, the number of factors (and factor loadings) derived in measurement studies of the eHEALS have shown some variability. Two recent studies reported that up to 3 unique, yet highly correlated, factors may be present when the scale is administered to older adults [30,32]. However, studies reporting the presence of multiple subscales have yet to explicate which eHEALS items load onto distinct factors (or constructs) when eHEALS is completed by older adults [29,30,32]. This variability has caused some difficulty when attempting to define what these unique factors, or subscales, are actually measuring.

Soellner and colleagues [33] translated the eHEALS into German and found that, despite poor global model fit, data from 18-year-old university students may best fit a 2-factor model, where eHEALS items measure online health information seeking (items 1-5, and 8) and online health resource appraisal (items 6 and 7). Neter and colleagues [7] also found adequate global model fit with a 2-factor model of eHEALS data when collected among adults over 21 years of age. This 2-factor structure consisted of 1 factor measuring online health information seeking (items 1-3) and another measuring online health resource appraisal (items 4-8). It should be noted, however, that 1 study [33] primarily consisted of adolescents. Younger people are more likely to report higher eHealth literacy than their older counterparts [7,31]. The factor structure and variance of eHEALS scores may differ as a function of age, which could influence results from eHEALS studies including younger versus older samples. Diviani et al [34] conducted a validation study of the Italian version of eHEALS administered among young to middle-aged adults (mean age 37.37 years, SD 13.78). Confirmatory factor analysis results showed suboptimal model fit among 2 rival models (1-factor structure vs 2-factor structure), yet parametric and nonparametric item response theory (IRT) analyses confirmed that the single-factor model best fit the data in the study sample.

It is also important to note that the mode of survey administration can affect the reliability and validity evidence of survey data [35]. Many studies examining the internal structure of eHEALS data collected from older adults have only used Web-based survey methods [29,30,32]. Web-based surveys have several advantages, including time and cost efficiencies, but they are prone to response bias, especially when respondents demonstrate concerns about the privacy of disclosing information through Web-based survey portals [36]. Also, analyzing eHEALS data collected from only active Internet users may reduce the quality of reliability and validity assessments due to sampling bias. Older adults who use the Internet to complete the eHEALS are likely to be more confident in their online health information-seeking skills; thus, solely relying on Web-based survey methods to establish evidence for the validity of eHEALS scores may introduce measurement bias. Administering Web-based versions of eHEALS to older Internet users may skew data toward respondents with high overall eHealth literacy, which may partly explain why existing studies report moderate to high eHealth literacy in older adult populations.

Dillman [37] recommended use of telephone-based surveys for collecting data among older populations, who often feel more comfortable answering questions asked by an actual person rather than via online or paper-and-pencil questionnaires. In a recent study, Neter and Brainin [38] conducted a nationally representative random digit dial telephone household survey of Israeli adults aged 50 years and older to determine their perceived eHealth literacy as measured by eHEALS. In this older population, perceived eHealth literacy was judged to be moderate (mean 3.17, SD 0.93), with a moderate correlation established between perceived and actual eHealth literacy (r=.34, P=.01). However, no psychometric data on eHEALS responses were reported in this age-restricted (50 years of age and older) sample. Overall, considerable variability has been documented in the literature, leading to difficulty defining what the unique factors, or eHEALS subscales, may be measuring. These discrepancies in confirmatory factor analysis and IRT analysis results highlight the importance of conducting additional psychometric research that considers differences in eHEALS item measurement, factor structure, and item difficulty among older adults. The purpose of this study was to examine the reliability and explore the internal structure of eHEALS data when the scale is administered to older adults using telephone-based survey methods.


Methods

Recruitment

We conducted a cross-sectional landline telephone survey as part of the Florida Consumer Confidence Index (F-CCI) Survey [39]. At least 500 households in the US state of Florida were contacted over 1 month. A minimum of 10 call attempts per household were made every Monday through Friday (between 9:00 AM and 9:00 PM), Saturday (between 12:00 PM and 6:00 PM), and Sunday (between 3:00 PM and 9:00 PM) using the random digit dialing method. The Institutional Review Board at the University of Florida approved the conduct of this study. Overall, 6695 calls were placed, and 493 individuals (response rate 7%) agreed to participate in the telephone survey. Participants were not provided incentives for participating in the F-CCI Survey. We included data from these individuals in the main analyses if respondents reported being (1) at least 50 years old, and (2) Internet or email users. We selected the age cutoff based on Watkins and Xie’s [40] systematic review of eHealth literacy interventions for older adults, which notes that chronological age for the older population “can range from 50 to over 100” years and that the age range of 50 years and older “is consistent with growing appreciation of the role that health behavior interventions play in healthy aging for those under age 65” years (pg e255). While screening participants for this study, we found that 393 F-CCI Survey respondents reported being at least 50 years old, yet 110 responded “no” when asked if they used the Internet or email. Therefore, the final sample size for this study was N=283.

Measures

Sociodemographics and Health Status

We asked respondents to provide the following personal information: (1) age (in years); (2) sex (male, female); (3) race (white, African American, Asian or Pacific Islander, American Indian or Alaskan Native, multiracial or mixed race nonwhite); (4) ethnicity (Spanish or Hispanic, non-Spanish or non-Hispanic); (5) education (less than high school, high school or general equivalency diploma, some college, college graduate, postgraduate); (6) income (less than US $20,000, $20,000-$49,999, $50,000-$99,999, $100,000 or more); and (7) perceived health status (poor, fair, good, very good, excellent). Additionally, respondents reported whether they had any experience (yes/no) using social media platforms (ie, online support group, popular social media websites such as Facebook or Twitter, or online blogs) to access or share health information.

eHealth Literacy

Norman and Skinner’s [27] eHEALS was included as part of the F-CCI Survey. eHEALS comprises 8 items that measure consumers’ perceived knowledge about how to find, use, and evaluate Internet-based health information to make informed health decisions. Response options are based on a 5-point Likert-type scale that ranges from 1 (strongly disagree) to 5 (strongly agree), with total summed eHealth literacy scores ranging from 8 (lowest possible eHealth literacy) to 40 (highest possible eHealth literacy).
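
To illustrate the scoring, a minimal R sketch is shown below. The data frame and item column names (ehealth, e1 through e8) are illustrative assumptions rather than variables from the original survey file.

```r
# Scoring sketch; `ehealth` with columns e1-e8 (each coded 1-5) is an assumed data frame.
library(psych)

items <- ehealth[, paste0("e", 1:8)]     # the 8 eHEALS items
ehealth$eheals_total <- rowSums(items)   # total score: 8 (lowest) to 40 (highest eHealth literacy)
psych::alpha(items)                      # Cronbach alpha for internal consistency
```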

Data Analysis

An exploratory structural equation modeling (E-SEM) approach [41] using the weighted least squares and adjusted means and variances (WLSMV) estimator examined the model fit of eHEALS scores with 1-, 2-, and 3-factor structures. This model uses an exploratory factor analysis measurement model and applies a structural equation model to describe (1) which items significantly load onto the extracted factor(s); (2) the dimensionality or number of factors (or subscales) produced; and (3) the relationships between factors (if more than 1 factor is extracted). The following global model fit indices provided evidence of good model fit [42]: (1) root mean square error of approximation (RMSEA) value close to .06; (2) comparative fit index (CFI) value >.95; (3) Tucker-Lewis index (TLI) value >.95; and (4) nonstatistically significant chi-square test. We evaluated factor loadings of each item for statistical significance (P<.05) and computed fit indices for all 3 factor structures to determine the best overall model fit. We used Mplus v7.3 (Muthén & Muthén; [43]) to conduct all E-SEM analyses.
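
For readers without access to Mplus, a roughly equivalent specification can be sketched in R using lavaan's exploratory (efa) blocks. This is an illustrative approximation under assumed variable names (e1-e8) and an assumed data frame (ehealth), not the authors' original Mplus input, so results may not match exactly.

```r
# Approximate 3-factor E-SEM sketch with lavaan (requires a recent lavaan version).
library(lavaan)

esem_3f <- '
  efa("block1")*f1 +
  efa("block1")*f2 +
  efa("block1")*f3 =~ e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8
'
fit3 <- sem(esem_3f, data = ehealth,
            ordered = paste0("e", 1:8),   # treat Likert-type items as ordinal
            estimator = "WLSMV", rotation = "geomin")

# Global fit indices reported in the paper: RMSEA, CFI, TLI, and the chi-square test
fitMeasures(fit3, c("rmsea", "cfi", "tli", "chisq.scaled", "pvalue.scaled"))
standardizedSolution(fit3)                # rotated factor loadings
```

The 1- and 2-factor models can be compared by dropping f3 (and f2) from the efa block and refitting.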

Following E-SEM analyses, we used the partial credit model (PCM), an IRT analysis [44,45], to explore the internal structure of the self-reported polytomous (ie, more than 2 possible response options) eHEALS data. This analysis was appropriate given that the final sample size (N=283) was over 200 cases and greater than 10 times the number of eHEALS items (ie, 8) [46,47]. PCM constrains item discrimination, or the strength (slope) of the relationship between responses and the latent trait, to be equal across items. This provides important information on which response options have the greatest probability of being selected at a particular theta (ie, a person’s latent trait score) level on the latent continuum. Information from PCM analyses helps to evaluate stability across items, which reduces the potential for item bias [48,49]. Allowing step difficulties to vary across items provides useful information about the range of difficulties measured in a scale, including whether differences in step difficulties exist across items. The eRm software package version 0.15-7 for R (R Foundation; [50]) computed all PCM estimates.
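
A minimal sketch of the PCM estimation with the eRm package cited above follows; the response matrix name and the recoding step (eRm expects the lowest response category to be coded 0) are assumptions about data preparation.

```r
# Partial credit model sketch with eRm; `ehealth` with columns e1-e8 is an assumed data frame.
library(eRm)

X <- as.matrix(ehealth[, paste0("e", 1:8)]) - 1   # shift 1-5 Likert codes to 0-4

pcm_fit <- PCM(X)                      # estimate item and step (threshold) parameters
pers    <- person.parameter(pcm_fit)   # person (theta) estimates on the latent continuum
summary(pcm_fit)
```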

Finally, Linacre’s guidelines [46] for optimizing rating scales under IRT assumptions informed item fit analyses that calculated step difficulties of each response option. Optimized rating scales have threshold values (ie, relative difficulties to advance from one response option to the other) that increase across the theta continuum, which helps confirm that higher response options coincide with greater ability levels. Relative difficulties across response options helped to determine how precisely each eHEALS item was measured on the latent continuum. Values for each item that advanced less than 1.4 logits indicated a lack of variability across response categories, whereas values advancing more than 5.0 logits indicated extremely high variability, or low precision, between response categories.
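
Continuing the sketch above, the step difficulties and their advances can be checked against these guidelines; the structure of the thresholds() output assumed here follows the eRm package defaults.

```r
# Threshold (step) locations per item; the first column of the table is the overall item location.
thr <- thresholds(pcm_fit)$threshtable[[1]]

# Advances between adjacent thresholds, judged against Linacre's 1.4-5.0 logit guideline.
adv <- t(apply(thr[, -1], 1, diff))
adv < 1.4   # TRUE flags small advances (little variability between response categories)
adv > 5.0   # TRUE flags very large advances (low precision between response categories)
```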

Infit and outfit mean square (MSQ) and t statistics determined the level of noise or randomness in item response options. For outfit MSQ values, any value greater than 1.5 indicates unpredictable random error, whereas a value less than 1.0 indicates a degree of overpredictability and nonrandom error. Values less than 0.5 indicate troublesome overfit. For outfit t statistics, a value greater than 2.0 indicates underfit and a value less than –2.0 indicates overfit [50]. Measurement stability, which describes adequate item placement across the latent continuum, is determined based on adequate item reliability (>.80) and satisfactory item separation (>2.0) [51].
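
The item fit screening described above can be reproduced from the fitted PCM; the component names of eRm's itemfit object used below are assumptions based on the package documentation.

```r
# Infit/outfit screening (continuing the PCM sketch, where pers = person.parameter(pcm_fit)).
ifit <- itemfit(pers)
print(ifit)   # infit/outfit MSQ and standardized (t) statistics for each item

# Cutoffs described in the text:
#   outfit MSQ > 1.5 -> unpredictable random error (underfit)
#   outfit MSQ < 1.0 -> some overpredictability; < 0.5 is troublesome overfit
#   t statistic > 2.0 -> underfit; t statistic < -2.0 -> overfit
flagged <- ifit$i.outfitMSQ < 0.5 | ifit$i.outfitMSQ > 1.5 | abs(ifit$i.infitZ) > 2
colnames(X)[flagged]   # items falling outside the acceptable ranges
```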


Results

Participant Characteristics

As reported by Tennant and colleagues [14], the mean age of respondents was 67.46 years (SD 9.98 years). Most respondents were white (n=252, 89.1%) and non-Hispanic (n=264, 93.3%). A little over half identified as being male (n=155, 54.8%). Over three-quarters of the sample (n=215, 75.9%) reported at least some college-level education, and over half (n=138, 60.4%) reported earning more than US $50,000 per year. Additionally, more than four-fifths of respondents reported their health as being “good” (n=72, 25.1%), “very good” (n=103, 36.4%), or “excellent” (n=62, 21.9%). A little more than one-third of respondents reported accessing social media (n=101, 35.7%) to locate or share health information.

Descriptive eHEALS Scores

Total eHEALS scores ranged from 11 to 40 (mean 29.05, SD 5.75). Table 1 presents the mean (SD) score for the response to each item. Internal consistency estimates of eHEALS data collected in this study were relatively high (Cronbach alpha=.91).

Exploratory Structural Equation Modeling Analyses

Table 2 lists global model fit statistics and factor loadings for models fitting 1, 2, and 3 factors.

E-SEM Model 1 (1 Factor)

Only the 1-factor eHEALS structure had an eigenvalue greater than 1 (eigenvalue = 5.55). Despite high CFI and TLI values (.96 and .94, respectively), the RMSEA value, .24, exceeded the recommended value around .06 (Table 3). This high RMSEA value suggested poor structural fit of eHEALS in a unidimensional model.

Table 1. Mean (SD) eHealth Literacy Scale (eHEALS) scores rated on a 5-point Likert-type scalea.
eHEALS item | Mean | SD
E1. I know what health resources are available on the Internet. | 3.61 | 0.91
E2. I know where to find helpful health resources on the Internet. | 3.76 | 0.86
E3. I know how to use the health information I find on the Internet to help me. | 3.81 | 0.85
E4. I know how to find helpful health resources on the Internet. | 3.80 | 0.86
E5. I have the skills I need to evaluate the health resources I find on the Internet. | 3.72 | 0.93
E6. I know how to use the Internet to answer my questions about health. | 3.82 | 0.88
E7. I can tell high quality health resources from low quality health resources on the Internet. | 3.35 | 1.06
E8. I feel confident in using information from the Internet to make health decisions. | 3.19 | 1.09

aScored from 1=strongly disagree to 5=strongly agree, where 1 indicates low confidence and 5 indicates high confidence.

Table 2. Factor loadings of the eHealth Literacy Scale (eHEALS) by dimension among adults 50 years of age and older surveyed by telephone (N=283).
eHEALS item | 1 Factor: F1 (P) | 2 Factors: F1 (P) | 2 Factors: F2 (P) | 3 Factors: F1 (P) | 3 Factors: F2 (P) | 3 Factors: F3 (P)
E1. I know what health resources are available on the Internet. | 0.71 (<.05) | 0.73 (<.05) | 0.05 (NSa) | 0.71 (<.05) | –0.00 (NS) | 0.21 (<.05)
E2. I know where to find helpful health resources on the Internet. | 0.89 (<.05) | 1.01 (<.05) | –0.01 (NS) | 0.82 (<.05) | 0.24 (<.05) | 0.00 (NS)
E3. I know how to find helpful health resources on the Internet. | 0.94 (<.05) | 0.58 (<.05) | 0.41 (<.05) | 0.51 (<.05) | 0.55 (<.05) | –0.02 (NS)
E4. I know how to use the Internet to answer my questions about health. | 0.85 (<.05) | 0.01 (NS) | 0.88 (<.05) | 0.02 (NS) | 0.80 (<.05) | 0.03 (NS)
E5. I know how to use the health information I find on the Internet to help me. | 0.89 (<.05) | 0.03 (NS) | 0.89 (<.05) | 0.01 (NS) | 0.93 (<.05) | 0.00 (NS)
E6. I have the skills I need to evaluate the health resources I find on the Internet. | 0.82 (<.05) | –0.15 (<.05) | 0.97 (<.05) | –0.03 (NS) | 0.59 (<.05) | 0.37 (<.05)
E7. I can tell high quality health resources from low quality health resources on the Internet. | 0.75 (<.05) | –0.03 (NS) | 0.79 (<.05) | 0.15 (NS) | 0.00 (NS) | 0.88 (<.05)
E8. I feel confident in using information from the Internet to make health decisions. | 0.72 (<.05) | 0.03 (NS) | 0.72 (<.05) | 0.10 (NS) | 0.45 (<.05) | 0.30 (<.05)

aNS: not statistically significant at P<.05 alpha level.

Table 3. Global model fit indices.
Indices | 1 Factor | 2 Factors | 3 Factors
RMSEAa (90% CI) | .24 (.21-.26) | .15 (.13-.18) | .07 (.02-.11)
Comparative fit index | .96 | .99 | 1.0
Tucker-Lewis index | .94 | .98 | 1.0
Chi-square test, P value | <.001 | <.001 | <.001
Eigenvalue | 5.55 | 0.83 | 0.53

aRMSEA: root mean square error of approximation.

E-SEM Model 2 (2 Factors)

Global model fit indices improved in the 2-factor model (Table 3). CFI and TLI fit statistics improved to .99 and .98, respectively, while the RMSEA value decreased to .15. Even though RMSEA decreased in the 2-factor model, it remained well above the recommended value of approximately .06, which suggests poor global model fit. In the 2-factor model, eHEALS items 1 to 3 loaded onto factor 1, while items 4 to 8 loaded onto factor 2. Interestingly, item 3 appeared to have 2 relatively high (and statistically significant) factor loadings on both factors (factor 1=0.58; factor 2=0.41). However, it should be noted that these 2 factors were highly correlated (r=.71, P<.01).

E-SEM Model 3 (3 Factors)

For the 3-factor model, global model fit indices were near the acceptable range (Table 3). CFI and TLI both improved to 1.0 and RMSEA decreased to .07. While the chi-square test of model fit remained statistically significant, this statistic is sensitive to sample size and thus should be interpreted with caution [52]. Items 1 and 2 loaded onto factor 1, while item 3 (“I know how to use the health information I find on the Internet to help me”) significantly loaded onto both factors 1 (λ=.51) and 2 (λ=.55), making its assignment to 1 unique factor unclear. Similarly, we found that item 6 (“I know how to use the Internet to answer my questions about health”) loaded onto factors 2 (λ=.59) and 3 (λ=.37), as did item 8 (“I feel confident in using information from the Internet to make health decisions”; factor 2: λ=.45; factor 3: λ=.30). In the 3-factor model, we also found statistically significant correlations between factors 1 and 2 (r=.58), and between factors 2 and 3 (r=.65). Factors 1 and 3 were also significantly correlated, albeit to a lesser degree (r=.36).

Partial Credit Model Analyses

The item reliability of eHEALS scores in this sample was estimated at .92 (observed variance=4.58), while the item separation index was 11.31. Both values were indicative of high reliability and stability across the latent continuum.

Table 4 shows that Linacre’s assumption of monotonicity was satisfied: thresholds (ie, the relative difficulty of advancing from one response option to the next) increased across the theta continuum, as demonstrated in the item characteristic curves and reported threshold values, confirming that greater eHealth literacy coincided with higher response options. However, not all step difficulties advanced by between 1.4 and 5.0 logits. The relative difficulty of moving from “strongly disagree” to “disagree” was less than 1.4 logits for almost all items, except for eHEALS item 8 (“I feel confident in using information from the Internet to make health decisions”), where it was 1.96. The relative difficulty of moving from “agree” to “strongly agree” was within the acceptable range for all items (ie, all below 5.0 logits), but these values were quite large as compared with the advances in relative difficulty for thresholds 1 (“strongly disagree” to “disagree”), 2 (“disagree” to “neutral”), and 3 (“neutral” to “agree”) [46].

Table 5 shows that all outfit MSQ values were <2.0 yet closer to 1.0, which suggested an optimal degree of randomness in responses to eHEALS items. However, the outfit MSQ values for items 2 to 5 fell well below 1.0, suggesting some level of overpredictability (ie, respondents with a particular eHealth literacy level were responding to items 2 to 5 using similar response options). Subsequently, we noted that infit t statistics for items 2 to 5 were all below –2.0, which is outside of the acceptable range of –2 to 2.

Table 4. Thresholda values of response options for 8-item eHealth Literacy Scale (eHEALS).
eHEALS item | Item difficulty | Threshold 1 | Threshold 2 | Threshold 3 | Threshold 4
E1 | 0.85 | –1.63 | –0.77 | 1.19 | 4.63
E2 | 0.69 | –1.45 | –0.84 | 0.36 | 4.68
E3 | 0.68 | –1.35 | –0.49 | –0.09 | 4.65
E4 | 0.66 | –1.15 | –0.47 | –0.19 | 4.47
E5 | 0.61 | –1.53 | –0.53 | –0.07 | 4.58
E6 | 0.86 | –1.28 | –0.09 | 0.10 | 4.70
E7 | 1.55 | –0.86 | 0.34 | 1.38 | 5.36
E8 | 1.76 | –1.12 | 0.84 | 1.83 | 5.49

aThresholds for response options on the 5-point Likert-type scale: 1 (from “strongly disagree” to “disagree”), 2 (from “disagree” to “neutral”), 3 (from “neutral” to “agree”), and 4 (from “agree” to “strongly agree”).

Table 5. Infit and outfit mean square (MSQ), and infit and outfit t statistics for eHealth Literacy Scale (eHEALS) items.
eHEALS item | P value | Infit MSQ | Outfit MSQ | Infit t statistic | Outfit t statistic
E1 | .04 | 1.16 | 1.16 | 1.65 | 1.16
E2 | >.99 | 0.70 | 0.70 | –2.09 | 0.80
E3 | >.99 | 0.54 | 0.54 | –3.74 | 0.64
E4 | >.99 | 0.61 | 0.61 | –2.56 | 0.74
E5 | >.99 | 0.60 | 0.60 | –3.55 | 0.66
E6 | .95 | 0.86 | 0.86 | –1.30 | 0.87
E7 | .32 | 1.03 | 1.03 | 0.47 | 1.04
E8 | .22 | 1.06 | 1.06 | 0.55 | 1.05

Figure 1 depicts item and test information functions. The test information curve shows a high degree of information with minimal standard measurement error around theta levels –2 to 2 on the latent continuum. The test information curve shows that eHEALS provides some degree of information for participants at the higher end of the latent continuum, but reliability and validity evidence for this level of information is likely unstable. Moreover, the test information function is positively skewed, rather than bell shaped. This result indicates that eHEALS items may capture more information about eHealth literacy among participants who place higher on the latent continuum (ie, those with high eHealth literacy) than among those at the lower end of the continuum (ie, those with low eHealth literacy).

Item information curves showed that all eHEALS items followed a similar curvature pattern, yet the peaks of most item curves (ie, the points of greatest information) were located at different positions on the latent continuum. Despite a test information function that was positively skewed, item information curves suggested that each eHEALS item made an important contribution to the complete measure of eHealth literacy. Interestingly, the information obtained from items 2 to 5 did not vary across different points on the latent continuum, meaning that these items may produce a similar amount of information at each point on the latent continuum.
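
Assuming the fitted PCM from the Data Analysis sketches, curves of this kind can be reproduced with eRm's information plotting helper; the availability of plotINFO and its type argument in the installed eRm version is an assumption.

```r
# Item and test information curves for the fitted PCM (cf. Figure 1).
library(eRm)
plotINFO(pcm_fit)                  # item and test information functions together
plotINFO(pcm_fit, type = "test")   # test information function only
```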

Figure 1. Item and test information function curves for eHealth Literacy Scale (eHEALS).

Discussion

This measurement study was the first, to our knowledge, to provide evidence for the factor structure and dimensionality of eHEALS when administered to older adults over the telephone. Overall, results from E-SEM and PCM analyses indicate that eHEALS produces reliable scores with sufficient internal structure evidence when administered to older adults using telephone-based survey methods. Despite several poor-fitting items in this brief 8-item instrument, global model fit indices produced by E-SEM analyses suggest that eHEALS may comprise 3 factors (or subscales) when measuring the latent construct of eHealth literacy among older adults. However, 2 of these 3 factors were highly correlated with one another, providing additional evidence to support an overarching unidimensional structure of eHEALS data when collected in an older adult population.

Similar to the findings of this study, which suggest that a 3-factor solution is promising in the older adult population, Sudbury-Riley et al [32] found 3 underlying factors in eHEALS data collected from baby boomers located in 3 different countries (United States, United Kingdom, and New Zealand). The 3 factors identified by those authors were awareness and learning about what online resources are available and where they are located (items 1-2), skills and behaviors needed to access Internet-based health resources (items 3-5), and the self-belief in one’s ability to evaluate online health content once accessed (items 6-8). Further, Sudbury-Riley and colleagues suggested that these 3 factors reflected social cognitive theory’s explanation of a triadic reciprocal causation among 3 dimensions (personal factors, behavioral factors, and environmental factors) that influence behavior change [53]. Data from our study produced acceptable fit indices for assigning eHEALS items to these 3 factors; however, item 3 (“I know how to use the health information I find on the Internet to help me”), item 6 (“I know how to use the Internet to answer my questions about health”), and item 8 (“I feel confident in using information from the Internet to make health decisions”) loaded onto multiple factors, which made it difficult to assign these particular items to the 3 unique eHEALS subscales. Moreover, these 3 factors showed moderate to high correlations with one another, which supports the reciprocity described in social cognitive theory. The relationship between personal motivations for health information seeking and an individual’s perceived capability to use digital technologies can be affected by online environments with socially persuasive forms of media. Since social cognitive theory was the theoretical foundation used during the original development of eHEALS [27], future research should investigate how eHEALS items map to the main theoretical constructs of social cognitive theory.

Linacre’s [46] guidelines for optimizing rating scales were satisfied regarding item fit of eHEALS data in this study. Even with constrained item discrimination, item characteristic curves showed that each response option had the highest probability of selection at a particular point on the latent continuum. This suggests that the rating scale is functioning as intended, where older adults higher on the eHealth literacy continuum demonstrate the greatest probability of selecting response option “agree” or “strongly agree,” and individuals scoring lower on the eHealth literacy continuum have the greatest probability of selecting response option “disagree” or “strongly disagree.” This finding is similar to results reported in previous research exploring the internal structure of eHEALS [28,30], which noted that data produced by eHEALS among older adults showed evidence of monotonicity.

Step difficulties also advanced within acceptable standards [46] across the latent continuum for each eHEALS item. Tests of the internal structure of each item showed that the step difficulties for advancing across response options 1 to 3 were located close to one another on the latent continuum. In contrast, the relative difficulty of advancing from “agree” to “strongly agree” was located much further away from the threshold representing the relative difficulty of advancing from “neutral” to “agree.” To capture this “dead zone” between these 2 response option thresholds, future research should consider analyzing the effects of adding more response options to each eHEALS item.

Given that the 3 factors identified in this study showed moderately strong correlations with one another and the 1-factor model showed adequate fit, we conducted item fit analyses using PCM analyses. As in the work of Diviani et al [34], who administered the Italian version of eHEALS to young to middle-aged adults, the level of random error in eHEALS responses from older adults in this study was within the acceptable range. However, parametric IRT analyses did reveal that items 2 to 5, which assessed knowledge of using the Internet to access and use health information, showed a minor degree of overpredictability (ie, nonrandom error). This minor level of overpredictability on eHEALS items 2 to 5 was less evident in the Diviani et al [34] study’s younger sample, although eHEALS items 1 through 5 in their study did show outfit MSQ values of less than 1. While data that are potentially overfit do not present a substantial threat to measurement validity [46], eHEALS items asking about finding and using Internet-based health information may be redundant, with the potential to (1) violate the assumption of local independence, (2) overestimate the reliability of eHEALS, and (3) underestimate the standard error of eHEALS measurements [54]. It is important to note that the negative impact of overfit in both studies is likely minimal, given relatively minor deviations from acceptable values [46]. However, in both Diviani et al [34] and our study of older adults, eHEALS items 2 and 4 had standardized infit t statistics below the lower end of the acceptable range (less than –2.0). Conducting think-aloud cognitive interviews with respondents while they complete the eHEALS should provide much-needed information regarding whether older adults perceive different eHEALS items to be asking the same questions.

Limitations

There are several limitations to note in this study. First, this was a cross-sectional study and, therefore, we were unable to compute test-retest reliability or predictive validity estimates. Second, our analyses used telephone survey data with a very low overall response rate (7.4%), resulting in the possibility of nonresponse bias. Third, comparative measures of model fit did not inform decisions regarding the optimal internal structure of eHEALS data collected in this study. Comparative fit measures such as the Akaike information criterion can only be estimated with maximum likelihood data extraction methods, which Mplus v7.3 does not allow for under the WLSMV estimator. We selected the WLSMV estimator to examine model fit in this study for several reasons: (1) WLSMV estimation compensates more effectively than the maximum likelihood estimation for bias due to ordinal response options in the eHEALS, and (2) WLSMV estimation is less likely to produce unrealistic indices of overall model fit [55,56]. Therefore, we based conclusions regarding the internal structure of eHEALS data on noncriterion-based judgments made through interpretation of E-SEM and IRT analyses results.

Fourth, this study contacted participants through a landline sampling technique, which may have selectively excluded individuals who only own a mobile phone. Although this telephone sampling method targeted older adults living in the state of Florida, this state is home to the greatest proportion (19.1%) of older adults in the United States [57].

Fifth, this study examined eHEALS responses derived from telephone administration of the survey, even though all participants reported using the Internet or email. Widespread adoption of the Internet and mobile phone technology has contributed to nonuse of landline telephones, and our sampling frame did not include mobile devices and cellular telephones, which are increasingly being used by middle- to older-aged adults [58]. Partnering with community-engaged research programs or local community organizations to reach older adults via telephone may enhance recruitment efforts in this population traditionally underrepresented in health-related survey research.

Conclusions

Assessing consumer comfort and self-efficacy in using technology to access online health resources can help identify skill gaps and gauge the likelihood that users will be successful when using the Internet to access relevant health information [23]. Results from this study suggest that administering eHEALS to older adults via telephone produces a reliable measure with scores that possess sufficient construct validity evidence. Specifically, results from this study support the previously reported unidimensionality of eHEALS scores. Among older adults, however, there is potential for additional underlying subscales to measure older adults’ confidence to locate, use, and evaluate online health information. As older Internet users continue to visit online support groups and discussion forums to find new information about health care perspectives and experiences, it will be important to consider modifying the original eHEALS to adequately measure online health information-seeking behaviors in older populations.

Conflicts of Interest

None declared.

  1. Jha A, Pandey J. An empirical note on health information digital divide: a study of Indian patients. Int J Asian Bus Inf Manage 2017;8(2):15-34. [CrossRef]
  2. van Deursen AJAM, van Dijk JAGM. Internet skills performance tests: are people ready for eHealth? J Med Internet Res 2011 Apr;13(2):e35 [FREE Full text] [CrossRef] [Medline]
  3. Fox S, Duggan M. Health online 2013. Washington, DC: Pew Internet & American Life Project; 2013 Jan 15.   URL: http://www.pewinternet.org/files/old-media//Files/Reports/PIP_HealthOnline.pdf [accessed 2017-07-06] [WebCite Cache]
  4. Zickuhr K. Generations 2010. Washington, DC: Pew Internet & American Life Project; 2010 Dec 16.   URL: http://www.pewinternet.org/files/old-media//Files/Reports/2010/PIP_Generations_and_Tech10.pdf [accessed 2017-07-06] [WebCite Cache]
  5. Benigeri M, Pluye P. Shortcomings of health information on the Internet. Health Promot Int 2003 Dec;18(4):381-386. [Medline]
  6. Madathil KC, Rivera-Rodriguez AJ, Greenstein JS, Gramopadhye AK. Healthcare information on YouTube: a systematic review. Health Informatics J 2015 Sep;21(3):173-194. [CrossRef] [Medline]
  7. Neter E, Brainin E, Baron-Epel O. The dimensionality of health literacy and eHealth literacy. Euro Health Psych 2015;17(6):275-280.
  8. Chaudhuri S, Le T, White C, Thompson H, Demiris G. Examining health information-seeking behaviors of older adults. Comput Inform Nurs 2013 Nov;31(11):547-553 [FREE Full text] [CrossRef] [Medline]
  9. LeRouge C, Van SC, Seale D, Wright K. Baby boomers' adoption of consumer health technologies: survey on readiness and barriers. J Med Internet Res 2014;16(9):e200 [FREE Full text] [CrossRef] [Medline]
  10. Anderson M, Perrin A. Tech adoption climbs among older adults. Washington, DC: Pew Research Center; 2017 May 17.   URL: http://www.pewinternet.org/2017/05/17/tech-adoption-climbs-among-older-adults/ [accessed 2017-07-06] [WebCite Cache]
  11. Amante DJ, Hogan TP, Pagoto SL, English TM, Lapane KL. Access to care and use of the Internet to search for health information: results from the US National Health Interview Survey. J Med Internet Res 2015;17(4):e106 [FREE Full text] [CrossRef] [Medline]
  12. King DE, Matheson E, Chirina S, Shankar A, Broman-Fulks J. The status of baby boomers' health in the United States: the healthiest generation? JAMA Intern Med 2013 Mar 11;173(5):385-386. [CrossRef] [Medline]
  13. Stellefson M, Chaney B, Barry AE, Chavarria E, Tennant B, Walsh-Childers K, et al. Web 2.0 chronic disease self-management for older adults: a systematic review. J Med Internet Res 2013 Feb;15(2):e35 [FREE Full text] [CrossRef] [Medline]
  14. Tennant B, Stellefson M, Dodd V, Chaney B, Chaney D, Paige S, et al. eHealth literacy and Web 2.0 health information seeking behaviors among baby boomers and older adults. J Med Internet Res 2015;17(3):e70 [FREE Full text] [CrossRef] [Medline]
  15. Zhang S, Grenhart WC, McLaughlin AC, Allaire JC. Predicting computer proficiency in older adults. Comput Hum Behav 2017 Feb;67:106-112. [CrossRef]
  16. Jacobs W, Amuta AO, Jeon KC. Health information seeking in the digital age: an analysis of health information seeking behavior among US adults. Cogent Soc Sci 2017 Mar 13;3(1):1-11. [CrossRef]
  17. Kobayashi LC, Wardle J, von Wagner C. Internet use, social engagement and health literacy decline during ageing in a longitudinal cohort of older English adults. J Epidemiol Community Health 2015 Mar;69(3):278-283 [FREE Full text] [CrossRef] [Medline]
  18. Smith SG, O'Conor R, Curtis LM, Waite K, Deary IJ, Paasche-Orlow M, et al. Low health literacy predicts decline in physical function among older adults: findings from the LitCog cohort study. J Epidemiol Community Health 2015 May;69(5):474-480 [FREE Full text] [CrossRef] [Medline]
  19. Li N, Orrange S, Kravitz RL, Bell RA. Reasons for and predictors of patients' online health information seeking following a medical appointment. Fam Pract 2014 Oct;31(5):550-556. [CrossRef] [Medline]
  20. Tan SS, Goonawardene N. Internet health information seeking and the patient-physician relationship: a systematic review. J Med Internet Res 2017 Jan 19;19(1):e9 [FREE Full text] [CrossRef] [Medline]
  21. Bhandari N, Shi Y, Jung K. Seeking health information online: does limited healthcare access matter? J Am Med Inform Assoc 2014;21(6):1113-1117 [FREE Full text] [CrossRef] [Medline]
  22. Lee ST, Lin J. A self-determination perspective on online health information seeking: the Internet vs. face-to-face office visits with physicians. J Health Commun 2016 Jun;21(6):714-722. [CrossRef] [Medline]
  23. Norman CD, Skinner HA. eHealth literacy: essential skills for consumer health in a networked world. J Med Internet Res 2006 Jun;8(2):e9 [FREE Full text] [CrossRef] [Medline]
  24. Hsu W, Chiang C, Yang S. The effect of individual factors on health behaviors among college students: the mediating effects of eHealth literacy. J Med Internet Res 2014;16(12):e287 [FREE Full text] [CrossRef] [Medline]
  25. Mitsutake S, Shibata A, Ishii K, Oka K. Associations of eHealth literacy with health behavior among adult Internet users. J Med Internet Res 2016 Jul;18(7):e192 [FREE Full text] [CrossRef] [Medline]
  26. Milne RA, Puts MTE, Papadakos J, Le LW, Milne VC, Hope AJ, et al. Predictors of high eHealth literacy in primary lung cancer survivors. J Cancer Educ 2015 Dec;30(4):685-692. [CrossRef] [Medline]
  27. Norman CD, Skinner HA. eHEALS: the eHealth Literacy Scale. J Med Internet Res 2006 Nov;8(4):e27 [FREE Full text] [CrossRef] [Medline]
  28. Nguyen J, Moorhouse M, Curbow B, Christie J, Walsh-Childers K, Islam S. Construct validity of the eHealth literacy scale (eHEALS) among two adult populations: a Rasch analysis. JMIR Public Health Surveill 2016 May;2(1):e24 [FREE Full text] [CrossRef] [Medline]
  29. Chung S, Nahm E. Testing reliability and validity of the eHealth Literacy Scale (eHEALS) for older adults recruited online. Comput Inform Nurs 2015 Apr;33(4):150-156. [CrossRef] [Medline]
  30. Paige SR, Krieger JL, Stellefson M, Alber JM. eHealth literacy in chronic disease patients: an item response theory analysis of the eHealth literacy scale (eHEALS). Patient Educ Couns 2017 Feb;100(2):320-326. [CrossRef] [Medline]
  31. Paige SR, Krieger JL, Stellefson ML. The influence of eHealth literacy on perceived trust in online health communication channels and sources. J Health Commun 2017 Jan;22(1):53-65. [CrossRef] [Medline]
  32. Sudbury-Riley L, FitzPatrick M, Schulz PJ. Exploring the measurement properties of the eHealth literacy scale (eHEALS) among baby boomers: a multinational test of measurement invariance. J Med Internet Res 2017 Feb 27;19(2):e53 [FREE Full text] [CrossRef] [Medline]
  33. Soellner R, Huber S, Reder M. The concept of eHealth literacy and its measurement. J Media Psychol 2014 Jan;26(1):29-38. [CrossRef]
  34. Diviani N, Dima AL, Schulz PJ. A psychometric analysis of the Italian version of the eHealth Literacy Scale using item response and classical test theory methods. J Med Internet Res 2017 Apr 11;19(4):e114 [FREE Full text] [CrossRef] [Medline]
  35. Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health (Oxf) 2005 Sep;27(3):281-291 [FREE Full text] [CrossRef] [Medline]
  36. Wyatt JC. When to use web-based surveys. J Am Med Inform Assoc 2000;7(4):426-429 [FREE Full text] [Medline]
  37. Dillman D. Mail and Internet Surveys: The Tailored Design Method. 2nd edition. Hoboken, NJ: John Wiley Co; 2007.
  38. Neter E, Brainin E. Perceived and performed ehealth literacy: survey and simulated performance test. JMIR Hum Factors 2017 Jan 17;4(1):e2 [FREE Full text] [CrossRef] [Medline]
  39. University of Florida Survey Research Center. Telephone survey methods. Gainesville, FL: Bureau of Economic and Business Research, University of Florida; 2014 Dec 25.   URL: https://www.bebr.ufl.edu/sites/default/files/survey/what-we-do/UFSRC%20Telephone%20Methods.pdf [accessed 2017-07-12] [WebCite Cache]
  40. Watkins I, Xie B. eHealth literacy interventions for older adults: a systematic review of the literature. J Med Internet Res 2014;16(11):e225 [FREE Full text] [CrossRef] [Medline]
  41. Asparouhov T, Muthén B. Exploratory structural equation modeling. Struct Equ Model 2009 Jul 14;16(3):397-438. [CrossRef]
  42. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model 1999 Jan;6(1):1-55. [CrossRef]
  43. Muthén LK, Muthén BO. Mplus: statistical analysis with latent variables. Users guide. 7th edition. Los Angeles, CA: Muthén & Muthén; 2012 Sep.   URL: http://www3.udg.edu/fcee/professors/gcoenders/Mplus_User_s_Guide_Version_7.pdf [accessed 2017-07-06] [WebCite Cache]
  44. Masters GN. A rasch model for partial credit scoring. Psychometrika 1982 Jun;47(2):149-174. [CrossRef]
  45. Masters G. The partial credit model. In: Hambleton RK, van der Linden WJ, editors. Handbook of Modern Item Response Theory. New York, NY: Springer; 1996:465-478.
  46. Linacre JM. Optimizing rating scale category effectiveness. J Appl Meas 2002;3(1):85-106. [Medline]
  47. MacCallum RC, Widaman KF, Zhang S, Hong S. Sample size in factor analysis. Psychol Methods 1999;4(1):84-99. [CrossRef]
  48. Embertson SE, Reise SP. Item Response Theory for Psychologists. Multivariate Applications Book Series. Mahwah, NJ: L Erlbaum Associates; 2000.
  49. Luo G. The relationship between the rating scale and partial credit models and the implication of disordered thresholds of the Rasch models for polytomous responses. J Appl Meas 2005;6(4):443-455. [Medline]
  50. Mair P, Hatzinger R, Maier MJ, Rusch T. Package 'eRm'. Vienna, Austria: R Foundation; 2016 Nov 20.   URL: https://cran.r-project.org/web/packages/eRm/eRm.pdf [accessed 2017-07-06] [WebCite Cache]
  51. Fox CM, Jones JA. Uses of Rasch modeling in counseling psychology research. J Couns Psychol 1998;45(1):30-45. [CrossRef]
  52. Barrett P. Structural equation modelling: adjudging model fit. Pers Individ Differ 2007 May;42(5):815-824. [CrossRef]
  53. Bandura A. Social Foundation of Thought and Action: A Social Cognitive Theory. Upper Saddle River, NJ: Prentice Hall; 1986.
  54. Beglar D. A Rasch-based validation of the Vocabulary Size Test. Lang Test 2009 Oct 16;27(1):101-118. [CrossRef]
  55. Beauducel A, Herzberg PY. On the performance of maximum likelihood versus means and variance adjusted weighted least squares estimation in CFA. Struct Equ Model 2006 Apr 28;13(2):186-203. [CrossRef]
  56. Olsson UH, Foss T, Troye SV, Howell RD. The performance of ML, GLS, and WLS estimation in structural equation modeling under conditions of misspecification and nonnormality. Struct Equ Model 2000 Oct;7(4):557-595. [CrossRef]
  57. Kent L. Where do the oldest Americans live?. Washington, DC: Pew Research Center; 2015 Jul 09.   URL: http://www.pewresearch.org/fact-tank/2015/07/09/where-do-the-oldest-americans-live/ [accessed 2017-07-06] [WebCite Cache]
  58. Anderson M, Perrin A. Tech adoption climbs among older adults. Washington, DC: Pew Research Center; 2017 May 17.   URL: http://assets.pewresearch.org/wp-content/uploads/sites/14/2017/05/16170850/PI_2017.05.17_Older-Americans-Tech_FINAL.pdf [accessed 2017-10-16] [WebCite Cache]


CFI: comparative fit index
eHEALS: eHealth Literacy Scale
E-SEM: exploratory structural equation modeling
F-CCI: Florida Consumer Confidence Index
IRT: item response theory
MSQ: mean square
PCM: partial credit model
RMSEA: root mean square error of approximation
TLI: Tucker-Lewis index
WLSMV: weighted least squares and adjusted means and variances


Edited by G Eysenbach; submitted 21.07.17; peer-reviewed by E Neter, S Mitsutake; comments to author 10.08.17; revised version received 24.08.17; accepted 10.09.17; published 26.10.17

Copyright

©Michael Stellefson, Samantha R Paige, Bethany Tennant, Julia M Alber, Beth H Chaney, Don Chaney, Suzanne Grossman. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.10.2017.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.