Published in Vol 24, No 2 (2022): February

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/28252.
Measuring Electronic Health Literacy: Development, Validation, and Test of Measurement Invariance of a Revised German Version of the eHealth Literacy Scale


Original Paper

1Clinic for Psychosomatic Medicine and Psychotherapy, LVR–University Hospital Essen, University of Duisburg-Essen, Essen, Germany

2Institute for Patient Safety, University Hospital Bonn, Bonn, Germany

Corresponding Author:

Matthias Marsall, MSc

Clinic for Psychosomatic Medicine and Psychotherapy

LVR–University Hospital Essen

University of Duisburg-Essen

Virchowstr. 174

Essen, 45147

Germany

Phone: 49 17678909441

Email: matthias.marsall@stud.uni-due.de


Background: The World Wide Web has become an essential source of health information. Nevertheless, the amount and quality of information provided may lead to information overload. Therefore, people need certain skills to search for, identify, and evaluate information from the internet. In the context of health information, these competencies are summarized as the construct of eHealth literacy. Previous research has highlighted the relevance of eHealth literacy in terms of health-related outcomes. However, the existing instrument assessing eHealth literacy in the German language reveals methodological limitations regarding test development and validation. The development and validation of a revised scale for this important construct is highly relevant.

Objective: The objective of this study was the development and validation of a revised German eHealth literacy scale. In particular, this study aimed to focus on high methodological and psychometric standards to provide a valid and reliable instrument for measuring eHealth literacy in the German language.

Methods: Two internationally validated instruments were merged to cover a wide scope of the construct of eHealth literacy and create a revised eHealth literacy scale. Translation into the German language followed scientific guidelines and recommendations to ensure content validity. Data from German-speaking people (n=470) were collected in a convenience sample from October to November 2020. Validation was performed by factor analyses. Further, correlations were performed to examine convergent, discriminant, and criterion validity. Additionally, analyses of measurement invariance of gender, age, and educational level were conducted.

Results: Analyses revealed a 2-factorial model of eHealth literacy. After item reduction, the 2 factors, information seeking and information appraisal, were measured with 8 items and reached an acceptable-to-good model fit (comparative fit index [CFI]: 0.942, Tucker Lewis index [TLI]: 0.915, root mean square error of approximation [RMSEA]: 0.127, and standardized root mean square residual [SRMR]: 0.055). Convergent validity was comprehensively confirmed by significant correlations of information seeking and information appraisal with health literacy, internet confidence, and internet anxiety. Discriminant and criterion validity were examined by correlation analyses with various scales and could partly be confirmed. Scalar measurement invariance was confirmed for gender (CFI: 0.932, TLI: 0.923, RMSEA: 0.122, and SRMR: 0.068) and educational level (CFI: 0.937, TLI: 0.934, RMSEA: 0.112, and SRMR: 0.063). Measurement invariance of age was rejected.

Conclusions: Following scientific guidelines for translation and test validation, we developed a revised German eHealth Literacy Scale (GR-eHEALS). Our factor analyses confirmed an acceptable-to-good model fit. Construct validation in terms of convergent, discriminant, and criterion validity could mainly be confirmed. Our findings provide evidence for measurement invariance of the instrument regarding gender and educational level. The newly revised GR-eHEALS questionnaire represents a valid instrument to measure the important health-related construct eHealth literacy.

J Med Internet Res 2022;24(2):e28252

doi:10.2196/28252




Background

The concept of health literacy emerged in the 1990s as a competence to gather health information and use it to address health questions and problems [1]. Nutbeam [2] defined health literacy as “cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand, and use information in ways which promote and maintain good health.” In the following years, health literacy has turned out to be an important predictor for various health outcomes (eg, behavior of patients with diabetes mellitus or heart failure) [3,4]. The World Health Organization has declared health literacy as a key determinant of health and defined it as a Sustainable Development Goal [5].

With the rise of the internet as a source of information, the gathering of health information was no longer limited to professional or face-to-face health sources but became available from many different health topic websites [6]. With the increasing availability of health information on the internet, the number of people using this source to seek health information rose as well [7,8]. However, sources on the internet contain inconsistent information, as contributions are not made by professionals only [9]. As a result, the amount and varying quality of information provided on the internet may lead to health information overload [10]. For example, in 2020, COVID-19 became a global pandemic, and disease-related information, especially from the internet, grew exponentially, leading to an “infodemic” [11,12]. Not only is a large amount of information available, but a significant amount of it must be considered misinformation because its sources must be classified as questionable [13,14].

For the context of information from the internet, Norman and Skinner [15] applied the concept of health literacy to electronic health literacy (eHealth literacy). With the development of the eHealth Literacy Scale (eHEALS) questionnaire [16], the concept of eHealth literacy became measurable and attracted growing interest in psychological and medical health sciences. Systematic reviews have shown that eHEALS is associated with different health-related outcomes, although findings could not be consistently confirmed [17,18]. Associations of eHealth literacy with different health outcomes have been found, such as health intentions [19], acquiring health knowledge [20-23], and health prevention behavior [21,24,25]. Furthermore, research showed associations between eHealth literacy and healthy behaviors like exercise behavior, balanced nutrition, and regular breakfast [26,27]. In the context of COVID-19, associations of eHealth literacy with lower psychological symptoms [28] and higher prevention behaviors [29] were confirmed. To sum up, research indicates that eHealth literacy is associated with prevention behaviors, the acquisition of knowledge, and people’s ability to cope with diseases, which confirms eHealth literacy as an important construct in examining people’s health behavior.

To cope with information overload and use the information from the internet, Norman and Skinner [15] proposed a set of different competencies: skills to read, identify, and understand different information to distinguish helpful from less helpful or even false or harmful information. These competencies represent a sequential process of handling available information. In the first step, basic cognitive skills are needed to search for information regarding a certain topic. In a subsequent cognitive process, information available must be distinguished as helpful or less helpful in order to answer specific questions. These steps represent an elaborated cognitive information process rather than a heuristic one. The distinction of cognitive processes was formerly described within dual-process theories in psychological literature and confirmed in multiple studies [30-32]. Dual-process theories distinguish between fast cognitive processes, which describe heuristic and holistic approaches representing intuitive, implicit cognitions, and slow cognitive processes, which are analytic and rule-based and focus on explicit learning [33]. Slow cognitive processes run serially and require cognitive capacity to answer or address specific questions. In the context of eHealth literacy, the handling of health information from the internet clearly represents a serial process of subsequent cognitions that require different competencies building on each other.

eHEALS: Translations of the Original eHEALS Questionnaire and its Limitations

Since its publication, the original eHEALS questionnaire has been translated into many languages, including Italian [34,35], Spanish [36], Dutch [37], Chinese [38], Serbian [39], Korean [40], Indonesian [41], and German [42]. However, some of these studies could not confirm the 1-factorial model assumed by Norman and Skinner [16]. Across the many validation studies of the eHEALS questionnaire, a consistent factorial structure has not been verified; 1-factorial [16,37,43], 2-factorial [42,44,45], and 3-factorial models [46-48] have been identified in different validation studies and languages. These results indicate that the eHEALS questionnaire lacks a consistent factorial structure.

The German version of the questionnaire validated by Soellner and colleagues [42] especially lacks methodological and content-related accuracy. They developed an initial instrument for assessing eHealth literacy in the German-speaking community (G-eHEALS). However, Soellner and colleagues [42] did not substantially meet scientific criteria. First, the translation of the instrument did not follow scientifically recommended guidelines. Second, the content validity of their 2-factorial model was questionable because some items reflected the subdimension of information appraisal rather than the assigned subdimension of information seeking (“I know how to use the health information I find on the internet to help me” or “I feel confident in using information from the internet to make health decisions”). In addition, Soellner and colleagues [42] collected their data from a limited sample of 327 students aged 16 to 21 years at only one type of school (gymnasium: a German school type preparing for university attendance), and people of older age were not considered for validation. However, as people of older age may be less familiar with using the internet [49-51] and eHealth literacy particularly depicts a digital literacy, the model proposed by Soellner and colleagues [42] is possibly not valid for assessing eHealth literacy in older people. Moreover, the educational level of the participants could not be considered within their biased study sample. Juvalta and colleagues [52], who used the G-eHEALS, also collected their data from a limited sample of young parents (88.5% female). In another German-speaking study, Reder and colleagues [53] showed a 3-factorial structure for the G-eHEALS. However, only women participated in this study, which is a limited sample for examining the validity of the G-eHEALS. Inconsistent findings and methodological limitations of these studies indicate an unclear factorial structure of the G-eHEALS.

Another limitation of the original eHEALS questionnaire refers to insufficient representation of an elaborated cognitive information process. The original scale does not reflect the above-mentioned complexity of an information process in its entirety. Petrič and colleagues [54] focused on this limitation and developed an extended eHealth literacy scale (eHEALS-E). Creating a 20-item questionnaire, they found a 6-factorial structure. Despite this extension and other concepts and questionnaires [55-57], eHEALS is still the instrument most used for measuring eHealth literacy.

Aims of This Study

In summary, the G-eHEALS validated by Soellner and colleagues [42] was a valuable first approach to the important topic of eHealth literacy, but it is subject to significant methodological limitations and lacks psychometric quality. Nevertheless, as eHealth literacy has been confirmed as an important construct for health-related outcomes, the possibility of assessing eHealth literacy is crucial for health care practitioners and researchers in understanding health competence in German-speaking people. In response to the practical and scientific demands and the described limitations, we developed a new instrument for measuring eHealth literacy with 4 objectives:

  • Extension of the existing questionnaire of Norman and Skinner [16] by 8 nonoverlapping items proposed by Petrič and colleagues [54]. By combining the questionnaires, a better representation of the construct of eHealth literacy regarding the cognitive processes of seeking, identifying, and evaluating health information should be achieved.
  • German translation of the items according to common scientific recommendations [58,59] to ensure content validity.
  • Validation of the revised GR-eHEALS at a convenience sample in terms of construct and criterion validity. We decided to collect data in a convenience sample to reach participants with varied socioeconomic backgrounds. Furthermore, our goal was not to limit the sample in order to develop a measurement model that is as generic as possible.
  • To our knowledge, there is no study examining measurement invariance of eHealth literacy between gender, age, or educational level in a German sample. Nevertheless, the interpretation of statistical differences between different groups of people requires measurement invariance between these groups [60]. As eHealth literacy represents competencies that are important for people regardless of their sociodemographic status, its measurement should obviously be independent of these influencing variables.

All in all, we are pursuing the study goals to develop a revised and validated instrument for measuring eHealth literacy. Further, we sought to examine the measurement invariance of the instrument regarding relevant sociodemographic variables.


Development of the New Instrument

The revised eHealth Literacy Scale (GR-eHEALS) is based on the original items from the eHEALS [16], extended by items from the eHEALS-E questionnaire from Petrič and colleagues [54]. The translation was conducted following the guidelines proposed by Beaton and colleagues [58] and Guillemin and colleagues [59] for the translation and cross-cultural adaptation of instruments to ensure content validity. Accordingly, in a first step, 2 of the authors translated the items into German and merged these translations into a first translation proposal. In the second step, this proposal was discussed within a systematic expert panel consisting of the 2 translators and 2 psychologists who are experts in the context of health care and eHealth. The resulting second proposal was translated back into English in the third step to confirm that the essential meaning of the items is consistent with the original items. In the fourth step, cognitive interviews were conducted to make sure that all items are easy to understand, do not include offensive speech, and do not discriminate by age or gender. Interviewees were aged 23 to 72 years and had different educational backgrounds. The resulting final version of the translated and extended instrument consisted of 16 items. The original items and the translated items are displayed in Multimedia Appendix 1. Items 1 to 8 are translated from the original eHEALS questionnaire from Norman and Skinner [16], and items 9 to 16 are translated from the eHEALS-E questionnaire from Petrič and colleagues [54]. All subsequent references to item numbers refer to the item numbers listed in Multimedia Appendix 1. To validate the GR-eHEALS, we performed a prestudy in which we aimed to check for any complications in answering the translated items and to conduct an item analysis. The results of this analysis are displayed in Multimedia Appendix 2. As the prestudy showed solid item characteristics, the developed instrument was considered well suited for the purpose of the main study.

Study Design and Participants

The cross-sectional study was conducted via Unipark (Tivian XI GmbH), an online survey tool, between October and November 2020. The ethics committee of the Faculty of Medicine of the University of Duisburg–Essen reviewed and approved this study (20-9592-BO).

All data were collected anonymously. Participants for this study were recruited via personal and occupational networks and online social networks (Xing, Facebook, LinkedIn). In our analyses, only complete data sets were considered. Of a total of 1634 participants, 524 completed our questionnaire in full, which represents a completion rate of 32.1% and can be considered typical for an online survey [61]. We excluded cases in which participants took less than 5:34 minutes (5th percentile) or more than 25:45 minutes (95th percentile) to complete the survey. Furthermore, we excluded 1 participant for being under 18 years old. As only 1 person indicated their gender as diverse, we excluded this case in order to perform the analysis of measurement invariance of gender. The resulting sample consisted of 470 respondents. The sample size is in accordance with recommendations for validation studies [62,63]. Answering the questionnaire took 11:32 (SD 4:24) minutes on average. All data supporting the conclusion of the study are included in Multimedia Appendix 3.
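The exclusion steps described above can be reproduced with a few lines of R. The following is a minimal sketch only, assuming hypothetical column names (duration_min for completion time in minutes, age, and gender); it is not the authors' original script.

```r
# Minimal sketch of the sample exclusion logic, assuming hypothetical
# column names in a data frame raw_data; not the authors' original script.
library(dplyr)

lower <- quantile(raw_data$duration_min, 0.05, na.rm = TRUE)  # ~5:34 min
upper <- quantile(raw_data$duration_min, 0.95, na.rm = TRUE)  # ~25:45 min

analysis_sample <- raw_data %>%
  filter(duration_min >= lower,                 # drop the fastest 5% of completions
         duration_min <= upper,                 # drop the slowest 5% of completions
         age >= 18,                             # adults only
         gender %in% c("female", "male"))       # needed for the gender invariance test
```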

In the main study, it was our objective to validate the GR-eHEALS in a convenience sample to verify its convergent, discriminant, and criterion validity and test for measurement invariance.

We verified convergent validity by assuming a positive correlation between eHealth literacy and health literacy, which measures a similar construct but does not take the source of information into account. Furthermore, we assumed eHealth literacy to be positively interrelated with internet confidence and negatively associated with internet anxiety as eHealth literacy particularly focuses on the gathering of information from the internet.

To verify discriminant validity, we captured impulsivity and common personality traits assuming no significant interrelations. As eHealth literacy reflects competencies in dealing with health-related information [15] rather than a personality trait, there should be no content-related overlaps between eHealth literacy and personality traits.

Additionally, we considered the possible outcome variables mental and physical health status and life satisfaction to examine criterion validity. Criterion validity of an instrument describes its ability to demonstrate relationships between the construct itself and possible outcomes [64]. Thus, we expected eHealth literacy to be associated with the above-mentioned health-related variables.

The survey included the following questionnaires (sample items presented below are translations). Most scales were assessed on 5-point Likert scales from 1=strongly disagree to 5=strongly agree. Exceptions are separately explained below. Scales contained inverted items that were recoded prior to statistical analyses.
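Because some scales contain reverse-keyed (inverted) items, responses have to be recoded before scale scores are computed. A minimal sketch in R, assuming hypothetical item names on a 5-point response format:

```r
# Recode inverted items on a 5-point Likert scale (1 becomes 5, 2 becomes 4, ...).
# Item names are placeholders, not the published item labels.
inverted_items <- c("imp_2r", "imp_6r")
analysis_sample[inverted_items] <- 6 - analysis_sample[inverted_items]
```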

Measurements

Health Literacy

Participants rated their health literacy on 16 items from the Health Literacy Questionnaire from Röthlin and colleagues [65]. A sample item is “How easy/difficult is it to find information about therapies for diseases that affect you?” Health literacy was measured on a dichotomous scale (easy/hard) and is therefore used as a sum score indicating the extent of health literacy between 0 and 16 (mean 12.63 [SD 2.99]). Cronbach alpha of this scale was .79.

Impulsivity

We used the 8-item Impulsive Behavior–8 Scale from Kovaleva and colleagues [66] to measure impulsivity (eg, “Sometimes I spontaneously do things that I should not have done”). Cronbach alpha of this scale was .72 (mean 2.78 [SD 0.59]).

Personality Traits

Personality traits (extraversion, neuroticism, openness, conscientiousness, and agreeableness) were each assessed by 2 items from Rammstedt and colleagues [67]. A sample item for neuroticism is “I get nervous and insecure easily.” Extraversion (mean 3.30 [SD 1.04]), neuroticism (mean 3.08 [SD 0.97]), openness (mean 3.61 [SD 0.99]), conscientiousness (mean 3.59 [SD 0.75]), and agreeableness (mean 3.15 [SD 0.76]) had Cronbach alphas of .79, .66, .62, .38, and .19, respectively. Due to low reliabilities, conscientiousness and agreeableness were excluded from the following analyses.

Further Constructs

In addition, we assessed internet confidence (3 items; mean 3.74 [SD 0.72], Cronbach alpha .89) and internet anxiety (3 items; mean 1.81 [SD 0.82], Cronbach alpha .81), measured physical health (mean 7.37 [SD 1.58]) and mental health (mean 7.27 [SD 1.90]) with self-formulated single items on 11-point Likert scales from 0=very bad health to 10=very good health, and measured life satisfaction (mean 3.76 [SD 0.83]) with a single item from Beierlein and colleagues [68] on a 5-point Likert scale from 1=not satisfied at all to 5=totally satisfied.
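The internal consistencies reported throughout this section can be obtained with standard reliability routines. A minimal sketch using the psych package (a package choice we assume here; item names are placeholders):

```r
# Cronbach alpha for a 3-item scale such as internet confidence
# (item names are placeholders, not the published item labels).
library(psych)

confidence_items <- analysis_sample[, c("conf_1", "conf_2", "conf_3")]
alpha(confidence_items)$total$raw_alpha   # reported as Cronbach alpha
```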

Furthermore, sociodemographic variables (age, gender, marital status, educational level, financial situation, internet availability, and community size) were considered to make sure that the sample represents the population.

Statistical Analysis

All data analyses were conducted using R (R Foundation for Statistical Computing), RStudio, and several packages.

Prior to conducting confirmatory factor analysis (CFA), we performed an exploratory factor analysis (EFA) to evaluate whether the data were suitable for factor analysis. We used the Kaiser-Meyer-Olkin (KMO) measure and the Bartlett test of sphericity for evaluation. Factor extraction was conducted using maximum likelihood estimation with Promax oblique rotation, and the number of factors was identified by scree plot inspection and the Kaiser criterion (eigenvalue >1). Factor loadings ≥0.4 were considered significant [69].
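The paper reports that all analyses were run in R but does not name the packages used. The following sketch shows how the EFA step could look with the psych package (an assumption); the data frame and item names are placeholders.

```r
# Exploratory factor analysis sketch with the psych package (assumed, not
# stated in the paper); ehl_1 ... ehl_16 are placeholder item names.
library(psych)

ehl_items <- analysis_sample[, paste0("ehl_", 1:16)]

KMO(ehl_items)                                         # sampling adequacy
cortest.bartlett(cor(ehl_items), n = nrow(ehl_items))  # Bartlett test of sphericity
scree(ehl_items)                                       # scree plot for factor retention

efa_fit <- fa(ehl_items, nfactors = 2, fm = "ml", rotate = "promax")
print(efa_fit$loadings, cutoff = 0.4)                  # loadings >= 0.4 treated as salient
```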

Subsequently, we performed consecutive CFAs and compared fit indices and factor loadings to identify the best-fitting model, following the recommendations of Hu and Bentler [70], who suggest a comparative fit index (CFI) and Tucker Lewis index (TLI) of approximately 0.95 as well as a root mean square error of approximation (RMSEA) and standardized root mean square residual (SRMR) of approximately 0.06 and 0.08, respectively. We used the robust maximum likelihood estimator, as our prestudy showed that items were slightly negatively skewed and a robust estimator produces less biased model statistics than the standard maximum likelihood estimator [71].
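A sketch of the confirmatory step is shown below, using the lavaan package (an assumption; the paper does not name its CFA software). The syntax specifies the final 2-factor structure with 4 placeholder items per factor and requests robust fit indices under the robust maximum likelihood estimator.

```r
# CFA sketch with lavaan (assumed package); seek_1...seek_4 and appr_1...appr_4
# stand in for the final 8 GR-eHEALS items.
library(lavaan)

model_2f <- '
  seeking   =~ seek_1 + seek_2 + seek_3 + seek_4
  appraisal =~ appr_1 + appr_2 + appr_3 + appr_4
'

cfa_fit <- cfa(model_2f, data = analysis_sample, estimator = "MLR")
summary(cfa_fit, fit.measures = TRUE, standardized = TRUE)
fitMeasures(cfa_fit, c("cfi.robust", "tli.robust", "rmsea.robust", "srmr"))
```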

Two-tailed Pearson correlations were conducted considering a significance level of 5% to examine convergent, discriminant, and criterion validity.
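For the validity analyses, factor scores can be computed as item means and correlated with the external measures. A minimal sketch (variable names are placeholders):

```r
# Two-tailed Pearson correlation between a GR-eHEALS factor score and
# health literacy (placeholder variable names).
analysis_sample$info_seeking <- rowMeans(analysis_sample[, paste0("seek_", 1:4)])

cor.test(analysis_sample$info_seeking, analysis_sample$health_literacy,
         method = "pearson", alternative = "two.sided")
```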

We performed tests of measurement invariance on our final model to examine whether the measurement functions equivalently across both genders as well as across 2 age groups and 3 groups of educational level. For this purpose, we performed consecutive multigroup CFAs with progressively stricter model assumptions by fixing an increasing number of model parameters for each of the 3 measurement invariance models.

Measurement invariance—as a prerequisite for the interpretation of mean differences—is verified by 3 consecutive steps with increasingly strict model assumptions for (1) the number of factors and the pattern of factor-indicator relationships (configural invariance), (2) factor loadings (metric invariance), and (3) intercepts of indicators (scalar invariance) [72]. These 3 steps assume that there are no differences between observed groups regarding these parameters, and interpretation of mean differences is valid when scalar invariance is confirmed [73]. Differences between groups should only be interpreted when measurement invariance is confirmed since otherwise differences between groups may occur due to the fact that an instrument does not measure equally between different groups [60,73,74].

We applied a cutoff criterion for the difference in CFI (ΔCFI) of 0.01, as this is proposed as appropriate for assuming invariance between two models [75,76]. Thus, for the evaluation of measurement invariance, we considered the model fit indices and the difference in CFI between compared models.
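The three invariance steps correspond to increasingly constrained multigroup CFAs. The sketch below illustrates this for gender with lavaan (assumed package and grouping variable name); the same pattern applies to the age and education groupings.

```r
# Multigroup CFA sketch for measurement invariance across gender
# (assumed grouping variable "gender"; model_2f as defined above).
fit_configural <- cfa(model_2f, data = analysis_sample, estimator = "MLR",
                      group = "gender")
fit_metric     <- cfa(model_2f, data = analysis_sample, estimator = "MLR",
                      group = "gender", group.equal = "loadings")
fit_scalar     <- cfa(model_2f, data = analysis_sample, estimator = "MLR",
                      group = "gender",
                      group.equal = c("loadings", "intercepts"))

# Collect fit indices; invariance is retained when CFI drops by less than .01
# from one step to the next.
sapply(list(configural = fit_configural, metric = fit_metric, scalar = fit_scalar),
       fitMeasures,
       fit.measures = c("cfi.robust", "tli.robust", "rmsea.robust", "srmr"))
```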


Sample Characteristics

Mean age of participants was 37.16 (SD 13.4, min 18, max 82, median 33) years. Sample characteristics of all other sociodemographic variables are shown in Table 1.

Table 1. Summary of sample characteristics (n=470). Values are n (%).

Gender
  Female: 332 (70.6)
  Male: 138 (29.4)

Marital status
  Married: 161 (34.3)
  Not married, in partnership: 183 (38.9)
  Single: 115 (24.5)
  Other: 11 (2.3)

Educational level
  Lower secondary school: 5 (1.1)
  Upper secondary school: 24 (5.1)
  University entrance qualification: 77 (16.4)
  Vocational training: 91 (19.4)
  University degree: 273 (58.1)

Financial situation
  Very good: 9 (1.9)
  Good: 47 (10.0)
  Middling: 114 (24.3)
  Bad: 220 (46.8)
  Very bad: 80 (17.0)

Internet availability
  Always available: 288 (61.3)
  Mostly available: 177 (37.7)
  Occasionally available: 5 (1.1)
  Not available: 0 (0.0)

Community size
  Big city (>100,000 inhabitants): 244 (51.9)
  Medium city (>20,000 inhabitants): 88 (18.7)
  Small city (>5000 inhabitants): 76 (16.2)
  Rural village (<5000 inhabitants): 62 (13.2)

Exploratory Factor Analysis

KMO revealed a value of 0.92 and Bartlett test of sphericity was highly significant (P<.001), indicating that data were suitable for factor analysis. Empirical Kaiser criterion and scree plot implied a 2-factor model. Table 2 shows factor loadings of the 2 factors.

As item 14 did not significantly load on either of the 2 factors, it was excluded from the following analysis. The remaining 15 items were considered in the CFA.

Table 2. Results of exploratory factor analysis.

Item no   Factor 1   Factor 2
1         0.88       –0.06
2         0.80        0.03
3         0.84        0.00
4         0.97       –0.06
5         0.49        0.37
6         0.10        0.62
7         0.03        0.70
8         0.28        0.49
9         0.00        0.78
10        –0.12       0.78
11        –0.11       0.75
12        –0.07       0.56
13        0.12        0.56
14        0.32        0.31
15        0.44        0.01
16        0.44       –0.09

Confirmatory Factor Analysis

In model 1, the 15 items were assigned to the 2 factors identified by the EFA. Based on the content meanings of the underlying items, factor 1 represents information seeking and factor 2 represents information appraisal. However, items 13, 5, and 15 did not fit the factor proposed by the EFA in terms of their content. Therefore, item 13 was reassigned to information seeking, whereas items 5 and 15 were reassigned to information appraisal in model 2. For model 3, we removed 6 items due to low factor loadings (<0.65). Moreover, we excluded 1 more item to develop a parsimonious model, resulting in a 2-factorial model with 4 items on each of the 2 factors. Table 3 shows the model fits of the 3 models.

CFI, TLI, and SRMR practically meet the criteria of a good model fit. RMSEA is slightly above the recommendations of Hu and Bentler [70]. Considering the recommendations, model 3 shows an acceptable-to-good model fit.

Figure 1 depicts the structure of the 2-factorial model with its factor loadings. All item factor loadings were greater than λ=0.71.

Information seeking and information appraisal achieved satisfactory Cronbach alphas of .92 and .83, respectively. Table 4 shows the statistics of the final items. Based on mean and standard deviation, lower levels of information seeking and information appraisal are below a mean score of 2.99 and 3.20, respectively. Higher levels can be assumed above mean scores of 4.71 and 4.69, respectively.

Table 3. Results of the confirmatory factor analyses.

Model   Chi-square   df   CFIa    TLIb    RMSEAc   SRMRd   AICe        BICf
1       433.5        89   0.891   0.871   0.100    0.067   16029.832   16158.567
2       519.8        89   0.863   0.839   0.112    0.084   16136.608   16265.343
3       117.0        19   0.942   0.915   0.127    0.055   7782.043    7852.640

aCFI: comparative fit index.

bTLI: Tucker Lewis index.

cRMSEA: root mean square error of approximation.

dSRMR: standardized root mean square residual.

eAIC: Akaike information criterion.

fBIC: Bayesian information criterion.

Figure 1. A 2-factorial, intercorrelated model of eHealth literacy.
Table 4. Descriptive statistics of the revised German eHealth Literacy Scale (GR-eHEALS) items. Values are mean (SD), median, and skew.

Information seeking: 3.85 (0.86), 4.00, –0.78

  1. Ich weiß, wie ich Internetseiten mit hilfreichen Gesundheitsinformationen finden kann. 3.93 (0.95), 4.00, –0.93
  2. Ich weiß, wie ich das Internet nutzen kann, um Antworten auf meine Gesundheitsfragen zu erhalten. 4.04 (0.87), 4.00, –1.01
  3. Ich weiß, welche Seiten mit Gesundheitsinformationen im Internet verfügbar sind. 3.63 (1.00), 4.00, –0.60
  4. Ich weiß, wo ich im Internet hilfreiche Gesundheitsinformationen finden kann. 3.81 (1.01), 4.00, –0.89

Information appraisal: 3.95 (0.74), 4.00, –0.77

  5. Ich weiß Gesundheitsinformationen aus dem Internet so zu nutzen, dass sie mir weiterhelfen. 3.91 (0.88), 4.00, –0.77
  6. Ich bin in der Lage, Internetseiten mit Gesundheitsinformationen kritisch zu bewerten. 4.18 (0.87), 4.00, –1.24
  7. Ich kann zwischen vertrauenswürdigen und fragwürdigen Internetseiten mit Gesundheitsinformationen unterscheiden. 4.07 (0.84), 4.00, –0.93
  8. Ich fühle mich sicher darin, Informationen aus dem Internet zu nutzen, um Entscheidungen in Bezug auf meine Gesundheit zu treffen. 3.62 (1.03), 4.00, –0.57

Validation of the GR-eHEALS

To examine convergent, discriminant, and criterion validity of the GR-eHEALS, we performed correlation analyses with the 2 factors (information seeking and information appraisal). Moreover, correlations of the 2 factors with demographic variables were calculated. Results are shown in Table 5. Both factors were positively correlated with health literacy and internet confidence and negatively correlated with internet anxiety. Neither of the 2 scales correlated significantly with impulsivity or extraversion. Information appraisal was interrelated with neuroticism, while information seeking was associated with openness. Information appraisal was correlated with mental and physical health and life satisfaction, which was not the case for information seeking. Furthermore, information seeking was significantly associated with age.

Table 5. Pearson correlation coefficients of the eHealth literacy factors. Values are r (P value) for information seeking and information appraisal, respectively.

Convergent validity
  Health literacy: 0.43 (<.001), 0.53 (<.001)
  Internet confidence: 0.17 (<.001), 0.17 (<.001)
  Internet anxiety: –0.21 (<.001), –0.23 (<.001)

Discriminant validity
  Impulsivity: –0.06 (.16), –0.05 (.28)
  Extraversion: –0.03 (.58), 0.03 (.56)
  Neuroticism: –0.08 (.09), –0.14 (.001)
  Openness: 0.10 (.03), 0.07 (.12)

Criterion validity
  Mental health: 0.06 (.20), 0.19 (<.001)
  Physical health: 0.06 (.21), 0.12 (.01)
  Life satisfaction: –0.01 (.83), 0.12 (.01)

Sociodemographic variables
  Age: 0.10 (.02), 0.06 (.16)
  Gender: –0.03 (.55), 0.01 (.78)
  Marital status: –0.02 (.71), –0.07 (.15)
  Educational level: –0.04 (.39), –0.02 (.68)
  Financial situation: –0.05 (.27), 0.04 (.45)
  Internet availability: 0.01 (.76), 0.02 (.71)
  Community size: 0.02 (.60), –0.04 (.41)

Test of Measurement Invariance

Tests of measurement invariance of the GR-eHEALS were performed to examine whether the scale measures equivalently regardless of gender, age, and educational level. Prior to these analyses, a median split was performed to separate participants into 2 groups according to age. Median age was 33 years. Also, to divide the study sample into 3 groups of educational level, we separated participants into people who held a university degree, people who had completed vocational training, and people who had any school certificate. Results of the analyses are shown in Table 6.

Besides chi-square and fit indices, Table 6 shows the differences in CFI between models. Regarding measurement invariance of gender and education, all changes in CFI are below 0.01, indicating that model fits did not substantially decrease between more constrained models. Measurement invariance regarding age must be rejected as configural invariance could not be confirmed.

Table 6. Results of measurement invariance for gender, age, and education using multigroup confirmatory factor analysis.

Model        Chi-square   df   CFIa   TLIb    RMSEAc   SRMRd   ΔCFIe

Genderf
  Configuralg   154.937   38   0.94   0.905   0.135    0.056   0.006
  Metric        166.889   44   0.93   0.916   0.128    0.066   0.002
  Scalar        181.273   50   0.93   0.923   0.122    0.068   0.002

Ageh
  Configuralg   187.672   38   0.92   0.883   0.150    0.059   0.021
  Metric        185.713   44   0.92   0.901   0.138    0.059   –0.002
  Scalar        197.419   50   0.92   0.913   0.130    0.060   0.001

Educationi
  Configuralg   170.758   57   0.94   0.904   0.136    0.058   0.007
  Metric        174.474   69   0.94   0.926   0.119    0.061   –0.004
  Scalar        196.107   81   0.94   0.934   0.112    0.063   0.002

aCFI: comparative fit index.

bTLI: Tucker Lewis index.

cRMSEA: root mean square error of approximation.

dSRMR: standardized root mean square residual.

eChange in CFI compared to preceding model.

fFemale n=332; male n=138.

gChange of CFI compared to model 3.

hAge>median n=240; age<median n=230.

iUniversity degree n=273; vocational training n=91; school certificate n=106.


Principal Findings

The results of our factor analyses show that eHealth literacy consists of 2 factors, information seeking and information appraisal. Our first study aim was to examine whether the measurement of eHealth literacy could be improved by adding nonoverlapping items from the eHEALS-E [54] to the original eHEALS [16]. We performed an EFA and several CFAs to examine the factorial structure of our instrument. Our analyses show that the measurement of eHealth literacy could not be improved by adding additional items to the well-established eHEALS questionnaire.

However, our study contributes significantly to the existing measurement of eHealth literacy. By closely following scientific recommendations regarding academic translations, we developed the GR-eHEALS with high content validity. By taking statistical and content-related considerations into account when conducting the factor analyses, we developed a measurement model of eHealth literacy with high content validity and an acceptable-to-good model fit. Cronbach alpha was satisfactory for the 2 factors, indicating good internal consistency and confirming the reliability of the instrument.

Our findings on the examination of convergent, discriminant, and criterion validity of our instrument were not completely consistent with our expectations and require critical discussion. As expected, the 2 factors showed significant correlations with the convergent constructs of health literacy, internet confidence, and internet anxiety. By contrast, while impulsivity and extraversion consistently showed, as expected, no significant correlations with the 2 factors, neuroticism and openness indicated more inconsistent interrelations. Neuroticism was significantly negatively correlated with information appraisal but not with information seeking. On the other hand, openness was only correlated with information seeking but not with information appraisal. To understand these unexpected correlational patterns, we examined findings of studies investigating the associations of personality traits with health-related constructs. Other studies showed that neuroticism is associated with lower health behavior self-efficacy and fewer health behaviors [77] and lower internet use for learning and education [78]. These findings could indicate that neuroticism distorts the cognitive processes of higher elaboration that are required for information appraisal but not necessarily for information seeking. Regarding the personality trait of openness, Bogg and Vo [79] have shown that people with higher openness more often search the internet for health-related topics. One could argue that openness encourages people to search for new information out of curiosity. However, the subsequent and cognitively demanding process of information appraisal may not be promoted by people’s openness.

Referring to the examination of criterion validity, positive correlations with the possible outcome variables mental health, physical health, and life satisfaction were expected, although only information appraisal was significantly related to these constructs. A potential explanation is that information seeking is a process that requires cognitive effort but may not be sufficient on its own to promote satisfaction and health status; rather, high competency in information appraisal appears to be a mandatory precondition. Nevertheless, the search for information is a necessary process for performing the subsequent process of information appraisal.

To sum up, convergent validity of our instrument can be comprehensively confirmed. The examinations of discriminant validity and criterion validity revealed unexpected findings that should be the subject of further studies. Although our results did not completely meet our expectations, the findings indicate that the 2 factors represent different cognitive processes in line with dual-process theories of analytic and rule-based processes: information seeking, as the first of 2 consecutive competencies, exclusively focuses on the process of searching for information on the internet but not on a deeper application of the information found. As a second consecutive competency built on information seeking, information appraisal describes a cognitive process of interpreting information and applying it to personal health-related questions.

Furthermore, we investigated measurement invariance for gender, age, and educational level. The results of our study suggest that measurement invariance of the GR-eHEALS can be assumed for gender and educational level at a scalar level of invariance but not for age. Our study is the first to examine measurement invariance for these sociodemographic variables. Particularly in light of the sample limitations of previous studies investigating eHealth literacy, the GR-eHEALS is the first instrument that can be deployed and interpreted regardless of gender and educational level. Therefore, future researchers are able to interpret statistical differences in eHealth literacy across these sociodemographic variables by using the GR-eHEALS. This is highly important, as differential levels of eHealth literacy by gender are conceivable, which has been confirmed for the construct of health literacy [80]. Regarding educational level, studies suggest that education also plays a role in the context of eHealth literacy [81,82], but, to our knowledge, neither used instruments confirmed to be measurement invariant.

Concerning the finding of noninvariance of our instrument with respect to age, one potential explanation could be that older people are less familiar with using the internet than younger people in terms of a digital divide [49] and have a different understanding of information seeking and information appraisal than younger people. Chesser and colleagues [83] suggest that age is a relevant variable in the context of eHealth literacy. Further, in our data we found significant interrelations of age and information seeking but not of age and information appraisal. This should be examined further in upcoming research.

In summary, prior research indicates that the investigation of differences in eHealth literacy between different groups of people is of high scientific interest. Nonetheless, previous studies did not take into account that statistical differences should not be interpreted unless measurement invariance is confirmed. With the GR-eHEALS, we close this gap and contribute substantially to the understanding of the concept of eHealth literacy and the interpretation of mean differences for gender and educational level.

Due to its high validity, the GR-eHEALS provides researchers and practitioners with a measurement of the increasingly important construct of eHealth literacy. As eHealth literacy is linked with many health-related outcomes and behaviors [19,26,27], the GR-eHEALS could provide a basis for educational programs to improve eHealth literacy by focusing on the main cognitive processes important for interpreting health information from the internet. Also, there is evidence that students lack competencies regarding eHealth literacy [84]. Hence, the assessment and development of eHealth literacy should be part of students’ curricula to provide young people with the competencies needed to maintain or improve their health status. Consequently, the GR-eHEALS could be part of educational psychologists’ diagnostic repertoire as well as a foundation for specialist training programs in schools and universities. We propose that the results of the GR-eHEALS be interpreted based on the 2 competencies for the diagnosis of and interventions on eHealth literacy, considering the described mean scores for higher and lower levels of information seeking and information appraisal.

Strengths and Limitations

The main strengths of this study are the high methodological and psychometric standards applied to develop GR-eHEALS and confirm its content, construct, and criterion validity. Furthermore, confirmation of measurement invariance is a state-of-the-art approach with strong practical implications regarding the interpretations of group differences.

One limitation of our study was that we measured eHealth literacy by self-assessment only. Since this construct is intended to measure skills and competencies, eHealth literacy should either be compared with actual behaviors or assessed using behavior-based measurement. Furthermore, our data were collected in a cross-sectional study. Therefore, correlational directions show relationships but are not interpretable causally. Future research should explore if our 2 factors show different effects on health-related outcomes. Additionally, as we used an online survey, participation by people familiar with the internet was more likely than by people who rarely use the internet. Thus, the possibility of selection bias should be considered. In our sample, a high proportion of people holding a university degree limits the representativeness regarding the education level. As in Germany about 19% of the population hold a university degree [85], our sample with a proportion of 58% holding a university degree clearly overrepresents academic persons. Even though it was our goal to collect data on a convenience sample, our study sample consisted of 71% female participants and cannot be considered as population-representative. Therefore, future studies should replicate our findings using a population-representative sample.

Conclusion

eHealth literacy reflects an important competence of people for maintaining and improving their health status. This competence will become more and more important since the internet provides a rapidly increasing amount of health information with considerable variation in quality and trustworthiness. The GR-eHEALS, with its 8 items on 2 factors, is a validated instrument to capture eHealth literacy in the German language. The GR-eHEALS contributes to the measurement of eHealth literacy in 3 ways: (1) the instrument has high content validity because of a translation following scientific recommendations, (2) the instrument has an acceptable-to-good model fit and confirmed measurement invariance for gender and educational level, and (3) the instrument revises the existing G-eHEALS and fills an important gap in measuring eHealth literacy, providing researchers and practitioners with an accurate and valid assessment.

Acknowledgments

We acknowledge support by the Open Access Publication Fund of the University of Duisburg-Essen. The authors wish to thank Daniela Geiß and Maria Spank for their support as experts in the systematic expert panel within the translational process.

Authors' Contributions

MM, GE, AB, and EMS conceptualized the study. Project administration was performed by MM, GE, and AB. Statistical analyses were conducted by MM. MM and GE interpreted the data and wrote the original draft of the manuscript. AB, EMS, and MT supervised the project and contributed to the study design, data collection, and critical revision of the manuscript. All authors reviewed and approved the final manuscript. All data supporting the conclusion of the study are included in Multimedia Appendix 3.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Original and translated items.

PDF File (Adobe PDF File), 105 KB

Multimedia Appendix 2

Item statistics of the prestudy (n=50).

PDF File (Adobe PDF File), 102 KB

Multimedia Appendix 3

Dataset.

XLSX File (Microsoft Excel File), 391 KB

  1. Parker RM, Baker DW, Williams MV, Nurss JR. The test of functional health literacy in adults: a new instrument for measuring patients' literacy skills. J Gen Intern Med 1995 Oct;10(10):537-541. [CrossRef] [Medline]
  2. Nutbeam D. Health promotion glossary. Health Promot 1986 May;1(1):113-127. [CrossRef] [Medline]
  3. Peterson PN, Shetterly SM, Clarke CL, Bekelman DB, Chan PS, Allen LA, et al. Health literacy and outcomes among patients with heart failure. JAMA 2011 Apr 27;305(16):1695-1701 [FREE Full text] [CrossRef] [Medline]
  4. Schillinger D, Grumbach K, Piette J, Wang F, Osmond D, Daher C, et al. Association of health literacy with diabetes outcomes. JAMA 2002;288(4):475-482. [CrossRef] [Medline]
  5. World Health Organization. Shanghai declaration on promoting health in the 2030 Agenda for Sustainable Development. Health Promot Int 2017 Feb 01;32(1):7-8. [CrossRef] [Medline]
  6. Cline RJ, Haynes KM. Consumer health information seeking on the Internet: the state of the art. Health Educ Res 2001 Dec;16(6):671-692. [CrossRef] [Medline]
  7. Chen Y, Li C, Liang J, Tsai C. Health information obtained from the internet and changes in medical decision making: questionnaire development and cross-sectional survey. J Med Internet Res 2018 Dec 12;20(2):e47 [FREE Full text] [CrossRef] [Medline]
  8. Trotter MI, Morgan DW. Patients' use of the Internet for health related matters: a study of Internet usage in 2000 and 2006. Health Informatics J 2008 Sep;14(3):175-181. [CrossRef] [Medline]
  9. Korp P. Health on the Internet: implications for health promotion. Health Educ Res 2006 Feb;21(1):78-86 [FREE Full text] [CrossRef] [Medline]
  10. Khaleel I, Wimmer BC, Peterson GM, Zaidi STR, Roehrer E, Cummings E, et al. Health information overload among health consumers: a scoping review. Patient Educ Couns 2020 Jan;103(1):15-32. [CrossRef] [Medline]
  11. Ahmed ST. Managing news overload (MNO): the COVID-19 infodemic. Information 2020 Jul 25;11(8):375. [CrossRef]
  12. Rathore FA, Farooq F. Information overload and infodemic in the COVID-19 pandemic. J Pak Med Assoc 2020 May;70(Suppl 3)(5):S162-S165. [CrossRef] [Medline]
  13. Lancet. The COVID-19 infodemic. Lancet Infect Dis 2020 Aug;20(8):875 [FREE Full text] [CrossRef] [Medline]
  14. Orso D, Federici N, Copetti R, Vetrugno L, Bove T. Infodemic and the spread of fake news in the COVID-19-era. Eur J Emerg Med 2020 Apr 23:1 [FREE Full text] [CrossRef] [Medline]
  15. Norman CD, Skinner HA. eHealth literacy: essential skills for consumer health in a networked world. J Med Internet Res 2006 Jun;8(2):e9 [FREE Full text] [CrossRef] [Medline]
  16. Norman CD, Skinner HA. eHEALS: the eHealth literacy scale. J Med Internet Res 2006 Nov;8(4):e27 [FREE Full text] [CrossRef] [Medline]
  17. Karnoe A, Kayser L. How is eHealth literacy measured and what do the measurements tell us? A systematic review. Knowl Manag E-Learn Int J 2015:576-600. [CrossRef]
  18. Neter E, Brainin E. Association between health literacy, eHealth literacy, and health outcomes among patients with long-term conditions. Eur Psychol 2019 Jan;24(1):68-81. [CrossRef]
  19. Britt RK, Collins WB, Wilson KM, Linnemeier G, Englebert AM. The role of eHealth literacy and HPV vaccination among young adults: implications from a planned behavior approach. Comm Res Rep 2015 Jul 15;32(3):208-215. [CrossRef]
  20. An L, Bacon E, Hawley S, Yang P, Russell D, Huffman S, et al. Relationship between coronavirus-related eHealth literacy and COVID-19 knowledge, attitudes, and practices among US adults: web-based survey study. J Med Internet Res 2021 Mar 29;23(3):e25042 [FREE Full text] [CrossRef] [Medline]
  21. Mitsutake S, Shibata A, Ishii K, Oka K. Association of eHealth literacy with colorectal cancer knowledge and screening practice among internet users in Japan. J Med Internet Res 2012;14(6):e153 [FREE Full text] [CrossRef] [Medline]
  22. Park H, Moon M, Baeg JH. Association of eHealth literacy with cancer information seeking and prior experience with cancer screening. Comput Inform Nurs 2014 Sep;32(9):458-463. [CrossRef] [Medline]
  23. Stellefson ML, Shuster JJ, Chaney BH, Paige SR, Alber JM, Chaney JD, et al. Web-based health information seeking and eHealth literacy among patients living with chronic obstructive pulmonary disease (COPD). Health Commun 2017 Sep 05:1-15. [CrossRef] [Medline]
  24. Lin C, Ganji M, Griffiths MD, Bravell ME, Broström A, Pakpour AH. Mediated effects of insomnia, psychological distress and medication adherence in the association of eHealth literacy and cardiac events among Iranian older patients with heart failure: a longitudinal study. Eur J Cardiovasc Nurs 2020 Feb;19(2):155-164. [CrossRef] [Medline]
  25. Noblin AM, Wan TTH, Fottler M. The impact of health literacy on a patient's decision to adopt a personal health record. Perspect Health Inf Manag 2012;9:1-13 [FREE Full text] [Medline]
  26. Mitsutake S, Shibata A, Ishii K, Oka K. Associations of eHealth literacy with health behavior among adult internet users. J Med Internet Res 2016 Jul;18(7):e192 [FREE Full text] [CrossRef] [Medline]
  27. Tsukahara S, Yamaguchi S, Igarashi F, Uruma R, Ikuina N, Iwakura K, et al. Association of eHealth literacy with lifestyle behaviors in university students: questionnaire-based cross-sectional study. J Med Internet Res 2020 Jun 24;22(6):e18155 [FREE Full text] [CrossRef] [Medline]
  28. Yang BX, Xia L, Huang R, Chen P, Luo D, Liu Q, et al. Relationship between eHealth literacy and psychological status during COVID-19 pandemic: a survey of Chinese residents. J Nurs Manag 2020 Dec 02:1 [FREE Full text] [CrossRef] [Medline]
  29. Li X, Liu Q. Social media use, eHealth literacy, disease knowledge, and preventive behaviors in the COVID-19 pandemic: cross-sectional study on chinese netizens. J Med Internet Res 2020 Oct 09;22(10):e19684 [FREE Full text] [CrossRef] [Medline]
  30. Evans JSBT, Stanovich KE. Dual-process theories of higher cognition: advancing the debate. Perspect Psychol Sci 2013 May;8(3):223-241. [CrossRef] [Medline]
  31. Kruglanski AW, Gigerenzer G. Intuitive and deliberate judgments are based on common principles. Psychol Rev 2011 Jan;118(1):97-109. [CrossRef] [Medline]
  32. Pennycook G, Rand DG. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers 2020 Apr;88(2):185-200. [CrossRef] [Medline]
  33. Stanovich KE, West RF. Individual differences in reasoning: implications for the rationality debate? Behav Brain Sci 2000 Oct;23(5):645-665. [CrossRef] [Medline]
  34. De Caro W, Corvo E, Marucci AR, Mitello L, Lancia L, Sansoni J. eHealth literacy scale: an nursing analysis and Italian validation. Stud Health Technol Inform 2016;225:949. [Medline]
  35. Del Giudice P, Bravo G, Poletto M, De Odorico A, Conte A, Brunelli L, et al. Correlation between eHealth literacy and health literacy using the eHealth literacy scale and real-life experiences in the health sector as a proxy measure of functional health literacy: cross-sectional web-based survey. J Med Internet Res 2018 Oct 31;20(10):e281 [FREE Full text] [CrossRef] [Medline]
  36. Paramio PG, Almagro BJ, Hernando GA, Aguaded GJI. [Validation of the eHealth literacy scale (eHEALS) in Spanish university students]. Rev Esp Salud Publica 2015;89(3):329-338 [FREE Full text] [CrossRef] [Medline]
  37. van der Vaart R, van Deursen AJ, Drossaert CH, Taal E, van Dijk JA, van de Laar MA. Does the eHealth Literacy Scale (eHEALS) measure what it intends to measure? Validation of a Dutch version of the eHEALS in two adult populations. J Med Internet Res 2011;13(4):e86 [FREE Full text] [CrossRef] [Medline]
  38. Chang A, Schulz PJ. The measurements and an elaborated understanding of Chinese eHealth literacy (C-eHEALS) in chronic patients in China. Int J Environ Res Public Health 2018 Dec 23;15(7):1553. [CrossRef] [Medline]
  39. Gazibara T, Cakic J, Cakic M, Pekmezovic T, Grgurevic A. eHealth and adolescents in Serbia: psychometric properties of eHeals questionnaire and contributing factors to better online health literacy. Health Promot Int 2018 May 25:770-778. [CrossRef] [Medline]
  40. Chung S, Park BK, Nahm E. The Korean eHealth Literacy Scale (K-eHEALS): reliability and validity testing in younger adults recruited online. J Med Internet Res 2018 Apr 20;20(4):e138 [FREE Full text] [CrossRef] [Medline]
  41. Wijaya MC, Kloping YP. Validity and reliability testing of the Indonesian version of the eHealth Literacy Scale during the COVID-19 pandemic. Health Informatics J 2021;27(1):1460458220975466 [FREE Full text] [CrossRef] [Medline]
  42. Soellner R, Huber S, Reder M. The concept of eHealth literacy and its measurement. J Media Psychol 2014 Jan;26(1):29-38. [CrossRef]
  43. Ma Z, Wu M. The psychometric properties of the Chinese eHealth Literacy Scale (C-eHEALS) in a Chinese rural population: cross-sectional validation study. J Med Internet Res 2019 Oct 22;21(10):e15720 [FREE Full text] [CrossRef] [Medline]
  44. Dale JG, Lüthi A, Fundingsland Skaraas B, Rundereim T, Dale B. Testing measurement properties of the Norwegian version of electronic Health Literacy Scale (eHEALS) in a group of day surgery patients. J Multidiscip Healthc 2020;13:241-247 [FREE Full text] [CrossRef] [Medline]
  45. Richtering SS, Morris R, Soh S, Barker A, Bampi F, Neubeck L, et al. Examination of an eHealth literacy scale and a health literacy scale in a population with moderate to high cardiovascular risk: Rasch analyses. PLoS One 2017;12(4):e0175372 [FREE Full text] [CrossRef] [Medline]
  46. Hyde LL, Boyes AW, Evans T, Mackenzie LJ, Sanson-Fisher R. Three-factor structure of the eHealth Literacy Scale among magnetic resonance imaging and computed tomography outpatients: a confirmatory factor analysis. JMIR Hum Factors 2018 Feb 19;5(1):e6 [FREE Full text] [CrossRef] [Medline]
  47. Sudbury-Riley L, FitzPatrick M, Schulz PJ. Exploring the measurement properties of the eHealth Literacy Scale (eHEALS) among baby boomers: a multinational test of measurement invariance. J Med Internet Res 2017 Feb 27;19(2):e53 [FREE Full text] [CrossRef] [Medline]
  48. Stellefson M, Paige SR, Tennant B, Alber JM, Chaney BH, Chaney D, et al. Reliability and validity of the telephone-based eHealth Literacy Scale among older adults: cross-sectional survey. J Med Internet Res 2017 Oct 26;19(10):e362 [FREE Full text] [CrossRef] [Medline]
  49. Loges W, Jung J. Exploring the digital divide: internet connectedness and age. Commun Res 2016 Jun 30;28(4):536-562. [CrossRef]
  50. Paige SR, Miller MD, Krieger JL, Stellefson M, Cheong J. Electronic health literacy across the lifespan: measurement invariance study. J Med Internet Res 2018 Jul 09;20(7):e10434 [FREE Full text] [CrossRef] [Medline]
  51. Tennant B, Stellefson M, Dodd V, Chaney B, Chaney D, Paige S, et al. eHealth literacy and Web 2.0 health information seeking behaviors among baby boomers and older adults. J Med Internet Res 2015;17(3):e70 [FREE Full text] [CrossRef] [Medline]
  52. Juvalta S, Kerry MJ, Jaks R, Baumann I, Dratva J. Electronic health literacy in Swiss-German parents: cross-sectional study of eHealth Literacy Scale unidimensionality. J Med Internet Res 2020 Mar 13;22(3):e14492 [FREE Full text] [CrossRef] [Medline]
  53. Reder M, Soellner R, Kolip P. Do women with high eHealth literacy profit more from a decision aid on mammography screening? Testing the moderation effect of the eHEALS in a randomized controlled trial. Front Public Health 2019;7:46 [FREE Full text] [CrossRef] [Medline]
  54. Petrič G, Atanasova S, Kamin T. Ill literates or illiterates? Investigating the eHealth literacy of users of online health communities. J Med Internet Res 2017 Oct 04;19(10):e331 [FREE Full text] [CrossRef] [Medline]
  55. Kayser L, Karnoe A, Furstrand D, Batterham R, Christensen KB, Elsworth G, et al. A multidimensional tool based on the eHealth literacy framework: development and initial validity testing of the eHealth Literacy Questionnaire (eHLQ). J Med Internet Res 2018 Feb 12;20(2):e36 [FREE Full text] [CrossRef] [Medline]
  56. Norgaard O, Furstrand D, Klokker L, Karnoe A, Batterham R, Kayser L, et al. The e-health literacy framework: a conceptual framework for characterizing e-health users and their interaction with e-health systems. Knowl Manag E-Learn Int J 2015;7(4):540. [CrossRef]
  57. van der Vaart R, Drossaert C. Development of the digital health literacy instrument: measuring a broad spectrum of Health 1.0 and Health 2.0 skills. J Med Internet Res 2017 Jan 24;19(1):e27 [FREE Full text] [CrossRef] [Medline]
  58. Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976) 2000 Dec 15;25(24):3186-3191. [CrossRef] [Medline]
  59. Guillemin F, Bombardier C, Beaton D. Cross-cultural adaptation of health-related quality of life measures: literature review and proposed guidelines. J Clin Epidemiol 1993 Dec;46(12):1417-1432. [CrossRef] [Medline]
  60. Vandenberg RJ, Lance CE. A review and synthesis of the measurement invariance literature: suggestions, practices, and recommendations for organizational research. Org Res Meth 2000;3(1):4-70. [CrossRef]
  61. Nulty DD. The adequacy of response rates to online and paper surveys: what can be done? Assess Eval Higher Educ 2008 Jun;33(3):301-314. [CrossRef]
  62. Rouquette A, Falissard B. Sample size requirements for the internal validation of psychiatric scales. Int J Methods Psychiatr Res 2011 Dec;20(4):235-249 [FREE Full text] [CrossRef] [Medline]
  63. Kyriazos TA. Applied psychometrics: sample size and sample power considerations in factor analysis (EFA, CFA) and SEM in general. Psychology 2018;9(8):2207-2230. [CrossRef]
  64. Cronbach L, Meehl P. Construct validity in psychological tests. Psychol Bull 1955 Jul;52(4):281-302. [CrossRef] [Medline]
  65. Röthlin F, Pelikan J, Ganahl K. Die Gesundheitskompetenz der 15-jährigen Jugendlichen in Österreich. Abschlussbericht der österreichischen Gesundheitskompetenz Jugendstudie im Auftrag des Hauptverbands der österreichischen Sozialversicherungsträger (HVSV). Wien: Ludwig Boltzmann Gesellschaft GmbH; 2013. URL: https://www.sozialversicherung.at/cdscontent/load?contentid=10008.715507&version [accessed 2022-01-14]
  66. Kovaleva A, Beierlein C, Kemper C, Rammstedt B. Die Skala Impulsives-Verhalten-8 (I-8). Zusammenstellung Sozialwissenschaftlicher Items Skalen ZIS 2014:1-22 [FREE Full text] [CrossRef]
  67. Rammstedt B, Kemper C, Klein M, Beierlein C, Kovaleva A. Big Five Inventory (BFI-10). Zusammenstellung Sozialwissenschaftlicher Items Skalen ZIS 2014:1-23 [FREE Full text] [CrossRef]
  68. Beierlein C, Kovaleva A, László Z, Kemper C, Rammstedt B. Kurzskala zur Erfassung der Allgemeinen Lebenszufriedenheit (L-1). Zusammenstellung Sozialwissenschaftlicher Items Skalen ZIS 2015:1-18 [FREE Full text] [CrossRef]
  69. Field A. Discovering Statistics Using IBM SPSS Statistics. 4th Edition. Los Angeles: SAGE Publications Ltd; 2013.
  70. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Structural Eq Modeling Multidisc J 1999 Jan;6(1):1-55. [CrossRef]
  71. Finney S, DiStefano C. Nonnormal and categorical data in structural equation modeling. In: Hancock GR, Mueller RO, editors. Structural Equation Modeling, Second Course. 2nd Edition. Charlotte: IAP Information Age Publishing; 2013:439-492.
  72. Widaman K, Reise S. Exploring the measurement invariance of psychological instruments: applications in the substance use domain. In: Bryant KJ, Windle M, West SG, editors. The Science of Prevention: Methodological Advances From Alcohol and Substance Abuse Research. Washington: American Psychological Association; 1997:281-324.
  73. Dimitrov DM. Testing for factorial invariance in the context of construct validation. Meas Eval Couns Devel 2010;43(2):121-149. [CrossRef]
  74. Milfont T, Fischer R. Testing measurement invariance across groups: applications in cross-cultural research. Int J Psychol Res 2010 Jun 30;3(1):111-130. [CrossRef]
  75. Hirschfeld G, Von Brachel R. Improving multiple-group confirmatory factor analysis in R? A tutorial in measurement invariance with continuous and ordinal indicators. Pract Assess Res Eval 2014;19(1):7. [CrossRef]
  76. Putnick DL, Bornstein MH. Measurement invariance conventions and reporting: the state of the art and future directions for psychological research. Dev Rev 2016 Sep;41:71-90 [FREE Full text] [CrossRef] [Medline]
  77. Williams PG, O'Brien CD, Colder CR. The effects of neuroticism and extraversion on self-assessed health and health-relevant cognition. Pers Individ Dif 2004 Jul;37(1):83-94. [CrossRef]
  78. Tuten TL, Bosnjak M. Understanding differences in web usage: the role of need for cognition and the five factor model of personality. Soc Behav Pers 2001 Jan 01;29(4):391-398. [CrossRef]
  79. Bogg T, Vo PT. Openness, neuroticism, conscientiousness, and family health and aging concerns interact in the prediction of health-related Internet searches in a representative U.S. sample. Front Psychol 2014;5:370 [FREE Full text] [CrossRef] [Medline]
  80. Lee HY, Lee J, Kim NK. Gender differences in health literacy among Korean adults: do women have a higher level of health literacy than men? Am J Mens Health 2015 Sep;9(5):370-379. [CrossRef] [Medline]
  81. Xesfingi S, Vozikis A. eHealth literacy: in the quest of the contributing factors. Interact J Med Res 2016 May 25;5(2):e16 [FREE Full text] [CrossRef] [Medline]
  82. Levin-Zamir D, Lemish D, Gofin R. Media Health Literacy (MHL): development and measurement of the concept among adolescents. Health Educ Res 2011 Apr;26(2):323-335. [CrossRef] [Medline]
  83. Chesser A, Burke A, Reyes J, Rohrberg T. Navigating the digital divide: a systematic review of eHealth literacy in underserved populations in the United States. Inform Health Soc Care 2016;41(1):1-19. [CrossRef] [Medline]
  84. Stellefson M, Hanik B, Chaney B, Chaney D, Tennant B, Chavarria EA. eHealth literacy among college students: a systematic review with implications for eHealth education. J Med Internet Res 2011 Dec;13(4):e102 [FREE Full text] [CrossRef] [Medline]
  85. Autorengruppe Bildungsberichterstattung. Bildung in Deutschland 2020: Ein indikatorengestützter Bericht mit einer Analyse zu Bildung in einer digitalisierten Welt. Bielefeld: wbv Media GmbH & Co KG; 2020.


Abbreviations

CFA: confirmatory factor analysis
CFI: comparative fit index
EFA: exploratory factor analysis
eHEALS: eHealth Literacy Scale
eHEALS-E: extended eHealth Literacy Scale
G-eHEALS: German eHealth Literacy Scale
GR-eHEALS: revised German eHealth Literacy Scale
KMO: Kaiser-Meyer-Olkin test
RMSEA: root mean square error of approximation
SRMR: standardized root mean square residual
TLI: Tucker Lewis index


Edited by G Eysenbach; submitted 26.02.21; peer-reviewed by L Sudbury-Riley, E Roehrer, JR Bautista; comments to author 08.06.21; revised version received 24.07.21; accepted 19.11.21; published 02.02.22

Copyright

©Matthias Marsall, Gerrit Engelmann, Eva-Maria Skoda, Martin Teufel, Alexander Bäuerle. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 02.02.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.