Published on 3.8.2023 in Vol 25 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/42638.
Smartphone App–Based and Paper-Based Patient-Reported Outcomes Using a Disease-Specific Questionnaire for Dry Eye Disease: Randomized Crossover Equivalence Study

Original Paper

1Department of Hospital Administration, Juntendo University Graduate School of Medicine, Tokyo, Japan

2Department of Ophthalmology, Juntendo University Graduate School of Medicine, Tokyo, Japan

3Department of Digital Medicine, Juntendo University Graduate School of Medicine, Tokyo, Japan

4AI Incubation Farm, Juntendo University Graduate School of Medicine, Tokyo, Japan

Corresponding Author:

Takenori Inomata, MD, PhD, MBA

Department of Ophthalmology

Juntendo University Graduate School of Medicine

2-1-1 Hongo, Bunkyo-ku

Tokyo, 1130033

Japan

Phone: 81 338133111

Fax: 81 356890394

Email: tinoma@juntendo.ac.jp


Background: Using traditional patient-reported outcomes (PROs), such as paper-based questionnaires, is cumbersome in the era of web-based medical consultation and telemedicine. Electronic PROs may reduce the burden on patients if implemented widely. Given the promising reports on DryEyeRhythm, our in-house mHealth smartphone app for investigating dry eye disease (DED), the electronic and paper-based versions of the Ocular Surface Disease Index (OSDI) should be evaluated and compared to determine their equivalence.

Objective: This study aimed to assess the equivalence between smartphone app–based and paper-based questionnaires for DED.

Methods: This prospective, nonblinded, randomized crossover study enrolled 34 participants between April 2022 and June 2022 at a university hospital in Japan. The participants were randomly allocated to 2 groups in a 1:1 ratio. The paper-app group initially responded to the paper-based Japanese version of the OSDI (J-OSDI), followed by the app-based J-OSDI. The app-paper group responded to the same questionnaires in reverse order. We performed an equivalence test based on the minimal clinically important difference to assess the equivalence of the J-OSDI total scores between the 2 platforms (paper-based vs app-based). Equivalence was indicated if the 95% CI of the mean difference between the J-OSDI total scores on the 2 platforms fell within ±7.0. The internal consistency and agreement of the app-based J-OSDI were assessed using Cronbach α coefficients and intraclass correlation coefficient values.

Results: A total of 33 participants were included in this study. The total scores for the app- and paper-based J-OSDI indicated satisfactory equivalence per our study definition (mean difference 1.8, 95% CI –1.4 to 5.0). Moreover, the app-based J-OSDI total score demonstrated good internal consistency and agreement (Cronbach α=.958; intraclass correlation=0.919; 95% CI 0.842 to 0.959) and was significantly correlated with its paper-based counterpart (Pearson correlation=0.932, P<.001).

Conclusions: This study demonstrated the equivalence of PROs between the app- and paper-based J-OSDI. Implementing the app-based J-OSDI in various scenarios, including telehealth, may have implications for the early diagnosis of DED and longitudinal monitoring of PROs.

J Med Internet Res 2023;25:e42638

doi:10.2196/42638

Introduction

Dry eye disease (DED) is the most common disease of the ocular surface, with a prevalence ranging from 5% to 50% [1,2]. DED presents in a highly personalized manner with numerous symptoms, including eye dryness, discomfort, decreased visual acuity, and generalized fatigue [3-5]. These symptoms decrease the quality of life and work productivity, and DED management imposes a burden on families and health care infrastructure [6]. However, DED has no cure, and the current standard of care revolves around the post facto management of subjective symptoms and preventive measures to halt disease progression [7]. A large proportion of patients with DED may be undiagnosed and untreated despite the presence of DED symptoms [8]. Hence, early detection and intervention, followed by longitudinal monitoring, are crucial to preventing disease progression [4,9,10].

The current diagnostic standards proposed by 2 leading organizations on DED, namely the Tear Film & Ocular Surface Society and the Asia Dry Eye Society, suggest a holistic assessment of patients’ subjective symptoms and tear film breakup time (TFBUT) for diagnosing DED [6,11]. Subjective symptoms of DED should be assessed using disease-specific questionnaires to quantify the degree and types of symptoms [12]. However, questionnaire results do not always correlate with clinical impressions and are frequently affected by individual lifestyle patterns, habits, and quality of life [12,13]. Therefore, the longitudinal measurement of subjective symptoms in a true-to-life environment, which negates the fluctuations of symptom-based questionnaire scores, is necessary to accurately evaluate the patients’ condition and the effectiveness of ongoing treatment [8,14,15].

To date, mobile health (mHealth) [16] research and implementation have augmented the capability of telehealth by screening large populations for the early diagnosis and long-term monitoring of chronic illnesses via remote analyses of results [17-19]. Moreover, researchers have investigated the merits of mHealth, including electronic patient-reported outcomes (ePROs), in routine assessments [20,21]. ePROs offer insights into patients’ subjective experiences, particularly their disease symptoms and satisfaction with treatment outcomes, collected through digital questionnaires [13,21-24]. Traditional patient-reported outcomes (PROs), such as paper-based questionnaires, are cumbersome for collecting daily subjective symptoms in telemedicine and web-based practice settings. Conversely, ePRO reports indicate that the remote accessibility and usability of electronic adaptations reduce the burden on patients and that widely implemented ePROs may be relatively well accepted by patients [21,23,25,26]. However, the ePRO Good Research Practices Task Force of the Professional Society for Health Economics and Outcomes Research cautions that numerous questionnaires were developed as PRO tools under the assumption of on-site paper-based administration; thus, a thorough comparison of the 2 platforms is warranted to ensure the reliability of substituting ePRO tools for traditional PRO tools [21,23].

In November 2016, we developed and released DryEyeRhythm, an in-house mHealth smartphone app for DED research [4,8,14,15,27-29]. Other apps for DED screening include Optrex, a web-based blink test app released in 2018, and Dry eye or not?, a smartphone app released in Thailand in 2019 [30,31]. Both DryEyeRhythm and Dry eye or not? collect ePROs by using an electronic version of the Ocular Surface Disease Index (OSDI), which helps clinicians assess subjective symptoms of DED on a standardized scale [32]. To date, these research efforts have helped overcome the challenges of traditional medicine in DED, such as the early recognition of undiagnosed patients with DED and the unintrusive longitudinal analysis of subjective symptoms generated in daily life [8,15,33,34]. However, the Professional Society for Health Economics and Outcomes Research recommends a comprehensive evaluation and comparison of the electronic and paper-based OSDI to assess their equivalence.

Therefore, in this study, we aimed to compare the characteristics of the app- and paper-based OSDI and assess the equivalency and validity of the app-based OSDI as an appropriate substitute for the traditional OSDI.


Methods

Study Design and Participants

This prospective, nonblinded, randomized crossover study was conducted at the Department of Ophthalmology at Juntendo University Hospital, Tokyo, Japan. Patients aged ≥20 years were recruited between April 20, 2022, and June 8, 2022. Patients with a history of eyelid disorders, ptosis, mental disease, Parkinson disease, or any other disease affecting blinking were excluded. Furthermore, we excluded patients with missing data from the analysis.

Ethics Approval

Written informed consent was obtained from all participants. This study was approved by the Independent Ethics Committee of the Juntendo University Faculty of Medicine (E21-0324-H02) and was conducted in accordance with the ethical standards laid down in the Declaration of Helsinki (as revised in Brazil, 2013). All involved parties attempted to protect the personal information and privacy of the participants. Participant data were anonymized, and research data were stored in locked cabinets with access strictly controlled by the research staff. The participants were not compensated for participating in this study.

DryEyeRhythm Smartphone App

The DryEyeRhythm app was developed using the open-source framework ResearchKit (Apple Inc; Figure 1) [14]. The app was released in November 2016 for iOS and in September 2020 for Android under a consignment contract between the Juntendo University Graduate School of Medicine and InnoJin Inc; it is freely available on Apple’s App Store and Google Play. DryEyeRhythm collects data on user demographics, medical history, lifestyle questionnaires, daily subjective symptoms, the Japanese version of the OSDI (J-OSDI) questionnaire (Figure 1), blink sensing, the Zung Self-Rating Depression Scale for depression, and the Work Productivity and Activity Impairment Questionnaire for work productivity (Figure 1) [3,4,8,14,15,35]. In this study, we assessed only the J-OSDI collected through the app for its equivalence, reliability, and validity compared with the paper-based J-OSDI; we did not use data from the remaining functions.

Figure 1. Screenshots of the DryEyeRhythm app. (A) Screenshot of the DryEyeRhythm test results. (B) Screenshot of the DryEyeRhythm app-based J-OSDI. (C) Screenshot of the DryEyeRhythm measuring menu. J-OSDI: Japanese version of the Ocular Surface Disease Index.

Study Procedures

Figure 2 depicts the study schema. All participants underwent visual acuity measurements, intraocular pressure measurements, and other DED examinations, including TFBUT, corneal fluorescein staining (CFS), and maximum blink interval (MBI). Subsequently, the participants were allocated randomly to (1) the paper-app group and (2) the app-paper group in a 1:1 ratio. Participants in the paper-app group initially responded to the paper-based J-OSDI, followed by the app-based J-OSDI through DryEyeRhythm. Those in the app-paper group initially responded to the app-based J-OSDI through DryEyeRhythm, followed by the paper-based J-OSDI. Each participant completed both versions of the J-OSDI. All participants responded to the app-based J-OSDI on their own by tapping the screen of a smartphone with DryEyeRhythm preinstalled (Figure 1). They responded to the Dry Eye-Related Quality-of-Life Score (DEQS) questionnaire before responding to the second round of the J-OSDI (the app-based J-OSDI for those who began with the paper-based version and vice versa).

Figure 2. Study schema. CFS: corneal fluorescein staining; J-OSDI: Japanese version of the Ocular Surface Disease Index; MBI: maximum blink interval; TFBUT: tear film breakup time.

Assessment of Subjective DED Symptoms

Subjective DED symptoms were assessed using the J-OSDI and DEQS questionnaires. The J-OSDI is a 12-item instrument for assessing subjective DED symptoms, with the following 3 subscales: ocular symptoms, vision-related function, and environmental triggers [3]. The J-OSDI has been validated in Japan [3]. It records the frequency of each symptom on a 5-point scale from “none of the time” (a score of 0) to “all of the time” (a score of 4). The patients selected “not applicable” if questions 6 to 12 were not relevant to them. The J-OSDI total score and each subscale score range from 0 to 100 points and were reported separately [3].
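For illustration, OSDI-family totals are conventionally computed as (sum of item scores × 100) / (number of items answered × 4), which maps responses onto a 0-100 scale while accounting for items marked “not applicable.” The following minimal Python sketch assumes the J-OSDI follows this standard OSDI scoring rule; the function name and interface are ours, for illustration only.

```python
from typing import Optional

def josdi_total(responses: list[Optional[int]]) -> float:
    """OSDI-style total on a 0-100 scale.

    `responses` holds the 12 item scores (0 = "none of the time"
    to 4 = "all of the time"); items answered "not applicable"
    (allowed for questions 6-12) are passed as None and excluded.
    """
    answered = [r for r in responses if r is not None]
    if not answered:
        raise ValueError("at least one item must be answered")
    return sum(answered) * 100 / (len(answered) * 4)

# Example: every item scored 2 ("half of the time") -> total 50.0
print(josdi_total([2] * 12))
```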

The DEQS questionnaire was administered to the participants to assess DED symptom severity and the multifaceted effects of DED on daily life [36]. The DEQS is a subjective measurement of DED symptoms; 0 and 100 points indicate the best (no symptoms) and worst (maximum symptoms) scores, respectively.

Clinical Assessment of DED

We performed DED examinations using the TFBUT, CFS, MBI measurements, the Schirmer test I, and Meibomian gland dysfunction assessment [4].

TFBUT was measured using fluorescein sodium staining (fluorescence ocular examination test paper; Ayumi Pharmaceutical Co) [11]. The mean value of 3 measurements was used.

CFS was evaluated according to the van Bijsterveld grading system [37], which divides the ocular surface into the 3 following zones: the nasal bulbar conjunctiva, the temporal bulbar conjunctiva, and the cornea. Each zone was evaluated on a scale ranging from 0 to 3, with 0 indicating no staining and 3 indicating confluent staining; the maximum score was 9.
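As a small sketch of the grading arithmetic (the function name is ours, for illustration only):

```python
def van_bijsterveld_score(nasal: int, temporal: int, cornea: int) -> int:
    """Sum the grades of the 3 zones (each 0 = no staining to
    3 = confluent staining); the maximum total score is 9."""
    for grade in (nasal, temporal, cornea):
        if not 0 <= grade <= 3:
            raise ValueError("each zone grade must be between 0 and 3")
    return nasal + temporal + cornea
```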

MBI was defined as the duration for which the participants could keep their eyes open before blinking [38]. We measured MBI twice using a stopwatch under a light microscope; MBI was recorded as 30 seconds if it exceeded 30 seconds.

We performed Schirmer test I without topical anesthesia after completing other examinations. Schirmer test strips (Ayumi Pharmaceutical Co) were placed on the outer third of the temporal lower conjunctival fornix for 5 minutes. These strips were removed, and the length (in mm) of the dampened filter paper was recorded [39].

Meibomian gland function was assessed by applying digital pressure onto the lower central eyelid in conjunction with slit-lamp microscopy, according to the standard method [40].

DED Diagnosis

DED and non-DED were diagnosed using the 2016 Asia Dry Eye Society and Tear Film & Ocular Surface Society Dry Eye Workshop II diagnostic criteria [6,11]. The diagnosis was based on the following 2 findings: positive subjective symptoms (paper-based J-OSDI total score ≥13) and decreased TFBUT (≤5.0 seconds).
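Expressed as a simple rule (a sketch of the criteria above; the function name is ours):

```python
def has_ded(josdi_total: float, tfbut_seconds: float) -> bool:
    """Diagnostic rule used in this study: positive subjective
    symptoms (paper-based J-OSDI total >= 13) combined with a
    decreased tear film breakup time (<= 5.0 seconds)."""
    return josdi_total >= 13.0 and tfbut_seconds <= 5.0
```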

Randomization

The participants were randomized by simple random sampling using the lottery method [41]. The total sample size was determined to be 34, as described in the Statistical Analyses section. To assign participants to their respective groups, shuffled cards numbered from 1 to 34 were drawn randomly from an opaque envelope. Those who drew odd and even numbers were assigned to the paper-app and app-paper groups, respectively [42].
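A minimal simulation of this lottery scheme (illustrative only; the actual allocation used physical shuffled cards drawn from an opaque envelope):

```python
import random

def lottery_allocation(n: int = 34, seed: int | None = None) -> list[str]:
    """Shuffle cards numbered 1..n and hand each consecutive
    participant one drawn card: odd numbers -> paper-app group,
    even numbers -> app-paper group."""
    rng = random.Random(seed)
    cards = list(range(1, n + 1))
    rng.shuffle(cards)
    return ["paper-app" if card % 2 else "app-paper" for card in cards]

# Example: reproducible draw for 34 participants
print(lottery_allocation(seed=42))
```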

Statistical Analyses

The sample size was predetermined based on the methodology presented in the Professional Society for Health Economics and Outcomes Research ePRO Good Research Practices Task Force report [21]. The sample size required for a crossover comparison of means from 2 different PRO administration modes is calculated by multiplying the total sample size required for a parallel-group design by a factor of (1–r)/2, where r is an estimate of the expected correlation between the 2 modes of administration [21]. With a power of 80%, a significance level of 5%, a minimal clinically important difference (MCID) of 7.0 points for the J-OSDI total score [43], an SD of 20.0 for the paper-based J-OSDI score [4], and a correlation coefficient of 0.89 between the paper- and app-based J-OSDI [4], the required sample size was calculated as 30 (15 cases per group) [21]. To allow for 10% dropouts because of missing data or withdrawal of consent, we enrolled 34 cases (17 in each group).
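The adjustment can be written as a one-line calculation. The sketch below applies the ISPOR factor (1–r)/2 to a given parallel-group total and inflates for dropouts; it illustrates the multiplier rather than reproducing the authors’ exact parallel-group baseline, which the report does not state.

```python
import math

def crossover_total_n(n_parallel_total: float, r: float,
                      dropout: float = 0.10) -> int:
    """Scale a parallel-group total sample size by the ISPOR
    crossover factor (1 - r) / 2, where r is the expected
    correlation between administration modes, then inflate for
    the anticipated dropout fraction."""
    n_crossover = n_parallel_total * (1 - r) / 2
    return math.ceil(n_crossover / (1 - dropout))

# With r = 0.89 the factor is (1 - 0.89) / 2 = 0.055, so the
# crossover design needs only ~5.5% of the parallel-group total;
# 30 evaluable cases inflated for 10% dropout gives ceil(30 / 0.9) = 34.
print(math.ceil(30 / (1 - 0.10)))  # -> 34
```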

The equivalence margin was defined as ±7.0 points, the MCID of the J-OSDI total score [43]. Equivalence was denoted if the 95% CI of the mean difference between the app- and paper-based J-OSDI total scores fell within the ±7.0 range [44,45].
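A minimal sketch of this check (assuming paired scores per participant; variable names ours): compute the mean paper-minus-app difference, its 95% CI from the t distribution, and compare the CI with the ±7.0 margin.

```python
import numpy as np
from scipy import stats

def equivalence_check(paper: np.ndarray, app: np.ndarray,
                      margin: float = 7.0):
    """Return the mean paired difference, its 95% CI, and whether
    the whole CI lies within +/- margin (the J-OSDI MCID)."""
    diff = paper - app
    mean = diff.mean()
    lo, hi = stats.t.interval(0.95, df=diff.size - 1,
                              loc=mean, scale=stats.sem(diff))
    return mean, (lo, hi), (-margin < lo and hi < margin)
```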

We assessed the internal consistency of the app-based J-OSDI using the Cronbach α coefficient [4]. Cronbach α>.70 was considered acceptable [46]. The intraclass correlation coefficient (ICC) was used to evaluate the agreement of the J-OSDI total score and subscale scores between the app- and paper-based J-OSDI. An ICC value ≥0.70 was considered acceptable [47]. To assess the agreement and correlation between the app- and paper-based J-OSDI, we performed Bland-Altman analysis and Pearson correlation coefficient estimation.
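The reliability and agreement statistics can be sketched directly from their textbook formulas (a minimal illustration, not the STATA code used for the analysis):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach alpha for an (n_respondents, k_items) matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bias (mean difference) and 95% limits of agreement:
    bias +/- 1.96 SD of the paired differences."""
    d = a - b
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```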

To compare the characteristics of the participants between the app-paper and paper-app groups, we conducted an unpaired 2-tailed t test and a chi-squared test for continuous and categorical variables, respectively. All analyses were performed using the STATA software package (version 17.0; StataCorp). Statistical significance was set at P<.05.


Results

Participant Characteristics

We enrolled 34 patients; we excluded 1 patient because of missing data caused by a poor internet connection. Table 1 summarizes the characteristics of the 33 participants. The mean age was 63.6 years, and 32 (97%) participants were female. The number of participants with DED in the paper-app and app-paper groups was 8 (50%) and 10 (59%), respectively. We observed no statistically significant differences in the demographic characteristics or results of ophthalmological examinations between the groups.

Table 1. Participant characteristics.

| Characteristic | Total (n=33) | Paper-app group (n=16) | App-paper group (n=17) | P value^a |
| --- | --- | --- | --- | --- |
| Age (years), mean (SD) | 63.6 (12.9) | 66.2 (10.7) | 61.3 (14.6) | .29 |
| Female, n (%) | 32 (97) | 16 (100) | 16 (94) | .32 |
| DED^b, n (%) | 18 (55) | 8 (50) | 10 (58) | .61 |
| BCVA^c (logMAR), mean (SD) | –0.043 (0.073) | –0.044 (0.072) | –0.043 (0.076) | .98 |
| IOP^d (mm Hg), mean (SD) | 13.8 (3.8) | 13.4 (4.6) | 14.3 (2.8) | .52 |
| Paper-based J-OSDI^e total score (0-100), mean (SD) | 33.6 (24.2) | 29.6 (24.2) | 37.3 (24.3) | .37 |
| App-based J-OSDI total score (0-100), mean (SD) | 31.8 (20.4) | 27.1 (20.1) | 36.3 (20.1) | .20 |
| DEQS^f summary score (0-100), mean (SD) | 34.3 (21.3) | 29.9 (20.5) | 38.5 (21.7) | .25 |
| TFBUT^g (s), mean (SD) | 3.0 (1.7) | 3.4 (2.0) | 2.7 (1.4) | .23 |
| CFS^h (0-9), mean (SD) | 3.5 (2.8) | 3.4 (2.5) | 3.5 (3.0) | .88 |
| MBI^i (s), mean (SD) | 8.3 (4.9) | 8.5 (4.3) | 8.1 (5.5) | .81 |
| Schirmer test I (mm), mean (SD) | 7.4 (8.4) | 9.3 (10.9) | 5.6 (4.5) | .21 |
| MGD^j, n (%) | 0 (0) | 0 (0) | 0 (0) | —^k |

^a P values were estimated using the unpaired 2-tailed t test for continuous variables and the chi-squared test for categorical variables.

^b DED: dry eye disease.

^c BCVA: best-corrected visual acuity.

^d IOP: intraocular pressure.

^e J-OSDI: Japanese version of the Ocular Surface Disease Index.

^f DEQS: Dry Eye-Related Quality-of-Life Score.

^g TFBUT: tear film breakup time.

^h CFS: corneal fluorescein staining.

^i MBI: maximum blink interval.

^j MGD: meibomian gland dysfunction.

^k Not available.

Equivalence Test of App- and Paper-Based J-OSDI Based on the MCID

Table 2 summarizes the J-OSDI scores for each question and the mean differences of the scores between the paper-app and app-paper groups. The mean difference in the J-OSDI total score between the 2 groups was 1.8 (95% CI –1.4 to 5.0). Results of the equivalence test based on an MCID of 7.0 demonstrated that the app- and paper-based J-OSDI total scores were equivalent.

Table 2. Comparison of the J-OSDI^a scores on each question between the paper-app and app-paper groups.

| Item | Paper-app group: paper-based J-OSDI, mean (SD) | Paper-app group: app-based J-OSDI, mean (SD) | App-paper group: app-based J-OSDI, mean (SD) | App-paper group: paper-based J-OSDI, mean (SD) | Paper- vs app-based J-OSDI, mean difference (95% CI) |
| --- | --- | --- | --- | --- | --- |
| J-OSDI total score (0-100) | 29.6 (24.2) | 27.1 (20.1) | 36.3 (20.1) | 37.3 (24.3) | 1.8 (–1.4 to 5.0) |
| Ocular symptoms (0-100) | 26.6 (20.9) | 20.6 (14.1) | 35.0 (19.4) | 38.2 (26.5) | 4.6 (–0.6 to 9.7) |
| 1. Eyes that are sensitive to light? (0-4) | 1.4 (1.1) | 1.1 (1.1) | 1.6 (1.3) | 1.7 (1.4) | 0.2 (–1.2 to 0.6) |
| 2. Eyes that feel gritty? (0-4) | 0.9 (1.2) | 0.7 (0.9) | 1.8 (1.2) | 1.6 (1.3) | 0.0 (–0.2 to 0.3) |
| 3. Painful or sore eyes? (0-4) | 0.6 (1.0) | 0.6 (0.9) | 1.2 (1.2) | 1.3 (1.4) | 0.1 (–0.3 to 0.4) |
| 4. Blurred vision? (0-4) | 1.2 (1.0) | 0.9 (0.8) | 1.2 (0.9) | 1.4 (1.0) | 0.3 (0.0 to 0.4) |
| 5. Poor vision? (0-4) | 1.3 (1.0) | 0.9 (0.9) | 1.2 (0.9) | 1.6 (1.2) | 0.4 (0.1 to 0.7) |
| Vision-related function (0-100) | 23.8 (26.1) | 22.9 (24.7) | 29.0 (17.2) | 24.9 (21.4) | –1.6 (–7.8 to 4.6) |
| 6. Reading? (0-4) | 1.2 (1.3) | 1.0 (1.2) | 1.4 (1.2) | 1.3 (1.2) | 0.1 (–0.4 to 0.4) |
| 7. Driving at night? (0-4) | 1.2 (1.6) | 1.5 (1.6) | 0.9 (1.1) | 0.8 (1.2) | –0.2 (–0.5 to 0.3) |
| 8. Working with a computer or bank machine? (0-4) | 0.9 (1.0) | 0.9 (1.1) | 1.3 (1.3) | 0.9 (1.1) | –0.2 (–0.6 to 0.1) |
| 9. Watching television? (0-4) | 0.9 (1.1) | 0.8 (1.0) | 0.8 (0.7) | 1.1 (1.1) | 0.2 (–0.1 to 0.3) |
| Environmental triggers (0-100) | 42.7 (37.1) | 43.2 (33.5) | 45.8 (33.6) | 49.3 (37.0) | 1.5 (–2.6 to 5.7) |
| 10. Windy conditions? (0-4) | 1.7 (1.7) | 1.8 (1.6) | 1.7 (1.6) | 2.0 (1.7) | 0.1 (–0.2 to 0.3) |
| 11. Places or areas with low humidity (very dry)? (0-4) | 1.9 (1.6) | 2.0 (1.5) | 2.2 (1.3) | 2.2 (1.5) | –0.1 (–0.2 to 0.2) |
| 12. Areas that are air conditioned? (0-4) | 1.7 (1.5) | 1.6 (1.4) | 1.8 (1.3) | 1.8 (1.4) | 0.1 (–0.2 to 0.4) |

^a J-OSDI: Japanese version of the Ocular Surface Disease Index.

Internal Consistency and Agreement of the App-Based J-OSDI

Table S1 in Multimedia Appendix 1 summarizes the internal consistency and agreement of the app-based J-OSDI total score and subscale scores based on the Cronbach α coefficients and ICC values. The J-OSDI total score (.958) and the ocular symptoms (.873), vision-related function (.819), and environmental triggers (.971) subscales had Cronbach α coefficients >.70, indicating acceptable internal consistency. The ICC values for the J-OSDI total score, ocular symptoms subscale, vision-related function subscale, and environmental triggers subscale were 0.919 (95% CI 0.842-0.959), 0.775 (95% CI 0.592-0.882), 0.693 (95% CI 0.463-0.836), and 0.944 (95% CI 0.890-0.972), respectively. All ICC values, except that of the vision-related function subscale, were ≥0.70.

Correlation and Agreement Between the App- and Paper-Based J-OSDI

Figure 3 depicts the correlation and agreement between the paper- and app-based J-OSDI. We observed a significant positive correlation between the paper- and app-based J-OSDI for the total score (r=0.932, P<.001) and for each subscale (ocular symptoms: r=0.806, P<.001; vision-related function: r=0.697, P<.001; and environmental triggers: r=0.949, P<.001). The Bland-Altman analysis of agreement between the paper- and app-based J-OSDI demonstrated differences (biases) of 1.77 (95% limits of agreement [LOA] –15.9 to 19.4) for the J-OSDI total score (Figure 3) and 4.55 (95% LOA –23.8 to 32.8), –0.64 (95% LOA –35.9 to 32.6), and 1.52 (95% LOA –21.4 to 24.4) for the ocular symptoms, vision-related function, and environmental triggers subscales, respectively.

Figure 3. Correlation and agreement between the app- and paper-based J-OSDI. The x-axes of the Bland-Altman plots indicate the average scores of the 2 questionnaires; the y-axes indicate the differences between the scores (paper- and app-based J-OSDI). A, B, C, and D show the Pearson correlation coefficient between the app- and paper-based J-OSDI for total score, ocular symptoms subscale, vision-related function subscale, and the environmental triggers subscale, respectively. E, F, G, and H show Bland-Altman analyses for agreement between the paper- and app-based J-OSDI for total score, the ocular symptoms subscale, the vision-related function subscale, and the environmental triggers subscale, respectively. J-OSDI: Japanese version of the Ocular Surface Disease Index.

Discussion

Principal Findings

In this study, we compared the performance of the paper- and app-based J-OSDI using data collected through a DED mHealth app (DryEyeRhythm) to evaluate their equivalence as subjective symptom questionnaires. The app-based J-OSDI total score was comparable to its paper-based counterpart. The recent COVID-19 pandemic limited health care visits worldwide; consequently, efforts to improve telehealth and develop noncontact medical devices are escalating [48]. Evaluating subjective symptoms through an app-based questionnaire may facilitate the implementation of telehealth in DED diagnosis, reducing the reliance on in-person consultations and simplifying follow-up for susceptible populations. As a novel mHealth app in Japan, DryEyeRhythm may offer the advantages of early DED diagnosis and effective disease management.

The app- and paper-based versions of the J-OSDI yielded comparable results, suggesting satisfactory performance of the app-based J-OSDI as a substitute for its paper-based counterpart. Subjective DED symptoms can be highly variable and nuanced [8,15], and unstructured verbal inquiry into patient experiences can yield unreliable results lacking standardization and quantification. Hence, researchers recommend vetted, disease-specific questionnaires for assessing subjective symptoms as part of the diagnostic process [12]. The J-OSDI total score is often used in diagnosing DED according to the current DED diagnosis guidelines [3,49]. The mean difference in the J-OSDI total score between the 2 platforms was 1.8. A similar study that compared a web-based Chinese version of the OSDI with its paper-based counterpart reported a lower mean difference of 0.24 [50]. Notably, the mean age of the participants in the Chinese study was relatively low (27.9 years), and the web-based OSDI could display all 12 questions on a single page. In our study, the mean age was higher (63.6 years), which may imply that our participants were less familiar with modern devices. In addition, the user interface of DryEyeRhythm limited the visibility of the questionnaire because of the screen size, and questions were presented one at a time. Such differences in interaction between paper-based and digital platforms may produce discrepancies in the data collected through traditional PROs and ePROs [21], which may explain the relatively higher mean difference between the app- and paper-based J-OSDI results. The J-OSDI total score changes in increments of 2.5 points with each single-point change in an item response [3]; the mean difference of 1.8 between the 2 platforms was therefore smaller than a single-point change in any one OSDI item. Although this mean difference is higher than that reported previously between electronic and paper versions of the OSDI, the clinical significance of the score gap is considered minimal in practice [3]. Thus, the app- and paper-based J-OSDI total scores yielded comparable results. This finding supports the candidacy of the app-based J-OSDI as a suitable substitute for the paper-based J-OSDI for evaluating subjective DED symptoms.

The app-based J-OSDI demonstrated satisfactory internal consistency, and the comparison of the 2 J-OSDI platforms indicated high agreement and a positive correlation. Notably, the Cronbach α obtained in this study was higher than that reported in a previous study on the reliability of the paper-based J-OSDI [3]. Similar results were observed for the ICC, except for the vision-related function subscale [3]. Studies on OSDI adaptations into various languages other than Japanese did not demonstrate a deficit in internal consistency or agreement for the vision-related function subscale [51,52]. Interestingly, although the Cronbach α of the vision-related function subscale was lower than that of the other categories in the Japanese validation study, excluding item 7 (nighttime driving difficulty) noticeably improved the internal consistency [3]. We made a similar observation: the ICC increased beyond the 0.70 threshold for acceptable agreement after item 7 was removed from the analysis. This observation could be attributed to the study demographics, which consisted of older female participants who tend to drive less frequently at night, and to the highly urbanized research site, which allowed easy access to public transportation [3,53]. The Bland-Altman plot provided no evidence of systematic error that could challenge the agreement between the 2 platforms. Therefore, the ePRO data collected through the app-based J-OSDI were comparable to those from the traditional paper-based PRO, signifying the potential of the app-based OSDI in mHealth for identifying undiagnosed patients with DED and collecting subjective symptoms within users’ daily lives in a nonintrusive manner.

Limitations

This study had several limitations. First, selection bias may have arisen from the single-center design at a university hospital in Tokyo, Japan. In addition, most participants were older women. This bias in the target population may have affected the results for J-OSDI item 7. Conversely, the skew toward older women might have minimally affected our remaining results because DED has a higher prevalence in older women [1]. Furthermore, the older population may not be skilled in using modern digital devices [54]. However, with growing resources and the normalization of smartphone use in daily life, older adults are expected to become more skilled in the use of digital devices in the near future [15,48]. Second, a carryover effect may have influenced our results because the participants may not have had a sufficient washout period before transitioning between the 2 platforms [55]. The interval between responding to the app-based J-OSDI, the DEQS, and the paper-based J-OSDI questionnaires, or in reverse order, was approximately 10 minutes. However, the participants responded to a non-OSDI questionnaire (the DEQS) during the interim period, which could have reduced the carryover effect. Third, participant factors, including socioeconomic status, educational level, and cultural background, were not collected in this study; researchers should collect and analyze their effects on outcomes in the future. Fourth, we did not compare the efficiency, effectiveness, or usability of the paper- and app-based J-OSDI. Future studies should evaluate the response time for the J-OSDI questionnaire, its effectiveness in DED management, and its usability to establish the performance of the app-based J-OSDI. Fifth, the equivalence assessment between the app- and paper-based J-OSDI was conducted with a relatively small participant pool, as a crossover design can often be conducted effectively with a smaller sample size. However, the validity and reliability of the app-based J-OSDI questionnaire should be evaluated comprehensively with a larger sample, and future researchers should attempt to validate these results with more participants.

Conclusions

In conclusion, the app-based J-OSDI and the paper-based J-OSDI were comparable in obtaining data on subjective DED symptoms. Implementing the app-based J-OSDI as a tool for the mHealth management of DED may have implications for the early diagnosis of DED and longitudinal PRO monitoring through an unintrusive collection of DED-related data in daily life.

Acknowledgments

The authors thank Ohako Inc and Medical Logue Inc for developing the DryEyeRhythm app, as well as Tina Shiang, Yusuke Yoshimura, Yoshimune Hiratsuka, Satoshi Hori, Miki Uchino, and Kazuo Tsubota for the initial development of the app. This research was supported by JST COI (grant JPMJCER02WD02; TI), JSPS KAKENHI grants (20KK0207 [TI], 20K23168 [AM-I], 21K17311 [AM-I], 21K20998 [AE], and 22K16983 [AE]), the Kondou Kinen Medical Foundation, the Medical Research Encouragement Prize 2020 (TI), the Charitable Trust Fund for Ophthalmic Research in Commemoration of Santen Pharmaceutical’s Founder 2020 (TI), the Nishikawa Medical Foundation, the Medical Research Encouragement Prize 2020 (TI), the OTC Self-Medication Promotion Foundation (TI and YO), and Takeda Science Foundation 2022 (TI). The sponsors had no role in the design or performance of the study, data collection and management, analysis and interpretation of the data, preparation, review, or approval of the manuscript, or in the decision to submit the manuscript for publication.

Data Availability

All data generated or analyzed during this study are included in this published article and its supplementary information files.

Conflicts of Interest

TI and YO are the owners of InnoJin, which developed DryEyeRhythm. TI reports receiving grants from Johnson and Johnson Vision Care, Seed, Novartis Pharma, and Kowa outside the submitted work, as well as personal fees from Santen Pharmaceutical and InnoJin. The remaining authors declare no competing interests.

Authors' Contributions

KN and TI had complete access to all data in this study and take responsibility for the integrity of the data and the accuracy of the data analysis. KN, YO, and TI developed the study concept and design. KN, YO, YA, K Fujio, TH, AM-I, K Fujimoto, AE, SH, MM, MO, HK, YM, and TI contributed to the acquisition, analysis, or interpretation of data. KN, JS, AY, and TI performed the drafting of the manuscript. KN, JS, AY, AM, HK, and TI contributed to the critical revision of the manuscript for important intellectual content. AM-I, AE, and TI obtained funding. KN, AM-I, and TI performed statistical analysis. AM, HK, and TI contributed administrative, technical, or material support. AM, HK, and TI supervised the project.

Multimedia Appendix 1

Supplementary Table 1. Comparison of the reliability of the app- and paper-based versions of the J-OSDI.

PDF File (Adobe PDF File), 77 KB

Multimedia Appendix 2

CONSORT-eHEALTH checklist (V 1.6.1).

PDF File (Adobe PDF File), 1477 KB

  1. Stapleton F, Alves M, Bunya VY, Jalbert I, Lekhanont K, Malet F, et al. TFOS DEWS II epidemiology report. Ocul Surf. 2017;15(3):334-365. [FREE Full text] [CrossRef] [Medline]
  2. Inomata T, Shiang T, Iwagami M, Sakemi F, Fujimoto K, Okumura Y, et al. Changes in distribution of dry eye disease by the new 2016 diagnostic criteria from the Asia Dry Eye Society. Sci Rep. 2018;8(1):1918. [FREE Full text] [CrossRef] [Medline]
  3. Midorikawa-Inomata A, Inomata T, Nojiri S, Nakamura M, Iwagami M, Fujimoto K, et al. Reliability and validity of the Japanese version of the Ocular Surface Disease Index for dry eye disease. BMJ Open. 2019;9(11):e033940. [FREE Full text] [CrossRef] [Medline]
  4. Okumura Y, Inomata T, Midorikawa-Inomata A, Sung J, Fujio K, Akasaki Y, et al. DryEyeRhythm: a reliable and valid smartphone application for the diagnosis assistance of dry eye. Ocul Surf. 2022;25:19-25. [FREE Full text] [CrossRef] [Medline]
  5. Rhee J, Chan TCY, Chow SSW, Di Zazzo A, Inomata T, Shih KC, et al. A systematic review on the association between tear film metrics and higher order aberrations in dry eye disease and treatment. Ophthalmol Ther. 2022;11(1):35-67. [FREE Full text] [CrossRef] [Medline]
  6. Craig JP, Nichols KK, Akpek EK, Caffery B, Dua HS, Joo CK, et al. TFOS DEWS II definition and classification report. Ocul Surf. 2017;15(3):276-283. [FREE Full text] [CrossRef] [Medline]
  7. Jones L, Downie LE, Korb D, Benitez-Del-Castillo JM, Dana R, Deng SX, et al. TFOS DEWS II management and therapy report. Ocul Surf. 2017;15(3):575-628. [FREE Full text] [CrossRef] [Medline]
  8. Inomata T, Iwagami M, Nakamura M, Shiang T, Yoshimura Y, Fujimoto K, et al. Characteristics and risk factors associated with diagnosed and undiagnosed symptomatic dry eye using a smartphone application. JAMA Ophthalmol. 2020;138(1):58-68. [FREE Full text] [CrossRef] [Medline]
  9. Lu Y, Wu Y, Zhou X, Inomata T, Gu L, Jin X, et al. Editorial: advances in the pathophysiology, diagnosis, and treatment of dry eye disease. Front Med (Lausanne). 2022;9:925876. [FREE Full text] [CrossRef] [Medline]
  10. Inomata T, Sung J. Changing medical paradigm on inflammatory eye disease: technology and its implications for P4 medicine. J Clin Med. 2022;11(11):2964. [FREE Full text] [CrossRef] [Medline]
  11. Tsubota K, Yokoi N, Shimazaki J, Watanabe H, Dogru M, Yamada M, et al. Asia Dry Eye Society. New perspectives on dry eye definition and diagnosis: a consensus report by the Asia Dry Eye Society. Ocul Surf. 2017;15(1):65-76. [FREE Full text] [CrossRef] [Medline]
  12. Wolffsohn JS, Arita R, Chalmers R, Djalilian A, Dogru M, Dumbleton K, et al. TFOS DEWS II diagnostic methodology report. Ocul Surf. 2017;15(3):539-574. [FREE Full text] [CrossRef] [Medline]
  13. Okumura Y, Inomata T, Iwata N, Sung J, Fujimoto K, Fujio K, et al. A review of dry eye questionnaires: measuring patient-reported outcomes and health-related quality of life. Diagnostics (Basel). 2020;10(8):559. [FREE Full text] [CrossRef] [Medline]
  14. Inomata T, Nakamura M, Iwagami M, Shiang T, Yoshimura Y, Fujimoto K, et al. Risk factors for severe dry eye disease: crowdsourced research using DryEyeRhythm. Ophthalmology. 2019;126(5):766-768. [CrossRef] [Medline]
  15. Inomata T, Nakamura M, Sung J, Midorikawa-Inomata A, Iwagami M, Fujio K, et al. Smartphone-based digital phenotyping for dry eye toward P4 medicine: a crowdsourced cross-sectional study. NPJ Digit Med. 2021;4(1):171. [FREE Full text] [CrossRef] [Medline]
  16. mHealth: New horizons for health through mobile technologies: second global survey on eHealth. World Health Organization. 2011. URL: https://apps.who.int/iris/handle/10665/44607 [accessed 2019-12-06]
  17. Tsiakiri A, Koutzmpi V, Megagianni S, Toumaian M, Geronikola N, Despoti A, et al. Remote neuropsychological evaluation of older adults. Appl Neuropsychol Adult. 2022:1-8. [CrossRef] [Medline]
  18. Bonini N, Vitolo M, Imberti JF, Proietti M, Romiti GF, Boriani G, et al. Mobile health technology in atrial fibrillation. Expert Rev Med Devices. 2022;19(4):327-340. [FREE Full text] [CrossRef] [Medline]
  19. Xu H, Long H. The effect of smartphone app-based interventions for patients with hypertension: systematic review and meta-analysis. JMIR mHealth uHealth. 2020;8(10):e21759. [FREE Full text] [CrossRef] [Medline]
  20. Gray CS, Gill A, Khan AI, Hans PK, Kuluski K, Cott C. The electronic patient reported outcome tool: testing usability and feasibility of a mobile app and portal to support care for patients with complex chronic disease and disability in primary care settings. JMIR mHealth uHealth. 2016;4(2):e58. [FREE Full text] [CrossRef] [Medline]
  21. Coons SJ, Gwaltney CJ, Hays RD, Lundy JJ, Sloan JA, Revicki DA, et al. ISPOR ePRO Task Force. Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO good research practices task force report. Value Health. 2009;12(4):419-429. [FREE Full text] [CrossRef] [Medline]
  22. Inomata T, Nakamura M, Iwagami M, Sung J, Nakamura M, Ebihara N, et al. Individual characteristics and associated factors of hay fever: a large-scale mHealth study using AllerSearch. Allergol Int. 2022;71(3):325-334. [FREE Full text] [CrossRef] [Medline]
  23. Akasaki Y, Inomata T, Sung J, Okumura Y, Fujio K, Miura M, et al. Reliability and validity of electronic patient-reported outcomes using the smartphone app AllerSearch for hay fever: prospective observational study. JMIR Form Res. 2022;6(8):e38475. [FREE Full text] [CrossRef] [Medline]
  24. Inomata T, Nakamura M, Iwagami M, Sung J, Nakamura M, Ebihara N, et al. Symptom-based stratification for hay fever: a crowdsourced study using the smartphone application AllerSearch. Allergy. 2021;76(12):3820-3824. [CrossRef] [Medline]
  25. Graf J, Sickenberger N, Brusniak K, Matthies LM, Deutsch TM, Simoes E, et al. Implementation of an electronic patient-reported outcome app for health-related quality of life in breast cancer patients: evaluation and acceptability analysis in a two-center prospective trial. J Med Internet Res. 2022;24(2):e16128. [FREE Full text] [CrossRef] [Medline]
  26. Fujio K, Inomata T, Fujisawa K, Sung J, Nakamura M, Iwagami M, et al. Patient and public involvement in mobile health-based research for hay fever: a qualitative study of patient and public involvement implementation process. Res Involv Engagem. 2022;8(1):45. [FREE Full text] [CrossRef] [Medline]
  27. Eguchi A, Inomata T, Nakamura M, Nagino K, Iwagami M, Sung J, et al. Heterogeneity of eye drop use among symptomatic dry eye individuals in Japan: large-scale crowdsourced research using DryEyeRhythm application. Jpn J Ophthalmol. 2021;65(2):271-281. [CrossRef] [Medline]
  28. Inomata T, Nakamura M, Iwagami M, Midorikawa-Inomata A, Sung J, Fujimoto K, et al. Stratification of individual symptoms of contact lens-associated dry eye using the iPhone app DryEyeRhythm: crowdsourced cross-sectional study. J Med Internet Res. 2020;22(6):e18996. [FREE Full text] [CrossRef] [Medline]
  29. Inomata T, Iwagami M, Nakamura M, Shiang T, Fujimoto K, Okumura Y, et al. Association between dry eye and depressive symptoms: large-scale crowdsourced research using the DryEyeRhythm iPhone application. Ocul Surf. 2020;18(2):312-319. [FREE Full text] [CrossRef] [Medline]
  30. Kasetsuwan N, Suwan-Apichon O, Lekhanont K, Chuckpaiwong V, Reinprayoon U, Chantra S, et al. Assessing the risk factors for diagnosed symptomatic dry eye using a smartphone app: cross-sectional study. JMIR mHealth uHealth. 2022;10(6):e31011. [FREE Full text] [CrossRef] [Medline]
  31. Wolffsohn JS, Craig JP, Vidal-Rohr M, Huarte ST, Kit LA, Wang M. Blink test enhances ability to screen for dry eye disease. Cont Lens Anterior Eye. 2018;41(5):421-425. [CrossRef] [Medline]
  32. Nagino K, Sung J, Midorikawa-Inomata A, Eguchi A, Fujimoto K, Okumura Y, et al. Clinical utility of smartphone applications in ophthalmology: a systematic review. Ophthalmol Sci. 2023:100342. [FREE Full text] [CrossRef]
  33. Inomata T, Sung J, Nakamura M, Iwagami M, Okumura Y, Fujio K, et al. Cross-hierarchical integrative research network for heterogenetic eye disease toward P4 medicine: a narrative review. Juntendo Med J. 2021;67(6):519-529. [FREE Full text] [CrossRef]
  34. Inomata T, Sung J, Nakamura M, Iwagami M, Okumura Y, Iwata N, et al. Using medical big data to develop personalized medicine for dry eye disease. Cornea. 2020;39(Suppl 1):S39-S46. [CrossRef] [Medline]
  35. Inomata T, Sung J, Yee A, Murakami A, Okumura Y, Nagino K, et al. P4 medicine for heterogeneity of dry eye: a mobile health-based digital cohort study. Juntendo Med J. 2023;69(1):2-13. [FREE Full text] [CrossRef]
  36. Sakane Y, Yamaguchi M, Yokoi N, Uchino M, Dogru M, Oishi T, et al. Development and validation of the dry eye-related quality-of-life score questionnaire. JAMA Ophthalmol. 2013;131(10):1331-1338. [FREE Full text] [CrossRef] [Medline]
  37. van Bijsterveld OP. Diagnostic tests in the sicca syndrome. Arch Ophthalmol. 1969;82(1):10-14. [CrossRef] [Medline]
  38. Inomata T, Iwagami M, Hiratsuka Y, Fujimoto K, Okumura Y, Shiang T, et al. Maximum blink interval is associated with tear film breakup time: a new simple, screening test for dry eye disease. Sci Rep. 2018;8(1):13443. [FREE Full text] [CrossRef] [Medline]
  39. Schirmer O. Studien zur physiologie und pathologie der tränenabsonderung und tränenabfuhr. Graefes Arhiv für Ophthalmologie. 1903;56(2):197-291. [CrossRef]
  40. Amano S. Meibomian gland dysfunction: recent progress worldwide and in Japan. Invest Ophthalmol Vis Sci. 2018;59(14):DES87-DES93. [FREE Full text] [CrossRef] [Medline]
  41. Elfil M, Negida A. Sampling methods in clinical research; an educational review. Emerg (Tehran). 2017;5(1):e52. [FREE Full text] [Medline]
  42. Kang M, Ragan BG, Park JH. Issues in outcomes research: an overview of randomization techniques for clinical trials. J Athl Train. 2008;43(2):215-221. [FREE Full text] [CrossRef] [Medline]
  43. Miller KL, Walt JG, Mink DR, Satram-Hoang S, Wilson SE, Perry HD, et al. Minimal clinically important difference for the Ocular Surface Disease Index. Arch Ophthalmol. 2010;128(1):94-101. [FREE Full text] [CrossRef] [Medline]
  44. Uhrenholt L, Christensen R, Dreyer L, Schlemmer A, Hauge EM, Krogh NS, et al. Using a novel smartphone application for capturing of patient-reported outcome measures among patients with inflammatory arthritis: a randomized, crossover, agreement study. Scand J Rheumatol. 2022;51(1):25-33. [CrossRef] [Medline]
  45. Piaggio G, Elbourne DR, Altman DG, Pocock SJ, Evans SJW, CONSORT Group. Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. JAMA. 2006;295(10):1152-1160. [CrossRef] [Medline]
  46. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16(3):297-334. [CrossRef]
  47. Deyo RA, Diehr P, Patrick DL. Reproducibility and responsiveness of health status measures. Statistics and strategies for evaluation. Control Clin Trials. 1991;12(4 Suppl):142S-158S. [CrossRef] [Medline]
  48. Byambasuren O, Beller E, Hoffmann T, Glasziou P. Barriers to and facilitators of the prescription of mHealth apps in Australian general practice: qualitative study. JMIR mHealth uHealth. 2020;8(7):e17447. [FREE Full text] [CrossRef] [Medline]
  49. Schiffman RM, Christianson MD, Jacobsen G, Hirsch JD, Reis BL. Reliability and validity of the Ocular Surface Disease Index. Arch Ophthalmol. 2000;118(5):615-621. [FREE Full text] [CrossRef] [Medline]
  50. Zhang XM, Yang LT, Zhang Q, Fan QX, Zhang C, You Y, et al. Reliability of Chinese web-based Ocular Surface Disease Index Questionnaire in dry eye patients: a randomized, crossover study. Int J Ophthalmol. 2021;14(6):834-843. [FREE Full text] [CrossRef] [Medline]
  51. Bakkar MM, El-Sharif AK, Al Qadire M. Validation of the Arabic version of the Ocular Surface Disease Index Questionnaire. Int J Ophthalmol. 2021;14(10):1595-1601. [FREE Full text] [CrossRef] [Medline]
  52. Traipe L, Gauro F, Goya MC, Cartes C, López D, Salinas D, et al. Validation of the ocular surface disease index questionnaire for chilean patients. Rev Med Chil. 2020;148(2):187-195. [FREE Full text] [CrossRef] [Medline]
  53. Hajek A, König HH. Frequency and correlates of driving status among the oldest old: results from a large, representative sample. Aging Clin Exp Res. 2022;34(12):3083-3088. [FREE Full text] [CrossRef] [Medline]
  54. Dong Q, Liu T, Liu R, Yang H, Liu C. Effectiveness of digital health literacy interventions in older adults: single-arm meta-analysis. J Med Internet Res. 2023;25:e48166. [FREE Full text] [CrossRef] [Medline]
  55. Belisario JSM, Jamsek J, Huckvale K, O'Donoghue J, Morrison CP, Car J. Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods. Cochrane Database Syst Rev. 2015;2015(7):MR000042. [FREE Full text] [CrossRef] [Medline]


Abbreviations

CFS: corneal fluorescein staining
DED: dry eye disease
DEQS: Dry Eye-Related Quality-of-Life Score
ePRO: electronic patient-reported outcome
ICC: intraclass correlation coefficient
J-OSDI: Japanese version of the Ocular Surface Disease Index
LOA: limits of agreement
MBI: maximum blink interval
MCID: minimal clinically important difference
mHealth: mobile health
OSDI: Ocular Surface Disease Index
PRO: patient-reported outcome
TFBUT: tear film breakup time


Edited by T Leung; submitted 12.09.22; peer-reviewed by H Ayatollahi, M Bin Mohd Hanim, Z Huang, M Aksoy, Z Li; comments to author 02.03.23; revised version received 22.03.23; accepted 12.07.23; published 03.08.23.

Copyright

©Ken Nagino, Yuichi Okumura, Yasutsugu Akasaki, Kenta Fujio, Tianxiang Huang, Jaemyoung Sung, Akie Midorikawa-Inomata, Keiichi Fujimoto, Atsuko Eguchi, Shokirova Hurramhon, Alan Yee, Maria Miura, Mizu Ohno, Kunihiko Hirosawa, Yuki Morooka, Akira Murakami, Hiroyuki Kobayashi, Takenori Inomata. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 03.08.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.