Original Paper
Abstract
Background: Cognitive impairment (CI) is one of the most prevalent symptoms of multiple sclerosis (MS). However, it is difficult to include cognitive assessment as part of MS standard care because comprehensive neuropsychological examinations are time-consuming and resource-intensive.
Objective: To improve access to CI assessment, we evaluated the feasibility and potential assessment sensitivity of a tablet-based cognitive battery in patients with MS.
Methods: In total, 53 participants with MS (24 [45%] with CI and 29 [55%] without CI) and 24 non-MS participants were assessed with a tablet-based cognitive battery (Adaptive Cognitive Evaluation [ACE]) and standard cognitive measures, including the Symbol Digit Modalities Test (SDMT) and the Paced Auditory Serial Addition Test (PASAT). Associations between performance in ACE and the SDMT/PASAT were explored, with group comparisons to evaluate whether ACE modules can capture group-level differences.
Results: Correlations between performance in ACE and the SDMT (R=–0.57, P<.001), as well as PASAT (R=–0.39, P=.01), were observed. Compared to non-MS and non-CI MS groups, the CI MS group showed a slower reaction time (CI MS vs non-MS: P<.001; CI MS vs non-CI MS: P=.004) and a higher attention cost (CI MS vs non-MS: P=.02; CI MS vs non-CI MS: P<.001).
Conclusions: These results provide preliminary evidence that ACE, a tablet-based cognitive assessment battery, offers modules that could potentially serve as a digital cognitive assessment for people with MS.
Trial Registration: ClinicalTrials.gov NCT03569618; https://clinicaltrials.gov/ct2/show/NCT03569618
doi:10.2196/25748
Introduction
Background
Multiple sclerosis (MS) is a chronic inflammatory and neurodegenerative disorder, and it is the leading cause of major disability in young adults. Cognitive impairment (CI) occurs in 30%-70% of patients with MS [
, ], even in the absence of physical impairment [ , , ]. CI is one of the most debilitating manifestations of MS and can have a profound influence on a patient’s personal independence and quality of life, interfering with social functioning and employment. Since 2014, CI assessment has been one of the measure specifications of the American Academy of Neurology’s MS Quality Measurement Set [ ]. Ideally, patients with MS should undergo a complete cognitive assessment and routinely repeat the examination to detect cognitive changes over time and to start timely treatment, if needed. However, to date, cognitive assessment in MS relies on a comprehensive neuropsychological examination, which is time-consuming and extensive, making it difficult to include cognitive assessment and monitoring as part of MS standard care.
Cognitive Assessment with Digital Tools
Integrating digital tools into clinical settings can reduce the time and cost associated with cognitive examination and further allows repeated assessments, which provide more precise monitoring of cognitive performance and longitudinal changes. Another feature of digital tools is the capability for remote self-administration, which can relieve the burden of traveling to the clinic for the many patients with deficits in mobility or cognition. Moreover, with remote administration features, data collection can be performed in a nonclinical, real-life setting, which allows sampling of cognitive performance that more closely reflects real-world cognitive function [
]. The self-administered feature also reduces common stressors for patients who become anxious during structured testing in clinical settings.
With advances in technology, remote, computerized platforms for cognitive assessment and treatment, with personalization features such as adaptive staircase algorithms for populations with cognitive deficits [
- ], have been developed. In Alzheimer’s disease–related dementias, digital cognitive assessment tools have shown reliability in measuring longitudinal cognitive changes in individuals with no CI, mild CI, and dementia [ , ] and have exhibited cross-sectional sensitivity to cerebrospinal fluid amyloid-β levels [ ]. In schizophrenia, digital assessments have also shown effectiveness in identifying deficits across different cognitive domains [ , ]. Moreover, tablet-based cognitive assessment has been validated to differentiate cognitive control ability between children with and without 16p11.2 deletion, a genetic variation implicated in attention deficit/hyperactivity disorder and autism [ ]. Although the development and validation of digital cognitive assessment tools have been growing, investigations of remote, digital CI assessments in MS have been scarce [ - ]. There is therefore a need to further explore digital cognitive evaluation, to improve access to CI assessment and thereby reduce the impact of CI on patients’ lives.
Aims and Overview of the Study
The goal of this study was to evaluate the feasibility and potential assessment sensitivity of a tablet-based cognitive assessment battery in patients with MS, focusing on the most commonly affected cognitive domains in MS: processing speed, attention, executive function, and memory [
, ]. To accomplish this, a tablet-based cognitive assessment battery (Adaptive Cognitive Evaluation [ACE]; see the Methods section) that measures different aspects of higher-order cognitive function (eg, attention, working memory, speed of information processing, and executive function) [ ] was tested in 53 participants with MS and 24 participants without MS. ACE was developed by Neuroscape at the University of California, San Francisco (UCSF) [ ]. It has a user-friendly interface as well as adaptive algorithms, which modulate the challenge level of a task on a trial-by-trial basis based on individual performance. In this study, 3 modules (Boxed, Sustained Attention ACE Task [SAAT], and Spatial Span) assessing different aspects of cognitive control ability and 1 module (Basic Reaction Time [BRT]) measuring basic response speed were included (see the Methods section) for a preliminary examination of the construct validity of the ACE battery. The Symbol Digit Modalities Test (SDMT), considered the most sensitive measure for the evaluation of cognitive involvement and information processing speed in the early MS course [ , ], was also administered. Given that information processing speed has been shown to account for impairments in high-level cognitive functions in MS [ - ], the relationship between performance in the SDMT and ACE modules was investigated. To further delineate whether ACE modules can differentiate levels of cognitive function, performance differences among participants with MS with and without CI, as well as participants without MS, were examined.
We hypothesized that there would be a correlation between the SDMT score and ACE performance, in accordance with the relative consequence theory of information processing speed [
- ], in which impaired processing speed is considered the key deficit underlying CI in MS [ - ]. In addition, we expected that, as a tool for cognitive assessment in MS, ACE would reveal group-level differences between participants with MS with and without CI, as well as participants without MS.
Methods
Participants
In total, 53 adults with clinically definite MS [
], mean age 51.8 (1.7) years, were recruited from the University of California, San Francisco Multiple Sclerosis and Neuroinflammation Center between April 2018 and January 2019 with the following inclusion criteria: internet connection available at home or in the work environment and no relapses or steroid use in the past month. Patients with severe visual, cognitive, or motor impairment that would preclude the use of a tablet-based tool were excluded. A group of 24 adults without MS (non-MS), mean age 46.0 (3.7) years, with no chronic autoimmune diseases was also recruited from UCSF staff, willing family members of patients in the clinic, and other eligible and willing volunteers.
Standard Approval, Registration, and Patient Consent
All procedures performed in the study involving human participants were approved by the Committee for Human Research at UCSF. Written informed consent was obtained from each participant. The trial is registered with ClinicalTrials.gov (NCT03569618).
Study Design
To evaluate ACE, both ACE and the SDMT were administered to all participants, including 53 adults with MS and 24 adults without MS (non-MS). All participants with MS were recruited as part of studies to determine the feasibility [
] and preliminary efficacy [ ] of a digital cognitive treatment, as previously described [ ]. This study contains data from the same larger trial (ClinicalTrials.gov NCT03569618) as a published study investigating another tablet-based assessment (ie, EVO Monitor) [ ]; the data analyzed here have not been previously analyzed or published. The analysis of this study was based on baseline performance data (ie, before any cognitive intervention) of our feasibility [ ] and efficacy [ ] trials, where participants underwent cognitive testing, including ACE and the SDMT. The 2-hour baseline session began with standard measures (the SDMT, the Paced Auditory Serial Addition Test [PASAT], the California Verbal Learning Test Second Edition, and the Brief Visuospatial Memory Test Revised), followed by digital cognitive assessment with ACE (BRT, Boxed, SAAT, Spatial Span) and EVO Monitor (data presented in [ ]). Standard measures were administered by a study coordinator. Digital tool assessment was self-guided; however, a study coordinator sat in with the participants to answer questions and clarify aspects of the directions, if needed. The baseline session did not include any predetermined break, but participants were informed at the beginning of the visit that they could take a break at any time, if needed. Task order was predetermined (as described above) and remained consistent throughout the whole study. Only the SDMT and PASAT were included in the analysis of this study, given that they are the most widely used standard measures for people with MS and the cognitive domains they evaluate are closest to those that ACE is designed to assess (ie, attention, working memory, speed of information processing, and executive function).
Cognitive Measures
Standard Measures: SDMT and PASAT
SDMT is a widely used measure of selective attention and information processing speed in MS [
, ], which requires the participant to substitute geometric symbols for numbers while scanning a response key. The participants were presented with a page headed by a key that pairs 9 symbols with the single digits 1-9. Rows below showed only symbols, and the task was to write the correct number in the spaces below based on the key row. After the first 10 items were completed with guidance, the number of correct responses made within 90 seconds was counted as the SDMT score.
Since the PASAT has also been used extensively to test information processing speed, attention, and working memory in MS [
], we included it as a standard measure for participants with MS recruited in this study. The participants were instructed to listen to numbers presented every 3 seconds and to add each number they heard to the number presented immediately before it (rather than keeping a running total). The PASAT score was defined as the total number of correct answers out of 60 possible answers.
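The scoring rule just described lends itself to a compact illustration. The sketch below is not part of the study's software; the digit list, responses, and function name are hypothetical, and a real administration presents 61 digits (60 possible sums).

```python
def score_pasat(digits, responses):
    """Count correct sums: each response should equal the current digit plus
    the digit presented immediately before it (not a running total)."""
    correct = 0
    for i in range(1, len(digits)):
        expected = digits[i] + digits[i - 1]
        response = responses[i - 1]          # response given after the i-th digit
        if response == expected:
            correct += 1
    return correct

digits = [3, 5, 2, 8, 1]                     # hypothetical short run
responses = [8, 7, 10, None]                 # None marks a missed answer
print(score_pasat(digits, responses))        # -> 3 correct of 4 possible
```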
Digital Cognitive Assessment Battery: ACE
Tasks within ACE (
) followed a similar schematic: across modules, the probe or target (as specified in the individual module descriptions below) was displayed either until a response was made or until the maximum reaction time (RT) limit was reached. After each trial, trial-level feedback was displayed for 200 ms as a centralized fixation cross, either green (correct response within the RT limit), yellow (correct response outside the RT limit), or red (incorrect response), followed by a standard 1000-ms intertrial interval.
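As a concrete restatement of this feedback rule, the sketch below maps a trial's accuracy and RT to the 3 feedback colors; the function and its name are illustrative only and are not part of the ACE software.

```python
def trial_feedback(correct: bool, rt_ms: float, rt_limit_ms: float) -> str:
    """Return the feedback color shown for 200 ms after each ACE trial."""
    if not correct:
        return "red"        # incorrect response
    if rt_ms <= rt_limit_ms:
        return "green"      # correct response within the RT limit
    return "yellow"         # correct response outside the RT limit

# Example: a correct response at 640 ms against a 600-ms limit -> "yellow"
print(trial_feedback(True, 640, 600))
```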
The BRT task was designed to index the basic response speed of participants on a simple task with minimal loading on executive function skills [ ]. Participants were instructed to press a button at the bottom of the screen as fast as they could when they detected a symbol (target) that appeared in the center of the screen. The target always appeared without distraction. The BRTs were measured for both index fingers. Participants first completed 5 practice trials for each of their right and left index fingers, followed by 20 experimental trials per index finger. This task started with a maximum RT limit of 500 ms and a response window of 500 ms that adapted for each participant according to trialwise performance. An average RT was measured for each hand. Only data from the dominant hand were included as each participant’s BRT in the following analyses.
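A minimal sketch of how the BRT summary could be derived from trial-level data: the mean RT per hand, with the dominant hand's mean used as the participant's BRT. The response-window update shown is a generic step-up/step-down staircase included purely for illustration; the actual adaptation rule used by ACE is not specified in this paper.

```python
import statistics

def brt_summary(trials, dominant_hand="right"):
    """trials: list of dicts with keys 'hand' and 'rt_ms' (correct trials only).
    Returns the mean RT of the dominant hand, the BRT used in analyses."""
    by_hand = {}
    for t in trials:
        by_hand.setdefault(t["hand"], []).append(t["rt_ms"])
    means = {hand: statistics.mean(rts) for hand, rts in by_hand.items()}
    return means[dominant_hand]

def update_window(window_ms, hit_within_window, step_ms=25,
                  floor_ms=200, ceiling_ms=1000):
    """Illustrative staircase (an assumption, not the ACE rule): tighten the
    response window after an in-window hit, relax it after a miss."""
    if hit_within_window:
        return max(floor_ms, window_ms - step_ms)
    return min(ceiling_ms, window_ms + step_ms)
```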
The Boxed task was designed to measure visual search performance across different types and numbers of distractors [ ]. Participants were presented with an array of either 4 or 12 Landolt squares (ie, squares with a gap on 1 side) until they located the target (a green box with a gap on the top or bottom) and indicated the location of the gap (top or bottom) by tapping, with their dominant hand, a button labeled either “top” or “bottom.” Two distinct search modes were included: a feature search, where red Landolt squares were present in addition to the single green target, allowing the target to be located based solely on object color, and a conjunction search, where distractor boxes were green and red, with all green distractor boxes having gaps on the left or right side and red distractor boxes having gaps on the top or bottom, similar to the green target box. As such, participants had to search the array for the target square based on a conjunction of features: both color and position of the opening. In addition to the 2 search types (feature and conjunction), there were 2 distractor load conditions: a low load with 1 target and 3 distractors and a high load with 1 target and 11 distractors. Participants completed 8 practice trials of each condition (32 total) before moving on to the experimental task with 25 trials per condition (100 total). This task started with a maximum RT limit of 1500 ms and a response window of 1000 ms that adapted for each participant according to trialwise performance. Only correct responses made within the participants’ adaptive response window were included in analysis. Task performance was assessed by examining the mean RT to correct responses for all trial types, that is, 4- and 12-item trials collapsed across both feature and conjunction search modes. The RT cost of the larger set size, collapsed across feature and conjunction searches, was measured as distraction cost = RT(12-item, conjunction and feature) – RT(4-item, conjunction and feature).
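The Boxed summary measures can be computed from trial-level data as sketched below: the mean RT over correct, in-window responses, and the distraction cost as the 12-item minus the 4-item mean, collapsed across feature and conjunction searches. The data frame and its column names are hypothetical.

```python
import pandas as pd

def boxed_measures(trials: pd.DataFrame):
    """trials columns (hypothetical): set_size (4 or 12), search ('feature' or
    'conjunction'), correct (bool), in_window (bool), rt_ms (float)."""
    valid = trials[trials["correct"] & trials["in_window"]]
    boxed_rt = valid["rt_ms"].mean()                       # mean RT, all trial types
    by_size = valid.groupby("set_size")["rt_ms"].mean()    # collapsed across searches
    distraction_cost = by_size[12] - by_size[4]
    return boxed_rt, distraction_cost
```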
SAAT was developed based on the Test of Variables of Attention (TOVA) [ ] and was designed to include blocks that separately measure sustained attention (to an infrequent target) and inhibitory control (inhibiting a prepotent response to salient distractors). During a trial, a symbol (target) appeared at the top or bottom of the screen. Participants were instructed to press a button with the index finger of only their dominant hand when the target appeared at the top of the screen and to ignore the symbol and withhold a response when it appeared at the bottom. This task proceeded in 2 blocks. In the inhibitory control block, targets appeared on 27 of 40 (67%) trials, requiring participants to withhold a (highly primed) response when a distractor appeared. In the sustained attention block, the target appeared on only 13 of 40 (33%) trials, requiring participants to maintain attention to avoid missing an infrequent target. Participants completed 10 practice trials (6 target and 4 nontarget trials) and then 80 experimental trials, 40 in each block. This task started with a maximum RT limit of 600 ms and a response window of 600 ms that adapted for each participant according to trialwise performance. However, to avoid creating an artificially low response window, correct rejections did not affect the response window. Trials where no response was given (when a response was expected) or that were anticipatory (RT < 150 ms) were excluded from analyses. All remaining trials were evaluated for accuracy regardless of whether the response was within the response window; thus, trials were only considered incorrect if an incorrect response was made (and not if they were correct but late). The mean RT to correct responses, collapsed across block types, was measured as task performance.
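A sketch of the trial-handling rules described above, applied to hypothetical trial-level data: anticipatory responses (RT < 150 ms) and expected-but-omitted responses are dropped, accuracy is judged regardless of the response window, and the mean RT to correct responses is collapsed across the 2 blocks.

```python
import pandas as pd

def saat_mean_rt(trials: pd.DataFrame):
    """trials columns (hypothetical): is_target (bool), responded (bool),
    rt_ms (float, NaN when no response), correct (bool)."""
    anticipatory = trials["responded"] & (trials["rt_ms"] < 150)
    omitted = trials["is_target"] & ~trials["responded"]
    usable = trials[~(anticipatory | omitted)]
    # Late-but-correct responses still count as correct.
    correct_hits = usable[usable["correct"] & usable["responded"]]
    return correct_hits["rt_ms"].mean()
```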
The Spatial Span Task is a computerized version of the Corsi Block-Tapping Test [ ], which has frequently been used to assess visuospatial working memory capacity. On each trial, participants viewed a test array of 20 black circles that were cued sequentially, in line with the typical administration of the Corsi Block-Tapping Task stimuli. Cued circles were lit in green, one at a time, sequentially. After the sequence was complete and no longer displayed, participants were instructed to recall the cued circles by tapping each cued location in the order it was shown. Participants started with between 2 and 4 practice trials with 3-location sequences (ie, 3 cued circles), practicing until 2 consecutive trials were answered incorrectly. Regardless of practice performance, participants then began the experimental task with a 3-location sequence. Once participants completed 2 consecutive trials at a given level without error, they advanced to the next level, which included an additional cued circle, increasing the difficulty. Participants completed as many levels as possible until 2 consecutive incorrect trials, at which point the task ended. Participants had unlimited time to respond in this task. Participants needed to have successfully completed at least two 3-location sequence trials to be included in analysis. The highest level successfully completed (ie, the maximum number of items in a sequence) was defined as each participant’s spatial span, the measure of working memory capacity.
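The span score can be read off the trial record as sketched below; the trial list format is hypothetical, and the inclusion rule (at least two correct 3-location trials) follows the description above.

```python
def spatial_span(trial_results):
    """trial_results: list of (sequence_length, correct) pairs in the order
    administered. Returns the spatial span, ie, the longest sequence recalled
    correctly, or None if the participant does not meet the inclusion rule."""
    correct_lengths = [length for length, ok in trial_results if ok]
    if sum(1 for length in correct_lengths if length == 3) < 2:
        return None                       # fewer than two correct 3-item trials
    return max(correct_lengths)

# Example: two correct 3-item trials, two correct 4-item trials, then two
# consecutive errors at 5 items -> span of 4.
print(spatial_span([(3, True), (3, True), (4, True), (4, True),
                    (5, False), (5, False)]))   # -> 4
```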
Statistical Analysis
All numerical data are presented as the mean (SE). To evaluate the digital cognitive assessment battery (ie, ACE), Pearson correlation analyses were conducted to examine the relationship between performance in the SDMT/PASAT and ACE modules. Partial correlation analyses with age, sex, years of education, and the BRT as covariates were applied, when appropriate. To examine whether the selected ACE modules can differentiate CI and non-CI participants with MS, participants with MS were divided into 2 subgroups (ie, CI and non-CI) according to their baseline SDMT z scores. Participants with an SDMT z score of <–1 based on published normative data [
] were characterized as CI. Differences between CI and non-CI participants with MS, as well as non-MS participants, in ACE performance were examined by one-way ANOVA, with the BRT as a covariate to control for potential motor speed deficits in participants with MS. Two-tailed Student t tests were carried out for post hoc comparisons, when appropriate. The statistical significance threshold was set at P≤.05.
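As a sketch of the 2 analysis steps just described, the snippet below classifies CI from SDMT z scores (z < –1) and computes a partial correlation by residualizing both variables on the covariates (age, sex, education, and BRT in this study). The normative mean and SD are placeholders, not the published values, and the degrees-of-freedom adjustment for the partial correlation's P value is omitted.

```python
import numpy as np
from scipy import stats

def sdmt_z(raw_score, norm_mean=50.0, norm_sd=10.0):
    """z score relative to normative data (placeholder mean/SD values)."""
    return (raw_score - norm_mean) / norm_sd

def classify_ci(z_scores):
    """CI if the SDMT z score is below -1, per the cutoff used in this study."""
    return np.asarray(z_scores) < -1

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after regressing out the covariates
    (an n-by-k array) from both variables."""
    design = np.column_stack([np.ones(len(x)), covariates])
    resid_x = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    resid_y = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return stats.pearsonr(resid_x, resid_y)   # (r, uncorrected P value)
```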
Results
Participants
A total of 53 participants with MS (mean age 51.8 [1.7] years) and 24 participants without MS (mean age 46.0 [3.7] years) completed the assessments; their demographic and clinical characteristics are summarized in
. One-way ANOVA revealed no age differences between the groups (F(2,76)=1.42, P=.24). For categorical variables (ie, sex and race), chi-square tests showed no statistically significant association between group and sex (χ2(2)=5.31, P=.07) or race (χ2(4)=2.90, P=.59). For analysis purposes, the 53 participants with MS were divided into CI (n=24 [45%]) and non-CI (n=29 [55%]) subgroups based on their baseline SDMT z score. details the task completion rate.
Characteristics | MSa with CIb (n=24) | MSa without CI (n=29) | Non-MS (n=24)
Age (years), mean (SE) | 50.87 (2.51) | 52.68 (2.35) | 46.04 (3.72) | ||
Sex (female), n (%) | 17 (70) | 23 (79) | 12 (50) | ||
Education (years), mean (SE) | 16.50 (0.47) | 16.79 (0.51) | 16.16 (0.41) | ||
Right-handedness, n (%) | 22 (91) | 23 (80) | 24 (100) | ||
Part- or full-time employed, n (%) | 11 (45%) | 15 (51%) | 17 (70%) | ||
SDMTc score, mean (SE) | 34.79 (1.34) | 49.34 (1.12)d | 51.20 (2.65)e | ||
SDMT z score, mean (SE) | –1.58 (0.08) | –0.05 (0.10)d | 0.26 (0.20)e | ||
Expanded Disability Status Scale (EDSS), median (IQR) | 4 (1.75) | 3 (1.5) | N/Af | ||
Disease duration (years), mean (SE) | 11.95 (1.83) | 13.71 (1.49) | N/A | ||
Race, n (%) | |||||
White | 22 (92%) | 22 (76%) | 19 (79%) | ||
Black/African American | N/A | 2 (7%) | 1 (4%) | ||
Other/unknown | 2 (8%) | 5 (17%) | 4 (17%) | ||
MS subtype, n (%) | |||||
Relapsing-remitting | 18 (75%) | 23 (79%) | N/A | ||
Primary progressive | 2 (8%) | 2 (7%) | N/A | ||
Secondary progressive | 3 (13%) | 3 (10%) | N/A | ||
Clinically isolated syndrome (CIS) | N/A | 1 (3%) | N/A | ||
Unknown | 1 (4%) | N/A | N/A |
aMS: multiple sclerosis.
bCI: cognitive impairment.
cSDMT: Symbol Digit Modalities Test.
dP<.001 for the comparison between CI and non-CI groups.
eP<.001 for the comparison between CI and non-MS groups.
fN/A: not applicable.
Correlation Between Standard Measures and ACE
To delineate associations between performance in standard measures (ie, SDMT and PASAT scores) and the tested digital cognitive platform (ie, ACE), Pearson correlation analyses were performed. The SDMT showed significant correlations with several ACE measures (Boxed RT: R=–0.57, P<.001; Boxed distraction cost: R=–0.28, P=.02; SAAT RT: R=–0.36, P=.001; Spatial Span: R=0.34, P=.003;
). Since the Boxed RT showed the strongest correlation with the SDMT, we further performed an exploratory linear regression analysis to examine to what extent the Boxed RT value can be used to predict the SDMT score. The analysis revealed a moderate R-squared value of 0.333 with the regression equation SDMT = 82.55 – 0.038 × Boxed RT; that is, 33% of the total variation in the SDMT score can be explained by the Boxed RT.
When controlling for age, sex, years of education, and the BRT with partial correlations, similar results were observed (Boxed RT: R=–0.44, P<.001; Boxed distraction cost: R=–0.28, P=.01; SAAT RT: R=–0.17, P=.15; Spatial Span: R=0.18, P=.12;
). When we restricted the analyses to only participants with MS, SDMT correlations with the Boxed RT and Boxed distraction cost remained statistically significant (Boxed RT: R=–0.50, P<.001; Boxed distraction cost: R=–0.34, P=.01; SAAT RT: R=–0.22, P=.10; Spatial Span: R=0.24, P=.07). Again, in adjusted correlation analyses, we saw similar results (Boxed RT: R=–0.43, P=.002; Boxed distraction cost: R=–0.38, P=.01; SAAT RT: R=–0.16, P=.26; Spatial Span: R=0.18, P=.21; ).
The PASAT, which has also been extensively used to test cognitive function in MS, was administered to the participants with MS and also showed significant correlations with the Boxed RT (R=–0.39, P=.01) and Spatial Span (R=0.29, P=.03;
and ). These correlations remained significant after accounting for age, sex, years of education, and the BRT as covariates (Boxed RT: R=–0.40, P=.01; Spatial Span: R=0.34, P=.02). These results support the hypothesis that there is a correlational association between standard MS information processing measures and ACE measures.
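The exploratory regression reported above can be reproduced from raw scores with an ordinary least-squares fit, as sketched below on hypothetical arrays (the coefficients and R-squared value in the text come from the study data, not from this example).

```python
import numpy as np
from scipy import stats

# Hypothetical values; the study's fit was SDMT = 82.55 - 0.038 x Boxed RT.
boxed_rt = np.array([905.0, 960.0, 1010.0, 1073.0, 1150.0])
sdmt = np.array([51.0, 49.0, 44.0, 35.0, 31.0])

fit = stats.linregress(boxed_rt, sdmt)
print(f"SDMT = {fit.intercept:.2f} + ({fit.slope:.3f}) x Boxed RT")
print(f"R-squared = {fit.rvalue ** 2:.3f}")  # share of SDMT variance explained
```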
ACE measures | Covariates | R | P value |
All participants (N=77) |
Boxed RTc | N/Ad | –0.57 | <.001e | ||
Boxed distraction cost | N/A | –0.28 | .01e | ||
SAATf RT | N/A | –0.36 | .001e | ||
Spatial Span | N/A | 0.34 | .003e | ||
Boxed RT | age, sex, edug, and BRTh | –0.44 | <.001e | ||
Boxed distraction cost | age, sex, edu, and BRT | –0.28 | .01e | ||
SAAT RT | age, sex, edu, and BRT | –0.17 | .15 | ||
Spatial Span | age, sex, edu, and BRT | 0.18 | .12 | ||
Participants with MSi (N=53) |
Boxed RT | N/A | –0.50 | <.001e | ||
Boxed distraction cost | N/A | –0.34 | .01e | ||
SAAT RT | N/A | –0.22 | .10 | ||
Spatial Span | N/A | 0.24 | .07 | ||
Boxed RT | age, sex, edu, and BRT | –0.43 | .002e | ||
Boxed distraction cost | age, sex, edu, and BRT | –0.38 | .007e | ||
SAAT RT | age, sex, edu, and BRT | –0.16 | .26 | ||
Spatial Span | age, sex, edu, and BRT | 0.18 | .21 |
aSDMT: Symbol Digit Modalities Test.
bACE: Adaptive Cognitive Evaluation.
cRT: reaction time.
dN/A: not applicable.
eP values in italic are significant.
fSAAT: Sustained Attention ACE Task.
gedu: years of education.
hBRT: basic reaction time.
iMS: multiple sclerosis.
ACE measures | Covariates | R | P value |
Boxed RTd | N/Ae | –0.39 | .01f |
Boxed distraction cost | N/A | –0.25 | .07 |
SAATg RT | N/A | –0.03 | .83 |
Spatial Span | N/A | 0.29 | .03f |
Boxed RT | age, sex, eduh, and BRTi | –0.40 | .01f |
Boxed distraction cost | age, sex, edu, and BRT | –0.24 | .09 |
SAAT RT | age, sex, edu, and BRT | –0.01 | .92 |
Spatial Span | age, sex, edu, and BRT | 0.34 | .02f |
aPASAT: Paced Auditory Serial Addition Test.
bACE: Adaptive Cognitive Evaluation.
cMS: multiple sclerosis.
dRT: reaction time.
eN/A: not applicable.
fP values in italic are significant.
gSAAT: Sustained Attention ACE Task.
hedu: years of education.
iBRT: basic reaction time.
Group Differences in ACE
We then determined whether ACE modules can differentiate participants with MS with CI (SDMT z score <–1) and without CI (SDMT z score ≥–1), as well as non-MS participants. To accomplish this, we conducted one-way ANOVA with the participant category as the independent variable for the Boxed RT, Boxed distraction cost, SAAT RT, and Spatial Span. We included age, sex, and years of education as covariates. The BRT was also included as a covariate since there was a significant difference in the BRT among the 3 groups (F(2,71)=3.96, P=.02), where the non-MS group showed a faster BRT (311.81 [14.50] ms) compared to both CI (362.86 [13.56] ms, P=.02) and non-CI (357.27 [12.34] ms, P=.02) participants with MS.
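The group comparison just described corresponds to a one-way analysis of each ACE outcome with the group factor plus covariates. A sketch using the statsmodels formula interface is shown below; the data frame and its column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def group_effect(df: pd.DataFrame, outcome: str):
    """df columns (hypothetical): group ('CI MS', 'non-CI MS', 'non-MS'),
    age, sex, edu_years, brt_ms, and the ACE outcome of interest."""
    model = smf.ols(f"{outcome} ~ C(group) + age + C(sex) + edu_years + brt_ms",
                    data=df).fit()
    return sm.stats.anova_lm(model, typ=2)   # F and P for the group factor

# Example: group_effect(df, "boxed_rt"), analogous to the F(2,66) tests below.
```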
Significant group differences in the Boxed RT (F(2,66)=9.73, P<.001) and Boxed distraction cost (F(2,66)=7.40, P=.001) were revealed. Post hoc comparisons indicated a slower Boxed RT in CI (1072.72 [28.14] ms) compared to non-CI (959.89 [25.48] ms; P=.004) participants with MS as well as non-MS participants (904.86 [31.68] ms, P<.001). For Boxed distraction cost, CI participants showed a higher attention cost (274.35 [20.47] ms) when compared to non-CI participants (170.12 [18.53] ms, P<.001) and non-MS (203.84 [23.04] ms, P=.02) participants (
). Although no statistically significant difference in Boxed distraction cost was found between the non-CI and non-MS groups (P=.20), the distraction cost in the non-CI group was numerically slightly lower than in the non-MS group. A lower RT cost can arise from 2 combinations of task performance: first, the same level of performance in the more challenging condition but worse performance in the easier condition; and second, the same level of performance in the easier condition with a smaller performance drop in the more challenging condition, the latter indicating better cognitive ability because performance is less affected when the task becomes more difficult. Here, the numerical difference in Boxed distraction cost between the non-CI and non-MS groups was more likely the result of worse performance in the easier condition in the non-CI group (ie, slower RT in the 4-item condition) rather than a smaller performance change in the more challenging condition (ie, faster RT in the 12-item condition). To examine this, we investigated RTs in the 12-item and 4-item conditions in both groups. As expected, the 2 groups showed the same level of RT in the 12-item condition (non-CI vs non-MS: 1030.22 [24.09] ms vs 1022.23 [30.66] ms, P=.84), whereas a numerically slower RT was found in the non-CI group in the 4-item condition (non-CI vs non-MS: 854.37 [19.42] ms vs 814.89 [23.87] ms, P=.23). The slower RT in the 4-item condition, to some extent, explains the numerically lower Boxed distraction cost in the non-CI group. No significant differences between the 3 groups were discovered for the SAAT RT (F(2,66)=0.42, P=.65) or Spatial Span (F(2,66)=0.62, P=.54). These results suggest that the ACE Boxed module can identify group-level differences between CI participants with MS, non-CI participants with MS, and non-MS participants.
Discussion
Principal Findings
In this study, we aimed to determine whether a digital cognitive assessment battery (ie, ACE) could be used to evaluate cognitive function in adults with MS with and without CI. We found a significant correlational association between ACE metrics and standard cognitive measures. When age, sex, years of education, and the BRT were considered as covariates, only correlations between the SDMT score and the Boxed RT as well as the Boxed distraction cost remained significant. Specifically, the ACE Boxed module, a task measuring visual search performance and attention [
], showed the strongest correlation with the SDMT and could identify group-level differences between adults with MS with and without CI, as well as adults without MS. Altogether, these results provide preliminary evidence that ACE, a tablet-based cognitive assessment battery, offers modules that could potentially serve as an unsupervised cognitive assessment for people with MS.
Correlational links between standard measures and ACE measures were discovered. Specifically, we noted significant correlations between higher SDMT scores and faster Boxed and SAAT RTs, lower Boxed distraction costs, and higher Spatial Span. The significant correlations between SDMT and Boxed measures in all participants (including both non-MS and MS participants) indicate that this ACE module is a potential cognitive assessment tool, and the results held when only participants with MS were considered. Moreover, an exploratory simple linear regression analysis revealed a moderate R-squared value of 0.33, indicating that 33% of the total variation in the SDMT score can be explained by the Boxed RT. These findings partially support relative consequence theory [
- ], which postulates that a change in information processing speed is a key deficit underlying cognitive dysfunction in MS [ - ]. However, a clear causal relationship between information processing speed and high-level cognitive performance cannot be concluded from the current results. Of note, when age, sex, years of education, and the BRT were considered as covariates, only correlations between the SDMT score and the Boxed RT and Boxed distraction cost remained significant. Boxed is a visual search task that requires participants to search for a specific target, filter out distractors, and provide a response. To some extent, the task structure and the domains of cognitive function being challenged are similar to those of the SDMT, in which participants are asked to substitute geometric symbols for numbers (make a response) while scanning a response key (search and filter out distractors). The similarity of the cognitive functions subserving the 2 tasks may explain the consistent correlation between SDMT scores and Boxed performance. In contrast, the cognitive domains mainly challenged in SAAT and Spatial Span are attention control and working memory, respectively, which are less similar to what is challenged during an SDMT test. These different cognitive domains could have been affected differently by factors such as age, sex, and years of education, which could explain why the correlations between the SDMT and performance in SAAT and Spatial Span were attenuated and no longer significant when controlling for these factors.
Participants with MS with a higher PASAT score demonstrated a faster Boxed RT and better Spatial Span. PASAT is a test involving information processing, attention, and short-term maintenance and manipulation of information [
]. These associations were expected, as the Boxed module is designed to assess attention and information processing speed, and Spatial Span mainly involves the cognitive component of holding information in mind.
In addition to showing the correlational association between a subset of ACE modules and standard cognitive measures, we further demonstrated that ACE could differentiate CI in adults with and without MS, as indicated by a significantly slower Boxed RT and higher attention cost in CI participants compared to both non-CI participants with MS and non-MS participants. Of note, performance on SAAT and Spatial Span was not significantly different among the 3 groups. Compared to SAAT and Spatial Span, which only challenge 1 or 2 cognitive domains, Boxed is a more complicated task that requires more underlying cognitive resources to reach the task goal. Since there is large interindividual variability in the pattern of CI in MS [
, ], it is possible that a complex task requiring more executive control may be a more sensitive tool to capture cognitive dysfunction in participants with MS than tasks that challenge only 1 or 2 aspects of cognitive function. These results support our hypotheses that there are correlational links between performance in standard cognitive measures for MS and ACE modules. In addition, as a digital tool for assessing cognitive function in MS, at least 1 ACE module has the capacity to differentiate group-level differences among CI participants with MS, non-CI participants with MS, and non-MS participants.
With advances in digital technology, assessment and treatment for people with MS have adopted digital platforms [
], which, when used in the home, can substantially improve accessibility to cognitive remediation programs. Recently, we demonstrated that in-game navigation features of an unsupervised, video game–based digital therapeutic could represent a novel and sensitive way to perform cognitive evaluations in MS [ ]. The current results further support the use of a digital platform for cognitive assessment in MS. The built-in adaptive algorithms, which modulate task difficulty based on individual performance, reduce interindividual variability, which is usually a concern for cognitive assessments [ , ]. Since ACE is a self-guided digital assessment tool, it has the potential to be used in different settings (eg, at home or in the clinic). Future studies evaluating how ACE performance fluctuates during a day and whether the results would be affected by different testing environments are warranted. Moreover, since each ACE module is designed to challenge specific cognitive components, baseline ACE subtest scores could be useful to inform personalized cognitive training targets. For instance, for a participant with a low score in Spatial Span and a high score in SAAT, the prescribed cognitive training approaches may place greater emphasis on working memory than on attentional control. Studies with a larger sample size, or studies administering ACE as an outcome measure to capture the effect of a cognitive intervention, are also needed to provide more information about how the ACE battery reflects patients’ and caregivers’ real-life experience and to better translate the subtest scores into meaningful treatment targets.
Limitations
Among the limitations of this study, the relatively small sample size and lack of multiple points of data collection at baseline made it difficult to draw a definitive conclusion with respect to the test validity and test-retest reliability of the ACE battery in people with MS. Related to this, given the predominantly White participants in the study, particularly those with CI, the potential influence of race and ethnicity on the results could not be fully excluded. Participants with severe CI were excluded from the study. Although the severity of CI can vary from mild to quite severe in patients with MS, it has been reported that the majority (771/1014, 76.03%) of patients experience mild (340/771, 33.7%) to moderate (431/771, 42.7%) cognitive disturbance [
]. Since the application of the ACE program in clinical settings is still at an early stage, we planned to start with patients without severe CI to reduce the heterogeneity of the sample. Future studies with a broader range of CI are needed to investigate how the ACE tool performs when applied to participants with severe CI. Furthermore, the results of the exploratory simple linear regression analysis should be interpreted with caution, given that the sample size does not provide adequate power for a rigorous predictive model. Future studies with a larger sample size are warranted for better predictive model development. Finally, experience with digital tools may be a confounding factor: participants with more experience using digital devices may have performed better in this study. Future studies investigating digital assessments should control for participants’ skills in using digital devices.
Conclusion
In summary, this study suggests that a tablet-based adaptive cognitive battery could be used to perform cognitive assessments in MS. As noted previously [
, ], the high adherence rate indicated that this remote, home-based health care strategy is well accepted by patients with MS, who may have limited access to cognitive assessment or treatment. Since CI poses major limitations on patients with MS, the current findings open up new paths to deploying digital cognitive tests for MS.
Acknowledgments
We would like to thank our research participants. This work was supported by the Doris Duke Charitable Foundation and Akili Interactive.
Conflicts of Interest
RB received research support from the National Multiple Sclerosis Society, the Hilton Foundation, the California Initiative to Advance Precision Medicine, the Sherak Foundation, and Akili Interactive. RB also received personal compensation for consulting from Alexion, Biogen, EMD Serono, Novartis, Pear Therapeutics, Roche Genentech, and Sanofi Genzyme. AG is cofounder, shareholder, board member, and advisor for Akili Interactive Labs, a company that manufactures investigational digital treatments delivered through a video game–like interface.
References
- Achiron A, Barak Y. Cognitive impairment in probable multiple sclerosis. J Neurol Neurosurg Psychiatry 2003 Apr;74(4):443-446 [FREE Full text] [CrossRef] [Medline]
- Rogers JM, Panegyres PK. Cognitive impairment in multiple sclerosis: evidence-based analysis and recommendations. J Clin Neurosci 2007 Oct;14(10):919-927. [CrossRef] [Medline]
- Hoffmann S, Tittgemeyer M, von Cramon DY. Cognitive impairment in multiple sclerosis. Curr Opin Neurol 2007 Jun;20(3):275-280. [CrossRef] [Medline]
- Rao SM, Leo GJ, Bernardin L, Unverzagt F. Cognitive dysfunction in multiple sclerosis. I. Frequency, patterns, and prediction. Neurology 1991 May;41(5):685-691. [CrossRef] [Medline]
- Rae-Grant A, Bennett A, Sanders AE, Phipps M, Cheng E, Bever C. Quality improvement in neurology: multiple sclerosis quality measures; executive summary. Neurology 2015 Nov 24;85(21):1904-1908 [FREE Full text] [CrossRef] [Medline]
- Sliwinski MJ, Mogle JA, Hyun J, Munoz E, Smyth JM, Lipton RB. Reliability and validity of ambulatory cognitive assessments. Assessment 2018 Jan;25(1):14-30 [FREE Full text] [CrossRef] [Medline]
- Anguera JA, Brandes-Aitken AN, Antovich AD, Rolle CE, Desai SS, Marco EJ. A pilot study to determine the feasibility of enhancing cognitive abilities in children with sensory processing dysfunction. PLoS ONE 2017;12(4):e0172616 [FREE Full text] [CrossRef] [Medline]
- Anguera JA, Brandes-Aitken AN, Rolle CE, Skinner SN, Desai SS, Bower JD, et al. Characterizing cognitive control abilities in children with 16p11.2 deletion using adaptive 'video game' technology: a pilot study. Transl Psychiatry 2016 Sep 20;6(9):e893 [FREE Full text] [CrossRef] [Medline]
- Anguera JA, Gunning FM, Areán PA. Improving late life depression and cognitive control through the use of therapeutic video game technology: a proof-of-concept randomized trial. Depress Anxiety 2017 Jun;34(6):508-517 [FREE Full text] [CrossRef] [Medline]
- Anguera JA, Jordan JT, Castaneda D, Gazzaley A, Areán PA. Conducting a fully mobile and randomised clinical trial for depression: access, engagement and expense. BMJ Innov 2016 Jan;2(1):14-21 [FREE Full text] [CrossRef] [Medline]
- Arean PA, Hallgren KA, Jordan JT, Gazzaley A, Atkins DC, Heagerty PJ, et al. The use and effectiveness of mobile apps for depression: results from a fully remote clinical trial. J Med Internet Res 2016 Dec 20;18(12):e330 [FREE Full text] [CrossRef] [Medline]
- Areàn PA, Hoa Ly K, Andersson G. Mobile technology for mental health assessment. Dialogues Clin Neurosci 2016 Jun;18(2):163-169 [FREE Full text] [Medline]
- Charvet LE, Yang J, Shaw MT, Sherman K, Haider L, Xu J, et al. Cognitive function in multiple sclerosis improves with telerehabilitation: results from a randomized controlled trial. PLoS ONE 2017 May 11;12(5):e0177177 [FREE Full text] [CrossRef] [Medline]
- Davis NO, Bower J, Kollins SH. Proof-of-concept study of an at-home, engaging, digital intervention for pediatric ADHD. PLoS ONE 2018;13(1):e0189749 [FREE Full text] [CrossRef] [Medline]
- Hackett K, Krikorian R, Giovannetti T, Melendez-Cabrero J, Rahman A, Caesar EE, et al. Utility of the NIH Toolbox for assessment of prodromal Alzheimer's disease and dementia. Alzheimers Dement (Amst) 2018;10:764-772 [FREE Full text] [CrossRef] [Medline]
- Tsoy E, Erlhoff SJ, Goode CA, Dorsman KA, Kanjanapong S, Lindbergh CA, et al. BHA-CS: a novel cognitive composite for Alzheimer's disease and related disorders. Alzheimers Dement (Amst) 2020;12(1):e12042 [FREE Full text] [CrossRef] [Medline]
- Reijs BLR, Ramakers IHGB, Köhler S, Teunissen CE, Koel-Simmelink M, Nathan PJ, et al. Memory correlates of Alzheimer's disease cerebrospinal fluid markers: a longitudinal cohort study. J Alzheimers Dis 2017;60(3):1119-1128. [CrossRef] [Medline]
- Kim HS, An YM, Kwon JS, Shin M. A preliminary validity study of the cambridge neuropsychological test automated battery for the assessment of executive function in schizophrenia and bipolar disorder. Psychiatry Investig 2014 Oct;11(4):394-401 [FREE Full text] [CrossRef] [Medline]
- Levaux M, Potvin S, Sepehry AA, Sablier J, Mendrek A, Stip E. Computerized assessment of cognition in schizophrenia: promises and pitfalls of CANTAB. Eur Psychiatry 2007 Mar;22(2):104-115. [CrossRef] [Medline]
- Bove RM, Rush G, Zhao C, Rowles W, Garcha P, Morrissey J, et al. A videogame-based digital therapeutic to improve processing speed in people with multiple sclerosis: a feasibility study. Neurol Ther 2019 Jun;8(1):135-145 [FREE Full text] [CrossRef] [Medline]
- Khaligh-Razavi S, Sadeghi M, Khanbagi M, Kalafatis C, Nabavi SM. A self-administered, artificial intelligence (AI) platform for cognitive assessment in multiple sclerosis (MS). BMC Neurol 2020 May 18;20(1):193 [FREE Full text] [CrossRef] [Medline]
- Middleton RM, Pearson OR, Ingram G, Craig EM, Rodgers WJ, Downing-Wood H, UK MS Register Research Group. A rapid electronic cognitive assessment measure for multiple sclerosis: validation of cognitive reaction, an electronic version of the Symbol Digit Modalities Test. J Med Internet Res 2020 Sep 23;22(9):e18234 [FREE Full text] [CrossRef] [Medline]
- Rudick RA, Miller D, Bethoux F, Rao SM, Lee J, Stough D, et al. The Multiple Sclerosis Performance Test (MSPT): an iPad-based disability assessment tool. J Vis Exp 2014 Jun 30(88):e51318 [FREE Full text] [CrossRef] [Medline]
- Younger J, O'Laughlin K, Anguera J, Bunge S, Ferrer E, Hoeft F, et al. Development of Executive Function in Middle Childhood: A Large-Scale, In-School, Longitudinal Investigation. 2021 Apr 20. URL: https://psyarxiv.com/xf489/ [accessed 2021-04-20]
- Benedict RH, DeLuca J, Phillips G, LaRocca N, Hudson LD, Rudick R, Multiple Sclerosis Outcome Assessments Consortium. Validity of the Symbol Digit Modalities Test as a cognition performance outcome measure for multiple sclerosis. Mult Scler 2017 Apr 16;23(5):721-733 [FREE Full text] [CrossRef] [Medline]
- Parmenter BA, Weinstock-Guttman B, Garg N, Munschauer F, Benedict RHB. Screening for cognitive impairment in multiple sclerosis using the Symbol Digit Modalities Test. Mult Scler 2007 Jan;13(1):52-57. [CrossRef] [Medline]
- Denney DR, Lynch SG. The impact of multiple sclerosis on patients' performance on the Stroop Test: processing speed versus interference. J Int Neuropsychol Soc 2009 May;15(3):451-458. [CrossRef] [Medline]
- Drew MA, Starkey NJ, Isler RB. Examining the link between information processing speed and executive functioning in multiple sclerosis. Arch Clin Neuropsychol 2009 Feb;24(1):47-58. [CrossRef] [Medline]
- Genova HM, DeLuca J, Chiaravalloti N, Wylie G. The relationship between executive functioning, processing speed, and white matter integrity in multiple sclerosis. J Clin Exp Neuropsychol 2013;35(6):631-641 [FREE Full text] [CrossRef] [Medline]
- Owens EM, Denney DR, Lynch SG. Difficulties in planning among patients with multiple sclerosis: a relative consequence of deficits in information processing speed. J Int Neuropsychol Soc 2013 May;19(5):613-620. [CrossRef] [Medline]
- DeLuca J, Chelune GJ, Tulsky DS, Lengenfelder J, Chiaravalloti ND. Is speed of processing or working memory the primary information processing deficit in multiple sclerosis? J Clin Exp Neuropsychol 2004 Jun;26(4):550-562. [CrossRef] [Medline]
- Forn C, Belenguer A, Parcet-Ibars MA, Avila C. Information-processing speed is the primary deficit underlying the poor performance of multiple sclerosis patients in the Paced Auditory Serial Addition Test (PASAT). J Clin Exp Neuropsychol 2008 Oct;30(7):789-796. [CrossRef] [Medline]
- Polman CH, Reingold SC, Banwell B, Clanet M, Cohen JA, Filippi M, et al. Diagnostic criteria for multiple sclerosis: 2010 revisions to the McDonald criteria. Ann Neurol 2011 Feb;69(2):292-302 [FREE Full text] [CrossRef] [Medline]
- Bove R, Rowles W, Zhao C, Anderson A, Friedman S, Langdon D, et al. A novel in-home digital treatment to improve processing speed in people with multiple sclerosis: a pilot study. Mult Scler 2021 Apr 25;27(5):778-789. [CrossRef] [Medline]
- Hsu W, Rowles W, Anguera JA, Zhao C, Anderson A, Alexander A, et al. Application of an adaptive, digital, game-based approach for cognitive assessment in multiple sclerosis: observational study. J Med Internet Res 2021 Jan 27;23(1):e27440 [FREE Full text] [CrossRef] [Medline]
- Gronwall DM. Paced auditory serial-addition task: a measure of recovery from concussion. Percept Mot Skills 1977 Apr;44(2):367-373. [CrossRef] [Medline]
- Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, et al. Video game training enhances cognitive control in older adults. Nature 2013 Sep 05;501(7465):97-101 [FREE Full text] [CrossRef] [Medline]
- Treisman AM, Gelade G. A feature-integration theory of attention. Cogn Psychol 1980 Jan;12(1):97-136. [CrossRef] [Medline]
- Leark RA, Dupuy TR, Greenberg LM, Corman CL, Kindschi C. T.O.V.A. Test of Variables of Attention Professional Manual. Los Alamitos, CA: Universal Attention Disorders; 1999.
- Berch DB, Krikorian R, Huha EM. The Corsi block-tapping task: methodological and theoretical considerations. Brain Cogn 1998 Dec;38(3):317-338. [CrossRef] [Medline]
- Kiely KM, Butterworth P, Watson N, Wooden M. The Symbol Digit Modalities Test: normative data from a large nationally representative sample of Australians. Arch Clin Neuropsychol 2014 Dec;29(8):767-775. [CrossRef] [Medline]
- Bobholz JA, Rao SM. Cognitive dysfunction in multiple sclerosis: a review of recent developments. Curr Opin Neurol 2003 Jun;16(3):283-288. [CrossRef] [Medline]
- Langdon DW. Cognition in multiple sclerosis. Curr Opin Neurol 2011 Jun;24(3):244-249. [CrossRef] [Medline]
- Hoang P, Schoene D, Gandevia S, Smith S, Lord SR. Effects of a home-based step training programme on balance, stepping, cognition and functional performance in people with multiple sclerosis: a randomized controlled trial. Mult Scler 2016 Jan;22(1):94-103. [CrossRef] [Medline]
- Castellanos FX, Sonuga-Barke EJS, Scheres A, Di Martino A, Hyde C, Walters JR. Varieties of attention-deficit/hyperactivity disorder-related intra-individual variability. Biol Psychiatry 2005 Jun 01;57(11):1416-1423 [FREE Full text] [CrossRef] [Medline]
- Hetherington CR, Stuss DT, Finlayson MA. Reaction time and variability 5 and 10 years after traumatic brain injury. Brain Inj 1996 Jul;10(7):473-486. [CrossRef] [Medline]
- Harel Y, Kalron A, Menascu S, Magalashvili D, Dolev M, Doniger G, et al. Cognitive function in multiple sclerosis: a long-term look on the bright side. PLoS ONE 2019;14(8):e0221784 [FREE Full text] [CrossRef] [Medline]
Abbreviations
ACE: Adaptive Cognitive Evaluation
BRT: basic reaction time
CI: cognitive impairment
MS: multiple sclerosis
PASAT: Paced Auditory Serial Addition Test
RT: reaction time
SAAT: Sustained Attention ACE Task
SDMT: Symbol Digit Modalities Test
TOVA: Test of Variables of Attention
UCSF: University of California, San Francisco
Edited by G Eysenbach; submitted 13.11.20; peer-reviewed by A Wright, A Mendez, S Kollins, V Horner; comments to author 13.01.21; revised version received 29.01.21; accepted 16.11.21; published 30.12.21
Copyright©Wan-Yu Hsu, William Rowles, Joaquin A Anguera, Annika Anderson, Jessica W Younger, Samuel Friedman, Adam Gazzaley, Riley Bove. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 30.12.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.