Acceptability, Engagement, and Effects of a Mobile Digital Intervention to Support Mental Health for Young Adults Transitioning to College: Pilot Randomized Controlled Trial

Background: The transition from high school to college can exacerbate mental health problems in young adults, yet barriers prevent seamless mental health care. Existing digital support tools show promise but are not yet designed to optimize engagement or implementation. Objective: The goal of the research was to test the acceptability and effects of an automated digital Mobile Support Tool for Mental Health (MoST-MH) for young adults transitioning to college. Methods: Youths aged 18 years and older with a current mental health diagnosis who were preparing to transition to college (n=52; 85% female [45/52], 91% White [48/52]) were recruited from a primary care clinic (n=31) and a mental health clinic (n=21). Participants were randomized 2:1 to receive either MoST-MH (n=34) or enhanced usual care (eUC; n=18). MoST-MH included periodic text message and web-based check-ins of emotional health, stressors, negative impacts, and self-efficacy that informed tailored self-care support messages. Both eUC and MoST-MH participants received links to a library of psychoeducational videos and were asked to complete web-based versions of the Mental Health Self-Efficacy Scale (MHSES), Counseling Center Assessment of Psychological Symptoms (CCAPS), and Client Service Receipt Inventory for Mental Health (C-SRI) monthly for 3 months, and the Post-Study System Usability Questionnaire (PSSUQ) at 3 months. Results: MoST-MH participants were sent a median of 5 (range 3 to 10) text message check-in prompts over the 3-month study period, of which 100% were completed; participants were sent a median of 2 (range 1 to 8) web-based check-in prompts, of which 78% (43/55) were completed. PSSUQ scores indicated high usability (mean score 2.0). Results from the completer analysis demonstrated reductions in mental health symptoms over time and significant between-group effects of MoST-MH compared to eUC on depressive symptom severity (d=0.36, 95% CI 0.08 to 0.64). No significant differences in mental health self-efficacy or mental health care use were observed. Conclusions: In this pilot trial, we found preliminary evidence that youth engaged with MoST-MH at high rates and rated it as highly usable, and that MoST-MH reduced depressive symptoms relative to eUC among youth with mental health disorders transitioning to college. Outcomes were measured during the COVID-19 pandemic, and the study was not powered to detect differences in outcomes between groups; therefore, further testing is needed. Trial Registration: ClinicalTrials.gov NCT04560075; https://clinicaltrials.gov/ct2/show/NCT04560075


INTRODUCTION

2a-i) Problem and the type of system/solution

2a-ii) Scientific background, rationale: What is known about the (type of) system

"MoST-MH was intended to provide support independent of mental health diagnosis type (i.e., transdiagnostic) and was designed to minimize the burden of intensive digital interactions using a stepwise algorithm that adapts the frequency of interaction to the needs of the youth."

METHODS

3a) CONSORT: Description of trial design (such as parallel, factorial) including allocation ratio

3b) CONSORT: Important changes to methods after trial commencement (such as eligibility criteria), with reasons

"We hypothesized that youth would engage with MoST-MH at high rates over the first 3 months of college and that they would report high levels of usability. We also hypothesized that youth who received MoST-MH, as compared with youth who received eUC, would report greater mental health self-efficacy, lower symptom severity, and higher rates of follow-through with mental health care at 3 months."

3b-i) Bug fixes, Downtimes, Content Changes

4a) CONSORT: Eligibility criteria for participants

"We conducted a pilot randomized trial among youth with a current mental health disorder and/or recent mental health care who were preparing to transition to college. The design and a priori hypotheses were registered (ClinicalTrials.gov NCT04560075). As this was a pilot study, we were not powered to detect significant differences in mental health outcomes between groups. All participants completed written informed consent. Study investigators and outcome assessors were blinded to allocation to treatment arms."

4a-i) Computer / Internet literacy

4a-ii) Open vs. closed, web-based vs. face-to-face assessments: Not applicable to this study

4a-iii) Information giving during recruitment

"Participants were recruited from one primary care clinic (n=31) and one mental health clinic (n=21) in Pittsburgh, PA, from August to October 2020."

4b) CONSORT: Settings and locations where the data were collected: Not applicable to this study

4b-i) Report if outcomes were (self-)assessed through online questionnaires

4b-ii) Report how institutional affiliations are displayed

"Each monthly assessment battery was estimated to take 15 minutes to complete and was completed on a smartphone, laptop, tablet, or desktop. Participants in both groups were sent text-message reminders every 3 days, up to 3 times, prompting them to complete their web-based follow-up assessment batteries."

5) CONSORT: Describe the interventions for each group with sufficient details to allow replication, including how and when they were actually administered

5-i) Mention names, credentials, affiliations of the developers, sponsors, and owners

5-ii) Describe the history/development process

"We developed an automated Mobile Support Tool for Mental Health (MoST-MH). MoST-MH was iteratively designed and refined by a multidisciplinary team with expertise in psychology, psychiatry, primary care, and digital interventions, with integral feedback from a college student ambassador."
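As a minimal illustration of the reminder schedule quoted above (a text-message reminder every 3 days, up to 3 times, after each monthly assessment battery), the reminder dates could be computed as follows. The function name and defaults are illustrative sketches, not part of the study's actual software:

```python
from datetime import date, timedelta

def reminder_dates(assessment_sent, max_reminders=3, interval_days=3):
    """Dates on which a participant who has not yet completed the
    assessment battery would be sent a text-message reminder."""
    return [assessment_sent + timedelta(days=interval_days * k)
            for k in range(1, max_reminders + 1)]

# An assessment sent on 2020-09-01 yields reminders on Sep 4, 7, and 10.
```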

5-iii) Revisions and updating
We had not done any prior usability testing.

5-iv) Quality assurance methods: Not applicable to this study

5-v) Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used

There was no specific quality assurance set up.

5-vi) Digital preservation

"Upon allocation, MoST-MH participants were prompted to text a unique keyword to our study phone number to initiate the program. Once initiated, participants received a series of welcome messages describing what to expect over the intervention period and ways to reduce the risk of a breach of privacy. For example: "Welcome to MoST-MH. Over the next 3 months we'll be checking in by text message. Set up a password on your phone and erase messages you do not want anyone to see after reading them." Participants were instructed that they could drop out of the MoST-MH program at any time by texting "Quit." Starting on the day of enrollment, MoST-MH participants received a mental health check-in via text message: "How would you rate your emotional health this past week?" If they replied "excellent," "very good," or "good," they received a positive feedback text message and a link to the video library. The brief 2-minute videos were created by the study team and included psychoeducation about mental health self-care during college. If they replied "fair" or "poor," they were sent a link to complete a web-based check-in. Upon opening the web link, a page displayed a checklist of common stressors [7] and negative effects (shown in Table 2). They were then asked a self-efficacy question: "To what extent do you feel you can manage your stressors and negative effects with the supports and skills you have?" If they reported high self-efficacy ("completely"), they received positive feedback and a web link to a library of mental health videos, and the program was timed to check in with them in a month. If they reported low self-efficacy ("somewhat," "a little," or "not at all"), they received a text message from a skills library and the link to the videos, and were asked if it was ok to check in the next week. On subsequent MoST-MH check-ins, their reports of stressors and negative effects were compared to the prior assessment, and feedback incorporated relative improvement or unresolved stressors/effects. If the ability to self-manage stressors or negative effects was still reported as suboptimal, the individual was prompted to consider making an appointment for mental health care: "Your doctor or another health professional can help. Would you be willing to reach out to them to set up an appointment?" If they were willing, they were provided with a link to resources to assist. Throughout all program queries, missing responses were re-prompted only once. To ensure safety, if an individual reported poor mental health and low self-efficacy 2 weeks in a row, they were prompted to seek formal mental health care. Figure 2 demonstrates an example communication exchange."
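The stepwise branching described in the passage above can be sketched as a single decision function. This is a simplified illustration derived only from the rules quoted in this section; the function, action names, and intervals are hypothetical and do not come from the study's actual implementation:

```python
GOOD_RATINGS = {"excellent", "very good", "good"}

def next_step(emotional_health, self_efficacy=None):
    """Return the program's next action for one check-in cycle.

    emotional_health: reply to the text-message check-in.
    self_efficacy: reply to the web-based self-efficacy question,
    or None if the web check-in was not triggered/completed yet.
    """
    if emotional_health.lower() in GOOD_RATINGS:
        # Positive rating: positive feedback plus the video-library link.
        return {"action": "positive_feedback", "video_library": True}
    if self_efficacy is None:
        # "fair"/"poor": send the link to the web-based check-in.
        return {"action": "send_web_checkin"}
    if self_efficacy.lower() == "completely":
        # High self-efficacy: feedback, videos, next check-in in a month.
        return {"action": "positive_feedback", "video_library": True,
                "next_checkin_days": 30}
    # Low self-efficacy: skills-library message, videos, check in next week.
    return {"action": "skills_message", "video_library": True,
            "next_checkin_days": 7}
```

Comparing a check-in against the prior assessment and the two-weeks-in-a-row safety escalation would sit on top of this per-cycle logic.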

5-vii) Access
See prior response. All content represented in figures and Methods.

5-viii) Mode of delivery, features/functionalities/components of the intervention and comparator, and the theoretical framework
See Section V above for further description.

5-ix) Describe use parameters

"MoST-MH was intended to provide support independent of mental health diagnosis type (i.e., transdiagnostic) and was designed to minimize the burden of intensive digital interactions using a stepwise algorithm that adapts the frequency of interaction to the needs of the youth. MoST-MH is an ecological momentary intervention (EMI) [16] in that it provides support in the context of a young adult's current state and needs. Specifically, MoST-MH incorporated periodic text-message mental health check-ins, which triggered web-based check-ins (when mental health was rated low) to understand stressors, negative effects, and self-efficacy, which in turn informed self-efficacy support strategies and prompted links to psychoeducational videos focused on college and mental health. Figure 1 outlines the design of MoST-MH."

5-x) Clarify the level of human involvement
See Section V above for a complete description.

5-xi) Report any prompts/reminders used: Not applicable to our study

5-xii) Describe any co-interventions (incl. training/support): See Section V above for a complete description

6a) CONSORT: Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed

"Participants were recruited from one primary care clinic (n=31) and one mental health clinic (n=21) in Pittsburgh, PA, from August to October 2020. We chose to recruit from healthcare sites because we view the ultimate implementation as being initiated by care providers, who are able to identify individuals with mental health needs prior to leaving for college. The youth's care provider(s) identified potentially eligible youth and asked about their interest in participating in the study; interested youth were texted or emailed a web link that provided information about the study. If they were interested, they contacted the research team via telephone, where enrollment criteria were confirmed. Inclusion criteria were: 18 years of age or older; a current mental health diagnosis documented in the electronic medical record and/or receipt of mental health services within the past 3 months per self-, parent-, or clinician-report; high school graduation; a plan to attend college or higher education within 6 weeks; and ownership of a personal mobile phone with text messaging. We excluded non-English-speaking individuals given that intervention materials were in English only."

6a-i) Online questionnaires: describe if they were validated for online use and apply CHERRIES items to describe how the questionnaires were designed/deployed

6a-ii) Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored

"MoST-MH usability was measured via the Post-Study System Usability Questionnaire (PSSUQ) [19] at 3 months only. The PSSUQ includes 19 items, each rated on a 7-point Likert-type scale ranging from 1 (strongly agree) to 7 (strongly disagree). The psychometric factors of the PSSUQ are (1) overall usability, (2) system usefulness, (3) information quality, and (4) interface quality. The lower the score (to a limit of 1), the higher the perceived usability. Mental health self-efficacy was measured using the 6-item self-report Mental Health Self-Efficacy Scale (MHSES) [20], which asks participants to rate each statement on a 10-point Likert scale ranging from 1 ("Not at all confident") to 10 ("Totally confident"), whereby higher scores indicate higher self-efficacy: "On an average day in the next month, how confident are you that… (1) You can keep your stress, anxiety or depression from interfering with the things that you want to do?; (2) You can do the different tasks and activities needed to manage your stress, anxiety or depression so as to reduce your need to see a doctor?; (3) You can do things other than just taking medicine to reduce how much your stress, anxiety or depression affects your everyday life?; (4) You can make your days at least moderately enjoyable?; (5) You will have moderate amounts of time where you do not experience stress, anxiety or depression?; and (6) You will be able to effectively manage any stress, anxiety or depression that you do experience?" Symptom severity was measured using the Counseling Center Assessment of Psychological Symptoms (CCAPS) [21], which has 62 items across eight distinct subscales of psychological symptoms for college students: (a) Depression (13 items), (b) Generalized Anxiety (9 items), (c) Social Anxiety (7 items), (d) Academic Distress (5 items), (e) Eating Concerns (9 items), (f) Family Distress (6 items), (g) Hostility (7 items), and (h) Substance Use (6 items). Items are scored on a 5-point Likert scale from 0 ("Not at all like me") to 4 ("Extremely like me"), whereby higher scores indicate higher symptom severity.
Mental health treatment utilization was measured using the brief self-report Client Service Receipt Inventory for Mental Health (C-SRI) [22], including outpatient, inpatient, and medication management services."

6a-iii) Describe whether, how, and when qualitative feedback from participants was obtained

"We tested the hypothesis that youth would engage with MoST-MH at high rates (>80% response rate) by calculating text message and web check-in completions within and between individuals."

6b) CONSORT: Any changes to trial outcomes after the trial commenced, with reasons

"Each monthly assessment battery was estimated to take 15 minutes to complete and was completed on a smartphone, laptop, tablet, or desktop. Participants in both groups were sent text-message reminders every 3 days, up to 3 times, prompting them to complete their web-based follow-up assessment batteries."

7a) CONSORT: How sample size was determined

7a-i) Describe whether and how expected attrition was taken into account when calculating the sample size

7b) CONSORT: When applicable, explanation of any interim analyses and stopping guidelines

"We tested the hypothesis that youth would engage with MoST-MH at high rates (>80% response rate) by calculating text message and web check-in completions within and between individuals. We tested the hypothesis that youth would report high levels of usability with MoST-MH (mean PSSUQ rating ≤2) by computing PSSUQ ratings at 3-month follow-up. We explored the effect of MoST-MH, as compared with eUC, on mental health self-efficacy (MHSES), symptom severity (CCAPS), and mental health care services utilization (C-SRI) using mixed-effects (generalized estimating equation: GEE) models. Mixed-effects models using GEE are recommended for the analysis of repeated-measures data and can properly account for missing data [23]. To understand for whom the intervention may work better or worse, we explored associations between patient factors (sex, race, planned college attendance, baseline CCAPS scores) and engagement, usability, and mental health outcomes using univariate GEE models. Primary analyses were conducted using listwise deletion."

8a) CONSORT: Method used to generate the random allocation sequence: Not applicable to this study

8b) CONSORT: Type of randomisation; details of any restriction (such as blocking and block size)

We did not stop the trial or conduct interim analyses.

9) CONSORT: Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned