Published on 23.03.2022 in Vol 24, No 3 (2022): March

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/29114.
Process and Outcome Evaluations of Smartphone Apps for Bipolar Disorder: Scoping Review


Review

1Translational and Clinical Research Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom

2National Specialist Adolescent Mood Disorders Service, Cumbria Northumberland Tyne and Wear NHS Foundation Trust, Walkergate Park, Newcastle upon Tyne, United Kingdom

3Open Lab, Human Computer Interaction, Urban Sciences Building, Newcastle University, Newcastle upon Tyne, United Kingdom

*these authors contributed equally

Corresponding Author:

Aditya Narain Sharma, MBBS, MD, PhD

Translational and Clinical Research Institute

Faculty of Medical Sciences

Newcastle University

Academic Psychiatry, Wolfson Research Centre

Campus for Ageing and Vitality

Newcastle upon Tyne, NE4 5PL

United Kingdom

Phone: 44 1912875262

Email: aditya.sharma@ncl.ac.uk


Background: Mental health apps (MHAs) provide opportunities for accessible, immediate, and innovative approaches to better understand and support the treatment of mental health disorders, especially those with a high burden, such as bipolar disorder (BD). Many MHAs have been developed, but few have had their effectiveness evaluated.

Objective: This systematic scoping review explores current process and outcome measures of MHAs for BD with the aim of providing a comprehensive overview of current research. This will identify the best practice for evaluating MHAs for BD and inform future studies.

Methods: A systematic literature search of the health science databases PsycINFO, MEDLINE, Embase, EBSCO, Scopus, and Web of Science was undertaken up to January 2021 (with no start date) to narratively assess how studies had evaluated MHAs for BD.

Results: Of 4051 original search results, 12 articles were included. These 12 studies included 435 participants, of whom 343 had BD type I or II. Moreover, 11 of the 12 studies provided the ages (mean 37 years) of the participants; one study did not report age data. The male to female ratio of the 343 participants was 137:206. The most widely employed validated outcome measure was the Young Mania Rating Scale, being used 8 times. The Hamilton Depression Rating Scale-17/Hamilton Depression Rating Scale was used 3 times; the Altman Self-Rating Mania Scale, Quick Inventory of Depressive Symptomatology, and Functional Assessment Staging Test were used twice; and the Coping Inventory for Stressful Situations, EuroQoL 5-Dimension Health Questionnaire, Generalized Anxiety Disorder Scale-7, Inventory of Depressive Symptomatology, Mindfulness Attention Awareness Scale, Major Depression Index, Morisky-Green 8-item, Perceived Stress Scale, and World Health Organization Quality of Life-BREF were each used once. Self-report measures were captured in 9 different studies, 6 of which used MONARCA. Mood and energy levels were the most commonly used self-report measures, being used 4 times each. Furthermore, 11 of the 12 studies discussed the various confounding factors and barriers to the use of MHAs for BD.

Conclusions: Reported low adherence rates, usability challenges, and privacy concerns act as barriers to the use of MHAs for BD. Moreover, as MHA evaluation is itself still developing, guidance for clinicians on how to support patient choices in mobile health also needs to mature. These obstacles could be ameliorated by incorporating co-production and co-design using participatory patient approaches during the development and evaluation stages of MHAs for BD. Further, including qualitative aspects in trials that examine patient experience of both mental ill health and the MHA itself could result in a more patient-friendly, fit-for-purpose MHA for BD.

J Med Internet Res 2022;24(3):e29114

doi:10.2196/29114




Background

There are many critical factors that can influence the course and outcome of mental health disorders. Two key factors are (1) early and accurate identification of the first onset and subsequent relapses of the disorder, leading to the institution of appropriate management, and (2) access to appropriate treatment locations. For bipolar disorder (BD), the average delay between the onset of symptoms and the first institution of treatment can be as long as 10 to 15 years [1-3]. Between 35% and 50% of patients with mental health disorders receive no treatment because appropriate treatment locations are rare [4]. BD is no exception to this rule. A United Kingdom–based 2015 study found that the median diagnostic delay in the South London and Maudsley National Health Service (NHS) Trust was 62 days, with the median treatment delay being a further 31 days [5]. Research regarding pathways to redress these delays is urgently required, and with the potential to reliably scaffold processes and scale to both large numbers and remote locations, digital technology holds considerable potential to address these challenges.

In 2020, an estimated 6 billion smartphones were in use across the globe [6]. In the United Kingdom, there has been a 79% increase in the number of 5- to 15-year-old children owning mobile phones since 2015 [7]. Although smartphone ownership tends to be more common in high-income countries, the price of smartphones will decrease as economies develop, and this gap in ownership will narrow [6]. One form of digital technology that can capitalize on this increased smartphone usage globally is mental health apps (MHAs).

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart for scoping reviews.

Prior Work

Currently, MHAs can be seen to improve engagement and accessibility for individuals in rural areas where health care provision is increasingly difficult to access [8,9], and they are well accepted by service users [10], so much so that financial incentives for behavioral health information technologies have been implemented in US policy [11]. Such advances in digital mental health care have now been adopted by clinicians in the treatment of common mental disorders in the United Kingdom [9]. Cost-effectiveness is crucial to health care, especially in a government-funded system as comprehensive as the NHS. Evidence suggests that telepsychiatry interventions reduced pressure on mental health services in low- and middle-income countries in comparison to a control group [12], although the authors noted the importance of rigorous app evaluation. This is echoed by Tal et al [13], who described both the potential opportunities and risks posed by digital mental health, and how they must be balanced in order to achieve meaningful change.

The socioeconomic cost of BD in the United Kingdom is well recognized [14]. Previous literature suggests that the use of MHAs for BD can increase patient engagement and provide real-time symptom monitoring to allow for improved recognition of symptoms of relapse [15]. In turn, this reduces barriers to treatment, such as lack of resources and time. However, the efficacy of MHAs for BD is unclear [16], and a paucity of evidence on how to assess and evaluate MHAs for BD makes these statements difficult to substantiate. To date, there is a lack of regulatory guidelines regarding MHAs, including those for BD, as health technologies are a relatively new resource within the NHS. This could potentially lead to unsafe use and practice [17]. Little is known about how MHAs (including those for BD) are developed and scrutinized, and studies predict that consumers, policymakers, health services, and funders will demand a robust evaluation process before funding, prescribing, or using these services [18].

As the development of an MHA for BD requires iterative processes with stakeholders outside of the academic and clinical research environment, process evaluation is important (in addition to more traditional outcome measure methods) to ensure the app remains user-friendly and functional without compromising clinical outcomes. This scoping review aims to address this research gap by mapping the existing literature on process and outcome evaluation methods for MHAs for BD, thereby improving understanding of currently available evaluation tools and current practice.

Aim

The aim of this scoping review is to systematically explore current process and outcome measures to identify the best practice for evaluating MHAs for BD. The focus is on apps for BD designed for individuals across the lifespan. Conducting a scoping review will allow health care systems to be better informed on how to accurately evaluate the effectiveness of such technologies when implementing them into routine care [19]. The specific objectives of this scoping review, based on the detailed framework of Levac et al [20], were to map the available evidence and report on (1) process evaluation methods (ie, participant usability and functionality), (2) outcome measure methods (ie, data on how widely a measure is used and concordance of the target population with completing a measure), (3) outcome measures used to measure mental health improvement (eg, well-being measures), and (4) methods for best practice in the evaluation of MHAs for BD.


Overview

The review was informed by the Arksey and O’Malley 5-step framework [19], which was further developed by the Levac, Colquhoun, and O’Brien model [20]. This includes identification of a research question, study selection and criteria, data extraction, and content analysis. Employing this methodological framework supports examination of the broader field of the evaluation of MHAs for BD in order to identify the best practice.

Search Strategy

A scoping search was initially performed in the following databases: PsycINFO, MEDLINE, Embase, EBSCO, Scopus, and Web of Science. Relevant search terms were identified from key papers, and the search strategy was developed iteratively in MEDLINE and then translated across the other databases, up to January 2021. Due to time constraints, grey literature sources were not investigated, and the search was limited to articles published in English, as no resources were available to undertake translation work. Broad search terms were used to reduce the likelihood of article omission. The complete search strategy for MEDLINE is available in Multimedia Appendix 1. The reference lists of included studies were hand searched for additional reports of relevance.

Selection Criteria

For studies to be included in this review, they needed to meet the following inclusion and exclusion criteria. Studies were included if they (1) were related to BD, (2) targeted individuals across the lifespan, (3) included qualitative and/or quantitative evaluation methods and measures, (4) were published in the English language, (5) had no start date limit, (6) included any type of study design, (7) included participants with symptoms of BD or diagnosed with BD according to International Classification of Diseases-10, Diagnostic and Statistical Manual of Mental Disorders-IV, or Diagnostic and Statistical Manual of Mental Disorders-5, (8) included evaluation of the functionality of the MHA and/or evaluation of the participant outcomes, and (9) included any function (eg, screening, mood monitoring, or medication adherence). Studies were excluded if they (1) were based on a web-based intervention with no MHA counterpart, (2) included MHAs that were only psychotherapeutic intervention specific with no evaluation, (3) were based on MHA development only, and (4) included a participant population without symptoms of BD or not diagnosed with BD. Where systematic review papers were identified, these were not included. However, their reference lists were hand searched to identify primary articles relevant for inclusion.

Given that the aim of the study was to recognize the scope of research already conducted, both qualitative and quantitative research designs were included. As few studies focused on children and adolescents as their participant population, no age limits were applied.

Selection Process

The search was completed by 2 researchers (IT and PK), who independently screened articles by the title and abstract against the inclusion criteria. Articles that fulfilled the inclusion criteria were then subjected to full-text screening by IT and PK. Conflicts were discussed with a third researcher (EBM) to reach consensus. Eight conflicts arose altogether, including 6 when screening the title, 1 when screening the title and abstract, and 1 when screening the full paper.

Data Collection Process

Characterization of the data and the results were exported into a customized data extraction form that was piloted in a subset of included studies. Data extracted included study name, authors, year, country, study design, MHA for BD, whether the MHA for BD was independent or adjunctive, sample size, mean age of the participants, gender of the participants, inclusion/exclusion criteria for the participants, results, tools used, measures used, time points, and whether it addressed any of the 4 objectives.
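
As a minimal sketch of how one row of such an extraction form could be represented programmatically, the snippet below simply mirrors the fields listed above; the field names are illustrative assumptions, not the actual form used in this review.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractionRecord:
    """One row of the piloted data extraction form (illustrative field names only)."""
    study_name: str
    authors: str
    year: int
    country: str
    study_design: str
    mha_name: str                     # the MHA for BD being evaluated
    adjunctive: bool                  # True if used alongside usual care, False if standalone
    sample_size: int
    mean_age: Optional[float]         # None where a study did not report age data
    gender_male: int
    gender_female: int
    inclusion_exclusion: str
    results: str
    tools_used: list = field(default_factory=list)
    measures_used: list = field(default_factory=list)
    time_points: list = field(default_factory=list)
    objectives_addressed: list = field(default_factory=list)  # subset of the review's 4 objectives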

Data Synthesis and Quality Assessment

EC and IT analyzed the data using narrative synthesis and placed this in the context of the current literature to formulate conclusions. The studies were assessed using the Mixed Methods Appraisal Tool (MMAT) [21].


Overview

The original database search yielded 4051 articles. Hand searching of relevant review articles was conducted, which yielded a further 5 articles. After duplicates were removed, 3730 articles remained. Screening of the title and abstract resulted in 3642 articles being excluded. The remaining 88 articles were then subjected to full-text screening, and 76 articles were excluded (71 due to a lack of focus on BD, 2 could not be located, 1 only focused on app development, 1 did not diagnose participants according to our specified criteria, and 1 had no app evaluation).
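
The screening arithmetic reported above can be reproduced as a quick consistency check; this is a worked illustration using the counts stated in this section, not part of the original analysis.

# Reproduce the PRISMA-style flow counts described above.
database_hits = 4051
hand_searched = 5
after_deduplication = 3730                 # 4051 + 5 - 3730 = 326 duplicates removed
excluded_title_abstract = 3642
full_text_screened = after_deduplication - excluded_title_abstract   # 88
excluded_full_text = 71 + 2 + 1 + 1 + 1    # lack of BD focus, not located, development only,
                                           # diagnosis criteria not met, no app evaluation = 76
included = full_text_screened - excluded_full_text                   # 12

assert full_text_screened == 88 and excluded_full_text == 76 and included == 12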

Study Characteristics

Overall, 12 studies were identified as part of this review, which evaluated 7 MHAs for BD. Multimedia Appendix 2 describes the characteristics and assessment compliance of each study [22-33], and Multimedia Appendix 3 describes the results of the respective compliance with the standards set out in the MMAT [22-33]. Across all 12 studies, data from 435 participants were analyzed (343 with BD). Five out of the 12 studies stated the type of BD for 167 participants (112 had bipolar I disorder, 52 had bipolar II disorder, and 3 had bipolar disorder not otherwise specified). Eleven of the 12 studies provided the mean age (37 years) of the participants with BD. All 12 studies provided information on the gender (M:F of 137:206) of the participants with BD.

Assessment of the quality of all 12 studies (quantitative and mixed methods) was performed (IT and EBM) using the MMAT (version 2018) [21]. The results of their respective compliance with the standards set out in the MMAT can be found in Multimedia Appendix 3. Scores ranged from 20% to 100%. However, low-quality studies were not excluded in order to summarize the small pool of available literature.
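
The MMAT (version 2018) rates each study against 5 design-specific methodological criteria; expressing quality as the percentage of criteria met (20% per criterion) is a common convention and presumably underlies the 20% to 100% range reported here, although the MMAT authors caution against relying on a single summary score. A minimal sketch of that convention:

def mmat_percentage(criteria_met: list) -> float:
    """Percentage of the 5 design-specific MMAT (2018) criteria that are met.

    Assumes the common convention of 20% per criterion; the MMAT authors
    discourage using a single summary score on its own.
    """
    if len(criteria_met) != 5:
        raise ValueError("MMAT 2018 defines 5 criteria per study design category")
    return 100 * sum(bool(c) for c in criteria_met) / 5

# A study meeting 1 of 5 criteria scores 20%; a study meeting all 5 scores 100%.
print(mmat_percentage([True, False, False, False, False]))  # 20.0
print(mmat_percentage([True] * 5))                          # 100.0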

Process Evaluation

Six studies examined the self-perceived participant usability and functionality of the MHAs for BD. This examination ranged from detailed feedback questionnaires given to the participants [22,23] to participant feedback suggesting that a reminder prompt from the MHA for BD increased completion rates. Only 1 study mentioned functionality problems in MHAs for BD. Bardram et al [23] commented that MONARCA only worked 63 out of the 69 days of the study period and the information quality score was lower due to unresponsive error messages. The authors also noted that the Android market locked the app during the study period, negatively affecting the pattern of usage during that time.

Two studies recognized that technical problems were likely to arise and so implemented a system to solve these problems. Hidalgo-Mazzei et al [34] supplied participants with technical support via telephone, so they could contact the researcher for further assistance. A similar system was put in place by Schärer et al [24]. Subjects were able to report errors and receive immediate assistance by phone or personal communication. However, it was found that the MHA for BD required a certain amount of knowledge as a prerequisite, which restricted its use in comparison to a text message equivalent [24].

Outcome Evaluation

A variety of validated outcome measures were used to evaluate the selected MHAs for BD. The most widely employed measure was the Young Mania Rating Scale (n=8) [35]. The Hamilton Depression Rating Scale [36] was applied 3 times, and the Altman Self-Rating Mania Scale [37], Quick Inventory of Depressive Symptomatology [38], and Functional Assessment Staging Test [39] were used on 2 occasions. The Coping Inventory for Stressful Situations [40], EuroQoL 5-Dimension Health Questionnaire [41], Generalized Anxiety Disorder Scale-7 [41], Inventory of Depressive Symptomatology [42], Mindfulness Attention Awareness Scale [43], Major Depression Index [44], Morisky-Green 8-item [45], Perceived Stress Scale [46], and World Health Organization Quality of Life-BREF [47] were all utilized just once. Other measures were assessed in 9 different studies, 6 of which used MONARCA. These measures included mood, sleep length, medication taken, activity level, irritability, mixed mood, cognitive problems, alcohol consumption, stress level, menstruation, individualized early warning sign, energy level, anxiety, elation, sadness, anger, speed of thoughts, and impulsivity. Mood and energy level were the most commonly utilized measures, being used 4 times each.

Outcome Measure Methods

Eleven of the 12 studies presented a debate on the confounding factors affecting the efficacy of MHAs for BD. These confounding factors and the number of times they were mentioned in the 11 studies are shown in Table 1.

Table 1. Identified potential confounders for mental health app efficacy.
Potential confounder (number of studies in which mentioned)
Participants were mainly stable or euthymic (4)
Participant insight when experiencing a manic phase varies (3)
Sample size was too small (3)
Length of study was too short (3)
Low retention or adherence rate (2)
Method of objective data collection was not robust enough (2)
Patients were found to be capable of experiencing both manic and depressive symptoms concurrently (1)
Questionnaires given were too simplistic (1)
Opportunity of free-text input not given (1)
Error in the app (1)
Change in mobile phone communication habits (1)
Low prevalence of manic symptoms (1)
Potential sampling bias (1)
Order of questions in the questionnaire did not vary and so was open to mindless input (1)
Scales not delivered often enough (1)
Participants switched the mental health app (MHA) off during the study (1)
Participants may have chosen not to complete the surveys due to their mood (1)
Subjective scales (1)
The MHA gave daily confrontation with depressive symptoms (1)
The MHA was not sensitive enough to manic or depressive symptoms (1)
Participants were already involved in a medication adherence intervention (1)

Evaluation of MHAs for BD

Only 5 of the 12 studies commented on the future of the evaluation of MHAs for BD. Streicher et al [48] suggested that instead of measuring relapse or recurrence of affective episodes, a more sensitive measure would be assessment of mood instability or subsyndromal symptoms. They also commented that future research should include patients with bipolar disorder not otherwise specified, as this patient group may represent a large proportion of patients with BD. Hidalgo-Mazzei et al [34] acknowledged the low retention and adherence rates of MHAs for BD, and stated that researchers should focus on developing new approaches to motivate and engage patients with the intervention in the long term. The authors suggested adopting a user-centered design approach or incorporating gamification elements into a formal psychoeducation process.

Osmani et al [25] commented on the personalization of MHAs for BD, with a focus on physical activity; however, the authors found it difficult to generalize their results to the wider population. This was due to substantial variations between patients for both overall physical activity levels and physical activity levels within daily intervals. Therefore, the authors considered that an adaptive approach to user modeling would be better suited to detect early warning signs of the onset of episodes of BD and facilitate timely intervention. This involves personalizing goals and achievements around each patient’s individual needs. This has been evidenced in previous conference proceedings [48].

Schwartz et al [26] found that the generalizability of the results was limited due to a lack of a comparison group with a differing psychiatric diagnosis, which may exhibit overlapping symptoms. They recommended that future research should use additional comparison groups to better differentiate between symptoms.

Faurholt-Jepsen et al [27] proposed that emphasis should be placed on the differentiation between day-to-day difficulties and depressive symptoms. A positive reinforcing feedback mechanism may help minimize the negative processing bias and so, in theory, relieve the sustained depressive symptoms. They addressed the notion that it can be difficult for an intervention to have an effect on both depressive and manic symptoms, given the complexity of BD [27]. These suggestions are in keeping with the existing literature [49].


Principal Findings and Comparison With Prior Work

The aim of this scoping review was to better understand how MHAs for BD are being evaluated, particularly in terms of the process of use and outcome measures. Due to the scarcity of studies evaluating MHAs for BD specifically, inferences for discussion have been drawn from studies evaluating general MHAs, on the assumption that their functions are similar.

The need for effective and diligent evaluation of MHAs for BD is well established in the literature; no credentialing is currently required for their development and release. Karcher et al [50] warned that the “questionable content” and sparse evidence base of the myriad of MHAs currently available warrant careful consideration. Effective, robust evaluation systems would be required to aid patients and practitioners in making individualized, appropriate decisions regarding the role of MHAs in patient care and treatment options. The NHS in the United Kingdom made considerable progress in this area when it launched the Digital Technology Assessment Criteria (DTAC) in February 2021. This provides a “simpler and faster assessment process to help give staff, patients, and the public confidence that the digital health tools they use meet NHS standards” [51]. The DTAC brings together legislation and good practice into a core document [52] that all digital health technologies have to meet in order to be recommended by clinical policy teams within NHS England and NHS Improvement. Although this goes some way toward providing the public with centrally regulated technologies, clinicians may lack the knowledge and skills required to recommend an MHA effectively [25-27,43-54]. Mindfulness and meditation MHAs are the most commonly recommended by general practitioners in Australia [54]. However, the clinical presentation of BD and its specialist management may deter general practitioners from researching or recommending MHAs for its monitoring or management. Therefore, training health care professionals to identify and select high-quality MHAs for BD would be beneficial for patients. If, as the literature suggests, the use of MHAs decreases service use [10], the financial benefit may outweigh the cost of the additional training required.

One barrier in the development of MHA evaluation systems is the adherence rate. O’Connell [55] reported that 74% of users stop engaging with an MHA after only 10 uses. Low adherence reduces the confidence with which researchers can generalize their results [56]. Personalization and gamification of MHAs for BD, as mentioned above, have been recommended; however, engagement can tail off once the initial novelty of these features has subsided. It has been suggested that a change in the communication approach may help to solve this problem. Kenny et al [57] surmised that engagement levels may be higher in studies where participants (specifically young people) are aware of how important their input is to achieving the research objective. This brings into focus the importance of participatory co-design and co-production of MHAs for BD. Eight of our 12 studies mentioned adherence rates (adherence rates were not applicable in 2 studies, and another 2 studies failed to mention the rates), with the average rate being 84%. In fact, Tsanas et al [28] reported 91% adherence over the first 3 months of the study period and 81.9% in the following 9 months. The variability between the findings of O’Connell [55] and Tsanas et al [28] illustrates that there is still work to be done to achieve successful and reliable compliance with such apps.
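
As a worked illustration of how phase-specific adherence figures such as those reported by Tsanas et al [28] combine over a full study period, the sketch below weights each phase by its duration; the equal-weighting assumption is illustrative and not a calculation made in the original study.

# Time-weighted adherence across the two phases reported by Tsanas et al [28],
# assuming each month contributes equally (an illustrative assumption).
phases = [(3, 0.91), (9, 0.819)]   # (duration in months, adherence rate)
total_months = sum(months for months, _ in phases)
weighted = sum(months * rate for months, rate in phases) / total_months
print(f"{weighted:.1%}")  # ~84.2% over the 12-month study period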

Interestingly, one reason for the low utilization of MHAs for BD may be decreased motivation, which is often a key feature of depressive episodes [57]. Previous literature suggests that “communities of practice” around an MHA can improve long-term concordance, with social interaction and communal use (whether in person or digital) encouraging users to continue to access it [49,50]. Integrating a “days since last updated” screening tool would help identify lapses in the use of MHAs for BD that may signal early relapse, and would aid in assessing clinical usability [18].
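
A “days since last updated” check of this kind is simple to implement. The sketch below, with hypothetical field names and an illustrative 7-day threshold, flags users whose most recent self-monitoring entry is older than the cut-off, which could prompt clinical follow-up or a usability review.

from datetime import date
from typing import Optional

def days_since_last_update(last_entry: date, today: Optional[date] = None) -> int:
    """Days elapsed since the user's most recent self-monitoring entry."""
    today = today or date.today()
    return (today - last_entry).days

def flag_disengaged(last_entries: dict, threshold_days: int = 7) -> list:
    """Return user IDs whose last entry is older than the (illustrative) threshold."""
    return [user for user, last in last_entries.items()
            if days_since_last_update(last) > threshold_days]

# Example with hypothetical data: user_a has lapsed, user_b is up to date.
entries = {"user_a": date(2021, 1, 2), "user_b": date.today()}
print(flag_disengaged(entries, threshold_days=7))  # ['user_a']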

Torous et al [58] interviewed adolescent patients to identify which factors would be useful to bear in mind for MHA development. The results included safety, engagement, functionality, social interaction, awareness, gender, and participative engagement by young people. One study strongly suggested abandoning randomized controlled trials as a method of evaluating and improving apps, and instead called for iterative participatory research or single case designs [59]. Under such an approach, MHA developers would work in collaboration with patients throughout the design and development process to gain regular, ecologically valid feedback so that relevant, appealing prototypes are established. This could take the form of consumer-used tools or accreditation portals [60]. Then, when pilot and nonpilot studies are performed, both qualitative and quantitative data could be obtained to provide valuable feedback on how to further improve the app. The role of randomized controlled trials can then be established in validating the MHAs at later stages. Torous et al [58] theorized further reasons for low engagement, including poor usability, lack of a user-centric design, privacy concerns, and lack of trust. Another study found that issues with MHA efficiency, effectiveness, memorability, learnability, and cognitive load were major usability barriers to continued MHA use [61]. This evidence lends further strength to the argument that streamlining the usability of MHAs should be at the core of future iterative development stages in order to increase adherence rates and improve the ecological validity and reliability of evaluations.

Torous et al [58] also recognized accessibility as a factor to consider when developing MHAs. The Office for National Statistics stated that in 2018, 10% of the UK adult population were internet “nonusers” [62], meaning they had never used the internet or had not used it in the last 3 months. This brings the digital divide into the spotlight and shows that it involves complexities beyond access alone. The digital divide cannot be solved by simply providing patients with devices, as they will also need digital literacy skills to use the device to its full potential. Even though Torous et al [58] interviewed adolescents, their study does illustrate the need to consider the skill level of the intended audience. Ennis et al [63] found that a lack of technological skills was the reason for nonengagement with computers and mobile devices. Furthermore, Ennis et al found that only a quarter of their 121 participants reported familiarity with and easy access to smartphones. Therefore, throughout the MHA evaluation process, patients’ skill sets, in addition to access, should be taken into consideration.

As well as employing participative engagement in MHA development, user awareness can be improved through creative measures, such as describing or advertising the MHA appropriately. Over a quarter of MHAs for depression failed to mention depression in the title or description [64].

Moving to the user-identified priority of safety [29,58], third parties obtaining confidential information is considered the greatest threat to MHA use [65]. Tools are available that can increase device security [50]; however, threats to privacy are continually emerging and endangering data security. For example, identity cannot be confirmed unless a video-calling app is used, and personal devices are easily lost or stolen, leaving data vulnerable [50]. Karcher et al [50] determined that the greatest threat to patient privacy in MHA use was the possibility of confidential information being shared with third parties, whether via patient or clinician devices. Hacking of secure devices and new viruses were also identified as challenges to a secure patient database on MHAs. Beyond its relevance to evaluating the clinical effectiveness of MHAs, the complex legal and ethical landscape surrounding MHA use is also a concern for clinicians themselves [50]. This has been highlighted in recent news, where contact tracing apps used to help curb the spread of COVID-19 have been the subject of widespread debate.

From a more pragmatic standpoint, MHA cost may be a factor in choosing the right MHA for BD, with 76% of people surveyed reporting interest in using their mobile phones for mental health monitoring and self-management if the MHA was free of charge [15]. Moreover, Larsen et al [66] reported that an MHA clinically relevant for depression is being removed from the market every 2.9 days. This furthers the challenge faced by both patients and clinicians in trying to identify a relevant and appropriate MHA for BD. If the MHA for BD is to be paid for and could be removed from the market without warning, it is difficult to justify its recommendation and purchase.

Limitations

This review had its own limitations. Only 7 MHAs for BD were evaluated, somewhat limiting the generalizability of the results of this review. Furthermore, only 5 studies commented on the future of the development and evaluation of MHAs for BD.

Conclusion

The studies in our review focused on patient monitoring as an indicator for process and outcome evaluation in MHAs for BD. They based their conclusions on whether the app improved assessment scores rather than interviewing patients about their experience of using the app. Although this is suggested to be a reliable way of measuring process and outcome, as modern medicine shifts to holistic patient-centered care, more emphasis should be put on users’ experiences rather than quantitative outcomes alone. This will make patients feel respected and involved in the design of MHAs for BD, increasing adherence rates in both the short and long term.

Personalized medicine is a rapidly emerging movement in the field of health care. It is defined as a move away from the “one size fits all” approach to treatment, with new approaches and targeted therapies allowing for flexibility in the management of diseases. With this in mind, more MHAs for BD should be easily available in order to encourage patient choice and freedom to choose an MHA that is best suited to them. At the moment, MONARCA dominates the market, reducing the range and scope of MHAs for BD. As NHS England suggests [60], it can be difficult for an intervention to address both depressive and manic symptoms given the complexity of BD. This is all the more reason to develop a wider variety of apps, with some apps perhaps only focusing on either mania or depression.

The field of MHAs for BD shows promise in both improving patient care and creating a more cost-effective health care service [10]. However, as with any new development in health care, it must be appropriately evaluated and regulated. By encouraging patient co-design and co-evaluation, we can develop a new frontier in personalized digital health, while improving patient experience and care.

Authors' Contributions

IT: acquisition of data, analysis and interpretation of data, drafting the article, and final approval of the version to be published. EC: acquisition of data, analysis and interpretation of data, drafting the article, and final approval of the version to be published. KG: design of the study, revising the manuscript critically for important intellectual content, and final approval of the version to be published. PK: design of the study, acquisition of data, analysis and interpretation of data, revising the manuscript critically for important intellectual content, and final approval of the version to be published. JS: concept of the study, revising the manuscript critically for important intellectual content, and final approval of the version to be published. EBM: concept and design of the study, interpretation of the data, revising the manuscript critically for important intellectual content, and final approval of the version to be published. ANS: concept and design of the study, revising the manuscript critically for important intellectual content, and final approval of the version to be published.

Conflicts of Interest

None declared.

Multimedia Appendix 1

MEDLINE and PsycINFO searches.

DOCX File, 46 KB

Multimedia Appendix 2

Study characteristics.

DOCX File, 49 KB

Multimedia Appendix 3

Study designs and outcomes.

DOCX File, 48 KB

  1. Medici CR, Videbech P, Gustafsson LN, Munk-Jørgensen P. Mortality and secular trend in the incidence of bipolar disorder. J Affect Disord 2015 Sep 01;183:39-44. [CrossRef] [Medline]
  2. Drancourt N, Etain B, Lajnef M, Henry C, Raust A, Cochet B, et al. Duration of untreated bipolar disorder: missed opportunities on the long road to optimal treatment. Acta Psychiatr Scand 2013 Feb;127(2):136-144. [CrossRef] [Medline]
  3. Dagani J, Signorini G, Nielssen O, Bani M, Pastore A, Girolamo GD, et al. Meta-analysis of the Interval between the Onset and Management of Bipolar Disorder. Can J Psychiatry 2017 Apr;62(4):247-258 [FREE Full text] [CrossRef] [Medline]
  4. 65th World Health Assembly. Global burden of mental disorders and the need for a comprehensive, coordinated response from health and social sectors at the country level: report by the Secretariat. World Health Organisation. 2012.   URL: https://apps.who.int/iris/handle/10665/78898 [accessed 2021-03-10]
  5. Patel R, Shetty H, Jackson R, Broadbent M, Stewart R, Boydell J, et al. Delays before Diagnosis and Initiation of Treatment in Patients Presenting to Mental Health Services with Bipolar Disorder. PLoS One 2015;10(5):e0126530 [FREE Full text] [CrossRef] [Medline]
  6. Poushter J. Smartphone Ownership and Internet Usage Continues to Climb in Emerging Economies. Pew Research Center. 2016.   URL: https:/​/www.​pewresearch.org/​global/​2016/​02/​22/​smartphone-ownership-and-internet-usage-continues-to-climb-in-emerging-economies/​ [accessed 2021-03-10]
  7. Children and parents: media use and attitudes report: 2013. Ofcom. 2013.   URL: https:/​/www.​ofcom.org.uk/​research-and-data/​media-literacy-research/​childrens/​children-parents-oct-2013 [accessed 2021-03-10]
  8. Rosenfeld A, Pendse S, Nugent NR. How mobile health applications can help treat depression. The Brown University Child and Adolescent Behavior Letter 2017 Aug 22;33(9):1-6. [CrossRef]
  9. Chandrashekar P. Do mental health mobile apps work: evidence and recommendations for designing high-efficacy mental health mobile apps. Mhealth 2018;4:6 [FREE Full text] [CrossRef] [Medline]
  10. Ben-Zeev D, Buck B, Hallgren K, Drake RE. Effect of Mobile Health on In-person Service Use Among People With Serious Mental Illness. Psychiatr Serv 2019 Jun 01;70(6):507-510. [CrossRef] [Medline]
  11. Roberts LW, Chan S, Torous J. New tests, new tools: mobile and connected technologies in advancing psychiatric diagnosis. NPJ Digit Med 2018;1:20176 [FREE Full text] [CrossRef] [Medline]
  12. Lazar MA, Pan Z, Ragguett R, Lee Y, Subramaniapillai M, Mansur RB, et al. Digital revolution in depression: A technologies update for clinicians. Personalized Medicine in Psychiatry 2017 Dec;4-6:1-6. [CrossRef]
  13. Tal A, Torous J. The digital mental health revolution: Opportunities and risks. Psychiatr Rehabil J 2017 Sep;40(3):263-265. [CrossRef] [Medline]
  14. Das Gupta R, Guest JF. Annual cost of bipolar disorder to UK society. Br J Psychiatry 2002 Mar;180:227-233. [CrossRef] [Medline]
  15. Proudfoot J. The future is in our hands: the role of mobile phones in the prevention and management of mental disorders. Aust N Z J Psychiatry 2013 Feb;47(2):111-113. [CrossRef] [Medline]
  16. Wisniewski H, Liu G, Henson P, Vaidyam A, Hajratalli NK, Onnela J, et al. Understanding the quality, effectiveness and attributes of top-rated smartphone health apps. Evid Based Ment Health 2019 Feb;22(1):4-9 [FREE Full text] [CrossRef] [Medline]
  17. O'Brien KK, Colquhoun H, Levac D, Baxter L, Tricco AC, Straus S, et al. Advancing scoping study methodology: a web-based survey and consultation of perceptions on terminology, definition and methodological steps. BMC Health Serv Res 2016 Jul 26;16:305 [FREE Full text] [CrossRef] [Medline]
  18. Anderson S, Allen P, Peckham S, Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Res Policy Syst 2008 Jul 09;6:7 [FREE Full text] [CrossRef] [Medline]
  19. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology 2005 Feb;8(1):19-32. [CrossRef]
  20. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010 Sep 20;5:69 [FREE Full text] [CrossRef] [Medline]
  21. Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. EFI 2018 Dec 18;34(4):285-291. [CrossRef]
  22. Hidalgo-Mazzei D, Mateu A, Reinares M, Murru A, Del Mar Bonnín C, Varo C, et al. Psychoeducation in bipolar disorder with a SIMPLe smartphone application: Feasibility, acceptability and satisfaction. J Affect Disord 2016 Aug;200:58-66. [CrossRef] [Medline]
  23. Bardram J, Frost M, Szántó K, Faurholt-Jepsen M, Vinberg M, Kessing LV. Designing mobile health technology for bipolar disorder: a field trial of the monarca system. In: CHI '13: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2013 Presented at: SIGCHI Conference on Human Factors in Computing Systems; April 27-May 2, 2013; Paris, France p. 2627-2636. [CrossRef]
  24. Schärer LO, Krienke UJ, Graf S, Meltzer K, Langosch JM. Validation of life-charts documented with the personal life-chart app - a self-monitoring tool for bipolar disorder. BMC Psychiatry 2015 Mar 14;15:49 [FREE Full text] [CrossRef] [Medline]
  25. Osmani V, Maxhuni A, Grünerbl A, Lukowicz P, Haring C, Mayora O. Monitoring activity of patients with bipolar disorder using smart phones. In: MoMM '13: Proceedings of International Conference on Advances in Mobile Computing & Multimedia. 2013 Presented at: International Conference on Advances in Mobile Computing & Multimedia; December 2-4, 2013; Vienna, Austria p. 85-92. [CrossRef]
  26. Schwartz S, Schultz S, Reider A, Saunders EFH. Daily mood monitoring of symptoms using smartphones in bipolar disorder: A pilot study assessing the feasibility of ecological momentary assessment. J Affect Disord 2016 Feb;191:88-93 [FREE Full text] [CrossRef] [Medline]
  27. Faurholt-Jepsen M, Frost M, Ritz C, Christensen EM, Jacoby AS, Mikkelsen RL, et al. Daily electronic self-monitoring in bipolar disorder using smartphones - the MONARCA I trial: a randomized, placebo-controlled, single-blind, parallel group trial. Psychol Med 2015 Oct;45(13):2691-2704. [CrossRef] [Medline]
  28. Tsanas A, Saunders KEA, Bilderbeck AC, Palmius N, Osipov M, Clifford GD, Goodwin GM, et al. Daily longitudinal self-monitoring of mood variability in bipolar disorder and borderline personality disorder. J Affect Disord 2016 Nov 15;205:225-233 [FREE Full text] [CrossRef] [Medline]
  29. Faurholt-Jepsen M, Ritz C, Frost M, Mikkelsen RL, Margrethe Christensen E, Bardram J, et al. Mood instability in bipolar disorder type I versus type II-continuous daily electronic self-monitoring of illness activity using smartphones. J Affect Disord 2015 Nov 01;186:342-349. [CrossRef] [Medline]
  30. Faurholt-Jepsen M, Vinberg M, Frost M, Christensen EM, Bardram JE, Kessing LV. Smartphone data as an electronic biomarker of illness activity in bipolar disorder. Bipolar Disord 2015 Nov;17(7):715-728. [CrossRef] [Medline]
  31. Guidi A, Salvi S, Ottaviano M, Gentili C, Bertschy G, de Rossi D, et al. Smartphone Application for the Analysis of Prosodic Features in Running Speech with a Focus on Bipolar Disorders: System Performance Evaluation and Case Study. Sensors (Basel) 2015 Nov 06;15(11):28070-28087 [FREE Full text] [CrossRef] [Medline]
  32. Faurholt-Jepsen M, Frost M, Vinberg M, Christensen EM, Bardram JE, Kessing LV. Smartphone data as objective measures of bipolar disorder symptoms. Psychiatry Res 2014 Jun 30;217(1-2):124-127. [CrossRef] [Medline]
  33. Beiwinkel T, Kindermann S, Maier A, Kerl C, Moock J, Barbian G, et al. Using Smartphones to Monitor Bipolar Disorder Symptoms: A Pilot Study. JMIR Ment Health 2016 Jan 06;3(1):e2 [FREE Full text] [CrossRef] [Medline]
  34. Hidalgo-Mazzei D, Mateu A, Reinares M, Matic A, Vieta E, Colom F. Internet-based psychological interventions for bipolar disorder: Review of the present and insights into the future. J Affect Disord 2015 Dec 01;188:1-13. [CrossRef] [Medline]
  35. Young RC, Biggs JT, Ziegler VE, Meyer DA. A rating scale for mania: reliability, validity and sensitivity. Br J Psychiatry 1978 Nov;133:429-435. [CrossRef] [Medline]
  36. Hamilton M. A rating scale for depression. J Neurol Neurosurg Psychiatry 1960 Feb;23:56-62 [FREE Full text] [CrossRef] [Medline]
  37. Altman EG, Hedeker D, Peterson JL, Davis JM. The Altman Self-Rating Mania Scale. Biological Psychiatry 1997 Nov 15;42(10):948-955. [CrossRef] [Medline]
  38. Rush AJ, Trivedi MH, Ibrahim HM, Carmody TJ, Arnow B, Klein DN, et al. The 16-Item Quick Inventory of Depressive Symptomatology (QIDS), clinician rating (QIDS-C), and self-report (QIDS-SR): a psychometric evaluation in patients with chronic major depression. Biol Psychiatry 2003 Sep 01;54(5):573-583. [CrossRef] [Medline]
  39. Madera J, Such P, Zhang P, Baker RA, Grande I. Use of the Functioning Assessment Short Test (FAST) in defining functional recovery in bipolar I disorder. Post-hoc analyses of long-term studies of aripiprazole once monthly as maintenance treatment. Neuropsychiatr Dis Treat 2019;15:2325-2338 [FREE Full text] [CrossRef] [Medline]
  40. Endler NS, Parker JDA. Assessment of multidimensional coping: Task, emotion, and avoidance strategies. Psychological Assessment 1994 Mar;6(1):50-60. [CrossRef]
  41. Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med 2006 May 22;166(10):1092-1097. [CrossRef] [Medline]
  42. Rush AJ, Carmody T, Reimitz P. The Inventory of Depressive Symptomatology (IDS): Clinician (IDS-C) and Self-Report (IDS-SR) ratings of depressive symptoms. Int J Methods Psychiatr Res 2006 Jun;9(2):45-59. [CrossRef]
  43. Brown KW, Ryan RM. The benefits of being present: mindfulness and its role in psychological well-being. J Pers Soc Psychol 2003 Apr;84(4):822-848. [CrossRef] [Medline]
  44. Olsen LR, Jensen DV, Noerholm V, Martiny K, Bech P. The internal and external validity of the Major Depression Inventory in measuring severity of depressive states. Psychol Med 2003 Feb;33(2):351-356. [CrossRef] [Medline]
  45. Tan X, Patel I, Chang J. Review of the four item Morisky Medication Adherence Scale (MMAS-4) and eight item Morisky Medication Adherence Scale (MMAS-8). Innov Pharm 2014 Jan 01;5(3):347. [CrossRef]
  46. Cohen S. Perceived stress in a probability sample of the United States. In: Spacapan S, Oskamp S, editors. The social psychology of health. Thousand Oaks, CA: Sage Publications, Inc; 1988:31-67.
  47. No authors listed. The World Health Organization Quality of Life assessment (WHOQOL): position paper from the World Health Organization. Soc Sci Med 1995 Nov;41(10):1403-1409. [CrossRef] [Medline]
  48. Streicher A, Smeddinck J. Personalized and Adaptive Serious Games. In: Dörner R, Göbel S, Kickmeier-Rust M, Masuch M, Zweig K, editors. Entertainment Computing and Serious Games. Lecture Notes in Computer Science, vol 9970. Cham: Springer; 2016:332-377.
  49. Smeddinck J, Herrlich M, Malaka R. Exergames for Physiotherapy and Rehabilitation: A Medium-term Situated Study of Motivational Aspects and Impact on Functional Reach. In: CHI '15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 2015 Presented at: 33rd Annual ACM Conference on Human Factors in Computing Systems; April 18-23, 2015; Seoul, Republic of Korea p. 4143-4146. [CrossRef]
  50. Karcher NR, Presser NR. Ethical and Legal Issues Addressing the Use of Mobile Health (mHealth) as an Adjunct to Psychotherapy. Ethics & Behavior 2016 Sep 03;28(1):1-22. [CrossRef]
  51. New simpler and faster assessment process for digital health technologies launched for the NHS and social care. NHS. 2021.   URL: https:/​/www.​nhsx.nhs.uk/​news/​new-simpler-and-faster-assessment-process-for-digital-health-technologies-launched-for-the-nhs-and-social-care/​ [accessed 2021-06-24]
  52. How to use the DTAC. NHS.   URL: https:/​/www.​nhsx.nhs.uk/​key-tools-and-info/​digital-technology-assessment-criteria-dtac/​how-to-use-the-dtac/​ [accessed 2021-06-24]
  53. NHS Apps Library. NHS.   URL: https://www.nhs.uk/apps-library/ [accessed 2021-06-24]
  54. Byambasuren O, Beller E, Glasziou P. Current Knowledge and Adoption of Mobile Health Apps Among Australian General Practitioners: Survey Study. JMIR Mhealth Uhealth 2019 Jun 03;7(6):e13199 [FREE Full text] [CrossRef] [Medline]
  55. O'Connell C. 23% of Users Abandon an App After One Use. DZone. 2016.   URL: https://dzone.com/articles/23-of-users-abandon-an-app-after-one-use [accessed 2021-03-01]
  56. Arean PA, Hallgren KA, Jordan JT, Gazzaley A, Atkins DC, Heagerty PJ, et al. The Use and Effectiveness of Mobile Apps for Depression: Results From a Fully Remote Clinical Trial. J Med Internet Res 2016 Dec 20;18(12):e330 [FREE Full text] [CrossRef] [Medline]
  57. Kenny R, Dooley B, Fitzgerald A. Ecological Momentary Assessment of Adolescent Problems, Coping Efficacy, and Mood States Using a Mobile Phone App: An Exploratory Study. JMIR Ment Health 2016 Nov 29;3(4):e51 [FREE Full text] [CrossRef] [Medline]
  58. Torous J, Nicholas J, Larsen ME, Firth J, Christensen H. Clinical review of user engagement with mental health smartphone apps: evidence, theory and improvements. Evid Based Ment Health 2018 Aug 05;21(3):116-119. [CrossRef] [Medline]
  59. Nicholas J, Boydell K, Christensen H. mHealth in psychiatry: time for methodological change. Evid Based Ment Health 2016 May;19(2):33-34. [CrossRef] [Medline]
  60. Improving Outcomes Through Personalised Medicine. NHS.   URL: https://www.england.nhs.uk/wp-content/uploads/2016/09/improving-outcomes-personalised-medicine.pdf [accessed 2021-06-24]
  61. Blankenhagel KJ. Identifying Usability Challenges of eHealth Applications for People with Mental Disorders: Errors and Design Recommendations. In: PervasiveHealth'19: Proceedings of the 13th EAI International Conference on Pervasive Computing Technologies for Healthcare. 2019 Presented at: 13th EAI International Conference on Pervasive Computing Technologies for Healthcare; May 20-23, 2019; Trento, Italy p. 91-100. [CrossRef]
  62. Exploring the UK’s digital divide. Office for National Statistics. 2019.   URL: https:/​/www.​ons.gov.uk/​peoplepopulationandcommunity/​householdcharacteristics/​homeinternetandsocialmediausage/​articles/​exploringtheuksdigitaldivide/​2019-03-04 [accessed 2021-06-04]
  63. Ennis L, Rose D, Denis M, Pandit N, Wykes T. Can't surf, won't surf: the digital divide in mental health. J Ment Health 2012 Aug 19;21(4):395-403 [FREE Full text] [CrossRef] [Medline]
  64. Shen N, Levitan M, Johnson A, Bender JL, Hamilton-Page M, Jadad AAR, et al. Finding a depression app: a review and content analysis of the depression app marketplace. JMIR Mhealth Uhealth 2015 Feb 16;3(1):e16 [FREE Full text] [CrossRef] [Medline]
  65. Guidelines for the Practice of Telepsychology. American Psychological Association.   URL: https://www.apa.org/practice/guidelines/telepsychology [accessed 2021-03-01]
  66. Larsen ME, Nicholas J, Christensen H. Quantifying App Store Dynamics: Longitudinal Tracking of Mental Health Apps. JMIR Mhealth Uhealth 2016 Aug 09;4(3):e96 [FREE Full text] [CrossRef] [Medline]


BD: bipolar disorder
DTAC: Digital Technology Assessment Criteria
MHA: mental health app
MMAT: Mixed Methods Appraisal Tool
NHS: National Health Service


Edited by A Mavragani; submitted 26.03.21; peer-reviewed by N Khalili-Mahani, F Lobban, H Mehdizadeh; comments to author 07.05.21; revised version received 28.07.21; accepted 01.12.21; published 23.03.22

Copyright

©Iona Tatham, Ellisiv Clarke, Kelly Ann Grieve, Pulkit Kaushal, Jan Smeddinck, Evelyn Barron Millar, Aditya Narain Sharma. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 23.03.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.