Published on 3.6.2021 in Vol 23, No 6 (2021): June

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/25199.
Validation of Visual and Auditory Digital Markers of Suicidality in Acutely Suicidal Psychiatric Inpatients: Proof-of-Concept Study

Original Paper

1Research and Development, AiCure, New York, NY, United States

2Psychiatry, New York University School of Medicine, New York, NY, United States

3Department of Psychology, University of Zurich, Zurich, Switzerland

4Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland

5Neuroscience Centre Zurich, University of Zurich, Zurich, Switzerland

Corresponding Author:

Vidya Koesmahargyo, MSc

Research and Development

AiCure

19 W 24th St

11th Floor

New York, NY, 10010

United States

Phone: 1 6463015037

Email: vidya.koesmahargyo@aicure.com


Background: Multiple symptoms of suicide risk have been assessed based on visual and auditory information, including flattened affect, reduced movement, and slowed speech. Objective quantification of such symptomatology from novel data sources can increase the sensitivity, scalability, and timeliness of suicide risk assessment.

Objective: We aimed to examine measurements extracted from video interviews using open-source deep learning algorithms to quantify facial, vocal, and movement behaviors in relation to suicide risk severity in recently admitted patients following a suicide attempt.

Methods: We utilized video to quantify facial, vocal, and movement markers associated with mood, emotion, and motor functioning from a structured clinical conversation in 20 patients admitted to a psychiatric hospital following a suicide attempt. Measures were calculated using open-source deep learning algorithms for processing facial expressivity, head movement, and vocal characteristics. Derived digital measures of flattened affect, reduced movement, and slowed speech were compared to suicide risk, measured with the Beck Scale for Suicide Ideation, using multiple linear regression controlling for age and sex.

Results: Suicide severity was associated with multiple visual and auditory markers, including speech prevalence (β=−0.68, P=.02, r2=0.40), overall expressivity (β=−0.46, P=.10, r2=0.27), and head movement measured as head pitch variability (β=−1.24, P=.006, r2=0.48) and head yaw variability (β=−0.54, P=.06, r2=0.32).

Conclusions: Digital measurements of facial affect, movement, and speech prevalence demonstrated strong effect sizes and linear associations with the severity of suicidal ideation.

J Med Internet Res 2021;23(6):e25199

doi:10.2196/25199

Introduction

Timely, efficient, sensitive, and noninvasive measurement approaches are required to improve suicide risk assessment [1]. One promising avenue is the use of remote digital monitoring methodologies that utilize smart devices, such as phones and wearables, in conjunction with deep learning algorithms to infer behavioral and physiological states associated with suicide risk [2,3].

While suicide risk is often comorbid with other mental disorders, in particular, major depressive disorder (MDD) and schizophrenia, suicidal behavior is increasingly recognized as unique in presentation and risk profile [4-7]. Based on prior knowledge, visual and auditory data sources represent a compelling direction in the objective measurement of behavior associated with suicide risk. Emil Kraepelin first observed that suicide risk was associated with melancholic states characterized by slowed speech, where patients appeared to “become mute in the middle of a sentence,” and further observed how “the facial expression and the general attitude are sleepy and languid, the speech is low…” [8]. More recently, reduced facial expressivity and movement measured using standardized coding schemes based on videos of patient interviews differentiated depressed patients with and without suicide risk [9], and altered vocal characteristics have been observed in acutely suicidal patients [5].

A number of visual and auditory characteristics can be directly quantified, including gross motor activity [10], head movement variability [11-13], facial activity [14], and properties of speech [15]. The automated measurement of these clinical features introduces the possibility of objective digital assessment of visual and auditory markers of suicide risk. Given that audio and video data sources can be captured remotely, this further introduces the possibility of greatly scaling the reach and frequency of assessment. Increased scale and objectivity can facilitate increased accuracy and accessibility of clinical risk and treatment response assessment.

Visual and auditory biomarkers were selected based on a mechanistic theory that reduced serotonin, a key risk factor for suicidal behavior and a primary biological target for the treatment of MDD, affects behavioral characteristics, including an individual’s speech, head movement, and facial activity. Serotonergic tone is known to mechanistically impact motor functioning both directly and via interactions with dopamine and norepinephrine signaling [16-18]. Depleted serotonin has been observed in the postmortem brains of individuals with MDD [19] and those who have committed suicide [20]. Furthermore, direct pharmacological manipulation of serotonin using selective serotonin reuptake inhibitors (SSRIs) can increase suicide risk [21,22].

While novel measurements are promising, validation is required before such metrics can be interpreted clinically. Key steps for validation include comparison to traditional clinical measures, both cross-sectionally and as they change alongside treatment and disease course [23]. Such measures should strive to be easy to collect, should have increased sensitivity to facilitate frequent and accurate assessment, and should be validated in relationship to narrower biological phenotypes and treatment targets than traditional endpoints offer. This will lead to improved dynamic treatment research and clinical decision making based on modulation of underlying neurobiological deficits [24,25].

In this study, we examined measurements extracted from video interviews using open-source deep learning algorithms to quantify facial, vocal, and movement behaviors in relation to suicide risk severity in patients interviewed soon after admission to a psychiatric hospital following a suicide attempt.


Methods

Study Participants

Participants were recruited from the Psychiatric University Hospital, Zurich, Switzerland, as part of the “SIMON–Suicide Ideation MONitoring” study. The SIMON study has been funded by the Digital Lives initiative of the Swiss National Science Foundation (SNSF) and was approved by the ethics committee of the Faculty of Philosophy, University of Zurich (approval number: 19.2.1). The study aims to develop a digital protocol to monitor and predict suicidal ideation and hospital readmission in high-risk psychiatric patients. Digital markers of psychopathology are set to be derived from smartphone-based experience sampling, mobile passive sensing, and video recordings of patient free speech.

Patients were included in the study if they met the following criteria: (1) admission to the hospital after a suicide attempt or in the context of suicidal ideation, and suicidal ideation was identified in the first diagnostic intake interview, (2) sufficient knowledge of the German language, (3) being a smartphone user, and (4) discharge from the clinic after identification of suicidal ideation, with established outpatient care contact with a physician or psychologist. Patients were excluded if they met the following criteria: (1) planning to leave the greater Zurich area within the study period, (2) sharing a smartphone with another person, and (3) being active military personnel. Researchers kept track of all patients admitted to the hospital and contacted the treating psychologist or physician in case of eligibility. Patients who met the inclusion criteria and for whom an approval from the psychologist or physician was granted were informed about the study. If patients agreed to participate in the study, they were invited for a baseline assessment appointment that entailed the following: (1) detailed information about the study, (2) collection of informed consent signed by the patient, (3) assessment of current mental disorders with the Mini International Neuropsychiatric Interview (MINI; version 6), (4) a short videotaped semistructured qualitative interview (for which additional informed consent signed by the patient was obtained) upon agreement, (5) electronic/pen and paper questionnaires evaluating relevant psychological variables, and (6) smartphone app installation. Participants were reimbursed with up to CHF 120 (US $127).

At the time of the video assessment, all patients had an inpatient status at the Psychiatric University Hospital Zurich. We recruited patients admitted to the psychiatric hospital to ensure an appropriate reach and a sufficient sample size of the high-risk psychiatric patient group. For practical reasons and because of the format of the semistructured interviews (ie, questions asked by the researcher and video recordings with a tablet), we obtained video recordings during the baseline interview, when patients were still inpatients. However, the remaining parts of the study (ie, remote patient monitoring using smartphones) commenced after patients were discharged from the hospital. In that way, we assessed patient well-being and behavior in an ecologically valid manner outside the hospital. However, this study only focused on the video recordings from the time patients were inpatients.

Participants were recruited for the study on a rolling basis. At the time of the analysis, 30 patients agreed to participate in the videotaped interview, of whom 20 completed the necessary baseline questionnaires. Ultimately, the analysis was conducted on a sample of 20 patients.

Clinical Assessment of Suicidality

Assessment of suicide risk was completed at baseline. The Beck Scale for Suicide Ideation (BSS) questionnaire (German validated version) [26] was administered to evaluate patients’ current intensity of attitudes, plans, and behaviors to commit suicide. Patients’ history of nonsuicidal self-injury and suicide attempts was assessed using the following two self-report items from the Self-Injurious Thoughts and Behaviors Interview (SITBI): (1) “In your life, have you purposefully hurt yourself without wanting to die?” and (2) “How many times in your lifetime have you made an attempt to kill yourself during which you had at least some intent to die?” [27,28].

Collection of Video and Audio Data

At baseline, patients were given the choice of additionally participating in a video-recorded interview. Upon agreement, patients signed an informed consent form specific to this part of the study. A short videotaped semistructured qualitative interview was performed. A laptop was placed on the table in front of the patient, and 1-minute video and audio samples were recorded. During the qualitative video interview, participants answered introductory questions assessing their current state, as well as questions about experiences with different valences (ie, neutral, positive, and negative) and temporal dimensions (ie, past, present, and future). Overall, the following six video and audio samples, each at 1 minute, were recorded per participant: introduction (neutral present), neutral, positive past, positive future, negative past, and negative future. Multimedia Appendix 1 displays exemplary questions asked for each category. The conversation for each category starts with the central question (first in order) asked by the researcher, followed by additional questions to keep the participant involved and talking during the 1-minute recordings.

Machine Learning Computer Vision and Voice Analysis

All analyses were conducted in a Python environment using open-source tools. No novel machine learning models were trained for the purposes of this study; rather, existing tools for the measurement of facial expressivity [29], vocal acoustics [30], and patterns of movement [31] were utilized. The code used to calculate the biomarkers has been compiled into an open-source package, which allows other researchers to replicate the methods used in this manuscript [32]. The videos were segmented to include only the participants’ responses to the interview questions; these free-speech segments were then concatenated for analysis of the digital markers.

Facial Activity Analysis

Initially, all videos were segmented into frames, resulting in a minimum of 33 image frames per second. Thereafter, OpenCV, an open-source computer vision software package [33], was used to segment each image into three matrices consisting of red, green, and blue pixel values. Subsequently, each frame was analyzed using OpenFace [29], an open-source software package that has demonstrated validity against expert human ratings based on the Facial Action Coding System (FACS) [34], a standardized methodology for measuring facial movements that reflect activity in the underlying facial musculature used in the production of basic emotions. Specifically, for each frame, OpenFace outputs (1) a binary value indicating the presence of an action unit and (2) a continuous value indicating the intensity with which the action unit is being expressed. Action unit intensities were used to derive measures of individual emotional expressivity (ie, happiness, sadness, fear, anger, surprise, and disgust expressivity). Action unit intensities were also used to calculate overall expressivity regardless of emotion.
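To illustrate this step, the following minimal sketch (in Python, consistent with the analysis environment described above) shows one way frame-level OpenFace outputs could be aggregated into expressivity scores. It is not the authors' exact pipeline: the CSV column names follow OpenFace's output convention, and the action unit-to-emotion mapping is an assumed example based on common FACS emotion prototypes.

```python
# Minimal sketch: aggregate OpenFace action unit (AU) intensities into emotion
# expressivity scores. The AU-to-emotion mapping below is an assumption based on
# common FACS emotion prototypes, not the mapping reported in the paper.
import pandas as pd

EMOTION_AUS = {
    "happiness": ["AU06_r", "AU12_r"],
    "sadness":   ["AU01_r", "AU04_r", "AU15_r"],
    "surprise":  ["AU01_r", "AU02_r", "AU05_r", "AU26_r"],
    "fear":      ["AU01_r", "AU02_r", "AU04_r", "AU05_r", "AU20_r", "AU26_r"],
    "anger":     ["AU04_r", "AU05_r", "AU07_r", "AU23_r"],
    "disgust":   ["AU09_r", "AU15_r"],
}

def expressivity_scores(openface_csv: str) -> dict:
    """Mean AU intensity (0-5 scale) across frames, per emotion and overall."""
    df = pd.read_csv(openface_csv)
    df.columns = df.columns.str.strip()  # OpenFace pads column names with spaces
    au_cols = [c for c in df.columns if c.startswith("AU") and c.endswith("_r")]
    scores = {
        emotion: df[[c for c in aus if c in df.columns]].mean(axis=1).mean()
        for emotion, aus in EMOTION_AUS.items()
    }
    scores["overall"] = df[au_cols].mean(axis=1).mean()  # all AUs, regardless of emotion
    return scores
```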

Movement Analysis

The angle of the head’s pitch (up and down movement) and yaw (side to side movement) was acquired for each video frame using OpenFace. The standard deviations of the frame-wise pitch and yaw measurements were used as indicators of head movement (ie, head pitch variability and head yaw variability).
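As an illustration, the head movement measures could be derived from the same OpenFace output with a few lines of code. This sketch assumes OpenFace's pose_Rx (pitch) and pose_Ry (yaw) columns, reported in radians; it is not the authors' published implementation.

```python
# Sketch: head movement variability as the standard deviation of frame-wise
# head pitch and yaw estimates from OpenFace output (column names assumed).
import pandas as pd

def head_movement_variability(openface_csv: str) -> dict:
    df = pd.read_csv(openface_csv)
    df.columns = df.columns.str.strip()
    return {
        "head_pitch_variability": df["pose_Rx"].std(),  # up-and-down movement
        "head_yaw_variability": df["pose_Ry"].std(),    # side-to-side movement
    }
```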

Voice Analysis

Recordings were segmented into speech and nonspeech parts using Parselmouth, an open-source software package utilized for vocal analysis [30]. The percentage of the audio file where speech was recorded as opposed to no speech (ie, speech prevalence) was calculated to represent how much the participant was talking given the length of the recording.
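A hedged sketch of how such a speech prevalence measure might be computed with Parselmouth is shown below. It relies on Praat's silence-detection TextGrid; all threshold values (pitch floor, silence threshold, minimum interval durations) are illustrative assumptions rather than the settings used in the study.

```python
# Sketch: fraction of a recording labeled as speech ("sounding") rather than
# silence, using Praat's silence detector via Parselmouth. Thresholds are
# illustrative assumptions, not the study's settings.
import parselmouth
from parselmouth.praat import call

def speech_prevalence(wav_path: str) -> float:
    sound = parselmouth.Sound(wav_path)
    # 100 Hz pitch floor, -25 dB silence threshold, 0.1 s minimum interval durations
    textgrid = call(sound, "To TextGrid (silences)", 100, 0.0, -25.0, 0.1, 0.1,
                    "silent", "sounding")
    n_intervals = int(call(textgrid, "Get number of intervals", 1))
    speech_time = 0.0
    for i in range(1, n_intervals + 1):
        if call(textgrid, "Get label of interval", 1, i) == "sounding":
            start = call(textgrid, "Get starting point", 1, i)
            end = call(textgrid, "Get end point", 1, i)
            speech_time += end - start
    return speech_time / sound.get_total_duration()
```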

Data Analysis

Initially, BSS scores were regressed on all movement, facial, and audio markers, controlling for age and sex, using separate multiple linear regressions. Facial activity, movement, and voice are all behavioral characteristics influenced by both age and sex; controlling for these variables removes their influence from the comparisons between the digital measures and BSS scores. In addition to significance testing, the unique variance in BSS scores accounted for by the digital measures was assessed using Cohen d [35]. Following the analysis of overall expressivity, the BSS was regressed on the expressivity of each individual emotion, with false discovery rate adjustment applied to correct for multiple comparisons.
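The regression and multiple-comparison procedure described above could be implemented as in the following sketch. It assumes a pandas DataFrame with columns named bss, age, sex, and one column per digital measure (eg, sadness_expressivity); the column names and the use of statsmodels are illustrative assumptions, not the authors' published code.

```python
# Sketch: regress BSS scores on one digital marker at a time, controlling for
# age and sex, then apply Benjamini-Hochberg FDR adjustment across the
# emotion-specific models. Column names are assumed for illustration.
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def regress_marker(data, marker):
    """Separate multiple linear regression of BSS on one marker plus age and sex."""
    model = smf.ols(f"bss ~ {marker} + age + sex", data=data).fit()
    return model.params[marker], model.pvalues[marker], model.rsquared

def emotion_models_fdr(data, emotions=("happiness", "sadness", "fear",
                                       "anger", "surprise", "disgust")):
    """Fit one model per emotion and FDR-adjust the marker P values."""
    results = {e: regress_marker(data, f"{e}_expressivity") for e in emotions}
    raw_p = [p for _, p, _ in results.values()]
    _, adjusted_p, _, _ = multipletests(raw_p, method="fdr_bh")
    return {e: {"beta": results[e][0], "p_fdr": p} for e, p in zip(results, adjusted_p)}
```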


Results

Suicide Severity

Suicidal ideation severity on the BSS was, on average, above the clinical cutoff of 9, indicating severe suicide risk (µ=10.9, σ=10.2, range 0-31). Participants reported an average of three past suicide attempts (σ=6.9, range 0-30).

Facial Activity

Controlling for age and sex, overall expressivity demonstrated a negative linear association with BSS scores that approached significance, indicating that decreased facial activity is associated with greater suicide risk (β=−0.46, P=.10, r2=0.27). This suggests that facial activity decreases as suicide severity increases when expressivity is measured across all facial movements, irrespective of specific emotional expressions.

To better understand whether particular emotions contributed to the composite expressivity finding, we regressed BSS scores on the expressivity of each individual emotion, controlling for sex and age, with P values adjusted using false discovery rate (Benjamini-Hochberg) correction. These post hoc analyses demonstrated significant associations for sadness expressivity (β=−0.68, P=.01, r2=0.43), surprise expressivity (β=−0.74, P=.002, r2=0.53), and disgust expressivity (β=−0.64, P=.04, r2=0.35), but not for fear, anger, or happiness expressivity (Table 1).

Table 1. Results from multiple linear regression between Beck Scale for Suicide Ideation questionnaire scores and digital measurements of facial expressivity, vocal behavior, and head movement, controlling for age and sex.
Regression 1 (anger expressivity): F (df)=0.9013 (1), R2=0.153, adjusted R2=−0.017, P=.46
  Constant: coefficient=0.2020, SE=0.380, t (18)=0.531, P=.60
  Anger expressivity: coefficient=−0.2456, SE=0.322, t (18)=−0.762, P=.46
  Age: coefficient=0.0068, SE=0.007, t (18)=0.960, P=.35
  Sex: coefficient=0.1167, SE=0.172, t (18)=0.680, P=.51

Regression 2 (disgust expressivity): F (df)=2.6930 (1), R2=0.350, adjusted R2=0.220, P=.08
  Constant: coefficient=0.1213, SE=0.240, t (18)=0.506, P=.62
  Disgust expressivity: coefficient=−0.6401, SE=0.278, t (18)=−2.305, P=.04
  Age: coefficient=0.0120, SE=0.006, t (18)=1.980, P=.07
  Sex: coefficient=0.1422, SE=0.146, t (18)=0.972, P=.35

Regression 3 (fear expressivity): F (df)=1.0430 (1), R2=0.173, adjusted R2=0.007, P=.40
  Constant: coefficient=0.2628, SE=0.380, t (18)=0.692, P=.50
  Fear expressivity: coefficient=−0.3210, SE=0.328, t (18)=−0.978, P=.34
  Age: coefficient=0.0060, SE=0.007, t (18)=0.842, P=.41
  Sex: coefficient=0.1056, SE=0.170, t (18)=0.620, P=.54

Regression 4 (happiness expressivity): F (df)=0.6834 (1), R2=0.120, adjusted R2=−0.056, P=.58
  Constant: coefficient=0.0051, SE=0.292, t (18)=0.018, P=.99
  Happiness expressivity: coefficient=−0.0228, SE=0.255, t (18)=−0.089, P=.93
  Age: coefficient=0.0083, SE=0.007, t (18)=1.194, P=.25
  Sex: coefficient=0.1517, SE=0.178, t (18)=0.852, P=.41

Regression 5 (sadness expressivity): F (df)=3.7140 (1), R2=0.426, adjusted R2=0.311, P=.04
  Constant: coefficient=0.4394, SE=0.270, t (18)=1.629, P=.12
  Sadness expressivity: coefficient=−0.6785, SE=0.240, t (18)=−2.830, P=.01
  Age: coefficient=0.0082, SE=0.006, t (18)=1.480, P=.16
  Sex: coefficient=0.0345, SE=0.152, t (18)=−0.228, P=.82

Regression 6 (surprise expressivity): F (df)=5.7130 (1), R2=0.533, adjusted R2=0.440, P=.008
  Constant: coefficient=−0.0441, SE=0.198, t (18)=−0.223, P=.83
  Surprise expressivity: coefficient=−0.7437, SE=0.204, t (18)=−3.645, P=.002
  Age: coefficient=0.0143, SE=0.005, t (18)=2.734, P=.02
  Sex: coefficient=0.2421, SE=0.127, t (18)=1.912, P=.08

Regression 7 (overall expressivity): F (df)=1.8200 (1), R2=0.267, adjusted R2=0.120, P=.19
  Constant: coefficient=0.2881, SE=0.300, t (18)=0.961, P=.35
  Overall expressivity: coefficient=−0.4585, SE=0.264, t (18)=−1.734, P=.10
  Age: coefficient=0.0054, SE=0.006, t (18)=0.833, P=.42
  Sex: coefficient=0.1947, SE=0.158, t (18)=1.234, P=.24

Regression 8 (speech prevalence): F (df)=3.3890 (1), R2=0.404, adjusted R2=0.285, P=.046
  Constant: coefficient=0.3291, SE=0.256, t (18)=1.285, P=.22
  Speech prevalence: coefficient=−0.6808, SE=0.255, t (18)=−2.674, P=.02
  Age: coefficient=0.0057, SE=0.006, t (18)=0.991, P=.34
  Sex: coefficient=0.3429, SE=0.158, t (18)=2.170, P=.047

Regression 9 (head pitch variability): F (df)=4.5330 (1), R2=0.476, adjusted R2=0.371, P=.02
  Constant: coefficient=0.4192, SE=0.248, t (18)=1.689, P=.11
  Head pitch variability: coefficient=−1.2465, SE=0.391, t (18)=−3.189, P=.006
  Age: coefficient=0.0124, SE=0.005, t (18)=2.294, P=.04
  Sex: coefficient=−0.0342, SE=0.143, t (18)=−0.239, P=.81

Regression 10 (head yaw variability): F (df)=0.1170 (1), R2=0.317, adjusted R2=0.181, P=.12
  Constant: coefficient=0.1006, SE=0.245, t (18)=0.411, P=.69
  Head yaw variability: coefficient=−0.5408, SE=0.260, t (18)=−2.081, P=.06
  Age: coefficient=0.0125, SE=0.006, t (18)=1.979, P=.07
  Sex: coefficient=0.1723, SE=0.150, t (18)=1.146, P=.27

Voice Analysis

Controlling for age and sex, speech prevalence demonstrated a significant negative linear association with BSS scores, indicating that suicide severity scores increased as speech decreased (β=−0.68; P=.02, r2=0.40; Table 1).

Movement Analysis

Controlling for age and sex, both head pitch variability and head yaw variability demonstrated significant negative linear relationships with suicide severity (βpitch=−1.24, P=.006, r2=0.48; βyaw=−0.54, P=.055, r2=0.32), indicating that lower levels of head movement were associated with greater suicide severity scores (Table 1).


Discussion

We examined visual and auditory measures of facial activity, head movement, and speech production, which were all calculated using deep learning algorithms applied to open-ended clinical interviews with psychiatric patients following a suicide attempt. The goal of this work was to determine if key indicators of suicide severity could be measured in an objective and automated manner using video data captured during clinical interviews that provided structured questions, but were otherwise kept deliberately open to mimic psychiatric interviewing in routine care. Achieving this goal provides a proof of concept that suicide risk can be assessed through analysis of unstructured video interview data in conjunction with deep learning algorithms designed to measure clinical/behavioral characteristics.

Objective measurements of visual and auditory markers of suicidality calculated from videos of patient behavior can be useful in psychiatric care and clinical research, particularly if the measurements can be acquired without specialized hardware using video or audio from regular patient-clinician interactions, as demonstrated in this study. Such tools have applications in clinical practice, as they allow for remote measurement of symptomatology. For example, virtual clinic visits through video calls can incorporate digital measurements to assess mental health if such measurements are integrated into the technology utilized. Existing smartphone-based platforms allow for the collection of video and audio data and subsequent calculation of digital measures, and can be applied in outpatient settings to remotely acquire these measurements without adding further burden on clinical staff [36-38]. Of course, such technologies face many challenges before they can be deployed in patient care, starting with further validation of the methodologies, passage through regulatory review as dependable clinical measures, and adoption by health care professionals into their day-to-day clinical workflows [39]. Ultimately, methods that can provide remote, objective, and passive measurement of clinical functioning can greatly increase the reach of treatment development, dissemination, and personalization [25,40].

Finally, the studied markers were selected based on a mechanistic theory that reduced serotonin in the brain, a key risk factor for suicidal behavior, will manifest in behavioral characteristics, including those involving a patient’s speech, head movement, and facial activity. These primary hypotheses were confirmed. Overall facial expressivity, speech activity, and head movement during spontaneous behavior, which are all downstream effects of motor slowing, were correlated with BSS scores. This work provides a proof of concept that proxy markers of underlying neurobiological functioning can be used to measure clinical risk. Demonstrating this indicates that there is potential to remotely measure more specific neurobiological phenotypes to determine risk or examine the response to treatment. For example, owing to suicide risk associated with SSRIs, there is a need for close clinical monitoring during the initiation of treatment. Similarly, such models may be useful to determine who would benefit from treatments that affect serotonergic tone. Ultimately, by measuring a more specific biologically based phenotype, there is greatly increased opportunity to improve sensitivity of measurement and specificity of treatment [24].

This work has key limitations. First, it represents a proof of concept; the indicated markers should be used for risk assessment or measurement of clinical severity only in an exploratory manner until they can be validated in larger and more diverse clinical populations and settings. Future work needs to determine directly how much video data are required and to establish clinically meaningful cutoffs for clinical functioning. It is likely that, as with traditional suicide risk assessment, multiple characteristics must be considered together to determine risk or severity. Multiple digital measures can be combined to produce a more robust metric, but, as with traditional scale development, this requires larger samples from diverse populations. Importantly, the current work uses open-source software, and we have additionally published our software methods. As such, there is no access barrier for researchers to implement identical methods to test and extend the current approach. An open-source approach lends itself to rapid replication, extension, and implementation.

Ultimately, this proof-of-concept study demonstrates that theoretically grounded visual and auditory digital measurements are valid as markers of suicide severity. This effort provided a digital approach akin to traditional clinical assessment whereby a skilled clinician listens to a patient and applies internal working models developed through years of experience to assess clinical functioning.

Conflicts of Interest

AA, VK, and VY are employed by and hold stock options in AiCure, LLC. The other authors have no conflicts to declare.

Multimedia Appendix 1

Exemplary questions for the six categories of the videotaped semistructured qualitative interview.

DOCX File , 21 KB

  1. Claassen CA, National Action Alliance for Suicide Prevention Research Prioritization Task Force. The agenda development process of the United States' National Action Alliance for Suicide Prevention Research Prioritization Task Force. Crisis 2013 May;34(3):147-155. [CrossRef] [Medline]
  2. Torous J, Walker R. Leveraging Digital Health and Machine Learning Toward Reducing Suicide-From Panacea to Practical Tool. JAMA Psychiatry 2019 Oct 01;76(10):999-1000 [FREE Full text] [CrossRef] [Medline]
  3. Vahabzadeh A, Sahin N, Kalali A. Digital Suicide Prevention: Can Technology Become a Game-changer? Innov Clin Neurosci 2016;13(5-6):16-20 [FREE Full text] [Medline]
  4. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th Edition. Washington, DC: American Psychiatric Association; 2013.
  5. Cummins N, Scherer S, Krajewski J, Schnieder S, Epps J, Quatieri TF. A review of depression and suicide risk assessment using speech analysis. Speech Communication 2015 Jul;71:10-49. [CrossRef]
  6. Pompili M, Amador XF, Girardi P, Harkavy-Friedman J, Harrow M, Kaplan K, et al. Suicide risk in schizophrenia: learning from the past to change the future. Ann Gen Psychiatry 2007 Mar 16;6(1):1-22. [CrossRef]
  7. Sisti D, Mann JJ, Oquendo MA. Toward a Distinct Mental Disorder-Suicidal Behavior. JAMA Psychiatry 2020 Jul 01;77(7):661-662. [CrossRef] [Medline]
  8. Diefendorf AR, Kraepelin E. Clinical psychiatry: A textbook for students and physicians, abstracted and adapted from the 7th German edition of Kraepelin's Lehrbuch der Psychiatrie. New York, NY: MacMillan Co; 1907.
  9. Heller M, Haynal V. Depression and Suicide Faces. In: Ekman P, Rosenberg EL, editors. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford, United Kingdom: Oxford University Press; 2005.
  10. Buyukdura JS, McClintock SM, Croarkin PE. Psychomotor retardation in depression: biological underpinnings, measurement, and treatment. Prog Neuropsychopharmacol Biol Psychiatry 2011 Mar 30;35(2):395-409 [FREE Full text] [CrossRef] [Medline]
  11. Alghowinem S, Goecke R, Wagner M, Parkerx G, Breakspear M. Head Pose and Movement Analysis as an Indicator of Depression. 2013 Presented at: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction; September 2-5, 2013; Geneva, Switzerland. [CrossRef]
  12. Dibeklioglu H, Hammal Z, Cohn JF. Dynamic Multimodal Measurement of Depression Severity Using Deep Autoencoding. IEEE J. Biomed. Health Inform 2018 Mar;22(2):525-536. [CrossRef]
  13. Kim Y, Cheon S, Youm C, Son M, Kim JW. Depression and posture in patients with Parkinson's disease. Gait Posture 2018 Mar;61:81-85. [CrossRef] [Medline]
  14. Girard J, Cohn J, Mahoor M, Mavadati S, Rosenwald D. Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis. Proc Int Conf Autom Face Gesture Recognit 2013:1-8 [FREE Full text] [CrossRef] [Medline]
  15. Garcia-Toro M, Talavera JA, Saiz-Ruiz J, Gonzalez A. Prosody impairment in depression measured through acoustic analysis. J Nerv Ment Dis 2000 Dec;188(12):824-829. [CrossRef] [Medline]
  16. El Mansari M, Guiard BP, Chernoloz O, Ghanbari R, Katz N, Blier P. Relevance of norepinephrine-dopamine interactions in the treatment of major depressive disorder. CNS Neurosci Ther 2010 Jul;16(3):e1-17 [FREE Full text] [CrossRef] [Medline]
  17. Herrera-Guzmán I, Gudayol-Ferré E, Herrera-Guzmán D, Guàrdia-Olmos J, Hinojosa-Calvo E, Herrera-Abarca JE. Effects of selective serotonin reuptake and dual serotonergic-noradrenergic reuptake treatments on memory and mental processing speed in patients with major depressive disorder. J Psychiatr Res 2009 Jul;43(9):855-863. [CrossRef] [Medline]
  18. Morrissette DA, Stahl SM. Modulating the serotonin system in the treatment of major depressive disorder. CNS Spectr 2014 Dec;19 Suppl 1:57-67; quiz 54. [CrossRef] [Medline]
  19. Stockmeier CA. Involvement of serotonin in depression: evidence from postmortem and imaging studies of serotonin receptors and the serotonin transporter. Journal of Psychiatric Research 2003 Sep;37(5):357-373. [CrossRef] [Medline]
  20. Stockmeier CA, Shapiro LA, Dilley GE, Kolli TN, Friedman L, Rajkowska G. Increase in serotonin-1A autoreceptors in the midbrain of suicide victims with major depression-postmortem evidence for decreased serotonin activity. J Neurosci 1998 Oct 15;18(18):7394-7401 [FREE Full text] [Medline]
  21. The Lancet. SSRIs: suicide risk and withdrawal. The Lancet 2003 Jun 14;361(9374):1999. [CrossRef] [Medline]
  22. Wessely S, Kerwin R. Suicide risk and the SSRIs. JAMA 2004 Jul 21;292(3):379-381. [CrossRef] [Medline]
  23. Coravos A, Khozin S, Mandl KD. Developing and adopting safe and effective digital biomarkers to improve patient outcomes. npj Digit. Med 2019 Mar 11;2(1):40 [FREE Full text] [CrossRef] [Medline]
  24. Torous J, Onnela J, Keshavan M. New dimensions and new tools to realize the potential of RDoC: digital phenotyping via smartphones and connected devices. Transl Psychiatry 2017 Mar 07;7(3):e1053 [FREE Full text] [CrossRef] [Medline]
  25. Lenze EJ, Rodebaugh TL, Nicol GE. A Framework for Advancing Precision Medicine in Clinical Trials for Mental Disorders. JAMA Psychiatry 2020 Jul 01;77(7):663-664. [CrossRef] [Medline]
  26. Kliem S, Lohmann A, Mößle T, Brähler E. German Beck Scale for Suicide Ideation (BSS): psychometric properties from a representative population survey. BMC Psychiatry 2017 Dec 04;17(1):389 [FREE Full text] [CrossRef] [Medline]
  27. Nock MK, Holmberg EB, Photos VI, Michel BD. Self-Injurious Thoughts and Behaviors Interview: development, reliability, and validity in an adolescent sample. Psychol Assess 2007 Oct;19(3):309-317. [CrossRef] [Medline]
  28. Chu C, Hom MA, Stanley IH, Gai AR, Nock MK, Gutierrez PM, et al. Non-suicidal self-injury and suicidal thoughts and behaviors: A study of the explanatory roles of the interpersonal theory variables among military service members and veterans. J Consult Clin Psychol 2018 Jan;86(1):56-68 [FREE Full text] [CrossRef] [Medline]
  29. Brahmbhatt S. Introduction to Computer Vision and OpenCV. In: Practical OpenCV. Berkeley, CA: Apress; 2013:3-5.
  30. Baltrusaitis T, Robinson P, Morency L. OpenFace: An open source facial behavior analysis toolkit. 2016 Presented at: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV); March 7-10, 2016; Lake Placid, NY. [CrossRef]
  31. Ekman P, Rosenberg E. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford, United Kingdom: Oxford University Press; 2005.
  32. Jadoul Y, Thompson B, de Boer B. Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics 2018 Nov;71:1-15. [CrossRef]
  33. Cohen ML. An Adaptive R-Estimate. Fort Belvoir, VA: Defense Technical Information Center; 1980.
  34. Insel TR. Digital phenotyping: a global tool for psychiatry. World Psychiatry 2018 Oct;17(3):276-277 [FREE Full text] [CrossRef] [Medline]
  35. Abbas A. anzarabbas/ms_dbms_mdd_suicidality. GitHub, Inc.   URL: https://github.com/anzarabbas/ms_dbms_mdd_suicidality [accessed 2021-01-15]
  36. Galatzer-Levy I, Abbas A, Yadav V, Koesmahargyo V, Aghjayan A, Marecki S, et al. Remote digital measurement of visual and auditory markers of Major Depressive Disorder severity and treatment response. medRxiv Preprint posted online August 26, 2020. [CrossRef]
  37. Galatzer-Levy IR, Abbas A, Koesmahargyo V, Yadav V, Perez-Rodriguez MM, Rosenfield P, et al. Facial and vocal markers of schizophrenia measured using remote smartphone assessments. medRxiv Preprint posted online December 4, 2020. [CrossRef]
  38. Abbas A, Yadav V, Smith E, Ramjas E, Rutter SB, Benavides C, et al. Computer vision-based assessment of motor functioning in schizophrenia: Use of smartphones for remote measurement of schizophrenia symptomatology. medRxiv Preprint posted online July 25, 2020. [CrossRef]
  39. Abbas A, Schultebraucks K, Galatzer-Levy I. Digital Measurement of Mental Health: Challenges, Promises, and Future Directions. Psychiatric Annals 2021 Jan 01;51(1):14-20. [CrossRef]
  40. Insel TR. Digital phenotyping: a global tool for psychiatry. World Psychiatry 2018 Oct;17(3):276-277 [FREE Full text] [CrossRef] [Medline]


BSS: Beck Scale for Suicide Ideation
MDD: major depressive disorder
SSRI: selective serotonin reuptake inhibitor


Edited by R Kukafka; submitted 26.10.20; peer-reviewed by A Benis, P Gazerani; comments to author 24.11.20; revised version received 15.12.20; accepted 16.03.21; published 03.06.21

Copyright

©Isaac Galatzer-Levy, Anzar Abbas, Anja Ries, Stephanie Homan, Laura Sels, Vidya Koesmahargyo, Vijay Yadav, Michael Colla, Hanne Scheerer, Stefan Vetter, Erich Seifritz, Urte Scholz, Birgit Kleim. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 03.06.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.