Published in Vol 24, No 2 (2022): February

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/31146.

An Ethics Checklist for Digital Health Research in Psychiatry: Viewpoint

Viewpoint

1Harvard Medical School, Boston, MA, United States

2Law School, University of Minnesota, Minneapolis, MN, United States

3Institute for Technology in Psychiatry, McLean Hospital, Belmont, MA, United States

*these authors contributed equally

Corresponding Author:

Francis X Shen, JD, PhD

Harvard Medical School

641 Huntington Ave

Boston, MA, 02115

United States

Phone: 1 617 462 3845

Email: fshen1@mgh.harvard.edu


Background: Psychiatry has long needed a better and more scalable way to capture the dynamics of behavior and its disturbances, quantitatively across multiple data channels, at high temporal resolution in real time. By combining 24/7 data—on location, movement, email and text communications, and social media—with brain scans, genetics, genomics, neuropsychological batteries, and clinical interviews, researchers will have an unprecedented amount of objective, individual-level data. Analyzing these data with ever-evolving artificial intelligence could one day include bringing interventions to patients where they are in the real world in a convenient, efficient, effective, and timely way. Yet, the road to this innovative future is fraught with ethical dilemmas as well as ethical, legal, and social implications (ELSI).

Objective: The goal of the Ethics Checklist is to promote careful design and execution of research. It is not meant to mandate particular research designs; indeed, at this early stage and without consensus guidance, there are a range of reasonable choices researchers may make. However, the checklist is meant to make those ethical choices explicit, and to require researchers to give reasons for their decisions related to ELSI issues. The Ethics Checklist is primarily focused on procedural safeguards, such as consulting with experts outside the research group and documenting standard operating procedures for clearly actionable data (eg, expressed suicidality) within written research protocols.

Methods: We explored the ELSI of digital health research in psychiatry, with a particular focus on what we label “deep phenotyping” psychiatric research, which combines the potential for virtually boundless data collection and increasingly sophisticated techniques to analyze those data. We convened an interdisciplinary expert stakeholder workshop in May 2020, and this checklist emerges out of that dialogue.

Results: Consistent with recent ELSI analyses, we find that existing ethical guidance and legal regulations are not sufficient for deep phenotyping research in psychiatry. At present, there are regulatory gaps, inconsistencies across research teams in ethics protocols, and a lack of consensus among institutional review boards on when and how deep phenotyping research should proceed. We thus developed a new instrument, an Ethics Checklist for Digital Health Research in Psychiatry (“the Ethics Checklist”). The Ethics Checklist is composed of 20 key questions, subdivided into 6 interrelated domains: (1) informed consent; (2) equity, diversity, and access; (3) privacy and partnerships; (4) regulation and law; (5) return of results; and (6) duty to warn and duty to report.

Conclusions: Deep phenotyping research offers a vision for vastly more effective care for people with, or at risk for, psychiatric disease. The potential perils en route to realizing this vision are significant, however, and researchers must be willing to address the questions in the Ethics Checklist before embarking on each leg of the journey.

J Med Internet Res 2022;24(2):e31146

doi:10.2196/31146




“The deeper you go, the more you know.” This headline captures the tantalizing promise of deeply probing digital health research in psychiatry [1].

Psychiatry has long needed a better and more scalable way to capture the dynamics of behavior and its disturbances, quantitatively across multiple data channels, at high temporal resolution in real time. By combining 24/7 data—on location, movement, email and text communications, and social media—with brain scans, genetics, genomics, neuropsychological batteries, and clinical interviews, researchers will have an unprecedented amount of objective, individual-level data [2,3]. Analyzing these data with ever-evolving artificial intelligence offers the possibility of intervening early with precision and could even prevent the most critical sentinel events [4]. Ideally, this could one day include bringing interventions to patients where they are in the real world, in a convenient, efficient, effective, and timely way [5,6].

Yet, the road to this innovative future is fraught with ethical dilemmas [7-10], and ethical, legal, and social implications (ELSI) on issues such as informed consent and return of results must be reexamined in light of research employing previously unavailable deep and computational characterization of humans based on (1) real-time and cross-sectional behavioral measures; (2) imaging, interviews, and other phenotypic data; (3) genotypic data; and (4) epigenetic and environmental data. The potentially boundless data collection across these streams gives rise to what might be described as “deep biobehavioral typing” or “deep geno/phenotyping.” Given these multiple streams, ethical frameworks for deep biobehavioral typing must integrate the overlapping ethics of genetics and genomics, brain imaging, real-time digital monitoring, and so forth to critically reexamine well-known ELSI issues. For example, when considering return of results, although it is true that the deeper you go, the more you know, it is unclear when a researcher knows enough to justify sharing data with a clinician, alerting appropriate individuals about potential self-harm, or returning individual research results [11].

Supported by a National Institutes of Health (NIH) Bioethics Administrative Supplement award (NIH 1U01MH116925-01), we have been exploring the ELSI of digital health research in psychiatry, with a particular focus on what we label “deep phenotyping” psychiatric research, which combines the potential for virtually boundless data collection and increasingly sophisticated techniques to analyze that data. We convened an interdisciplinary expert stakeholder workshop in May 2020, and this checklist emerges out of that dialogue. As we use it in this article, the phrase “deep phenotyping” in psychiatric research is meant to describe research that—even if it does not encompass a large number of research subjects—goes deep into the lives of those subjects by collecting many digital and biological data streams (eg, digital data such as text messages, phone screen shots, and GPS location; health data such as heart rate and blood pressure; and clinical evaluations and biological data such as genetics and brain scans).

Consistent with recent ELSI analyses [8,9], the bottom line of our bioethics analysis is that existing ethical guidance and legal regulation are not sufficient for deep phenotyping research in psychiatry. At present, there are regulatory gaps and inconsistencies across research teams in ethics protocols. There is also a lack of consensus among institutional review boards (IRBs) on when and how deep phenotyping research should proceed [12,13]. Efforts are underway to fill these gaps, notably those led by the Connected and Open Research Ethics initiative at the University of California San Diego [9].

Until the field develops more robust consensus guidelines, however, the onus clearly falls on individual research teams to take the lead in shaping the applied ethics of digital health research in psychiatry.

To guide these ethics considerations, we developed a new instrument, an Ethics Checklist for Digital Health Research in Psychiatry (“the Ethics Checklist”). The Ethics Checklist is composed of 20 key questions, subdivided into six interrelated domains: (1) informed consent; (2) equity, diversity, and access; (3) privacy and partnerships; (4) regulation and law; (5) return of results; and (6) duty to warn and duty to report. The questions included in the checklist are presented in Table 1, and Multimedia Appendix 1 provides the Ethics Checklist as a user-friendly instrument for research teams.

The goal of the Ethics Checklist is to promote the careful design and execution of research. It is not meant to mandate particular research designs; indeed, at this early stage of digital phenotyping research and without consensus guidance, there are a range of reasonable choices researchers may make. But the checklist is meant to make those ethical choices explicit, and to require researchers to give reasons for their decisions related to ELSI issues. The Ethics Checklist is primarily focused on procedural safeguards, such as consulting with experts outside the research group and documenting standard operating procedures for clearly actionable data (eg, expressed suicidality) within written research protocols.

Table 1. Ethics checklist for digital health research.
Category: Informed consent
Description: How can we meaningfully communicate and be transparent about research methods that involve deep, complex, often passive and continuous data collection, machine learning analysis, and interpretation?
Checklist items:

1. Have we appropriately adapted our informed consent procedures to our specific study population, including possible use of surrogate consent?

2. Will we provide background education on relevant technologies, such as explaining what social media companies may already be doing with the participant’s data?

3. Have we determined what a reasonable person would want to know, and explained in our institutional review board proposal the evidence on which we reached that determination?

Category: Equity, diversity, and access
Description: How will we address concerns that our research might replicate existing, or generate new, biased results or contribute to health inequities in access based on race, ethnicity, gender, sexual orientation, age, or another legally protected class?
Checklist items:

4. Starting at the early conceptualization and research design stages, have we sought input from a diverse community of stakeholders to identify and address potential equity concerns and opportunities to advance justice with our proposed research?

5. Has our research plan addressed potential inequities in access, for instance varying levels of access to mobile technology and to health care services?

6. Has every member of the research team completed our institution’s recommended trainings around diversity, inclusion, equity, and access?

Category: Privacy and partnerships
Description: How can we design our research to balance an interest in robust data collection with a potentially competing interest in protecting participant privacy?
Checklist items:

7. Have we consulted with information security experts about exactly where the data will flow, from start to finish?

8. Do we have a written policy on data deidentification and participant privacy that is consistent with best practices in psychiatry and neuroscience?

9. Have we determined which, if any, third-party vendors will be required to be HIPAA compliant and sign a Business Associate Agreement?

Category: Regulation and law
Description: Which state, federal, and international law and regulatory guidance must be adhered to in our research?
Checklist items:

10. Have we examined the terms of service, end user license agreements, privacy statements, and HIPAA notices for each of the vendors and software applications involved in our research?

11. Have we determined how laws in applicable jurisdictions will treat the data we collect, for instance considering the data to be “sensitive,” “special category,” or “personal health information”?

12. Have we ensured compliance with state, federal and international laws governing our research, HIPAA privacy requirements, state data privacy laws, and applicable international privacy laws?

Category: Return of results
Description: By which criteria will we determine if our data analytic models are sufficiently valid and reliable for us to share the individual research results and data with the research participant and the participant’s clinicians?
Checklist items:

13. Have we considered whether our study will generate any “actionable” results, based on established guidelines and how we have defined actionability?

14. Have we established with what frequency results will be returned (eg, should participants have daily, weekly, or monthly access to some subset of their data)?

15. Have we clarified the protocols and mechanisms for returning different types of information (eg, raw data, interpreted data)?

16. Do we have a protocol in place for contacting a participant’s clinicians and nonclinical caregivers?

Category: Duty to warn and duty to report
Description: When might our research trigger a legal or ethical duty to report the potential for participant self-harm or harm to others, and what are our protocols for determining whether in individual instances we have such a duty?
Checklist items:

17. Has everyone in our research lab received sufficient training to know when to flag data or results as requiring follow-up review by a supervisor?

18. Will our analytic methods allow us to identify the precursors to dangerous or illegal behavior, to oneself or to others, and if so, at which point will we intervene to protect the research participant or a third party?

19. Have we updated our lab’s suicidality standard operating procedure to be consistent with the novel data acquisition and analysis techniques we are using in our study?

20. Do we have a protocol for responding to legally mandated reporting if our data uncover child pornography, restraining order violations, and so on?

HIPAA: Health Insurance Portability and Accountability Act.


Each of the 20 Ethics Checklist questions is phrased so that it can be answered with a “Yes,” “No,” or “Pending” response. In our view, deep phenotyping research in psychiatry should not proceed until a research team answers “Yes” or “Pending” to each checklist question. Arriving at “Yes” or “Pending” for each question will require research labs to carefully consider a complex interplay of ethical and legal considerations.
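Although the checklist is a written instrument rather than software, a team that tracks its answers electronically could encode this gating rule directly. The following Python sketch is purely illustrative and not part of the published checklist; the class and variable names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict

VALID_ANSWERS = {"Yes", "No", "Pending"}

@dataclass
class EthicsChecklist:
    # Maps question numbers 1-20 from Table 1 to the team's current answer.
    responses: Dict[int, str] = field(default_factory=dict)

    def ready_to_proceed(self) -> bool:
        """Apply the gating rule described above: do not proceed until every
        question has been answered and no answer is "No"."""
        if any(a not in VALID_ANSWERS for a in self.responses.values()):
            raise ValueError("answers must be 'Yes', 'No', or 'Pending'")
        if set(self.responses) != set(range(1, 21)):
            return False  # one or more questions remain unanswered
        return all(a in {"Yes", "Pending"} for a in self.responses.values())

# Example: a team with one outstanding "No" on question 7 (data flow review)
checklist = EthicsChecklist({q: "Yes" for q in range(1, 21)})
checklist.responses[7] = "No"
print(checklist.ready_to_proceed())  # False: revisit question 7 before proceeding
```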

It is beyond the scope of this short paper to address all of these complexities, but we offer here several illustrative examples, from each of the 6 key domains, of how the checklist might be applied in practice.

Informed Consent

The revised Common Rule requires researchers to present participants with information that is “most likely to assist … in understanding the reasons why one might or might not want to participate in the research,” and that is what “a reasonable person would want to have …” (Title 45 of the Code of Federal Regulations, part 46, effective July 2018) [14]. The Ethics Checklist thus requires researchers to address the question: “Have we determined what a reasonable person would want to know, and explained in our IRB proposal the evidence on which we reached that determination?” There are presently no empirical data or standard protocols for determining what information a reasonable person would want to have before agreeing to participate in deep phenotyping research. Privacy scholars and ethicists are increasingly concerned that the rights of “notice, access, and consent regarding the collection, use, and disclosure of personal data” are no longer adequate because “many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities” [15]. Thus, researchers must consider a broad range of possible information to communicate.

We offer here several of these many possibilities. One decision is whether to provide research participants with a list of clear, concrete examples of the inferences that can likely be made from participants’ data. For example, the informed consent material could explicitly say, “You should know that, although we will not reveal this information outside the research team, we may be able to identify when you are going to the bathroom or having sex.” Another decision, especially for researchers who are collecting participants’ GPS data and social media content, is whether to provide basic background education to make participants more informed about the data collection and data sharing practices of the mobile technologies and apps they already use regularly. Third, ethics research has identified a need to make informed consent processes more meaningful and valid by improving communication [16,17]. Research teams may consider innovative strategies such as video-based multimedia as part of the consent procedure [18-20]. They may also consider staged informed consent [21,22], dynamic consent to facilitate 2-way communication between researchers and participants [23], or a systemic oversight approach for big data research that provides flexibility for addressing uncertainty in how data will be used [24].

In addition, researchers in psychiatry must address a further question that has long been challenging for the field: how to ensure meaningful and valid informed consent with participants who have a mental illness [25,26]. The question of decisional capacity in psychiatric research has been well researched, with multiple instruments now available [27], but the field will need to revisit the effectiveness of these instruments in the new context of deep biobehavioral research.

Equity, Diversity, Inclusion, and Access

It is well established that biomedical research generally [28] and psychiatric research specifically struggle to enroll racially and ethnically diverse participants [29]. In the United States, for example, there is a reluctance of African Americans to participate in research given a long history of racism and exploitation [30,31].

The Ethics Checklist proposes that researchers answer the following question: Starting at the early conceptualization and research design stages, have we sought input from a diverse community of stakeholders to identify and address potential equity concerns and opportunities to advance justice with our proposed research? The question emphasizes that equity concerns extend beyond simply developing proportional samples, and that from the start, “[r]esearch relationships must become balanced, reciprocal, and community informed, without centering researcher and institutional priorities” [32]. Community advisory boards, which can be composed of community and family stakeholders, can be a useful vehicle for facilitating such engagement [33]. Operationalizing “diverse community of stakeholders” will depend on the nature and scope of the research, institutional context, and affected communities. The stakeholder group will necessarily be composed differently depending on, for instance, whether the focus is on a single disease or on basic research, and whether the research involves multiple international sites or a single community partner.

In defining the stakeholders, the checklist encourages researchers to go beyond their own research team to seek guidance and build trust, even at the conceptual and research design stage. We agree with Wilkins [34], who suggests that enhancing trust “must build on the principles of community engagement including balancing power dynamics, equitable distribution of resources, effective bidirectional communication, shared decision-making, and valuing of different resources and assets (such as the lived experience and knowledge of group norms and perspectives).” Researchers might, for instance, consult with their institution’s leadership on diversity and inclusion to see if the institution already has mechanisms in place for community engagement. Additional options include reviewing the best practices in community-based participatory research and community-engaged research [35] and consulting expertise in other disciplines, including law [36] and the humanities [37].

Privacy and Partnerships

In an analysis of smartphone digital phenotyping, Onnela and Rauch [38] write that “[p]atient and participant privacy is always of utmost importance in clinical and research settings.” The literature on ethics of deep phenotyping has similarly identified privacy as foundational [39,40]. Deep phenotyping research requires that data flow across multiple platforms and vendors; thus, to safeguard data privacy, researchers must be aware of where the data go, what happens to the data at each stop, and where security vulnerabilities may exist [10]. The Ethics Checklist thus includes the question, “Have we consulted with information security experts about exactly where the data will flow, from start to finish?” Such consultation could potentially lead to modifications in data collection and data analysis techniques to improve privacy safeguards. For instance, security experts may be aware of the vulnerabilities of particular apps or new technical advances recently developed.

Regulation and Law

The regulation of mobile health apps is currently undergoing transformation [41,42], as is the regulation of artificial intelligence and machine learning data analysis [43]. At the same time, state privacy laws are emerging [44], as are international law innovations such as the European Union’s General Data Protection Regulation [45]. These legal developments have implications for the deep phenotyping research we describe in this article [7].

For instance, the data collection may require interfacing with multiple third-party vendors, and it is the responsibility of the research team to examine the terms of service, end user license agreements, privacy statements, and Health Insurance Portability and Accountability Act (HIPAA) notices for each of these vendors and associated software applications (Checklist question 10). This may not be an easy task, as research on mobile health apps suggests that many vendors do not have a privacy policy publicly available [46]. It will also be challenging to determine how applicable laws will treat the data being collected (Checklist question 11). Different laws define categories differently. For example, what is considered “sensitive” data under the California Consumer Privacy Act and California Privacy Rights Act might be different from what is considered “special category” data under the European Union’s General Data Protection Regulation or considered “personal health information” under HIPAA [47]. In setting up the research design, the research team may need to enlist institutional or external expertise to help understand and comply with these statutes.

In addition, when data collection follows the individual across state or international boundaries, and when data flow across those boundaries, the research will be exposed to multiple legal jurisdictions, including emerging state laws governing privacy and research [48]. For example, if data are collected continuously while a research participant living in Boston visits a relative in Chicago and then attends a meeting in Baltimore, both the Illinois Biometric Information Privacy Act and the Maryland Confidentiality of Medical Records Act may apply, placing different requirements on the research team. Similarly, if data gathered from a research participant in Detroit are transferred to a vendor operating in nearby Windsor, Canada’s data privacy laws may become relevant in a way they would not be for traditional research with subjects and data firmly rooted in the United States. Given this potential for movement of research subjects, researchers should answer the question, “Have we ensured compliance with state, federal and international laws governing our research, HIPAA privacy requirements, state data privacy laws, and applicable international privacy laws?” Reviewing HIPAA compliance is foundational, and the details of such a review are beyond the scope of our commentary; we emphasize here, however, that given the geographic mobility of research subjects, HIPAA is not the only applicable privacy regime. Addressing legal and regulatory concerns will therefore likely require consultation with legal experts in the researcher’s institution. This review of privacy law can be integrated with Ethics Checklist question 10 concerning vendors’ policies and question 11 concerning the designation of sensitive information.
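As a simplified illustration of how checklist questions 10 through 12 can be operationalized together, a research team might maintain an explicit inventory mapping the jurisdictions its data touch to privacy regimes requiring legal review. The Python sketch below is hypothetical; the jurisdiction codes and regime labels are illustrative assumptions drawn from the scenarios above, and it is not legal advice.

```python
# Hypothetical sketch only: inventory the jurisdictions a study's data may touch
# and list privacy regimes that could warrant legal review. Real applicability
# determinations require consultation with legal experts.
POSSIBLE_REGIMES = {
    "US-federal": ["HIPAA (if protected health information is involved)"],
    "US-IL": ["Illinois Biometric Information Privacy Act"],
    "US-MD": ["Maryland Confidentiality of Medical Records Act"],
    "US-CA": ["California Consumer Privacy Act / California Privacy Rights Act"],
    "Canada": ["Canadian federal and provincial data privacy laws"],
    "EU": ["General Data Protection Regulation"],
}

def regimes_to_review(locations):
    """Return a sorted list of regimes flagged for review, given jurisdiction
    codes for every place where participants travel or vendors process data."""
    flagged = set()
    for loc in locations:
        flagged.update(POSSIBLE_REGIMES.get(loc, [f"unmapped jurisdiction: {loc}"]))
    return sorted(flagged)

# The scenario from the text: a Boston-based participant visits Chicago and
# Baltimore, and a vendor processes data in Windsor, Canada.
print(regimes_to_review(["US-federal", "US-IL", "US-MD", "Canada"]))
```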

Return of Results

The return of individual research results has garnered significant attention in the ethics literature [49-51]. In the context of deep phenotyping research, the return of results is challenging because there are potentially so many data to return and because some of those data fluctuate over time and could be returned hourly, daily, weekly, monthly, and so on. In a virtual workshop we hosted in May 2020, a group of 25 stakeholders from science, medicine, law, and ethics gathered to explore the issues of return of results in deep phenotyping research. The discussion in that workshop made clear that, at present, there is considerable uncertainty over what constitutes “actionable” information in this space, as well as divergent practices among research teams regarding which information participants can access. The Ethics Checklist includes 4 different questions on return of results, including, “Have we considered whether our study will generate any ‘actionable’ results, based on established guidelines and how we have defined actionability?” The concept of actionability is debated across multiple fields [50,52], and in the context of deep phenotyping it is not clear where the bounds of actionability lie.

For instance, if a research team is measuring step count data and a participant’s step count drops below average in a given week, alerting that participant to the drop is actionable in the sense that the participant—informed by these data—may choose to walk substantially more steps the next week. But what about more complex results, such as a machine learning algorithm that predicts that the participant has a 72% higher likelihood of experiencing a manic episode in the following year? When has the scientific knowledge base accumulated sufficiently to make such a prediction “actionable”? An even more fundamental question is implicated: for any measurement or prediction, what is the confidence in the measurement, or the sensitivity and specificity of the interpretation or prediction, and how should that be shared? The effects of researcher-initiated mobile health interventions on participants within research studies are only now being studied [53], and the field is still formulating practices for clinical interventions based on phenotyping data [54].
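One way to make this contrast concrete is to imagine a return-of-results protocol that treats the two cases differently: a simple descriptive result is returned when it deviates from the participant’s own baseline, whereas a model-based prediction is returned only if the model meets a prespecified validation criterion. The sketch below is purely hypothetical; the thresholds, the choice of AUC as the criterion, and the function names are assumptions for illustration, not recommendations.

```python
from statistics import mean

def step_count_alert(weekly_history, this_week, drop_fraction=0.25):
    """Descriptive, low-stakes result: flag when this week's step count falls
    more than drop_fraction below the participant's own historical average.
    The 25% threshold is an arbitrary placeholder, not a validated criterion."""
    return this_week < (1 - drop_fraction) * mean(weekly_history)

def prediction_returnable(model_auc, min_auc=0.85):
    """Hypothetical gate for a higher-stakes result (eg, a predicted likelihood
    of a manic episode): return the prediction only if the model meets a
    prespecified validation criterion. Both the use of AUC and the 0.85 cutoff
    are illustrative assumptions, not established guidance."""
    return model_auc >= min_auc

print(step_count_alert([52000, 48000, 50000], this_week=31000))  # True: share with participant
print(prediction_returnable(model_auc=0.71))  # False: not yet "actionable" under this gate
```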

The potential ethical responsibility and legal duty to reanalyze data are also of concern. For instance, in genomics research, many genetic variations are classified as “variants of uncertain significance (VUS),” but as knowledge increases, those variations may be reclassified [55]. An ethical and legal question is whether researchers should (or must) revisit previously collected data to determine whether reclassification is warranted, and if so, when and how they should contact participants from the earlier study [56]. This issue will likely emerge in deep geno/phenotyping research, and it should be anticipated proactively, with a policy put in place.

Duty to Warn and Duty to Report

The duty to warn and the duty to report are well known to psychiatric researchers, but the advanced data collection and data analysis methods of deep phenotyping introduce unique concerns [57]. At present, the field has only begun to develop protocols and thresholds for when the data should trigger a legal or ethical duty to report the potential for participants’ self-harm or harm to others, and further work is needed to address the possibility of false positives and false negatives and to ensure reliable signal detection. For instance, among the 4 questions included in the Ethics Checklist under this heading, we require that researchers address the question, “Have we updated our lab’s suicidality standard operating procedure (SOP) to be consistent with the novel data acquisition and analysis techniques we are using in our study?” Traditionally, suicidality SOPs have relied almost exclusively on the clinical judgment of the psychiatrist or psychologist reviewing individual records and conducting interviews with the participant. The goal of deep phenotyping research, however, is to reduce reliance on this single stream of data and instead to incorporate many additional real-world data points. The suicidality SOP may need to be modified in recognition of this new paradigm of psychiatric assessment. For instance, if GPS data show that the participant has spent 3 hours at a local bar, is then located on a bridge at 2 AM, and has sent 20 text messages in the past 5 minutes, is there a way for the research team to have such behavior flagged in real time, and should the protocol include a real-time intervention? The Ethics Checklist requires that researchers consider such situations.
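To make the checklist’s expectation concrete, a lab’s written SOP might specify a rule of this kind, together with who reviews each flag and how quickly. The sketch below is a hypothetical illustration only; the data fields, thresholds, and the decision to route flags to an on-call clinician are assumptions, and nothing here constitutes a validated risk detection method.

```python
from dataclasses import dataclass

@dataclass
class RecentActivity:
    """Hypothetical summary of a participant's last few hours of passive data."""
    hours_at_bar: float       # dwell time at a venue tagged as a bar
    on_bridge: bool           # current GPS location matches a bridge
    local_hour: int           # local time of day, 0-23
    texts_last_5_min: int     # outgoing messages in the past 5 minutes

def flag_for_urgent_review(activity: RecentActivity) -> bool:
    """Illustrative rule only: route this pattern to the on-call clinician for
    immediate human review under the lab's suicidality SOP. Every threshold here
    is a placeholder, not a clinically validated criterion, and any real protocol
    must account for false positives and false negatives."""
    late_night = 0 <= activity.local_hour < 5
    return (activity.hours_at_bar >= 3
            and activity.on_bridge
            and late_night
            and activity.texts_last_5_min >= 20)

print(flag_for_urgent_review(
    RecentActivity(hours_at_bar=3.2, on_bridge=True, local_hour=2, texts_last_5_min=20)))  # True
```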


Deep phenotyping research offers a vision for vastly more effective care for people with or at risk of psychiatric disease. The potential perils en route to realizing this vision are significant; however, researchers must be willing to address the questions in the Ethics Checklist before embarking on each leg of the journey.

The illustrative examples discussed above make clear that deep phenotyping researchers have few guideposts and little empirical data with which to address many pressing ethical and legal questions critical for their research. This lack of clarity regarding best practices is understandable for a field that has emerged rapidly, mainly in the past 5 years. But as the field continues to expand, there is a need to fill this gap by developing consensus guidance informed by quantitative and qualitative bioethics research, as well as by community and patient advocate input. This paper has raised more questions than it has answered, and it has not addressed many other avenues of inquiry, including considerations for international research and research with children.

To make progress toward consensus guidance, we identify 2 immediate action items. First, ethics should be integrated into the practice of deep phenotyping research (as is already being carried out at centers such as the McLean Institute for Technology in Psychiatry and the Connected and Open Research Ethics initiative at the University of California San Diego Research Center for Optimal Digital Ethics in Health).

Second, professional organizations such as the American Psychiatric Association and the Digital Medicine Society, along with institutions such as the NIH and National Academies, are well positioned to convene an interdisciplinary team to conduct in-depth analysis and produce foundational reports to guide the field.

The deeper you go in deep phenotyping research, the deeper the ethical and legal challenges. But with timely, concerted action, the research community can promote ethically sound and legally compliant digital health research in psychiatry.

Acknowledgments

We thank our student research assistant Lois Yoo. We also thank the participants in a virtual workshop hosted by the McLean Institute for Technology in Psychiatry on May 8, 2020, to explore the ethical, legal, and social implications of return of results in deep phenotyping research. Research reported in this publication was supported by a Bioethics Supplement from the National Institute of Mental Health (NIMH) of the National Institutes of Health (NIH) under Award Number 1U01MH116925-01. The content is solely the responsibility of the authors and does not necessarily represent the official views of NIMH or NIH.

Authors' Contributions

FXS, BCS, and JTB conceived and designed the work. FS and BS substantially conducted the ethics research and drafted the manuscript. SLR and JTB substantially edited and critically revised the manuscript. PM and SK significantly contributed to the ethics research and revised the manuscript. All the authors gave final approval of the completed manuscript version and are accountable for all aspects of the work.

Conflicts of Interest

SLR is employed by McLean Hospital/Mass General Brigham, serves as the secretary of Society of Biological Psychiatry, and on the Boards of McLean Hospital, National Network of Depression Centers, National Association of Behavioral Healthcare, and Community Psychiatry/Mindpath Health. He receives royalties from Oxford University Press and APPI. None of these are known to represent conflicts of interest with this work. JTB has received consulting fees from Verily Life Sciences, as well as consulting fees and equity from Mindstrong Health, Inc, unrelated to the present work.

Multimedia Appendix 1

Ethics checklist for digital health research in psychiatry and worksheet.

DOCX File , 30 KB

  1. Delude CM. Deep phenotyping: The details of disease. Nature 2015 Nov 05;527(7576):S14-S15. [CrossRef] [Medline]
  2. Baker JT, Germine LT, Ressler KJ, Rauch SL, Carlezon WA. Digital devices and continuous telemetry: opportunities for aligning psychiatry and neuroscience. Neuropsychopharmacology 2018 Dec;43(13):2499-2503 [FREE Full text] [CrossRef] [Medline]
  3. Torous J, Onnela JP, Keshavan M. New dimensions and new tools to realize the potential of RDoC: digital phenotyping via smartphones and connected devices. Transl Psychiatry 2017 Mar 07;7(3):e1053 [FREE Full text] [CrossRef] [Medline]
  4. Sanislow CA, Ferrante M, Pacheco J, Rudorfer MV, Morris SE. Advancing Translational Research Using NIMH Research Domain Criteria and Computational Methods. Neuron 2019 Mar 06;101(5):779-782 [FREE Full text] [CrossRef] [Medline]
  5. Insel TR. Digital Phenotyping: Technology for a New Science of Behavior. JAMA 2017 Oct 03;318(13):1215-1216. [CrossRef] [Medline]
  6. Baker JT. The Digital Future of Psychiatry. Psychiatric Annals 2019 May;49(5):193-194. [CrossRef]
  7. Martinez-Martin N, Insel TR, Dagum P, Greely HT, Cho MK. Data mining for health: staking out the ethical territory of digital phenotyping. NPJ Digit Med 2018;1:1-5 [FREE Full text] [CrossRef] [Medline]
  8. Nebeker C, Bartlett Ellis RJ, Torous J. Development of a decision-making checklist tool to support technology selection in digital health research. Transl Behav Med 2020 Oct 08;10(4):1004-1015 [FREE Full text] [CrossRef] [Medline]
  9. Torous J, Nebeker C. Navigating Ethics in the Digital Age: Introducing Connected and Open Research Ethics (CORE), a Tool for Researchers and Institutional Review Boards. J Med Internet Res 2017 Feb 08;19(2):e38 [FREE Full text] [CrossRef] [Medline]
  10. Kaplan B. Revisiting Health Information Technology Ethical, Legal, and Social Issues and Evaluation: Telehealth/Telemedicine and COVID-19. Int J Med Inform 2020 Nov;143:104239 [FREE Full text] [CrossRef] [Medline]
  11. Ienca M, Ferretti A, Hurst S, Puhan M, Lovis C, Vayena E. Considerations for ethics review of big data health research: A scoping review. PLoS One 2018;13(10):e0204937 [FREE Full text] [CrossRef] [Medline]
  12. Torous J, Wisniewski H, Bird B, Carpenter E, David G, Elejalde E, et al. Creating a Digital Health Smartphone App and Digital Phenotyping Platform for Mental Health and Diverse Healthcare Needs: an Interdisciplinary and Collaborative Approach. J. technol. behav. sci 2019 Apr 27;4(2):73-85. [CrossRef]
  13. Ball TM, Kalinowski A, Williams LM. Ethical implementation of precision psychiatry. Personalized Medicine in Psychiatry 2020 Mar 20;19-20:100046. [CrossRef]
  14. 2018 Requirements (2018 Common Rule): Code of Federal Regulations Title 45 Public Welfare Department of Health and Human Services Part 46: Protection of Human Subjects. Office for Human Research Protections.   URL: https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/revised-common-rule-regulatory-text/index.html [accessed 2022-01-25]
  15. Solove DJ. Privacy self-management and the consent dilemma. Harvard Law Review 2012 Nov 04;126:1880-1903.
  16. Kadam RA. Informed consent process: A step further towards making it meaningful!. Perspect Clin Res 2017;8(3):107-112 [FREE Full text] [CrossRef] [Medline]
  17. Wolf SM, Clayton EW, Lawrenz F. The Past, Present, and Future of Informed Consent in Research and Translational Medicine. J. Law. Med. Ethics 2021 Jan 01;46(1):7-11. [CrossRef]
  18. Synnot A, Ryan R, Prictor M, Fetherstonhaugh D, Parker B. Audio-visual presentation of information for informed consent for participation in clinical trials. Cochrane Database Syst Rev 2014 May 09(5):CD003717 [FREE Full text] [CrossRef] [Medline]
  19. Anderson EE, Newman SB, Matthews AK. Improving informed consent: Stakeholder views. AJOB Empir Bioeth 2017;8(3):178-188 [FREE Full text] [CrossRef] [Medline]
  20. Kraft SA, Constantine M, Magnus D, Porter KM, Lee SS, Green M, et al. A randomized study of multimedia informational aids for research on medical practices: Implications for informed consent. Clin Trials 2017 Feb;14(1):94-102 [FREE Full text] [CrossRef] [Medline]
  21. Young-Afat DA, Verkooijen HAM, van Gils CH, van der Velden JM, Burbach JP, Elias SG, et al. Brief Report: Staged-informed Consent in the Cohort Multiple Randomized Controlled Trial Design. Epidemiology 2016 May;27(3):389-392. [CrossRef] [Medline]
  22. Vickers AJ, Young-Afat DA, Ehdaie B, Kim SY. Just-in-time consent: The ethical case for an alternative to traditional informed consent in randomized trials comparing an experimental intervention with usual care. Clin Trials 2018 Feb;15(1):3-8 [FREE Full text] [CrossRef] [Medline]
  23. Budin-Ljøsne I, Teare HJA, Kaye J, Beck S, Bentzen HB, Caenazzo L, et al. Dynamic Consent: a potential solution to some of the challenges of modern biomedical research. BMC Med Ethics 2017 Jan 25;18(1):4 [FREE Full text] [CrossRef] [Medline]
  24. Vayena E, Blasimme A. Health Research with Big Data: Time for Systemic Oversight. J Law Med Ethics 2018 Mar;46(1):119-129 [FREE Full text] [CrossRef] [Medline]
  25. Carpenter WT, Gold JM, Lahti AC, Queern CA, Conley RR, Bartko JJ, et al. Decisional capacity for informed consent in schizophrenia research. Arch Gen Psychiatry 2000 Jun;57(6):533-538. [CrossRef] [Medline]
  26. Roberts LW. Informed consent and the capacity for voluntarism. Am J Psychiatry 2002 May;159(5):705-712. [CrossRef] [Medline]
  27. Dunn LB, Nowrangi MA, Palmer BW, Jeste DV, Saks ER. Assessing decisional capacity for clinical research or treatment: a review of instruments. Am J Psychiatry 2006 Aug;163(8):1323-1334. [CrossRef] [Medline]
  28. Oh SS, Galanter J, Thakur N, Pino-Yanes M, Barcelo NE, White MJ, et al. Diversity in Clinical and Biomedical Research: A Promise Yet to Be Fulfilled. PLoS Med 2015 Dec;12(12):e1001918 [FREE Full text] [CrossRef] [Medline]
  29. Brown G, Marshall M, Bower P, Woodham A, Waheed W. Barriers to recruiting ethnic minorities to mental health research: a systematic review. Int J Methods Psychiatr Res 2014 Mar;23(1):36-48 [FREE Full text] [CrossRef] [Medline]
  30. Gordon-Achebe K, Hairston DR, Miller S, Legha R, Starks S. Origins of Racism in American Medicine and Psychiatry. In: Medlock M, Shtasel D, Trinh NH, Williams D, editors. Racism and Psychiatry. Cham, Switzerland: Humana Press; Nov 21, 2018:3-19.
  31. Huang H, Coker AD. Examining Issues Affecting African American Participation in Research Studies. Journal of Black Studies 2008 May 01;40(4):619-636. [CrossRef]
  32. Gilmore-Bykovskyi A, Jackson JD, Wilkins CH. The Urgency of Justice in Research: Beyond COVID-19. Trends Mol Med 2021 Feb;27(2):97-100 [FREE Full text] [CrossRef] [Medline]
  33. Quinn SC. Ethics in public health research: protecting human subjects: the role of community advisory boards. Am J Public Health 2004 Jun;94(6):918-922. [CrossRef] [Medline]
  34. Wilkins WH. Effective Engagement Requires Trust and Being Trustworthy. Med Care 2018 Oct;56 Suppl 10 Suppl 1:S6-S8 [FREE Full text] [CrossRef] [Medline]
  35. Wallerstein N, Calhoun K, Eder M, Kaplow J, Wilkins CH. Engaging the Community: Community-Based Participatory Research and Team Science. In: Hall K, Vogel A, Croyle R, editors. Strategies for Team Science Success. Cham, Switzerland: Springer; 2019:123-134.
  36. Jones OD, Schall JD, Shen FX. Law and Neuroscience, Second Edition. Philadelphia, PA, US: Wolters Kluwer; Sep 15, 2020.
  37. Boniolo G, Campaner R, Coccheri S. Why include the humanities in medical studies? Intern Emerg Med 2019 Oct;14(7):1013-1017. [CrossRef] [Medline]
  38. Onnela J, Rauch SL. Harnessing Smartphone-Based Digital Phenotyping to Enhance Behavioral and Mental Health. Neuropsychopharmacology 2016 Jun;41(7):1691-1696 [FREE Full text] [CrossRef] [Medline]
  39. Price WN, Cohen IG. Privacy in the age of medical big data. Nat Med 2019 Jan;25(1):37-43 [FREE Full text] [CrossRef] [Medline]
  40. Rosenfeld L, Torous J, Vahia IV. Data Security and Privacy in Apps for Dementia: An Analysis of Existing Privacy Policies. Am J Geriatr Psychiatry 2017 Aug;25(8):873-877. [CrossRef] [Medline]
  41. Cortez NG, Cohen IG, Kesselheim AS. FDA Regulation of Mobile Health Technologies. N Engl J Med 2014 Jul 24;371(4):372-379. [CrossRef]
  42. Armontrout JA, Torous J, Cohen M, McNiel DE, Binder R. Current Regulation of Mobile Mental Health Applications. J Am Acad Psychiatry Law 2018 Jun;46(2):204-211. [CrossRef] [Medline]
  43. Marks M. Emergent Medical Data: Health Information Inferred by Artificial Intelligence. UC Irvine Law Review 2021;11(4):995-1066 [FREE Full text]
  44. Noordyke M. US State Comprehensive Privacy Law Comparison. International Association of Privacy Professionals. 2020.   URL: https://iapp.org/resources/article/state-comparison-table/ [accessed 2021-01-04]
  45. Mostert M, Koomen B, van Delden J, Bredenoord A. Privacy in Big Data psychiatric and behavioural research: A multiple-case study. Int J Law Psychiatry 2018;60:40-44. [CrossRef] [Medline]
  46. Sunyaev A, Dehling T, Taylor PL, Mandl KD. Availability and quality of mobile health app privacy policies. J Am Med Inform Assoc 2015 Apr;22(e1):e28-e33. [CrossRef] [Medline]
  47. Mulgund P, Mulgund BP, Sharman R, Singh R. The implications of the California Consumer Privacy Act (CCPA) on healthcare organizations: Lessons learned from early compliance experiences. Health Policy and Technology 2021 Sep;10(3):100543. [CrossRef]
  48. Tovino SA. Mobile Research Applications and State Research Laws. J Law Med Ethics 2020 Mar;48(1_suppl):82-86. [CrossRef] [Medline]
  49. National Academies of Sciences, Engineering, and Medicine, Health and Medicine Division, Board on Health Sciences Policy, Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories. In: Downey AS, Busta ER, Mancher M, Botkin JR, editors. Returning Individual Research Results to Participants: Guidance for a New Research Paradigm. Washington DC, US: National Academies Press; Jul 10, 2018.
  50. Wolf SM. Return of individual research results and incidental findings: facing the challenges of translational science. Annu Rev Genomics Hum Genet 2013;14:557-577 [FREE Full text] [CrossRef] [Medline]
  51. Evans BJ, Wolf SM. A Faustian Bargain That Undermines Research Participants' Privacy Rights and Return of Results. Florida Law Review 2019;71(5):1281-1345.
  52. Jarvik GP, Amendola LM, Berg JS, Brothers K, Clayton EW, Chung W, eMERGE Act-ROR CommitteeCERC Committee, CSER Act-ROR Working Group, et al. Return of genomic results to research participants: the floor, the ceiling, and the choices in between. Am J Hum Genet 2014 Jun 05;94(6):818-826 [FREE Full text] [CrossRef] [Medline]
  53. NeCamp T, Sen S, Frank E, Walton MA, Ionides EL, Fang Y, et al. Assessing Real-Time Moderation for Developing Adaptive Mobile Health Interventions for Medical Interns: Micro-Randomized Trial. J Med Internet Res 2020 Mar 31;22(3):e15033 [FREE Full text] [CrossRef] [Medline]
  54. Vaidyam A, Halamka J, Torous J. Actionable digital phenotyping: a framework for the delivery of just-in-time and longitudinal interventions in clinical healthcare. Mhealth 2019;5:25 [FREE Full text] [CrossRef] [Medline]
  55. Roberts JL, Foulkes AL. Genetic Duties. William & Mary Law Review 2020 Nov 21;62(1):143.
  56. Lu CY, Hendricks-Sturrup RM, Mazor KM, McGuire AL, Green RC, Rehm HL. The case for implementing sustainable routine, population-level genomic reanalysis. Genet Med 2020 Apr;22(4):815-816. [CrossRef] [Medline]
  57. Kaplan B. Seeing Through Health Information Technology: The Need for Transparency in Software, Algorithms, Data Privacy, and Regulation. SSRN Journal 2020;7(1):62. [CrossRef]


ELSI: ethical, legal, and social implications
HIPAA: Health Insurance Portability and Accountability Act
IRB: institutional review board
NIH: National Institutes of Health
SOP: standard operating procedure


Edited by R Kukafka; submitted 10.06.21; peer-reviewed by N Martinez-Martin, R Hendricks-Sturrup; comments to author 28.06.21; revised version received 23.08.21; accepted 29.10.21; published 09.02.22

Copyright

©Francis X Shen, Benjamin C Silverman, Patrick Monette, Sara Kimble, Scott L Rauch, Justin T Baker. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 09.02.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.