Viewpoint
Abstract
Despite an ever-expanding number of analytics with the potential to impact clinical care, the field currently lacks point-of-care technological tools that allow clinicians to efficiently select disease-relevant data about their patients, algorithmically derive clinical indices (eg, risk scores), and view these data in straightforward graphical formats to inform real-time clinical decisions. Thus far, solutions to this problem have relied on either bottom-up approaches that are limited to a single clinic or generic top-down approaches that do not address clinical users’ specific setting-relevant or disease-relevant needs. As a road map for developing similar platforms, we describe our experience with building a custom but institution-wide platform that enables economies of time, cost, and expertise. The BRIDGE platform was designed to be modular and scalable and was customized to data types relevant to given clinical contexts within a major university medical center. The development process occurred by using a series of human-centered design phases with extensive, consistent stakeholder input. This institution-wide approach yielded a unified, carefully regulated, cross-specialty clinical research platform that can be launched during a patient’s electronic health record encounter. The platform pulls clinical data from the electronic health record (Epic; Epic Systems) as well as other clinical and research sources in real time; analyzes the combined data to derive clinical indices; and displays them in simple, clinician-designed visual formats specific to each disorder and clinic. By integrating an application into the clinical workflow and allowing clinicians to access data sources that would otherwise be cumbersome to assemble, view, and manipulate, institution-wide platforms represent an alternative approach to achieving the vision of true personalized medicine.
J Med Internet Res 2022;24(2):e34560. doi: 10.2196/34560
Introduction
Precision medicine holds the potential to revolutionize medicine [ - ], just as prior technological advances, such as microscopy, molecular diagnostics, and imaging, have done in the past. In the research realm, big data and artificial intelligence have yielded substantial advances that showcase the potential of precision medicine [ , ]. However, translating these advances into the clinical realm remains a challenge [ , ]. A patient is more likely to interact with complex algorithms informed by big data in the waiting room (ie, algorithms in the form of internet searches, travel directions, or tailored social media) than in the actual clinic. The medical field needs similarly intuitive interfaces that can collate the necessary patient-related data to highlight salient knowledge, pinpoint a patient’s condition, predict optimal therapy, or estimate the risk of disease or death [ ]. Much of the required physical infrastructure is already in place: computers are available in most clinics, and the majority of clinical data are stored in electronic health records (EHRs). A small minority of wealthier clinics and health care systems have built custom, domain-specific interfaces into their EHRs to deliver the more complex precision medicine algorithms and visualizations that their physicians need; however, in the majority of health systems, only the most basic algorithms (eg, those for calculating BMI) are built into the EHR, while other, more sophisticated clinical indices (eg, atrial fibrillation stroke risk [ , ]) are calculated via manual entry into a public website [ ].

The task of translating innovative precision medicine tools from research projects to clinical care is inhibited by a catch-22 problem. To justify the expense of building the costly computational infrastructure required to run complex algorithms on patient data, the algorithms or visualizations need to demonstrate real-world value. However, to evaluate and prove these algorithms’ value, the needed infrastructure must already be in place. One solution to this conundrum is building boutique, single-clinic solutions consisting of carefully designed, specialized algorithms or data displays built within or alongside the EHR [ , ]. Although this bottom-up approach is limited in scope to a single clinical domain and thus can be comparatively quick and cost-effective to implement, scalability and rapid obsolescence are major concerns. To adapt data displays to other clinics, an institution has to maintain, secure, and update an ever-expanding heterogeneous code base across those clinics. Yet, the originating “owners” of these algorithms are often clinical researchers and physicians without the backing of an enterprise-level developer team that is equipped to manage the software as a service over several years of use ( ). The opposite extreme is commercial vendors building generalized health care software suites that run on cloud-based infrastructures. Such centralized solutions address the scalability challenges of bottom-up approaches, but the emerging health system–wide products are typically far too generic to meet the medically heterogeneous and shifting requirements of individual clinics. Furthermore, adopting such solutions requires substantial institutional investment, and becoming locked into a single vendor in a rapidly evolving marketplace poses a risk.

Between these two extremes exists a third solution that solves many of the aforementioned problems. Institution-wide platforms permit rapid innovation in parallel across multiple clinics but are built on a single secure, stable, and cost-efficient technological foundation. These platforms benefit from a common architecture built within an institutional firewall with real-time EHR access and application programming interfaces (APIs) to major (eg, REDCap [Research Electronic Data Capture; Vanderbilt University] and Radiology PACS [Picture Archiving and Communication System]) and custom data resources, which facilitate the integration of multimodal research data across all specialties. Yet, these platforms also incorporate clinic-specific visualization tools that allow clinicians to tailor the display of information. Therefore, specific research discoveries can be rapidly translated into clinical tools that fit each specialty ( ). This approach strikes a balance between the fast development and flexibility of single-clinic solutions and the scalability and sustainability of centralized health care solutions while optimizing transparent institutional oversight.

The BRIDGE platform at the University of California, San Francisco (UCSF), is one example of this approach. Based on our experience with developing BRIDGE, we describe key considerations and practical steps for implementing institution-wide solutions in this rapidly progressing field to provide a road map for other health care systems considering a similar approach. We also consider future developments that will enable the medical community to quickly and comprehensively realize the potential of computational medicine to improve the lives of patients.
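To ground the kind of clinical index at issue, the sketch below computes the CHA2DS2-VASc stroke risk score mentioned above from structured patient data. It is a minimal illustration in TypeScript; the patient structure is an assumption for the example, not any real EHR schema. The point is simply that such indices become trivially computable once the underlying data are programmatically accessible, rather than requiring manual transcription into a public calculator.

```typescript
// Hypothetical patient features; field names are illustrative, not an EHR schema.
interface AfibPatient {
  age: number;
  sexFemale: boolean;
  congestiveHeartFailure: boolean;
  hypertension: boolean;
  diabetes: boolean;
  priorStrokeTiaOrThromboembolism: boolean;
  vascularDisease: boolean;
}

// CHA2DS2-VASc: 1 point each for CHF, hypertension, diabetes, vascular disease,
// female sex, and age 65-74; 2 points each for age >=75 and for prior
// stroke/TIA/thromboembolism.
function cha2ds2Vasc(p: AfibPatient): number {
  let score = 0;
  if (p.congestiveHeartFailure) score += 1;
  if (p.hypertension) score += 1;
  if (p.age >= 75) score += 2;
  else if (p.age >= 65) score += 1;
  if (p.diabetes) score += 1;
  if (p.priorStrokeTiaOrThromboembolism) score += 2;
  if (p.vascularDisease) score += 1;
  if (p.sexFemale) score += 1;
  return score;
}

console.log(cha2ds2Vasc({
  age: 78, sexFemale: true, congestiveHeartFailure: false,
  hypertension: true, diabetes: false,
  priorStrokeTiaOrThromboembolism: false, vascularDisease: false,
})); // 4 (age >=75: 2, female sex: 1, hypertension: 1)
```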
Consideration 1: Human-Centered Design
Overview of the Human-Centered Design of Precision Medicine Tools
For a precision medicine tool to be adopted in a clinic, it needs to provide pertinent, actionable information in a format that is appropriate to the user (either a clinician or a patient). Therefore, perhaps the most essential components of effective precision medicine tool deployment are the principles and phases of human-centered design (HCD) [ - ]. For tools targeted at medical professionals, well-informed clinician users should be at the center of decisions about which technological format is the most appropriate for their workflow, which innovations in their specialty are scientifically ready for deployment at clinics, and how evaluations of tool effectiveness should be conducted to justify the continued use of such tools ( ). Many of these decisions reflect the dimensions of precision medicine, as articulated in a recent scoping review [ ].
Key decisions in designing a digital application for clinical research.
Key questions
- Who are the users (eg, clinicians, patients, and specialists)?
- What do the users need (eg, novel data sources, novel algorithms, novel visualization, and data collation)?
- How will it improve care (eg, patient experience, clinic efficiency, morbidity, and mortality)?
- How does the user access the application (eg, individual log-in and authorization via an existing clinical system)?
- Where is it hosted (eg, local server, cloud-based server, or external vendor)?
- What is the maintenance schedule (eg, 9 AM to 5 PM on Monday to Friday or 24 hours per day year-round)?
- What are the constraints of the system (eg, no write access to the electronic health record, data kept behind an institutional firewall)?
Practical Considerations From the BRIDGE Experience
From its inception, BRIDGE exemplified both the principles and phases of HCD [ , ]. It was conceptualized and designed according to the requirements of clinician scientists, including the project’s principal investigators (manuscript authors RB, KPR, and SJS). Further, the key architectural decisions ( ) were made by applying HCD principles to engage clinician, patient, scientific, programming, design, industry, and institutional stakeholders.

The three HCD phases are also deployed in the iterative process of adapting BRIDGE to each new clinic that is interested in a BRIDGE dashboard ( ). In the “Inspiration” phase, the BRIDGE clinician scientists and programmers identify and meet with a small number of clinician champions to collaboratively define the problems that must be solved to improve care in those champions’ clinic. They also generate ideal use cases based on that clinic’s workflow, specifying, for example, data types, data sources, and visualizations. In the “Ideation” phase, a design mock-up is shared with a broader set of intended stakeholders from that clinic to obtain their input, after which the final set of minimum viable product (MVP) specifications is derived for the dashboard, and programming begins on the jointly approved design mock-up. The finalized MVP is built in the “Implementation” phase, during which early testing is conducted by a small superuser group of clinicians who generate feedback about bugs and minor refinements. These clinic domain experts are the primary drivers for designing and conducting formal evaluations of their precision medicine tools, which include clinician users’ feedback about dashboard ease, utility, and fidelity; patients’ satisfaction with care; impacts on workflow, including automated click tracking; and longer-term analyses of the clinical impact, value, and cost-effectiveness of these tools. Clinical validation, technological or therapeutic innovation, or user demand may prompt further cycles of design.
HCD: Future Directions
Because the back-end infrastructure of an institution-wide platform is unified, only 1 set of regulatory approvals is needed, which reduces the cost and time required to develop each front-end tool and allows multiple tools to be developed in parallel ( ). However, given the number of medical specialties, clinical scenarios, disorders, and algorithms across a health care system, engaging in this intensive HCD process with each new clinic will not be cost-effective in the long term. Instead, a library of existing data sources and graphical interfaces could be generated, and clinicians (or patients, in the event of a patient-facing version) could draw on this library to design their own dashboards, thereby freeing programmers to concentrate on developing new modular interfaces and data sources. Generating more universal standards for describing clinical dashboards and their connections to APIs and EHRs could ease the deployment of dashboards across a wide range of health care platforms. Containerization, the Substitutable Medical Applications and Reusable Technologies (SMART) on Fast Healthcare Interoperability Resources (FHIR) API, and the Epic App Orchard (Epic Systems) represent important steps in this direction, but substantial scope for further standardization remains. The adoption of this type of adaptable clinical dashboard at scale would provide sufficient data for iteratively testing and improving performance, resulting in a second, data-driven evaluation phase that focuses on surveys and click data. As the scale of data grows, especially across institutions, a third design phase based on both clinical outcomes and user experience will become possible.
Consideration 2: Technological Design
Common Approaches to the Technological Design of Digital Health Tools
The architecture of most digital health tools involves a connection among back-end databases, middleware software algorithms that convert the data into useful knowledge, and front-end displays for users ( ). Both single-clinic and centralized solutions are often hard-coded to represent a specific data source and visualization type, which slows the development of novel iterations and results in higher overall costs. A more efficient solution is to build a framework of reusable APIs that connects a growing number of data sources, computational algorithms, and modular visualization schematics and is adaptable and scalable to diverse types of medical data and clinical specialties, as sketched below.
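As a minimal sketch of this reusable-API idea (all names here are illustrative assumptions, not BRIDGE’s actual interfaces), each data source implements one narrow adapter contract, and each derived index declares which sources it consumes; new sources and algorithms can then be added without touching the front end.

```typescript
// One narrow contract per data source (EHR, REDCap, PACS, etc.).
interface DataSourceAdapter {
  id: string;
  fetch(patientId: string): Promise<Record<string, unknown>>;
}

// A computational algorithm that turns raw source data into a clinical index.
interface Derivation {
  id: string;
  inputs: string[]; // ids of the adapters this algorithm consumes
  compute(inputs: Record<string, unknown>[]): Record<string, unknown>;
}

// Assemble one payload for the front end: raw source data plus derived indices.
async function buildPayload(
  patientId: string,
  adapters: DataSourceAdapter[],
  derivations: Derivation[],
): Promise<Record<string, unknown>> {
  const bySource = new Map<string, Record<string, unknown>>();
  for (const adapter of adapters) {
    bySource.set(adapter.id, await adapter.fetch(patientId));
  }
  const payload: Record<string, unknown> = Object.fromEntries(bySource);
  for (const d of derivations) {
    payload[d.id] = d.compute(d.inputs.map((id) => bySource.get(id) ?? {}));
  }
  return payload;
}
```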
Practical Considerations From the BRIDGE Experience
Overview of Practical Considerations
The BRIDGE platform was designed as a proof-of-principle MVP scaffold that could be developed efficiently and quickly but later refined and scaled up depending on its success and the collaborative opportunities generated. The HCD process made clear the following four key technical requirements: (1) it had to permit access to a variety of data sources (ie, beyond the EHR), which could then be either displayed directly or processed through computationally intensive algorithms [ ]; (2) it needed to visualize these data in an intuitive, actionable manner embedded in the clinical workflow, so that it was not cumbersome for clinicians to access or operate; (3) following logically from the second requirement, it needed the ability to launch directly from the EHR; and (4) it had to be as modular as possible to make iterative clinic-by-clinic customizations easier and more efficient to program.
Data Sources
Many data types contribute to precision care. To build a data foundation for BRIDGE that would best meet the needs of a variety of clinical use cases, we opted to include real-time clinical data from the EHR; minimally processed data from widely available data platforms (REDCap and Qualtrics [Qualtrics International Inc]) [ ]; data from institutional tools (eg, TabCAT [Tablet-Based Cognitive Assessment Tool; UCSF]) [ ] and research databases [ ]; and complex data that either cannot currently be hosted in the standard EHR or must first run through computationally intensive analytics pipelines ( ). For example, images from the Radiology PACS can be obtained ahead of time based on scheduled appointments, thereby allowing time for computationally intensive image processing pipelines to run prior to a patient appointment. Further innovations requiring advanced data processing include accessing expansive knowledge networks to compute precise clinical risk and treatment predictions [ ]. As the convergence point of so many sensitive data streams, BRIDGE required robust front-end and back-end architectures that were unified around security and hosted within the UCSF firewall.
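The prefetch pattern described above can be stated compactly. In this hypothetical sketch, the appointment and job shapes and the 12-hour pipeline budget are assumptions for illustration; the idea is simply to schedule heavy imaging pipelines so they finish before each upcoming visit.

```typescript
// Illustrative sketch: run heavy imaging pipelines ahead of scheduled visits
// so results are ready at the point of care. Shapes are hypothetical.
interface Appointment {
  patientId: string;
  start: Date;
}

interface ImagingJob {
  patientId: string;
  runBy: Date; // latest time the pipeline must finish
}

const PIPELINE_HOURS = 12; // assumed pipeline runtime budget

function planPrefetch(appointments: Appointment[], now: Date): ImagingJob[] {
  return appointments
    // Only appointments far enough out to complete the pipeline in time.
    .filter((a) => a.start.getTime() - now.getTime() > PIPELINE_HOURS * 3600_000)
    .map((a) => ({
      patientId: a.patientId,
      runBy: new Date(a.start.getTime() - PIPELINE_HOURS * 3600_000),
    }))
    // Most urgent deadlines first.
    .sort((x, y) => x.runBy.getTime() - y.runBy.getTime());
}
```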
Workflow Fit and EHR Integration
A fundamental requirement for BRIDGE was that, to give clinicians actionable information during patient encounters, it had to launch directly from patients’ records in the EHR (ie, Epic; Epic Systems) and pull their clinical data in real time ( ). This drove the central technical decision to design BRIDGE as a SMART on FHIR application. Launching from the EHR yielded additional clinical workflow benefits: discrete data could be collected at the point of care by using clinic-specific EHR Flowsheets and SmartForms (sharable across institutions), and data could then be pulled into clinical notes. Direct flowsheet data entry also allows BRIDGE to call and visualize discrete research data during clinic visits more efficiently. Enabling this launch functionality required interactions with the EHR development group and resources for funding their modifications to the EHR.
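For orientation, the sketch below shows the general shape of a SMART on FHIR EHR launch using the open-source fhirclient JavaScript library. The client id and the specific query are placeholders, and the read-only scopes mirror BRIDGE’s no-write-access constraint (described under Consideration 3); this is a minimal sketch under those assumptions, not BRIDGE’s production code.

```typescript
import FHIR from "fhirclient";

// launch.ts -- the EHR opens this endpoint with ?iss=...&launch=...;
// fhirclient then redirects through the SMART authorization flow.
FHIR.oauth2.authorize({
  clientId: "bridge-dashboard", // hypothetical registered client id
  scope: "launch patient/*.read openid fhirUser", // read-only scopes
  redirectUri: "index.html",
});

// index.ts -- after the redirect, obtain an authorized client bound to the
// patient whose chart the clinician launched from, then pull data in real time.
FHIR.oauth2.ready().then(async (client) => {
  const patient = await client.patient.read();
  // LOINC 29463-7 (body weight) as one example of discrete clinical data.
  const weights = await client.request(
    `Observation?patient=${client.patient.id}&code=29463-7&_sort=-date`,
  );
  console.log(patient.id, weights);
});
```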
Modular Design
BRIDGE was designed to capitalize on a common language of clinical information flow through the creation of core widgets, or visualization modules, that can be adapted to an expanding array of clinical scenarios ( ). At the time of BRIDGE MVP deployment, we had programmed the following four reusable core widgets: (1) longitudinal clinical course in the context of treatment, (2) cross-sectional metrics, (3) specialty-focused laboratory data, and (4) quantitative neuroimaging. Both the cross-sectional and longitudinal widgets allow patients’ scores and metrics to be contextualized against a larger reference cohort that indicates both normal and abnormal values as well as percentile calculations, thus allowing a patient’s clinical status to be interpreted by a clinician at a glance ( ). We were able to convert existing precision medicine tools, such as the UCSF Multiple Sclerosis BioScreen longitudinal viewer [ ] and the UCSF Brainsight magnetic resonance image processing and visualization tool [ ], into these initial BRIDGE widgets. The configuration data for all viewers are stored by BRIDGE, which queries these data in real time and then renders the specified widgets and data sources for the clinician. Updates to the configuration can be made quickly when existing dashboards need to be adapted, thus enabling both ongoing user engagement and rapid deployment to meet the evolving needs of specific clinics. As we expand to other clinics, we are developing new widgets (eg, geolocation and genomics) that can be retroactively made available to existing clinics.
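To illustrate this configuration-driven approach (all shapes and names below are illustrative assumptions, not BRIDGE’s actual schema), each core widget registers a single render function, and a clinic’s dashboard is then pure data; adapting the platform to a new clinic means writing configuration rather than code.

```typescript
// The four core widget kinds described above.
type WidgetKind =
  | "longitudinalCourse"
  | "crossSectionalMetrics"
  | "labPanel"
  | "neuroimaging";

interface WidgetConfig {
  kind: WidgetKind;
  title: string;
  source: string; // eg, a FHIR search or a research-database query
}

interface ClinicDashboardConfig {
  clinic: string;
  widgets: WidgetConfig[];
}

// Each core widget registers one render function; adding a clinic then
// means adding configuration, not code.
const registry: Record<WidgetKind, (c: WidgetConfig) => string> = {
  longitudinalCourse: (c) => `<section>${c.title}: timeline</section>`,
  crossSectionalMetrics: (c) => `<section>${c.title}: percentiles</section>`,
  labPanel: (c) => `<section>${c.title}: labs</section>`,
  neuroimaging: (c) => `<section>${c.title}: images</section>`,
};

function renderDashboard(config: ClinicDashboardConfig): string {
  return config.widgets.map((w) => registry[w.kind](w)).join("\n");
}

// Example: a multiple sclerosis dashboard assembled purely from configuration.
console.log(renderDashboard({
  clinic: "multiple-sclerosis",
  widgets: [
    { kind: "longitudinalCourse", title: "Disease course", source: "Condition?patient={id}" },
    { kind: "neuroimaging", title: "Brain MRI metrics", source: "pacs:MR" },
  ],
}));
```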
Technological Design: Future Directions
Two architectural changes can be envisioned. The first is integration with a middleware platform. BRIDGE is currently connected to multiple data sources through direct API integrations, and connecting to additional APIs necessitates modifying the codebase. Using a platform that aggregates APIs would reduce maintenance effort and promote greater stability. Examples of such platforms, which already include EHR data, exist (eg, Human API [ ]). The second architectural change is creating a graphical user interface (GUI) that clinicians can use to create their own dashboards. Currently, dashboard configuration is done by the BRIDGE development team. Building a GUI that allows clinicians to configure and customize their dashboards would accelerate progress and allow clinicians without programming experience to access relevant data sources. Such an endeavor will likely require integrating institution-wide and centralized platforms, to the benefit of both. The resulting unified platforms would likely combine generic, cloud-based back-end and middleware components while delivering the customized, clinic-specific, front-end dashboards designed by clinicians through the GUI. Overall, BRIDGE aims to augment, not supplant, the EHR; should an institution’s visualization show clinical value, the institution could choose to maintain it in BRIDGE or integrate it into their EHR more permanently.

Innovations are also needed to improve data quality in the EHR, including tools that systematically flag likely data entry errors, simplify the correction of the EHR by a clinician, and ensure that corrections are distributed to all clinical tools (see the sketch below). Finally, to demonstrate that these tools comply with the Health Insurance Portability and Accountability Act (HIPAA) or equivalent guidelines, a cross-institutional body responsible for testing and validating these solutions could be created. Such a body might accelerate progress substantially by, for example, supporting cloud-based, HIPAA-compliant, off-the-shelf solutions that ease this data quality burden.
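As one hypothetical example of such a data quality tool, the sketch below flags vital-sign entries that fall outside plausibility ranges and queues them for clinician review; the ranges and record shape are assumptions for illustration, not clinical guidance.

```typescript
// Illustrative sketch of automated data-entry flagging.
interface VitalReading {
  patientId: string;
  kind: "heightCm" | "weightKg" | "systolicMmHg";
  value: number;
}

// Assumed plausibility ranges per vital sign: [low, high].
const PLAUSIBLE: Record<VitalReading["kind"], [number, number]> = {
  heightCm: [40, 230],
  weightKg: [2, 350],
  systolicMmHg: [50, 260],
};

// Return readings outside their plausibility range for clinician review.
function flagLikelyErrors(readings: VitalReading[]): VitalReading[] {
  return readings.filter((r) => {
    const [lo, hi] = PLAUSIBLE[r.kind];
    return r.value < lo || r.value > hi;
  });
}

// A height of 17 cm is almost certainly 170 cm mis-keyed.
console.log(flagLikelyErrors([
  { patientId: "p1", kind: "heightCm", value: 17 },
  { patientId: "p1", kind: "weightKg", value: 72 },
])); // flags only the height reading
```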
Consideration 3: Regulation and Policy
Launching a clinical application with real-time access to identified patient health data requires close institutional oversight and multiple stages of regulatory approval; because such applications are innovative, clear institutional road maps and leadership structures are often lacking.
Practical Considerations From the BRIDGE Experience
When we developed the BRIDGE MVP, the Epic EHR and SMART on FHIR technological capabilities were already available within our institution, but multiple security, privacy, technological, and compliance concerns had to be addressed. Specifically, authorizing an expandable, cross-specialty, modular platform rather than a domain- and clinic-specific tool was entirely novel and necessitated parallel revisions to the approval process itself. Early in the design process, we set clear functional constraints that would reduce the barriers to institutional approval. Foremost among these were (1) conceptualizing BRIDGE as a clinical research tool that is custom designed with clinical specialists rather than as an institution-wide, enterprise-level clinical solution; (2) not requesting write access to the EHR (real-time read access was enabled); and (3) ensuring that data do not leave the institutional firewall. With an approved clinical research platform in place, the bar for institutional approval is substantially lower for subsequent clinical dashboards that iterate on the initial design, reducing this multi-month process to a simple, clinic-specific sign-off ( ). Further approval is required for applications that add novel functionalities or revisit one of the major system constraints (eg, sending data to an external server).
Regulation and Policy: Future Directions
BRIDGE provides a mechanism for rapidly deploying novel precision medicine algorithms and visualizations developed by clinical researchers [ - ] and evaluating their clinical benefit [ ]. As the system expands and more clinical visualizations become the standard of care, medical centers may eventually choose to move the fundamental infrastructure of their institution-wide platforms from an MVP clinical research entity, such as BRIDGE, to a full, enterprise-level clinical system that delivers the same capabilities at a higher level of reliability [ , ]. This shift will be driven by a number of considerations, including the need for professional-level version control and releases; automated testing and quality control; the capacity for multilevel monitoring, logging, and auditing; and the ability to handle high user volumes without concurrency issues. The institution will also need to ensure that adequate personnel infrastructure stands behind the system to permit sustainable 24-hour user support and timely design and adaptation for new clinics. In the end, all stakeholders must be able to trust the reliability and clinical value of the final platform and the sustainability of the system supporting it [ ]. For many such algorithms, moving along the continuum from clinical research to enterprise clinical care may well necessitate regulatory approval from the Food and Drug Administration Center for Devices and Radiological Health [ ], as spelled out in its Digital Health Innovation Action Plan, and alignment with the international Software as a Medical Device guidelines through the International Medical Device Regulators Forum.
Consideration 4: Evaluation and Impact
Pathway to Evaluation
Technological innovations in health care will ultimately be evaluated in terms of their impacts on patients, clinicians, data, and payors. In the near term, this requires evaluating a tool’s interpretability and fidelity, that is, whether clinicians and patients like, understand, and use the tool and whether the use of the tool improves patients’ experiences within the health system [ , , ]. Making even the most complex algorithms visually digestible and actionable will be a key evaluation criterion [ ]. To this end, before measuring the clinical impact of each BRIDGE dashboard, we ensure that it meets key drivers of clinical adoption. We use the Health Information Technology Usability Evaluation Model [ ] to evaluate at least 15 patients’ and 8 clinicians’ perceptions of the usefulness [ , ], ease of use [ , ], actionability [ ], and likability [ ] of each clinical dashboard. Low-scoring items (ie, those for which <80% of respondents state “agree”) trigger another round of iterative development.
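This decision rule is simple enough to state in a few lines. In the illustrative sketch below (the survey record shape is an assumption), any item on which fewer than 80% of respondents agree is returned for another development iteration.

```typescript
// Per-item tally of "agree" responses from the usability survey.
interface ItemResponses {
  item: string;       // eg, "ease of use"
  agreeCount: number; // respondents answering "agree"
  totalCount: number;
}

// Items where fewer than 80% of respondents agree trigger another
// iteration of development.
function itemsNeedingRework(responses: ItemResponses[]): string[] {
  return responses
    .filter((r) => r.agreeCount / r.totalCount < 0.8)
    .map((r) => r.item);
}

console.log(itemsNeedingRework([
  { item: "usefulness", agreeCount: 21, totalCount: 23 },    // 91%: keep
  { item: "actionability", agreeCount: 17, totalCount: 23 }, // 74%: rework
])); // ["actionability"]
```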
Evaluation and Impact: Future Directions
The impact of a dashboard like BRIDGE on clinical research and, eventually, care can be evaluated through in silico trials addressing a variety of clinical questions ( ). A near-term goal may be to compare users’ preferences between 2 types of symptom displays or to evaluate the impact of BRIDGE on workflow efficiency (eg, determining whether use of the tool reduces the overall time spent “clicking” through a patient’s chart). Medium-term goals may be to refine a series of treatment action prompts that could yield a clinical decision support tool or to compare the effects of 2 different prediction algorithms on the risk of rehospitalization after a cardiac event. In the long term, changes in clinical outcomes [ , , ] with clear health economic implications, such as reductions in the time to accurate diagnosis, rehospitalization, disability progression, morbidity, or death, will be directly relevant to an institution’s assessment of a tool’s utility.
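As a hypothetical sketch of the near-term workflow measurement described above (the log format is an assumption), automated click tracking reduces to comparing mean chart navigation time between visits conducted with and without the dashboard.

```typescript
// Hypothetical per-visit click-tracking record.
interface VisitLog {
  usedDashboard: boolean;
  chartSeconds: number; // time spent clicking through the chart
}

// Mean chart navigation time for visits with or without the dashboard.
function meanChartTime(logs: VisitLog[], usedDashboard: boolean): number {
  const subset = logs.filter((l) => l.usedDashboard === usedDashboard);
  return subset.reduce((sum, l) => sum + l.chartSeconds, 0) / subset.length;
}

const logs: VisitLog[] = [
  { usedDashboard: true, chartSeconds: 95 },
  { usedDashboard: true, chartSeconds: 120 },
  { usedDashboard: false, chartSeconds: 210 },
  { usedDashboard: false, chartSeconds: 185 },
];

console.log(meanChartTime(logs, true), meanChartTime(logs, false)); // 107.5 197.5
```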
Discussion
Determining whether big data analytics will truly disrupt clinical care depends on providing clinicians with access to the results of these analytics. In this paper, we describe one approach to overcoming the technical hurdle of making algorithms clinically available: the development of BRIDGE, an example of an institution-wide platform that allows for substantial clinic-specific customization. From the outset, BRIDGE was designed by intended users who worked closely with stakeholders, through an HCD process, to develop a structured and modular solution ( ) that could be scaled and customized to specific clinic use cases in a cost- and time-efficient manner ( ). The resulting platform addresses clinicians’ requests to reduce data overload and more precisely tailor the data that they use during clinical encounters. The lessons learned from building an institution-wide digital medicine platform include not only the importance of using HCD but also the importance of engaging with institutional partners and leadership early to collaboratively and transparently navigate the long and arduous process of obtaining regulatory and security approval.

Based on our experiences, we propose that the development of similar platforms at other institutions is an efficient way to accelerate the testing of digital health algorithms in clinics. To reduce the burden of this undertaking, other academic clinical centers could use all or part of the BRIDGE platform code to create their own instances, especially if these centers use Epic, although regulatory approval and software integration steps would still be needed to make BRIDGE available within their EHRs. Additional developments could simplify this further, including sharing aspects of BRIDGE through centralized application stores, such as the Epic App Orchard, as well as creating centralized security audits and certifications that allow software to be vetted thoroughly once rather than at each new institution. Such centralization could be achieved by a federal initiative, a nationwide nonprofit society, or commercial vendors. For example, commercial vendors could provide institutions with centralized platforms offering cloud-based computational resources, data access, security, and certification while clinicians and scientists develop dashboards and algorithms that run on these platforms. BRIDGE provides a way to immediately develop and test these dashboards and algorithms in preparation for this future.
The potential of precision medicine will only be realized when the utility of the algorithms developed in this field can be evaluated at the point of care with real patients. Performing this testing requires substantial infrastructure development, which is hard to justify in the evaluation phase. Modular, scalable, institution-wide platforms, such as BRIDGE, represent one approach to resolving this catch-22 problem by providing an efficient mechanism for rapidly and cost-effectively deploying and evaluating new algorithms in clinics. Such a mechanism effectively serves as a bridge for translating research innovations into clinical tools.
Acknowledgments
The authors thank the funders (UCSF Weill Institute for Neurosciences); Dr Matthew State, for leadership of BRIDGE deployment in Psychiatry; Drs Ida Sim and Jason Satterfield, for iteration of a clinical action prompt tool; Michael Schaffer, for early BRIDGE architecture consultation; and Cosmo Mielke, for the w-map viewer in the UCSF Brainsight app.
Conflicts of Interest
RB receives research support from National Institutes of Health (NIH), California Initiative to Advance Precision Medicine, National Multiple Sclerosis Society (Harry Weaver Award), Hilton Foundation, and Sherak Foundation as well as Biogen, Novartis, and Roche Genentech. RB also receives scientific advisory board and consulting fees from Alexion, Biogen, EMD Serono, Genzyme Sanofi, Novartis, and Roche Genentech. BLM receives royalties from Guilford Press, Cambridge University Press, Johns Hopkins Press, and Oxford University Press and grant support from NIH and the Bluefield Project to Cure Frontotemporal Dementia. SLH serves on the scientific advisory boards of Accure, Alector, Annexon, and Molecular Stethoscope and the board of directors for Neurona. SLH has received travel reimbursement and writing support from Roche and Novartis for CD20-related meetings and presentations. KPR receives research funding from NIH, Quest Diagnostics, the Marcus Family Foundation, and the Rainwater Charitable Foundation. ES, PS, MG, SML, AB, and SJS have no conflicts of interest to declare.
References
- Avram R, Olgin JE, Kuhar P, Hughes JW, Marcus GM, Pletcher MJ, et al. A digital biomarker of diabetes from smartphone-based vascular signals. Nat Med 2020 Oct;26(10):1576-1582 [FREE Full text] [CrossRef] [Medline]
- Cook DA, Enders F, Caraballo PJ, Nishimura RA, Lloyd FJ. An automated clinical alert system for newly-diagnosed atrial fibrillation. PLoS One 2015 Apr 07;10(4):e0122153. [CrossRef] [Medline]
- Bean DM, Teo J, Wu H, Oliveira R, Patel R, Bendayan R, et al. Semantic computational analysis of anticoagulation use in atrial fibrillation from real world data. PLoS One 2019 Nov 25;14(11):e0225625. [CrossRef] [Medline]
- Shah P, Kendall F, Khozin S, Goosen R, Hu J, Laramie J, et al. Artificial intelligence and machine learning in clinical development: a translational perspective. NPJ Digit Med 2019 Jul 26;2:69 [FREE Full text] [CrossRef] [Medline]
- Eaneff S, Obermeyer Z, Butte AJ. The case for algorithmic stewardship for artificial intelligence and machine learning technologies. JAMA 2020 Oct 13;324(14):1397-1398. [CrossRef] [Medline]
- Wiljer D, Hakim Z. Developing an artificial intelligence-enabled health care practice: Rewiring health care professions for better care. J Med Imaging Radiat Sci 2019 Dec;50(4 Suppl 2):S8-S14. [CrossRef] [Medline]
- Afzal M, Islam SMR, Hussain M, Lee S. Precision medicine informatics: Principles, prospects, and challenges. IEEE Access 2020 Jan 13;8:13593-13612 [FREE Full text] [CrossRef]
- Grouin C, Deléger L, Rosier A, Temal L, Dameron O, Van Hille P, et al. Automatic computation of CHA2DS2-VASc score: information extraction from clinical texts for thromboembolism risk assessment. AMIA Annu Symp Proc 2011;2011:501-510 [FREE Full text] [Medline]
- Melgaard L, Gorst-Rasmussen A, Lane DA, Rasmussen LH, Larsen TB, Lip GYH. Assessment of the CHA2DS2-VASc score in predicting ischemic stroke, thromboembolism, and death in patients with heart failure with and without atrial fibrillation. JAMA 2015 Sep 08;314(10):1030-1038. [CrossRef] [Medline]
- CHA2DS2-VASc score for atrial fibrillation stroke risk. MDCalc. URL: https://www.mdcalc.com/cha2ds2-vasc-score-atrial-fibrillation-stroke-risk [accessed 2020-09-24]
- Wang T, Oliver D, Msosa Y, Colling C, Spada G, Roguski Ł, et al. Implementation of a real-time psychosis risk detection and alerting system based on electronic health records using CogStack. J Vis Exp 2020 May 15;(159) [FREE Full text] [CrossRef] [Medline]
- Gourraud P, Henry RG, Cree BAC, Crane JC, Lizee A, Olson MP, et al. Precision medicine in chronic disease management: The multiple sclerosis BioScreen. Ann Neurol 2014 Nov;76(5):633-642 [FREE Full text] [CrossRef] [Medline]
- Design Kit: The human-centered design toolkit. IDEO. URL: https://www.ideo.com/post/design-kit [accessed 2022-01-06]
- Matheson GO, Pacione C, Shultz RK, Klügl M. Leveraging human-centered design in chronic disease prevention. Am J Prev Med 2015 Apr;48(4):472-479. [CrossRef] [Medline]
- Bove R, Bruce CA, Lunders CK, Pearce JR, Liu J, Schleimer E, et al. Electronic health record technology designed for the clinical encounter: MS NeuroShare. Neurol Clin Pract 2021 Aug;11(4):318-326. [CrossRef] [Medline]
- Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009 Apr;42(2):377-381 [FREE Full text] [CrossRef] [Medline]
- Possin KL, Moskowitz T, Erlhoff SJ, Rogers KM, Johnson ET, Steele NZR, et al. The brain health assessment for detecting and diagnosing neurocognitive disorders. J Am Geriatr Soc 2018 Jan;66(1):150-156 [FREE Full text] [CrossRef] [Medline]
- University of California, San Francisco MS-EPIC Team, Cree BAC, Gourraud P, Oksenberg JR, Bevan C, Crabtree-Hartman E, et al. Long-term evolution of multiple sclerosis disability in the treatment era. Ann Neurol 2016 Oct;80(4):499-510 [FREE Full text] [CrossRef] [Medline]
- Nelson CA, Bove R, Butte AJ, Baranzini SE. Embedding electronic health records onto a knowledge network recognizes prodromal features of multiple sclerosis and predicts diagnosis. J Am Med Inform Assoc 2021 Dec 16:ocab270. [CrossRef] [Medline]
- Memory and Aging Center. University of California, San Francisco. URL: https://memory.ucsf.edu/ [accessed 2022-01-24]
- Human API. Human API. URL: https://www.humanapi.co/ [accessed 2022-01-24]
- Olgin JE, Lee BK, Vittinghoff E, Morin DP, Zweibel S, Rashba E, et al. Impact of wearable cardioverter-defibrillator compliance on outcomes in the VEST trial: As-treated and per-protocol analyses. J Cardiovasc Electrophysiol 2020 May;31(5):1009-1018. [CrossRef] [Medline]
- Norgeot B, Glicksberg BS, Trupin L, Lituiev D, Gianfrancesco M, Oskotsky B, et al. Assessment of a deep learning model based on electronic health record data to forecast clinical outcomes in patients with rheumatoid arthritis. JAMA Netw Open 2019 Mar 01;2(3):e190606 [FREE Full text] [CrossRef] [Medline]
- Hong JC, Eclov NCW, Dalal NH, Thomas SM, Stephens SJ, Malicki M, et al. System for High-Intensity Evaluation During Radiation Therapy (SHIELD-RT): A prospective randomized study of machine learning-directed clinical evaluations during radiation and chemoradiation. J Clin Oncol 2020 Nov 01;38(31):3652-3661. [CrossRef] [Medline]
- Panch T, Pollard TJ, Mattie H, Lindemer E, Keane PA, Celi LA. "Yes, but will it work for patients?" Driving clinically relevant research with benchmark datasets. NPJ Digit Med 2020 Jun 19;3:87 [FREE Full text] [CrossRef] [Medline]
- Parikh RB, Obermeyer Z, Navathe AS. Regulation of predictive analytics in medicine. Science 2019 Feb 22;363(6429):810-812 [FREE Full text] [CrossRef] [Medline]
- Choudhury A, Asan O. Role of artificial intelligence in patient safety outcomes: Systematic literature review. JMIR Med Inform 2020 Jul 24;8(7):e18599 [FREE Full text] [CrossRef] [Medline]
- Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: Focus on clinicians. J Med Internet Res 2020 Jun 19;22(6):e15154 [FREE Full text] [CrossRef] [Medline]
- Allen B. The role of the FDA in ensuring the safety and efficacy of artificial intelligence software and devices. J Am Coll Radiol 2019 Feb;16(2):208-210. [CrossRef] [Medline]
- Shinners L, Aggar C, Grace S, Smith S. Exploring healthcare professionals' understanding and experiences of artificial intelligence technology use in the delivery of healthcare: An integrative review. Health Informatics J 2020 Jun;26(2):1225-1236 [FREE Full text] [CrossRef] [Medline]
- Brown W 3rd, Yen PY, Rojas M, Schnall R. Assessment of the Health IT Usability Evaluation Model (Health-ITUEM) for evaluating mobile health (mHealth) technology. J Biomed Inform 2013 Dec;46(6):1080-1087 [FREE Full text] [CrossRef] [Medline]
- Yen P, Sousa KH, Bakken S. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results. J Am Med Inform Assoc 2014 Oct;21(e2):e241-e248 [FREE Full text] [CrossRef] [Medline]
- Schnall R, Cho H, Liu J. Health Information Technology Usability Evaluation Scale (Health-ITUES) for usability assessment of mobile health technology: Validation study. JMIR Mhealth Uhealth 2018 Jan 05;6(1):e4 [FREE Full text] [CrossRef] [Medline]
- Mathews SC, McShea MJ, Hanley CL, Ravitz A, Labrique AB, Cohen AB. Digital health: a path to validation. NPJ Digit Med 2019 May 13;2:38 [FREE Full text] [CrossRef] [Medline]
Abbreviations
API: application programming interface
EHR: electronic health record
FHIR: Fast Healthcare Interoperability Resources
GUI: graphical user interface
HCD: human-centered design
HIPAA: Health Insurance Portability and Accountability Act
MVP: minimum viable product
NIH: National Institutes of Health
PACS: Picture Archiving and Communication System
REDCap: Research Electronic Data Capture
SMART: Substitutable Medical Applications and Reusable Technologies
TabCAT: Tablet-Based Cognitive Assessment Tool
UCSF: University of California, San Francisco
Edited by G Eysenbach; submitted 29.10.21; peer-reviewed by M Afzal; comments to author 25.11.21; revised version received 17.12.21; accepted 22.12.21; published 15.02.22
Copyright©Riley Bove, Erica Schleimer, Paul Sukhanov, Michael Gilson, Sindy M Law, Andrew Barnecut, Bruce L Miller, Stephen L Hauser, Stephan J Sanders, Katherine P Rankin. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 15.02.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.