Published on 29.03.2024 in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51058.
Developing and Testing the Usability of a Novel Child Abuse Clinical Decision Support System: Mixed Methods Study

Original Paper

1 Department of Pediatrics, Yale University School of Medicine, New Haven, CT, United States

2 3M | M*Modal, 3M Health Information Systems, 3M Company, Maplewood, MN, United States

Corresponding Author:

Gunjan Tiyyagura, MHS, MD

Department of Pediatrics

Yale University School of Medicine

Pediatric Emergency Medicine, PO Box 208064

New Haven, CT, 06520

United States

Phone: 1 203 464 6343

Fax: 1 203 737 7447

Email: gunjan.tiyyagura@yale.edu


Related Article: This is a corrected version. See correction statement in: https://www.jmir.org/2024/1/e60444

Background: Despite the impact of physical abuse on children, it is often underdiagnosed, especially among children evaluated in emergency departments (EDs). Electronic clinical decision support (CDS) can improve the recognition of child physical abuse.

Objective: We aimed to develop and test the usability of a natural language processing–based child abuse CDS system, known as the Child Abuse Clinical Decision Support (CA-CDS), to alert ED clinicians about high-risk injuries suggestive of abuse in infants’ charts.

Methods: Informed by available evidence, a multidisciplinary team, including an expert in user design, developed the CA-CDS prototype that provided evidence-based recommendations for the evaluation and management of suspected child abuse when triggered by documentation of a high-risk injury. Content was customized for medical versus nursing providers and initial versus subsequent exposure to the alert. To assess the usability of and refine the CA-CDS, we interviewed 24 clinicians from 4 EDs about their interactions with the prototype. Interview transcripts were coded and analyzed using conventional content analysis.

Results: Overall, 5 main categories of themes emerged from the study. CA-CDS benefits included providing an extra layer of protection, providing evidence-based recommendations, and alerting the entire clinical ED team. The user-centered, workflow-compatible design included soft-stop alert configuration, editable and automatic documentation, and attention-grabbing formatting. Recommendations for improvement included consolidating content, clearer design elements, and adding a hyperlink with additional resources. Barriers to future implementation included alert fatigue, hesitancy to change, and concerns regarding documentation. Facilitators of future implementation included stakeholder buy-in, provider education, and sharing the test characteristics. On the basis of user feedback, iterative modifications were made to the prototype.

Conclusions: With its user-centered design and evidence-based content, the CA-CDS can aid providers in the real-time recognition and evaluation of infant physical abuse and has the potential to reduce the number of missed cases.

J Med Internet Res 2024;26:e51058

doi:10.2196/51058


Introduction

Background

Child physical abuse is commonly missed by emergency department (ED) providers, leading to escalating injuries and death [1]. More than 30% of children with serious injuries resulting from physical abuse have been previously evaluated for injuries that were not recognized as abusive [2-5]. This is amplified in general EDs, where most children receive emergency care and abuse is more frequently missed than in pediatric EDs [6-8]. Evaluation and reporting of child abuse are also impacted by provider biases [2,9-12]. Children belonging to racial or ethnic minority groups are more often evaluated for abusive head trauma than White or non-Hispanic children, and children with public insurance undergo increased testing and are reported more often to Child Protective Services (CPS) than privately insured children [9,12]. These findings highlight the need for systems that standardize care, improve clinical outcomes, and reduce bias.

Clinical decision support (CDS) integrated into the electronic health record (EHR) can present intelligently filtered, individualized, and timely information to enhance clinical decision-making [13]. Child abuse–specific CDS systems may improve outcomes and reduce bias in the evaluation and reporting of suspected abuse. Experts have shared consensus recommendations regarding developing, disseminating, and sustaining EHR-embedded child abuse CDS systems in the ED [14]. Key recommendations included universal, routine implementation of a child abuse CDS system in general and pediatric EDs for children aged <4 years; use of active alerts that share their reason for triggering; integration of a standardized system for reports to CPS; use of data warehouse reports to evaluate the CDS system’s efficacy; integration of a system that is feasible, sustainable, and easily disseminated; and personalized usability testing to ensure seamless integration of the system [14].

While reviewing the existing child abuse CDS systems [15-32], we found that a common limitation is their inability to be triggered by free text in an EHR encounter. To address this gap, our team previously developed and validated a natural language processing (NLP) algorithm that automatically and methodically examined the free text in the notes of nursing providers, medical providers, and social workers (SWs) to identify high-risk injuries associated with possible abuse in children [33]. The NLP algorithm would provide a positive alert when it identified preselected combinations of written terms associated with fractures, intracranial injuries, abdominal injuries, burns, bruising, or oral injuries. It was targeted to identify high-risk injuries in children aged <1 year (ie, infants) specifically given that infants are more than twice as likely to experience maltreatment and thrice as likely to experience fatality from maltreatment compared to older children of any age group [1]. Developing a novel child abuse CDS system triggered by this validated NLP algorithm may further increase the tool’s potential to reduce the number of missed cases and mitigate bias.
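To make the triggering logic concrete, the sketch below shows a deliberately simplified, rule-based stand-in for this kind of NLP trigger: it scans a note’s free text for preselected injury terms and restricts alerts to infants. The term lists, function names, and age handling are illustrative assumptions only; the validated algorithm [33] runs on 3M’s Natural Language Understanding platform and is substantially more sophisticated.

```python
import re

# Deliberately simplified, rule-based stand-in for the validated NLP trigger [33].
# Term lists and thresholds here are hypothetical; the production algorithm covers
# many more injury patterns and combinations of terms.
HIGH_RISK_TERMS = {
    "bruising": [r"\bbruis(?:e|es|ed|ing)\b", r"\becchymos[ie]s\b"],
    "oral injury": [r"\bfrenulum\b", r"\boral (?:injury|laceration)\b"],
    "fracture": [r"\bfracture\b", r"\bbroken (?:rib|femur|humerus)\b"],
}

def screen_note(note_text: str, age_in_days: int) -> list:
    """Return the high-risk injury categories mentioned in a note for an infant
    (<1 year); an empty list means the alert would not fire."""
    if age_in_days >= 365:  # the tool targets infants only
        return []
    text = note_text.lower()
    return [
        category
        for category, patterns in HIGH_RISK_TERMS.items()
        if any(re.search(pattern, text) for pattern in patterns)
    ]

# Example: a triage note describing a bruise in a 3-month-old would trigger an alert.
print(screen_note("2 cm bruise noted on the left cheek; no trauma reported", age_in_days=90))
# -> ['bruising']
```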

To change providers’ practice using CDS, it is crucial to understand the providers’ needs and priorities before development and implementation. Evaluating a system’s usability involves the assessment of its accommodation of users’ needs, ease of mastery, effects on workflow, and achievement of goals. Conducting evaluations during the design process is also important to identify shortcomings and incorporate user-centered modifications [34,35]. Usability testing can include direct observation, recording of user-system interactions, think-aloud sessions where users verbalize their thoughts while interacting with the system, near-live sessions where users test the system with simulated patient interactions, live testing, and quantitative measures [34,36]. However, to date, only 1 study has described the usability testing of child abuse CDS in local settings [31].

Objective

Therefore, in this study, we aimed to develop a novel child abuse CDS system—hereafter referred to as the Child Abuse Clinical Decision Support (CA-CDS)—which is triggered by a validated NLP algorithm and that both alerts ED providers to high-risk injuries in infants and provides evidence-based recommendations for evaluation and management. We also sought to test the usability of the CA-CDS and refine the system based on user feedback.


Methods

Study Design

The study consisted of 3 phases informed by the Guideline Implementation with Decision Support (GUIDES) checklist by Van de Velde et al [37], which describes factors relevant to the development of successful guideline-based CDS. The phases included the (1) development of a prototype, (2) mixed methods usability testing, and (3) iterative refinement of the CA-CDS based on stakeholder feedback. Participants were stakeholders from 4 EDs, including 1 (25%) academic pediatric ED (Yale New Haven Children’s Hospital) and 3 (75%) community pediatric and general EDs (Bridgeport Hospital, Lawrence + Memorial Hospital, and Saint Raphael Campus). All campuses use Epic (Epic Systems Corporation) as their EHR and can use the M*Modal Fluency Direct speech recognition technology and Natural Language Understanding platform (3M Company), which hosts the NLP algorithm and presents the CA-CDS via built-in computer-assisted physician documentation functionality.

Development of the Prototype

The initial CA-CDS was developed after literature review and discussions with local experts in child abuse, pediatric emergency medicine, and health informatics. The issues discussed included target users (medical and nursing providers in EDs), appropriate language, recommendations considering the local context (eg, using order sets vs consulting the local child protection team [CPT]), and degree of interruption (ie, hard-stop vs soft-stop alert, in which the former requires alert completion to proceed with one’s workflow). The prototype, as depicted in Figures 1-3, consisted of a card and protocol that appeared in the EHR once the NLP algorithm identified a high-risk injury within a note’s free text. A smaller card would first appear, stating that a high-risk injury was found, with the triggering language presented in a tooltip. The card would then allow providers to open a larger protocol that presented further information about the triggering language, suggested questions for evaluation, and suggested actions for management. Users could then select between 2 acknowledgment options regarding the likelihood of child abuse or neglect and select the actions taken, which would automatically be entered into an editable documentation field. Finally, they could click “submit response” to add the documentation to the bottom of their note or “later” to minimize the CA-CDS such that it no longer blocked the provider’s view of the EHR but remained accessible via the Fluency Direct pop-up bar. The CA-CDS was designed as a soft-stop alert such that completion was not required and workflow was not permanently interrupted.
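As a rough illustration of this soft-stop lifecycle, the following sketch models the alert as a small state object that can be opened, deferred, or submitted. The class, state names, and methods are assumptions for exposition, not the actual Epic/Fluency Direct implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

# Minimal sketch of the soft-stop alert lifecycle described above (assumed names).
class AlertState(Enum):
    CARD_SHOWN = auto()     # small card flags the high-risk injury (trigger in a tooltip)
    PROTOCOL_OPEN = auto()  # larger protocol with suggested questions and actions
    MINIMIZED = auto()      # "later" chosen; still reachable from the pop-up bar
    SUBMITTED = auto()      # response appended to the bottom of the provider's note

@dataclass
class CaCdsAlert:
    triggering_text: str
    state: AlertState = AlertState.CARD_SHOWN
    acknowledgment: Optional[str] = None              # likelihood of abuse or neglect
    actions_taken: list = field(default_factory=list)
    documentation: str = ""                           # auto-populated but editable

    def open_protocol(self) -> None:
        self.state = AlertState.PROTOCOL_OPEN

    def defer(self) -> None:
        # Soft stop: completion is never required and the chart is never blocked.
        self.state = AlertState.MINIMIZED

    def submit_response(self) -> str:
        self.state = AlertState.SUBMITTED
        return f"{self.documentation}\nActions taken: {', '.join(self.actions_taken)}"
```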

To tailor the CA-CDS to the needs of medical versus nursing providers, the content was customized for each provider type. For instance, for medical providers (Figure 1), the suggested questions were based on the MORE (Mechanism, Others present, Review of development, and Examination details) mnemonic. The components of the mnemonic (“Mechanism: additional details about history and injury mechanism; Others present: witnesses to injury and history corroboration; Review of development: developmental ability; and Examination details: disrobed exam, specifically to examine for sentinel injuries, and additional details related to the physical examination”) aid providers in differentiating between accidental and abusive injuries [38]. For nurses (Figure 2), the content was simplified and asked whether the history was consistent with the injury. The suggested actions included recommendations to contact the local CPT (known as the Detection, Assessment, Referral, and Treatment or DART team) and file a report with Connecticut’s CPS agency, known as the Department of Children and Families (DCF), as appropriate. The nursing version also recommended discussing concerns with medical providers. A subsequent-provider version was also designed to be received after another provider had already submitted their response (Figure 3). This version was equivalent to the first-provider version, except for text indicating that another provider had responded and its modified acknowledgment options, allowing the subsequent provider to disagree with the previous provider’s selection regarding the likelihood of abuse, agree without further action, or agree and take additional action.
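The sketch below illustrates, under stated assumptions, how alert content could be assembled from the provider type and from whether a prior response exists. The strings paraphrase the prototype content described above; the function itself is hypothetical rather than the production configuration.

```python
from typing import Optional

# Hedged sketch of content selection by provider role and by whether another
# provider has already responded (first- vs subsequent-provider versions).
def build_alert_content(provider_type: str, prior_response: Optional[dict] = None) -> dict:
    if provider_type == "medical":
        suggested_questions = ("MORE mnemonic: Mechanism, Others present, "
                               "Review of development, Examination details")
    else:  # nursing
        suggested_questions = "Is the history consistent with the injury?"

    suggested_actions = [
        "Contact the DART team (child protection team)",
        "File a report with DCF as appropriate",
    ]
    if provider_type == "nursing":
        suggested_actions.append("Discuss concerns with the medical provider")

    content = {"questions": suggested_questions, "actions": suggested_actions}
    if prior_response is not None:
        # Subsequent-provider version: show the earlier response and offer the
        # modified acknowledgment options.
        content["previous_response"] = prior_response
        content["acknowledgment_options"] = [
            "Disagree with the previous assessment",
            "Agree without further action",
            "Agree and take additional action",
        ]
    return content
```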

A web-based prototype of the CA-CDS in a model EHR was designed using the InVision platform (InVisionApp Inc) for usability testing with the abovementioned features (Figures 1-3). The model EHR showed a clinical vignette of an infant presenting for emergency care for whom a provider documented a high-risk injury that triggered the CA-CDS (Table S1 in Multimedia Appendix 1). For this study, the CA-CDS was tested solely in a model EHR before future live implementation.

Figure 1. Initial first-provider version of the medical provider–specific Child Abuse Clinical Decision Support (CA-CDS) prototype within a model electronic health record. DART: Detection, Assessment, Referral, and Treatment; DCF: Department of Children and Families; ED: emergency department; M: male; MORE: Mechanism, Others present, Review of development, and Examination details.
Figure 2. Initial first-provider version of the nursing provider–specific Child Abuse Clinical Decision Support (CA-CDS) prototype. DART: Detection, Assessment, Referral, and Treatment; DCF: Department of Children and Families; M: male.
Figure 3. Initial subsequent-provider version of the medical provider–specific Child Abuse Clinical Decision Support (CA-CDS) prototype. DART: Detection, Assessment, Referral, and Treatment; DCF: Department of Children and Families; ED: emergency department; F: female; MORE: Mechanism, Others present, Review of development, and Examination details.

Usability Testing

We tested the CA-CDS’s usability through a mixed methods approach. The research team, including a user design expert (KL) and researchers with qualitative research expertise (GT and AA), developed and iteratively refined an interview guide (Table S2 in Multimedia Appendix 1) with open-ended questions about topics including the CA-CDS’s design, strengths, deficits, and recommendations for improvement and future implementation. Stakeholders representing the CA-CDS’s end users and local champions in child abuse care were purposively recruited for interviews via email, in person, and through ED section meetings. Overall, 3 rounds of interviews were conducted by GT and AT, with audiovisual recordings used to document user-system interactions and generate transcripts (Figure 4). Interviews were conducted until thematic sufficiency was achieved [39-41].

Figure 4. Flow diagram of usability testing. CA-CDS: Child Abuse Clinical Decision Support.

The web-based prototype (accessed via hyperlink) and interview guide were pilot-tested through in-person, think-aloud interviews with 3 ED providers (Figure 4). After further refinement of the interview guide, interviews were conducted via teleconferencing with 10 additional providers. Participants were instructed to think aloud while interacting with the prototype and then asked targeted questions using the interview guide. On the basis of the findings from the initial interviews, the CA-CDS was refined, and the usability of the updated prototype was assessed with another round of interviews. Here, we sought to address topics that we felt needed more exploration, such as preferred resources, documentation-related concerns, and target users for the subsequent-provider alert, and thus designed a more targeted interview guide (Table S2 in Multimedia Appendix 1). A total of 11 additional ED providers were recruited in person in the ED to participate in this final round of interviews. The updated prototype was provided as a multipage PDF document on the interviewer’s tablet. The prototype was further refined based on these interviews.

Following each interview, participants were asked to complete a survey to capture demographic and quantitative usability data with adequate time and privacy for completion (Figure S1 in Multimedia Appendix 1). The survey requested the participants’ ID numbers to anonymously link their survey and interview transcript, profession and years in their role, employment site, and experience with suspected child abuse cases. To assess usability quantitatively, we used the System Usability Scale (SUS), which is a 5-point Likert scale with 10 questions exploring the different aspects of a tool’s usability and learnability. The SUS is a validated, frequently used scale that provides a quick, standardized, and easily interpretable measure for reporting and comparing a product’s usability [42,43].

Analysis

The interview transcripts were anonymized and independently reviewed by a coding team consisting of 3 researchers, including 1 experienced in the analysis of qualitative data (GT) and 1 experienced in usability testing (KL), using conventional content analysis [44]. Team members applied codes to categorize data. Researchers then met to discuss the codes until consensus was reached, and a code list was subsequently generated and iteratively revised as new interviews were discussed [41]. The codes were then clustered into recurrent categories.

Regarding the surveys, each participant’s SUS responses were scored, with possible scores ranging from 0 to 100 [45]. A cumulative SUS score for the CA-CDS was then obtained by calculating the median and IQR of the participants’ scores. This cumulative score was compared against the curved grading scale developed by Lewis and Sauro [46], in which an SUS score of 68 corresponded to the 50th percentile of the scores included in their study and thus a “C” letter grade [46,47]. In their study and in industry, an SUS score ≥80 indicated an above-average user experience.
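For reference, the standard SUS scoring rule (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5) can be expressed as follows. The participant responses shown are hypothetical and are included only to demonstrate the median and IQR summary used in this study.

```python
from statistics import median, quantiles

def sus_score(responses: list) -> float:
    """Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response), and the sum is multiplied
    by 2.5, yielding a 0-100 score."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical respondents (not study data), summarized as in the paper.
participant_responses = [
    [5, 1, 4, 2, 5, 1, 4, 2, 5, 1],
    [4, 2, 4, 2, 4, 1, 5, 2, 4, 2],
    [4, 2, 3, 3, 4, 2, 4, 2, 4, 2],
]
scores = [sus_score(r) for r in participant_responses]
q1, _, q3 = quantiles(scores, n=4)
print(f"Median {median(scores):.1f} (IQR {q1:.1f}-{q3:.1f})")  # Median 80.0 (IQR 70.0-90.0)
```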

Ethical Considerations

All participants provided verbal informed consent to be interviewed and recorded before starting the interviews. Participants received no compensation. This study was approved by the Yale Human Investigations Committee (2000029566).


Results

Participant Characteristics

In total, 24 participants were interviewed in the study, and 23 participants completed the demographic survey. Most were physicians (13/23, 57%), from Yale New Haven Children’s Hospital (19/23, 83%), and held their current roles for >6 years (13/23, 57%; Table 1).

Table 1. Demographics of interviewees (n=23a).

Demographics: values, n (%)

Profession
  Medical provider
    Physician assistant: 2 (9)
    Nurse practitioner: 1 (4)
    Physician (attending or fellow): 10 (43)
    Physician (resident): 3 (13)
  Nursing provider: 7 (30)

Primary hospital affiliation
  Yale New Haven Children’s Hospital: 19 (83)
  Saint Raphael Campus: 1 (4)
  Lawrence + Memorial Hospital: 2 (9)
  Bridgeport Hospital: 1 (4)

Duration in the role (y)
  <1: 1 (4)
  1-5: 9 (39)
  6-10: 4 (17)
  ≥11: 9 (39)

Average monthly exposure to cases with suspicion for child abuse (number of patients)
  0-1: 9 (39)
  2-5: 11 (48)
  6-10: 2 (9)
  >10: 1 (4)

aIn total, 24 participants were interviewed, but 1 (4%) participant was unable to complete the Qualtrics survey that requested demographic data due to conflicting clinical obligations.

Emerging Themes

Overview

Analysis of the interviews revealed 5 main categories of themes. These themes, along with sample subcategories and representative quotations, are presented in Table 2.

Table 2. Categories of themes emerging from the coding of interview transcripts.

Categories and sample subcategories, with representative quotes

CA-CDSa benefits
  Extra layer of protection: “The advantage is that you’re going to try to reduce the needle in the haystack phenomenon of missing these very occult cases of child abuse.”
  Inclusion of evidence-based recommendations: “I think the trigger questions are...helping us better interview patients and families, so that when we called DARTb, we have a better history to relate to them...It’s helpful that there’s phone numbers and...that it kind of prompts you through the next steps.”
  Alerting the entire clinical EDc team: “It gives each individual provider a chance to document what they need to document about their concerns for each individual patient.”

User-centered, workflow-compatible design
  Customization based on provider type: “From a nursing end, obviously discussing it with the care team, with all of us being involved and voicing our concerns on the child.”
  Soft-stop alert configuration: “Hard stops...could be in the wrong place...I think soft is good. I think being able to close [the alert] because [the flow] is just so unpredictable.”
  Editable and automatic documentation: “I think as long as those options can be documented in the chart just to save somebody the step of writing...That would be helpful.”
  Triggers from multiple providers’ notes: “You getting to the point of actually doing documentation is midway or further down. So, if it’s also looking at the nursing notes..., then on [opening the] chart, I would get this message.”
  Clear presentation of alert trigger: “[The evidence] is factual...They can get everybody on the same page with what the concern is, and what everybody needs to be aware of that’s caring for that child.”
  Attention-grabbing design elements: “I like that it has...the exclamation point with the red triangle that, kind of, alerts you to pay attention to it.”
  Accessible recording of the previous provider’s actions: “I think [the previous provider’s actions box] is good because that’s the sign-out. So, I think that’s reiterated as what should be signed out from the prior provider.”

Recommendations for improvement
  Consolidating the content: “As opposed to having so much in this ‘Please select one of the following, this will be sent to the Yale child abuse team’ stuff popping up, [I wonder] if the suggested actions for concern of suspected abuse and neglect...could be clickable themselves.”
  Clearer design elements: “I don’t know if [the evidence tooltip] was a different color or, like, you know like a hyperlink is in an email...how it shows up as a different color and underlined...I just wasn’t aware that’s what it was representing.”
  Adding a hyperlink to additional resources: “The diagnosis is a fracture. Then here, for all this stuff, there can be direct links whether it’s either mandated, like happens automatically, or they can link to it to see what the evidence actually is.”
  Adding further information about the trigger source: “But if it’s not your note, is there a way to more readily send to the specific note?...A progress note, that’s pretty generic...but it might be nice if there’s a way to get a little more specific, that it was the triage note or.”
  Modifications to better reflect the provider workflow: “I submit that, then this just completely disappearing...it would seem to me, maybe having it be completely disappeared, and if you want, at the point of discharge, to maybe one more time give me the opportunity to say, ‘Are you really? Just think about this one more time’.”

Barriers to future implementation
  Alert fatigue: “The drawback is that you’ll...probably have to trigger 10 alerts for every kid that is a true positive...So, there might be fatigue with the alert.”
  Infringement of provider autonomy: “There are some sentiments that this is being forced on somebody without any evidence...I don’t want to call them naysayers, but they don’t really believe the bibliographic evidence.”
  Hesitancy to change: “It seems like every week we’re getting told to do something new. So honestly, I really feel like as much as people don’t like change, we’re just getting told this is the way it is, and you need to do it.”
  Concerns regarding documentation: “If I am going to contact DCFd, and my husband’s the one that’s beating my kid, and now it’s available per the Cures Act, they know that DCF was contacted. You might be putting me and my kid at risk...Just putting it in the notes...[which] are being released to everybody now who has access, that could be a safety concern.”

Facilitators of future implementation
  Stakeholder buy-in: “To communicate with physicians, practitioners, you need to do it five different ways. So, I think probably asking to come to staff meetings...that system ED leadership committee...to do high-level show-and-tell.”
  Provider education: “I think as long as there is some kind of super user that can educate everybody on it...It’s a matter of just getting shown once how to do something, and then it kind of sticks.”
  Sharing the system’s test characteristics: “I would love to have some evidence to show that that’s benefiting somebody, hopefully the patient...Just some retrospective data to show the veracity or the utility of your system, however you define it, either accuracy or times that you picked up something that the physicians should have or didn’t.”

aCA-CDS: Child Abuse Clinical Decision Support.

bDART: Detection, Assessment, Referral, and Treatment.

cED: emergency department.

dDCF: Department of Children and Families.

CA-CDS Benefits

Participants discussed the challenges of recognizing abusive injuries, especially those that were subtle or “minor.” They expressed that the CA-CDS could provide an extra layer of protection against missing abuse by reminding providers in real time to consider abuse in their differential diagnosis. Users also appreciated the CA-CDS’s evidence-based recommendations for evaluation and management that included guidance about important historical information to be collected and about using the expertise of specialists. Specifically, they found the MORE mnemonic to be clear, memorable, and helpful to improve information gathering, decision-making, and documentation. Participants also valued the emphasis on consulting specialists to determine the appropriate workup as it enabled the CA-CDS to remain simple but adaptable despite case-specific variations. In addition, the users appreciated that the CA-CDS alerted both medical and nursing providers, allowing for open communication of concerns among the entire clinical team.

User-Centered, Workflow-Compatible Design

Participants discussed the elements of the CA-CDS that would optimize their workflow. First, they valued the CA-CDS’s customization based on provider type, which reflected workflow differences, and preferred that nurses and medical providers submit independent CA-CDS responses. For instance, nurses favored the recommendation to discuss concerns with the medical team rather than consulting the CPT directly as it reflected the typical nursing workflow. Second, users felt that the CA-CDS’s soft-stop alert configuration, which could be minimized and reaccessed on demand, would be more flexible around providers’ unpredictable workflow. Third, users expressed that the documentation component, which automatically populated the selected actions into the note while also remaining editable for providers to share their own decision-making, would avoid redundancy. Fourth, participants appreciated that the CA-CDS could be triggered by injuries in various providers’ notes. In particular, they expressed that by including nursing notes, which are often created before other clinicians’ notes, as a trigger source, the CA-CDS could allow for more timely evaluation. Fifth, users shared that having the alert explicitly identify the triggering documentation enabled all team members to quickly be on the same page regarding the specific injury causing concern for abuse and allowed providers to assess if their documentation was being construed as intended. Sixth, participants appreciated the CA-CDS’s attention-grabbing formatting. Features such as bold text, colorful symbols, and high-risk injury phrasing helped emphasize the alert’s significance. Finally, regarding the subsequent-provider CA-CDS version, participants valued that the protocol clearly displayed the actions selected by the provider who initially submitted a response. They found this helpful for handoffs, highlighting the salient concerns and workup for all team members.

Recommendations for Improvement

Users made several recommendations to improve the CA-CDS’s usability. Participants recommended consolidating the content to reduce information overload. For instance, they suggested removing the acknowledgment section to reduce redundant text, allow more flexibility for documentation, and circumvent the potential legal implications of documenting disagreement with previous providers. To improve clarity, users also suggested using obvious underlining and bold colors to highlight design elements such as hyperlinks and editable text fields. In addition, to improve providers’ case-specific and general knowledge about abuse, participants suggested adding a link to additional resources that providers could access for further support.

Next, users requested further information about the source of the triggering documentation, including its author and location, to better find and assess the triggering content. Finally, providers suggested modifications to improve their workflow and use of the CA-CDS. For example, nurses appreciated the card’s reminder to be in an appropriately private setting as they often charted near patients, whereas medical providers supported removing this component which was less relevant to their workflow. They also shared that the reappearance of the CA-CDS at discharge could serve as a reminder to those who had missed, ignored, or initially not felt ready to complete the alert and as an opportunity to add more information and reconsider abuse in their differential.

Barriers to Future Implementation

First, participants warned about the potential for alert fatigue, especially if there were several false positives or if excessive effort was required for completion. They discussed how alert fatigue may lead providers to ignore the alert or seek work-arounds such as documenting in a manner to avoid triggering the alert. Second, they shared that the CA-CDS, with its interruptive alert and recommendations to consult specialists, may be perceived by some providers as infringing on their autonomy. Third, users counseled that providers may be accustomed to a particular manner of providing care and hesitant to change.

Finally, users warned that providers may be wary of using the CA-CDS, especially its documentation component, given the potential consequences of documenting concerns about abuse in notes that are accessible to caregivers. These included liability, inadequate patient-sensitive language, caregivers learning about concerns before discussions with the medical team, and caregivers purposefully obstructing care or inflicting further harm. However, users also responded that they tried to document objectively, being mindful about how their documentation could be interpreted by caregivers. On the basis of these discussions, the following text was added to the protocol: “Consider ‘unsharing’ the note ‘to prevent substantive harm to patient or another person.’”

Facilitators of Future Implementation

Participants recommended several strategies to optimize the CA-CDS’s implementation. Users felt that stakeholder and leadership buy-in and support for the system would promote future use and sustainability. In addition, participants stated the importance of educating providers regarding how to use the system to appropriately manage the cases of potential abuse and how to approach caregivers based on evidence-based recommendations. Users also recommended communicating the accuracy of the CA-CDS and its triggering NLP algorithm by sharing the system’s validation data and instances where the system could have made a difference.

Prototype Revisions

On the basis of the interviews and feedback from our team of experts, multiple rounds of modifications were performed to create our final prototype (Figures 5 and 6). All modifications are listed in Table S3 in Multimedia Appendix 1.

Figure 5. Final prototype of the first-provider version of the medical provider–specific Child Abuse Clinical Decision Support (CA-CDS). DART: Detection, Assessment, Referral, and Treatment; DCF: Department of Children and Families; ED: emergency department; M: male; MORE: Mechanism, Others present, Review of development, and Examination details.
Figure 6. Final prototype of the subsequent-provider version of the medical provider–specific Child Abuse Clinical Decision Support (CA-CDS). DART: Detection, Assessment, Referral, and Treatment; DCF: Department of Children and Families; ED: emergency department; F: female; MORE: Mechanism, Others present, Review of development, and Examination details.

SUS Scores

Of the 24 interviewees, 23 (96%) completed the SUS. Scores ranged from 62.5 to 100, with a median of 80 (IQR 75-92.5). Compared with Lewis and Sauro’s [46,47] curved grading scale, our median corresponded to the 85th to 89th percentile and an A− letter grade.


Discussion

Principal Findings

Usability testing of the CA-CDS revealed several key findings. Users valued the additional protection against missing abuse that is offered by the alert to the entire clinical team and the presence of evidence-based recommendations for the evaluation and management of suspected abuse. Users also appreciated the CA-CDS’s user-centered, workflow-compatible design elements that captured the user’s attention to provide timely, provider-specific information while minimizing interruptions and redundancy. However, they recommended improving the system’s clarity and brevity, highlighting critical features such as the triggering documentation’s source, and further supporting the users by offering additional resources and alert reappearance at discharge. User recommendations informed the iterative refinements of the CA-CDS prototype. Future studies will be directed toward the implementation and live testing of the revised, user-centric CA-CDS within our hospital system’s EHR.

Comparing the CA-CDS With Existing Systems

Researchers have described the development, implementation, and evaluation of a child abuse CDS system for pediatric and general EDs that identified high-risk injuries through a variety of alert triggers including specific screening results, orders, and discharge documentation [26-31]. Their CDS system notified providers about the concern for abuse and recommended direct connection to an age-appropriate, injury-specific order set or a CPS referral. While the CA-CDS similarly aimed to identify high-risk injuries and provide CDS regarding the evaluation of suspected abuse, there were numerous differences. The current CA-CDS was triggered via an NLP algorithm that examined all the free text in the notes of medical providers, nursing providers, and SWs, whereas previous CDS systems were triggered primarily by discrete fields, active screening such as those completed by nurses upon evaluation, or limited NLP function that could only examine the free text within the chief complaint and focused assessment fields [33]. An entirely NLP-triggered CDS system may allow for minimal interruptions to the workflow; be more acceptable to frontline providers; and allow the CA-CDS to be triggered as soon as there is any documentation, even as early as triage, without requiring actions outside the normal workflow.

While many existing CDS systems connect users to standardized order sets [26-31] and recent consensus guidelines also recommended the use of a physical abuse order set with consistent and evidence-based actions [14], most of our users (13/24, 54%) preferred simpler suggested actions with reminders to consult a SW or the CPT to aid in nuanced decision-making. Consultation with these specialists may facilitate appropriate decision-making around performing additional testing or reporting to CPS and reduce bias in evaluation and reporting of suspected child abuse [48,49]. However, to acknowledge the importance of autonomy for users, the CA-CDS was designed as a nonmandatory, soft-stop alert that included a free-text response option and a hyperlink to additional resources including local clinical pathway guidelines to provide either support or an avenue for independent decision-making depending on the provider’s needs. Next steps include comparing the outcomes of systems that recommend standardized order sets to those that recommend consulting clinicians.

Finally, the CA-CDS was hosted on external software from 3M Company and designed to be subsequently integrated into the EHR, rather than being directly built into the EHR. This design aligns with the Fast Healthcare Interoperability Resources (FHIR) data standard that standardizes how information is stored, used, and exchanged between computer systems and thereby streamlines software development to support health care needs [50,51]. EHRs with FHIR-enabled technology allow for the packaging of information from the EHR into discrete, standardized units that can be interpreted and acted upon by external applications including CDS systems. FHIR-based applications, such as those used in this study, allow the results of a CDS system to trigger the opening of order sets or to directly provide CDS within the EHR and may realistically solve the problem of 1 child abuse CDS system communicating with multiple EHRs [31,52,53]. With 84% of hospitals in the United States having adopted FHIR-enabled technology and 3M’s connection with hundreds of EHR systems [51,54,55], the CA-CDS’s FHIR-based application design may facilitate the system’s dissemination across numerous institutions and EHRs.
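As a minimal illustration of this kind of interoperability, the sketch below shows how an external CDS service might read a standardized Patient resource over a FHIR REST API to confirm that a patient falls in the infant age band targeted by the CA-CDS. The endpoint and patient ID are placeholders, a real SMART on FHIR deployment would also handle authorization (omitted here), and this is not the 3M integration used in this study.

```python
from datetime import date
import requests

# Minimal sketch of reading standardized patient data over a FHIR REST API.
# The base URL is a hypothetical placeholder, not a real server.
FHIR_BASE = "https://example-ehr.org/fhir"

def patient_is_infant(patient_id: str) -> bool:
    """Fetch a FHIR Patient resource and check whether the patient is <1 year old,
    the age band targeted by the CA-CDS."""
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    birth_date = date.fromisoformat(response.json()["birthDate"])  # e.g. "2023-11-02"
    return (date.today() - birth_date).days < 365

# A CDS service could run such a check before surfacing the infant-specific alert.
```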

Examining the Rigor of the CA-CDS

Consistent with the recommendations for successful, guideline-based, computerized CDS as described by the GUIDES checklist and child abuse expert consensus recommendations [14,22,37], the CA-CDS was developed by a team of local experts to provide evidence-based guidelines for the evaluation and management of high-risk injuries, reflective of recent studies in the field. In addition, the CA-CDS was integrated with an objective and internally validated NLP algorithm that captured data widely in the notes of ED SWs, nursing providers, and medical providers [33]. Given that the system is triggered independent of the providers’ gestalt and background, the CA-CDS may improve the standardization of patient care and reduce the impact of providers’ implicit biases [49,56].

The CA-CDS met the recommended design standards in several ways. Users’ feedback demonstrated the system’s usability, with users finding the CA-CDS to be user friendly, concise, and clear. The CA-CDS was intentionally refined based on feedback to reflect the users’ preferences. Considerable effort was made to integrate the system into providers’ clinical and EHR workflow to minimize interruptions and redundancy. The CA-CDS also provided ample flexibility around decision-making through features such as editable and automatic documentation and soft-stop alert design. Interestingly, in contrast to experts’ recommendation to incorporate automated referrals, standardized CPS reporting, and a multidisciplinary audience [14,22], most users preferred to keep the CA-CDS simple without these features (19/24, 79%) and limit the alert’s recipients to the primary clinical team (7/13, 54%). Next steps include examining the real-time use of the CA-CDS by ED clinicians.

Similar to the consensus recommendations, participants discussed the importance of planning for future implementation [14,37]. They identified the facilitators of future implementation, such as stakeholder buy-in, education about the CA-CDS and the accuracy of the underlying trigger (ie, the NLP algorithm), and iterative refinement of the system based on user feedback. While participants discussed alert fatigue, or provider desensitization owing to excessive alerts [57], as a potential barrier to future implementation, the NLP algorithm’s relatively high specificity and the limited patient population may minimize this concern. However, continual improvement of any rule-based algorithm is critical to maintain its quality. While participants did not discuss the implications of receiving a CA-CDS alert after a patient’s discharge if a provider completes the documentation after a patient’s ED visit, encounters with injuries identified by the NLP algorithm undergo weekly routine case surveillance by the CPT [58]. This is especially important for cases in which documentation is completed after a patient’s discharge to assure that the identified injuries are not concerning for missed abuse. Such a monitoring system may facilitate the identification of cases that might have been missed in real time during the ED encounter [14].

Patient Access to Electronic Health Information

An important but underexplored aspect of child abuse CDS systems is the impact of the 21st Century Cures Act on providers’ EHR interactions. The Cures Act is a federal law that came into effect in April 2021, mandating the free, timely release of electronic health information to patients and their guardians unless the practice meets the condition of select exceptions, one of which is preventing harm in contexts such as child abuse [59-61]. Concurrently, adoption and use of the patient portal dramatically increased as a result of the COVID-19 pandemic, with much of patient care and communication shifting to electronic mediums. Given the increased ease of patient access to EHR content, it is especially important to understand how provider perspectives about documentation of suspected child abuse have been affected by the new law. This study was uniquely timed to explore these concerns following the Cures Act. Although users worried about the potential repercussions of using the CA-CDS to document suspicions about abuse in caregiver-accessible notes, their unease was alleviated by the clarity provided by the protocol regarding the destination of the automatic documentation into the note and the addition of a reminder to unshare the note if appropriate. Next steps include exploring caregivers’ perspectives about the documentation of suspected child abuse.

Limitations

This study had at least 3 limitations. First, although we tried to recruit representative participants, few community ED providers (4/23, 17%) participated in the usability testing compared with pediatric ED providers (19/23, 83%). Because community ED clinicians work at sites that see most of a community’s pediatric population and that more often underdiagnose child abuse [6-8], their feedback is uniquely valuable. Future system testing would benefit from a more balanced or community-focused participant pool. Second, we modified the CA-CDS based on the majority’s preferences. As such, modifications desired by a notable percentage of our users may not have been implemented, which may limit their interaction with the CA-CDS in the future. However, the high-risk injuries identified by the CA-CDS will be routinely reviewed by the CPT to assure that cases are not misdiagnosed. Third, while this system was developed by a local team of experts and through iterative usability testing with providers at different sites, the CA-CDS’s recommended management may be institution specific. Further usability testing may be required if the system is disseminated to other hospitals, especially those with limited resources.

Conclusions

In summary, with its user-centered design and evidence-based content, the CA-CDS offers a novel method to aid ED medical and nursing providers in the real-time recognition, evaluation, and management of infant physical abuse. Our system has the potential to reduce the number of missed cases and increase the provision of less biased and evidence-based care to all infants.

Acknowledgments

This study was supported in part by funds from the National Institute of Child Health and Human Development (grant K23HD107178 [GT]). The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

Data Availability

The data underlying this paper will be shared upon reasonable request to the corresponding author. The technical components used in this study were provided by 3M Company under license and will be shared upon request to the corresponding author with permission from 3M Company.

Authors' Contributions

AT, GT, and KL were involved in all steps of the study including conducting literature review, communicating with experts, developing the study material such as the interview guides and surveys, conducting interviews, analyzing the interview transcripts, and designing and refining the Child Abuse Clinical Decision Support (CA-CDS) prototypes. GT was also responsible for initial project conception, institutional review board submission, suggestion of candidates to participate in the study, and presentation of material at Yale New Haven Health stakeholder meetings. KL was also responsible for the technical components, designing and editing the prototype, and serving as our liaison with other 3M Company staff. AA and AH provided guidance and shared their clinical expertise in pediatric emergency medicine, child abuse, and health informatics as we developed and refined the CA-CDS. All authors collaborated to draft and review this paper.

Conflicts of Interest

KL serves as a consultant for 3M Health Information Systems Inc (3M Company), and AH serves as a member of the Strategic Advisory Board for Johnson & Johnson. The other authors declare no conflicts of interest.

Multimedia Appendix 1

Supplementary tables and figures including clinical vignettes, interview guide, demographic and System Usability Scale survey, and prototype modifications.

DOCX File , 2506 KB

  1. Child maltreatment 2021. U.S. Department of Health & Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children’s Bureau. 2023. URL: https://www.acf.hhs.gov/sites/default/files/documents/cb/cm2021.pdf [accessed 2024-03-17]
  2. Jenny C, Hymel KP, Ritzen A, Reinert SE, Hay TC. Analysis of missed cases of abusive head trauma. JAMA. Feb 17, 1999;281(7):621-626. [CrossRef] [Medline]
  3. Letson MM, Cooper JN, Deans KJ, Scribano PV, Makoroff KL, Feldman KW, et al. Prior opportunities to identify abuse in children with abusive head trauma. Child Abuse Negl. Oct 2016;60:36-45. [CrossRef] [Medline]
  4. Thorpe EL, Zuckerbraun NS, Wolford JE, Berger RP. Missed opportunities to diagnose child physical abuse. Pediatr Emerg Care. Nov 2014;30(11):771-776. [CrossRef] [Medline]
  5. King WK, Kiesel EL, Simon HK. Child abuse fatalities: are we missing opportunities for intervention? Pediatr Emerg Care. Apr 2006;22(4):211-214. [CrossRef] [Medline]
  6. Ravichandiran N, Schuh S, Bejuk M, Al-Harthy N, Shouldice M, Au H, et al. Delayed identification of pediatric abuse-related fractures. Pediatrics. Jan 2010;125(1):60-66. [CrossRef] [Medline]
  7. Trokel M, Waddimba A, Griffith J, Sege R. Variation in the diagnosis of child abuse in severely injured infants. Pediatrics. Mar 2006;117(3):722-728. [CrossRef] [Medline]
  8. Ziegler DS, Sammut J, Piper AC. Assessment and follow-up of suspected child abuse in preschool children with fractures seen in a general hospital emergency department. J Paediatr Child Health. 2005;41(5-6):251-255. [CrossRef] [Medline]
  9. Hymel KP, Laskey AL, Crowell KR, Wang M, Armijo-Garcia V, Frazier TN, et al. Racial and ethnic disparities and bias in the evaluation and reporting of abusive head trauma. J Pediatr. Jul 2018;198:137-43.e1. [FREE Full text] [CrossRef] [Medline]
  10. Lane WG, Dubowitz H. What factors affect the identification and reporting of child abuse-related fractures? Clin Orthop Relat Res. Aug 2007;461:219-225. [CrossRef] [Medline]
  11. Wood JN, Christian CW, Adams CM, Rubin DM. Skeletal surveys in infants with isolated skull fractures. Pediatrics. Feb 2009;123(2):e247-e252. [CrossRef] [Medline]
  12. Wood JN, Hall M, Schilling S, Keren R, Mitra N, Rubin DM. Disparities in the evaluation and diagnosis of abuse among infants with traumatic brain injury. Pediatrics. Sep 2010;126(3):408-414. [CrossRef] [Medline]
  13. Clinical decision support. HealthIT.gov. Office of the National Coordinator for Health Information Technology URL: https://www.healthit.gov/topic/safety/clinical-decision-support [accessed 2021-12-23]
  14. Suresh S, Barata I, Feldstein D, Heineman E, Lindberg DM, Bimber T, et al. Clinical decision support for child abuse: recommendations from a consensus conference. J Pediatr. Jan 2023;252:213-8.e5. [CrossRef] [Medline]
  15. Low D. The Child Protection Information Sharing Project (CP-IS): electronic information sharing to improve the assessment of known vulnerable or at-risk children. Adopt Foster. 2016;40(3):293-296. [CrossRef]
  16. Child Protection - Information Sharing (CP-IS) service. National Health Service England. National Health Service England URL: https://digital.nhs.uk/services/child-protection-information-sharing-project [accessed 2021-12-24]
  17. Deutsch SA, Henry MK, Lin W, Valentine KJ, Valente C, Callahan JM, et al. Quality improvement initiative to improve abuse screening among infants with extremity fractures. Pediatr Emerg Care. Sep 2019;35(9):643-650. [CrossRef] [Medline]
  18. Erlanger AC, Heyman RE, Slep AM. Creating and testing the reliability of a family maltreatment severity classification system. J Interpers Violence. Apr 2022;37(7-8):NP5649-NP5668. [CrossRef] [Medline]
  19. Kelly P, Chan C, Reed P, Ritchie M. The national child protection alert system in New Zealand: a prospective multi-centre study of inter-rater agreement. Child Youth Serv Rev. Sep 2020;116:105174. [CrossRef]
  20. Thraen IM, Frasier L, Cochella C, Yaffe J, Goede P. The use of TeleCAM as a remote web-based application for child maltreatment assessment, peer review, and case documentation. Child Maltreat. Nov 2008;13(4):368-376. [CrossRef] [Medline]
  21. Child-at-risk electronic medical record alert. Agency for Clinical Innovation, New South Wales Government. New South Wales Government; Aug 25, 2017. URL: https://aci.health.nsw.gov.au/ie/projects/child-at-risk [accessed 2021-12-24]
  22. Gonzalez DO, Deans KJ. Hospital-based screening tools in the identification of non-accidental trauma. Semin Pediatr Surg. Feb 2017;26(1):43-46. [CrossRef] [Medline]
  23. Escobar MAJ, Pflugeisen BM, Duralde Y, Morris CJ, Haferbecker D, Amoroso PJ, et al. Development of a systematic protocol to identify victims of non-accidental trauma. Pediatr Surg Int. Apr 2016;32(4):377-386. [CrossRef] [Medline]
  24. Riney LC, Frey TM, Fain ET, Duma EM, Bennett BL, Murtagh Kurowski E. Standardizing the evaluation of nonaccidental trauma in a large pediatric emergency department. Pediatrics. Jan 2018;141(1):e20171994. [CrossRef] [Medline]
  25. Luo S, Botash AS. Designing and developing a mobile app for clinical decision support: an interprofessional collaboration. Comput Inform Nurs. Oct 2018;36(10):467-472. [CrossRef] [Medline]
  26. Berger RP, Saladino RA, Fromkin J, Heineman E, Suresh S, McGinn T. Development of an electronic medical record-based child physical abuse alert system. J Am Med Inform Assoc. Feb 01, 2018;25(2):142-149. [FREE Full text] [CrossRef] [Medline]
  27. Suresh S, Heineman E, Meyer L, Richichi R, Conger S, Young S, et al. Improved detection of child maltreatment with routine screening in a tertiary care pediatric hospital. J Pediatr. Apr 2022;243:181-7.e2. [CrossRef] [Medline]
  28. Suresh S, Saladino RA, Fromkin J, Heineman E, McGinn T, Richichi R, et al. Integration of physical abuse clinical decision support into the electronic health record at a Tertiary Care Children's Hospital. J Am Med Inform Assoc. Jul 01, 2018;25(7):833-840. [FREE Full text] [CrossRef] [Medline]
  29. Rumball-Smith J, Fromkin J, Rosenthal B, Shane D, Skrbin J, Bimber T, et al. Implementation of routine electronic health record-based child abuse screening in General Emergency Departments. Child Abuse Negl. Nov 2018;85:58-67. [CrossRef] [Medline]
  30. Rosenthal B, Skrbin J, Fromkin J, Heineman E, McGinn T, Richichi R, et al. Integration of physical abuse clinical decision support at 2 general emergency departments. J Am Med Inform Assoc. Oct 01, 2019;26(10):1020-1029. [FREE Full text] [CrossRef] [Medline]
  31. McGinn T, Feldstein DA, Barata I, Heineman E, Ross J, Kaplan D, et al. Dissemination of child abuse clinical decision support: moving beyond a single electronic health record. Int J Med Inform. Mar 2021;147:104349. [FREE Full text] [CrossRef] [Medline]
  32. Luo S, Botash AS. Testing a mobile app for child abuse treatment: a mixed methods study. Int J Nurs Sci. Jun 23, 2020;7(3):320-329. [FREE Full text] [CrossRef] [Medline]
  33. Tiyyagura G, Asnes AG, Leventhal JM, Shapiro ED, Auerbach M, Teng W, et al. Development and validation of a natural language processing tool to identify injuries in infants associated with abuse. Acad Pediatr. Aug 2022;22(6):981-988. [FREE Full text] [CrossRef] [Medline]
  34. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform. Feb 2004;37(1):56-76. [FREE Full text] [CrossRef] [Medline]
  35. Usability. Interaction Design Foundation. URL: https://www.interaction-design.org/literature/topics/usability [accessed 2021-12-26]
  36. Mann DM, Chokshi SK, Kushniruk A. Bridging the gap between academic research and pragmatic needs in usability: a hybrid approach to usability evaluation of health care information systems. JMIR Hum Factors. Nov 28, 2018;5(4):e10721. [FREE Full text] [CrossRef] [Medline]
  37. Van de Velde S, Kunnamo I, Roshanov P, Kortteisto T, Aertgeerts B, Vandvik PO, et al. The GUIDES checklist: development of a tool to improve the successful use of guideline-based computerised clinical decision support. Implement Sci. Jun 25, 2018;13(1):86. [FREE Full text] [CrossRef] [Medline]
  38. Shum M, Asnes A, Leventhal JM, Bechtel K, Gaither JR, Tiyyagura G. The use of experts to evaluate a child abuse guideline in community emergency departments. Acad Pediatr. Apr 2021;21(3):521-528. [CrossRef] [Medline]
  39. Varpio L, Ajjawi R, Monrouxe LV, O'Brien BC, Rees CE. Shedding the cobra effect: problematising thematic emergence, triangulation, saturation and member checking. Med Educ. Jan 2017;51(1):40-50. [CrossRef] [Medline]
  40. LaDonna KA, Artino ARJ, Balmer DF. Beyond the guise of saturation: rigor and qualitative interview data. J Grad Med Educ. Oct 2021;13(5):607-611. [FREE Full text] [CrossRef] [Medline]
  41. Hanson JL, Balmer DF, Giardino AP. Qualitative research methods for medical educators. Acad Pediatr. 2011;11(5):375-386. [CrossRef] [Medline]
  42. Johnson CM, Johnston D, Crowley PK, Culbertson H, Rippen HE, Damico DJ, et al. EHR usability toolkit: a background report on usability and electronic health records. Agency for Healthcare Research and Quality. U.S. Department of Health and Human Services; Aug 2011. URL: https:/​/digital.​ahrq.gov/​sites/​default/​files/​docs/​citation/​EHR_Usability_Toolkit_Background_Report.​pdf [accessed 2024-03-11]
  43. Brooke J. SUS: a retrospective. J User Exp. 2013;8(2):29-40. [FREE Full text]
  44. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. Nov 2005;15(9):1277-1288. [CrossRef] [Medline]
  45. Brooke J. SUS: a 'quick and dirty' usability scale. In: Usability Evaluation In Industry. Boca Raton, FL. CRC Press; 1996.
  46. Lewis JR, Sauro J. Item benchmarks for the system usability scale. J Usab Stud. May 1, 2018;13(3):158-167. [FREE Full text]
  47. Sauro J. A Practical Guide to the System Usability Scale: Background, Benchmarks & Best Practices. Denver, CO. Measuring Usability LLC; 2011.
  48. Powers E, Tiyyagura G, Asnes AG, Leventhal JM, Moles R, Christison-Lagay E, et al. Early involvement of the child protection team in the care of injured infants in a pediatric emergency department. J Emerg Med. Jun 2019;56(6):592-600. [CrossRef] [Medline]
  49. Tiyyagura G, Emerson B, Gaither JR, Bechtel K, Leventhal JM, Becker H, et al. Child protection team consultation for injuries potentially due to child abuse in community emergency departments. Acad Emerg Med. Jan 2021;28(1):70-81. [FREE Full text] [CrossRef] [Medline]
  50. What is FHIR®? HealthIT.gov. The Office of the National Coordinator for Health Information Technology; 2019. URL: https://www.healthit.gov/sites/default/files/2019-08/ONCFHIRFSWhatIsFHIR.pdf [accessed 2024-03-11]
  51. Yan E. Catching FHIR: healthcare interoperability in 2023. Keysight Technologies. Jan 16, 2023. URL: https:/​/www.​keysight.com/​blogs/​en/​tech/​software-testing/​2023/​1/​16/​fhir-healthcare-interoperability-testing [accessed 2024-03-11]
  52. The FHIR® API. HealthIT.gov. The Office of the National Coordinator for Health Information Technology URL: https://www.healthit.gov/sites/default/files/page/2021-04/FHIR%20API%20Fact%20Sheet.pdf [accessed 2024-03-11]
  53. Vorisek CN, Lehne M, Klopfenstein SA, Mayer PJ, Bartschke A, Haese T, et al. Fast healthcare interoperability resources (FHIR) for interoperability in health research: systematic review. JMIR Med Inform. Jul 19, 2022;10(7):e35724. [FREE Full text] [CrossRef] [Medline]
  54. Barker W, Posnack S. The heat is on: US caught FHIR in 2019. Health IT Buzz. Office of the National Coordinator for Health Information Technology; Jul 29, 2021. URL: https://www.healthit.gov/buzz-blog/health-it/the-heat-is-on-us-caught-fhir-in-2019 [accessed 2024-03-11]
  55. 3M™ M*Modal fluency direct. 3M Company. URL: https:/​/www.​3m.com/​3M/​en_US/​health-information-systems-us/​create-time-to-care/​clinician-solutions/​speech-recognition/​fluency-direct/​ [accessed 2022-01-17]
  56. Rangel EL, Cook BS, Bennett BL, Shebesta K, Ying J, Falcone RA. Eliminating disparity in evaluation for abuse in infants with head injury: use of a screening guideline. J Pediatr Surg. Jun 2009;44(6):1229-1235. [CrossRef] [Medline]
  57. Alert fatigue. Patient Safety Network. Agency for Healthcare Research and Quality; Sep 07, 2019. URL: https://tinyurl.com/467dssaw [accessed 2024-03-10]
  58. Shum M, Hsiao A, Teng W, Asnes A, Amrhein J, Tiyyagura G. Natural language processing - a surveillance stepping stone to identify child abuse. Acad Pediatr. 2024;24(1):92-96. [CrossRef] [Medline]
  59. 21st century cures act. U.S. Food & Drug Administration. Jan 31, 2020. URL: https://www.fda.gov/regulatory-information/selected-amendments-fdc-act/21st-century-cures-act [accessed 2022-01-17]
  60. ONC's Cures Act Final Rule. HealthIT.gov. The Office of the National Coordinator for Health Information Technology URL: https://www.healthit.gov/topic/oncs-cures-act-final-rule [accessed 2022-01-17]
  61. Anthony ES. The cures act final rule: interoperability-focused policies that empower patients and support providers. Health IT Buzz. The Office of the National Coordinator for Health Information Technology; Mar 09, 2020. URL: https://www.healthit.gov/buzz-blog/21st-century-cures-act/the-cures-final-rule [accessed 2022-01-16]


CA-CDS: Child Abuse Clinical Decision Support
CDS: clinical decision support
CPS: Child Protective Services
CPT: child protection team
DART: Detection, Assessment, Referral, and Treatment
DCF: Department of Children and Families
ED: emergency department
EHR: electronic health record
FHIR: Fast Healthcare Interoperability Resources
GUIDES: Guideline Implementation with Decision Support
MORE: Mechanism, Others present, Review of development, and Examination details
NLP: natural language processing
SUS: System Usability Scale
SW: social worker


Edited by G Tsafnat; submitted 20.07.23; peer-reviewed by E Heiman, D Listman; comments to author 08.02.24; revised version received 23.02.24; accepted 23.02.24; published 29.03.24.

Copyright

©Amy Thomas, Andrea Asnes, Kyle Libby, Allen Hsiao, Gunjan Tiyyagura. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.03.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.