Review
Abstract
Background: Surgical site infections (SSIs) occur frequently and impact patients and health care systems. Remote surveillance of surgical wounds is currently limited by the need for manual assessment by clinicians. Machine learning (ML)–based methods have recently been used to address various aspects of the postoperative wound healing process and may be used to improve the scalability and cost-effectiveness of remote surgical wound assessment.
Objective: The objective of this review was to provide an overview of the ML methods that have been used to identify surgical wound infections from images.
Methods: We conducted a scoping review of ML approaches for visual detection of SSIs following the JBI (Joanna Briggs Institute) methodology. Reports of participants in any postoperative context focusing on identification of surgical wound infections were included. Studies that did not address SSI identification, surgical wounds, or did not use image or video data were excluded. We searched MEDLINE, Embase, CINAHL, CENTRAL, Web of Science Core Collection, IEEE Xplore, Compendex, and arXiv for relevant studies in November 2022. The records retrieved were double screened for eligibility. A data extraction tool was used to chart the relevant data, which was described narratively and presented using tables. Employment of TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) guidelines was evaluated and PROBAST (Prediction Model Risk of Bias Assessment Tool) was used to assess risk of bias (RoB).
Results: In total, 10 of the 715 unique records screened met the eligibility criteria. In these studies, the clinical contexts and surgical procedures were diverse. All papers developed diagnostic models, though none performed external validation. Both traditional ML and deep learning methods were used to identify SSIs from mostly color images, and the volume of images used ranged from under 50 to thousands. Further, 10 TRIPOD items were reported in at least 4 studies, though 15 items were reported in fewer than 4 studies. PROBAST assessment led to 9 studies being identified as having an overall high RoB, with 1 study having overall unclear RoB.
Conclusions: Research on the image-based identification of surgical wound infections using ML remains novel, and there is a need for standardized reporting. Limitations related to variability in image capture, model building, and data sources should be addressed in the future.
doi:10.2196/52880
Introduction
Postoperative complications are associated with significant morbidity and mortality [ , ]. Wound-related issues following surgery remain common and represent a considerable cost to patients and health care systems [ , ]. The global incidence of surgical site infections (SSIs)—which include superficial or deep infections occurring at the incision site as well as organ-space infections related to the surgery [ ]—has been estimated to be 11% [ ]. Many of these events occur after hospital discharge, highlighting the need for remote posthospital discharge monitoring. Early research suggests that remote postoperative wound follow-up is associated with high patient satisfaction and reduced costs [ , ].

Artificial intelligence tools have been applied to various aspects of health care and are contributing to the shift toward precision medicine [ - ]. Specifically, machine learning (ML) techniques can leverage health data to develop predictive models that assist in clinical decision-making [ ], and they can be used in conjunction with computer vision. An important medical task is the classification and detection of various objects, ranging from skin lesions to cell nuclei [ ]. Recently, ML-enabled computer vision methods have contributed to the automation of wound segmentation [ , ], the evaluation of postoperative outcomes [ , ], and the improvement of wound assessment practices [ , ], often outperforming existing approaches.

Wound care involves cleaning and dressing, monitoring healing, addressing possible infection, and other wound type–specific measures [ ]. Current image-based wound management practices, often involving manual wound photography and assessment carried out by nurses, are time- and labor-intensive [ ]. In contrast, models of care augmented with ML-enabled methods can be automated [ , ]. The portability of these methods might also allow such assessments to be conducted remotely [ ], reducing patient travel burden and improving access to wound care in rural areas [ , ]. A recent clinical trial (Post-Discharge After Surgery Virtual Care With Remote Automated Monitoring-1) found that virtual care with remote monitoring that included wound evaluation shows promise in improving outcomes important to patients and to optimal health system function [ ]. These results highlight the utility of digital approaches to care, which can be integrated with automated ML systems to increase scalability.

The research landscape of ML-based methods for wound surveillance is evolving rapidly, and several reviews have addressed the use of ML for various aspects of wound care from different perspectives. One scoping review focused on mapping the use cases for ML in the management of various types of chronic wounds (eg, visual assessment and predicting evolution) [ ]. Another review addressed image-based chronic wound assessment from a technical standpoint, characterizing existing rule-based and ML methods for wound feature extraction and classification, as well as systems for wound imaging [ ]. However, chronic and acute wounds differ in terms of the clinical signs associated with infection, as those in chronic wound infections are often less discernible [ ], and there is a need to establish the state of the science with respect to how ML-based tools are being used for postoperative wounds. One systematic review specifically characterized the effectiveness of ML algorithms that use textual or structured data for the detection and prediction of SSIs [ ], though a survey of image-based methods has not been undertaken. Likewise, other systematic reviews have found that reporting in ML-based prediction model studies is generally poor and that most are at high risk of bias (RoB) [ , ]. Considering these results, assessments of RoB and of the employment of reporting guidelines—which have not been included in previous reviews of image-based ML for wound care—can provide further insights into the current state of research in this field.

The scope and purpose of this review were to provide an in-depth overview of ML approaches that use visual data for the identification of SSIs. Specifically, this review describes the nature of the methods used in this context, the ways in which they have been validated, the extent to which the reporting of these studies follows guideline recommendations, and their RoB.
Methods
Review Methodology
This scoping review was conducted in accordance with the JBI (Joanna Briggs Institute) methodology for scoping reviews [ ]. The PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist was used to guide the writing of this review [ ]. We opted for a scoping review approach because we sought to analyze the methods employed in conducting research in this field, an indication for scoping reviews [ ], rather than to synthesize model performance.

Search Strategy and Study Selection
Following our protocol [ ], participants of any age (or other demographic characteristic) who underwent any type of surgery were considered. The main concept addressed is the use of ML-based computer vision for the image-based identification of surgical wound infections. Only wounds that were directly the product of surgery were included; other types of wounds, such as pressure ulcers, were excluded. We included studies that described the detection of infection in such wounds (as defined by study authors). Studies solely focusing on tasks other than identification (eg, segmentation) or using sources other than images or videos for prediction were not considered. Studies conducted in any postoperative context, including postdischarge settings, were included.

Studies that developed or validated one or more prediction models were included in this review, including those that gathered data from experimental, quasi-experimental, and observational studies (eg, randomized controlled trials, and prospective and retrospective studies). Only primary sources were considered. Select grey literature sources, such as conference proceedings and preprints, were also considered. Animal studies were excluded.
An initial limited search of MEDLINE (Ovid) and CINAHL (EBSCO) was undertaken to identify relevant papers. Text words used in the titles and abstracts of retrieved records, as well as index terms used to describe them, informed the full search strategy (available in the multimedia appendices), which was adapted for each database. The databases we searched were MEDLINE (Ovid), CENTRAL (Ovid), Embase (Ovid), CINAHL (EBSCO), Web of Science Core Collection, IEEE Xplore, and Compendex. We also searched arXiv for relevant preprints. All databases were searched from inception to November 24, 2022. Reference lists of all included records were likewise searched for other relevant records. Only English-language records were considered.

After the search was completed, duplicate citations were removed, and all identified citations were uploaded into Rayyan [ ] for title and abstract screening and full-text screening by 2 independent reviewers. An abstract screening tool (available in the multimedia appendices) was used to aid in the screening process. The texts of potentially relevant records were retrieved in full and assessed in the same manner. Disagreements were resolved through discussion or by consultation with an additional reviewer.

Assessment of the Employment of Reporting Guidelines and RoB
A data extraction tool (available in the multimedia appendices)—piloted with 20% (2/10) of the included reports by 2 independent reviewers—was used to abstract the relevant data. After piloting the tool, a single reviewer extracted data from the remaining sources, with validation by an additional reviewer. The data were summarized using tables and presented narratively.

We determined the extent to which the included reports employed the TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) guidelines using the TRIPOD adherence assessment form [ ] and used PROBAST (Prediction Model Risk of Bias Assessment Tool) to conduct critical appraisal [ ]. Further, 2 reviewers assessed both the employment of reporting guidelines and RoB for 20% (2/10) of the included reports; the remaining assessments were carried out by 1 reviewer (with an additional reviewer available for validation). In studies that developed multiple models, we only evaluated reporting and RoB for those that were image-based. To facilitate comparison between the reporting levels of TRIPOD items, we chose arbitrary thresholds to denote high (≥70%), moderate (40%-69%), and low (1%-39%) adherence.
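For illustration, these thresholds amount to a simple bucketing rule over the 10 included studies. The sketch below (Python, written for this explanation; not code from the review itself) makes the rule explicit.

```python
# Illustrative bucketing of TRIPOD items by the share of studies
# reporting them, using the thresholds defined above.
def adherence_level(n_reporting: int, n_studies: int = 10) -> str:
    """Classify a TRIPOD item by its reporting percentage."""
    pct = 100 * n_reporting / n_studies
    if pct >= 70:
        return "high"      # >=70% of studies report the item
    if pct >= 40:
        return "moderate"  # 40%-69%
    if pct >= 1:
        return "low"       # 1%-39%
    return "not reported"

assert adherence_level(7) == "high"
assert adherence_level(4) == "moderate"
assert adherence_level(3) == "low"
assert adherence_level(0) == "not reported"
```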
The TRIPOD adherence form and PROBAST were modified as needed for the purposes of this review. As has been noted in other reviews [ , - ], it is difficult to assess RoB in the predictors of deep learning (DL) models that use images for prediction, as the image features are automatically selected by the algorithm. Still, we deemed image capture considerations important (eg, whether images were systematically captured) and altered the relevant TRIPOD and PROBAST items accordingly. The full list of modifications can be found in the multimedia appendices.

Results
Study Inclusion
The search retrieved 796 records, of which 699 remained after duplicates were removed. We excluded 684 records during title and abstract screening and assessed 15 reports at full text, identifying 10 reports that met the eligibility criteria. The reference lists of these reports yielded an additional 16 potentially relevant records, though none met the eligibility criteria.

Review Findings
The included studies took place in a variety of settings, across a wide range of cohort sizes (Table 1). Important study characteristics were sometimes unclear or not reported. The full data extraction sheet can be found in the multimedia appendices.

Table 1. Characteristics of the included studies.

Author | Purpose | Setting | Cohort | Events
Fletcher et al [ ] | To develop a model for predicting SSIa in C-section wounds from thermal images taken with smartphones | Women who underwent C-section at a particular hospital in Kigali, Rwanda, between September 2019 and February 2020, prospectively enrolled on postoperative day 1 | In total, 530 participants | In total, 30 participants with infected wounds
Fletcher et al [ ] | To develop a model for predicting SSI in C-section wounds from color images taken with mobile devices | Women aged >18 years who underwent C-section at a particular hospital in Kigali, Rwanda, between March and October 2017, enrolled prior to discharge | In total, 572 participants (out of 729) who returned for follow-up | In total, 62 participants with infected wounds
Wu et al [ ] | To develop an automatic monitoring tool for surgical wounds based on smartphone images | Prospectively collected wound database of patients who had undergone laparotomy, minimally invasive surgery, or hernia repair at an Asian academic center | In total, 480 wound images from 100 patients | In total, 136 images of infected wounds
Fletcher et al [ ] | To develop models for predicting SSI in C-section wounds from questionnaire and image data | Women aged ≥18 years who underwent C-section at a particular hospital in Kigali, Rwanda, between March and October 2017, enrolled prior to discharge | In total, 572 participants (out of 729) who returned for follow-up; images available for 568 patients | In total, 62 participants with infected wounds
Hsu et al [ ] | To develop an automatic wound interpretation app for automated wound monitoring | Images of chest, abdomen, back, hand, and podiatry wounds collected from the Department of Surgery and Department of Internal Medicine of National Taiwan University Hospital | In total, 293 wound images | In training set, 27 infection images; total number unclear
Lüneburg et al [ ] | To explore MLb approaches for remote LVADc patient monitoring using images | Images of LVAD driveline exit sites obtained from Schüchtermann-Schiller'sche Kliniken and Hannover Medical School | In total, 745 images from 61 patients, though only 732 are labeled | In total, 212 images of mild infection and 37 images of severe infection
Shenoy et al [ ] | To develop a model that can identify the onset of wound ailments from smartphone images | Images collected primarily from patients and surgeons at the Palo Alto Veterans Affairs Hospital and the Washington University Medical Center | In total, 1335 images | In total, 355 images of infection
Hsu et al [ ] | To develop a model for recognizing SSI | Images collected from the Department of Surgery of National Taiwan University Hospital | In total, 42 images | In total, 30 images of infection
Zeng et al [ ] | To develop a system for automatic wound detection and subsequent infection detection | Not reported | Total unclear; 6 images for testing | Unclear
Wang et al [ ] | To develop an integrated system for automatic wound segmentation and analysis of wound conditions from wound images | Images collected from the New York University Wound Database | In total, 3400 images | In total, 155 images of infection
aSSI: surgical site infection.
bML: machine learning.
cLVAD: left ventricular assist device.
The earliest included paper was published in 2015 [ ], 6 papers were published between 2017 and 2019 [ - ], and 3 papers were published between 2020 and November 2022 [ - ].

The objective of the included studies was generally to develop models for identifying surgical wound infection from images. In some cases, the purpose was broader: 2 studies sought to identify the presence of various wound attributes (eg, granulation) [ , ], and 4 studies developed models for automatic wound segmentation [ , , , ]. Other objectives included healing progress prediction [ ], surface area estimation [ ], and wound detection [ ].

Patients, Procedures, and Image Capture
The types of patients and surgical procedures studied varied. In total, 3 papers focused on C-section patients in rural Rwanda [ , , ], while another study examined patients implanted with a left ventricular assist device in Germany [ ]. Further, 2 studies conducted in Asia described the surgical procedures more broadly; for instance, 1 paper included patients who had undergone laparotomy, minimally invasive surgery, or hernia repair [ ], while another included surgical wounds of the chest, abdomen, back, hands, and feet [ ]. In 4 papers, this information was not specified [ - ].

The context of image capture likewise varied (Table 2). Most studies simply stated that images were obtained from one or more sites or data sets [ - , ], without further details on how the images were selected, though 1 study additionally indicated that the data were "prospectively collected" [ ], and the studies conducted in Rwanda described their cohorts in the greatest detail [ , , ].

Table 2. Image capture, outcome determination, MLa methods, and performance of the included studies.

Author | Time of image capture | Imaging modality | Outcome determination | Modeling methods | Performance metric for best-performing model
Fletcher et al [ ] | Approximately 10 days after surgery | Thermal images taken by community health workers with a thermal camera module connected to a smartphone that produces a JPG thermal image and a separate 2D temperature array | Physical examination performed by general practitioner | CNNb | Median AUCc: 0.90
Fletcher et al [ ] | Approximately 10 days after surgery | Color images taken by community health workers with Android tablets | Physical examination performed by general practitioner | CNN | Median AUC: 0.655
Wu et al [ ] | Just after surgery, during hospitalization, and in outpatient clinic follow-up | Color images taken by surgeons with smartphones | Annotation of abnormal wound features on images performed by surgeons | CNN, SVMd, RFe, GBf | Median AUC: 0.833
Fletcher et al [ ] | Approximately 10 days after surgery | Color images taken by community health workers with Android tablets | Physical examination performed by general practitioner | Unclear; potentially both SVM and logistic regression | Median AUC: 1.0
Hsu et al [ ] | Not reported | Color images taken with smartphones | Unclear, but likely annotation of images by 3 physicians | SVM | Overall accuracy: 0.8358
Lüneburg et al [ ] | Not reported | Color images; device not reported | Unclear, but likely based on physical examination performed by physicians | CNN | Overall accuracy: 0.670
Shenoy et al [ ] | Not reported | Color images taken by patients and surgeons with smartphones | Not reported | CNN | AUC: 0.82
Hsu et al [ ] | Not reported | Color images; device not reported | Not reported | SVM | Overall accuracy: 0.9523
Zeng et al [ ] | Not reported | Color images; device not reported | Not reported | SVM | AUCs varied by infection-related wound attribute, ranging from 0.7682 to 0.9145
Wang et al [ ] | Not reported | Color images; device not reported | Not reported | SVM using CNN features | AUC: 0.847
aML: machine learning.
bCNN: convolutional neural network.
cAUC: area under curve.
dSVM: support vector machine.
eRF: random forest.
fGB: gradient boosting.
In the studies conducted with C-section patients, the wounds were photographed approximately 10 days after surgery, with infection assessment taking place on the same day [ , , ]. Another study collected images at multiple time points: immediately after surgery, during hospitalization, and at a later follow-up, though the number of days post surgery was not indicated [ ]. The time at which the images were taken relative to surgery and the time at which infection was assessed relative to image capture were not reported in the remaining 6 records [ - ].

In terms of the images themselves, 9 studies used color images [ - ], and 1 used thermal images [ ]. Further, 6 studies used a mobile device (either a smartphone or a tablet) to capture the images [ - , ], while the others did not report the device used [ , - ]. Across the studies that reported the persons responsible for capturing the images, community health workers were typically responsible [ , , ]; 1 study used images taken by surgeons [ ]; and another used images collected by both patients and surgeons [ ].

Assessment of surgical wound infection establishes the model ground truth; it occurred mainly through face-to-face physical examination [ , , , ] or manual annotation of the wound images [ , , ], or it was not reported [ , , ].

ML Approaches
All the included records were model development studies (ie, none performed external validation). In total, 4 papers used convolutional neural networks (CNNs) [ , , , ]; 3 used support vector machines (SVMs) developed using handcrafted features [ , , ]; 1 trained an SVM classifier using CNN-derived features [ ]; 1 used a CNN, an SVM, a random forest model, and a gradient boosting classifier [ ]; and 1 paper's methods were not entirely clear but may have involved both logistic regression and SVMs [ ]. Additional technical details are available in the multimedia appendices.
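To make the SVM-on-CNN-features approach concrete, the sketch below pairs a pretrained CNN feature extractor with a conventional SVM classifier. It is a minimal illustration only: the ResNet-18 backbone, the RBF kernel, and the train_paths, y_train, and test_paths variables are assumptions made for this example, not details taken from the included studies.

```python
# Sketch: train an SVM on features extracted by a pretrained CNN.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

# Pretrained CNN with its classification head removed -> 512-d feature vectors
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(paths):
    """Map image file paths to CNN feature vectors."""
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                             for p in paths])
        return backbone(batch).numpy()

# Hypothetical inputs: lists of wound image paths and 0/1 infection labels.
clf = SVC(kernel="rbf", probability=True, class_weight="balanced")
clf.fit(extract_features(train_paths), y_train)
p_infected = clf.predict_proba(extract_features(test_paths))[:, 1]
```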
The number of images used for developing an infection detection model ranged from just 42 [ ] to 3400 [ ]. Likewise, the proportion of images showing infected wounds ranged from 4.6% (155/3400) [ ] to 71.4% (30/42) [ ]. In some cases, there was 1 image per patient [ , , ], while in others, there were multiple images per patient [ , ] or the number of patients was not reported [ , - ].
In 5 papers, the classification task was binary [ - , ], while in most others, the task was multiclass. In 1 paper, multiclass classification entailed distinguishing between mild, severe, and no infection [ ], while in 3 others, the model differentiated between various infection-related wound attributes, such as granulation and swelling [ , , ]. In contrast, 1 paper addressed a multilabel task in which the model identified the presence of a wound, infection, granulation, and drainage in each image [ ].
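The toy encodings below contrast these three task formulations; the values are illustrative only and are not data from the included studies.

```python
# Hypothetical labels for three images under each task formulation.
binary = [0, 1, 1]          # one class per image: infected vs not infected
multiclass = [0, 1, 2]      # one class per image: eg, none / mild / severe
multilabel = [
    [1, 0, 0, 1],           # per image, each of [wound, infection,
    [1, 1, 1, 0],           # granulation, drainage] is independently
    [0, 0, 0, 0],           # present (1) or absent (0)
]
```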
All studies reported model performance. In total, 7 studies reported area under the receiver operating characteristic curve values, which ranged from 0.655 [ ] to 1.0 [ ] for the best-performing models. The remaining studies reported overall accuracies, ranging from 0.670 [ ] to 0.952 [ ] for the best-performing models, as well as other performance metrics (eg, F1-scores).

Employment of Reporting Guidelines
There were a few TRIPOD items that were highly employed (ie, employed by at least 7 out of the 10 included studies). For instance, all papers reported their objectives, and most reported background information, overall interpretations of study results, and descriptions of whether actions were taken to standardize image capture or otherwise systematically identify wounds from the images. In addition, 6 TRIPOD items had moderate employment (employed by between 4 and 6 studies); namely, the reporting of data sources and study setting, descriptions of model-building procedures, the number of participants or images (and the number showing infection), study limitations, as well as the potential clinical use of the models and future research directions.
Employment of 8 TRIPOD items was low (employed by between 1 and 3 studies), including items related to the reporting of participant selection methods, descriptions of how and when images were taken, rationales for sample sizes, the flow of participants within the paper, explanations of how to use the models, and funding details. Most studies also fell short of the guidelines for outcome assessment: there was often no indication of the criteria used to diagnose surgical wound infection, or the time interval between surgery and assessment was unclear.
An additional 7 TRIPOD items were not reported in any of the included studies. Titles and abstracts did not follow reporting guidelines, and participant demographics were not reported. Similarly, model calibration was not discussed, and in the studies that did not exclusively use DL methods for infection detection [ - , - ], the reporting of feature modeling details did not meet TRIPOD guidelines.

RoB
The RoB assessment led to 9 studies being identified as having an overall high RoB, while the remaining study was determined to have an overall unclear RoB (Table 3). The participants domain was often rated as having unclear RoB because little information about the source of data and recruitment methods was reported [ , - ]. The 3 papers on C-section patients in Rwanda were at low RoB for this domain, as these works were cross-sectional in nature and the cohorts were well defined [ , , ]. In terms of predictors, we identified 5 papers as being at high RoB, since there was variability in image capture conditions that was not later accounted for [ - , , ]. In contrast, other papers were judged to be at low RoB for this domain because they segmented the wound prior to infection detection [ , , ] or placed a frame around the wound prior to image capture [ ], improving the uniformity of the images processed for model training. Likewise, most studies were rated as having unclear RoB in the outcome domain, largely because the specific criteria used to gauge the presence of surgical wound infection were not reported. In other cases, the outcome domain was determined to be at high RoB because the presence of infection was determined solely from images, as opposed to by face-to-face review. In 8 studies, the analysis domain was assessed as being at high RoB for multiple reasons [ , - ], including the omission of participants from model development, the absence of both discrimination and calibration measures, and failure to appropriately account for overfitting.

Table 3. PROBASTa assessment of RoBb for the included studies.

Study | Participants | Predictors | Outcome | Analysis | Overall
Fletcher et al [ ] | +c | + | ?d | ? | ?
Fletcher et al [ ] | + | −e | ? | − | −
Wu et al [ ] | ? | − | − | ? | −
Fletcher et al [ ] | + | − | ? | − | −
Hsu et al [ ] | ? | + | − | − | −
Lüneburg et al [ ] | ? | + | ? | − | −
Shenoy et al [ ] | ? | − | ? | − | −
Hsu et al [ ] | ? | + | − | − | −
Zeng et al [ ] | ? | ? | ? | − | −
Wang et al [ ] | ? | − | ? | − | −
aPROBAST: Prediction Model Risk of Bias Assessment Tool.
bRoB: risk of bias.
c+ indicates low RoB.
d? indicates unclear RoB.
e− indicates high RoB.
Discussion
Principal Findings
This scoping review aimed to characterize the available research on ML approaches for the image-based identification of surgical wound infections. Such research is important as it can be integrated with remote patient monitoring, which enables improved health care decision-making and management, with additional benefits such as reduced travel burden. Initial work has suggested that remote image-based monitoring of wounds is feasible and associated with higher patient satisfaction [ - ], and is at least comparable to routine in-person care in terms of time to infection diagnosis [ ]. Other aspects of wound assessment targeted by image-based remote patient monitoring include the identification of dehiscence as well as surface area and temperature measurements [ , ], though much of this work has not been automated or ML-based.

Despite the extensive body of ML-based work using medical images in other specialties [ , ], there is scarce ML research on the identification of surgical wound infections from digital images. We identified only 10 such papers, 7 of which were conference papers, a format that limits the space available for reporting and likely contributed to the low reporting of TRIPOD items. In contrast, a recent review of ML for SSI detection identified 32 papers that used structured electronic health record, free-text, or administrative data for prediction [ ], suggesting that ML-based SSI detection research has mostly used these more readily available data sources. While models based on such in-hospital data perform well in the context of inpatient SSI detection, they may be limited in their practical application during clinical care, as visual inspection is the essential mode by which infection is identified. In terms of incorporating innovative imaging techniques, thermal imaging has recently emerged as a potentially valuable tool in the management of surgical wounds [ - ]. Thermal imaging can be used with mobile devices [ , ], which facilitates its application in postdischarge monitoring, and it may generalize better across different skin colors. On the other hand, the utility of electronic health record– or text-based models for postdischarge surveillance is perhaps less clear. Current postdischarge surgical wound surveillance largely depends on evaluation at follow-up visits, which may be infrequent and untimely [ ], or on patient self-assessment, which is not reliable [ , ]. ML for the image-based identification of surgical wound infections presents the opportunity to automate this practice.

Reporting Data Collection Details
ML hinges on effective data collection, which can be challenging in outpatient or remote monitoring settings; hence, this type of research is still in early development. Although virtual care as a model of health care is relatively new, progress has been made in terms of data collection technology [ , ], similar telemedicine research without ML [ - ], and the monitoring of other wound types [ , ]. As almost three-fourths of individuals worldwide own a mobile phone [ ], leveraging this technology for remote monitoring holds potential. Still, it is worth noting that mobile phone ownership and mobile network coverage are lower in certain geographical areas and in low-income groups. In these contexts, alternative approaches, such as in-hospital follow-up with pictures taken by a community health worker [ ], may be more appropriate. The data used in the included studies were mainly collected in non-Western settings, and there are no publicly available data sets of surgical wound infection images, which presents a challenge to reproducibility and further development in the field. Likewise, the lack of reporting on image metadata (eg, gender and age distributions, procedures received, and occurrence of surgical complications) and eligibility criteria limits the understanding of the populations to which this research can be generalized and contributes to RoB in terms of participant selection. Reporting of such details needs improvement to support the development of prototypes for different subpopulations in this domain.

Transparency and Standardization in Model Development
The models developed in the included studies were diagnostic rather than prognostic in nature. Similarly, none of the included papers performed out-of-sample external validation, highlighting the newness of this field of work and the opportunity for further maturity. Interestingly, 4 papers published between 2017 and 2019 did not use DL methods, perhaps because the expertise required to develop such models was not yet widely available. The reporting of model performance is likewise not well standardized: no papers reported on calibration, and some did not include discrimination measures, which gives rise to RoB in analysis methods. Many papers did not report on measures to address overfitting, which calls the developed models' generalizability into question. Despite the partial reporting, the performance of the models in the included papers suggests that image-based ML for the identification of surgical wound infection holds promise. To better understand their generalizability and reliability, future studies should externally validate and calibrate the developed models, report areas under the curve (as opposed to solely reporting other measures such as accuracy), and provide transparent documentation (eg, open-source code) to promote reproducibility and collaboration.
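As a concrete reference for this recommendation, the sketch below computes a discrimination measure (area under the curve) and simple calibration summaries with scikit-learn. The y_true and p_pred arrays are hypothetical labels and predicted infection probabilities, not results from the included studies.

```python
# Sketch: report discrimination (AUC) and calibration for a binary model.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.calibration import calibration_curve

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])                  # ground truth
p_pred = np.array([0.10, 0.30, 0.80, 0.20, 0.60, 0.90, 0.40, 0.70])

print("AUC (discrimination):", roc_auc_score(y_true, p_pred))
print("Brier score (lower is better):", brier_score_loss(y_true, p_pred))

# Calibration: observed infection rate within bins of predicted risk.
obs_rate, mean_pred = calibration_curve(y_true, p_pred, n_bins=4)
for m, o in zip(mean_pred, obs_rate):
    print(f"predicted risk {m:.2f} -> observed rate {o:.2f}")
```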
Considering that interpretability and explainability support clinician trust [ ], researchers may likewise wish to explore these concepts in future work.

Employment of Reporting Guidelines and RoB
Standardization of Image Capture
Use of TRIPOD guidelines was mostly low, and RoB was generally unclear or high. This was in part due to the participant- and analysis-related considerations discussed above; however, there were also concerns with the images themselves. In most studies, the way in which the images were taken, the environmental conditions, the persons responsible for taking them, and the time of image capture relative to surgery were not reported in detail. Still, there was often variability in the conditions of image capture, which might be attributed to unique challenges associated with collection and standardization in this particular modality. As opposed to other modalities, surgical wound images are largely taken by hand, without explicit training or guidance, which makes for considerable differences among images and introduces RoB in terms of model predictors. Efforts to standardize image capture help reduce RoB by minimizing systematic differences between images of infected versus noninfected wounds. Recent approaches such as instructions for patient-generated surgical wound images [ ] or automated color calibration, scaling, and rotation correction [ ] suggest that these considerations are receiving attention. Some studies created their own segmentation algorithms to capture the wound more reliably from the nonuniform images, an additional undertaking that may have hindered the development of the infection detection models themselves. Segmentation and classification represent distinct areas of research, though many studies developed their own segmentation models rather than using or building on existing segmentation algorithms. In future work, specific directions detailing the time (relative to surgery), method, and conditions of image capture should be provided to reduce unwanted variability, and image processing steps can be undertaken for further standardization.
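As one example of such image processing, the sketch below standardizes resolution and applies a simple gray-world color normalization. This is an illustrative preprocessing pass under stated assumptions (a 224×224 input size), not the pipeline of any included study.

```python
# Sketch: reduce capture variability via resizing and gray-world balancing.
import numpy as np
from PIL import Image

TARGET_SIZE = (224, 224)  # assumed model input size

def standardize(path: str) -> np.ndarray:
    """Resize an image and apply gray-world white balancing."""
    img = np.asarray(Image.open(path).convert("RGB").resize(TARGET_SIZE),
                     dtype=np.float32)
    # Gray-world assumption: rescale each channel so its mean matches the
    # image's overall mean, compensating for lighting and device color casts.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img = img * (channel_means.mean() / channel_means)
    return np.clip(img, 0, 255).astype(np.uint8)
```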
Transparency in Outcome Assessment

Outcome assessment was also not well reported in most papers. While there is no universally accepted, objective gold standard for SSI detection [ ], clinical examination (involving direct observation of the wound) is frequently used as a reference standard [ , , , , ]. Although some studies did perform in-person clinical examination, none reported the specific criteria used to gauge the presence of infection. Considering that reported SSI rates differ depending on the criteria used [ ], specifying these criteria is important for more accurately assessing the RoB arising from outcome assessment. It is worth noting, however, that there are challenges associated with in-person postoperative wound assessment. Surgical wound infections progress variably, with some becoming apparent only after the 30-day postoperative surveillance benchmark [ , ]. However, extended in-person follow-up timeframes may require additional administrative resources. In practice, the criteria employed for SSI assessment typically consider both feasibility and validity [ ]. This may necessitate striking a balance between resources, time constraints, and quality of assessment, which can pose challenges to the comprehensive evaluation of surgical wound infections. On a smaller scale, the interrater reliability of in-person SSI assessment using established criteria can be modest [ , ], and in rural areas, there may be limited access to high-quality in-person wound care. Where feasible, determination of the ground truth should use established criteria for infection and employ multiple independent assessors to minimize RoB.
There are some limitations to this review. For instance, additional searching (eg, forward citation searching and further grey literature sources) may have identified more relevant reports and reduced selection bias. We may also have missed relevant non–English-language papers, potentially excluding valuable studies; the included studies are from diverse locations (eg, Rwanda, Germany, and Taiwan), though this does not fully compensate for the potential language bias. Similarly, data extraction and the TRIPOD and PROBAST assessments were mainly completed by 1 reviewer, which introduces a potential source of bias in our findings. The modifications made to the TRIPOD and PROBAST tools may limit the ability to compare the results of our assessments with those of other reviews. Artificial intelligence–oriented extensions of both tools are in development [ ] and will facilitate their use in appraising ML-based studies.

Conclusions
The use of ML for the image-based identification of surgical wound infections remains in the early stages, with only 10 studies available and a need for reporting standardization. Future development and validation of such models should carefully consider image variability, overfitting concerns, and criteria for determination of infection. These considerations are important to advance the state of image-based ML for wound management, which has the potential to automate traditionally labor-intensive practices.
Acknowledgments
This study was funded through an award from the Hamilton Health Sciences Research Institute. The funders had no role in the collection, analysis, and interpretation of data; in the writing of this paper; and in the decision to submit this paper for publication.
Conflicts of Interest
JP declares research funding support from Roche Canada. PJD has received grants from Abbott Diagnostics, Siemens Canada, and Roche Diagnostics; received consulting fees from Abbott Laboratories, Renibus Therapeutics, Roche Diagnostics, and Trimedic Canada; received monitoring devices from CloudDX and Philips Healthcare; participated in advisory board meetings for Bayer AG; and is a member of the Data Safety Monitoring Board for the PEPPER (Comparative Effectiveness of Pulmonary Embolism Prevention After Hip and Knee Replacement) Study, New Hampshire. The other authors have no conflicts of interest to disclose.
Search strategy. DOCX File, 22 KB
Abstract screening tool. DOCX File, 14 KB
Data extraction tool. DOCX File, 14 KB
TRIPOD and PROBAST altered or excluded items. DOCX File, 17 KB
Complete results of data extraction, assessment of TRIPOD employment, and PROBAST assessment. DOCX File, 70 KB
Additional technical details. DOCX File, 28 KB
PRISMA-ScR checklist. PDF File (Adobe PDF File), 101 KB

References
- Tevis SE, Kennedy GD. Postoperative complications and implications on patient-centered outcomes. J Surg Res. 2013;181(1):106-113. [FREE Full text] [CrossRef] [Medline]
- Endo I, Kumamoto T, Matsuyama R. Postoperative complications and mortality: are they unavoidable? Ann Gastroenterol Surg. 2017;1(3):160-163. [FREE Full text] [CrossRef] [Medline]
- Monahan M, Jowett S, Pinkney T, Brocklehurst P, Morton DG, Abdali Z, et al. Surgical site infection and costs in low- and middle-income countries: a systematic review of the economic burden. PLoS One. 2020;15(6):e0232960. [FREE Full text] [CrossRef] [Medline]
- Guest JF, Fuller GW, Vowden P. Costs and outcomes in evaluating management of unhealed surgical wounds in the community in clinical practice in the UK: a cohort study. BMJ Open. 2018;8(12):e022591. [FREE Full text] [CrossRef] [Medline]
- Onyekwelu I, Yakkanti R, Protzer L, Pinkston CM, Tucker C, Seligson D. Surgical wound classification and surgical site infections in the orthopaedic patient. J Am Acad Orthop Surg Glob Res Rev. 2017;1(3):e022. [FREE Full text] [CrossRef] [Medline]
- Gillespie BM, Harbeck E, Rattray M, Liang R, Walker R, Latimer S, et al. Worldwide incidence of surgical site infections in general surgical patients: a systematic review and meta-analysis of 488,594 patients. Int J Surg. 2021;95:106136. [FREE Full text] [CrossRef] [Medline]
- Wang SC, Au Y, Ramirez-GarciaLuna JL, Lee L, Berry GK. The promise of smartphone applications in the remote monitoring of postsurgical wounds: a literature review. Adv Skin Wound Care. 2020;33(9):489-496. [CrossRef] [Medline]
- Gunter RL, Chouinard S, Fernandes-Taylor S, Wiseman JT, Clarkson S, Bennett K, et al. Current use of telemedicine for post-discharge surgical care: a systematic review. J Am Coll Surg. 2016;222(5):915-927. [FREE Full text] [CrossRef] [Medline]
- Mesko B. The role of artificial intelligence in precision medicine. Expert Rev Precis Med Drug Dev. 2017;2(5):239-241. [FREE Full text] [CrossRef]
- Johnson KB, Wei WQ, Weeraratne D, Frisse ME, Misulis K, Rhee K, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. 2021;14(1):86-93. [FREE Full text] [CrossRef] [Medline]
- Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care. 2019;8(7):2328-2331. [FREE Full text] [CrossRef] [Medline]
- Ghassemi M, Naumann T, Schulam P, Beam AL, Chen IY, Ranganath R. A review of challenges and opportunities in machine learning for health. AMIA Jt Summits Transl Sci Proc. 2020;2020:191-200. [FREE Full text] [Medline]
- Esteva A, Chou K, Yeung S, Naik N, Madani A, Mottaghi A, et al. Deep learning-enabled medical computer vision. npj Digit Med. 2021;4(1):5. [FREE Full text] [CrossRef] [Medline]
- Carrión H, Jafari M, Bagood MD, Yang HY, Isseroff RR, Gomez M. Automatic wound detection and size estimation using deep learning algorithms. PLoS Comput Biol. 2022;18(3):e1009852. [FREE Full text] [CrossRef] [Medline]
- Wang C, Anisuzzaman DM, Williamson V, Dhar MK, Rostami B, Niezgoda J, et al. Fully automatic wound segmentation with deep convolutional neural networks. Sci Rep. 2020;10(1):21897. [FREE Full text] [CrossRef] [Medline]
- Sikka K, Ahmed AA, Diaz D, Goodwin MS, Craig KD, Bartlett MS, et al. Automated assessment of children's postoperative pain using computer vision. Pediatrics. 2015;136(1):e124-e131. [FREE Full text] [CrossRef] [Medline]
- Şimşek İ, Şirolu C. Analysis of surgical outcome after upper eyelid surgery by computer vision algorithm using face and facial landmark detection. Graefes Arch Clin Exp Ophthalmol. 2021;259(10):3119-3125. [CrossRef] [Medline]
- Barakat-Johnson M, Jones A, Burger M, Leong T, Frotjold A, Randall S, et al. Reshaping wound care: evaluation of an artificial intelligence app to improve wound assessment and management amid the COVID-19 pandemic. Int Wound J. 2022;19(6):1561-1577. [FREE Full text] [CrossRef] [Medline]
- Mukherjee R, Manohar DD, Das DK, Achar A, Mitra A, Chakraborty C. Automated tissue classification framework for reproducible chronic wound assessment. Biomed Res Int. 2014;2014:851582. [FREE Full text] [CrossRef] [Medline]
- Dreifke MB, Jayasuriya AA, Jayasuriya AC. Current wound healing procedures and potential care. Mater Sci Eng C. 2015;48:651-662. [FREE Full text] [CrossRef] [Medline]
- Aldaz G, Shluzas LA, Pickham D, Eris O, Sadler J, Joshi S, et al. Hands-free image capture, data tagging and transfer using Google Glass: a pilot study for improved wound care management. PLoS One. 2015;10(4):e0121179. [FREE Full text] [CrossRef] [Medline]
- Zahia S, Garcia-Zapirain B, Elmaghraby A. Integrating 3D model representation for an accurate non-invasive assessment of pressure injuries with deep learning. Sensors (Basel). 2020;20(10):2933. [FREE Full text] [CrossRef] [Medline]
- Lau CH, Yu KHO, Yip TF, Luk LYF, Wai AKC, Sit TY, et al. An artificial intelligence-enabled smartphone app for real-time pressure injury assessment. Front Med Technol. 2022;4:905074. [FREE Full text] [CrossRef] [Medline]
- Zoppo G, Marrone F, Pittarello M, Farina M, Uberti A, Demarchi D, et al. AI technology for remote clinical assessment and monitoring. J Wound Care. 2020;29(12):692-706. [FREE Full text] [CrossRef] [Medline]
- Sood A, Granick MS, Trial C, Lano J, Palmier S, Ribal E, et al. The role of telemedicine in wound care: a review and analysis of a database of 5,795 patients from a mobile wound-healing center in Languedoc-Roussillon, France. Plast Reconstr Surg. 2016;138(Suppl 3):248S-256S. [CrossRef] [Medline]
- Zhang J, Mihai C, Tüshaus L, Scebba G, Distler O, Karlen W. Wound image quality from a mobile health tool for home-based chronic wound management with real-time quality feedback: randomized feasibility study. JMIR mHealth uHealth. 2021;9(7):e26149. [FREE Full text] [CrossRef] [Medline]
- McGillion MH, Parlow J, Borges FK, Marcucci M, Jacka M, Adili A, et al. Post-discharge after surgery Virtual Care with Remote Automated Monitoring-1 (PVC-RAM-1) technology versus standard care: randomised controlled trial. BMJ. 2021;374:n2209. [FREE Full text] [CrossRef] [Medline]
- Dabas M, Schwartz D, Beeckman D, Gefen A. Application of artificial intelligence methodologies to chronic wound care and management: a scoping review. Adv Wound Care (New Rochelle). 2023;12(4):205-240. [CrossRef] [Medline]
- Anisuzzaman DM, Wang C, Rostami B, Gopalakrishnan S, Niezgoda J, Yu Z. Image-based artificial intelligence in wound assessment: a systematic review. Adv Wound Care (New Rochelle). 2022;11(12):687-709. [CrossRef] [Medline]
- Hurlow J, Bowler PG. Acute and chronic wound infections: microbiological, immunological, clinical and therapeutic distinctions. J Wound Care. 2022;31(5):436-445. [FREE Full text] [CrossRef] [Medline]
- Wu G, Khair S, Yang F, Cheligeer C, Southern D, Zhang Z, et al. Performance of machine learning algorithms for surgical site infection case detection and prediction: a systematic review and meta-analysis. Ann Med Surg (Lond). 2022;84:104956. [FREE Full text] [CrossRef] [Medline]
- Navarro CLA, Damen JAA, Takada T, Nijman SWJ, Dhiman P, Ma J, et al. Risk of bias in studies on prediction models developed using supervised machine learning techniques: systematic review. BMJ. 2021;375:n2281. [FREE Full text] [CrossRef] [Medline]
- Nagendran M, Chen Y, Lovejoy CA, Gordon AC, Komorowski M, Harvey H, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689. [FREE Full text] [CrossRef] [Medline]
- Peters MDJ, Godfrey C, McInerney P, Munn Z, Trico AC, Khalil H. Chapter 11: scoping reviews. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. Adelaide, Australia. Joanna Briggs Institute; 2020.
- Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467-473. [FREE Full text] [CrossRef] [Medline]
- Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143. [FREE Full text] [CrossRef] [Medline]
- Machine learning approaches for computer vision tasks related to the identification of surgical site infections: a scoping review protocol. OSF Registries. URL: https://osf.io/3k9xq [accessed 2023-03-23]
- Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. [FREE Full text] [CrossRef] [Medline]
- Heus P, Damen JAAG, Pajouheshnia R, Scholten RJPM, Reitsma JB, Collins GS, et al. Uniformity in measuring adherence to reporting guidelines: the example of TRIPOD for assessing completeness of reporting of prediction model studies. BMJ Open. 2019;9(4):e025611. [FREE Full text] [CrossRef] [Medline]
- Wolff RF, Moons KGM, Riley RD, Whiting PF, Westwood M, Collins GS, et al. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. 2019;170(1):51-58. [FREE Full text] [CrossRef] [Medline]
- Roberts M, Driggs D, Thorpe M, Gilbey J, Yeung M, Ursprung S, et al. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat Mach Intell. 2021;3(3):199-217. [FREE Full text] [CrossRef]
- Wynants L, Van Calster B, Collins GS, Riley RD, Heinze G, Schuit E, et al. Prediction models for diagnosis and prognosis of COVID-19: systematic review and critical appraisal. BMJ. 2020;369:m1328. [FREE Full text] [CrossRef] [Medline]
- Frizzell TO, Glashutter M, Liu CC, Zeng A, Pan D, Hajra SG, et al. Artificial intelligence in brain MRI analysis of Alzheimer's disease over the past 12 years: a systematic review. Ageing Res Rev. 2022;77:101614. [CrossRef] [Medline]
- Fletcher RR, Schneider G, Bikorimana L, Rukundo G, Niyigena A, Miranda E, et al. The use of mobile thermal imaging and deep learning for prediction of surgical site infection. Presented at: 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); November 1-5, 2021; Mexico. p. 5059-5062. URL: https://ieeexplore.ieee.org/document/9630094 [CrossRef]
- Fletcher RR, Schneider G, Hedt-Gauthier B, Nkurunziza T, Alayande B, Riviello R, et al. Use of convolutional neural nets and transfer learning for prediction of surgical site infection from color images. Presented at: 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); November 1-5, 2021; Mexico. p. 5047-5050. [CrossRef]
- Wu JM, Tsai CJ, Ho TW, Lai F, Tai HC, Lin MT. A unified framework for automatic detection of wound infection with artificial intelligence. Appl Sci. 2020;10(15):5353. [FREE Full text] [CrossRef]
- Fletcher RR, Olubeko O, Sonthalia H, Kateera F, Nkurunziza T, Ashby JL, et al. Application of machine learning to prediction of surgical site infection. Presented at: 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); July 23-27, 2019; Berlin, Germany. p. 2234-2237. [CrossRef]
- Hsu JT, Chen YW, Ho TW, Tai HC, Wu JM, Sun HY, et al. Chronic wound assessment and infection detection method. BMC Med Inform Decis Mak. 2019;19(1):99. [FREE Full text] [CrossRef] [Medline]
- Lüneburg N, Reiss N, Feldmann C, van der Meulen P, van de Steeg M, Schmidt T, et al. Photographic LVAD driveline wound infection recognition using deep learning. Stud Health Technol Inform. 2019;260:192-199. [CrossRef] [Medline]
- Shenoy VN, Foster E, Aalami L, Majeed B, Aalami O. Deepwound: automated postoperative wound assessment and surgical site surveillance through convolutional neural networks. Presented at: 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); December 3-6, 2018; Madrid, Spain. p. 1017-1021. [CrossRef]
- Hsu JT, Ho TW, Shih HF, Chang CC, Lai F, Wu JM. Automatic wound infection interpretation for postoperative wound image. Presented at: Proc SPIE 10225, Eighth International Conference on Graphic and Image Processing (ICGIP 2016); February 8, 2017; Tokyo, Japan. 1022526. [CrossRef]
- Zeng YC, Liao KH, Wang CH, Lin Y, Chang WT. Implementation of post-operative wound analytics. Presented at: 2017 IEEE International Conference on Consumer Electronics—Taiwan (ICCE-TW); June 12-14, 2017; Taipei, Taiwan. p. 93-94. [CrossRef]
- Wang C, Yan X, Smith M, Kochhar K, Rubin M, Warren SM, et al. A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks. Presented at: 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); August 25-29, 2015; Milan, Italy. p. 2415-2418. [CrossRef]
- Gunter RL, Fernandes-Taylor S, Rahman S, Awoyinka L, Bennett KM, Weber SM, et al. Feasibility of an image-based mobile health protocol for postoperative wound monitoring. J Am Coll Surg. 2018;226(3):277-286. [FREE Full text] [CrossRef] [Medline]
- Gunter R, Fernandes-Taylor S, Mahnke A, Awoyinka L, Schroeder C, Wiseman J, et al. Evaluating patient usability of an image-based mobile health platform for postoperative wound monitoring. JMIR mHealth uHealth. 2016;4(3):e113. [FREE Full text] [CrossRef] [Medline]
- Mousa AY, Broce M, Monnett S, Davis E, McKee B, Lucas BD. Results of telehealth electronic monitoring for post discharge complications and surgical site infections following arterial revascularization with groin incision. Ann Vasc Surg. 2019;57:160-169. [CrossRef] [Medline]
- McLean KA, Mountain KE, Shaw CA, Drake TM, Pius R, Knight SR, et al. Remote diagnosis of surgical-site infection using a mobile digital intervention: a randomised controlled trial in emergency surgery patients. npj Digit Med. 2021;4(1):160. [FREE Full text] [CrossRef] [Medline]
- Wang SC, Anderson JAE, Evans R, Woo K, Beland B, Sasseville D, et al. Point-of-care wound visioning technology: reproducibility and accuracy of a wound measurement app. PLoS One. 2017;12(8):e0183139. [FREE Full text] [CrossRef] [Medline]
- Wiseman JT, Fernandes-Taylor S, Gunter R, Barnes ML, Saunders RS, Rathouz PJ, et al. Inter-rater agreement and checklist validation for postoperative wound assessment using smartphone images in vascular surgery. J Vasc Surg Venous Lymphat Disord. 2016;4(3):320-328.e2. [FREE Full text] [CrossRef] [Medline]
- Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, et al. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. npj Digit Med. 2021;4(1):65. [FREE Full text] [CrossRef] [Medline]
- Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271-e297. [FREE Full text] [CrossRef] [Medline]
- Childs C, Wright N, Willmott J, Davies M, Kilner K, Ousey K, et al. The surgical wound in infrared: thermographic profiles and early stage test-accuracy to predict surgical site infection in obese women during the first 30 days after caesarean section. Antimicrob Resist Infect Control. 2019;8:7. [FREE Full text] [CrossRef] [Medline]
- Siah CJR, Childs C, Chia CK, Cheng KFK. An observational study of temperature and thermal images of surgical wounds for detecting delayed wound healing within four days after surgery. J Clin Nurs. 2019;28(11-12):2285-2295. [CrossRef] [Medline]
- Li F, Wang M, Wang T, Wang X, Ma X, He H, et al. Smartphone-based infrared thermography to assess progress in thoracic surgical incision healing: a preliminary study. Int Wound J. 2023;20(6):2000-2009. [FREE Full text] [CrossRef] [Medline]
- Kanazawa T, Nakagami G, Goto T, Noguchi H, Oe M, Miyagaki T, et al. Use of smartphone attached mobile thermography assessing subclinical inflammation: a pilot study. J Wound Care. 2016;25(4):177-182. [CrossRef] [Medline]
- Saunders RS, Fernandes-Taylor S, Rathouz PJ, Saha S, Wiseman JT, Havlena J, et al. Outpatient follow-up versus 30-day readmission among general and vascular surgery patients: a case for redesigning transitional care. Surgery. 2014;156(4):949-958. [FREE Full text] [CrossRef] [Medline]
- Whitby M, McLaws ML, Doidge S, Collopy B. Post-discharge surgical site surveillance: does patient education improve reliability of diagnosis? J Hosp Infect. 2007;66(3):237-242. [CrossRef] [Medline]
- Richter V, Cohen MJ, Benenson S, Almogy G, Brezis M. Patient self-assessment of surgical site infection is inaccurate. World J Surg. 2017;41(8):1935-1942. [CrossRef] [Medline]
- Maddah E, Beigzadeh B. Use of a smartphone thermometer to monitor thermal conductivity changes in diabetic foot ulcers: a pilot study. J Wound Care. 2020;29(1):61-66. [CrossRef] [Medline]
- Nkurunziza T, Williams W, Kateera F, Riviello R, Niyigena A, Miranda E, et al. mHealth-community health worker telemedicine intervention for surgical site infection diagnosis: a prospective study among women delivering via caesarean section in rural Rwanda. BMJ Glob Health. 2022;7(7):e009365. [FREE Full text] [CrossRef] [Medline]
- Hedt-Gauthier B, Miranda E, Nkurunziza T, Hughes O, Boatin AA, Gaju E, et al. Telemedicine for surgical site infection diagnosis in rural Rwanda: concordance and accuracy of image reviews. World J Surg. 2022;46(9):2094-2101. [CrossRef] [Medline]
- Totty JP, Harwood AE, Wallace T, Smith GE, Chetter IC. Use of photograph-based telemedicine in postoperative wound assessment to diagnose or exclude surgical site infection. J Wound Care. 2018;27(3):128-135. [CrossRef] [Medline]
- Wang L, Pedersen PC, Strong DM, Tulu B, Agu E, Ignotz R. Smartphone-based wound assessment system for patients with diabetes. IEEE Trans Biomed Eng. 2015;62(2):477-488. [FREE Full text] [CrossRef] [Medline]
- Howell RS, Liu HH, Khan AA, Woods JS, Lin LJ, Saxena M, et al. Development of a method for clinical evaluation of artificial intelligence-based digital wound assessment tools. JAMA Netw Open. 2021;4(5):e217234. [FREE Full text] [CrossRef] [Medline]
- Facts and figures 2022—mobile phone ownership. International Telecommunication Union. 2022. URL: https://www.itu.int/itu-d/reports/statistics/2022/11/24/ff22-mobile-phone-ownership/ [accessed 2023-10-03]
- Tonekaboni S, Joshi S, McCradden MD, Goldenberg A. What clinicians want: contextualizing explainable machine learning for clinical end use. Presented at: 4th Machine Learning for Healthcare Conference; August 2019; Ann Arbor, MI. p. 359-380. URL: https://proceedings.mlr.press/v106/tonekaboni19a.html
- Macefield RC, Blazeby JM, Reeves BC, King A, Rees J, Pullyblank A, et al. Remote assessment of surgical site infection (SSI) using patient-taken wound images: development and evaluation of a method for research and routine practice. J Tissue Viability. 2023;32(1):94-101. [FREE Full text] [CrossRef] [Medline]
- Alayande BT, Prasad S, Abimpaye M, Bakorimana L, Niyigena A, Nkurunziza J, et al. Image-based surgical site infection algorithms to support home-based post-cesarean monitoring: lessons from Rwanda. PLOS Glob Public Health. 2023;3(2):e0001584. [FREE Full text] [CrossRef] [Medline]
- Bruce J, Russell EM, Mollison J, Krukowski ZH. The quality of measurement of surgical wound infection as the basis for monitoring: a systematic review. J Hosp Infect. 2001;49(2):99-108. [CrossRef] [Medline]
- Lathan R, Sidapra M, Yiasemidou M, Long J, Totty J, Smith G, et al. Diagnostic accuracy of telemedicine for detection of surgical site infection: a systematic review and meta-analysis. npj Digit Med. 2022;5(1):108. [FREE Full text] [CrossRef] [Medline]
- Guerra J, Guichon C, Isnard M, So S, Chan S, Couraud S, et al. Active prospective surveillance study with post-discharge surveillance of surgical site infections in Cambodia. J Infect Public Health. 2015;8(3):298-301. [FREE Full text] [CrossRef] [Medline]
- Ashby E, Haddad FS, O'Donnell E, Wilson APR. How will surgical site infection be measured to ensure "high quality care for all"? J Bone Joint Surg Br. 2010;92(9):1294-1299. [FREE Full text] [CrossRef] [Medline]
- Hopkins B, Eustache J, Ganescu O, Ciopolla J, Kaneva P, Fiore JF, et al. At least ninety days of follow-up are required to adequately detect wound outcomes after open incisional hernia repair. Surg Endosc. 2022;36(11):8463-8471. [CrossRef] [Medline]
- Holihan JL, Flores-Gonzalez JR, Mo J, Ko TC, Kao LS, Liang MK. How long is long enough to identify a surgical site infection? Surg Infect (Larchmt). 2017;18(4):419-423. [CrossRef] [Medline]
- Carpini GD, Giannella L, Di Giuseppe J, Fioretti M, Franconi I, Gatti L, et al. Inter-rater agreement of CDC criteria and ASEPSIS score in assessing surgical site infections after cesarean section: a prospective observational study. Front Surg. 2023;10:1123193. [FREE Full text] [CrossRef] [Medline]
- Allami MK, Jamil W, Fourie B, Ashton V, Gregg PJ. Superficial incisional infection in arthroplasty of the lower limb. Interobserver reliability of the current diagnostic criteria. J Bone Joint Surg Br. 2005;87-B(9):1267-1271. [FREE Full text] [CrossRef] [Medline]
- Collins GS, Dhiman P, Navarro CLA, Ma J, Hooft L, Reitsma JB, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. 2021;11(7):e048008. [FREE Full text] [CrossRef] [Medline]
Abbreviations
CNN: convolutional neural network
DL: deep learning
JBI: Joanna Briggs Institute
ML: machine learning
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews
PROBAST: Prediction Model Risk of Bias Assessment Tool
RoB: risk of bias
SSI: surgical site infection
SVM: support vector machine
TRIPOD: Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis
Edited by G Eysenbach, T de Azevedo Cardoso; submitted 18.09.23; peer-reviewed by D Hu, M Zhou, P Okoro, K McLean; comments to author 21.09.23; revised version received 09.11.23; accepted 12.12.23; published 18.01.24.
Copyright©Juan Pablo Tabja Bortesi, Jonathan Ranisau, Shuang Di, Michael McGillion, Laura Rosella, Alistair Johnson, PJ Devereaux, Jeremy Petch. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 18.01.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.