TY  - JOUR
AU  - Xu, Ming
AU  - Ouyang, Liu
AU  - Han, Lei
AU  - Sun, Kai
AU  - Yu, Tingting
AU  - Li, Qian
AU  - Tian, Hua
AU  - Safarnejad, Lida
AU  - Zhang, Hengdong
AU  - Gao, Yue
AU  - Bao, Forrest Sheng
AU  - Chen, Yuanfang
AU  - Robinson, Patrick
AU  - Ge, Yaorong
AU  - Zhu, Baoli
AU  - Liu, Jie
AU  - Chen, Shi
PY  - 2021
DA  - 2021/1/6
TI  - Accurately Differentiating Between Patients With COVID-19, Patients With Other Viral Infections, and Healthy Individuals: Multimodal Late Fusion Learning Approach
JO  - J Med Internet Res
SP  - e25535
VL  - 23
IS  - 1
KW  - COVID-19
KW  - machine learning
KW  - deep learning
KW  - multimodal
KW  - feature fusion
KW  - biomedical imaging
KW  - diagnosis support
KW  - diagnosis
KW  - imaging
KW  - differentiation
KW  - testing
KW  - diagnostic
AB  - Background: Effectively identifying patients with COVID-19 using non-polymerase chain reaction biomedical data is critical for achieving optimal clinical outcomes. Currently, there is a lack of comprehensive understanding of the various biomedical features and appropriate analytical approaches for enabling the early detection and effective diagnosis of patients with COVID-19. Objective: We aimed to combine low-dimensional clinical and lab testing data, as well as high-dimensional computed tomography (CT) imaging data, to accurately differentiate between healthy individuals, patients with COVID-19, and patients with non-COVID viral pneumonia, especially at the early stage of infection. Methods: In this study, we recruited 214 patients with nonsevere COVID-19, 148 patients with severe COVID-19, 198 noninfected healthy participants, and 129 patients with non-COVID viral pneumonia. The participants’ clinical information (ie, 23 features), lab testing results (ie, 10 features), and CT scans upon admission were acquired and used as 3 input feature modalities. To enable the late fusion of multimodal features, we constructed a deep learning model to extract a 10-feature high-level representation of the CT scans. We then developed 3 machine learning models (ie, k-nearest neighbor, random forest, and support vector machine models) based on the combined 43 features from all 3 modalities to differentiate between the following 4 classes: nonsevere, severe, healthy, and viral pneumonia. Results: Multimodal features provided a substantial performance gain over the use of any single feature modality. All 3 machine learning models had high overall prediction accuracy (95.4%-97.7%) and high class-specific prediction accuracy (90.6%-99.9%). Conclusions: Compared to existing binary classification benchmarks, which often focus on a single feature modality, this study’s hybrid deep learning and machine learning framework provided a novel and effective breakthrough for clinical applications. Our findings, which come from a relatively large sample size, and our analytical workflow will supplement and assist with clinical decision support for current COVID-19 diagnostic methods and other clinical applications with high-dimensional multimodal biomedical features.
SN  - 1438-8871
UR  - http://www.jmir.org/2021/1/e25535/
UR  - https://doi.org/10.2196/25535
UR  - http://www.ncbi.nlm.nih.gov/pubmed/33404516
DO  - 10.2196/25535
ID  - info:doi/10.2196/25535
ER  - 
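
The abstract above describes a late fusion design: a deep learning model reduces each CT scan to a 10-feature representation, which is concatenated with 23 clinical and 10 lab features (43 in total) before 3 conventional classifiers (k-nearest neighbor, random forest, SVM) perform the 4-class prediction. The following is a minimal, hypothetical sketch of that fusion-and-classification step only; it is not the authors' code, it uses synthetic placeholder data, and the CT feature extractor itself is not shown.

```python
# Illustrative sketch of late fusion followed by three classifiers, assuming
# synthetic stand-in data; feature dimensions and class count are taken from
# the abstract (23 clinical + 10 lab + 10 CT-derived features, 4 classes).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 689  # 214 nonsevere + 148 severe + 198 healthy + 129 viral pneumonia

# Hypothetical per-modality feature matrices.
clinical = rng.normal(size=(n, 23))   # clinical features
lab = rng.normal(size=(n, 10))        # lab testing features
ct_deep = rng.normal(size=(n, 10))    # CT representation from a deep model (not shown)

# Late fusion: concatenate modality-level representations into 43 features.
X = np.hstack([clinical, lab, ct_deep])
y = rng.integers(0, 4, size=n)  # 0=nonsevere, 1=severe, 2=healthy, 3=viral pneumonia

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

models = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

With real features in place of the random placeholders, the per-model accuracies printed here would correspond to the overall prediction accuracies the abstract reports for the three classifiers.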