Published on 26.04.2021 in Vol 23, No 4 (2021): April

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/27468.
Deep Convolutional Neural Network–Based Computer-Aided Detection System for COVID-19 Using Multiple Lung Scans: Design and Implementation Study


Original Paper

1Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran

2Department of Radiology, Baqiyatallah University of Medical Sciences, Tehran, Iran

3Department of Hematology and Blood Banking, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran

4Pediatric Congenital Hematologic Disorders Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran

5Department of Computer Engineering, Faculty of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran

Corresponding Author:

Farkhondeh Asadi, PhD

Department of Health Information Technology and Management

School of Allied Medical Sciences

Shahid Beheshti University of Medical Sciences

Darband St

Ghods Square

Tehran

Iran

Phone: 98 9123187253

Email: Asadifar@sbmu.ac.ir


Background: Owing to the COVID-19 pandemic and the imminent collapse of health care systems following the exhaustion of financial, hospital, and medicinal resources, the World Health Organization raised the alert level for COVID-19 from high to very high. Meanwhile, more cost-effective and precise COVID-19 detection methods are being sought worldwide.

Objective: Machine vision–based COVID-19 detection methods, especially deep learning models, have gained great importance as diagnostic tools since the early stages of the pandemic. This study aimed to design a highly efficient computer-aided detection (CAD) system for COVID-19 by using a neural architecture search network (NASNet)–based algorithm.

Methods: NASNet, a state-of-the-art pretrained convolutional neural network for image feature extraction, was adopted to identify patients with COVID-19 in the early stages of the disease. A local data set comprising 10,153 computed tomography scans from 190 patients with COVID-19 and 59 individuals without COVID-19 was used.

Results: After fitting on the training data set, hyperparameter tuning, and topological alterations of the classifier block, the proposed NASNet-based model was evaluated on the test data set and yielded remarkable results. The proposed model's performance achieved a detection sensitivity, specificity, and accuracy of 0.999, 0.986, and 0.996, respectively.

Conclusions: The proposed model achieved acceptable results in the categorization of 2 data classes. Therefore, a CAD system was designed on the basis of this model for COVID-19 detection using multiple lung computed tomography scans. The system differentiated all COVID-19 cases from non–COVID-19 ones without any error in the application phase. Overall, the proposed deep learning–based CAD system can greatly help radiologists detect COVID-19 in its early stages. During the COVID-19 pandemic, the use of a CAD system as a screening tool would accelerate disease detection and prevent the loss of health care resources.

J Med Internet Res 2021;23(4):e27468

doi:10.2196/27468

Keywords



In 2020, the rapid global spread of COVID-19 led the World Health Organization (WHO) to declare the first pandemic of the 21st century, with the highest level of alert worldwide. According to Worldometer statistics, as of January 5, 2021, more than 86 million people worldwide had contracted the disease, with more than 1,870,000 confirmed deaths due to COVID-19. Early detection of COVID-19 is essential not only for patient care but also for public health, by ensuring patients' isolation and controlling disease spread [1,2]. The first and most important step in controlling this pandemic is the rapid detection of infected patients and the monitoring of positive cases.

Various diagnostic methods for the rapid detection of COVID-19 have been introduced by different studies and by the WHO, with the reverse transcription–polymerase chain reaction (RT–PCR) test being the most prominent. Although RT–PCR is the gold standard for COVID-19 detection, it is time-intensive and costly; infected individuals, as a source of transmission, can transmit the virus to many people while waiting for their RT–PCR results. Moreover, previous studies have reported that the RT–PCR test has a high false-negative rate; this is a major limitation of the test and reduces its sensitivity. It also leads to delayed detection and treatment and, in advanced stages of the disease, an increased mortality rate [3-8]. A high influx of patients at diagnostic centers during the pandemic has led to excessive use of resources and a shortage of RT–PCR test kits. Beyond the need for RT–PCR tests for suspected individuals, repeated testing of patients has imposed a heavy burden on health care resources.

The time-consuming nature of laboratory tests, coupled with the molecular and nonspecific nature of serological tests, has necessitated a cheaper test focusing on findings in the lung tissue. As major lung health monitoring tools, radiological tests have attracted the attention of clinical specialists. For COVID-19 evaluation, computed tomography (CT) is a more sensitive and specific detection method than chest X-ray imaging, and, in many cases, lung involvement and ground-glass opacities (GGO) can be seen on CT even before the onset of clinical symptoms and before a positive RT–PCR result. This implies that, in many cases, the pulmonary complications of COVID-19 can be detected before the first clinical symptoms emerge and before RT–PCR findings turn positive. Based on previous reports and the WHO's recommendations, chest CT has emerged as a valuable tool for the early detection and triaging of individuals suspected of having COVID-19 [4,9,10]. In a study of 1014 patients with COVID-19, CT enabled more sensitive detection than RT–PCR [11].

Despite the success of this radiological modality in detecting COVID-19–related lung damage, certain problems are associated with its use. Despite the WHO's recommendations, chest CT findings are normal in some patients at the outset of the disease, which limits the negative predictive value of CT alone. The low specificity of CT can also lead to misclassification of non–COVID-19 cases. In addition, ionizing radiation from the CT scanner can cause problems for patients who require multiple CT scans during the course of their disease [12-16].

In the past decade, numerous computer-based methods have been employed to improve the efficiency of medical imaging techniques. One such method is the use of machine learning algorithms, which have achieved remarkable success in medical imaging. Among machine learning methods, deep learning models rapidly achieved high precision in machine vision tasks after the emergence of COVID-19. Convolutional neural networks (CNNs) have high potential for feature extraction and analysis. Upon the emergence of COVID-19, and owing to the limitations of diagnostic tests, numerous machine learning techniques have been adopted to improve the precision of diagnostic methods. Table 1 lists some relevant studies.

Table 1. Studies evaluating machine learning algorithms used for COVID-19 detection.
Study (country) | Study objective | Population | Models used | Evaluation results
Ni et al (China) [15] | Automatic detection | 14,531 | Convolutional multiview feature pyramid network with position-aware attention and a 3D U-Net | F1 score=97%; sensitivity=100%
Wang et al (China) [17] | Diagnostic and prognostic analysis | 5372 | DenseNet121 feature pyramid network | Area under the receiver operating characteristic curve=87%-88%; sensitivity=80.3%-79.35%
Hasan et al (Iraq) [18] | Diagnosis (classification) | 321 | Long short-term memory classifier | Accuracy=99.68%
Pathak et al (India) [19] | Classification (detection) | 852 | ResNet-50 | Accuracy=93.01%
Ardakani et al (Iran) [20] | Detection | 194 | 10 pretrained convolutional neural networks: AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, and Xception | Best performance: ResNet-101 and Xception; sensitivity (ResNet-101)=100%; sensitivity (Xception)=98.04%; specificity (ResNet-101)=99.02%; specificity (Xception)=100%; accuracy (ResNet-101)=99.51%; accuracy (Xception)=99.02%
Li et al (China) [21] | Automatic detection | 4356 | ResNet-50 as the backbone of the main model | Sensitivity=90%; specificity=96%
Mei et al (United States) [22] | Rapid diagnosis | 905 | Inception-ResNet-V2 | Correctly identified 17 of 25 (68%) patients with COVID-19
Song et al (China) [23] | Diagnosis | 227 | Bidirectional generative adversarial network | Sensitivity=85%; specificity=88%

Study Overview

Based on the success of CNNs in machine vision tasks, we designed and implemented a model for the classification of CT images of individuals with and those without COVID-19 through a deep neural network based on a Neural Architecture Search Network (NASNet) [24] feature extractor.

Data Set

The data set comprised 10,153 CT scans, of which 7644 belonged to 190 patients with COVID-19 and 2509 belonged to 59 people without COVID-19, including individuals with pneumonia and otherwise healthy individuals who visited the hospital owing to a suspicion of COVID-19 [25]. All images were collected from the radiology centers of teaching hospitals in Tehran, Iran. The disease status of suspected individuals in this set was confirmed with an RT–PCR test. Figure 1 shows CT scans of patients with COVID-19 and of non–COVID-19 cases.

Figure 1. Axial computed tomography scan slices of the lung. (A, B, C) Non–COVID-19 cases including those of pneumonia and healthy individuals; (D, E, F) infected lungs of patients with COVID-19.

Proposed Method

To detect COVID-19 in patients at an early stage of the disease from multiple lung CT scans, a state-of-the-art model based on a NASNet CNN feature extractor was proposed. Based on the proposed model, a computer-aided detection (CAD) system was designed.

Data Preparation and Preprocessing

For data preparation, lung CT scans were first received in the Digital Imaging and Communications in Medicine format as the output of the picture archiving and communications system of a diagnostic center. In the preprocessing stage, the images were converted to the commonly used JPG format, and the order of the color channels was changed from the default BGR to RGB to prepare the images for processing.
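To make this step concrete, the following is a minimal sketch of the DICOM-to-JPG conversion and channel reordering, assuming pydicom and OpenCV are available; the file paths, intensity rescaling, and function names are illustrative rather than the authors' exact implementation.

```python
# Illustrative sketch of DICOM-to-JPG conversion and channel reordering
# (not the authors' exact code; paths and rescaling are assumptions).
import numpy as np
import pydicom
import cv2

def dicom_to_jpg(dcm_path, jpg_path):
    """Convert one DICOM slice (PACS output) to an 8-bit JPG file."""
    ds = pydicom.dcmread(dcm_path)
    img = ds.pixel_array.astype(np.float32)
    # Rescale raw CT intensities to the 0-255 range expected by an 8-bit JPG.
    img = (img - img.min()) / (img.max() - img.min() + 1e-10) * 255.0
    cv2.imwrite(jpg_path, img.astype(np.uint8))

def load_rgb(jpg_path):
    """Read a JPG and switch OpenCV's default BGR channel order to RGB."""
    bgr = cv2.imread(jpg_path, cv2.IMREAD_COLOR)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
```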

Based on the literature, the success of deep learning in medical image visual tasks is not merely attributable to CNN models; rather, a major part of this success results from image preprocessing [26]. Data normalization to maintain the integrity of the images was performed as the first step of preprocessing, which plays a key role in the analysis of CT scans [27]. To this end, first, the pixel-level global mean and SD values were calculated for all the images; thereafter, the data were normalized using the following equation:

X_norm = (X − μ) / (σ + ε)

where μ is the global mean of the image set X, σ is the SD, and ε=1e–10 is a negligible constant that prevents the denominator from becoming 0.

After normalization, to standardize the images and achieve a unified scale for the input of the deep neural network, the pixel values of each image were first mapped to the (0,255) range and then rescaled to the (0,1) interval, so that the images would be standardized during training. Since CNNs require large amounts of data to enhance their efficiency and prevent model overfitting [28,29], the training data were augmented at this stage through random rotation, contrast alteration, illumination alteration, and gamma correction.
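As an illustration of the normalization and rescaling just described, the following sketch applies the global mean/SD normalization and maps the result to the (0,1) interval; the ε value mirrors the equation above, while the augmentation settings are assumed values, not those used in the study.

```python
# Sketch of the preprocessing described above (augmentation settings are
# illustrative assumptions; contrast and gamma changes would need a custom
# preprocessing function).
import numpy as np
import tensorflow as tf

def normalize_and_scale(images):
    """images: float array of shape (N, H, W, C); returns values in (0, 1)."""
    eps = 1e-10
    mu, sigma = images.mean(), images.std()      # pixel-level global mean and SD
    z = (images - mu) / (sigma + eps)            # X_norm = (X - mu) / (sigma + eps)
    z = (z - z.min()) / (z.max() - z.min() + eps) * 255.0  # map to (0, 255)
    return z / 255.0                             # rescale to (0, 1)

# Training-set augmentation: random rotation and illumination alteration.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=15,
    brightness_range=(0.8, 1.2),
)
```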

All the images were shuffled so that the network would not see data from only one class during training and each batch would include images with different labels belonging to both the COVID-19 and non–COVID-19 classes. The dimensions of the input images were changed to 224×224×3; however, the method can be applied to images of any other dimensions. The data set was randomly divided into training, validation, and test sets at a 64:16:20 ratio: 20% of the data were allocated to the test set, the remaining 80% to the training set, and 20% of the training set was then assigned to the validation set.
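A minimal sketch of this shuffled 64:16:20 split, assuming scikit-learn's train_test_split and in-memory arrays (stratification is an added assumption to keep both classes represented in every subset), is shown below.

```python
# Sketch of the 64:16:20 train/validation/test split (variable names and the
# use of stratification are assumptions).
from sklearn.model_selection import train_test_split

# images: (N, 224, 224, 3) array; labels: (N,) array, 1 = COVID-19, 0 = non-COVID-19
x_trainval, x_test, y_trainval, y_test = train_test_split(
    images, labels, test_size=0.20, shuffle=True, stratify=labels, random_state=42)

x_train, x_val, y_train, y_val = train_test_split(
    x_trainval, y_trainval, test_size=0.20, shuffle=True,
    stratify=y_trainval, random_state=42)   # 20% of the 80% -> 64:16:20 overall
```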

Feature Extraction and Classification

Convolutional layers were used in the feature extraction block. Immediately after each Conv2D layer, the useful statistics were aggregated with a max-pooling module, normalized with batch normalization, and passed to the next CNN block. To prevent model overfitting, weight regularization and dropout were applied in addition to batch normalization: the Euclidean norm (L2) was used for regularization with coefficient values in the (0.001-0.01) interval, applied after LeakyReLU activation, and 20%-30% of the weights were dropped out. Inspired by the transfer learning approach, the preliminary blocks of the pretrained NASNetLarge network were used for better feature extraction. NASNet has a scalable architecture for image classification and consists of 2 repeated motifs termed the normal cell and the reduction cell. Figure 2 illustrates the architecture of these convolutional cells. All parameters were initialized with the weights obtained from fitting NASNetLarge on the ImageNet data set. After the feature extraction block, the extracted features were flattened into a 1-dimensional tensor with a global average pooling layer and passed to 3 dense (fully connected) layers.

Figure 2. Architecture of the NASNet’s convolutional cells with B=5 blocks. The input (white) is the hidden state from previous activations (or the input image). The output (pink) is the result of a concatenation operation across all resulting branches. Each convolutional cell is the result of B blocks. A single block corresponds to 2 primitive operations (yellow) and a combination operation (green) [24].

In these layers, batch normalization, regularization, and weight dropout were applied as well. The first dense layer used a ReLU activation function, the next layer used LeakyReLU, and the last layer, which is the classifier layer, used a Softmax multiclass activation function.
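The following Keras sketch illustrates the described architecture: a pretrained, initially frozen NASNetLarge backbone followed by global average pooling and 3 dense layers with batch normalization, L2 regularization, and dropout. Layer widths and coefficient values are assumptions within the ranges stated above; note that, in many Keras versions, NASNetLarge's ImageNet weights are tied to its default 331×331 input size, so the sketch uses that default rather than the 224×224 inputs reported in the text.

```python
# Minimal sketch of the proposed classifier (layer sizes and coefficients are
# illustrative; the backbone input size follows Keras' NASNetLarge default).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

base = tf.keras.applications.NASNetLarge(include_top=False, weights="imagenet")
base.trainable = False                      # frozen in the first training phase

inputs = tf.keras.Input(shape=base.input_shape[1:])
x = base(inputs, training=False)
x = layers.GlobalAveragePooling2D()(x)      # flatten features to a 1-D tensor

# Dense block 1: ReLU activation with L2 regularization, batch norm, dropout.
x = layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(0.001))(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.3)(x)

# Dense block 2: LeakyReLU activation.
x = layers.Dense(128, kernel_regularizer=regularizers.l2(0.01))(x)
x = layers.LeakyReLU()(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.2)(x)

# Classifier layer: 2-class softmax (COVID-19 vs non-COVID-19).
outputs = layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```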

During the training process, in the first phase, the feature extraction block was frozen (its parameters were nontrainable), and Adam was used as the optimizer with an initial learning rate of 1e–3 and the binary cross-entropy loss function. If the validation loss did not improve for 10 epochs, the learning rate was reduced by 20%, down to a minimum of 1e–6. If the validation loss did not improve for 20 epochs, training was stopped. Only the best weights were saved. After training the dense layers, in the second phase the feature extraction block was unfrozen, and the network, now fully trainable, was fitted once more on the same data; in this phase, the stochastic gradient descent optimizer with an initial learning rate of 1e–4 was used. The batch size was 32; the number of epochs was set to 200 in the first phase, and 1000 iterations were used in the second phase.
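A sketch of this two-phase schedule using standard Keras callbacks is shown below; it continues from the model sketch above, and the callback settings mirror the text where stated (20% learning-rate reduction, 10- and 20-epoch patience, minimum learning rate 1e–6), with the remaining choices assumed.

```python
# Sketch of the two-phase training schedule (continues from the model sketch;
# checkpoint filename and monitored metric are assumptions).
import tensorflow as tf
from tensorflow.keras.callbacks import (ReduceLROnPlateau, EarlyStopping,
                                        ModelCheckpoint)

callbacks = [
    # Reduce the learning rate by 20% if validation loss plateaus for 10 epochs.
    ReduceLROnPlateau(monitor="val_loss", factor=0.8, patience=10, min_lr=1e-6),
    # Stop training if validation loss does not improve for 20 epochs.
    EarlyStopping(monitor="val_loss", patience=20, restore_best_weights=True),
    # Keep only the best weights.
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
]

# Labels are one-hot encoded for the 2-class softmax output,
# e.g. y_train = tf.keras.utils.to_categorical(y_train, 2).

# Phase 1: feature extractor frozen, Adam optimizer, initial learning rate 1e-3.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=32, epochs=200, callbacks=callbacks)

# Phase 2: unfreeze the backbone and fine-tune the whole network with SGD.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=32, epochs=1000, callbacks=callbacks)
```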

CAD System Based on the Proposed Model

Many studies have recommended the use of CT scans for COVID-19 detection, and many of them have used machine learning–based computer methods to enhance the results of chest CT. These machine learning methods have attempted to detect COVID-19 from a single CT slice [10,12,30-33]; however, in practice, radiologists confirm or rule out COVID-19 on the basis of all slices of a patient’s CT scan. This study aimed to design a computer-aided detection system that detects COVID-19 from multiple CT images per person. In the CAD system designed on the basis of the proposed model, 4 CT slices are obtained from a person suspected of having COVID-19, and the system estimates the final result by averaging the classification outputs of all the slices. The proposed system can also receive a different number of slices and does not depend on the number of inputs. This increases the reliability of the results obtained from the proposed model. Figure 3 provides a schematic representation of the proposed model.
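A minimal sketch of this multislice decision rule could look as follows; the helper name cad_decision and the 0.5 decision threshold are assumptions for illustration, not part of the published system.

```python
# Sketch of the multislice decision rule: classify each slice and average the
# per-slice COVID-19 probabilities (function name and threshold are assumptions).
import numpy as np

def cad_decision(model, slices, threshold=0.5):
    """slices: preprocessed array of shape (num_slices, H, W, 3) from one patient."""
    probs = model.predict(slices)              # (num_slices, 2) softmax outputs
    covid_score = float(np.mean(probs[:, 1]))  # average COVID-19 probability
    label = "COVID-19" if covid_score >= threshold else "non-COVID-19"
    return label, covid_score
```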

In the experiments, the proposed model successfully detected all the cases of COVID-19 with high accuracy and differentiated all the positive and negative cases without any discernible error. Figures 4 and 5 display the performance of the proposed model by presenting the results of detection on the first 25 samples and 25 random ones from the test set, respectively.

Figure 3. Proposed deep convolutional neural network–based CAD system for COVID-19 detection using multiple lung computed tomography scans. CT: computed tomography.
Figure 4. Results of detection on the first 25 samples from the test set. “I” is the image index, “P” is the predicted value, and “L” is the ground truth label. Green indicates correct detection and red indicates incorrect detection.
Figure 5. Results of detection on 25 random samples from the test set. “I” is the image index, “P” is the predicted value, and “L” is the ground truth label. Green indicates correct detection and red indicates incorrect detection.

In the case of COVID-19, the lack of an early and accurate diagnosis leads to the spread of the disease to other individuals, which has irreversible effects on the control of the pandemic. The proposed CAD system, which detects COVID-19 from multiple CT slices, can perform more accurately because GGO regions are not visible in some individual CT slices, so relying on a single slice risks missing the information contained in the other slices. Moreover, in models that detect COVID-19 from a single slice, there is a risk of error in viewing the infected area of the lung (the region of interest) owing to operator errors, the angle of the slices, or problems with the CT scanner tube. Thus, by using a multislice CAD system, the disease can be detected in its early stages, and the initial signs of lung involvement can be discovered with maximum precision.

Implementation

The proposed method was implemented in the Python programming language using Keras, a high-level API for the TensorFlow machine learning framework, together with the Compute Unified Device Architecture (CUDA) deep neural network library for parallel processing on the graphics processing unit. The computer system had an Intel Core i7 7700K CPU, 32 GB RAM, and an Nvidia T4 GPU accelerator. Implementation code and the pretrained model are available on GitHub [34].


Metrics

To quantitatively evaluate the performance of the proposed method, the sensitivity, specificity, accuracy, and F1 score were determined from the model’s confusion matrix. Sensitivity was defined as the ratio of COVID-19 cases correctly detected by the model to all actual COVID-19 cases. Specificity was defined as the ratio of non–COVID-19 cases correctly detected by the model to all actual non–COVID-19 cases. Accuracy was defined as the proportion of all COVID-19 and non–COVID-19 cases correctly classified from the CT images.

Experimental Results and Evaluation

In this study, we used traditional measures to evaluate the performance of the proposed model, using a confusion matrix. Based on this confusion matrix, the specificity and sensitivity of the proposed CAD model were calculated, where specificity was defined as the ability of the classifier to correctly identify individuals without COVID-19 (true-negative rate) and sensitivity was defined as the classifier’s ability to correctly identify individuals with COVID-19 (true-positive rate). These evaluations were performed using the following equations, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively:

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)

Accuracy = (TP + TN) / (TP + TN + FP + FN)
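For reference, a short scikit-learn sketch of how these metrics (and those reported in Table 3) can be computed from the test-set confusion matrix is given below; the variable names are illustrative.

```python
# Sketch of computing the reported metrics from binary predictions
# (y_true and y_pred are 0/1 arrays; names are illustrative).
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)                   # true-positive rate
specificity = tn / (tn + fp)                   # true-negative rate
accuracy    = (tp + tn) / (tp + tn + fp + fn)
ppv         = tp / (tp + fp)                   # positive predictive value
npv         = tn / (tn + fn)                   # negative predictive value
f1          = tp / (tp + 0.5 * (fp + fn))      # F1 score
```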

Figure 6 shows the confusion matrix for the evaluation in the test set for 2 classes. The evaluation criteria based on the confusion matrix are provided in Table 2.

Figure 6. (A) Confusion matrix and (B) normalized confusion matrix of the model performance for the test data.
Table 2. Performance of the proposed method in the test data set.
Metric | Value (%)
Sensitivity | 99.9
Specificity | 98.6
Accuracy | 99.6

The learning curve of the proposed model for the training and validation sets is illustrated in Figure 7. On assessing the behavior of the proposed model in handling new validation data, we observed that with increased epochs, the model had a lower error rate and thus enhanced accuracy for the unknown data, which suggests that the model has high potential for detecting new cases of COVID-19 from CT scans. The mean square error in detecting all the COVID-19 and non–COVID-19 cases from among the test set images was 0.003938, which was considerably lower than that reported in previous studies.

The metrics of positive predictive value and negative predictive value, as well as the F1 score, for the proposed model are shown in Table 3.

Figure 7. Training and validation loss and accuracy.
Table 3. Evaluation of the proposed model for the test set.
Metric | Definition | Value (%)
Positive predictive value | True positive / (True positive + False positive) | 99.8
Negative predictive value | True negative / (True negative + False negative) | 99.7
F1 score | True positive / [True positive + 0.5 × (False positive + False negative)] | 99.8

To evaluate the performance of the proposed model further in real-life applications and present a comparable evaluation, we tested our model on a publicly available and well-known data set [30] using the cross–data set evaluation approach. The results shown in Table 4 compare the proposed method with other state-of-the-art approaches, including traditional deep neural networks and pretrained networks [30-33,35].

Table 4. Comparison of the performance of different models for detecting COVID-19 using various evaluation metrics.
Model | Accuracy | Precision | Recall | F1 score
SqueezeNet | 95.1 | 94.2 | 96.2 | 95.2
ShuffleNet | 97.5 | 96.1 | 99.0 | 97.5
GoogleNet | 91.7 | 90.2 | 93.5 | 91.8
VGG-16 | 94.9 | 94.0 | 95.4 | 94.9
AlexNet | 93.7 | 94.9 | 92.2 | 93.6
ResNet50 | 94.9 | 93.0 | 97.1 | 95.0
Xception | 98.8 | 99.0 | 98.6 | 98.8
AdaBoost | 95.1 | 93.6 | 96.7 | 95.1
Decision Tree | 79.4 | 76.8 | 83.1 | 79.8
Explainable deep learning [30] | 97.3 | 99.1 | 95.5 | 97.3
DenseNet201 [31] | 96.2 | 96.2 | 96.2 | 96.2
Modified VGG19 [32] | 95.0 | 95.3 | 94.0 | 94.3
COVID CT-Net [33] | 90.7 | 88.5 | 85.0 | 90.0
Contrastive learning [35] | 90.8 | 95.7 | 85.8 | 90.8
Proposed model | 99.4 | 99.6 | 99.8 | 99.5

Principal Findings

RT–PCR is the definitive method for diagnosing COVID-19. However, the nucleic acid test is very time-consuming, and sputum analysis may take several days. The test’s high cost and low sensitivity have caused major problems for health care systems during the pandemic. Consequently, people with false-negative RT–PCR findings have been a source of virus transmission and have spread the virus to others. When the WHO emphasized the need to increase diagnostic testing and comprehensively evaluate suspected individuals, physicians and health care systems were encouraged to use cheaper and faster tests [36-38]. When attempting to detect COVID-19 in its initial stages, a lung CT scan does not always demonstrate areas of lung consolidation, and no GGO findings are observed in many cases. Machine learning models can enhance the efficiency of radiological diagnostic methods and serve as a suitable alternative to the RT–PCR test.

The core of the CAD system designed in this study is a deep CNN architecture that accepts an input of 4 slices. NASNet was used here because it can determine the best architecture for feature engineering [24]. No previous study has employed this model for analyzing the CT scans of individuals suspected of having COVID-19. Further examination of medical image processing revealed the remarkable performance of this model in image feature extraction. This study achieved maximum sensitivity and precision in detecting COVID-19 compared with previous studies. Considering the algorithm and the use of multiple chest CT scan slices per patient, the proposed system can be employed at diagnostic centers as a reliable method for detecting COVID-19 with high precision in the early stages of the disease. In the future, this CAD system can be integrated into the picture archiving and communications systems of radiology wards to achieve automated and more efficient diagnosis.

Conclusions

Using the CAD system for detecting COVID-19 during the pandemic minimizes the time of image interpretation and consequently the number of patients waiting at radiology centers. Furthermore, by increasing the number of images produced by the CT scanner and increasing the population size, better classification results for differentiating positive and negative cases can be expected.

Acknowledgments

This study was part of a research project conducted at Shahid Beheshti University of Medical Sciences (Tehran, Iran) and was approved by Iran National Committee for Ethics in Biomedical Research (approval ID IR.SBMU.RETECH.REC.1399.132).

Conflicts of Interest

None declared.

  1. WHO Coronavirus Disease (COVID-19) Dashboard: Vaccination. World Health Organization.   URL: https://covid19.who.int/ [accessed 2021-04-17]
  2. WHO Coronavirus Disease (COVID-19) Dashboard. World Health Organization.   URL: https://covid19.who.int/
  3. Nishiura H, Jung S, Linton NM, Kinoshita R, Yang Y, Hayashi K, et al. The Extent of Transmission of Novel Coronavirus in Wuhan, China, 2020. J Clin Med 2020 Jan 24;9(2):330 [FREE Full text] [CrossRef] [Medline]
  4. Chung M, Bernheim A, Mei X, Zhang N, Huang M, Zeng X, et al. CT Imaging Features of 2019 Novel Coronavirus (2019-nCoV). Radiology 2020 Apr;295(1):202-207 [FREE Full text] [CrossRef] [Medline]
  5. Ye Z, Zhang Y, Wang Y, Huang Z, Song B. Chest CT manifestations of new coronavirus disease 2019 (COVID-19): a pictorial review. Eur Radiol 2020 Aug;30(8):4381-4389 [FREE Full text] [CrossRef] [Medline]
  6. Li Q, Guan X, Wu P, Wang X, Zhou L, Tong Y, et al. Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus-Infected Pneumonia. N Engl J Med 2020 Mar 26;382(13):1199-1207 [FREE Full text] [CrossRef] [Medline]
  7. Wang L, Lin Z, Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci Rep 2020 Nov 11;10(1):19549 [FREE Full text] [CrossRef] [Medline]
  8. Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 2020 Feb 15;395(10223):497-506. [CrossRef] [Medline]
  9. Choi H, Qi X, Yoon SH, Park SJ, Lee KH, Kim JY, et al. Extension of Coronavirus Disease 2019 on Chest CT and Implications for Chest Radiographic Interpretation. Radiol Cardiothorac Imaging 2020 Apr;2(2):e200107 [FREE Full text] [CrossRef] [Medline]
  10. Wang D, Hu B, Hu C, Zhu F, Liu X, Zhang J, et al. Clinical Characteristics of 138 Hospitalized Patients With 2019 Novel Coronavirus-Infected Pneumonia in Wuhan, China. JAMA 2020 Mar 17;323(11):1061-1069. [CrossRef] [Medline]
  11. Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases. Radiology 2020 Aug;296(2):E32-E40 [FREE Full text] [CrossRef] [Medline]
  12. Gozes O, Frid-Adar M, Greenspan H, Browning PD, Zhang H, Ji W, et al. arXiv. Preprint posted online March 24, 2020.
  13. Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: an overview and application in radiology. Insights Imaging 2018 Aug;9(4):611-629 [FREE Full text] [CrossRef] [Medline]
  14. Wang S, Kang B, Ma J, Zeng X, Xiao M, Guo J, et al. A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19). Eur Radiol 2021 Feb 24:1-9 [FREE Full text] [CrossRef] [Medline]
  15. Ni Q, Sun ZY, Qi L, Chen W, Yang Y, Wang L, et al. A deep learning approach to characterize 2019 coronavirus disease (COVID-19) pneumonia in chest CT images. Eur Radiol 2020 Dec;30(12):6517-6527 [FREE Full text] [CrossRef] [Medline]
  16. Zhang K, Liu X, Shen J, Li Z, Sang Y, Wu X, et al. Clinically Applicable AI System for Accurate Diagnosis, Quantitative Measurements, and Prognosis of COVID-19 Pneumonia Using Computed Tomography. Cell 2020 Jun 11;181(6):1423-1433.e11 [FREE Full text] [CrossRef] [Medline]
  17. Wang S, Zha Y, Li W, Wu Q, Li X, Niu M, et al. A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis. Eur Respir J 2020 Aug;56(2):2000775 [FREE Full text] [CrossRef] [Medline]
  18. Hasan AM, Al-Jawad MM, Jalab HA, Shaiba H, Ibrahim RW, Al-Shamasneh AR. Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features. Entropy (Basel) 2020 May 01;22(5):517 [FREE Full text] [CrossRef] [Medline]
  19. Pathak Y, Shukla P, Tiwari A, Stalin S, Singh S, Shukla P. Deep Transfer Learning Based Classification Model for COVID-19 Disease. Ing Rech Biomed 2020 May 20 [FREE Full text] [CrossRef] [Medline]
  20. Ardakani AA, Kanafi AR, Acharya UR, Khadem N, Mohammadi A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput Biol Med 2020 Jun;121:103795 [FREE Full text] [CrossRef] [Medline]
  21. Li L, Qin L, Xu Z, Yin Y, Wang X, Kong B, et al. Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy. Radiology 2020 Aug;296(2):E65-E71 [FREE Full text] [CrossRef] [Medline]
  22. Mei X, Lee H, Diao K, Huang M, Lin B, Liu C, et al. Artificial intelligence-enabled rapid diagnosis of patients with COVID-19. Nat Med 2020 Aug;26(8):1224-1228 [FREE Full text] [CrossRef] [Medline]
  23. Song J, Wang H, Liu Y, Wu W, Dai G, Wu Z, et al. End-to-end automatic differentiation of the coronavirus disease 2019 (COVID-19) from viral pneumonia based on chest CT. Eur J Nucl Med Mol Imaging 2020 Oct;47(11):2516-2524 [FREE Full text] [CrossRef] [Medline]
  24. Zoph B, Vasudevan V, Shlens J, Le QV. Learning Transferable Architectures for Scalable Image Recognition. 2018 Presented at: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 18-23 June 2018; Salt Lake City, UT. [CrossRef]
  25. Aria M. COVID-19 Lung CT Scans: A large dataset of lung CT scans for COVID-19 (SARS-CoV-2) detection. Kaggle.   URL: https://www.kaggle.com/mehradaria/covid19-lung-ct-scans [accessed 2021-04-20]
  26. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017 Dec;42:60-88. [CrossRef] [Medline]
  27. Patro SK, sahu KK. Normalization: A Preprocessing Stage. Int Adv Res J Sci Eng Technol 2015 Mar 20;2(3):20-22. [CrossRef]
  28. Frühwirth-Schnatter S. Data Augmentation and Dynamic Linear Models. J Time Series Analysis 1994 Mar;15(2):183-202. [CrossRef]
  29. Shorten C, Khoshgoftaar TM. A survey on Image Data Augmentation for Deep Learning. J Big Data 2019 Jul 6;6(1). [CrossRef]
  30. Soares E, Angelov P, Biaso S, Froes M, Abe D. SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification. medRxiv. Preprint posted online May 14, 2020.
  31. Jaiswal A, Gianchandani N, Singh D, Kumar V, Kaur M. Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning. J Biomol Struct Dyn 2020 Jul 03:1-8. [CrossRef] [Medline]
  32. Panwar H, Gupta PK, Siddiqui MK, Morales-Menendez R, Singh V. Application of deep learning for fast detection of COVID-19 in X-Rays using nCOVnet. Chaos Solitons Fractals 2020 Sep;138:109944. [CrossRef] [Medline]
  33. Yazdani S, Minaee S, Kafieh R, Saeedizadeh N, Sonka M. COVID CT-Net: Predicting Covid-19 From Chest CT Images Using Attentional Convolutional Network. arXiv. Preprint posted online September 10, 2020 [FREE Full text] [CrossRef]
  34. Deep CNN-Based CAD System for COVID-19 Detection Using Multiple Lung CT Scans. GitHub.   URL: https://github.com/MehradAria/COVID-19-CAD [accessed 2021-04-16]
  35. Wang Z, Liu Q, Dou Q. Contrastive Cross-Site Learning With Redesigned Net for COVID-19 CT Classification. IEEE J Biomed Health Inform 2020 Oct;24(10):2806-2813. [CrossRef] [Medline]
  36. Milletari F, Navab N, Ahmadi SA. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. 2016 Presented at: 2016 Fourth International Conference on 3D Vision (3DV); 25-28 October 2016; Stanford, CA. [CrossRef]
  37. Li X, Zhou Y, Du P, Lang G, Xu M, Wu W. A deep learning system that generates quantitative CT reports for diagnosing pulmonary Tuberculosis. Appl Intell 2020 Nov 26. [CrossRef]
  38. Ghaderzadeh M, Asadi F. Deep Learning in the Detection and Diagnosis of COVID-19 Using Radiology Modalities: A Systematic Review. J Healthc Eng 2021;2021:6677314 [FREE Full text] [CrossRef] [Medline]


CNN: convolutional neural network
CAD: computer-aided detection
CT: computed tomography
NASNet: neural architecture search network
GGO: ground-glass opacities
RT–PCR: reverse transcription–polymerase chain reaction
WHO: World Health Organization


Edited by C Basch; submitted 26.01.21; peer-reviewed by D Huang, N Ramezanghorbani, A Bordbar, S Almasi; comments to author 23.02.21; revised version received 26.02.21; accepted 03.04.21; published 26.04.21

Copyright

©Mustafa Ghaderzadeh, Farkhondeh Asadi, Ramezan Jafari, Davood Bashash, Hassan Abolghasemi, Mehrad Aria. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 26.04.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.