Published in Vol 24, No 4 (2022): April

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/28114.
Understanding the Research Landscape of Deep Learning in Biomedical Science: Scientometric Analysis


Authors of this article:

Seojin Nam1; Donghun Kim1; Woojin Jung1; Yongjun Zhu2

Original Paper

1Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea

2Department of Library and Information Science, Yonsei University, Seoul, Republic of Korea

Corresponding Author:

Yongjun Zhu, PhD

Department of Library and Information Science

Yonsei University

50 Yonsei-ro

Seodaemun-gu

Seoul, 03722

Republic of Korea

Phone: 82 2 2123 2409

Email: zhu@yonsei.ac.kr


Background: Advances in biomedical research using deep learning techniques have generated a large volume of related literature. However, there is a lack of scientometric studies that provide a bird’s-eye view of them. This absence has led to a partial and fragmented understanding of the field and its progress.

Objective: This study aimed to gain a quantitative and qualitative understanding of the scientific domain by analyzing diverse bibliographic entities that represent the research landscape from multiple perspectives and levels of granularity.

Methods: We searched and retrieved 978 deep learning studies in biomedicine from the PubMed database. A scientometric analysis was performed by analyzing the metadata, content of influential works, and cited references.

Results: In the process, we identified the current leading fields, major research topics and techniques, knowledge diffusion, and research collaboration. There was a predominant focus on applying deep learning, especially convolutional neural networks, to radiology and medical imaging, whereas a few studies focused on protein or genome analysis. Radiology and medical imaging also appeared to be the most significant knowledge sources and an important field in knowledge diffusion, followed by computer science and electrical engineering. A coauthorship analysis revealed various collaborations among engineering-oriented and biomedicine-oriented clusters of disciplines.

Conclusions: This study investigated the landscape of deep learning research in biomedicine and confirmed its interdisciplinary nature. Although it has been successful, we believe that there is a need for diverse applications in certain areas to further boost the contributions of deep learning in addressing biomedical research problems. We expect the results of this study to help researchers and communities better align their present and future work.

J Med Internet Res 2022;24(4):e28114

doi:10.2196/28114

Introduction

Deep learning is a class of machine learning techniques based on neural networks with multiple processing layers that learn representations of data [1,2]. Stemming from shallow neural networks, many deep learning architectures, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been developed for various purposes [3]. The exponentially growing amount of data in many fields and recent advances in graphics processing units have further expedited research progress in the field. Deep learning has been actively applied to tasks, such as natural language processing (NLP), speech recognition, and computer vision, in various domains [1] and has shown promising results in diverse areas of biomedicine, including radiology [4], neurology [2], cardiology [5], cancer detection and diagnosis [6,7], radiotherapy [8], and genomics and structural biology [9-11]. Medical image analysis is a field that has actively used deep learning. For example, successful applications have been made in diagnosis [12], lesion classification or detection [13,14], organ and other substructure localization or segmentation [15,16], and image registration [17,18]. In addition, deep learning has also made an impact on predicting protein structures [19,20] and genomic sequencing [21-23] for biomarker development and drug design.

Despite the increasing number of published biomedical studies on deep learning techniques and applications, there has been a lack of scientometric studies that both qualitatively and quantitatively explore, analyze, and summarize the relevant studies to provide a bird’s-eye view of them. Previous studies have mostly provided qualitative reviews [2,9,10], and the few available bibliometric analyses were limited in their scope in that the researchers focused on a subarea such as public health [24] or a particular journal [25]. The absence of a coherent lens through which we can examine the field from multiple perspectives and levels of granularity leads to a partial and fragmented understanding of the field and its progress. To fill this gap, the aim of this study is to perform a scientometric analysis of metadata, content, and citations to investigate current leading fields, research topics, and techniques, as well as research collaboration and knowledge diffusion in deep learning research in biomedicine. Specifically, we intend to examine (1) biomedical journals that had frequently published deep learning studies and their coverage of research areas, (2) diseases and other biomedical entities that have been frequently studied with deep learning and their relationships, (3) major deep learning architectures in biomedicine and their specific applications, (4) research collaborations among disciplines and organizations, and (5) knowledge diffusion among different areas of study.

Methods

Data

Data were collected from PubMed, a citation and abstract database that includes biomedical literature from MEDLINE and other life science journals indexed with Medical Subject Heading (MeSH) terms [26]. MeSH is a hierarchically structured biomedical terminology with descriptors organized into 16 categories, with subcategories [27]. In this study, deep learning [MeSH Major Topic] was used as the query to search and download deep learning studies from PubMed. Limiting a MeSH term as a major topic increases the precision of retrieval so that only studies that are highly relevant to the topic are found [28]. As of January 1, 2020, a total of 978 PubMed records with publication years ranging from 2016 to 2020 were retrieved using the National Center for Biotechnology Information (NCBI) Entrez application programming interface. Entrez is a data retrieval system that can be programmatically accessed through its Biopython module to search and export records from NCBI databases, including PubMed [26,29]. The metadata of the collected bibliographic records included the PubMed identifier (PubMed ID), publication year, journal title and its electronic ISSN, MeSH descriptor terms, and author affiliations. We also downloaded the citation counts and references of each bibliographic record and considered data sources other than PubMed as well: we collected citation counts of the downloaded bibliographic records from Google Scholar (last updated on February 8, 2020) and the subject categories of their publishing journals from the Web of Science (WoS) Core Collection database using the electronic ISSN.
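As a rough illustration of this retrieval step, the sketch below uses Biopython's Entrez module to run the same MeSH major topic query; the contact email and retmax value are placeholder assumptions, not details from the study.

```python
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI asks for a contact address

# Search PubMed for records indexed with deep learning as a MeSH major topic
handle = Entrez.esearch(db="pubmed",
                        term="deep learning[MeSH Major Topic]",
                        retmax=1000)  # assumed batch size, large enough for 978 records
search_result = Entrez.read(handle)
handle.close()
pmids = search_result["IdList"]

# Fetch the matching records in MEDLINE format to extract metadata fields
handle = Entrez.efetch(db="pubmed", id=",".join(pmids),
                       rettype="medline", retmode="text")
medline_text = handle.read()
handle.close()
print(f"Retrieved {len(pmids)} PubMed records")
```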

Detailed Methods

Metadata Analysis
Journals

Journals are an important unit of analysis in scientometrics and have been used to understand specific research areas and disciplines [30]. In this study, biomedical journals that published deep learning studies were grouped using the WoS Core Collection subject categories and analyzed to identify widely studied research areas and disciplines.

MeSH Terms

Disease-related MeSH terms were analyzed to identify major diseases that have been studied using deep learning. We mapped descriptors to their corresponding numbers in MeSH Tree Structures to identify higher level concepts for descriptors that were too specific and ensured that all the descriptors had the same level of specificity. Ultimately, all descriptors were mapped to 6-digit tree numbers (C00.000), and terms with >1 tree number were separately counted for all the categories they belonged to. In addition, we visualized the co-occurrence network of major MeSH descriptors using VOSviewer (version 1.6.15) [31,32] and its clustering technique [33] to understand the relationships among the biomedical entities, as well as the clusters they form together.
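To make the mapping step concrete, here is a minimal sketch that truncates disease-branch tree numbers to the C00.000 level and counts a descriptor once in every branch it belongs to; the descriptor-to-tree-number mapping shown is a toy example, not the study's data.

```python
from collections import Counter

# Toy mapping of MeSH descriptors to their C-category tree numbers (illustrative only)
descriptor_trees = {
    "Digestive System Neoplasms": ["C04.588.274", "C06.301"],
    "Glaucoma": ["C11.525.381"],
    "Lung Neoplasms": ["C04.588.894", "C08.785"],
}

branch_counts = Counter()
for descriptor, tree_numbers in descriptor_trees.items():
    # Truncate each tree number to the 6-digit level, e.g., C04.588.274 -> C04.588
    branches = {tn[:7] for tn in tree_numbers}
    # A term with >1 tree number is counted separately under every branch
    for branch in branches:
        branch_counts[branch] += 1

print(branch_counts.most_common())
```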

Author Affiliations

We analyzed author affiliations to understand the major organizations and academic disciplines that were active in deep learning research. The affiliations of 4908 authors extracted from PubMed records were recorded in various formats and manually standardized. We manually reviewed the affiliations to extract organizations, universities, schools, colleges, and departments. For authors with multiple affiliations, we selected the first one listed, which is usually the primary affiliation. We also analyzed coauthorships to investigate research collaboration among organizations and disciplines. To understand research collaboration among different sectors, all the organizations were grouped into one of the following categories: universities, hospitals, companies, or research institutes and government agencies. We classified medical schools under hospitals as they are normally affiliated with each other. In the category of research institutes or government agencies, we included nonprofit private organizations or foundations and research centers that do not belong to a university, hospital, or company. We extracted academic disciplines from the department section or, when department information was unavailable, from the school or college section. As the extracted disciplines were inconsistent, with multiple levels and combinations, the data were first cleaned with OpenRefine (originally developed by Metaweb, then Google), an interactive data transformation tool for profiling and cleaning messy data [34], and then manually grouped based on WoS categories and MeSH Tree Structures according to the following rules: we treated interdisciplinary fields and fields with high occurrence as disciplines separate from their broader fields, and we aggregated multiple fields that frequently co-occurred under a single department name into a single discipline after reviewing their disciplinary similarities.
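The organization grouping described above was done manually; a rule-based sketch of the same idea might look as follows, with the keyword lists being illustrative assumptions rather than the study's actual criteria.

```python
def classify_organization(affiliation: str) -> str:
    """Assign an affiliation string to an organization type (illustrative rules)."""
    text = affiliation.lower()
    # Medical schools are classified under hospitals, following the study's rule
    if any(k in text for k in ("hospital", "medical school", "medical center")):
        return "hospital"
    if "university" in text or "college" in text:
        return "university"
    if any(k in text for k in (" inc", " ltd", " llc", "corporation", "gmbh")):
        return "company"
    return "research institute / government agency"

# For authors with multiple affiliations, keep the first (usually primary) one
affiliations = "Department of Radiology, Example Hospital; School of Medicine, Example University"
primary = affiliations.split(";")[0]
print(classify_organization(primary))  # -> hospital
```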

Content Analysis

We identified influential studies by examining their citation counts in PubMed and Google Scholar. Citation counts from Google Scholar were considered in addition to PubMed as Google Scholar's substantial citation data encompasses WoS and Scopus citations [35]. After sorting the articles in descending order of citations, the 2 sources showed a Spearman rank correlation coefficient of 0.883. From the PubMed top 150 list (ie, citation count >7) and the Google Scholar top 150 list (ie, citation count >36), we selected the top 109 articles. Among these, we took the studies that applied or developed deep learning models as the subjects of analysis to understand the major deep learning architectures in biomedicine and their applications. Specifically, we analyzed the research topics of the studies, the data and architectures used for those purposes, and how the black box problem was addressed.
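The rank agreement between the two citation sources can be checked with SciPy's Spearman correlation; the sketch below uses made-up counts in place of the study's data.

```python
from scipy.stats import spearmanr

# Citation counts for the same articles from the two sources (toy values)
pubmed_counts = [12, 9, 7, 30, 2, 15, 8]
google_scholar_counts = [60, 41, 36, 150, 8, 70, 44]

rho, p_value = spearmanr(pubmed_counts, google_scholar_counts)
print(f"Spearman rank correlation: {rho:.3f} (p = {p_value:.3g})")
```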

Cited Reference Analysis

We collected the references from the downloaded articles that had PubMed IDs. Citations represent the diffusion of knowledge from cited to citing publications; therefore, analyzing the highly cited references in deep learning studies in biomedicine allows for the investigation of disciplines and studies that have greatly influenced the field. Toward this end, we visualized networks of knowledge diffusion among WoS subjects using Gephi (v0.9.2) [36] and examined metrics such as PageRank score and weighted outdegree, using modularity for community detection [37]. PageRank indicates the importance of a node by measuring the quantity and quality of its incoming edges [38], and weighted outdegree measures the number of outgoing edges of a node. We also reviewed the contents of the 10 most highly cited influential works.
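The study computed these network metrics in Gephi; an equivalent sketch with NetworkX (toy edges, and a greedy modularity algorithm standing in for Gephi's Louvain-based community detection) is shown below.

```python
import networkx as nx
from networkx.algorithms import community

# Toy directed citation network: an edge A -> B means category A cites category B
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Radiology, Nuclear Medicine & Medical Imaging", "Engineering, Biomedical", 120),
    ("Radiology, Nuclear Medicine & Medical Imaging", "Computer Science", 80),
    ("Mathematical & Computational Biology", "Biochemical Research Methods", 95),
    ("Biochemical Research Methods", "Biochemistry & Molecular Biology", 60),
])

# PageRank: node importance from the quantity and quality of incoming edges
pagerank = nx.pagerank(G, alpha=0.85, tol=0.001, weight="weight")

# Weighted outdegree: how heavily a category imports knowledge from others
weighted_outdegree = dict(G.out_degree(weight="weight"))

# Community detection by modularity, on the undirected projection of the network
communities = community.greedy_modularity_communities(G.to_undirected(), weight="weight")

print(pagerank, weighted_outdegree, list(communities), sep="\n")
```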

Results

Metadata Analysis

Journals

In our data set, 315 biomedical journals published deep learning studies; Table 1 lists the top 10 journals by publication count. Different WoS categories and MeSH terms are separated by semicolons.

From a total of 978 records, 96 (9.8%) were unindexed in the WoS Core Collection and were excluded; for the remaining records, an average of 2.02 (SD 1.19) categories were assigned per record. The top 10 subject categories pertained to three groups: (1) biomedicine, led by Radiology, Nuclear Medicine, and Medical Imaging (196/882, 22.2%), along with Engineering, Biomedical (121/882, 13.7%); Mathematical and Computational Biology (107/882, 12.1%); Biochemical Research Methods (103/882, 11.7%); Biotechnology and Applied Microbiology (76/882, 8.6%); and Neurosciences (74/882, 8.4%); (2) computer science and engineering, with Computer Science, Interdisciplinary Applications (112/882, 12.7%); Computer Science, Artificial Intelligence (75/882, 8.5%); and Engineering, Electrical and Electronic (75/882, 8.5%); and (3) Multidisciplinary Sciences (82/882, 9.3%).

Table 1. Top 10 journals with the highest record counts.

Journal title | Web of Science category | National Library of Medicine catalog Medical Subject Heading term | Publisher | Record count, n
BMC^a Bioinformatics | Biochemical Research Methods; Mathematical and Computational Biology; Biotechnology and Applied Microbiology | Computational Biology | BMC | 38
Scientific Reports | Multidisciplinary Sciences | Natural Science Disciplines | Nature Research | 37
Neural Networks | Neurosciences; Computer Science, Artificial Intelligence | Nerve Net; Nervous System | Elsevier | 35
Proceedings of the Annual International Conference of the IEEE^b Engineering in Medicine and Biology Society | N/A^c | Biomedical Engineering | IEEE | 31
IEEE Transactions on Medical Imaging | Imaging Science and Photographic Technology; Engineering, Electrical and Electronic; Computer Science, Interdisciplinary Applications; Radiology, Nuclear Medicine, and Medical Imaging; Engineering, Biomedical | Electronics, Medical; Radiography | IEEE | 30
Sensors | Chemistry, Analytical; Electrochemistry; Instruments and Instrumentation; Engineering, Electrical and Electronic | Biosensing Techniques | Multidisciplinary Digital Publishing Institute | 26
Bioinformatics | Biochemical Research Methods; Mathematical and Computational Biology; Biotechnology and Applied Microbiology | Computational Biology; Genome | Oxford University Press | 22
Nature Methods | Biochemical Research Methods | Biomedical Research/methods; Research Design | Nature Research | 21
Medical Physics | Radiology, Nuclear Medicine, and Medical Imaging | Biophysics | American Association of Physicists in Medicine | 20
PloS one | Multidisciplinary Sciences | Medicine; Science | Public Library of Science | 20

^a BMC: BioMed Central.
^b IEEE: Institute of Electrical and Electronics Engineers.
^c N/A: not applicable.

MeSH Terms

For the main MeSH terms (descriptors), an average of 9 (SD 4.21) terms were assigned to each record as subjects. Among them, we present in Figure 1 the diseases that were extracted from the C category. In the figure, the area size is proportional to the record count, and the terms are categorized by color. In addition, terms under >1 category were counted multiple times. For instance, the term Digestive System Neoplasms has two parents in MeSH Tree Structures, Neoplasms and Digestive System Diseases; as such, we counted articles in this category under Neoplasms by Site as well as under Digestive System Neoplasms. Owing to the limited space, 7 categories whose total record counts were ≤10 (eg, Congenital, Hereditary, and Neonatal Diseases and Abnormalities; Nutritional and Metabolic Diseases; and Stomatognathic Diseases) were combined under the Others category, and individual diseases with <10 records each were summed within their category to show only the total count (or with one of the diseases included as an example). In the process, we identified Neoplasms as the most frequently studied disease type, with a total of 199 studies.

We further constructed a co-occurrence network of the complete set of major MeSH descriptors assigned to the records to understand the relationships among the biomedical entities. To enhance legibility, we filtered out terms with <5 occurrences. Figure 2 presents the visualized network of 100 nodes (100/966, 10.4% of the total terms), 612 edges, and 7 clusters. In the figure, the sizes of the nodes and edges are proportional to the number of occurrences, and the node color indicates the assigned cluster (the term deep learning was considered nonexclusive to any cluster as it appeared in all records).
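The co-occurrence counting behind such a network can be sketched as follows; the records shown are toy data, and VOSviewer handled the actual clustering and layout.

```python
from collections import Counter
from itertools import combinations

# Each record is the set of major MeSH descriptors assigned to one article (toy data)
records = [
    {"Deep Learning", "Neoplasms", "Radiology"},
    {"Deep Learning", "Neoplasms", "Mammography"},
    {"Deep Learning", "Retina", "Glaucoma"},
]

term_counts = Counter(term for record in records for term in record)
edge_weights = Counter()
for record in records:
    for a, b in combinations(sorted(record), 2):
        edge_weights[(a, b)] += 1  # each co-assignment strengthens the edge

# Keep only terms with at least 5 occurrences, mirroring the study's filter
frequent = {term for term, count in term_counts.items() if count >= 5}
edges = {pair: w for pair, w in edge_weights.items() if set(pair) <= frequent}
print(edges)
```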

Figure 1. Disease-related Medical Subject Heading descriptors studied with deep learning.

Figure 2. Co-occurrence network of the major Medical Subject Heading descriptors (number of nodes=100; number of edges=612; number of clusters=7).

As depicted in Figure 2, each cluster comprised descriptors from two groups: (1) biomedical domains that deep learning was applied to, including body regions, related diseases, diagnostic imaging methods, and theoretical models, and (2) the purposes of deep learning and techniques used for the tasks, including diagnosis, analysis, and processing of biomedical data. In the first cluster, computer neural networks and software were studied for the purposes of computational biology, specifically protein sequence analysis, drug discovery, and drug design, to achieve precision medicine. These were relevant to the biomedical domains of (1) proteins, related visualization methods (microscopy), and biological models, and (2) neoplasms, related drugs (antineoplastic agents), and diagnostic imaging (radiology). In the second cluster, deep learning and statistical models were used for RNA sequence analysis and computer-assisted radiotherapy planning in relation to the domains of (1) genomics, RNA, and mutation, and (2) brain neoplasms and liver neoplasms. The third cluster comprised (1) heart structures (heart ventricles), cardiovascular diseases, and ultrasonography and (2) eye structures (retina), diseases (glaucoma), and ophthalmological diagnostic techniques. These had been studied for computer-assisted image interpretation using machine learning and deep learning algorithms. The biomedical domain group of the fourth cluster involved specific terms related to neoplasms such as type (adenocarcinoma), different regions (breast neoplasms, lung neoplasms, and colorectal neoplasms), and respective imaging methods (mammography and X-ray computed tomography) to which deep learning and support vector machines have been applied for the purpose of computer-assisted radiographic image interpretation and computer-assisted diagnosis. The fifth cluster included (1) brain disorders (Alzheimer disease), neuroimaging, and neurological models; (2) prostatic neoplasms; and (3) diagnostic magnetic resonance imaging and 3D imaging. Supervised machine learning had been used for computer-assisted image processing of these data. In the sixth cluster, automated pattern recognition and computer-assisted signal processing were studied with (1) human activities (eg, movement and face), (2) abnormal brain activities (epilepsy and seizures) and monitoring methods (electroencephalography), and (3) heart diseases and electrocardiography. In the last cluster, medical informatics, specifically data mining and NLP, including speech perception, had been applied to (1) electronic health records, related information storage and retrieval, and theoretical models and (2) skin diseases (skin neoplasms and melanoma) and diagnostic dermoscopy.

Author Affiliations

To investigate research collaboration within the field, we analyzed paper-based coauthorships using author affiliations with different levels of granularity, including organization and academic disciplines. We extracted organizations from 98.7% (4844/4908) of the total affiliations and visualized the collaboration of different organization types. The top 10 organizations with the largest publication records included Harvard University (37/844, 4.4%), Chinese Academy of Sciences (21/844, 2.5%; eg, Institute of Computing Technology, Institute of Automation, and Shenzhen Institutes of Advanced Technology), Seoul National University (21/844, 2.5%), Stanford University (20/844, 2.4%), Sun Yat-sen University (14/844, 1.7%; eg, Zhongshan Ophthalmic Center and Collaborative Innovation Center of Cancer Medicine), University of California San Diego (14/844, 1.7%; eg, Institute for Genomic Medicine, Shiley Eye Institute, and Institute for Brain and Mind), University of California San Francisco (14/844, 1.7%), University of Michigan (14/844, 1.7%), Yonsei University (14/844, 1.7%), and the University of Texas Health Science Center at Houston (12/844, 1.4%). The extracted organizations were assigned to one of the following four categories according to their main purpose: universities, hospitals, companies, or research institutes and government agencies. Among these, universities participated in most papers (567/844, 67.2%), followed by hospitals (429/844, 50.8%), companies (139/844, 16.5%), and research institutes or government agencies (88/844, 10.4%). We used a co-occurrence matrix to visualize the degrees of organizational collaboration, with the co-occurrence values log normalized to compare the relative differences (Figure 3).
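As a small sketch of the log normalization used for Figure 3 (with an invented count matrix), NumPy's log1p maps raw co-occurrence counts to a scale where relative differences are easier to compare:

```python
import numpy as np

# Toy co-occurrence counts of papers between organization types
labels = ["university", "hospital", "company", "institute/government"]
counts = np.array([
    [0, 310, 95, 60],
    [310, 0, 70, 25],
    [95, 70, 0, 12],
    [60, 25, 12, 0],
])

# log1p(x) = log(1 + x) keeps zero cells at zero while compressing large counts
log_normalized = np.log1p(counts)
print(np.round(log_normalized, 2))
```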

From Figure 3, we found that universities were the most active in collaborative research, particularly with hospitals, followed by companies and research institutes or government agencies. Hospitals also frequently collaborated with companies; however, research institutes or government agencies tended not to collaborate much as they published relatively fewer studies.

We also examined the collaborations among academic disciplines, which we could extract, as described in the Methods section, from 76.24% (3742/4908) of the total affiliations. Approximately half (ie, 386/756, 51.1%) of the papers were completed under disciplinary collaboration. Figure 4 depicts the network with 36 nodes (36/148, 24.3% of the total) and 267 edges after we filtered out disciplines with weighted degrees <10, representing the number of times one collaborated with the other disciplines. In the figure, the node and edge sizes are proportional to the weighted degree and link strength, respectively, and the node color indicates the assigned cluster.

As shown in the figure, the academic disciplines were assigned to 1 of 6 clusters, including 1 engineering-oriented cluster (cluster 1) and other clusters that encompassed biomedical fields. We specifically looked at the degree of collaboration between the biomedical and engineering disciplines. Figure 4 depicts that the most prominent collaboration was among Radiology, Medical Imaging, and Nuclear Medicine; Computer Science; and Electronics and Electrical Engineering. There were also strong links among Computer Science or Electronics and Electrical Engineering and Biomedical Informatics, Biomedical Engineering, and Pathology and Laboratory Medicine.

Among the top 10 disciplines in Figure 4, the following three had published the most papers and had the highest weighted degree and degree centralities: Computer Science (number of papers=195, weighted degree=193, and degree centrality=32); Radiology, Medical Imaging, and Nuclear Medicine (number of papers=168, weighted degree=166, and degree centrality=30); and Electronics and Electrical Engineering (number of papers=161, weighted degree=160, and degree centrality=32). Meanwhile, some disciplines had high weighted degrees compared with their publication counts, indicating their activeness in collaborative research. These included Pathology and Laboratory Medicine (5th in link strength vs 8th in publications) and Public Health and Preventive Medicine (9th in link strength vs 15th in publications). A counterexample was Computational Biology, which was 12th in link strength but 7th in publications.

Figure 3. Collaboration of organization types.

Figure 4. Collaboration network of academic disciplines (number of nodes=36; number of edges=267; number of clusters=6).

Content Analysis

Overview

We analyzed the content of influential studies that had made significant contributions to the field through the application or development of deep learning architectures. We identified these studies by examining the citation counts from PubMed and Google Scholar, assigning the 109 most-cited records to one of the following categories: (1) review, (2) application of existing deep learning architectures to certain biomedical domains (denoted by A), or (3) development of a novel deep learning model (denoted by D). Table 2 summarizes the 92 papers assigned to the application or development category according to their research topic in descending order of citation count.

Table 2. Top 92 studies with the highest citation count under the application or development category, according to the research topic.
Research topic and number | Task type | Data | Deep learning architectures

(Diagnostic) image analysis
A1 [39] | Classification | Retinal disease OCT^a and chest x-ray with pneumonia | Inception
A2 [40] | Segmentation and classification | Retinal disease OCT | U-net and CNN^b
A3 [41] | Classification | Melanoma dermoscopic images | Inception
A4 [42] | Survival prediction | Brain glioblastoma MRI^c | CNN_S
A6 [43] | Classification and segmentation | WSI^d of 13 cancer types | CNN with CAE^e and DeconvNet
D1 [44] | Segmentation | Brain MRI | ResNet^f based
A7 [45] | Prediction | Retinal fundus images with cardiovascular disease | Inception
D2 [46] | Tracking | Video of freely behaving animal | ResNet-based DeeperCut subset
A8 [47] | Classification | Colonoscopy video of colorectal polyps | Inception
A9 [48] | Classification | Lung cancer CT^g | CNN
A10 [49] | Classification and segmentation | Retinal OCT with macular disease | Encoder-decoder CNN
D3 [50] | Segmentation | Brain glioma MRI | CNN based
D4 [51] | Binding affinities prediction | Protein-ligand complexes as voxel | SqueezeNet based
A11 [52] | Survival classification | Brain glioma MRI, functional MRI, and DTI^h | CNN and mCNN^i
A12 [53] | Classification | Fundus images with glaucomatous optic neuropathy | Inception
A13 [54] | Classification | Chest radiographs with pneumonia | ResNet and CheXNet
A14 [55] | Classification and segmentation | Critical head abnormality CT | ResNet, U-net, and DeepLab
A15 [56] | Classification | Brain glioma MRI | ResNet
D6 [57] | Classification | Thoracic disease radiographs | DenseNet based
A16 [58] | Classification and segmentation | Echocardiogram video with cardiac disease | VGGNet and U-net
A17 [59] | Classification | Brain positron emission tomography with Alzheimer | Inception
D7 [60] | Classification | Breast cancer histopathological images | CNN based
A18 [61] | Classification | Skin tumor images | ResNet
A19 [62] | Classification and prediction | Chest CT with chronic obstructive pulmonary disease and acute respiratory disease | CNN
A20 [63] | Segmentation | Brain MRI with autism spectrum disorder | FCNN^j
D8 [16] | Segmentation | Fetal MRI and brain tumor MRI | Proposal network (P-Net) based
A21 [64] | Classification, prediction, and reconstruction | Natural movies and functional MRI of watching movies | AlexNet and De-CNN
D9 [65] | Detection and classification | Facial images with a genetic syndrome | CNN based
A22 [66] | Detection and segmentation | Microscopic images of cells | U-net
A23 [67] | Classification and localization | Breast cancer mammograms | Faster region-based CNN with VGGNet
A24 [68] | Segmentation and prediction | Lung cancer CT | Mask-RCNN, CNN with GoogLeNet and RetinaNet
A26 [69] | Classification | Lung cancer CT | CNN; fully connected NN; SAE^k
A27 [70] | Survival classification | Lung cancer CT | CNN
A29 [71] | Prediction | Polar maps of myocardial perfusion imaging with CAD^l | CNN
A30 [72] | Classification | Prostate cancer MRI | CNN
D12 [73] | Classification | Liver SWE^m with chronic hepatitis B | CNN based
D14 [74] | Segmentation | Liver cancer CT | DenseNet with U-net based
A31 [75] | Classification | Fundus images with macular degeneration | AlexNet, GoogLeNet, VGGNet, Inception, ResNet, and Inception-ResNet
A32 [76] | Classification | Bladder cancer CT | cuda-convnet
A34 [77] | Classification | Prostate cancer tissue microarray images | MobileNet
D19 [78] | Classification | Holographic microscopy of Bacillus species | CNN based
A36 [79] | Survival classification | Chest CT | CNN
D20 [80] | Classification and localization | Malignant lung nodule radiographs | ResNet based
A37 [81] | Classification | Shoulder radiographs with proximal humerus fracture | ResNet
A39 [82] | Classification | Facial images of hetero and homosexual individuals | VGG-Face
A41 [83] | Segmentation and classification | CAD CT angiography | CNN and CAE
A42 [84] | Classification and localization | Radiographs with fracture | U-net
A43 [85] | Binding classification | Peptide major histocompatibility complex as image-like array | CNN
A44 [86] | Detection | Lung nodule CT | CNN
A45 [87] | Classification | Confocal endomicroscopy video of oral cancer | LeNet
A46 [88] | Classification | WSI of prostate, skin, and breast cancer | MIL^n with ResNet and RNN
D24 [89] | Tracking | Video of freely behaving animal | FCNN based
D25 [90] | Segmentation | Fundus images with glaucoma | U-net based
A47 [91] | Segmentation and classification | Cardiac disease cine MRI | U-net; M-Net; Dense U-net; SVF-Net; Grid-Net; Dilated CNN
D27 [92] | Classification | Knee abnormality MRI | AlexNet based
D28 [93] | Binding affinities prediction | Protein-ligand complexes as grid | CNN based
A50 [94] | Segmentation | Autosomal dominant polycystic kidney disease CT | FCNN with VGGNet
A51 [95] | Segmentation and classification | Knee cartilage lesion MRI | VGGNet
A52 [96] | Classification | Mammograms | ResNet
A54 [97] | Prediction | CAD CT angiography | FCNN
D31 [98] | Classification and localization | WSI of lymph nodes in metastatic breast cancer | Inception based
D35 [99] | Classification | Fluorescence microscopic images of cells | FFNN^o based
A56 [100] | Classification | Retinal fundus images with diabetic retinopathy and breast mass mammography | ResNet; GoogLeNet

Image processing
A25 [101] | Artifact reduction | Brain and abdomen CT and radial MR^p data | U-net
A28 [102] | Resolution enhancement | Fluorescence microscopic images | GAN^q with U-net and CNN
D15 [103] | Dealiasing | Compressed sensing brain lesion and cardiac MRI | GAN with U-net and VGGNet based
D16 [104] | Resolution enhancement | Superresolution localization microscopic images | GAN with U-net–based pix2pix network, modified
A33 [105] | Reconstruction | Brain and pelvic MRI and CT | GAN with FCNN and CNN
D18 [106] | Artifact reduction | CT | CNN based
A38 [107] | Reconstruction | Contrast-enhanced brain MRI | Encoder-decoder CNN
D22 [108] | Reconstruction | Brain MR fingerprinting data | FFNN based
D23 [109] | Resolution enhancement | Hi-C matrix of chromosomes | CNN based
A48 [110] | Resolution enhancement | Brain tumor MRI | U-net
D26 [111] | Reconstruction | Lung vessels CT | CNN based
D32 [112] | Resolution enhancement | Knee MRI | CNN based
D33 [113] | Reconstruction | CT | CNN based
D34 [18] | Registration | Cardiac cine MRI and chest CT | CNN based

Sequence analysis
D17 [114] | Novel structures generation and property prediction | SMILES^r | Stack-RNN^s with GRU^t- and LSTM^u based
A40 [115] | Novel structures generation | SMILES | Variational AE^v; CNN- and RNN with GRU-based AAE^w
D21 [116] | Gene expression (variant effects) prediction | Genomic sequence | CNN based
D30 [117] | Novel structures generation and classification | SMILES | GAN with differentiable neural computer and CNN based
A53 [118] | Novel structures generation | SMILES | LSTM
A57 [119] | Classification | Antimicrobial peptide sequence | CNN with LSTM

Sequence and image analysis
D13 [120] | Contact prediction | Protein sequence to contact matrix | ResNet based

(Diagnostic) pattern analysis
A5 [121] | Subtype identification (survival classification) | Multi-omics data from liver cancer | AE
D5 [122] | Phenotype prediction | Genotype | GoogLeNet and deeply supervised net based
D10 [123] | Survival prediction | Genomic profiles from cancer | FFNN based
D11 [124] | Drug synergies prediction | Gene expression profiles of cancer cell line and chemical descriptors of drugs | FFNN based
A35 [125] | NLP^x (classification) | Electronic health record with pediatric disease | Attention-based BLSTM^y
A49 [126] | Binding classification | Protein sequence as matrix and drug molecular fingerprint | SAE
D29 [127] | Classification | Electrocardiogram signal | BLSTM based
A55 [128] | Classification | Polysomnogram signal | CNN

^a OCT: optical coherence tomography.
^b CNN: convolutional neural network.
^c MRI: magnetic resonance imaging.
^d WSI: whole slide image.
^e CAE: convolutional autoencoder.
^f ResNet: residual networks.
^g CT: computed tomography.
^h DTI: diffusion tensor imaging.
^i mCNN: multicolumn convolutional neural network.
^j FCNN: fully convolutional neural network.
^k SAE: stacked autoencoder.
^l CAD: coronary artery disease.
^m SWE: shear wave elastography.
^n MIL: multiple instance learning.
^o FFNN: feedforward neural network.
^p MR: magnetic resonance.
^q GAN: generative adversarial network.
^r SMILES: simplified molecular input line-entry system.
^s RNN: recurrent neural network.
^t GRU: gated recurrent unit.
^u LSTM: long short-term memory.
^v AE: autoencoder.
^w AAE: adversarial autoencoder.
^x NLP: natural language processing.
^y BLSTM: bidirectional long short-term memory.

Research Topics

In these studies, researchers applied or developed deep learning architectures mainly for image analysis, especially for diagnostic purposes, including the classification or prediction of diseases or survival and the detection, localization, or segmentation of certain areas or abnormalities. These 3 tasks all aim to identify the location of an object of interest but differ in that detection involves a single reference point, localization involves an area identified through a bounding box, saliency map, or heatmap, and segmentation involves a precise area with clear outlines identified through pixel-wise analysis. Meanwhile, in some studies, models for image analysis unrelated to diagnosis were proposed, such as classifying or segmenting cells in microscopic images and tracking moving animals in videos through pose estimation. Another major objective involved image processing for reconstructing or registering medical images. This included enhancing low-resolution images to high resolution, reconstructing images with different modalities or synthesized targets, reducing artifacts, dealiasing, and aligning medical images.

Meanwhile, several researchers used deep learning architectures to analyze molecules, proteins, and genomes for various purposes. These included drug design or discovery, specifically for generating novel molecular structures through sequence analysis and for predicting binding affinities through image analysis of complexes; understanding protein structure through image analysis of contact matrix; and predicting phenotypes, cancer survival, drug synergies, and genomic variant effects from genes or genomes. Finally, in some studies, deep learning was applied to the diagnostic classification of sequential data, including electrocardiogram or polysomnogram signals and electronic health records. In summary, in the reviewed literature, we identified a predominant focus on applying or developing deep learning models for image analysis regarding localization or diagnosis and image processing, with a few studies focusing on protein or genome analysis.

Deep Learning Architectures

Regarding the main architectures, most were CNNs or were based on ≥1 CNN architecture, such as fully convolutional neural networks (FCNNs) and their variants, including U-net; residual neural network (ResNet) and its variants; GoogLeNet (Inception v1) or Inception; VGGNet and its variants; and other architectures. Meanwhile, a few researchers based their models on feedforward neural networks that were not CNNs, including autoencoders (AEs) such as the convolutional AE and stacked AE. Others adapted RNNs, including (bidirectional) long short-term memory and gated recurrent unit networks. Furthermore, models that combined RNNs or AEs with CNNs were also proposed.

Content analysis of the reviewed literature showed that different deep learning architectures were used for different research tasks. Models for classification or prediction tasks using images were predominantly CNN based, with most being ResNet and GoogLeNet or Inception. ResNet with shortcut connections [129] and GoogLeNet or Inception with 1×1 convolutions, factorized convolutions, and regularizations [130,131] allow networks of increased depth and width by solving problems such as vanishing gradients and computational costs. These mostly analyzed medical images from magnetic resonance imaging or computed tomography, with cancer-related images often used as input data for diagnostic classification, in addition to image-like representations of protein complexes. Meanwhile, when applying these tasks to data other than images, such as genomic or gene expression profiles and protein sequence matrices, researchers used feedforward neural networks, including AEs, that enabled semi- or unsupervised learning and dimensionality reduction.
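As a reminder of what the shortcut connection in ResNet looks like, here is a minimal PyTorch-style residual block; it is a generic sketch, not a model from any of the reviewed studies.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x  # the shortcut connection carries the input forward unchanged
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Adding the shortcut lets gradients bypass the convolutions, easing training
        return self.relu(out + shortcut)

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```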

Image analysis for segmentation and image processing were achieved through CNN-based architectures as well, with most of them being FCNNs, especially U-net. FCNNs produce an input-sized pixel-wise prediction by replacing the last fully connected layers with convolution layers, making them advantageous for the abovementioned tasks [132], and U-net enhances this performance through long skip connections that concatenate feature maps from the encoder path to the decoder path [133]. In particular, for medical image processing tasks, a few researchers combined FCNNs (U-net) with other CNNs by adopting the generative adversarial network structure, which generates new instances that mimic the real data through an adversarial process between the generator and discriminator [134]. We found that images of the brain were often used as input data for these studies.
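The long skip connections just described can be shown in a stripped-down, one-level U-net sketch; the channel sizes and depth are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One-level U-net sketch highlighting the long skip connection."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(1, 16, kernel_size=3, padding=1)  # encoder
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        # The decoder sees upsampled features concatenated with encoder features
        self.dec = nn.Conv2d(16 + 16, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = torch.relu(self.enc(x))                    # encoder feature map
        b = torch.relu(self.bottleneck(self.down(e)))  # lower-resolution features
        u = self.up(b)                                 # back to encoder resolution
        return self.dec(torch.cat([u, e], dim=1))      # long skip: concatenate e

net = TinyUNet()
print(net(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```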

On the other hand, RNNs were applied to sequence analysis of the string representation of molecules (simplified molecular input line-entry system) and to pattern analysis of sequential data such as signals. A few of these models, especially those generating novel molecular structures, combined RNNs with CNNs by adopting generative adversarial networks, including the adversarial AE. In summary, the findings showed that the current deep learning models were predominantly CNN based, most of them focused on analyzing medical image data, and different architectures were preferred for specific tasks.

Among these studies, Table 3 shows, in detail, the objectives and the proposed methods of the 35 studies with novel model development.

Table 3. Content analysis of the top 35 records in the development category.
Number | Development objectives | Methods (proposed model)
D1 | Segment brain anatomical structures in 3D MRI^a | Voxelwise Residual Network: trained through residual learning of volumetric feature representation and integrated with contextual information of different modalities and levels
D2 | Estimate poses to track body parts in various animal behaviors | DeeperCut's subset DeepLabCut: network fine-tuned on labeled body parts, with deconvolutional layers producing spatial probability densities to predict locations
D3 | Predict isocitrate dehydrogenase 1 mutation in low-grade glioma with MRI radiomics analysis | Deep learning–based radiomics: segments tumor regions and directly extracts radiomics image features from the last convolutional layer, which is encoded for feature selection and prediction
D4 | Predict protein-ligand binding affinities represented by 3D descriptors | KDEEP: 3D network to predict binding affinity using voxel representation of the protein-ligand complex with properties assigned according to atom type
D5 | Predict phenotype from genotype through the biological hierarchy of cellular subsystems | DCell: visible neural network with structure following the cellular subsystem hierarchy to predict cell growth phenotype and genetic interaction from genotype
D6 | Classify and localize thoracic diseases in chest radiographs | DenseNet-based CheXNeXt: networks trained for each pathology to predict its presence and ensembled, localizing indicative parts using class activation mappings
D7 | Multi-classification of breast cancer from histopathological images | CSDCNN^b: trained through end-to-end learning of hierarchical feature representation and optimized feature space distance between breast cancer classes
D8 | Interactive segmentation of 2D and 3D medical images fine-tuned on a specific image | Bounding box and image-specific fine-tuning–based segmentation: trained for interactive image segmentation using a bounding box and fine-tuned for a specific image with or without scribbles and a weighted loss function
D9 | Facial image analysis for identifying phenotypes of genetic syndromes | DeepGestalt: preprocessed for face detection and multiple regions; extracts phenotypes to predict syndromes per region and aggregates probabilities for classification
D10 | Predict cancer outcomes with genomic profiles through survival model optimization | SurvivalNet: deep survival model with high-dimensional genomic input and Bayesian hyperparameter optimization, interpreted using risk backpropagation
D11 | Predict synergy effect of novel drug combinations for cancer treatment | DeepSynergy: predicts drug synergy value using cancer cell line gene expressions and chemical descriptors, which are normalized and combined through conic layers
D12 | Classify liver fibrosis stages in chronic hepatitis B using radiomics of SWE^c | DLRE^d: predicts the probability of liver fibrosis stages with a quantitative radiomics approach through automatic feature extraction from SWE images
D13 | Predict protein residue contact map at pixel level with protein features | RaptorX-Contact: combined networks to learn contact occurrence patterns from sequential and pairwise protein features to predict contacts simultaneously at pixel level
D14 | Segment liver and tumor in abdominal CT^e scans | Hybrid Densely connected U-net: 2D and 3D networks to extract intra- and interslice features with volumetric contexts, optimized through a hybrid feature fusion layer
D15 | Reconstruct compressed sensing MRI to dealiased images | DAGAN^f: conditional GAN^g stabilized by refinement learning, with the content loss combined with an adversarial loss incorporating frequency domain data
D16 | Reconstruct sparse localization microscopy to superresolution images | Artificial Neural Network Accelerated–Photoactivated Localization Microscopy: trained with superresolution PALM^h as the target; compares reconstructed and target images with loss functions containing a conditional GAN
D17 | Generate novel chemical compound designs with desired properties | Reinforcement Learning for Structural Evolution: generates chemically feasible molecules as strings and predicts their properties, integrated with reinforcement learning to bias the design
D18 | Reduce metal artifacts in reconstructed x-ray CT images | CNN^i-based Metal Artifact Reduction: trained on images processed by other Metal Artifact Reduction methods; generates prior images through tissue processing and replaces metal-affected projections
D19 | Predict Bacillus species to identify anthrax spores in single cell holographic images | HoloConvNet: trained with raw holographic images to directly recognize interspecies differences through representation learning using error backpropagation
D20 | Classify and detect malignant pulmonary nodules in chest radiographs | Deep learning–based automatic detection: predicts the probability of nodules per radiograph for classification and detects nodule locations from activation values
D21 | Predict tissue-specific gene expression and genomic variant effects on the expression | ExPecto: predicts regulatory features from sequences, transforms them to spatial features, and uses linear models to predict tissue-specific expression and variant effects
D22 | Reconstruct MRF^j to obtain tissue parameter maps | Deep reconstruction network: trained with a sparse dictionary that maps magnitude images to quantitative tissue parameter values for MRF reconstruction
D23 | Generate high-resolution Hi-C interaction matrix of chromosomes from a low-resolution matrix | HiCPlus: predicts the high-resolution matrix by mapping regional interaction features of low-resolution to high-resolution submatrices using neighboring regions
D24 | Estimate poses to track body parts of freely moving animals | LEAP^k: videos preprocessed for egocentric alignment and body parts labeled using a GUI^l; predicts each location by confidence maps with probability distributions
D25 | Jointly segment optic disc and cup in fundus images for glaucoma screening | M-Net: multi-scale network for generating multi-label segmentation prediction maps of disc and cup regions using polar transformation
D26 | Reconstruct limited-view PAT^m to high-resolution 3D images | Deep gradient descent: learned iterative image reconstruction, incorporating gradient information of the data fit computed separately from training
D27 | Predict classifications of and localize knee injuries from MRI | MRNet: networks trained for each diagnosis according to a series to predict its presence, combining probabilities for classification using logistic regression
D28 | Predict binding affinities between 3D structures of protein-ligand complexes | Pafnucy: structure-based prediction using 3D grid representation of molecular complexes, with different orientations treated as having the same atom types
D29 | Classify electrocardiogram signals based on wavelet transform | Deep bidirectional LSTM^n network–based wavelet sequences: decomposed frequency subbands of the electrocardiogram signal are generated as sequences by a wavelet-based layer and used as input for classification
D30 | Generate novel small molecule structures with possible biological activity | Reinforced Adversarial Neural Computer: combined with GAN and reinforcement learning; generates sequences matching the key feature distributions in the training molecule data
D31 | Detect and localize breast cancer metastasis in digitized lymph node slides | LYmph Node Assistant: predicts the likelihood of tumor in tissue areas and generates a heat map for slides identifying likely areas
D32 | Transform low-resolution thick-slice knee MRI to high-resolution thin slices | DeepResolve: trained to compute residual images, which are added to low-resolution images to generate their high-resolution counterparts
D33 | Reconstruct sparse-view CT to suppress artifacts and preserve features | Learned Experts' Assessment–Based Reconstruction Network: iterative reconstruction using previous compressive sensing methods, with fields-of-experts regularization terms learned iteration dependently
D34 | Unsupervised affine and deformable alignment of medical images | Deep Learning Image Registration: multistage registration network with unsupervised training to predict transformation parameters using image similarity and create warped moving images
D35 | Classify subcellular localization patterns of proteins in microscopy images | Localization Cellular Annotation Tool: predicts localization per cell for image-based classification of multi-localizing proteins, combined with gamer annotations for transfer learning

^a MRI: magnetic resonance imaging.
^b CSDCNN: class structure-based deep convolutional neural network.
^c SWE: shear wave elastography.
^d DLRE: deep learning radiomics of elastography.
^e CT: computed tomography.
^f DAGAN: Dealiasing Generative Adversarial Networks.
^g GAN: generative adversarial network.
^h PALM: photoactivated localization microscopy.
^i CNN: convolutional neural network.
^j MRF: magnetic resonance fingerprinting.
^k LEAP: LEAP Estimates Animal Pose.
^l GUI: graphical user interface.
^m PAT: photoacoustic tomography.
^n LSTM: long short-term memory.

Black Box Problem

In quite a few of the reviewed studies, the black box problem of deep learning was partly addressed, as researchers implemented various methods to improve model interpretability. To understand the prediction results of image analysis models, most used one of the following two techniques to visualize the important regions: (1) activation-based heatmaps [45,54,65,70], especially class activation maps [57,61,77,92], and saliency maps [59] and (2) occlusion testing [39,75,82,94]. For models analyzing data other than images, there were no generally accepted techniques for model interpretation, and researchers suggested some methods, including adopting an interpretable hierarchical structure such as the cellular subsystem [122] or anatomical division [125], using backpropagation [123], observing gate activations of cells in the neural network [114], or investigating how corrupted input data affect the prediction and how identical predictions are made for different inputs [93]. As such, various methods were found to be used to tackle this well-known limitation of deep learning.
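Occlusion testing, one of the interpretation techniques listed above, measures how a model's class score drops when patches of the input are masked. The sketch below is a generic PyTorch implementation under the assumption that the model outputs per-class scores; it is not a procedure from any specific reviewed study.

```python
import torch

def occlusion_map(model, image, target_class, patch=16, stride=16, fill=0.0):
    """Slide a masking patch over the image and record the class-score drop."""
    model.eval()
    with torch.no_grad():
        base_score = model(image)[0, target_class].item()
        _, _, h, w = image.shape
        heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, :, y:y + patch, x:x + patch] = fill  # mask one patch
                score = model(occluded)[0, target_class].item()
                heat[i, j] = base_score - score  # large drop = important region
    return heat
```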

Cited Reference Analysis

On average, each examined deep learning study with at least one PubMed-indexed reference (429/978, 43.9%) cited 25.8 (SD 20.0) such references. These cited references comprised 9373 unique records that were cited 1.27 times on average (SD 2.16). Excluding the ones that were unindexed in the WoS Core Collection (755/9373, 8.06% of the unique records), an average of 1.77 (SD 1.07) categories were assigned to a record. The top 10 WoS categories, which were assigned to the greatest number of total cited references, pertained to the following three major groups: (1) biomedicine (Radiology, Nuclear Medicine, and Medical Imaging: 2025/11,033, 18.35%; Biochemical Research Methods: 1118/11,033, 10.13%; Mathematical and Computational Biology: 1066/11,033, 9.66%; Biochemistry and Molecular Biology: 1043/11,033, 9.45%; Engineering, Biomedical: 981/11,033, 8.89%; Biotechnology and Applied Microbiology: 916/11,033, 8.3%; Neurosciences: 844/11,033, 7.65%), (2) computer science and engineering (Computer Science, Interdisciplinary Applications: 1041/11,033, 9.44%; Engineering, Electrical and Electronic: 645/11,033, 5.85%), and (3) Multidisciplinary Sciences (1411/11,033, 12.79%).

To understand the intellectual structure of how knowledge is transferred among different areas of study through citations, we visualized the citation network of WoS subject categories. In the directed citation network shown in Figure 5, the edges were directed clockwise, with the source nodes being the WoS categories of the deep learning studies we examined and the target nodes being the WoS categories of the cited references from which knowledge was obtained. To enhance legibility, we filtered out categories with weighted degrees <100, excluding self-loops, to form a network of 20 nodes (20/158, 12.7% of the total) and 59 edges (59/2380, 2.48% of the total). In the figure, the node color and size are proportional to the PageRank score (probability 0.85; ε=0.001; Figure 5A) and the weighted outdegree (Figure 5B), and the edge size and color are proportional to the link strength. PageRank considers not only the quantity but also the quality of incoming edges, identifying important exporters for knowledge diffusion based on how often and by which fields a node is cited. On the other hand, the weighted outdegree measures outgoing edges and identifies major knowledge importers that frequently cite other fields.

Figure 5. Citation network of the Web of Science subject categories assigned to the reviewed publications and their cited references according to (A) PageRank and (B) weighted outdegree (number of nodes=20; number of edges=59).

As depicted in Figure 5A, categories with high PageRank scores mostly coincided with the frequently cited fields identified above and were grouped into two communities through modularity (upper half and lower half). The upper half region centered on Radiology, Nuclear Medicine, and Medical Imaging, which had the highest PageRank score (0.191) and proved to be a field with a significant influence on deep learning studies in biomedicine. Meanwhile, important knowledge exporters to this field included Engineering, Biomedical (0.134); Engineering, Electrical and Electronic (0.110); and Computer Science, Interdisciplinary Applications (0.091). The lower half region mainly comprised categories with comparable PageRank scores in which knowledge was frequently exchanged between one another, including Biochemical Research Methods (0.053), Multidisciplinary Sciences (0.053), Biochemistry and Molecular Biology (0.052), Biotechnology and Applied Microbiology (0.050), and Mathematical and Computational Biology (0.048). Specifically, in Figure 5B, Mathematical and Computational Biology (1992), Biotechnology and Applied Microbiology (1836), and Biochemical Research Methods (1807) were identified as major knowledge importers with the highest weighted outdegrees, whereas Biochemistry and Molecular Biology (344) had a relatively low weighted outdegree, indicating their role as a source of knowledge for these fields.

We analyzed the 10 most frequently cited studies to gain an in-depth understanding of the most influential works and assigned these papers to one of the three categories: review, application, or development. Review articles provided comprehensive overviews of the development and applications of deep learning [1,3], with 1 focusing on applications to medical image analysis [4]. We summarize the 7 application (denoted by A) or development (denoted by D) studies in Table 4.

In these studies, excluding the study by Hochreiter and Schmidhuber [135], whose research topic pertained to computer science, deep learning was used for diagnostic image analysis of various areas [12-14,136] and for sequence analysis of proteins [21] or genomes [22]. The main architectures implemented to achieve the different research objectives mostly comprised CNNs [12-14,136] or CNN-based novel models [21,22] and RNNs [135]. The findings indicated that these deep neural networks either outperformed previous methods or achieved a performance comparable with that of human experts.

Table 4. Content analysis matrix of the highly cited references in the application or development category.
Category | Citation count, n | Research topic: task type | Objectives | Methods (deep learning architectures)
A1 [12] | 53 | Diagnostic image analysis: classification | Apply CNN^a to classifying skin lesions from clinical images | Inception version 3 fine-tuned end to end with images; tested against dermatologists on 2 binary classifications
A2 [13] | 51 | Diagnostic image analysis: classification | Apply CNN to detecting referable diabetic retinopathy on retinal fundus images | Inception version 3 trained and validated using 2 data sets of images graded by ophthalmologists
D1 [135] | 34 | Computer science | Develop a new gradient-based RNN^b to solve error backflow problems | LSTM^c achieved constant error flow through memory cells regulated by gate units; tested numerous times against other methods
D2 [21] | 33 | Sequence analysis: binding (variant effects) prediction | Propose a predictive model for sequence specificities of DNA- and RNA-binding proteins | CNN-based DeepBind trained fully automatically through parallel implementation to predict and visualize binding specificities and variation effects
A3 [14] | 27 | Diagnostic image analysis: classification | Evaluate factors of using CNNs for thoracoabdominal lymph node detection and interstitial lung disease classification | Compare performances of AlexNet, CifarNet, and GoogLeNet trained with transfer learning and different data set characteristics
D3 [22] | 23 | Sequence analysis: chromatin profiles (variant effects) prediction | Propose a model for predicting noncoding variant effects from genomic sequence | CNN-based DeepSEA trained for chromatin profile prediction to estimate variant effects with single nucleotide sensitivity and prioritize functional variants
A4 [136] | 23 | Diagnostic image analysis: classification | Evaluate CNNs for tuberculosis detection on chest radiographs | Compare performances of AlexNet and GoogLeNet and an ensemble of the 2, trained with transfer learning, an augmented data set, and a radiologist-augmented approach

^a CNN: convolutional neural network.
^b RNN: recurrent neural network.
^c LSTM: long short-term memory.

Discussion

Principal Findings

With the increase in biomedical research using deep learning techniques, we aimed to gain a quantitative and qualitative understanding of the scientific domain, as reflected in the published literature. For this purpose, we conducted a scientometric analysis of deep learning studies in biomedicine.

Through the metadata and content analyses of bibliographic records, we identified the current leading fields and research topics, the most prominent being radiology and medical imaging. Other biomedical fields that have led this domain included biomedical engineering, mathematical and computational biology, and biochemical research methods. As part of interdisciplinary research, computer science and electrical engineering were important fields as well. The major research topics that were studied included computer-assisted image interpretation and diagnosis (which involved localizing or segmenting certain areas for classifying or predicting diseases), image processing such as medical image reconstruction or registration, and sequence analysis of proteins or RNA to understand protein structure and discover or design drugs. These topics were particularly prevalent in their application to neoplasms.

Furthermore, although the deep learning techniques proposed for these themes were predominantly CNN based, different architectures were preferred for different research tasks. The findings showed that CNN-based models mostly focused on analyzing medical image data, whereas RNN architectures for sequential data analysis and autoencoders (AEs) for unsupervised dimensionality reduction have yet to be actively explored. Other deep learning methods, such as deep belief networks [137,138], deep Q networks [139], and dictionary learning [140], have also been applied to biomedical research but were excluded from the content analysis because of their low citation counts. As deep learning is a rapidly evolving field, future biomedical researchers should pay attention to emerging trends and stay aware of state-of-the-art models for enhanced performance, such as transformer-based models, including bidirectional encoder representations from transformers for NLP [141]; wav2vec for speech recognition [142]; and the Swin transformer for the computer vision tasks of image classification, segmentation, and object detection [143].
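
As a point of reference for the architectural contrast drawn above, the sketch below shows a minimal AE for unsupervised dimensionality reduction, assuming PyTorch; the input width, bottleneck size, and synthetic batch are illustrative placeholders, not data from any study analyzed here.

```python
# Minimal autoencoder sketch: compress unlabeled feature vectors (eg, a
# flattened omics profile) into a low-dimensional code, then reconstruct them.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features: int = 1000, n_latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, n_latent),          # bottleneck = reduced representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 256), nn.ReLU(),
            nn.Linear(256, n_features),
        )

    def forward(self, x: torch.Tensor):
        code = self.encoder(x)
        return self.decoder(code), code

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                         # reconstruction error; no labels needed

x = torch.randn(64, 1000)                      # placeholder batch of 64 samples
optimizer.zero_grad()
reconstruction, code = model(x)
loss = loss_fn(reconstruction, x)
loss.backward()
optimizer.step()
print(code.shape)                              # torch.Size([64, 32])
```

Because training minimizes reconstruction error alone, the 32-dimensional code can serve as an unsupervised low-dimensional representation for downstream clustering or visualization.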

The findings from the analysis of the cited references revealed patterns of knowledge diffusion. In the analysis, radiology and medical imaging appeared to be the most significant knowledge source and an important field in the knowledge diffusion network. Relatedly, despite their relatively low citation counts, biomedical engineering, electrical engineering, and computer science were identified as important knowledge exporters to this field. Furthermore, citation patterns revealed clique-like relationships among four fields (biochemical research methods, biochemistry and molecular biology, biotechnology and applied microbiology, and mathematical and computational biology), with each field serving as both a knowledge source and a knowledge recipient for the others.
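
To illustrate the style of analysis behind these diffusion findings, the following sketch builds a small directed field-to-field citation graph with NetworkX and ranks fields with PageRank [38] as one proxy for importance as a knowledge source. The edge list is fabricated for illustration and does not reproduce this study's data.

```python
# Sketch: field-level knowledge diffusion as a weighted directed graph.
# An edge (A -> B, w) means field A's papers cite field B's papers w times.
import networkx as nx

G = nx.DiGraph()
fabricated_citations = [
    ("radiology and medical imaging", "computer science", 120),
    ("radiology and medical imaging", "electrical engineering", 45),
    ("radiology and medical imaging", "biomedical engineering", 40),
    ("biochemical research methods", "mathematical and computational biology", 30),
    ("mathematical and computational biology", "biochemical research methods", 28),
]
for citing, cited, weight in fabricated_citations:
    G.add_edge(citing, cited, weight=weight)

# Heavily cited fields accumulate high PageRank, marking them as knowledge
# exporters within the diffusion network.
rank = nx.pagerank(G, weight="weight")
for field, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{field}: {score:.3f}")
```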

Beyond knowledge diffusion, knowledge integration was also encouraged through collaboration among authors from different organizations and academic disciplines. Coauthorship analysis revealed active research collaboration between universities and hospitals and between hospitals and companies. Separately, we identified an engineering-oriented cluster and biomedicine-oriented clusters of disciplines, among which we observed a range of disciplinary collaborations. The 2 most prominent were between radiology and medical imaging and computer science and between radiology and medical imaging and electrical engineering, with these 3 disciplines being the most involved in publishing and collaboration. Meanwhile, pathology and public health showed a high ratio of collaborative research to publications, whereas computational biology showed a low ratio.
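
Clusters such as these are typically recovered with community detection on a collaboration network, eg, the Louvain method of Blondel et al [37]. The sketch below applies NetworkX's Louvain implementation to a fabricated discipline network; the nodes, edges, and weights are placeholders chosen only to echo the clusters described above.

```python
# Sketch: discipline-level coauthorship network partitioned with the Louvain
# method, which greedily merges nodes to maximize modularity.
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.Graph()
G.add_weighted_edges_from([
    # engineering-oriented side (fabricated weights = joint publications)
    ("radiology and medical imaging", "computer science", 40),
    ("radiology and medical imaging", "electrical engineering", 25),
    ("computer science", "electrical engineering", 18),
    # biomedicine-oriented side
    ("biochemistry and molecular biology", "biotechnology", 12),
    ("mathematical and computational biology", "biochemistry and molecular biology", 9),
    # a weak bridge between the two sides
    ("pathology", "computer science", 3),
])

clusters = louvain_communities(G, weight="weight", seed=42)
for i, members in enumerate(clusters):
    print(f"cluster {i}: {sorted(members)}")
```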

Limitations

This study has the following limitations that may have affected data analysis and interpretation. First, focusing only on published studies may have underrepresented the field. Second, publication data were retrieved only from PubMed; although PubMed is one of the largest databases of biomedical literature, other databases, such as DBLP (DataBase systems and Logic Programming), may also include relevant studies. Third, the use of PubMed limited our data to biomedical journals and proceedings. Given that deep learning is an active research area in computer science, computer science conference articles are valuable sources of data that were not considered in this study. Finally, our data retrieval strategy involved searching for deep learning as the major MeSH term, which increased precision but may have omitted relevant studies that were not explicitly tagged as such. We plan to expand our scope in future work to consider other bibliographic databases and search terms.
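
As a concrete illustration of this retrieval strategy, the sketch below uses Biopython's Entrez module [29] to search PubMed for records indexed with deep learning as a major MeSH term. The email address, retmax value, and exact query string are placeholders, not the precise query used in this study.

```python
# Sketch: querying PubMed via NCBI E-utilities with Biopython.
from Bio import Entrez

Entrez.email = "your.name@example.org"   # NCBI asks for a contact address

# "[MeSH Major Topic]" restricts matches to records where deep learning is a
# major subject heading: higher precision, lower recall (the trade-off above).
handle = Entrez.esearch(
    db="pubmed",
    term='"deep learning"[MeSH Major Topic]',
    retmax=1000,
)
record = Entrez.read(handle)
handle.close()

pmids = record["IdList"]                 # PubMed IDs for later efetch calls
print(f'{record["Count"]} matching records; retrieved {len(pmids)} PMIDs')
```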

Conclusions

In this study, we investigated the landscape of deep learning research in biomedicine and identified major research topics, influential works, knowledge diffusion, and research collaboration through scientometric analyses. The results showed a predominant focus on research applying deep learning techniques, especially CNNs, to radiology and medical imaging and confirmed the interdisciplinary nature of this domain, especially between engineering and biomedical fields. However, more diverse biomedical applications of deep learning, in the fields of genetics and genomics, in medical informatics focusing on text or speech data, and in signal processing of various activities (eg, brain, heart, and human activity), would further boost the contribution of deep learning in addressing biomedical research problems. As such, although deep learning research in biomedicine has been successful, we believe that there is a need for further exploration, and we expect the results of this study to help researchers and communities better align their present and future work.

Authors' Contributions

SN and YZ designed the study. SN, DK, and WJ analyzed the data. SN took the lead in the writing of the manuscript. YZ supervised and implemented the study. All authors contributed to critical edits and approved the final manuscript.

Conflicts of Interest

None declared.

  1. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015 May 28;521(7553):436-444. [CrossRef] [Medline]
  2. Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP. Deep learning in neuroradiology. AJNR Am J Neuroradiol 2018 Oct;39(10):1776-1784 [FREE Full text] [CrossRef] [Medline]
  3. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw 2015 Jan;61:85-117. [CrossRef] [Medline]
  4. Litjens G, Kooi T, Bejnordi BE, Setio AA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017 Dec;42:60-88. [CrossRef] [Medline]
  5. Dilsizian ME, Siegel EL. Machine meets biology: a primer on artificial intelligence in cardiology and cardiac imaging. Curr Cardiol Rep 2018 Oct 18;20(12):139. [CrossRef] [Medline]
  6. Hu Z, Tang J, Wang Z, Zhang K, Zhang L, Sun Q. Deep learning for image-based cancer detection and diagnosis - a survey. Pattern Recognit 2018 Nov;83:134-149. [CrossRef]
  7. Xue Y, Chen S, Qin J, Liu Y, Huang B, Chen H. Application of deep learning in automated analysis of molecular images in cancer: a survey. Contrast Media Mol Imaging 2017;2017:9512370 [FREE Full text] [CrossRef] [Medline]
  8. Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018 Jul 01;98:126-146. [CrossRef] [Medline]
  9. Mamoshina P, Vieira A, Putin E, Zhavoronkov A. Applications of deep learning in biomedicine. Mol Pharm 2016 May 02;13(5):1445-1454. [CrossRef] [Medline]
  10. Cao C, Liu F, Tan H, Song D, Shu W, Li W, et al. Deep learning and its applications in biomedicine. Genomics Proteomics Bioinformatics 2018 Feb;16(1):17-32 [FREE Full text] [CrossRef] [Medline]
  11. Wainberg M, Merico D, Delong A, Frey BJ. Deep learning in biomedicine. Nat Biotechnol 2018 Oct;36(9):829-838. [CrossRef] [Medline]
  12. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017 Feb 02;542(7639):115-118 [FREE Full text] [CrossRef] [Medline]
  13. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016 Dec 13;316(22):2402-2410. [CrossRef] [Medline]
  14. Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging 2016 May;35(5):1285-1298 [FREE Full text] [CrossRef] [Medline]
  15. de Vos BD, Wolterink JM, de Jong PA, Viergever MA, Išgum I. 2D image classification for 3D anatomy localization: employing deep convolutional neural networks. In: Proceedings Volume 9784, Medical Imaging 2016: Image Processing. 2016 Presented at: SPIE '16; February 27-March 3, 2016; San Diego, CA, USA. [CrossRef]
  16. Wang G, Li W, Zuluaga MA, Pratt R, Patel PA, Aertsen M, et al. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans Med Imaging 2018 Jul;37(7):1562-1573 [FREE Full text] [CrossRef] [Medline]
  17. Miao S, Wang ZJ, Liao R. A CNN regression approach for real-time 2D/3D registration. IEEE Trans Med Imaging 2016 May;35(5):1352-1363. [CrossRef] [Medline]
  18. de Vos BD, Berendsen FF, Viergever MA, Sokooti H, Staring M, Išgum I. A deep learning framework for unsupervised affine and deformable image registration. Med Image Anal 2019 Feb;52:128-143. [CrossRef] [Medline]
  19. Lin Z, Lanchantin J, Qi Y. MUST-CNN: a multilayer shift-and-stitch deep convolutional architecture for sequence-based protein structure prediction. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence. 2016 Presented at: AAAI '16; February 12-17, 2016; Phoenix, AZ, USA p. 27-34.
  20. Wang S, Li W, Liu S, Xu J. RaptorX-Property: a web server for protein structure property prediction. Nucleic Acids Res 2016 Jul 08;44(W1):W430-W435 [FREE Full text] [CrossRef] [Medline]
  21. Alipanahi B, Delong A, Weirauch MT, Frey BJ. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat Biotechnol 2015 Aug;33(8):831-838. [CrossRef] [Medline]
  22. Zhou J, Troyanskaya OG. Predicting effects of noncoding variants with deep learning-based sequence model. Nat Methods 2015 Oct;12(10):931-934 [FREE Full text] [CrossRef] [Medline]
  23. Quang D, Xie X. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences. Nucleic Acids Res 2016 Jun 20;44(11):e107 [FREE Full text] [CrossRef] [Medline]
  24. dos Santos BS, Steiner MT, Fenerich AT, Lima RH. Data mining and machine learning techniques applied to public health problems: a bibliometric analysis from 2009 to 2018. Comput Ind Eng 2019 Dec;138:106120. [CrossRef]
  25. Shukla N, Merigó JM, Lammers T, Miranda L. Half a century of computer methods and programs in biomedicine: a bibliometric analysis from 1970 to 2017. Comput Methods Programs Biomed 2020 Jan;183:105075. [CrossRef] [Medline]
  26. Entrez help. Bethesda, MD, USA: National Center for Biotechnology Information (US); 2006.
  27. Introduction to MeSH. National Library of Medicine. 2019.   URL: https://www.nlm.nih.gov/mesh/introduction.html [accessed 2020-05-25]
  28. Chapman D. Advanced search features of PubMed. J Can Acad Child Adolesc Psychiatry 2009 Feb;18(1):58-59 [FREE Full text] [Medline]
  29. Chang J, Chapman B, Friedberg I, Hamelryck T, de Hoon M, Cock P, et al. Biopython tutorial and cookbook. Biopython. 2020.   URL: http://biopython.org/DIST/docs/tutorial/Tutorial.pdf [accessed 2020-05-25]
  30. Sugimoto CR, Larivière V. Measuring research: what everyone needs to know. Oxford, UK: Oxford University Press; 2018.
  31. Van Eck NJ, Waltman L. Manual for VOSviewer version 1.6.15. VOSviewer. 2020.   URL: https://www.vosviewer.com/documentation/Manual_VOSviewer_1.6.15.pdf [accessed 2020-05-25]
  32. Van Eck NJ, Waltman L. Visualizing bibliometric networks. In: Ding Y, Rousseau R, Wolfram D, editors. Measuring scholarly impact. Berlin, Germany: Springer; 2014:285-320.
  33. Waltman L, van Eck NJ, Noyons EC. A unified approach to mapping and clustering of bibliometric networks. J Informetr 2010 Oct;4(4):629-635. [CrossRef]
  34. Verborgh R, De Wilde M. Using OpenRefine. Birmingham, UK: Packt Publishing; 2013.
  35. Martín-Martín A, Orduna-Malea E, Thelwall M, Delgado López-Cózar E. Google Scholar, Web of Science, and Scopus: a systematic comparison of citations in 252 subject categories. J Informetr 2018 Nov;12(4):1160-1177. [CrossRef]
  36. Bastian M, Heymann S, Jacomy M. Gephi: an open source software for exploring and manipulating networks. In: Proceedings of the 3rd International AAAI Conference on Web and Social Media. 2009 Presented at: ICWSM '09; May 17-20, 2009; San Jose, CA, USA p. 17-20.
  37. Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. J Stat Mech 2008 Oct 09;2008(10):P10008. [CrossRef]
  38. Brin S, Page L. The anatomy of a large-scale hypertextual web search engine. Comput Netw 1998 Apr;30(1-7):107-117. [CrossRef]
  39. Kermany DS, Goldbaum M, Cai W, Valentim CC, Liang H, Baxter SL, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018 Feb 22;172(5):1122-31.e9 [FREE Full text] [CrossRef] [Medline]
  40. De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med 2018 Sep;24(9):1342-1350. [CrossRef] [Medline]
  41. Haenssle HA, Fink C, Schneiderbauer R, Toberer F, Buhl T, Blum A, Reader Study Level-I and Level-II Groups, et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol 2018 Aug 01;29(8):1836-1842 [FREE Full text] [CrossRef] [Medline]
  42. Lao J, Chen Y, Li ZC, Li Q, Zhang J, Liu J, et al. A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme. Sci Rep 2017 Sep 04;7(1):10353 [FREE Full text] [CrossRef] [Medline]
  43. Saltz J, Gupta R, Hou L, Kurc T, Singh P, Nguyen V, Cancer Genome Atlas Research Network, et al. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell Rep 2018 Apr 03;23(1):181-93.e7 [FREE Full text] [CrossRef] [Medline]
  44. Chen H, Dou Q, Yu L, Qin J, Heng PA. VoxResNet: deep voxelwise residual networks for brain segmentation from 3D MR images. Neuroimage 2018 Apr 15;170:446-455. [CrossRef] [Medline]
  45. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng 2018 Mar;2(3):158-164. [CrossRef] [Medline]
  46. Mathis A, Mamidanna P, Cury KM, Abe T, Murthy VN, Mathis MW, et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci 2018 Sep;21(9):1281-1289. [CrossRef] [Medline]
  47. Byrne MF, Chapados N, Soudan F, Oertel C, Linares Pérez M, Kelly R, et al. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut 2019 Jan;68(1):94-100 [FREE Full text] [CrossRef] [Medline]
  48. Ciompi F, Chung K, van Riel SJ, Setio AA, Gerke PK, Jacobs C, et al. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci Rep 2017 Apr 19;7:46479 [FREE Full text] [CrossRef] [Medline]
  49. Schlegl T, Waldstein SM, Bogunovic H, Endstraßer F, Sadeghipour A, Philip AM, et al. Fully automated detection and quantification of macular fluid in OCT using deep learning. Ophthalmology 2018 Apr;125(4):549-558 [FREE Full text] [CrossRef] [Medline]
  50. Li Z, Wang Y, Yu J, Guo Y, Cao W. Deep learning based radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma. Sci Rep 2017 Jul 14;7(1):5467 [FREE Full text] [CrossRef] [Medline]
  51. Jiménez J, Škalič M, Martínez-Rosell G, De Fabritiis G. KDEEP: protein-ligand absolute binding affinity prediction via 3D-convolutional neural networks. J Chem Inf Model 2018 Feb 26;58(2):287-296. [CrossRef] [Medline]
  52. Nie D, Zhang H, Adeli E, Liu L, Shen D. 3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients. Med Image Comput Comput Assist Interv 2016 Oct;9901:212-220 [FREE Full text] [CrossRef] [Medline]
  53. Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology 2018 Aug;125(8):1199-1206. [CrossRef] [Medline]
  54. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med 2018 Nov;15(11):e1002683 [FREE Full text] [CrossRef] [Medline]
  55. Chilamkurthy S, Ghosh R, Tanamala S, Biviji M, Campeau NG, Venugopal VK, et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet 2018 Dec 01;392(10162):2388-2396. [CrossRef] [Medline]
  56. Chang P, Grinband J, Weinberg BD, Bardis M, Khy M, Cadena G, et al. Deep-learning convolutional neural networks accurately classify genetic mutations in gliomas. AJNR Am J Neuroradiol 2018 Jul;39(7):1201-1207 [FREE Full text] [CrossRef] [Medline]
  57. Rajpurkar P, Irvin J, Ball RL, Zhu K, Yang B, Mehta H, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med 2018 Nov;15(11):e1002686 [FREE Full text] [CrossRef] [Medline]
  58. Zhang J, Gajjala S, Agrawal P, Tison GH, Hallock LA, Beussink-Nelson L, et al. Fully automated echocardiogram interpretation in clinical practice. Circulation 2018 Oct 16;138(16):1623-1635 [FREE Full text] [CrossRef] [Medline]
  59. Ding Y, Sohn JH, Kawczynski MG, Trivedi H, Harnish R, Jenkins NW, et al. A deep learning model to predict a diagnosis of Alzheimer disease by using 18 F-FDG PET of the brain. Radiology 2019 Feb;290(2):456-464 [FREE Full text] [CrossRef] [Medline]
  60. Han Z, Wei B, Zheng Y, Yin Y, Li K, Li S. Breast cancer multi-classification from histopathological images with structured deep learning model. Sci Rep 2017 Jun 23;7(1):4172 [FREE Full text] [CrossRef] [Medline]
  61. Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol 2018 Jul;138(7):1529-1538 [FREE Full text] [CrossRef] [Medline]
  62. González G, Ash SY, Vegas-Sánchez-Ferrero G, Onieva Onieva J, Rahaghi FN, Ross JC, COPDGene and ECLIPSE Investigators. Disease staging and prognosis in smokers using deep learning in chest computed tomography. Am J Respir Crit Care Med 2018 Jan 15;197(2):193-203 [FREE Full text] [CrossRef] [Medline]
  63. Dolz J, Desrosiers C, Ben Ayed I. 3D fully convolutional networks for subcortical segmentation in MRI: a large-scale study. Neuroimage 2018 Apr 15;170:456-470. [CrossRef] [Medline]
  64. Wen H, Shi J, Zhang Y, Lu KH, Cao J, Liu Z. Neural encoding and decoding with deep learning for dynamic natural vision. Cereb Cortex 2018 Dec 01;28(12):4136-4160 [FREE Full text] [CrossRef] [Medline]
  65. Gurovich Y, Hanani Y, Bar O, Nadav G, Fleischer N, Gelbman D, et al. Identifying facial phenotypes of genetic disorders using deep learning. Nat Med 2019 Jan;25(1):60-64. [CrossRef] [Medline]
  66. Falk T, Mai D, Bensch R, Çiçek Ö, Abdulkadir A, Marrakchi Y, et al. U-Net: deep learning for cell counting, detection, and morphometry. Nat Methods 2019 Jan;16(1):67-70. [CrossRef] [Medline]
  67. Ribli D, Horváth A, Unger Z, Pollner P, Csabai I. Detecting and classifying lesions in mammograms with Deep Learning. Sci Rep 2018 Mar 15;8(1):4165 [FREE Full text] [CrossRef] [Medline]
  68. Ardila D, Kiraly AP, Bharadwaj S, Choi B, Reicher JJ, Peng L, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 2019 Jun;25(6):954-961. [CrossRef] [Medline]
  69. Song Q, Zhao L, Luo X, Dou X. Using deep learning for classification of lung nodules on computed tomography images. J Healthc Eng 2017;2017:8314740 [FREE Full text] [CrossRef] [Medline]
  70. Hosny A, Parmar C, Coroller TP, Grossmann P, Zeleznik R, Kumar A, et al. Deep learning for lung cancer prognostication: a retrospective multi-cohort radiomics study. PLoS Med 2018 Nov;15(11):e1002711 [FREE Full text] [CrossRef] [Medline]
  71. Betancur J, Commandeur F, Motlagh M, Sharir T, Einstein AJ, Bokhari S, et al. Deep learning for prediction of obstructive disease from fast myocardial perfusion SPECT: a multicenter study. JACC Cardiovasc Imaging 2018 Nov;11(11):1654-1663 [FREE Full text] [CrossRef] [Medline]
  72. Wang X, Yang W, Weinreb J, Han J, Li Q, Kong X, et al. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning. Sci Rep 2017 Nov 13;7(1):15415 [FREE Full text] [CrossRef] [Medline]
  73. Wang K, Lu X, Zhou H, Gao Y, Zheng J, Tong M, et al. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: a prospective multicentre study. Gut 2019 Apr;68(4):729-741 [FREE Full text] [CrossRef] [Medline]
  74. Li X, Chen H, Qi X, Dou Q, Fu CW, Heng PA. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans Med Imaging 2018 Dec;37(12):2663-2674. [CrossRef] [Medline]
  75. Grassmann F, Mengelkamp J, Brandl C, Harsch S, Zimmermann ME, Linkohr B, et al. A deep learning algorithm for prediction of age-related eye disease study severity scale for age-related macular degeneration from color fundus photography. Ophthalmology 2018 Sep;125(9):1410-1420 [FREE Full text] [CrossRef] [Medline]
  76. Cha KH, Hadjiiski L, Chan HP, Weizer AZ, Alva A, Cohan RH, et al. Bladder cancer treatment response assessment in CT using radiomics with deep-learning. Sci Rep 2017 Aug 18;7(1):8738 [FREE Full text] [CrossRef] [Medline]
  77. Arvaniti E, Fricker KS, Moret M, Rupp N, Hermanns T, Fankhauser C, et al. Automated Gleason grading of prostate cancer tissue microarrays via deep learning. Sci Rep 2018 Aug 13;8(1):12054 [FREE Full text] [CrossRef] [Medline]
  78. Jo Y, Park S, Jung J, Yoon J, Joo H, Kim MH, et al. Holographic deep learning for rapid optical screening of anthrax spores. Sci Adv 2017 Aug;3(8):e1700606 [FREE Full text] [CrossRef] [Medline]
  79. Oakden-Rayner L, Carneiro G, Bessen T, Nascimento JC, Bradley AP, Palmer LJ. Precision radiology: predicting longevity using feature engineering and deep learning methods in a radiomics framework. Sci Rep 2017 May 10;7(1):1648 [FREE Full text] [CrossRef] [Medline]
  80. Nam JG, Park S, Hwang EJ, Lee JH, Jin KN, Lim KY, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology 2019 Jan;290(1):218-228. [CrossRef] [Medline]
  81. Chung SW, Han SS, Lee JW, Oh KS, Kim NR, Yoon JP, et al. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm. Acta Orthop 2018 Aug;89(4):468-473 [FREE Full text] [CrossRef] [Medline]
  82. Wang Y, Kosinski M. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J Pers Soc Psychol 2018 Feb;114(2):246-257. [CrossRef] [Medline]
  83. Zreik M, Lessmann N, van Hamersvelt RW, Wolterink JM, Voskuil M, Viergever MA, et al. Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis. Med Image Anal 2018 Feb;44:72-85. [CrossRef] [Medline]
  84. Lindsey R, Daluiski A, Chopra S, Lachapelle A, Mozer M, Sicular S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A 2018 Nov 06;115(45):11591-11596 [FREE Full text] [CrossRef] [Medline]
  85. Han Y, Kim D. Deep convolutional neural networks for pan-specific peptide-MHC class I binding prediction. BMC Bioinformatics 2017 Dec 28;18(1):585 [FREE Full text] [CrossRef] [Medline]
  86. Jiang H, Ma H, Qian W, Gao M, Li Y, Jiang H, et al. An automatic detection system of lung nodule based on multigroup patch-based deep learning network. IEEE J Biomed Health Inform 2018 Jul;22(4):1227-1237. [CrossRef] [Medline]
  87. Aubreville M, Knipfer C, Oetter N, Jaremenko C, Rodner E, Denzler J, et al. Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning. Sci Rep 2017 Sep 20;7(1):11979 [FREE Full text] [CrossRef] [Medline]
  88. Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ, et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med 2019 Aug;25(8):1301-1309 [FREE Full text] [CrossRef] [Medline]
  89. Pereira TD, Aldarondo DE, Willmore L, Kislin M, Wang SS, Murthy M, et al. Fast animal pose estimation using deep neural networks. Nat Methods 2019 Jan;16(1):117-125 [FREE Full text] [CrossRef] [Medline]
  90. Fu H, Cheng J, Xu Y, Wong DW, Liu J, Cao X. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans Med Imaging 2018 Jul;37(7):1597-1605. [CrossRef] [Medline]
  91. Bernard O, Lalande A, Zotti C, Cervenansky F, Yang X, Heng PA, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Trans Med Imaging 2018 Nov;37(11):2514-2525. [CrossRef] [Medline]
  92. Bien N, Rajpurkar P, Ball RL, Irvin J, Park A, Jones E, et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Med 2018 Nov;15(11):e1002699 [FREE Full text] [CrossRef] [Medline]
  93. Stepniewska-Dziubinska MM, Zielenkiewicz P, Siedlecki P. Development and evaluation of a deep learning model for protein-ligand binding affinity prediction. Bioinformatics 2018 Nov 01;34(21):3666-3674 [FREE Full text] [CrossRef] [Medline]
  94. Sharma K, Rupprecht C, Caroli A, Aparicio MC, Remuzzi A, Baust M, et al. Automatic segmentation of kidneys using deep learning for total kidney volume quantification in autosomal dominant polycystic kidney disease. Sci Rep 2017 May 17;7(1):2049 [FREE Full text] [CrossRef] [Medline]
  95. Liu F, Zhou Z, Samsonov A, Blankenbaker D, Larison W, Kanarek A, et al. Deep learning approach for evaluating knee MR images: achieving high diagnostic performance for cartilage lesion detection. Radiology 2018 Oct;289(1):160-169 [FREE Full text] [CrossRef] [Medline]
  96. Lehman CD, Yala A, Schuster T, Dontchos B, Bahl M, Swanson K, et al. Mammographic breast density assessment using deep learning: clinical implementation. Radiology 2019 Jan;290(1):52-58. [CrossRef] [Medline]
  97. Coenen A, Kim YH, Kruk M, Tesche C, De Geer J, Kurata A, et al. Diagnostic accuracy of a machine-learning approach to coronary computed tomographic angiography-based fractional flow reserve: result from the MACHINE consortium. Circ Cardiovasc Imaging 2018 Jun;11(6):e007217. [CrossRef] [Medline]
  98. Steiner DF, MacDonald R, Liu Y, Truszkowski P, Hipp JD, Gammage C, et al. Impact of deep learning assistance on the histopathologic review of lymph nodes for metastatic breast cancer. Am J Surg Pathol 2018 Dec;42(12):1636-1646 [FREE Full text] [CrossRef] [Medline]
  99. Sullivan DP, Winsnes CF, Åkesson L, Hjelmare M, Wiking M, Schutten R, et al. Deep learning is combined with massive-scale citizen science to improve large-scale image classification. Nat Biotechnol 2018 Oct;36(9):820-828. [CrossRef] [Medline]
  100. Chang K, Balachandar N, Lam C, Yi D, Brown J, Beers A, et al. Distributed deep learning networks among institutions for medical imaging. J Am Med Inform Assoc 2018 Aug 01;25(8):945-954 [FREE Full text] [CrossRef] [Medline]
  101. Han Y, Yoo J, Kim HH, Shin HJ, Sung K, Ye JC. Deep learning with domain adaptation for accelerated projection-reconstruction MR. Magn Reson Med 2018 Sep;80(3):1189-1205. [CrossRef] [Medline]
  102. Wang H, Rivenson Y, Jin Y, Wei Z, Gao R, Günaydın H, et al. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat Methods 2019 Jan;16(1):103-110 [FREE Full text] [CrossRef] [Medline]
  103. Yang G, Yu S, Dong H, Slabaugh G, Dragotti PL, Ye X, et al. DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans Med Imaging 2018 Jun;37(6):1310-1321. [CrossRef] [Medline]
  104. Ouyang W, Aristov A, Lelek M, Hao X, Zimmer C. Deep learning massively accelerates super-resolution localization microscopy. Nat Biotechnol 2018 Jun;36(5):460-468. [CrossRef] [Medline]
  105. Nie D, Trullo R, Lian J, Wang L, Petitjean C, Ruan S, et al. Medical image synthesis with deep convolutional adversarial networks. IEEE Trans Biomed Eng 2018 Dec;65(12):2720-2730 [FREE Full text] [CrossRef] [Medline]
  106. Zhang Y, Yu H. Convolutional neural network based metal artifact reduction in X-ray computed tomography. IEEE Trans Med Imaging 2018 Jun;37(6):1370-1381 [FREE Full text] [CrossRef] [Medline]
  107. Gong E, Pauly JM, Wintermark M, Zaharchuk G. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging 2018 Aug;48(2):330-340. [CrossRef] [Medline]
  108. Cohen O, Zhu B, Rosen MS. MR fingerprinting deep reconstruction network (DRONE). Magn Reson Med 2018 Sep;80(3):885-894 [FREE Full text] [CrossRef] [Medline]
  109. Zhang Y, An L, Xu J, Zhang B, Zheng WJ, Hu M, et al. Enhancing Hi-C data resolution with deep convolutional neural network HiCPlus. Nat Commun 2018 Feb 21;9(1):750 [FREE Full text] [CrossRef] [Medline]
  110. Hyun CM, Kim HP, Lee SM, Lee S, Seo JK. Deep learning for undersampled MRI reconstruction. Phys Med Biol 2018 Jun 25;63(13):135007. [CrossRef] [Medline]
  111. Hauptmann A, Lucka F, Betcke M, Huynh N, Adler J, Cox B, et al. Model-based learning for accelerated, limited-view 3-D photoacoustic tomography. IEEE Trans Med Imaging 2018 Jun;37(6):1382-1393. [CrossRef] [Medline]
  112. Chaudhari AS, Fang Z, Kogan F, Wood J, Stevens KJ, Gibbons EK, et al. Super-resolution musculoskeletal MRI using deep learning. Magn Reson Med 2018 Nov;80(5):2139-2154 [FREE Full text] [CrossRef] [Medline]
  113. Chen H, Zhang Y, Chen Y, Zhang J, Zhang W, Sun H, et al. LEARN: learned experts' assessment-based reconstruction network for sparse-data CT. IEEE Trans Med Imaging 2018 Jun;37(6):1333-1347 [FREE Full text] [CrossRef] [Medline]
  114. Popova M, Isayev O, Tropsha A. Deep reinforcement learning for de novo drug design. Sci Adv 2018 Jul;4(7):eaap7885 [FREE Full text] [CrossRef] [Medline]
  115. Blaschke T, Olivecrona M, Engkvist O, Bajorath J, Chen H. Application of generative autoencoder in de novo molecular design. Mol Inform 2018 Jan;37(1-2):1700123 [FREE Full text] [CrossRef] [Medline]
  116. Zhou J, Theesfeld CL, Yao K, Chen KM, Wong AK, Troyanskaya OG. Deep learning sequence-based ab initio prediction of variant effects on expression and disease risk. Nat Genet 2018 Aug;50(8):1171-1179 [FREE Full text] [CrossRef] [Medline]
  117. Putin E, Asadulaev A, Ivanenkov Y, Aladinskiy V, Sanchez-Lengeling B, Aspuru-Guzik A, et al. Reinforced adversarial neural computer for de novo molecular design. J Chem Inf Model 2018 Jun 25;58(6):1194-1204 [FREE Full text] [CrossRef] [Medline]
  118. Merk D, Friedrich L, Grisoni F, Schneider G. De novo design of bioactive small molecules by artificial intelligence. Mol Inform 2018 Jan;37(1-2):1700153 [FREE Full text] [CrossRef] [Medline]
  119. Veltri D, Kamath U, Shehu A. Deep learning improves antimicrobial peptide recognition. Bioinformatics 2018 Aug 15;34(16):2740-2747 [FREE Full text] [CrossRef] [Medline]
  120. Wang S, Sun S, Xu J. Analysis of deep learning methods for blind protein contact prediction in CASP12. Proteins 2018 Mar;86 Suppl 1:67-77 [FREE Full text] [CrossRef] [Medline]
  121. Chaudhary K, Poirion OB, Lu L, Garmire LX. Deep learning-based multi-omics integration robustly predicts survival in liver cancer. Clin Cancer Res 2018 Mar 15;24(6):1248-1259 [FREE Full text] [CrossRef] [Medline]
  122. Ma J, Yu MK, Fong S, Ono K, Sage E, Demchak B, et al. Using deep learning to model the hierarchical structure and function of a cell. Nat Methods 2018 Apr;15(4):290-298 [FREE Full text] [CrossRef] [Medline]
  123. Yousefi S, Amrollahi F, Amgad M, Dong C, Lewis JE, Song C, et al. Predicting clinical outcomes from large scale cancer genomic profiles with deep survival models. Sci Rep 2017 Sep 15;7(1):11707 [FREE Full text] [CrossRef] [Medline]
  124. Preuer K, Lewis RP, Hochreiter S, Bender A, Bulusu KC, Klambauer G. DeepSynergy: predicting anti-cancer drug synergy with deep learning. Bioinformatics 2018 May 01;34(9):1538-1546 [FREE Full text] [CrossRef] [Medline]
  125. Liang H, Tsui BY, Ni H, Valentim CC, Baxter SL, Liu G, et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nat Med 2019 Mar;25(3):433-438. [CrossRef] [Medline]
  126. Wang L, You ZH, Chen X, Xia SX, Liu F, Yan X, et al. A computational-based method for predicting drug-target interactions by using stacked autoencoder deep neural network. J Comput Biol 2018 Mar;25(3):361-373. [CrossRef] [Medline]
  127. Yildirim Ö. A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification. Comput Biol Med 2018 May 01;96:189-202. [CrossRef] [Medline]
  128. Chambon S, Galtier MN, Arnal PJ, Wainrib G, Gramfort A. A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Trans Neural Syst Rehabil Eng 2018 Apr;26(4):758-769. [CrossRef] [Medline]
  129. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016 Presented at: CVPR '16; June 27-30, 2016; Las Vegas, NV, USA p. 770-778. [CrossRef]
  130. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016 Presented at: CVPR '16; June 27-30, 2016; Las Vegas, NV, USA p. 2818-2826. [CrossRef]
  131. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015 Presented at: CVPR '15; June 7-12, 2015; Boston, MA, USA p. 1-9. [CrossRef]
  132. Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell 2017 Apr;39(4):640-651. [CrossRef] [Medline]
  133. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. 2015 Presented at: MICCAI '15; October 5-9, 2015; Munich, Germany p. 234-241. [CrossRef]
  134. Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems. 2014 Presented at: NIPS '14; December 8-13, 2014; Montreal, Canada p. 2672-2680. [CrossRef]
  135. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997 Nov 15;9(8):1735-1780. [CrossRef] [Medline]
  136. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 2017 Aug;284(2):574-582. [CrossRef] [Medline]
  137. Chen G, Tsoi A, Xu H, Zheng WJ. Predict effective drug combination by deep belief network and ontology fingerprints. J Biomed Inform 2018 Sep;85:149-154 [FREE Full text] [CrossRef] [Medline]
  138. Kim JK, Choi MJ, Lee JS, Hong JH, Kim CS, Seo SI, et al. A deep belief network and dempster-shafer-based multiclassifier for the pathology stage of prostate cancer. J Healthc Eng 2018;2018:4651582 [FREE Full text] [CrossRef] [Medline]
  139. Alansary A, Oktay O, Li Y, Folgoc LL, Hou B, Vaillant G, et al. Evaluating reinforcement learning agents for anatomical landmark detection. Med Image Anal 2019 Apr;53:156-164 [FREE Full text] [CrossRef] [Medline]
  140. Shu X, Tang J, Li Z, Lai H, Zhang L, Yan S. Personalized age progression with bi-level aging dictionary learning. IEEE Trans Pattern Anal Mach Intell 2018 Apr;40(4):905-917. [CrossRef] [Medline]
  141. Devlin J, Chang MW, Lee K, Toutanova K. Bert: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2019 Presented at: NAACL-HLT '19; June 2-7, 2019; Minneapolis, MN, USA p. 4171-4186. [CrossRef]
  142. Baevski A, Zhou H, Mohamed A, Auli M. wav2vec 2.0: a framework for self-supervised learning of speech representations. In: Proceedings of the 34th Conference on Neural Information Processing Systems. 2020 Dec Presented at: NeurIPS '20; December 6-12, 2020; Vancouver, Canada.
  143. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, et al. Swin transformer: hierarchical vision transformer using shifted windows. In: IEEE/CVF International Conference on Computer Vision. 2021 Oct Presented at: ICCV '21; October 10-17, 2021; Montreal, Canada p. 9992-10002. [CrossRef]


AE: autoencoder
CNN: convolutional neural network
FCNN: fully convolutional neural network
MeSH: Medical Subject Heading
NLP: natural language processing
ResNet: residual neural network
RNN: recurrent neural network
WoS: Web of Science


Edited by A Mavragani; submitted 22.02.21; peer-reviewed by Y Zhao, C Su, Y Zhang; comments to author 17.03.21; revised version received 30.05.21; accepted 20.02.22; published 22.04.22

Copyright

©Seojin Nam, Donghun Kim, Woojin Jung, Yongjun Zhu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.04.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.