Original Paper
Abstract
Background: Advances in biomedical research using deep learning techniques have generated a large volume of related literature. However, there is a lack of scientometric studies that provide a bird’s-eye view of this literature. This absence has led to a partial and fragmented understanding of the field and its progress.
Objective: This study aimed to gain a quantitative and qualitative understanding of the scientific domain by analyzing diverse bibliographic entities that represent the research landscape from multiple perspectives and levels of granularity.
Methods: We searched and retrieved 978 deep learning studies in biomedicine from the PubMed database. A scientometric analysis was performed by analyzing the metadata, content of influential works, and cited references.
Results: In the process, we identified the current leading fields, major research topics and techniques, knowledge diffusion, and research collaboration. There was a predominant focus on applying deep learning, especially convolutional neural networks, to radiology and medical imaging, whereas a few studies focused on protein or genome analysis. Radiology and medical imaging also appeared to be the most significant knowledge sources and an important field in knowledge diffusion, followed by computer science and electrical engineering. A coauthorship analysis revealed various collaborations among engineering-oriented and biomedicine-oriented clusters of disciplines.
Conclusions: This study investigated the landscape of deep learning research in biomedicine and confirmed its interdisciplinary nature. Although it has been successful, we believe that there is a need for diverse applications in certain areas to further boost the contributions of deep learning in addressing biomedical research problems. We expect the results of this study to help researchers and communities better align their present and future work.
doi:10.2196/28114
Introduction
Deep learning is a class of machine learning techniques based on neural networks with multiple processing layers that learn representations of data [ , ]. Stemming from shallow neural networks, many deep learning architectures, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been developed for various purposes [ ]. The exponentially growing amount of data in many fields and recent advances in graphics processing units have further expedited research progress in the field. Deep learning has been actively applied to tasks such as natural language processing (NLP), speech recognition, and computer vision in various domains [ ] and has shown promising results in diverse areas of biomedicine, including radiology [ ], neurology [ ], cardiology [ ], cancer detection and diagnosis [ , ], radiotherapy [ ], and genomics and structural biology [ - ]. Medical image analysis is a field that has actively used deep learning. For example, successful applications have been made in diagnosis [ ], lesion classification or detection [ , ], organ and other substructure localization or segmentation [ , ], and image registration [ , ]. In addition, deep learning has also made an impact on predicting protein structures [ , ] and genomic sequencing [ - ] for biomarker development and drug design.

Despite the increasing number of published biomedical studies on deep learning techniques and applications, there has been a lack of scientometric studies that both qualitatively and quantitatively explore, analyze, and summarize the relevant studies to provide a bird’s-eye view of them. Previous studies have mostly provided qualitative reviews [ , , ], and the few available bibliometric analyses were limited in scope in that the researchers focused on a subarea such as public health [ ] or a particular journal [ ]. The absence of a coherent lens through which we can examine the field from multiple perspectives and levels of granularity leads to a partial and fragmented understanding of the field and its progress. To fill this gap, the aim of this study is to perform a scientometric analysis of metadata, content, and citations to investigate current leading fields, research topics, and techniques, as well as research collaboration and knowledge diffusion in deep learning research in biomedicine. Specifically, we intend to examine (1) biomedical journals that have frequently published deep learning studies and their coverage of research areas, (2) diseases and other biomedical entities that have been frequently studied with deep learning and their relationships, (3) major deep learning architectures in biomedicine and their specific applications, (4) research collaborations among disciplines and organizations, and (5) knowledge diffusion among different areas of study.

Methods
Data
Data were collected from PubMed, a citation and abstract database that includes biomedical literature from MEDLINE and other life science journals indexed with Medical Subject Heading (MeSH) terms [ ]. MeSH is a hierarchically structured biomedical terminology with descriptors organized into 16 categories, with subcategories [ ]. In this study, deep learning [MeSH Major Topic] was used as the query to search for and download deep learning studies from PubMed. Limiting a MeSH term to a major topic increases the precision of retrieval so that only studies that are highly relevant to the topic are found [ ]. As of January 1, 2020, a total of 978 PubMed records with publication years ranging from 2016 to 2020 were retrieved using the National Center for Biotechnology Information (NCBI) Entrez application programming interface. Entrez is a data retrieval system that can be programmatically accessed through its Biopython module to search and export records from the NCBI databases, including PubMed [ , ]. The metadata of the collected bibliographic records included the PubMed identifier (PubMed ID), publication year, journal title and its electronic ISSN, MeSH descriptor terms, and author affiliations. We also downloaded the citation counts and references of each bibliographic record and considered data sources other than PubMed as well. We collected citation counts of the downloaded bibliographic records from Google Scholar (last updated on February 8, 2020) and the subject categories of their publishing journals from the Web of Science (WoS) Core Collection database using the electronic ISSN.

Detailed Methods
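The retrieval step described above can be sketched with Biopython's Entrez module. This is a minimal sketch: the email address and record limit are placeholder values, and the search is wrapped in a function so the snippet does not contact NCBI when loaded.

```python
# Sketch of the PubMed search described above. The query limits results to
# records where deep learning is indexed as a MeSH major topic; the email
# address is a placeholder that NCBI requires for API use.
QUERY = "deep learning[MeSH Major Topic]"

def fetch_pmids(email="researcher@example.org", retmax=1000):
    """Search PubMed via the NCBI Entrez API and return matching PubMed IDs."""
    from Bio import Entrez  # imported here so the sketch loads without Biopython
    Entrez.email = email
    handle = Entrez.esearch(db="pubmed", term=QUERY, retmax=retmax)
    record = Entrez.read(handle)
    handle.close()
    return record["IdList"]
```

In practice, the returned PubMed IDs would then be passed to `Entrez.efetch` to download the full bibliographic records.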
Metadata Analysis
Journals
Journals are an important unit of analysis in scientometrics and have been used to understand specific research areas and disciplines [ ]. In this study, biomedical journals that published deep learning studies were grouped using the WoS Core Collection subject categories and analyzed to identify widely studied research areas and disciplines.

MeSH Terms
Disease-related MeSH terms were analyzed to identify the major diseases that have been studied using deep learning. We mapped descriptors to their corresponding numbers in the MeSH Tree Structures to identify higher-level concepts for descriptors that were too specific and to ensure that all descriptors had the same level of specificity. Ultimately, all descriptors were mapped to 6-digit tree numbers (C00.000), and terms with >1 tree number were counted separately for all the categories they belonged to. In addition, we visualized the co-occurrence network of major MeSH descriptors using VOSviewer (version 1.6.15) [ , ] and its clustering technique [ ] to understand the relationships among the biomedical entities, as well as the clusters they form together.

Author Affiliations
We analyzed author affiliations to understand the major organizations and academic disciplines that were active in deep learning research. The affiliations of 4908 authors extracted from PubMed records were recorded in various formats and manually standardized. We manually reviewed the affiliations to extract organizations, universities, schools, colleges, and departments. For authors with multiple affiliations, we selected the first one listed, which is usually the primary affiliation. We also analyzed coauthorships to investigate research collaboration among organizations and disciplines. To understand research collaboration among different sectors, all organizations were grouped into one of the following categories: universities, hospitals, companies, or research institutes and government agencies. We classified medical schools under hospitals, as the two are normally affiliated with each other. In the category of research institutes or government agencies, we included nonprofit private organizations or foundations and research centers that do not belong to a university, hospital, or company. We extracted academic disciplines from the department section, or from the school or college section when department information was unavailable. As the extracted disciplines were not coherent, with multiple levels and combinations, the data were first cleaned with OpenRefine (originally developed by Metaweb, then Google), an interactive data transformation tool for profiling and cleaning messy data [ ], and then manually grouped based on WoS categories and MeSH Tree Structures according to the following rules: we treated interdisciplinary fields and fields with high occurrence as disciplines separate from their broader fields, and we aggregated multiple fields that frequently co-occurred under a single department name into a single discipline after reviewing their disciplinary similarities.

Content Analysis
We identified influential studies by examining their citation counts in PubMed and Google Scholar. Citation counts from Google Scholar were considered in addition to PubMed because Google Scholar’s substantial citation data encompass WoS and Scopus citations [ ]. After sorting the articles in descending order of citations, the 2 sources showed a Spearman rank correlation coefficient of 0.883. From the PubMed top 150 list (ie, citation count >7) and the Google Scholar top 150 list (ie, citation count >36), we selected the top 109 articles. Among these, we selected the studies that met the criteria of applying or developing deep learning models as the subjects of analysis to understand the major deep learning architectures in biomedicine and their applications. Specifically, we analyzed the research topics of the studies, the data and architectures used for those purposes, and how the black box problem was addressed.

Cited Reference Analysis
We collected the references of the downloaded articles that had PubMed IDs. Citations represent the diffusion of knowledge from cited to citing publications; therefore, analyzing the highly cited references of deep learning studies in biomedicine allows for the investigation of the disciplines and studies that have greatly influenced the field. Toward this end, we visualized networks of knowledge diffusion among WoS subjects using Gephi (v0.9.2) [ ] and examined metrics such as modularity, PageRank score, and weighted outdegree, using modularity for community detection [ ]. PageRank indicates the importance of a node by measuring the quantity and quality of its incoming edges [ ], and weighted outdegree sums the weights of a node’s outgoing edges. We also reviewed the contents of the 10 most highly cited influential works.

Results
Metadata Analysis
Journals
On the basis of the data set, 315 biomedical journals have published deep learning studies, and
lists the top 10 journals selected based on publication size. Different WoS categories and MeSH terms are separated by semicolons.

Of the total of 978 records, 96 (9.8%) were not indexed in the WoS Core Collection and were excluded; an average of 2.02 (SD 1.19) categories was assigned to each of the remaining 882 records. The top 10 subject categories mostly pertained to (1) biomedicine, with 22.2% (196/882) of articles published in Radiology, Nuclear Medicine, and Medical Imaging (along with Engineering, Biomedical: 121/882, 13.7%; Mathematical and Computational Biology: 107/882, 12.1%; Biochemical Research Methods: 103/882, 11.7%; Biotechnology and Applied Microbiology: 76/882, 8.6%; and Neurosciences: 74/882, 8.4%); (2) computer science and engineering (Computer Science, Interdisciplinary Applications: 112/882, 12.7%; Computer Science, Artificial Intelligence: 75/882, 8.5%; and Engineering, Electrical and Electronic: 75/882, 8.5%); and (3) Multidisciplinary Sciences (82/882, 9.3%).
Journal title | Web of Science category | National Library of Medicine catalog Medical Subject Heading term | Publisher | Record count, n |
BMCa Bioinformatics | Biochemical Research Methods; Mathematical and Computational Biology; Biotechnology and Applied Microbiology | Computational Biology | BMC | 38 |
Scientific Reports | Multidisciplinary Sciences | Natural Science Disciplines | Nature Research | 37 |
Neural Networks | Neurosciences; Computer Science, Artificial Intelligence | Nerve Net; Nervous System | Elsevier | 35 |
Proceedings of the Annual International Conference of the IEEEb Engineering in Medicine and Biology Society | N/Ac | Biomedical Engineering | IEEE | 31 |
IEEE Transactions on Medical Imaging | Imaging Science and Photographic Technology; Engineering, Electrical and Electronic; Computer Science, Interdisciplinary Applications; Radiology, Nuclear Medicine, and Medical Imaging; Engineering, Biomedical | Electronics, Medical; Radiography | IEEE | 30 |
Sensors | Chemistry, Analytical; Electrochemistry; Instruments and Instrumentation; Engineering, Electrical and Electronic | Biosensing Techniques | Multidisciplinary Digital Publishing Institute | 26 |
Bioinformatics | Biochemical Research Methods; Mathematical and Computational Biology; Biotechnology and Applied Microbiology | Computational Biology; Genome | Oxford University Press | 22 |
Nature Methods | Biochemical Research Methods | Biomedical Research/methods; Research Design | Nature Research | 21 |
Medical Physics | Radiology, Nuclear Medicine, and Medical Imaging | Biophysics | American Association of Physicists in Medicine | 20 |
PloS one | Multidisciplinary Sciences | Medicine; Science | Public Library of Science | 20 |
aBMC: BioMed Central.
bIEEE: Institute of Electrical and Electronics Engineers.
cN/A: not applicable.
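The per-record category tally reported above (an average of 2.02 categories per indexed record) can be reproduced with a simple counter over each record's semicolon-separated WoS categories. The records below are hypothetical examples, not data from the study.

```python
from collections import Counter

# Hypothetical records, each carrying the semicolon-separated WoS subject
# categories of its publishing journal (as in the table above).
records = [
    "Radiology, Nuclear Medicine, and Medical Imaging; Engineering, Biomedical",
    "Computer Science, Interdisciplinary Applications",
    "Multidisciplinary Sciences",
]

category_counts = Counter()
n_assignments = 0
for rec in records:
    categories = [c.strip() for c in rec.split(";")]
    category_counts.update(categories)
    n_assignments += len(categories)

# Average number of categories assigned per record (4 assignments / 3 records).
mean_categories = n_assignments / len(records)
```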
MeSH Terms
For the main MeSH terms or descriptors, an average of 9 (SD 4.21) terms was assigned to each record as subjects. Among them, we present in
the diseases that were extracted from the C category. In the figure, the area size is proportional to the record count, and the terms are categorized by color. In addition, terms under >1 category were counted multiple times. For instance, the term Digestive System Neoplasms has two parents in the MeSH Tree Structures, Neoplasms and Digestive System Diseases; as such, we counted articles in this category under Neoplasms by Site as well as under Digestive System Neoplasms. Owing to limited space, the 7 categories whose total record counts were ≤10 (eg, Congenital, Hereditary, and Neonatal Diseases and Abnormalities; Nutritional and Metabolic Diseases; and Stomatognathic Diseases) were combined under the Others category, and individual diseases with <10 records in the same category were summed to show only their total count (or with one of the diseases included as an example). In the process, we identified Neoplasms as the most frequently studied disease type, with a total of 199 studies.

We further constructed a co-occurrence network of the complete set of major MeSH descriptors assigned to the records to understand the relationships among the biomedical entities. To enhance legibility, we filtered out terms with <5 occurrences.
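The mapping described in the Methods section, truncating each descriptor's tree number to the 6-digit C00.000 level and counting a descriptor once under every category it belongs to, can be sketched as follows. The helper name and the tree numbers are illustrative assumptions, not taken from the study's data.

```python
from collections import Counter

def truncate_tree_number(tree_number, levels=2):
    """Keep the first `levels` dot-separated components,
    e.g. C04.588.274.476 -> C04.588 (a 6-digit tree number)."""
    return ".".join(tree_number.split(".")[:levels])

# A descriptor with more than one tree number (cf. Digestive System
# Neoplasms, which has two parents) contributes to each of its categories.
descriptor_tree_numbers = ["C04.588.274.476", "C06.301.371"]
category_counts = Counter(
    truncate_tree_number(t) for t in descriptor_tree_numbers
)
```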
presents the visualized network of nodes (100/966, 10.4% of the total terms) with 612 edges and 7 clusters. In the figure, the sizes of the nodes and edges are proportional to the number of occurrences, and the node color indicates the assigned cluster (although the term deep learning was considered nonexclusive to any cluster, as it appeared in all records). As depicted in
, each cluster comprised descriptors from two groups: (1) biomedical domains that deep learning was applied to, including body regions, related diseases, diagnostic imaging methods, and theoretical models, and (2) the purposes of deep learning and techniques used for the tasks, including diagnosis, analysis, and processing of biomedical data. In the first cluster, computer neural networks and software were studied for the purposes of computational biology, specifically protein sequence analysis, drug discovery, and drug design, to achieve precision medicine. These were relevant to the biomedical domains of (1) proteins, related visualization methods (microscopy), and biological models, and (2) neoplasms, related drugs (antineoplastic agents), and diagnostic imaging (radiology). In the second cluster, deep learning and statistical models were used for RNA sequence analysis and computer-assisted radiotherapy planning in relation to the domains of (1) genomics, RNA, and mutation, and (2) brain neoplasms and liver neoplasms. The third cluster comprised (1) heart structures (heart ventricles), cardiovascular diseases, and ultrasonography and (2) eye structures (retina), diseases (glaucoma), and ophthalmological diagnostic techniques. These had been studied for computer-assisted image interpretation using machine learning and deep learning algorithms. The biomedical domain group of the fourth cluster involved specific terms related to neoplasms such as type (adenocarcinoma), different regions (breast neoplasms, lung neoplasms, and colorectal neoplasms), and respective imaging methods (mammography and X-ray computed tomography) to which deep learning and support vector machines have been applied for the purpose of computer-assisted radiographic image interpretation and computer-assisted diagnosis. 
The fifth cluster included (1) brain disorders (Alzheimer disease), neuroimaging, and neurological models; (2) prostatic neoplasms; and (3) diagnostic magnetic resonance imaging and 3D imaging. Supervised machine learning had been used for computer-assisted image processing of these data. In the sixth cluster, automated pattern recognition and computer-assisted signal processing were studied with (1) human activities (eg, movement and face), (2) abnormal brain activities (epilepsy and seizures) and monitoring methods (electroencephalography), and (3) heart diseases and electrocardiography. In the last cluster, medical informatics, specifically data mining and NLP, including speech perception, had been applied to (1) electronic health records, related information storage and retrieval, and theoretical models and (2) skin diseases (skin neoplasms and melanoma) and diagnostic dermoscopy.

Author Affiliations
To investigate research collaboration within the field, we analyzed paper-based coauthorships using author affiliations at different levels of granularity, including organizations and academic disciplines. We extracted organizations from 98.7% (4844/4908) of the total affiliations and visualized the collaboration of different organization types. The top 10 organizations with the largest publication records included Harvard University (37/844, 4.4%), the Chinese Academy of Sciences (21/844, 2.5%; eg, Institute of Computing Technology, Institute of Automation, and Shenzhen Institutes of Advanced Technology), Seoul National University (21/844, 2.5%), Stanford University (20/844, 2.4%), Sun Yat-sen University (14/844, 1.7%; eg, Zhongshan Ophthalmic Center and Collaborative Innovation Center of Cancer Medicine), University of California San Diego (14/844, 1.7%; eg, Institute for Genomic Medicine, Shiley Eye Institute, and Institute for Brain and Mind), University of California San Francisco (14/844, 1.7%), University of Michigan (14/844, 1.7%), Yonsei University (14/844, 1.7%), and the University of Texas Health Science Center at Houston (12/844, 1.4%). The extracted organizations were assigned to one of the following four categories according to their main purpose: universities, hospitals, companies, or research institutes and government agencies. Among these, universities participated in the most papers (567/844, 67.2%), followed by hospitals (429/844, 50.8%), companies (139/844, 16.5%), and research institutes or government agencies (88/844, 10.4%). We used a co-occurrence matrix to visualize the degrees of organizational collaboration, with the co-occurrence values log normalized to compare the relative differences (
). From
, we found that universities were the most active in collaborative research, particularly with hospitals, followed by companies and research institutes or government agencies. Hospitals also frequently collaborated with companies; however, research institutes or government agencies tended not to collaborate much, as they published relatively few studies.

We also examined the collaborations among academic disciplines, which, as described in the Methods section, we could extract from 76.2% (3742/4908) of the total affiliations. Approximately half (386/756, 51.1%) of the papers were completed under disciplinary collaboration.
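Both the organizational matrix above and the discipline network that follows rest on the same pairwise co-occurrence counting. The sketch below uses hypothetical papers and log(1 + x) as one plausible log normalization; the study's exact normalization is not specified.

```python
import math
from collections import Counter
from itertools import combinations

# Hypothetical papers, each represented by the set of organization types
# among its authors' affiliations.
papers = [
    {"university", "hospital"},
    {"university", "hospital", "company"},
    {"university"},
]

# Count each unordered pair of distinct types once per paper.
cooccurrence = Counter()
for types in papers:
    for pair in combinations(sorted(types), 2):
        cooccurrence[pair] += 1

# Log normalization so that relative differences are easier to compare.
log_normalized = {pair: math.log1p(n) for pair, n in cooccurrence.items()}
```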
depicts the network with 36 nodes (36/148, 24.3% of the total) and 267 edges after we filtered out disciplines with weighted degrees <10, where the weighted degree represents the number of times a discipline collaborated with other disciplines. In the figure, the node and edge sizes are proportional to the weighted degree and link strength, respectively, and the node color indicates the assigned cluster.

As shown in the figure, the academic disciplines were assigned to 1 of 6 clusters, including 1 engineering-oriented cluster (cluster 1) and other clusters that encompassed biomedical fields. We specifically looked at the degree of collaboration between the biomedical and engineering disciplines.
depicts that the most prominent collaboration was among Radiology, Medical Imaging, and Nuclear Medicine; Computer Science; and Electronics and Electrical Engineering. There were also strong links between Computer Science or Electronics and Electrical Engineering on the one hand and Biomedical Informatics, Biomedical Engineering, and Pathology and Laboratory Medicine on the other.

Among the top 10 disciplines in
, the following three had published the most papers and had the highest weighted degree and degree centralities: Computer Science (number of papers=195, weighted degree=193, and degree centrality=32); Radiology, Medical Imaging, and Nuclear Medicine (number of papers=168, weighted degree=166, and degree centrality=30); and Electronics and Electrical Engineering (number of papers=161, weighted degree=160, and degree centrality=32). Meanwhile, some disciplines had high weighted degrees compared with their publication counts, indicating their activeness in collaborative research. These included Pathology and Laboratory Medicine (5th in link strength vs 8th in publications) and Public Health and Preventive Medicine (9th in link strength vs 15th in publications). A counterexample was Computational Biology, which was 12th in link strength but 7th in publications.

Content Analysis
Overview
We analyzed the content of influential studies that had made significant contributions to the field through the application or development of deep learning architectures. We identified these studies by examining the citation counts from PubMed and Google Scholar, assigning the 109 most-cited records to one of the following categories: (1) review, (2) application of existing deep learning architectures to certain biomedical domains (denoted by A), or (3) development of a novel deep learning model (denoted by D).
summarizes the 92 papers assigned to the application or development category according to their research topic, in descending order of citation count.

Research topic and number | Task type | Data | Deep learning architectures
(Diagnostic) image analysis | |||||||
A1 [ ] | Classification | Retinal disease OCTa and chest x-ray with pneumonia | Inception
A2 [ ] | Segmentation and classification | Retinal disease OCT | U-net and CNNb
A3 [ ] | Classification | Melanoma dermoscopic images | Inception
A4 [ ] | Survival prediction | Brain glioblastoma MRIc | CNN_S
A6 [ ] | Classification and segmentation | WSId of 13 cancer types | CNN with CAEe and DeconvNet
D1 [ ] | Segmentation | Brain MRI | ResNetf based
A7 [ ] | Prediction | Retinal fundus images with cardiovascular disease | Inception
D2 [ ] | Tracking | Video of freely behaving animal | ResNet-based DeeperCut subset
A8 [ ] | Classification | Colonoscopy video of colorectal polyps | Inception
A9 [ ] | Classification | Lung cancer CTg | CNN
A10 [ ] | Classification and segmentation | Retinal OCT with macular disease | Encoder-decoder CNN
D3 [ ] | Segmentation | Brain glioma MRI | CNN based
D4 [ ] | Binding affinities prediction | Protein-ligand complexes as voxel | SqueezeNet based
A11 [ ] | Survival classification | Brain glioma MRI, functional MRI, and DTIh | CNN and mCNNi
A12 [ ] | Classification | Fundus images with glaucomatous optic neuropathy | Inception
A13 [ ] | Classification | Chest radiographs with pneumonia | ResNet and CheXNet
A14 [ ] | Classification and segmentation | Critical head abnormality CT | ResNet, U-net, and DeepLab
A15 [ ] | Classification | Brain glioma MRI | ResNet
D6 [ ] | Classification | Thoracic disease radiographs | DenseNet based
A16 [ ] | Classification and segmentation | Echocardiogram video with cardiac disease | VGGNet and U-net
A17 [ ] | Classification | Brain positron emission tomography with Alzheimer | Inception
D7 [ ] | Classification | Breast cancer histopathological images | CNN based
A18 [ ] | Classification | Skin tumor images | ResNet
A19 [ ] | Classification and prediction | Chest CT with chronic obstructive pulmonary disease and acute respiratory disease | CNN
A20 [ ] | Segmentation | Brain MRI with autism spectrum disorder | FCNNj
D8 [ ] | Segmentation | Fetal MRI and brain tumor MRI | Proposal network (P-Net) based
A21 [ ] | Classification, prediction, and reconstruction | Natural movies and functional MRI of watching movies | AlexNet and De-CNN
D9 [ ] | Detection and classification | Facial images with a genetic syndrome | CNN based
A22 [ ] | Detection and segmentation | Microscopic images of cells | U-net
A23 [ ] | Classification and localization | Breast cancer mammograms | Faster region-based CNN with VGGNet
A24 [ ] | Segmentation and prediction | Lung cancer CT | Mask-RCNN, CNN with GoogLeNet and RetinaNet
A26 [ ] | Classification | Lung cancer CT | CNN; fully connected NN; SAEk
A27 [ ] | Survival classification | Lung cancer CT | CNN
A29 [ ] | Prediction | Polar maps of myocardial perfusion imaging with CADl | CNN
A30 [ ] | Classification | Prostate cancer MRI | CNN
D12 [ ] | Classification | Liver SWEm with chronic hepatitis B | CNN based
D14 [ ] | Segmentation | Liver cancer CT | DenseNet with U-net based
A31 [ ] | Classification | Fundus images with macular degeneration | AlexNet, GoogLeNet, VGGNet, inception, ResNet, and inception-ResNet
A32 [ ] | Classification | Bladder cancer CT | cuda-convnet
A34 [ ] | Classification | Prostate cancer tissue microarray images | MobileNet
D19 [ ] | Classification | Holographic microscopy of Bacillus species | CNN based
A36 [ ] | Survival classification | Chest CT | CNN
D20 [ ] | Classification and localization | Malignant lung nodule radiographs | ResNet based
A37 [ ] | Classification | Shoulder radiographs with proximal humerus fracture | ResNet
A39 [ ] | Classification | Facial images of hetero and homosexual | VGG-Face
A41 [ ] | Segmentation and classification | CAD CT angiography | CNN and CAE
A42 [ ] | Classification and localization | Radiographs with fracture | U-net
A43 [ ] | Binding classification | Peptide major histocompatibility complex as image-like array | CNN
A44 [ ] | Detection | Lung nodule CT | CNN
A45 [ ] | Classification | Confocal endomicroscopy video of oral cancer | LeNet
A46 [ ] | Classification | WSI of prostate, skin, and breast cancer | MILn with ResNet and RNN
D24 [ ] | Tracking | Video of freely behaving animal | FCNN based
D25 [ ] | Segmentation | Fundus images with glaucoma | U-net based
A47 [ ] | Segmentation and classification | Cardiac disease cine MRI | U-net; M-Net; Dense U-net; SVF-Net; Grid-Net; Dilated CNN
D27 [ ] | Classification | Knee abnormality MRI | AlexNet based
D28 [ ] | Binding affinities prediction | Protein-ligand complexes as grid | CNN based
A50 [ ] | Segmentation | Autosomal dominant polycystic kidney disease CT | FCNN with VGGNet
A51 [ ] | Segmentation and classification | Knee cartilage lesion MRI | VGGNet
A52 [ ] | Classification | Mammograms | ResNet
A54 [ ] | Prediction | CAD CT angiography | FCNN
D31 [ ] | Classification and localization | WSI of lymph nodes in metastatic breast cancer | Inception based
D35 [ ] | Classification | Fluorescence microscopic images of cells | FFNNo based
A56 [ ] | Classification | Retinal fundus images with diabetic retinopathy and breast mass mammography | ResNet; GoogLeNet
Image processing | |||||||
A25 [ ] | Artifact reduction | Brain and abdomen CT and radial MRp data | U-net
A28 [ ] | Resolution enhancement | Fluorescence microscopic images | GANq with U-net and CNN
D15 [ ] | Dealiasing | Compressed sensing brain lesion and cardiac MRI | GAN with U-net and VGGNet based
D16 [ ] | Resolution enhancement | Superresolution localization microscopic images | GAN with U-net–based pix2pix network modified
A33 [ ] | Reconstruction | Brain and pelvic MRI and CT | GAN with FCNN and CNN
D18 [ ] | Artifact reduction | CT | CNN based
A38 [ ] | Reconstruction | Contrast-enhanced brain MRI | Encoder-decoder CNN
D22 [ ] | Reconstruction | Brain MR fingerprinting data | FFNN based
D23 [ ] | Resolution enhancement | Hi-C matrix of chromosomes | CNN based
A48 [ ] | Resolution enhancement | Brain tumor MRI | U-net
D26 [ ] | Reconstruction | Lung vessels CT | CNN based
D32 [ ] | Resolution enhancement | Knee MRI | CNN based
D33 [ ] | Reconstruction | CT | CNN based
D34 [ ] | Registration | Cardiac cine MRI and chest CT | CNN based
Sequence analysis | |||||||
D17 [ ] | Novel structures generation and property prediction | SMILESr | Stack-RNNs with GRUt- and LSTMu based
A40 [ ] | Novel structures generation | SMILES | variational AEv; CNN- and RNN with GRU-based AAEw
D21 [ ] | Gene expression (variant effects) prediction | Genomic sequence | CNN based
D30 [ ] | Novel structures generation and classification | SMILES | GAN with differentiable neural computer and CNN based
A53 [ ] | Novel structures generation | SMILES | LSTM
A57 [ ] | Classification | Antimicrobial peptide sequence | CNN with LSTM
Sequence and image analysis | |||||||
D13 [ ] | Contact prediction | Protein sequence to contact matrix | ResNet based
(Diagnostic) pattern analysis | |||||||
A5 [ ] | Subtype identification (survival classification) | Multi-omics data from liver cancer | AE
D5 [ ] | Phenotype prediction | Genotype | GoogLeNet and deeply supervised net based
D10 [ ] | Survival prediction | Genomic profiles from cancer | FFNN based
D11 [ ] | Drug synergies prediction | Gene expression profiles of cancer cell line and chemical descriptors of drugs | FFNN based
A35 [ ] | NLPx (classification) | Electronic health record with pediatric disease | Attention-based BLSTMy
A49 [ ] | Binding classification | Protein sequence as matrix and drug molecular fingerprint | SAE
D29 [ ] | Classification | Electrocardiogram signal | BLSTM based
A55 [ ] | Classification | Polysomnogram signal | CNN
a. OCT: optical coherence tomography.
b. CNN: convolutional neural network.
c. MRI: magnetic resonance imaging.
d. WSI: whole slide image.
e. CAE: convolutional autoencoder.
f. ResNet: residual networks.
g. CT: computed tomography.
h. DTI: diffusion tensor imaging.
i. mCNN: multicolumn convolutional neural network.
j. FCNN: fully convolutional neural network.
k. SAE: stacked autoencoder.
l. CAD: coronary artery disease.
m. SWE: shear wave elastography.
n. MIL: multiple instance learning.
o. FFNN: feedforward neural network.
p. MR: magnetic resonance.
q. GAN: generative adversarial network.
r. SMILES: simplified molecular input line-entry system.
s. RNN: recurrent neural network.
t. GRU: gated recurrent unit.
u. LSTM: long short-term memory.
v. AE: autoencoder.
w. AAE: adversarial autoencoder.
x. NLP: natural language processing.
y. BLSTM: bidirectional long short-term memory.
Research Topics
In these studies, researchers applied or developed deep learning architectures mainly for image analysis, especially for diagnostic purposes, including the classification or prediction of diseases or survival and the detection, localization, or segmentation of certain areas or abnormalities. These 3 tasks, which all aim to identify the location of an object of interest, differ in their output: detection yields a single reference point; localization yields an area identified through a bounding box, saliency map, or heatmap; and segmentation yields a precise area with clear outlines identified through pixel-wise analysis. Meanwhile, some studies proposed models for image analysis unrelated to diagnosis, such as classifying or segmenting cells in microscopic images and tracking moving animals in videos through pose estimation. Another major objective involved image processing for reconstructing or registering medical images. This included enhancing low-resolution images to high resolution, reconstructing images with different modalities or synthesized targets, reducing artifacts, dealiasing, and aligning medical images.
Meanwhile, several researchers used deep learning architectures to analyze molecules, proteins, and genomes for various purposes. These included drug design or discovery, specifically generating novel molecular structures through sequence analysis and predicting binding affinities through image analysis of complexes; understanding protein structure through image analysis of contact matrices; and predicting phenotypes, cancer survival, drug synergies, and genomic variant effects from genes or genomes. Finally, some studies applied deep learning to the diagnostic classification of sequential data, including electrocardiogram or polysomnogram signals and electronic health records. In summary, in the reviewed literature, we identified a predominant focus on applying or developing deep learning models for image analysis for localization or diagnosis and for image processing, with only a few studies focusing on protein or genome analysis.
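The distinction drawn above can be made concrete: from the same binary mask, one can derive a detection point, a localization bounding box, or the pixel-wise segmentation itself. A minimal sketch with toy data (not taken from any reviewed study):

```python
import numpy as np

# Toy 6x6 binary segmentation mask (1 = object pixels); hypothetical data.
mask = np.zeros((6, 6), dtype=int)
mask[2:5, 1:4] = 1

# Segmentation: the pixel-wise mask itself delineates the exact region.
ys, xs = np.nonzero(mask)

# Detection: a single reference point, here the object's centroid.
point = (ys.mean(), xs.mean())

# Localization: a bounding box (row_min, col_min, row_max, col_max)
# enclosing the region.
box = (ys.min(), xs.min(), ys.max(), xs.max())

print(point, box)  # (3.0, 2.0) (2, 1, 4, 3)
```

Saliency maps and heatmaps replace the hard bounding box with per-pixel scores, but the ordering of granularity (point, area, exact outline) is the same.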
Deep Learning Architectures
Regarding the main architectures, most were CNNs, based on one or more CNN architectures such as the fully convolutional neural network (FCNN) and its variants, including U-net; the residual neural network (ResNet) and its variants; GoogLeNet (Inception v1) or Inception; VGGNet and its variants; and other architectures. Meanwhile, a few researchers based their models on non-CNN feedforward neural networks, including autoencoders (AEs) such as the convolutional AE and stacked AE. Others adapted RNNs, including (bidirectional) long short-term memory and gated recurrent unit networks. Furthermore, models that combined RNNs or AEs with CNNs were also proposed.
Content analysis of the reviewed literature showed that different deep learning architectures were used for different research tasks. Models for classification or prediction tasks using images were predominantly CNN based, with most being ResNet and GoogLeNet or Inception. ResNet with shortcut connections [
] and GoogLeNet or Inception with 1×1 convolutions, factorized convolutions, and regularizations [ , ] allow networks of increased depth and width by addressing problems such as vanishing gradients and computational cost. These models mostly analyzed medical images from magnetic resonance imaging or computed tomography, with cancer-related images often used as input data for diagnostic classification, in addition to image-like representations of protein complexes. Meanwhile, when applying these tasks to data other than images, such as genomic or gene expression profiles and protein sequence matrices, researchers used feedforward neural networks, including AEs, which enabled semi- or unsupervised learning and dimensionality reduction.

Image analysis for segmentation and image processing were also achieved through CNN-based architectures, most of them FCNNs, especially U-net. FCNNs produce an input-sized pixel-wise prediction by replacing the last fully connected layers with convolution layers, making them advantageous for the abovementioned tasks [
], and U-net enhances this performance through long skip connections that concatenate feature maps from the encoder path to the decoder path [ ]. In particular, for medical image processing tasks, a few researchers combined FCNNs (U-net) with other CNNs by adopting the generative adversarial network structure, which generates new instances that mimic the real data through an adversarial process between the generator and discriminator [ ]. We found that images of the brain were often used as input data for these studies.

On the other hand, RNNs were applied to sequence analysis of the string representation of molecules (the simplified molecular input line-entry system) and pattern analysis of sequential data such as signals. A few of these models, especially those generating novel molecular structures, combined RNNs with CNNs by adopting generative adversarial networks, including the adversarial AE. In summary, the findings showed that current deep learning models were predominantly CNN based, with most of them focusing on analyzing medical image data and with different architectures preferred for specific tasks.
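As a rough illustration of the two skip-connection styles discussed above (ResNet's additive shortcut and U-net's long, concatenating skip), the following NumPy sketch uses toy linear layers; the shapes and weights are illustrative and do not correspond to any published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))  # toy feature map: 8 positions, 16 channels

def layer(x, w):
    """A generic learned transformation (here a linear map + ReLU)."""
    return np.maximum(x @ w, 0.0)

w1, w2 = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))

# ResNet-style shortcut: the block learns a residual F(x) that is
# ADDED to the identity input, easing gradient flow in deep networks.
residual_out = x + layer(layer(x, w1), w2)

# U-net-style long skip: encoder features are CONCATENATED onto the
# decoder path along the channel axis, preserving fine detail.
decoder_features = layer(x, w1)
unet_out = np.concatenate([x, decoder_features], axis=1)

print(residual_out.shape, unet_out.shape)  # (8, 16) (8, 32)
```

The addition leaves the channel count unchanged, whereas the concatenation doubles it, which is why U-net decoder blocks must accept the wider input.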
Among these studies,
shows, in detail, the objectives and the proposed methods of the 35 studies with novel model development.

Number | Development objectives | Methods (proposed model) |
D1 | Segment brain anatomical structures in 3D MRIa | Voxelwise Residual Network: trained through residual learning of volumetric feature representation and integrated with contextual information of different modalities and levels |
D2 | Estimate poses to track body parts in various animal behaviors | DeeperCut’s subset DeepLabCut: network fine-tuned on labeled body parts, with deconvolutional layers producing spatial probability densities to predict locations |
D3 | Predict isocitrate dehydrogenase 1 mutation in low-grade glioma with MRI radiomics analysis | Deep learning–based radiomics: segment tumor regions and directly extract radiomics image features from the last convolutional layer, which is encoded for feature selection and prediction |
D4 | Predict protein-ligand binding affinities represented by 3D descriptors | KDEEP: 3D network to predict binding affinity using voxel representation of protein-ligand complex with assigned property according to its atom type |
D5 | Predict phenotype from genotype through the biological hierarchy of cellular subsystems | DCell: visible neural network with structure following cellular subsystem hierarchy to predict cell growth phenotype and genetic interaction from genotype |
D6 | Classify and localize thoracic diseases in chest radiographs | DenseNet-based CheXNeXt: networks trained for each pathology to predict its presence and ensemble and localize indicative parts using class activation mappings |
D7 | Multi-classification of breast cancer from histopathological images | CSDCNNb: trained through end-to-end learning of hierarchical feature representation and optimized feature space distance between breast cancer classes |
D8 | Interactive segmentation of 2D and 3D medical images fine-tuned on a specific image | Bounding box and image-specific fine-tuning–based segmentation: trained for interactive image segmentation using bounding box and fine-tuned for specific image with or without scribble and weighted loss function |
D9 | Facial image analysis for identifying phenotypes of genetic syndromes | DeepGestalt: preprocessed for face detection and multiple regions and extracts phenotype to predict syndromes per region and aggregate probabilities for classification |
D10 | Predict cancer outcomes with genomic profiles through survival models optimization | SurvivalNet: deep survival model with high-dimensional genomic input and Bayesian hyperparameter optimization, interpreted using risk backpropagation |
D11 | Predict synergy effect of novel drug combinations for cancer treatment | DeepSynergy: predicts drug synergy value using cancer cell line gene expressions and chemical descriptors, which are normalized and combined through conic layers |
D12 | Classify liver fibrosis stages in chronic hepatitis B using radiomics of SWEc | DLREd: predict the probability of liver fibrosis stages with quantitative radiomics approach through automatic feature extraction from SWE images |
D13 | Predict protein residue contact map at pixel level with protein features | RaptorX-Contact: combined networks to learn contact occurrence patterns from sequential and pairwise protein features to predict contacts simultaneously at pixel level |
D14 | Segment liver and tumor in abdominal CTe scans | Hybrid Densely connected U-net: 2D and 3D networks to extract intra- and interslice features with volumetric contexts, optimized through hybrid feature fusion layer |
D15 | Reconstruct compressed sensing MRI to dealiased image | DAGANf: conditional GANg stabilized by refinement learning, with the content loss combined adversarial loss incorporating frequency domain data |
D16 | Reconstruct sparse localization microscopy to superresolution image | Artificial Neural Network Accelerated–Photoactivated Localization Microscopy: trained with superresolution PALMh as the target, compares reconstructed and target with loss functions containing conditional GAN |
D17 | Generate novel chemical compound design with desired properties | Reinforcement Learning for Structural Evolution: generate chemically feasible molecule as strings and predict its property, which is integrated with reinforcement learning to bias the design |
D18 | Reduce metal artifacts in reconstructed x-ray CT images | CNNi-based Metal Artifact Reduction: trained on images processed by other Metal Artifact Reduction methods and generates prior images through tissue processing and replaces metal-affected projections |
D19 | Predict Bacillus species to identify anthrax spores in single cell holographic images | HoloConvNet: trained with raw holographic images to directly recognize interspecies difference through representation learning using error backpropagation |
D20 | Classify and detect malignant pulmonary nodules in chest radiographs | Deep learning–based automatic detection: predict the probability of nodules per radiograph for classification and detect nodule location per nodule from activation value |
D21 | Predict tissue-specific gene expression and genomic variant effects on the expression | ExPecto: predict regulatory features from sequences and transform to spatial features and use linear models to predict tissue-specific expression and variant effects |
D22 | Reconstruct MRFj to obtain tissue parameter maps | Deep reconstruction network: trained with a sparse dictionary that maps magnitude image to quantitative tissue parameter values for MRF reconstruction |
D23 | Generate high-resolution Hi-C interaction matrix of chromosomes from a low-resolution matrix | HiCPlus: predict high-resolution matrix through mapping regional interaction features of low-resolution to high-resolution submatrices using neighboring regions |
D24 | Estimate poses to track body parts of freely moving animals | LEAPk: videos preprocessed for egocentric alignment and body parts labeled using GUIl and predicts each location by confidence maps with probability distributions |
D25 | Jointly segment optic disc and cup in fundus images for glaucoma screening | M-Net: multi-scale network for generating multi-label segmentation prediction maps of disc and cup regions using polar transformation |
D26 | Reconstruct limited-view PATm to high-resolution 3D images | Deep gradient descent: learned iterative image reconstruction, incorporated with gradient information of the data fit separately computed from training |
D27 | Predict classifications of and localize knee injuries from MRI | MRNet: networks trained for each diagnosis according to a series to predict its presence and combine probabilities for classification using logistic regression |
D28 | Predict binding affinities between 3D structures of protein-ligand complexes | Pafnucy: structure-based prediction using 3D grid representation of molecular complexes with different orientations as having same atom types |
D29 | Classify electrocardiogram signals based on wavelet transform | Deep bidirectional LSTMn network–based wavelet sequences: generate decomposed frequency subbands of electrocardiogram signal as sequences by wavelet-based layer and use as input for classification |
D30 | Generate novel small molecule structures with possible biological activity | Reinforced Adversarial Neural Computer: combined with GAN and reinforcement learning, generates sequences matching the key feature distributions in the training molecule data |
D31 | Detect and localize breast cancer metastasis in digitized lymph nodes slides | LYmph Node Assistant: predict the likelihood of tumor in tissue area and generate a heat map for slides identifying likely areas |
D32 | Transform low-resolution thick slice knee MRI to high-resolution thin slices | DeepResolve: trained to compute residual images, which are added to low-resolution images to generate their high-resolution images |
D33 | Reconstruct sparse-view CT to suppress artifact and preserve feature | Learned Experts’ Assessment–Based Reconstruction Network: iterative reconstruction using previous compressive sensing methods, with fields of expert-applied regularization terms learned iteration dependently |
D34 | Unsupervised affine and deformable aligning of medical images | Deep Learning Image Registration: multistage registration network and unsupervised training to predict transformation parameters using image similarity and create warped moving images |
D35 | Classify subcellular localization patterns of proteins in microscopy images | Localization Cellular Annotation Tool: predict localization per cell for image-based classification of multi-localizing proteins, combined with gamer annotations for transfer learning |
a. MRI: magnetic resonance imaging.
b. CSDCNN: class structure-based deep convolutional neural network.
c. SWE: shear wave elastography.
d. DLRE: deep learning radiomics of elastography.
e. CT: computed tomography.
f. DAGAN: Dealiasing Generative Adversarial Networks.
g. GAN: generative adversarial network.
h. PALM: photoactivated localization microscopy.
i. CNN: convolutional neural network.
j. MRF: magnetic resonance fingerprinting.
k. LEAP: LEAP Estimates Animal Pose.
l. GUI: graphical user interface.
m. PAT: photoacoustic tomography.
n. LSTM: long short-term memory.
Black Box Problem
In quite a few of the reviewed studies, the black box problem of deep learning was partly addressed, as researchers implemented various methods to improve model interpretability. To understand the prediction results of image analysis models, most used one of the following two techniques to visualize the important regions: (1) activation-based heatmaps [
, , , ], especially class activation maps [ , , , ], and saliency maps [ ] and (2) occlusion testing [ , , , ]. For models analyzing data other than images, there were no generally accepted techniques for model interpretation; researchers suggested methods including adopting an interpretable hierarchical structure such as the cellular subsystem [ ] or anatomical division [ ], using backpropagation [ ], observing gate activations of cells in the neural network [ ], or investigating how corrupted input data affect the prediction and how identical predictions are made for different inputs [ ]. As such, various methods were found to be used to tackle this well-known limitation of deep learning.

Cited Reference Analysis
On average, each examined deep learning study with at least one PubMed-indexed citation (429/978, 43.9%) had 25.8 (SD 20.0) citations. These cited references comprised 9373 unique records, each cited 1.27 times on average (SD 2.16). Excluding the 755 unique records (755/9373, 8.06%) unindexed in the WoS Core Collection, the remaining 8618 records were assigned an average of 1.77 (SD 1.07) categories each. The top 10 WoS categories, assigned to the greatest number of total cited references, pertained to the following three major groups: (1) biomedicine (Radiology, Nuclear Medicine, and Medical Imaging: 2025/11,033, 18.35%; Biochemical Research Methods: 1118/11,033, 10.13%; Mathematical and Computational Biology: 1066/11,033, 9.66%; Biochemistry and Molecular Biology: 1043/11,033, 9.45%; Engineering, Biomedical: 981/11,033, 8.89%; Biotechnology and Applied Microbiology: 916/11,033, 8.3%; Neurosciences: 844/11,033, 7.65%), (2) computer science and engineering (Computer Science, Interdisciplinary Applications: 1041/11,033, 9.44%; Engineering, Electrical and Electronic: 645/11,033, 5.85%), and (3) Multidisciplinary Sciences (1411/11,033, 12.79%).
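The network analysis below ranks WoS categories by PageRank (damping 0.85; ε=0.001) and by weighted outdegree. A minimal, self-contained sketch of both measures on a toy citation network (the category names, edges, and counts are hypothetical, not the study's data):

```python
# Toy directed citation network: edge (A, B, w) means papers in
# category A cited papers in category B a total of w times.
edges = [
    ("Radiology", "Eng. Biomedical", 5),
    ("Radiology", "Computer Science", 3),
    ("Eng. Biomedical", "Radiology", 8),
    ("Computer Science", "Radiology", 6),
    ("Comp. Biology", "Biochemistry", 4),
    ("Biochemistry", "Comp. Biology", 1),
]

def pagerank(edges, d=0.85, eps=0.001):
    """Weighted PageRank by power iteration until the largest change < eps."""
    nodes = sorted({n for e in edges for n in e[:2]})
    out_w = {n: 0.0 for n in nodes}  # total outgoing citation weight
    for src, _, w in edges:
        out_w[src] += w
    pr = {n: 1.0 / len(nodes) for n in nodes}
    while True:
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for src, dst, w in edges:
            new[dst] += d * pr[src] * w / out_w[src]
        # dangling nodes (no outgoing citations) redistribute uniformly
        dangling = sum(pr[n] for n in nodes if out_w[n] == 0)
        for n in nodes:
            new[n] += d * dangling / len(nodes)
        if max(abs(new[n] - pr[n]) for n in nodes) < eps:
            return new
        pr = new

# Weighted outdegree: how heavily a category cites other categories.
weighted_outdegree = {n: 0 for n in {e[0] for e in edges}}
for src, _, w in edges:
    weighted_outdegree[src] += w

ranks = pagerank(edges)
print(max(ranks, key=ranks.get))  # → Radiology
```

PageRank rewards being cited often and by well-cited fields (knowledge exporters), whereas weighted outdegree simply counts outgoing citations (knowledge importers); the two can rank the same category very differently.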
To understand the intellectual structure of how knowledge is transferred among different areas of study through citations, we visualized the citation network of WoS subject categories. In the directed citation network shown in
, the edges were directed clockwise, with the source nodes as the WoS categories of the deep learning studies we examined and the target nodes as the WoS categories of the cited references from which knowledge was obtained. To enhance legibility, we filtered out categories with <100 weighted degrees, excluding self-loops, to form a network of 20 nodes (20/158, 12.7% of the total) and 59 edges (59/2380, 2.48% of the total). In the figure, the node color and size are proportional to the PageRank score (probability 0.85; ε=0.001; A) and weighted outdegree (B), and the edge size and color are proportional to the link strength. PageRank considers not only the quantity but also the quality of incoming edges, identifying important exporters for knowledge diffusion based on how often and by which fields a node is cited. The weighted outdegree, on the other hand, measures outgoing edges and identifies major knowledge importers that frequently cite other fields.

As depicted in
A, categories with high PageRank scores mostly coincided with the frequently cited fields identified above and were grouped into two communities through modularity (upper half and lower half). The upper half centered on Radiology, Nuclear Medicine, and Medical Imaging, which had the highest PageRank score (0.191) and proved to be a field with significant influence on deep learning studies in biomedicine. Important knowledge exporters to this field included Engineering, Biomedical (0.134); Engineering, Electrical and Electronic (0.110); and Computer Science, Interdisciplinary Applications (0.091). The lower half mainly comprised categories with comparable PageRank scores that frequently exchanged knowledge with one another, including Biochemical Research Methods (0.053), Multidisciplinary Sciences (0.053), Biochemistry and Molecular Biology (0.052), Biotechnology and Applied Microbiology (0.050), and Mathematical and Computational Biology (0.048). Specifically, in B, Mathematical and Computational Biology (1992), Biotechnology and Applied Microbiology (1836), and Biochemical Research Methods (1807) were identified as major knowledge importers with the highest weighted outdegrees, whereas Biochemistry and Molecular Biology (344) had a relatively low weighted outdegree, indicating its role as a source of knowledge for these fields.

We analyzed the 10 most frequently cited studies to gain an in-depth understanding of the most influential works and assigned each paper to one of three categories: review, application, or development. Review articles provided comprehensive overviews of the development and applications of deep learning [
, ], with 1 focusing on applications to medical image analysis [ ]. We summarize the 7 application (denoted by A) or development (denoted by D) studies in .

In these studies, excluding the study by Hochreiter and Schmidhuber [
], whose research topic pertained to computer science, deep learning was used for diagnostic image analysis of various areas [ - , ] and for sequence analysis of proteins [ ] or genomes [ ]. The main architectures implemented to achieve the different research objectives mostly comprised CNNs [ - , ] or CNN-based novel models [ , ] and RNNs [ ]. The findings indicated that these deep neural networks either outperformed previous methods or achieved a performance comparable with that of human experts.

Category | Citation count, n | Research topic: task type | Objectives | Methods (deep learning architectures) |
A1 | 53 | Diagnostic image analysis: classification | Apply CNN to classifying skin lesions from clinical images | Inception version 3 fine-tuned end to end with images; tested against dermatologists on 2 binary classifications |
A2 | 51 | Diagnostic image analysis: classification | Apply CNN to detecting referable diabetic retinopathy on retinal fundus images | Inception version 3 trained and validated using 2 data sets of images graded by ophthalmologists |
D1 | 34 | Computer science | Develop a new gradient-based RNN to solve error backflow problems | LSTM achieved constant error flow through memory cells regulated by gate units; tested numerous times against other methods |
D2 | 33 | Sequence analysis: binding (variant effects) prediction | Propose a predictive model for sequence specificities of DNA- and RNA-binding proteins | CNN-based DeepBind trained fully automatically through parallel implementation to predict and visualize binding specificities and variation effects |
A3 | 27 | Diagnostic image analysis: classification | Evaluate factors of using CNNs for thoracoabdominal lymph node detection and interstitial lung disease classification | Compare performances of AlexNet, CifarNet, and GoogLeNet trained with transfer learning and different data set characteristics |
D3 | 23 | Sequence analysis: chromatin profiles (variant effects) prediction | Propose a model for predicting noncoding variant effects from genomic sequence | CNN-based DeepSEA trained for chromatin profile prediction to estimate variant effects with single nucleotide sensitivity and prioritize functional variants |
A4 | 23 | Diagnostic image analysis: classification | Evaluate CNNs for tuberculosis detection on chest radiographs | Compare performances of AlexNet and GoogLeNet and an ensemble of the 2 trained with transfer learning, augmented data set, and radiologist-augmented approach |
a. CNN: convolutional neural network.
b. RNN: recurrent neural network.
c. LSTM: long short-term memory.
Discussion
Principal Findings
With the increase in biomedical research using deep learning techniques, we aimed to gain a quantitative and qualitative understanding of the scientific domain, as reflected in the published literature. For this purpose, we conducted a scientometric analysis of deep learning studies in biomedicine.
Through the metadata and content analyses of bibliographic records, we identified the current leading fields and research topics, the most prominent being radiology and medical imaging. Other biomedical fields that have led this domain included biomedical engineering, mathematical and computational biology, and biochemical research methods. As part of interdisciplinary research, computer science and electrical engineering were important fields as well. The major research topics that were studied included computer-assisted image interpretation and diagnosis (which involved localizing or segmenting certain areas for classifying or predicting diseases), image processing such as medical image reconstruction or registration, and sequence analysis of proteins or RNA to understand protein structure and discover or design drugs. These topics were particularly prevalent in their application to neoplasms.
Furthermore, although deep learning techniques that had been proposed for these themes were predominantly CNN based, different architectures are preferred for different research tasks. The findings showed that CNN-based models mostly focused on analyzing medical image data, with RNN architectures for sequential data analysis and AEs for unsupervised dimensionality reduction yet to be actively explored. Other deep learning methods, such as deep belief networks [
, ], deep Q network [ ], and dictionary learning [ ], have also been applied to biomedical research but were excluded from the content analysis because of low citation counts. As deep learning is a rapidly evolving field, future biomedical researchers should pay attention to emerging trends and keep aware of state-of-the-art models for enhanced performance, such as transformer-based models, including bidirectional encoder representations from transformers for NLP [ ], wav2vec for speech recognition [ ], and the Swin transformer for computer vision tasks of image classification, segmentation, and object detection [ ].

The findings from the analysis of the cited references revealed patterns of knowledge diffusion. In this analysis, radiology and medical imaging appeared to be the most significant knowledge source and an important field in the knowledge diffusion network. Relatedly, we identified knowledge exporters to this field, including biomedical engineering, electrical engineering, and computer science, as important despite their relatively low citation counts. Furthermore, citation patterns revealed clique-like relationships among four fields (biochemical research methods, biochemistry and molecular biology, biotechnology and applied microbiology, and mathematical and computational biology), with each being a source of knowledge and diffusion for the others.
Beyond knowledge diffusion, knowledge integration was also encouraged through collaboration among authors from different organizations and academic disciplines. Coauthorship analysis revealed active research collaboration between universities and hospitals and between hospitals and companies. Separately, we identified an engineering-oriented cluster and biomedicine-oriented clusters of disciplines, among which we observed a range of disciplinary collaborations. The 2 most prominent were between radiology and medical imaging and computer science and between radiology and medical imaging and electrical engineering; these 3 disciplines were also the most involved in publishing and collaboration. Meanwhile, pathology and public health showed a high ratio of collaborative research to publications, whereas computational biology showed a low ratio.
Limitations
This study has the following limitations that may have affected data analysis and interpretation. First, focusing only on published studies may have underrepresented the field. Second, publication data were retrieved only from PubMed; although PubMed is one of the largest databases for biomedical literature, other databases such as DBLP (DataBase systems and Logic Programming) may also index relevant studies. Third, the use of PubMed limited our data to biomedical journals and proceedings. Given that deep learning is an active research area in computer science, computer science conference articles are valuable sources of data that were not considered in this study. Finally, our data retrieval strategy involved searching for deep learning as the major MeSH term, which increased precision but may have omitted relevant studies that were not explicitly tagged as deep learning. We plan to expand our scope in future work to consider other bibliographic databases and search terms.
Conclusions
In this study, we investigated the landscape of deep learning research in biomedicine and identified major research topics, influential works, knowledge diffusion, and research collaboration through scientometric analyses. The results showed a predominant focus on research applying deep learning techniques, especially CNNs, to radiology and medical imaging and confirmed the interdisciplinary nature of this domain, especially between engineering and biomedical fields. However, diverse biomedical applications of deep learning in the fields of genetics and genomics, medical informatics focusing on text or speech data, and signal processing of various activities (eg, brain, heart, and human) will further boost the contribution of deep learning in addressing biomedical research problems. As such, although deep learning research in biomedicine has been successful, we believe that there is a need for further exploration, and we expect the results of this study to help researchers and communities better align their present and future work.
Authors' Contributions
SN and YZ designed the study. SN, DK, and WJ analyzed the data. SN took the lead in the writing of the manuscript. YZ supervised and implemented the study. All authors contributed to critical edits and approved the final manuscript.
Conflicts of Interest
None declared.
References
- LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015 May 28;521(7553):436-444. [CrossRef] [Medline]
- Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP. Deep learning in neuroradiology. AJNR Am J Neuroradiol 2018 Oct;39(10):1776-1784 [FREE Full text] [CrossRef] [Medline]
- Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw 2015 Jan;61:85-117. [CrossRef] [Medline]
- Litjens G, Kooi T, Bejnordi BE, Setio AA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017 Dec;42:60-88. [CrossRef] [Medline]
- Dilsizian ME, Siegel EL. Machine meets biology: a primer on artificial intelligence in cardiology and cardiac imaging. Curr Cardiol Rep 2018 Oct 18;20(12):139. [CrossRef] [Medline]
- Hu Z, Tang J, Wang Z, Zhang K, Zhang L, Sun Q. Deep learning for image-based cancer detection and diagnosis − a survey. Pattern Recognit 2018 Nov;83:134-149. [CrossRef]
- Xue Y, Chen S, Qin J, Liu Y, Huang B, Chen H. Application of deep learning in automated analysis of molecular images in cancer: a survey. Contrast Media Mol Imaging 2017;2017:9512370 [FREE Full text] [CrossRef] [Medline]
- Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018 Jul 01;98:126-146. [CrossRef] [Medline]
- Mamoshina P, Vieira A, Putin E, Zhavoronkov A. Applications of deep learning in biomedicine. Mol Pharm 2016 May 02;13(5):1445-1454. [CrossRef] [Medline]
- Cao C, Liu F, Tan H, Song D, Shu W, Li W, et al. Deep learning and its applications in biomedicine. Genomics Proteomics Bioinformatics 2018 Feb;16(1):17-32 [FREE Full text] [CrossRef] [Medline]
- Wainberg M, Merico D, Delong A, Frey BJ. Deep learning in biomedicine. Nat Biotechnol 2018 Oct;36(9):829-838. [CrossRef] [Medline]
- Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017 Feb 02;542(7639):115-118 [FREE Full text] [CrossRef] [Medline]
- Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016 Dec 13;316(22):2402-2410. [CrossRef] [Medline]
- Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging 2016 May;35(5):1285-1298 [FREE Full text] [CrossRef] [Medline]
- de Vos BD, Wolterink JM, de Jong PA, Viergever MA, Išgum I. 2D image classification for 3D anatomy localization: employing deep convolutional neural networks. In: Proceedings Volume 9784, Medical Imaging 2016: Image Processing. 2016 Presented at: SPIE '16; February 27-March 3, 2016; San Diego, CA, USA. [CrossRef]
- Wang G, Li W, Zuluaga MA, Pratt R, Patel PA, Aertsen M, et al. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans Med Imaging 2018 Jul;37(7):1562-1573 [FREE Full text] [CrossRef] [Medline]
- Miao S, Wang ZJ, Liao R. A CNN regression approach for real-time 2D/3D registration. IEEE Trans Med Imaging 2016 May;35(5):1352-1363. [CrossRef] [Medline]
- de Vos BD, Berendsen FF, Viergever MA, Sokooti H, Staring M, Išgum I. A deep learning framework for unsupervised affine and deformable image registration. Med Image Anal 2019 Feb;52:128-143. [CrossRef] [Medline]
- Lin Z, Lanchantin J, Qi Y. MUST-CNN: a multilayer shift-and-stitch deep convolutional architecture for sequence-based protein structure prediction. In: Proceedings of the 13th AAAI Conference on Artificial Intelligence. 2016 Presented at: IAAI '16; February 12-17, 2016; Phoenix, AZ, USA p. 27-34.
- Wang S, Li W, Liu S, Xu J. RaptorX-Property: a web server for protein structure property prediction. Nucleic Acids Res 2016 Jul 08;44(W1):W430-W435 [FREE Full text] [CrossRef] [Medline]
- Alipanahi B, Delong A, Weirauch MT, Frey BJ. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat Biotechnol 2015 Aug;33(8):831-838. [CrossRef] [Medline]
- Zhou J, Troyanskaya OG. Predicting effects of noncoding variants with deep learning-based sequence model. Nat Methods 2015 Oct;12(10):931-934 [FREE Full text] [CrossRef] [Medline]
- Quang D, Xie X. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences. Nucleic Acids Res 2016 Jun 20;44(11):e107 [FREE Full text] [CrossRef] [Medline]
- dos Santos BS, Steiner MT, Fenerich AT, Lima RH. Data mining and machine learning techniques applied to public health problems: a bibliometric analysis from 2009 to 2018. Comput Ind Eng 2019 Dec;138:106120. [CrossRef]
- Shukla N, Merigó JM, Lammers T, Miranda L. Half a century of computer methods and programs in biomedicine: a bibliometric analysis from 1970 to 2017. Comput Methods Programs Biomed 2020 Jan;183:105075. [CrossRef] [Medline]
- Entrez help. Bethesda, MD, USA: National Center for Biotechnology Information (US); 2006.
- Introduction to MeSH. National Library of Medicine. 2019. URL: https://www.nlm.nih.gov/mesh/introduction.html [accessed 2020-05-25]
- Chapman D. Advanced search features of PubMed. J Can Acad Child Adolesc Psychiatry 2009 Feb;18(1):58-59 [FREE Full text] [Medline]
- Chang J, Chapman B, Friedberg I, Hamelryck T, de Hoon M, Cock P, et al. Biopython tutorial and cookbook. Biopython. 2020. URL: http://biopython.org/DIST/docs/tutorial/Tutorial.pdf [accessed 2020-05-25]
- Sugimoto CR, Larivière V. Measuring research: what everyone needs to know. Oxford, UK: Oxford University Press; 2018.
- Van Eck NJ, Waltman L. Manual for VOSviewer version 1.6.15. VOSviewer. 2020. URL: https://www.vosviewer.com/documentation/Manual_VOSviewer_1.6.15.pdf [accessed 2020-05-25]
- Van Eck NJ, Waltman L. Visualizing bibliometric networks. In: Ding Y, Rousseau R, Wolfram D, editors. Measuring scholarly impact. Berlin, Germany: Springer; 2014:285-320.
- Waltman L, van Eck NJ, Noyons EC. A unified approach to mapping and clustering of bibliometric networks. J Informetr 2010 Oct;4(4):629-635. [CrossRef]
- Verborgh R, De Wilde M. Using OpenRefine. Birmingham, UK: Packt Publishing; 2013.
- Martín-Martín A, Orduna-Malea E, Thelwall M, Delgado López-Cózar E. Google Scholar, Web of Science, and Scopus: a systematic comparison of citations in 252 subject categories. J Informetr 2018 Nov;12(4):1160-1177. [CrossRef]
- Bastian M, Heymann S, Jacomy M. Gephi: an open source software for exploring and manipulating networks. In: Proceedings of the 3rd International AAAI Conference on Web and Social Media. 2009 Presented at: ICWSM '09; May 17-20, 2009; San Jose, CA, USA p. 17-20.
- Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. J Stat Mech 2008 Oct 09;2008(10):P10008. [CrossRef]
- Brin S, Page L. The anatomy of a large-scale hypertextual web search engine. Comput Netw 1998 Apr;30(1-7):107-117. [CrossRef]
- Kermany DS, Goldbaum M, Cai W, Valentim CC, Liang H, Baxter SL, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018 Feb 22;172(5):1122-31.e9 [FREE Full text] [CrossRef] [Medline]
- De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med 2018 Sep;24(9):1342-1350. [CrossRef] [Medline]
- Haenssle HA, Fink C, Schneiderbauer R, Toberer F, Buhl T, Blum A, Reader Study Level-I and Level-II Groups, et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol 2018 Aug 01;29(8):1836-1842 [FREE Full text] [CrossRef] [Medline]
- Lao J, Chen Y, Li ZC, Li Q, Zhang J, Liu J, et al. A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme. Sci Rep 2017 Sep 04;7(1):10353 [FREE Full text] [CrossRef] [Medline]
- Saltz J, Gupta R, Hou L, Kurc T, Singh P, Nguyen V, Cancer Genome Atlas Research Network, et al. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell Rep 2018 Apr 03;23(1):181-93.e7 [FREE Full text] [CrossRef] [Medline]
- Chen H, Dou Q, Yu L, Qin J, Heng PA. VoxResNet: deep voxelwise residual networks for brain segmentation from 3D MR images. Neuroimage 2018 Apr 15;170:446-455. [CrossRef] [Medline]
- Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng 2018 Mar;2(3):158-164. [CrossRef] [Medline]
- Mathis A, Mamidanna P, Cury KM, Abe T, Murthy VN, Mathis MW, et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci 2018 Sep;21(9):1281-1289. [CrossRef] [Medline]
- Byrne MF, Chapados N, Soudan F, Oertel C, Linares Pérez M, Kelly R, et al. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut 2019 Jan;68(1):94-100 [FREE Full text] [CrossRef] [Medline]
- Ciompi F, Chung K, van Riel SJ, Setio AA, Gerke PK, Jacobs C, et al. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci Rep 2017 Apr 19;7:46479 [FREE Full text] [CrossRef] [Medline]
- Schlegl T, Waldstein SM, Bogunovic H, Endstraßer F, Sadeghipour A, Philip AM, et al. Fully automated detection and quantification of macular fluid in OCT using deep learning. Ophthalmology 2018 Apr;125(4):549-558 [FREE Full text] [CrossRef] [Medline]
- Li Z, Wang Y, Yu J, Guo Y, Cao W. Deep learning based radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma. Sci Rep 2017 Jul 14;7(1):5467 [FREE Full text] [CrossRef] [Medline]
- Jiménez J, Škalič M, Martínez-Rosell G, De Fabritiis G. KDEEP: protein-ligand absolute binding affinity prediction via 3D-convolutional neural networks. J Chem Inf Model 2018 Feb 26;58(2):287-296. [CrossRef] [Medline]
- Nie D, Zhang H, Adeli E, Liu L, Shen D. 3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients. Med Image Comput Comput Assist Interv 2016 Oct;9901:212-220 [FREE Full text] [CrossRef] [Medline]
- Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology 2018 Aug;125(8):1199-1206. [CrossRef] [Medline]
- Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med 2018 Nov;15(11):e1002683 [FREE Full text] [CrossRef] [Medline]
- Chilamkurthy S, Ghosh R, Tanamala S, Biviji M, Campeau NG, Venugopal VK, et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet 2018 Dec 01;392(10162):2388-2396. [CrossRef] [Medline]
- Chang P, Grinband J, Weinberg BD, Bardis M, Khy M, Cadena G, et al. Deep-learning convolutional neural networks accurately classify genetic mutations in gliomas. AJNR Am J Neuroradiol 2018 Jul;39(7):1201-1207 [FREE Full text] [CrossRef] [Medline]
- Rajpurkar P, Irvin J, Ball RL, Zhu K, Yang B, Mehta H, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med 2018 Nov;15(11):e1002686 [FREE Full text] [CrossRef] [Medline]
- Zhang J, Gajjala S, Agrawal P, Tison GH, Hallock LA, Beussink-Nelson L, et al. Fully automated echocardiogram interpretation in clinical practice. Circulation 2018 Oct 16;138(16):1623-1635 [FREE Full text] [CrossRef] [Medline]
- Ding Y, Sohn JH, Kawczynski MG, Trivedi H, Harnish R, Jenkins NW, et al. A deep learning model to predict a diagnosis of Alzheimer disease by using 18F-FDG PET of the brain. Radiology 2019 Feb;290(2):456-464 [FREE Full text] [CrossRef] [Medline]
- Han Z, Wei B, Zheng Y, Yin Y, Li K, Li S. Breast cancer multi-classification from histopathological images with structured deep learning model. Sci Rep 2017 Jun 23;7(1):4172 [FREE Full text] [CrossRef] [Medline]
- Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol 2018 Jul;138(7):1529-1538 [FREE Full text] [CrossRef] [Medline]
- González G, Ash SY, Vegas-Sánchez-Ferrero G, Onieva Onieva J, Rahaghi FN, Ross JC, COPDGene and ECLIPSE Investigators. Disease staging and prognosis in smokers using deep learning in chest computed tomography. Am J Respir Crit Care Med 2018 Jan 15;197(2):193-203 [FREE Full text] [CrossRef] [Medline]
- Dolz J, Desrosiers C, Ben Ayed I. 3D fully convolutional networks for subcortical segmentation in MRI: a large-scale study. Neuroimage 2018 Apr 15;170:456-470. [CrossRef] [Medline]
- Wen H, Shi J, Zhang Y, Lu KH, Cao J, Liu Z. Neural encoding and decoding with deep learning for dynamic natural vision. Cereb Cortex 2018 Dec 01;28(12):4136-4160 [FREE Full text] [CrossRef] [Medline]
- Gurovich Y, Hanani Y, Bar O, Nadav G, Fleischer N, Gelbman D, et al. Identifying facial phenotypes of genetic disorders using deep learning. Nat Med 2019 Jan;25(1):60-64. [CrossRef] [Medline]
- Falk T, Mai D, Bensch R, Çiçek Ö, Abdulkadir A, Marrakchi Y, et al. U-Net: deep learning for cell counting, detection, and morphometry. Nat Methods 2019 Jan;16(1):67-70. [CrossRef] [Medline]
- Ribli D, Horváth A, Unger Z, Pollner P, Csabai I. Detecting and classifying lesions in mammograms with Deep Learning. Sci Rep 2018 Mar 15;8(1):4165 [FREE Full text] [CrossRef] [Medline]
- Ardila D, Kiraly AP, Bharadwaj S, Choi B, Reicher JJ, Peng L, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 2019 Jun;25(6):954-961. [CrossRef] [Medline]
- Song Q, Zhao L, Luo X, Dou X. Using deep learning for classification of lung nodules on computed tomography images. J Healthc Eng 2017;2017:8314740 [FREE Full text] [CrossRef] [Medline]
- Hosny A, Parmar C, Coroller TP, Grossmann P, Zeleznik R, Kumar A, et al. Deep learning for lung cancer prognostication: a retrospective multi-cohort radiomics study. PLoS Med 2018 Nov;15(11):e1002711 [FREE Full text] [CrossRef] [Medline]
- Betancur J, Commandeur F, Motlagh M, Sharir T, Einstein AJ, Bokhari S, et al. Deep learning for prediction of obstructive disease from fast myocardial perfusion SPECT: a multicenter study. JACC Cardiovasc Imaging 2018 Nov;11(11):1654-1663 [FREE Full text] [CrossRef] [Medline]
- Wang X, Yang W, Weinreb J, Han J, Li Q, Kong X, et al. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning. Sci Rep 2017 Nov 13;7(1):15415 [FREE Full text] [CrossRef] [Medline]
- Wang K, Lu X, Zhou H, Gao Y, Zheng J, Tong M, et al. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: a prospective multicentre study. Gut 2019 Apr;68(4):729-741 [FREE Full text] [CrossRef] [Medline]
- Li X, Chen H, Qi X, Dou Q, Fu CW, Heng PA. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans Med Imaging 2018 Dec;37(12):2663-2674. [CrossRef] [Medline]
- Grassmann F, Mengelkamp J, Brandl C, Harsch S, Zimmermann ME, Linkohr B, et al. A deep learning algorithm for prediction of age-related eye disease study severity scale for age-related macular degeneration from color fundus photography. Ophthalmology 2018 Sep;125(9):1410-1420 [FREE Full text] [CrossRef] [Medline]
- Cha KH, Hadjiiski L, Chan HP, Weizer AZ, Alva A, Cohan RH, et al. Bladder cancer treatment response assessment in CT using radiomics with deep-learning. Sci Rep 2017 Aug 18;7(1):8738 [FREE Full text] [CrossRef] [Medline]
- Arvaniti E, Fricker KS, Moret M, Rupp N, Hermanns T, Fankhauser C, et al. Automated Gleason grading of prostate cancer tissue microarrays via deep learning. Sci Rep 2018 Aug 13;8(1):12054 [FREE Full text] [CrossRef] [Medline]
- Jo Y, Park S, Jung J, Yoon J, Joo H, Kim MH, et al. Holographic deep learning for rapid optical screening of anthrax spores. Sci Adv 2017 Aug;3(8):e1700606 [FREE Full text] [CrossRef] [Medline]
- Oakden-Rayner L, Carneiro G, Bessen T, Nascimento JC, Bradley AP, Palmer LJ. Precision radiology: predicting longevity using feature engineering and deep learning methods in a radiomics framework. Sci Rep 2017 May 10;7(1):1648 [FREE Full text] [CrossRef] [Medline]
- Nam JG, Park S, Hwang EJ, Lee JH, Jin KN, Lim KY, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology 2019 Jan;290(1):218-228. [CrossRef] [Medline]
- Chung SW, Han SS, Lee JW, Oh KS, Kim NR, Yoon JP, et al. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm. Acta Orthop 2018 Aug;89(4):468-473 [FREE Full text] [CrossRef] [Medline]
- Wang Y, Kosinski M. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J Pers Soc Psychol 2018 Feb;114(2):246-257. [CrossRef] [Medline]
- Zreik M, Lessmann N, van Hamersvelt RW, Wolterink JM, Voskuil M, Viergever MA, et al. Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis. Med Image Anal 2018 Feb;44:72-85. [CrossRef] [Medline]
- Lindsey R, Daluiski A, Chopra S, Lachapelle A, Mozer M, Sicular S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A 2018 Nov 06;115(45):11591-11596 [FREE Full text] [CrossRef] [Medline]
- Han Y, Kim D. Deep convolutional neural networks for pan-specific peptide-MHC class I binding prediction. BMC Bioinformatics 2017 Dec 28;18(1):585 [FREE Full text] [CrossRef] [Medline]
- Jiang H, Ma H, Qian W, Gao M, Li Y, Jiang H, et al. An automatic detection system of lung nodule based on multigroup patch-based deep learning network. IEEE J Biomed Health Inform 2018 Jul;22(4):1227-1237. [CrossRef] [Medline]
- Aubreville M, Knipfer C, Oetter N, Jaremenko C, Rodner E, Denzler J, et al. Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning. Sci Rep 2017 Sep 20;7(1):11979 [FREE Full text] [CrossRef] [Medline]
- Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ, et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med 2019 Aug;25(8):1301-1309 [FREE Full text] [CrossRef] [Medline]
- Pereira TD, Aldarondo DE, Willmore L, Kislin M, Wang SS, Murthy M, et al. Fast animal pose estimation using deep neural networks. Nat Methods 2019 Jan;16(1):117-125 [FREE Full text] [CrossRef] [Medline]
- Fu H, Cheng J, Xu Y, Wong DW, Liu J, Cao X. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans Med Imaging 2018 Jul;37(7):1597-1605. [CrossRef] [Medline]
- Bernard O, Lalande A, Zotti C, Cervenansky F, Yang X, Heng PA, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Trans Med Imaging 2018 Nov;37(11):2514-2525. [CrossRef] [Medline]
- Bien N, Rajpurkar P, Ball RL, Irvin J, Park A, Jones E, et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Med 2018 Nov;15(11):e1002699 [FREE Full text] [CrossRef] [Medline]
- Stepniewska-Dziubinska MM, Zielenkiewicz P, Siedlecki P. Development and evaluation of a deep learning model for protein-ligand binding affinity prediction. Bioinformatics 2018 Nov 01;34(21):3666-3674 [FREE Full text] [CrossRef] [Medline]
- Sharma K, Rupprecht C, Caroli A, Aparicio MC, Remuzzi A, Baust M, et al. Automatic segmentation of kidneys using deep learning for total kidney volume quantification in autosomal dominant polycystic kidney disease. Sci Rep 2017 May 17;7(1):2049 [FREE Full text] [CrossRef] [Medline]
- Liu F, Zhou Z, Samsonov A, Blankenbaker D, Larison W, Kanarek A, et al. Deep learning approach for evaluating knee MR images: achieving high diagnostic performance for cartilage lesion detection. Radiology 2018 Oct;289(1):160-169 [FREE Full text] [CrossRef] [Medline]
- Lehman CD, Yala A, Schuster T, Dontchos B, Bahl M, Swanson K, et al. Mammographic breast density assessment using deep learning: clinical implementation. Radiology 2019 Jan;290(1):52-58. [CrossRef] [Medline]
- Coenen A, Kim YH, Kruk M, Tesche C, De Geer J, Kurata A, et al. Diagnostic accuracy of a machine-learning approach to coronary computed tomographic angiography-based fractional flow reserve: result from the MACHINE consortium. Circ Cardiovasc Imaging 2018 Jun;11(6):e007217. [CrossRef] [Medline]
- Steiner DF, MacDonald R, Liu Y, Truszkowski P, Hipp JD, Gammage C, et al. Impact of deep learning assistance on the histopathologic review of lymph nodes for metastatic breast cancer. Am J Surg Pathol 2018 Dec;42(12):1636-1646 [FREE Full text] [CrossRef] [Medline]
- Sullivan DP, Winsnes CF, Åkesson L, Hjelmare M, Wiking M, Schutten R, et al. Deep learning is combined with massive-scale citizen science to improve large-scale image classification. Nat Biotechnol 2018 Oct;36(9):820-828. [CrossRef] [Medline]
- Chang K, Balachandar N, Lam C, Yi D, Brown J, Beers A, et al. Distributed deep learning networks among institutions for medical imaging. J Am Med Inform Assoc 2018 Aug 01;25(8):945-954 [FREE Full text] [CrossRef] [Medline]
- Han Y, Yoo J, Kim HH, Shin HJ, Sung K, Ye JC. Deep learning with domain adaptation for accelerated projection-reconstruction MR. Magn Reson Med 2018 Sep;80(3):1189-1205. [CrossRef] [Medline]
- Wang H, Rivenson Y, Jin Y, Wei Z, Gao R, Günaydın H, et al. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat Methods 2019 Jan;16(1):103-110 [FREE Full text] [CrossRef] [Medline]
- Yang G, Yu S, Dong H, Slabaugh G, Dragotti PL, Ye X, et al. DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans Med Imaging 2018 Jun;37(6):1310-1321. [CrossRef] [Medline]
- Ouyang W, Aristov A, Lelek M, Hao X, Zimmer C. Deep learning massively accelerates super-resolution localization microscopy. Nat Biotechnol 2018 Jun;36(5):460-468. [CrossRef] [Medline]
- Nie D, Trullo R, Lian J, Wang L, Petitjean C, Ruan S, et al. Medical image synthesis with deep convolutional adversarial networks. IEEE Trans Biomed Eng 2018 Dec;65(12):2720-2730 [FREE Full text] [CrossRef] [Medline]
- Zhang Y, Yu H. Convolutional neural network based metal artifact reduction in X-ray computed tomography. IEEE Trans Med Imaging 2018 Jun;37(6):1370-1381 [FREE Full text] [CrossRef] [Medline]
- Gong E, Pauly JM, Wintermark M, Zaharchuk G. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging 2018 Aug;48(2):330-340. [CrossRef] [Medline]
- Cohen O, Zhu B, Rosen MS. MR fingerprinting deep reconstruction network (DRONE). Magn Reson Med 2018 Sep;80(3):885-894 [FREE Full text] [CrossRef] [Medline]
- Zhang Y, An L, Xu J, Zhang B, Zheng WJ, Hu M, et al. Enhancing Hi-C data resolution with deep convolutional neural network HiCPlus. Nat Commun 2018 Feb 21;9(1):750 [FREE Full text] [CrossRef] [Medline]
- Hyun CM, Kim HP, Lee SM, Lee S, Seo JK. Deep learning for undersampled MRI reconstruction. Phys Med Biol 2018 Jun 25;63(13):135007. [CrossRef] [Medline]
- Hauptmann A, Lucka F, Betcke M, Huynh N, Adler J, Cox B, et al. Model-based learning for accelerated, limited-view 3-D photoacoustic tomography. IEEE Trans Med Imaging 2018 Jun;37(6):1382-1393. [CrossRef] [Medline]
- Chaudhari AS, Fang Z, Kogan F, Wood J, Stevens KJ, Gibbons EK, et al. Super-resolution musculoskeletal MRI using deep learning. Magn Reson Med 2018 Nov;80(5):2139-2154 [FREE Full text] [CrossRef] [Medline]
- Chen H, Zhang Y, Chen Y, Zhang J, Zhang W, Sun H, et al. LEARN: learned experts' assessment-based reconstruction network for sparse-data CT. IEEE Trans Med Imaging 2018 Jun;37(6):1333-1347 [FREE Full text] [CrossRef] [Medline]
- Popova M, Isayev O, Tropsha A. Deep reinforcement learning for de novo drug design. Sci Adv 2018 Jul;4(7):eaap7885 [FREE Full text] [CrossRef] [Medline]
- Blaschke T, Olivecrona M, Engkvist O, Bajorath J, Chen H. Application of generative autoencoder in de novo molecular design. Mol Inform 2018 Jan;37(1-2):1700123 [FREE Full text] [CrossRef] [Medline]
- Zhou J, Theesfeld CL, Yao K, Chen KM, Wong AK, Troyanskaya OG. Deep learning sequence-based ab initio prediction of variant effects on expression and disease risk. Nat Genet 2018 Aug;50(8):1171-1179 [FREE Full text] [CrossRef] [Medline]
- Putin E, Asadulaev A, Ivanenkov Y, Aladinskiy V, Sanchez-Lengeling B, Aspuru-Guzik A, et al. Reinforced adversarial neural computer for de novo molecular design. J Chem Inf Model 2018 Jun 25;58(6):1194-1204 [FREE Full text] [CrossRef] [Medline]
- Merk D, Friedrich L, Grisoni F, Schneider G. De novo design of bioactive small molecules by artificial intelligence. Mol Inform 2018 Jan;37(1-2):1700153 [FREE Full text] [CrossRef] [Medline]
- Veltri D, Kamath U, Shehu A. Deep learning improves antimicrobial peptide recognition. Bioinformatics 2018 Aug 15;34(16):2740-2747 [FREE Full text] [CrossRef] [Medline]
- Wang S, Sun S, Xu J. Analysis of deep learning methods for blind protein contact prediction in CASP12. Proteins 2018 Mar;86 Suppl 1:67-77 [FREE Full text] [CrossRef] [Medline]
- Chaudhary K, Poirion OB, Lu L, Garmire LX. Deep learning-based multi-omics integration robustly predicts survival in liver cancer. Clin Cancer Res 2018 Mar 15;24(6):1248-1259 [FREE Full text] [CrossRef] [Medline]
- Ma J, Yu MK, Fong S, Ono K, Sage E, Demchak B, et al. Using deep learning to model the hierarchical structure and function of a cell. Nat Methods 2018 Apr;15(4):290-298 [FREE Full text] [CrossRef] [Medline]
- Yousefi S, Amrollahi F, Amgad M, Dong C, Lewis JE, Song C, et al. Predicting clinical outcomes from large scale cancer genomic profiles with deep survival models. Sci Rep 2017 Sep 15;7(1):11707 [FREE Full text] [CrossRef] [Medline]
- Preuer K, Lewis RP, Hochreiter S, Bender A, Bulusu KC, Klambauer G. DeepSynergy: predicting anti-cancer drug synergy with deep learning. Bioinformatics 2018 May 01;34(9):1538-1546 [FREE Full text] [CrossRef] [Medline]
- Liang H, Tsui BY, Ni H, Valentim CC, Baxter SL, Liu G, et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nat Med 2019 Mar;25(3):433-438. [CrossRef] [Medline]
- Wang L, You ZH, Chen X, Xia SX, Liu F, Yan X, et al. A computational-based method for predicting drug-target interactions by using stacked autoencoder deep neural network. J Comput Biol 2018 Mar;25(3):361-373. [CrossRef] [Medline]
- Yildirim Ö. A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification. Comput Biol Med 2018 May 01;96:189-202. [CrossRef] [Medline]
- Chambon S, Galtier MN, Arnal PJ, Wainrib G, Gramfort A. A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Trans Neural Syst Rehabil Eng 2018 Apr;26(4):758-769. [CrossRef] [Medline]
- He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016 Presented at: CVPR '16; June 27-30, 2016; Las Vegas, NV, USA p. 770-778. [CrossRef]
- Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016 Presented at: CVPR '16; June 27-30, 2016; Las Vegas, NV, USA p. 2818-2826. [CrossRef]
- Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015 Presented at: CVPR '15; June 7-12, 2015; Boston, MA, USA p. 1-9. [CrossRef]
- Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell 2017 Apr;39(4):640-651. [CrossRef] [Medline]
- Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. 2015 Presented at: MICCAI '15; October 5-9, 2015; Munich, Germany p. 234-241. [CrossRef]
- Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems. 2014 Presented at: NIPS '14; December 8-13, 2014; Montreal, Canada p. 2672-2680. [CrossRef]
- Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997 Nov 15;9(8):1735-1780. [CrossRef] [Medline]
- Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 2017 Aug;284(2):574-582. [CrossRef] [Medline]
- Chen G, Tsoi A, Xu H, Zheng WJ. Predict effective drug combination by deep belief network and ontology fingerprints. J Biomed Inform 2018 Sep;85:149-154 [FREE Full text] [CrossRef] [Medline]
- Kim JK, Choi MJ, Lee JS, Hong JH, Kim CS, Seo SI, et al. A deep belief network and dempster-shafer-based multiclassifier for the pathology stage of prostate cancer. J Healthc Eng 2018;2018:4651582 [FREE Full text] [CrossRef] [Medline]
- Alansary A, Oktay O, Li Y, Folgoc LL, Hou B, Vaillant G, et al. Evaluating reinforcement learning agents for anatomical landmark detection. Med Image Anal 2019 Apr;53:156-164 [FREE Full text] [CrossRef] [Medline]
- Shu X, Tang J, Li Z, Lai H, Zhang L, Yan S. Personalized age progression with bi-level aging dictionary learning. IEEE Trans Pattern Anal Mach Intell 2018 Apr;40(4):905-917. [CrossRef] [Medline]
- Devlin J, Chang MW, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2019 Presented at: NAACL-HLT '19; June 2-7, 2019; Minneapolis, MN, USA p. 4171-4186. [CrossRef]
- Baevski A, Zhou H, Mohamed A, Auli M. wav2vec 2.0: a framework for self-supervised learning of speech representations. In: Proceedings of the 34th Conference on Neural Information Processing Systems. 2020 Dec Presented at: NeurIPS '20; December 6-12, 2020; Vancouver, Canada.
- Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, et al. Swin transformer: hierarchical vision transformer using shifted windows. In: IEEE/CVF International Conference on Computer Vision. 2021 Oct Presented at: ICCV '21; October 10-17, 2021; Montreal, Canada p. 9992-10002. [CrossRef]
Abbreviations
AE: autoencoder
CNN: convolutional neural network
FCNN: fully convolutional neural network
MeSH: Medical Subject Headings
NLP: natural language processing
ResNet: residual neural network
RNN: recurrent neural network
WoS: Web of Science
Edited by A Mavragani; submitted 22.02.21; peer-reviewed by Y Zhao, C Su, Y Zhang; comments to author 17.03.21; revised version received 30.05.21; accepted 20.02.22; published 22.04.22
Copyright©Seojin Nam, Donghun Kim, Woojin Jung, Yongjun Zhu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.04.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.