Original Paper
Abstract
Background: Federated learning (FL) is a recently proposed machine-learning method that uses decentralized datasets. Because data transfer is not necessary for the learning process in FL, it offers a significant advantage in protecting personal privacy. Accordingly, FL is being actively studied for applications in diverse areas.
Objective: The aim of this study was to evaluate the reliability and performance of FL using three benchmark datasets, including a clinical benchmark dataset.
Methods: To evaluate FL in a realistic setting, we implemented FL using a client-server architecture in Python and deployed the client-server version of the FL software to Amazon Web Services. The Modified National Institute of Standards and Technology (MNIST), Medical Information Mart for Intensive Care-III (MIMIC-III), and electrocardiogram (ECG) datasets were used to evaluate the performance of FL. To simulate an extreme real-world scenario, the MNIST dataset was split across 10 clients, with one digit class per client. In addition, we conducted four different experiments with basic, imbalanced, skewed, and combined imbalanced-and-skewed data distributions. We also compared the performance of FL with that of a state-of-the-art method for in-hospital mortality prediction using the MIMIC-III dataset. Likewise, we conducted experiments comparing basic and imbalanced data distributions using the MIMIC-III and ECG data.
Results: FL on the basic MNIST dataset with 10 clients achieved an area under the receiver operating characteristic curve (AUROC) of 0.997 and an F1-score of 0.946. The experiment with the imbalanced MNIST dataset achieved an AUROC of 0.995 and an F1-score of 0.921. The experiment with the skewed MNIST dataset achieved an AUROC of 0.992 and an F1-score of 0.905. Finally, the combined imbalanced and skewed experiment achieved an AUROC of 0.990 and an F1-score of 0.891. The basic FL on in-hospital mortality using MIMIC-III data achieved an AUROC of 0.850 and an F1-score of 0.944, while the experiment with the imbalanced MIMIC-III dataset achieved an AUROC of 0.850 and an F1-score of 0.943. For ECG classification, the basic FL achieved an AUROC of 0.938 and an F1-score of 0.807, and the imbalanced ECG dataset achieved an AUROC of 0.943 and an F1-score of 0.807.
Conclusions: FL demonstrated comparable performance across the different benchmark datasets. In addition, FL demonstrated reliable performance in cases where the distribution was imbalanced, skewed, or both, reflecting real-life scenarios in which data distributions differ across hospitals. FL can achieve high performance while maintaining privacy protection because the data do not need to be centralized.
doi:10.2196/20891
Introduction
Background
Traditional machine learning and deep learning require a centralized dataset to train a model. Therefore, such methods not only require data transfer to collect data from many devices, people, or institutions but also have a high computational cost because they must be trained on large datasets. When collecting privacy-sensitive data such as medical data, privacy protection is a major hurdle. Centralized databases are the main targets of hacking attacks, and therefore the risk of a data breach is severely increased [ , ]. Moreover, data centralization increases the risk of reidentification of deidentified data because of the increased data size [ ].

To reduce the computational cost, Google proposed a method known as federated learning (FL), which uses the computational cores in mobile devices [ - ]. In FL, training is performed at the individual client level, and the local weights of each client are then sent to the server. The server collects the updated local weights and calculates new global weights. Subsequently, the client downloads the global weights from the server and continues the training process. Since its first use in mobile apps [ - ], many researchers have studied and improved FL in various fields [ - ]. In particular, studies have addressed heterogeneity of data [ , ], robust optimization [ - ], and security methods such as differential privacy and secure multiparty computation within an FL approach [ , , ]. Research on FL has also been conducted in the medical field [ , , ], notably using electronic medical records and brain tumor data [ - ]. However, the application of FL to real medical data has not been sufficiently studied.

FL can be used to resolve privacy issues and mitigate the risk of a data breach in clinical information, since transfer and centralization of data are not required. Privacy protection is particularly beneficial for medical data analysis, since medical data represent some of the most sensitive types of personal data. To protect patients' privacy, deidentification methods have typically been applied [ - ]. However, data centralization is required both for deidentifying data and for evaluating the risk of reidentification; if the data are centralized, the risk of a data breach is increased. Moreover, when deidentifying a dataset, the direct or indirect identifiers in the medical data must be determined, which is challenging because of the lack of clear guidelines. The Health Insurance Portability and Accountability Act in the United States provides clear deidentification guidance; it defines 18 types of protected health information to be removed [ ]. However, many researchers and social activists claim that this guidance should be revised to enhance privacy protection [ ]. In contrast, FL does not require the centralization of raw data; as a result, even the FL developers cannot access the raw data. Therefore, FL can solve privacy or deidentification issues that occur when using clinical data.

Objectives
The aim of this study was to assess the performance of FL on three benchmark datasets: the Modified National Institute of Standards and Technology (MNIST) dataset, Medical Information Mart for Intensive Care-III (MIMIC-III) dataset, and PhysioNet Electrocardiogram (ECG) dataset. We also verified FL in environments that simulate real-world data distributions by modifying the MNIST, MIMIC-III, and ECG datasets.
Methods
FL Code and Server
FL is supported by several open-source projects, including TensorFlow Federated in TensorFlow 2.0 [ ], PySyft [ , ], and Federated AI Technology Enabler [ , ]. However, there are limitations in using these libraries. First, most of these libraries support only a single server rather than a networked environment; therefore, there is no control process for data communication. Second, as prototypes, they do not fully implement the features necessary to handle a complex dataset. For future research using real clinical data from hospitals, we implemented our own client-server version of FL using Python. The implemented server code is available in the FL_Server repository [ ], and the client code is available in the FL_Client repository [ ]. The MNIST dataset analyzed in this study is available in the Keras package of the TensorFlow framework. The code used to generate and preprocess the data for the MIMIC-III experiments follows the mimic3-benchmarks repository [ ], and the original MIMIC-III dataset is available in the PhysioNet repository [ ]. The ECG dataset analyzed in this study is available on the 2017 PhysioNet/CinC Challenge website [ ]; the model and environment follow Hannun et al [ ].

The FL server was developed using the Django framework and Python on Amazon Web Services (AWS). The server provides several application programming interfaces (APIs) for communication with a client, as shown in the table below, and performs federated averaging (FedAVG) [ ], which calculates weighted averages. FedAVG is a widely used optimization algorithm that averages the local weights collected from the clients once they reach a specific level. The implemented code was deployed and managed in AWS Beanstalk and was continuously monitored during the training process.

Table. APIs provided by the FL server.

| Method | URL | Parameter | Description | Return |
| --- | --- | --- | --- | --- |
| GET | /round | N/A^a | Request current round | Number |
| GET | /weight | N/A | Request global weight | List |
| PUT | /weight | List | Update local weight | N/A |

^a N/A: not applicable.
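To make the aggregation step concrete, the following is a minimal sketch of FedAVG-style weighted averaging, assuming each client reports its local layer weights together with its local sample count; the function and variable names are illustrative, not the deployed FL_Server code.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Compute new global weights as the sample-size-weighted average
    of the clients' local weights (FedAVG, McMahan et al).

    client_weights: one list of per-layer weight arrays per client.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Each client contributes proportionally to its data size.
        layer_avg = sum((n / total) * np.asarray(w[layer])
                        for w, n in zip(client_weights, client_sizes))
        global_weights.append(layer_avg)
    return global_weights
```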
Client
The client consists of three components. The first is the local learning component, which builds a model suitable for the dataset during the learning phase. The second is the communication component, which uploads the local weights produced by local training (the first component) to the server and downloads the global weights from the server. The third is the performance measurement component, which measures the performance of each client using the downloaded global weights. The implemented code was deployed on an AWS EC2 instance; we used the g4dn.xlarge instance type with an NVIDIA T4 Tensor Core GPU.
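As a rough sketch of how these components fit together in a single round through the server's three endpoints (GET /round, GET /weight, PUT /weight, per the table above); the server base URL, JSON payload format, and training hyperparameters below are assumptions for illustration, not the published FL_Client code:

```python
import numpy as np
import requests

SERVER = "http://fl-server.example.com"  # hypothetical server address

def participate_in_round(model, x_local, y_local, last_round):
    # Communication component: check whether a new round has started.
    current_round = requests.get(f"{SERVER}/round").json()
    if current_round == last_round:
        return last_round  # already participated; wait for the next round

    # Download the current global weights and load them into the local model.
    global_weights = requests.get(f"{SERVER}/weight").json()
    model.set_weights([np.asarray(w) for w in global_weights])

    # Local learning component: train on the client's private data only.
    model.fit(x_local, y_local, epochs=5, batch_size=10, verbose=0)

    # Upload the updated local weights; the server aggregates them via FedAVG.
    requests.put(f"{SERVER}/weight",
                 json=[w.tolist() for w in model.get_weights()])
    return current_round
```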
Communications
Client–server communication for FL was implemented based on the process described by McMahan et al [ ]. However, the implemented code exhibits some differences. The communication assumes that all clients (hospitals) are always powered (as is the case for a typical computer but not for a mobile device) and that their online status is maintained by a wired network connection. In addition, rather than selecting clients via an eligibility criterion from large client pools (thousands or millions), the code was implemented to manage a predefined, fixed number of clients. In other words, all clients could participate in each round.

A schematic diagram of the FL client–server communication is provided in the appendix (client–server communication logic). In brief, the client decides whether to participate in the current round through the API. If it has already participated (ie, sent its local weights to the server), it waits to participate in the next round. The server waits for the clients' weight updates and ensures that no clients are dropped. All communication is performed through the API provided by the server. A monitoring system is used to continuously observe system abnormalities.

Datasets
MNIST
The MNIST dataset, which consists of handwritten digit images, contains 70,000 samples (60,000 for training and 10,000 for testing). The basic model was a simple artificial neural network with an input layer, one hidden layer of 128 units with a rectified linear unit activation function, and an output layer. The hyperparameters for training were set as follows: batch size 32, maximum 1000 epochs, and early stopping. Stochastic gradient descent was used as the optimizer [ ].

For FL, we used 10 individual clients to best mimic a real environment. We modified the datasets and the hyperparameters of the learning algorithms. The datasets were modified to reflect differences in the distribution of medical data between hospitals, and the hyperparameters were adjusted for training on each client. The proposed approach was evaluated on the MNIST dataset in four different experiments.
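A brief Keras sketch of the basic model, assuming a softmax output over the 10 digit classes and sparse categorical cross-entropy loss (the loss function and early-stopping patience are our assumptions; the layer sizes, optimizer, batch size, and epoch limit follow the text):

```python
import tensorflow as tf

def build_mnist_model():
    # Input layer, one 128-unit ReLU hidden layer, and a 10-class output.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd",  # stochastic gradient descent
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training setup: batch size 32, at most 1000 epochs, early stopping.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
# model.fit(x_train, y_train, batch_size=32, epochs=1000,
#           validation_split=0.1, callbacks=[early_stop])
```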
We first evaluated the basic performance of FL: each of the 10 clients randomly selected 600 images from the basic dataset, and we continued the process for up to 500 rounds and observed the results. For the imbalanced FL experiment, each client used a different number of randomly selected images, ranging from 1 to 600, for training (eg, one client used 36 data points and another used 537); all other settings, such as the hyperparameters and the number of rounds, were the same as in the basic FL experiment.

For the skewed FL experiment, the MNIST dataset was split into single-digit groups, ranging from 0 to 9, and each of the 10 digits was assigned to a different client. Consequently, each client had a single digit instead of all 10, simulating an extremely skewed data distribution. Each client randomly selected 600 images of its single digit for training. The simple artificial neural network used in the basic model was also used in these experiments, with the hyperparameters set to 5 epochs and a batch size of 10. We continued the process for up to 3000 rounds and observed the results. For evaluation, a model was created with the latest updated global weights and tested on the 10,000 test samples.

Finally, we combined the imbalanced and skewed settings: each client was trained only on a single digit using a randomly sized sample. The four partitioning schemes are sketched in the code below.
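A minimal sketch of the four client-data partitions, with hypothetical helper names; the combined experiment composes the imbalanced sample sizes with the per-digit assignment:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def basic_split(y, n_clients=10, n_per_client=600):
    # Basic: each client draws 600 random images from the training set.
    return [rng.choice(len(y), size=n_per_client, replace=False)
            for _ in range(n_clients)]

def imbalanced_split(y, n_clients=10):
    # Imbalanced: each client draws a different random size (1 to 600).
    return [rng.choice(len(y), size=int(rng.integers(1, 601)), replace=False)
            for _ in range(n_clients)]

def skewed_split(y, n_per_client=600):
    # Skewed: client k sees only digit k, an extreme non-IID case.
    return [rng.choice(np.where(y == digit)[0], size=n_per_client, replace=False)
            for digit in range(10)]

def imbalanced_skewed_split(y):
    # Combined: a single digit per client, with a random sample size.
    return [rng.choice(np.where(y == digit)[0],
                       size=int(rng.integers(1, 601)), replace=False)
            for digit in range(10)]
```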
MIMIC-III
The MIMIC-III dataset is a clinical dataset of human health information, including demographics, vital signs, laboratory tests, and medications from intensive care units. MIMIC-III data were preprocessed using a state-of-the-art (SOTA) benchmark [ ]. In this case, FL experiments with three individual clients were performed to predict in-hospital mortality, a classification problem that predicts in-hospital death based on the first 48 hours of an intensive care unit stay. After preprocessing the MIMIC-III dataset using the method described by Harutyunyan et al [ ], the dataset contained 21,139 samples (17,903 for training and 3236 for testing). The basic model was a standard long short-term memory (LSTM) network, with reference to the benchmark [ ]. The LSTM was configured with 16 hidden units, depth 2, dropout 0.3, time step 1.0, batch size 8, and the adaptive moment estimation (ADAM) optimizer.

For FL, randomly chosen samples from the original dataset were divided into three datasets without duplication and assigned to each client, simulating data from three different institutions. The same basic LSTM was used, and the hyperparameters were set as follows: 2 epochs and a batch size of 4. We continued the process for up to 30 rounds and observed the results.
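A condensed Keras sketch of this LSTM, interpreting "depth 2" as two stacked LSTM layers; the input shape (48 hourly time steps x 76 discretized features) follows the Harutyunyan et al benchmark preprocessing and, like the binary cross-entropy loss, is an assumption here rather than a detail stated in the text:

```python
import tensorflow as tf

def build_mortality_lstm(timesteps=48, n_features=76):
    # Two stacked 16-unit LSTM layers with dropout 0.3, then a sigmoid
    # output for binary in-hospital mortality prediction.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, n_features)),
        tf.keras.layers.LSTM(16, return_sequences=True, dropout=0.3),
        tf.keras.layers.LSTM(16, dropout=0.3),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",  # ADAM; the paper uses batch size 8
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auroc")])
    return model
```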
For the basic FL experiment, the data were split into three equal parts without duplication, and each client was trained on one part. For the imbalanced FL experiment, the data were split into 50%, 30%, and 20% without duplication, and one subset was assigned to each client.
ECG
The 2017 PhysioNet/CinC Challenge ECG dataset was used in this study [ ]. The target problem is a multiclass classification task that classifies a single short ECG signal into one of four classes: atrial fibrillation, normal sinus rhythm, alternative rhythm, or noisy. The total dataset comprises 8528 single-lead ECG recordings, divided into 90% training data (7676) and 10% test data (852). For traditional learning, a convolutional neural network with 34 layers based on Hannun et al [ ] was applied to the ECG dataset, with a batch size of 32 and the ADAM optimizer.

For FL, randomly chosen samples from the original dataset were divided into three datasets without duplication and assigned to three clients. The same model was used, and the hyperparameters were set as follows: 3 epochs and a batch size of 16. We continued the process for up to 30 rounds and observed the results.
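The actual model is the 34-layer residual network of Hannun et al; the following deliberately simplified 1D CNN only sketches the input/output contract of the task (single-lead signal in, four-class softmax out) and should not be read as the benchmark architecture:

```python
import tensorflow as tf

def build_ecg_cnn(signal_len=None):
    # Simplified stand-in: a small 1D CNN over a single-lead ECG signal
    # with a 4-class output (AF, normal, alternative rhythm, noisy).
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(signal_len, 1)),
        tf.keras.layers.Conv1D(32, kernel_size=16, strides=2, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Conv1D(64, kernel_size=16, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam",  # ADAM; the paper uses batch size 32
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```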
For the basic FL experiment, the data were split into three equal parts without duplication, and each client was trained on one part. For the imbalanced FL experiment, the data were split into 50%, 30%, and 20% without duplication, and one subset was assigned to each client.
Evaluation
During training, we monitored the accuracy of FL to evaluate performance. If the accuracy did not improve over successive rounds, we stopped the FL process. Finally, we chose the best model and conducted bootstrapping to determine whether there were significant differences between the experiments.

In all experiments, the area under the receiver operating characteristic curve (AUROC) and the F1-score were used as performance metrics. In addition, we evaluated the confusion matrix, precision, recall, and area under the precision-recall curve (AUPRC) for comparison with the performance of the SOTA method. We calculated 95% CIs by resampling the test set K times (K was 100 for MNIST and ECG and 10,000 for MIMIC-III), as sketched below.
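A minimal sketch of the bootstrap procedure for a binary metric such as the AUROC (resampling the test set K times with replacement and reading the 2.5th and 97.5th percentiles as the 95% CI); for multiclass tasks such as MNIST, the metric call would need a one-vs-rest extension:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_ci(y_true, y_score, k=100, alpha=0.95):
    # Resample the test set k times with replacement and collect the metric.
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed=0)
    n = len(y_true)
    scores = []
    for _ in range(k):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        if np.unique(y_true[idx]).size < 2:
            continue  # AUROC is undefined when a resample has one class
        scores.append(roc_auc_score(y_true[idx], y_score[idx]))
    lower = np.percentile(scores, (1 - alpha) / 2 * 100)
    upper = np.percentile(scores, (1 + alpha) / 2 * 100)
    return float(np.mean(scores)), (float(lower), float(upper))
```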
Results
MNIST
The proposed approach was evaluated on the MNIST dataset for five different cases (as described in the Methods). The table below presents the AUROC and F1-score for each case; the confusion matrices for each case are provided in the appendix.

Table. Performance on the MNIST dataset.^a

| Experiments | AUROC^b (95% CI) | F1-score (95% CI) | Precision (95% CI) | Recall (95% CI) |
| --- | --- | --- | --- | --- |
| CML^c | 0.999 (0.999-0.999) | 0.981 (0.978-0.983) | 0.981 (0.972-0.989) | 0.981 (0.971-0.989) |
| Basic FL^d | 0.997 (0.996-0.998) | 0.946 (0.941-0.950) | 0.945 (0.929-0.959) | 0.945 (0.930-0.959) |
| Imbalanced FL | 0.995 (0.994-0.995) | 0.921 (0.917-0.927) | 0.920 (0.904-0.937) | 0.920 (0.903-0.937) |
| Skewed FL | 0.992 (0.991-0.993) | 0.905 (0.899-0.911) | 0.905 (0.885-0.922) | 0.904 (0.885-0.920) |
| Imbalanced and skewed FL | 0.990 (0.989-0.991) | 0.891 (0.884-0.896) | 0.890 (0.869-0.909) | 0.889 (0.868-0.908) |

^a All experiments used the same model and hyperparameters. All results are presented with 95% CIs obtained by resampling the test set 100 times.
^b AUROC: area under the receiver operating characteristic curve.
^c CML: centralized machine learning.
^d FL: federated learning.
Centralized machine learning (CML) is a baseline training method that was used as a control group. CML achieved an AUROC of 0.999 and an F1-score of 0.981. For basic FL, the AUROC and F1-score were 0.997 and 0.946, respectively. The initial performance of the basic FL was fairly high, with an accuracy of approximately 0.800, and it continually improved (panel A of the round-by-round accuracy plots in the appendix).

Imbalanced FL was designed to reflect a realistic clinical data distribution. As described in the Methods section, each client had a different training data size. Interestingly, the performance of imbalanced FL remained high, with an AUROC and F1-score of 0.995 and 0.921, respectively. The initial performance was rather poor, as expected; however, after several rounds, the performance rapidly improved to reach an accuracy of 0.900, after which the improvement was slow (panel B).

Skewed FL assumed an extreme case: each client had only one digit from 0 to 9, thereby simulating a situation in which each hospital has a unique subpopulation of patients without overlaps. The final AUROC and F1-score were 0.992 and 0.905, respectively. As expected, the initial performance was poor; however, it rapidly improved after the initial rounds (panel C).

The most extreme case was designed by combining the imbalanced and skewed datasets. In this experiment, the AUROC and F1-score were 0.990 and 0.891, respectively. Similar to the skewed FL, the initial performance was very poor, but it rapidly improved after the initial rounds (panel D).

Additionally, the per-digit precision and recall results for each experiment are presented in the appendices.

MIMIC-III
The proposed approach was evaluated on the MIMIC-III dataset in two different cases to compare its performance with a reported benchmark. FL experiments were performed with three individual clients. Apart from the AUROC and F1-score, we also report the AUPRC, which is used in the benchmark [ ]. The results are presented in the table below and in the appendices.

SOTA performance was obtained by executing the code provided by Harutyunyan et al [ ]. FL achieved an AUROC, F1-score, and AUPRC comparable with those of the SOTA method. The imbalanced FL experiment, as an extension of the basic MIMIC-III FL, also achieved an AUROC, F1-score, and AUPRC comparable with those of SOTA.

Table. In-hospital mortality prediction performance on the MIMIC-III dataset.^a

| Experiments | AUROC^b (95% CI) | F1-score (95% CI) | AUPRC^c (95% CI) | Precision (95% CI) | Recall (95% CI) |
| --- | --- | --- | --- | --- | --- |
| SOTA^d | 0.857 (0.837-0.875) | 0.944 (0.938-0.950) | 0.505 (0.451-0.558) | 0.973 (0.967-0.979) | 0.773 (0.907-0.927) |
| Basic FL^e | 0.850 (0.830-0.869) | 0.944 (0.938-0.950) | 0.483 (0.427-0.537) | 0.975 (0.969-0.980) | 0.797 (0.906-0.926) |
| Imbalanced FL | 0.850 (0.829-0.869) | 0.943 (0.937-0.949) | 0.481 (0.426-0.535) | 0.981 (0.976-0.986) | 0.714 (0.897-0.918) |

^a All results are presented with 95% CIs obtained by resampling 10,000 times.
^b AUROC: area under the receiver operating characteristic curve.
^c AUPRC: area under the precision-recall curve.
^d SOTA: state of the art.
^e FL: federated learning.
ECG
The proposed approach was evaluated on the ECG dataset using two different methods to compare its performance with a reported benchmark [ ]. The results are presented in the table below and in the appendices.

Benchmark results were obtained using the code available on GitHub [ ]. The AUROC and F1-score of both basic and imbalanced FL were comparable with those of the benchmark.

Table. ECG classification performance.^a

| Experiments | AUROC^b (95% CI) | F1-score (95% CI) | Precision (95% CI) | Recall (95% CI) |
| --- | --- | --- | --- | --- |
| Benchmark | 0.954 (0.930-0.978) | 0.814 (0.655-0.910) | 0.820 (0.672-0.943) | 0.814 (0.640-0.936) |
| Basic FL^c | 0.938 (0.860-0.978) | 0.807 (0.651-0.931) | 0.823 (0.645-0.942) | 0.795 (0.660-0.925) |
| Imbalanced FL | 0.943 (0.883-0.977) | 0.807 (0.635-0.902) | 0.830 (0.650-0.935) | 0.788 (0.626-0.905) |

^a All results are presented with 95% CIs obtained by resampling 100 times.
^b AUROC: area under the receiver operating characteristic curve.
^c FL: federated learning.
Discussion
Principal Findings
When comparing the performance of CML and FL in the basic MNIST experiments, both the AUROC and F1-score were high. Unexpectedly, even with an imbalanced dataset, FL delivered good performance, with only small differences (0.003 in AUROC and 0.035 in F1-score). When using a skewed dataset, FL also yielded remarkable results with respect to both the AUROC and F1-score. When comparing the confusion matrices for the experiments with the four data distributions (ie, normal, imbalanced, skewed, and the combination of the two), FL showed some deterioration in performance for visually similar digits (eg, 3 vs 5; 4 vs 9). Even in the basic MNIST classification, the performance was relatively poor in these cases. However, this problem was not related to small training dataset sizes: when we monitored the size of the training dataset for each client, the dataset for class 5 was not small. Moreover, depending on the experiment, the datasets for class 1 or 7 could be small, yet superior classification performance was nevertheless achieved. This trend was maintained in the basic FL and imbalanced FL experiments using the MIMIC-III dataset.
The FL experiments using MIMIC-III also exhibited competitive performance compared to a benchmark trained with CML. The in-hospital mortality experiments using the MIMIC-III dataset, a well-known dataset of real clinical data, also showed good performance. This experiment was performed by splitting the randomly selected MIMIC-III data into three parts (ie, from the perspective of each institution, learning on one-third of the total data). Before the experiments, we expected that the performance of FL would be slightly inferior to that of CML because FL uses a distributed dataset instead of a centralized dataset. However, the performance of FL and CML was almost the same, with only a 0.005 difference in AUROC compared with the SOTA performance reported by Harutyunyan et al [ ], and no significant difference was found in well-known evaluation indicators such as accuracy, sensitivity, precision, and F1-score (except for AUROC). Experimental results with an imbalanced dataset were very similar to those of basic FL. Therefore, an individual client may use only a small amount of data for training in FL, and the results will be similar to those achieved when all available data are used for training.

The FL experiments using ECG data also exhibited competitive performance compared to CML. This experiment was performed by splitting the randomly selected ECG data into three parts (ie, from the perspective of each institution, learning on one-third of the total data), with each client thus using a different data distribution. The performance of FL and CML was not significantly different, and the results with an imbalanced dataset were very similar to those of basic FL. As shown in the appendices, the noisy class had relatively low precision and recall because its training data comprised only 3% of the total; the other classes (atrial fibrillation, normal sinus rhythm, and alternative rhythm) performed well.

The performance of FL was verified using three datasets with modified data distributions, imbalanced (with disproportionately represented classes) and skewed (with a different distribution of the target variable), to imitate real-world medical data. As a result, FL was comparable to CML. During the initial rounds, only a relatively small amount of data was used on each client instead of an ensemble; therefore, the performance of FL was markedly inferior to that of CML. However, in the subsequent rounds, the performance of FL (with respect to AUROC and F1-score) became similar to that of CML. Typically, medical centers have datasets with very different distributions, and our results demonstrate that FL is suitable for real-world medical datasets without requiring data centralization.
One reason for the comparable performance of FL might be that the weight updates and the FedAVG process have effects similar to those of mini-batches [ - ] and ensembles [ ]. In FL, each client trains on a relatively small dataset and then transfers its local weights to the server. The server collects the local weights and updates the global weights, which reflect all of the data, through FedAVG. The round is then repeated to improve the global weights. Hence, an individual client acts like an element of a mini-batch, and FedAVG is similar to ensemble processing. When implementing FL, we used the widely known FedAVG aggregation method [ ], although this is not guaranteed to be the best choice. To address this problem, many researchers have studied aggregation methods that can handle abnormal distributions, robust aggregation, and efficient communication, such as FedProx [ ], FSVRG [ ], CO-OP [ ], LoAdaBoost FedAVG [ ], and RFA [ ]. Hyperparameter selection also requires further research.

In addition, many researchers have studied methods to reduce communication costs. First, the number of communication rounds can be reduced through methods such as client selection, peer-to-peer communication, and local updates [ - ]. Second, methods such as sparsification, subsampling, and quantization have been suggested to reduce the communication message size [ , ]. Third, the asynchronous update methods used in traditional parallel computation can be applied.

FL can be used to build medical artificial intelligence apps while protecting patient privacy. Although the data themselves are neither exposed nor gathered in a central repository in FL, they can nevertheless be guessed during the aggregation process in the network [ ]. Therefore, other privacy preservation methods such as differential privacy, secure multiparty computation, and homomorphic encryption [ , , , ] might be necessary to protect privacy from diverse up-to-date privacy attack methods.

In future studies, we plan to apply the proposed FL methods to real clinical datasets rather than benchmark datasets. First, we will improve the FL framework based on the results of this study. We will then compare the performance of a breast cancer recurrence prediction model using data from two different medical centers in Korea.
Conclusions
Our experiments demonstrated the potential of FL in terms of performance and data protection, which is important for dealing with sensitive medical data. Specifically, in FL, only weights are transferred, and the participants are unaware of each other's local datasets, which can prevent personal information leaks. In addition, the proposed approach can be used to supplement existing approaches and to avoid problems that may occur during the deidentification process. A future direction of research is to apply FL to actual medical data through collaborations with multiple institutions. Tasks such as expanding the client–server version of FL and improving communication are expected to be important for applying FL to real-world medical data across multiple institutions.
Acknowledgments
This work was supported by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (MSIT) (2018-0-00861, Intelligent SW Technology Development for Medical Data Analysis), and by the Fourth Stage of Brain Korea 21 Project (Department of Intelligent Precision Healthcare Convergence) in 2021.
Conflicts of Interest
None declared.
Multimedia Appendices

Client–server communication logic.
Confusion matrices for the MNIST experiments.
Accuracy changes for each round of MNIST federated learning (FL) experiments.
Each digit class classification results of precision and recall in a centralized machine learning (CML) experiment using the MNIST dataset.
Each digit class classification result of precision and recall in the basic federated learning (FL) experiment using the MNIST dataset.
Each digit class classification result of precision and recall in the imbalanced federated learning (FL) experiment using the MNIST dataset.
Each digit class classification result of precision and recall in the skewed federated learning (FL) experiment using the MNIST dataset.
Each digit class classification result of precision and recall in an imbalanced and skewed federated learning (FL) experiment using the MNIST dataset.
Area under the receiver operating characteristic curve, which is the result of the in-hospital mortality prediction for each experiment.
Confusion matrix of federated learning (FL) to predict in-hospital mortality using MIMIC-III. (A) Basic FL. (B) Imbalanced FL.
Confusion matrices for the ECG experiments.
Each class classification result of precision and recall in a centralized machine learning (CML) experiment using the ECG dataset.
Each class classification result of precision and recall in the basic federated learning (FL) experiment using the ECG dataset.
Each class classification result of precision and recall in the imbalanced federated learning (FL) experiment using the ECG dataset.

References
- A Study on Global Data Leaks in 2018. InfoWatch Analytics Center. URL: https://infowatch.com/sites/default/files/report/analytics/Global_Data_Breaches_2018.pdf [accessed 2020-10-13]
- Varunkumar K, Prabakaran M, Kaurav A, Chakkaravarthy S, Thiyagarajan S, Venkatesh P. Various Database Attacks and its Prevention Techniques. Int J Eng Trends Technol 2014 Mar 25;9(11):532-536. [CrossRef]
- Emam KE, Dankar FK, Vaillancourt R, Roffey T, Lysyk M. Evaluating the Risk of Re-identification of Patients from Hospital Prescription Records. Can J Hosp Pharm 2009 Jul;62(4):307-319 [FREE Full text] [CrossRef] [Medline]
- Konečný J, McMahan H, Yu F, Richtárik P, Suresh A, Bacon D. Federated learning: strategies for improving communication efficiency. arXiv preprint 2016:1610.05492 [FREE Full text]
- McMahan H, Moore E, Ramage D, Hampson S. Communication-efficient learning of deep networks from decentralized data. arXiv preprint 2016:1602.05629 [FREE Full text]
- McMahan B, Ramage D. Federated Learning: Collaborative Machine Learning without Centralized Training Data. Google AI Blog. 2017. URL: https://ai.googleblog.com/2017/04/federated-learning-collaborative.html [accessed 2020-10-13]
- Hard A, Rao K, Mathews R, Ramaswamy S, Beaufays F, Augenstein S. Federated learning for mobile keyboard prediction. arXiv preprint 2018:1811.03604 [FREE Full text]
- Yang T, Andrew G, Eichner H, Sun H, Li W, Kong N. Applied federated learning: Improving google keyboard query suggestions. arXiv preprint 2018:1812.02903 [FREE Full text]
- Chen M, Mathews R, Ouyang T, Beaufays F. Federated learning of out-of-vocabulary words. arXiv preprint 2019:1903.10635 [FREE Full text]
- Rieke N, Hancox J, Li W, Milletarì F, Roth HR, Albarqouni S, et al. The future of digital health with federated learning. NPJ Digit Med 2020;3:119. [CrossRef] [Medline]
- Kairouz P, McMahan HB, et al. Advances and Open Problems in Federated Learning. arXiv preprint 2019:1912.04977 [FREE Full text]
- Li T, Sahu AK, Talwalkar A, Smith V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process Mag 2020 May;37(3):50-60. [CrossRef]
- Xu J, Glicksberg BS, Su C, Walker P, Bian J, Wang F. Federated learning for healthcare informatics. arXiv preprint 2019:1911.06270 [FREE Full text]
- Li Q, Wen Z, He B. A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection. arXiv preprint 2019:1907.09693 [FREE Full text]
- Zhao Y, Li M, Lai L, Suda N, Civin D, Chandra V. Federated Learning with Non-IID Data. arXiv preprint 2018:1806.00582 [FREE Full text]
- Li T, Sahu A, Zaheer M, Sanjabi M, Talwalkar A, Smith V. Federated Optimization in Heterogeneous Networks. arXiv preprint 2018:1812.06127 [FREE Full text]
- Konečný J, McMahan H. Federated Optimization: Distributed Machine Learning for On-Device Intelligence. arXiv preprint 2016:1610.02527 [FREE Full text]
- Wang Y. Cooperative Machine Learning from Mobile Devices. Master's thesis. University of Alberta. 2017. URL: https://era.library.ualberta.ca/items/7d680f04-7987-45c5-b9cd-4fe43c87606f [accessed 2020-10-15]
- Huang L, Yin Y, Fu Z, Zhang S, Deng H, Liu D. LoAdaBoost: loss-based AdaBoost federated machine learning with reduced computational complexity on IID and non-IID intensive care data. arXiv preprint 2018:1811.12629 [FREE Full text]
- Pillutla K, Kakade S, Harchaoui Z. Robust Aggregation for Federated Learning. arXiv preprint 2019:1912.13445 [FREE Full text]
- Abadi M, Chu A, Goodfellow I, McMahan H, Mironov I, Talwar K, et al. Deep Learning with Differential Privacy. 2016 Presented at: CCS '16: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security; 2016; Vienna. [CrossRef]
- Pfohl SR, Dai AM, Heller K. Federated and Differentially Private Learning for Electronic Health Records. arXiv preprint 2019:1911.05861 [FREE Full text]
- Li W. Privacy-Preserving Federated Brain Tumour Segmentation. Cham: Springer; 2019 Presented at: Machine Learning in Medical Imaging (MLMI 2019); 2019; Shenzhen. [CrossRef]
- Sheller MJ, Reina GA, Edwards B, Martin J, Bakas S. Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation. 2019 Presented at: International MICCAI Brain Lesion Workshop; 2018; Granada p. 92-104 URL: http://europepmc.org/abstract/MED/31231720 [CrossRef]
- Brisimi TS, Chen R, Mela T, Olshevsky A, Paschalidis IC, Shi W. Federated learning of predictive models from federated Electronic Health Records. Int J Med Inform 2018 Apr;112:59-67 [FREE Full text] [CrossRef] [Medline]
- El Emam K, Dankar FK, Issa R, Jonker E, Amyot D, Cogo E, et al. A globally optimal k-anonymity method for the de-identification of health data. J Am Med Inform Assoc 2009;16(5):670-682 [FREE Full text] [CrossRef] [Medline]
- Taira RK, Bui AA, Kangarloo H. Identification of patient name references within medical documents using semantic selectional restrictions. Proc AMIA Symp 2002:757-761 [FREE Full text] [Medline]
- Thomas SM, Mamlin B, Schadow G, McDonald C. A successful technique for removing names in pathology reports using an augmented search and replace method. Proc AMIA Symp 2002:777-781 [FREE Full text] [Medline]
- Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. HHS.gov. Washington, DC: U.S. Department of Health & Human Services; 1996. URL: https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html [accessed 2020-10-13]
- Nass SJ, Levit LA, Gostin LO. Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health Through Research. Washington, DC: National Academies Press; 2009.
- TensorFlow Federated: Machine Learning on Decentralized Data. TensorFlow. URL: https://www.tensorflow.org/federated [accessed 2020-10-13]
- Ryffel T, Trask A, Dahl M, Wagner B, Mancuso J, Rueckert D. A generic framework for privacy preserving deep learning. arXiv preprint 2018:1811.04017 [FREE Full text]
- OpenMined. URL: https://www.openmined.org/ [accessed 2020-10-13]
- An Industrial Grade Federated Learning Framework. FATE. URL: https://fate.fedai.org/ [accessed 2020-10-13]
- FederatedAI. GitHub. URL: https://github.com/FederatedAI/FATE [accessed 2020-10-13]
- FL_Server. GitHub. URL: https://github.com/bmiskkuedu/FL_Server [accessed 2020-10-13]
- FL_Client. GitHub. URL: https://github.com/bmiskkuedu/FL_Client [accessed 2020-10-13]
- Harutyunyan H. MIMIC-III benchmarks. GitHub. 2018. URL: https://github.com/YerevaNN/mimic3-benchmarks [accessed 2020-10-13]
- Johnson A, Pollard T, Mark R. MIMIC-III Clinical Database. Physionet. URL: https://physionet.org/content/mimiciii/1.4/ [accessed 2020-10-13]
- AF Classification from a Short Single Lead ECG Recording - The PhysioNet Computing in Cardiology Challenge 2017. PhysioNet. 2017. URL: https://physionet.org/content/challenge-2017/1.0.0/ [accessed 2020-10-13]
- Hannun AY, Rajpurkar P, Haghpanahi M, Tison GH, Bourn C, Turakhia MP, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med 2019 Jan;25(1):65-69 [FREE Full text] [CrossRef] [Medline]
- Bonawitz K, Eichner H, Grieskamp W, Huba D, Ingerman A, Ivanov V. Towards Federated Learning at Scale: System Design. arXiv preprint 2019:1902.01046 [FREE Full text]
- Ruder S. An overview of gradient descent optimization algorithms. arXiv preprint 2016:1609.04747 [FREE Full text]
- Harutyunyan H, Khachatrian H, Kale DC, Ver Steeg G, Galstyan A. Multitask learning and benchmarking with clinical time series data. Sci Data 2019 Jun 17;6(1):96. [CrossRef] [Medline]
- Awni ECG. GitHub. URL: https://github.com/awni/ecg [accessed 2020-10-13]
- Li M, Zhang T, Chen Y, Smola AJ. Efficient mini-batch training for stochastic optimization. 2014 Presented at: KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2014; New York p. 661-670. [CrossRef]
- Keskar N, Mudigere D, Nocedal J, Smelyanskiy M, Tang P. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. arXiv preprint 2016:1609.04836 [FREE Full text]
- Masters D, Luschi C. Revisiting Small Batch Training for Deep Neural Networks. arXiv preprint 2018:1804.07612 [FREE Full text]
- Dietterich TG. Ensemble Methods in Machine Learning. Berlin, Heidelberg: Springer; 2000 Presented at: International Workshop on Multiple Classifier Systems (MCS 2000); 2000. [CrossRef]
Abbreviations
ADAM: adaptive moment estimation
API: application programming interface
AUPRC: area under the precision-recall curve
AUROC: area under the receiver operating characteristic curve
AWS: Amazon Web Services
CML: centralized machine learning
ECG: electrocardiogram
FedAVG: federated averaging
FL: federated learning
LSTM: long short-term memory
MIMIC-III: Medical Information Mart for Intensive Care III
MNIST: Modified National Institute of Standards and Technology
SOTA: state of the art
Edited by G Eysenbach; submitted 01.06.20; peer-reviewed by JH Yoon, M Abdalla; comments to author 22.06.20; revised version received 16.08.20; accepted 02.10.20; published 26.10.20
Copyright©Geun Hyeong Lee, Soo-Yong Shin. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.10.2020.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.