2021
DOI: 10.1109/access.2021.3108455
Spatio-Temporal Split Learning for Privacy-Preserving Medical Platforms: Case Studies With COVID-19 CT, X-Ray, and Cholesterol Data

Abstract: Machine learning requires a large volume of sample data, especially when it is used in high-accuracy medical applications. However, patient records are among the most sensitive private information and are not usually shared among institutes. This paper presents spatio-temporal split learning, a distributed deep neural network framework, which is a turning point in allowing collaboration among privacy-sensitive organizations. Our spatio-temporal split learning presents how distributed machine learning can be eff…

Cited by 19 publications (8 citation statements) · References 19 publications
“…In recent years, SL has attracted much attention due to its capability to work well with low-end devices with limited computational capabilities. In addition, results obtained by split learning are comparable to models trained in a centralized setting on a healthcare dataset [ 17 , 18 , 19 ]. However, SL is only capable of dealing with one client at a time while training.…”
Section: Introduction
confidence: 97%
“…On the other hand, split learning (SL), first proposed in [3], [4], is an alternative solution to FL to cope with large-sized models in a resource-efficient way [7], [8], [9]. A typical architecture of SL is constructed by splitting an entire deep neural network (DNN) model into two partitions, in a way that the upper model segment above the split-layer is stored at the server while the lower model segment is stored at each client.…”
Section: Introduction
confidence: 99%
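
The two-partition architecture described in the statement above can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch example, not the implementation from the cited paper; the layer choices, input size, and cut position are assumptions made for the sketch only.

```python
# Minimal sketch of the two-partition split: the lower segment (below the
# split layer) lives on each client, the upper segment on the server.
# Layer shapes assume a hypothetical 1-channel 28x28 input; they are not
# taken from the cited paper.
import torch
import torch.nn as nn

class ClientSegment(nn.Module):
    """Lower model segment stored at each client."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        # Output of the split layer: the "smashed data" sent to the server.
        return self.layers(x)

class ServerSegment(nn.Module):
    """Upper model segment stored at the server."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 14 * 14, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, smashed):
        return self.layers(smashed)
```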
“…Split learning allows multiple participants to jointly train a federated model without encryption of intermediate results (the intermediate computations of the cut layer), thus reducing the computation expense. SplitNN applies the idea of split learning to neural networks and has been used to analyze healthcare data (Poirot et al., 2019; Ha et al., 2021).…”
Section: Introduction
confidence: 99%
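
The SplitNN-style exchange mentioned above, in which only the unencrypted cut-layer activations and their gradients cross the network, can be sketched as a single training step. This is a hypothetical example building on the ClientSegment/ServerSegment classes sketched earlier, not code from the cited works.

```python
# Hedged sketch of one split-learning training step: the client computes the
# cut-layer activations from its raw data, the server finishes the forward and
# backward pass, and only the activation gradient is returned to the client.
import torch
import torch.nn as nn

def split_training_step(client_model, server_model, client_opt, server_opt, x, y):
    criterion = nn.CrossEntropyLoss()

    # Client-side forward pass up to the split layer.
    smashed = client_model(x)
    # Detached copy stands in for the tensor transmitted to the server.
    smashed_to_server = smashed.detach().requires_grad_(True)

    # Server-side forward and backward pass on its own segment.
    server_opt.zero_grad()
    logits = server_model(smashed_to_server)
    loss = criterion(logits, y)
    loss.backward()
    server_opt.step()

    # The gradient of the cut-layer activations is sent back to the client,
    # which completes backpropagation through the lower segment.
    client_opt.zero_grad()
    smashed.backward(smashed_to_server.grad)
    client_opt.step()
    return loss.item()
```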