2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)
DOI: 10.1109/dsn.2019.00044
Reaching Data Confidentiality and Model Accountability on the CalTrain

Abstract: Distributed collaborative learning (DCL) paradigms enable building joint machine learning models from distrusting multi-party participants. Data confidentiality is guaranteed by retaining private training data on each participant's local infrastructure. However, this approach to achieving data confidentiality makes today's DCL designs fundamentally vulnerable to data poisoning and backdoor attacks. It also limits DCL's model accountability, which is key to backtracking the responsible "bad" training data insta…
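The abstract's core premise is that each participant trains only on data held in its own infrastructure and shares nothing but model updates. The following is a minimal, hypothetical Python sketch of that pattern (a simple federated-averaging loop over a logistic-regression model); it illustrates the general DCL setting, not the CalTrain design described in the paper, and all function names are invented for this example.

# Hypothetical sketch of distributed collaborative learning (DCL):
# each participant trains on its own private data and shares only
# model updates; raw training data never leaves local infrastructure.
# Illustrative federated-averaging example, NOT the CalTrain design.

import numpy as np

def local_update(weights, X_local, y_local, lr=0.1, epochs=5):
    """One participant's local training step (logistic regression via SGD)."""
    w = weights.copy()
    for _ in range(epochs):
        for x, y in zip(X_local, y_local):
            pred = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
            w -= lr * (pred - y) * x      # gradient step on private data
    return w                              # only the updated weights leave

def federated_average(updates):
    """Coordinator aggregates participant updates; it never sees raw data."""
    return np.mean(updates, axis=0)

# Toy run with three distrusting participants, each holding private data.
rng = np.random.default_rng(0)
participants = [(rng.normal(size=(20, 4)), rng.integers(0, 2, 20)) for _ in range(3)]
global_w = np.zeros(4)
for _round in range(10):
    updates = [local_update(global_w, X, y) for X, y in participants]
    global_w = federated_average(updates)

Because only the aggregated weights are exchanged, this is exactly the setting in which the abstract notes that poisoned or backdoored local data cannot be inspected directly, motivating the paper's focus on model accountability.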

Cited by 21 publications (13 citation statements)
References 36 publications
“…This means that the approaches assume that labels exist for all of the data in the federated network. However, in practice, the data generated in the network may be unlabeled or mislabeled [197]. This poses a big challenge to the server to find participants with appropriate data for model training.…”
Section: Challenges and Future Research Directions (mentioning)
confidence: 99%
“…The process uses predefined approaches and stores or distributes information to the intended users for analysis. Although collaborative data training breaks data monopoly, it also raises new challenges of model accountability, privacy protection, and data confidentiality (Gu et al, 2019). Although various mechanisms have been implemented to enhance data privacy and confidentiality, data breach incidents expose students' sensitive data (Hlioui et al, 2021).…”
Section: Confidentiality and Privacy Issues (mentioning)
confidence: 99%
“…Although various mechanisms have been implemented to enhance data privacy and confidentiality, data breach incidents expose students' sensitive data (Hlioui et al, 2021). Besides, most government regulations and laws on personal privacy protection do not permit privacy-sensitive and mission-critical domains such as education institutions to share raw data with third parties (Gu et al, 2019). The policy poses a significant challenge in implementing PAAs.…”
Section: Confidentiality and Privacy Issues (mentioning)
confidence: 99%
“…TEEs can act as trustworthy intermediaries for isolating and orchestrating ML processes and replace the expensive cryptographic primitives. For example, Software Guard Extensions (SGX) has been leveraged to support secure model inference [15,52], privacy-preserving multiparty machine learning [16,20,21,40,43], and analytics on sensitive data [4,9,46,64]. However, TEEs are not panacea to address all trust problems.…”
Section: Trusted Execution Environments (mentioning)
confidence: 99%
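The statement above describes TEEs (e.g., Intel SGX) acting as trustworthy intermediaries that isolate ML computation so the untrusted host never sees plaintext data. Below is a small conceptual mock of that pattern in Python: an "enclave" object holds the decryption key, participants encrypt their updates to it, and only the aggregate leaves. The class, its methods, and the symmetric-key provisioning are simplifying assumptions for illustration; real SGX deployments rely on remote attestation and the SGX SDK, which this sketch does not model.

# Conceptual mock of TEE-mediated aggregation (hypothetical names;
# not real SGX/SDK code). Participants encrypt their model updates so
# that only the "enclave" can read them; aggregation happens inside
# the enclave boundary and only the aggregate result leaves it.

from cryptography.fernet import Fernet
import numpy as np
import json

class MockEnclave:
    """Stand-in for a hardware enclave: holds a secret key and exposes
    only an aggregation interface to the untrusted host."""
    def __init__(self):
        # Simplification: a real TEE would establish this key via remote
        # attestation and asymmetric key exchange, not generate-and-share.
        self._fernet = Fernet(Fernet.generate_key())
        self.provisioning_key = self._fernet._signing_key + self._fernet._encryption_key  # not used; see below

    def participant_key(self):
        # For this mock, participants reuse the enclave's symmetric key.
        return self._fernet

    def aggregate(self, encrypted_updates):
        updates = [np.array(json.loads(self._fernet.decrypt(blob)))
                   for blob in encrypted_updates]
        return np.mean(updates, axis=0)   # only the aggregate leaves the enclave

enclave = MockEnclave()
f = enclave.participant_key()
# Each participant encrypts its locally computed update before sending it.
blobs = [f.encrypt(json.dumps(u).encode()) for u in ([1.0, 2.0], [3.0, 4.0])]
print(enclave.aggregate(blobs))           # -> [2. 3.]

As the quoted statement cautions, such a design shifts trust to the enclave and its attestation chain rather than eliminating trust assumptions altogether.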