2021
DOI: 10.48550/arxiv.2109.10870
Preprint

SoK: Machine Learning Governance

Varun Chandrasekaran,
Hengrui Jia,
Anvith Thudi
et al.

Abstract: The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society. In this paper, we develop the concept of ML governance to balance such benefits and risks, with the aim of achieving responsible applications of ML. Our approach first systematizes research towards ascertaining ownership of data and models, thus fostering a notion of identity specific to ML systems. Building on this foundation, we use identities to hold principals accountable for failures o…

Cited by 1 publication (1 citation statement)
References 154 publications
“…However, these approaches have been shown to cause utility degradation [27], or can be made ineffective using an adaptive query synthesis strategy [5]. Further, Chandrasekaran et al [5,6] provide theoretical insights to demonstrate that "model extraction is inevitable", even in a realistic setting with only hard labels, and even when models use randomised defenses. Hence, a model with a reasonably good accuracy would always leak information that could lead to model extraction.…”
Section: Defenses Against Model Stealing
confidence: 99%
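The claim in the citation statement — that a usefully accurate model leaks enough information through hard-label queries alone to enable extraction — can be illustrated with a minimal sketch. This is a hypothetical toy setup, not code from the paper: an attacker queries a black-box "victim" classifier, records only the predicted labels (no confidence scores), and trains a surrogate model on the query/label pairs.

```python
# Hypothetical sketch of hard-label model extraction (not from the paper).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim model: trained on private data the attacker never sees.
X_private = rng.normal(size=(500, 2))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# Attacker synthesizes queries and keeps only the hard labels.
X_query = rng.normal(size=(500, 2))
y_hard = victim.predict(X_query)  # hard labels only, no probabilities

# Surrogate trained purely on the observed query/label pairs.
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_query, y_hard)

# Agreement between surrogate and victim on fresh inputs measures
# how much of the victim's decision boundary leaked.
X_test = rng.normal(size=(1000, 2))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

Even with hard labels and a mismatched surrogate architecture, agreement is typically high on this toy problem, which is the intuition behind "model extraction is inevitable": every informative answer the victim gives narrows down its decision boundary.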