2021
DOI: 10.51593/2020ca013
Poison in the Well: Securing the Shared Resources of Machine Learning

Abstract: Modern machine learning often relies on open-source datasets, pretrained models, and machine learning libraries from across the internet, but are those resources safe to use? Previously successful digital supply chain attacks against cyber infrastructure suggest the answer may be no. This report introduces policymakers to these emerging threats and provides recommendations for how to secure the machine learning supply chain.
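One way to make the supply-chain concern concrete is integrity checking of downloaded ML assets. The sketch below is illustrative only (it is not a recommendation quoted from the report): it pins SHA-256 digests of a pretrained model and a dataset archive and verifies them before use, using only Python's standard library. The file paths and digest values are hypothetical placeholders.

```python
"""Minimal sketch: verify pinned SHA-256 digests of downloaded ML assets
(pretrained models, dataset archives) before loading them.
Paths and digests are hypothetical placeholders, not real values."""

import hashlib
from pathlib import Path

# Hypothetical manifest mapping each shared asset to the digest recorded
# when the asset was originally vetted (placeholder hex strings shown).
PINNED_DIGESTS = {
    "models/resnet50_pretrained.pt": "0" * 64,
    "data/train_split.tar.gz": "1" * 64,
}


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_assets(root: Path) -> None:
    """Raise if any pinned asset is missing or its digest has drifted."""
    for rel_path, expected in PINNED_DIGESTS.items():
        path = root / rel_path
        if not path.exists():
            raise FileNotFoundError(f"Expected asset missing: {path}")
        actual = sha256_of(path)
        if actual != expected:
            raise ValueError(
                f"Integrity check failed for {path}: expected {expected}, got {actual}"
            )


if __name__ == "__main__":
    verify_assets(Path("."))
    print("All pinned ML assets passed integrity checks.")
```

Digest pinning only detects tampering after the fact of vetting; it does not address poisoned content that was already present when the asset was first recorded.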

Cited by 4 publications (2 citation statements)
References 6 publications (6 reference statements)
“…54 Open-source tools and models relied on by the ML research community present similar opportunities for attack. 55 And the threat from poisoning extends to the process of machine learning in deployment, as in the example of federated learning described above.…”
Section: How To Make Machine Learning Work For Cyber Defense (mentioning, confidence: 99%)
“…ML models in particular are vulnerable at the start of their development where attacks on shared resources like ML libraries, pretrained models, and training datasets can be extremely difficult to detect. 60 Maintaining confidentiality, integrity, and accessibility has long been the gold standard in cybersecurity and the same applies for AI/ML systems. Competitions built around one or more of these objectives can serve as a means to assess current progress and uncover promising solutions.…”
Section: Trust and Explainability (mentioning, confidence: 99%)