2020
DOI: 10.1109/mdat.2020.2971217

Robust Machine Learning Systems: Challenges,Current Trends, Perspectives, and the Road Ahead

Abstract: Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and Internet-of-Things (IoT) due to their powerful decision-making capabilities. However, they are vulnerable to various security and reliability threats, at both hardware and software levels, that compromise their accuracy. These threats get aggravated in emerging edge ML devices that have stringent constraints in terms of resources (e.g., compute, memory, power/energy), and that therefore cannot employ costly sec…


Cited by 97 publications (33 citation statements)
References 233 publications (239 reference statements)
“…Despite the great success and popularity of deep learning in recent years, recent research has shown that DNNs have intrinsic weaknesses that threaten their security [267] [268] [269]. Starting from the work of Goodfellow et al. [270], many studies have been conducted with the purpose of identifying these weaknesses (adversarial attacks) and their countermeasures (adversarial defenses) [271] [272].…”
Section: Deep Learning Security
confidence: 99%
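The attack family this excerpt traces back to Goodfellow et al. is the fast gradient sign method (FGSM). A minimal sketch of the idea on a toy logistic-regression "model" (all weights, inputs, and the epsilon value below are illustrative assumptions, not from the cited papers):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """Perturb input x by eps in the direction that increases the loss.

    For logistic loss, dL/dx_i = (sigmoid(w.x) - y) * w_i, so the sign
    of each gradient component gives the per-feature attack direction.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [2.0, -1.0, 0.5]   # toy model weights (assumed)
x = [0.5, 0.2, -0.1]   # clean input with true label y = 1 (assumed)
y = 1
x_adv = fgsm(x, y, w, eps=0.1)

p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
# A small, bounded perturbation lowers the model's confidence in the
# true class; on deep networks the same recipe can flip the prediction.
```

The point of the sketch is only the mechanism: the perturbation budget is tiny per feature, yet it is chosen adversarially rather than randomly, which is why it degrades accuracy so effectively.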
“…Figure 18(c) demonstrates the variations in the number of crossbars for different networks across cell bit-size (addressability). Increasing the cell addressability by one order of magnitude reduces the number of crossbars by 93.72%, but the power and area costs associated with the corresponding high-precision peripherals and analog-digital inter-conversion circuits [7] severely limit throughput improvements. In our design space exploration, w = 2 emerged as a sweet spot in terms of balancing the area and energy trade-offs for the test bench.…”
Section: B. Crossbar Design Space Exploration
confidence: 99%
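The crossbar-count arithmetic behind a design-space exploration like this one can be sketched as follows: an M×N weight matrix of b-bit weights is bit-sliced onto R×C crossbars whose cells each store k bits. All dimensions below are illustrative assumptions, not the cited paper's values:

```python
import math

def num_crossbars(M, N, b, k, R=128, C=128):
    """Crossbars needed to map an MxN matrix of b-bit weights onto
    RxC crossbars with k-bit cells (bit-slicing across columns)."""
    cells_per_weight = math.ceil(b / k)   # column slices per weight
    cols_needed = N * cells_per_weight
    return math.ceil(M / R) * math.ceil(cols_needed / C)

# Example: a 512x512 layer with 8-bit weights (assumed sizes).
counts = {k: num_crossbars(512, 512, b=8, k=k) for k in (1, 2, 4, 8)}
# Doubling cell addressability halves the crossbar count here, which
# mirrors the trade-off in the excerpt: fewer crossbars, but costlier
# high-precision peripherals and ADC/DAC circuitry per crossbar.
```

This is only the mapping-cost side of the trade-off; the peripheral power/area penalty that caps the benefit is not modeled here.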
“…State-of-the-art networks rely on massive parameter counts (≥ 10^8) and manually designed architectures (with 10^1–10^3 layers) to surpass prior benchmarks and beat human-level performance [2], [3]. However, these algorithms demand enormous memory (≥ 10^9 bytes) and operational cost (≥ 10^9 operations per input), scaling commensurately with ever-growing datasets and network depth [4]–[7].…”
Section: Introduction
confidence: 99%
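The magnitudes quoted above follow from simple arithmetic; a back-of-the-envelope check, assuming fp32 weights and one multiply-accumulate per weight per input (both assumptions mine, not the excerpt's):

```python
# 1e8 parameters at 4 bytes each already approaches the 1e9-byte
# memory figure, before activations and optimizer state are counted.
params = 10**8
bytes_fp32 = params * 4        # 4e8 bytes of weight storage alone

# One multiply + one accumulate per weight per forward pass puts the
# per-input operation count at the quoted >= 1e9 order of magnitude.
ops_per_input = 2 * params     # 2e8 MAC-derived operations
```

The point is that memory and compute scale linearly with parameter count, so the quoted bounds are structural, not incidental.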
“…Learning-based algorithms can retrieve meaningful features from a large volume of data to predict outcomes accurately, and can reveal previously unknown hidden patterns in the data set [41]–[43]. Unlike traditional data processing systems, ML algorithms build models from existing data with little or no distributional assumptions for future predictions or decision making, which increases their performance tremendously [44].…”
Section: Introduction
confidence: 99%