2020
DOI: 10.1007/978-3-030-41025-4_6
A Deep Learning Attack Countermeasure with Intentional Noise for a PUF-Based Authentication Scheme

Cited by 2 publications (1 citation statement)
References 17 publications
“…The attacks discussed in the previous section have posed serious challenges for PUF designers and manufacturers. To tackle this issue, various countermeasures have been introduced in the literature, including controlled PUFs [60,93], re-configurable PUFs [60], and PUFs with noise-induced CRPs [94,91,87], to name a few. "Controlled PUFs" is the umbrella term for PUFs in which the adversary has only restricted access to the CRPs, either through obfuscation of the challenges/responses [26,28] or through the mechanisms used to feed the challenges and collect the responses [60,93,50,92,43,14].…”
Section: Resiliency Against ML Attacks
Citation type: mentioning
Confidence: 99%
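As a loose illustration of the two countermeasure families named in the citation statement (challenge/response obfuscation in controlled PUFs, and intentionally noise-induced CRPs), the sketch below wraps a toy PUF in Python. This is not the construction from the cited paper; the names `puf_response`, `controlled_puf_response`, `device_secret`, and `noise_rate` are hypothetical, and the PUF itself is modeled with a keyed hash purely so the example runs.

```python
import hashlib
import random

def puf_response(challenge: int) -> int:
    """Stand-in for a physical PUF: a deterministic, device-specific mapping.
    Modeled here as a keyed hash purely for illustration (hypothetical)."""
    device_secret = b"example-device"  # placeholder for device-unique physics
    digest = hashlib.sha256(device_secret + challenge.to_bytes(8, "big")).digest()
    return digest[0] & 1  # single response bit

def controlled_puf_response(challenge: int, noise_rate: float = 0.05) -> int:
    """Controlled-PUF-style wrapper (sketch): hash the challenge before it
    reaches the PUF, then flip the response bit with probability `noise_rate`
    to emulate intentionally noise-induced CRPs."""
    obfuscated = int.from_bytes(
        hashlib.sha256(challenge.to_bytes(8, "big")).digest()[:8], "big")
    response = puf_response(obfuscated)
    if random.random() < noise_rate:
        response ^= 1  # intentional noise corrupts the attacker's training data
    return response

if __name__ == "__main__":
    # An ML attacker harvesting CRPs now observes obfuscated challenges and
    # occasionally flipped responses, which degrades model accuracy.
    crps = [(c, controlled_puf_response(c)) for c in range(10)]
    print(crps)
```

The design intent behind both ideas is the same: deny the attacker a clean challenge-to-response mapping, either by hiding the true challenge/response values or by injecting errors that the legitimate verifier can tolerate but a learned model cannot easily average away.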