2019
DOI: 10.46586/tches.v2020.i1.1-36
Methodology for Efficient CNN Architectures in Profiling Attacks

Abstract: The side-channel community recently investigated a new approach, based on deep learning, to significantly improve profiled attacks against embedded systems. Previous works have shown the benefit of using convolutional neural networks (CNN) to limit the effect of some countermeasures such as desynchronization. Compared with template attacks, deep learning techniques can deal with trace misalignment and the high dimensionality of the data. Pre-processing is no longer mandatory. However, the performance of attack…

Cited by 82 publications (152 citation statements) | References 21 publications
“…From both experiments, we propose a few combinations that may work better for segmentation, although it is hard to draw a firm conclusion from the values of the quality metrics. To the best of our knowledge, no single best algorithm has been proposed for generalised medical image segmentation, but studies show that there are better choices one can make when designing the architecture [22][23][24]. From our experimental results, we observed that two combinations, Xavier weight initialization (also known as Glorot) with the Adam optimiser and cross-entropy loss (Glo Adam CE), and LeCun weight initialization with the Adam optimiser and cross-entropy loss (Lec Adam CE), worked best for most of the metrics in the 3D-UNet setting, while Xavier together with cross-entropy loss and the Tanh activation function (Glo tanh CE) worked best for the VGG-16 network.…”
Section: Discussion
Mentioning confidence: 99%
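The combinations named in the quote above can be sketched in plain numpy; the function names and tensor dimensions below are illustrative, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    """Xavier/Glorot uniform initialization: U(-limit, limit)
    with limit = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def lecun_normal(fan_in, fan_out):
    """LeCun normal initialization: N(0, 1/fan_in)."""
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))

def cross_entropy(probs, onehot_labels):
    """Mean cross-entropy loss over one-hot labels."""
    eps = 1e-12  # guards against log(0)
    return -np.mean(np.sum(onehot_labels * np.log(probs + eps), axis=1))

# A Glorot-initialized dense layer mapping 256 features to 10 classes.
W = glorot_uniform(256, 10)
print(W.shape)  # (256, 10)

# Uniform predictions over 10 classes give a loss of log(10) ≈ 2.3026.
labels = np.eye(10)[[0, 3]]
print(cross_entropy(np.full((2, 10), 0.1), labels))
```

These would be paired with the Adam optimiser during training (e.g. the Glo Adam CE combination); only the initializers and the loss are shown here.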
“…Reference [21] demonstrated the impact of choosing the right activation function on training dynamics and model performance. In [22], the authors proposed a strategy for selecting hyperparameters, covering both learnable parameters such as the weights and biases of each layer and architectural choices such as the number of filters, strides, kernel sizes, and the number of units per layer. In [23], the authors studied different loss functions used in deep neural networks to understand the impact of particular choices on learning dynamics in classification tasks.…”
Section: Related Work
Mentioning confidence: 99%
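A hyperparameter-selection strategy of the kind described above typically enumerates a search space of architectural choices; the ranges below are hypothetical, chosen only to illustrate the idea:

```python
from itertools import product

# Hypothetical CNN search space over the hyperparameters named in the
# quote: number of filters, kernel sizes, strides, and units per layer.
search_space = {
    "n_filters":   [8, 16, 32],
    "kernel_size": [3, 11, 25],
    "stride":      [1, 2],
    "dense_units": [64, 128],
}

# Exhaustive grid: one candidate architecture per combination.
configs = [dict(zip(search_space, values))
           for values in product(*search_space.values())]
print(len(configs))  # 3 * 3 * 2 * 2 = 36 candidate architectures
print(configs[0])    # {'n_filters': 8, 'kernel_size': 3, 'stride': 1, 'dense_units': 64}
```

Each candidate would then be trained and ranked by an attack metric (e.g. guessing entropy in the side-channel setting); the grid itself is the selection scaffold.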
“…The main advantage of filters and convolutional layers is their time-invariance property, which makes the network robust against desynchronization (e.g. shifting, jitter) [3,16]. Resynchronization pre-processing is therefore no longer necessary.…”
Section: Neural Network
Mentioning confidence: 99%
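The time-invariance property referenced above can be demonstrated with a minimal 1-D convolution; the trace, filter, and shift below are toy values, not data from the paper:

```python
import numpy as np

def conv1d(x, k):
    """'Valid' 1-D cross-correlation, the core operation of a conv layer."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

rng = np.random.default_rng(1)
trace = rng.normal(size=64)          # a toy side-channel trace
kernel = np.array([1.0, 2.0, 1.0])   # a toy learned filter

shift = 5                            # desynchronization: trace arrives 5 samples late
shifted = np.concatenate([np.zeros(shift), trace])

y = conv1d(trace, kernel)
y_shifted = conv1d(shifted, kernel)

# Shift-equivariance: the feature map moves with the input by the same offset.
print(np.allclose(y, y_shifted[shift:]))             # True
# A global max-pool on top then yields the same feature either way, which is
# why misaligned traces need no resynchronization pre-processing.
print(np.isclose(y.max(), y_shifted[shift:].max()))  # True
```

Strictly, convolution is shift-equivariant; it is the pooling stage stacked on top that turns this into the shift-invariance the quote refers to.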
“…As a consequence, the network is able to estimate functions F that are much more complex than the optimal one. To reduce the impact of overfitting, techniques such as data augmentation [3], noise addition [7], regularization, or a better-fitting architecture [16] can be used.…”
Section: Evaluation Metrics
Mentioning confidence: 99%
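The first two mitigations named above, data augmentation by random shifting and noise addition, can be sketched together for side-channel traces; the shift range and noise level are hypothetical parameters, not values from the cited works:

```python
import numpy as np

rng = np.random.default_rng(2)

def augment(trace, max_shift=10, noise_std=0.05):
    """One augmented copy of a trace: a random circular shift simulates
    desynchronization (cf. [3]); additive Gaussian noise acts as a
    regularizer (cf. [7]). Parameters are illustrative."""
    shift = rng.integers(-max_shift, max_shift + 1)
    shifted = np.roll(trace, shift)
    return shifted + rng.normal(0.0, noise_std, size=trace.shape)

trace = rng.normal(size=700)                      # a toy 700-sample trace
batch = np.stack([augment(trace) for _ in range(8)])
print(batch.shape)  # (8, 700): eight distinct training variants of one trace
```

Enlarging the effective training set this way makes it harder for the network to memorize sample positions, directly targeting the overfitting described in the quote.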