2023
DOI: 10.1007/978-3-031-26066-7_23
A 0.8 mW TinyML-Based PDM-to-PCM Conversion for In-Sensor KWS Applications

Cited by 4 publications (4 citation statements), published in 2023 (2) and 2024 (2) · References 14 publications
“…For the autoencoder, two loss functions were employed: the Mean Absolute Error (MAE) and the Fast-Fourier-Transform Mean Absolute Error (FFT-MAE). The FFT-MAE is a custom loss function introduced in [25], [26]. As described by (4), it returns the mean absolute error between the FFT of the model outputs and the FFT of the corresponding labels.…”
Section: Training (mentioning)
Confidence: 99%
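The FFT-MAE loss quoted above admits a compact implementation. The sketch below is a minimal illustration in TensorFlow; the framework choice and the decision to take the magnitude of the complex difference (rather than the difference of magnitudes) are assumptions, and the exact formulation of equation (4) in [25], [26] may differ.

```python
import tensorflow as tf

def fft_mae(y_true, y_pred):
    """FFT-MAE sketch: mean absolute error between the FFTs of the labels
    and of the model outputs (real-valued signals, shape (batch, samples)).
    Illustrative only; the exact formulation in the cited works may differ."""
    fft_true = tf.signal.rfft(y_true)   # complex spectrum of the labels
    fft_pred = tf.signal.rfft(y_pred)   # complex spectrum of the model outputs
    # Magnitude of the complex difference, averaged over frequency bins and batch.
    return tf.reduce_mean(tf.abs(fft_true - fft_pred))
```

Such a function can be passed as a custom loss when compiling a Keras model, alongside or combined with the plain time-domain MAE mentioned in the same excerpt.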
“…The proposed system is based on a tiny CNN, chosen for its proven effectiveness in image classification and its smaller number of parameters compared to other architectures. [22][23][24][25][26][27] The model is designed to classify images into 3 classes, representing different percentage intervals of Cloud Coverage (CCov) as reported in [28]: clear (CCov < 35%), mid-cloudy (35% ≤ CCov ≤ 65%) and cloudy (CCov > 65%).…”
Section: The Proposed System (mentioning)
Confidence: 99%
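The three coverage intervals quoted above map directly onto class labels. The following Python sketch is purely illustrative: only the thresholds come from the excerpt, while the function and label names are assumptions.

```python
def ccov_class(ccov_percent: float) -> str:
    """Map a Cloud Coverage percentage to one of the three classes quoted
    above (thresholds from the excerpt; names are illustrative)."""
    if ccov_percent < 35.0:
        return "clear"
    if ccov_percent <= 65.0:
        return "mid-cloudy"
    return "cloudy"
```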
“…[8][9][10] In particular, Convolutional Neural Networks (CNNs) have proven highly effective in image classification tasks, due to their ability to automatically capture local patterns and spatial features by leveraging convolutional filters. [11][12][13][14][15][16][17] Furthermore, techniques such as pooling and normalization contribute to the network's robustness and ability to generalize. [18][19][20] However, the use of NN-based cloud detection methods onboard satellites is constrained by their typically high computational and memory demands.…”
Section: Introduction (mentioning)
Confidence: 99%
“…[3] These aspects are even more advantageous in the context of In-Sensor Computing (ISC), where the processing circuits are moved close to the audio sensor, integrated into its auxiliary circuitry or into the same package, realizing a compact smart sensor. [4][5][6][7] A KWS pipeline involves three main stages: a signal-conditioning module to adapt the microphone output to the audio processing system, a feature extractor, and a classifier. The quality of the extracted audio features is of primary importance because it directly affects the complexity and accuracy of the classifier.…”
Section: Introduction (mentioning)
Confidence: 99%
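For reference, the three stages quoted above (signal conditioning, feature extraction, classification) can be arranged into a minimal processing skeleton. The sketch below is purely illustrative: the moving-average PDM decimator, the log-spectrum features, and the placeholder classifier are assumptions and do not reproduce the pipeline of the indexed paper.

```python
import numpy as np

def pdm_to_pcm(pdm_bits: np.ndarray, decimation: int = 64) -> np.ndarray:
    """Signal conditioning (illustrative): convert a 1-bit PDM stream to PCM by
    averaging and decimating. Real designs typically use CIC/FIR filter chains."""
    centered = pdm_bits.astype(np.float32) * 2.0 - 1.0          # {0,1} -> {-1,+1}
    trimmed = centered[: len(centered) // decimation * decimation]
    return trimmed.reshape(-1, decimation).mean(axis=1)         # crude decimator

def extract_features(pcm: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Feature extractor (illustrative): log magnitude spectra per frame.
    Assumes len(pcm) >= frame_len."""
    frames = [pcm[i : i + frame_len] for i in range(0, len(pcm) - frame_len + 1, hop)]
    spectra = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log(spectra + 1e-6)

def classify(features: np.ndarray) -> int:
    """Classifier placeholder: a trained TinyML model would be invoked here."""
    return int(features.mean() > 0.0)   # dummy decision for illustration only
```

A full run would then chain the stages, e.g. `classify(extract_features(pdm_to_pcm(pdm_stream)))`, with the placeholder classifier replaced by the trained keyword-spotting model.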