2020
DOI: 10.20944/preprints202002.0318.v2
Preprint

FilterNet: A Many-to-Many Deep Learning Architecture for Time Series Classification

Abstract: We present and benchmark FilterNet, a flexible deep learning architecture for time series classification tasks, such as activity recognition via multichannel sensor data. It adapts popular CNN and CNN-LSTM motifs which have excelled in activity recognition benchmarks, implementing them in a many-to-many architecture to markedly improve frame-by-frame accuracy, event segmentation accuracy, model size, and computational efficiency. We propose several model variants, evaluate them alongside other published models…
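The abstract describes a many-to-many CNN-LSTM design that predicts a label for every time step rather than a single label per window. As a rough sketch of what such an architecture can look like, the PyTorch snippet below is illustrative only: it is not the authors' FilterNet implementation, and the layer widths, kernel sizes, channel count, and class count are assumptions.

import torch
import torch.nn as nn

class ManyToManyCnnLstm(nn.Module):
    # in_channels, num_classes, and hidden are illustrative assumptions.
    def __init__(self, in_channels=6, num_classes=18, hidden=64):
        super().__init__()
        # Temporal convolutions extract local per-time-step features.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # The LSTM adds longer-range temporal context across the window.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # A per-time-step head gives one prediction per frame (many-to-many).
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):              # x: (batch, channels, time)
        h = self.conv(x)               # (batch, hidden, time)
        h = h.transpose(1, 2)          # (batch, time, hidden)
        h, _ = self.lstm(h)            # (batch, time, hidden)
        return self.head(h)            # (batch, time, num_classes)

# Usage: 8 windows of 128 samples from 6 sensor channels.
logits = ManyToManyCnnLstm()(torch.randn(8, 6, 128))
print(logits.shape)                    # torch.Size([8, 128, 18])

Producing one prediction per frame is what allows a model of this shape to be scored on both frame-by-frame accuracy and event segmentation accuracy, as the abstract notes.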


Cited by 2 publications (6 citation statements)
References 31 publications
“…However, there is no significant improvement once the batch size passes the 64-mark. The drop in performance as window length increases may not have come as a surprise as some authors have suggested that models containing LSTM layers suffer from poor parallelism across longer window lengths [61]. Although we did not investigate further, Chambers and Yoder [61] suggest that increasing the LSTM layers can enhance the model's capacity to learn longer window lengths.…”
Section: Tuning For Window Size Segmentation (mentioning)
confidence: 81%
“…The drop in performance as window length increases may not have come as a surprise as some authors have suggested that models containing LSTM layers suffer from poor parallelism across longer window lengths [61]. Although we did not investigate further, Chambers and Yoder [61] suggest that increasing the LSTM layers can enhance the model's capacity to learn longer window lengths. Instead, we investigated the interaction between the two parameters (Figure 13) and discovered that window length of 32 and any batch size between 64, 128 and 512 are good for the proposed model(Figures 13 & 14).…”
Section: Tuning For Window Size Segmentation (mentioning)
confidence: 81%
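The excerpts above discuss tuning window length and batch size together, and cite Chambers and Yoder's suggestion that stacking more LSTM layers helps the model handle longer windows. The sketch below illustrates such a sweep; the parameter ranges and the window-length-to-depth heuristic are assumptions for illustration, not values reported in either paper, and train_and_validate is a hypothetical helper.

import itertools

window_lengths = [32, 64, 128]          # assumed sweep range
batch_sizes = [64, 128, 512]            # assumed sweep range

def lstm_layers_for(window_length):
    # Assumed heuristic: use deeper recurrence for longer windows,
    # following the cited suggestion; the threshold here is arbitrary.
    return 1 if window_length <= 32 else 2

for window_length, batch_size in itertools.product(window_lengths, batch_sizes):
    num_layers = lstm_layers_for(window_length)
    # e.g. nn.LSTM(hidden, hidden, num_layers=num_layers, batch_first=True)
    # score = train_and_validate(model, window_length, batch_size)  # hypothetical
    print(f"window={window_length:>3}  batch={batch_size:>3}  lstm_layers={num_layers}")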