2021
DOI: 10.1609/aaai.v35i1.16103

Two-Stream Convolution Augmented Transformer for Human Activity Recognition

Abstract: Recognition of human activities is an important task due to its far-reaching applications such as healthcare system, context-aware applications, and security monitoring. Recently, WiFi based human activity recognition (HAR) is becoming ubiquitous due to its non-invasiveness. Existing WiFi-based HAR methods regard WiFi signals as a temporal sequence of channel state information (CSI), and employ deep sequential models (e.g., RNN, LSTM) to automatically capture channel-over-time features. Although being remarkab…
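
The abstract contrasts sequential models that capture only channel-over-time features with the paper's two-stream design, which also attends to time-over-channel features. The sketch below is a minimal illustration of that two-stream idea on a CSI tensor: one stream encodes the time-major view, the other encodes the transposed channel-major view, and the two are fused for classification. The layer choices, dimensions, concatenation-based fusion, and class/channel counts are illustrative assumptions, not the paper's actual THAT architecture.

```python
# Hedged sketch of the two-stream idea: channel-over-time vs. time-over-channel
# views of a CSI window. All sizes and the fusion step are assumptions.
import torch
import torch.nn as nn


class TwoStreamCSIClassifier(nn.Module):
    def __init__(self, num_channels=90, seq_len=256, d_model=128, num_classes=7):
        super().__init__()
        # Temporal stream: each time step is a vector of CSI channel readings.
        self.time_proj = nn.Linear(num_channels, d_model)
        self.time_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Channel stream: each CSI channel is a vector over time steps.
        self.chan_proj = nn.Linear(seq_len, d_model)
        self.chan_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(2 * d_model, num_classes)

    def forward(self, csi):  # csi: (batch, time, channel)
        t = self.time_encoder(self.time_proj(csi)).mean(dim=1)                   # channel-over-time
        c = self.chan_encoder(self.chan_proj(csi.transpose(1, 2))).mean(dim=1)   # time-over-channel
        return self.head(torch.cat([t, c], dim=-1))


# Example: a batch of 8 CSI windows, 256 packets x 90 subcarrier amplitudes.
logits = TwoStreamCSIClassifier()(torch.randn(8, 256, 90))
print(logits.shape)  # torch.Size([8, 7])
```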

Cited by 97 publications (53 citation statements)
References 24 publications

“…On this basis, Chen et al. argued that traditional LSTM networks ignored the backward direction's information during prediction and proposed ABLSTM for HAR with WiFi CSI [13]. Li et al. [19] showed that existing methods treated each frame as a single temporal point, which ignored the informative context, and considered only channel-over-time features while disregarding time-over-channel features. Hence, they proposed THAT and achieved SOTA performance.…”
Section: A. CSI-based HAR
confidence: 99%
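
The statement above credits ABLSTM [13] with reading the CSI sequence in both directions rather than only forward. The sketch below illustrates that idea with a bidirectional LSTM followed by a simple attention pooling over the hidden states; the pooling scheme and all sizes are illustrative assumptions, not the cited paper's implementation.

```python
# Hedged sketch of a bidirectional LSTM with attention pooling for CSI-based HAR.
import torch
import torch.nn as nn


class BiLSTMAttentionHAR(nn.Module):
    def __init__(self, num_channels=90, hidden=128, num_classes=7):
        super().__init__()
        # Bidirectional LSTM keeps both forward and backward context.
        self.lstm = nn.LSTM(num_channels, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # scalar score per time step
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, csi):                       # csi: (batch, time, channel)
        h, _ = self.lstm(csi)                     # (batch, time, 2 * hidden)
        w = torch.softmax(self.attn(h), dim=1)    # attention weights over time
        pooled = (w * h).sum(dim=1)               # weighted sum of hidden states
        return self.head(pooled)


logits = BiLSTMAttentionHAR()(torch.randn(8, 256, 90))
print(logits.shape)  # torch.Size([8, 7])
```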
“…There are two encoders in our model. Both are built from the Multiscale Convolution Augmented Transformer (MCAT) layers proposed in [19] and 1D convolution layers. Fig.…”
Section: B. Dualconfi Contrastive Learning
confidence: 99%
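
The statement describes encoders built by combining MCAT layers from [19] with 1D convolutions. The sketch below shows one plausible way to interleave the two; a standard transformer encoder layer stands in for MCAT, whose internals are not given in this excerpt, and all dimensions are assumptions rather than the cited model's configuration.

```python
# Hedged sketch of an encoder interleaving attention layers with 1D convolutions.
import torch
import torch.nn as nn


class ConvAugmentedEncoder(nn.Module):
    def __init__(self, num_channels=90, d_model=128, num_layers=2):
        super().__init__()
        self.proj = nn.Linear(num_channels, d_model)
        self.blocks = nn.ModuleList()
        for _ in range(num_layers):
            self.blocks.append(nn.ModuleDict({
                # Stand-in for an MCAT layer (self-attention + feed-forward).
                "attn": nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
                # 1D convolution along the time axis captures local context.
                "conv": nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
            }))

    def forward(self, csi):                       # csi: (batch, time, channel)
        x = self.proj(csi)
        for blk in self.blocks:
            x = blk["attn"](x)
            # Conv1d expects (batch, d_model, time); transpose in and out.
            x = torch.relu(blk["conv"](x.transpose(1, 2))).transpose(1, 2)
        return x                                  # (batch, time, d_model)


features = ConvAugmentedEncoder()(torch.randn(8, 256, 90))
print(features.shape)  # torch.Size([8, 256, 128])
```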