2023
DOI: 10.1109/tcss.2022.3158480
Efficient Driver Anomaly Detection via Conditional Temporal Proposal and Classification Network

Cited by 11 publications (6 citation statements)
References 40 publications
“…Also, Hu et al. [13] proposed a novel clustering supervised contrastive loss to optimize the distribution of the extracted representation vectors and improve model performance. Su et al. [33] proposed a two-stage anomaly detection and classification framework that ensures the model's understanding of deep NDRA features as well as robustness to open-set anomalies. However, NDRA classification tasks usually exhibit high background variation and weak driver-subject variation.…”
Section: Video Anomaly Detection (confidence: 99%)
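The clustering supervised contrastive loss of Hu et al. [13] is not reproduced here, but the plain supervised contrastive loss it builds on can be sketched in a few lines. This is a minimal illustration assuming L2-normalised embeddings and a temperature `tau`; the function name is illustrative, not from the cited paper.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over embeddings z of shape (N, D).

    For each anchor, pulls same-label samples together and pushes
    different-label samples apart in the normalised embedding space.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise rows
    sim = z @ z.T / tau                                # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # log-softmax over each anchor's row (the denominator spans all non-self pairs)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(z), dtype=bool)
    # negative mean log-probability over each anchor's positives
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor[pos.any(axis=1)].mean()
```

Tightly clustered same-class embeddings yield a lower loss than mixed ones, which is what drives the improved distribution of representation vectors that the citation describes.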
“…In the experiments on the overall approach, Deep SVDD [52], Deep EPCC [53], DAD [12], DC-EPCC [54], CL-MOD [50], DADCNet [33], ConvGRU [55], DCHS [56], SuMoCo [57], and TDAD [58] were used as comparative algorithms, with DAD [12] as the baseline model. Comparison experiments were carried out on the two views and the two modalities in the DAD dataset, as well as on the fused views. The specific experimental results are shown in Tab.…”
Section: Comparison Experiments (confidence: 99%)
“…Various multiview multimodal methods have been proposed with different emphases. Some propose novel learning methods (e.g., supervised contrastive learning [15]), while others [1, 4, 21-23] focus on handling the temporal dimension. However, how to combine heterogeneous data in DMS has rarely been studied.…”
Section: Multimodal Driver Monitoring Systems (confidence: 99%)
“…However, how to combine heterogeneous data in DMS has rarely been studied. Most previous methods [15, 18, 23] adopt decision-level fusion by averaging the scores, while Ortega et al. [21] propose to fuse data at the input level by concatenation. These strategies cannot handle modality/view interaction well and hence tend to underperform.…”
Section: Multimodal Driver Monitoring Systems (confidence: 99%)
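The two fusion strategies contrasted in the statement above, decision-level fusion by score averaging and input-level fusion by concatenation, can be sketched as follows. The function names and array shapes are illustrative assumptions, not from the cited papers.

```python
import numpy as np

def decision_level_fusion(scores_per_view):
    """Each view/modality runs its own model; only the anomaly scores are averaged."""
    return np.mean(scores_per_view, axis=0)

def input_level_fusion(features_per_view):
    """Raw features are concatenated along the channel axis before a single model sees them."""
    return np.concatenate(features_per_view, axis=-1)
```

Neither sketch models cross-view or cross-modality interaction, which is exactly the limitation the citation statement points out: averaging discards per-view evidence structure, and plain concatenation leaves the interaction to be learned implicitly, if at all.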
“…Therefore, distractions can be identified from the driver's gaze or from their behavior during secondary driving tasks. In particular, various existing sensors can capture the driver's gaze and movements, from which driver distraction states can then be identified [8,9].…”
Section: Introduction (confidence: 99%)