2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI)
DOI: 10.1109/bhi.2018.8333405

A novel channel-aware attention framework for multi-channel EEG seizure detection via multi-view deep learning

Cited by 53 publications (31 citation statements). References 11 publications.
“…For deeper layers, however, the hierarchical nature of neural networks means it is much harder to understand what a weight is applied to. The analysis of model activations was used in multiple studies [212,194,87,83,208,167,154,109]. This kind of inspection method usually involves visualizing the activations of the trained model over multiple examples, and thus inferring how different parts of the network react to known inputs.…”
Section: Inspection of Trained Models
confidence: 99%
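The activation-analysis idea described above can be sketched minimally: run a trained model over a batch of known inputs, record the hidden-layer activations, and summarize how each unit responds. The toy two-layer network and its weights here are purely illustrative, not any model from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network; weights are random and illustrative only.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 2))

def forward_with_activations(x):
    """Return the prediction and the hidden-layer activations."""
    h = np.maximum(0.0, x @ W1)   # ReLU hidden layer
    return h @ W2, h

# Collect hidden activations over a batch of known inputs ...
X = rng.normal(size=(100, 16))
_, H = forward_with_activations(X)

# ... and summarize how each hidden unit reacts on average.
mean_act = H.mean(axis=0)
print(mean_act.shape)  # -> (8,): one summary value per hidden unit
```

In practice the same pattern is applied per class or per input region (e.g. averaging activations over ictal vs. non-ictal EEG fragments) to infer what each part of the network responds to.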
See 1 more Smart Citation
“…For deeper layers, however, the hierarchical nature of neural networks means it is much harder to understand what a weight is applied to. The analysis of model activations was used in multiple studies [212,194,87,83,208,167,154,109]. This kind of inspection method usually involves visualizing the activations of the trained model over multiple examples, and thus inferring how different parts of the network react to known inputs.…”
Section: Inspection Of Trained Modelsmentioning
confidence: 99%
“…Occlusion sensitivity techniques [92,26,175] use a similar idea, by which the decisions of the network are analyzed when different parts of the input are occluded. (Table residue, inspection-method categories: [135,211,86,34,87,200,182,122,170,228,164,109,204,85,25]; analysis of activations [212,194,87,83,208,167,154,109]; input-perturbation network-prediction correlation maps [149,191,67,16,150]; generating input to maximize activation [188,144,160,15]; occlusion of input [92,26,175].) Several studies used backpropagation-based techniques to generate input maps that maximize activations of specific units [188,144,160,15]. These maps can then be used to infer the role of specific neurons, or the kind of input they are sensitive to.…”
Section: Inspection of Trained Models
confidence: 99%
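The occlusion-sensitivity idea above admits a compact sketch: slide a mask over the input, re-run the model, and score each region by how much the output changes. The `model` interface here is a hypothetical scalar-scoring callable, not any architecture from the cited works.

```python
import numpy as np

def occlusion_map(model, x, patch=4, fill=0.0):
    """Score each patch of a 1-D signal by how much occluding it
    changes the model's output. `model` maps a signal to a scalar."""
    base = model(x)
    scores = np.zeros(len(x) // patch)
    for i in range(len(scores)):
        occluded = x.copy()
        occluded[i * patch:(i + 1) * patch] = fill  # mask one patch
        scores[i] = abs(base - model(occluded))
    return scores

# Toy model whose score depends only on the first 4 samples,
# so occluding that region should perturb the output most.
model = lambda x: x[:4].sum()
x = np.ones(16)
s = occlusion_map(model, x, patch=4)
print(np.argmax(s))  # -> 0
```

For images or multi-channel EEG the same loop runs over 2-D patches or whole channels, and the resulting score map is visualized as a sensitivity heatmap.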
“…In our experiment, each EEG fragment is labeled based on the ground truth as belonging to one of two classes: ictal and non-ictal states. Taking the computational expense into consideration, we adopt hold-out validation in the same way as [ 17 , 32 , 33 ]. Note that holding out portions of the dataset is done in a manner similar to cross-validation.…”
Section: Results
confidence: 99%
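Hold-out validation, as opposed to full cross-validation, fits a single fixed train/test split. A minimal sketch (the 20% test fraction is an assumption for illustration, not a figure from the excerpt):

```python
import numpy as np

def hold_out_split(n, test_frac=0.2, seed=0):
    """Shuffle indices once and hold out a fixed test fraction
    (a single split, with no re-folding as in cross-validation)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_test = int(n * test_frac)
    return idx[n_test:], idx[:n_test]  # train indices, test indices

train_idx, test_idx = hold_out_split(100, test_frac=0.2)
print(len(train_idx), len(test_idx))  # -> 80 20
```

This is cheaper than k-fold cross-validation (one training run instead of k), which matches the computational-expense rationale given in the excerpt.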
“…ChannelAtt [18]. ChannelAtt adopts fully connected multi-view learning to soft-select critical channels from multi-channel biosignals.…”
Section: Methods
confidence: 99%
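The soft channel selection attributed to ChannelAtt above can be sketched as attention over channels: score each channel, normalize the scores with a softmax, and fuse channels by their weights. This is a minimal illustration of the idea, not the paper's exact architecture; the attention vector `w` and the 18-channel/32-feature shapes are assumptions.

```python
import numpy as np

def soft_channel_select(X, w):
    """Soft channel selection via attention (sketch of the idea,
    not ChannelAtt's exact model).

    X: (channels, features) multi-channel representation.
    w: (features,) attention parameter vector (illustrative).
    """
    scores = X @ w                           # one score per channel
    alpha = np.exp(scores - scores.max())    # stable softmax ...
    alpha /= alpha.sum()                     # ... attention weights
    return alpha, alpha @ X                  # weights, fused features

rng = np.random.default_rng(1)
X = rng.normal(size=(18, 32))   # e.g. 18 EEG channels, 32 features each
w = rng.normal(size=32)
alpha, fused = soft_channel_select(X, w)
print(fused.shape)  # -> (32,)
```

Because the weights are continuous rather than a hard 0/1 mask, every channel contributes to the fused representation, but critical channels dominate; the weights themselves can also be inspected for interpretability.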
“…Choi et al [16] and Ma et al [17] adopted attention mechanisms to explain medical codes (e.g., procedure, diagnosis, and medication codes) from electronic health records (EHRs). Yuan et al [18] first exploited an attention mechanism based on multi-view learning (i.e., ChannelAtt) to achieve soft channel selection for EEG seizure detection. In general, there are two differences between ChannelAtt and our proposed FusionAtt model.…”
Section: Related Work
confidence: 99%