2020
DOI: 10.1109/lsp.2020.3020215

Disentangled Adversarial Autoencoder for Subject-Invariant Physiological Feature Extraction

Abstract: Recent developments in biosignal processing have enabled users to exploit their physiological status for manipulating devices in a reliable and safe manner. One major challenge of physiological sensing lies in the variability of biosignals across different users and tasks. To address this issue, we propose an adversarial feature extractor for transfer learning to exploit disentangled universal representations. We consider the trade-off between task-relevant features and user-discriminative information by intro…
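
The abstract's central idea — an encoder whose latent code is split into a task-relevant part and a user-discriminative part, trained against an adversarial subject classifier — can be illustrated with a minimal sketch. This is not the authors' implementation; every module name, layer size, and dimension below is an assumption made purely for illustration.

```python
# Hedged sketch (not the paper's code) of a subject-adversarial autoencoder:
# the encoder splits its latent code into a task-relevant part z_task and a
# subject-related part z_subj; a decoder reconstructs the input, a task head
# classifies from z_task, and an adversarial subject head tries to recover the
# subject ID from z_task.  All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class DisentangledAE(nn.Module):
    def __init__(self, in_dim=64, z_task_dim=16, z_subj_dim=16,
                 n_classes=4, n_subjects=10):
        super().__init__()
        self.z_task_dim = z_task_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, z_task_dim + z_subj_dim))
        self.decoder = nn.Sequential(nn.Linear(z_task_dim + z_subj_dim, 128),
                                     nn.ReLU(), nn.Linear(128, in_dim))
        self.task_head = nn.Linear(z_task_dim, n_classes)          # task label from z_task
        self.adv_subject_head = nn.Linear(z_task_dim, n_subjects)  # adversary on z_task

    def forward(self, x):
        z = self.encoder(x)
        z_task, z_subj = z[:, :self.z_task_dim], z[:, self.z_task_dim:]
        x_hat = self.decoder(torch.cat([z_task, z_subj], dim=1))
        return x_hat, self.task_head(z_task), self.adv_subject_head(z_task)
```

During training, the subject head would be fit to predict the user from z_task while the encoder receives the opposite gradient, which is one way to realize the trade-off between task-relevant features and user-discriminative information that the abstract refers to.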

Cited by 21 publications (9 citation statements)
References 19 publications

“…6(k) will result in the overall network structure as shown in Fig. 7, where an adversary network is attached since Z_2 is (conditionally) independent of S. This 5-node graph model justifies a recent work on a partially disentangled A-CVAE by [29]. Each factor block is realized by a DNN, e.g., parameterized by θ for p_θ(z_1, z_2 | x), and all of the networks except for the adversarial network are optimized to minimize their corresponding loss functions, including L(ŷ, y), as follows:…”
Section: Training (supporting)
confidence: 74%
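
The alternating update implied by the quote above — every factor block minimizes its own loss while the adversary attached to Z_2 is trained separately — can be sketched as follows. The interfaces enc, clf, dec, adv, the optimizers, and the weight lam are hypothetical and are not taken from [29] or the citing paper.

```python
# Hedged sketch of one alternating training step: the main networks (encoder
# p_theta(z1, z2 | x), classifier, decoder) minimize task and reconstruction
# losses while *maximizing* the adversary's subject loss on z2; the adversary
# is then updated on its own objective.  All interfaces are assumptions.
import torch
import torch.nn.functional as F

def train_step(x, y, s, enc, clf, dec, adv, opt_main, opt_adv, lam=0.1):
    # 1) main networks (opt_main holds enc/clf/dec parameters only)
    z1, z2 = enc(x)
    y_hat = clf(z1)
    x_hat = dec(torch.cat([z1, z2], dim=1))
    adv_loss = F.cross_entropy(adv(z2), s)               # adversary's subject loss
    main_loss = (F.cross_entropy(y_hat, y)               # L(y_hat, y)
                 + F.mse_loss(x_hat, x)                   # reconstruction
                 - lam * adv_loss)                        # push z2 away from s
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()

    # 2) adversary only: learn to predict the subject label s from a detached z2
    #    (opt_adv.zero_grad() also clears the stale gradients from step 1).
    adv_only_loss = F.cross_entropy(adv(z2.detach()), s)
    opt_adv.zero_grad()
    adv_only_loss.backward()
    opt_adv.step()
    return main_loss.item(), adv_only_loss.item()
```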
“…We finally compare the performance of AutoBayes with the benchmark competitor models from [5], [29], [41]-[43] in Table 2. It can be seen that AutoBayes outperforms the state of the art on all datasets except QMNIST.…”
Section: Results (mentioning)
confidence: 99%
“…We envision an adversarial shared-private model similar to [20], where some channels are shared among data sources (as in our approach) but private (data-source-specific) input can be incorporated. Our approach can also easily be adapted to learn representations that are invariant to other EEG variation factors, e.g., participant ID, by adding an additional adversarial classifier [21], [22].…”
Section: Discussion (mentioning)
confidence: 99%
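
A shared-private split of the kind envisioned in the quote above can be sketched as follows; the channel split, layer sizes, and module names are illustrative assumptions, not the cited architecture.

```python
# Illustrative sketch of a shared-private encoder: one shared network handles
# the channels common to all data sources, a small private network handles the
# source-specific channels, and the two codes are concatenated.  An extra
# adversarial head on the shared code could censor participant ID, as the
# quote suggests.  All names and sizes are assumptions.
import torch
import torch.nn as nn

class SharedPrivateEncoder(nn.Module):
    def __init__(self, shared_ch, private_ch, z_dim=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(shared_ch, 64), nn.ReLU(),
                                    nn.Linear(64, z_dim))
        self.private = nn.Sequential(nn.Linear(private_ch, 32), nn.ReLU(),
                                     nn.Linear(32, z_dim))

    def forward(self, x_shared, x_private):
        # concatenate the source-invariant and source-specific codes
        return torch.cat([self.shared(x_shared), self.private(x_private)], dim=1)
```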
“…: R^K → R^M. Previous research [6, 9, 10, 11, 12, 13, 14, 15] has established the technique of learning subject-invariant representations by training models in the presence of an adversarial subject classifier model. a) Marginal Mutual Information: We briefly explain how such an adversarial classifier can be used to reduce the mutual information I(z; s) between the representation and the subject label.…”
Section: A. Mutual Information Estimation Methods (mentioning)
confidence: 99%
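
One common way to realize such an adversarial subject classifier is a gradient-reversal layer: the classifier is trained to predict the subject label s from z, while the encoder receives the negated gradient, which discourages subject information in z and thereby lowers I(z; s). The sketch below is generic rather than the cited method; the scaling factor lam and the layer sizes are assumptions.

```python
# Hedged sketch of a gradient-reversal-based subject adversary: forward pass
# is the identity, backward pass flips the gradient sign so a single backward
# call trains the subject head normally while pushing the encoder to remove
# subject information from z.  Sizes and lam are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # pass the gradient back with a flipped sign, scaled by lam
        return -ctx.lam * grad_out, None

class SubjectAdversary(nn.Module):
    def __init__(self, z_dim, n_subjects, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                  nn.Linear(64, n_subjects))

    def forward(self, z):
        return self.head(GradReverse.apply(z, self.lam))
```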
“…In order to evaluate the proposed regularization approaches as described in Equation 4 and Table I, we perform experiments with several challenging real-world datasets (see Section IV-A). For each dataset, we explore all of the censoring estimation procedures described above in Algorithms 1, 2, 3, 4, 5, 6, 7, 8, 9, 12, 13, 14, 15, 16, and 17. We first search for promising hyperparameter ranges (see Section IV-C), then evaluate the most promising subset of hyperparameters using k-fold cross-validation and evaluate our AutoTransfer method on the resulting collection of models (see Section IV-D).…”
Section: Methods (mentioning)
confidence: 99%
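
As a generic illustration of the evaluation protocol mentioned in the quote above (a hyperparameter search followed by k-fold cross-validation), the following sketch uses scikit-learn's KFold. The build_model, fit, and score interfaces are placeholders; the censoring procedures and the AutoTransfer method themselves are not reproduced here.

```python
# Generic k-fold cross-validation sketch for scoring one hyperparameter
# setting.  `build_model` is a hypothetical factory returning an object with
# sklearn-style fit/score methods; nothing here is taken from the cited work.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(X, y, hyperparams, build_model, k=5, seed=0):
    scores = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True,
                                    random_state=seed).split(X):
        model = build_model(**hyperparams)                   # fresh model per fold
        model.fit(X[train_idx], y[train_idx])                # train on k-1 folds
        scores.append(model.score(X[val_idx], y[val_idx]))   # score on held-out fold
    return float(np.mean(scores)), float(np.std(scores))
```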