2023
DOI: 10.1007/978-3-031-26348-4_10
LatentGaze: Cross-Domain Gaze Estimation Through Gaze-Aware Analytic Latent Code Manipulation

Cited by 9 publications (9 citation statements) | References 27 publications
“…Gaze SDA (Krafka et al 2016; He et al 2019; Yu, Liu, and Odobez 2019b) and UDA (Kellnhofer et al 2019; Lee et al 2022) methods usually fine-tune the source-domain pretrained model on a few labeled or unlabeled target-domain samples, respectively, to improve cross-domain gaze estimation performance. For example, gaze SDA methods are often based on meta-learning (Park et al 2019), gaze difference (Liu et al 2018, 2019) and gaze decomposition (Shi 2020, 2022), while gaze UDA methods are often based on adversarial learning (Wang et al 2019; Lahiri, Agarwalla, and Biswas 2018), teacher-student networks (He et al 2019; Liu et al 2021), representation learning (Guo et al 2021), rotation consistency (Bao et al 2022) and jitter (Liu et al 2022).…”
Section: Related Work
“…These methods typically use adversarial learning (Cheng, Bao, and Lu 2022) or adversarial attacks (Xu, Wang, and Lu 2023) to eliminate or disturb gaze-irrelevant factors or features. Moreover, some gaze SDA, UDA or redirection methods also contain gaze DG modules, such as (Park et al 2019; Bao et al 2022; Lee et al 2022). However, their cross-domain performance still has room for improvement.…”
Section: Related Work
“…For DG methods, we choose CDG (Wang et al 2022), PureGaze and Xu et al's method (Xu, Wang, and Lu 2023) for comparison and use the results reported by the authors. Additionally, the results of SOTA UDA methods, including PnP-GA (Liu et al 2021), RUDA (Bao et al 2022), CRGA (Wang et al 2022), LatentGaze (Lee et al 2022), Liu et al's work (Liu et al 2022) and UnReGA (Cai et al 2023), are included as a reference.…”
Section: Experiment Details