2022
DOI: 10.31234/osf.io/ph4q8
Preprint
Head-Mounted Mobile Eye-Tracking in the Domestic Dog: A New Method

Abstract: Humans rely on dogs for countless tasks, ranging from companionship to highly specialized detection work. In their daily lives, dogs must navigate a human-built visual world, yet comparatively little is known about what dogs visually attend to as they move through their environment. Real-world eye-tracking, or head-mounted eye-tracking, allows participants to freely move through their environment, providing more naturalistic results about visual attention while interacting with objects and agents. In dogs, rea…

Cited by 2 publications (4 citation statements)
References 38 publications
“…-test the perception of tail wagging parameters in humans and dogs (and ideally in non-human primates and other canids as well) through neuroimaging and physiological studies (e.g. expose both humans and dogs to tail wagging dogs and measure attention parameters such as eye fixations) [111][112][113][114].…”
Section: Recommendations and Future Directions
Confidence: 99%
“…Following MemX [12], we first apply Gaussian Relaxation to convert those gaze points g_j into heatmaps and apply them as visual masks on the corresponding scene images. The intuition is to focus the model on regions closer to the gaze points, as the locations near the gaze points are areas of potential interest to subjects [12,61]. This can be denoted as I′_i = VM(I_i, g_i), where I′_i is the masked scene image at step i, and VM refers to this Visual Masking processing.…”
Section: Complete Predictor
Confidence: 99%
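The masking step quoted above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the cited authors' code: the Gaussian width `sigma` and the normalization are assumptions, and `visual_mask` stands in for the VM(I_i, g_i) operation described in the statement.

```python
import numpy as np

def gaussian_heatmap(shape, gaze_xy, sigma=20.0):
    """Relax a single gaze point (x, y) into a 2-D Gaussian heatmap.

    shape: (H, W) of the scene image; sigma (pixels) is a hypothetical value.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    d2 = (xs - gx) ** 2 + (ys - gy) ** 2
    heat = np.exp(-d2 / (2.0 * sigma ** 2))
    return heat / heat.max()  # peak value 1 at the gaze point

def visual_mask(image, gaze_xy, sigma=20.0):
    """Sketch of VM(I_i, g_i): weight the scene image by the gaze heatmap,
    emphasising regions near the gaze point and suppressing the rest."""
    heat = gaussian_heatmap(image.shape[:2], gaze_xy, sigma)
    return image * heat[..., None]  # broadcast over colour channels

# Example: a 1080p frame with the gaze near the image centre
frame = np.ones((1080, 1920, 3), dtype=np.float32)
masked = visual_mask(frame, gaze_xy=(960, 540))
```

The masked frame keeps full intensity at the fixated location and falls off smoothly with distance, which is the "focus on regions closer to the gaze points" intuition in the quote.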
“…The world camera is essentially an Insta360 GO 2 (1920×1080@30fps) camera that collects the visual contents adjusted with the canine's FoV. We capture 30 frames per second in this work, as this is adequate for detecting dog eye movement patterns such as fixations, the durations of which are typically longer than 100 ms [61,74]. A higher frame rate would unnecessarily reduce battery lifespan.…”
Section: Hardware Design
Confidence: 99%
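The frame-rate claim in this statement follows from simple arithmetic, sketched below (illustrative only; the 100 ms bound is the figure quoted in the statement, not a new measurement):

```python
# At 30 fps, how many consecutive frames does a minimum-length fixation span?
FPS = 30
FRAME_INTERVAL_MS = 1000 / FPS   # ~33.3 ms between frames
MIN_FIXATION_MS = 100            # typical lower bound on fixation duration [61,74]

frames_per_fixation = MIN_FIXATION_MS / FRAME_INTERVAL_MS
print(frames_per_fixation)  # 3.0
```

A fixation of at least 100 ms covers about three consecutive frames at 30 fps, enough to register the event, so a higher frame rate would only cost battery life, as the statement argues.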