2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis
DOI: 10.1109/ivmspw.2011.5970367
Using human experts' gaze data to evaluate image processing algorithms

Cited by 7 publications (7 citation statements)
References 10 publications
“…The expert dermatologist group was instructed to "examine and describe each image verbally as if teaching the trainee to make a diagnosis based on the image." Physician assistant students were recruited to serve as 'trainees' in order to motivate the expert dermatologists through the modified Master-Apprentice model [Beyer and Holtzblatt 1997; Vaidyanathan et al. 2011]. We also recorded the experts' verbal narration.…”
Section: Methods (mentioning)
confidence: 99%
“…For example, in addition to perceiving the presence of certain cues, they also perceive the absence of other critical cues, performing meaningful integration of that information. Characterizing experts' perceptual and conceptual expertise by capturing their viewing behavior and spoken description will improve our understanding of how experts perform such complex cognitive tasks using their domain knowledge, and this will benefit image informatics systems [Vaidyanathan et al. 2011; Li et al. 2012]. This work focuses on how the RQA method [Anderson et al. 2013] can be extended to understand the effect of perceptual expertise on eye movement patterns and its potential use in investigating the interactions between experts' eye movements and spoken description.…”
Section: Introduction (mentioning)
confidence: 99%
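The RQA method mentioned in the excerpt above treats a scanpath as recurrent wherever two fixations land close together in the image. As a rough, illustrative sketch only (not the cited authors' code, and with a hypothetical pixel-distance threshold), a basic recurrence-rate measure over a fixation sequence could look like this:

```python
# Illustrative sketch of a recurrence-rate measure over a fixation sequence.
# The threshold `radius_px` is a hypothetical parameter choice, not a value
# taken from Anderson et al. [2013] or the citing work.
import numpy as np

def recurrence_rate(fixations, radius_px=64.0):
    """Fraction of fixation pairs that fall within `radius_px` of each other.

    fixations: (N, 2) array-like of fixation x/y coordinates in pixels.
    """
    fix = np.asarray(fixations, dtype=float)
    n = len(fix)
    # Pairwise Euclidean distances between all fixations.
    dists = np.linalg.norm(fix[:, None, :] - fix[None, :, :], axis=-1)
    # Recurrence matrix: True where two distinct fixations are "close".
    rec = (dists <= radius_px) & ~np.eye(n, dtype=bool)
    # Recurrence rate = recurrent pairs / all off-diagonal pairs.
    return rec.sum() / (n * (n - 1))

# Example: a short synthetic scanpath that revisits an earlier region.
scanpath = [(100, 120), (400, 300), (105, 118), (500, 90)]
print(recurrence_rate(scanpath))
```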
“…Extending Vaidyanathan et al. [2015a], we also include the K-means method to encode fixation sequences. We use K-means and Lab color features as they have been particularly useful for dermatological images [Bosman et al. 2010; Vaidyanathan et al. 2011]. Each image is first converted into Lab color space, where the L channel represents illumination, the a channel indicates redness-greenness, and the b channel indicates blueness-yellowness in the image.…”
Section: Visual Units (mentioning)
confidence: 99%
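To make the encoding step described in the excerpt above concrete, the following is a minimal, hypothetical Python sketch of clustering an image's pixels in Lab color space with K-means and labeling fixations by the cluster they land on. The cluster count, library choices (scikit-image, scikit-learn), and function names are assumptions for illustration, not details from the cited work.

```python
# Minimal sketch (assumed parameters, e.g. k=5 clusters) of K-means over
# Lab color features, followed by encoding fixations with cluster labels.
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def lab_cluster_labels(rgb_image, n_clusters=5, random_state=0):
    """Cluster an RGB image's pixels in Lab space; return a per-pixel label map."""
    # Convert to Lab: L = lightness, a = red-green axis, b = blue-yellow axis.
    lab = color.rgb2lab(rgb_image)
    pixels = lab.reshape(-1, 3)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    labels = km.fit_predict(pixels)
    return labels.reshape(rgb_image.shape[:2])

def encode_fixations(label_map, fixations):
    """Map each (x, y) fixation to the color-cluster label under it."""
    return [int(label_map[int(y), int(x)]) for x, y in fixations]

# Usage (hypothetical): label_map = lab_cluster_labels(image);
# sequence = encode_fixations(label_map, [(120, 88), (300, 210)])
```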
“…The capture device is usually a physical piece of equipment that records a measurable human output. For example, an observer's eye movements and related measurements can be captured objectively and with adequate fidelity by an eye tracker under the right conditions (Vaidyanathan et al. 2011; Wang et al. 2013), and an oximeter can be used to measure a person's blood oxygen level and pulse (Bethamcherla et al. 2015). Similarly, linguistic sensing by definition involves capturing and objectively measuring the linguistic signal response produced by a language user.…”
Section: Introduction (mentioning)
confidence: 99%