2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)
DOI: 10.1109/isbi52829.2022.9761489
Visual Attention Analysis Of Pathologists Examining Whole Slide Images Of Prostate Cancer

Cited by 4 publications (3 citation statements). References 12 publications.
“…The process of narrative annotation also contains helpful information in essence. By exploring the visual attention of doctors browsing images and the process of their scanning trajectories, Chakraborty et al (34) found that the feature regions of the algorithm’s tasks and the lesions in the image are, to a certain extent, strongly correlated, which reflects the pathologists’ diagnostic logic. The annotators draw the object’s bounding box with the mouse and add class labels through voice.…”
Section: Related Study
Mentioning confidence: 99%
“…In Ref. 27, Chakraborty et al employed a custom‐built CNN model called Prostate AttentionNet (ProstAttNet) to predict visual attention. Nagpal et al developed a DL system to classify between three Gleason patterns and non‐tumor.…”
Section: Related Work
Mentioning confidence: 99%
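The excerpt above refers to a custom CNN (ProstAttNet) that predicts pathologists' visual attention on whole slide images. The paper's actual architecture is not reproduced here; the following is only a minimal sketch, assuming a generic PyTorch encoder-decoder that maps an RGB slide patch to a single-channel attention heatmap. The class name AttentionHeatmapNet, the layer sizes, and the patch size are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of a CNN that predicts a visual-attention heatmap from a
# whole-slide-image patch. This is a generic stand-in, NOT ProstAttNet.
import torch
import torch.nn as nn

class AttentionHeatmapNet(nn.Module):
    """Hypothetical encoder-decoder producing attention values in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-pixel attention score in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: predict an attention map for one 256x256 RGB patch.
model = AttentionHeatmapNet()
patch = torch.rand(1, 3, 256, 256)
heatmap = model(patch)  # shape: (1, 1, 256, 256)
```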
“…Nagpal et al developed a DL system to classify between three Gleason patterns and non-tumor. The system was based on a custom version of the InceptionV3 architecture and a categorical prediction system based on class selection using the highest calibrated likelihood.…”
Section: Related Work
Mentioning confidence: 99%
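The second excerpt mentions a categorical prediction step that selects the class with the highest calibrated likelihood. As an illustration only (the cited system's calibration method is not specified in this excerpt), the sketch below assumes temperature scaling of the logits followed by an argmax over the calibrated probabilities; the temperature value, class labels, and function name are hypothetical.

```python
# Minimal sketch of "class selection using the highest calibrated likelihood":
# logits are temperature-scaled (one common calibration technique) and the
# class with the highest calibrated probability is chosen. Values below are
# illustrative, not taken from the cited system.
import numpy as np

def calibrated_prediction(logits, temperature=1.5):
    """Return (predicted class index, calibrated class probabilities)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                     # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.argmax(probs)), probs

# Hypothetical labels for a Gleason-pattern vs. non-tumor classifier.
classes = ["non-tumor", "Gleason 3", "Gleason 4", "Gleason 5"]
idx, probs = calibrated_prediction([0.2, 2.1, 1.4, -0.5])
print(classes[idx], probs.round(3))
```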