2013
DOI: 10.1007/978-3-642-32063-7_27
Optimised Computational Visual Attention Model for Robotic Cognition

Cited by 3 publications
(2 citation statements)
References 8 publications
“…Natural Language Processing (NLP) was then interfaced with vision using a Language Perceptional Translator (LPT) to parse the sentence and extract the corresponding properties of an object, such as its location, colour, size and shape. Among the features that influence the visual search task were context, features and background [12]. Context was unimportant for detecting humans in surveillance scenes, owing to their dynamic behaviour.…”
Section: Related Work
confidence: 99%
“…Through the centre-surround structure, the contrast between the target area and the background area is increased, so the target area will be noticed by the visual system [15][16][17], where + represents the central excitatory zone and − represents the surrounding inhibitory zone.…”
Section: Related Theory Analysis
confidence: 99%
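The centre-surround structure described in this citation statement is commonly modelled as a Difference-of-Gaussians (DoG) filter: a narrow positive (excitatory) Gaussian for the centre minus a wider negative (inhibitory) Gaussian for the surround. A minimal sketch of that idea is below; the function name, kernel size, and sigma values are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def center_surround_response(image, sigma_c=1.0, sigma_s=3.0, size=15):
    """Apply a Difference-of-Gaussians (DoG) centre-surround filter.

    The kernel is a narrow excitatory Gaussian (+ centre) minus a wider
    inhibitory Gaussian (- surround); both are normalised to sum to 1,
    so the DoG kernel sums to 0 and uniform regions give zero response.
    Illustrative parameters only; not the cited papers' exact model.
    """
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)

    def gauss(sigma):
        g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return g / g.sum()

    kernel = gauss(sigma_c) - gauss(sigma_s)  # + centre, - surround

    # Direct sliding-window convolution ("valid" mode) for clarity.
    h, w = image.shape
    out = np.zeros((h - size + 1, w - size + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + size, j:j + size] * kernel)
    return out
```

Because the excitatory and inhibitory parts cancel over uniform areas, the filter responds strongly only where a local target differs from its background, which is exactly the contrast-enhancing behaviour the citation statement attributes to the visual system.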