2022
DOI: 10.3390/jimaging8040110
Salient Object Detection by LTP Texture Characterization on Opposing Color Pairs under SLICO Superpixel Constraint

Abstract: The effortless detection of salient objects by humans has been the subject of research in several fields, including computer vision, as it has many applications. However, salient object detection remains a challenge for many computer models dealing with color and textured images. Most of them process color and texture separately and therefore implicitly consider them as independent features, which is not the case in reality. Herein, we propose a novel and efficient strategy, through a simple model, almost witho…

Cited by 8 publications
(9 citation statements)
References 68 publications
“…First, at the beginning of the neural network, our model opposes color channels two by two by grouping them (R-R, R-G, R-B, G-G, G-B, B-B), then extracting the features at the channels’ spatial levels and between the color channels from each channel pair at the same time, to integrate color into patterns. Therefore, instead of performing a subtractive comparison or an OCLTP (opponent color local ternary pattern) like Ndayikengurukiye and Mignotte [ 1 ], we let the neural network learn the features that represent the comparison of the two color pairs. Second, this idea of grouping and then extracting the features at the channels’ spatial levels and between the color channels at the same time is applied on feature maps at each neural network level until the saliency maps are obtained.…”
Section: Methods
confidence: 99%
“…At this stage, through Pairing_Color_Unit, the input RGB image is paired in six opposing color channel pairs: R-R, R-G, R-B, G-G, G-B and B-B [ 1 , 35 , 48 ]. These pairs are then concatenated, which gives 12 channels, R, R, R, G, R, B, G, G, G, B, B, B, as illustrated in Figure 3 .…”
Section: Methods
confidence: 99%
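The pairing-and-concatenation step quoted above can be sketched in a few lines of NumPy; this is a minimal illustration assuming an H × W × 3 RGB array, and the function name `pair_color_channels` is a placeholder for the cited Pairing_Color_Unit, not the authors' code:

```python
import numpy as np

def pair_color_channels(rgb):
    """Group an H x W x 3 RGB image into the six opposing channel pairs
    (R-R, R-G, R-B, G-G, G-B, B-B) and concatenate them, yielding the
    12-channel order R, R, R, G, R, B, G, G, G, B, B, B described in
    the citation. Illustrative sketch, not the authors' implementation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    pairs = [(r, r), (r, g), (r, b), (g, g), (g, b), (b, b)]
    # Flatten the pairs in order and stack along a new channel axis.
    return np.stack([c for pair in pairs for c in pair], axis=-1)
```

Each pair simply contributes its two member channels side by side, so the network (rather than a fixed operator such as subtraction or OCLTP) learns how to compare them.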
“…LTP [25] is employed in image processing for feature extraction. It is a Local Binary Pattern (LBP) extension in which the 𝑠(𝑧) function is expressed in Eq.…”
Section: Local Ternary Pattern (LTP)
confidence: 99%
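For context, the 𝑠(𝑧) function that distinguishes LTP from LBP maps a neighbor-minus-center difference z to one of three values instead of two. A minimal sketch, assuming the standard ternary form with a user-chosen threshold (the value t = 5 here is illustrative; the equation elided above in [25] may use different notation):

```python
import numpy as np

def ltp_sign(z, t=5):
    """Ternary quantization used by the Local Ternary Pattern:
    +1 when z >= t, -1 when z <= -t, and 0 when |z| < t.
    The threshold t = 5 is an illustrative assumption."""
    z = np.asarray(z)
    return np.where(z >= t, 1, np.where(z <= -t, -1, 0))
```

The dead zone around zero (|z| < t) is what makes LTP less sensitive to noise in near-uniform regions than LBP, whose binary sign function flips on any difference.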