2021
DOI: 10.1109/tip.2021.3062689
Hierarchical Alternate Interaction Network for RGB-D Salient Object Detection

Cited by 203 publications (70 citation statements)
References 91 publications
“…On the NJUD [49] dataset, our method is similar to or worse than the best performance. In percentage terms, our method improves the MaxF index by 2% over HAINet [61] on the SIP [22] dataset. In Tab.…”
Section: Comparison With Advanced Methods
confidence: 99%
“…PCF [54], AFNet [55], CPFP [20], MMCI [33], TANet [44], D3Net [22], JL-DCF [13], DMRA [34], S2MA [14], PGAR [56], ICNet [10], DASNet [57], UC-Net [58], DCF [59], DSA2F [60], HAINet [61]. Some of the above methods are trained with subsets of NJU2K [43] and NLPR [44], while others are trained with subsets of NJU2K [48], NLPR [26] and DUTLF-depth [34].…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…Our model is compared with 16 state-of-the-art RGB-D SOD models, including D3Net [22], ICNet [41], DCMF [6], DRLF [67], SSF [81], SSMA [43], A2dele [57], UCNet [80], CoNet [33], DANet [90], JLDCF [24], EBFSP [31],CDNet [35], HAINet [40], RD3D [10] and DSA2F [61]. To ensure the fairness of the comparison results, the saliency maps of the evaluation are provided by the authors or generated by running source codes.…”
Section: Comparisons With the State-of-the-art
confidence: 99%
“…However, directly fusing the depth cues and RGB information would lead to insufficient cross-modality understanding. Several works [17], [24], [37], [48] propose to fuse these two modality encoders in a stage-wise or hierarchical manner. For example, Chen et al. [24] propose a progressive fusion strategy in a coarse-to-fine manner for sufficient information learning.…”
Section: Related Work
confidence: 99%
“…Zhao et al. [17] propose a fluid pyramid integration strategy to make full use of depth-enhanced features. Li et al. [48] tend to utilize the alternate interaction of different network stages for learning relations between different modalities. Besides these fusion strategies, Chen [34] proposes to find a disentangled feature representation of each modality and learns to interact the same type of disentangled feature with the others.…”
Section: Related Work
confidence: 99%