2022
DOI: 10.1007/978-3-031-19797-0_3
Highly Accurate Dichotomous Image Segmentation

Cited by 28 publications (37 citation statements)
References 72 publications
“…Random and irregular anomalous regions (Fig. 4, P ) are first obtained from the Perlin [30] noise and then multiplied by the object foreground [33,56] (Fig. 4, F ) of the normal sample to obtain the ground truth mask (Fig.…”
Section: Training
confidence: 99%
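The mask-generation step quoted above can be illustrated with a short sketch. This is a minimal, hypothetical implementation assuming the `noise` package's `pnoise2` for Perlin noise and a precomputed binary foreground mask of the normal sample; the function names and the binarization threshold are illustrative, not taken from the cited paper's code.

```python
# Hypothetical sketch: binarized Perlin noise (P) restricted to the object
# foreground (F) to form a ground-truth anomaly mask, as described above.
import numpy as np
from noise import pnoise2  # 2-D Perlin noise from the `noise` package

def generate_perlin_regions(height, width, scale=16.0, threshold=0.4, seed=0):
    """Binarize a Perlin-noise field into random, irregular regions (P)."""
    rng = np.random.default_rng(seed)
    x_off, y_off = rng.uniform(0, 1000, size=2)  # random offset for variety
    field = np.array([[pnoise2((x + x_off) / scale, (y + y_off) / scale)
                       for x in range(width)] for y in range(height)])
    field = (field - field.min()) / (field.max() - field.min() + 1e-8)
    return (field > threshold).astype(np.float32)

def ground_truth_mask(foreground, scale=16.0, threshold=0.4, seed=0):
    """Element-wise product keeps the anomalous regions on the object only."""
    h, w = foreground.shape
    perlin = generate_perlin_regions(h, w, scale, threshold, seed)
    return perlin * foreground  # (H, W) binary ground-truth mask
```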
“…This work focuses on the intrinsic attributes of Q instead of this redundant information. Hence we remove the redundant information using IS-Net [QDH*22], a segmentation network with pretrained weights.…”
Section: Wytiwyr Prototype System
confidence: 99%
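As a rough illustration of the quoted IS-Net usage, the sketch below assumes the `ISNetDIS` model class and a general-use checkpoint as released in the official DIS repository (https://github.com/xuebinqin/DIS); the class name, checkpoint file, and pre/post-processing steps are assumptions and may differ from the WYTIWYR prototype's actual setup.

```python
# Hypothetical sketch: foreground extraction with a pretrained IS-Net,
# used here to discard redundant background information from a query image.
import torch
import torch.nn.functional as F
from models.isnet import ISNetDIS  # assumed import path from the DIS repo

def segment_foreground(image_tensor, ckpt_path="isnet-general-use.pth"):
    """image_tensor: (1, 3, H, W) float in [0, 1]; returns an (H, W) mask."""
    net = ISNetDIS()
    net.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
    net.eval()
    with torch.no_grad():
        # IS-Net is trained on 1024x1024 inputs, so resize before inference
        inp = F.interpolate(image_tensor, size=(1024, 1024), mode="bilinear")
        pred = net(inp)[0][0]  # first side output, per the repo's demo script
        pred = F.interpolate(pred, size=image_tensor.shape[-2:], mode="bilinear")
        pred = (pred - pred.min()) / (pred.max() - pred.min() + 1e-8)
    return pred.squeeze().cpu()  # soft foreground mask of the query object
```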
“…Consequently, we compose a new dataset, called HQSeg-44K, which contains 44K extremely fine-grained image mask annotations. HQSeg-44K is constructed by merging six existing image datasets [34,29,26,37,8,45] with highly accurate mask annotations, covering over 1,000 diverse semantic classes. Thanks to the smaller-scale dataset and our minimal integrated architecture, HQ-SAM can be trained in only 4 hours on 8 RTX 3090 GPUs.…”
Section: HQ-SAM Prediction
confidence: 99%
“…We note that the released SA-1B dataset only contains automatically generated mask labels, missing very accurate manual annotation on objects with complex structures. Due to the annotation difficulty, HQSeg-44K leverages a collection of six existing image datasets including DIS [34] (train set), ThinObject-5K [29] (train set), FSS-1000 [26], ECSSD [37], MSRA-10K [8], DUT-OMRON [45] with extremely fine-grained mask labeling, where each of them contains 7.4K mask labels on average. To make HQ-SAM robust and generalizable to new data, HQSeg-44K contains diverse semantic classes of more than 1,000.…”
Section: Training and Inference of HQ-SAM
confidence: 99%
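For context on how such a merged dataset might be wired up in practice, here is a hypothetical sketch that pools image/mask pairs from several source datasets (e.g. DIS, ThinObject-5K, FSS-1000, ECSSD, MSRA-10K, DUT-OMRON) into one PyTorch dataset. The directory layout, file extensions, and class name are illustrative assumptions, not the HQ-SAM authors' actual data pipeline for HQSeg-44K.

```python
# Hypothetical sketch: merging several image/mask datasets into one
# fine-grained segmentation training set, in the spirit of HQSeg-44K.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class MergedMaskDataset(Dataset):
    def __init__(self, roots, transform=None):
        # roots: list of (image_dir, mask_dir) pairs, one per source dataset
        self.samples = []
        for img_dir, mask_dir in roots:
            for img_path in sorted(Path(img_dir).glob("*.jpg")):
                mask_path = Path(mask_dir) / (img_path.stem + ".png")
                if mask_path.exists():  # keep only images with a mask label
                    self.samples.append((img_path, mask_path))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        img_path, mask_path = self.samples[idx]
        img = Image.open(img_path).convert("RGB")
        mask = Image.open(mask_path).convert("L")  # fine-grained binary mask
        if self.transform:
            img, mask = self.transform(img, mask)
        return img, mask
```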