2018 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2018.00200
ByLabel: A Boundary Based Semi-Automatic Image Annotation Tool

Cited by 36 publications (21 citation statements). References 27 publications.
“…A total of 30 original cross-sectional microscopic images of cotton fiber at 4272 × 2848 resolution were collected from different cotton samples [12], and 720 cross-sectional microscopic images at 427 × 320 resolution were generated from them by random cropping. All cross-sectional microscopic images of cotton fiber in the dataset were manually labeled with ByLabel, the edge-based semi-automatic image annotation tool developed by Xuebin Qin et al. [25]. Subsequently, a dataset of 7200 cross-sectional microscopic images of cotton fiber was generated by data augmentation methods such as scaling, rotation, and mirroring.…”
Section: Construction of the Experimental Dataset
confidence: 99%
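The random-cropping and augmentation pipeline described in this excerpt can be sketched in a few lines of NumPy. The function names and parameters below are hypothetical (the cited paper does not publish its code); the 427 × 320 patch size and the augmentation set (scaling, rotation, mirroring) follow the excerpt:

```python
import numpy as np

def random_crop(image, size=(320, 427), rng=None):
    """Cut a random (height, width) patch from a larger image."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    ch, cw = size
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    return image[y:y + ch, x:x + cw]

def rescale(patch, factor):
    """Nearest-neighbor rescale, a stand-in for the scale augmentation."""
    h, w = patch.shape[:2]
    ys = (np.arange(int(h * factor)) / factor).astype(int)
    xs = (np.arange(int(w * factor)) / factor).astype(int)
    return patch[np.ix_(ys, xs)]

def augment(patch):
    """Yield simple variants: the original, rotations, and a mirror image."""
    yield patch
    for k in (1, 2, 3):            # 90/180/270-degree rotations
        yield np.rot90(patch, k)
    yield np.fliplr(patch)         # horizontal mirror
```

Applying several crops per source image and then these variants per crop is one plausible way to expand 720 patches into the 7200-image dataset the excerpt reports.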
“…In a different approach, [34] developed a recurrent neural network that iteratively proposes segmented objects to human annotators and refines the annotations with regard to their previous modifications. [35] presented a semi-automated platform based on edge detection, in which high-quality detected instances are proposed to annotators. It is worth mentioning that other studies have looked into novel tools based on different user interaction mechanisms, e.g.…”
Section: Assistive User Interfaces
confidence: 99%
“…Therefore, meaningful regions of uniform size 540 × 360 pixels are cropped from the original images to form the datasets. To acquire pixel-level annotations, we adopt the publicly available semi-automatic boundary annotation tool ByLabel [27], in which edge fragments detected by EDLines [28] are selected by the annotator and combined automatically. Compared with purely manual annotation, the semi-automatic annotation is considerably more accurate, owing to the superiority of EDLines in edge localization.…”
Section: Power Line Datasets
confidence: 99%
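The fragment-combining step this excerpt attributes to ByLabel, where annotator-selected edge fragments are joined into one boundary automatically, can be illustrated with a greedy endpoint-chaining routine. This is a simplified sketch only: ByLabel's actual combination logic is not specified in the excerpt, and all names here are hypothetical.

```python
import numpy as np

def chain_fragments(fragments):
    """Greedily chain edge fragments (each an N x 2 array of points)
    into one polyline by repeatedly appending the fragment whose
    nearest endpoint is closest to the current chain's tail,
    reversing the fragment when its far endpoint matches."""
    remaining = [np.asarray(f, dtype=float) for f in fragments]
    chain = remaining.pop(0)
    while remaining:
        tail = chain[-1]
        best, flip, dist = None, False, np.inf
        for i, frag in enumerate(remaining):
            d_head = np.linalg.norm(tail - frag[0])
            d_tail = np.linalg.norm(tail - frag[-1])
            if d_head < dist:
                best, flip, dist = i, False, d_head
            if d_tail < dist:
                best, flip, dist = i, True, d_tail
        nxt = remaining.pop(best)
        if flip:
            nxt = nxt[::-1]          # reverse so its near end joins the tail
        chain = np.vstack([chain, nxt])
    return chain
```

For example, three out-of-order fragments along one line are chained into a single polyline running from the first fragment's start to the last fragment's end.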