2019
DOI: 10.1007/978-3-030-27544-0_13
ImageTagger: An Open Source Online Platform for Collaborative Image Labeling

Cited by 36 publications (26 citation statements)
References 6 publications
“…One of the contributions of this study is our annotated dataset from the RoboCup 2019 competition, captured with cameras mounted on the robots in both controlled and natural lighting scenarios. We also used some images from previous competitions via the ImageTagger community-driven project [4]. Each of these raw images is manually annotated.…”
Section: A. Dataset
Confidence: 99%
“…4. A portion of the dataset was taken from the ImageTagger library [16], which has annotated samples from different angles, cameras, and brightness levels. We extract the object coordinates by post-processing the blob-shaped network outputs.…”
Section: Visual Perception
Confidence: 99%
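
The blob post-processing step described in that excerpt follows a common pattern: threshold the network's per-pixel confidence map, label the connected components ("blobs"), and read the object coordinates off the blob centroids. Below is a minimal sketch under those assumptions; the threshold value and function names are illustrative, not the cited paper's implementation.

# Minimal sketch of blob post-processing: the network emits a per-pixel
# confidence map ("blob-shaped" output), and object coordinates are
# recovered as blob centroids. Threshold and names are assumptions.
import numpy as np
from scipy import ndimage

def extract_object_coordinates(heatmap: np.ndarray, threshold: float = 0.5):
    """Return (row, col) centroids of connected blobs above `threshold`."""
    mask = heatmap > threshold                      # binarise the confidence map
    labels, num_blobs = ndimage.label(mask)         # connected-component labelling
    # Centre of mass of each blob, weighted by the raw confidence values
    centroids = ndimage.center_of_mass(heatmap, labels, range(1, num_blobs + 1))
    return [(float(r), float(c)) for r, c in centroids]

# Example: a single synthetic blob centred on (10, 20)
hm = np.zeros((32, 32))
hm[9:12, 19:22] = 0.9
print(extract_object_coordinates(hm))   # -> approximately [(10.0, 20.0)]
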
“…In total, the whole training process with around 3000 samples takes less than 40 minutes on a single Titan Black GPU with 6 GB of memory. Some of the training samples were taken from the ImageTagger library [31], which has annotated samples from different angles, cameras, and brightness levels. Although the network produced very few false positives (around 1 % for the ball), we were able to reduce this value further by utilising inference-time augmentation.…”
Section: A. Visual Perception
Confidence: 99%
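
Inference-time (test-time) augmentation, as mentioned in the excerpt above, evaluates the model on several transformed copies of the input, maps the outputs back to the original frame, and merges them, so spurious activations that do not persist across views are suppressed. The following is a minimal sketch assuming a heatmap-producing model and a single horizontal-flip augmentation; the model interface is a hypothetical stand-in, not the cited paper's network.

# Minimal sketch of inference-time augmentation with one flip view.
# The model is assumed to map an image to a same-sized confidence map.
import numpy as np

def predict_with_tta(model, image: np.ndarray) -> np.ndarray:
    """Average heatmaps over the identity and horizontal-flip views."""
    hm = model(image)                    # plain forward pass
    hm_flip = model(image[:, ::-1])      # forward pass on the flipped input ...
    hm_flip = hm_flip[:, ::-1]           # ... mapped back to the image frame
    return (hm + hm_flip) / 2.0          # false positives rarely agree across views

# Example with a dummy "model" that simply returns a confidence map
dummy_model = lambda img: img.astype(float)
out = predict_with_tta(dummy_model, np.random.rand(32, 32))
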