2020
DOI: 10.13020/g1gx-y834
TrashCan 1.0: An Instance-Segmentation Labeled Dataset of Trash Observations

Cited by 6 publications (15 citation statements); references 0 publications.
“…In contrast to the preceding research that employed a mask R-CNN architecture on the TrashCan dataset, our approach leverages an adapted U-Net model (13,15,18). Despite the variation in architectural choices, a comparative evaluation revealed a notable enhancement in segmentation performance by our model, successfully categorizing all 16 classes, an advancement from an Average Precision of 0.30 attained in the previous work (14).…”
Section: Discussion; citation type: mentioning; confidence: 91%
“…In this study, we created a deep learning algorithm to detect objects in an image captured by an underwater ROV, with the goal of finding and isolating trash for eventual removal. We utilized a convolutional neural network model developed using the U-Net architecture, which was trained to segment images from the TrashCan dataset (images taken by ROVs) into four distinct classes: trash (with 8 subclasses), animal (with 7 subclasses), plants, and ROV appendages (15). This dataset was composed of 7,212 images of different classes: trash, plants, animals, and ROVs.…”
Section: Results; citation type: mentioning; confidence: 99%
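The citing work above reports instance-segmentation quality on TrashCan as an Average Precision (AP) of 0.30. As context for that metric, below is a minimal sketch of how mask-level AP at a single IoU threshold can be computed: predicted masks are sorted by confidence, greedily matched to unmatched ground-truth masks by IoU, and precision is accumulated at each true positive. The masks-as-pixel-sets representation and the function names are illustrative assumptions, not part of the TrashCan evaluation code.

```python
def iou(a, b):
    # a, b: sets of (row, col) pixel coordinates representing binary masks
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

def average_precision(preds, gts, iou_thresh=0.5):
    # preds: list of (score, mask) pairs; gts: list of ground-truth masks.
    # Simplified single-class, single-IoU-threshold AP (sketch only).
    preds = sorted(preds, key=lambda p: -p[0])  # highest confidence first
    matched = set()
    tp = 0
    ap = 0.0
    for k, (_score, mask) in enumerate(preds, start=1):
        # Greedily match this prediction to the best unmatched ground truth
        best_j, best_iou = None, iou_thresh
        for j, gt in enumerate(gts):
            if j in matched:
                continue
            v = iou(mask, gt)
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            matched.add(best_j)
            tp += 1
            ap += tp / k  # precision at this recall point
    return ap / len(gts) if gts else 0.0
```

Benchmark suites such as COCO additionally average over multiple IoU thresholds and classes; this sketch shows only the core matching-and-averaging step behind a number like AP 0.30.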
“…Four studies (Hegde et al., 2021; Marin et al., 2021; Musić et al., 2020; Wu et al., 2020) retrieved images from the internet. While the authors of these studies had to manually produce the annotations, one study (Deng et al., 2021) directly utilized the images with annotations from the TrashCan dataset (Hong et al., 2020).…”
Section: Employed Dataset Sources; citation type: mentioning; confidence: 99%