2018
DOI: 10.1007/978-981-13-0020-2_45

Transfer Learning by Finetuning Pretrained CNNs Entirely with Synthetic Images

Cited by 14 publications (10 citation statements), citing publications 2018–2024
References 23 publications
“…The process of fine-tuning pre-trained models with an expanded database has precedent in existing research works [25]. It can be compared with the closely related field of transfer learning [26].…”
Section: E. Fine Tuning of Pre-trained Models (mentioning)
confidence: 99%
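The fine-tuning workflow referenced above can be illustrated with a minimal sketch, assuming PyTorch/torchvision (≥ 0.13) rather than the cited paper's exact setup: load an ImageNet-pretrained CNN, replace its classification head, and fine-tune the whole network at a small learning rate on a folder of synthetic images. The directory name `synthetic_train/` is a hypothetical placeholder.

```python
# Minimal sketch: fine-tune an ImageNet-pretrained CNN entirely on synthetic images.
# Assumes PyTorch + torchvision >= 0.13; not the cited paper's exact configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical ImageFolder-style directory of rendered images, one subfolder per class.
train_set = datasets.ImageFolder("synthetic_train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load ImageNet weights and swap the classifier head for the new task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

# Fine-tune all layers at a small learning rate (transfer learning).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```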
“…The idea of DR has been applied to bridging the reality gap in many tasks, from car detection (Tremblay et al, 2018) to detecting packaged food in refrigerators (Rajpura et al, 2017). Specific to image-based plant phenotyping, DR has been applied to leaf instance segmentation.…”
Section: Related Work (mentioning)
confidence: 99%
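Domain randomization (DR) of the kind mentioned above is typically implemented by drawing nuisance factors (lighting, pose, texture, background, distractor count) independently for every synthetic image, so that the real domain looks like just another variation at test time. Below is a minimal sketch of such a parameter sampler; all names and ranges are hypothetical and not taken from the cited works.

```python
# Minimal domain-randomization parameter sampler (hypothetical names and ranges).
import random

def sample_scene_params(num_distractors_max=5):
    """Draw one set of randomized nuisance factors for a single synthetic render."""
    return {
        "light_energy": random.uniform(200.0, 2000.0),     # lamp strength
        "light_azimuth_deg": random.uniform(0.0, 360.0),   # light direction
        "camera_distance_m": random.uniform(0.5, 2.5),     # camera-to-object distance
        "object_yaw_deg": random.uniform(0.0, 360.0),      # object pose
        "texture_id": random.randrange(1000),              # random surface texture
        "background_id": random.randrange(500),            # random background image
        "num_distractors": random.randint(0, num_distractors_max),
    }

# One parameter set per image; a renderer would consume these to produce the dataset.
params_per_image = [sample_scene_params() for _ in range(10000)]
```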
“…Object detection, or more precisely one of its many variants, is probably the most pursued topic of recent research on training artificial intelligence with synthetic data. Among works in this direction, Rajpura et al. 19 use Blender and Cycles to generate synthetic image data for object detection via transfer learning, rendering online 3D models from the ShapeNet database (with everyday items like bottles, tins, cans and food items) and the Archive3D database. They also encounter the domain gap problem. Jabbar et al. conducted experiments to detect drinking glasses in realistically rendered images compared to real images.…”
Section: Object Detection (mentioning)
confidence: 99%
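Scripted rendering of downloaded 3D models with Blender and Cycles, as described in the statement above, can be sketched with Blender's Python API (bpy). This is an illustrative sketch only, not Rajpura et al.'s pipeline: the model and output paths are hypothetical, and the OBJ import operator name varies between Blender versions.

```python
# Sketch: render randomized views of a downloaded 3D model with Blender/Cycles.
# Run inside Blender's Python environment; paths are hypothetical placeholders.
import random
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.resolution_x = 640
scene.render.resolution_y = 480

# Import a ShapeNet-style OBJ model (operator name differs in Blender >= 4.0).
bpy.ops.import_scene.obj(filepath="/data/shapenet/bottle/model.obj")
obj = bpy.context.selected_objects[0]

for i in range(100):
    # Randomize object pose and camera placement for each rendered image.
    obj.rotation_euler[2] = random.uniform(0.0, 6.28)
    scene.camera.location = (0.0, -random.uniform(1.0, 3.0), random.uniform(0.5, 1.5))

    scene.render.filepath = f"/data/synthetic/img_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```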