Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Application 2022
DOI: 10.5220/0010780000003124
AAEGAN Loss Optimizations Supporting Data Augmentation on Cerebral Organoid Bright-field Images

Cited by 5 publications (16 citation statements). References: 0 publications.
“…Computer graphics primarily aims to generate visually realistic graphics using computers. To achieve this, it involves establishing a geometric representation of the scene depicted in the graphics [18]. Subsequently, lighting models are employed to calculate the lighting effects under hypothetical light sources, considering factors such as texture and material properties [3].…”
Section: Utilization Goal of Computer Graphics (citation type: mentioning)
confidence: 99%
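The lighting calculation mentioned in this statement can be made concrete with a small example. The sketch below is purely illustrative and not taken from the cited work; it assumes a single directional light and Lambertian diffuse reflection to show how scene geometry, a hypothetical light source, and material properties combine.

import numpy as np

def lambertian_shade(normal, light_dir, light_color, albedo):
    """Diffuse RGB intensity at a surface point (all inputs are 3-vectors)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = max(float(np.dot(n, l)), 0.0)   # clamp light arriving from behind the surface
    return albedo * light_color * diffuse     # material * light source * geometry

# Example: a surface tilted toward a white light, with a reddish material.
print(lambertian_shade(np.array([0.0, 0.0, 1.0]),
                       np.array([0.3, 0.2, 1.0]),
                       np.array([1.0, 1.0, 1.0]),
                       np.array([0.8, 0.3, 0.2])))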
“…Data augmentation solutions have already been used to increase the size and diversity of this brain organoid bright-field dataset. An adversarial autoencoder (AAE) appears to be the architecture best suited to augmenting brain organoid bright-field images (Brémond Martin et al., 2022a). This AAE differs from the original GAN architecture in the input given to its encoding part (original images) and in its generative network, which contains an auto-encoder-decoder framework (Goodfellow et al., 2014; Makhzani et al., 2016).…”
Section: Introduction (citation type: mentioning)
confidence: 99%
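The AAE structure described in this statement can be sketched as follows. This is a hypothetical, minimal PyTorch illustration rather than the authors' implementation: the encoder receives the original images, the encoder-decoder pair forms the generative network, and a discriminator operates on the latent codes to push them toward a prior distribution.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, img_dim=128 * 128, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, img_dim=128 * 128, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, img_dim), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

class LatentDiscriminator(nn.Module):
    """Distinguishes encoded latent codes from samples of the prior."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# One illustrative step: reconstruct a bright-field image batch and score
# the latent codes, so the encoder learns to fool the discriminator.
enc, dec, disc = Encoder(), Decoder(), LatentDiscriminator()
images = torch.rand(4, 1, 128, 128)                      # placeholder batch
z = enc(images)
recon_loss = F.mse_loss(dec(z), images.flatten(1))       # auto-encoder term
adv_loss = F.binary_cross_entropy(disc(z), torch.ones(4, 1))  # adversarial term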
“…To improve sharpness during generation, we test various loss functions to optimize the adversarial network (Brémond Martin et al., 2022a). However, these results are based on metric calculations and a dimensionality reduction used to compare all image features (original and generated with each optimization) within the same statistical space.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
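The comparison described here, projecting features of original and generated images into one statistical space, can be illustrated with a minimal sketch. The feature extractor (a gray-level histogram) and the reduction method (PCA from scikit-learn) are assumptions chosen for brevity, not the pipeline used in the cited work.

import numpy as np
from sklearn.decomposition import PCA

def intensity_histogram(img, bins=64):
    """Toy feature vector: normalized gray-level histogram of one image."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    return hist

rng = np.random.default_rng(0)
originals = rng.random((20, 128, 128))   # placeholder bright-field images
generated = rng.random((20, 128, 128))   # placeholder augmented images

features = np.stack([intensity_histogram(im)
                     for im in np.concatenate([originals, generated])])

# Project both sets into the same 2-D statistical space.
embedding = PCA(n_components=2).fit_transform(features)
orig_2d, gen_2d = embedding[:20], embedding[20:]

# Compare the two distributions, e.g. by the distance between their centroids.
print(np.linalg.norm(orig_2d.mean(axis=0) - gen_2d.mean(axis=0)))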