2020
DOI: 10.1007/978-3-030-59354-4_16
mr$$^2$$NST: Multi-resolution and Multi-reference Neural Style Transfer for Mammography

Cited by 5 publications (7 citation statements) · References 9 publications
“…The technique manipulates the sequential representations across a CNN to transfer the style of one image to another while keeping the original content [ 1 ]. Gatys et al [ 62 ] first proposed NST, which typically takes two input images: a content image C to be transferred and a style reference image S. It learns features from the representations F_l(C) and F_l(S) in layer l of a neural style transfer network [ 63 ]. However, if the image styles of different datasets differ too greatly.…”
Section: Advanced Augmentation Techniquesmentioning
confidence: 99%
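The statement above summarises the Gatys-style NST setup: features F_l(C) and F_l(S) are extracted at layer l, and the generated image is optimised against a content loss and a Gram-matrix style loss. A minimal NumPy sketch of those two losses, assuming the per-layer feature maps are already given as (channels × spatial positions) arrays (all names here are illustrative, not from the paper):

```python
import numpy as np

def gram_matrix(F):
    """Gram matrix of a feature map F with shape (channels, height*width)."""
    return F @ F.T

def content_loss(Fc, Fx):
    """Squared error between content features F_l(C) and generated features F_l(X)."""
    return 0.5 * np.sum((Fx - Fc) ** 2)

def style_loss(Fs, Fx):
    """Squared error between Gram matrices of style and generated features,
    normalised as in Gatys et al."""
    C, M = Fs.shape  # channels, spatial positions
    G_s, G_x = gram_matrix(Fs), gram_matrix(Fx)
    return np.sum((G_x - G_s) ** 2) / (4.0 * C ** 2 * M ** 2)

# Hypothetical feature maps for one layer l (random stand-ins for CNN features).
rng = np.random.default_rng(0)
F_C = rng.standard_normal((8, 16))   # F_l(C): content image features
F_S = rng.standard_normal((8, 16))   # F_l(S): style reference features
F_X = F_C.copy()                     # generated image initialised at the content

print(content_loss(F_C, F_X))        # 0.0: identical to the content features
print(style_loss(F_S, F_X) > 0)      # True: the styles still differ
```

In the full method these per-layer losses are summed over several layers and minimised with gradient descent on the generated image's pixels.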
“…There are several recent studies which include or relate to accommodating subtle image differences. Recent approaches include an encoder-decoder module on top of a CNN (18), batch-instance normalisation (18) and neural style transfer (33). This may be significant in the event that a screening service changes equipment vendors, requiring transition of an existing model to slightly different images.…”
Section: Work By Wu Et Almentioning
confidence: 99%
“…Specifically, given M seen vendor-style domains, we train M² generators, which map the data distribution of source domain Ω_i to target domain Ω_j, ∀i, j ∈ M. Compared to the method used in [19], the CycleGAN realizes style transfer with a bidirectional learning process. The work [19] unidirectionally takes a few reference images and may attain only a limited transfer effect.…”
Section: Multi-style and Multi-view Contrastive Learningmentioning
confidence: 99%
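The M² figure in the statement above comes from enumerating every ordered (source, target) pair of seen vendor-style domains, one generator per pair. A small sketch of that enumeration, with hypothetical domain labels standing in for the Ω_i:

```python
from itertools import product

def generator_pairs(domains):
    """Enumerate the ordered (source, target) mappings Ω_i → Ω_j, ∀i, j,
    one per trained generator: M domains yield M² pairs."""
    return [(src, tgt) for src, tgt in product(domains, repeat=2)]

vendors = ["Ω_1", "Ω_2", "Ω_3"]      # M = 3 hypothetical vendor styles
pairs = generator_pairs(vendors)
print(len(pairs))                    # 9 = M² generators
print(("Ω_1", "Ω_2") in pairs and ("Ω_2", "Ω_1") in pairs)  # True: both directions
```

Including both directions for every pair is what distinguishes this bidirectional setup from the unidirectional reference-based transfer attributed to [19].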