2016 Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech)
DOI: 10.1109/robomech.2016.7813173
Texture synthesis using convolutional neural networks with long-range consistency and spectral constraints

Cited by 8 publications (2 citation statements) · References 1 publication
“…Yet, this algorithm is somehow less generic than the PS algorithm because a different set of features is learned on each texture. From a neurally inspired perspective, convolutional neural networks (CNN) are successfully used to generate textures by simply matching second order statistics of each layer outputs [21]. The synthesized textures show an impressive quality and are often indistinguishable from their original version even when scrutiny is allowed [56].…”
Section: Introduction
confidence: 99%
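The statement above refers to the Gram-matrix texture model of [21]: a texture is summarized by the second-order statistics (channel-wise co-activations) of CNN feature maps, and a new image is synthesized by optimizing noise until its statistics match those of the exemplar. Below is a minimal sketch of that baseline, assuming a pretrained torchvision VGG-19, arbitrarily chosen layer indices, and Adam optimization; it illustrates the cited idea only, not the long-range consistency or spectral constraints added in the paper above.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def gram_matrix(features):
    # features: (1, C, H, W) -> (C, C) matrix of channel co-activations,
    # i.e. the second-order statistics matched during synthesis.
    _, c, h, w = features.shape
    f = features.view(c, h * w)
    return f @ f.t() / (c * h * w)

def layer_grams(cnn, img, layer_ids):
    # Collect Gram matrices at the selected layers of a sequential CNN.
    grams, x = [], img
    for i, layer in enumerate(cnn):
        x = layer(x)
        if i in layer_ids:
            grams.append(gram_matrix(x))
    return grams

def synthesize(target, layer_ids=(1, 6, 11, 20), steps=500, lr=0.05):
    # Optimize white noise so its layer-wise Gram matrices match the target's.
    cnn = vgg19(weights="IMAGENET1K_V1").features.eval()
    for p in cnn.parameters():
        p.requires_grad_(False)
    target_grams = layer_grams(cnn, target, layer_ids)
    synth = torch.rand_like(target, requires_grad=True)
    opt = torch.optim.Adam([synth], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(F.mse_loss(g, t)
                   for g, t in zip(layer_grams(cnn, synth, layer_ids), target_grams))
        loss.backward()
        opt.step()
    return synth.detach()

# Usage (hypothetical helper): texture = synthesize(load_texture_image())  # (1, 3, H, W) tensor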
“…While the hypothesis that peripheral vision relies on texture statistics generates great progress in understanding the visual cortex, it is still incomplete as it does not account for global scene features such as segmentation [57,43]. Recent deep learning approaches are promising to bridge the gap between low-level texture perception and high-level tasks like scene segmentation and object recognition [21,56,13,52]. Deep neural networks are now used to better probe neural sensitivity in the higher visual cortex [5,40] and their reduced algorithmic load provides more practicable theoretical ground to further model vision [53].…”
Section: Introduction
confidence: 99%