2019
DOI: 10.1007/978-3-030-31332-6_44
Personalised Aesthetics with Residual Adapters

Abstract: The use of computational methods to evaluate aesthetics in photography has gained interest in recent years due to the popularization of convolutional neural networks and the availability of new annotated datasets. Most studies in this area have focused on designing models that do not take into account individual preferences for the prediction of the aesthetic value of pictures. We propose a model based on residual learning that is capable of learning subjective, user-specific preferences over aesthetics in phot…
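The residual learning idea the abstract refers to can be sketched as a residual adapter: a small trainable bottleneck added on top of frozen backbone features, with its output merged back through a skip connection. The shapes, names, and zero initialization below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def residual_adapter(x, W_down, W_up):
    # Hypothetical residual adapter: the frozen backbone feature x is
    # corrected by a small learned bottleneck whose output is added
    # back to x through a skip connection.
    h = np.maximum(W_down @ x, 0.0)  # down-project + ReLU
    return x + W_up @ h              # residual (skip) connection

d, r = 8, 2                          # feature dim, adapter rank (assumed)
rng = np.random.default_rng(0)
x = rng.standard_normal(d)
W_down = np.zeros((r, d))            # zero init: adapter starts as identity
W_up = rng.standard_normal((d, r))
y = residual_adapter(x, W_down, W_up)
```

A common design choice with such adapters is zero-initializing the bottleneck so the personalized model starts out identical to the shared backbone and only drifts away as user-specific data arrives.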

Cited by 6 publications (4 citation statements)
References 31 publications
“…To achieve this, and in order to account for artifacts in the borders of the generated textures, we tile the synthesized images until they cover the same resolution as their corresponding inputs. Interestingly, methods based on patches [13], [22], [45] obtain better SSIM and Si-FID scores than previous deep learning-based methods [16], [23]. This difference is not seen in the LPIPS metric.…”
Section: Results and Comparisons
confidence: 89%
“…While traditional methods used handcrafted features [39], [40], recent parametric methods rely on deep neural networks as their parameterization. Activations within latent spaces in pre-trained CNNs have been shown to capture relevant statistics of the style and texture of images [41], [42], [43], [44], [45]. Textures can be synthesized through this approach by gradient-descent optimization [46], [47] or by training a neural network that learns those features [23], [48].…”
Section: Texture Synthesis
confidence: 99%
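The texture statistics mentioned in the quote above are typically Gram matrices of CNN activations, matched by gradient descent over the synthesized image. A minimal sketch of that descriptor and loss, assuming a `(channels, positions)` feature-map layout:

```python
import numpy as np

def gram(F):
    # F: (channels, positions) feature map; the Gram matrix summarises
    # channel co-activations, a classic CNN texture statistic.
    C, N = F.shape
    return (F @ F.T) / N

def texture_loss(F_syn, F_tgt):
    # Squared Frobenius distance between Gram matrices; minimising this
    # over the synthesized image (by gradient descent) matches textures.
    return float(np.sum((gram(F_syn) - gram(F_tgt)) ** 2))
```

In practice this loss is summed over several layers of a pre-trained network; a single layer is shown here for brevity.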
“…Pre-training an image descriptor model on contrastive or self-supervised learning tasks, and using the activations of its last layer as input to the downstream task (Linear Probing) [CKNH20, RKH*21, CXH21, HCX*22, KRJ*22]. For domain adaptation problems, it is common to adapt the internal representations of pre-trained CNNs for efficiency [RBV17, RBV18, RPB19, PCYS20, LLB22]. Inspired by these approaches, we design a model that leverages fine-tuning of a pre-trained image CNN classifier as a feature extractor, capable of processing depth images, and extend it to account for additional input variables and to handle multiple images at test time.…”
Section: Related Work
confidence: 99%
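The linear-probing setup referenced above can be sketched as follows: backbone features are frozen, and only a single affine layer is trained on the downstream task. The least-squares objective and deterministic toy data below are illustrative assumptions.

```python
import numpy as np

def probe(feats, W, b):
    # Linear probe: frozen backbone features pass through one affine
    # layer; W and b are the only trainable parameters.
    return feats @ W + b

def probe_sgd_step(feats, targets, W, b, lr=0.1):
    # One least-squares gradient step on the probe alone; the feature
    # extractor is never updated.
    err = probe(feats, W, b) - targets
    W = W - lr * feats.T @ err / len(feats)
    b = b - lr * err.mean(axis=0)
    return W, b

feats = np.eye(3)                          # stand-in for frozen features
targets = np.array([[1.0], [2.0], [3.0]])
W, b = np.zeros((3, 1)), np.zeros(1)
loss0 = float(np.sum((probe(feats, W, b) - targets) ** 2))
W, b = probe_sgd_step(feats, targets, W, b)
loss1 = float(np.sum((probe(feats, W, b) - targets) ** 2))
```

Because gradients never flow into the feature extractor, probing is cheap and serves as a standard measure of representation quality.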
“…However, despite this expected variability when evaluating subjective properties of images, we show a sufficiently large degree of consistency between different users' judgments, suggesting it is possible to devise automatic systems to estimate visual sentiment directly from images, ignoring user differences. One direction for future work is to design personalized approaches that predict both human sentiment and individual differences [73,74].…”
Section: Measures for Ensuring and Evaluating Data Reliability
confidence: 99%