2016
DOI: 10.1007/978-3-319-44781-0_3
DeepPainter: Painter Classification Using Deep Convolutional Autoencoders

Abstract: In this paper we describe the problem of painter classification, and propose a novel approach based on deep convolutional autoencoder neural networks. While previous approaches relied on image processing and manual feature extraction from paintings, our approach operates on the raw pixel level, without any preprocessing or manual feature extraction. We first train a deep convolutional autoencoder on a dataset of paintings, and subsequently use it to initialize a supervised convolutional neural networ…

Cited by 29 publications (16 citation statements)
References 19 publications
“…The approach of using the layer activations of a CNN trained on ImageNet as features for artistic style recognition was introduced by Karayev et al. (2014), where the authors showed that features derived from the layers of a CNN trained for object recognition on non-artistic images achieve high performance on the task of painting style classification and outperform most hand-crafted features. The efficiency of CNN-based features, particularly in combination with other hand-crafted features, was confirmed for style (Bar et al., 2014), artist (David & Netanyahu, 2016) and genre classification (Cetinic & Grgic, 2016), as well as for other related tasks such as recognizing objects in paintings (Crowley & Zisserman, 2014). Even better performance on a variety of visual recognition tasks has been achieved by fine-tuning a pre-trained network on the new target dataset, as shown by Girshick et al. (2014), rather than using CNNs merely as feature extractors.…”
Section: Related Work
confidence: 87%
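The transfer-learning recipe described in the statement above — treating activations of a pretrained CNN as fixed feature vectors and training a simple classifier on top — can be sketched as follows. This is a minimal illustration, not any cited paper's pipeline: the 512-D feature vectors are random toy stand-ins (a real pipeline would extract them from a pretrained model, e.g. via torchvision), and the illustrative part is the multinomial logistic regression trained on them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for 512-D activation vectors from a pretrained CNN,
# for three hypothetical artist classes.
n_per_class, dim, n_classes = 30, 512, 3
centers = rng.normal(size=(n_classes, dim))
X = np.vstack([c + 0.5 * rng.normal(size=(n_per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# Multinomial logistic regression trained by plain gradient descent.
W = np.zeros((dim, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y]
for _ in range(300):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    grad = (p - onehot) / len(X)                  # gradient of cross-entropy
    W -= 0.1 * X.T @ grad
    b -= 0.1 * grad.sum(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()
```

Because the frozen features do most of the work, even this linear head separates the classes; fine-tuning the feature extractor itself (as in Girshick et al., 2014) typically improves on this further.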
“…Fortunately, the appearance of large, annotated, openly available fine-art collections such as the WikiArt 1 dataset, which contains more than 130k artwork images, has enabled the adoption of deep learning techniques and helped shape a more uniform framework for method comparison. To the best of our knowledge, the WikiArt dataset is currently the most commonly used dataset for art-related classification tasks (Karayev et al., 2014; Bar et al., 2014; David & Netanyahu, 2016; Girshick et al., 2014; Hentschel et al., 2016; Seguin et al., 2016; Chu & Wu, 2016), even though other online sources are also used, such as the Web Gallery of Art 2 (WGA), with more than 40k images (Seguin et al., 2016), or the Rijksmuseum challenge dataset (van Noord et al., 2015; Mensink & Van Gemert, 2014). Furthermore, there have been several initiatives to build painting datasets dedicated primarily to fine-art image classification, such as Painting-91 (Khan et al., 2014), which consists of 4,266 images from 91 different painters; the Pandora dataset, consisting of 7,724 images from 12 art movements (Florea et al., 2016); and the recently introduced museum-centric OmniART dataset with more than 1M photographic reproductions of artworks (Strezoski & Worring, 2017).…”
Section: Related Work
confidence: 99%
“…Recently, Forsythe et al. reported that fractal analysis of paintings can reveal characteristic changes in the structure of an artist's work; changes that may be early indicators of the onset of neurological deterioration [2]. The use of deep neural networks for image classification began relatively recently and has already proven effective [3,4]. In our work, we set out to assess whether deep neural networks can serve as a potential approach to the early diagnosis of mental disorders associated with schizophrenia, and assist the medical psychologist in professional psycho-correctional practice, including training activities.…”
Section: Using Deep Neural Network Computer Technology (unclassified)
“…Using the Conv-LSTM-based network, we have seen how to extract both locally and globally important features for classifying individual samples from a limited number of labeled examples. However, when the number of training samples is very small, unsupervised pretraining has proven highly effective [10,16,21,47]. Further, since both datasets are very sparse, we hypothesize that a CAE-based representation-learning (with reduced dimension) and classification scheme should further enhance the classification accuracy.…”
Section: Convolutional Autoencoder Classifier
confidence: 99%
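The convolutional-autoencoder pretraining that this statement (and DeepPainter itself) relies on has a decoder that reverses max pooling using the stored argmax locations ("switches"). A minimal single-channel forward-pass sketch, with toy shapes, a random kernel, and no training loop — purely to show the conv → pool → unpool plumbing, not any paper's actual architecture:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """s x s max pooling; also return the argmax 'switches' for unpooling."""
    H, W = x.shape
    pooled = np.zeros((H // s, W // s))
    switches = np.zeros((H // s, W // s, 2), dtype=int)
    for i in range(H // s):
        for j in range(W // s):
            patch = x[i * s:(i + 1) * s, j * s:(j + 1) * s]
            r, c = np.unravel_index(np.argmax(patch), patch.shape)
            pooled[i, j] = patch[r, c]
            switches[i, j] = (i * s + r, j * s + c)
    return pooled, switches

def unpool(pooled, switches, shape):
    """Place each pooled value back at its recorded max location; rest stays zero."""
    out = np.zeros(shape)
    for i in range(pooled.shape[0]):
        for j in range(pooled.shape[1]):
            r, c = switches[i, j]
            out[r, c] = pooled[i, j]
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))
kernel = rng.normal(size=(3, 3))

feat = np.maximum(conv2d(img, kernel), 0)  # encoder: conv + ReLU -> 14x14
code, sw = max_pool(feat)                  # -> 7x7 latent map
recon_feat = unpool(code, sw, feat.shape)  # decoder: unpool with switches -> 14x14
```

In an actual CAE the decoder would follow the unpooling with a learned deconvolution to reconstruct the input, and the reconstruction loss would drive the unsupervised pretraining before the encoder is reused for classification.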