2017
DOI: 10.1162/leon_a_01455
Autoencoding Blade Runner: Reconstructing Films with Artificial Neural Networks

Abstract: 'Blade Runner-Autoencoded' is a film made by training an autoencoder (a type of generative neural network) to recreate frames from the film Blade Runner. The autoencoder is made to reinterpret every individual frame, reconstructing it based on its memory of the film. The result is a hazy, dreamlike version of the original film. The project explores the aesthetic qualities of the disembodied gaze of the neural network. The autoencoder is also capable of representing images from films it has not seen based…
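The reconstruction process the abstract describes can be sketched in highly simplified form as a linear autoencoder: frames are compressed into a small latent code and then decoded back, so every output is mediated by the model's "memory". The data, dimensions, and hyperparameters below are illustrative assumptions only; the actual project trained a learned convolutional/adversarial model on real Blade Runner frames.

```python
import numpy as np

# Minimal linear autoencoder trained by gradient descent on random stand-in
# "frames". Everything here is an illustrative assumption, not the authors'
# model.
rng = np.random.default_rng(0)
X = rng.random((200, 64))            # 200 fake 8x8 grayscale frames, flattened
X -= X.mean(axis=0)                  # center the data
baseline = float(np.mean(X ** 2))    # error of predicting all zeros

d, k = X.shape[1], 8                 # input dim, latent ("memory") dim
W_enc = rng.normal(0.0, 0.1, (d, k))
W_dec = rng.normal(0.0, 0.1, (k, d))

lr = 0.1
for _ in range(1000):
    Z = X @ W_enc                    # encode: frame -> compact latent code
    X_hat = Z @ W_dec                # decode: latent code -> reconstruction
    err = X_hat - X
    # gradients of the mean squared reconstruction error
    g_dec = (Z.T @ err) / len(X)
    g_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"baseline MSE: {baseline:.4f}, reconstruction MSE: {mse:.4f}")
```

Because the latent code is much smaller than the frame, the reconstruction can only keep what the model has learned to represent, which is the source of the "hazy, dreamlike" quality described above.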

Cited by 5 publications (7 citation statements)
References 0 publications
“…The results we have obtained markedly lie outside the distribution of training images, and allow for a very large range of possible outcomes. In addition, the combination of autoencoding [36] and network bending techniques allows for completely novel approaches to filtering and transforming pre-recorded audio, which can be seen in Figure 3.…”
Section: Discussion
confidence: 99%
“…After training it is possible to sample randomly in the latent space and then sample directly from the decoder. It is also possible to input audio sequences, both from the training set and outside of it, and produce reconstructions of the audio track mediated through the VAE model, in a method that we have previously referred to as autoencoding [36]. By performing this autoencoding procedure in combination with network bending, we can provide a new way of transforming and filtering audio sequences.…”
Section: Base Models
confidence: 99%
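The two procedures that citation statement describes, sampling the latent prior and autoencoding an unseen input, can be sketched as follows. The weights here are random placeholders rather than a trained VAE, and every name and dimension is an assumption made for illustration only.

```python
import numpy as np

# Sketch of the two uses of a trained VAE described above; the weights are
# random placeholders, not a trained model.
rng = np.random.default_rng(1)
latent_dim, frame_dim = 4, 16
W_mu = rng.normal(0.0, 0.1, (frame_dim, latent_dim))   # encoder mean weights
W_out = rng.normal(0.0, 0.1, (latent_dim, frame_dim))  # decoder weights

def encode(x):
    return x @ W_mu                   # posterior mean (variance term omitted)

def decode(z):
    return np.tanh(z @ W_out)         # map latent codes back to frame space

# (1) Unconditional generation: sample the N(0, I) prior, then decode.
z_prior = rng.standard_normal((1, latent_dim))
generated = decode(z_prior)

# (2) "Autoencoding": pass an out-of-training-set sequence through the
#     encoder and decoder to obtain a model-mediated reconstruction.
x_unseen = rng.random((8, frame_dim))  # stand-in for unseen audio frames
reconstruction = decode(encode(x_unseen))
print(generated.shape, reconstruction.shape)
```

In both paths the output is produced by the same decoder, which is why even out-of-distribution inputs come back filtered through what the model learned in training.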
“…Terence Broad and Mick Grierson's Blade Runner-Autoencoded [7] (Fig. 2) used a decoding algorithm to learn, frame by frame, the movie Blade Runner and then used the same algorithm iteratively to make a best guess of the movie based on this learning, in something like a feedback loop that exposed the "thinking" of the algorithm in a way that evoked the concept of memory itself, in all its hazy glory, and asked us to consider the distinction between the machine's "learning and memory" and our own [8].…”
Section: Jess Rowland
confidence: 99%
“…Therefore, VAEs can also generate new samples, by remixing features they encountered in training samples. This generative power makes VAEs particularly useful for both image and music generation (Broad and Grierson [6], Roberts et al. [37], Roche et al. [39]).…”
Section: VAE Training
confidence: 99%
“…Technologists started performing the first pioneering experiments with automated generation of image and music in the 1950s and 60s [21,31], which evolved into an assistance to human artists (Berg [4], Daudrich [12], Taylor [42], Xenakis [46]). Recent advances in hardware and algorithms made neural-networks-based generation widely accessible for both research and art (Alvarez-Melis and Amores [2], Briot et al. [5], Broad and Grierson [6], Carnovalini and Rodá [7], Diaz-Jerez [13], Fernandez and Vico [16], Goodfellow et al. [17], Larsen et al. [26], Roberts et al. [37], Roche et al. [39]). Scientific and artistic interests also meet in bridging between the expressions: music visualization, an idea rooted in early 20th century art (Corra [10], Kandinsky [23], Moritz [29]), is mostly the purview of artists (Ox and Keefer [32]), while much of the image-to-music transformation is based on sonification, often with a scientific-technological focus (Barrass and Kramer [3], Dubus and Bresin [14], Kramer et al. [25], Walker and Nees [43]).…”
confidence: 99%