SIGGRAPH Asia 2014 Courses (2014)
DOI: 10.1145/2659467.2659474
Shadertoy

Cited by 3 publications (3 citation statements); references 0 publications.

Citation statements
“…The first compiles the entered code to a compute shader to process or generate arbitrary data. The second node compiles to a fragment shader that generates an arbitrary 2D texture, with limited compatibility to Shadertoy [17]. Other nodes that do not perform computation can act as data sources or sinks, although there is no clear separation between the two roles.…”
Section: Methods
Mentioning confidence: 99%
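
For readers unfamiliar with the convention the statement above alludes to, "compatibility to Shadertoy" usually means accepting shaders written against Shadertoy's entry point, in which the host supplies uniforms such as iResolution and iTime and calls mainImage() once per fragment to fill a 2D texture. The GLSL sketch below illustrates that convention; the animated gradient itself is purely illustrative and is not taken from the cited tool.

    // Shadertoy convention: the host declares uniforms such as
    // iResolution (viewport size in pixels) and iTime (elapsed seconds)
    // and calls mainImage() once per fragment to fill the output texture.
    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        // Normalized pixel coordinates in [0, 1].
        vec2 uv = fragCoord / iResolution.xy;

        // Illustrative animated gradient; any pure function of uv and
        // iTime yields the kind of 2D texture a partially compatible
        // host would need to reproduce.
        vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0));

        fragColor = vec4(col, 1.0);
    }

Hosts that advertise only limited compatibility typically emulate this entry point and a subset of the built-in uniforms (for example iResolution and iTime) rather than the full set of Shadertoy inputs such as iMouse and the iChannel0–3 textures.
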
“…Specifically, we want to be able to see results immediately, even while editing the source code. This concept is used today, for example, for entertainment in the so-called Demoscene [17–19]. We expect this approach to also enable intuition-driven development in the domain of medical image analysis.…”
Section: Introduction
Mentioning confidence: 99%
“…Examples include the OpenML project (Vanschoren et al., 2014), the Model Zoo project (Fenoy et al., 2013), the PyTorch hub (Paszke et al., 2019), and the model repository of TensorFlow (Abadi et al., 2016). In the computer vision domain, the Shadertoy platform offers an algorithm repository in which each item can be modified online and the execution results can be seen immediately (Jeremias & Quilez, 2014). These works are instructive for the sharing of model knowledge.…”
Section: Introduction
Mentioning confidence: 99%