2018
DOI: 10.21105/joss.01002

TensorFlow.jl: An Idiomatic Julia Front End for TensorFlow

Abstract: TensorFlow.jl is a Julia (Bezanson, Edelman, Karpinski, & Shah, 2017) client library for the TensorFlow deep-learning framework (Abadi et al., 2015, 2016). It allows users to define TensorFlow graphs using Julia syntax; these graphs are interchangeable with the graphs produced by Google's first-party Python TensorFlow client and can be used to perform training or inference on machine-learning models. Graphs are primarily defined by overloading native Julia functions to operate on a TensorFlow.jl Tensor…
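The graph-building-by-operator-overloading approach the abstract describes can be sketched in plain Python. This is a hypothetical toy `Node` class to illustrate the idea, not the actual TensorFlow.jl or TensorFlow API:

```python
# Toy sketch (hypothetical API): arithmetic operators on a Node build a
# computation graph instead of computing immediately, the same idea the
# abstract describes for overloading Julia functions on a Tensor type.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, tuple(inputs), value

    def __add__(self, other):
        return Node("add", [self, other])

    def __mul__(self, other):
        return Node("mul", [self, other])

def evaluate(node):
    """Walk the graph and compute its value, analogous to running a session."""
    if node.op == "const":
        return node.value
    args = [evaluate(n) for n in node.inputs]
    return {"add": args[0] + args[1], "mul": args[0] * args[1]}[node.op]

x = Node("const", value=2.0)
y = Node("const", value=3.0)
z = x * y + x            # only the graph is built here; nothing is computed
print(evaluate(z))       # 2*3 + 2 = 8.0
```

Because the overloads return graph nodes, ordinary-looking expressions like `x * y + x` transparently construct a graph that can be inspected or executed later.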

Cited by 11 publications (6 citation statements). References 2 publications.
“…In subsection 4.1, we train a DNN to map y ∈ R² to a scalar c ∈ R. The feature extractor is a ResNet with a width of w = 8 and a depth of d = 8 corresponding to a […] Appendix C. Autoencoder architecture. We adapt the MNIST autoencoder from [32]. The autoencoder consists of two convolutional neural networks with a user-defined width w and intrinsic dimension d. The width controls the number of convolutional filters used and the intrinsic dimension is the size of the low-dimensional embedding.…”
Section: Appendix B: Residual Neural Network (ResNets)
confidence: 99%
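The width/intrinsic-dimension parametrisation described in the snippet above can be made concrete with a small shape-bookkeeping sketch. The layer layout here is hypothetical (assuming 28×28 MNIST inputs and two stride-2 convolutions), not the architecture from the cited paper:

```python
# Hypothetical shape bookkeeping: width w sets the number of convolutional
# filters, intrinsic dimension d sets the size of the low-dim embedding.

def autoencoder_shapes(w, d, image_hw=28):
    # Encoder: two stride-2 convolutions halve the spatial size each time.
    h = image_hw // 2 // 2             # 28 -> 14 -> 7 for MNIST
    flat = w * h * h                   # flattened feature map with w filters
    encoder = [(1, w), (w, w)]         # (in_channels, out_channels) per conv
    bottleneck = (flat, d)             # dense layer down to the embedding
    decoder = [(d, flat), (w, w), (w, 1)]
    return encoder, bottleneck, decoder

enc, bottle, dec = autoencoder_shapes(w=8, d=8)
print(bottle)   # (392, 8): 8 filters * 7 * 7 spatial -> 8-dim embedding
```

Doubling `w` scales every feature map, while `d` independently controls how compressed the learned representation is.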
“…The neural bias model is written using the Julia (Bezanson et al. 2017) interface to TensorFlow (Abadi et al. 2016; Malmaud & White 2018). This is embedded into the hmclet, which is one sub-block of the borg algorithm.…”
Section: Neural Bias Model
confidence: 99%
“…float16, float32 and float64). Though high-precision computations are enabled in Julia using multiple-digit "BigFloat", deep-learning libraries built on top of Julia such as Flux (Innes et al., 2018; Innes, 2018), MXNet.jl (Chen et al., 2015), and TensorFlow.jl (Malmaud & White, 2018) do not support training with BigFloat. Furthermore, BigFloat in Julia can only exist on CPUs, not on GPUs, which greatly limits its usage.…”
Section: Introduction
confidence: 99%
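The precision hierarchy the snippet above refers to (float16 / float32 / float64) can be demonstrated with only the Python standard library. This is an illustrative sketch of IEEE-754 rounding, not code from any of the cited packages:

```python
import struct

# Python's native float is IEEE-754 binary64. Packing and unpacking through
# the 'f' (binary32) and 'e' (binary16) struct formats rounds a value to the
# narrower type, exposing where small increments vanish.

def to_float32(x):
    """Round a Python float to binary32 precision."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

def to_float16(x):
    """Round a Python float to binary16 precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

x = 1.0 + 2**-12
print(to_float32(x) == x)   # True: float32 still resolves 2**-12 near 1.0
print(to_float16(x) == 1.0) # True: the increment is below float16 precision
```

The same effect at smaller scales (e.g. `1.0 + 2**-30` rounding to `1.0` in float32) is why gradient updates can silently vanish when training in reduced precision.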