Proceedings of the 25th ACM International Conference on Multimedia 2017
DOI: 10.1145/3123266.3129391

TensorLayer

Abstract: Recently we have observed emerging uses of deep learning techniques in multimedia systems. Developing a practical deep learning system is arduous and complex. It involves labor-intensive tasks for constructing sophisticated neural networks, coordinating multiple network models, and managing a large amount of training-related data. To facilitate such a development process, we propose TensorLayer, a Python-based versatile deep learning library. TensorLayer provides high-level modules that abstract sophist…
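The layer-stacking style of the high-level modules described in the abstract can be illustrated with a minimal sketch, assuming the TensorLayer 1.x API on top of TensorFlow 1.x (the versions cited in the statements below); the layer names and sizes here are illustrative and not taken from the paper.

import tensorflow as tf
import tensorlayer as tl

# Placeholder for flattened 28x28 grayscale images (e.g. MNIST).
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')

# Each TensorLayer call wraps a set of TensorFlow ops in a single line.
network = tl.layers.InputLayer(x, name='input')
network = tl.layers.DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu1')
network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output')

y = network.outputs  # symbolic output tensor of the stacked network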

Cited by 59 publications (8 citation statements)
References 16 publications (14 reference statements)
“…The algorithms were implemented on Python 3.4.3. We used Tensorflow [22] GPU version 1.3.0 and some functions of the Tensorlayer [23] library, version 1.6.4. The CUDA version was 8.0.61.…”
Section: Methods
confidence: 99%
“…The networks were implemented using TensorLayer (Dong et al., 2017) and Tensorflow (Abadi et al., 2016). The detailed description of the networks used in this paper is described in Supplementary Methods.…”
Section: Methods and Datasets
confidence: 99%
“…We then compute the average of these images as the final prediction, with another map showing the P-value of each pixel. We use Tensorflow combined with TensorLayer (Dong et al., 2017) to implement the deep learning module. Trained on a workstation with one Pascal Titan X, the model gets converged in around 8 h.…”
Section: Methods
confidence: 99%