2019
DOI: 10.48550/arxiv.1912.01703
Preprint
PyTorch: An Imperative Style, High-Performance Deep Learning Library

Abstract: Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch…
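A minimal sketch (not taken from the paper) of the imperative, Pythonic style the abstract describes: the model is ordinary Python code executed eagerly, and moving it to a GPU is a one-line change. The network shape and data are illustrative only.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# The model is defined and run line by line, so it can be inspected with
# print() or a standard Python debugger like any other Python object.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
x = torch.randn(4, 8, device=device)

loss = model(x).pow(2).mean()   # forward pass executes immediately
loss.backward()                 # gradients via reverse-mode autograd
print(loss.item(), model[0].weight.grad.shape)
```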

Cited by 1,075 publications (1,019 citation statements)
References 9 publications (9 reference statements)
“…Libraries Neural networks were implemented in PyTorch (Paszke et al, 2019). The RL algorithms were implemented using Tonic (Pardo, 2021).…”
Section: A. Experimental Details (mentioning)
confidence: 99%
“…It takes approximately 3 days to train the full model on one GTX 2080 Ti. We train our network with PyTorch [48], using the Adam [33] optimizer with a cyclic learning rate schedule [57] with a base learning rate of 10⁻⁴ and a max learning rate of 5 × 10⁻⁴. We provide training hyperparameters for each dataset in Section B.…”
Section: Training (mentioning)
confidence: 99%
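A minimal sketch of the training setup quoted above: Adam with a cyclic learning-rate schedule between 1e-4 and 5e-4. The network, the half-cycle length, and the loop are illustrative assumptions; the cited paper does not state them in this excerpt.

```python
import torch
import torch.nn as nn

model = nn.Linear(32, 10)                     # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4,             # quoted base learning rate
    max_lr=5e-4,              # quoted max learning rate
    step_size_up=2000,        # assumed half-cycle length (iterations)
    cycle_momentum=False,     # disable momentum cycling, which Adam does not use
)

for step in range(10):                        # stand-in training loop
    x, y = torch.randn(16, 32), torch.randn(16, 10)
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                          # advance the cyclic schedule each iteration
```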
“…To help understand the flexibility of our self-supervised model, we trained a variation of our model on images with large, simulated brightness and hue variations, inspired by the challenges of rapid exposure changes (e.g., in HDR photography). During training and testing, we randomly jitter the brightness and hue of the second image in KITTI by a factor of up to 0.6 and 0.3 respectively, using PyTorch's [48] built-in augmentation. We finetuned the variation of our model that combines our learned features with Census features (Tab.…”
Section: Motion Estimation Ablations (mentioning)
confidence: 99%
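A minimal sketch of the augmentation described in the excerpt, using torchvision's built-in ColorJitter with the quoted jitter factors (brightness 0.6, hue 0.3) applied to the second image of a pair. The tensor shapes and random placeholder images are assumptions.

```python
import torch
from torchvision import transforms

jitter = transforms.ColorJitter(brightness=0.6, hue=0.3)

# img1, img2 stand in for a KITTI image pair (C x H x W, values in [0, 1]).
img1 = torch.rand(3, 128, 256)
img2 = torch.rand(3, 128, 256)

img2_aug = jitter(img2)   # only the second image is perturbed
```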
“…[44] In this work, we demonstrate the integration of NNP in molecular dynamics simulations by implementing NNP/MM in ACEMD [2] using OpenMM [6] and PyTorch [46]. The implementation is validated by performing MD simulations of four protein-ligand complexes. We have made the setup of NNP/MM as simple as possible to facilitate adoption.…”
Section: Introduction (mentioning)
confidence: 99%
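A toy sketch of only the PyTorch side of such an NNP/MM coupling: a stand-in neural network potential that maps atomic positions to a scalar energy, exported with TorchScript so an external MD engine can load it. The architecture, atom count, and filename are hypothetical, and the actual ACEMD/OpenMM wiring from the cited work is not shown.

```python
import torch
import torch.nn as nn

class ToyPotential(nn.Module):
    """Illustrative stand-in for a neural network potential (NNP)."""
    def __init__(self, n_atoms: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3 * n_atoms, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # positions: (n_atoms, 3) -> scalar potential energy
        return self.net(positions.flatten()).sum()

# Export as a TorchScript module; an MM engine could then evaluate it during MD.
model = torch.jit.script(ToyPotential(n_atoms=22))
model.save("toy_nnp.pt")   # hypothetical filename
```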