Pipeline Parallelism with Reduced Network Communications for Efficient Compute-intensive Neural Network Training
Chanhee Yu, Kyongseok Park
Abstract: Pipeline parallelism is a distributed deep neural network training method suited to tasks with large memory footprints. However, it incurs substantial overhead because the forward and backward steps executed across multiple devices depend on one another. A method that removes the forward-step dependency via an all-to-all approach has been proposed for compute-intensive models; however, this method incurs large overhead when training with a large number of devices and …
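To make the inter-device dependency described above concrete, below is a minimal sketch (not the paper's method): a Python simulation of a GPipe-style synchronous pipeline schedule, where idle slots ('.') form the pipeline bubble created by forward/backward dependencies between devices. The device count, micro-batch count, and one-unit step times are illustrative assumptions.

    # Minimal sketch of a GPipe-style pipeline schedule (illustrative only;
    # not the method proposed in the paper). Each forward (F) and backward (B)
    # step is assumed to take one time unit.

    def pipeline_schedule(num_devices: int, num_microbatches: int):
        """Return per-device timelines: all forwards, then all backwards."""
        timelines = {d: [] for d in range(num_devices)}
        # Forward pass: micro-batch m reaches device d at time slot d + m,
        # because it must first pass through devices 0..d-1.
        for m in range(num_microbatches):
            for d in range(num_devices):
                timelines[d].append((d + m, f"F{m}"))
        # Backward pass: starts only after the last forward clears the
        # pipeline, then flows in reverse device order (the dependency chain).
        bwd_start = num_devices + num_microbatches - 1
        for m in range(num_microbatches):
            for d in range(num_devices):
                t = bwd_start + (num_devices - 1 - d) + m
                timelines[d].append((t, f"B{m}"))
        return timelines

    if __name__ == "__main__":
        tl = pipeline_schedule(num_devices=4, num_microbatches=4)
        total = max(t for steps in tl.values() for t, _ in steps) + 1
        for d, steps in tl.items():
            row = ["."] * total
            for t, tag in steps:
                row[t] = tag[0]  # 'F' or 'B'
            print(f"device {d}: {''.join(row)}")
        # Dots mark idle slots: the pipeline bubble caused by the
        # inter-device dependencies that the paper aims to reduce.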