arxiv:1404.5997
One weird trick for parallelizing convolutional neural networks
Published on Apr 23, 2014
Authors: Alex Krizhevsky
Abstract
I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.
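The abstract does not spell out the mechanism, but the scheme rests on data parallelism: each GPU processes a shard of the batch and the per-shard gradients are averaged, which reproduces the single-device large-batch gradient exactly. The sketch below demonstrates this equivalence on a toy linear layer in NumPy; it is an illustrative simulation, not the paper's full method, which additionally uses model parallelism in the fully-connected layers.

```python
import numpy as np

# Illustrative sketch (assumption: plain data parallelism, one SGD step).
# Each simulated "GPU" gets a shard of the batch and computes a local
# gradient; averaging the shard gradients equals the full-batch gradient.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # toy linear "layer" weights
X = rng.standard_normal((8, 4))   # batch of 8 examples
Y = rng.standard_normal((8, 3))   # regression targets

def grad(W, X, Y):
    """Gradient of mean squared error 0.5*||XW - Y||^2 w.r.t. W."""
    return X.T @ (X @ W - Y) / len(X)

# Single-device gradient on the whole batch.
g_full = grad(W, X, Y)

# Data-parallel version: two simulated "GPUs", each with half the batch.
shards = np.split(np.arange(8), 2)
g_avg = np.mean([grad(W, X[s], Y[s]) for s in shards], axis=0)

assert np.allclose(g_full, g_avg)  # averaged shard gradients match
```

Because the equivalence is exact, the main cost of scaling out is the communication needed to average gradients, which is the bottleneck the paper's hybrid scheme is designed to reduce.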