taesiri committed on
Commit
df5f8c2
1 Parent(s): 7d83a38

Upload abstract/2001.04451.txt with huggingface_hub

Files changed (1)
  1. abstract/2001.04451.txt +1 -0
abstract/2001.04451.txt ADDED
@@ -0,0 +1 @@
 + Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
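
A rough illustration, not part of the uploaded file: the NumPy sketch below shows the reversible residual identity the abstract refers to, with toy functions `f` and `g` standing in for the attention and feed-forward sub-layers. Because the inputs can be recomputed from the outputs, activations need not be stored per layer.

```python
# Minimal sketch of a reversible residual block (assumes toy f/g, not the
# Reformer's actual sub-layers).
import numpy as np

def f(x):
    # Stand-in for the attention sub-layer.
    return np.tanh(x)

def g(x):
    # Stand-in for the feed-forward sub-layer.
    return 0.5 * x

def reversible_forward(x1, x2):
    # y1 = x1 + f(x2);  y2 = x2 + g(y1)
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    # Recover the inputs exactly from the outputs, so intermediate
    # activations can be recomputed during the backward pass instead
    # of being stored for every layer.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)
```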