simarora committed on
Commit
cc48fc0
1 Parent(s): a029664

Update README.md

Files changed (1)
  1. README.md +37 -1
README.md CHANGED
@@ -4,5 +4,41 @@ datasets:
  language:
  - en
  ---

- This is a 1.3Bn parameter Based model that has been trained on 50Bn tokens of the Pile corpus.
+ # Model Card
+
+ This model is a pretrained 1.3Bn-parameter Based model.
+
+ As a quality reference, we include a pretrained Mamba model provided here: https://huggingface.co/hazyresearch/mamba-1b-50b
+ Both checkpoints are pretrained on 50Bn tokens of the Pile in the exact same data order using next-token prediction.
+
+ A WandB report for training is here: https://api.wandb.ai/links/hazy-research/ggo9rst2
+
+ ### Model Sources
+
+ The model implementation and the training code used to produce this checkpoint are provided here: https://github.com/HazyResearch/based
+
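+ Below is a minimal sketch of loading this checkpoint for inference. It follows the usage pattern in the repository above, but the import path, the `from_pretrained_hf` helper, the checkpoint id, and the tokenizer choice are assumptions rather than a confirmed API; please defer to the repository README if they differ.
+
+ ```python
+ # Minimal loading sketch: the names below are assumptions, not a confirmed API.
+ # Check the import path, `from_pretrained_hf`, the checkpoint id, and the
+ # tokenizer against the HazyResearch/based README before relying on them.
+ import torch
+ from transformers import AutoTokenizer
+ from based.models.gpt import GPTLMHeadModel  # assumed import path
+
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed tokenizer for this Pile-trained model
+ model = GPTLMHeadModel.from_pretrained_hf("hazyresearch/based-1b-50b")  # assumed checkpoint id
+ model = model.to("cuda").eval()
+
+ input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids.to("cuda")
+ with torch.no_grad():
+     logits = model(input_ids).logits  # next-token logits; the output layout may differ by version
+ next_token = logits[0, -1].argmax(dim=-1)
+ print(tokenizer.decode([next_token.item()]))
+ ```
+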
+ ### Uses
+
+ The purpose of this work is to evaluate the language modeling quality of a new efficient architecture, Based.
+
+ We include a series of benchmarks that you can use to evaluate quality (a quick loading sketch follows the list):
+ - FDA: https://huggingface.co/datasets/hazyresearch/based-fda
+ - SWDE: https://huggingface.co/datasets/hazyresearch/based-swde
+ - SQuAD: https://huggingface.co/datasets/hazyresearch/based-squad
+
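+ As a rough sketch, each benchmark can be pulled with the Hugging Face `datasets` library; the snippet below only inspects whatever splits and fields the dataset exposes rather than assuming them, and the evaluation harness in the GitHub repository above remains the reference way to score models.
+
+ ```python
+ # Peek at one of the benchmark datasets; split and field names are not assumed,
+ # they are read off the loaded object. Swap in based-swde or based-squad as needed.
+ from datasets import load_dataset
+
+ fda = load_dataset("hazyresearch/based-fda")
+ print(fda)               # lists the available splits and their features
+ split = next(iter(fda))  # take whichever split the dataset provides
+ print(fda[split][0])     # print one raw example
+ ```
+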
+ ## Citation
+
+ Please consider citing this paper if you use our work:
+
+ ```
+ @article{arora2024simple,
+   title={Simple linear attention language models balance the recall-throughput tradeoff},
+   author={Arora, Simran and Eyuboglu, Sabri and Zhang, Michael and Timalsina, Aman and Alberti, Silas and Zinsley, Dylan and Zou, James and Rudra, Atri and Ré, Christopher},
+   journal={arXiv:2402.18668},
+   year={2024}
+ }
+ ```
+
+ Please reach out to simarora@stanford.edu, eyuboglu@stanford.edu, and mzhang20@stanford.edu with questions.