pinzhenchen committed
Commit aa1ee4a
1 Parent(s): 07e01d2

Upload README.md with huggingface_hub

Browse files
Files changed (1)
  1. README.md +47 -0
README.md ADDED
@@ -0,0 +1,47 @@
---
language:
- bg
- cs
- zh
- de
- fi
- fr
- ru
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---

### Model Description

This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which we use to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)

#### Instruction tuning details
* Base model: [EleutherAI/pythia-6.9b-deduped](https://huggingface.co/EleutherAI/pythia-6.9b-deduped)
* Instruction tuning language: multilingual, downsampled (Bulgarian, Czech, Chinese, German, Finnish, French, Russian, and Spanish)
* Training method: LoRA (see the configuration sketch after this list).
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: lowest cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
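
For concreteness, here is a minimal `peft` configuration sketch matching the hyperparameters listed above. It is an illustration, not the authors' exact training script; note that in GPT-NeoX-based models such as Pythia, the key/query/value projections are fused into a single `query_key_value` module, which is what LoRA targets here.

```python
# A minimal LoRA setup with peft, mirroring the listed hyperparameters
# (rank=8, alpha=16, key/query/value targets). Illustrative only.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-6.9b-deduped")

lora_config = LoraConfig(
    r=8,                                 # LoRA rank
    lora_alpha=16,                       # LoRA scaling factor
    target_modules=["query_key_value"],  # fused K/Q/V projection in GPT-NeoX
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```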

#### Usage
This checkpoint is a LoRA adapter: it should be loaded together with the base model using the `transformers` and `peft` libraries, for example as sketched below.
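
A minimal inference sketch under two assumptions: `ADAPTER_PATH` stands in for this repository's ID (or a local download), and the Alpaca-style prompt template is inferred from the Alpaca-derived training data.

```python
# Load the Pythia base model, then apply the LoRA adapter on top with peft.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-6.9b-deduped"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# ADAPTER_PATH is a placeholder: use this repo's ID or a local checkpoint dir.
model = PeftModel.from_pretrained(base, "ADAPTER_PATH")
model.eval()

# Alpaca-style prompt format (an assumption, given the Alpaca-derived data).
prompt = "### Instruction:\nName the capital of Finland.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```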

Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.

#### Citation
```bibtex
@inproceedings{chen-etal-2024-monolingual,
  title = "Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
  author = "Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
  year = "2024",
  booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```