mav23 committed
Commit 1358f2e
1 Parent(s): a964479

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +148 -0
  3. x-alma-13b-pretrain.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+x-alma-13b-pretrain.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,148 @@
---
license: mit
datasets:
- oscar-corpus/OSCAR-2301
- allenai/nllb
- Helsinki-NLP/opus-100
language:
- en
- da
- nl
- de
- is
- 'no'
- sv
- af
- ca
- ro
- gl
- it
- pt
- es
- bg
- mk
- sr
- uk
- ru
- id
- ms
- th
- vi
- mg
- fr
- hu
- el
- cs
- pl
- lt
- lv
- ka
- zh
- ja
- ko
- fi
- et
- gu
- hi
- mr
- ne
- ur
- az
- kk
- ky
- tr
- uz
- ar
- he
- fa
base_model:
- haoranxu/ALMA-13B-Pretrain
---

[X-ALMA](https://arxiv.org/pdf/2410.03115) builds upon [ALMA-R](https://arxiv.org/pdf/2401.08417) by expanding support from 6 to 50 languages. It utilizes a plug-and-play architecture with language-specific modules, complemented by a carefully designed training recipe. This release includes the **X-ALMA pre-trained base model**.
```
@misc{xu2024xalmaplugplay,
  title={X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale},
  author={Haoran Xu and Kenton Murray and Philipp Koehn and Hieu Hoang and Akiko Eriguchi and Huda Khayrallah},
  year={2024},
  eprint={2410.03115},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.03115},
}
```
X-ALMA-13B-Pretrain is pre-trained on 50 languages: en, da, nl, de, is, no, sv, af, ca, ro, gl, it, pt, es, bg, mk, sr, uk, ru, id, ms, th, vi, mg, fr, hu, el, cs, pl, lt, lv, ka, zh, ja, ko, fi, et, gu, hi, mr, ne, ur, az, kk, ky, tr, uz, ar, he, fa.

All X-ALMA checkpoints are released on Hugging Face:
| Models | Model Link | Description |
|:-------------:|:---------------:|:---------------:|
| X-ALMA | [haoranxu/X-ALMA](https://huggingface.co/haoranxu/X-ALMA) | X-ALMA model with all its modules |
| X-ALMA-13B-Pretrain | [haoranxu/X-ALMA-13B-Pretrain](https://huggingface.co/haoranxu/X-ALMA-13B-Pretrain) | X-ALMA 13B multilingual pre-trained base model |
| X-ALMA-Group1 | [haoranxu/X-ALMA-13B-Group1](https://huggingface.co/haoranxu/X-ALMA-13B-Group1) | X-ALMA group 1 language-specific module and the merged model |
| X-ALMA-Group2 | [haoranxu/X-ALMA-13B-Group2](https://huggingface.co/haoranxu/X-ALMA-13B-Group2) | X-ALMA group 2 language-specific module and the merged model |
| X-ALMA-Group3 | [haoranxu/X-ALMA-13B-Group3](https://huggingface.co/haoranxu/X-ALMA-13B-Group3) | X-ALMA group 3 language-specific module and the merged model |
| X-ALMA-Group4 | [haoranxu/X-ALMA-13B-Group4](https://huggingface.co/haoranxu/X-ALMA-13B-Group4) | X-ALMA group 4 language-specific module and the merged model |
| X-ALMA-Group5 | [haoranxu/X-ALMA-13B-Group5](https://huggingface.co/haoranxu/X-ALMA-13B-Group5) | X-ALMA group 5 language-specific module and the merged model |
| X-ALMA-Group6 | [haoranxu/X-ALMA-13B-Group6](https://huggingface.co/haoranxu/X-ALMA-13B-Group6) | X-ALMA group 6 language-specific module and the merged model |
| X-ALMA-Group7 | [haoranxu/X-ALMA-13B-Group7](https://huggingface.co/haoranxu/X-ALMA-13B-Group7) | X-ALMA group 7 language-specific module and the merged model |
| X-ALMA-Group8 | [haoranxu/X-ALMA-13B-Group8](https://huggingface.co/haoranxu/X-ALMA-13B-Group8) | X-ALMA group 8 language-specific module and the merged model |

## A quick start
There are three ways to load X-ALMA for translation. The examples below translate "我爱机器翻译。" ("I love machine translation.") into English; X-ALMA can also handle multilingual open-ended QA.

**The first way**: load the merged model, where the language-specific module has already been merged into the base model **(Recommended)**:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Language groups: each group shares one language-specific module.
GROUP2LANG = {
    1: ["da", "nl", "de", "is", "no", "sv", "af"],
    2: ["ca", "ro", "gl", "it", "pt", "es"],
    3: ["bg", "mk", "sr", "uk", "ru"],
    4: ["id", "ms", "th", "vi", "mg", "fr"],
    5: ["hu", "el", "cs", "pl", "lt", "lv"],
    6: ["ka", "zh", "ja", "ko", "fi", "et"],
    7: ["gu", "hi", "mr", "ne", "ur"],
    8: ["az", "kk", "ky", "tr", "uz", "ar", "he", "fa"],
}
LANG2GROUP = {lang: str(group) for group, langs in GROUP2LANG.items() for lang in langs}
group_id = LANG2GROUP["zh"]

model = AutoModelForCausalLM.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"

# X-ALMA requires the chat template; ALMA and ALMA-R do not.
chat_style_prompt = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(chat_style_prompt, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```

**The second way**: load the base model and the language-specific module separately **(Recommended)**:
```
# Reuses the imports and `group_id` from the first example.
model = AutoModelForCausalLM.from_pretrained("haoranxu/X-ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, f"haoranxu/X-ALMA-13B-Group{group_id}")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')
```

**The third way**: load the base model together with all language-specific modules, MoE-style (requires large GPU memory):
```
# `modeling_xalma.py` is provided in the haoranxu/X-ALMA repository.
from modeling_xalma import XALMAForCausalLM

model = XALMAForCausalLM.from_pretrained("haoranxu/X-ALMA", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/X-ALMA", padding_side='left')

# Pass `lang="zh"` during generation so the model knows which language
# group's module to route through for this loading method.
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9, lang="zh")
```
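
**Running the GGUF in this repo**: this repository ships a Q4_0 quantization of the pre-trained base model. Below is a minimal sketch using `llama-cpp-python`, one of several GGUF-capable runtimes; the repo id is an assumption inferred from this upload, and a quantized base model's output will differ from the merged full-precision checkpoints above.
```
# Minimal sketch (assumes `pip install llama-cpp-python huggingface_hub`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mav23/X-ALMA-13B-Pretrain-GGUF",  # assumed repo id for this upload
    filename="x-alma-13b-pretrain.Q4_0.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=512)

prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
out = llm(prompt, max_tokens=32, temperature=0.6, top_p=0.9)
print(out["choices"][0]["text"].strip())
```
Q4_0 stores weights at roughly 4.5 bits each, which is why the file comes to about 7.4 GB for a 13B-parameter model.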
x-alma-13b-pretrain.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eb1a64abfd0fd2e940e1057e99772930d4fbe8ec27975f79e8421d6f13f6b8e3
size 7365836960
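
The `.gguf` weights themselves live in Git LFS; the pointer above records only the blob's SHA-256 and size (7,365,836,960 bytes). A quick sketch for verifying a downloaded copy against the pointer:
```
# Verify the downloaded GGUF against the LFS pointer's oid and size.
import hashlib
import os

path = "x-alma-13b-pretrain.Q4_0.gguf"
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == 7365836960
assert h.hexdigest() == "eb1a64abfd0fd2e940e1057e99772930d4fbe8ec27975f79e8421d6f13f6b8e3"
print("checksum and size match the LFS pointer")
```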