BAAI /

ldwang committed · verified · Commit 56aec23 · 1 Parent(s): 04c871a

Update README.md

Files changed (1): README.md (+37 −39)

README.md CHANGED
@@ -2,83 +2,81 @@
  license: other
  ---

-
  ![Aquila_logo](./log.jpeg)

-
  <h4 align="center">
  <p>
  <b>English</b> |
- <a href="https://huggingface.co/BAAI/Aquila2-34B/blob/main/README_zh.md">简体中文</a>
- </p>
  </h4>

- <p align="center">
- <a href="https://github.com/FlagAI-Open/Aquila2" target="_blank">Github</a> • <a href="https://github.com/FlagAI-Open/Aquila2/blob/main/assets/wechat-qrcode.jpg" target="_blank">WeChat</a> <br>
- </p>
-
-

  We open-source our **Aquila2** series: the base language models **Aquila2-7B** and **Aquila2-34B**, the chat models **AquilaChat2-7B** and **AquilaChat2-34B**, and the long-text chat models **AquilaChat2-7B-16k** and **AquilaChat2-34B-16k**.

- 2023.10.25 🔥 **Aquila2-34B v1.2** is based on the previous **Aquila2-34B**.
- Aquila2-34B v1.2 achieved a 6.9% improvement in comprehensive evaluations, with MMLU (+12%), TruthfulQA (+14%), CSL (+11%), TNEWS (+12%), OCNLI (+28%), and BUSTM (+18%).
-

- Additional details of the Aquila model will be presented in the official technical report. Please stay tuned for updates on official channels.

- ### Note
- <p>
- We discovered a data-leakage problem with the GSM8K test data in the pre-training dataset, so the GSM8K results have been removed from the evaluation results.

- Upon thorough investigation and analysis, the leakage was traced to mathematical dataset A (over 2 million samples), recommended by a team we have collaborated with multiple times. This dataset includes the untreated GSM8K test set (1,319 samples). The team performed only routine de-duplication and quality checks and did not additionally filter for the presence of the GSM8K test data, resulting in this oversight.

- Our team has always strictly adhered to the principle that training data must not include test data. Taking this lesson from the error caused by not thoroughly checking the source of external data, we have checked all 2 trillion tokens of data against various test datasets, including WTM22 (en-zh), CLUEWSC, Winograd, HellaSwag, OpenBookQA, PIQA, ARC-e, BUSTM, BoolQ, TruthfulQA, RAFT, ChID, EPRSTMT, TNEWS, OCNLI, SEM-Chinese, MMLU, C-Eval, CMMLU, CSL and HumanEval.
- </p>

- ## Chat Model Performance

- <br>
- <p align="center">
- <img src="base_metrics.jpeg" width="1024"/>
- </p>
- <br>

- ## Quick Start Aquila2-34B (Chat model)

  ### 1. Inference

  ```python
  import torch
- from transformers import AutoTokenizer, AutoModelForCausalLM
  from transformers import BitsAndBytesConfig

- device = torch.device("cuda")
- model_info = "BAAI/Aquila2-34B"
- tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
  quantization_config = BitsAndBytesConfig(
      load_in_4bit=True,
      bnb_4bit_use_double_quant=True,
      bnb_4bit_quant_type="nf4",
      bnb_4bit_compute_dtype=torch.bfloat16,
  )
- model = AutoModelForCausalLM.from_pretrained(model_info, trust_remote_code=True,
-                                              # quantization_config=quantization_config,  # Uncomment this line for 4-bit quantization
-                                              )
  model.eval()
  model.to(device)
- text = "请给出10个要到北京旅游的理由。"  # "Please give 10 reasons to travel to Beijing."
  tokens = tokenizer.encode_plus(text)['input_ids']
  tokens = torch.tensor(tokens)[None,].to(device)
- stop_tokens = ["###", "[UNK]", "</s>"]
  with torch.no_grad():
-     out = model.generate(tokens, do_sample=True, max_length=512, eos_token_id=100007, bad_words_ids=[tokenizer.encode(token) for token in stop_tokens])[0]
- out = tokenizer.decode(out.cpu().numpy().tolist())
- print(out)
  ```
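The `bad_words_ids` argument of `generate` takes a list of token-id sequences, one sequence per banned string. A minimal sketch of building that list, with a hypothetical `encode` table standing in for the real tokenizer:

```python
# Hypothetical id sequences; a real tokenizer would come from
# AutoTokenizer.from_pretrained(...) and encode() would return its actual ids.
vocab = {"###": [5, 5, 5], "[UNK]": [0], "</s>": [2]}
encode = lambda s: vocab[s]

def build_bad_words_ids(stop_tokens, encode):
    """One id sequence per banned string, as generate(bad_words_ids=...) expects."""
    return [encode(tok) for tok in stop_tokens]

print(build_bad_words_ids(["###", "[UNK]", "</s>"], encode))
```

Each inner list is matched as a whole sequence, so multi-token stop strings are suppressed only when all of their ids would appear in order.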
  ## License

- The Aquila2 series of open-source models is licensed under the [BAAI Aquila Model License Agreement](https://huggingface.co/BAAI/Aquila2-34B/blob/main/BAAI-Aquila-Model-License%20-Agreement.pdf)
 
  license: other
  ---

  ![Aquila_logo](./log.jpeg)

  <h4 align="center">
  <p>
  <b>English</b> |
+ <a href="https://huggingface.co/BAAI/Aquila2-7B/blob/main/README_zh.md">简体中文</a>
+ </p>
  </h4>

  We open-source our **Aquila2** series: the base language models **Aquila2-7B** and **Aquila2-34B**, the chat models **AquilaChat2-7B** and **AquilaChat2-34B**, and the long-text chat models **AquilaChat2-7B-16k** and **AquilaChat2-34B-16k**.

+ Additional details of the Aquila model will be presented in the official technical report. Please stay tuned for updates on official channels.

+ ## Updates 2024.6.6

+ We have updated the base language model **Aquila2-34B**, which has the following advantages compared to the previous model:

+ * Replaced the tokenizer with one that has a higher compression ratio:

+ | Tokenizer | Vocab size | Zh | En | Code | Math | Average |
+ |-----------|-----------|------|------|------|------|---------|
+ | Aquila2-original | 100k | **4.70** | 4.42 | 3.20 | 3.77 | 4.02 |
+ | Qwen1.5 | 151k | 4.27 | 4.51 | 3.62 | 3.35 | 3.94 |
+ | Llama3 | 128k | 3.45 | **4.61** | 3.77 | **3.88** | 3.93 |
+ | Aquila2-new | 143k | 4.60 | **4.61** | **3.78** | **3.88** | **4.22** |

+ * The maximum processing length supported by the model has increased from 2048 to 8192 tokens
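The per-domain numbers in the table above can be read as compression ratios; assuming the metric is input bytes per token (the README does not define it), a sketch of how such a ratio might be measured, with a hypothetical whitespace tokenizer standing in for the real ones:

```python
def compression_ratio(text, token_ids):
    """UTF-8 bytes of the input divided by token count: higher = better compression."""
    return len(text.encode("utf-8")) / len(token_ids)

# Hypothetical stand-in tokenizer; real measurements would run each tokenizer
# (e.g. loaded via AutoTokenizer.from_pretrained) over a large per-domain corpus.
def toy_tokenize(text):
    return [hash(word) % 100_000 for word in text.split()]

sample = "The maximum processing length supported by the model has increased"
print(round(compression_ratio(sample, toy_tokenize(sample)), 2))
```

Averaging this ratio over representative Chinese, English, code, and math corpora would yield one column of the table per tokenizer.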
 
33
 
 
34
 
 
 
 
 
 
35
 
36
+ ## Quick Start Aquila2-7B
37
 
38
  ### 1. Inference
39
+ Aquila2-7B is a base model that can be used for continuation.
40
 
  ```python
  import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
  from transformers import BitsAndBytesConfig

+ device = "cuda:0"
+
+ # Model name
+ model_name = 'BAAI/Aquila2-34B'
+
+ # Load model and tokenizer
  quantization_config = BitsAndBytesConfig(
      load_in_4bit=True,
      bnb_4bit_use_double_quant=True,
      bnb_4bit_quant_type="nf4",
      bnb_4bit_compute_dtype=torch.bfloat16,
  )
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, trust_remote_code=True,
+                                              # quantization_config=quantization_config,  # Uncomment this line for 4-bit quantization
+                                              )
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+
  model.eval()
+
  model.to(device)
+
+ # Example: continuation from a prompt
+ text = "The meaning of life is"
  tokens = tokenizer.encode_plus(text)['input_ids']
  tokens = torch.tensor(tokens)[None,].to(device)
+
  with torch.no_grad():
+     out = model.generate(tokens, do_sample=False, max_length=128, eos_token_id=tokenizer.eos_token_id)[0]
+ out = tokenizer.decode(out.cpu().numpy().tolist())
+ print(out)
  ```
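With `do_sample=False`, `generate` performs greedy decoding: each step appends the highest-scoring next token until `eos_token_id` or `max_length` is reached. A minimal sketch of that loop, with a toy `next_token_logits` function standing in for the model's forward pass:

```python
def greedy_decode(next_token_logits, prompt_ids, eos_id, max_length):
    """Append the argmax token at each step until EOS or the length cap."""
    ids = list(prompt_ids)
    while len(ids) < max_length:
        logits = next_token_logits(ids)
        next_id = max(range(len(logits)), key=logits.__getitem__)  # argmax
        ids.append(next_id)
        if next_id == eos_id:
            break
    return ids

# Toy stand-in: prefers token 2 until the sequence reaches 5 ids, then EOS (0).
def toy_logits(ids):
    return [1.0, 0.0, 0.5] if len(ids) >= 5 else [0.0, 0.1, 0.9]

print(greedy_decode(toy_logits, [1, 2], eos_id=0, max_length=10))
```

Setting `do_sample=True` instead draws each token from the softmax distribution, which is why the chat-oriented snippets elsewhere in this repo pair it with stop-token filtering.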
  ## License

+ The Aquila2 series of open-source models is licensed under the [BAAI Aquila Model License Agreement](https://huggingface.co/BAAI/Aquila2-7B/blob/main/BAAI-Aquila-Model-License%20-Agreement.pdf)