---
license: other
---


![Aquila_logo](./log.jpeg)


<h4 align="center">
    <p>
        <b>English</b> |
        <a href="https://huggingface.co/BAAI/AquilaChat2-34B-16K/blob/main/README_zh.md">简体中文</a> 
    </p>
</h4>


We open-source our **Aquila2** series, which includes the base language models **Aquila2-7B** and **Aquila2-34B**, the chat models **AquilaChat2-7B** and **AquilaChat2-34B**, and the long-text chat models **AquilaChat2-7B-16K** and **AquilaChat2-34B-16K**.

Additional details of the Aquila2 models will be presented in the official technical report. Please stay tuned for updates on official channels.


2023.10.25 🔥 Version 1.2 of AquilaChat2-34B-16K has been released on ModelHub and Hugging Face.

AquilaChat2-34B-16K-V1.2 significantly improves long-text synthesis capabilities over V1, approaching the level of GPT-3.5-16K. V1.2 also incorporates more conventional instruction fine-tuning corpora, enhancing its performance in non-long-text scenarios compared to V1.

## Quick Start: AquilaChat2-34B-16K (Chat Model)

### 1. Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

device = torch.device("cuda:0")
model_info = "BAAI/AquilaChat2-34B-16k"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)

# Optional 4-bit quantization config (requires the bitsandbytes package).
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_info,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    # quantization_config=quantization_config,  # Uncomment this line for 4-bit quantization
)
model.eval()
model.to(device)
text = "请给出10个要到北京旅游的理由。"  # "Please give 10 reasons to visit Beijing."
from predict import predict  # predict.py is shipped in this model repository
out = predict(model, text, tokenizer=tokenizer, max_gen_len=200, top_p=0.95,
              seed=1234, topk=100, temperature=0.9, sft=True, device=device,
              model_name="AquilaChat2-34B-16K")
```
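
If the repository's `predict.py` helper is not available, a minimal fallback sketch using the standard `transformers` generate API is shown below. Note this is an assumption on our part: `predict` applies AquilaChat2's own prompt template and sampling logic, which a plain `generate` call does not reproduce, so outputs may differ.

```python
# Minimal sketch: plain generation without the repo's predict.py helper.
# Assumes `model`, `tokenizer`, `text`, and `device` are set up as above.
inputs = tokenizer(text, return_tensors="pt").to(device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,
        top_p=0.95,
        top_k=100,
        temperature=0.9,
    )
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```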


## License

The Aquila2 series open-source models are licensed under the [BAAI Aquila Model License Agreement](https://huggingface.co/BAAI/AquilaChat2-34B-16K/blob/main/BAAI-Aquila-Model-License%20-Agreement.pdf).