---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- 4bit
- AWQ
- AutoAWQ
- 7b
- quantized
- Mistral
- Mistral-7B
---


# Model Card for alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ

<!-- Provide a quick summary of what the model is/does. -->

This repo contains a 4-bit quantized version (using AutoAWQ) of Mistral AI's Mistral-7B-Instruct-v0.2.

AWQ (Activation-aware Weight Quantization for LLM Compression and Acceleration) was developed by MIT-HAN-Lab.


## Model Details

- Model creator: [Mistral AI](https://huggingface.co/mistralai)
- Original model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)


### About 4-bit quantization using AutoAWQ

- AutoAWQ GitHub repo: [casper-hansen/AutoAWQ](https://github.com/casper-hansen/AutoAWQ/tree/main)
- MIT-HAN-Lab llm-awq GitHub repo: [mit-han-lab/llm-awq](https://github.com/mit-han-lab/llm-awq/tree/main)

```bibtex
@inproceedings{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Chen, Wei-Ming and Wang, Wei-Chen and Xiao, Guangxuan and Dang, Xingyu and Gan, Chuang and Han, Song},
  booktitle={MLSys},
  year={2024}
}
```
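
For reference, this is roughly how such a checkpoint can be produced with AutoAWQ (a minimal sketch; the `quant_config` values below are AutoAWQ's commonly used defaults, not necessarily the exact settings used for this repo):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-Instruct-v0.2"
quant_path = "Mistral-7B-Instruct-v0.2-4bit-AWQ"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 base model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run activation-aware calibration and quantize the weights to 4 bits
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized checkpoint and tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```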


# How to Get Started with the Model

Use the code below to get started with the model.

## How to run from Python code

#### First, install the required packages
```shell
pip install autoawq
pip install accelerate
```

#### Import 

```python
# Only the tokenizer and the AWQ model class are needed for inference
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM
```

#### Load the model and run inference

```python
# Define the model ID
model_id = "alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ"

# Load the tokenizer and the quantized model
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True, trust_remote_code=False, safetensors=True)

# Set up the prompt and prompt template. Change the instruction as needed.
# Mistral-7B-Instruct-v0.2 has no separate system role, so the persona is folded
# into the user turn; the tokenizer adds the leading <s> token automatically.
prompt = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
formatted_prompt = f"[INST] You are a helpful, fun-loving assistant. Always answer as jestfully as possible. {prompt} [/INST]"

tokens = tokenizer(formatted_prompt, return_tensors="pt").input_ids.cuda()

# Generate output; adjust the sampling parameters as needed
generation_output = model.generate(tokens, do_sample=True, temperature=1.7, top_p=0.95, top_k=40, max_new_tokens=512)

# Print the output
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```
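
The tokenizer for Mistral-7B-Instruct-v0.2 ships a chat template, so the prompt can also be built with `apply_chat_template` instead of hand-formatting the `[INST]` tags (a minimal sketch, assuming this repo keeps the upstream template, which accepts only alternating user/assistant roles):

```python
# Build the prompt via the tokenizer's chat template (user/assistant roles only)
messages = [{"role": "user", "content": prompt}]
tokens = tokenizer.apply_chat_template(messages, return_tensors="pt").cuda()

generation_output = model.generate(tokens, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```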

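Alternatively, recent transformers releases (4.35+) can load AWQ checkpoints directly through `AutoModelForCausalLM`, without the `awq` model class, as long as `autoawq` is installed (a sketch under that assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers reads the AWQ quantization config stored in the checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
```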

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]



## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]