amirMohammadi committed
Commit e1f3912 (1 parent: ce35632)

Update README.md

Files changed (1)
  1. README.md +12 -15
README.md CHANGED
@@ -4,21 +4,23 @@ language:
  - en
  - fa
  pipeline_tag: text-generation
- tags:
- - PartAI
  ---

  # Model Details

- The Dorna models are a family of decoder-only models, specifically trained/fine-tuned on Persian data, developed by [Part AI](https://partdp.ai/). As an initial release, an 8B instruct model from this family is being made available.
- Dorna-Llama3-8B-Instruct is built using the [Meta Llama 3 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model.
+ This repository is a 4-bit quantized version of the [Dorna-Llama3-8B-Instruct](https://huggingface.co/PartAI/Dorna-Llama3-8B-Instruct) model for efficient memory usage. Dorna is a decoder-only model, specifically trained/fine-tuned on Persian data. [Flash Attention 2](https://arxiv.org/abs/2307.08691) is also integrated for faster inference.
+
+ ## Benefits
+
+ - **Reduced Memory Usage**: 4-bit quantization lowers memory requirements.
+ - **Faster Inference**: Flash Attention 2 speeds up processing.
+ - **Easy Deployment**: No need for additional libraries such as llama.cpp or Candle.
+ - **Ready to Use**: Compatible with LangChain, Haystack, LlamaIndex, and more.
+ - **Google Colab Friendly**: Runs on the Google Colab free tier with a T4 GPU (less than 15 GB of GPU RAM).

- ## How to use
-
- To test and use model freely on Hugging Face Spaces click [here](https://huggingface.co/spaces/PartAI/Dorna-Llama3-8B-Instruct)!
-
- You can also run conversational inference using the Transformers Auto classes with the `generate()` function. Let's look at an example.
+ ## How to use
+
+ You can run conversational inference using the Transformers Auto classes with the `generate()` function. Let's look at an example.

  ```Python
  import torch
@@ -36,7 +38,7 @@ model = AutoModelForCausalLM.from_pretrained(
  messages = [
  {"role": "system",
  "content": "You are a helpful Persian assistant. Please answer questions in the asked language."},
- {"role": "user", "content": "کاغذ A4 بزرگ تر است یا A5؟"},
+ {"role": "user", "content": "اصفهان بزرگ تر است یا قم؟"},
  ]

  input_ids = tokenizer.apply_chat_template(
@@ -62,12 +64,7 @@ response = outputs[0][input_ids.shape[-1]:]
  print(tokenizer.decode(response, skip_special_tokens=True))
  ```

- You can also use the notebook below to test the model in Google Colab.
-
- <a href="https://colab.research.google.com/drive/1TmeZsN4Byi1EgAEQeOt27sPrZOWn5gBH?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Colab Code" width="87" height="15"/></a>
-
- ## Evaluation
+ ## Evaluation of the Non-Quantized Version

  This model is evaluated on questions across various tasks, including Boolean Questions, Code Generation, Long Response, Math, News QA, Paraphrasing, General Knowledge, and Summarization. Most categories typically have two main difficulty levels: Hard and Easy.
@@ -193,4 +190,4 @@ Automatic evaluation results are as follows:

  ## Contact us

- If you have any questions regarding this model, you can reach us via the [community](https://huggingface.co/PartAI/Dorna-Llama3-8B-Instruct/discussions) on Hugging Face.
+ If you have any questions regarding this model, you can reach us via the [community](https://huggingface.co/amirMohammadi/Dorna-Llama3-8B-Instruct-Quantized4Bit/discussions) on Hugging Face.
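The hunks above elide the model-loading call itself (`AutoModelForCausalLM.from_pretrained(` appears only as hunk context). A minimal sketch of how the quantized checkpoint described in the new intro might be loaded, assuming the repo id `amirMohammadi/Dorna-Llama3-8B-Instruct-Quantized4Bit` (taken from the updated discussions link), that the checkpoint ships its `bitsandbytes` quantization config, and that the `flash-attn` package is installed on a supported GPU:

```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the updated "Contact us" link; assumed to be this model card's repo.
model_id = "amirMohammadi/Dorna-Llama3-8B-Instruct-Quantized4Bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,                # FP16 compute alongside the 4-bit weights
    device_map="auto",                        # place layers on the available GPU automatically
    attn_implementation="flash_attention_2",  # needs flash-attn and a GPU that supports it
)

# Rough sanity check of the "less than 15 GB of GPU RAM" claim in the Benefits list.
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```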
 
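The usage example survives only as fragments of hunk context. Below is a sketch of the full conversational flow consistent with those fragments; the stop-token handling and generation settings are assumptions in the style of Llama 3 instruct model cards, not lines taken from this README. The new example prompt asks, in Persian, "Is Isfahan bigger, or A Qom?"; the removed one asked "Is A4 paper bigger, or A5?".

```Python
messages = [
    {"role": "system",
     "content": "You are a helpful Persian assistant. Please answer questions in the asked language."},
    {"role": "user", "content": "اصفهان بزرگ تر است یا قم؟"},  # "Is Isfahan bigger, or Qom?"
]

# Render the chat with the model's template and append the assistant header
# so the model starts its reply.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Llama 3 instruct models typically stop on either <|eot_id|> or the regular EOS token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Drop the prompt tokens and decode only the newly generated reply.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```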
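The Benefits list also claims compatibility with LangChain, Haystack, and LlamaIndex. As one illustration of that claim, a sketch of wiring the already-loaded `model` and `tokenizer` into LangChain, assuming the `langchain-huggingface` integration package is installed (this wiring is not part of the README itself):

```Python
from transformers import pipeline
from langchain_huggingface import HuggingFacePipeline

# Wrap the quantized model in a standard text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
    return_full_text=False,  # return only the generated continuation
)

# Expose the pipeline to LangChain as a Runnable LLM.
llm = HuggingFacePipeline(pipeline=pipe)
print(llm.invoke("پایتخت ایران کجاست؟"))  # "Where is the capital of Iran?"
```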