asif00 committed
Commit 12cacb2
1 Parent(s): f26dbb6

Update README.md

Files changed (1): README.md (+78, -5)
README.md CHANGED
@@ -7,14 +7,87 @@ tags:
  - transformers
  - llama
  - trl
- inference: False
+ inference: false
  base_model: unsloth/llama-3-8b-bnb-4bit
  library_name: transformers
  pipeline_tag: question-answering
+ datasets:
+ - iamshnoo/alpaca-cleaned-bengali
  ---
 
- # Uploaded model
-
- - **Developed by:** asif00
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
+ Bangla LLaMA is a specialized model for context-based question answering and Bengali retrieval-augmented generation (RAG). It is derived from Llama 3 8B and trained on the iamshnoo/alpaca-cleaned-bengali dataset. The model is designed to provide accurate responses in Bengali grounded in the supplied context, and it integrates with the transformers library, making it easy to use for question answering and RAG in your projects.
+
+ # Model Details:
+
+ - Model Family: Llama 3 8B
+ - Language: Bengali
+ - Use Case: Context-Based Question Answering, Bengali Retrieval-Augmented Generation
+ - Dataset: iamshnoo/alpaca-cleaned-bengali (51,760 samples)
+ - Training Loss: 0.4038
+ - Global Steps: 647
+ - Batch Size: 80
+ - Epochs: 1
+
+
+ # How to Use:
+
+ You can use the model through the high-level `pipeline` helper or load it directly. Here's how:
+
+ ```python
+ # Use a pipeline as a high-level helper
+ from transformers import pipeline
+ pipe = pipeline("question-answering", model="asif00/bangla-llama-4bit")
+ ```
+
+ ```python
+ # Load model directly
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-4bit")
+ model = AutoModelForCausalLM.from_pretrained("asif00/bangla-llama-4bit")
+ ```
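+
+ Because the repository stores a 4-bit bnb checkpoint, you can also pass an explicit quantization config when loading. This is a minimal sketch, assuming a CUDA GPU and the bitsandbytes and accelerate packages are available:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ # Assumption: bitsandbytes is installed and a CUDA device is present
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_compute_dtype=torch.float16,  # weights stay 4-bit; matmuls run in fp16
+ )
+ tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-4bit")
+ model = AutoModelForCausalLM.from_pretrained(
+     "asif00/bangla-llama-4bit",
+     quantization_config=bnb_config,
+     device_map="auto",  # place layers on the available GPU(s)
+ )
+ ```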
+
+ # General Prompt Structure:
+
+ ```python
+ prompt = """Below is an instruction in Bengali language that describes a task, paired with an input also in Bengali language that provides further context. Write a response in Bengali language that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}
+ """
+ ```
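+
+ The first two placeholders take the Bengali instruction (or question) and the supporting context; the third is left empty so the model can complete the response. A quick illustration, with hypothetical placeholder strings:
+
+ ```python
+ # Fill the template; the empty third slot is where the model writes its answer
+ instruction = "প্রশ্নের উত্তর দিন।"  # "Answer the question." (hypothetical)
+ context = "এখানে প্রাসঙ্গিক বাংলা অনুচ্ছেদ।"  # "The relevant Bengali passage goes here." (hypothetical)
+ print(prompt.format(instruction, context, ""))
+ ```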
+
+ # To get a cleaned-up version of the response, you can use the `generate_response` function:
+
+ ```python
+ def generate_response(question, context):
+     # Fill the prompt template, leaving the response slot empty
+     inputs = tokenizer([prompt.format(question, context, "")], return_tensors="pt").to("cuda")
+     outputs = model.generate(**inputs, max_new_tokens=1024, use_cache=True)
+     responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
+     # Keep only the text after the "### Response:" marker
+     response_start = responses.find("### Response:") + len("### Response:")
+     response = responses[response_start:].strip()
+     return response
+ ```
+
+ # Example Usage:
+
+ ```python
+ # Question: "When did the Indian Bengali fiction writer Mahasweta Devi die?"
+ question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
+ # Context: her hospitalization in Kolkata on 23 July 2016 and death on 28 July 2016
+ context = "২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে। তিনি মধুমেহ, সেপ্টিসেমিয়া ও মূত্র সংক্রমণ রোগেও ভুগছিলেন।"
+ answer = generate_response(question, context)
+ print(answer)
+ ```
+
+
+ # Disclaimer:
+
+ The Bangla LLaMA-4bit model was trained on a limited dataset, and its responses may not always be perfect or accurate. Its performance depends on the quality and quantity of the training data; with more resources, such as higher-quality data and longer training, it could improve significantly.
+
+
+ # Resources:
+ Work in progress...