malhajar committed 413820f (1 parent: fa2e222)

Create README.md

Files changed (1): README.md +55 −0
---
datasets:
- atasoglu/databricks-dolly-15k-tr
language:
- tr
---
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
malhajar/Llama-2-13b-chat-dolly-tr is a fine-tuned version of Llama-2-13b-hf trained with SFT.
The model can answer questions in Turkish, as it was fine-tuned specifically on the Turkish dataset [`databricks-dolly-15k-tr`](https://huggingface.co/datasets/atasoglu/databricks-dolly-15k-tr).

![llama](./llama.png)
### Model Description

- **Developed by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** [`meta-llama/Llama-2-13b-hf`](https://huggingface.co/meta-llama/Llama-2-13b-hf)
### Prompt Template

```
<s>[INST] <prompt> [/INST]
```
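As a sketch, a user question can be wrapped in this template with a small helper (the function name here is ours, not part of the model's API):

```python
def build_prompt(user_prompt: str) -> str:
    # Wrap the raw user text in the Llama-2 instruction template shown above.
    return f"<s>[INST] {user_prompt} [/INST]"

print(build_prompt("Türkiye'nin başkenti neresidir?"))
# <s>[INST] Türkiye'nin başkenti neresidir? [/INST]
```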
## How to Get Started with the Model

Use the following code sample to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/Llama-2-13b-chat-dolly-tr"
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "what is the will to truth?"
# Generate a response
prompt = f'''
### Instruction:
{question}

### Response:'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(inputs=input_ids,
                        max_new_tokens=512,
                        pad_token_id=tokenizer.eos_token_id,
                        top_k=50,
                        do_sample=True,
                        repetition_penalty=1.3,
                        top_p=0.95)
response = tokenizer.decode(output[0])

print(response)
```
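Note that `tokenizer.decode(output[0])` returns the prompt followed by the generated text. If only the answer is wanted, one simple option (a plain string-handling sketch, independent of the model itself) is to split on the `### Response:` marker used in the prompt:

```python
def extract_response(decoded: str) -> str:
    # Keep only the text after the last "### Response:" marker and
    # drop the trailing end-of-sequence token, if present.
    answer = decoded.rsplit("### Response:", 1)[-1]
    return answer.replace("</s>", "").strip()

print(extract_response("### Instruction:\nSoru\n\n### Response: Ankara</s>"))
# Ankara
```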