malhajar committed d10f73b (1 parent: d7add02)

Create README.md

Files changed (1): README.md (+54 −0)
---
datasets:
- yahma/alpaca-cleaned
language:
- en
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
malhajar/phi-2-chat is a fine-tuned version of [`phi-2`](https://huggingface.co/microsoft/phi-2), trained with SFT (supervised fine-tuning).
The model can answer questions in a chat format, as it was fine-tuned specifically on the instruction dataset [`alpaca-cleaned`](https://huggingface.co/datasets/yahma/alpaca-cleaned).
13
+
14
+ ### Model Description
15
+
16
+ - **Developed by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
17
+ - **Language(s) (NLP):** Turkish
18
+ - **Finetuned from model:** [`microsoft/phi-2`](https://huggingface.co/microsoft/phi-2)
19
+

### Prompt Template
```
### Instruction:

<prompt> (without the <>)

### Response:
```
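
To keep prompts consistent with this template, it can help to build them with a small helper function. The sketch below is illustrative only; `build_prompt` is a hypothetical helper, not part of the model repository:

```python
def build_prompt(instruction: str) -> str:
    # Hypothetical helper: fills the template above with the user's
    # instruction; the model is expected to complete the text after
    # the "### Response:" marker.
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

print(build_prompt("What is the capital of France?"))
```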

## How to Get Started with the Model

Use the code below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/phi-2-chat"
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             trust_remote_code=True,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Türkiyenin en büyük şehir nedir?"
# Build the prompt and generate a response
prompt = f'''
### Instruction: {question} ### Response:
'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(inputs=input_ids,
                        max_new_tokens=512,
                        pad_token_id=tokenizer.eos_token_id,
                        top_k=50,
                        top_p=0.95,
                        do_sample=True,
                        repetition_penalty=1.3)
response = tokenizer.decode(output[0])

print(response)
```
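
Since `tokenizer.decode` returns the full sequence (prompt included), the answer can be separated by splitting on the template's response marker. This is a sketch with a hypothetical helper, `extract_response`, not something shipped with the model:

```python
def extract_response(decoded: str) -> str:
    # Hypothetical helper: keep only the text after the last
    # "### Response:" marker produced by the prompt template.
    return decoded.split("### Response:")[-1].strip()

print(extract_response("### Instruction: Türkiyenin en büyük şehir nedir? ### Response: İstanbul"))
```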