---
datasets:
- smjain/abap
language:
- en
---

This model is fine-tuned on a very small ABAP dataset, using NousResearch/Llama-2-7b-chat-hf as the base model.

## Sample code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "smjain/abap-nous-hermes"  # change to the path where your model is saved
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

prompt = "Write a sample ABAP report"  # change to your desired prompt
gen = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
result = gen(prompt)
print(result[0]["generated_text"])
```
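Because the base model is a Llama-2 *chat* variant, plain prompts often work better when wrapped in Llama-2's `[INST]`/`<<SYS>>` chat template. A minimal sketch of that wrapping, assuming the standard Llama-2 template (the helper name and the system message are illustrative, not part of this model card):

```python
def build_llama2_prompt(user_message: str,
                        system_message: str = "You are an ABAP programming assistant.") -> str:
    """Wrap a user message in the standard Llama-2 chat prompt template."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_message}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

# The formatted string can be passed to the pipeline in place of the raw prompt.
prompt = build_llama2_prompt("Write a sample ABAP report")
print(prompt)
```

Whether the fine-tuned weights respond better to the templated or the raw prompt is worth checking empirically, since the fine-tuning format is not documented here.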