RebeccaQian1 committed
Commit 97997be
1 Parent(s): 0747f1a

Update README.md

Files changed (1):
  1. README.md +17 -16

README.md CHANGED
@@ -68,32 +68,33 @@ The model will output the score as 'PASS' if the answer is faithful to the document
 To run inference, you can use HF pipeline:
 
 ```
-pipe = pipeline(
-"text-generation",
-model="PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct",
-max_new_tokens=600,
-device="cuda",
-return_full_text=False
-)
+import transformers
+
+model_id = "PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct"
+
+pipeline = transformers.pipeline(
+    "text-generation",
+    model=model_id,
+    max_new_tokens=600,
+    device="cuda",
+    return_full_text=False
+)
 
 messages = [
     {"role": "user", "content": prompt},
 ]
 
-result = pipe(messages)
-print(result[0]['generated_text'])
+outputs = pipeline(
+    messages,
+    temperature=0
+)
+
+print(outputs[0]["generated_text"])
 ```
 
 Since the model is trained in chat format, ensure that you pass the prompt as a user message.
 
-## Training Details
-
-The model was finetuned for 3 epochs using H100s on dataset of size 2400. We use [lion](https://github.com/lucidrains/lion-pytorch) optimizer with lr=5.0e-7. For more details on data generation, please check out our Github repo.
-
-### Training Data
-
-We train on 2400 samples consisting of CovidQA, PubmedQA, DROP and RAGTruth samples. For datasets that do not contain hallucinated samples, we generate perturbations to introduce hallucinations in the data. For more details about the data generation process, refer to the paper.
+For more information on training details, refer to our [ArXiv paper](https://arxiv.org/abs/2407.08488).
 
 ## Evaluation
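The updated snippet prints the raw generated text, but the hunk header notes the model scores answers as 'PASS' when faithful to the document, so callers typically need to extract that verdict. A minimal parsing sketch, assuming the model emits a JSON object containing REASONING and SCORE fields (`parse_lynx_output` and the sample string are hypothetical, not part of this repo):

```python
import json
import re


def parse_lynx_output(generated_text: str) -> dict:
    """Extract the first JSON object from the model's generated text."""
    match = re.search(r"\{.*\}", generated_text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))


# Hypothetical example of what the model might return:
sample = '{"REASONING": ["The answer is supported by the document."], "SCORE": "PASS"}'
verdict = parse_lynx_output(sample)
print(verdict["SCORE"])  # PASS
```

If the model wraps its answer in extra prose, the regex still finds the embedded JSON block; stricter callers may prefer to fail loudly on malformed output instead.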