doberst committed
Commit 7fc7eae · verified · 1 parent: 94aa11a

Update README.md

Files changed (1): README.md (+5 -3)
README.md CHANGED
@@ -23,7 +23,10 @@ Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://
 --Summarization Quality (1-5): 4 (Above Average)
 --Hallucinations: No hallucinations observed in test runs.
 
-For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo).
+For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo).
+
+Note: compare results with [bling-phi-2](https://www.huggingface.co/llmware/bling-phi-2-v0), and [dragon-mistral-7b](https://www.huggingface.com/llmware/dragon-mistral-7b-v0).
+
 
 ### Model Description
 
@@ -79,8 +82,7 @@ Load in your favorite GGUF inference engine, or try with llmware as follows:
 
 # to load the model and make a basic inference
 model = ModelCatalog().load_model("llmware/bling-phi-3-gguf", temperature=0.0, sample=False)
-response = model.function_call(text_sample)
-
+response = model.inference(query, add_context=text_sample)
 
 Details on the prompt wrapper and other configurations are on the config.json file in the files repository.
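
The second hunk corrects the example inference call: `model.function_call(text_sample)` is replaced with `model.inference(query, add_context=text_sample)`, which passes the question and the grounding passage separately. A minimal sketch of the corrected usage, assuming the `llmware` package is installed (the `ask` helper and the `query`/`text_sample` parameter names here are illustrative, not part of the commit):

```python
from typing import Optional

# Guard the import so the sketch degrades gracefully when llmware
# (and its model downloads) are unavailable in the environment.
try:
    from llmware.models import ModelCatalog
except ImportError:
    ModelCatalog = None

def ask(query: str, text_sample: str) -> Optional[str]:
    """Run a basic grounded inference with bling-phi-3-gguf, per the
    updated README snippet. Returns None if llmware is not installed."""
    if ModelCatalog is None:
        return None
    # deterministic decoding, as in the README example
    model = ModelCatalog().load_model("llmware/bling-phi-3-gguf",
                                      temperature=0.0, sample=False)
    # the commit's fix: inference(query, add_context=...) rather than
    # the earlier function_call(text_sample)
    response = model.inference(query, add_context=text_sample)
    # llmware returns a dict; the generated text is under "llm_response"
    return response["llm_response"]
```

Note that the model weights are fetched on the first `load_model` call, so the initial invocation requires network access.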