doberst committed
Commit 0447273 · verified · 1 Parent(s): 75b062c

Update README.md

Files changed (1): README.md (+9 -9)
README.md CHANGED

@@ -18,16 +18,16 @@ Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://
  1 Test Run with sample=False & temperature=0.0 (deterministic output) - 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations.

  --**Accuracy Score**: **98.0** correct out of 100
- --Not Found Classification: 85.0%
- --Boolean: 100.0%
- --Math/Logic: 92.5%
+ --Not Found Classification: 90.0%
+ --Boolean: 97.5%
+ --Math/Logic: 95%
  --Complex Questions (1-5): 5 (Best in Class)
- --Summarization Quality (1-5): 3 (Average)
+ --Summarization Quality (1-5): 4 (Above Average)
  --Hallucinations: No hallucinations observed in test runs.

  For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo).

- Please note that these test results were achieved using the 4_K_M quantized version of this model - [dragon-qwen-7b-gguf](https://www.huggingface.co/llmware/dragon-qwen-7b-gguf).
+ Please note that these test results were achieved using the 4_K_M quantized version of this model - [dragon-yi-9b-gguf](https://www.huggingface.co/llmware/dragon-yi-9b-gguf).

  Note: compare results with [dragon-mistral-0.3-gguf](https://www.huggingface.co/llmware/dragon-mistral-0.3-gguf).

@@ -37,10 +37,10 @@ Note: compare results with [dragon-mistral-0.3-gguf](https://www.huggingface.co/
  <!-- Provide a longer summary of what this model is. -->

  - **Developed by:** llmware
- - **Model type:** Qwen
+ - **Model type:** Yi-9B
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
- - **Finetuned from model:** Qwen2-7b-base
+ - **Finetuned from model:** Yi-9b-base


  ### Direct Use

@@ -66,8 +66,8 @@ Any model can provide inaccurate or incomplete information, and should be used i
  The fastest way to get started with dRAGon is through direct import in transformers:

      from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained("dragon-qwen-7b")
- model = AutoModelForCausalLM.from_pretrained("dragon-qwen-7b")
+ tokenizer = AutoTokenizer.from_pretrained("dragon-yi-1.5v-9b")
+ model = AutoModelForCausalLM.from_pretrained("dragon-yi-1.5v-9b")

  Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents.
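
For readers evaluating this commit, the two lines changed in the quickstart are the card's entire transformers example. A slightly fuller, minimal sketch of the same direct import is below. It assumes the model is published as llmware/dragon-yi-1.5v-9b (the repo id is inferred from the snippet in the diff, not confirmed here) and assumes the "<human>: ... <bot>:" prompt wrapper that llmware's dragon model cards typically use; adjust both if the card says otherwise.

    # Minimal sketch, not the card's official example.
    # Assumptions: repo id "llmware/dragon-yi-1.5v-9b" (inferred from the diff above)
    # and the "<human>: ... <bot>:" prompt wrapper used across llmware dragon cards.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "llmware/dragon-yi-1.5v-9b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.bfloat16,   # assumption: bf16 so a ~9B model fits on a single GPU
        device_map="auto",            # requires accelerate; remove to load on CPU
    )

    # RAG-style prompt: a retrieved passage plus a question about that passage.
    context = "Invoice #4562. Total amount due: $1,250. Payment terms: net 30 days."
    question = "What is the total amount due on the invoice?"
    prompt = f"<human>: {context}\n{question}\n<bot>:"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=False,              # mirrors the deterministic test setup (sample=False, temperature=0.0)
    )
    answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(answer)

Note that the 4_K_M quantized variant referenced in the diff (dragon-yi-9b-gguf) is a GGUF file and would be loaded through GGUF tooling (e.g., llama.cpp-based runtimes) rather than this transformers path.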