doberst committed
Commit fc93904
1 Parent(s): a990c5b

Update README.md

Files changed (1): README.md +9 -9
README.md CHANGED
@@ -6,7 +6,7 @@ license: apache-2.0
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-dragon-llama-7b-0.1 is part of the dRAGon ("Delivering RAG On Private Cloud") model series, RAG-instruct trained on top of a Llama-2 base model.
+dragon-yi-6b-0.1 is part of the dRAGon ("Delivering RAG On Private Cloud") model series, RAG-instruct trained on top of a Yi-6B base model.
 
 DRAGON models are fine-tuned with high-quality custom instruct datasets, designed for production quality use in RAG scenarios.
 
@@ -16,10 +16,10 @@ DRAGON models are fine-tuned with high-quality custom instruct datasets, designe
 Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
 Average of 2 test runs, with 1 point for a correct answer, 0.5 points for a partially correct or blank / NF answer, 0.0 points for an incorrect answer, and -1 point for each hallucination.
 
---**Accuracy Score**: **99.0** correct out of 100
---Not Found Classification: 95.0%
---Boolean: 82.5%
---Math/Logic: 70.0%
+--**Accuracy Score**: **99.5** correct out of 100
+--Not Found Classification: 90.0%
+--Boolean: 87.5%
+--Math/Logic: 77.5%
 --Complex Questions (1-5): 4 (Low-Medium)
 --Summarization Quality (1-5): 4 (Coherent, extractive)
 --Hallucinations: No hallucinations observed in test runs.
@@ -31,10 +31,10 @@ For test run results (and good indicator of target use cases), please see the fi
 <!-- Provide a longer summary of what this model is. -->
 
 - **Developed by:** llmware
-- **Model type:** Llama-2
+- **Model type:** Yi
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
-- **Finetuned from model:** Llama-2-7B-Base
+- **Finetuned from model:** Yi-6B
 
 ## Uses
 
@@ -72,8 +72,8 @@ Any model can provide inaccurate or incomplete information, and should be used i
 The fastest way to get started with BLING is through direct import in transformers:
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("dragon-llama-7b-0.1")
-model = AutoModelForCausalLM.from_pretrained("dragon-llama-7b-0.1")
+tokenizer = AutoTokenizer.from_pretrained("dragon-yi-6b-0.1")
+model = AutoModelForCausalLM.from_pretrained("dragon-yi-6b-0.1")
 
 
 The BLING model was fine-tuned with a simple "\<human> and \<bot>" wrapper, so to get the best results, wrap inference entries as:
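The wrapper template itself is cut off at the end of this diff view. As a minimal sketch of the prompt-wrapping step (the exact `<human>:` / `<bot>:` line layout below is an assumption, not the template from the README):

```python
# Sketch of the "<human> and <bot>" wrapper described above. The exact
# template is not shown in this diff, so this format is an assumption.
def wrap_prompt(context: str, question: str) -> str:
    # Put the source passage first, frame the question as a <human> turn,
    # and leave an open <bot> turn for the model to complete.
    return f"<human>: {context}\n{question}\n<bot>:"

prompt = wrap_prompt("The invoice total is $1,000.", "What is the invoice total?")
print(prompt)
```

The resulting string would then be tokenized and passed to the model loaded in the transformers snippet above.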
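The benchmark scoring rubric quoted earlier in the diff (1 point for a correct answer, 0.5 for partial or blank/NF, 0.0 for incorrect, -1 per hallucination, averaged over two test runs) can be sketched as follows; the label names are hypothetical and not the tester's actual output format:

```python
# Illustrative sketch of the benchmark scoring rubric described above.
# Label names are hypothetical; the point values come from the README.
POINTS = {
    "correct": 1.0,         # fully correct answer
    "partial": 0.5,         # partially correct
    "not_found": 0.5,       # blank / "not found" (NF) response
    "incorrect": 0.0,       # wrong answer
    "hallucination": -1.0,  # penalized fabrication
}

def run_score(labels):
    """Total points for one 100-question test run."""
    return sum(POINTS[label] for label in labels)

def accuracy_score(run1, run2):
    """Average of the two test runs, as reported on the model card."""
    return (run_score(run1) + run_score(run2)) / 2

# e.g. one perfect run plus one run with a single miss averages to 99.5
print(accuracy_score(["correct"] * 100, ["correct"] * 99 + ["incorrect"]))  # → 99.5
```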