---
license: other
license_link: https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi 6B
model_type: yi
---

# Model Card for dragon-yi-6b-0.1

<!-- Provide a quick summary of what the model is/does. -->

dragon-yi-6b-0.1 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Yi-6B base model.

DRAGON models are fine-tuned with high-quality custom instruct datasets, designed for production quality use in RAG scenarios.


### Benchmark Tests  

Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)  
Average of 2 test runs, scored with 1 point for a correct answer, 0.5 points for a partially correct or blank / "not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.  

--**Accuracy Score**:  **99.5** correct out of 100  
--Not Found Classification:  90.0%  
--Boolean:  87.5%  
--Math/Logic:  77.5%  
--Complex Questions (1-5):  4 (Low-Medium)  
--Summarization Quality (1-5):  4 (Coherent, extractive)  
--Hallucinations:  No hallucinations observed in test runs.  

For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
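
As a rough illustration of the scoring rule above, here is a short sketch (the label names are hypothetical, and this is not the benchmark's actual code):

    # illustrative sketch of the scoring rule; label names are hypothetical
    weights = {"correct": 1.0, "partial_or_nf": 0.5, "incorrect": 0.0, "hallucination": -1.0}

    def score(labels):
        # sum the per-question points over a 100-question test run
        return sum(weights[label] for label in labels)

    # e.g. 99 correct answers and 1 partially correct answer -> 99.5
    print(score(["correct"] * 99 + ["partial_or_nf"]))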

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** llmware
- **Model type:** Yi
- **Language(s) (NLP):** English
- **License:** Yi License [Link](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE)
- **Finetuned from model:** Yi-6B

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The intended use of DRAGON models is three-fold:

1.  Provide high-quality RAG-Instruct models designed for fact-based, no "hallucination" question-answering in connection with an enterprise RAG workflow.

2.  DRAGON models are fine-tuned on top of leading base foundation models, generally in the 6-7B+ range, and purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.

3.  DRAGON models were trained on the same principles as the BLING models, so generally, it should be easy to "upgrade" from a BLING model in testing to a DRAGON model in production.


### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and legal and regulatory industries.  

DRAGON models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage: provide a text passage as context, ask questions, and get clear, fact-based responses.

This model is licensed according to the terms of the license of the base model, Yi-6B, at this [link](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE).


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.


## How to Get Started with the Model

The fastest way to get started with the model is through direct import in transformers:

    from transformers import AutoTokenizer, AutoModelForCausalLM  
    tokenizer = AutoTokenizer.from_pretrained("dragon-yi-6b-0.1")  
    model = AutoModelForCausalLM.from_pretrained("dragon-yi-6b-0.1")  
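
If a GPU is available, one common pattern (an assumption here, not part of the original instructions) is to select the device up front; the generation example below refers to this `device` variable:

    import torch

    # run on the GPU if available, otherwise fall back to CPU
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)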

Please refer to the generation_test*.py files in the Files repository, which include 200 samples and a script to test the model.  The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.  
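
For reference, a minimal sketch of that path, assuming the llmware library's Prompt interface and that this model is registered in the llmware model catalog under this name (both assumptions):

    # minimal sketch; model name and catalog registration are assumptions
    from llmware.prompts import Prompt

    prompter = Prompt().load_model("dragon-yi-6b-0.1")

    # closed-context question-answering: pass the text passage as context
    response = prompter.prompt_main("What is the length of the lease term?",
                                    context="The lease term is 36 months.")
    print(response["llm_response"])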

The DRAGON model was fine-tuned with a simple "\<human>: and \<bot>:" wrapper, so to get the best results, wrap inference entries as:

    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

The model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

1.  Text Passage Context, and
2.  Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
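
For example, with a hypothetical passage and question (illustrative values only):

    # hypothetical passage and question, for illustration only
    text_passage = "The lease term is 36 months, beginning on January 1, 2015."
    question = "What is the length of the lease term?"

    my_prompt = text_passage + "\n" + question
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"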


If you are using a HuggingFace generation script:

    # prepare prompt packaging used in the fine-tuning process
    # 'entries' is assumed to be a dict with "context" and "query" keys from the test set
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])

    #   temperature: set at 0.3 for consistency of output
    #   max_new_tokens: set at 100 - may prematurely stop a few of the summaries

    outputs = model.generate(
            inputs.input_ids.to(device),
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            do_sample=True,
            temperature=0.3,
            max_new_tokens=100,
            )

    # decode only the newly generated tokens, skipping the prompt
    output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
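
Note that do_sample=True with a low temperature (0.3) allows slight output variation across runs; for fully deterministic answers in a RAG pipeline, setting do_sample=False (greedy decoding) is a common alternative.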


## Model Card Contact

Darren Oberst & llmware team