---
license: apache-2.0
inference: false
---
# BLING-QWEN-1.5B
<!-- Provide a quick summary of what the model is/does. -->
bling-qwen-1.5b is part of the BLING model series, RAG-instruct trained on top of a Qwen2 1.5b base model.
BLING models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.
### Benchmark Tests
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
1 Test Run with sample=False & temperature=0.0 (deterministic output) - 1 point for a correct answer, 0.5 points for a partially correct or blank / "not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.
- **Accuracy Score**: **93.5** correct out of 100
- Not Found Classification: 75.0%
- Boolean: 87.5%
- Math/Logic: 70.0%
- Complex Questions (1-5): 3 (Best in Class)
- Summarization Quality (1-5): 3 (Average)
- Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
Please note that these test results were achieved using the 4_K_M quantized version of this model - [bling-qwen-mini-tool](https://www.huggingface.co/llmware/bling-qwen-mini-tool).
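As a point of reference, the scoring scheme above is simple enough to express in a few lines. The sketch below is illustrative arithmetic only, not the official benchmark harness, and the outcome mix shown is just one combination that produces a 93.5 score:

```python
# Points per outcome, per the scoring scheme described above
SCORES = {
    "correct": 1.0,
    "partial_or_not_found": 0.5,  # partially correct, blank, or "not found"
    "incorrect": 0.0,
    "hallucination": -1.0,
}

def accuracy_score(labels):
    """Sum per-question points over a 100-question test run."""
    return sum(SCORES[label] for label in labels)

# e.g., 90 correct + 7 partial/NF + 3 incorrect over 100 questions -> 93.5
print(accuracy_score(["correct"] * 90 + ["partial_or_not_found"] * 7 + ["incorrect"] * 3))
```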
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** Qwen
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen2-1.5b-base
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and legal and regulatory services.
BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage - provide a text passage as context, ask a question, and get a clear, fact-based response.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-qwen-1.5b")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-qwen-1.5b")
```
Please refer to the generation_test .py files in this repo, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, so the test set can be swapped out for a RAG workflow consisting of business documents.
The BLING model was fine-tuned with a simple "\<human> and \<bot>" wrapper, so to get the best results, wrap inference entries as:
```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```
The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
1. Text Passage Context, and
2. Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
```python
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```
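For example, packaging a passage and question looks like this (the passage and question below are purely illustrative):

```python
# Illustrative packaging of a closed-context prompt (sample text is hypothetical)
text_passage = "The lease term is 36 months, beginning on January 1, 2024."
question = "What is the length of the lease term?"

my_prompt = text_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```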
If you are using a HuggingFace generation script:
```python
import torch

# entries is a single test sample, e.g. {"context": "...", "query": "..."}
# prepare prompt packaging used in fine-tuning process
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

# place the model on GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

inputs = tokenizer(new_prompt, return_tensors="pt")

# generated tokens start after the prompt tokens
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.0 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=False,
    temperature=0.0,
    max_new_tokens=100,
)

# decode only the newly generated tokens, skipping the prompt
output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
```
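To run the model over multiple samples, the same packaging can be applied in a loop. A minimal sketch, assuming `test_set` is a list of dicts with "context" and "query" keys like `entries` above (the sample data is hypothetical):

```python
# Hypothetical test set in the same {"context", "query"} shape as entries above
test_set = [
    {"context": "The purchase price is $250,000.", "query": "What is the purchase price?"},
    {"context": "This agreement is governed by New York law.", "query": "Which state's law governs?"},
]

for entries in test_set:
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])
    outputs = model.generate(
        inputs.input_ids.to(device),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=False,
        temperature=0.0,
        max_new_tokens=100,
    )
    print(tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True))
```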
## Model Card Contact
Darren Oberst & llmware team |