---
license: apache-2.0
---

# Model Card for bling-1b-0.1

BLING-1b-0.1 is the **smallest** model release in the BLING ("Best Little Instruction-following No-GPU-required") model series.

BLING models are fine-tuned with distilled, high-quality custom instruct datasets, targeted at a specific subset of instruct tasks, with the objective of providing a high-quality instruct model that is 'inference-ready' on a CPU laptop, even without any advanced quantization optimizations.

### Benchmark Tests

Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)

Scores are the average of 2 test runs, with 1.0 point for a correct answer, 0.5 points for a partially correct answer or for a correct blank / "not found" response, 0.0 points for an incorrect answer, and -1.0 points for a hallucination.

- **Accuracy Score**: **73.25** correct out of 100
- Not Found Classification: 17.5%
- Boolean: 29%
- Math/Logic: 0%
- Complex Questions (1-5): 1 (Low)
- Summarization Quality (1-5): 1 (Coherent, extractive)
- Hallucinations: No hallucinations observed in test runs.

For test run results, please see the files ("core_rag_test" and "answer_sheet") in the repo.

### Model Description

- **Developed by:** llmware
- **Model type:** GPTNeoX instruct-trained decoder
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** EleutherAI/pythia-1b-deduped

## Uses

The intended use of BLING models is two-fold:

1. Provide high-quality instruct models that can run on a laptop for local testing. We have found them extremely useful when building a proof-of-concept, or when working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.

2. Push the state of the art for smaller instruction-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.

### Direct Use

BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and the legal and regulatory industries. Rather than trying to be "all things to all people," BLING models focus on a narrower set of instructions more suitable to a ~1B parameter GPT model.

BLING is ideal for rapid prototyping and testing, and for performing an end-to-end workflow locally on a laptop without having to send sensitive information over an Internet-based API.

The first BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage: provide a text passage context, ask questions, and get clear fact-based responses.

## Bias, Risks, and Limitations

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. This model can be used effectively for quick "on laptop" testing and will be generally accurate in relatively simple extractive Q&A and basic summarization. For higher-performing models, please see the larger models in the BLING series, starting at 1.3B-1.4B and going up to 3B. Note: this was the smallest model that we were able to train to consistently recognize Q&A and RAG instructions.
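For readers who want to reproduce the arithmetic behind the accuracy score, the scoring scheme in the Benchmark Tests section above can be expressed as a short function. This is a minimal sketch with hypothetical grade labels and function names; it is not part of the published test harness:

```python
# Hypothetical sketch of the benchmark scoring scheme described above:
# average of two runs, with 1.0 for correct, 0.5 for partially correct
# or a correct "not found", 0.0 for incorrect, -1.0 for a hallucination.
POINTS = {
    "correct": 1.0,
    "partial_or_not_found": 0.5,
    "incorrect": 0.0,
    "hallucination": -1.0,
}

def score_run(grades):
    """Sum the points for one run of 100 graded answers."""
    return sum(POINTS[g] for g in grades)

def benchmark_score(run1, run2):
    """Average the scores of two test runs."""
    return (score_run(run1) + score_run(run2)) / 2
```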
## How to Get Started with the Model

The fastest way to get started with BLING is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-1b-0.1")
```

The BLING model was fine-tuned with a simple `<human>` and `<bot>` wrapper, so to get the best results, wrap inference entries as:

```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```

The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

1. Text passage context, and
2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

```
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```

## Citation

BLING models are built on top of the EleutherAI/Pythia base - please see the citation for Pythia below:

```
@misc{biderman2023pythia,
      title={Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling},
      author={Stella Biderman and Hailey Schoelkopf and Quentin Anthony and Herbie Bradley and Kyle O'Brien and Eric Hallahan and Mohammad Aflah Khan and Shivanshu Purohit and USVSN Sai Prashanth and Edward Raff and Aviya Skowron and Lintang Sutawika and Oskar van der Wal},
      year={2023},
      eprint={2304.01373},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Model Card Contact

Darren Oberst & llmware team

Please reach out anytime if you are interested in this project.
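As a final quick-start reference, here is a minimal end-to-end inference sketch tying together the loading and prompt-packaging steps from the How to Get Started section above. The sample passage and question are invented, and the generation settings (greedy decoding, `max_new_tokens=100`) are illustrative assumptions rather than recommended values:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-1b-0.1")

# Hypothetical closed-context sample: a text passage plus a question.
text_passage = "The loan agreement was signed on March 3, 2022, with a maturity date of March 3, 2027."
question = "What is the maturity date of the loan?"

# Package the prompt in the <human> / <bot> wrapper described above.
my_prompt = text_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,           # illustrative cap on the answer length
    do_sample=False,              # greedy decoding for fact-based responses
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens after the prompt.
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```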