|
--- |
|
license: apache-2.0 |
|
--- |
|
|
|
# Model Card for BLING-1.4b-0.1
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
BLING-1.4b-0.1 is the first model release in the BLING ("Best Little Instruction-following No-GPU-required") model series. |
|
|
|
BLING models are designed as custom instruct-following, laptop-effective, GPT decoder-based models (~1B-2.7B parameters). BLING models are currently built on top of Pythia (GPTNeoX architecture) base models and other Apache 2.0-licensed GPT-compatible models, with a primary focus on 'little' models in the range of 1B, 1.3-1.4B, and 2.7B parameters. (Note: in our testing, we have seen relatively limited success with instruct-following models below 1B parameters.)
|
|
|
BLING models are fine-tuned with distilled, high-quality custom instruct datasets, targeted at a specific subset of instruct tasks. The objective is a high-quality instruct model with good instruction-following capability that can be loaded and run locally on a laptop, entirely without a GPU server.
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
- **Developed by:** llmware |
|
- **Shared by [optional]:** Darren Oberst |
|
- **Model type:** GPTNeoX instruct-trained decoder |
|
- **Language(s) (NLP):** English |
|
- **License:** Apache 2.0 |
|
- **Finetuned from model [optional]:** EleutherAI/Pythia-1b-deduped |
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> |
|
|
|
The intended use of BLING models is two-fold: |
|
|
|
1. Provide high-quality instruct models that can run on a laptop for local testing. We have found them extremely useful when building a

proof-of-concept, or when working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
|
|
|
2. Push the state of the art for smaller Instruct-following models in the 1B - 7B range through improved fine-tuning datasets and targeted "instruction" tasks. |
|
|
|
|
|
### Direct Use |
|
|
|
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> |
|
|
|
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries such as financial services and the

legal and regulatory industries. BLING is intended to be an experimental series of little instruct models targeted at specific

RAG automation tasks with complex information sources. Rather than trying to be "all things to all people," BLING models focus

on a narrower set of instructions more suitable to a ~1B parameter GPT model.
|
|
|
BLING is ideal for rapid prototyping and testing, and for performing an end-to-end workflow locally on a laptop without

having to send sensitive information over an Internet-based API.
|
|
|
The first BLING models have been trained on question-answering, key-value extraction, and basic summarization as the core instruction types. |
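As an illustration of these instruction types, the passage and instructions below are invented examples (not drawn from the actual training data):

```python
# Invented example passage and instructions (illustrative only, not from the training data)
passage = "Services Agreement dated June 1, 2023, between ACME Corp and Widget Inc."

instructions = {
    "question-answering": "What is the date of the agreement?",
    "key-value extraction": "What are the names of the two parties?",
    "basic summarization": "Summarize the passage in one sentence.",
}

# Each instruction type is paired with a text passage to form a closed-context prompt
prompts = {task: passage + "\n" + instr for task, instr in instructions.items()}
```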
|
|
|
|
|
|
|
|
### Downstream Use [optional] |
|
|
|
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> |
|
|
|
[More Information Needed] |
|
|
|
### Out-of-Scope Use |
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> |
|
|
|
1. BLING is not designed for 'chat-bot' or 'consumer-oriented' applications. |
|
|
|
2. BLING is not optimal for most production applications, other than simple and highly specific use cases. |
|
|
|
|
|
|
|
|
## Bias, Risks, and Limitations |
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
BLING has not been designed for end consumer-oriented applications, and there has not been a focus in training on safeguards to

mitigate potential bias and safety risks. We would strongly discourage any use of BLING for any 'chatbot' use case.
|
|
|
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
The fastest way to get started with BLING is through direct import in transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/bling-1b-0.1")

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1b-0.1")
```
|
|
|
The BLING model was fine-tuned with a simple `<human>` and `<bot>` wrapper, so to get the best results, wrap inference entries as:

```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>: "
```
|
|
|
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of sub-parts: |
|
|
|
1. Text Passage Context, and |
|
2. Specific question or instruction based on the text passage |
|
|
|
To get the best results, package `my_prompt` as follows:

```python
my_prompt = text_passage + "\n" + question_or_instruction
```
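Putting the two conventions together, a small helper can assemble the full prompt from a passage and a question. This is a sketch of our own; `build_prompt` and the example strings below are not part of the transformers API or the training data:

```python
def build_prompt(text_passage: str, question_or_instruction: str) -> str:
    # Combine the closed-context passage with the question or instruction...
    my_prompt = text_passage + "\n" + question_or_instruction
    # ...then apply the <human>/<bot> wrapper used in fine-tuning
    return "<human>: " + my_prompt + "\n" + "<bot>: "

# Invented example passage and question
full_prompt = build_prompt(
    "The invoice total is $1,250, due on June 30.",
    "What is the total amount of the invoice?",
)
```

The resulting `full_prompt` can then be tokenized and passed to `model.generate()` as usual.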
|
|
|
|
|
## Citation [optional] |
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
|
|
|
## Model Card Contact |
|
|
|
Darren Oberst & llmware team |
|
|
|
Please reach out anytime if you are interested in this research program and would like to participate and work with us! |
|
|
|
|
|
|
|
|