---
license: apache-2.0
---

Model Card for slim-sentiment-tool

slim-sentiment is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.

slim-sentiment has been fine-tuned for sentiment analysis function calls, with output in the form of a JSON dictionary corresponding to the specified classification keys.

Each SLIM model has a corresponding 'tool' in a separate repository, e.g., 'slim-sentiment-tool', which is a 4-bit quantized gguf version of the model intended to be used for inference.
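
As a rough sketch (assuming the llmware library's ModelCatalog loader, which is not described in this card), the quantized tool can be loaded and used for a function call along these lines:

from llmware.models import ModelCatalog

# load the 4-bit quantized gguf 'tool' version of the model for local inference
sentiment_tool = ModelCatalog().load_model("slim-sentiment-tool")

# run a sentiment function call against a text sample
text = "The stock market declined yesterday as investors worried about the slowing economy."
response = sentiment_tool.function_call(text)
print(response)   # expected to include a dictionary such as {"sentiment": ["negative"]}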

Model Description

  • Developed by: llmware
  • Model type: Small, specialized LLM
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model: Tiny Llama 1B

Uses

The intended use of SLIM models is to re-imagine traditional 'hard-coded' classifiers through the use of function calls.

Example:

text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."   

model generation - {"sentiment": ["negative"]}

keys = "sentiment"

All of the SLIM models use a novel prompt instruction structured as follows:

"<human> " + text + "<classify> " + keys + "</classify>" + "/n<bot>: "


How to Get Started with the Model

The fastest way to get started with slim-sentiment is through direct import in transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM

# load the full (non-quantized) slim-sentiment model and tokenizer from the llmware repo
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")
model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")

The model was fine-tuned with a simple '<human>' and '<bot>' wrapper, so to get the best results, wrap inference entries as:

full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
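
Continuing from the transformers snippet above, a minimal generation sketch might look as follows (the generation parameters and the JSON parsing step are illustrative, not settings prescribed by this card):

import json

# example inputs, reusing the sample from the Uses section
text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
keys = "sentiment"

# package the sentiment instruction per the SLIM prompt structure, then apply the wrapper
my_prompt = text + "<classify> " + keys + "</classify>"
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# keep only the newly generated tokens and attempt to parse the JSON dictionary
generated = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
try:
    result = json.loads(generated)    # e.g., {"sentiment": ["negative"]}
except json.JSONDecodeError:
    result = generated                # fall back to the raw string if parsing fails

print(result)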

The model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

  1. Text Passage Context, and
  2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

my_prompt = {{text_passage}} + "\n" + {{question/instruction}}

Model Card Contact

Darren Oberst & llmware team

Please reach out anytime if you are interested in this project and would like to participate and work with us!