doberst commited on
Commit
6e3cab0
1 Parent(s): dbfa576

Update README.md

Browse files
Files changed (1)
  1. README.md +16 -32
README.md CHANGED
@@ -6,13 +6,24 @@ license: apache-2.0
 
 <!-- Provide a quick summary of what the model is/does. -->
 
- **slim-sentiment** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
 
- slim-sentiment has been fine-tuned for **sentiment analysis** function calls, with output of a JSON dictionary corresponding to specific named entity keys.
 
- Each slim model has a corresponding 'tool' in a separate repository, e.g., 'slim-sentiment-tool', which is a 4-bit quantized GGUF version of the model that is intended to be used for inference.
 
 ### Model Description
@@ -20,10 +31,10 @@ Each slim model has a corresponding 'tool' in a separate repository, e.g., 'slim
 
 <!-- Provide a longer summary of what this model is. -->
 
 - **Developed by:** llmware
- - **Model type:** Small, specialized LLM
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
- - **Finetuned from model:** Tiny Llama 1B
 
 ## Uses
 
@@ -43,36 +54,9 @@ All of the SLIM models use a novel prompt instruction structured as follows:
 
 "<human> " + text + "<classify> " + keys + "</classify>" + "\n<bot>: "
 
-
- ## How to Get Started with the Model
-
- The fastest way to get started with BLING is through direct import in transformers:
-
-     from transformers import AutoTokenizer, AutoModelForCausalLM
-     tokenizer = AutoTokenizer.from_pretrained("slim-sentiment")
-     model = AutoModelForCausalLM.from_pretrained("slim-sentiment")
-
- The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
-
-     full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"
-
- The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
-
- 1. Text Passage Context, and
- 2. Specific question or instruction based on the text passage
-
- To get the best results, package "my_prompt" as follows:
-
-     my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
-
 
 ## Model Card Contact
 
 Darren Oberst & llmware team
 
- Please reach out anytime if you are interested in this project and would like to participate and work with us!
 
 
 
 <!-- Provide a quick summary of what the model is/does. -->
 
+ **slim-sentiment-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
 
+ slim-sentiment-tool is a 4_K_M quantized GGUF version of slim-sentiment, providing a fast, small inference implementation.
 
+ Load in your favorite GGUF inference engine, or try with llmware as follows:
 
+     from llmware.models import ModelCatalog
+
+     sentiment_tool = ModelCatalog().load_model("llmware/slim-sentiment-tool")
+     response = sentiment_tool.function_call(text_sample, params=["sentiment"], function="classify")
+
+ Slim models can also be loaded even more simply as part of LLMfx calls:
+
+     from llmware.agents import LLMfx
+
+     llm_fx = LLMfx()
+     llm_fx.load_tool("sentiment")
+     response = llm_fx.sentiment(text)
 
  ### Model Description
 
 <!-- Provide a longer summary of what this model is. -->
 
 - **Developed by:** llmware
+ - **Model type:** GGUF
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
+ - **Quantized from model:** llmware/slim-sentiment (fine-tuned Tiny Llama)
 
 ## Uses
 
 
 
 "<human> " + text + "<classify> " + keys + "</classify>" + "\n<bot>: "
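The SLIM prompt structure above can be assembled directly; a minimal sketch (the `build_slim_prompt` helper name is illustrative, not part of the llmware API):

```python
def build_slim_prompt(text: str, keys: str) -> str:
    # Concatenates the SLIM instruction template: the passage, the
    # <classify> key list, and a newline before the <bot> turn marker.
    return "<human> " + text + "<classify> " + keys + "</classify>" + "\n<bot>: "

# Illustrative usage with a hypothetical passage:
prompt = build_slim_prompt("The quarterly results beat expectations.", "sentiment")
```

The completed prompt is then passed to the model as a single string, with generation expected to continue after the `<bot>:` marker.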
 
 ## Model Card Contact
 
 Darren Oberst & llmware team