Update README.md
README.md CHANGED
@@ -28,18 +28,19 @@ Load in your favorite GGUF inference engine, or try with llmware as follows:
 # this one line will download the model and run a series of tests
 ModelCatalog().tool_test_run("slim-sentiment-tool", verbose=True)

-Note: please review [**config.json**](https://huggingface.co/llmware/slim-sentiment-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set.

-
 Slim models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:

 from llmware.agents import LLMfx

 llm_fx = LLMfx()
 llm_fx.load_tool("sentiment")
-response = llm_fx.sentiment(text)
+response = llm_fx.sentiment(text)


+Note: please review [**config.json**](https://huggingface.co/llmware/slim-sentiment-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set.
+
+
 ### Model Description

 <!-- Provide a longer summary of what this model is. -->
@@ -50,27 +51,6 @@ Slim models can also be loaded even more simply as part of multi-model, multi-
 - **License:** Apache 2.0
 - **Quantized from model:** llmware/slim-sentiment (finetuned tiny llama 1b)

-## Uses
-
-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
-SLIM models provide a fast, flexible, intuitive way to integrate classifiers and structured function calls into RAG and LLM application workflows.
-
-Model instructions, details and test samples have been packaged into the config.json file in the repository, along with the GGUF file.
-
-
-Example:
-
-text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
-
-model generation - {"sentiment": ["negative"]}
-
-keys = "sentiment"
-
-All of the SLIM models use a novel prompt instruction structured as follows:
-
-"<human> " + text + "<classify> " + keys + "</classify>" + "\n<bot>: "
-

 ## Model Card Contact

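For reference, a minimal, self-contained version of the one-line test flow shown in the first hunk. The `from llmware.models import ModelCatalog` import and the install note are assumptions about the llmware package layout and are not part of the diff:

```python
# Minimal sketch of the tool test flow referenced in the first hunk.
# Assumes llmware is installed (e.g. pip install llmware); the import path
# llmware.models.ModelCatalog is an assumption, not shown in the diff.
from llmware.models import ModelCatalog

# Downloads the slim-sentiment-tool GGUF locally and runs the packaged test set;
# verbose=True prints each test sample and model generation.
ModelCatalog().tool_test_run("slim-sentiment-tool", verbose=True)
```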
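Likewise, a short sketch of the multi-step LLMfx call from the same hunk, with a sample input filled in; only the `LLMfx` import, `load_tool`, and `sentiment` calls appear in the diff, while the sample text and final print are illustrative additions:

```python
# Sketch of the LLMfx agent flow shown in the diff, with an example input added.
from llmware.agents import LLMfx

# Sample text taken from the model card's sentiment example.
text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."

llm_fx = LLMfx()
llm_fx.load_tool("sentiment")       # loads slim-sentiment-tool as the "sentiment" tool
response = llm_fx.sentiment(text)   # model card shows a generation of {"sentiment": ["negative"]} for this text

print(response)
```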
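The removed "Uses" section also documents the SLIM prompt wrapper; a worked example of assembling that prompt, using the sample text and classification key quoted in that section, might look like this:

```python
# Worked example of the SLIM prompt wrapper quoted in the removed "Uses" section.
# The sample text, key, and expected generation all come from that section.
text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
keys = "sentiment"

# Wrap the text and classification key in the SLIM instruction template.
prompt = "<human> " + text + "<classify> " + keys + "</classify>" + "\n<bot>: "
print(prompt)

# Expected model generation for this input: {"sentiment": ["negative"]}
```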