doberst committed
Commit e884afa
1 Parent(s): 00e05b5

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -9,7 +9,7 @@ license: apache-2.0
 
 **slim-sentiment-tool** is a 4_K_M quantized GGUF version of slim-sentiment, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.
 
-[**slim-sentiment**](https://huggingface.co/llmware/slim-sentiment) is part of the SLIM ("Structured Language Instruction Model") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
+[**slim-sentiment**](https://huggingface.co/llmware/slim-sentiment) is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
 
 To pull the model via API:
 
@@ -22,8 +22,8 @@ Load in your favorite GGUF inference engine, or try with llmware as follows:
 from llmware.models import ModelCatalog
 
 # to load the model and make a basic inference
-sentiment_tool = ModelCatalog().load_model("slim-sentiment-tool")
-response = sentiment_tool.function_call(text_sample)
+model = ModelCatalog().load_model("slim-sentiment-tool")
+response = model.function_call(text_sample)
 
 # this one line will download the model and run a series of tests
 ModelCatalog().tool_test_run("slim-sentiment-tool", verbose=True)
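
The changed snippet references text_sample without defining it. For reference, a minimal, self-contained sketch of the same basic inference, assuming only the llmware ModelCatalog calls shown in the diff (text_sample here is a hypothetical input string added for illustration):

```python
# Minimal sketch of the basic inference from the diff above.
# Assumes the llmware ModelCatalog API as shown; text_sample is an
# illustrative placeholder, not part of the original snippet.

from llmware.models import ModelCatalog

text_sample = "The service was slow and the support team never followed up."

# load the quantized GGUF tool from the llmware model catalog
model = ModelCatalog().load_model("slim-sentiment-tool")

# run a function call to classify the sentiment of the text
response = model.function_call(text_sample)

print(response)
```

The tool_test_run one-liner shown in the diff remains the quickest way to confirm the download and exercise the model on a series of built-in tests.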