Update README.md

README.md

<!-- Provide a quick summary of what the model is/does. -->

**slim-intent-tool** is a 4_K_M quantized GGUF version of slim-intent, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.

[**slim-intent**](https://huggingface.co/llmware/slim-intent) is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.

To pull the model via API:

    from huggingface_hub import snapshot_download
    snapshot_download("llmware/slim-intent-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)

Load in your favorite GGUF inference engine, or try with llmware as follows:

    from llmware.models import ModelCatalog

    # to load the model and make a basic inference
    model = ModelCatalog().load_model("slim-intent-tool")
    response = model.function_call(text_sample)

    # this one line will download the model and run a series of tests
    ModelCatalog().tool_test_run("slim-intent-tool", verbose=True)
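
For illustration, here is a minimal end-to-end sketch of the basic inference path; the sample text is made up, and the exact structure of the returned dictionary may vary by llmware version (recent versions place the parsed output under an "llm_response" key):

    from llmware.models import ModelCatalog

    # hypothetical customer message, used only for illustration
    text_sample = "I have been charged twice this month and I want to cancel my subscription."

    model = ModelCatalog().load_model("slim-intent-tool")
    response = model.function_call(text_sample)

    # the structured classification, e.g. something like {"intent": ["cancel"]},
    # is returned inside the response dictionary
    print(response)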

Slim models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:

    from llmware.agents import LLMfx

    llm_fx = LLMfx()
    llm_fx.load_tool("intent")
    response = llm_fx.intent(text)
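
As a sketch of a multi-model, multi-step agent flow, the same pattern can chain several SLIM tools over one passage; the "sentiment" tool name below assumes the companion slim-sentiment-tool is available and follows the same load_tool / method-call convention:

    from llmware.agents import LLMfx

    text = "The order arrived two weeks late and I would like a refund."

    llm_fx = LLMfx()

    # each loaded tool becomes a callable step on the agent
    llm_fx.load_tool("intent")
    llm_fx.load_tool("sentiment")   # assumes slim-sentiment-tool is available

    # run the tools step by step over the same text
    intent_response = llm_fx.intent(text)
    sentiment_response = llm_fx.sentiment(text)

    print(intent_response, sentiment_response)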

Note: please review [**config.json**](https://huggingface.co/llmware/slim-intent-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set.

## Model Card Contact