doberst committed on
Commit d06c209 · verified · 1 Parent(s): a8a9783

Update README.md

Files changed (1): README.md +7 -19
README.md CHANGED
@@ -6,20 +6,20 @@ license: apache-2.0
 
  <!-- Provide a quick summary of what the model is/does. -->
 
- **slim-sentiment-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
 
- slim-sentiment-tool is a 4_K_M quantized GGUF version of slim-sentiment, providing a small, fast inference implementation.
 
  Load in your favorite GGUF inference engine (see details in config.json to set up the prompt template), or try with llmware as follows:
 
  from llmware.models import ModelCatalog
 
  # to load the model and make a basic inference
- sentiment_tool = ModelCatalog().load_model("slim-sentiment-tool")
- response = sentiment_tool.function_call(text_sample)
 
  # this one line will download the model and run a series of tests
- ModelCatalog().test_run("slim-sentiment-tool", verbose=True)
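As a sketch of how the structured output of such a function call could be consumed — assuming the tool returns a Python dict shaped like the `{"sentiment": ["negative"]}` example in this card; the helper below is illustrative, not part of the llmware API:

```python
from typing import Optional

# Illustrative helper (not part of the llmware API): pull the first label
# out of a SLIM function-call response, assumed to be a dict mapping a
# key to a list of labels, e.g. {"sentiment": ["negative"]}.
def first_label(response: dict, key: str) -> Optional[str]:
    values = response.get(key, [])
    return values[0] if values else None

sample_response = {"sentiment": ["negative"]}
print(first_label(sample_response, "sentiment"))  # negative
```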
 
 
  Slim models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:
@@ -27,8 +27,8 @@ Slim models can also be loaded even more simply as part of a multi-model, multi-
 
  from llmware.agents import LLMfx
 
  llm_fx = LLMfx()
- llm_fx.load_tool("sentiment")
- response = llm_fx.sentiment(text)
 
 
  ### Model Description
@@ -50,18 +50,6 @@ SLIM models provide a fast, flexible, intuitive way to integrate classifiers and
 
  Model instructions, details and test samples have been packaged into the config.json file in the repository, along with the GGUF file.
 
 
- Example:
-
- text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
-
- model generation - {"sentiment": ["negative"]}
-
- keys = "sentiment"
-
- All of the SLIM models use a novel prompt instruction structured as follows:
-
- "<human> " + text + "<classify> " + keys + "</classify>" + "\n<bot>: "
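A minimal sketch of assembling that prompt template in Python — reading the trailing escape as a newline; llmware builds this prompt internally, so this function is purely illustrative:

```python
# Illustrative reconstruction of the SLIM prompt template described above
# (the actual template ships in the repo's config.json).
def build_slim_prompt(text: str, keys: str) -> str:
    return "<human> " + text + "<classify> " + keys + "</classify>" + "\n<bot>: "

prompt = build_slim_prompt(
    "The stock market declined yesterday as investors worried "
    "increasingly about the slowing economy.",
    "sentiment",
)
```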
 
  ## Model Card Contact
 
  <!-- Provide a quick summary of what the model is/does. -->
 
+ **slim-ner-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
 
+ slim-ner-tool is a 4_K_M quantized GGUF version of slim-ner, providing a small, fast inference implementation.
 
  Load in your favorite GGUF inference engine (see details in config.json to set up the prompt template), or try with llmware as follows:
 
  from llmware.models import ModelCatalog
 
  # to load the model and make a basic inference
+ ner_tool = ModelCatalog().load_model("slim-ner-tool")
+ response = ner_tool.function_call(text_sample)
 
  # this one line will download the model and run a series of tests
+ ModelCatalog().test_run("slim-ner-tool", verbose=True)
 
 
  Slim models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:
 
  from llmware.agents import LLMfx
 
  llm_fx = LLMfx()
+ llm_fx.load_tool("ner")
+ response = llm_fx.named_entity_extraction(text)
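A sketch of flattening a named-entity response for downstream use — assuming the NER tool, like the other SLIM tools, returns a dict mapping keys to lists of values; the entity-type key names below are hypothetical, the real ones come from the model's config.json:

```python
from typing import List, Tuple

# Illustrative helper (not part of the llmware API): flatten an assumed
# entity-type -> list-of-values response into (type, value) pairs.
# Key names like "people" are hypothetical placeholders.
def flatten_entities(response: dict) -> List[Tuple[str, str]]:
    pairs = []
    for entity_type, values in response.items():
        for value in values:
            pairs.append((entity_type, value))
    return pairs

sample = {"people": ["Jane Doe"], "organization": ["Acme Corp"]}
# flatten_entities(sample) -> [("people", "Jane Doe"), ("organization", "Acme Corp")]
```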
 
 
  ### Model Description
 
  Model instructions, details and test samples have been packaged into the config.json file in the repository, along with the GGUF file.
 
 
  ## Model Card Contact