doberst committed · verified
Commit 6105f47 · 1 Parent(s): 273ba0b

Update README.md

Files changed (1):
  README.md +12 -10
README.md CHANGED
@@ -7,22 +7,22 @@ inference: false
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**slim-topics** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.
+**slim-tags** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.
 
-slim-sentiment has been fine-tuned for **topic analysis** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
+slim-tags has been fine-tuned for auto-generating relevant tags and points-of-interest function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
 
-&nbsp;&nbsp;&nbsp;&nbsp;`{"topics": ["..."]}`
+&nbsp;&nbsp;&nbsp;&nbsp;`{"tags": ["tag1", "tag2", "tag3",...]}`
 
 
 SLIM models are designed to generate structured outputs that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.
 
-Each slim model has a 'quantized tool' version, e.g., [**'slim-topics-tool'**](https://huggingface.co/llmware/slim-topics-tool).
+Each slim model has a 'quantized tool' version, e.g., [**'slim-tags-tool'**](https://huggingface.co/llmware/slim-tags-tool).
 
 
 ## Prompt format:
 
 `function = "classify"`
-`params = "topics"`
+`params = "tags"`
 `prompt = "<human> " + {text} + "\n" + `
 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp;`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
 
@@ -34,10 +34,12 @@ Each slim model has a 'quantized tool' version, e.g., [**'slim-topics-tool'**](
 tokenizer = AutoTokenizer.from_pretrained("llmware/slim-topics")
 
 function = "classify"
-params = "topic"
+params = "tags"
 
-text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
-
+text = "Citibank announced a reduction in its targets for economic growth in France and the UK last week "
+"in light of ongoing concerns about inflation and unemployment, especially in large employers "
+"such as Airbus."
+
 prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
 
 inputs = tokenizer(prompt, return_tensors="pt")
@@ -73,8 +75,8 @@ Each slim model has a 'quantized tool' version, e.g., [**'slim-topics-tool'**](
 <summary>Using as Function Call in LLMWare</summary>
 
 from llmware.models import ModelCatalog
-slim_model = ModelCatalog().load_model("llmware/slim-topics")
-response = slim_model.function_call(text,params=["tags"], function="classify")
+slim_model = ModelCatalog().load_model("llmware/slim-tags")
+response = slim_model.function_call(text,params=["tags"], function="classify")
 
 print("llmware - llm_response: ", response)
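The prompt format and dictionary-style output described in the diff can be exercised without loading the model. A minimal sketch, assuming the template from the "Prompt format" section; the `raw_output` completion is hypothetical, standing in for what the model would generate, and is parsed with `ast.literal_eval` in the style of the card's generation examples:

```python
import ast

# Assemble the prompt per the card's template:
# "<human>: " + text + "\n" + "<classify> tags </classify>" + "\n<bot>:"
function = "classify"
params = "tags"
text = "Citibank announced a reduction in its targets for economic growth in France and the UK last week."

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
print(prompt)

# The model is trained to emit a python-dictionary string keyed by `params`.
# A hypothetical completion, parsed into a usable dict:
raw_output = '{"tags": ["Citibank", "economic growth", "France", "UK"]}'
response = ast.literal_eval(raw_output)
print(response["tags"])
```

Because the output is a plain Python dictionary, downstream steps in a multi-model workflow can consume `response["tags"]` directly rather than re-parsing free text.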
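One caution on the added multi-line `text` assignment: Python's implicit string-literal concatenation only applies within a single expression. Written as three separate statements, as in the added lines, the second and third literals are evaluated and discarded, so `text` holds only the first sentence fragment. A short sketch contrasting the two forms:

```python
# As in the diff: three separate statements. Only the first literal is
# bound to `text`; the next two lines are standalone expressions whose
# values are thrown away.
text = "Citibank announced a reduction in its targets for economic growth in France and the UK last week "
"in light of ongoing concerns about inflation and unemployment, especially in large employers "
"such as Airbus."
assert "Airbus" not in text

# Parenthesized, the literals form one expression, so implicit
# concatenation joins them into the full passage.
text = ("Citibank announced a reduction in its targets for economic growth in France and the UK last week "
        "in light of ongoing concerns about inflation and unemployment, especially in large employers "
        "such as Airbus.")
assert text.endswith("such as Airbus.")
```

For the README's example this only shortens the sample input, but a parenthesized (or single-line) `text` matches the intent of the added lines.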