doberst committed on
Commit
61dbeb4
1 Parent(s): 62382b2

Upload README.md

Files changed (1)
  1. README.md +46 -3
README.md CHANGED
@@ -1,3 +1,46 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: cc-by-sa-4.0
+ ---
+
+ # SLIM-XSUM-TOOL
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+ **slim-xsum-tool** is a 4_K_M quantized GGUF version of slim-xsum, providing a small, fast inference implementation optimized for multi-model concurrent deployment.
+
+ This model implements an 'extreme summarization' ('xsum') function, based on the parameter key "xsum", that generates an LLM text output in the form of a Python dictionary as follows:
+
+ `{'xsum': ['Stock Market declines on worries of interest rates.']}`
+
+ The intent of SLIMs is to forge a middle ground between traditional encoder-based classifiers and open-ended API-based LLMs through the use of function calling and small, specialized LLMs.
+
+ [**slim-xsum**](https://huggingface.co/llmware/slim-xsum) is the PyTorch version of the model and is suitable for fine-tuning for further domain adaptation.
+
+
+ To pull the model via API:
+
+     from huggingface_hub import snapshot_download
+     snapshot_download("llmware/slim-xsum-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
+
+
+ Load in your favorite GGUF inference engine, or try with llmware as follows:
+
+     from llmware.models import ModelCatalog
+
+     # load the model and make a basic inference
+     # (text_sample is the passage of text to be summarized)
+     model = ModelCatalog().load_model("slim-xsum-tool")
+     response = model.function_call(text_sample)
+
+     # this one line will download the model and run a series of tests
+     ModelCatalog().tool_test_run("slim-xsum-tool", verbose=True)
+
+
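+ For illustration, here is a minimal end-to-end sketch. The sample passage is invented, and the exact structure of the returned object should be checked against the test set referenced in config.json; the `{'xsum': [...]}` dictionary shown above is the assumed form.
+
+     from llmware.models import ModelCatalog
+
+     text_sample = ("Global stock markets fell sharply on Tuesday as investors weighed "
+                    "the likelihood of further interest rate increases by central banks.")
+
+     model = ModelCatalog().load_model("slim-xsum-tool")
+     response = model.function_call(text_sample)
+
+     # expected to carry a dictionary along the lines of
+     # {'xsum': ['Stock markets decline on interest rate worries.']}
+     print(response)
+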
+ Note: please review [**config.json**](https://huggingface.co/llmware/slim-xsum-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set.
+
+
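+ If you would rather run the GGUF file directly in a generic engine, here is a minimal sketch using llama-cpp-python. The GGUF filename and the prompt wrapper below are placeholders for illustration only; the actual wrapper and generation parameters are documented in config.json.
+
+     from llama_cpp import Llama
+
+     # path assumes the snapshot_download step above; the filename is a placeholder
+     llm = Llama(model_path="/path/on/your/machine/slim-xsum.gguf", n_ctx=2048)
+
+     text_passage = "Global stock markets fell sharply on Tuesday amid interest rate worries."
+
+     # placeholder prompt wrapper - substitute the wrapper defined in config.json
+     prompt = "<human>: " + text_passage + "\n<xsum>:"
+     output = llm(prompt, max_tokens=100)
+     print(output["choices"][0]["text"])
+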
+ ## Model Card Contact
+
+
+ Darren Oberst & llmware team
+ [Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)