doberst committed
Commit 0fd883a · verified · 1 Parent(s): 61dbeb4

Upload README.md

Files changed (1)
  1. README.md +14 -11
README.md CHANGED
@@ -1,27 +1,30 @@
  ---
- license: cc-by-sa-4.0
  ---

- # SLIM-XSUM-TOOL

  <!-- Provide a quick summary of what the model is/does. -->


- **slim-xsum-tool** is a 4_K_M quantized GGUF version of slim-xsum, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.

- This model implements an 'extreme summarization' (e.g., 'xsum') function based on the parameter key "xsum" that generates an LLM text output in the form of a python dictionary as follows:

- `{'xsum': ['Stock Market declines on worries of interest rates.']}`
-
- The intent of SLIMs is to forge a middle-ground between traditional encoder-based classifiers and open-ended API-based LLMs through the use of function-calling and small specialized LLMs.

- [**slim-xsum**](https://huggingface.co/llmware/slim-xsum) is the Pytorch version of the model, and suitable for fine-tuning for further domain adaptation.


  To pull the model via API:

  from huggingface_hub import snapshot_download
- snapshot_download("llmware/slim-xsum-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)


  Load in your favorite GGUF inference engine, or try with llmware as follows:
@@ -29,11 +32,11 @@ Load in your favorite GGUF inference engine, or try with llmware as follows:
  from llmware.models import ModelCatalog

  # to load the model and make a basic inference
- model = ModelCatalog().load_model("slim-xsum-tool")
  response = model.function_call(text_sample)

  # this one line will download the model and run a series of tests
- ModelCatalog().tool_test_run("slim-xsum-tool", verbose=True)


  Note: please review [**config.json**](https://huggingface.co/llmware/slim-xsum-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
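
A minimal end-to-end sketch of the xsum function call described above, using the same llmware calls shown in this card; the sample passage is illustrative, and the assumption that the parsed dictionary is returned under the response's "llm_response" key follows llmware's other SLIM examples rather than anything stated here:

    from llmware.models import ModelCatalog

    # illustrative passage (not from the model card)
    text_sample = ("Stocks fell broadly on Tuesday as traders weighed the possibility "
                   "of further interest rate increases by the central bank.")

    # load the quantized tool and run the extreme-summarization function call
    model = ModelCatalog().load_model("slim-xsum-tool")
    response = model.function_call(text_sample)

    # assumption: the parsed output dict, e.g. {'xsum': ['...']}, sits under "llm_response"
    print(response["llm_response"].get("xsum", []))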
 
  ---
+ license: apache-2.0
  ---

+ # SLIM-QA-GEN-TINY-TOOL

  <!-- Provide a quick summary of what the model is/does. -->


+ **slim-qa-gen-tiny-tool** is a 4_K_M quantized GGUF version of slim-qa-gen-tiny, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.

+ This model implements a generative 'question' and 'answer' (e.g., 'qa-gen') function, which takes a context passage as an input, and then generates as an output a python dictionary consisting of two keys:

+ `{'question': ['What was the amount of revenue in the quarter?'], 'answer': ['$3.2 billion']}`

+ The model has been designed to accept one of three different parameters to guide the type of question-answer created: 'question, answer' (generates a standard question and answer), 'boolean' (generates a 'yes-no' question and answer), and 'multiple choice' (generates a multiple choice question and answer).
+
+ slim-qa-gen-tiny-tool is a fine-tune of a TinyLlama (1B parameter) model, designed for fast, local deployment and rapid testing and prototyping. Please also see slim-qa-gen-phi-3-tool, which is a fine-tune of phi-3 and will provide higher-quality results, at the trade-off of slightly slower performance and higher memory usage.
+
+
+ [**slim-qa-gen-tiny**](https://huggingface.co/llmware/slim-qa-gen-tiny) is the Pytorch version of the model, and suitable for fine-tuning for further domain adaptation.
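
As a rough illustration of the three generation parameters described above ('question, answer', 'boolean', 'multiple choice'), a minimal sketch using the same llmware calls shown in this card; the params keyword and the shape of the returned dictionary are assumptions drawn from llmware's other SLIM examples, not verified against this release:

    from llmware.models import ModelCatalog

    # illustrative context passage
    text_sample = ("The company reported revenue of $3.2 billion in the quarter, "
                   "up 8% from the prior year.")

    model = ModelCatalog().load_model("slim-qa-gen-tiny-tool")

    # assumption: the generation type is selected by passing one of the documented
    # parameters via the params argument
    for param in ["question, answer", "boolean", "multiple choice"]:
        response = model.function_call(text_sample, params=[param])
        # assumption: the {'question': [...], 'answer': [...]} dict is under "llm_response"
        print(param, "->", response["llm_response"])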


  To pull the model via API:

  from huggingface_hub import snapshot_download
+ snapshot_download("llmware/slim-qa-gen-tiny-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)


  Load in your favorite GGUF inference engine, or try with llmware as follows:
 
  from llmware.models import ModelCatalog

  # to load the model and make a basic inference
+ model = ModelCatalog().load_model("slim-qa-gen-tiny-tool")
  response = model.function_call(text_sample)

  # this one line will download the model and run a series of tests
+ ModelCatalog().tool_test_run("slim-qa-gen-tiny-tool", verbose=True)


  Note: please review [**config.json**](https://huggingface.co/llmware/slim-xsum-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
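
For the "load in your favorite GGUF inference engine" route, a minimal sketch using llama-cpp-python as one such engine; the GGUF filename and the prompt wrapper below are placeholders that this card does not specify, so check the repository file list and config.json before running:

    from huggingface_hub import snapshot_download
    from llama_cpp import Llama

    # download the repository locally (same call as shown above)
    local_dir = snapshot_download("llmware/slim-qa-gen-tiny-tool",
                                  local_dir="model_repo",
                                  local_dir_use_symlinks=False)

    # placeholder filename - check the repository for the actual .gguf file name
    llm = Llama(model_path=f"{local_dir}/slim-qa-gen-tiny.gguf", n_ctx=2048)

    # placeholder prompt - the required prompt wrapping is documented in config.json
    prompt = "<human>: ...context passage and qa-gen parameter per config.json...\n<bot>:"
    output = llm(prompt, max_tokens=200)
    print(output["choices"][0]["text"])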