doberst committed
Commit 20bac97 · verified · 1 Parent(s): 0fd883a

Update README.md

Files changed (1)
  1. README.md  +6 -9
README.md CHANGED
@@ -7,7 +7,7 @@ license: apache-2.0
 <!-- Provide a quick summary of what the model is/does. -->
 
 
-**slim-qa-gen-tiny-tool** is a 4_K_M quantized GGUF version of slim-qa-gen-tiny, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.
+**slim-qa-gen-phi-3-tool** is a 4_K_M quantized GGUF version of slim-qa-gen-phi-3, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.
 
 This model implements a generative 'question' and 'answer' (e.g., 'qa-gen') function, which takes a context passage as an input, and then generates as an output a python dictionary consisting of two keys:
 
@@ -15,16 +15,13 @@ This model implements a generative 'question' and 'answer' (e.g., 'qa-gen') func
 
 The model has been designed to accept one of three different parameters to guide the type of question-answer created: 'question, answer' (generates a standard question and answer), 'boolean' (generates a 'yes-no' question and answer), and 'multiple choice' (generates a multiple choice question and answer).
 
-slim-qa-gen-tiny-tool is a fine-tune of a tinyllama (1b) parameter model, designed for fast, local deployment and rapid testing and prototyping. Please also see slim-qa-gen-phi-3-tool, which is a fine-tune of phi-3, and will provide higher-quality results, at the trade-off of slightly slower performance and requiring more memory.
-
-
-[**slim-qa-gen-tiny**](https://huggingface.co/llmware/slim-qa-gen-tiny) is the PyTorch version of the model, and suitable for fine-tuning for further domain adaptation.
+[**slim-qa-gen-phi-3**](https://huggingface.co/llmware/slim-qa-gen-phi-3) is the PyTorch version of the model, and suitable for fine-tuning for further domain adaptation.
 
 
 To pull the model via API:
 
 from huggingface_hub import snapshot_download
-snapshot_download("llmware/slim-qa-gen-tiny-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
+snapshot_download("llmware/slim-qa-gen-phi-3-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
 
 
 Load in your favorite GGUF inference engine, or try with llmware as follows:
@@ -33,13 +30,13 @@ Load in your favorite GGUF inference engine, or try with llmware as follows:
 
 # to load the model and make a basic inference
 model = ModelCatalog().load_model("slim-qa-gen-tiny-tool")
-response = model.function_call(text_sample)
+response = model.function_call(text_sample, params=["boolean"])
 
 # this one line will download the model and run a series of tests
-ModelCatalog().tool_test_run("slim-qa-gen-tiny-tool", verbose=True)
+ModelCatalog().tool_test_run("slim-qa-gen-phi-3-tool", verbose=True)
 
 
-Note: please review [**config.json**](https://huggingface.co/llmware/slim-xsum-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
+Note: please review [**config.json**](https://huggingface.co/llmware/slim-qa-gen-phi-3-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
 
 
  ## Model Card Contact
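
For reference beyond the diff, here is a minimal end-to-end sketch of the usage described in the updated card. The passage is a made-up example, the printed output shape is illustrative only (the card says the model returns a python dictionary with two keys, assumed here to be 'question' and 'answer'), and it assumes llmware is installed and registers the tool under the name shown above.

    # minimal sketch - assumptions: llmware installed; "slim-qa-gen-phi-3-tool" available in ModelCatalog;
    # the output keys 'question'/'answer' are illustrative, not confirmed by the card text shown here
    from llmware.models import ModelCatalog

    # downloads (on first use) and loads the quantized GGUF tool for local inference
    model = ModelCatalog().load_model("slim-qa-gen-phi-3-tool")

    # hypothetical context passage
    text_sample = ("The company reported third-quarter revenue of $4.2 billion, up 8% "
                   "year-over-year, driven primarily by growth in its cloud services unit.")

    # 'question, answer' asks for a standard question-and-answer pair about the passage
    response = model.function_call(text_sample, params=["question, answer"])

    # illustrative shape: {'question': ['What was the third-quarter revenue?'], 'answer': ['$4.2 billion']}
    print(response)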
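
The three parameter values named in the card can be compared on the same passage; again a sketch under the same assumptions as above, not a definitive recipe.

    # sketch: run one hypothetical passage through each of the three generation modes
    from llmware.models import ModelCatalog

    model = ModelCatalog().load_model("slim-qa-gen-phi-3-tool")

    text_sample = ("The Treaty of Tordesillas, signed in 1494, divided newly claimed lands "
                   "outside Europe between Portugal and Castile.")

    for mode in ["question, answer", "boolean", "multiple choice"]:
        # each value guides the type of question-answer pair the model generates
        response = model.function_call(text_sample, params=[mode])
        print(mode, "->", response)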
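
For the 'favorite GGUF inference engine' route, a rough llama-cpp-python sketch follows; the GGUF filename and the prompt text are placeholders (the actual file name comes from the repository, and the prompt wrapping should be taken from config.json as noted in the card), so treat this as the shape of the call rather than a working recipe.

    # rough sketch only - the model_path filename and prompt wrapping below are placeholders
    from llama_cpp import Llama

    # point at the GGUF file pulled with snapshot_download above (placeholder filename)
    llm = Llama(model_path="/path/on/your/machine/slim-qa-gen-phi-3.gguf", n_ctx=2048)

    # build the prompt according to the template documented in config.json (placeholder text here)
    prompt = "...context passage wrapped per the config.json prompt template..."

    output = llm(prompt, max_tokens=200)
    print(output["choices"][0]["text"])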