doberst committed
Commit 92267d7 (parent: 292deda)

Update README.md

Files changed (1): README.md (+10 −5)
README.md CHANGED
@@ -13,12 +13,17 @@ This model implements a generative 'question' and 'answer' (e.g., 'qa-gen') func

 `{'question': ['What was the amount of revenue in the quarter?'], 'answer': ['$3.2 billion']}`

- The model has been designed to accept one of three different parameters to guide the type of question-answer created: 'question, answer' (generates a standard question and answer), 'boolean' (generates a 'yes-no' question and answer), and 'multiple choice' (generates a multiple choice question and answer).

- slim-qa-gen-tiny-tool is a fine-tune of a tinyllama (1b) parameter model, designed for fast, local deployment and rapid testing and prototyping. Please also see slim-qa-gen-phi-3-tool, which is finetune of phi-3, and will provide higher-quality results, at the trade-off of slightly slower performance and requiring more memory.

- [**slim-qa-gen-tiny*](https://huggingface.co/llmware/slim-qa-gen-tiny) is the Pytorch version of the model, and suitable for fine-tuning for further domain adaptation.

 To pull the model via API:
@@ -32,14 +37,14 @@ Load in your favorite GGUF inference engine, or try with llmware as follows:

 from llmware.models import ModelCatalog

 # to load the model and make a basic inference
- model = ModelCatalog().load_model("slim-qa-gen-tiny-tool")
 response = model.function_call(text_sample)

 # this one line will download the model and run a series of tests
 ModelCatalog().tool_test_run("slim-qa-gen-tiny-tool", verbose=True)

- Note: please review [**config.json**](https://huggingface.co/llmware/slim-xsum-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.

 ## Model Card Contact
 
 `{'question': ['What was the amount of revenue in the quarter?'], 'answer': ['$3.2 billion']}`

+ The model has been designed to accept one of three different parameters to guide the type of question-answer created:

+ -- 'question, answer' (generates a standard question and answer),
+ -- 'boolean' (generates a 'yes-no' question and answer), and
+ -- 'multiple choice' (generates a multiple choice question and answer).
+

+ slim-qa-gen-tiny-tool is a fine-tune of a TinyLlama 1B-parameter model, designed for fast, local deployment and rapid testing and prototyping. Please also see [slim-qa-gen-phi-3-tool](https://huggingface.co/llmware/slim-qa-gen-phi-3-tool), which is a fine-tune of phi-3 and will provide higher-quality results, at the trade-off of slightly slower performance and higher memory requirements.

+
+ [**slim-qa-gen-tiny**](https://huggingface.co/llmware/slim-qa-gen-tiny) is the PyTorch version of the model, suitable for fine-tuning for further domain adaptation.

 To pull the model via API:
 
 from llmware.models import ModelCatalog

 # to load the model and make a basic inference
+ model = ModelCatalog().load_model("slim-qa-gen-tiny-tool", temperature=0.5, sample=True)
 response = model.function_call(text_sample)

 # this one line will download the model and run a series of tests
 ModelCatalog().tool_test_run("slim-qa-gen-tiny-tool", verbose=True)

+ Note: please review [**config.json**](https://huggingface.co/llmware/slim-qa-gen-tiny-tool/blob/main/config.json) in the repository for prompt template information, details on the model, and the full test set.
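As the example at the top of the card shows, the model generates a Python-style dict (single-quoted) with `question` and `answer` keys, each holding a list. As a minimal illustrative sketch, independent of llmware (the `raw` string below is just the card's example, not live model output), the string form can be parsed with the standard library:

```python
import ast

# Example qa-gen output string, copied from the model card example.
raw = "{'question': ['What was the amount of revenue in the quarter?'], 'answer': ['$3.2 billion']}"

# ast.literal_eval safely parses the single-quoted, Python-style dict
# (json.loads would reject the single quotes).
qa = ast.literal_eval(raw)

question = qa["question"][0]   # first generated question
answer = qa["answer"][0]       # its paired answer
```

Note that when llmware loads the model as a function-calling tool, `function_call` already returns a structured response, so this kind of manual parsing is only needed when working with the raw string output.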
  ## Model Card Contact