---
license: cc-by-sa-4.0
inference: false
---

# SLIM-BOOLEAN


**slim-boolean** is an experimental model that implements a boolean (yes/no) question-answering function call using a specialized 2.7B parameter model.  As input, the model takes a context passage, a yes-no question, and an optional `(explain)` parameter; as output, it generates a Python dictionary with two keys - 'answer', which contains the 'yes/no' classification, and 'explanation', which provides a text snippet from the passage that was the basis for the classification, e.g.:  

&nbsp;&nbsp;&nbsp;&nbsp;`{'answer': ['yes'], 'explanation': ['the results exceeded expectations by 3%']}`  

This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which, in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t.

For fast inference, we recommend using the 'quantized tool' version, e.g., [**'slim-boolean-tool'**](https://huggingface.co/llmware/slim-boolean-tool).  


## Prompt format:

`function = "boolean"`  
`params = "{insert yes-no-question} (explain)"`  
`prompt = "<human>: " + {text} + "\n" + `  
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp;`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`  
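
For example, with a concrete passage and question filled in (both hypothetical), the assembled prompt string looks like this:

    # a minimal sketch of the assembled prompt, using a hypothetical passage and question
    function = "boolean"
    params = "did revenue increase? (explain)"
    text = "The company reported that revenue grew 12% year-over-year."

    prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

    # prompt now contains:
    # <human>: The company reported that revenue grew 12% year-over-year.
    # <boolean> did revenue increase? (explain) </boolean>
    # <bot>: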


<details>
<summary>Transformers Script</summary>

    import ast
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("llmware/slim-boolean")
    tokenizer = AutoTokenizer.from_pretrained("llmware/slim-boolean")

    function = "boolean"
    params = "did tesla stock price increase? (explain) "

    text = "Tesla stock declined yesterday 8% in premarket trading after a poorly-received event in San Francisco yesterday, in which the company indicated a likely shortfall in revenue."  
    
    prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

    inputs = tokenizer(prompt, return_tensors="pt")

    # record the prompt length so that only newly generated tokens are decoded below
    start_of_input = len(inputs.input_ids[0])

    outputs = model.generate(
        inputs.input_ids.to('cpu'),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100
    )

    output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

    print("output only: ", output_only)  

    # here's the fun part - convert the llm string output into a python dictionary
    try:
        output_only = ast.literal_eval(output_only)
        print("success - converted to python dictionary automatically")
    except (ValueError, SyntaxError):
        print("fail - could not convert to python dictionary automatically - ", output_only)
   
</details>
 
<details>
<summary>Using as Function Call in LLMWare</summary>

    from llmware.models import ModelCatalog

    # 'text' is the context passage, e.g., the Tesla example in the script above
    slim_model = ModelCatalog().load_model("llmware/slim-boolean")
    response = slim_model.function_call(text, params=["did the stock price increase? (explain)"], function="boolean")

    print("llmware - llm_response: ", response)

</details>  
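
The quantized 'tool' version mentioned above can be loaded in the same way. A minimal sketch, assuming the GGUF build is registered in the llmware ModelCatalog under the name of the linked repo:

    from llmware.models import ModelCatalog

    # load the quantized gguf 'tool' version - catalog name assumed per the
    # slim-boolean-tool repo linked above
    tool_model = ModelCatalog().load_model("slim-boolean-tool")

    text = "Tesla stock declined yesterday 8% in premarket trading."
    response = tool_model.function_call(text, params=["did the stock price increase? (explain)"], function="boolean")

    print("llmware - llm_response: ", response)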

## Model Card Contact

Darren Oberst & llmware team  

[Join us on Discord](https://discord.gg/MhZn5Nc39h)