---
license: cc-by-sa-4.0
inference: false
---

# SLIM-XSUM

<!-- Provide a quick summary of what the model is/does. -->

**slim-xsum** implements an 'extreme summarization' function as a function call on a decoder-based LLM, generating as output a python dictionary of the form:

    {'xsum': ['This is a short text summary or headline.']}


The intent of SLIMs is to forge a middle ground between traditional encoder-based classifiers and open-ended API-based LLMs, providing an intuitive, flexible natural language response, without complex prompting, and with improved generalization and the ability to fine-tune to a specific domain use case.  

This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which, in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t.

Each slim model has a 'quantized tool' version, e.g.,  [**'slim-xsum-tool'**](https://huggingface.co/llmware/slim-xsum-tool).  


## Prompt format:

    function = "classify"
    params = "xsum"
    prompt = "<human>: " + {text} + "\n" + "<{function}> " + {params} + " </{function}>" + "\n<bot>:"
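With the values above filled in, the assembled prompt looks like this:

    <human>: {text}
    <classify> xsum </classify>
    <bot>: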


<details>
<summary>Transformers Script</summary>

    import ast
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("llmware/slim-xsum")
    tokenizer = AutoTokenizer.from_pretrained("llmware/slim-xsum")

    function = "classify"
    params = "xsum"

    text = "DeepMind, the UK-based AI lab owned by Google’s parent company Alphabet, has developed an AI system called AlphaGeometry that can solve complex geometry problems close to human Olympiad gold medalists. In a new paper in Nature, DeepMind revealed that AlphaGeometry was able to solve 25 out of 30 benchmark geometry problems from past International Mathematical Olympiad (IMO) competitions within the standard time limits. This nearly matches the average score of 26 problems solved by human gold medalists on the same tests.  The AI system combines a neural language model with a rule-bound deduction engine, providing a synergy that enables the system to find solutions to complex geometry theorems.  AlphaGeometry took a revolutionary approach to synthetic data generation by creating one billion random diagrams of geometric objects and deriving relationships between points and lines in each diagram. This process – termed “symbolic deduction and traceback” – resulted in a final training dataset of 100 million unique examples, providing a rich source for training the AI system."  
    
    prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

    inputs = tokenizer(prompt, return_tensors="pt")
    start_of_input = len(inputs.input_ids[0])

    outputs = model.generate(
        inputs.input_ids.to('cpu'),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100
    )

    output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

    print("output only: ", output_only)  

    # here's the fun part - try to convert the llm string output into a python dictionary
    try:
        output_only = ast.literal_eval(output_only)
        print("success - converted to python dictionary automatically")
    except (ValueError, SyntaxError):
        print("fail - could not convert to python dictionary automatically - ", output_only)

</details>
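Note that with `do_sample=True`, the generated string can vary from run to run even at a low temperature; for fully deterministic function-call output, `do_sample=False` (greedy decoding) is a common alternative.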
 
<details>
<summary>Using as Function Call in LLMWare</summary>

    from llmware.models import ModelCatalog

    # 'text' is the passage to summarize, e.g., the example passage above
    slim_model = ModelCatalog().load_model("llmware/slim-xsum")
    response = slim_model.function_call(text, params=["xsum"], function="classify")

    print("llmware - llm_response: ", response)

</details>  
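The same interface should also work with the quantized tool version. A minimal sketch, assuming that 'slim-xsum-tool' is registered in the catalog under that name and exposes the same function-call API:

    from llmware.models import ModelCatalog

    # load the quantized GGUF 'tool' version (assumed catalog name)
    slim_tool = ModelCatalog().load_model("slim-xsum-tool")
    response = slim_tool.function_call(text, params=["xsum"], function="classify")

    print("llmware - llm_response: ", response)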

    
## Model Card Contact

Darren Oberst & llmware team  

[Join us on Discord](https://discord.gg/MhZn5Nc39h)