---
license: mit
language:
- en
base_model: microsoft/Phi-3-mini-4k-instruct
new_version: numind/NuExtract-v1.5
---
> ⚠️ **_NOTE:_**  This model is outdated. Find the updated version [here](https://huggingface.co/numind/NuExtract-v1.5)

# Structure Extraction Model by NuMind 🔥

NuExtract is a version of [phi-3-mini](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), fine-tuned on a private high-quality synthetic dataset for information extraction. 
To use the model, provide an input text (less than 2000 tokens) and a JSON template describing the information you need to extract. 

Note: This model is purely extractive, so every piece of text it outputs appears verbatim in the original input. You can also provide an example of the expected output format to help the model understand your task more precisely.
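
For reference, the prompt the model expects is the JSON template, any optional output examples, and the input text, wrapped between `<|input|>` and `<|output|>` markers. This is exactly the layout built by the `predict_NuExtract` helper in the Usage section below:

```
<|input|>
### Template:
{JSON template describing the fields to extract}
### Example:
{optional example of the expected output, in the same JSON format}
### Text:
{input text}
<|output|>
```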

Try it here: https://huggingface.co/spaces/numind/NuExtract

We also provide tiny (0.5B) and large (7B) versions of this model: [NuExtract-tiny](https://huggingface.co/numind/NuExtract-tiny) and [NuExtract-large](https://huggingface.co/numind/NuExtract-large)

**Check out other models by NuMind:**
* SOTA Zero-shot NER Model [NuNER Zero](https://huggingface.co/numind/NuNER_Zero)
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)


## Benchmark

Zero-shot benchmark (to be released soon):

<p align="left">
<img src="result.png" width="600">
</p>

Fine-tuning benchmark (see the blog post):

<p align="left">
<img src="result_ft.png" width="600">
</p>


## Usage

To use the model:

```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def predict_NuExtract(model, tokenizer, text, schema, example=["", "", ""]):
    # Normalize the JSON template and build the prompt in the expected format.
    schema = json.dumps(json.loads(schema), indent=4)
    input_llm = "<|input|>\n### Template:\n" + schema + "\n"

    # Optionally include up to three output-formatting examples (empty strings are skipped).
    for i in example:
        if i != "":
            input_llm += "### Example:\n" + json.dumps(json.loads(i), indent=4) + "\n"

    input_llm += "### Text:\n" + text + "\n<|output|>\n"
    input_ids = tokenizer(input_llm, return_tensors="pt", truncation=True, max_length=4000).to("cuda")

    # Generate, then keep only the part between <|output|> and <|end-output|>.
    output = tokenizer.decode(model.generate(**input_ids)[0], skip_special_tokens=True)
    return output.split("<|output|>")[1].split("<|end-output|>")[0]


# We recommend using bf16, as it results in negligible performance loss
model = AutoModelForCausalLM.from_pretrained("numind/NuExtract", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract", trust_remote_code=True)

model.to("cuda")

model.eval()

text = """We introduce Mistral 7B, a 7–billion-parameter language model engineered for
superior performance and efficiency. Mistral 7B outperforms the best open 13B
model (Llama 2) across all evaluated benchmarks, and the best released 34B
model (Llama 1) in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding
window attention (SWA) to effectively handle sequences of arbitrary length with a
reduced inference cost. We also provide a model fine-tuned to follow instructions,
Mistral 7B – Instruct, that surpasses Llama 2 13B – chat model both on human and
automated benchmarks. Our models are released under the Apache 2.0 license.
Code: https://github.com/mistralai/mistral-src
Webpage: https://mistral.ai/news/announcing-mistral-7b/"""

schema = """{
    "Model": {
        "Name": "",
        "Number of parameters": "",
        "Number of max token": "",
        "Architecture": []
    },
    "Usage": {
        "Use case": [],
        "Licence": ""
    }
}"""

prediction = predict_NuExtract(model, tokenizer, text, schema, example=["","",""])
print(prediction)

```
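
To use the `example` argument mentioned above, pass one or more JSON strings showing the expected output format. The sketch below is a hypothetical illustration (the example values are made up) reusing the same schema:

```python
# Hypothetical output-formatting example (values are illustrative only).
# It must be valid JSON, since predict_NuExtract re-serializes it with json.loads/json.dumps.
example_output = """{
    "Model": {
        "Name": "Llama 2",
        "Number of parameters": "13 billion",
        "Number of max token": "4096",
        "Architecture": ["transformer"]
    },
    "Usage": {
        "Use case": ["chat"],
        "Licence": "Llama 2 Community License"
    }
}"""

# Empty slots in the 3-element list are skipped by the helper.
prediction = predict_NuExtract(model, tokenizer, text, schema, example=[example_output, "", ""])
print(prediction)
```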