Update README.md
README.md
@@ -210,9 +210,9 @@ for match in matches:
 *Dataset: WiRe57_343-manual-oie*
 | Model | Precision | Recall | F1 Score |
 |:-----------------------|------------:|---------:|-----------:|
-| knowledgator/gliner-llama-multitask-1B-v1.0 | 0.
-| knowledgator/gliner-multitask-v0.5 | 0.
-| knowledgator/gliner-multitask-v1.0 | 0.
+| knowledgator/gliner-llama-multitask-1B-v1.0 | 0.9047 | 0.2794 | 0.4269 |
+| knowledgator/gliner-multitask-v0.5 | 0.9278 | 0.2779 | 0.4287 |
+| knowledgator/gliner-multitask-v1.0 | 0.8775 | 0.2733 | 0.4168 |
 
 ---
 
@@ -235,7 +235,6 @@ for answer in answers:
 
 ### Performance:
 *Dataset: SQuAD 2.0*
-
 | Model | Precision | Recall | F1 Score |
 |:-----------------------|------------:|---------:|-----------:|
 | knowledgator/gliner-llama-multitask-1B-v1.0 | 0.578296 | 0.795821 | 0.669841 |
@@ -264,22 +263,12 @@ labels = ["summary"]
 
 input_ = prompt+text
 
-threshold = 0.
+threshold = 0.1
 summaries = model.predict_entities(input_, labels, threshold=threshold)
 
 for summary in summaries:
     print(summary["text"], "=>", summary["score"])
 ```
-
-### Performance:
-*Dataset: SQuAD 2.0*
-
-| Model | BLEU | ROUGE1 | ROUGE2 | ROUGEL | Cosine Similarity |
-|:-----------------------|------------:|----------:|-----------:|----------:|--------------------:|
-| knowledgator/gliner-llama-multitask-1B-v1.0 | 7.9728e-157 | 0.0955005 | 0.00236265 | 0.0738533 | 0.0515591 |
-| knowledgator/gliner-multitask-v0.5 | 1.70326e-06 | 0.0627964 | 0.00203505 | 0.0482932 | 0.0532316 |
-| knowledgator/gliner-multitask-v1.0 | 5.78799e-06 | 0.0878883 | 0.0030312 | 0.0657152 | 0.060342 |
-
 ---
 
 **How to use for text classification:**
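For context on the hunk that touches `threshold`, here is a minimal, self-contained sketch of the summarization usage the diff fragment belongs to. Only `labels = ["summary"]`, `input_ = prompt+text`, the new `threshold = 0.1`, the `predict_entities` call, and the print loop appear in the diff; the checkpoint name (taken from the tables above), the `GLiNER.from_pretrained` loader, the prompt wording, and the sample text are assumptions added for illustration.

```python
# Minimal sketch, not the verbatim README snippet: the checkpoint, prompt wording,
# and sample text are assumed; the remaining lines mirror those shown in the diff.
from gliner import GLiNER

# Assumed checkpoint; any of the models listed in the tables above should work.
model = GLiNER.from_pretrained("knowledgator/gliner-multitask-v1.0")

# Assumed task prompt; the multitask models read the task from this prefix.
prompt = "Summarize the given text, highlighting the most important information:\n"
# Assumed sample passage.
text = (
    "The Eiffel Tower was completed in 1889 as the entrance arch to the "
    "World's Fair and has since become one of the most visited monuments in the world."
)
labels = ["summary"]

input_ = prompt + text

threshold = 0.1  # the value this commit sets
summaries = model.predict_entities(input_, labels, threshold=threshold)

for summary in summaries:
    print(summary["text"], "=>", summary["score"])
```

A threshold as low as 0.1 presumably keeps lower-confidence summary spans that a stricter cutoff would discard.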