Update README.md
---
tags:
- text-classification
base_model: cross-encoder/nli-roberta-base
widget:
- text: I love AutoTrain
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: zero-shot-classification
library_name: transformers
---

# LogicSpine/address-large-text-classifier

## Model Description

`LogicSpine/address-large-text-classifier` is a fine-tuned version of the `cross-encoder/nli-roberta-base` model, designed for address classification using zero-shot learning. It lets you classify text about addresses and locations without training on every possible label.

## Model Usage

### Installation

To use this model, install the `transformers` and `torch` libraries:

```bash
pip install transformers torch
```

### Loading the Model

You can load and use this model for zero-shot classification through Hugging Face's pipeline API.

```python
from transformers import pipeline

# Load the zero-shot classification pipeline with the custom model
classifier = pipeline("zero-shot-classification",
                      model="LogicSpine/address-large-text-classifier")

# Define your input text and candidate labels
text = "Delhi, India"
candidate_labels = ["Country", "Department", "Laboratory", "College", "District", "Academy"]

# Perform classification
result = classifier(text, candidate_labels)

# Print the classification result
print(result)
```
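
If you prefer to manage the tokenizer and model objects yourself (for example, to reuse them across several pipelines), you can load them explicitly and pass them to the pipeline. This is a minimal sketch using the standard `transformers` auto classes, which should apply here since this is a `transformers` sequence-classification checkpoint:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Load tokenizer and model explicitly (assumes the default
# sequence-classification head saved with this checkpoint)
tokenizer = AutoTokenizer.from_pretrained("LogicSpine/address-large-text-classifier")
model = AutoModelForSequenceClassification.from_pretrained("LogicSpine/address-large-text-classifier")

# Reuse the same objects in a zero-shot classification pipeline
classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
```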

## Example Output

```
{'labels': ['Country',
  'District',
  'Academy',
  'College',
  'Department',
  'Laboratory'],
 'scores': [0.19237062335014343,
  0.1802321970462799,
  0.16583585739135742,
  0.16354037821292877,
  0.1526614874601364,
  0.14535939693450928],
 'sequence': 'Delhi, India'}
```
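
The `labels` list comes back sorted by descending score, so the top prediction and its confidence sit at index 0. A small sketch for reading them out:

```python
# Labels are sorted by score, highest first
top_label = result["labels"][0]
top_score = result["scores"][0]
print(f"Predicted: {top_label} ({top_score:.3f})")
```

If a single text can plausibly match several labels at once, the pipeline also accepts `multi_label=True`, which scores each candidate label independently instead of normalizing across them.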

## Validation Metrics

**loss:** 1.3794080018997192

**f1_macro:** 0.21842933805832918

**f1_micro:** 0.4551574223406493

**f1_weighted:** 0.306703002026862

**precision_macro:** 0.19546905037281545

**precision_micro:** 0.4551574223406493

**precision_weighted:** 0.2510467302490216

**recall_macro:** 0.2811753463927377

**recall_micro:** 0.4551574223406493

**recall_weighted:** 0.4551574223406493

**accuracy:** 0.4551574223406493