---
tags:
- text-classification
base_model: cross-encoder/nli-roberta-base
widget:
- text: I love AutoTrain
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: zero-shot-classification
library_name: transformers
---
# LogicSpine/address-base-text-classifier
## Model Description
`LogicSpine/address-base-text-classifier` is a fine-tuned version of `cross-encoder/nli-roberta-base`, designed for address classification via zero-shot learning. It lets you classify address- and location-related text without training on every possible label.
## Model Usage
### Installation
To use this model, you need to install the `transformers` library:
```bash
pip install transformers torch
```
### Loading the Model
You can easily load and use this model for zero-shot classification using Hugging Face's pipeline API.
```python
from transformers import pipeline
# Load the zero-shot classification pipeline with the custom model
classifier = pipeline("zero-shot-classification",
                      model="LogicSpine/address-base-text-classifier")
# Define your input text and candidate labels
text = "Delhi, India"
candidate_labels = ["Country", "Department", "Laboratory", "College", "District", "Academy"]
# Perform classification
result = classifier(text, candidate_labels)
# Print the classification result
print(result)
```
## Example Output
```
{'labels': ['Country',
'District',
'Academy',
'College',
'Department',
'Laboratory'],
'scores': [0.19237062335014343,
0.1802321970462799,
0.16583585739135742,
0.16354037821292877,
0.1526614874601364,
0.14535939693450928],
'sequence': 'Delhi, India'}
```
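The pipeline returns labels sorted by descending score, so the top prediction is the first entry. A minimal sketch of post-processing the result dict shown above (scores rounded here for brevity):

```python
# Example result dict, as returned by the zero-shot classification pipeline
# (labels are sorted by descending score).
result = {
    "labels": ["Country", "District", "Academy",
               "College", "Department", "Laboratory"],
    "scores": [0.1924, 0.1802, 0.1658, 0.1635, 0.1527, 0.1454],
    "sequence": "Delhi, India",
}

# The top prediction is the first label; zip pairs each label with its score.
top_label, top_score = result["labels"][0], result["scores"][0]
print(f"{top_label}: {top_score:.3f}")  # → Country: 0.192

# Full label-to-score mapping, useful for thresholding or logging.
label_scores = dict(zip(result["labels"], result["scores"]))
```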
## Validation Metrics
- **loss:** `0.28241145610809326`
- **f1_macro:** `0.8093855588593053`
- **f1_micro:** `0.9515418502202643`
- **f1_weighted:** `0.949198754683482`
- **precision_macro:** `0.8090277777777778`
- **precision_micro:** `0.9515418502202643`
- **precision_weighted:** `0.9473201174743024`
- **recall_macro:** `0.8100845864661653`
- **recall_micro:** `0.9515418502202643`
- **recall_weighted:** `0.9515418502202643`
- **accuracy:** `0.9515418502202643`