5m4ck3r committed
Commit 0216a6c
1 Parent(s): adfcbbc

Update README.md

Files changed (1)
  1. README.md +58 -14
README.md CHANGED
@@ -14,29 +14,73 @@ pipeline_tag: zero-shot-classification
  library_name: transformers
  ---

- # Model Trained Using AutoTrain
-
  - Problem type: Text Classification

- ## Validation Metrics
- loss: 0.28241145610809326
- f1_macro: 0.8093855588593053
- f1_micro: 0.9515418502202643
- f1_weighted: 0.949198754683482
- precision_macro: 0.8090277777777778
- precision_micro: 0.9515418502202643
- precision_weighted: 0.9473201174743024
- recall_macro: 0.8100845864661653
- recall_micro: 0.9515418502202643
- recall_weighted: 0.9515418502202643
- accuracy: 0.9515418502202643
+ # LogicSpine/roberta-base-Address-classifier
+
+ ## Model Description
+ `LogicSpine/roberta-base-Address-classifier` is a fine-tuned version of `cross-encoder/nli-roberta-base`, designed for address classification with zero-shot learning: it can classify address- and location-related text against arbitrary candidate labels without direct training on every possible label.
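+
+ For intuition, here is a rough sketch of what NLI-based zero-shot classification does under the hood: each candidate label is turned into a hypothesis (the `transformers` default template is `"This example is {}."`) and scored for entailment against the input text. The block below is illustrative only and simplifies the pipeline's score normalization; the `pipeline` API shown in the usage section below handles all of this for you.
+
+ ```python
+ # Illustrative sketch of NLI-based zero-shot scoring (simplified; the real
+ # zero-shot pipeline normalizes scores slightly differently).
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "LogicSpine/roberta-base-Address-classifier"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+
+ text = "Delhi, India"                          # premise
+ labels = ["Country", "District", "College"]    # candidate labels
+ hypotheses = [f"This example is {label}." for label in labels]
+
+ # Look up the entailment class from the config instead of hard-coding it,
+ # because label order differs between NLI checkpoints.
+ entail_id = next(i for i, name in model.config.id2label.items()
+                  if "entail" in name.lower())
+
+ inputs = tokenizer([text] * len(labels), hypotheses,
+                    return_tensors="pt", padding=True, truncation=True)
+ with torch.no_grad():
+     logits = model(**inputs).logits            # one row per (text, hypothesis) pair
+ scores = logits.softmax(dim=-1)[:, entail_id]
+
+ for label, score in sorted(zip(labels, scores.tolist()), key=lambda x: -x[1]):
+     print(f"{label}: {score:.3f}")
+ ```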
+
+ ## Model Usage
+
+ ### Installation
+
+ To use this model, install the `transformers` library together with `torch`:
+
+ ```bash
+ pip install transformers torch
+ ```
+
+ ### Loading the Model
+
+ You can load and use this model for zero-shot classification through Hugging Face's `pipeline` API.
+
+ ```python
+ from transformers import pipeline
+
+ # Load the zero-shot classification pipeline with the custom model
+ classifier = pipeline("zero-shot-classification",
+                       model="LogicSpine/roberta-base-Address-classifier")
+
+ # Define your input text and candidate labels
+ text = "Delhi, India"
+ candidate_labels = ["Country", "Department", "Laboratory", "College", "District", "Academy"]
+
+ # Perform classification
+ result = classifier(text, candidate_labels)
+
+ # Print the classification result
+ print(result)
+ ```
+
+ ## Example Output
+
+ ```
+ {'labels': ['Country',
+             'District',
+             'Academy',
+             'College',
+             'Department',
+             'Laboratory'],
+  'scores': [0.19237062335014343,
+             0.1802321970462799,
+             0.16583585739135742,
+             0.16354037821292877,
+             0.1526614874601364,
+             0.14535939693450928],
+  'sequence': 'Delhi, India'}
+ ```
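+
+ The `labels` list comes back sorted by descending score (aligned with `scores`), so the top prediction can be read directly from the result. A small follow-up to the usage example above:
+
+ ```python
+ # result["labels"] is sorted best-first, aligned with result["scores"]
+ top_label = result["labels"][0]
+ top_score = result["scores"][0]
+ print(f"Predicted: {top_label} ({top_score:.2%})")  # e.g. Predicted: Country (19.24%)
+ ```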
+
+ ## Validation Metrics
+
+ - **loss:** `0.28241145610809326`
+ - **f1_macro:** `0.8093855588593053`
+ - **f1_micro:** `0.9515418502202643`
+ - **f1_weighted:** `0.949198754683482`
+ - **precision_macro:** `0.8090277777777778`
+ - **precision_micro:** `0.9515418502202643`
+ - **precision_weighted:** `0.9473201174743024`
+ - **recall_macro:** `0.8100845864661653`
+ - **recall_micro:** `0.9515418502202643`
+ - **recall_weighted:** `0.9515418502202643`
+ - **accuracy:** `0.9515418502202643`
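+
+ The gap between the macro scores (about 0.81) and the micro/weighted scores (about 0.95) suggests performance is uneven across classes, which typically points to class imbalance; micro-averaged precision, recall, and F1 coincide with accuracy for single-label multiclass problems, as the numbers above show. The following minimal sketch, using `scikit-learn` and purely hypothetical `y_true`/`y_pred` arrays (not the actual validation data), illustrates how such figures are computed:
+
+ ```python
+ # Hypothetical labels, only to illustrate macro vs. micro vs. weighted averaging.
+ from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
+
+ y_true = ["Country", "Country", "Country", "District", "District", "College"]
+ y_pred = ["Country", "Country", "Country", "District", "Country", "College"]
+
+ print("accuracy       :", accuracy_score(y_true, y_pred))
+ print("f1_macro       :", f1_score(y_true, y_pred, average="macro"))
+ print("f1_micro       :", f1_score(y_true, y_pred, average="micro"))  # equals accuracy here
+ print("f1_weighted    :", f1_score(y_true, y_pred, average="weighted"))
+ print("precision_macro:", precision_score(y_true, y_pred, average="macro"))
+ print("recall_macro   :", recall_score(y_true, y_pred, average="macro"))
+ ```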