mgrella committed on
Commit f31dc13
1 Parent(s): 7b31fe8

Update README.md

Files changed (1)
  1. README.md +110 -46
README.md CHANGED
@@ -1,52 +1,116 @@
  ---
- tags: autotrain
- language: unk
- widget:
- - text: "I love AutoTrain 🤗"
- datasets:
- - EXOP/autotrain-data-exop-msc-flat-categories-multilingual
- co2_eq_emissions: 652.3729662301374
  ---

- # Model Trained Using AutoTrain

- - Problem type: Multi-class Classification
- - Model ID: 1147942216
- - CO2 Emissions (in grams): 652.3729662301374

- ## Validation Metrics

- - Loss: 0.4508252441883087
- - Accuracy: 0.8882102517882141
- - Macro F1: 0.7681095738330185
- - Micro F1: 0.8882102517882141
- - Weighted F1: 0.8873062298114072
- - Macro Precision: 0.8125021386404774
- - Micro Precision: 0.8882102517882141
- - Weighted Precision: 0.8875709606885154
- - Macro Recall: 0.7429489567097202
- - Micro Recall: 0.8882102517882141
- - Weighted Recall: 0.8882102517882141
-
-
- ## Usage
-
- You can use cURL to access this model:
-
- ```
- $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/EXOP/autotrain-exop-msc-flat-categories-multilingual-1147942216
- ```
-
- Or Python API:
-
- ```
- from transformers import AutoModelForSequenceClassification, AutoTokenizer
-
- model = AutoModelForSequenceClassification.from_pretrained("EXOP/autotrain-exop-msc-flat-categories-multilingual-1147942216", use_auth_token=True)
-
- tokenizer = AutoTokenizer.from_pretrained("EXOP/autotrain-exop-msc-flat-categories-multilingual-1147942216", use_auth_token=True)
-
- inputs = tokenizer("I love AutoTrain", return_tensors="pt")
-
- outputs = model(**inputs)
- ```
 
  ---
+ license: apache-2.0
+ language:
+ - multilingual
+ - af
+ - sq
+ - ar
+ - an
+ - hy
+ - ast
+ - az
+ - ba
+ - eu
+ - bar
+ - be
+ - bn
+ - inc
+ - bs
+ - br
+ - bg
+ - my
+ - ca
+ - ceb
+ - ce
+ - zh
+ - cv
+ - hr
+ - cs
+ - da
+ - nl
+ - en
+ - et
+ - fi
+ - fr
+ - gl
+ - ka
+ - de
+ - el
+ - gu
+ - ht
+ - he
+ - hi
+ - hu
+ - is
+ - io
+ - id
+ - ga
+ - it
+ - ja
+ - jv
+ - kn
+ - kk
+ - ky
+ - ko
+ - la
+ - lv
+ - lt
+ - roa
+ - nds
+ - lm
+ - mk
+ - mg
+ - ms
+ - ml
+ - mr
+ - min
+ - ne
+ - new
+ - nb
+ - nn
+ - oc
+ - fa
+ - pms
+ - pl
+ - pt
+ - pa
+ - ro
+ - ru
+ - sco
+ - sr
+ - hr
+ - scn
+ - sk
+ - sl
+ - aze
+ - es
+ - su
+ - sw
+ - sv
+ - tl
+ - tg
+ - ta
+ - tt
+ - te
+ - tr
+ - uk
+ - ud
+ - uz
+ - vi
+ - vo
+ - war
+ - cy
+ - fry
+ - pnb
+ - yo
+ tags:
+ - text-classification
  ---
 
+ # bert-multilingual-uncased-intelligence-headlines

+ This is a bert-base-multilingual-uncased model fine-tuned to classify news headlines according to an intelligence taxonomy.
+ ### Authors

+ The [NLP Odyssey](https://github.com/nlpodyssey/) Authors