---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
---

# Model Card for cssupport/bert-news-class


Based on [bert-base-uncased](https://huggingface.co/bert-base-uncased), this model takes a news summary, headline, or article and classifies it into one of 41 categories (listed below). The model was trained on the [News Category Dataset](https://www.kaggle.com/datasets/rmisra/news-category-dataset) from [Kaggle](https://www.kaggle.com), provided by [rishabhmisra.github.io/publications](https://rishabhmisra.github.io/publications) (please cite this dataset when using this model).
Contact us for more info: support@cloudsummary.com.
### Output labels

arts = 0, arts & culture = 1, black voices = 2, business = 3, college = 4, comedy = 5, crime = 6, culture & arts = 7, education = 8, entertainment = 9, environment = 10, fifty = 11, food & drink = 12, good news = 13, green = 14, healthy living = 15, home & living = 16, impact = 17, latino voices = 18, media = 19, money = 20, parenting = 21, parents = 22, politics = 23, queer voices = 24, religion = 25, science = 26, sports = 27, style = 28, style & beauty = 29, taste = 30, tech = 31, the worldpost = 32, travel = 33, u.s. news = 34, weddings = 35, weird news = 36, wellness = 37, women = 38, world news = 39, worldpost = 40
## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->



- **Developed by:** CloudSummary (support@cloudsummary.com)
- **Model type:** Text classification model (BERT fine-tuned for sequence classification)
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** [bert-base-uncased](https://huggingface.co/bert-base-uncased)

### Model Sources 

<!-- Provide the basic links for the model. -->

Please refer to [bert-base-uncased](https://huggingface.co/bert-base-uncased) for model sources.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained("cssupport/bert-news-class").to(device)
model.eval()  # inference mode: disables dropout

# Mapping from class index to category name (41 labels, ids 0-40)
id_to_class = {0: 'arts', 1: 'arts & culture', 2: 'black voices', 3: 'business', 4: 'college', 5: 'comedy', 6: 'crime', 7: 'culture & arts', 8: 'education', 9: 'entertainment', 10: 'environment', 11: 'fifty', 12: 'food & drink', 13: 'good news', 14: 'green', 15: 'healthy living', 16: 'home & living', 17: 'impact', 18: 'latino voices', 19: 'media', 20: 'money', 21: 'parenting', 22: 'parents', 23: 'politics', 24: 'queer voices', 25: 'religion', 26: 'science', 27: 'sports', 28: 'style', 29: 'style & beauty', 30: 'taste', 31: 'tech', 32: 'the worldpost', 33: 'travel', 34: 'u.s. news', 35: 'weddings', 36: 'weird news', 37: 'wellness', 38: 'women', 39: 'world news', 40: 'worldpost'}

def predict(text):
    # Tokenize the input text
    inputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=512, padding='max_length').to(device)
    # Forward pass without gradient tracking
    with torch.no_grad():
        logits = model(inputs['input_ids'], inputs['attention_mask'])[0]
    # Index of the highest-scoring class
    pred_class_idx = torch.argmax(logits, dim=1).item()
    return id_to_class[pred_class_idx]


text = "The UK’s growing debt burden puts it on shaky ground ahead of upcoming assessments by the three main credit ratings agencies. A downgrade to its credit rating, which is a reflection of a country’s creditworthiness, could raise borrowing costs further still, although the impact may be limited."
predicted_class = predict(text)
print(predicted_class)
# OUTPUT: business
```
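If class probabilities are more useful than a single label, the logits can be passed through a softmax. The following is a minimal sketch; the `predict_topk` helper and its `k=3` default are illustrative, not part of the released model:

```python
import torch.nn.functional as F

def predict_topk(text, k=3):
    # Same preprocessing as predict() above
    inputs = tokenizer(text, return_tensors='pt', truncation=True,
                       max_length=512, padding='max_length').to(device)
    with torch.no_grad():
        logits = model(inputs['input_ids'], inputs['attention_mask'])[0]
    # Softmax over the 41 logits yields one probability per category
    probs = F.softmax(logits, dim=1).squeeze(0)
    top = torch.topk(probs, k)
    return [(id_to_class[i.item()], round(p.item(), 4))
            for p, i in zip(top.values, top.indices)]

print(predict_topk(text))
# e.g. [('business', ...), ('money', ...), ...] -- scores depend on the input
```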


## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

[More Information Needed]

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Could be used in applications that need to automatically categorize news headlines, summaries, or articles, e.g. for content routing, tagging, or analytics; a sketch follows below.
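For example, a hedged sketch of batch categorization reusing the `predict` function from the getting-started snippet (the headlines here are illustrative):

```python
# Illustrative only: tag a few example headlines with their predicted category
headlines = [
    "Stocks rally as inflation cools more than expected",
    "Scientists discover water ice near the lunar south pole",
    "Quarterback injury shakes up playoff picture",
]
for h in headlines:
    print(f"{predict(h):<15} | {h}")
```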



### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.



## Technical Specifications 

### Model Architecture and Objective

[bert-base-uncased](https://huggingface.co/bert-base-uncased) fine-tuned with a sequence-classification head (one output logit per label, 41 labels).
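The exact fine-tuning configuration is not published; a minimal sketch of how such a head is typically attached, where `num_labels=41` matches the label list above and is an assumption about the training setup:

```python
from transformers import BertForSequenceClassification

# Assumption: the classifier head was created along these lines during
# fine-tuning, with one output logit per label (41 labels, ids 0-40)
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=41,
)
```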

### Compute Infrastructure



#### Hardware

One NVIDIA P6000 GPU; fine-tuning took about 45 minutes.

#### Software

PyTorch and Hugging Face Transformers. Fine-tuning used the AdamW optimizer with NVIDIA AMP (mixed precision).
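The details above (AdamW with NVIDIA AMP) suggest a training loop along these lines. This is a minimal sketch, not the exact script used; the learning rate and the `train_loader` yielding `(texts, labels)` batches are assumptions:

```python
import torch
from torch.optim import AdamW
from torch.cuda.amp import GradScaler, autocast

# Minimal AdamW + mixed-precision fine-tuning sketch. Assumes `model`,
# `tokenizer`, and `device` from the getting-started snippet, and an
# illustrative `train_loader` that yields (texts, labels) batches.
optimizer = AdamW(model.parameters(), lr=2e-5)  # assumed learning rate
scaler = GradScaler()

model.train()
for texts, labels in train_loader:
    inputs = tokenizer(list(texts), return_tensors="pt", truncation=True,
                       max_length=512, padding=True).to(device)
    labels = labels.to(device)
    optimizer.zero_grad()
    with autocast():  # AMP: run the forward pass in mixed precision
        outputs = model(**inputs, labels=labels)
    scaler.scale(outputs.loss).backward()
    scaler.step(optimizer)
    scaler.update()
```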

### Citation

Misra, Rishabh. "News Category Dataset." arXiv preprint arXiv:2209.11429 (2022).
Misra, Rishabh and Jigyasa Grover. "Sculpting Data for ML: The first act of Machine Learning." ISBN 9798585463570 (2021).
Tandon, Karan. "This LLM is based on BERT (2018) a bidirectional Transformer. **cssupport/bert-news-class** was finetuned using AdamW with the help of NVIDIA AMP and trained in 45 minutes on one P6000 GPU. This model accepts news summary/news headlines/news article and classifies into one of 40 categories"