Updated the model card
README.md
CHANGED
---
library_name: transformers
tags:
- low-resource-languages
- NLP
- Hausa
license: mit
datasets:
- HausaNLP/NaijaSenti-Twitter
language:
- ha
metrics:
- accuracy
---

# Model Card for fine-tuned-gemma-7b-Hausa

This model is a fine-tuned version of the Gemma 7B large language model, specifically optimized for Hausa-language sentiment analysis. It classifies text into positive, neutral, and negative sentiments and can be used for tasks like social media monitoring, customer feedback analysis, and any sentiment-related task involving Hausa text.

### Model Description

The fine-tuned Gemma 7B model is designed for sentiment analysis in the Hausa language, which is widely spoken across West Africa. It was fine-tuned using Hausa text data with labeled sentiments to accurately classify text as positive, neutral, or negative. The fine-tuning process employed Low-Rank Adaptation (LoRA) to efficiently update the model's parameters without requiring large amounts of computational resources.

This model is ideal for analyzing Hausa-language social media posts, reviews, and other text data to gain insights into public sentiment. It offers significant improvements over the base model, particularly for sentiment classification tasks in a low-resource language like Hausa. The model is part of an ongoing effort to create more robust natural language processing tools for underrepresented languages.

- **Developed by:** Mubarak Daha Isa
- **Funded by [optional]:** None
- **Shared by [optional]:** None
- **Model type:** Large language model (LLM) fine-tuned for sentiment analysis
- **Language(s) (NLP):** Hausa
- **License:** MIT
- **Finetuned from model [optional]:** Google Gemma 7B

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://huggingface.co/bagwai/fine-tuned-gemma-7b-Hausa
- **Paper [optional]:** Coming soon
- **Demo [optional]:** None

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model can be used for sentiment analysis of text written in the Hausa language, specifically categorizing text into positive, neutral, or negative sentiments. It is ideal for applications in social media analysis, customer feedback, or any Hausa text-based sentiment classification task.
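
For quick experiments, the standard `transformers` pipeline API can wrap the model. This is a minimal sketch, assuming the checkpoint loads with a sequence classification head as in the getting-started section below; the example sentence is the one used elsewhere in this card.

```python
from transformers import pipeline

# Hypothetical quick-start via the text-classification pipeline;
# assumes the checkpoint exposes a sequence classification head.
classifier = pipeline("text-classification", model="bagwai/fine-tuned-gemma-7b-Hausa")

# "Ina son wannan littafin" is Hausa for "I like this book".
print(classifier("Ina son wannan littafin"))
```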

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

The model can be adapted to other NLP tasks such as emotion detection, text classification, and content moderation in Hausa-language contexts.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

This model is not suitable for:

- Sentiment analysis in languages other than Hausa without further fine-tuning.
- Use in environments where bias in sentiment classification may have critical implications (e.g., legal or medical contexts).

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- **Bias:** The model may reflect inherent biases in the training data, especially in its treatment of neutral and negative sentiment.
- **Risks:** Misclassification of sentiment in sensitive use cases could lead to misinterpretations of Hausa-language texts.
- **Limitations:** This model was trained on a limited dataset. Performance might degrade when applied to Hausa texts outside of its training domain.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. The model's outputs should be carefully reviewed in sensitive contexts.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("bagwai/fine-tuned-gemma-7b-Hausa")
tokenizer = AutoTokenizer.from_pretrained("bagwai/fine-tuned-gemma-7b-Hausa")

# Example usage ("Ina son wannan littafin" means "I like this book")
inputs = tokenizer("Ina son wannan littafin", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Index of the highest-scoring sentiment class
predicted_class = outputs.logits.argmax(dim=-1).item()
```
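
The predicted index can be mapped back to a human-readable label through `model.config.id2label`, assuming the label mapping was saved with the fine-tuned checkpoint.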

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model was fine-tuned on a Hausa sentiment analysis dataset of 300 labeled samples drawn from the Hausa subset of [NaijaSenti](https://huggingface.co/datasets/HausaNLP/NaijaSenti-Twitter).

### Training Procedure

#### Preprocessing [optional]

Hausa stopwords were removed using a custom stopword list (hau_stop.csv).
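
The card names the stopword file but not the removal code. Below is a minimal sketch, assuming `hau_stop.csv` holds one stopword per row; the file's actual format is not documented.

```python
import csv

# Load the custom Hausa stopword list; one word per row is assumed,
# since the card does not document the file's format.
with open("hau_stop.csv", encoding="utf-8") as f:
    stopwords = {row[0].strip().lower() for row in csv.reader(f) if row}

def remove_stopwords(text: str) -> str:
    """Drop Hausa stopwords from whitespace-tokenized text."""
    return " ".join(w for w in text.split() if w.lower() not in stopwords)
```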

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **Epochs:** 5
- **Learning rate:** 2e-4
- **Batch size:** 8
- **Optimizer:** AdamW
- **LoRA rank:** 64
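
The fine-tuning script itself is not published, but the hyperparameters above map naturally onto a `peft` LoRA setup. The sketch below is one plausible configuration: `lora_alpha`, `lora_dropout`, and `target_modules` are illustrative assumptions, not values reported in this card.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA configuration matching the reported rank of 64.
# Alpha, dropout, and target modules are assumptions, not from the card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="SEQ_CLS",
)

# Training arguments matching the reported epochs, learning rate,
# batch size, and AdamW optimizer.
training_args = TrainingArguments(
    output_dir="gemma-7b-hausa-sentiment",
    num_train_epochs=5,
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    optim="adamw_torch",
)
```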

## Evaluation

### Testing Data, Factors & Metrics

- **Testing data:** Evaluation was performed on a hold-out test set comprising 300 Hausa text samples.
- **Metrics:** Accuracy, precision, recall, F1-score.
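
For reference, these metrics can be computed with scikit-learn. This is a generic sketch with placeholder labels; the weighted averaging is an assumption, as the card does not state which averaging scheme was used.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder gold and predicted labels (0 = negative, 1 = neutral, 2 = positive).
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 1, 1, 2]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted"
)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```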

### Results

|                    | Accuracy | F1-score |
|--------------------|----------|----------|
| Before fine-tuning | 37.7%    | 31.0%    |
| After fine-tuning  | 66.0%    | 66.0%    |

#### Summary

Fine-tuning with LoRA raised accuracy from 37.7% to 66.0% and F1-score from 31.0% to 66.0% on the held-out test set.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/667a06609715dee1c93d7a96/LMXaiRwOFV8e5-WkQLQ30.png)

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA P100 GPU
- **Hours used:** 5
- **Cloud Provider:** Kaggle
- **Compute Region:** India
- **Carbon Emitted:** Zero

## Technical Specifications [optional]

### Model Architecture and Objective

- **Model type:** Gemma 7B (LLM)
- **Objective:** Fine-tuned for sentiment analysis in the Hausa language.

### Compute Infrastructure

- **Hardware:** Kaggle NVIDIA P100 GPU
- **Software:** PyTorch, Hugging Face Transformers, LoRA (Low-Rank Adaptation)

## Citation [optional]

**BibTeX:**

```bibtex
@misc{isa2024gemma7bhausa,
  author       = {Mubarak Daha Isa},
  title        = {Fine-tuned Gemma 7B for Hausa Sentiment Analysis},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/bagwai/fine-tuned-gemma-7b-Hausa}},
}
```

**APA:**

Mubarak Daha Isa. (2024). *Fine-tuned Gemma 7B for Hausa Sentiment Analysis*. Hugging Face. https://huggingface.co/bagwai/fine-tuned-gemma-7b-hausa

## Model Card Authors [optional]

Mubarak Daha Isa

## Model Card Contact

- mubarakdaha8@gmail.com
- 2023000675.mubarak@pg.sharda.ac.in