Update README.md
Use the code below to get started with the model.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("bagwai/fine-tuned-gemma-7b-hausa")
tokenizer = AutoTokenizer.from_pretrained("bagwai/fine-tuned-gemma-7b-hausa")

# Example usage ("Ina son wannan littafin" is Hausa for "I like this book")
inputs = tokenizer("Ina son wannan littafin", return_tensors="pt")
outputs = model(**inputs)
```
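The forward pass returns raw logits rather than a sentiment label. A minimal sketch of the usual readout, assuming the `id2label` mapping in the model's config was set during fine-tuning (the card does not list the label names):

```python
import torch

# Convert logits to class probabilities and pick the top class
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))

# id2label comes from the model's config; the exact label strings
# (e.g. "positive" / "negative") are whatever the author saved there
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```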
## Training Details
Preprocessing: Hausa stopwords were removed using a custom stopword list (hau_st…).
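A minimal sketch of that filtering step, assuming the list is a plain text file with one stopword per line and that whitespace tokenization is enough; the card only names the list, so the file name and format here are hypothetical:

```python
# Hypothetical reconstruction of the stopword-removal step;
# the file name "hau_stopwords.txt" and its format are assumptions.
with open("hau_stopwords.txt", encoding="utf-8") as f:
    hausa_stopwords = {line.strip().lower() for line in f if line.strip()}

def remove_stopwords(text: str) -> str:
    # Drop whitespace-separated tokens that appear in the stopword list
    return " ".join(w for w in text.split() if w.lower() not in hausa_stopwords)
```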
#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **Epochs:** 5
- **Learning Rate:** 2e-4
- **Batch Size:** 8
- **Optimizer:** AdamW
- **LoRA Rank:** 64
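Taken together, these settings describe a PEFT-style LoRA fine-tune. A minimal sketch of how they might be wired up with the `peft` and `transformers` libraries; the base checkpoint id, `num_labels`, `lora_alpha`, `lora_dropout`, and output directory are assumptions rather than values from the card:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

lora_config = LoraConfig(
    r=64,               # LoRA rank, from the card
    lora_alpha=16,      # assumption
    lora_dropout=0.05,  # assumption
    task_type="SEQ_CLS",
)

# Base checkpoint and label count are assumptions (the card only says "Gemma 7B")
base = AutoModelForSequenceClassification.from_pretrained("google/gemma-7b", num_labels=3)
model = get_peft_model(base, lora_config)

training_args = TrainingArguments(
    output_dir="fine-tuned-gemma-7b-hausa",  # assumption
    num_train_epochs=5,                      # from the card
    learning_rate=2e-4,                      # from the card
    per_device_train_batch_size=8,           # from the card
    optim="adamw_torch",                     # AdamW, from the card
)
```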
## Evaluation
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
### Model Architecture and Objective
- **Model Type:** Gemma 7B (LLM)
- **Objective:** Fine-tuned for sentiment analysis in the Hausa language.
### Compute Infrastructure
- **Hardware:** Kaggle NVIDIA P100 GPUs
- **Software:** PyTorch, Hugging Face Transformers, LoRA (Low-Rank Adaptation)
## Citation
Mubarak Daha Isa
## Model Card Contact
- **mubarakdaha8@gmail.com**
- **2023000675.mubarak@pg.sharda.ac.in**