---
license: gemma
language:
- or
- en
---

# odia-gemma-2b-base (Pre-trained)

Odia-Gemma-2B-Base is a pre-trained Odia large language model with 2 billion parameters, based on Google's Gemma-2B. The model is pre-trained on the CulturaX-Odia dataset, a filtered version of the original CulturaX dataset for Odia text, containing 49 million tokens. The CulturaX-Odia dataset is sourced from mC4 and four distinct OSCAR corpora.

For more details about the model, data, training procedure, and evaluations, see the blog [post]().

## Model Description
* Model type: A 2B pre-trained decoder-only model
* Primary Language(s): Odia and English 
* License: Gemma Terms of Use
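
As a minimal usage sketch, the model can be loaded like any other Gemma-based causal LM with the `transformers` library. The repository id below is an assumption based on the model name and the OdiaGenAI organization; check the model page for the exact id.

```python
# Minimal inference sketch with Hugging Face transformers.
# NOTE: the repository id is an assumption; verify it on the model page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OdiaGenAI/odia-gemma-2b-base"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 2B model small in memory
    device_map="auto",           # requires the `accelerate` package
)

# A base model continues text; it does not answer instructions.
prompt = "The capital of Odisha is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```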
 
**NOTE**

This is not an instruction-tuned model, so it may not follow human instructions without one-/few-shot prompting or instruction fine-tuning. The model has no moderation mechanism and may generate harmful or inappropriate responses.
It is recommended to first fine-tune it on the task(s) you are interested in.
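
For example, a base model can often be steered with a few in-context examples instead of a direct instruction. The English-to-Odia translation prompt below is purely illustrative (it is not a format the model was trained on) and reuses the `model` and `tokenizer` from the loading sketch above:

```python
# Few-shot prompting sketch: steer the base model with in-context examples.
# Illustrative only; assumes `model` and `tokenizer` from the snippet above.
few_shot_prompt = (
    "English: Hello\n"
    "Odia: ନମସ୍କାର\n"
    "English: Thank you\n"
    "Odia: ଧନ୍ୟବାଦ\n"
    "English: Good morning\n"
    "Odia:"
)

inputs = tokenizer(few_shot_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```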


### Citation Information

If you find this model useful, please consider giving 👏 and citing:

```bibtex
@misc{odia-gemma-2b-base,
  author = {Sambit Sekhar and Shantipriya Parida and Debasish Dhal and Guneet Singh Kohli},
  title = {OdiaGenAI Introduces Gemma 2B Pre-Trained LLM Catered to Odia Speakers},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```

### Contributions

- Sambit Sekhar
- Shantipriya Parida
- Debasish Dhal
- Guneet Singh Kohli