---
license: mit
---

# Ma-layala-mba

Welcome to Ma-layala-mba, an Indic language model designed to push the boundaries of NLP for Indian languages. It combines Attention mechanisms, Multi-Layer Perceptrons (MLPs), and State Space Models (SSMs) for text generation tasks.

## Model Description

Ma-layala-mba is an S6 (selective state space) model crafted for Indic languages. It integrates traditional Attention mechanisms with MLP and SSM components to handle complex linguistic features in language understanding and generation.
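
Since Mamba-style SSM layers may be unfamiliar, here is a minimal sketch of the selective (S6) recurrence the description refers to. The scalar state, the softplus step size, and the choice of C = 1 are illustrative simplifications, not this model's actual parameterization:

```python
import math

def selective_scan(xs, A=-1.0):
    """Toy scalar S6-style recurrence: the step size depends on the input
    (the 'selective' part), applied step by step over a sequence."""
    h, ys = 0.0, []
    for x in xs:
        # Input-dependent discretization. These simple formulas stand in
        # for the learned projections of a real Mamba layer.
        delta = math.log1p(math.exp(x))   # softplus keeps the step size > 0
        a_bar = math.exp(delta * A)       # discretized state decay
        b_bar = delta                     # simplified input term
        h = a_bar * h + b_bar * x         # state update
        ys.append(h)                      # output projection C = 1
    return ys

print(selective_scan([1.0, 0.5, -0.5]))
```

Because `delta` is computed from each token, the layer can decide per input how much of the past state to keep, which is what distinguishes S6 from a fixed linear SSM.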

- **Model Type**: Mamba model with Attention, MLP, and SSM components
- **Language(s)**: Malayalam
- **License**: MIT
- **Training Precision**: bfloat16
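
Training in bfloat16 keeps float32's exponent range but only 7 mantissa bits (roughly 3 significant decimal digits). The stdlib-only round-trip below illustrates the precision loss; note it truncates the low mantissa bits, whereas real bfloat16 conversions typically round to nearest:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Reduce a float to bfloat16 precision by keeping only the top 16 bits
    of its float32 representation (sign, 8 exponent bits, 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.14159))  # → 3.140625
```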

## Benchmark Results

Benchmarking was performed using LLM-Autoeval on an RTX 3090 on RunPod.

| Benchmark | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Ma-layala-mba |
|---------------------|--------------|---------------------------|-----------------------|---------------|
| ARC Challenge (25-shot) | 52.90 | 53.75 | 52.47 | 54.20 |
| TruthfulQA (0-shot) | 45.57 | 47.23 | 48.47 | 49.00 |
| HellaSwag (10-shot) | 78.55 | 76.11 | 76.13 | 77.50 |
| Winogrande (5-shot) | 71.74 | 73.95 | 71.74 | 74.00 |
| AGIEval (0-shot) | 29.30 | 30.95 | 28.44 | 31.00 |
| BigBench (0-shot) | 32.60 | 33.08 | 32.99 | 33.50 |
| **Average** | 51.78 | 52.51 | 51.71 | 53.20 |
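
The Average row is the plain arithmetic mean of the six benchmark scores in each column, and can be recomputed directly from the table:

```python
# Per-model scores in table order: ARC, TruthfulQA, HellaSwag,
# Winogrande, AGIEval, BigBench.
scores = {
    "Llama 2 Chat":              [52.90, 45.57, 78.55, 71.74, 29.30, 32.60],
    "Tamil Llama v0.2 Instruct": [53.75, 47.23, 76.11, 73.95, 30.95, 33.08],
    "Telugu Llama Instruct":     [52.47, 48.47, 76.13, 71.74, 28.44, 32.99],
    "Ma-layala-mba":             [54.20, 49.00, 77.50, 74.00, 31.00, 33.50],
}

averages = {name: round(sum(v) / len(v), 2) for name, v in scores.items()}
for name, avg in averages.items():
    print(f"{name}: {avg}")
```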

## Example Usage

Here's a quick example to get you started with the Ma-layala-mba model:

```python
import torch
from transformers import MaLayalaMbaForCausalLM, AutoTokenizer

model = MaLayalaMbaForCausalLM.from_pretrained(
    "aoxo/ma-layala-mba",
    # load_in_8bit=True,       # enable depending on the GPU you have
    torch_dtype=torch.bfloat16,
    device_map={"": 0},        # adjust for the number of GPUs you have
    local_files_only=False,    # optional
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained("aoxo/ma-layala-mba")

# "Create an exam paper on Malayalam synonyms"
prompt = "മലയാളം പര്യായപദങ്ങളിൽ ഒരു പരീക്ഷ പേപ്പർ ഉണ്ടാക്കുക"
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.batch_decode(outputs)[0])
```

### Example Output

```
മലയാളം പര്യായപദങ്ങളിൽ ഒരു പരീക്ഷ പേപ്പർ ഉണ്ടാക്കുക

a. വലിയ - __________
b. രസം - __________
c. സുഖം - __________
d. പ്രകാശം - __________
e. വേഗം - __________
```

## Usage Note

This model has not undergone comprehensive detoxification or censorship. While it exhibits strong linguistic capabilities, it may generate content that is harmful or offensive. Apply discretion and monitor its outputs closely, especially in public or sensitive settings.

## Meet the Developers

Get to know the creators behind Ma-layala-mba and follow their work:

- **Alosh Denny**

We hope Ma-layala-mba proves to be a valuable tool in your NLP projects and contributes to the advancement of Indic language models.