---
datasets:
  - uonlp/CulturaX
  - l3cube-pune/MarathiNLP
language:
  - mr
metrics:
  - accuracy
tags:
  - marathi
  - sentiment analysis
  - reading comprehension
  - paraphrasing
  - translation
library_name: transformers
pipeline_tag: text-generation
license: llama2
---

# Misal-7B-base-v0.1

Built by smallstep.ai

## What is Misal?

Misal 7B is a pretrained and instruction-tuned large language model for Marathi, based on Meta's Llama 2 7B architecture.

## Making of Misal

Detailed blog here.

### Pretraining

During the pretraining phase, the model was exposed to a corpus of approximately 2 billion Marathi tokens. This corpus consisted primarily of newspaper text spanning 2016 to 2022, drawn largely from the CulturaX dataset. We supplemented it with additional sources such as l3cube, ai4bharat, and other internet-based datasets.
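To get a feel for the corpus scale, here is a minimal sketch of how one might estimate the Marathi token count of the CulturaX split with this model's tokenizer. The streaming approach and the 10,000-document sample size are our assumptions, not part of the original pipeline:

```python
# Rough token-count estimate over the Marathi split of CulturaX
# (streamed and sampled). Accessing CulturaX may require accepting
# its terms on the Hugging Face Hub.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("smallstepai/Misal-7B-instruct-v0.1")
ds = load_dataset("uonlp/CulturaX", "mr", split="train", streaming=True)

total_tokens = 0
for i, example in enumerate(ds):
    total_tokens += len(tokenizer(example["text"])["input_ids"])
    if i + 1 >= 10_000:  # sample a prefix, then extrapolate to the full split
        break
print(f"tokens in first 10,000 documents: {total_tokens:,}")
```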


Our model was pretrained on a single A100 80GB GPU on the QBlocks platform. We chose bfloat16 as the training precision due to stability issues we encountered with float16.
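For illustration, this is how bfloat16 precision is typically selected with the Hugging Face Trainer. It is a minimal sketch under our assumptions, not the authors' exact setup, and the output directory name is hypothetical:

```python
# Minimal sketch: choosing bfloat16 over float16 in TrainingArguments.
# bfloat16 keeps float32's exponent range, which avoids the overflow
# instabilities mentioned above; it requires an Ampere-class GPU
# such as the A100 used here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="misal-7b-pretrain",  # hypothetical path
    bf16=True,
    fp16=False,
)
```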

We used parameter-efficient finetuning via Low-Rank Adaptation (LoRA) for pretraining, reaching a training loss of approximately 2.8 after almost 2 days of training. Our LoRA configuration is below, followed by a peft-based sketch of the same settings.

```yaml
# LoRA config
peft:
  r: 64
  lora_alpha: 128
  target_modules:
    ["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  lora_dropout: 0.05
  bias: "none"
  task_type: "CAUSAL_LM"
  modules_to_save: ["embed_tokens", "lm_head"]
```
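The same configuration expressed with the peft library might look like this. It mirrors the YAML above but is a sketch, not the authors' actual training script:

```python
# peft equivalent of the YAML config above (a sketch, not the
# authors' training code).
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Train the embeddings and LM head in full alongside the
    # low-rank adapters (important when adapting to a new language).
    modules_to_save=["embed_tokens", "lm_head"],
)
# model = get_peft_model(base_model, lora_config)  # base_model: a loaded Llama 2 7B
```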

## License

The model inherits the license from meta-llama/Llama-2-7b.

## Usage

### Installation

```bash
pip install transformers accelerate
```

### Prompt

The system prompt below is in Marathi; in English it reads: "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible. Your answers should not be harmful, unethical, racist, sexist, toxic, dangerous, or illegal. Please ensure that your answers are socially unbiased and positive in nature. If a question does not make any sense or is not factually coherent, explain why instead of answering. If you do not know the answer to a question, please do not share false information."

```text
आपण एक मदतगार, आदरणीय आणि प्रामाणिक सहाय्यक आहात. नेहमी शक्य तितकी उपयुक्त उत्तर द्या. तुमची उत्तरे हानिकारक, अनैतिक, वर्णद्वेषी, लैंगिकतावादी, हानिकारक, धोकादायक किंवा बेकायदेशीर नसावीत. कृपया खात्री करा की तुमची उत्तरे सामाजिक दृष्टिकोनाने निष्पक्ष आणि सकारात्मक स्वरूपाची आहेत. जर एखाद्या प्रश्नाला काही अर्थ नसेल किंवा वस्तुस्थितीशी सुसंगती नसेल, तर उत्तर देण्याऐवजी काहीतरी बरोबर का नाही हे स्पष्ट करा. तुम्हाला एखाद्या प्रश्नाचे उत्तर माहित नसल्यास, कृपया चुकीची माहिती देऊ नये.

### Instruction:
<instruction>

### Input:
<input data>

### Response:
```
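If you want to assemble the prompt string yourself rather than relying on the tokenizer's chat template (used in the PyTorch example below), a minimal sketch follows. The helper name and the exact whitespace are our assumptions; treat the shipped chat template as authoritative:

```python
# Hypothetical helper that assembles the template above into one string.
# The exact spacing/newlines are an assumption; the tokenizer's chat
# template is the authoritative implementation.
def build_prompt(system_prompt: str, instruction: str, inputs: str = "") -> str:
    prompt = f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n"
    if inputs:
        prompt += f"### Input:\n{inputs}\n\n"
    return prompt + "### Response:\n"
```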

### PyTorch

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
    "smallstepai/Misal-7B-instruct-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("smallstepai/Misal-7B-instruct-v0.1")

def ask_misal(model, tokenizer, instruction, inputs="", system_prompt="", max_new_tokens=200, device="cuda"):
    # Build the prompt via the tokenizer's chat template.
    ip = dict(system_prompt=system_prompt, instruction=instruction, inputs=inputs)
    model_inputs = tokenizer.apply_chat_template(ip, return_tensors="pt")
    outputs = model.generate(model_inputs.to(device), max_new_tokens=max_new_tokens)
    # Keep only the text after the "### Response:" marker.
    response = tokenizer.decode(outputs[0]).split("### Response:")[1].strip()
    return response

instruction = "सादरीकरण कसे करावे?"  # "How should one give a presentation?"
resp = ask_misal(model, tokenizer, instruction=instruction, max_new_tokens=1024)
print(resp)
```
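As an alternative, here is a sketch using transformers' high-level pipeline API. This variant is our addition, not part of the original card, and it leaves prompt formatting to you:

```python
# Alternative sketch: the text-generation pipeline instead of calling
# generate() directly. Prompt formatting is done by hand here.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="smallstepai/Misal-7B-instruct-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
prompt = "### Instruction:\nसादरीकरण कसे करावे?\n\n### Response:\n"
out = generator(prompt, max_new_tokens=200)
print(out[0]["generated_text"])
```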

## Team

Sagar Sarkale, Abhijeet Katte, Prasad Mane, Shravani Chavan