Trained for only 700 steps (about 1.75 epochs), so the outputs may not be strong enough to consider this a well-trained model.

Usage

Here is an example of how to use the model:

from transformers import StoppingCriteria, StoppingCriteriaList
from unsloth import FastLanguageModel
import torch

hf_token = "<Your-hf-token>"
max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Someman/Indic-gemma-2b-finetuned-sft-Navarasa-adapters-ne-v1.0",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    token = hf_token,
)


# Enable Unsloth's optimized inference mode
FastLanguageModel.for_inference(model)
alpaca_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""



inst = "LinkedIn मा कसरी बढ्ने? ५ अंकमा व्याख्या गर्नुहोस्"  # "How do I grow on LinkedIn? Explain in 5 points."
input_text = ""

inputs = tokenizer(
[
    alpaca_prompt.format(
        inst, # instruction
        input_text, # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

# Since the model was trained with packing = True, it tends to start generating another
# similar sample beginning with <bos>, so we stop generation at the bos token by
# passing eos_token_id = tokenizer.bos_token_id.
outputs = model.generate(**inputs, max_new_tokens = 800, use_cache = True, eos_token_id = tokenizer.bos_token_id)
result = tokenizer.batch_decode(outputs)[0]
print(result)
The printed result looks like this:

### Instruction:
LinkedIn मा कसरी बढ्ने? ५ अंकमा व्याख्या गर्नुहोस्

### Input:

### Response:

  1. आफ्नो पृष्ठमा आकर्षक र आकर्षक रचनात्मक कथा सिर्जना गर्नुहोस्।
  2. आफ्नो पृष्ठमा अन्य प्रयोगकर्ताहरूसँग संलग्न हुनुहोस् र आफ्नो पृष्ठमा अन्य प्रयोगकर्ताहरूसँग सम्बन्ध निर्माण गर्नुहोस्।
  3. आफ्नो पृष्ठमा अन्य प्रयोगकर्ताहरूको काम र सफलताहरूको बारेमा जानकारी दिनुहोस्।
  4. आफ्नो पृष्ठमा अन्य प्रयोगकर्ताहरूलाई सम्पर्क गर्न र तिनीहरूको पृष्ठहरूमा जवाफ दिन अनुमति दिनुहोस्।
  5. आफ्नो पृष्ठमा अन्य प्रयोगकर्ताहरूलाई आफ्नो पृष्ठमा जडान गर्न प्रोत्साहन दिनुहोस्।
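
Since the decoded text echoes the full prompt, you will usually want to keep only the part after "### Response:". The sketch below is a minimal example under that assumption; the `extract_response` helper and the `StopOnBos` class are hypothetical names, and the custom stopping criterion is shown only as an alternative to passing eos_token_id = tokenizer.bos_token_id for cutting off the stray <bos> that packed training tends to produce.

from transformers import StoppingCriteria, StoppingCriteriaList
import torch

# Hypothetical helper: keep only the text after "### Response:" and drop
# trailing special tokens such as <eos>, <bos>, or <pad>.
def extract_response(decoded: str) -> str:
    response = decoded.split("### Response:")[-1]
    for special in ("<eos>", "<bos>", "<pad>"):
        response = response.replace(special, "")
    return response.strip()

# Hypothetical stopping criterion: halt generation as soon as the model
# emits a fresh <bos> token (the start of a second packed-style sample).
class StopOnBos(StoppingCriteria):
    def __init__(self, bos_token_id: int):
        self.bos_token_id = bos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        return input_ids[0, -1].item() == self.bos_token_id

stopping_criteria = StoppingCriteriaList([StopOnBos(tokenizer.bos_token_id)])
outputs = model.generate(**inputs, max_new_tokens = 800, use_cache = True,
                         stopping_criteria = stopping_criteria)
print(extract_response(tokenizer.batch_decode(outputs)[0]))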