---
license: mit
---
Base model: CorticalStack/gemma-7b-ultrachat-sft
This model is fine-tuned from the base model above and is intended for multi-turn chat use-cases.
Unlike our AryaBhatta-GemmaOrca model, which is skilled in science and literature and fine-tuned on Orca datasets, this model is fine-tuned on UltraChat datasets. It shows improved performance over AryaBhatta-GemmaOrca on the Hellaswag benchmark and in multi-turn conversations.
It is fine-tuned on nine Indian languages (Hindi, Tamil, Punjabi, Bengali, Gujarati, Oriya, Telugu, Kannada, Malayalam) plus English.
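For multi-turn conversations, prompts should follow Gemma's chat-turn format (`<start_of_turn>` / `<end_of_turn>` markers, with the assistant role named `model`). A minimal sketch of building such a prompt by hand; the helper name is illustrative and not part of this model card, and in practice the tokenizer's `apply_chat_template` can do this for you:

```python
def format_gemma_chat(messages):
    """Build a Gemma-style prompt from a list of {"role", "content"} dicts."""
    prompt = "<bos>"
    for message in messages:
        # Gemma names the assistant's turns "model".
        role = "model" if message["role"] == "assistant" else "user"
        prompt += f"<start_of_turn>{role}\n{message['content']}<end_of_turn>\n"
    # Leave the prompt open at a model turn so generation continues from here.
    prompt += "<start_of_turn>model\n"
    return prompt

history = [
    {"role": "user", "content": "Translate 'good morning' to Hindi."},
    {"role": "assistant", "content": "Good morning is 'suprabhat'."},
    {"role": "user", "content": "And to Tamil?"},
]
print(format_gemma_chat(history))
```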
Benchmarked on the Indic LLM leaderboard:
https://huggingface.co/spaces/Cognitive-Lab/indic_llm_leaderboard
Release post: https://www.linkedin.com/feed/update/urn:li:activity:7184856055565180928