💨🦅 Vikhr-Qwen-2.5-1.5B-Instruct

RU

Инструктивная модель на основе Qwen-2.5-1.5B-Instruct, обученная на русскоязычном датасете GrandMaster-PRO-MAX. Создана для высокоэффективной обработки текстов на русском и английском языках, обеспечивая точные ответы и быстрое выполнение задач.

EN

An instruct model based on Qwen-2.5-1.5B-Instruct, trained on the Russian-language GrandMaster-PRO-MAX dataset. It is designed for high-efficiency text processing in Russian and English, delivering accurate responses and fast task execution.
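Qwen-2.5 instruct models are prompted with the ChatML template (`<|im_start|>` / `<|im_end|>` markers). As a minimal sketch, assuming the standard ChatML layout used by the Qwen-2.5 family, the prompt for this model can be assembled like this; `format_chatml` is an illustrative helper, not part of any library:

```python
def format_chatml(messages, add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts into a ChatML prompt string.

    Each turn becomes: <|im_start|>{role}\n{content}<|im_end|>\n
    With add_generation_prompt=True, an open assistant turn is appended
    so the model continues from there.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)


prompt = format_chatml([
    {"role": "system", "content": "Ты — помощник, отвечай кратко."},
    {"role": "user", "content": "Привет!"},
])
print(prompt)
```

The resulting string can be passed as the raw prompt to any GGUF runtime (e.g. llama.cpp) that does not apply the chat template for you.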

Format: GGUF
Model size: 1.54B params
Architecture: qwen2

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
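A rough way to compare these quantization levels is the back-of-the-envelope file size: parameters times bits per weight, divided by 8. This sketch ignores GGUF metadata and the mixed-precision layers real quants keep at higher precision, so actual files are somewhat larger; `approx_gguf_size_gb` is an illustrative helper, not a real API:

```python
def approx_gguf_size_gb(n_params, bits_per_weight):
    """Lower-bound file-size estimate in GB: params * bits / 8 bytes.

    Real GGUF files are larger because of metadata and because some
    tensors (embeddings, output head) stay at higher precision.
    """
    return n_params * bits_per_weight / 8 / 1e9


for bits in (1, 2, 3, 4, 5, 6, 8):
    # For this 1.54B-parameter model, e.g. 4-bit gives roughly 0.77 GB.
    print(f"{bits}-bit: ~{approx_gguf_size_gb(1.54e9, bits):.2f} GB")
```

In practice 4-bit and 5-bit quants are the usual quality/size sweet spot, while 1-bit and 2-bit trade noticeable quality for minimal footprint.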


Model tree for Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF

Base model: Qwen/Qwen2.5-1.5B (this model is one of its quantized derivatives)

Dataset used to train Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF: GrandMaster-PRO-MAX
