Based on LLaMA 13B.

LoRA adapters were trained for four target modules (q_proj, k_proj, v_proj, o_proj).

Adapter parameters:

```json
{
  "base_model_name_or_path": "./llama-30b-hf",
  "bias": "none",
  "enable_lora": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "lora_alpha": 16,
  "lora_dropout": 0.05,
  "merge_weights": false,
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 16,
  "target_modules": [
    "q_proj",
    "v_proj",
    "k_proj",
    "o_proj"
  ],
  "task_type": "CAUSAL_LM"
}
```
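
For reference, a minimal loading sketch with transformers + peft. The adapter repo id and base-checkpoint path below are assumptions (the config above points at `./llama-30b-hf` while the card says 13B); adjust them to your setup:

```python
# Minimal sketch: attach the LoRA adapter to a base LLaMA checkpoint.
# Paths and the Hub id are placeholders, not taken from the training setup.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_path = "./llama-13b-hf"            # placeholder: local base checkpoint
adapter_id = "lksy/llama_13b_ru_gpt4_alpaca"  # assumed Hub id of this adapter

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
model = LlamaForCausalLM.from_pretrained(base_model_path, device_map="auto")
# Applies the r=16, alpha=16 LoRA weights to q_proj, k_proj, v_proj, o_proj.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```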

Cutoff length set to 512 tokens.

Prompt template (Russian section headers: Задание = Instruction, Вход = Input, Ответ = Response):

```json
{
    "description": "A shorter template to experiment with.",
    "prompt_input": "### Задание:\n{instruction}\n\n### Вход:\n{input}\n\n### Ответ:\n",
    "prompt_no_input": "### Задание:\n{instruction}\n\n### Ответ:\n",
    "response_split": "### Ответ:"
}
```
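
A minimal formatting sketch, assuming the `tokenizer` from the loading example above; `build_prompt` and `extract_response` are illustrative helper names, and the template strings are copied verbatim from the card:

```python
# Minimal sketch: build a prompt from the template and apply the 512-token cutoff.
TEMPLATE = {
    "prompt_input": "### Задание:\n{instruction}\n\n### Вход:\n{input}\n\n### Ответ:\n",
    "prompt_no_input": "### Задание:\n{instruction}\n\n### Ответ:\n",
    "response_split": "### Ответ:",
}
CUTOFF_LEN = 512  # cutoff length from the card

def build_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return TEMPLATE["prompt_input"].format(instruction=instruction, input=input_text)
    return TEMPLATE["prompt_no_input"].format(instruction=instruction)

def extract_response(generated_text: str) -> str:
    # The answer is everything after the response marker.
    return generated_text.split(TEMPLATE["response_split"])[-1].strip()

prompt = build_prompt("Переведи на английский", "Привет, мир!")  # "Translate to English", "Hello, world!"
inputs = tokenizer(prompt, truncation=True, max_length=CUTOFF_LEN, return_tensors="pt")
```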

WandB report

Epochs: 4

Loss: 0.853
