---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-1.5B-FFT-v0.0
  results: []
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- cognitivecomputations/dolphin-2.9.3
---

# EVA Qwen2.5-1.5B v0.0

A small-scale RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-1.5B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanded to improve the versatility, creativity, and "flavor" of the resulting model.
Unlike EVA-D 1.5B v0.0, this model was created without DistillKit, and unlike other EVA versions, Spectrum wasn't used either, since layer freezing is inefficient at this small scale.
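
Prompt format is ChatML (the config below sets `chat_template: chatml`). Below is a minimal inference sketch using `transformers`; the Hugging Face repo id, system prompt, and sampling settings are placeholder assumptions, not values from this card.

```python
# Minimal inference sketch, assuming a hypothetical repo id for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-1.5B-FFT-v0.0"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the model was trained in bf16
    device_map="auto",
)

# The model was trained on ChatML-formatted conversations, so
# apply_chat_template builds the expected <|im_start|>/<|im_end|> prompt.
messages = [
    {"role": "system", "content": "You are a creative storywriting assistant."},
    {"role": "user", "content": "Write the opening scene of a slow-burn noir mystery."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```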


Training data:

- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- cognitivecomputations/dolphin-2.9.3
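
All of these are converted to ShareGPT-style conversation files before training (the `type: sharegpt` entries in the Axolotl config below). A rough sketch of what one such record looks like; the field names follow the ShareGPT convention, while the content is purely illustrative:

```python
# One ShareGPT-format training record (real field names, illustrative content).
record = {
    "conversations": [
        {"from": "system", "value": "You are Mira, a sardonic ship's engineer. Stay in character."},
        {"from": "human", "value": "The reactor is making that noise again. Can you take a look?"},
        {"from": "gpt", "value": "*wipes grease off her hands* That 'noise' is the sound of..."},
    ]
}
```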

Training time and hardware:

The model was created by Kearm, Auri and Cahvay.

Special thanks:

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: /media/kearm/Disk_2/HF_FAST_MoE_Fodder/Qwen2.5-1.5B

load_in_8bit: false
load_in_4bit: false
strict: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

# plugins:
#   - axolotl.integrations.spectrum.SpectrumPlugin

# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B

datasets:
  - path: datasets/Celeste_Filtered_utf8fix.jsonl
    type: sharegpt
  - path: datasets/deduped_not_samantha_norefusals.jsonl
    type: sharegpt
  - path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl
    type: sharegpt
  - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
    type: sharegpt
  - path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/S2.jsonl
    type: sharegpt
  - path: datasets/Turing.jsonl
    type: sharegpt

chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.05
output_dir: EVA-Qwen2.5-1.5B-FFT-v0.0

sequence_len: 10240
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 128
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true

wandb_project: EVA-Qwen2.5-1.5B-FFT-v0.0
wandb_entity:
wandb_watch:
wandb_name: Unit-00
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000005
max_grad_norm: 1.5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: "unsloth"
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 4
save_safetensors: true
save_total_limit: 8
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.15
# fsdp:
#   - full_shard
#   - auto_wrap
# fsdp_config:
#   fsdp_limit_all_gathers: true
#   fsdp_sync_module_states: false
#   fsdp_offload_params: true
#   fsdp_cpu_ram_efficient_loading: true
#   fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
#   fsdp_activation_checkpointing: true
#   fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
#   fsdp_sharding_strategy: FULL_SHARD
#   fsdp_forward_prefetch: false # Added
#   fsdp_backward_prefetch: "BACKWARD_PRE" # Added
#   fsdp_backward_prefetch_limit: 1 # Added
#   fsdp_mixed_precision: BF16 # Added
```

</details>
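
For orientation, the batch arithmetic implied by this config is sketched below; the world size is an assumption, since the card doesn't state the training hardware:

```python
# Batch arithmetic from the YAML above; only num_gpus is assumed.
micro_batch_size = 1
gradient_accumulation_steps = 8
sequence_len = 10240
num_gpus = 8  # assumed world size, not stated in the card

samples_per_step = micro_batch_size * gradient_accumulation_steps * num_gpus
tokens_per_step = samples_per_step * sequence_len  # upper bound; sample packing fills most of each sequence
print(samples_per_step, tokens_per_step)  # 64 samples, 655,360 tokens per optimizer step
```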