See axolotl config

axolotl version: `0.4.0`

```yaml
base_model: Qwen/Qwen1.5-MoE-A2.7B
trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: Drewskidang/chatlaw
    type: sharegpt
  - path: swag/articles_and_summaries.jsonl
    ds_type: json # see other options below
    type: summarizetldr
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./out
sequence_len: 4096 # supports up to 32k
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project: Qwen Qwen
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 4
num_epochs: 3
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
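For reference, the first dataset entry above uses the standard `sharegpt` conversation format, while the second uses axolotl's `summarizetldr` instruction format. Below is a hedged sketch of what one line of `articles_and_summaries.jsonl` would contain, assuming the `article`/`summary` field names that axolotl documents for this type:

```python
import json

# Hypothetical example record for axolotl's `summarizetldr` prompt format.
# Assumption: each JSON line carries "article" and "summary" fields; verify
# against the dataset-format docs for your axolotl version.
record = {
    "article": "Full text of the source article to be summarized...",
    "summary": "A short TL;DR of the article.",
}

# Append one record per line to the JSONL file referenced in the config.
with open("articles_and_summaries.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```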
# out
This model is a fine-tuned version of Qwen/Qwen1.5-MoE-A2.7B, trained on the Drewskidang/chatlaw and swag/articles_and_summaries.jsonl datasets listed in the axolotl config above. It achieves the following results on the evaluation set:
- Loss: 0.8947
## Model description
More information needed
## Intended uses & limitations
More information needed
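Pending proper documentation, here is a minimal loading and inference sketch, not an official usage recipe. It assumes the weights are published under this card's repo id (YoungPanda/qwenqwen) and load through the standard transformers API; the prompt is purely illustrative and should follow whatever chat template the fine-tune actually expects.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YoungPanda/qwenqwen"  # this card's repo id; an assumption, verify before use
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16, matching the training config
    device_map="auto",
    trust_remote_code=True,      # mirrors trust_remote_code: true in the config
)

prompt = "Summarize the following article:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```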
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256 (derived; see the sketch after this list)
- total_eval_batch_size: 32
- optimizer: 8-bit AdamW (`adamw_8bit` in the config) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
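The two batch-size totals in the list above follow directly from the per-device settings; a quick sketch of the arithmetic, using only values stated on this card:

```python
# Effective batch sizes, derived from the values listed above.
micro_batch_size = 4             # train_batch_size per device
gradient_accumulation_steps = 8
num_devices = 8

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 256

eval_batch_size = 4              # per device; no gradient accumulation at eval time
total_eval_batch_size = eval_batch_size * num_devices
assert total_eval_batch_size == 32
```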
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6446        | 0.13  | 1    | 1.6456          |
| 1.639         | 0.26  | 2    | 1.3070          |
| 1.1786        | 0.52  | 4    | 1.1381          |
| 1.0398        | 0.79  | 6    | 1.0396          |
| 1.0073        | 1.02  | 8    | 1.0162          |
| 0.9318        | 1.28  | 10   | 1.0095          |
| 0.9704        | 1.54  | 12   | 0.9867          |
| 0.8477        | 1.8   | 14   | 0.9405          |
| 0.7665        | 2.03  | 16   | 0.9073          |
| 0.6283        | 2.3   | 18   | 0.9021          |
| 0.6257        | 2.56  | 20   | 0.8947          |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.1
- Datasets 2.18.0
- Tokenizers 0.15.0