---
base_model: EleutherAI/pile-t5-xl
tags:
- generated_from_trainer
model-index:
- name: pile-t5-xl-instruction
  results: []
language:
- en
metrics:
- rouge
datasets:
- taskydata/Pile-T5-Instruction
---

# pile-t5-xl-instruction

This model is a fine-tuned version of [EleutherAI/pile-t5-xl](https://huggingface.co/EleutherAI/pile-t5-xl) on the [Pile-T5-Instruction](https://huggingface.co/datasets/taskydata/Pile-T5-Instruction) dataset.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- effective_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- num_epochs: 6

### Training results

Training logs are available on [Wandb](https://wandb.ai/jordanclive/tasky-instruction/runs/5yx1yzzk/overview).
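
### Inference example

A minimal inference sketch, assuming the fine-tuned checkpoint loads through the standard `transformers` seq2seq auto classes in the same way as the base `EleutherAI/pile-t5-xl` model. The checkpoint path and prompt below are placeholders, not values from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder: replace with the actual local path or Hub id of this fine-tuned model.
checkpoint = "path/to/pile-t5-xl-instruction"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference is acceptable for this checkpoint
    device_map="auto",
)

# Example instruction-style prompt (illustrative only).
prompt = "Summarize: The Pile is a large, diverse, open-source text dataset for language modelling."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```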