---
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-trading_candles
  results: []
datasets:
- 0xMaka/trading-candles-subset-qa-format
widget:
- text: "Context: -30811302.00,464.00,-156202.00,309984.00,276.00,7664.00,4174.00,824467.00,19741.12,19798.04,19860.18,19567.9 Question: identify candle"
- text: "Context: 867553.00,-4282049.00,6306.00,4440418.00,13.00,50962.00,101.00,59152496.00,39512.71,39477.49,39512.71,39380.74 Question: identify candle"
- text: "Context: -206.00,626162.00,-35917428.00,-49739.00,6669939.00,64.00,19988.00,7094559.00,17752.71,17752.71,17752.71,17752.71 Question: find candle: Four Price Doji"
pipeline_tag: text2text-generation
---

# flan-t5-base-trading_candles

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [0xMaka/trading-candles-subset-qa-format](https://huggingface.co/datasets/0xMaka/trading-candles-subset-qa-format) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0061
- Rouge1: 88.3665
- Rouge2: 86.86
- Rougel: 88.3651
- Rougelsum: 88.3665
- Gen Len: 18.9025

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.019         | 1.0   | 70009  | 0.0089          | 88.0774 | 86.4734 | 88.0734 | 88.0748   | 18.9022 |
| 0.0095        | 2.0   | 140018 | 0.0069          | 88.3636 | 86.8542 | 88.3612 | 88.3625   | 18.9016 |
| 0.0071        | 3.0   | 210027 | 0.0061          | 88.3665 | 86.86   | 88.3651 | 88.3665   | 18.9025 |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
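
### Reproducing the hyperparameters

The list under "Training hyperparameters" maps one-to-one onto `Seq2SeqTrainingArguments`. The sketch below is a hedged reconstruction, not the author's actual training script: `output_dir` is a placeholder, and `predict_with_generate=True` is assumed only because the evaluation reports ROUGE scores and a generation length.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the hyperparameters listed in this card;
# not the original training script.
args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-trading_candles",  # placeholder
    learning_rate=3e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999) per the card
    adam_beta2=0.999,
    adam_epsilon=1e-08,  # epsilon=1e-08 per the card
    lr_scheduler_type="linear",
    num_train_epochs=3,
    predict_with_generate=True,  # assumed: eval reports ROUGE and Gen Len
)
```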
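
## How to use

The widget examples in the metadata follow a `Context: <comma-separated features> Question: <task>` prompt template. Below is a minimal usage sketch with the standard `transformers` text2text-generation pipeline; the repo id `0xMaka/flan-t5-base-trading_candles` is an assumption inferred from the model name and the dataset owner, so adjust it to the actual Hub path.

```python
from transformers import pipeline

# Repo id is assumed from the model name and dataset owner; replace if needed.
pipe = pipeline("text2text-generation", model="0xMaka/flan-t5-base-trading_candles")

# Prompt mirrors the first widget example above.
prompt = (
    "Context: -30811302.00,464.00,-156202.00,309984.00,276.00,7664.00,"
    "4174.00,824467.00,19741.12,19798.04,19860.18,19567.9 "
    "Question: identify candle"
)
print(pipe(prompt, max_new_tokens=32)[0]["generated_text"])
```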