ashishbaraiya/my-tweets-finetuned

This model is a fine-tuned version of gpt2 on an unknown dataset. It reaches the following results at the final training epoch (a usage sketch follows the list):

  • Train Loss: 0.0656
  • Validation Loss: 3.2945
  • Epoch: 98
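
Since the card does not yet document usage, here is a minimal sketch of loading the checkpoint with the TensorFlow classes from Transformers and sampling a continuation. The prompt text and generation settings are illustrative assumptions, not from the original card:

    from transformers import AutoTokenizer, TFAutoModelForCausalLM

    # Load the fine-tuned checkpoint from the Hub; this assumes the
    # tokenizer files were pushed alongside the weights (otherwise fall
    # back to the base "gpt2" tokenizer).
    repo = "ashishbaraiya/my-tweets-finetuned"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = TFAutoModelForCausalLM.from_pretrained(repo)

    inputs = tokenizer("Just thinking out loud:", return_tensors="tf")
    outputs = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,                       # sample instead of greedy decoding
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))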

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam (beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False; weight_decay=None; no gradient clipping: clipnorm/global_clipnorm/clipvalue=None; use_ema=False, ema_momentum=0.99; jit_compile=True; is_legacy_optimizer=False)
  • learning_rate: PolynomialDecay schedule (initial_learning_rate=5e-05, decay_steps=4500, end_learning_rate=0.0, power=1.0, cycle=False)
  • training_precision: float32
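
The serialized config above amounts to plain Adam driven by a linear PolynomialDecay learning-rate schedule. A minimal sketch reconstructing it in Keras (TensorFlow 2.15 API; the model/fit wiring is omitted):

    import tensorflow as tf

    # Linear decay from 5e-05 to 0.0 over 4,500 optimizer steps
    # (PolynomialDecay with power=1.0 is a straight line).
    lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
        initial_learning_rate=5e-05,
        decay_steps=4500,
        end_learning_rate=0.0,
        power=1.0,
        cycle=False,
    )

    # Plain Adam: no weight decay, no gradient clipping, no EMA,
    # matching the serialized config above.
    optimizer = tf.keras.optimizers.Adam(
        learning_rate=lr_schedule,
        beta_1=0.9,
        beta_2=0.999,
        epsilon=1e-08,
        amsgrad=False,
    )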

Training results

Train Loss    Validation Loss    Epoch
9.3483        8.3624             0
7.2778        6.9685             1
5.9195        6.2234             2
5.0730        5.6830             3
4.4703        5.3916             4
3.8427        4.8847             5
3.3641        4.5318             6
2.8373        4.3084             7
2.4261        4.0802             8
2.0691        3.8920             9
1.8213        3.8208             10
1.5922        3.6103             11
1.3694        3.5038             12
1.1764        3.3149             13
1.0135        3.2981             14
0.8874        3.2975             15
0.7716        3.2103             16
0.6679        3.3297             17
0.5770        3.2517             18
0.5098        3.0959             19
0.4403        3.1526             20
0.3791        2.9750             21
0.3367        3.0588             22
0.3027        3.0408             23
0.2617        3.1930             24
0.2387        3.1227             25
0.2175        3.0582             26
0.2062        3.1239             27
0.1868        3.0407             28
0.1746        3.2357             29
0.1657        3.1285             30
0.1536        3.2110             31
0.1512        3.1890             32
0.1447        3.1713             33
0.1426        3.1498             34
0.1369        3.1877             35
0.1327        3.2019             36
0.1303        3.0486             37
0.1213        3.1264             38
0.1204        3.1468             39
0.1206        3.1846             40
0.1125        3.1880             41
0.1113        3.1980             42
0.1098        3.1759             43
0.1071        3.1385             44
0.1055        3.1730             45
0.1024        3.1820             46
0.0995        3.1252             47
0.0995        3.1279             48
0.1004        3.2428             49
0.0982        3.1116             50
0.0957        3.2210             51
0.0936        3.1351             52
0.0917        3.1618             53
0.0930        3.1924             54
0.0929        3.2831             55
0.0889        3.2458             56
0.0913        3.2061             57
0.0899        3.4128             58
0.0880        3.2114             59
0.0869        3.2738             60
0.0878        3.1723             61
0.0844        3.1465             62
0.0846        3.1106             63
0.0841        3.2216             64
0.0824        3.2971             65
0.0823        3.2267             66
0.0811        3.2503             67
0.0823        3.1981             68
0.0808        3.2618             69
0.0803        3.1607             70
0.0786        3.3295             71
0.0801        3.2952             72
0.0777        3.2545             73
0.0764        3.1248             74
0.0772        3.2185             75
0.0758        3.3147             76
0.0764        3.1842             77
0.0758        3.2346             78
0.0739        3.2914             79
0.0738        3.2163             80
0.0738        3.3555             81
0.0731        3.0948             82
0.0726        3.2040             83
0.0729        3.2187             84
0.0709        3.2877             85
0.0703        3.3668             86
0.0709        3.2290             87
0.0712        3.3148             88
0.0697        3.2762             89
0.0694        3.2083             90
0.0688        3.2673             91
0.0694        3.2816             92
0.0683        3.3135             93
0.0680        3.2971             94
0.0681        3.2272             95
0.0670        3.2317             96
0.0662        3.2029             97
0.0656        3.2945             98
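
Note that the validation loss reaches its minimum of 2.9750 at epoch 21 and drifts upward afterwards while the train loss keeps falling, a typical overfitting pattern. The recorded run trained through epoch 98; a hypothetical Keras callback that would instead stop near the validation minimum looks like this (not part of the original setup):

    import tensorflow as tf

    # Hypothetical: halt training once val_loss has not improved for 5
    # epochs and roll back to the best weights seen. Not used in the run
    # recorded above, which trained for all 99 epochs.
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss",
        patience=5,
        restore_best_weights=True,
    )

    # model.fit(train_set, validation_data=val_set,
    #           epochs=100, callbacks=[early_stop])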

Framework versions

  • Transformers 4.35.2
  • TensorFlow 2.15.0
  • Datasets 2.16.1
  • Tokenizers 0.15.1
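
To reproduce this environment, the listed versions can be pinned directly (a convenience command inferred from the list above, not from the original card):

    pip install transformers==4.35.2 tensorflow==2.15.0 datasets==2.16.1 tokenizers==0.15.1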