
mit-b0-finetuned-sidewalk-semantic

This model is a fine-tuned version of nvidia/mit-b0 on the segments/sidewalk-semantic dataset. It achieves the following results at the end of training:

  • Train Loss: 0.2125
  • Validation Loss: 0.5151
  • Epoch: 49

Model description

The model was fine-tuned from the nvidia/mit-b0 checkpoint, the smallest SegFormer (MiT-b0) encoder pre-trained on ImageNet-1k. Refer to the base model card and the SegFormer paper for details on the architecture.

Intended uses & limitations

This fine-tuned model is intended for demonstration purposes only. Before using it in production, inspect it thoroughly and adjust it as needed.
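
As a quick sanity check, the model can be loaded for TensorFlow inference through transformers. The snippet below is a minimal sketch, assuming the checkpoint is available on the Hub as sayakpaul/mit-b0-finetuned-sidewalk-semantic and that image.jpg is a placeholder path to an RGB street-scene image:

```python
import tensorflow as tf
from PIL import Image
from transformers import SegformerFeatureExtractor, TFSegformerForSemanticSegmentation

# Checkpoint name assumed from this card; the image path is a placeholder.
# If the repo does not ship a preprocessor config, SegformerFeatureExtractor()
# with its defaults can be used instead.
ckpt = "sayakpaul/mit-b0-finetuned-sidewalk-semantic"
feature_extractor = SegformerFeatureExtractor.from_pretrained(ckpt)
model = TFSegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("image.jpg")
inputs = feature_extractor(images=image, return_tensors="tf")

# Logits have shape (batch, num_labels, height / 4, width / 4).
logits = model(**inputs).logits

# Upsample to the original resolution and take the per-pixel argmax.
logits = tf.transpose(logits, [0, 2, 3, 1])          # to NHWC for tf.image.resize
logits = tf.image.resize(logits, image.size[::-1])   # (height, width)
segmentation_map = tf.argmax(logits, axis=-1)[0]
```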

Training and evaluation data

segments/sidewalk-semantic
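
For reference, the dataset can be downloaded directly from the Hub with the datasets library. A minimal sketch follows; the card does not list the splits or column names, so inspect the loaded object rather than relying on this example:

```python
from datasets import load_dataset

# Dataset identifier taken from this card; depending on the dataset's access
# settings, `huggingface-cli login` may be required before downloading.
ds = load_dataset("segments/sidewalk-semantic")

# Inspect the available splits and columns (not listed on this card).
print(ds)
```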

Training procedure

More information is available in the deep-diver/segformer-tf-transformers repository.
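
The card does not include the fine-tuning code itself, but given the base checkpoint and dataset above, initializing the TF model for fine-tuning would look roughly like the sketch below. The id2label mapping here is a truncated placeholder, not the real label set of sidewalk-semantic:

```python
from transformers import TFSegformerForSemanticSegmentation

# Placeholder label mapping: substitute the full id2label of the
# sidewalk-semantic dataset (not listed on this card).
id2label = {0: "unlabeled", 1: "flat-road"}
label2id = {name: idx for idx, name in id2label.items()}

# The decode head is newly initialized for the target label set, so a warning
# about unused/new weights is expected here.
model = TFSegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
)
```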

Training hyperparameters

The following hyperparameters were used during training (a Keras compile sketch matching this configuration follows the list):

  • optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  • training_precision: float32
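
The sketch below mirrors the optimizer settings listed above; it is not the original training script, and decay is omitted because 0.0 is already the Adam default:

```python
import tensorflow as tf
from transformers import TFSegformerForSemanticSegmentation

# Re-using the base checkpoint for illustration; in practice this would be the
# model prepared in the fine-tuning sketch above.
model = TFSegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0")

# Optimizer configured to match the hyperparameters listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=6e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)

# transformers TF models compute their loss internally when labels are passed,
# so compile without an explicit loss and call model.fit(...) as usual.
model.compile(optimizer=optimizer)
```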

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0785     | 1.1753          | 0     |
| 1.1312     | 0.8807          | 1     |
| 0.9315     | 0.7585          | 2     |
| 0.7952     | 0.7261          | 3     |
| 0.7273     | 0.6701          | 4     |
| 0.6603     | 0.6396          | 5     |
| 0.6198     | 0.6238          | 6     |
| 0.5958     | 0.5925          | 7     |
| 0.5378     | 0.5714          | 8     |
| 0.5236     | 0.5786          | 9     |
| 0.4960     | 0.5588          | 10    |
| 0.4633     | 0.5624          | 11    |
| 0.4562     | 0.5450          | 12    |
| 0.4167     | 0.5438          | 13    |
| 0.4100     | 0.5248          | 14    |
| 0.3947     | 0.5354          | 15    |
| 0.3867     | 0.5069          | 16    |
| 0.3803     | 0.5285          | 17    |
| 0.3696     | 0.5318          | 18    |
| 0.3386     | 0.5162          | 19    |
| 0.3349     | 0.5312          | 20    |
| 0.3233     | 0.5304          | 21    |
| 0.3328     | 0.5178          | 22    |
| 0.3140     | 0.5131          | 23    |
| 0.3081     | 0.5049          | 24    |
| 0.3046     | 0.5011          | 25    |
| 0.3209     | 0.5197          | 26    |
| 0.2966     | 0.5151          | 27    |
| 0.2829     | 0.5166          | 28    |
| 0.2968     | 0.5210          | 29    |
| 0.2818     | 0.5300          | 30    |
| 0.2739     | 0.5221          | 31    |
| 0.2602     | 0.5340          | 32    |
| 0.2570     | 0.5124          | 33    |
| 0.2557     | 0.5234          | 34    |
| 0.2593     | 0.5098          | 35    |
| 0.2582     | 0.5329          | 36    |
| 0.2439     | 0.5373          | 37    |
| 0.2413     | 0.5141          | 38    |
| 0.2423     | 0.5210          | 39    |
| 0.2340     | 0.5043          | 40    |
| 0.2244     | 0.5300          | 41    |
| 0.2246     | 0.4978          | 42    |
| 0.2270     | 0.5385          | 43    |
| 0.2254     | 0.5125          | 44    |
| 0.2176     | 0.5510          | 45    |
| 0.2194     | 0.5384          | 46    |
| 0.2136     | 0.5186          | 47    |
| 0.2121     | 0.5356          | 48    |
| 0.2125     | 0.5151          | 49    |

Framework versions

  • Transformers 4.21.0.dev0
  • TensorFlow 2.8.0
  • Datasets 2.3.2
  • Tokenizers 0.12.1