detr-r101-cd45rb-8ah-4l

This model is a fine-tuned version of facebook/detr-resnet-101 on the cd45rb_nan_xywh dataset. It achieves the following results on the evaluation set:

  • Loss: 1.7442
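DETR-family models such as this one predict bounding boxes in normalized (center-x, center-y, width, height) format; downstream use typically needs absolute (x_min, y_min, x_max, y_max) pixel coordinates. A minimal conversion sketch in plain Python (the connection to the `_xywh` suffix in the dataset name is an assumption; the card does not document the dataset's box format):

```python
def cxcywh_to_xyxy(box, img_w, img_h):
    """Convert a normalized (cx, cy, w, h) box, as predicted by DETR,
    to absolute (x_min, y_min, x_max, y_max) pixel coordinates."""
    cx, cy, w, h = box
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    x_max = (cx + w / 2) * img_w
    y_max = (cy + h / 2) * img_h
    return (x_min, y_min, x_max, y_max)

# A centered box covering half the image in each dimension:
print(cxcywh_to_xyxy((0.5, 0.5, 0.5, 0.5), 800, 600))  # (200.0, 150.0, 600.0, 450.0)
```

In practice, `DetrImageProcessor.post_process_object_detection` performs this rescaling for you when given the original image size.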

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 25
  • mixed_precision_training: Native AMP
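The hyperparameters above correspond to a standard `transformers.TrainingArguments` configuration. A sketch of that mapping as plain Python keyword arguments (the field names follow the `TrainingArguments` API; the card does not show the actual training script, so the exact setup is an assumption):

```python
# Hyperparameters from the card, expressed as the keyword arguments one
# would pass to transformers.TrainingArguments.
training_args = dict(
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    fp16=True,               # "Native AMP" mixed-precision training
)
print(training_args["learning_rate"])  # 1e-05
```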

Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.1054        | 1.0   | 4606   | 2.2520          |
| 2.7633        | 2.0   | 9212   | 2.1251          |
| 2.6589        | 3.0   | 13818  | 2.0489          |
| 2.5832        | 4.0   | 18424  | 2.0000          |
| 2.5369        | 5.0   | 23030  | 1.9442          |
| 2.4955        | 6.0   | 27636  | 1.9363          |
| 2.4615        | 7.0   | 32242  | 1.8860          |
| 2.4326        | 8.0   | 36848  | 1.8732          |
| 2.404         | 9.0   | 41454  | 1.8580          |
| 2.3915        | 10.0  | 46060  | 1.8536          |
| 2.4311        | 11.0  | 50666  | 1.8881          |
| 2.4175        | 12.0  | 55272  | 1.8443          |
| 2.3763        | 13.0  | 59878  | 1.8380          |
| 2.371         | 14.0  | 64484  | 1.8314          |
| 2.3427        | 15.0  | 69090  | 1.8255          |
| 2.3415        | 16.0  | 73696  | 1.8258          |
| 2.3189        | 17.0  | 78302  | 1.7938          |
| 2.3049        | 18.0  | 82908  | 1.7959          |
| 2.2884        | 19.0  | 87514  | 1.7827          |
| 2.2769        | 20.0  | 92120  | 1.7849          |
| 2.2728        | 21.0  | 96726  | 1.7581          |
| 2.2547        | 22.0  | 101332 | 1.7553          |
| 2.249         | 23.0  | 105938 | 1.7521          |
| 2.2476        | 24.0  | 110544 | 1.7505          |
| 2.2371        | 25.0  | 115150 | 1.7442          |
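The step counts in the table are internally consistent: each epoch adds exactly 4606 steps. A quick arithmetic check (the implied training-set size of about 18,424 samples assumes no gradient accumulation; the card does not state the dataset size):

```python
steps_per_epoch = 4606      # from the "Step" column: 4606 steps after epoch 1
train_batch_size = 4        # from the hyperparameters above
num_epochs = 25

# Each epoch adds exactly 4606 steps, so the final step count is:
total_steps = steps_per_epoch * num_epochs
print(total_steps)  # 115150, matching the last table row

# With no gradient accumulation, this implies roughly this many samples:
approx_train_samples = steps_per_epoch * train_batch_size
print(approx_train_samples)  # 18424
```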

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.0.1
  • Datasets 2.12.0
  • Tokenizers 0.13.3