---
license: mit
base_model: croissantllm/CroissantCool-v0.2
datasets:
  - asi/wikitext_fr
tags:
  - generated_from_trainer
  - mteb
metrics:
  - accuracy
model-index:
  - name: final
    results:
      - task:
          type: Clustering
        dataset:
          type: lyon-nlp/alloprof
          name: MTEB AlloProfClusteringP2P (fra-Latn)
          config: fra-Latn
          split: test
          revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
        metrics:
          - type: v_measure
            value: 62.345943052433995
      - task:
          type: Clustering
        dataset:
          type: lyon-nlp/alloprof
          name: MTEB AlloProfClusteringS2S (fra-Latn)
          config: fra-Latn
          split: test
          revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
        metrics:
          - type: v_measure
            value: 25.729454984521148
      - task:
          type: Reranking
        dataset:
          type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
          name: MTEB AlloprofReranking (fra-Latn)
          config: fra-Latn
          split: test
          revision: 65393d0d7a08a10b4e348135e824f385d420b0fd
        metrics:
          - type: map
            value: 26.596323297349183
          - type: mrr
            value: 26.091629657044162
      - task:
          type: Retrieval
        dataset:
          type: lyon-nlp/alloprof
          name: MTEB AlloprofRetrieval (fra-Latn)
          config: fra-Latn
          split: test
          revision: fcf295ea64c750f41fadbaa37b9b861558e1bfbd
        metrics:
          - type: map_at_1
            value: 0.345
          - type: map_at_10
            value: 0.9339999999999999
          - type: map_at_100
            value: 1.191
          - type: map_at_1000
            value: 1.3419999999999999
          - type: map_at_20
            value: 1.02
          - type: map_at_3
            value: 0.6689999999999999
          - type: map_at_5
            value: 0.753
          - type: mrr_at_1
            value: 0.345
          - type: mrr_at_10
            value: 0.9339999999999999
          - type: mrr_at_100
            value: 1.191
          - type: mrr_at_1000
            value: 1.3419999999999999
          - type: mrr_at_20
            value: 1.02
          - type: mrr_at_3
            value: 0.6689999999999999
          - type: mrr_at_5
            value: 0.753
          - type: ndcg_at_1
            value: 0.345
          - type: ndcg_at_10
            value: 1.384
          - type: ndcg_at_100
            value: 3.1510000000000002
          - type: ndcg_at_1000
            value: 9.014
          - type: ndcg_at_20
            value: 1.6920000000000002
          - type: ndcg_at_3
            value: 0.7849999999999999
          - type: ndcg_at_5
            value: 0.941
          - type: precision_at_1
            value: 0.345
          - type: precision_at_10
            value: 0.28900000000000003
          - type: precision_at_100
            value: 0.124
          - type: precision_at_1000
            value: 0.063
          - type: precision_at_20
            value: 0.20500000000000002
          - type: precision_at_3
            value: 0.374
          - type: precision_at_5
            value: 0.302
          - type: recall_at_1
            value: 0.345
          - type: recall_at_10
            value: 2.8930000000000002
          - type: recall_at_100
            value: 12.435
          - type: recall_at_1000
            value: 62.867
          - type: recall_at_20
            value: 4.102
          - type: recall_at_3
            value: 1.123
          - type: recall_at_5
            value: 1.5110000000000001
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_reviews_multi
          name: MTEB AmazonReviewsClassification (fra-Latn)
          config: fra-Latn
          split: test
          revision: 1399c76144fd37290681b995c656ef9b2e06e26d
        metrics:
          - type: accuracy
            value: 32.662
          - type: f1
            value: 32.443152253731846
      - task:
          type: Retrieval
        dataset:
          type: maastrichtlawtech/bsard
          name: MTEB BSARDRetrieval (fra-Latn)
          config: fra-Latn
          split: test
          revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
        metrics:
          - type: map_at_1
            value: 0
          - type: map_at_10
            value: 0
          - type: map_at_100
            value: 0.062
          - type: map_at_1000
            value: 0.077
          - type: map_at_20
            value: 0
          - type: map_at_3
            value: 0
          - type: map_at_5
            value: 0
          - type: mrr_at_1
            value: 0
          - type: mrr_at_10
            value: 0
          - type: mrr_at_100
            value: 0.062
          - type: mrr_at_1000
            value: 0.077
          - type: mrr_at_20
            value: 0
          - type: mrr_at_3
            value: 0
          - type: mrr_at_5
            value: 0
          - type: ndcg_at_1
            value: 0
          - type: ndcg_at_10
            value: 0
          - type: ndcg_at_100
            value: 0.484
          - type: ndcg_at_1000
            value: 1.054
          - type: ndcg_at_20
            value: 0
          - type: ndcg_at_3
            value: 0
          - type: ndcg_at_5
            value: 0
          - type: precision_at_1
            value: 0
          - type: precision_at_10
            value: 0
          - type: precision_at_100
            value: 0.027
          - type: precision_at_1000
            value: 0.008
          - type: precision_at_20
            value: 0
          - type: precision_at_3
            value: 0
          - type: precision_at_5
            value: 0
          - type: recall_at_1
            value: 0
          - type: recall_at_10
            value: 0
          - type: recall_at_100
            value: 2.703
          - type: recall_at_1000
            value: 7.6579999999999995
          - type: recall_at_20
            value: 0
          - type: recall_at_3
            value: 0
          - type: recall_at_5
            value: 0
      - task:
          type: Clustering
        dataset:
          type: lyon-nlp/clustering-hal-s2s
          name: MTEB HALClusteringS2S (fra-Latn)
          config: fra-Latn
          split: test
          revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
        metrics:
          - type: v_measure
            value: 13.77084465510841
      - task:
          type: Clustering
        dataset:
          type: mlsum
          name: MTEB MLSUMClusteringP2P (fra-Latn)
          config: fra-Latn
          split: test
          revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
        metrics:
          - type: v_measure
            value: 45.43375637260015
      - task:
          type: Clustering
        dataset:
          type: mlsum
          name: MTEB MLSUMClusteringS2S (fra-Latn)
          config: fra-Latn
          split: test
          revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
        metrics:
          - type: v_measure
            value: 45.20564648796975
      - task:
          type: Classification
        dataset:
          type: mteb/mtop_domain
          name: MTEB MTOPDomainClassification (fra-Latn)
          config: fra-Latn
          split: test
          revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
        metrics:
          - type: accuracy
            value: 73.42937676166615
          - type: f1
            value: 72.65861284500563
      - task:
          type: Classification
        dataset:
          type: mteb/mtop_intent
          name: MTEB MTOPIntentClassification (fra-Latn)
          config: fra-Latn
          split: test
          revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
        metrics:
          - type: accuracy
            value: 58.54368932038836
          - type: f1
            value: 37.51985447597095
      - task:
          type: Classification
        dataset:
          type: mteb/masakhanews
          name: MTEB MasakhaNEWSClassification (fra-Latn)
          config: fra-Latn
          split: test
          revision: 18193f187b92da67168c655c9973a165ed9593dd
        metrics:
          - type: accuracy
            value: 75.56872037914692
          - type: f1
            value: 71.99185345982795
      - task:
          type: Clustering
        dataset:
          type: masakhane/masakhanews
          name: MTEB MasakhaNEWSClusteringP2P (fra-Latn)
          config: fra-Latn
          split: test
          revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
        metrics:
          - type: v_measure
            value: 38.20382948117535
      - task:
          type: Clustering
        dataset:
          type: masakhane/masakhanews
          name: MTEB MasakhaNEWSClusteringS2S (fra-Latn)
          config: fra-Latn
          split: test
          revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
        metrics:
          - type: v_measure
            value: 26.943825642352117
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_massive_intent
          name: MTEB MassiveIntentClassification (fra-Latn)
          config: fra-Latn
          split: test
          revision: 4672e20407010da34463acc759c162ca9734bca6
        metrics:
          - type: accuracy
            value: 50.20847343644924
          - type: f1
            value: 47.32281768380685
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_massive_scenario
          name: MTEB MassiveScenarioClassification (fra-Latn)
          config: fra-Latn
          split: test
          revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
        metrics:
          - type: accuracy
            value: 52.57565568258238
          - type: f1
            value: 50.95953249242336
      - task:
          type: Retrieval
        dataset:
          type: jinaai/mintakaqa
          name: MTEB MintakaRetrieval (fra-Latn)
          config: fra-Latn
          split: test
          revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
        metrics:
          - type: map_at_1
            value: 0.164
          - type: map_at_10
            value: 0.584
          - type: map_at_100
            value: 0.8240000000000001
          - type: map_at_1000
            value: 0.9769999999999999
          - type: map_at_20
            value: 0.6669999999999999
          - type: map_at_3
            value: 0.40299999999999997
          - type: map_at_5
            value: 0.47600000000000003
          - type: mrr_at_1
            value: 0.164
          - type: mrr_at_10
            value: 0.584
          - type: mrr_at_100
            value: 0.8240000000000001
          - type: mrr_at_1000
            value: 0.9769999999999999
          - type: mrr_at_20
            value: 0.6669999999999999
          - type: mrr_at_3
            value: 0.40299999999999997
          - type: mrr_at_5
            value: 0.47600000000000003
          - type: ndcg_at_1
            value: 0.164
          - type: ndcg_at_10
            value: 0.8670000000000001
          - type: ndcg_at_100
            value: 2.443
          - type: ndcg_at_1000
            value: 8.671
          - type: ndcg_at_20
            value: 1.176
          - type: ndcg_at_3
            value: 0.47800000000000004
          - type: ndcg_at_5
            value: 0.612
          - type: precision_at_1
            value: 0.164
          - type: precision_at_10
            value: 0.18
          - type: precision_at_100
            value: 0.10200000000000001
          - type: precision_at_1000
            value: 0.064
          - type: precision_at_20
            value: 0.152
          - type: precision_at_3
            value: 0.232
          - type: precision_at_5
            value: 0.20500000000000002
          - type: recall_at_1
            value: 0.164
          - type: recall_at_10
            value: 1.802
          - type: recall_at_100
            value: 10.156
          - type: recall_at_1000
            value: 64.21
          - type: recall_at_20
            value: 3.0300000000000002
          - type: recall_at_3
            value: 0.696
          - type: recall_at_5
            value: 1.024
      - task:
          type: PairClassification
        dataset:
          type: GEM/opusparcus
          name: MTEB OpusparcusPC (fra-Latn)
          config: fra-Latn
          split: test
          revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
        metrics:
          - type: cos_sim_accuracy
            value: 73.433242506812
          - type: cos_sim_ap
            value: 86.03577758642086
          - type: cos_sim_f1
            value: 82.1602478972997
          - type: cos_sim_precision
            value: 74.12140575079871
          - type: cos_sim_recall
            value: 92.15491559086395
          - type: dot_accuracy
            value: 68.8692098092643
          - type: dot_ap
            value: 75.51070462676913
          - type: dot_f1
            value: 81.47547628698824
          - type: dot_precision
            value: 68.83561643835617
          - type: dot_recall
            value: 99.80139026812313
          - type: euclidean_accuracy
            value: 73.84196185286103
          - type: euclidean_ap
            value: 86.27910998502644
          - type: euclidean_f1
            value: 82.5531914893617
          - type: euclidean_precision
            value: 72.22635889798957
          - type: euclidean_recall
            value: 96.32571996027805
          - type: manhattan_accuracy
            value: 73.9100817438692
          - type: manhattan_ap
            value: 86.43527306280204
          - type: manhattan_f1
            value: 82.57349808265872
          - type: manhattan_precision
            value: 72.31343283582089
          - type: manhattan_recall
            value: 96.22641509433963
          - type: max_accuracy
            value: 73.9100817438692
          - type: max_ap
            value: 86.43527306280204
          - type: max_f1
            value: 82.57349808265872
      - task:
          type: PairClassification
        dataset:
          type: paws-x
          name: MTEB PawsX (fra-Latn)
          config: fra-Latn
          split: test
          revision: 8a04d940a42cd40658986fdd8e3da561533a3646
        metrics:
          - type: cos_sim_accuracy
            value: 61.550000000000004
          - type: cos_sim_ap
            value: 60.30864957174996
          - type: cos_sim_f1
            value: 62.891311994372145
          - type: cos_sim_precision
            value: 46.08247422680412
          - type: cos_sim_recall
            value: 99.00332225913621
          - type: dot_accuracy
            value: 55.35
          - type: dot_ap
            value: 47.540176633815165
          - type: dot_f1
            value: 62.20227821884707
          - type: dot_precision
            value: 45.18555667001003
          - type: dot_recall
            value: 99.77851605758582
          - type: euclidean_accuracy
            value: 61.95
          - type: euclidean_ap
            value: 60.44070441806914
          - type: euclidean_f1
            value: 62.89978678038379
          - type: euclidean_precision
            value: 46.31083202511774
          - type: euclidean_recall
            value: 98.00664451827242
          - type: manhattan_accuracy
            value: 61.9
          - type: manhattan_ap
            value: 60.52939878134297
          - type: manhattan_f1
            value: 63.034188034188034
          - type: manhattan_precision
            value: 46.45669291338583
          - type: manhattan_recall
            value: 98.00664451827242
          - type: max_accuracy
            value: 61.95
          - type: max_ap
            value: 60.52939878134297
          - type: max_f1
            value: 63.034188034188034
      - task:
          type: STS
        dataset:
          type: Lajavaness/SICK-fr
          name: MTEB SICKFr (fra-Latn)
          config: fra-Latn
          split: test
          revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
        metrics:
          - type: cos_sim_pearson
            value: 55.697943925847646
          - type: cos_sim_spearman
            value: 53.33151992866752
          - type: euclidean_pearson
            value: 54.32882764397367
          - type: euclidean_spearman
            value: 53.54968438609837
          - type: manhattan_pearson
            value: 54.56634524641888
          - type: manhattan_spearman
            value: 53.81344727168701
      - task:
          type: STS
        dataset:
          type: mteb/sts22-crosslingual-sts
          name: MTEB STS22 (fra-Latn)
          config: fra-Latn
          split: test
          revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
        metrics:
          - type: cos_sim_pearson
            value: 22.771197036286605
          - type: cos_sim_spearman
            value: 60.29016180301653
          - type: euclidean_pearson
            value: 35.31319988418939
          - type: euclidean_spearman
            value: 59.61398871828641
          - type: manhattan_pearson
            value: 36.10315029818106
          - type: manhattan_spearman
            value: 60.5122301133988
      - task:
          type: STS
        dataset:
          type: mteb/stsb_multi_mt
          name: MTEB STSBenchmarkMultilingualSTS (fra-Latn)
          config: fra-Latn
          split: test
          revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
        metrics:
          - type: cos_sim_pearson
            value: 47.730796921644384
          - type: cos_sim_spearman
            value: 49.54059034135741
          - type: euclidean_pearson
            value: 49.48474815018905
          - type: euclidean_spearman
            value: 50.71533884079761
          - type: manhattan_pearson
            value: 50.10488858533032
          - type: manhattan_spearman
            value: 51.1375710610132
      - task:
          type: Summarization
        dataset:
          type: lyon-nlp/summarization-summeval-fr-p2p
          name: MTEB SummEvalFr (fra-Latn)
          config: fra-Latn
          split: test
          revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
        metrics:
          - type: cos_sim_pearson
            value: 29.102661066592816
          - type: cos_sim_spearman
            value: 29.615000554218955
          - type: dot_pearson
            value: 19.77690299595119
          - type: dot_spearman
            value: 19.112834848310158
      - task:
          type: Reranking
        dataset:
          type: lyon-nlp/mteb-fr-reranking-syntec-s2p
          name: MTEB SyntecReranking (fra-Latn)
          config: fra-Latn
          split: test
          revision: daf0863838cd9e3ba50544cdce3ac2b338a1b0ad
        metrics:
          - type: map
            value: 37.372655122655125
          - type: mrr
            value: 37.28174603174604
      - task:
          type: Retrieval
        dataset:
          type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
          name: MTEB SyntecRetrieval (fra-Latn)
          config: fra-Latn
          split: test
          revision: 19661ccdca4dfc2d15122d776b61685f48c68ca9
        metrics:
          - type: map_at_1
            value: 2
          - type: map_at_10
            value: 6.816999999999999
          - type: map_at_100
            value: 9.522
          - type: map_at_1000
            value: 9.522
          - type: map_at_20
            value: 8.402
          - type: map_at_3
            value: 4.167
          - type: map_at_5
            value: 4.867
          - type: mrr_at_1
            value: 2
          - type: mrr_at_10
            value: 6.816999999999999
          - type: mrr_at_100
            value: 9.522
          - type: mrr_at_1000
            value: 9.522
          - type: mrr_at_20
            value: 8.402
          - type: mrr_at_3
            value: 4.167
          - type: mrr_at_5
            value: 4.867
          - type: ndcg_at_1
            value: 2
          - type: ndcg_at_10
            value: 10.940999999999999
          - type: ndcg_at_100
            value: 25.96
          - type: ndcg_at_1000
            value: 25.96
          - type: ndcg_at_20
            value: 16.742
          - type: ndcg_at_3
            value: 4.893
          - type: ndcg_at_5
            value: 6.141
          - type: precision_at_1
            value: 2
          - type: precision_at_10
            value: 2.5
          - type: precision_at_100
            value: 1
          - type: precision_at_1000
            value: 0.1
          - type: precision_at_20
            value: 2.4
          - type: precision_at_3
            value: 2.333
          - type: precision_at_5
            value: 2
          - type: recall_at_1
            value: 2
          - type: recall_at_10
            value: 25
          - type: recall_at_100
            value: 100
          - type: recall_at_1000
            value: 100
          - type: recall_at_20
            value: 48
          - type: recall_at_3
            value: 7.000000000000001
          - type: recall_at_5
            value: 10
      - task:
          type: Retrieval
        dataset:
          type: jinaai/xpqa
          name: MTEB XPQARetrieval (fra-Latn)
          config: fra-Latn
          split: test
          revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
        metrics:
          - type: map_at_1
            value: 9.437
          - type: map_at_10
            value: 13.574
          - type: map_at_100
            value: 14.265
          - type: map_at_1000
            value: 14.527999999999999
          - type: map_at_20
            value: 13.834
          - type: map_at_3
            value: 12.277000000000001
          - type: map_at_5
            value: 12.936
          - type: mrr_at_1
            value: 14.285999999999998
          - type: mrr_at_10
            value: 18.269
          - type: mrr_at_100
            value: 18.991
          - type: mrr_at_1000
            value: 19.15
          - type: mrr_at_20
            value: 18.598
          - type: mrr_at_3
            value: 17
          - type: mrr_at_5
            value: 17.681
          - type: ndcg_at_1
            value: 14.285999999999998
          - type: ndcg_at_10
            value: 16.447
          - type: ndcg_at_100
            value: 20.617
          - type: ndcg_at_1000
            value: 27.589000000000002
          - type: ndcg_at_20
            value: 17.455000000000002
          - type: ndcg_at_3
            value: 14.540000000000001
          - type: ndcg_at_5
            value: 15.084
          - type: precision_at_1
            value: 14.285999999999998
          - type: precision_at_10
            value: 3.698
          - type: precision_at_100
            value: 0.734
          - type: precision_at_1000
            value: 0.18
          - type: precision_at_20
            value: 2.163
          - type: precision_at_3
            value: 8.366999999999999
          - type: precision_at_5
            value: 5.928
          - type: recall_at_1
            value: 9.437
          - type: recall_at_10
            value: 20.16
          - type: recall_at_100
            value: 38.527
          - type: recall_at_1000
            value: 85.102
          - type: recall_at_20
            value: 23.632
          - type: recall_at_3
            value: 14.562
          - type: recall_at_5
            value: 16.8
language:
  - fr
---

# llm2vec-croissant-mntp

This model is a fine-tuned version of [croissantllm/CroissantCool-v0.2](https://huggingface.co/croissantllm/CroissantCool-v0.2) on the asi/wikitext_fr dataset. It achieves the following results on the evaluation set:

- Loss: 1.8867
- Accuracy: 0.6078
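
A minimal usage sketch (not an official snippet) is shown below. It assumes the checkpoint ships as full merged weights under a repository id such as `AdrienB134/llm2vec-croissant-mntp` (assumed here for illustration) and loads with the plain `transformers` Auto classes; the LLM2Vec recipe normally enables bidirectional attention through its own wrapper, which this sketch does not do.

```python
# Hypothetical usage sketch, not an official snippet. The repo id is assumed,
# and the checkpoint is assumed to contain merged weights (not a PEFT adapter).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "AdrienB134/llm2vec-croissant-mntp"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # the base tokenizer may lack a pad token

model = AutoModel.from_pretrained(model_id)
model.eval()

sentences = ["CroissantLLM est un modèle de langue bilingue français-anglais."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_dim)

# Mean-pool over non-padding tokens to get one embedding per sentence.
mask = batch["attention_mask"].unsqueeze(-1).to(hidden.dtype)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```

Mean pooling is only one reasonable choice of sentence representation; the MTEB(French) scores listed in the metadata may have been produced with a different pooling or wrapping setup.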

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
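
As a reference point, the same settings expressed as Hugging Face `TrainingArguments` (the `generated_from_trainer` tag indicates the 🤗 Trainer was used; the output directory and the evaluation/logging cadences are inferred from the results table below and are otherwise assumptions):

```python
# Sketch only: reproduces the reported hyperparameters. The actual training
# script, data collator wiring, and paths are not documented in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="final",        # matches the model-index name; assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    adam_beta1=0.9,            # Adam betas / epsilon as reported
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=100,            # consistent with the 100-step cadence in the results table
    logging_steps=500,         # consistent with the 500-step training-loss entries
)
```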

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log        | 0.0884 | 100  | 4.7866          | 0.1990   |
| No log        | 0.1768 | 200  | 4.0496          | 0.3309   |
| No log        | 0.2653 | 300  | 3.6525          | 0.3779   |
| No log        | 0.3537 | 400  | 3.2410          | 0.4258   |
| 3.9116        | 0.4421 | 500  | 3.6305          | 0.3912   |
| 3.9116        | 0.5305 | 600  | 3.1770          | 0.4406   |
| 3.9116        | 0.6189 | 700  | 2.4478          | 0.5199   |
| 3.9116        | 0.7073 | 800  | 2.2383          | 0.5508   |
| 3.9116        | 0.7958 | 900  | 2.1547          | 0.5635   |
| 2.4568        | 0.8842 | 1000 | 2.0868          | 0.5759   |
| 2.4568        | 0.9726 | 1100 | 2.0399          | 0.5820   |
| 2.4568        | 1.0610 | 1200 | 2.0102          | 0.5873   |
| 2.4568        | 1.1494 | 1300 | 1.9805          | 0.5897   |
| 2.4568        | 1.2378 | 1400 | 1.9590          | 0.5955   |
| 1.9305        | 1.3263 | 1500 | 1.9381          | 0.5982   |
| 1.9305        | 1.4147 | 1600 | 1.9249          | 0.5995   |
| 1.9305        | 1.5031 | 1700 | 1.9223          | 0.6017   |
| 1.9305        | 1.5915 | 1800 | 1.9091          | 0.6037   |
| 1.9305        | 1.6799 | 1900 | 1.9038          | 0.6042   |
| 1.8511        | 1.7683 | 2000 | 1.8982          | 0.6045   |
| 1.8511        | 1.8568 | 2100 | 1.8924          | 0.6060   |
| 1.8511        | 1.9452 | 2200 | 1.8844          | 0.6072   |
| 1.8511        | 2.0336 | 2300 | 1.8873          | 0.6087   |
| 1.8511        | 2.1220 | 2400 | 1.8889          | 0.6068   |
| 1.8197        | 2.2104 | 2500 | 1.8848          | 0.6080   |
| 1.8197        | 2.2989 | 2600 | 1.8736          | 0.6091   |
| 1.8197        | 2.3873 | 2700 | 1.8858          | 0.6072   |
| 1.8197        | 2.4757 | 2800 | 1.8814          | 0.6088   |
| 1.8197        | 2.5641 | 2900 | 1.8649          | 0.6103   |
| 1.8116        | 2.6525 | 3000 | 1.8647          | 0.6091   |
| 1.8116        | 2.7409 | 3100 | 1.8755          | 0.6101   |
| 1.8116        | 2.8294 | 3200 | 1.8755          | 0.6099   |
| 1.8116        | 2.9178 | 3300 | 1.8867          | 0.6078   |

### Framework versions

- Transformers 4.40.2
- PyTorch 2.0.1+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
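
A quick, illustrative check that a local environment matches these versions:

```python
# Prints installed versions; compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("transformers", transformers.__version__)  # expected 4.40.2
print("torch       ", torch.__version__)         # expected 2.0.1+cu118
print("datasets    ", datasets.__version__)      # expected 2.19.1
print("tokenizers  ", tokenizers.__version__)    # expected 0.19.1
```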