Lemone-Embed: A Series of Fine-Tuned Embedding Models for French Taxation

This series comprises seven models: three base models of different sizes trained for one epoch, three models trained for two epochs making up the Boost series, and a Pro model with a non-RoBERTa architecture.

This Sentence Transformers model, designed specifically for French taxation, has been fine-tuned on a dataset of 43 million tokens that blends semi-synthetic and fully synthetic data generated by GPT-4 Turbo and Llama 3.1 70B, further refined through evol-instruction tuning and manual curation.

The model is tailored to meet the specific demands of information retrieval across large-scale tax-related corpora, supporting the implementation of production-ready Retrieval-Augmented Generation (RAG) applications. Its primary purpose is to enhance the efficiency and accuracy of legal processes in the taxation domain, with an emphasis on delivering consistent performance in real-world settings, while also contributing to advancements in legal natural language processing research.

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model (finetuned from): intfloat/multilingual-e5-large
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Number of Parameters: ~560M (F32)
  • Developed by: Louis Brulé Naudet
  • Funded by: Microsoft for Startups
  • Shared by: Louis Brulé Naudet
  • Language(s) (NLP): French
  • License: Apache 2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
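
For readers who want to see what the Pooling and Normalize modules do, the following sketch reproduces the same pipeline with the plain transformers API. It is illustrative only: the masked mean pooling and L2 normalization mirror the configuration above, and the example texts are made up.

from transformers import AutoModel, AutoTokenizer
import torch
import torch.nn.functional as F

tokenizer = AutoTokenizer.from_pretrained("louisbrulenaudet/lemone-embed-l")
encoder = AutoModel.from_pretrained("louisbrulenaudet/lemone-embed-l")

texts = ["query: taxe sur la valeur ajoutée", "passage: La TVA est un impôt indirect sur la consommation."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True)
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# L2 normalization, so dot product and cosine similarity coincide (Normalize module)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 1024])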

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("louisbrulenaudet/lemone-embed-l")
# Run inference
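# Queries and passages carry the "query: " / "passage: " prefixes inherited from multilingual-e5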
sentences = [
    "query: Décrivez avec précision les étapes détaillées requises pour traiter les réclamations collectives résultant de désastres agricoles comme définies dans l'article R*198-2 du Livre des procédures fiscales.",
    "passage: Selon l'article R*198-2 du Livre des procédures fiscales, le traitement des réclamations collectives en cas de catastrophes naturelles impactant les cultures agricoles, incluant des phénomènes tels que la grêle ou les inondations, exige la collaboration de plusieurs entités administratives. Initialement, deux commissaires sont nommés par l'administration fiscale pour superviser le processus. Ils sont assistés par un délégué de l'administration des impôts. Avant toute action, le maire de la commune affectée est notifié au moins dix jours avant l'inspection prévue, et il est chargé de communiquer cette date aux résidents via des affichages publics. Les agriculteurs affectés doivent alors rapporter leurs pertes à la mairie avant la réalisation d'un constat officiel par l'inspecteur des impôts, qui sera consigné dans un procès-verbal. Une fois ce document clôturé, aucune réclamation supplémentaire n'est acceptée.",
    "passage: Selon les dispositions de l'article R*196-6 du Livre des procédures fiscales, il est attribué aux sujets fiscaux un intervalle précisément défini pour élever des réclamations à l'égard des taxes, cotisations et autres prélèvements relatifs aux céréales et leurs transformés. Ce délai se prolonge jusqu'à la fin de la campagne agricole suivante celle au cours de laquelle l'avis de mise en recouvrement de la taxe a été notifié ou le règlement de l'imposition contestée effectué, permettant ainsi aux parties prenantes de se prévaloir de leurs prérogatives contestataires avec une certitude temporelle.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
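
As a follow-up, the first row of the similarity matrix can be used to rank the two passages against the query. A minimal sketch reusing the objects computed above:

# Row 0 holds the query's similarity to itself and to both passages
scores = similarities[0, 1:]
best = int(scores.argmax()) + 1  # shift by 1 to index back into `sentences`
print(best, round(float(scores.max()), 4))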

Evaluation

Metrics

Information Retrieval

Scores below are self-reported results on the Lemone information retrieval evaluation set.

Metric Value
cosine_accuracy@1 0.9756
cosine_accuracy@3 0.9936
cosine_accuracy@5 0.9955
cosine_accuracy@10 0.9981
cosine_precision@1 0.9756
cosine_precision@3 0.3312
cosine_precision@5 0.1991
cosine_precision@10 0.0998
cosine_recall@1 0.9756
cosine_recall@3 0.9936
cosine_recall@5 0.9955
cosine_recall@10 0.9981
cosine_ndcg@10 0.988
cosine_mrr@10 0.9847
cosine_map@100 0.9847
dot_accuracy@1 0.9756
dot_accuracy@3 0.9936
dot_accuracy@5 0.9955
dot_accuracy@10 0.9981
dot_precision@1 0.9756
dot_precision@3 0.3312
dot_precision@5 0.1991
dot_precision@10 0.0998
dot_recall@1 0.9756
dot_recall@3 0.9936
dot_recall@5 0.9955
dot_recall@10 0.9981
dot_ndcg@10 0.988
dot_mrr@10 0.9847
dot_map@100 0.9847
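
Metrics of this kind are typically produced with the InformationRetrievalEvaluator from Sentence Transformers. The snippet below is a hedged sketch on a toy query/corpus pair; the actual Lemone evaluation set is not reproduced here, and the ids and texts are purely illustrative.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("louisbrulenaudet/lemone-embed-l")

# Illustrative evaluation data: id -> text mappings plus relevance judgments
queries = {"q1": "query: Quel est le délai de réclamation pour les taxes sur les céréales ?"}
corpus = {
    "d1": "passage: Le délai expire à la fin de la campagne suivant celle de la mise en recouvrement.",
    "d2": "passage: Les réclamations collectives sont instruites par deux commissaires.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="lemone-ir")
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100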

Training Details

Training Dataset

  • Size: 296,234 training samples
  • Columns: query, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • query (string): min 24 tokens, mean 54.2 tokens, max 179 tokens
    • positive (string): min 75 tokens, mean 182.28 tokens, max 332 tokens
    • negative (string): min 53 tokens, mean 190.2 tokens, max 456 tokens
  • Loss: CachedGISTEmbedLoss with these parameters:
    {'guide': SentenceTransformer(
      (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel 
      (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
      (2): Normalize()
    ), 'temperature': 0.01}
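
Assuming the standard Sentence Transformers v3 training API, the loss above can be instantiated roughly as follows. The guide model identifier is not given in this card, so the path below is a placeholder; only the temperature value is taken from the configuration above.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedGISTEmbedLoss

model = SentenceTransformer("intfloat/multilingual-e5-large")
guide = SentenceTransformer("path/to/guide-model")  # placeholder: the card does not name the guide model

loss = CachedGISTEmbedLoss(model, guide=guide, temperature=0.01)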
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 128
  • learning_rate: 1e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 1.732 kWh
  • Carbon Emitted: 0.639 kg of CO2
  • Hours Used: 3.122 hours
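
Figures like these are typically collected by wrapping the training run in a CodeCarbon tracker; a minimal sketch:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # writes an emissions.csv report by default
tracker.start()
# ... training loop ...
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent emitted
print(emissions_kg)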

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA H100 NVL
  • CPU Model: AMD EPYC 9V84 96-Core Processor
  • RAM Size: 314.69 GB

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.33.0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

If you use this code in your research, please use the following BibTeX entry.

@misc{louisbrulenaudet2024,
  author =       {Louis Brulé Naudet},
  title =        {Lemone-Embed: A Series of Fine-Tuned Embedding Models for French Taxation},
  year =         {2024},
  howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/lemone-embed-l}},
}

Feedback

If you have any feedback, please reach out at louisbrulenaudet@icloud.com.

