dataset_info:
  features:
    - name: text
      dtype: string
    - name: title_main
      dtype: string
    - name: id_sub
      dtype: string
    - name: url_sourcepage
      dtype: string
    - name: date_publication
      dtype: string
    - name: hash
      dtype: string
    - name: lemone_pro_embeddings
      sequence: float64
  splits:
    - name: train
      num_bytes: 187013397
      num_examples: 16073
  download_size: 119486532
  dataset_size: 187013397
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
task_categories:
  - question-answering
language:
  - fr
tags:
  - tax
  - legal
  - fiscalite
  - droit
  - taxation
pretty_name: Lemone-embedded dataset for French tax RAG over legal documents
size_categories:
  - 10K<n<100K

Dataset Description

Lemone-embedded: a pre-built embeddings dataset for French taxation.

This dataset distributes the embeddings generated by the Lemone-embed-pro model, enabling large-scale use of the model's output even for the GPU-poor.

This sentence-transformers model, specifically designed for French taxation, has been fine-tuned on a dataset comprising 43 million tokens: a blend of semi-synthetic and fully synthetic data generated by GPT-4 Turbo and Llama 3.1 70B, further refined through evol-instruction tuning and manual curation.

The model is tailored to meet the specific demands of information retrieval across large-scale tax-related corpora, supporting the implementation of production-ready Retrieval-Augmented Generation (RAG) applications. Its primary purpose is to enhance the efficiency and accuracy of legal processes in the taxation domain, with an emphasis on delivering consistent performance in real-world settings, while also contributing to advancements in legal natural language processing research.

This is a sentence-transformers model fine-tuned from Alibaba-NLP/gte-multilingual-base. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
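Because the model emits plain 768-dimensional vectors, the pre-computed embeddings can also be searched without any vector store at all. A minimal cosine-similarity sketch with NumPy, using random toy vectors in place of real `lemone_pro_embeddings` values:

```python
import numpy as np

np.random.seed(0)

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    # Cosine similarity reduces to a dot product after L2 normalisation
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return np.argsort(c @ q)[::-1][:k]

corpus = np.random.rand(12, 768)                 # stand-in for `lemone_pro_embeddings`
query = corpus[4] + 0.01 * np.random.rand(768)   # near-duplicate of document 4
print(top_k(query, corpus, k=1))                 # document 4 should rank first
```

This is what the vector store does under the hood; ChromaDB simply adds persistence, metadata filtering, and approximate search on top.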

Usage with ChromaDB

We recommend integrating the dataset through a vector store to build an efficient RAG pipeline. Here is a code extract for building such a database with ChromaDB:

import chromadb
import polars as pl

from chromadb.config import Settings
from chromadb.utils import embedding_functions
from torch.cuda import is_available

# Persist the collection on disk at ./chroma.db
client = chromadb.PersistentClient(
    path="./chroma.db",
    settings=Settings(anonymized_telemetry=False)
)

# Embedding function used at query time; document vectors come precomputed
sentence_transformer_ef = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="louisbrulenaudet/lemone-embed-pro",
    device="cuda" if is_available() else "cpu",
    trust_remote_code=True
)

collection = client.get_or_create_collection(
    name="tax",
    embedding_function=sentence_transformer_ef
)

# Stream the pre-embedded dataset from the Hub, dropping rows without text
dataframe = pl.scan_parquet(
    "hf://datasets/louisbrulenaudet/lemone-docs-embedded/data/train-00000-of-00001.parquet"
).filter(
    pl.col("text").is_not_null()
).collect()

# Load the precomputed vectors directly; nothing is re-embedded here
collection.add(
    embeddings=dataframe["lemone_pro_embeddings"].to_list(),
    documents=dataframe["text"].to_list(),
    metadatas=dataframe.drop(["lemone_pro_embeddings", "text"]).to_dicts(),
    ids=[str(i) for i in range(dataframe.shape[0])]
)
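The positional ids above are fine for a one-shot load, but deterministic ids make incremental updates safer. Since the dataset already ships a SHA-256 `hash` column per document, the same value can serve as the id. A sketch, assuming `hash` values are unique across rows:

```python
import hashlib

def stable_id(text: str) -> str:
    # SHA-256 of the document text; matches how the dataset's `hash` column is built
    return hashlib.sha256(text.encode()).hexdigest()

print(stable_id("Article 256 du Code général des impôts")[:16])
```

With this scheme, `ids=dataframe["hash"].to_list()` replaces the positional `range`, and reloading through `collection.upsert` keeps one entry per document instead of duplicating.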

Here is the code used to build this dataset, should you wish to reproduce it:

import hashlib

from datetime import datetime
from typing import List

import chromadb
import polars as pl

from chromadb.config import Settings
from chromadb.utils import embedding_functions
from torch.cuda import is_available

client = chromadb.Client(
    settings=Settings(anonymized_telemetry=False)
)

sentence_transformer_ef = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="louisbrulenaudet/lemone-embed-pro",
    device="cuda" if is_available() else "cpu",
    trust_remote_code=True
)

collection = client.get_or_create_collection(
    name="tax",
    embedding_function=sentence_transformer_ef
)

# Normalise the BOFiP dump to the common schema (text, title_main, id_sub, ...)
bofip_dataframe = pl.scan_parquet(
    "hf://datasets/louisbrulenaudet/bofip/data/train-00000-of-00001.parquet"
).with_columns(
    [
        (
            pl.lit("Bulletin officiel des finances publiques - impôts").alias(
                "title_main"
            )
        ),
        (
            pl.col("debut_de_validite")
            .str.strptime(pl.Date, format="%Y-%m-%d")
            .dt.strftime("%Y-%m-%d 00:00:00")
        ).alias("date_publication"),
        (
            pl.col("contenu")
            .map_elements(lambda x: hashlib.sha256(str(x).encode()).hexdigest(), return_dtype=pl.Utf8)
            .alias("hash")
        )
    ]
).rename(
    {
        "contenu": "text",
        "permalien": "url_sourcepage",
        "identifiant_juridique": "id_sub",
    }
).select(
    [
        "text",
        "title_main",
        "id_sub",
        "url_sourcepage",
        "date_publication",
        "hash"
    ]
)

# Consolidated French tax codes, one parquet file per code
books: List[str] = [
    "hf://datasets/louisbrulenaudet/code-douanes/data/train-00000-of-00001.parquet",
    "hf://datasets/louisbrulenaudet/code-impots/data/train-00000-of-00001.parquet",
    "hf://datasets/louisbrulenaudet/code-impots-annexe-i/data/train-00000-of-00001.parquet",
    "hf://datasets/louisbrulenaudet/code-impots-annexe-ii/data/train-00000-of-00001.parquet",
    "hf://datasets/louisbrulenaudet/code-impots-annexe-iii/data/train-00000-of-00001.parquet",
    "hf://datasets/louisbrulenaudet/code-impots-annexe-iv/data/train-00000-of-00001.parquet",
    "hf://datasets/louisbrulenaudet/code-impositions-biens-services/data/train-00000-of-00001.parquet",
    "hf://datasets/louisbrulenaudet/livre-procedures-fiscales/data/train-00000-of-00001.parquet"
]

# Normalise the LEGI articles to the same schema
legi_dataframe = pl.concat(
    [
        pl.scan_parquet(
            book
        ) for book in books
    ]
).with_columns(
    [
        (
            pl.lit("https://www.legifrance.gouv.fr/codes/article_lc/")
            .add(pl.col("id"))
            .alias("url_sourcepage")
        ),
        (
            pl.col("dateDebut")
            .cast(pl.Int64)
            .map_elements(
                lambda x: datetime.fromtimestamp(x / 1000).strftime("%Y-%m-%d %H:%M:%S"),
                return_dtype=pl.Utf8
            )
            .alias("date_publication")
        ),
        (
            pl.col("texte")
            .map_elements(lambda x: hashlib.sha256(str(x).encode()).hexdigest(), return_dtype=pl.Utf8)
            .alias("hash")
        )
    ]
).rename(
    {
        "texte": "text",
        "num": "id_sub",
    }
).select(
    [
        "text",
        "title_main",
        "id_sub",
        "url_sourcepage",
        "date_publication",
        "hash"
    ]
)
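The `dateDebut` conversion above is worth isolating: the raw value is a Unix timestamp in milliseconds, and `fromtimestamp` renders it in the machine's local timezone (a UTC-based variant may be preferable for reproducibility). A self-contained sketch of the same helper:

```python
from datetime import datetime

def ms_epoch_to_str(ms: int) -> str:
    # LEGI's `dateDebut` field is a Unix timestamp in milliseconds
    return datetime.fromtimestamp(ms / 1000).strftime("%Y-%m-%d %H:%M:%S")

print(ms_epoch_to_str(1704067200000))  # midnight 2024-01-01 UTC, shown in local time
```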

print("Starting embeddings production...")

dataframe = pl.concat(
    [
        bofip_dataframe,
        legi_dataframe
    ]
).filter(
    pl.col(
        "text"
    ).is_not_null()
).with_columns(
    pl.col("text").map_elements(
        lambda x: sentence_transformer_ef(
            [x]
        )[0].tolist(),
        return_dtype=pl.List(pl.Float64)
    ).alias("lemone_pro_embeddings")
).collect()

Citation

If you use this dataset in your research, please cite it with the following BibTeX entry.

@misc{louisbrulenaudet2024,
  author =       {Louis Brulé Naudet},
  title =        {Lemone-Embed: A Series of Fine-Tuned Embedding Models for French Taxation},
  year =         {2024},
  howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/lemone-embed-pro}},
}

Feedback

If you have any feedback, please reach out at louisbrulenaudet@icloud.com.