ONNXConfig for all

non-profit

AI & ML interests

Make all hub models available for conversion to ONNX format.

Recent Activity

OWG's activity

mkluczek posted an update 24 days ago
First Global and Dense Open Embedding Dataset of Earth! 🌍 🤗

Introducing the Major TOM embeddings dataset, created in collaboration with CloudFerro S.A. 🔶 and Φ-lab at the European Space Agency (ESA) 🛰️. Together with @mikonvergence and Jędrzej S. Bojanowski, we present the first open-access dataset of Copernicus embeddings, offering dense, global coverage across the full acquisition areas of Sentinel-1 and Sentinel-2 sensors.

💡 Highlights:
📊 Data: Over 8 million Sentinel-1 & Sentinel-2 images processed, distilling insights from 9.368 trillion pixels of raw data.
🧠 Models: Foundation models include SigLIP, DINOv2, and SSL4EO.
📦 Scale: 62 TB of raw satellite data processed into 170M+ embeddings.

This project delivers open and free vectorized expansions of Major-TOM/README datasets, setting a new standard for embedding releases and enabling lightweight, scalable ingestion of Earth Observation (EO) data for countless applications.

🤗 Explore the datasets:
Major-TOM/Core-S2L1C-SSL4EO
Major-TOM/Core-S1RTC-SSL4EO
Major-TOM/Core-S2RGB-DINOv2
Major-TOM/Core-S2RGB-SigLIP

📖 Check out the paper: Global and Dense Embeddings of Earth: Major TOM Floating in the Latent Space (2412.05600)
💻 Code notebook: https://github.com/ESA-PhiLab/Major-TOM/blob/main/05-Generate-Major-TOM-Embeddings.ipynb
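For quick exploration, the embedding datasets can be streamed with the datasets library. A minimal sketch, assuming a "train" split (check the dataset card for the actual schema):

from datasets import load_dataset

# Stream one of the embedding datasets without downloading it in full;
# the "train" split name is an assumption.
ds = load_dataset("Major-TOM/Core-S2RGB-SigLIP", split="train", streaming=True)

# Inspect the first record to discover the actual column layout.
print(next(iter(ds)).keys())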

louisbrulenaudet posted an update about 2 months ago
I've published a new dataset to simplify model merging 🤗

This dataset facilitates the search for compatible architectures for model merging with @arcee_ai's mergekit, streamlining the automated search for high-performance merges 📖

Dataset: louisbrulenaudet/mergekit-configs
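A minimal sketch of pulling the dataset for inspection, assuming a "train" split (check the dataset card):

from datasets import load_dataset

# Load the merge-configuration dataset; the "train" split name is assumed.
configs = load_dataset("louisbrulenaudet/mergekit-configs", split="train")
print(configs[0])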

louisbrulenaudet posted an update 2 months ago
Introducing Lemone-router, a series of classification models designed to produce an optimal multi-agent system for different branches of tax law.

Trained on a base of 49k lines comprising synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B (further refined through evol-instruct tuning and manual curation) together with authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts:

label2id = {
    "Bénéfices professionnels": 0,
    "Contrôle et contentieux": 1,
    "Dispositifs transversaux": 2,
    "Fiscalité des entreprises": 3,
    "Patrimoine et enregistrement": 4,
    "Revenus particuliers": 5,
    "Revenus patrimoniaux": 6,
    "Taxes sur la consommation": 7
}

id2label = {
    0: "Bénéfices professionnels",
    1: "Contrôle et contentieux",
    2: "Dispositifs transversaux",
    3: "Fiscalité des entreprises",
    4: "Patrimoine et enregistrement",
    5: "Revenus particuliers",
    6: "Revenus patrimoniaux",
    7: "Taxes sur la consommation"
}
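A minimal sketch of how these mappings surface at inference time through a text-classification pipeline; the checkpoint id below is hypothetical, see the collection link for the actual models:

from transformers import pipeline

# Hypothetical checkpoint id, for illustration only.
router = pipeline("text-classification", model="louisbrulenaudet/lemone-router")

# The returned label is one of the 8 categories from label2id above.
print(router("Quelles sont les modalités de déduction de la TVA ?"))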

It achieves the following results on the evaluation set:
- Loss: 0.4734
- Accuracy: 0.9191

Link to the collection: louisbrulenaudet/lemone-router-671cce21d6410f3570514762

louisbrulenaudet posted an update 3 months ago
🚨 I have $3,500 in Azure credits, including access to an H100 (96 GB), expiring on November 12, 2024.

I won't be able to use it all myself, so I'm reaching out to the @huggingface community: are there any open-source projects with data ready for some compute power?

Let's collaborate and make the most of it together 🔗

louisbrulenaudet posted an update 3 months ago
My biggest release of the year: a series of 7 specialized embedding models for information retrieval within tax documents is now available for free on Hugging Face 🤗

These new models aim to offer an open-source alternative for in-domain semantic search over large text corpora, improving RAG systems and context addition for large language models.

Trained on more than 43 million tax tokens derived from semi-synthetic and raw-synthetic data, enriched by various methods (in particular MSFT's evol-instruct by @intfloat), and corrected by humans, this project is the fruit of hundreds of hours of work and is the culmination of a global effort to open up legal technologies that has only just begun.

A big thank you to Microsoft for Startups for giving me access to state-of-the-art infrastructure to train these models, and to @julien-c, @clem 🤗, @thomwolf and the whole HF team for the inference endpoint API and the generous provision of Meta Llama-3.1-70B. Special thanks also to @tomaarsen for his invaluable advice on training embedding models and loss functions ❤️

Models are available on my personal HF page, in the Lemone-embed collection: louisbrulenaudet/lemone-embed-66fdc24000df732b395df29b
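A minimal usage sketch with Sentence Transformers; the checkpoint id is hypothetical, pick a real one from the collection:

from sentence_transformers import SentenceTransformer

# Hypothetical checkpoint id; see the Lemone-embed collection for real ones.
model = SentenceTransformer("louisbrulenaudet/lemone-embed-pro")

docs = [
    "Le taux normal de la TVA est fixé à 20 %.",
    "L'impôt sur le revenu est progressif.",
]
embeddings = model.encode(docs)
print(embeddings.shape)  # (2, embedding_dim)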

louisbrulenaudet posted an update 4 months ago
The Romulus model series has been released on Hugging Face, continually pre-trained on 34,864,949 tokens of French laws and intended to serve as a foundation for fine-tuning on labeled data 🤗

The training code, dataset, and model weights are open and freely available on HF, and training ran on an H100 provided by Microsoft for Startups, using Unsloth AI by @danielhanchen and @shimmyshimmer 🦥

Link to the base model: louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1

Link to the instruct model: louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1-Instruct

Link to the dataset: louisbrulenaudet/Romulus-cpt-fr

Please note that, as they stand, these models have not been aligned to produce usable text and will certainly need to be fine-tuned for the desired tasks in order to produce satisfactory results.
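A minimal generation sketch for the instruct model, assuming a GPU with enough memory and the standard transformers chat-template API:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Résume l'article 1240 du Code civil."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))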

louisbrulenaudet posted an update 4 months ago
An example application of LegalKit is the production of knowledge graphs; here is a demo Space 🔗

With the update of the French legal code data model uploaded to 🤗 and the introduction of a column dedicated to HTML text, it's now easy to extract links between different articles and produce complex graphs with just a few lines of Python.

This simplified demo highlights the ease of implementation and the creative potential, and it enables the generation of complete datasets, although a powerful graphics card is required for display. The framework used for the moment is D3.js, but other solutions are possible; I'd be delighted to hear your suggestions, and I look forward to hearing from the community.

Link to the 🤗 Space: louisbrulenaudet/legalkit-knowledge-graph
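A minimal sketch of the link-extraction idea behind the graph; the dataset id and the "html"/"id" column names are assumptions, check the LegalKit datasets for the actual schema:

from bs4 import BeautifulSoup
from datasets import load_dataset

# Hypothetical dataset id and column names, for illustration only.
ds = load_dataset("louisbrulenaudet/code-civil", split="train")

edges = []
for row in ds:
    soup = BeautifulSoup(row["html"], "html.parser")
    # Every hyperlink from one article to another becomes a directed edge.
    for a in soup.find_all("a", href=True):
        edges.append((row["id"], a["href"]))

print(len(edges), "edges extracted")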

Niansuh posted an update 4 months ago
Plugins in NiansuhAI

Plugin Names:
1. WebSearch: Searches the web using search engines.
2. Calculator: Evaluates mathematical expressions, extending the base Tool class.
3. WebBrowser: Extracts and summarizes information from web pages.
4. Wikipedia: Retrieves information from Wikipedia using its API.
5. Arxiv: Searches and fetches article information from Arxiv.
6. WolframAlphaTool: Provides answers on math, science, technology, culture, society, and everyday life.

These plugins currently support the GPT-4o-2024-08-06 model, which also supports image analysis.

Try it now: https://huggingface.co/spaces/NiansuhAI/chat

Similar to: https://hf.co/chat

louisbrulenaudet posted an update 4 months ago
Understanding the JSON response format with HF's Serverless Inference API 🤗

As it stands, there seems to be an inconsistency with the OpenAI documentation regarding how to implement the JSON response format using the InferenceClient completion API.

After investigating the InferenceClient source code, I am sharing the official solution, which uses a JSON Schema. This constrains the structure of the response and simplifies parsing as part of an automated process for extracting metadata and information:
from huggingface_hub import InferenceClient

# Instantiate the client for a chat model served by the Inference API
client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")

messages = [
    {
        "role": "user",
        "content": "I saw a puppy, a cat and a raccoon during my bike ride in the park. What did I see and where?",
    },
]

# "type": "json" plus a "value" JSON Schema constrains the generated output
response_format = {
    "type": "json",
    "value": {
        "properties": {
            "location": {"type": "string"},
            "activity": {"type": "string"},
            "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
            "animals": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["location", "activity", "animals_seen", "animals"],
    },
}

response = client.chat_completion(
    messages=messages,
    response_format=response_format,
    max_tokens=500,
)

print(response.choices[0].message.content)
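Since the constrained output comes back as a JSON string, it can be parsed directly; a small usage sketch:

import json

data = json.loads(response.choices[0].message.content)
print(data["location"], data["animals"])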

As a reminder, JSON mode is activated with the OpenAI client as follows:
response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[...],
    response_format={"type": "json_object"},
)

One question remains unanswered, however, and may be answered by the community: an incompatibility seems to persist when generating lists of dictionaries, and currently the production of simple dictionaries seems to be the only functional option.

louisbrulenaudet posted an update 5 months ago
🚀 RAGoon is now available on PyPI, GitHub, and as a Space on Hugging Face for batched embeddings generation 🤗

RAGoon is a set of NLP utilities for multi-model embedding production and high-dimensional vector visualization; it aims to improve language model performance by providing contextually relevant information through search-based querying, web scraping, and data-augmentation techniques.

At this stage, 5 major classes are available via RAGoon to facilitate:
- the production of chained embeddings for several models, to simplify a continuous deployment process;
- the production of LLM requests for web querying and content retrieval via the Google API;
- recursive chunking via tokens;
- data visualization, with a function to load embeddings from a FAISS index, reduce their dimensionality using PCA and/or t-SNE, and visualize them in an interactive 3D graph;
- the creation of binary indexes for search with scalar (int8) rescoring.

Link to GitHub: https://github.com/louisbrulenaudet/ragoon
Link to the 🤗 Space: louisbrulenaudet/ragoon

Artples posted an update 5 months ago
Looking for a combination of speed and quality? Look no further! I've created a space that merges Open WebUI's excellent interface and features with the lightning-fast performance of the Groq API. Experience top-tier models in no time. Try it out for free here:
L-AI/groq-chat

"A big thank you to Groq for providing their fantastic API at no cost!"

louisbrulenaudet posted an update 6 months ago
You can now find OBIS, the Ocean Biodiversity Information System, on Hugging Face with 128M rows, streamable via the Datasets package 🤗

The datasets are integrated, allowing seamless search and mapping by species name, higher taxonomic level, geographic area, depth, time, and environmental parameters. OBIS originates from the Census of Marine Life (2000-2010) and was adopted as a project under IOC-UNESCO's International Oceanographic Data and Information Exchange (IODE) programme in 2009.

Collectively, they have provided over 45 million observations of nearly 120,000 marine species, ranging from bacteria to whales, from the surface to 10,900 meters depth, and from the tropics to the poles.

Link to the dataset: louisbrulenaudet/obis
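With 128M rows, streaming is the practical way to explore it. A minimal sketch, assuming a "train" split:

from datasets import load_dataset

# Stream rows instead of downloading the full 128M-row dataset.
obis = load_dataset("louisbrulenaudet/obis", split="train", streaming=True)

for row in obis.take(3):
    print(row)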

Niansuh posted an update 6 months ago
Introducing Plugins in NiansuhAI (on July 20, 2024)

Plugin Names:
1. WebSearch: Tool for searching the web using search engines.
2. Calculator: Helps evaluate mathematical expressions; extends the base Tool class.
3. WebBrowser: Interacts with web pages to extract information or summarize content.
4. Wikipedia: Retrieves data from Wikipedia using its API.
5. Arxiv: Searches and fetches article information from Arxiv.
6. WolframAlphaTool: Answers questions on Math, Science, Technology, Culture, Society, and Everyday Life.

Similar to https://hf.co/chat

louisbrulenaudet posted an update 6 months ago
Introducing the first two projects on the HFforLegal community: the 'Laws' dataset and the associated search tool based on @nreimers and @tomaarsen's Sentence Transformers library 🤗

The objective of these two tools is to centralize, in a single format, a set of rules from different countries and legal systems in order to facilitate NLP in the field of comparative law, enabling more accurate and comprehensive legal analysis across different jurisdictions 🌍

Link to the dataset: HFforLegal/laws
Link to the space: HFforLegal/laws-retrieval
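A minimal semantic-search sketch over the dataset; the "text" column, the "train" split, and the MiniLM model choice are assumptions, not necessarily the Space's actual setup:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

laws = load_dataset("HFforLegal/laws", split="train")  # split name assumed
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

corpus = laws["text"][:1000]  # small slice to keep the demo light
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("data protection obligations", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)
for hit in hits[0]:
    print(hit["score"], corpus[hit["corpus_id"]][:100])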

We need your contributions to enrich this new knowledge base; in the 'Laws' dataset you will find all the information you need to format your data and submit it to the appropriate split.