Louis Brulé Naudet

louisbrulenaudet

AI & ML interests

Research in business taxation and development (NLP, LLMs, computer vision...), University Dauphine-PSL 📖 | Backed by the Microsoft for Startups Hub program and the Google Cloud for Startups program.

Posts 5

Post
Mixtral or Llama 70B on Google Spreadsheet thanks to Hugging Face's Serverless Inference API 🤗

The add-on is now available in the "Journalists on Hugging Face" repo on the Hub and allows rapid generation of synthetic data, automatic translation, question answering, and more, directly from spreadsheet cells 🖥️

Link to the 🤗 Space: JournalistsonHF/huggingface-on-sheets

Although this tool was initially developed for journalists, it actually appeals to a much wider audience of daily Google Workspace users, and many use cases remain to be explored.

Only a free Hugging Face API key is required to start using this no-code extension.
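Under the hood, the add-on relies on the same Serverless Inference API that any script can call with a free token. A minimal Python sketch of an equivalent call (the model ID, parameters, and helper names are illustrative, not the add-on's actual code):

```python
import requests

# Illustrative model choice; any hosted text-generation model ID works here.
API_URL = "https://api-inference.huggingface.co/models/mistralai/Mixtral-8x7B-Instruct-v0.1"

def build_request(prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and payload for a Serverless Inference API call."""
    return {
        "url": API_URL,
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"inputs": prompt, "parameters": {"max_new_tokens": 256}},
    }

def generate(prompt: str, api_key: str) -> str:
    """Send the request and return the generated text."""
    req = build_request(prompt, api_key)
    response = requests.post(req["url"], headers=req["headers"], json=req["json"])
    response.raise_for_status()
    # The API returns a list of generations for text-generation models.
    return response.json()[0]["generated_text"]
```

The spreadsheet add-on wraps a call like this behind a custom cell function, so users never touch the HTTP layer themselves.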

Do not hesitate to submit ideas for features that we could add!

Thanks to @fdaudens for initiating this development.
Post
I've just open-sourced RAGoon, a small utility I use to integrate knowledge from the web into LLM inference, building on Groq's inference speed and plain Google Search ⚡

RAGoon is a Python library available on PyPI that aims to improve the performance of language models by providing contextually relevant information through retrieval-based querying, parallel web scraping, and data augmentation techniques. It integrates with several completion APIs (OpenAI, Groq), enabling users to retrieve information from the web, enrich it with domain-specific knowledge, and feed it to language models for more informed responses.
from groq import Groq
# from openai import OpenAI  # any OpenAI-compatible client also works
from ragoon import RAGoon

# Initialize a RAGoon instance with Google Programmable Search credentials
# and the completion client used to answer queries.
ragoon = RAGoon(
    google_api_key="your_google_api_key",
    google_cx="your_google_cx",
    completion_client=Groq(api_key="your_groq_api_key")
)

# Search the web and retrieve contextually relevant results
query = "I want to do a left join in python polars"
results = ragoon.search(
    query=query,
    completion_model="llama3-70b-8192",  # Groq model identifier
)

# results is a list of retrieved, enriched snippets
print(results)

For the time being, this project remains simple, but it can easily be integrated into a RAG pipeline.
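As an illustration of that integration, the retrieved snippets can simply be concatenated into the prompt before calling the model. This is a hypothetical sketch: `results` stands for the list returned by `ragoon.search`, and the prompt template and helper name are assumptions, not part of RAGoon's API:

```python
def build_rag_prompt(query: str, results: list, max_chars: int = 4000) -> str:
    """Concatenate retrieved web snippets into a grounded prompt for an LLM."""
    context = ""
    for snippet in results:
        if len(context) + len(snippet) > max_chars:
            break  # keep the prompt within the model's context budget
        context += snippet.strip() + "\n---\n"
    return (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

# The resulting string can be sent to any chat/completion endpoint,
# e.g. Groq's OpenAI-compatible client.
prompt = build_rag_prompt(
    "How do I do a left join in Polars?",
    ["Use DataFrame.join(other, on='key', how='left') in Polars."],
)
print(prompt)
```

The character budget is a crude stand-in for token counting; a production pipeline would use the model's tokenizer to trim context instead.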

Link to GitHub: https://github.com/louisbrulenaudet/ragoon