Morgan Funtowicz

mfuntowicz

AI & ML interests

Low-level model inference optimization, hardware affinity, and large-scale distributed training.

Organizations

Hugging Face, BigScience Workshop, Qualcomm, AWS Inferentia and Trainium, Hugging Face Infinity, Hugging Face Optimum, Need4Speed, Hugging Face Smol Cluster, Optimum Nvidia, Optimum AMD, gg-hf, Optimum-TPU, hsramall, Optimum-Intel, gg-tt, Hugging Face Machine Learning Optimization, Optimum Internal Testing, blhf, Huggingface HUGS, smol-explorers

mfuntowicz's activity

reacted to alex-abb's post with 👍🔥 6 months ago
Hi everyone!
I'm Alex, I'm 16, and I've been doing an internship at Hugging Face for a little over a week. I've already learned a lot about using and prompting LLMs. With @victor as my tutor, I've just finished a Space that analyzes your feelings by prompting an LLM chat model. The aim is to extend it so that it can categorize Hugging Face posts.

alex-abb/LLM_Feeling_Analyzer
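
The post doesn't show any code, but the core idea is simple. Here's a minimal sketch of the prompting approach such a Space might use, assuming huggingface_hub's InferenceClient; the model choice, prompt, and helper name are illustrative, not the Space's actual code:

```python
from huggingface_hub import InferenceClient

# Any hosted chat model works here; this one is an illustrative choice.
client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")

def analyze_feeling(text: str) -> str:
    # Ask the chat model to classify sentiment with a constrained prompt.
    response = client.chat_completion(
        messages=[
            {
                "role": "system",
                "content": "Classify the sentiment of the user's text as "
                           "positive, negative or neutral. Answer with one word.",
            },
            {"role": "user", "content": text},
        ],
        max_tokens=5,
    )
    return response.choices[0].message.content

print(analyze_feeling("I love interning at Hugging Face!"))  # e.g. "positive"
```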
reacted to IlyasMoutawwakil's post with 🚀🧠 6 months ago
Last week, Intel's new Xeon CPUs, Sapphire Rapids (SPR), landed on Inference Endpoints, and I think they have the potential to reduce the cost of your RAG pipelines 💸

Why? Because they come with Intel® AMX support, a set of instructions that accelerates BF16 and INT8 matrix multiplications on CPU ⚡
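
As a minimal illustration of what AMX buys you (a sketch, not tied to the Space): on an AMX-capable CPU, PyTorch dispatches BF16 matmuls to the AMX tiles through oneDNN, so plain autocast code speeds up automatically:

```python
import torch

# Two large matrices; on Sapphire Rapids the BF16 matmul below is executed
# on AMX tiles via oneDNN, with no SPR-specific code required.
a = torch.randn(1024, 1024)
b = torch.randn(1024, 1024)

# CPU autocast runs matmuls in bfloat16, hitting AMX when the hardware has it.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    c = a @ b

print(c.dtype)  # torch.bfloat16
```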

I went ahead and built a Space to showcase how to efficiently deploy embedding models on SPR for both retrieving and ranking documents, with Haystack-compatible components: https://huggingface.co/spaces/optimum-intel/haystack-e2e

Here's how it works (a minimal code sketch follows the list):

- Document Store: A FAISS document store containing the seven-wonders dataset, embedded, indexed and stored on the Space's persistent storage to avoid unnecessary re-computation of embeddings.

- Retriever: It embeds the query at runtime and retrieves the N documents from the dataset that are most semantically similar to the query's embedding.
We use the small variant of the BGE family here because we want a model that's fast to run over the entire dataset and has a small embedding space for fast similarity search. Specifically, we use an INT8-quantized bge-small-en-v1.5, deployed on an Intel Sapphire Rapids CPU instance.

- Ranker: It re-embeds the retrieved documents at runtime and re-ranks them by semantic similarity to the query's embedding. We use the large variant of the BGE family here because it's optimized for accuracy, allowing us to keep only the k most relevant documents for the LLM prompt. Specifically, we use an INT8-quantized bge-large-en-v1.5, deployed on an Intel Sapphire Rapids CPU instance.
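
Putting the pieces together, here's a minimal sketch of the retrieve-then-rerank pattern, using sentence-transformers and FAISS directly. The model names match the post; the corpus, top-k values, and everything else are illustrative stand-ins for the Space's actual fastRAG/Haystack components and Inference Endpoints:

```python
import faiss
from sentence_transformers import SentenceTransformer, util

# Stand-in corpus; the Space uses the seven-wonders dataset.
documents = [
    "The Great Pyramid of Giza is on the Giza plateau in Egypt.",
    "The Hanging Gardens of Babylon were a legendary terraced garden.",
    "The Colossus of Rhodes was a statue of the sun god Helios.",
]

# Offline: embed the corpus once with the small model and index it,
# mirroring the FAISS document store persisted on disk.
retriever = SentenceTransformer("BAAI/bge-small-en-v1.5")
doc_emb = retriever.encode(documents, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_emb.shape[1])  # inner product == cosine on unit vectors
index.add(doc_emb)

# Online, stage 1: embed the query and retrieve the N most similar documents.
query = "Where is the Great Pyramid located?"
query_emb = retriever.encode([query], normalize_embeddings=True)
_, ids = index.search(query_emb, 2)
candidates = [documents[i] for i in ids[0]]

# Online, stage 2: re-embed the candidates with the large model and re-rank,
# keeping the top k for the LLM prompt.
ranker = SentenceTransformer("BAAI/bge-large-en-v1.5")
scores = util.cos_sim(
    ranker.encode([query], normalize_embeddings=True),
    ranker.encode(candidates, normalize_embeddings=True),
)[0]
reranked = [doc for _, doc in sorted(zip(scores.tolist(), candidates), reverse=True)]
print(reranked[0])
```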

Space: https://huggingface.co/spaces/optimum-intel/haystack-e2e
Retriever IE: optimum-intel/fastrag-retriever
Ranker IE: optimum-intel/fastrag-ranker