AI & ML interests

Open Source LLM Engineering Platform: observability/tracing, evaluation, playground, datasets & experiments, prompt management

Recent Activity

langfuse's activity

andrewrreed 
posted an update 12 days ago
🚀 Supercharge your LLM apps with Langfuse on Hugging Face Spaces!

Langfuse brings end-to-end observability and tooling to accelerate your dev workflow from experiments through production

Now available as a Docker Space directly on the HF Hub! 🤗

🔍 Trace everything: monitor LLM calls, retrieval, and agent actions with popular frameworks
1⃣ One-click deployment: on Spaces with persistent storage and integrated OAuth
🛠 Simple Prompt Management: Version, edit, and update without redeployment
✅ Intuitive Evals: Collect user feedback, run model/prompt evaluations, and improve quality
📊 Dataset Creation: Build datasets directly from production data to enhance future performance
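To make the tracing idea concrete, here is a minimal, self-contained sketch of the kind of nested trace data an observability tool like Langfuse captures — plain Python for illustration only, not the actual Langfuse SDK (see the docs linked below for the real API):

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Span:
    """One step in an LLM app: a model call, a retrieval, an agent action."""
    name: str
    trace_id: str
    metadata: dict = field(default_factory=dict)
    children: list = field(default_factory=list)
    start: float = field(default_factory=time.time)
    end: float = 0.0

    def child(self, name, **metadata):
        # nested spans let you see where time is spent inside one request
        span = Span(name=name, trace_id=self.trace_id, metadata=metadata)
        self.children.append(span)
        return span

    def finish(self):
        self.end = time.time()
        return self

# a trace is just the root span of one request
trace = Span(name="rag-request", trace_id=str(uuid.uuid4()))
trace.child("retrieval", query="open source LLMs", top_k=3).finish()
trace.child("llm-call", model="tgi", max_tokens=500).finish()
trace.finish()

print([s.name for s in trace.children])  # → ['retrieval', 'llm-call']
```

A real backend ships these spans to a server and renders them as a tree; the point is only that every step carries an id, timing, and metadata, which is what makes evals and dataset creation from production data possible later.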

Kudos to the Langfuse team for this collab and the awesome, open-first product they’re building! 👏 @marcklingen @Clemo @MJannik

🔗 Space: langfuse/langfuse-template-space
🔗 Docs: https://huggingface.co/docs/hub/spaces-sdks-docker-langfuse
marcklingen 
updated a Space about 2 months ago
andrewrreed 
posted an update about 2 months ago
Trace LLM calls with Arize AI's Phoenix observability dashboards on Hugging Face Spaces! 🚀

✨ I just added a new recipe to the Open-Source AI Cookbook that shows you how to:
1️⃣ Deploy Phoenix on HF Spaces with persistent storage in a few clicks
2️⃣ Configure LLM tracing with the Serverless Inference API
3️⃣ Observe multi-agent application runs with the CrewAI integration

Observability is crucial for building robust LLM apps.

Phoenix makes it easy to visualize trace data, evaluate performance, and track down issues. Give it a try!
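Step 2 above boils down to wrapping your model calls so each one is recorded as a span. A hypothetical stdlib-only sketch of that wrapping pattern (the real setup uses Phoenix's OpenInference instrumentors rather than a hand-rolled decorator — see the docs linked below):

```python
import functools
import time

RECORDED_SPANS = []  # stand-in for spans exported to a Phoenix collector

def instrument(fn):
    """Record name, latency, and input/output of each wrapped call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        RECORDED_SPANS.append({
            "name": fn.__name__,
            "latency_s": time.time() - start,
            "input": kwargs.get("prompt"),
            "output": result,
        })
        return result
    return wrapper

@instrument
def call_llm(prompt: str) -> str:
    # placeholder for a call to a hosted inference endpoint
    return f"echo: {prompt}"

call_llm(prompt="why trace LLM apps?")
print(RECORDED_SPANS[0]["name"])  # → call_llm
```

Instrumenting at the client-library level means multi-agent frameworks like CrewAI get traced automatically, since every underlying LLM call passes through the same wrapped entry point.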

🔗 Cookbook recipe: https://huggingface.co/learn/cookbook/en/phoenix_observability_on_hf_spaces
🔗 Phoenix docs: https://docs.arize.com/phoenix
andrewrreed 
posted an update 9 months ago
🔬 Open LLM Progress Tracker 🔬

Inspired by the awesome work from @mlabonne, I created a Space to monitor the narrowing gap between open and proprietary LLMs as scored by the LMSYS Chatbot Arena Elo ratings 🤗

The goal is to have a continuously updated place to easily visualize these rapidly evolving industry trends 🚀

🔗 Open LLM Progress Tracker: andrewrreed/closed-vs-open-arena-elo
🔗 Source of Inspiration: https://www.linkedin.com/posts/maxime-labonne_arena-elo-graph-updated-with-new-models-activity-7187062633735368705-u2jB/
andrewrreed 
posted an update 9 months ago
IMO, the "grounded generation" feature of Cohere's Command R+ has flown under the radar...

For RAG use cases, responses directly include inline citations, making source attribution an inherent part of generation rather than an afterthought 😎
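To illustrate the mechanics: the API returns each citation as character offsets into the generated text plus the ids of the supporting documents. Here's a minimal hypothetical renderer for that shape of payload — not Cohere's SDK, just a sketch whose field names mirror the documented citation schema:

```python
def render_citations(text: str, citations: list[dict]) -> str:
    """Insert [doc_id] markers after each cited span, working back to
    front so earlier character offsets stay valid as markers are added."""
    out = text
    for c in sorted(citations, key=lambda c: c["end"], reverse=True):
        marker = "[" + ",".join(c["document_ids"]) + "]"
        out = out[:c["end"]] + marker + out[c["end"]:]
    return out

# example payload in the style of a grounded-generation response
answer = "Penguins are flightless birds found mainly in the Southern Hemisphere."
citations = [
    {"start": 0, "end": 29, "document_ids": ["doc_0"]},
    {"start": 50, "end": 69, "document_ids": ["doc_1"]},
]
print(render_citations(answer, citations))
```

Because attribution arrives as structured spans rather than free text, it's trivial to render, audit, or filter — which is exactly what would make an open fine-tuning dataset in this format so useful.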

Who's working on an open dataset with this for the HF community to fine-tune with??

🔗 Command R+ Docs: https://docs.cohere.com/docs/retrieval-augmented-generation-rag

🔗 Model on the 🤗 Hub: CohereForAI/c4ai-command-r-plus
andrewrreed 
posted an update 12 months ago
🚀 It's now easier than ever to switch from OpenAI to open LLMs

Hugging Face's TGI now supports an OpenAI compatible Chat Completion API

This means you can transition code that uses OpenAI client libraries (or frameworks like LangChain 🦜 and LlamaIndex 🦙) to run open models by changing just two lines of code 🤗

⭐ Here's how:
from openai import OpenAI

# initialize the client, but point it at your TGI endpoint
client = OpenAI(
    base_url="<ENDPOINT_URL>" + "/v1/",  # replace with your endpoint URL
    api_key="<HF_API_TOKEN>",  # replace with your Hugging Face token
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why is open-source software important?"},
    ],
    stream=True,
    max_tokens=500,
)

# iterate over the stream and print each token as it arrives
# (guard against None, which the final chunk's delta can contain)
for message in chat_completion:
    if message.choices[0].delta.content is not None:
        print(message.choices[0].delta.content, end="")


🔗 Blog post ➡ https://huggingface.co/blog/tgi-messages-api
🔗 TGI docs ➡ https://huggingface.co/docs/text-generation-inference/en/messages_api