All HF Hub posts

openfree posted an update about 14 hours ago
Agentic AI Era: Analyzing MCP vs MCO 🚀

Hello everyone!
With the rapid advancement of AI agent technology, two architectures have come into the spotlight: MCP (Model Context Protocol) and MCO (Model Context Open-json). Today, we’ll introduce the key features and differences of these two approaches.

VIDraft/Agentic-AI-CHAT

MCP: The Traditional Approach 🏛️
Centralized Function Registry: All functions are hardcoded into the core system.

Static Function Definitions & Tight Coupling: New features require changes to the core application code, limiting scalability.

Monolithic Design: Deployment and version management are complex, and a single error can affect the whole system.

Code Example:
```py
FUNCTION_REGISTRY = {
    "existing_function": existing_function,
    "new_function": new_function,  # adding a new function means editing core code
}
```

MCO: A Revolutionary Approach 🆕
JSON-based Function Definitions: Function details are stored in external JSON files, enabling dynamic module loading.

Loose Coupling & Microservices: Each function can be developed, tested, and deployed as an independent module.

Flexible Scalability: Add new features by simply updating the JSON and module files, without modifying the core system.

JSON Example:
```json
[
  {
    "name": "analyze_sentiment",
    "module_path": "nlp_tools",
    "func_name_in_module": "sentiment_analysis",
    "example_usage": "analyze_sentiment(text=\"I love this product!\")"
  }
]
```
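
To make the dynamic loading concrete, here is a minimal sketch of how a dispatcher might resolve such a JSON entry at runtime (the functions.json file name and the loader itself are illustrative assumptions, not part of any MCO spec):

```py
import importlib
import json

def load_functions(path="functions.json"):
    """Read function definitions from JSON and resolve each to a callable."""
    with open(path) as f:
        definitions = json.load(f)
    registry = {}
    for d in definitions:
        module = importlib.import_module(d["module_path"])  # e.g. nlp_tools
        registry[d["name"]] = getattr(module, d["func_name_in_module"])
    return registry

# Adding a feature now means shipping a JSON entry plus a module file;
# the core system stays untouched.
registry = load_functions()
result = registry["analyze_sentiment"](text="I love this product!")
```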

Why MCO? 💡
Enhanced Development Efficiency: Developers can focus on their own modules with independent testing and deployment.

Simplified Error Management: Errors remain confined within their modules, enabling quick hotfixes.

Future-Proofing: With potential features like remote function calls (RPC), access control, auto-documentation, and a function marketplace, MCO paves the way for rapid innovation.

Practical Use & Community 🤝
The MCO implementation has been successfully tested on VIDraft's LLM (based on Google Gemma-3).
Kseniase posted an update 1 day ago
16 new studies on inference-time scaling:

Over the last couple of weeks, a large number of studies on inference-time scaling have emerged. And it's so cool, because each new paper adds a trick to the toolbox, making LLMs more capable without needing to scale the models' parameter count.

So here are 13 new methods + 3 comprehensive studies on test-time scaling:

1. Inference-Time Scaling for Generalist Reward Modeling (2504.02495)
Probably the most popular study. It proposes to boost inference-time scalability by improving reward modeling. To enhance performance, DeepSeek-GRM uses adaptive critiques, parallel sampling, pointwise generative RM, and Self-Principled Critique Tuning (SPCT)

2. T1: Tool-integrated Self-verification for Test-time Compute Scaling in Small Language Models (2504.04718)
Allows small models to use external tools, like code interpreters and calculators, to enhance self-verification

3. Z1: Efficient Test-time Scaling with Code (2504.00810)
Proposes to train LLMs on code-based reasoning paths to make test-time scaling more efficient, limiting unnecessary tokens with a special dataset and a Shifted Thinking Window

4. GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning (2504.00891)
Introduces GenPRM, a generative PRM, that uses CoT reasoning and code verification for step-by-step judgment. With only 23K training examples, GenPRM outperforms prior PRMs and larger models

5. Can Test-Time Scaling Improve World Foundation Model? (2503.24320)
SWIFT test-time scaling framework improves World Models' performance without retraining, using strategies like fast tokenization, Top-K pruning, and efficient beam search

6. Relevance Isn't All You Need: Scaling RAG Systems With Inference-Time Compute Via Multi-Criteria Reranking (2504.07104)
Proposes REBEL for RAG systems scaling, which uses multi-criteria optimization with CoT prompting for better performance-speed tradeoffs as inference compute increases

7. φ-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation (2503.13288)
Proposes a φ-Decoding strategy that uses foresight sampling, clustering and adaptive pruning to estimate and select optimal reasoning steps
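
A pattern many of these methods build on is spending extra test-time compute by sampling several candidates in parallel and letting a scorer pick the winner. Here is a minimal best-of-N sketch of that shared idea (the generate and score callables are stand-in assumptions, not code from any of the papers above):

```py
def best_of_n(prompt, generate, score, n=8):
    """Sample n candidates and keep the one the scorer rates highest."""
    candidates = [generate(prompt) for _ in range(n)]  # parallel sampling
    return max(candidates, key=score)  # scorer (e.g. a reward model) picks
```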

Read further below 👇

Also, subscribe to the Turing Post https://www.turingpost.com/subscribe
nyuuzyou posted an update 1 day ago
🇷🇺 Russian Forum Messages Dataset - nyuuzyou/ruforum

Collection of approximately 58 million Russian forum messages featuring:

- Complete message content from Russian online forums spanning 2010-2025
- Comprehensive metadata including unique message IDs and timestamps
- Full text content preserving original user discussions and interactions
- Monolingual dataset focused exclusively on Russian language content

This dataset offers a unique textual archive of Russian online conversations suitable for text generation, sentiment analysis, and language modeling research. Released to the public domain under the CC0 1.0 license.
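
A quick-start sketch, assuming the dataset loads with the standard 🤗 datasets API (the train split name is an assumption; streaming avoids downloading all ~58M messages up front):

```py
from datasets import load_dataset

# Stream the corpus instead of materializing ~58M messages on disk.
ds = load_dataset("nyuuzyou/ruforum", split="train", streaming=True)
for message in ds.take(5):
    print(message)
```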
thomwolf posted an update about 7 hours ago
If you've followed the progress of robotics in the past 18 months, you've likely noticed how robotics is increasingly becoming the next frontier that AI will unlock.

At Hugging Face—in robotics and across all AI fields—we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and passion working with the Pollen Robotics team over the past year that we decided to join forces!

You can already find our open-source humanoid robot platform Reachy 2 on the Pollen website, and the Pollen community and team here on the Hub at pollen-robotics

We're so excited to build and share more open-source robots with the world in the coming months!
merve posted an update about 10 hours ago
sooo many open AI releases this past week, let's summarize! 🤗
merve/april-11-releases-67fcd78be33d241c0977b9d2

Multimodal
> Moonshot AI released Kimi VL Thinking, the first working open-source multimodal reasoning model, and Kimi VL Instruct, both 16B MoEs with 3B active params (OS)
> InternVL3 released, based on Qwen2.5VL, with 7 checkpoints in various sizes (1B to 78B)

LLMs
> NVIDIA released Llama-3_1-Nemotron-Ultra-253B-v1, an LLM built on Llama 405B for reasoning, chat and tool use
> Agentica released DeepCoder-14B-Preview, a fine-tuned version of DeepSeek-R1-Distilled-Qwen-14B trained on problem-test pairs, along with the compiled dataset
> Zyphra/ZR1-1.5B is a new small reasoning LLM built on R1-Distill-1.5B (OS)
> Skywork-OR1-32B-Preview is a new reasoning model by Skywork

Image Generation
> HiDream released three new models for image generation: HiDream I1 Dev, I1 Full, and I1 Fast (OS)

*OS ones have Apache 2.0 or MIT licenses
JLouisBiz posted an update 1 day ago
Article: https://huggingface.co/blog/JLouisBiz/semantical-website-links

You don't need to do the tedious work of finding all those links on your huge website.

Automating semantic links on websites using Large Language Models (LLMs) enhances user experience and efficiency. Here's a simplified workflow:

1. Store LLM embeddings in PostgreSQL: Use the vector data type to store text embeddings generated by an LLM.
2. Divide page texts into chunks for processing.
3. Generate embeddings using an LLM for each chunk of text.
4. Create template markup around specific terms needing links.
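
A minimal sketch of steps 1–3, assuming the pgvector extension is installed and a hypothetical embed() helper calls your LLM's embedding endpoint (the table layout, column names, and chunk size are illustrative):

```py
import psycopg2

def embed(text: str) -> list[float]:
    # Hypothetical helper: call your LLM's embedding endpoint here.
    raise NotImplementedError

conn = psycopg2.connect("dbname=site")
pages = {"/about": "About page text ..."}  # toy example input

with conn.cursor() as cur:
    # Step 1: a pgvector column (dimension must match your embedding model).
    cur.execute("CREATE TABLE IF NOT EXISTS pages "
                "(url text, chunk text, embedding vector(1024))")
    # Steps 2-3: split each page into chunks and store one embedding per chunk.
    for url, text in pages.items():
        for chunk in [text[i:i + 500] for i in range(0, len(text), 500)]:
            vec = "[" + ",".join(map(str, embed(chunk))) + "]"
            cur.execute("INSERT INTO pages VALUES (%s, %s, %s::vector)",
                        (url, chunk, vec))
conn.commit()
```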

An automated program then:

- Converts marked-up terms to their corresponding LLM embeddings,
- Compares these with the stored database embeddings (using cosine similarity),
- Identifies the most relevant page based on the highest similarity score, and
- Automatically adds a link from the original content to this contextually related information.
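
Continuing the sketch above, the lookup for one marked-up term might look like this (pgvector's <=> operator is cosine distance, so the smallest distance is the highest similarity):

```py
# Resolve one marked-up term to its most similar page.
term_vec = "[" + ",".join(map(str, embed("marked-up term"))) + "]"
with conn.cursor() as cur:
    cur.execute("SELECT url FROM pages ORDER BY embedding <=> %s::vector LIMIT 1",
                (term_vec,))
    best_url = cur.fetchone()[0]  # becomes the link target for this term
```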

This process improves navigation by directing users to highly contextual pages. It saves time as it automates creating semantic links while maintaining accuracy.
Yehor posted an update 3 days ago
I have made a Rust project that integrates the latest state-of-the-art object detection model, which outperforms YOLO!

Check it out: https://github.com/egorsmkv/rf-detr-usls
seawolf2357 posted an update 4 days ago
🔥 AgenticAI: The Ultimate Multimodal AI with 16 MBTI Girlfriend Personas! 🔥

Hello AI community! Today, our team is thrilled to introduce AgenticAI, an innovative open-source AI assistant that combines deep technical capabilities with uniquely personalized interaction. 💘

🛠️ 16 MBTI Types Spaces collection link
seawolf2357/heartsync-mbti-67f793d752ef1fa542e16560

✨ 16 MBTI Girlfriend Personas

Complete MBTI Implementation: All 16 MBTI female personas modeled after iconic characters (Dana Scully, Lara Croft, etc.)
Persona Depth: Customize age groups and thinking patterns for hyper-personalized AI interactions
Personality Consistency: Each MBTI type demonstrates consistent problem-solving approaches, conversation patterns, and emotional expressions

🚀 Cutting-Edge Multimodal Capabilities

Integrated File Analysis: Deep analysis and cross-referencing of images, videos, CSV, PDF, and TXT files
Advanced Image Understanding: Interprets complex diagrams, mathematical equations, charts, and tables
Video Processing: Extracts key frames from videos and understands contextual meaning
Document RAG: Intelligent analysis and summarization of PDF/CSV/TXT files

💡 Deep Research & Knowledge Enhancement

Real-time Web Search: SerpHouse API integration for latest information retrieval and citation
Deep Reasoning Chains: Step-by-step inference process for solving complex problems
Academic Analysis: In-depth approach to mathematical problems, scientific questions, and data analysis
Structured Knowledge Generation: Systematic code, data analysis, and report creation

🖼️ Creative Generation Engine

FLUX Image Generation: Custom image creation reflecting the selected MBTI persona traits
Data Visualization: Automatic generation of code for visualizing complex datasets
Creative Writing: Story and scenario writing matching the selected persona's style

leonardlin posted an update about 7 hours ago
Happy to announce the release of Shisa V2, the latest generation of our bilingual Japanese-English language models. After hundreds of ablations and months of work, we're releasing some of the strongest open Japanese models at 7B, 8B, 12B, 14B, 32B and 70B! Full announcement here https://shisa.ai/posts/shisa-v2/ or visit the Shisa V2 HF collection: shisa-ai/shisa-v2-67fc98ecaf940ad6c49f5689
AdinaY posted an update about 12 hours ago
🔥 New reasoning models from the Chinese community, by Skywork 天工-昆仑万维

Skywork/skywork-or1-67fa1bcb41b436ef2def76b9

✨Skywork OR1-Math-7B > Optimized for math reasoning
✨Skywork-OR1-7B-preview > Excels in math & coding
✨Skywork-OR1-32B-preview > Matches DeepSeek-R1 on math (AIME24/25) and coding (LiveCodeBench)

Released under the Apache 2.0 license 🥳
Final version coming in 2 weeks!