All HF Hub posts

jasoncorkill posted an update 1 day ago
At Rapidata, we compared DeepL with LLMs like DeepSeek-R1, Llama, and Mixtral for translation quality, using feedback from over 51,000 native speakers. Despite its cost, DeepL's performance makes it a valuable investment, especially in critical applications where translation quality is paramount. Now we can say that Europe does more than impose regulations.

Our dataset, based on these comparisons, is now available on Hugging Face. This might be useful for anyone working on AI translation or language model evaluation.

Rapidata/Translation-deepseek-llama-mixtral-v-deepl
  • 1 reply
mlabonne posted an update 2 days ago
✂️ Gemma 3 Abliterated

I noticed that Gemma 3 was much more resilient to refusal removal than other models like Qwen 2.5.

I experimented with different recipes and improved the abliteration technique I wrote about last year.

It's still experimental, but the refusal rate is super low in my tests. Enjoy!

mlabonne/gemma-3-4b-it-abliterated
mlabonne/gemma-3-12b-it-abliterated
mlabonne/gemma-3-27b-it-abliterated
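
For context, abliteration works by estimating a "refusal direction" in activation space and projecting it out of the model's weights. Below is a minimal NumPy sketch of that projection step on a toy matrix; it illustrates the general idea only and is not mlabonne's actual recipe, model code, or hyperparameters.

import numpy as np

def ablate_direction(W, direction):
    """Project the 'refusal direction' out of a weight matrix's output space."""
    d = direction / np.linalg.norm(direction)   # unit vector for the refusal direction
    # (I - d d^T) W removes the component along d from every output of W @ x
    return W - np.outer(d, d) @ W

# Hypothetical toy example: real use targets transformer weight matrices, with the
# direction estimated from activations on harmful vs. harmless prompts
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
refusal_direction = rng.normal(size=8)
W_abliterated = ablate_direction(W, refusal_direction)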

  • 1 reply
aifeifei798 posted an update 2 days ago
😊 This program removes emojis from a given text. It uses a regular expression pattern to match emojis and replace them with an empty string. The pattern covers Unicode ranges for emoticons, symbols, pictographs, transport signs, and flags, so it can clean up text data for processing, analysis, or any other application where emojis are not wanted. 💻
import re

def remove_emojis(text):
    # Define a broader emoji pattern
    emoji_pattern = re.compile(
        "["
        u"\U0001F600-\U0001F64F"  # emoticons
        u"\U0001F300-\U0001F5FF"  # symbols & pictographs
        u"\U0001F680-\U0001F6FF"  # transport & map symbols
        u"\U0001F1E0-\U0001F1FF"  # flags (iOS)
        u"\U00002702-\U000027B0"
        u"\U000024C2-\U0001F251"
        u"\U0001F900-\U0001F9FF"  # supplemental symbols and pictographs
        u"\U0001FA00-\U0001FA6F"  # chess symbols and more emojis
        u"\U0001FA70-\U0001FAFF"  # more symbols and pictographs
        u"\U00002600-\U000026FF"  # miscellaneous symbols
        u"\U00002B50-\U00002B59"  # additional symbols
        u"\U0000200D"             # zero width joiner
        u"\U0000200C"             # zero width non-joiner
        u"\U0000FE0F"             # emoji variation selector
        "]+", flags=re.UNICODE
    )
    return emoji_pattern.sub(r'', text)
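
A quick usage check (the example string and output are just illustrative):

print(remove_emojis("Hello 😊 world 🚀!"))  # prints: Hello  world !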
Kseniase posted an update 3 days ago
15 types of attention mechanisms

Attention mechanisms allow models to dynamically focus on specific parts of their input when performing tasks. In our recent article, we discussed Multi-Head Latent Attention (MLA) in detail and now it's time to summarize other existing types of attention.

Here is a list of 15 types of attention mechanisms used in AI models:

1. Soft attention (Deterministic attention) -> Neural Machine Translation by Jointly Learning to Align and Translate (1409.0473)
Assigns a continuous weight distribution over all parts of the input. It produces a weighted sum of the input using attention weights that sum to 1.

2. Hard attention (Stochastic attention) -> Effective Approaches to Attention-based Neural Machine Translation (1508.04025)
Makes a discrete selection of some part of the input to focus on at each step, rather than attending to everything.

3. Self-attention -> Attention Is All You Need (1706.03762)
Each element in the sequence "looks" at other elements and "decides" how much to borrow from each of them for its new representation.

4. Cross-Attention (Encoder-Decoder attention) -> Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation (2104.08771)
The queries come from one sequence and the keys/values come from another sequence. It allows a model to combine information from two different sources.

5. Multi-Head Attention (MHA) -> Attention Is All You Need (1706.03762)
Multiple attention “heads” are run in parallel. The model computes several attention distributions (heads), each with its own set of learned projections of queries, keys, and values (a minimal sketch follows the list below).

6. Multi-Head Latent Attention (MLA) -> DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model (2405.04434)
Extends MHA by compressing keys and values into a low-rank latent vector, which greatly shrinks the KV cache and speeds up inference while maintaining performance.

7. Memory-Based attention -> End-To-End Memory Networks (1503.08895)
Involves an external memory and uses attention to read from and write to this memory.

See other types in the comments 👇
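
To make the self-attention and multi-head ideas above concrete, here is a minimal NumPy sketch of scaled dot-product attention with several heads; the toy dimensions and weight names are made up for illustration and are not tied to any specific paper or library.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, W_q, W_k, W_v, W_o, n_heads):
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    # Project inputs to queries, keys, values and split into heads
    q = (x @ W_q).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ W_k).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ W_v).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    # Scaled dot-product attention per head; weights along the last axis sum to 1
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    weights = softmax(scores, axis=-1)
    out = weights @ v                                      # (heads, seq, d_head)
    # Concatenate heads and apply the output projection
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ W_o

# Toy usage with random weights
rng = np.random.default_rng(0)
d_model, seq_len, n_heads = 16, 5, 4
x = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v, W_o = (rng.normal(size=(d_model, d_model)) for _ in range(4))
print(multi_head_self_attention(x, W_q, W_k, W_v, W_o, n_heads).shape)  # (5, 16)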
  • 1 reply
abhishek posted an update 2 days ago
🚀 I'm thrilled to announce the launch of Arcee Conductor, a game-changing platform that's about to revolutionize the way you interact with AI models! 🤖 As the pioneers of small language models (SLMs), we've been working tirelessly to bring you the most exciting innovation in the AI space.
Here's a quick TL;DR of what Arcee Conductor is all about:

🌟 Choice and flexibility: Get access to multiple models, including our powerful SLMs and third-party LLMs, to choose the best one for your specific use case
🤖 Intelligent routing: Our platform evaluates which model is best-suited for each of your queries, ensuring you get the most accurate results
📈 Cost savings: Reduce your AI costs with our affordable SLMs, while still having access to leading LLMs when needed
🚀 Easy to get started: Sign up now and try Arcee Conductor today, with 400 million tokens (a $200 value) on us! 🎁
📊 Proven track record: Our SLMs have already racked up 222K+ downloads on Hugging Face, with customers seeing significant cost savings and improved accuracy

For a limited time, you can get $200 credits to use with Conductor for FREE. Check it out here: https://conductor.arcee.ai
  • 3 replies
DualityAI-RebekahBogdanoff posted an update 2 days ago
🤩 Ready to start creating synthetic data that bridges the Sim2Real gap and trains robust AI models?

🌟 Join Duality AI's free “Intro to Falcon” class this Thursday, March 20th.

Dive into the Digital Twin Simulation workflows—from setting up the simulation software, FalconEditor, to generating custom data. Get hands-on guidance, ask questions, and receive direct support to build a solid foundation for your simulations!

Sign up here:
https://forms.gle/xViMMSMsA5GP468J6

In the class, you will:

✔️ Explore FalconCloud features like the digital twin library and cloud simulations
✔️ Set up and navigate FalconEditor as a first-time user
✔️ Configure FalconEditor for scenario and data creation
✔️ Assemble a sim-ready scenario using Falcon library twins
✔️ Generate multiple data types with Falcon’s virtual sensors

New: This class also covers the latest Falcon 5.1 features!

Interested in checking out FalconEditor? Start your FREE EDU tier account to access online resources and lessons designed to teach you FalconEditor:
https://falcon.duality.ai/auth/sign-up?utm_source=huggingface&utm_medium=social&utm_campaign=031825intro

Get access to comprehensive resources for digital twin simulation and synthetic data, use the full-featured FalconEditor, and stay up to date on future webinars and training courses.