I didn't mean you lul
I meant him
@MonsterMMORPG
...


brother, dunking on some great models to defend your "product" is not (hate to say it, but) a great human value...

ma guys suffered ik :)
It's expensive for everyone, just go with o3-mini. They just figured out that they're not the only LLM provider and doubled the cost of R1 relative to o3-mini.

Several GPUs are fine-tuning it at the same time, each using a different dataset and QLoRA, and the successful runs are merged later. Compared to plain LoRA this allows faster training and also reduces overfitting, because the merge operation heals overfitting. The catch is that the 4-bit quantization may make the models dumber. But I am not looking for sheer IQ. Too much mind is a problem anyway :)
Has anyone tried parallel QLoRA and merge before?
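The merge step can be sketched in miniature. This is a toy, pure-Python illustration of weighted-averaging several runs' LoRA deltas, not the actual training setup; real adapters are tensors and the parameter names here are made up:

```python
def merge_lora_deltas(deltas, weights=None):
    """Average several LoRA weight deltas (one per parallel QLoRA run).

    Each entry in `deltas` maps a parameter name to a flat list of
    floats; `weights` optionally scales each run's contribution
    (e.g. by its benchmark score). Toy sketch only.
    """
    if weights is None:
        weights = [1.0] * len(deltas)
    total = sum(weights)
    merged = {}
    for name in deltas[0]:
        cols = zip(*(d[name] for d in deltas))
        merged[name] = [sum(w * v for w, v in zip(weights, col)) / total
                        for col in cols]
    return merged

# Two toy "runs" that each learned a different delta for one layer
run_a = {"layer0.lora": [1.0, 0.0, 2.0]}
run_b = {"layer0.lora": [0.0, 2.0, 2.0]}
print(merge_lora_deltas([run_a, run_b]))  # {'layer0.lora': [0.5, 1.0, 2.0]}
```

In practice you would do this on the adapter tensors themselves (e.g. with PEFT's merging utilities) rather than on plain lists.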
I also automated the dataset selection, benchmarking, and convergence toward objectives (the fit function, the reward). It is basically trying to get a higher score on the AHA Leaderboard as fast as possible with a diverse set of organisms that "evolve by training".
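The select-and-mutate loop can be sketched like this. Everything here is a stand-in: the "config" is a single float, the score function substitutes for an AHA-style benchmark, and the mutation is random noise rather than a training round:

```python
import random

def evolve(population, score, mutate, rounds=5, keep=2, seed=0):
    """Toy selection loop for 'evolve by training' runs.

    `population` is a list of candidate configs, `score` is the fit
    function, `mutate` produces a varied child from a survivor.
    """
    rng = random.Random(seed)
    for _ in range(rounds):
        ranked = sorted(population, key=score, reverse=True)
        survivors = ranked[:keep]           # keep the best runs
        children = [mutate(rng.choice(survivors), rng)
                    for _ in range(len(population) - keep)]
        population = survivors + children   # next generation
    return max(population, key=score)

# Hypothetical stand-ins: a config is one float, best score at 3.0
score = lambda x: -abs(x - 3.0)
mutate = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best = evolve([0.0, 1.0, 2.0], score, mutate, rounds=20)
print(best)  # drifts toward 3.0 over the rounds
```

Because the top `keep` candidates always survive, the best score is monotonically non-decreasing across rounds.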
I want to release some cool stuff when I have the time:
- how an answer to a single question changes over time, with each training round or day
- a chart to show AHA alignment over training rounds

Just finished fine-tuning Gemma 3 12B and 27B with a custom RL-like ORM for a half-subjective task (rating the healthiness of food and cosmetic products based on some personal info). I want to serve it with a pay-per-token inference engine, though; does anyone know a platform to host it? Btw, as far as I know, Together and some others support LoRA only for a limited list of base models (which doesn't include Gemma 3), so...
More info about that app coming soon :)
We are preparing to launch...
Stay tuned.
This is getting too long.
See ya

julien-c/follow-history
As you can see, I still have more followers than @julien-c, even though he's trying to change this by building such cool spaces 😝😝😝

Also, the links are just wrong as far as I know; open source just means it's accessible for everyone to download... But the license differs, like you said, and the worst case is that it can't be used to make money, that's it.
Please correct me if I'm wrong.

Well, the models are research, and there is some real work going into them, but I checked some of the products promoted here and they are either clones of Spaces you can find here or the same thing with a name slapped on...
Plus, all models here are OSS, just licensed differently (like CC-BY-NC or custom licenses); either way they bring competition, contribution, and ideas here, which is always a plus for everyone.

Our dataset, based on these comparisons, is now available on Hugging Face. This might be useful for anyone working on AI translation or language model evaluation.
Rapidata/Translation-deepseek-llama-mixtral-v-deepl

You can use lm-evaluation-harness from EleutherAI, though it's a bit slow in my testing.
Alternatively, you can use HF Evaluate to match the scores from the public leaderboard.
Side note: the HF LLM leaderboard seems a bit outdated, so to use the newer and better benchmarks, I suggest evaluating locally.
The links:
https://github.com/EleutherAI/lm-evaluation-harness
https://huggingface.co/docs/evaluate/
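For the harness, a minimal CLI invocation looks roughly like this; the model and task names are just examples, so check the repo README for the current flags:

```shell
# Install the harness (from the EleutherAI repo linked above)
pip install lm-eval

# Evaluate a small HF model on one task; swap in your own
# checkpoint and the benchmarks you care about
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks lambada_openai \
    --batch_size 8
```

Add `--device cuda:0` if you have a GPU; running on CPU is where the slowness really shows.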

'However, it's important to remember that users have the right to leverage these models commercially without an obligation to contribute.'
Yeah ik, I'm just saying don't promote in here.
Also, I didn't mean you specifically; there are promotions and even spams promoting multiple paid Gradio apps. I'm mad at them, not the models...
Thank you for your kind response btw :)

import re

def remove_emojis(text):
    # Broad emoji pattern covering the common Unicode emoji blocks
    emoji_pattern = re.compile(
        "["
        "\U0001F600-\U0001F64F"  # emoticons
        "\U0001F300-\U0001F5FF"  # symbols & pictographs
        "\U0001F680-\U0001F6FF"  # transport & map symbols
        "\U0001F1E0-\U0001F1FF"  # flags (iOS)
        "\U00002702-\U000027B0"  # dingbats
        "\U000024C2-\U0001F251"  # enclosed characters
        "\U0001F900-\U0001F9FF"  # supplemental symbols and pictographs
        "\U0001FA00-\U0001FA6F"  # chess symbols and more emojis
        "\U0001FA70-\U0001FAFF"  # more symbols and pictographs
        "\U00002600-\U000026FF"  # miscellaneous symbols
        "\U00002B50-\U00002B59"  # additional symbols
        "\U0000200D"             # zero width joiner
        "\U0000200C"             # zero width non-joiner
        "\U0000FE0F"             # emoji variation selector
        "]+", flags=re.UNICODE
    )
    return emoji_pattern.sub("", text)