Donut Earthers 🍩

community

AI & ML interests

inviting members to join the donut earth community from all over the donut 🍩

Recent Activity

p3nGu1nZz new activity about 1 month ago
donut-earth/donut-AE:enhance-setup-script
cappuch new activity about 1 month ago
donut-earth/donut-AE:enhance-setup-script
p3nGu1nZz updated a model about 1 month ago
donut-earth/donut-AE

donut-earth's activity

alielfilali01
posted an update 4 days ago
~75% on the challenging GPQA with only 40M parameters 🔥🥳

GREAT ACHIEVEMENT! Or is it?

This new work, "Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation", takes the mystery out of many models whose results I personally suspected, especially on leaderboards other than the English one, like the Open Arabic LLM Leaderboard OALL/Open-Arabic-LLM-Leaderboard.

The authors first trained a model directly on the GPQA data, which, unsurprisingly, led to the model achieving 100% performance.

Afterward, they trained what they referred to as a 'legitimate' model on legitimate data (MedMCQA). However, they introduced a distillation loss from the earlier, 'cheated' model.

What they discovered was fascinating: the knowledge of GPQA leaked through this distillation loss, even though the legitimate model was never explicitly trained on GPQA during this stage.

This raises important questions about the careful use of distillation in model training, especially when the training data is opaque. As they demonstrated, it’s apparently possible to (intentionally or unintentionally) leak test data through this method.
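For intuition, the leak mechanism is just the standard distillation objective: the student is trained to match the teacher's softened output distribution, so whatever the teacher memorized gets copied along. A toy KL-only sketch in plain Python (my own illustration, not the paper's exact loss, which also combines a task term):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution (numerically stable)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's: minimizing it pulls the student toward the teacher's outputs,
    including any benchmark answers the teacher memorized."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A teacher that memorized the benchmark answer (option index 2) penalizes
# a student that hasn't committed to it yet ...
teacher = [0.1, 0.2, 9.0, 0.3]
student_uniform = [1.0, 1.0, 1.0, 1.0]
# ... and rewards a student that copies it exactly (loss drops to 0).
student_matching = [0.1, 0.2, 9.0, 0.3]
print(distillation_loss(student_uniform, teacher) >
      distillation_loss(student_matching, teacher))  # True
```

Even if the student's own training data is clean, gradient descent on this term transfers the teacher's memorized answers.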

Find out more: Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation (2412.15255)
takarajordan
posted an update 16 days ago
I made an RSS feed for HuggingFace Daily Papers!! 🤗

Just subscribe here: https://papers.takara.ai/api/feed

It updates every 24 hours and runs as a serverless Go script with a Redis cache (to avoid hitting HF all the time).

I'm open sourcing the code, you can check out my repo and deploy it on Vercel extremely easily!
https://github.com/404missinglink/HF-Daily-Papers-Feeds
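Since the endpoint serves standard RSS, any feed reader or a few lines of stdlib Python can consume it. A minimal sketch that parses a trimmed, hypothetical payload (the item titles and links below are made up, not actual output of papers.takara.ai):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of an RSS 2.0 payload, for illustration only.
sample_rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>HuggingFace Daily Papers</title>
    <item>
      <title>Example Paper A</title>
      <link>https://huggingface.co/papers/0000.00000</link>
    </item>
    <item>
      <title>Example Paper B</title>
      <link>https://huggingface.co/papers/1111.11111</link>
    </item>
  </channel>
</rss>"""

# Parse the feed and pull out (title, link) pairs for each paper.
root = ET.fromstring(sample_rss)
papers = [(item.findtext("title"), item.findtext("link"))
          for item in root.iter("item")]
for title, link in papers:
    print(f"{title}: {link}")
```

Swapping the sample string for the bytes returned by an HTTP GET of the feed URL gives a working client.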

thanks to @John6666 @p3nGu1nZz for your early support
alielfilali01
posted an update 21 days ago
Unpopular opinion: Open Source takes courage!

Not everyone is brave enough to release what they have done (the way they've done it) into the wild to be judged!
It really requires a high level of "knowing wth you are doing"! It's kind of a superpower!

Cheers to the heroes here who see this!
takarajordan
posted an update 23 days ago
I'm super excited to release my first open-source text dataset:

WorldScenario 20K is a novel dataset of 20,000 synthetically generated multi-stakeholder scenarios designed to simulate real-world decision-making processes. Each scenario explores a unique environmental, societal, or economic issue.

I used the brand new meta-llama/Llama-3.3-70B-Instruct model to generate this dataset, then ran it through some post-processing to clean it and evaluate it for diversity.

I'd appreciate some feedback and thoughts on my new release! Thanks!

takarajordan/WorldScenario_20K
alielfilali01
posted an update 25 days ago
Apparently I forgot to put this here!

Well, this is a bit late, but consider giving our recent blog a read if you are interested in evaluation.

You don't have to be into Arabic NLP to read it; the main contribution we introduce is a new evaluation measure for NLG. We made the first application of this measure on Arabic for now, and we will be working with colleagues from the community to expand it to other languages.

Blog:
Rethinking LLM Evaluation with 3C3H: AraGen Benchmark and Leaderboard
https://huggingface.co/blog/leaderboard-3c3h-aragen

Space:
inceptionai/AraGen-Leaderboard

Give it a read and let me know your thoughts 🤗
p3nGu1nZz
in donut-earth/donut-AE about 1 month ago

enhance-setup-script

4
#1 opened about 1 month ago by
p3nGu1nZz
cappuch
in donut-earth/donut-AE about 1 month ago

enhance-setup-script

4
#1 opened about 1 month ago by
p3nGu1nZz
cappuch
updated a Space about 1 month ago
takarajordan
posted an update about 1 month ago
cappuch
posted an update about 1 month ago
takarajordan
in donut-earth/proof about 1 month ago

Another fake image

#4 opened about 1 month ago by
qkasriel

Fake image in the dataset.

1
#1 opened about 1 month ago by
qkasriel
not-lain
updated a Space about 1 month ago
takarajordan
posted an update about 1 month ago
First post here goes!

takarajordan/CineDiffusion

Super excited to announce CineDiffusion 🎥, which creates images up to 4.2 megapixels in cinematic ultrawide formats like:
- 2.39:1 (Modern Widescreen)
- 2.76:1 (Ultra Panavision 70)
- 3.00:1 (Experimental Ultra-wide)
- 4.00:1 (Polyvision)
- 2.55:1 (CinemaScope)
- 2.20:1 (Todd-AO)
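For a sense of how a ~4.2 MP pixel budget maps onto those ratios, here is some back-of-the-envelope Python (my own arithmetic, not CineDiffusion's internals; rounding dimensions to a multiple of 8 is an assumption, since many diffusion pipelines require it):

```python
def dims_for_ratio(ratio, max_pixels=4_200_000, multiple=8):
    """Largest width x height fitting the pixel budget at the given aspect
    ratio, with both dimensions rounded down to a multiple of `multiple`."""
    # width = height * ratio, so height^2 * ratio <= max_pixels
    height = int((max_pixels / ratio) ** 0.5) // multiple * multiple
    width = int(height * ratio) // multiple * multiple
    return width, height

# Print candidate resolutions for a few of the listed formats.
for name, r in [("2.39:1", 2.39), ("2.76:1", 2.76), ("4.00:1", 4.00)]:
    w, h = dims_for_ratio(r)
    print(f"{name}: {w}x{h} ({w * h / 1e6:.2f} MP)")
```

The wider the format, the shorter the image gets for the same pixel budget, which is why ultrawide support at this resolution is notable.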

More to come soon!!

Thanks to @John6666 and @Resoldjew for your early support <3

And thanks to the team at ShuttleAI for their brand new Shuttle-3 model, what an amazing job.

shuttleai/shuttle-3-diffusion
not-lain
posted an update about 2 months ago
ever wondered how you can make an API call to a visual-question-answering model without sending an image URL 👀

you can do that by converting your local image to base64 and sending it to the API.

recently I made some changes to my library "loadimg" that allows you to make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load a local image (file path, URL, PIL image, or numpy array) as base64
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
alielfilali01
posted an update about 2 months ago
Unpopular opinion: o1-preview is dumber than 4o, and Qwen2.5-72B-Instruct is extremely underrated!
alielfilali01
posted an update 2 months ago
I feel like this incredible resource hasn't gotten the attention it deserves in the community!

@clefourrier and the HuggingFace evaluation team put together a fantastic guidebook covering a lot about 𝗘𝗩𝗔𝗟𝗨𝗔𝗧𝗜𝗢𝗡, from basics to advanced tips.

link : https://github.com/huggingface/evaluation-guidebook

I haven't finished it yet, but I'm enjoying every piece of it so far. Huge thanks to @clefourrier and the team for this invaluable resource!