youssef boulaouane (byoussef)

AI & ML interests: None yet

byoussef's activity

Reacted to rwightman's post with 🚀 4 days ago
Want to validate some hparams or figure out which timm model to use before committing to downloading or training on a large dataset? Try mini-imagenet: timm/mini-imagenet

I had this sitting on my drive and forgot where I pulled it together from. It's 100 classes of ImageNet, 50k train and 10k val images (from the ImageNet-1k train set), and 5k test images (from the ImageNet-1k val set). 7.4GB instead of >100GB for the full ImageNet-1k. This version is not reduced resolution like some other 'mini' versions. Super easy to use with the timm train/val scripts; check out the dataset card.

I often check fine-tuning with even smaller datasets like:
* timm/resisc45
* timm/oxford-iiit-pet
But those are a bit small to train any modest-size model w/o starting from pretrained weights.
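The dataset card covers usage with the timm scripts; a minimal sketch of a from-scratch run is below. The model choice and hyperparameters here are illustrative assumptions, not values from the post, and the `hfds/` dataset prefix assumes a recent timm source checkout with Hugging Face datasets support:

```shell
# From a timm source checkout: train a small model directly on the
# Hub-hosted dataset via the `hfds/` prefix (illustrative hparams).
python train.py \
  --dataset hfds/timm/mini-imagenet \
  --model resnet26t \
  --num-classes 100 \
  --batch-size 256 \
  --epochs 100 \
  --lr 0.5 --sched cosine --amp
```

Swapping `--model` and the schedule lets you sweep hparams cheaply before repeating the winning config on full ImageNet-1k.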
Reacted to averoo's post with 🔥 25 days ago
Hello, researchers! I've tried to make reading HF Daily Papers easier and built a tool that writes reviews with LLMs like Claude 3.5, GPT-4o, and sometimes FLUX.

📚 Classification by topics
📅 Sorting by publication date and HF addition date
🔄 Syncing every 2 hours
💻 Hosted on GitHub
🌏 English, Russian, and Chinese
📈 Top by week/month (in progress)

👉 https://hfday.ru

Let me know what you think of it.
upvoted an article about 1 month ago
liked a Space about 2 months ago
Reacted to rwightman's post with ❤️ about 2 months ago
A 'small' MobileNet-V4 update, I just pushed weights for the smallest model I've trained in the series, a 0.5 width multiplier version of the MobileNet-V4 Conv Small.

Now you may look at this and say: hey, why is this impressive? 64.8% top-1 with 2.2M params? MobileNetV3-Small 0.75 and MobileNet-V2 0.5 both have fewer params (~2M) and over 65% top-1, so what gives? Well, this is where MobileNet-V4 differs from the previous versions of the model family: it trades away a little parameter efficiency for some computational efficiency.

So, let's look at the speed. On a 4090 w/ torch.compile:
* 98K img/sec - timm/mobilenetv4_conv_small_050.e3000_r224_in1k
* 58K img/sec - timm/mobilenetv3_small_075.lamb_in1k
* 37K img/sec - timm/mobilenetv2_050.lamb_in1k

And there you go, if you have a need for speed, MNV4 is the better option.
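Working the ratios out from those throughput figures makes the trade-off concrete. A quick sketch using only the img/sec numbers quoted above (not re-measured):

```python
# Relative throughput from the img/sec numbers quoted above
# (RTX 4090, torch.compile). Figures are from the post, not re-measured.
throughput = {
    "mobilenetv4_conv_small_050": 98_000,
    "mobilenetv3_small_075": 58_000,
    "mobilenetv2_050": 37_000,
}

mnv4 = throughput["mobilenetv4_conv_small_050"]
for name, img_per_sec in throughput.items():
    speedup = mnv4 / img_per_sec
    print(f"{name}: {img_per_sec:,} img/sec ({speedup:.2f}x vs MNV4-S 0.5)")
# MNV4 Conv Small 0.5 works out to ~1.69x the throughput of
# MNV3-Small 0.75 and ~2.65x that of MNV2 0.5, despite a few
# hundred thousand more parameters.
```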
upvoted an article about 2 months ago