zamal_

zamal

AI & ML interests

Anything that makes our lives easier.

Organizations

FAU Erlangen-Nürnberg · Open-Source AI Meetup · lora concepts library · OpenSky · That Time I got Reincarnated as a Hugging Face Organization · ZeroGPU Explorers · LocalLLaMA · MLX Community · Social Post Explorers · Paris AI Running Club · Hugging Face Party @ PyTorch Conference

zamal's activity

reacted to their post with πŸ”₯ 2 months ago
πŸš€ Announcement for the Lovely community! πŸš€

Just launched the zamal/DeepSeek-VL-1.3B-Chat on Hugging Face, and it's ready for YOU to explore! πŸ’¬πŸ–ΌοΈ

This full-fledged model handles advanced image and text interactions, with zero GPU required on your end. DeepSeek-VL-1.3B-Chat typically needs around 8 GB of VRAM and almost 4 GB of storage, but now you can experience it hassle-free right on our Space!

Want something lighter? We’ve also uploaded a 4-bit quantized version (just around 1 GB!), available on my profile. Perfect for those with limited hardware. πŸŒπŸ”

Come try it now and see what this model can do! πŸš€βœ¨
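The VRAM and size figures above line up with simple parameter-count arithmetic. Here is a minimal back-of-envelope sketch, assuming ~1.3B parameters and counting only weight storage (activations, KV cache, and format overhead are excluded, which is why real VRAM use runs higher):

```python
# Back-of-envelope memory estimate for a ~1.3B-parameter model at
# different precisions. The parameter count and bit widths are
# illustrative assumptions, not exact figures for DeepSeek-VL.

def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GiB (weights only, no overhead)."""
    return n_params * bits_per_param / 8 / 1024**3

n_params = 1.3e9  # ~1.3B parameters

fp16 = weight_memory_gb(n_params, 16)  # roughly 2.4 GiB of weights
int4 = weight_memory_gb(n_params, 4)   # roughly 0.6 GiB of weights

print(f"fp16 weights: {fp16:.2f} GiB")
print(f"4-bit weights: {int4:.2f} GiB")
```

The fp16 figure is the weights alone; the ~4 GB on-disk and ~8 GB VRAM numbers in the post include the vision tower, runtime buffers, and activations on top of that, while the 4-bit estimate matches the "around 1 GB" quantized upload.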


reacted to their post with πŸ”₯ 2 months ago
Hello, lovely community! 🌟

Thrilled to announce that the Molmo 7B 4-bit Space (zamal/Molmo-4bit) is now live! πŸš€ The model size has been reduced sixfold with almost no performance loss, and the results will leave you amazed!

It runs on ZeroGPU, making it incredibly accessible for everyone!

Check it out here and start exploring today!

Happy experimenting! πŸŽ‰
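For intuition, the core idea behind 4-bit quantization can be sketched from scratch. This toy symmetric quantizer (one scale per tensor, codes in [-8, 7]) is purely illustrative; it is not the actual algorithm used to produce the released weights:

```python
# Toy 4-bit symmetric quantization: map floats to signed 4-bit codes
# and back. Real schemes (GPTQ, bitsandbytes) are more sophisticated,
# but the storage saving comes from the same idea shown here.

def quantize_4bit(weights):
    """Map floats to signed 4-bit integers in [-8, 7] with one scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [c * scale for c in codes]

w = [0.12, -0.53, 0.98, -0.07]
codes, scale = quantize_4bit(w)
w_hat = dequantize_4bit(codes, scale)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(codes, scale, max_err)
```

Each weight now costs 4 bits instead of 16 or 32, and the reconstruction error stays bounded by the scale, which is why accuracy loss can be small in practice.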
posted an update 3 months ago
πŸš€ New Model Release: zamal/Molmo-7B-GPTQ-4bit πŸš€

Hello lovely community,

The zamal/Molmo-7B-GPTQ-4bit model is now available for all! It has been heavily quantized, reducing its size almost sixfold. It now occupies significantly less storage and VRAM, making it perfect for deployment on resource-constrained devices without compromising performance.

Now we get:
Efficient performance: maintains high accuracy despite aggressive quantization.
Reduced size: nearly six times smaller, optimizing storage and memory usage.
Versatile application: ideal for integrating a powerful vision-language model into various projects, particularly multi-RAG chains.
Check it out!
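The "almost six times" figure is plausible from simple bit accounting. A rough sketch, assuming 4-bit weights plus per-group fp16 scales with group size 128 (illustrative defaults, not the exact recipe used for this release); the true ratio lands between the fp32 and fp16 baselines depending on which layers stay unquantized and the storage format:

```python
# Rough arithmetic behind a "~6x smaller" claim for 4-bit GPTQ.
# Assumes per-group fp16 scales with group size 128; these are
# illustrative defaults, not the exact published configuration.

def compression_ratio(baseline_bits=32, quant_bits=4,
                      group_size=128, scale_bits=16):
    # Each group of `group_size` weights stores one extra scale,
    # so the effective cost per weight is slightly above 4 bits.
    bits_per_weight = quant_bits + scale_bits / group_size
    return baseline_bits / bits_per_weight

for base in (32, 16):
    print(f"fp{base} -> 4-bit: {compression_ratio(base):.1f}x")
```

Against fp32 the ideal ratio is close to 7.8x, against fp16 about 3.9x; a real checkpoint with some layers kept in higher precision lands in between, consistent with the ~6x observed here.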

reacted to their post with πŸ”₯πŸ‘ 7 months ago
Finally!
My first post for the lovely community out there!

Here's a highly quantized, fine-tuned version of Gemma focused exclusively on prompt engineering. Write as ambiguously as you want and leave the job to this model.

zamal/gemma-7b-finetuned