wenhua cheng

wenhuach

AI & ML interests

Model Compression, CV

Organizations

Intel, Need4Speed, Qwen

wenhuach's activity

replied to their post about 13 hours ago

While that may be one reason, it doesn't fully explain why there are still many quantized models available for LLaMA 3.1 and LLaMA 3.3.

reacted to their post with 🚀 7 days ago
posted an update 7 days ago
replied to their post 12 days ago

You can try auto-round-fast xxx, which trades a slight drop in accuracy for speed, or auto-round-fast xxx --nsamples 1 --iters 1 for very fast execution without algorithm tuning.
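For reference, a rough Python-API equivalent of that fast recipe might look like the sketch below. This is a minimal sketch, not a verified recipe: the model name is a hypothetical placeholder standing in for "xxx", and the parameter names (nsamples, iters) are assumed to mirror the CLI flags in AutoRound's quantization API.

```python
# Minimal sketch: AutoRound quantization with tuning effectively disabled
# (nsamples=1, iters=1), mirroring the fast CLI invocation above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"  # hypothetical placeholder standing in for "xxx"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

autoround = AutoRound(model, tokenizer, bits=4, group_size=128,
                      nsamples=1, iters=1)  # 1 sample, 1 step: near-zero tuning time
autoround.quantize()
autoround.save_quantized("./opt-125m-int4")
```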

replied to their post 13 days ago

Thank you for your suggestion. As our focus is on algorithm development and our computational resources are limited, we currently lack the bandwidth to support a large number of models. If you come across a model that would benefit from quantization, feel free to comment on it under OPEA, and we will make an effort to prioritize and quantize it as resources allow.

reacted to their post with 🔥👀 13 days ago
posted an update 13 days ago
reacted to their post with ❤️ 21 days ago
replied to their post 22 days ago
posted an update 24 days ago
This week, OPEA Space released several new INT4 models, including:
nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
allenai/OLMo-2-1124-13B-Instruct
THUDM/glm-4v-9b
AIDC-AI/Marco-o1
and several others.
Let us know which models you'd like prioritized for quantization, and we'll do our best to make it happen!

https://huggingface.co/OPEA
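As a quick usage note, these checkpoints are intended to load through the standard transformers API (with the matching quantization backend installed). A minimal sketch, assuming a hypothetical OPEA repo id; check https://huggingface.co/OPEA for the exact model names:

```python
# Minimal sketch; the repo id below is hypothetical, see the OPEA page for real names.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OPEA/Marco-o1-int4"  # hypothetical INT4 repo id for illustration
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```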
reacted to their post with 🚀 about 1 month ago
posted an update about 1 month ago
OPEA Space just released nearly 20 INT4 models, for example QwQ-32B-Preview, Llama-3.2-11B-Vision-Instruct, Qwen2.5, and Llama 3.1. Check out https://huggingface.co/OPEA
posted an update 5 months ago
Trying to find a better INT4 algorithm for Llama 3.1? For the 8B model, AutoRound shows an average improvement across 10 zero-shot tasks, scoring 63.93 versus 63.15 for AWQ. Notably, on the MMLU task it achieved 66.72 compared to 65.25, and on the ARC-C task it scored 52.13 against 50.94. For further details and comparisons, visit the leaderboard at Intel/low_bit_open_llm_leaderboard.
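For anyone wanting to reproduce numbers like these, here is a minimal sketch using the Python entry point of lm-evaluation-harness; the local checkpoint path is a hypothetical placeholder, and the task names are the harness's standard identifiers for two of the tasks cited above.

```python
# Minimal sketch, assuming lm-evaluation-harness is installed as `lm_eval`
# and a quantized checkpoint exists at the (hypothetical) local path below.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=./Llama-3.1-8B-int4",  # hypothetical local INT4 model
    tasks=["mmlu", "arc_challenge"],              # two of the tasks cited above
    num_fewshot=0,                                # zero-shot, as in the post
)
print(results["results"])
```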
posted an update 6 months ago