wenhua cheng

wenhuach

AI & ML interests

Model Compression, CV

Organizations

Intel, Need4Speed, Qwen

Posts 8

Check out the [DeepSeek-R1 INT2 model](OPEA/DeepSeek-R1-int2-mixed-sym-inc). This 200 GB DeepSeek-R1 model shows only about a 2% drop in MMLU, though inference is quite slow due to a kernel issue.

| Task          | BF16   | INT2-mixed |
| ------------- | ------ | ---------- |
| mmlu | 0.8514 | 0.8302 |
| hellaswag | 0.6935 | 0.6657 |
| winogrande | 0.7932 | 0.7940 |
| arc_challenge | 0.6212 | 0.6084 |
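A quick sanity check on the claimed "about a 2% drop in MMLU" from the scores above (a minimal sketch; the numbers are copied from the table, and the relative-drop formula is the usual `(bf16 - int2) / bf16`):

```python
# Benchmark scores from the table above (lm-eval task names).
bf16 = {"mmlu": 0.8514, "hellaswag": 0.6935, "winogrande": 0.7932, "arc_challenge": 0.6212}
int2 = {"mmlu": 0.8302, "hellaswag": 0.6657, "winogrande": 0.7940, "arc_challenge": 0.6084}

# Relative drop of INT2-mixed vs. BF16, in percent (negative = INT2 improved).
drops = {task: (bf16[task] - int2[task]) / bf16[task] * 100 for task in bf16}

for task, drop in drops.items():
    print(f"{task}: {drop:+.2f}%")
```

MMLU comes out to roughly a 2.5% relative (about 2.1-point absolute) drop, consistent with the "about 2%" claim; winogrande is actually marginally better under INT2-mixed.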

Models

None public yet

Datasets

None public yet