RachidAR
AI & ML interests: 1.58-bit LLMs
Collections (5)
- Addition is All You Need for Energy-efficient Language Models (Paper • 2410.00907 • Published • 144)
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (Paper • 2402.17764 • Published • 602)
- LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding (Paper • 2404.16710 • Published • 74)
- Beyond Scaling Laws: Understanding Transformer Performance with Associative Memory (Paper • 2405.08707 • Published • 27)
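As a quick illustration of the 1.58-bit idea from the BitNet paper in the collection above (2402.17764), here is a minimal PyTorch sketch of the absmean ternary quantization that paper describes; the function name is mine, not from the paper.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor):
    """Quantize a weight tensor to ternary {-1, 0, +1} values using the
    absmean scheme from the BitNet b1.58 paper (2402.17764).
    Returns the ternary weights and the scale needed to dequantize."""
    gamma = w.abs().mean().clamp(min=1e-8)   # per-tensor absmean scale
    w_q = (w / gamma).round().clamp(-1, 1)   # RoundClip to {-1, 0, +1}
    return w_q, gamma

# Ternary values carry log2(3) ≈ 1.58 bits of information per weight,
# hence the "1.58-bit" name.
w = torch.randn(4, 4)
w_q, gamma = absmean_ternary_quantize(w)
print(w_q)          # entries in {-1., 0., 1.}
print(w_q * gamma)  # dequantized approximation of w
```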
Models (26 total; 10 listed)
- RachidAR/Whisper-v3-large-turbo (Automatic Speech Recognition • Updated)
- RachidAR/Qwen2.5-Coder-1.5B-Q5_K_M-GGUF (Text Generation • Updated • 7)
- RachidAR/Mistral-Small-Instruct-2409-Q4_K_M-GGUF (Updated • 7)
- RachidAR/RWKV-v6-Finch-14B-HF-Q5_K_M-GGUF (Updated • 11 • 1)
- RachidAR/RWKV-v6-Finch-7B-HF-Q5_K_M-GGUF (Updated • 43 • 1)
- RachidAR/RWKV-v6-Finch-1B6-HF-Q5_K_M-GGUF (Updated • 21 • 2)
- RachidAR/Phi-3.5-mini-instruct-Q5_K_M-GGUF (Text Generation • Updated • 3)
- RachidAR/Phi-3-mini-4k-ins-June2024-Q5_K_M-imat-GGUF (Text Generation • Updated • 8)
- RachidAR/Phi-3-mini-4k-instruct-June2024-Q6_K-GGUF (Text Generation • Updated • 12)
- RachidAR/saiga_llama3_8b-Q6_K-GGUF (Updated • 32)
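Most of the models above are GGUF quants (Q4_K_M/Q5_K_M/Q6_K), so they can be run locally with llama.cpp-based tooling. Below is a minimal sketch using huggingface_hub and llama-cpp-python; the exact .gguf filename inside the repo is an assumption on my part, so check the repo's file list before running.

```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized models listed above.
model_path = hf_hub_download(
    repo_id="RachidAR/Phi-3.5-mini-instruct-Q5_K_M-GGUF",
    filename="phi-3.5-mini-instruct-q5_k_m.gguf",  # assumed filename; verify in the repo
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain 1.58-bit quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```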
Datasets: None public yet