{"paper_url": "https://huggingface.co/papers/2306.11987", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper.\n\nThe following papers were recommended by the Semantic Scholar API:\n\n* [Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators](https://huggingface.co/papers/2401.14110) (2024)\n* [OneBit: Towards Extremely Low-bit Large Language Models](https://huggingface.co/papers/2402.11295) (2024)\n* [Model Compression and Efficient Inference for Large Language Models: A Survey](https://huggingface.co/papers/2402.09748) (2024)\n* [LQER: Low-Rank Quantization Error Reconstruction for LLMs](https://huggingface.co/papers/2402.02446) (2024)\n* [WKVQuant: Quantizing Weight and Key/Value Cache for Large Language Models Gains More](https://huggingface.co/papers/2402.12065) (2024)\n\nPlease give a thumbs up to this comment if you found it helpful!\n\nIf you want recommendations for any paper on Hugging Face, check out [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space.\n\nYou can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}