---
title: README
emoji: 💻
colorFrom: purple
colorTo: blue
sdk: static
pinned: false
---
# The Future of AI is Open
[Neural Magic](https://neuralmagic.com/) helps developers accelerate deep learning performance with automated model compression technologies and inference engines.
Download our compression-aware inference engines and open source tools for fast model inference.
* [nm-vllm](https://neuralmagic.com/nm-vllm/): Enterprise-ready inference system, based on the open-source vLLM library, for operationalizing performant open-source LLMs at scale
* [LLM Compressor](https://github.com/vllm-project/llm-compressor/): Hugging Face-native library for applying quantization and sparsity algorithms to LLMs for optimized deployment with vLLM
* [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime delivering accelerated performance on CPUs, with APIs to integrate ML into your application
![NM Workflow](https://cdn-uploads.huggingface.co/production/uploads/60466e4b4f40b01b66151416/QacT1zAnoidTKqRTY4NxH.png)
On this profile we provide accurate model checkpoints compressed with SOTA methods, ready to run in vLLM, including W4A16, W8A16, W8A8 (INT8 and FP8), and many more! If you would like help quantizing a model, or have a request for us to add a checkpoint, please open an issue in [llm-compressor](https://github.com/vllm-project/llm-compressor).
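As a minimal sketch of what "ready to run in vLLM" means in practice: checkpoints from this profile load like any other Hugging Face Hub model, with vLLM reading the quantization config from the checkpoint automatically. The `quantized_repo` helper and the exact repo id below are illustrative assumptions, not a guaranteed naming convention; check the profile for actual model names.

```python
def quantized_repo(base: str, scheme: str) -> str:
    # Hypothetical helper that builds a Hub repo id for a quantized
    # variant; real repo names on the profile may differ.
    return f"neuralmagic/{base}-quantized.{scheme}"


def main() -> None:
    # Requires vLLM and a supported GPU; import is deferred so the
    # helper above stays usable without vLLM installed.
    from vllm import LLM, SamplingParams

    llm = LLM(model=quantized_repo("Meta-Llama-3-8B-Instruct", "w8a8"))
    params = SamplingParams(temperature=0.7, max_tokens=64)
    outputs = llm.generate(["What is INT8 quantization?"], params)
    print(outputs[0].outputs[0].text)


if __name__ == "__main__":
    main()
```

No extra flags are needed for the quantized formats: vLLM detects the compression scheme (e.g. W8A8 via compressed-tensors) from the model's config files.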