---
title: README
emoji: 🐒
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
### Models
Check out Intel's models here on our Hugging Face page or directly through the [Hugging Face Models Hub search](https://huggingface.co/models?sort=trending&search=intel). Here are some of Intel's models:
| Model | Type |
| :--- | :--- |
| [dpt-hybrid-midas](https://huggingface.co/Intel/dpt-hybrid-midas) | Monocular depth estimation |
| [llava-gemma-2b](https://huggingface.co/Intel/llava-gemma-2b) | Multimodal |
| [gpt2 on Gaudi](https://huggingface.co/Habana/gpt2) | Text generation |
| [neural-chat-7b-v3-3-int8-ov](https://huggingface.co/OpenVINO/neural-chat-7b-v3-3-int8-ov) | Text generation |
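These models load directly from the Hub with the standard 🤗 Transformers APIs. As a minimal sketch (not taken from the model cards), here is depth estimation with dpt-hybrid-midas; the checkpoint and sample image URL are illustrative choices:
```python
# Minimal sketch: monocular depth estimation with a model from the table above.
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")
result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
result["depth"].save("depth_map.png")  # the pipeline returns a PIL depth map plus the raw tensor
```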
### Datasets
Intel has created a number of [datasets](https://huggingface.co/Intel?sort_datasets=modified#datasets) for fine-tuning both vision and language models. Check them out on our page, including [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) for natural language processing tasks and [SocialCounterfactuals](https://huggingface.co/datasets/Intel/SocialCounterfactuals) for vision tasks.
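These datasets can be pulled down with the 🤗 Datasets library. A minimal sketch, assuming the standard `train` split of orca_dpo_pairs:
```python
# Minimal sketch: load a fine-tuning dataset from the Intel organization on the Hub.
from datasets import load_dataset

dpo_pairs = load_dataset("Intel/orca_dpo_pairs", split="train")
print(dpo_pairs[0])  # each record pairs a prompt with a chosen and a rejected response
```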
### Collections
Our Collections categorize models that pertain to Intel hardware and software. Here are a few:
| Collection | Description |
| :--- | :--- |
| [DPT 3.1](https://huggingface.co/collections/Intel/dpt-31-65b2a13eb0a5a381b6df9b6b) | Monocular depth estimation (MiDaS) models, leveraging state-of-the-art vision backbones such as BEiT and Swinv2 |
| [Whisper](https://huggingface.co/collections/Intel/whisper-65b3d8d2d5bf0d622a866e3a) | Whisper models for automatic speech recognition (ASR) and speech translation, quantized for faster inference |
| [Intel Neural Chat](https://huggingface.co/collections/Intel/intel-neural-chat-65b3d2f2d0ba0a801668ef2c) | Fine-tuned 7B-parameter LLMs, one of which reached the top of the Hugging Face LLM Leaderboard among 7B models |
### Spaces
Check out Intel's leaderboards and other demo applications from our [Spaces](https://huggingface.co/Intel?sort_spaces=modified#spaces):
| Space | Description |
| :--- | :--- |
| [Powered-by-Intel LLM Leaderboard](https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard) | Evaluate, score, and rank open-source LLMs that have been pre-trained or fine-tuned on Intel Hardware 🦾 |
| [Intel Low-bit Quantized Open LLM Leaderboard](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard) | Evaluation leaderboard for quantized language models |
### Blogs
Get started with deploying Intel's models on Intel architecture with these hands-on tutorials, written by Hugging Face and Intel staff:
| Blog | Description |
| :--- | :--- |
| [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel) | Develop and deploy RAG applications as part of OPEA, the Open Platform for Enterprise AI |
| [Running Large Multimodal Models on an AI PC's NPU](https://huggingface.co/blog/bconsolvo/llava-gemma-2b-aipc-npu) | Run the llava-gemma-2b model on an AI PC's NPU |
| [A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake](https://huggingface.co/blog/phi2-intel-meteor-lake) | Deploy Phi-2 on your local laptop with Intel OpenVINO in the Optimum Intel library |
| [Partnering to Democratize ML Hardware Acceleration](https://huggingface.co/blog/intel) | Intel and Hugging Face collaborate to build state-of-the-art hardware acceleration to train, fine-tune and predict with Transformers |
### Documentation
To learn more about deploying models on Intel hardware with Transformers, visit the resources listed below.
*Optimum Habana* - To deploy on Intel Gaudi accelerators, check out [optimum-habana](https://github.com/huggingface/optimum-habana/), the interface between Gaudi and the 🤗 Transformers and Diffusers libraries. To install the latest stable release:
```bash
pip install --upgrade-strategy eager "optimum[habana]"
```
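Once installed, training follows the usual 🤗 Transformers pattern, with the Trainer and TrainingArguments classes swapped for their Gaudi counterparts. A minimal sketch; the model, dataset, and the Gaudi config name below are placeholders:
```python
# Minimal sketch of the optimum-habana training pattern; the model and dataset
# are assumed to be prepared elsewhere, and the Gaudi config name is illustrative.
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

training_args = GaudiTrainingArguments(
    output_dir="./results",
    use_habana=True,                   # run on Gaudi (HPU) devices
    use_lazy_mode=True,                # use HPU lazy-mode graph execution
    gaudi_config_name="Habana/gpt2",   # a ready-made Gaudi config from the Hub
)

# trainer = GaudiTrainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```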
*Optimum Intel* - To deploy on all other Intel architectures, check out [optimum-intel](https://github.com/huggingface/optimum-intel), the interface between Intel architectures and the 🤗 Transformers and Diffusers libraries. Depending on your needs, you can use one of these backends:
| Accelerator | Installation |
|:---|:---|
| [Intel Neural Compressor](https://huggingface.co/docs/optimum/en/intel/optimization_inc) | `pip install --upgrade --upgrade-strategy eager "optimum[neural-compressor]"` |
| [OpenVINO](https://huggingface.co/docs/optimum/en/intel/inference) | `pip install --upgrade --upgrade-strategy eager "optimum[openvino]"` |
| [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/#introduction) | `pip install --upgrade --upgrade-strategy eager "optimum[ipex]"` |
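With the OpenVINO backend, inference keeps the familiar Transformers API: the `OVModelFor*` classes stand in for their `AutoModelFor*` counterparts. A minimal sketch using the quantized neural-chat checkpoint from the Models table above (the prompt is illustrative):
```python
# Minimal sketch: run an OpenVINO-quantized checkpoint with optimum-intel.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "OpenVINO/neural-chat-7b-v3-3-int8-ov"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("What is an AI PC?", max_new_tokens=64)[0]["generated_text"])
```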
### Join Our Dev Community
Please join us on the [Intel DevHub Discord](https://discord.gg/kfJ3NKEw5t) to ask questions and interact with our AI developer community!