---
title: README
emoji: 🐢
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.

### Models
Check out Intel's models here on our organization page, or find them directly through the [Hugging Face Models Hub search](https://huggingface.co/models?sort=trending&search=intel). Here are a few of Intel's models; a short usage sketch follows the table:

| Model | Type | 
| :--- | :--- | 
| [dpt-hybrid-midas](https://huggingface.co/Intel/dpt-hybrid-midas) | Monocular depth estimation | 
| [llava-gemma-2b](https://huggingface.co/Intel/llava-gemma-2b) | Multimodal |
| [gpt2 on Gaudi](https://huggingface.co/Habana/gpt2) | Text generation | 
| [neural-chat-7b-v3-3-int8-ov](https://huggingface.co/OpenVINO/neural-chat-7b-v3-3-int8-ov) | Text generation | 
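
These models load through the standard 🤗 Transformers APIs. As a minimal sketch, the snippet below runs monocular depth estimation with dpt-hybrid-midas (the sample image URL is only a placeholder; substitute any RGB image):

```python
from transformers import pipeline
from PIL import Image
import requests

# Monocular depth estimation with Intel/dpt-hybrid-midas via the
# standard transformers pipeline API.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")

# Placeholder sample image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = depth_estimator(image)
result["depth"].save("depth_map.png")  # the pipeline returns a PIL depth map
```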

### Datasets

Intel has created a number of [datasets](https://huggingface.co/Intel?sort_datasets=modified#datasets) for fine-tuning both vision and language models, including [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) for natural language processing tasks and [SocialCounterfactuals](https://huggingface.co/datasets/Intel/SocialCounterfactuals) for vision tasks.
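
These load like any other Hub dataset via the 🤗 Datasets library. A minimal sketch with orca_dpo_pairs (assuming the dataset's default `train` split):

```python
from datasets import load_dataset

# Preference pairs for direct preference optimization (DPO) fine-tuning.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")

# Inspect the schema; DPO-style datasets pair a prompt with a
# chosen and a rejected response.
print(dataset.column_names)
print(dataset[0])
```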

### Collections

Our Collections group models related to Intel hardware and software. Here are a few:

| Collection | Description | 
| :--- | :--- | 
| [DPT 3.1](https://huggingface.co/collections/Intel/dpt-31-65b2a13eb0a5a381b6df9b6b) | Monocular depth (MiDaS) models, leveraging state-of-the-art vision backbones such as BEiT and Swinv2 |
| [Whisper](https://huggingface.co/collections/Intel/whisper-65b3d8d2d5bf0d622a866e3a) | Whisper models for automatic speech recognition (ASR) and speech translation, quantized for faster inference |
| [Intel Neural Chat](https://huggingface.co/collections/Intel/intel-neural-chat-65b3d2f2d0ba0a801668ef2c) | Fine-tuned 7B-parameter LLMs, one of which reached the top of the 7B Hugging Face LLM Leaderboard |

### Spaces
Check out Intel's leaderboards and other demo applications from our [Spaces](https://huggingface.co/Intel?sort_spaces=modified#spaces):

| Space | Description | 
| :--- | :--- | 
| [Powered-by-Intel LLM Leaderboard](https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard) | Evaluate, score, and rank open-source LLMs that have been pre-trained or fine-tuned on Intel hardware 🦾 |
| [Intel Low-bit Quantized Open LLM Leaderboard](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard) | Evaluation leaderboard for quantized language models | 

### Blogs

Get started deploying Intel's models on Intel architectures with these hands-on blog tutorials, written by staff from Hugging Face and Intel:

| Blog | Description | 
| :--- | :--- | 
| [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel) | Develop and deploy RAG applications as part of OPEA, the Open Platform for Enterprise AI | 
| [Running Large Multimodal Models on an AI PC's NPU](https://huggingface.co/blog/bconsolvo/llava-gemma-2b-aipc-npu) | Run the llava-gemma-2b model on an AI PC's NPU | 
| [A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake](https://huggingface.co/blog/phi2-intel-meteor-lake) | Deploy Phi-2 on your local laptop with Intel OpenVINO in the Optimum Intel library | 
| [Partnering to Democratize ML Hardware Acceleration](https://huggingface.co/blog/intel) | Intel and Hugging Face collaborate to build state-of-the-art hardware acceleration to train, fine-tune and predict with Transformers | 

### Documentation

To learn more about deploying models on Intel hardware with Transformers, visit the resources listed below.

*Optimum Habana* - To deploy on Intel Gaudi accelerators, check out [optimum-habana](https://github.com/huggingface/optimum-habana/), the interface between Gaudi and the 🤗 Transformers and Diffusers libraries. To install the latest stable release:

```bash
pip install --upgrade-strategy eager "optimum[habana]"
```
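
Once installed, optimum-habana is a drop-in for the usual Trainer API: `GaudiTrainer` and `GaudiTrainingArguments` mirror their transformers counterparts. Below is a minimal sketch, assuming a Gaudi machine is available; the model, dataset, and output path are placeholder choices:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# Placeholder model: gpt2, matching the Habana/gpt2 Gaudi config on the Hub.
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# A tiny text corpus, tokenized for causal-LM fine-tuning (placeholder choice).
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128, padding="max_length")

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])
dataset = dataset.map(lambda x: {"labels": x["input_ids"]})  # causal-LM labels

# The extra arguments target Gaudi (HPU) devices.
training_args = GaudiTrainingArguments(
    output_dir="./gpt2-gaudi",        # placeholder output path
    use_habana=True,                  # run on Gaudi HPUs
    use_lazy_mode=True,               # lazy-mode graph execution on Gaudi
    gaudi_config_name="Habana/gpt2",  # Gaudi configuration from the Hub
)

trainer = GaudiTrainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()
```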

*Optimum Intel* - To deploy on all other Intel architectures, check out [optimum-intel](https://github.com/huggingface/optimum-intel), the interface between Intel architectures and the 🤗 Transformers and Diffusers libraries. Depending on your needs, you can use one of these backends (a usage sketch follows the table):

| Backend | Installation |
|:---|:---|
| [Intel Neural Compressor](https://huggingface.co/docs/optimum/en/intel/optimization_inc) | `pip install --upgrade --upgrade-strategy eager "optimum[neural-compressor]"` |
| [OpenVINO](https://huggingface.co/docs/optimum/en/intel/inference) | `pip install --upgrade --upgrade-strategy eager "optimum[openvino]"` |
| [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/#introduction) | `pip install --upgrade --upgrade-strategy eager "optimum[ipex]"` |
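
With the OpenVINO backend, for example, a model swaps in by replacing the usual `AutoModelFor*` class with the matching `OVModelFor*` class. A minimal sketch, with gpt2 standing in for any supported causal LM:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"  # placeholder; any supported causal LM from the Hub

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Intel and Hugging Face are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Saving the exported model with `model.save_pretrained(...)` avoids re-exporting it on the next load.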


### Join Our Dev Community

Please join us on the [Intel DevHub Discord](https://discord.gg/kfJ3NKEw5t) to ask questions and interact with our AI developer community!