---
title: README
emoji: 🐢
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---

<div class="grid lg:grid-cols-3 gap-x-4 gap-y-7">
	<p class="lg:col-span-3">
	  Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
	</p>
	<a
		href="https://huggingface.co/blog/intel"
		class="block overflow-hidden group"
	>
		<div
			class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
		>
			<img
				alt=""
				src="https://cdn-media.huggingface.co/marketing/intel-page/Intel-Hugging-Face-alt-version2-org-page.png"
				class="w-40"
			/>
		</div>
		<div class="underline">Learn more about the Hugging Face collaboration with Intel AI</div>
	</a>
	<a
		href="https://github.com/huggingface/optimum"
		class="block overflow-hidden group"
	>
		<div
			class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
		>
			<img
				alt=""
				src="/blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png"
				class="w-40"
			/>
		</div>
		<div class="underline">Quantize Transformers with Intel® Neural Compressor and Optimum</div>
	</a>
	<a href="https://huggingface.co/blog/generative-ai-models-on-intel-cpu" class="block overflow-hidden group">
		<div
			class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
		>
			<img
				alt=""
				src="/blog/assets/143_q8chat/thumbnail.png"
				class="w-40"
			/>
		</div>
		<div class="underline">Quantizing a 7B LLM on Intel CPUs</div>
	</a>
	<div class="lg:col-span-3">
		<p class="mb-2">
	    Intel optimizes the most widely adopted and innovative AI software 
	    tools, frameworks, and libraries for Intel® architecture. Whether 
	    you are computing locally or deploying AI applications on a massive 
	    scale, your organization can achieve peak performance with AI 
	    software optimized for Intel® Xeon® Scalable platforms.
		</p>
		<p class="mb-2">
	    Intel’s engineering collaboration with Hugging Face delivers state-of-the-art hardware and software acceleration for training, fine-tuning, and inference with Transformers. 
	  </p>
	  <p>
	  	Useful Resources:
	  </p>
	  <ul>
	  	<li class="ml-6"><a href="https://huggingface.co/hardware/intel" class="underline" data-ga-category="intel-org" data-ga-action="clicked partner page" data-ga-label="partner page">- Intel AI + Hugging Face partner page</a></li>
	  	<li class="ml-6"><a href="https://github.com/IntelAI" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel ai github" data-ga-label="intel ai github">- Intel AI GitHub</a></li>
        <li class="ml-6"><a href="https://www.intel.com/content/www/us/en/developer/partner/hugging-face.html" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel partner page" data-ga-label="intel partner page">- Developer Resources from Intel and Hugging Face</a></li>
	  </ul>
	</div>
    <div class="lg:col-span-3">
	  <p class="mb-2">
	    To get started with Intel® hardware and software optimizations, download and install the Optimum-Intel® 
        and Intel® Extension for Transformers libraries with the following commands:
	  </p>
      <pre><code>$ python -m pip install "optimum-intel[extras] @ git+https://github.com/huggingface/optimum-intel.git"
$ python -m pip install intel-extension-for-transformers</code></pre>
      <p>
        <i>For additional information on these two libraries including installation, features, and usage, see the two links below.</i>
      </p>
      <p class="mb-2">
        Next, find your desired model (and dataset) with the search box at the top left of the Hugging Face website. 
        Add “intel” to your query to narrow the results to models pretrained by Intel.
      </p>
      <p class="mb-2">
        On the model’s page (called a “Model Card”) you will find a description, usage information, an embedded 
        inference demo, and the associated dataset. In the upper right of the page, click “Use in Transformers” 
        for code hints on importing the model into your own workspace with an established Hugging Face pipeline and tokenizer.
      </p>
	  <p>
	  	Library Source and Documentation:
	  </p>
	  <ul>
	  	<li class="ml-6"><a href="https://github.com/huggingface/optimum-intel" class="underline" data-ga-category="intel-org" data-ga-action="clicked optimum intel" data-ga-label="optimum intel">- 🤗 Optimum-Intel® library</a></li>
	  	<li class="ml-6"><a href="https://github.com/intel/intel-extension-for-transformers" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel extension for transformers" data-ga-label="intel extension for transformers">- Intel® Extension for Transformers</a></li>
	  </ul>
	</div>
</div>
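
The “Use in Transformers” flow described above can be sketched in a few lines. This is a minimal example, assuming the `transformers` library is installed; `Intel/dynamic_tinybert` is one of Intel’s published question-answering models on the Hub and stands in for whichever model your chosen Model Card shows:

```python
# Minimal sketch: load a Hub model with a Transformers pipeline and tokenizer.
# "Intel/dynamic_tinybert" is an example model ID; substitute the ID from
# the Model Card you want to use.
from transformers import AutoTokenizer, pipeline

model_id = "Intel/dynamic_tinybert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
qa = pipeline("question-answering", model=model_id, tokenizer=tokenizer)

result = qa(
    question="Who is collaborating with Hugging Face?",
    context="Intel and Hugging Face are building optimization tools to "
            "accelerate training and inference with Transformers.",
)
print(result["answer"])
```

The same `pipeline` call works for any task the model supports; the first run downloads the model weights from the Hub and caches them locally.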