---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- openvino
- llama
- llama-3
license: other
license_name: llama3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B
---
# Meta-Llama-3-8B INT4 Quantized

- INT4-quantized version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), created using OpenVINO
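
The card does not state the exact export command, but a model like this can typically be produced with the `optimum-cli` OpenVINO exporter using INT4 weight compression (requires access to the gated base model; the output directory name is illustrative):

```shell
# Export the base model to OpenVINO IR with INT4 weight compression
optimum-cli export openvino \
  --model meta-llama/Meta-Llama-3-8B \
  --weight-format int4 \
  Meta-Llama-3-8B-OpenVINO-INT4
```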

## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
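
As a concept sketch (not Llama-specific), auto-regressive generation appends one token at a time, each step conditioning on the full prefix. A toy illustration with a hypothetical `next_token_logits` function standing in for the model:

```python
def greedy_generate(next_token_logits, prompt, max_new_tokens):
    """Auto-regressive greedy decoding: repeatedly append the most likely next token."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)  # scores over the (toy) vocabulary
        tokens.append(max(range(len(logits)), key=logits.__getitem__))
    return tokens

# Toy "model": always prefers the token equal to (last token + 1) mod 5.
def toy_logits(tokens):
    target = (tokens[-1] + 1) % 5
    return [1.0 if i == target else 0.0 for i in range(5)]

print(greedy_generate(toy_logits, [0], 4))  # [0, 1, 2, 3, 4]
```

A real LLM replaces `toy_logits` with a transformer forward pass and greedy argmax with a sampling strategy, but the outer loop is the same.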

**Model Release Date** April 18, 2024.

## Usage

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel.openvino import OVModelForCausalLM

model_name = "rajatkrishna/Meta-Llama-3-8B-OpenVINO-INT4"

# Load the INT4 OpenVINO model and its matching tokenizer
model = OVModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Hey how are you doing today?"))
```