---
library_name: transformers
license: llama2
pipeline_tag: text-generation
tags:
- GGUF
- llama-2
- llama
- meta
- facebook
- quantized
- 7b
---
# Model Card for alokabhishek/Llama-2-7b-chat-hf-GGUF
<!-- Provide a quick summary of what the model is/does. -->
This repo contains a GGUF quantized version of Meta's meta-llama/Llama-2-7b-chat-hf model, quantized with llama.cpp.
## Model Details
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
### About GGUF quantization using llama.cpp
- llama.cpp GitHub repo: [ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
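
If you only need the quantized `.gguf` file itself (for example, to run it directly with llama.cpp), it can be fetched with `huggingface_hub`. A minimal sketch, assuming the Q4_K_M file name used in the Python example further below:

```python
# Sketch: download a single quantized file from this repo with huggingface_hub
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="alokabhishek/Llama-2-7b-chat-hf-GGUF",
    filename="llama-2-7b-chat-hf.Q4_K_M.gguf",  # swap in another quant variant (e.g. Q5_K_M) as needed
)
print(gguf_path)  # local path to the .gguf file, usable with llama.cpp or ctransformers
```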
## How to Get Started with the Model
Use the code below to get started with the model.
### How to run from Python code
#### First install the package
```shell
# Base ctransformers with CUDA GPU acceleration
! pip install "ctransformers[cuda]>=0.2.24"
# Or with no GPU acceleration
# ! pip install "ctransformers>=0.2.24"
! pip install -U sentence-transformers
! pip install transformers huggingface_hub torch
```
#### Import
```python
from ctransformers import AutoModelForCausalLM
from transformers import pipeline, AutoModel, AutoTokenizer
from sentence_transformers import SentenceTransformer
import os
```
#### Use a pipeline as a high-level helper
```python
# Load the quantized LLM and the tokenizer
model_llama = AutoModelForCausalLM.from_pretrained(
    "alokabhishek/Llama-2-7b-chat-hf-GGUF",
    model_file="llama-2-7b-chat-hf.Q4_K_M.gguf",  # replace Q4_K_M.gguf with Q5_K_M.gguf as needed
    model_type="llama",
    gpu_layers=50,  # number of layers offloaded to the GPU; set to 0 for CPU-only
    hf=True,  # return a transformers-compatible model so it can be used in a pipeline
)
tokenizer_llama = AutoTokenizer.from_pretrained(
    "alokabhishek/Llama-2-7b-chat-hf-GGUF",
    use_fast=True,
)
# Create a pipeline
pipe_llama = pipeline(model=model_llama, tokenizer=tokenizer_llama, task='text-generation')
prompt_llama = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
output_llama = pipe_llama(prompt_llama, max_new_tokens=512)
print(output_llama[0]["generated_text"])
```
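
Alternatively, the model can be run through ctransformers alone, without wrapping it in a transformers pipeline. A minimal sketch: omitting `hf=True` makes `from_pretrained` return a plain ctransformers `LLM` object that can be called directly (the generation keyword arguments shown are ctransformers options, chosen here as illustrative values):

```python
from ctransformers import AutoModelForCausalLM

# Load the GGUF file directly; without hf=True this returns a ctransformers LLM object
llm = AutoModelForCausalLM.from_pretrained(
    "alokabhishek/Llama-2-7b-chat-hf-GGUF",
    model_file="llama-2-7b-chat-hf.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 to run on CPU only
)

prompt = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
print(llm(prompt, max_new_tokens=512, temperature=0.7))
```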
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]