Bad: running the 2-bit checkpoint through the stock transformers text-generation pipeline produces garbled output.
from transformers import pipeline
import torch
# Load the model pipeline
pipe = pipeline("text-generation", model="GreenBitAI/LLaMA-2-1.1B-2bit-groupsize8", trust_remote_code=True, torch_dtype="auto", device_map="auto")
# Use a properly formatted string as input, with do_sample
response = pipe("Who are you?", max_new_tokens=22, do_sample=True)
# Print the response
print(response[0]['generated_text'])
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
tokenizer_config.json: 100% 776/776 [00:00<00:00, 51.8kB/s]
tokenizer.model: 100% 500k/500k [00:00<00:00, 31.7MB/s]
tokenizer.json: 100% 1.84M/1.84M [00:00<00:00, 6.66MB/s]
special_tokens_map.json: 100% 414/414 [00:00<00:00, 22.7kB/s]
Device set to use cpu
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Who are you?avingrent daß dancecondeavejouIX EventossenAVavidgame-+dedIX Dat Danceavid Counvol races
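The completion is pure noise, which suggests the checkpoint's weights were never actually applied. A quick way to confirm this is to reload the model with output_loading_info=True (a standard from_pretrained flag) and count how many checkpoint tensors transformers dropped and how many model weights it had to initialize from scratch. This is a minimal diagnostic sketch, assuming nothing beyond that flag:

import torch
from transformers import AutoModelForCausalLM

# Ask transformers to report what it did with each checkpoint tensor.
model, info = AutoModelForCausalLM.from_pretrained(
    "GreenBitAI/LLaMA-2-1.1B-2bit-groupsize8",
    torch_dtype=torch.float32,
    output_loading_info=True,
)

# Tensors present in the checkpoint but ignored by LlamaForCausalLM
# (the qweight/qzeros/scales/g_idx/wf tensors of the 2-bit format).
print(len(info["unexpected_keys"]), "checkpoint tensors unused")

# Weights LlamaForCausalLM expected but did not find; these were
# randomly initialized, which is why the generations are garbled.
print(len(info["missing_keys"]), "model weights freshly initialized")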
import torch
from transformers import pipeline

# Same checkpoint, this time loaded explicitly in bfloat16
model_id = "GreenBitAI/LLaMA-2-1.1B-2bit-groupsize8"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Generate a completion with default sampling settings
pipe("The key to life is")
/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning: The secret `HF_TOKEN` does not exist in your Colab secrets. To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session. You will be able to reuse this secret in all of your notebooks. Please note that authentication is recommended but still optional to access public models or datasets.
  warnings.warn(
Some weights of the model checkpoint at GreenBitAI/LLaMA-2-1.1B-2bit-groupsize8 were not used when initializing LlamaForCausalLM: {'model.layers.2.self_attn.o_proj.qzeros_zeros', 'model.layers.15.mlp.down_proj.qzeros_scales', 'model.layers.6.mlp.gate_proj.g_idx', 'model.layers.15.self_attn.q_proj.wf', 'model.layers.17.mlp.gate_proj.qzeros_scales', 'model.layers.19.mlp.down_proj.qstatistic', 'model.layers.3.self_attn.o_proj.qscales_zeros', 'model.layers.12.mlp.up_proj.qscales_zeros', 'model.layers.17.self_attn.o_proj.qscales_zeros', 'model.layers.7.self_attn.v_proj.wf', 'model.layers.8.mlp.up_proj.qzeros_zeros', 'model.layers.9.self_attn.q_proj.bias', 'model.layers.3.self_attn.q_proj.qweight', 'model.layers.9.mlp.gate_proj.qweight', 'model.layers.9.self_attn.k_proj.weight', 'model.layers.9.self_attn.o_proj.weight', 'model.layers.9.self_attn.q_proj.weight', 'model.layers.9.self_attn.v_proj.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Device set to use cpu
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
[{'generated_text': 'The key to life ismodulemodulemodulemodulemodulemodulemodulemodulemodulemodulemodulemodulemodulemodulemodulemodulemodule instruct instruct instruct'}]
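Both runs end the same way: a degenerate, repetitive completion. The loading warning above explains why. Stock LlamaForCausalLM does not recognize the checkpoint's custom 2-bit tensors (qweight, qzeros_*, qscales_*, g_idx, wf), so it discards them and randomly initializes the corresponding layers; the output is noise no matter which dtype or sampling settings are used. Running this checkpoint most likely requires GreenBitAI's own dequantization/inference code rather than the plain transformers pipeline. As a control, the identical pipeline call with an ordinary full-precision checkpoint produces coherent text, confirming the problem is the checkpoint format and not the calling code. This is a sketch under the assumption that any standard causal LM of similar size is a fair substitute; TinyLlama-1.1B-Chat is used here purely as an example:

import torch
from transformers import pipeline

# Control experiment: same pipeline usage, but with a standard
# full-precision checkpoint (the model choice is an assumption;
# any ordinary causal LM of similar size would do).
ctrl = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(ctrl("The key to life is", max_new_tokens=20)[0]["generated_text"])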