---
license: llama2
language:
  - en
library_name: transformers
datasets:
  - togethercomputer/llama-instruct
---

# LLaMA-2-7B-32K-Chat

## Model Description

LLaMA-2-7B-32K-Chat is an open-source, long-context chat model fine-tuned from Llama-2-7B-32K on high-quality instruction and chat data. We built Llama-2-7B-32K-Chat with less than 200 lines of Python script using Together API, and we also make the recipe fully available. We hope that this can enable everyone to fine-tune their own version of Llama-2-7B-32K — play with Together API and give us feedback!

Llama-2-7B-32K-Chat is fine-tuned on 19K single- and multi-round conversations generated from human instructions and Llama-2-70B-Chat outputs. The dataset is also released here.

## Inference

You can use the Together API to try out LLaMA-2-7B-32K-Chat for inference. The updated inference stack allows for efficient inference.

To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:

```bash
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```

You can use this model directly from the Hugging Face Model Hub or fine-tune it on your own data using the OpenChatKit.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K", trust_remote_code=True, torch_dtype=torch.float16)

input_context = "Your text here"
input_ids = tokenizer.encode(input_context, return_tensors="pt")
# do_sample=True is required for temperature to take effect
output = model.generate(input_ids, max_length=128, do_sample=True, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```

Alternatively, you can set `trust_remote_code=False` if you prefer not to use flash attention.

To chat with the model, the prompt is in the format of

```
[INST] Write a song about elephants [/INST]
```
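To build such prompts programmatically, a minimal helper like the following can wrap a single-turn user instruction in the format shown above (the `format_prompt` function is a hypothetical illustration, not part of the release):

```python
def format_prompt(instruction: str) -> str:
    # Wrap a single-turn user instruction in the [INST] ... [/INST] chat format.
    return f"[INST] {instruction} [/INST]"

prompt = format_prompt("Write a song about elephants")
print(prompt)  # [INST] Write a song about elephants [/INST]
```

The resulting string can then be passed to `tokenizer.encode` in place of `input_context` in the generation snippet above.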

## Limitations and Bias

As with all language models, LLaMA-2-7B-32K-Chat may generate incorrect or biased content. It's important to keep this in mind when using the model.

## Community

Join us on Together Discord