---
license: apache-2.0
pipeline_tag: text-generation
language:
  - en
  - he
tags:
  - instruction-tuned
base_model: dicta-il/dictalm2.0
inference: false
---

Model Card for DictaLM-2.0-Instruct

The DictaLM-2.0-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the DictaLM-2.0 generative model using a variety of conversation datasets.

For full details of this model, please read our release blog post.

This repository contains the 4-bit GPTQ-quantized version of the instruct-tuned chat model, DictaLM-2.0-Instruct.

You can view and access the full collection of base/instruct unquantized/quantized versions of DictaLM-2.0 here.

Instruction format

To leverage instruction fine-tuning, your prompt should be wrapped in [INST] and [/INST] tokens. The very first instruction should begin with the begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence (EOS) token id.

E.g.

text = """<s>[INST] 讗讬讝讛 专讜讟讘 讗讛讜讘 注诇讬讱? [/INST]
讟讜讘, 讗谞讬 讚讬 诪讞讘讘 讻诪讛 讟讬驻讜转 诪讬抓 诇讬诪讜谉 住讞讜讟 讟专讬. 讝讛 诪讜住讬祝 讘讚讬讜拽 讗转 讛讻诪讜转 讛谞讻讜谞讛 砖诇 讟注诐 讞诪爪诪抓 诇讻诇 诪讛 砖讗谞讬 诪讘砖诇 讘诪讟讘讞!</s>[INST] 讛讗诐 讬砖 诇讱 诪转讻讜谞讬诐 诇诪讬讜谞讝? [/INST]"
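For reference, here is a minimal sketch of how a multi-turn conversation could be assembled into this format by hand. The build_prompt helper below is a hypothetical illustration only; the tokenizer's chat template (described next) remains the recommended path, since it handles whitespace and special tokens exactly.

# Hypothetical helper illustrating the [INST]/[/INST] format described above; prefer apply_chat_template().
def build_prompt(turns):
    # turns: list of (user_message, assistant_reply) pairs; assistant_reply is None for the final, open turn.
    prompt = "<s>"  # the begin-of-sentence token appears only once, before the first instruction
    for user_message, assistant_reply in turns:
        prompt += f"[INST] {user_message} [/INST]"
        if assistant_reply is not None:
            prompt += f" {assistant_reply}</s>"  # each completed assistant turn ends with the EOS token
    return prompt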

This format is available as a chat template via the apply_chat_template() method:

Example Code

Running this code requires less than 5GB of GPU VRAM.
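Depending on your environment, loading a GPTQ checkpoint through Transformers may also require the optimum package and a GPTQ kernel backend (auto-gptq on older versions, gptqmodel on more recent ones). This is an assumption about your setup, not something the original card specifies:

pip install -U transformers optimum auto-gptq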

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("dicta-il/dictalm2.0-instruct-GPTQ", device_map=device)
tokenizer = AutoTokenizer.from_pretrained("dicta-il/dictalm2.0-instruct-GPTQ")

messages = [
    {"role": "user", "content": "讗讬讝讛 专讜讟讘 讗讛讜讘 注诇讬讱?"},
    {"role": "assistant", "content": "讟讜讘, 讗谞讬 讚讬 诪讞讘讘 讻诪讛 讟讬驻讜转 诪讬抓 诇讬诪讜谉 住讞讜讟 讟专讬. 讝讛 诪讜住讬祝 讘讚讬讜拽 讗转 讛讻诪讜转 讛谞讻讜谞讛 砖诇 讟注诐 讞诪爪诪抓 诇讻诇 诪讛 砖讗谞讬 诪讘砖诇 讘诪讟讘讞!"},
    {"role": "user", "content": "讛讗诐 讬砖 诇讱 诪转讻讜谞讬诐 诇诪讬讜谞讝?"}
]

# Format the conversation with the model's chat template and move the input ids to the target device
encoded = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)

# Sample up to 50 new tokens; the decoded output echoes the prompt followed by the model's reply
generated_ids = model.generate(encoded, max_new_tokens=50, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
# <s> [INST] 讗讬讝讛 专讜讟讘 讗讛讜讘 注诇讬讱? [/INST]
# 讟讜讘, 讗谞讬 讚讬 诪讞讘讘 讻诪讛 讟讬驻讜转 诪讬抓 诇讬诪讜谉 住讞讜讟 讟专讬. 讝讛 诪讜住讬祝 讘讚讬讜拽 讗转 讛讻诪讜转 讛谞讻讜谞讛 砖诇 讟注诐 讞诪爪诪抓 诇讻诇 诪讛 砖讗谞讬 诪讘砖诇 讘诪讟讘讞!</s>  [INST] 讛讗诐 讬砖 诇讱 诪转讻讜谞讬诐 诇诪讬讜谞讝? [/INST]
# 讘讟讞, 讛谞讛 诪转讻讜谉 拽诇 诪讗讜讚 诇诪讬讜谞讝 讘讬转讬:
# 
# 诪专讻讬讘讬诐:
# - 2 讘讬爪讬诐 讙讚讜诇讜转
# - 1 讻祝 讞专讚诇 讚讬讝'讜谉
# - 2 讻驻讜转
# (it stopped early because we set max_new_tokens=50)
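Note that batch_decode returns the whole sequence, prompt included. If you only want the newly generated reply, one option (a small convenience sketch, not part of the original example) is to decode just the tokens produced after the prompt:

# Decode only the tokens generated after the prompt (optional convenience).
prompt_length = encoded.shape[-1]
reply = tokenizer.decode(generated_ids[0, prompt_length:], skip_special_tokens=True)
print(reply)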

Model Architecture

DictaLM-2.0-Instruct follows the Zephyr-7B-beta recipe for fine-tuning an instruct model, with an extended instruct dataset for Hebrew.

Limitations

The DictaLM 2.0 Instruct model demonstrates that the base model can be fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs.

Citation

If you use this model, please cite:

[Will be added soon]