---
license: mit
datasets:
  - databricks/databricks-dolly-15k
language:
  - en
pipeline_tag: text-generation
tags:
  - dolly
  - dolly-v2
  - instruct
  - sharded
  - quantized
inference: false
---

# dolly-v2-7b: 8-bit sharded checkpoint


This is a sharded checkpoint (~2 GB shards) of the [databricks/dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b) model, quantized to 8-bit precision with `bitsandbytes`.

Refer to the original model card for all details. For more info on loading 8-bit models, refer to the example repo and/or the `transformers` 4.28.0 release notes.

- total model size is only ~7.5 GB!
- this enables low-RAM loading, i.e. Colab :) (see the shard-inspection snippet below)
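
If you want to verify the shard layout before downloading the ~7.5 GB of weights, here is a minimal sketch using `huggingface_hub` (installed as a dependency of `transformers`; the file names assume the default PyTorch sharded-checkpoint naming):

```python
from huggingface_hub import list_repo_files

# List the repo contents without downloading anything.
files = list_repo_files("ethzanalytics/dolly-v2-7b-sharded-8bit")

# Expect an index file plus several ~2 GB pytorch_model-*.bin shards.
print([f for f in files if f.startswith("pytorch_model")])
```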

## Basic Usage

Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work you need `transformers>=4.28.0` and `bitsandbytes>0.37.2`:

```sh
pip install -U -q transformers bitsandbytes accelerate
```
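
A quick way to confirm the installed versions meet these minimums:

```python
# Sanity-check the installed versions against the minimums above.
import bitsandbytes
import transformers

print("transformers:", transformers.__version__)  # needs >= 4.28.0
print("bitsandbytes:", bitsandbytes.__version__)  # needs > 0.37.2
```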

Load the model. As the weights are already serialized in 8-bit, you don't need to do anything special:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/dolly-v2-7b-sharded-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The quantization config is stored with the checkpoint, so no load_in_8bit
# flag is needed; device_map="auto" places the 8-bit weights on the GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
```
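
From here, generation works like any other causal LM. A minimal sketch (the instruction-style prompt below is an illustrative assumption; see the original databricks/dolly-v2-7b card for the exact prompt template and helper pipeline):

```python
# Hypothetical example prompt; Dolly expects an instruction-style prompt
# (check the original model card for the exact template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a sharded checkpoint is in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=96, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```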