---
license: apache-2.0
datasets:
- NeelNanda/pile-10k
language:
- en
tags:
- text-generation-inference
---
## Model Details

This model is an int8 version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b), quantized with SmoothQuant.
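For intuition, SmoothQuant rescales each activation channel and folds the inverse scale into the weights; the transform is exact in floating point but migrates activation outliers into the weights, where they are easier to quantize. The PyTorch sketch below is a toy illustration of that transform, not the code used to produce this model; `alpha` mirrors the 0.95 passed to the evaluation command further down.

```python
import torch

# Toy illustration of the SmoothQuant equivalence transform (not the
# actual quantization code behind this model). A per-channel scale
#   s_j = max|X[:, j]|^alpha / max|W[j, :]|^(1 - alpha)
# moves quantization difficulty from activations X into weights W while
# keeping the matmul result unchanged: X @ W == (X / s) @ (s[:, None] * W).
alpha = 0.95  # same alpha as in the evaluation command below

X = torch.randn(4, 8) * torch.logspace(0, 2, 8)  # activations with outlier channels
W = torch.randn(8, 16)                           # weights

s = X.abs().amax(dim=0).pow(alpha) / W.abs().amax(dim=1).pow(1 - alpha)

X_smooth = X / s           # activations become easier to quantize
W_smooth = s[:, None] * W  # the scale is folded into the weights

# The transform is lossless before quantization is applied:
assert torch.allclose(X @ W, X_smooth @ W_smooth, rtol=1e-4, atol=1e-4)
```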
## Uses

### Direct Use

#### Evaluate
Set up the environment by following https://github.com/intel/intel-extension-for-transformers/blob/main/examples/huggingface/pytorch/text-generation/quantization/README.md
```bash
# Installation
git clone https://github.com/intel/intel-extension-for-transformers.git

# install ITREX
cd intel-extension-for-transformers
git checkout d6e6e9f944a2b3f9cf7d8346a310233094885dda
pip install -r requirements.txt
pip install -v .

# install example requirements
cd examples/huggingface/pytorch/text-generation/quantization
pip install -r requirements.txt
pip install neural-compressor==2.5
pip install transformers==4.35.2
pip install torch==2.2.0+cpu --index-url https://download.pytorch.org/whl/cpu
pip install intel-extension-for-pytorch==2.2.0
```
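For orientation, the sketch below shows roughly how a SmoothQuant int8 post-training quantization can be expressed with the neural-compressor 2.x API. It is a simplified outline under stated assumptions (a minimal pile-10k calibration loader, default tuning settings), not the exact recipe that `run_generation.py` runs.

```python
# Rough sketch of a SmoothQuant int8 PTQ run with neural-compressor 2.x.
# Illustrative only; the calibration setup here is deliberately minimal.
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from neural_compressor import PostTrainingQuantConfig, quantization

model_name = "tiiuae/falcon-7b"
model = AutoModelForCausalLM.from_pretrained(model_name, torchscript=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Small calibration set drawn from the dataset listed in this card's metadata.
calib_ds = load_dataset("NeelNanda/pile-10k", split="train").select(range(64))

def collate(batch):
    enc = tokenizer([ex["text"] for ex in batch], truncation=True,
                    max_length=512, padding="max_length", return_tensors="pt")
    return enc["input_ids"]

calib_dataloader = DataLoader(calib_ds, batch_size=1, collate_fn=collate)

conf = PostTrainingQuantConfig(
    backend="ipex",  # int8 CPU inference via intel-extension-for-pytorch
    recipes={"smooth_quant": True, "smooth_quant_args": {"alpha": 0.95}},
)
q_model = quantization.fit(model, conf, calib_dataloader=calib_dataloader)
q_model.save("falcon-7b-sq-int8-inc")
```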
#### Evaluate the model

```bash
python run_generation.py \
    --model tiiuae/falcon-7b \
    --output_dir <git_clone_path>/falcon-7b-sq-int8-inc \
    --tasks lambada_openai \
    --int8 --accuracy --benchmark \
    --batch_size 1 \
    --alpha 0.95
```
## Results

| Metric | FP32 | INT8 (SmoothQuant) |
|---|---|---|
| Avg. | 0.6982 | 0.6992 |
| lambada_openai | 0.7467 | 0.7648 |
| hellaswag | 0.5778 | 0.5659 |
| winogrande | 0.6732 | 0.6717 |
| piqa | 0.7949 | 0.7943 |
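The Avg. row is the unweighted mean of the four task accuracies, which is easy to sanity-check:

```python
# Sanity check: the Avg. row is the unweighted mean of the four tasks
# (lambada_openai, hellaswag, winogrande, piqa).
fp32 = [0.7467, 0.5778, 0.6732, 0.7949]
int8 = [0.7648, 0.5659, 0.6717, 0.7943]

print(sum(fp32) / len(fp32))  # ~0.69815  -> reported as 0.6982
print(sum(int8) / len(int8))  # ~0.699175 -> reported as 0.6992
```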
## Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and its training data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

- [Intel Neural Compressor](https://github.com/intel/neural-compressor)
- [Intel Extension for Transformers](https://github.com/intel/intel-extension-for-transformers)