Quantized with these options:

```
python convert.py \
  -i /workspace/text-generation-webui/models/meta-llama_Llama-3.3-70B-Instruct \
  -o /workspace/temp/ \
  -m /workspace/measurement.json \
  -l 16000 \
  -cf /workspace/2.25 \
  -b 2.25 \
  -c /workspace/converted_rp.parquet \
  -r 100
```

The calibration dataset is a long-context, synthetic, multi-turn roleplay dataset, pre-formatted with the Llama instruct role headers inserted between each turn:
https://huggingface.co/datasets/openerotica/multi-turn-aware-quantization-llama-3.3-rp-test
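
For reference, below is a minimal sketch of how a multi-turn dataset like this could be flattened into a calibration parquet file, assuming the Llama-3 chat-template header tokens and a single `text` column. The helper function, the column name, and the placeholder conversation are illustrative assumptions, not the actual preprocessing used for the dataset linked above.

```python
# Sketch only: writes multi-turn conversations to a parquet file with the
# Llama-3 instruct role headers inserted between turns. Requires pandas
# and pyarrow. The "text" column name and example turns are assumptions.
import pandas as pd

def format_llama3_turns(turns):
    """Join (role, content) pairs using Llama-3 chat-template tokens."""
    out = "<|begin_of_text|>"
    for role, content in turns:
        out += f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"
    return out

# Placeholder conversations; in practice these would be the long-context
# synthetic roleplay transcripts.
conversations = [
    [("user", "Hello there."), ("assistant", "Hi! How can I help?")],
]

rows = [format_llama3_turns(c) for c in conversations]
pd.DataFrame({"text": rows}).to_parquet("converted_rp.parquet", index=False)
```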