cicdatopea committed · Commit 7cb82f1 · verified · Parent(s): 6e9a459

Update README.md
Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -10,7 +10,7 @@ datasets:
 
 ## Model Details
 
-This model is an int4 model with group_size 128 and symmetric quantization of [falcon3-10B](https://huggingface.co/OPEA/falcon3-10B-int4-sym-inc/blob/main/README.md) generated by [intel/auto-round](https://github.com/intel/auto-round).
+This model is an int4 model with group_size 128 and symmetric quantization of [Falcon3-10B-Base](https://huggingface.co/tiiuae/Falcon3-10B-Base) generated by [intel/auto-round](https://github.com/intel/auto-round).
 
 ## How To Use
 
@@ -126,7 +126,7 @@ Here is the sample command to generate the model.
 
 ```bash
 auto-round \
---model falcon3-10B \
+--model tiiuae/Falcon3-10B-Base \
 --device 0 \
 --group_size 128 \
 --nsamples 512 \
```
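
For reference, the sample command in the diff is truncated after `--nsamples 512 \`. A plausible complete invocation is sketched below; the `--bits`, `--format`, and `--output_dir` flags are assumptions drawn from typical intel/auto-round usage and are not part of this commit:

```bash
# Sketch of a full auto-round quantization run.
# --bits, --format, and --output_dir are assumed (typical auto-round usage);
# only the first five flags appear in this commit's diff.
auto-round \
  --model tiiuae/Falcon3-10B-Base \
  --device 0 \
  --group_size 128 \
  --nsamples 512 \
  --bits 4 \
  --format auto_round \
  --output_dir ./tmp_autoround
```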