fbaldassarri committed • Commit 41f0d63 • 1 Parent(s): 2c2cff1

Updating README

README.md CHANGED
@@ -12,7 +12,7 @@ This model has been quantized in INT4, group-size 128, and optimized for inferen
 
 ## 🚨 Disclaimers
 * This is an UNOFFICIAL quantization of the OFFICIAL model checkpoint released by iGenius.
-* This model is based also on the conversion made by [Sapienza NLP, Sapienza University of Rome](https://huggingface.co/sapienzanlp).
+* This model is based also on the conversion made for HF Transformers by [Sapienza NLP, Sapienza University of Rome](https://huggingface.co/sapienzanlp).
 * The original model was developed using LitGPT, therefore, the weights need to be converted before they can be used with Hugging Face transformers.
 
 ## 🚨 Terms and Conditions

@@ -21,8 +21,6 @@ This model has been quantized in INT4, group-size 128, and optimized for inferen
 ## 🚨 Reproducibility
 This model has been quantized using Intel [auto-round](https://github.com/intel/auto-round), based on [SignRound technique](https://arxiv.org/pdf/2309.05516v4).
 
-
-
 ```
 git clone https://github.com/fbaldassarri/model-conversion.git
 
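For intuition about what "INT4, group-size 128" means, groupwise quantization can be sketched in plain NumPy. This is a simplified symmetric round-to-nearest illustration, not auto-round's actual SignRound procedure, and the function names are made up for this sketch:

```python
import numpy as np

def quantize_int4_groupwise(weights: np.ndarray, group_size: int = 128):
    """Symmetric round-to-nearest INT4 quantization, one scale per group.

    Illustration only: auto-round additionally *learns* the rounding
    decisions (SignRound) rather than rounding to nearest.
    """
    groups = weights.reshape(-1, group_size)
    # One scale per group, mapping the group's max magnitude to the INT4 positive limit (7).
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    scales = np.maximum(scales, 1e-12)  # guard against all-zero groups
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    # Reverse the mapping: integer codes times per-group scales.
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.default_rng(0).normal(size=1024).astype(np.float32)
q, scales = quantize_int4_groupwise(w)
recon = dequantize(q, scales)
max_err = np.abs(recon - w).max()  # bounded by half a quantization step per group
```

Each group of 128 weights shares one floating-point scale, so storage drops to roughly 4 bits per weight plus one scale per group; methods like SignRound then tune the rounding to reduce this reconstruction error further.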