iproskurina committed
Commit cdc960d
Parent: 02a3f91

Update README.md

Files changed (1)
  1. README.md +16 -4
README.md CHANGED
@@ -82,13 +82,25 @@ The grouping size used for quantization is equal to 128.
 
 ### Install the necessary packages
 
+ Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
+
+ ```shell
+ pip3 install --upgrade transformers optimum
+ # If using PyTorch 2.1 + CUDA 12.x:
+ pip3 install --upgrade auto-gptq
+ # or, if using PyTorch 2.1 + CUDA 11.x:
+ pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
+ ```
+
+ If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise, if you have problems with the pre-built wheels, you should try building from source:
+
 ```shell
- pip install accelerate==0.26.1 datasets==2.16.1 dill==0.3.7 gekko==1.0.6 multiprocess==0.70.15 peft==0.7.1 rouge==1.0.1 sentencepiece==0.1.99
- git clone https://github.com/upunaprosk/AutoGPTQ
+ pip3 uninstall -y auto-gptq
+ git clone https://github.com/PanQiWei/AutoGPTQ
 cd AutoGPTQ
- pip install -v .
+ git checkout v0.5.1
+ pip3 install .
 ```
- Recommended transformers version: 4.35.2.
 
 ### You can then use the following code
 
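
The usage snippet that the closing heading points to lies outside this hunk. As a rough sketch only (not the README's own code), loading a GPTQ checkpoint with the Transformers + Optimum + AutoGPTQ stack installed above generally looks like the following; the repository ID is a hypothetical placeholder, to be replaced with the model ID from this model card:

```python
# Minimal sketch: load a GPTQ-quantized model via Transformers (>= 4.33) with Optimum/AutoGPTQ.
# The repo ID below is a placeholder, not the actual model from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/example-gptq-4bit"  # hypothetical; replace with the real repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers reads the GPTQ quantization config stored in the repo and uses AutoGPTQ kernels;
# device_map="auto" (which needs the accelerate package) places the weights on the available GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```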