TheBloke committed on
Commit 417a101 · 1 Parent(s): 1f3e287

Update README.md

Files changed (1)
  1. README.md +11 -7
README.md CHANGED
@@ -68,14 +68,18 @@ So please first update text-generation-webui to the latest version.
 
  ## How to download and use this model in text-generation-webui
 
- 1. Launch text-generation-webui with the following command-line arguments: `--autogptq --trust-remote-code`
+ 1. Launch text-generation-webui
  2. Click the **Model tab**.
- 3. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-Uncensored-Falcon-7B-GPTQ`.
- 4. Click **Download**.
- 5. Wait until it says it's finished downloading.
- 6. Click the **Refresh** icon next to **Model** in the top left.
- 7. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-Uncensored-Falcon-7B-GPTQ`.
- 8. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
+ 3. Untick **Autoload model**
+ 4. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-Uncensored-Falcon-7B-GPTQ`.
+ 5. Click **Download**.
+ 6. Wait until it says it's finished downloading.
+ 7. Click the **Refresh** icon next to **Model** in the top left.
+ 8. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-Uncensored-Falcon-7B-GPTQ`.
+ 9. Make sure **Loader** is set to **AutoGPTQ**. This model will not work with ExLlama or GPTQ-for-LLaMa.
+ 10. Tick **Trust Remote Code**, followed by **Save Settings**.
+ 11. Click **Reload**.
+ 12. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
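
If you prefer to fetch the weights outside the webui, the download in steps 4-6 can also be done from Python with `huggingface_hub`. This is a minimal sketch, not part of the README; the `local_dir` value is an assumed example and would normally point at text-generation-webui's `models/` folder:

```python
from huggingface_hub import snapshot_download

# Download every file in the repo (quantized weights, config, tokenizer).
# local_dir is an assumed example path, not something the README specifies.
snapshot_download(
    repo_id="TheBloke/WizardLM-Uncensored-Falcon-7B-GPTQ",
    local_dir="models/WizardLM-Uncensored-Falcon-7B-GPTQ",
)
```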
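Steps 9-11 boil down to loading the GPTQ weights with the AutoGPTQ loader while trusting Falcon's custom modelling code. The same idea can be sketched outside the webui; this is an illustrative example under assumptions (a CUDA device, safetensors weights in the repo, a recent `auto_gptq`), not the README's own code:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "TheBloke/WizardLM-Uncensored-Falcon-7B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# trust_remote_code=True mirrors the "Trust Remote Code" tick box: Falcon ships
# custom modelling code that Transformers must be allowed to execute.
# Depending on the auto_gptq version, you may also need to pass model_basename
# matching the .safetensors filename in the repo.
model = AutoGPTQForCausalLM.from_quantized(
    model_name,
    device="cuda:0",
    use_safetensors=True,
    trust_remote_code=True,
)

prompt = "Tell me about AI"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Whichever route is used, the constraint called out in step 9 still applies: these weights load through AutoGPTQ, not ExLlama or GPTQ-for-LLaMa.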
 
  ## Try it for free on Google Colab