Text Generation
Transformers
PyTorch
llama
guanaco
alpaca
conversational
text-generation-inference
JosephusCheung committed 677ae12 (1 parent: 8156219)

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -17,12 +17,10 @@ tags:
 
 ![](https://huggingface.co/JosephusCheung/Guanaco/resolve/main/StupidBanner.png)
 
-**You need a Colab Pro for the full version, or you can run the notebook on your own machine (8-bit does not perform well; fp16 should be used)**
+**You can now run this on Colab for free**
 
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ocSmoy3ba1EkYu7JWT1oCw9vz8qC2cMk#scrollTo=zLORi5OcPcIJ)
 
-Free T4 Colab demo, please check the 4-bit version: [JosephusCheung/GuanacoOnConsumerHardware](https://huggingface.co/JosephusCheung/GuanacoOnConsumerHardware).
-
 **It is highly recommended to use fp16 inference for this model, as 8-bit precision may significantly affect performance. If you require a more consumer-hardware-friendly version, please use the specialized quantized version, which requires only 5+ GB of VRAM:** [JosephusCheung/GuanacoOnConsumerHardware](https://huggingface.co/JosephusCheung/GuanacoOnConsumerHardware).
 
 **You are encouraged to use the latest version of transformers from GitHub.**
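Following the README's recommendations (fp16 inference, latest transformers from GitHub), a minimal sketch of loading and running the model might look like the code below. The model ID matches this repository; the Alpaca-style prompt format and the generation settings are illustrative assumptions, not taken from the model card.

```python
# Minimal sketch: fp16 inference with transformers, per the README's recommendation.
# The README suggests installing the latest transformers from GitHub:
#   pip install git+https://github.com/huggingface/transformers
# device_map="auto" additionally requires the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JosephusCheung/Guanaco"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 rather than 8-bit, which the README warns against
    device_map="auto",          # place weights on the available GPU(s)
)

# Assumed Alpaca-style prompt; check the model card for the exact format.
prompt = "### Instruction:\nExplain what a guanaco is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```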