pcuenq (HF staff) committed
Commit 979c450
1 parent: 9f16e66

Fix link to blog post

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -56,7 +56,7 @@ for seq in sequences:
 
 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
 
-For fast inference with Falcon, check-out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost]((https://huggingface.co/blog/falcon).
+For fast inference with Falcon, check-out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
 
 You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
 
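
The hunk header ends with `for seq in sequences:`, the tail of the README's quick-start snippet that the changed paragraph follows. For reference only, here is a minimal sketch of that kind of `transformers` pipeline call for Falcon-7B-Instruct; the model id, prompt, and generation parameters are illustrative assumptions, not text from this commit:

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"  # assumed Hub repo id for Falcon-7B-Instruct

tokenizer = AutoTokenizer.from_pretrained(model)

# Falcon shipped custom modelling code at the time, hence trust_remote_code=True;
# device_map="auto" needs `accelerate` installed, and PyTorch 2.0 is required.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

sequences = pipeline(
    "Write a haiku about open-source language models.",  # illustrative prompt
    max_new_tokens=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

For serving rather than one-off generation, the linked Text Generation Inference server and the blog post referenced by the fixed link cover the faster deployment path.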