Text2Text Generation
Transformers
PyTorch
t5
text-generation-inference
Inference Endpoints
soujanyaporia committed on
Commit 801c157 • 1 Parent(s): fdd9882

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -6,6 +6,8 @@ datasets:
 
  ## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines
 
+ 📣 We developed Flacuna by fine-tuning Vicuna-13B on the Flan collection. Flacuna is better than Vicuna at problem-solving. Access the model here: https://huggingface.co/declare-lab/flacuna-13b-v1.0.
+
  📣 Curious to know the performance of 🍮 🦙 **Flan-Alpaca** on the large-scale LLM evaluation benchmark, **InstructEval**? Read our paper: [https://arxiv.org/pdf/2306.04757.pdf](https://arxiv.org/pdf/2306.04757.pdf). We evaluated more than 10 open-source instruction-tuned LLMs from various LLM families, including Pythia, LLaMA, T5, UL2, OPT, and Mosaic. Code and datasets: [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval)
 
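For context, the model card this diff belongs to is tagged Text2Text Generation / Transformers / PyTorch / t5, so the checkpoint can be loaded with the standard `transformers` text2text-generation pipeline. The sketch below is a minimal, hypothetical usage example; the model ID `declare-lab/flan-alpaca-xl` and the prompt are assumptions for illustration and are not stated in this commit — substitute the actual repository ID.

```python
# Minimal sketch: loading a Flan-Alpaca-style T5 checkpoint with the transformers pipeline.
# The model ID below is an assumption for illustration; replace it with the real repo ID.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",             # task matching the model card tag
    model="declare-lab/flan-alpaca-xl", # assumed model ID, not stated in this diff
)

prompt = "Write an email about an alpaca that likes flan."
outputs = generator(prompt, max_length=128, do_sample=True)
print(outputs[0]["generated_text"])
```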