afaji committed on
Commit 95f0007
1 Parent(s): eb3734e

auto-rename

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -24,22 +24,22 @@ widget:
  should probably proofread and complete it, then remove this comment. -->
 
  <p align="center" width="100%">
- <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini/main/images/LaMnin.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
+ <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/Lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
  </p>
 
  # LaMini-Neo-125M
 
  [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]()
 
- This model is one of our LaMini model series in paper "[LaMini: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini)".
- This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on [LaMini dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini/).
- You can view other LaMini model series as follow. Note that not all models are performing as well. Models with ✩ are those with the best overall performance given their size/architecture. More details can be seen in our paper.
+ This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)".
+ This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
+ You can view other LaMini-LM model series as follow. Note that not all models are performing as well. Models with ✩ are those with the best overall performance given their size/architecture. More details can be seen in our paper.
 
  <table>
  <thead>
  <tr>
  <th>Base model</th>
- <th colspan="4">LaMini series (#parameters)</th>
+ <th colspan="4">LaMini-LM series (#parameters)</th>
  </tr>
  </thead>
  <tbody>
@@ -121,10 +121,10 @@ print("Response": generated_text)
  ## Training Procedure
 
  <p align="center" width="100%">
- <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
+ <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
  </p>
 
- We initialize with [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) and fine-tune it on our [LaMini dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 125M.
+ We initialize with [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 125M.
 
  ### Training Hyperparameters
 
@@ -142,7 +142,7 @@ More information needed
 
  ```bibtex
  @misc{lamini,
- title={LaMini: A Diverse Herd of Distilled Models from Large-Scale Instructions},
+ title={LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
  author={},
  year={2023},
  publisher = {GitHub},
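
The context line of the second hunk, `print("Response": generated_text)`, indicates that the card's unchanged usage section ends by printing the generated output (the colon there looks like a typo for a comma in the card itself). For orientation, here is a minimal sketch, not taken from this commit, of how such a usage example typically looks with the `transformers` text-generation pipeline; the repository id `MBZUAI/LaMini-Neo-125M`, the prompt, and the generation arguments are assumptions for illustration only.

```python
# Minimal usage sketch (assumptions, not from this commit): load the fine-tuned
# checkpoint with the Hugging Face text-generation pipeline and print a response.
from transformers import pipeline

checkpoint = "MBZUAI/LaMini-Neo-125M"  # assumed repository id for this model card
generator = pipeline("text-generation", model=checkpoint)

instruction = "How can I write a compelling cover letter?"  # example prompt (assumed)
output = generator(instruction, max_length=512, do_sample=True)
generated_text = output[0]["generated_text"]
print("Response", generated_text)  # corrected form of the card's print line
```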