RefuelLLM-2-small, aka Llama-3-Refueled, is a Llama3-8B base model instruction-tuned on a corpus of 2750+ datasets, spanning tasks such as classification, reading comprehension, structured attribute extraction and entity resolution. We're excited to open-source the model for the community to build on top of.

* More details about the [RefuelLLM-2 family of models](https://www.refuel.ai/blog-posts/announcing-refuel-llm-2)
* You can also try out the models in our [LLM playground](https://labs.refuel.ai/playground)

**Model developers** - Refuel AI
**Output** - Text only.

**Architecture** - Llama-3-Refueled is built on top of Llama-3-8B-instruct, an auto-regressive language model that uses an optimized transformer architecture.
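
If useful, the architecture hyperparameters can be read off the hosted config. A minimal sketch (assumes the standard `LlamaConfig` fields; the values printed are whatever the repository's `config.json` reports):

```python
>>> from transformers import AutoConfig
>>> config = AutoConfig.from_pretrained("refuelai/Llama-3-Refueled")
>>> print(config.model_type)  # "llama", since the model is Llama-3-8B based
>>> print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
```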

**Release Date** - May 8, 2024.

## How to use

This repository contains weights for Llama-3-Refueled that are compatible with the HuggingFace Transformers library. See the snippet below for example usage:

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> model_id = "refuelai/Llama-3-Refueled"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

>>> messages = [{"role": "user", "content": "Is this comment toxic or non-toxic: RefuelLLM is the new way to label text data!"}]
>>> inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

>>> outputs = model.generate(inputs, max_new_tokens=20)
>>> print(tokenizer.decode(outputs[0]))
```

## Training Data

The model was trained on over 4 billion tokens, spanning 2750+ NLP tasks. Our training collection consists mainly of:
1. Human-annotated datasets like Flan, Task Source, and the Aya collection
2. Synthetic datasets like OpenOrca, OpenHermes and WizardLM
3. Proprietary datasets developed or licensed by Refuel AI

## Benchmarks

In this section, we report the results for Refuel models on our benchmark of labeling tasks.

<table>
<tr><td>Provider</td><td>Model</td><td>Overall</td><td>Classification</td><td>Reading Comprehension</td><td>Structure Extraction</td><td>Entity Matching</td><td></td></tr>
<tr><td>Refuel</td><td>RefuelLLM-2</td><td>83.82%</td><td>84.94%</td><td>76.03%</td><td>88.16%</td><td>92.00%</td><td></td></tr>
<tr><td>OpenAI</td><td>GPT-4-Turbo</td><td>80.88%</td><td>81.77%</td><td>72.08%</td><td>84.79%</td><td>97.20%</td><td></td></tr>
<tr><td>Refuel</td><td>RefuelLLM-2-small (Llama-3-Refueled)</td><td>79.67%</td><td>81.72%</td><td>70.04%</td><td>84.28%</td><td>92.00%</td><td></td></tr>
<tr><td>Anthropic</td><td>Claude-3-Opus</td><td>79.19%</td><td>82.49%</td><td>67.30%</td><td>88.25%</td><td>94.96%</td><td></td></tr>
<tr><td>Meta</td><td>Llama3-70B-Instruct</td><td>78.20%</td><td>79.38%</td><td>66.03%</td><td>85.96%</td><td>94.13%</td><td></td></tr>
<tr><td>Google</td><td>Gemini-1.5-Pro</td><td>74.59%</td><td>73.52%</td><td>60.67%</td><td>84.27%</td><td>98.48%</td><td></td></tr>
</table>

## Limitations

Llama-3-Refueled does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model reliably respect guardrails, allowing for deployment in environments requiring moderated outputs.
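
In the meantime, one stopgap for such environments is to screen generations with an external moderation step before returning them. A minimal sketch (hypothetical; the blocklist and `is_flagged` check are placeholders for whatever moderation classifier or service you actually deploy):

```python
# Hypothetical post-generation moderation gate. Nothing here ships with
# Llama-3-Refueled; the blocklist stands in for a real moderation model or API.
BLOCKLIST = {"example-banned-term"}

def is_flagged(text: str) -> bool:
    """Toy check: flag output containing any blocklisted term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def moderated_generate(generate_fn, prompt: str) -> str:
    """Run any generation callable (e.g., a wrapper around model.generate),
    then withhold the output if the moderation check flags it."""
    output = generate_fn(prompt)
    return "[output withheld by moderation policy]" if is_flagged(output) else output
```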