We are thrilled to introduce **Alpha-Instruct**, our latest language model.

A key aspect of Alpha-Instruct's development is our **community-based approach**. We draw inspiration and ideas from various communities, shaping our datasets, methodologies, and the model itself. In return, we are committed to sharing our insights with the community, providing detailed information on the data, methods, and models used in Alpha-Instruct's creation.

Alpha-Instruct has achieved outstanding performance on the **LogicKor benchmark, scoring an impressive 6.62**. Remarkably, this performance rivals that of 70B models, showcasing the efficiency and power of our 8B model. This achievement highlights Alpha-Instruct's advanced computational and reasoning skills, making it a leading choice for diverse and demanding language tasks.

**For more information and technical details about Alpha-Instruct, stay tuned to our updates and visit our [website](https://allganize-alpha.github.io/) (Soon).**

Results in [LogicKor](https://github.com/StableFluffy/LogicKor)* are as follows:

| Model | Single turn | Multi turn | Overall |
| --- | --- | --- | --- |
| MLP-KTLim/llama-3-Korean-Bllossom-8B | 4.238 | 3.404 | 3.821 |
| Alpha-Ko-Evo | 5.143 | 5.238 | 5.190 |
| Alpha-Ko-Instruct (alt) | 7.095 | 6.571 | **6.833** |
| Alpha-Ko-Instruct | **7.143** | 6.065 | 6.620 |
| Alpha-Ko-Instruct-marlin (4bit) | 6.857 | 5.738 | 6.298 |

*Self-reported (default settings with the 'alpha' template, mean of 3 runs).

Results in KoBEST (accuracy, num_shot=5) are as follows:

| Task | beomi/Llama-3-Open-Ko-8B-Instruct | maywell/Llama-3-Ko-8B-Instruct | Alpha-Ko-Evo | Alpha-Ko-Instruct |
| --- | --- | --- | --- | --- |
| kobest overall | 0.6220 | 0.6852 | 0.7229 | 0.7055 |
| kobest_boolq | 0.6254 | 0.7208 | 0.8547 | 0.8369 |
| kobest_sentineg | 0.8388 | 0.9194 | 0.9471 | 0.9244 |
| kobest_wic | 0.5738 | 0.6040 | 0.6095 | 0.5730 |

* For reference, 'merged' models are chosen.

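The KoBEST scores above are plain accuracy at 5 shots. A minimal sketch of scoring a model on the same tasks with EleutherAI's lm-evaluation-harness (`pip install lm-eval`) is shown below; the harness choice and the repo id are illustrative assumptions, not the authors' confirmed setup.

```python
# Hedged sketch: KoBEST accuracy at num_shot=5 via lm-evaluation-harness (v0.4+).
# The repo id below is an assumption; substitute the checkpoint you want to score.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=allganize/Llama-3-Alpha-Ko-8B-Instruct,dtype=bfloat16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag",
           "kobest_sentineg", "kobest_wic"],
    num_fewshot=5,
)

# In harness v0.4 the accuracy metric is reported under keys like "acc,none".
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"))
```
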
## How to use
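Below is a minimal usage sketch with 🤗 Transformers, assuming the model ships as a standard Llama-3-style chat checkpoint; the repo id is an assumption for illustration and should be checked against the model card.

```python
# Hedged sketch: chat-style generation with Transformers.
# The repo id is an assumption; replace it with the id on the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allganize/Llama-3-Alpha-Ko-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself in Korean."},
]

# Llama-3-style checkpoints ship a chat template; apply it before generating.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```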