---
license: apache-2.0
base_model: yanolja/KoSOLAR-10.7B-v0.2
tags:
- generated_from_trainer
model-index:
- name: yanolja/Bookworm-10.7B-v0.4-DPO
  results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# Bookworm-10.7B-v0.4-DPO

## Join Our Community on Discord!

If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. Note that Korean is the primary language used on this server. The LLM landscape is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated quickly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).

## About the Model

Bookworm-10.7B-v0.4-DPO is an instruction-tuned model based on [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2). To be updated.
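
Until full details are published, the model can be loaded like any causal LM on the Hugging Face Hub. The snippet below is a minimal usage sketch, assuming the standard `transformers` API; the prompt string is illustrative only, not an official template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub.
model_name = "yanolja/Bookworm-10.7B-v0.4-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a short completion for an illustrative Korean prompt.
prompt = "대한민국의 수도는"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that a 10.7B-parameter model requires substantial memory; loading in a reduced precision such as `torch_dtype=torch.float16` on a GPU is a common choice.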

### Our Dedicated Team

#### Research
- Myeongho Jeong
- Seungtaek Choi
- Seungduk Kim

#### Engineering
- Sanghoon Han
- Suhyun Kang
- Geon Kim
- Rifqi Alfi

#### Product Management
- Bokyung Huh