Update README.md
README.md CHANGED
```diff
@@ -25,11 +25,11 @@ pipeline_tag: question-answering
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
 - **Developed by:** SJ Lee, YJ Kim, SH Kim
-- **
+- **Activity with:** GMLB 2024, Gemma Sprint
 - **Language(s) (NLP):** Korean
 - **Finetuned from model [optional]:** google/gemma2-2b
 
-### Model Sources
+### Model Sources
 
 <!-- Provide the basic links for the model. -->
 
@@ -46,8 +46,8 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 [More Information Needed]
 
-###
-
+### Data Resource
+https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=71776
 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 
 [More Information Needed]
@@ -77,9 +77,9 @@ Use the code below to get started with the model.
 [More Information Needed]
 
 ## Training Details
-
+Gemma2 2B + Lora Finetuning
 ### Training Data
-
+train.csv
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
 [More Information Needed]
@@ -88,11 +88,6 @@ Use the code below to get started with the model.
 
 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
-#### Preprocessing [optional]
-
-[More Information Needed]
-
-
 #### Training Hyperparameters
 
 - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
```
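The card's "Use the code below to get started with the model" section is left as [More Information Needed] by this commit. A minimal sketch of what getting started might look like, assuming the uploaded weights are a LoRA adapter on top of the google/gemma2-2b base model named in the card; the adapter repo id is a placeholder, and the canonical Hub id for the base model may be spelled differently:

```python
# Minimal sketch, not the authors' published usage code.
# Assumes the repo hosts a PEFT/LoRA adapter for the base model
# named in the card; "<adapter-repo-id>" is a placeholder.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma2-2b"  # base model id as written in the card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "<adapter-repo-id>")

# Korean question answering, matching the card's pipeline_tag.
prompt = "질문: 대한민국의 수도는 어디인가요?\n답변:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the repo instead holds fully merged weights rather than an adapter, loading with `AutoModelForCausalLM.from_pretrained` on the repo id alone would be enough.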
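The commit fills "## Training Details" with only "Gemma2 2B + Lora Finetuning" and names train.csv under "### Training Data", while the training regime stays [More Information Needed]. A sketch of what that setup typically looks like with peft and transformers; the CSV column names, LoRA hyperparameters, and bf16 regime below are assumptions, not values from the card:

```python
# Illustrative LoRA fine-tuning setup, not the authors' script.
# Assumptions: train.csv holds Korean QA pairs in "question"/"answer"
# columns, LoRA targets the attention projections, and training runs
# in bf16 mixed precision.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "google/gemma2-2b"  # base model id as written in the card
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Wrap the base model with a small LoRA adapter (hyperparameters assumed).
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# train.csv is the file named under "Training Data"; the column names
# are placeholders for whatever the QA pairs actually use.
dataset = load_dataset("csv", data_files="train.csv")["train"]

def to_features(row):
    text = f"질문: {row['question']}\n답변: {row['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma2-2b-korean-qa-lora",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```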