Jeju Dialect Translation Model
- Jeju dialect -> Standard Korean
- Made by Team 3 of the Goorm NLP Course, Cohort 3!!
- GitHub link : https://github.com/Goormnlpteam3/JeBERT
1. Seq2Seq Transformer Model
- Encoder : BertConfig
- Decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
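The encoder-decoder setup above can be sketched with Hugging Face Transformers, pairing two BERT-style configs into one Seq2Seq model. The vocabulary size here is illustrative, not the authors' exact value:

```python
# Sketch of the Seq2Seq Transformer described above: both the encoder and the
# decoder use a BertConfig; the decoder additionally needs cross-attention.
# vocab_size is an assumed placeholder, not the model's actual setting.
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel

encoder_config = BertConfig(vocab_size=30000)          # encoder : BertConfig
decoder_config = BertConfig(vocab_size=30000,
                            is_decoder=True,
                            add_cross_attention=True)  # decoder : BertConfig

config = EncoderDecoderConfig.from_encoder_decoder_configs(
    encoder_config, decoder_config
)
model = EncoderDecoderModel(config=config)
```

A WordPiece tokenizer (e.g. `BertTokenizerFast`, which uses WordPiece) would then feed both sides of the model.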
2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters)
3. Hyper Parameters
- Epochs : 10 (best at epoch 8)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
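The warm-up ratio above is typically applied as a linear warm-up over the first 10% of training steps. A minimal sketch, assuming a linear decay afterwards (a common choice; the card does not specify the decay schedule) and an illustrative step count:

```python
# Illustrative learning-rate schedule for Learning Rate 5e-5, Warm up Ratio 0.1:
# linear warm-up over the first 10% of steps, then linear decay to zero.
def lr_at_step(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # ramp up linearly from 0 to base_lr
        return base_lr * step / max(1, warmup_steps)
    # decay linearly from base_lr back to 0 (assumed, not stated in the card)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

total = 1000  # hypothetical total step count, not derived from the real dataset
print(lr_at_step(50, total))   # halfway through warm-up -> 2.5e-05
print(lr_at_step(100, total))  # end of warm-up -> 5e-05
```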
4. BLEU Score
- Jit + AI HUB (+ arae-a characters) Dataset : 79.0
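For reference, BLEU (the metric reported above) is the geometric mean of modified 1- to 4-gram precisions times a brevity penalty. Real evaluations normally use a library such as sacrebleu; this minimal sketch only shows the formula on whitespace-tokenized text:

```python
# Minimal corpus-level BLEU sketch: clipped n-gram precision (n = 1..4),
# geometric mean, and a brevity penalty. For illustration only.
import math
from collections import Counter

def bleu(candidates, references, max_n=4):
    match = [0] * max_n   # clipped n-gram matches per order
    total = [0] * max_n   # candidate n-gram counts per order
    cand_len = ref_len = 0
    for cand, ref in zip(candidates, references):
        c, r = cand.split(), ref.split()
        cand_len += len(c)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            c_ngrams = Counter(tuple(c[i:i + n]) for i in range(len(c) - n + 1))
            r_ngrams = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
            match[n - 1] += sum((c_ngrams & r_ngrams).values())  # clipping
            total[n - 1] += max(0, len(c) - n + 1)
    if 0 in match:
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    bp = 1.0 if cand_len > ref_len else math.exp(1 - ref_len / max(1, cand_len))
    return 100 * bp * math.exp(log_prec)

print(bleu(["the cat sat on the mat"], ["the cat sat on the mat"]))  # 100.0
```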
CREDIT
- 주형준 : wngudwns2798@gmail.com
- 강가람 : 1st9aram@gmail.com
- 고광연 : rhfprl11@gmail.com
- 김수연 : s01090445778@gmail.com
- 이원경 : hjtwin2@gmail.com
- 조성은 : eun102476@gmail.com