
devngho/ko_edu_classifier_v2_LaBSE

์ด ๋ชจ๋ธ์€ sentence-transformers/LaBSE์— classifier๋ฅผ ์ถ”๊ฐ€ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. HuggingFaceFW/fineweb-edu-classifier์˜ ํ•œ๊ตญ์–ด ๋ฒ„์ „์„ ๋ชฉํ‘œ๋กœ, ํ•œ๊ตญ์–ด ์›น ํŽ˜์ด์ง€์˜ ๊ต์œก์„ฑ ์ ์ˆ˜๋ฅผ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์—๋Š” blueapple8259/c4-ko-cleaned-2์—์„œ ์ถ”์ถœํ•œ 500k ์ƒ˜ํ”Œ์„ Qwen/Qwen2.5-32B-Instruct๋กœ ํ‰๊ฐ€ํ•œ devngho/ko_llm_annotations ๋ฐ์ดํ„ฐ์…‹์ด ์‚ฌ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). ⚡

์ƒ์„ธ

  • ์ œ์ž‘: devngho
  • ์–ธ์–ด: ko
  • ๋ผ์ด์„ ์Šค: mit
  • ๊ธฐ๋ฐ˜ ๋ชจ๋ธ: sentence-transformers/LaBSE

ํ•™์Šต ์ƒ์„ธ

  • learning_rate: 3e-4 (cosine)
  • warmup_ratio: 0.1
  • batch_size: 512
  • optimizer: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
  • duration: 3h 21m
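
The schedule above (peak 3e-4, cosine decay, 10% linear warmup) can be written as a plain function — an illustrative reconstruction, not the actual training code:

```python
import math


def lr_at(step: int, total_steps: int, peak_lr: float = 3e-4,
          warmup_ratio: float = 0.1) -> float:
    """Cosine-decay learning rate with linear warmup (illustrative
    reconstruction of the schedule described above)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 to peak_lr over the warmup phase.
        return peak_lr * step / max(1, warmup_steps)
    # Cosine decay from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```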

Training hardware

TPU v4-8

Performance

Validation Report:
              precision    recall  f1-score   support

           0       0.51      0.28      0.36       198
           1       0.69      0.50      0.58      1553
           2       0.39      0.69      0.49      1159
           3       0.54      0.42      0.47       967
           4       0.54      0.10      0.16       219

    accuracy                           0.50      4096
   macro avg       0.53      0.40      0.42      4096
weighted avg       0.55      0.50      0.50      4096

Confusion Matrix:
[[ 56 108  34   0   0]
 [ 51 782 677  43   0]
 [  2 202 797 155   3]
 [  0  37 508 407  15]
 [  0   1  53 144  21]]

ํ•œ๊ตญ์–ด ์ž„๋ฒ ๋”ฉ์˜ ํ•œ๊ณ„์™€ qwen2.5 32b ๋ชจ๋ธ์˜ ํ‰๊ฐ€ ํ•œ๊ณ„๋กœ ์„ฑ๋Šฅ์ด ๋‚ฎ์€ ๊ฒƒ์œผ๋กœ ๋ณด์ž…๋‹ˆ๋‹ค. 3 ์ด์ƒ๊ณผ ๋ฏธ๋งŒ์œผ๋กœ ๊ตฌ๋ถ„ํ•  ๋•Œ f1 score๋Š” ์•ฝ 0.59์ž…๋‹ˆ๋‹ค.


Model size: 471M params (BF16, Safetensors)