SetFit with mini1013/master_domain

This is a SetFit model that can be used for text classification. It uses mini1013/master_domain as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
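
The two phases map directly onto the SetFit Trainer API. The sketch below is illustrative rather than the exact training script: the dataset contents are placeholders, and only the batch size and epoch counts listed under Training Hyperparameters further down are shown.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot dataset with one "text" and one "label" column.
train_dataset = Dataset.from_dict({
    "text": ["product title for class 0", "product title for class 1"],
    "label": [0.0, 1.0],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")

# Phase 1 (contrastive fine-tuning of the embedding body) and phase 2
# (fitting the LogisticRegression head) both run inside trainer.train().
args = TrainingArguments(batch_size=(256, 256), num_epochs=(30, 30))
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```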

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: mini1013/master_domain (fine-tuned from klue/roberta-base)
  • Classification head: a LogisticRegression instance
  • Model size: 111M parameters (F32)
  • Number of Classes: 3

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)
  • Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts (https://huggingface.co/blog/setfit)

Model Labels

Examples from the training set, three per label (verbatim Korean product titles ending with their category path):

Label 2.0:
  • '셀렉스 매일 마시는 프로틴 12l 160ml × 48개 출산/육아 > 분유 > 특수분유'
  • '일동후디스 초유밀플러스2단계 1캔(1gx90포)) 출산/육아 > 분유 > 특수분유'
  • 'gvp 스마트폰 카드포켓 스마트링블랙 출산/육아 > 분유 > 특수분유'
Label 0.0:
  • '매일유업 앱솔루트 명작 2FL 액상 2단계 240ml 24개 x2개 출산/육아 > 분유 > 국내분유'
  • '매일유업 앱솔루트 센서티브 1단계 900g x 1개 [라면] 봉지라면_얼큰한 너구리 120g 20개 출산/육아 > 분유 > 국내분유'
  • '매일유업 앱솔루트 센서티브 1단계 900g x 1개 [음료] 우유두유_삼육검은콩앤칼슘파우치190ml40팩 출산/육아 > 분유 > 국내분유'
Label 1.0:
  • '힙 압타밀 HA 뢰벤짠 밀라산 홀레 퇴퍼 베바 세레락 프레 2단계 콤비오틱 무전분 산양 [퇴퍼] Töpfer_퇴퍼 락타나 600g (최대8통)_[1통] xPRE Topfer 출산/육아 > 분유 > 수입분유'
  • '뉴트리시아 압타밀 프로누트라 어드밴스 2단계 800g [음료] 탄산음료_데미소다피치250ml30캔 출산/육아 > 분유 > 수입분유'
  • '퇴퍼 홀레 뢰벤짠 힙 노발락 압타밀 무전분 AR 킨더밀쉬 압타밀 오가닉(New)_오가닉 2 800g 1통_◆dm4056631003169_1◆ 출산/육아 > 분유 > 수입분유'
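
The numeric labels correspond to the category paths visible in the examples. A hypothetical mapping for post-processing predictions (the English glosses are ours, not part of the model):

```python
# Hypothetical human-readable names for the numeric labels, inferred from the
# category paths ("출산/육아 > 분유 > ...") in the examples above.
LABEL_NAMES = {
    0.0: "국내분유 (domestic formula)",
    1.0: "수입분유 (imported formula)",
    2.0: "특수분유 (special-purpose formula)",
}

def label_name(pred: float) -> str:
    return LABEL_NAMES.get(float(pred), "unknown")
```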

Evaluation

Metrics

| Label | Accuracy |
|:------|:---------|
| all   | 1.0      |
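
The accuracy above can be recomputed with a sketch like the following, assuming a held-out test split (the single example here is a placeholder taken from the inference snippet below):

```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("mini1013/master_cate_bc6")

# Placeholder held-out split; substitute the real evaluation data.
test_texts = ["남양유업 아이엠마더 액상 3단계 240ml x96개  출산/육아 > 분유 > 국내분유"]
test_labels = [0.0]

preds = model.predict(test_texts)
print(accuracy_score(test_labels, preds))
```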

Uses

Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_bc6")
# Run inference
preds = model("남양유업 아이엠마더 액상 3단계 240ml x96개  출산/육아 > 분유 > 국내분유")
```
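
For batched inference, or to inspect the per-class probabilities produced by the LogisticRegression head, something along these lines should work (a sketch; the input titles are taken from the label examples above):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_bc6")

# Batched inference over several product titles.
texts = [
    "매일유업 앱솔루트 명작 2FL 액상 2단계 240ml 24개 x2개 출산/육아 > 분유 > 국내분유",
    "뉴트리시아 압타밀 프로누트라 어드밴스 2단계 800g [음료] 탄산음료_데미소다피치250ml30캔 출산/육아 > 분유 > 수입분유",
]
preds = model.predict(texts)        # numeric labels, e.g. 0.0 / 1.0 / 2.0
probs = model.predict_proba(texts)  # per-class probabilities from the LogisticRegression head
```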

Training Details

Training Set Metrics

| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 7   | 14.9429 | 30  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0.0   | 70                    |
| 1.0   | 70                    |
| 2.0   | 70                    |
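
The word-count statistics can be recomputed from the raw training texts with a simple whitespace split (a sketch; train_texts is a placeholder for the 210 training titles):

```python
import statistics

# Placeholder for the 210 training titles (70 per label).
train_texts = ["셀렉스 매일 마시는 프로틴 12l 160ml × 48개 출산/육아 > 분유 > 특수분유"]

word_counts = [len(text.split()) for text in train_texts]
print(min(word_counts), statistics.median(word_counts), max(word_counts))
```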

Training Hyperparameters

  • batch_size: (256, 256)
  • num_epochs: (30, 30)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 50
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
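
Expressed in code, the hyperparameters above correspond to a SetFit TrainingArguments instance along these lines (a sketch against setfit 1.1.0; the loss class comes from sentence-transformers):

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(256, 256),              # (embedding phase, classifier phase)
    num_epochs=(30, 30),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=50,
    body_learning_rate=(2e-05, 1e-05),  # (body in phase 1, body in phase 2)
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)
```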

Training Results

| Epoch   | Step | Training Loss | Validation Loss |
|:--------|:-----|:--------------|:----------------|
| 0.0238  | 1    | 0.4943        | -               |
| 1.1905  | 50   | 0.4806        | -               |
| 2.3810  | 100  | 0.1671        | -               |
| 3.5714  | 150  | 0.0003        | -               |
| 4.7619  | 200  | 0.0           | -               |
| 5.9524  | 250  | 0.0           | -               |
| 7.1429  | 300  | 0.0           | -               |
| 8.3333  | 350  | 0.0           | -               |
| 9.5238  | 400  | 0.0           | -               |
| 10.7143 | 450  | 0.0           | -               |
| 11.9048 | 500  | 0.0           | -               |
| 13.0952 | 550  | 0.0           | -               |
| 14.2857 | 600  | 0.0           | -               |
| 15.4762 | 650  | 0.0           | -               |
| 16.6667 | 700  | 0.0           | -               |
| 17.8571 | 750  | 0.0           | -               |
| 19.0476 | 800  | 0.0           | -               |
| 20.2381 | 850  | 0.0           | -               |
| 21.4286 | 900  | 0.0           | -               |
| 22.6190 | 950  | 0.0           | -               |
| 23.8095 | 1000 | 0.0           | -               |
| 25.0    | 1050 | 0.0           | -               |
| 26.1905 | 1100 | 0.0           | -               |
| 27.3810 | 1150 | 0.0           | -               |
| 28.5714 | 1200 | 0.0           | -               |
| 29.7619 | 1250 | 0.0           | -               |

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0
  • Sentence Transformers: 3.3.1
  • Transformers: 4.44.2
  • PyTorch: 2.2.0a0+81ea7a4
  • Datasets: 3.2.0
  • Tokenizers: 0.19.1

Citation

BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```