metadata
base_model: thenlper/gte-base
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:206874
  - loss:ContrastiveLoss
widget:
  - source_sentence: >-
      Cardiac silhouette size is top normal. Aorta is tortuous and demonstrates
      mild atherosclerotic calcifications diffusely. Hilar contours are normal.
      Pulmonary vasculature is normal. Lungs are clear. No pleural effusion or
      pneumothorax is present. No acute osseous abnormality is detected.  
    sentences:
      - 'No acute cardiopulmonary process. '
      - 'No acute cardiopulmonary abnormality. '
      - 'Normal chest radiographs. '
  - source_sentence: >-
      The lungs are mildly hyperexpanded but clear. No pleural effusion or
      pneumothorax is seen. The cardiac and mediastinal silhouettes are
      unremarkable.  
    sentences:
      - 'Findings worrisome for early/mild left lower lobe pneumonia. '
      - 'No acute cardiopulmonary process. The mediastinum is not widened. '
      - 'No radiographic evidence of acute cardiopulmonary disease. '
  - source_sentence: >-
      Lung volumes are slightly low. The cardiomediastinal silhouette and
      pulmonary vasculature a similar to the prior examination, and
      unremarkable, accounting for low lung volumes. Midline sternal wires are
      intact and well aligned. Mediastinal clips and anastomotic markers are
      noted. The lungs are clear. There is no pleural effusion or pneumothorax.
      Bilateral shoulder prostheses are partially imaged.  
    sentences:
      - 'No acute cardiopulmonary process. '
      - 'No acute intrathoracic abnormality. '
      - >-
        Pulmonary edema, increasing pleural effusions, known mass in the right
        lower lung. 
  - source_sentence: >-
      The left hemi thorax remains opacified. The right lung is now clear. The
      right mediastinal silhouette is unchanged. An endotracheal tube feeding
      tube and right internal jugular catheter remain in place.  
    sentences:
      - 'The right lung now appears clear. No other significant change. '
      - 'No acute cardiopulmonary abnormality. '
      - >-
        Chest findings within normal limits, no secondary metastases suspicious
        lesions identified. 
  - source_sentence: >-
      The atient is status post coronary artery bypass graft surgery. The heart
      is mildly enlarged. There is a large hiatal hernia with an air-fluid
      level. Otherwise, the mediastinal and hilar contours are unremarkable. The
      lungs appear clear. The chest is hyperinflated. There is no pleural
      effusion or pneumothorax. Bony structures are unremarkable.  
    sentences:
      - >-
        1. Left apical pneumothorax still small, but considerably larger. Left
        base pneumothorax also slightly larger. 2. Minimal lucency adjacent to
        the the aortic knob may also represent part of the left lung
        pneumothorax. Attention to this area on followup films to exclude any
        mediastinal air is requested. 3. Extensive subcutaneous emphysema,
        equivocally slightly greater than on the prior film. 4. Minimal interval
        change in position of the left chest tube. 5. Right pneumothorax also
        increased, still small in width, but now seen not only at the right lung
        apex, but also along the right lateral chest wall and at the right
        costophrenic angle in the adjoining lung base. 
      - 'No evidence of acute disease. Normal cardiac size. '
      - >-
        No evidence of acute disease. Hyperinflation. Large hiatal hernia.
        Status post coronary artery bypass graft surgery. 
model-index:
  - name: SentenceTransformer based on thenlper/gte-base
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: validation
          type: validation
        metrics:
          - type: pearson_cosine
            value: 0.8022517557853334
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.810529949353046
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.8243043367211444
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.8105359053829688
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.824484835649088
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.8105299161732425
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.802251755767147
            name: Pearson Dot
          - type: spearman_dot
            value: 0.8105299280214241
            name: Spearman Dot
          - type: pearson_max
            value: 0.824484835649088
            name: Pearson Max
          - type: spearman_max
            value: 0.8105359053829688
            name: Spearman Max

SentenceTransformer based on thenlper/gte-base

This is a sentence-transformers model finetuned from thenlper/gte-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: thenlper/gte-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions (see the quick check after this list)
  • Similarity Function: Cosine Similarity
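
These properties can be verified at runtime once the model is loaded; a minimal sketch, assuming the checkpoint loads as shown in the Usage section below:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("hyojuuun/gte_base_MIMICCXR_FT")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768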

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
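
The same three-module stack (BERT encoder, mean pooling, L2 normalization) could be assembled by hand with the sentence_transformers.models API; this is a sketch of an equivalent architecture, not how this checkpoint was actually produced:

from sentence_transformers import SentenceTransformer, models

# Equivalent module stack built on top of the base encoder
word_embedding = models.Transformer("thenlper/gte-base", max_seq_length=512)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
normalize = models.Normalize()
equivalent_model = SentenceTransformer(modules=[word_embedding, pooling, normalize])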

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("hyojuuun/gte_base_MIMICCXR_FT")
# Run inference
sentences = [
    'The atient is status post coronary artery bypass graft surgery. The heart is mildly enlarged. There is a large hiatal hernia with an air-fluid level. Otherwise, the mediastinal and hilar contours are unremarkable. The lungs appear clear. The chest is hyperinflated. There is no pleural effusion or pneumothorax. Bony structures are unremarkable.  ',
    'No evidence of acute disease. Hyperinflation. Large hiatal hernia. Status post coronary artery bypass graft surgery. ',
    '1. Left apical pneumothorax still small, but considerably larger. Left base pneumothorax also slightly larger. 2. Minimal lucency adjacent to the the aortic knob may also represent part of the left lung pneumothorax. Attention to this area on followup films to exclude any mediastinal air is requested. 3. Extensive subcutaneous emphysema, equivocally slightly greater than on the prior film. 4. Minimal interval change in position of the left chest tube. 5. Right pneumothorax also increased, still small in width, but now seen not only at the right lung apex, but also along the right lateral chest wall and at the right costophrenic angle in the adjoining lung base. ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
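
Because the final Normalize() module produces unit-length embeddings, cosine similarity and dot product coincide, which is why the cosine and dot metrics reported below are essentially identical. Building on the snippet above, a small retrieval-style example; the query and candidate strings are taken from the widget examples and are illustrative only:

# Rank candidate impressions against a findings-style query
query = "The lungs are mildly hyperexpanded but clear. No pleural effusion or pneumothorax is seen."
candidates = [
    "No acute cardiopulmonary process.",
    "Findings worrisome for early/mild left lower lobe pneumonia.",
]
query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)
scores = model.similarity(query_embedding, candidate_embeddings)  # shape [1, 2]
print(candidates[scores.argmax().item()])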

Evaluation

Metrics

Semantic Similarity

Metric Value
pearson_cosine 0.8023
spearman_cosine 0.8105
pearson_manhattan 0.8243
spearman_manhattan 0.8105
pearson_euclidean 0.8245
spearman_euclidean 0.8105
pearson_dot 0.8023
spearman_dot 0.8105
pearson_max 0.8245
spearman_max 0.8105
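
These are the standard outputs of an embedding-similarity evaluator. A hedged sketch of how such numbers could be reproduced; val_findings, val_impressions, and val_labels are hypothetical names standing in for the validation split, which is not published with this card:

from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# val_findings / val_impressions: list[str]; val_labels: list[float] in [0, 1] (hypothetical placeholders)
evaluator = EmbeddingSimilarityEvaluator(val_findings, val_impressions, val_labels, name="validation")
results = evaluator(model)
print(results["validation_spearman_cosine"])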

Training Details

Training Dataset

Unnamed Dataset

  • Size: 206,874 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min 3 tokens, mean 78.31 tokens, max 324 tokens
    • sentence_1: string; min 4 tokens, mean 26.68 tokens, max 165 tokens
    • label: float; min 0.0, mean 0.5, max 1.0
  • Samples:
    • sentence_0: The lung volumes are low which accentuates the linear and interstitial opacities. An ill-defined opacity in the left lung in the third/fourth interspace has increased since the prior can be early pneumonia. No pneumothorax. Mild to moderate gastric and small bowel distension partially visualized.
      sentence_1: No evidence of acute cardiopulmonary disease.
      label: 0.0
    • sentence_0: PA and lateral views of the chest were provided demonstrating no focal consolidation, effusion or pneumothorax. The cardiomediastinal silhouette is normal. Bony structures are intact. No free air below the right hemidiaphragm.
      sentence_1: No acute intrathoracic process.
      label: 1.0
    • sentence_0: Previously seen right-sided PICC is no longer seen. Enlargement of the cardiomediastinal silhouette is grossly stable. There are low lung volumes, which accentuate the bronchovascular markings. No focal consolidation is seen. There is no pleural effusion or pneumothorax.
      sentence_1: Low lung volumes but no focal consolidation to suggest pneumonia.
      label: 1.0
  • Loss: ContrastiveLoss with these parameters (a construction sketch follows below):
    {
        "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
        "margin": 0.5,
        "size_average": true
    }
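
A sketch of how this loss configuration could be instantiated in sentence-transformers, with the parameter values taken from the block above and model loaded as in the Usage section:

from sentence_transformers import losses
from sentence_transformers.losses import SiameseDistanceMetric

# Contrastive loss over (sentence_0, sentence_1, label) pairs with the reported settings
train_loss = losses.ContrastiveLoss(
    model=model,
    distance_metric=SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)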
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 96
  • per_device_eval_batch_size: 96
  • multi_dataset_batch_sampler: round_robin
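
Under these settings, the fine-tuning run could be reproduced with the sentence-transformers trainer API roughly as sketched below; train_dataset and evaluator are placeholders for the unpublished training pairs and the validation evaluator, output_dir is hypothetical, and eval_steps=100 is inferred from the training logs further down:

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

# Values taken from the hyperparameter lists; everything else left at library defaults
args = SentenceTransformerTrainingArguments(
    output_dir="gte_base_MIMICCXR_FT",         # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    eval_strategy="steps",
    eval_steps=100,                            # inferred from the training logs below
    multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,               # placeholder: (sentence_0, sentence_1, label) pairs
    loss=train_loss,                           # the ContrastiveLoss sketched above
    evaluator=evaluator,                       # e.g. the EmbeddingSimilarityEvaluator sketched above
)
trainer.train()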

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 96
  • per_device_eval_batch_size: 96
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss validation_spearman_max
0.0464 100 - 0.6178
0.0928 200 - 0.6904
0.1392 300 - 0.7290
0.1856 400 - 0.7596
0.2320 500 0.0191 0.7715
0.2784 600 - 0.7783
0.3248 700 - 0.7851
0.3712 800 - 0.7885
0.4176 900 - 0.7942
0.4640 1000 0.0118 0.7965
0.5104 1100 - 0.8061
0.5568 1200 - 0.8035
0.6032 1300 - 0.8082
0.6497 1400 - 0.8105
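
The epoch/step correspondence follows directly from the dataset size and batch size: 206,874 training pairs at a per-device batch size of 96 give roughly 2,155 steps per epoch (assuming a single device), so step 1,400 lands at about epoch 0.65, matching the last row. A quick check:

steps_per_epoch = 206_874 / 96   # ≈ 2154.94
print(1400 / steps_per_epoch)    # ≈ 0.6497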

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.0
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ContrastiveLoss

@inproceedings{hadsell2006dimensionality,
    author={Hadsell, R. and Chopra, S. and LeCun, Y.},
    booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
    title={Dimensionality Reduction by Learning an Invariant Mapping},
    year={2006},
    volume={2},
    number={},
    pages={1735-1742},
    doi={10.1109/CVPR.2006.100}
}