---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-1b7
tags:
- generated_from_trainer
model-index:
- name: Bloom-1b7-glue-mrpc-IT-baseline
  results: []
---
# Bloom-1b7-glue-mrpc-IT-baseline

This model is a fine-tuned version of [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7), instruction-tuned on the glue-mrpc task described below.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Instruction-tuned on the glue-mrpc task from the dataset here: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/glue-mrpc
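For reference, a minimal sketch of loading this data with the 🤗 `datasets` library; the repository and config names are read off the viewer URL above, and the `"train"` split name is an assumption:

```python
from datasets import load_dataset

# Repo and config names inferred from the viewer URL; "train" split is assumed.
dataset = load_dataset("adambjorn/UnrelatedForgettingOverhead", "glue-mrpc")
print(dataset["train"][0])
```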
## Training procedure
Given a set of prompts:

```python
prompts = [
    "Determine if the following sentences are equivalent: Sentence 1: {sentence1} Sentence 2: {sentence2}. Answer: ",
    "Are these sentences saying the same thing? First: {sentence1} Second: {sentence2}. Response: ",
    "Check sentence equivalence: \"{sentence1}\" versus \"{sentence2}\". Result: ",
]
```
Each training example concatenates a prompt, the two sentences, and the answer string for the label, like so:

```python
input_text = prompt.format(sentence1=sentence1, sentence2=sentence2)
input_text += " " + responses[label]
```
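Putting those two steps together, a minimal sketch of building one training example; the `responses` answer strings and the random choice over `prompts` are assumptions, not taken from the actual training script:

```python
import random

# Assumed label-to-answer mapping; the exact strings used in training are not documented here.
responses = ["not equivalent", "equivalent"]

def build_example(example):
    # Pick one of the three prompt templates (assumed to be sampled at random).
    prompt = random.choice(prompts)
    input_text = prompt.format(
        sentence1=example["sentence1"], sentence2=example["sentence2"]
    )
    # Append the answer string for the gold label.
    input_text += " " + responses[example["label"]]
    return input_text
```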
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
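As a rough guide, these settings correspond to the following 🤗 `TrainingArguments`; this is a sketch only, the output path is a placeholder, and the Adam betas/epsilon listed above match the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Bloom-1b7-glue-mrpc-IT-baseline",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size of 4
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # native AMP mixed-precision training
)
```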
### Training results
Final logged step: `{'loss': 0.0949, 'grad_norm': 5.0146379470825195, 'learning_rate': 6.000000000000001e-07, 'epoch': 10.0}`

Overall training summary: `{'train_runtime': 363.2148, 'train_samples_per_second': 5.506, 'train_steps_per_second': 1.377, 'train_loss': 0.4939311617612839, 'epoch': 10.0}`
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2