ishant0121 committed
Commit 110bfa6
1 Parent(s): b900dbb

Model save
README.md CHANGED
@@ -1,16 +1,12 @@
 ---
 license: apache-2.0
-base_model: mistralai/Mistral-7B-v0.1
+base_model: TinyLlama/TinyLlama-1.1B-step-50K-105b
 tags:
-- alignment-handbook
-- trl
-- sft
-- generated_from_trainer
 - trl
 - sft
 - generated_from_trainer
 datasets:
-- HuggingFaceH4/ultrachat_200k
+- generator
 model-index:
 - name: zephyr-7b-sft-full
   results: []
@@ -21,9 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # zephyr-7b-sft-full
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
+This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-step-50K-105b](https://huggingface.co/TinyLlama/TinyLlama-1.1B-step-50K-105b) on the generator dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0320
+- Loss: 1.3203
 
 ## Model description
 
@@ -59,7 +55,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.0197        | 1.0   | 3488 | 1.0320          |
+| 1.3518        | 1.0   | 3653 | 1.3203          |
 
 
 ### Framework versions
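
The updated card describes an SFT checkpoint of TinyLlama-1.1B. Below is a minimal usage sketch; the repo id `ishant0121/zephyr-7b-sft-full` is assumed from the committer and model name and may not match the actual Hub path, and the prompt and `max_new_tokens` value are illustrative only.

```python
# Minimal sketch, not from the repo. The repo id below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ishant0121/zephyr-7b-sft-full"  # assumed Hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what supervised fine-tuning (SFT) does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens chosen for illustration; generation_config.json in this
# commit caps max_length at 2048.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```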
all_results.json CHANGED
@@ -1,14 +1,9 @@
 {
     "epoch": 1.0,
-    "eval_loss": 1.0320301055908203,
-    "eval_runtime": 4528.944,
-    "eval_samples": 124719,
-    "eval_samples_per_second": 18.466,
-    "eval_steps_per_second": 1.154,
-    "total_flos": 182579059752960.0,
-    "train_loss": 0.025541797006895784,
-    "train_runtime": 4827.4873,
+    "total_flos": 66190143651840.0,
+    "train_loss": 1.3573657579367173,
+    "train_runtime": 8062.8425,
     "train_samples": 83145,
-    "train_samples_per_second": 11.557,
-    "train_steps_per_second": 0.723
+    "train_samples_per_second": 7.249,
+    "train_steps_per_second": 0.453
 }
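
The aggregate metrics above are plain JSON written by the Trainer at the end of the run. A minimal sketch for reading them back, assuming the file sits at the repo root:

```python
# Illustrative only; not part of the repo.
import json

with open("all_results.json") as f:  # path assumed relative to the repo root
    results = json.load(f)

print(f"train_loss:               {results['train_loss']:.4f}")
print(f"train_runtime (s):        {results['train_runtime']:.1f}")
print(f"train_samples_per_second: {results['train_samples_per_second']}")
```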
generation_config.json CHANGED
@@ -1,6 +1,7 @@
 {
-  "_from_model_config": true,
   "bos_token_id": 1,
   "eos_token_id": 2,
+  "max_length": 2048,
+  "pad_token_id": 0,
   "transformers_version": "4.41.2"
 }
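
The new fields cap generated sequences at 2048 tokens and set padding to token id 0. A minimal sketch that reproduces this file with the `transformers` `GenerationConfig` API (illustrative; not necessarily the code that produced it):

```python
# Sketch only: rebuilds the fields shown in this commit's generation_config.json.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    bos_token_id=1,
    eos_token_id=2,
    max_length=2048,
    pad_token_id=0,
)
gen_config.save_pretrained(".")  # writes generation_config.json in the current directory
```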
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cd1cc3ab3138978cfaabb13b4c070fdf8abcd5746d969d43dcdeb7894d456d8d
+oid sha256:255ed0475eb2e61e886a1e01ffff1f186d6d7e3b2a2a6ade35fdd64222d797d8
 size 2200119864
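
Only the Git LFS pointer changes here: the `oid sha256:...` value is the SHA-256 of the actual weights file, so a local download can be checked against the pointer shown above. A minimal verification sketch, with the local path assumed:

```python
# Illustrative check; the local file path is an assumption.
import hashlib

expected = "255ed0475eb2e61e886a1e01ffff1f186d6d7e3b2a2a6ade35fdd64222d797d8"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:  # adjust to your local copy
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("match" if h.hexdigest() == expected else "mismatch")
```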
runs/Jul02_08-04-09_ishant-gpu/events.out.tfevents.1719908314.ishant-gpu.55464.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:57cba252731d0f3ade0b6f9746c0b9686cdcdd7afd93215501edf8041131a582
-size 155689
+oid sha256:85986aec543252f2f13c69541395d5fae79ba83dabfb45ad655140da39f539f8
+size 159690
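
This is the TensorBoard event file for the run. A minimal sketch for inspecting it with the `tensorboard` package; the scalar tag name below is a guess and should be picked from the printed list:

```python
# Sketch only, assuming the standard TensorBoard event-file layout under runs/.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/Jul02_08-04-09_ishant-gpu")  # directory containing the event file
ea.Reload()

print("available scalar tags:", ea.Tags()["scalars"])
for event in ea.Scalars("train/loss"):  # hypothetical tag; pick one from the list above
    print(event.step, event.value)
```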
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
-    "total_flos": 182579059752960.0,
-    "train_loss": 0.025541797006895784,
-    "train_runtime": 4827.4873,
+    "total_flos": 66190143651840.0,
+    "train_loss": 1.3573657579367173,
+    "train_runtime": 8062.8425,
     "train_samples": 83145,
-    "train_samples_per_second": 11.557,
-    "train_steps_per_second": 0.723
+    "train_samples_per_second": 7.249,
+    "train_steps_per_second": 0.453
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff