ninyx committed
Commit e587852
1 Parent(s): 9413ed2

Model save

Files changed (3)
  1. README.md +79 -0
  2. adapter_model.safetensors +1 -1
  3. results.json +4 -0
README.md ADDED
@@ -0,0 +1,79 @@
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
metrics:
- bleu
- rouge
model-index:
- name: Mistral-7B-Instruct-v0.2-advisegpt-v0.2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Mistral-7B-Instruct-v0.2-advisegpt-v0.2

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2663
- Bleu: {'bleu': 0.8973677595827475, 'precisions': [0.9445357190260831, 0.9043357485277211, 0.884048801354545, 0.8699324916460348], 'brevity_penalty': 0.9967654892802698, 'length_ratio': 0.9967707090483877, 'translation_length': 1235588, 'reference_length': 1239591}
- Rouge: {'rouge1': 0.94002209733584, 'rouge2': 0.8959242425644911, 'rougeL': 0.931506639182089, 'rougeLsum': 0.9379602925725274}
- Exact Match: {'exact_match': 0.0}
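
The card does not include a usage example. The sketch below is one way to run inference with this adapter; the hub repo id `ninyx/Mistral-7B-Instruct-v0.2-advisegpt-v0.2` is inferred from the committer and model name rather than stated in the card, and a GPU with enough memory for the 7B base model in float16 is assumed.

```python
# Usage sketch (not part of the original card): load the base model and apply
# the LoRA adapter saved in adapter_model.safetensors via PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "ninyx/Mistral-7B-Instruct-v0.2-advisegpt-v0.2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "[INST] Your question here [/INST]"  # Mistral-Instruct chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```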

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 8
- mixed_precision_training: Native AMP
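
The training script itself is not part of this commit. A minimal sketch of how these settings could be wired together with `transformers.TrainingArguments` and TRL's `SFTTrainer` (consistent with the `trl`, `sft`, and `peft` tags) follows; the dataset contents and the LoRA configuration are illustrative assumptions, and `fp16=True` stands in for the "Native AMP" mixed precision noted above.

```python
# Sketch only: reconstructs the hyperparameters above, not the actual script.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder data; the real "generator" dataset is not described in the card.
train_dataset = Dataset.from_dict(
    {"text": ["<s>[INST] example prompt [/INST] example completion</s>"]}
)

args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.2-advisegpt-v0.2",
    learning_rate=2e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=10,  # 3 * 10 = total train batch size 30
    num_train_epochs=8,
    lr_scheduler_type="cosine",
    seed=42,
    fp16=True,  # "Native AMP"; bf16=True would be an equally plausible choice
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)

# Illustrative LoRA settings; the actual adapter config is not shown in this commit.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    peft_config=peft_config,
)
trainer.train()
```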

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Exact Match |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-----:|:-----------:|
| 0.0486 | 1.0 | 953 | 0.2746 | {'bleu': 0.8908185969537198, 'precisions': [0.9404575669873605, 0.8973236909340203, 0.8757372017868406, 0.8612917392143906], 'brevity_penalty': 0.9973237695370935, 'length_ratio': 0.9973273442611313, 'translation_length': 1236278, 'reference_length': 1239591} | {'rouge1': 0.9362204684890618, 'rouge2': 0.8885056700228431, 'rougeL': 0.9265631818483623, 'rougeLsum': 0.9336844419843153} | {'exact_match': 0.0} |
| 0.0491 | 2.0 | 1906 | 0.2663 | {'bleu': 0.8973677595827475, 'precisions': [0.9445357190260831, 0.9043357485277211, 0.884048801354545, 0.8699324916460348], 'brevity_penalty': 0.9967654892802698, 'length_ratio': 0.9967707090483877, 'translation_length': 1235588, 'reference_length': 1239591} | {'rouge1': 0.94002209733584, 'rouge2': 0.8959242425644911, 'rougeL': 0.931506639182089, 'rougeLsum': 0.9379602925725274} | {'exact_match': 0.0} |
| 0.0457 | 3.0 | 2859 | 0.2713 | {'bleu': 0.9011515587348646, 'precisions': [0.9455932972222963, 0.9073373719165547, 0.8876381124008869, 0.8739477250662966], 'brevity_penalty': 0.9976982094967736, 'length_ratio': 0.9977008545560592, 'translation_length': 1236741, 'reference_length': 1239591} | {'rouge1': 0.9413126529676463, 'rouge2': 0.899299253704354, 'rougeL': 0.9334048668464673, 'rougeLsum': 0.9393911208574579} | {'exact_match': 0.0} |
| 0.0427 | 4.0 | 3812 | 0.2973 | {'bleu': 0.90301748257064, 'precisions': [0.9459337521093097, 0.908889094215937, 0.8900854627247164, 0.8768615812629189], 'brevity_penalty': 0.9977289348603028, 'length_ratio': 0.9977315098286451, 'translation_length': 1236779, 'reference_length': 1239591} | {'rouge1': 0.9416505951538487, 'rouge2': 0.9013278888630185, 'rougeL': 0.9338287357833286, 'rougeLsum': 0.9396676143080369} | {'exact_match': 0.0} |
| 0.0387 | 5.0 | 4765 | 0.3174 | {'bleu': 0.9027081754085503, 'precisions': [0.9453687163030563, 0.9082821849767655, 0.8894881333354993, 0.8763610309353894], 'brevity_penalty': 0.9980126956655654, 'length_ratio': 0.9980146677412146, 'translation_length': 1237130, 'reference_length': 1239591} | {'rouge1': 0.9410039582969136, 'rouge2': 0.9003660562981124, 'rougeL': 0.9333230818742395, 'rougeLsum': 0.9390562109303144} | {'exact_match': 0.0} |
| 0.0375 | 6.0 | 5718 | 0.3550 | {'bleu': 0.9020801569628037, 'precisions': [0.9452423604006518, 0.9080153383434265, 0.889329438008671, 0.8762846622568347], 'brevity_penalty': 0.9974911930257172, 'length_ratio': 0.9974943348249543, 'translation_length': 1236485, 'reference_length': 1239591} | {'rouge1': 0.9405264680129318, 'rouge2': 0.8997693944544493, 'rougeL': 0.9326541656452509, 'rougeLsum': 0.9385618029898112} | {'exact_match': 0.0} |
| 0.0361 | 7.0 | 6671 | 0.3875 | {'bleu': 0.9012292336144602, 'precisions': [0.9445250811645649, 0.907065679940991, 0.8882131409279821, 0.8750920375317651], 'brevity_penalty': 0.9976529282968476, 'length_ratio': 0.99765567836488, 'translation_length': 1236685, 'reference_length': 1239591} | {'rouge1': 0.9398793690117034, 'rouge2': 0.8988761356423343, 'rougeL': 0.9319409400581564, 'rougeLsum': 0.9378780059486569} | {'exact_match': 0.0} |
| 0.0336 | 8.0 | 7624 | 0.4056 | {'bleu': 0.9006875075046153, 'precisions': [0.9440281128914361, 0.906425354183536, 0.8876174257258417, 0.874531174789441], 'brevity_penalty': 0.9976876979719658, 'length_ratio': 0.997690367225964, 'translation_length': 1236728, 'reference_length': 1239591} | {'rouge1': 0.9394107383916215, 'rouge2': 0.8982745501459677, 'rougeL': 0.9313967228794204, 'rougeLsum': 0.9373525606500615} | {'exact_match': 0.0} |
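
The evaluation code is not included in this commit, but the Bleu/Rouge/Exact Match dictionaries above match the output format of the `evaluate` library. Below is a hedged sketch of a `compute_metrics` callback that would produce per-epoch values of this shape; the prediction/label alignment is simplified (argmax logits are not shifted), so treat it as an illustration rather than the actual script.

```python
# Hedged sketch of a Trainer-style metric callback; not taken from this repo.
import evaluate
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
exact_match = evaluate.load("exact_match")

def preprocess_logits_for_metrics(logits, labels):
    # Keep only token ids instead of full logits so eval memory stays bounded.
    return logits.argmax(dim=-1)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    pad_id = tokenizer.pad_token_id or tokenizer.eos_token_id
    # -100 marks positions ignored by the loss; replace them before decoding.
    labels = np.where(labels != -100, labels, pad_id)
    preds = np.where(preds != -100, preds, pad_id)
    pred_texts = tokenizer.batch_decode(preds, skip_special_tokens=True)
    ref_texts = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return {
        "bleu": bleu.compute(predictions=pred_texts, references=[[r] for r in ref_texts]),
        "rouge": rouge.compute(predictions=pred_texts, references=ref_texts),
        "exact_match": exact_match.compute(predictions=pred_texts, references=ref_texts),
    }

# Passed to the trainer as:
#   SFTTrainer(..., compute_metrics=compute_metrics,
#              preprocess_logits_for_metrics=preprocess_logits_for_metrics)
# The Trainer then logs the results as eval_bleu, eval_rouge, eval_exact_match.
```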

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:25f7d3d6b4cc229615fa7963560c6a59431bc7d79304b9b3aef3188c977bb676
+ oid sha256:db214dc5720052275d803324bd7b5d2d9146472847e24c282f7c9a2ccc634944
  size 864513616
results.json ADDED
@@ -0,0 +1,4 @@
Pre-training results:
{"eval_loss": 2.381751298904419, "eval_bleu": {"bleu": 0.3763668727961896, "precisions": [0.7113682572149176, 0.41969742764485324, 0.29141291931264046, 0.24017336770041536], "brevity_penalty": 0.9899090183536557, "length_ratio": 0.9899595915104256, "translation_length": 1227145, "reference_length": 1239591}, "eval_rouge": {"rouge1": 0.6995658688454499, "rouge2": 0.3911380458337215, "rougeL": 0.5482311516914178, "rougeLsum": 0.6769687911860702}, "eval_exact_match": {"exact_match": 0.0}, "eval_runtime": 2197.2062, "eval_samples_per_second": 1.354, "eval_steps_per_second": 1.354}
Post-training results:
{"eval_loss": 0.2662665843963623, "eval_bleu": {"bleu": 0.8973677595827475, "precisions": [0.9445357190260831, 0.9043357485277211, 0.884048801354545, 0.8699324916460348], "brevity_penalty": 0.9967654892802698, "length_ratio": 0.9967707090483877, "translation_length": 1235588, "reference_length": 1239591}, "eval_rouge": {"rouge1": 0.94002209733584, "rouge2": 0.8959242425644911, "rougeL": 0.931506639182089, "rougeLsum": 0.9379602925725274}, "eval_exact_match": {"exact_match": 0.0}, "eval_runtime": 1802.811, "eval_samples_per_second": 1.65, "eval_steps_per_second": 1.65, "epoch": 8.0}
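
Fine-tuning moves the eval loss from roughly 2.38 to 0.27 and BLEU from roughly 0.38 to 0.90. Note that results.json interleaves label lines with JSON lines, so it is not a single JSON document; a small parsing sketch, assuming the file is read exactly as committed:

```python
# Parse results.json, which alternates "<label>:" lines with JSON payload lines.
import json

with open("results.json") as f:
    lines = [line.strip() for line in f if line.strip()]

records = {label.rstrip(":"): json.loads(payload)
           for label, payload in zip(lines[0::2], lines[1::2])}

print(records["Pre-training results"]["eval_bleu"]["bleu"])   # ~0.376
print(records["Post-training results"]["eval_bleu"]["bleu"])  # ~0.897
```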