AshtonIsNotHere committed on
Commit 3ae8ed6 · 1 Parent(s): bb2ad36

Upload README.md with huggingface_hub

Files changed (1): README.md (+84, -0)

README.md (new file):

---
license: llama2
tags:
- generated_from_trainer
datasets:
- AshtonIsNotHere/nlp_pp_code_dataset
metrics:
- accuracy
model-index:
- name: codellama_CodeLlama-7b-hf_08_27_23_15_32_28
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: AshtonIsNotHere/nlp_pp_code_dataset
      type: AshtonIsNotHere/nlp_pp_code_dataset
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8968056729128353
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# codellama_CodeLlama-7b-hf_08_27_23_15_32_28

This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the AshtonIsNotHere/nlp_pp_code_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4129
- Accuracy: 0.8968
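
For quick experimentation, here is a minimal loading sketch using `transformers`. The repo id below is an assumption inferred from the model name above, and `device_map="auto"` additionally requires `accelerate`:

```python
# Sketch: load the fine-tuned checkpoint for causal LM inference.
# NOTE: the repo id is an assumption based on the model name; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AshtonIsNotHere/codellama_CodeLlama-7b-hf_08_27_23_15_32_28"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```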

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.00012
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
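
As a point of reference, a minimal sketch of how the listed values map onto `transformers.TrainingArguments`. The effective train batch size works out to 1 per device × 4 gradient-accumulation steps × 4 GPUs = 16; anything not listed above (e.g. `output_dir`) is an assumption:

```python
# Sketch: the reported hyperparameters expressed as TrainingArguments.
# Only the values listed in the model card are grounded; the rest is assumed.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codellama_CodeLlama-7b-hf_08_27_23_15_32_28",  # assumed
    learning_rate=1.2e-4,
    per_device_train_batch_size=1,  # x 4 accumulation steps x 4 GPUs = 16 total
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=7.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```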

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 61   | 0.5100          | 0.8726   |
| No log        | 1.99  | 122  | 0.4129          | 0.8968   |
| No log        | 2.99  | 183  | 0.4166          | 0.9072   |
| No log        | 4.0   | 245  | 0.4595          | 0.9090   |
| No log        | 5.0   | 306  | 0.5181          | 0.9093   |
| No log        | 5.99  | 367  | 0.5553          | 0.9090   |
| No log        | 6.97  | 427  | 0.5603          | 0.9089   |

The reported evaluation numbers correspond to the epoch-2 checkpoint (step 122), where validation loss is lowest (0.4129, perplexity ≈ exp(0.4129) ≈ 1.51); validation loss rises in later epochs while accuracy plateaus around 0.909.

### Framework versions

- Transformers 4.30.2
- PyTorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3