NeuralNovel committed e021188 (1 parent: 5046e6b)

Delete .ipynb_checkpoints

.ipynb_checkpoints/README-checkpoint.md DELETED
@@ -1,58 +0,0 @@
- ---
- license: other
- library_name: peft
- tags:
- - llama-factory
- - lora
- - generated_from_trainer
- base_model: abacusai/Smaug-72B-v0.1
- model-index:
- - name: train_2024-02-17-03-49-55
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # train_2024-02-17-03-49-55
-
- This model is a fine-tuned version of [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1) on the Neural-DPO dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.001
- - train_batch_size: 2
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 8
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - num_epochs: 3.0
-
- ### Training results
-
-
-
- ### Framework versions
-
- - PEFT 0.8.2
- - Transformers 4.37.2
- - Pytorch 2.2.0+cu121
- - Datasets 2.17.0
- - Tokenizers 0.15.2
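
For reference, the hyperparameters listed in the deleted card map onto a `transformers.TrainingArguments` configuration roughly like the sketch below. This is a minimal sketch, not the repo's actual training setup (the card says training used llama-factory): it assumes single-GPU training, so per-device batch size 2 with 4 accumulation steps matches the stated total batch size of 8, and `output_dir` is a placeholder.

```python
# Rough TrainingArguments equivalent of the deleted card's hyperparameters.
# Assumes one GPU: 2 (per-device) * 4 (accumulation) = total batch size 8.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_2024-02-17-03-49-55",  # placeholder output directory
    learning_rate=1e-3,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    adam_beta1=0.9,                          # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=3.0,
)
```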
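Since the deleted card describes a LoRA adapter (PEFT 0.8.2, Transformers 4.37.2) on top of abacusai/Smaug-72B-v0.1, loading it would typically look like the sketch below. The adapter repo id here is hypothetical; only the base model id comes from the card, and a 72B base needs substantial GPU memory or quantization in practice.

```python
# Minimal sketch of loading a LoRA adapter onto the base model with PEFT.
# "NeuralNovel/adapter-repo" is a hypothetical adapter id, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "abacusai/Smaug-72B-v0.1"      # base model named in the card
adapter_id = "NeuralNovel/adapter-repo"  # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach LoRA weights

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```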