cashu committed
Commit 51b33e0 · 1 Parent(s): 8a36fa4

update model card README.md

Files changed (1):
  1. README.md +3 -52
README.md CHANGED
@@ -1,11 +1,10 @@
  ---
- base_model: NousResearch/Llama-2-7b-chat-hf
+ base_model: NousResearch/Llama-2-13b-chat-hf
  tags:
  - generated_from_trainer
  model-index:
  - name: llama-2-mcq-gen
    results: []
- library_name: peft
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,7 +12,7 @@ should probably proofread and complete it, then remove this comment. -->

  # llama-2-mcq-gen

- This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
+ This model is a fine-tuned version of [NousResearch/Llama-2-13b-chat-hf](https://huggingface.co/NousResearch/Llama-2-13b-chat-hf) on an unknown dataset.

  ## Model description

@@ -29,50 +28,6 @@ More information needed

  ## Training procedure

-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
  ### Training hyperparameters

  The following hyperparameters were used during training:
@@ -85,7 +40,7 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.03
- - num_epochs: 1
+ - num_epochs: 2

  ### Training results

@@ -93,10 +48,6 @@ The following hyperparameters were used during training:

  ### Framework versions

- - PEFT 0.4.0
- - PEFT 0.4.0
- - PEFT 0.4.0
- - PEFT 0.4.0
  - Transformers 4.31.0
  - Pytorch 2.2.2+cu121
  - Datasets 2.18.0
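The 44 removed lines are four identical copies of the same `bitsandbytes` quantization dump. As a minimal sketch, assuming the `transformers` 4.31.0 API listed in the card, the dumped values correspond to a `BitsAndBytesConfig` like the following; the variable name is illustrative:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the config dump removed from the card: 4-bit NF4 weights,
# float16 compute, no double quantization, int8 offload paths disabled.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```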
 
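In the hyperparameters hunk, the only change is `num_epochs: 1` → `2`; the context lines map onto `transformers.TrainingArguments` roughly as below. This is a sketch: learning rate, batch size, and seed fall outside the hunk and are left at library defaults rather than guessed, and `output_dir` is a placeholder, not a value from the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-2-mcq-gen",  # placeholder path
    num_train_epochs=2,            # raised from 1 by this commit
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    adam_beta1=0.9,                # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```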
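The dropped `library_name: peft` tag and the four `PEFT 0.4.0` entries suggest this repo holds a PEFT adapter rather than full model weights. A usage sketch under that assumption; the adapter id `cashu/llama-2-mcq-gen` is inferred from the committer name and the model name, not confirmed by the diff:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "NousResearch/Llama-2-13b-chat-hf"  # base model named in the updated card
adapter_id = "cashu/llama-2-mcq-gen"          # assumed adapter repo id (unverified)

# Load the base model in 4-bit, matching the quantization dump from the old card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
# Attach the fine-tuned adapter on top of the quantized base.
model = PeftModel.from_pretrained(base, adapter_id)
```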