artificialguybr committed
Commit 2096744
1 Parent(s): df5d4e3

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+77 lines)

README.md ADDED
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2-1.5B
language:
- en
pipeline_tag: text-generation
tags:
- generated_from_trainer
- instruction-tuning
model-index:
- name: outputs/qwen2.5-1.5b-ft-synthia15-i
  results: []
---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

# Qwen2-1.5B Fine-tuned on Synthia v1.5-I

This model is a fine-tuned version of [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) on the Synthia v1.5-I dataset, which contains over 20.7k instruction-following examples.

## Model Description

Qwen2-1.5B is part of the latest Qwen2 series of large language models. The base model brings significant improvements in:
- Language understanding and generation
- Structured data processing
- Support for multiple languages
- Long context handling

This fine-tuned version enhances the base model's instruction-following capabilities through training on the Synthia v1.5-I dataset.

### Model Architecture
- Type: Causal Language Model
- Parameters: 1.5B
- Training Framework: Transformers 4.45.0.dev0

## Intended Uses & Limitations

This model is intended for:
- Instruction following and task completion
- Text generation and completion
- Conversational AI applications

The model inherits the capabilities of the base Qwen2-1.5B model while being specifically tuned for instruction following.
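
## Usage

A minimal generation sketch with the `transformers` library is shown below. The repository id and the prompt format are assumptions: the exact instruction template is set by the axolotl config and is not confirmed here, so adjust both to match the actual repository and template.

```python
# Minimal text-generation sketch; the model id below is a placeholder for this repository's path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "artificialguybr/qwen2.5-1.5b-ft-synthia15-i"  # hypothetical id, replace with the actual repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Plain instruction prompt; the Synthia training template may expect a different format.
prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```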

## Training Procedure

### Training Data
The model was fine-tuned on the Synthia v1.5-I dataset containing 20.7k instruction-following examples.

### Training Hyperparameters

The following hyperparameters were used during training (a hedged mapping to standard `transformers` arguments is sketched after the list):
- Learning rate: 1e-05
- Train batch size: 5
- Eval batch size: 5
- Seed: 42
- Gradient accumulation steps: 8
- Total train batch size: 40
- Optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- LR scheduler type: cosine
- LR scheduler warmup steps: 100
- Number of epochs: 3
- Sequence length: 4096
- Sample packing: enabled
- Pad to sequence length: enabled
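
The run itself was driven by axolotl (config below), but as a rough orientation, here is a hedged sketch of how these settings map onto `transformers.TrainingArguments`. Sample packing and the 4096-token sequence length are axolotl-level options with no direct equivalent in this mapping, so they are omitted.

```python
# Approximate TrainingArguments mirroring the listed hyperparameters (illustrative only;
# the actual training was configured through axolotl rather than written by hand like this).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs/qwen2.5-1.5b-ft-synthia15-i",
    learning_rate=1e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    gradient_accumulation_steps=8,  # 5 * 8 = total train batch size of 40 on a single device
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```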

## Framework Versions

- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`