RichardErkhov committed on
Commit
c963ea4
1 Parent(s): 21d0d11

uploaded readme

Files changed (1)
  1. README.md +72 -0
README.md ADDED
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

CulturaX-zh-unsupervised-20241030-122021 - AWQ
- Model creator: https://huggingface.co/autoprogrammer/
- Original model: https://huggingface.co/autoprogrammer/CulturaX-zh-unsupervised-20241030-122021/
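A minimal usage sketch (not from the original card): it assumes the quant loads through `transformers` with the `autoawq` backend installed and a CUDA GPU available. The repo id below is a placeholder; substitute this repository's actual id.

```python
# Hedged sketch: loading an AWQ-quantized causal LM via transformers.
# Assumes `pip install transformers autoawq accelerate` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/CulturaX-zh-unsupervised-20241030-122021-awq"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # dispatch to GPU; AWQ kernels are CUDA-only
)

# The base model was tuned on Chinese text (CulturaX-zh), so a Chinese
# prompt makes a fair smoke test.
prompt = "请用一句话介绍一下你自己。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```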
Original model description:
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: CulturaX-zh-unsupervised-20241030-122021
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# CulturaX-zh-unsupervised-20241030-122021

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 3750
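A minimal sketch (not the author's training script) of `TrainingArguments` mirroring the hyperparameters above, assuming the Hugging Face `Trainer` API; `output_dir` is a placeholder.

```python
# Hedged sketch: TrainingArguments matching the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CulturaX-zh-unsupervised-20241030-122021",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,      # Trainer defaults, stated explicitly to match the card
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    max_steps=3750,
)
```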
### Training results

### Framework versions

- Transformers 4.46.1
- PyTorch 2.4.1
- Datasets 3.0.1
- Tokenizers 0.20.1