---
license: apache-2.0
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
language:
- en
pipeline_tag: text-generation
---
# INTELLECT-1-step-88000

This repository contains the intermediate INTELLECT-1 checkpoint at training step 88000 (the row marked with an arrow below). To fetch a checkpoint programmatically, see the sketch after the table.

| | Step | Model URL |
|----|-------|-----------|
| | 17000 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-17000 |
| | 28600 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-28600 |
| | 39200 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-39200 |
| | 49200 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-49200 |
| | 59200 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-59200 |
| | 69200 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-69200 |
| | 78000 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-78000 |
| -> | 88000 | https://huggingface.co/PrimeIntellect/INTELLECT-1-step-88000 |
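
If you prefer to download a checkpoint from this series programmatically rather than through the links above, a short `huggingface_hub` sketch like the following should work; the `step` value is just one of the rows in the table.

```python
# Download one of the intermediate checkpoints listed above.
# snapshot_download is the standard huggingface_hub API for fetching a repo.
from huggingface_hub import snapshot_download

step = 88000  # any step from the table above
local_dir = snapshot_download(repo_id=f"PrimeIntellect/INTELLECT-1-step-{step}")
print(f"Checkpoint for step {step} downloaded to {local_dir}")
```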

## **Model Overview**
**INTELLECT-1** is the first collaboratively trained 10-billion-parameter language model, trained from scratch on 1 trillion tokens of English text and code.

**INTELLECT-1** was trained on up to 14 concurrent nodes distributed across 3 continents, with 30 independent community contributors providing compute.
The training code uses the [prime framework](https://github.com/PrimeIntellect-ai/prime), a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
The key abstraction that enables dynamic scaling is the `ElasticDeviceMesh`, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node.
The global all-reduce uses custom int8 all-reduce kernels to shrink the communication payload, greatly reducing communication overhead.
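
To make the compression idea concrete, below is a minimal, hypothetical sketch of an int8-compressed reduction built from plain `torch.distributed` collectives. The quantization scheme, the `all_gather`-based exchange, and the function name are illustrative assumptions; the actual prime kernels are custom and considerably more optimized.

```python
# Illustrative sketch only: int8-compressed reduction via all-gather.
# The prime framework's real kernels are custom; this just demonstrates the
# payload-reduction idea (ship int8 instead of float32, i.e. 4x fewer bytes).
import torch
import torch.distributed as dist

def int8_all_reduce(tensor: torch.Tensor) -> torch.Tensor:
    """Sum `tensor` across ranks, communicating int8 payloads plus one scale each."""
    # Symmetric per-tensor quantization: map [-absmax, absmax] onto [-127, 127].
    scale = (tensor.abs().max() / 127.0).clamp(min=1e-8).reshape(1)
    q = torch.round(tensor / scale).clamp(-127, 127).to(torch.int8)

    world_size = dist.get_world_size()
    scales = [torch.empty_like(scale) for _ in range(world_size)]
    quants = [torch.empty_like(q) for _ in range(world_size)]
    dist.all_gather(scales, scale)  # tiny: one scale per rank
    dist.all_gather(quants, q)      # bulk payload, now int8 instead of float32

    # Dequantize each rank's contribution and accumulate locally.
    out = torch.zeros_like(tensor)
    for s, qi in zip(scales, quants):
        out += qi.to(tensor.dtype) * s
    return out
```

Relative to float32, the int8 payload is a quarter of the size, which is the saving such kernels exploit over slow intercontinental links.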

For more detailed technical insights, please refer to our [technical paper](https://github.com/PrimeIntellect-ai/prime).

## **Model Details**
- **Model Contributors**: samsja, Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
- **Release Date**: 29 Nov 2024
- **Model License**: Apache 2.0

## **Technical Specifications**
| **Parameter** | **Value** |
|---------------------------|--------|
| Parameter Size            | 10B    |
| Number of Layers          | 42     |
| Number of Attention Heads | 32     |
| Hidden Size               | 4096   |
| Context Length            | 8192   |
| Vocabulary Size           | 128256 |
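
The checkpoint is tagged for `text-generation`, so the snippet below shows one plausible way to load it with the Hugging Face `transformers` API. Treat it as a hedged sketch: `AutoModelForCausalLM` compatibility is an assumption not stated in this card, and the prime repository remains the reference for the supported loading path.

```python
# Hypothetical usage sketch: loading this intermediate checkpoint with
# transformers. AutoModelForCausalLM compatibility is assumed, not confirmed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PrimeIntellect/INTELLECT-1-step-88000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

prompt = "Distributed training across continents is"
inputs = tokenizer(prompt, return_tensors="pt")
# Context length is 8192 tokens (see the table above); keep inputs within it.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```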

## **Citations**
If you use this model in your research, please cite it as follows:

```
@article{}
```