---
license: apache-2.0
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
- arcee-ai/EvolKit-75K
- arcee-ai/Llama-405B-Logits
- arcee-ai/The-Tomb
- mlabonne/open-perfectblend-fixed
- microsoft/orca-agentinstruct-1M-v1-cleaned
- Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs
- Team-ACE/ToolACE
- Synthia-coder
- ServiceNow-AI/M2Lingual
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-personas-code
- allenai/tulu-3-sft-personas-math
- allenai/tulu-3-sft-personas-math-grade
- allenai/tulu-3-sft-personas-algebra
language:
- en
base_model: PrimeIntellect/INTELLECT-1-Instruct
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/INTELLECT-1-Instruct-Q4_K_S-GGUF
This model was converted to GGUF format from [`PrimeIntellect/INTELLECT-1-Instruct`](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) for more details on the model.
---
## Model details
INTELLECT-1 is the first collaboratively trained 10-billion-parameter language model, trained from scratch on 1 trillion tokens of English text and code.
This is an instruct model. The base model associated with it is INTELLECT-1.
INTELLECT-1 was trained on up to 14 concurrent nodes
distributed across 3 continents, with contributions from 30 independent
community contributors providing compute.
The training code utilizes the prime framework, a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
The key abstraction that allows dynamic scaling is the ElasticDeviceMesh
which manages dynamic global process groups for fault-tolerant
communication across the internet and local process groups for
communication within a node.
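The sketch below illustrates this two-level idea with plain `torch.distributed` process groups. It is a minimal illustration of the concept, not the actual ElasticDeviceMesh API, and assumes one process per GPU with `LOCAL_WORLD_SIZE` set by the launcher.
```python
# Illustrative only: a global group plus per-node local groups, roughly the idea
# behind ElasticDeviceMesh (names and setup are assumptions, not the prime API).
import os
import torch.distributed as dist

dist.init_process_group(backend="gloo")      # global group: all workers, across the internet
rank = dist.get_rank()
world_size = dist.get_world_size()
local_world = int(os.environ.get("LOCAL_WORLD_SIZE", "8"))
node_id = rank // local_world

# Every rank must create every group; keep only the one covering this node.
local_group = None
for nid in range(world_size // local_world):
    ranks = list(range(nid * local_world, (nid + 1) * local_world))
    group = dist.new_group(ranks=ranks)      # local group: fast intra-node communication
    if nid == node_id:
        local_group = group

# Reductions can then happen within a node first (local_group), and only the
# node-level result is exchanged over the slower global group.
```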
The model was trained using the DiLoCo algorithm with 100 inner steps. The global all-reduce was done with custom int8 all-reduce kernels to shrink the communication payload, reducing communication overhead by a factor of 400x.
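As a rough illustration of this inner/outer structure (function names, quantization details, and optimizer settings here are assumptions for the sketch, not the prime framework's implementation), a DiLoCo-style step looks something like:
```python
# Illustrative DiLoCo-style outer step; not the actual prime framework code.
import torch
import torch.distributed as dist

def int8_allreduce(tensor: torch.Tensor) -> torch.Tensor:
    """Quantize to the int8 range, all-reduce, dequantize. The payload is widened to
    int32 here to avoid overflow; real custom kernels keep the wire format compact."""
    scale = tensor.abs().max() / 127 + 1e-8
    dist.all_reduce(scale, op=dist.ReduceOp.MAX)          # share one scale across workers
    q = (tensor / scale).round().clamp(-127, 127).to(torch.int32)
    dist.all_reduce(q, op=dist.ReduceOp.SUM)
    return q.float() * scale / dist.get_world_size()

def diloco_outer_step(model, inner_opt, outer_opt, data_iter, inner_steps=100):
    # Snapshot the shared global weights before the local inner loop.
    global_params = [p.detach().clone() for p in model.parameters()]
    for _ in range(inner_steps):                          # 100 local AdamW steps
        batch = next(data_iter)                           # dict with input_ids / labels
        model(**batch).loss.backward()
        inner_opt.step()
        inner_opt.zero_grad()
    # Pseudo-gradient = how far this worker drifted; average it over all workers.
    with torch.no_grad():
        for p, g0 in zip(model.parameters(), global_params):
            delta = int8_allreduce((g0 - p.detach()).float())
            p.copy_(g0)                                   # reset to the shared weights
            p.grad = delta.to(p.dtype)                    # feed the outer optimizer
    outer_opt.step()                                      # e.g. SGD(momentum=0.9, nesterov=True)
    outer_opt.zero_grad()
```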
For more detailed technical insights, please refer to our technical paper.
Note: You must add a BOS token at the beginning of each sample. Performance may be impacted otherwise.
### Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")
tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")

input_text = "What is the Metamorphosis of Prime Intellect about?"
# add_special_tokens=True (the default) should prepend the BOS token mentioned above.
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
Example text generation pipeline:
```python
import torch
from transformers import pipeline

torch.set_default_device("cuda")
pipe = pipeline("text-generation", model="PrimeIntellect/INTELLECT-1")
print(pipe("What is prime intellect?"))
```
### Model Details
- Compute Contributors: Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, waiting_, toptickcrypto, sto, Johannes, washout_segment_0b, klee
- Release Date: 29 Nov 2024
- Model License: Apache 2.0
### Technical Specifications

| Parameter | Value |
|---|---|
| Parameter Size | 10B |
| Number of Layers | 42 |
| Number of Attention Heads | 32 |
| Hidden Size | 4096 |
| Context Length | 8192 |
| Vocabulary Size | 128256 |
### Training Details
- Dataset: 55% fineweb-edu, 10% fineweb, 20% Stack V1, 10% dclm-baseline, 5% open-web-math (see the sketch below)
- Tokens: 1 Trillion
- Optimizer: DiLoCo/LocalSGD - Inner Optimizer: AdamW, Outer Optimizer: Nesterov SGD
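As a rough sketch of what that mixture corresponds to, using the dataset repositories listed in this card's metadata; the streaming setup and column handling below are assumptions for illustration, not the actual pretraining data pipeline:
```python
# Illustrative sketch of the stated pretraining mixture with Hugging Face `datasets`.
from datasets import load_dataset, interleave_datasets

mixture = {
    "PrimeIntellect/fineweb-edu": 0.55,
    "PrimeIntellect/fineweb": 0.10,
    "PrimeIntellect/StackV1-popular": 0.20,
    "mlfoundations/dclm-baseline-1.0-parquet": 0.10,
    "open-web-math/open-web-math": 0.05,
}
# In practice the sources would first be normalized to a common schema
# (e.g. a single "text" column) before they can be interleaved.
streams = [load_dataset(name, split="train", streaming=True) for name in mixture]
mixed = interleave_datasets(streams, probabilities=list(mixture.values()), seed=42)

for example in mixed.take(3):
    print(list(example.keys()))
```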
### Post-training
Post-training was handled by Arcee AI. After completing the globally distributed pretraining phase, we applied several post-training techniques to enhance INTELLECT-1's capabilities and task-specific performance. Our post-training methodology consisted of three main phases.
First, we conducted an extensive series of 16 Supervised Fine-Tuning (SFT) runs, with individual runs ranging from 1 to 3.3 billion tokens each. The most successful configuration used 2.4 billion training tokens over 3 epochs. We used MergeKit, EvolKit, and DistillKit from Arcee AI to combine the models, generate the datasets, and distill the logits, respectively. For training data, we used a diverse set of high-quality datasets:
New Datasets (released with INTELLECT-1):
- arcee-ai/EvolKit-75k (generated via EvolKit)
- arcee-ai/Llama-405B-Logits
- arcee-ai/The-Tomb

Instruction Following:
- mlabonne/open-perfectblend-fixed (generalist capabilities)
- microsoft/orca-agentinstruct-1M-v1-cleaned (Chain-of-Thought)
- Post-training-Data-Flywheel/AutoIF-instruct-61k-with-funcs

Domain-Specific:
- Team-ACE/ToolACE (function calling)
- Synthia coder (programming)
- ServiceNow-AI/M2Lingual (multilingual)
- AI-MO/NuminaMath-TIR (mathematics)

Tulu-3 Persona Datasets:
- allenai/tulu-3-sft-personas-code
- allenai/tulu-3-sft-personas-math
- allenai/tulu-3-sft-personas-math-grade
- allenai/tulu-3-sft-personas-algebra
Second, we executed 8 distinct Direct Preference Optimization (DPO) runs with various combinations of datasets to enhance specific performance metrics and align the model with human preferences. A key advantage in our post-training process was INTELLECT-1's use of the Llama-3 tokenizer, which allowed us to utilize logits from Llama-3.1-405B to heal and maintain precision during the post-training process via DistillKit.
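A minimal sketch of logit distillation of the kind described, i.e. a KL loss between teacher and student token distributions. This is a generic formulation for illustration, not the DistillKit API:
```python
# Generic KL-based logit distillation loss; a hedged illustration only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over the vocabulary, scaled by T^2 as is conventional."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t ** 2)

# Shapes are (batch, seq_len, vocab_size); in the setup described above the teacher
# logits would come from Llama-3.1-405B over the same tokenized inputs.
student_logits = torch.randn(2, 16, 128256)
teacher_logits = torch.randn(2, 16, 128256)
print(distillation_loss(student_logits, teacher_logits))
```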
Finally, we performed 16 strategic merges between candidate models
using MergeKit to create superior combined models that leverage the
strengths of different training runs. During the post-training phase, we
observed that when using a ChatML template without an explicit BOS
(begin-of-sequence) token, the initial loss was approximately 15.
However, when switching to the Llama 3.1 chat template, the loss for
these trainings started much lower at approximately 1.1, indicating
better alignment with the underlying Llama 3 tokenizer.
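To see what a given chat template actually prepends, one can render a prompt with the tokenizer; the snippet below is a small hedged check, assuming the repository ships a Llama-3.1-style chat template:
```python
# Render a prompt with the tokenizer's chat template and inspect what it prepends.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")
messages = [{"role": "user", "content": "What is prime intellect?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt[:80])                                   # should start with the BOS token
print(tokenizer.bos_token, tokenizer.bos_token_id)
```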
The combination of these post-training techniques resulted in
significant improvements in various benchmarks, particularly in
knowledge retrieval, grade school math, instruction following and
reasoning.
### Citations
If you use this model in your research, please cite it as follows:
```bibtex
@article{jaghouar2024intellect,
  title={INTELLECT-1 Technical Report},
  author={Jaghouar, Sami and Ong, Jack Min and Basra, Manveer and Obeid, Fares and Straube, Jannik and Keiblinger, Michael and Bakouch, Elie and Atkins, Lucas and Panahi, Maziyar and Goddard, Charles and Ryabinin, Max and Hagemann, Johannes},
  journal={arXiv preprint},
  year={2024}
}
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/INTELLECT-1-Instruct-Q4_K_S-GGUF --hf-file intellect-1-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/INTELLECT-1-Instruct-Q4_K_S-GGUF --hf-file intellect-1-instruct-q4_k_s.gguf -c 2048
```
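Once the server is running you can query it over HTTP; recent llama.cpp builds expose an OpenAI-compatible endpoint on port 8080 by default, so a request along these lines should work (the payload is illustrative):
```bash
# Query the running llama-server; adjust the port if you started it with --port.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "What is the Metamorphosis of Prime Intellect about?"}],
        "max_tokens": 128
      }'
```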
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/INTELLECT-1-Instruct-Q4_K_S-GGUF --hf-file intellect-1-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/INTELLECT-1-Instruct-Q4_K_S-GGUF --hf-file intellect-1-instruct-q4_k_s.gguf -c 2048
```