---
license: other
pipeline_tag: text-generation
tags:
- cortex.cpp
---

## Overview
OLMo-2 is a series of Open Language Models designed to enable the science of language models. These models are trained on the Dolma dataset, with all code, checkpoints, logs (coming soon), and associated training details made openly available.

OLMo-2 13B Instruct November 2024 is a post-trained variant of the OLMo-2 13B model: it was supervised fine-tuned on an OLMo-specific variant of the Tülu 3 dataset, then further trained with Direct Preference Optimization (DPO) and Reinforcement Learning with Verifiable Rewards (RLVR), optimizing it for state-of-the-art performance across various tasks, including chat, MATH, GSM8K, and IFEval.
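For intuition, the standard DPO objective mentioned above can be sketched in a few lines. This is a generic illustration of the textbook DPO loss for a single preference pair, not the actual OLMo-2 training code; the log-probability inputs and the `beta` value are placeholders:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) preference pair.

    logp_w / logp_l: sequence log-probabilities of the chosen and rejected
    responses under the policy being trained; ref_logp_w / ref_logp_l: the
    same quantities under the frozen reference model (e.g. the SFT checkpoint).
    """
    # Implicit reward margin: how much more the policy (relative to the
    # reference) prefers the chosen response over the rejected one.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): shrinks as the policy learns the preference.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When policy and reference agree exactly, the margin is 0 and the
# loss is log(2) ≈ 0.6931.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))
```

Raising the policy's log-probability of the chosen response (while the reference stays fixed) widens the margin and lowers the loss, which is the gradient signal DPO trains on.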

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [Olmo-2-7b](https://huggingface.co/cortexso/olmo-2/tree/7b) | `cortex run olmo-2:7b` |
| 2 | [Olmo-2-13b](https://huggingface.co/cortexso/olmo-2/tree/13b) | `cortex run olmo-2:13b` |

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart) guide
2. In the Jan Model Hub, enter:
    ```bash
    cortexhub/olmo-2
    ```

## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart) guide
2. Run the model with the command:
    ```bash
    cortex run olmo-2
    ```
## Credits

- **Author:** allenai
- **Converter:** [Homebrew](https://homebrew.ltd/)
- **Original License:** [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/)
- **Paper:** [arXiv:2501.00656](https://arxiv.org/abs/2501.00656)