This is the first of many reasoning-and-reflection instruction-tuned generative models at the 3B size (text in/text out).

Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), with GRPO fine-tuning via Unsloth, to align with human preferences for helpfulness and safety.
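As a rough sketch of what the GRPO stage looks like in code, the snippet below uses TRL's GRPOTrainer with a reward that checks for the reasoning/reflecting/answer output format. The dataset and reward function here are illustrative assumptions, not the published training recipe.

import re
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative reward: score completions that follow the
# <reasoning>/<reflecting>/<answer> output format.
def format_reward(completions, **kwargs):
    pattern = r"<reasoning>.*?</reasoning>\s*<reflecting>.*?</reflecting>\s*<answer>.*?</answer>"
    return [1.0 if re.search(pattern, c, re.DOTALL) else 0.0 for c in completions]

# Toy prompt dataset (assumption; the real training data is not published here).
train_dataset = Dataset.from_dict({"prompt": ["Which is bigger? 9.11 or 9.9?"]})

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",
    reward_funcs=format_reward,
    args=GRPOConfig(output_dir="grpo-out", logging_steps=10),
    train_dataset=train_dataset,
)
trainer.train()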

Use with transformers

Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via pip install --upgrade transformers.

import torch
from transformers import pipeline

model_id = "EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a powerful assistant Respond in the following format:
<reasoning>
...
</reasoning>
<reflecting>
...
</reflecting>
<answer>
...
</answer>"},
    {"role": "user", "content": "Which is bigger? 9.11 or 9.9?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
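The same chat can be run with the Auto classes and generate() directly. A minimal sketch, reusing the messages list above; the tag-extraction regex at the end is an illustrative convenience, not part of the model's API:

import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Render the same `messages` list as above with the chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
completion = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Pull the final answer out of the tagged output (illustrative helper).
match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
print(match.group(1).strip() if match else completion)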

Using SuperTransformer

from SuperTransformer import SuperTransformers

# SuperTransformers takes (1) a Hugging Face model ID, (2) a system prompt, (3) the user text/prompt, (4) the max tokens.
super_transformer = SuperTransformers(
    "EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect",
    "You are a highly knowledgeable assistant with expertise in chemistry and physics. <reasoning>...</reasoning><reflecting></reflecting><answer></answer>",
    "What is the area of a circle, radius=16, reason step by step",
    2026,
)
# 8-bit quantization
super_transformer.HuggingFaceTransformer8bit()
# or 4-bit quantization
super_transformer.HuggingFaceTransformer4bit()
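The same 8-bit/4-bit loading can also be done in plain transformers with bitsandbytes. A minimal sketch; the quantization settings here are illustrative defaults, not tuned values:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # or load_in_8bit=True for 8-bit
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)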

Uploaded model

  • Developed by: EpistemeAI
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
  • Model size: 3.21B parameters (BF16 safetensors)

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
