Thought-Ranked Llama 3.2 3B v3.0

What's New in v3?

The major advancement in v3 is the integration of reinforcement learning to refine the model's outputs. Using OpenRLHF with REINFORCE and Gemini 1.5 Flash 8B as a judge, we've optimized the model to produce higher-quality responses across criteria including relevance, accuracy, clarity, style, and completeness.

This RL fine-tuning process used a judge-based reward model that scores each response on a 0-99 scale, weighing factors such as the following (a minimal sketch of the scoring setup appears after the list):

  • Intent fulfillment and practical utility
  • Factual accuracy and logical consistency
  • Clarity and understandability
  • Style and tone appropriateness
  • Completeness and detail sufficiency
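The exact judge prompt and score parsing used in training aren't published here, but a minimal sketch of judge-based scoring might look like the following. Here `judge` is any text-in/text-out callable (for example, a thin wrapper around the Gemini API), and the rubric wording, function name, and normalization are illustrative assumptions, not the training setup itself:

import re

JUDGE_RUBRIC = (  # hypothetical rubric text, not the one used in training
    "Rate the answer from 0 to 99 for intent fulfillment, factual accuracy, "
    "clarity, style, and completeness. Reply with the number only.\n\n"
    "Question: {question}\n\nAnswer: {answer}"
)

def score_response(judge, question, answer):
    """Ask a judge model for a 0-99 score and normalize it to [0, 1]."""
    reply = judge(JUDGE_RUBRIC.format(question=question, answer=answer))
    match = re.search(r"\d{1,2}", reply)
    if match is None:
        return 0.0  # treat unparseable judge output as the minimum score
    return min(int(match.group()), 99) / 99.0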

How It Works

The model maintains the same powerful thought chain capabilities from v2.2, but with enhanced output quality. Here's an example:

<thoughts>
<thought>First, I should consider the moon's main effects on Earth</thought>
<thought>The moon controls our tides, so ocean patterns would change dramatically</thought>
<thought>Without the moon's gravitational pull, Earth's rotation would become unstable</thought>
<thought>This would lead to extreme climate changes and disrupted ecosystems</thought>
<thought>The loss of moonlight would affect nocturnal animals and human culture</thought>
<thought>Combining all these effects, we'd see a cascade of environmental changes</thought>
</thoughts>

The disappearance of the moon would have far-reaching consequences for Earth...
[detailed answer follows]
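Because the thoughts are wrapped in literal XML-style tags, separating them from the visible answer is straightforward. A small helper for doing so (the function name, and the assumption that the tags always appear exactly as above, are ours):

import re

def split_thoughts(text):
    """Split generated text into its <thought> entries and the final answer."""
    thoughts = re.findall(r"<thought>(.*?)</thought>", text, flags=re.DOTALL)
    # Everything after the closing </thoughts> tag is the user-facing answer.
    answer = text.split("</thoughts>", 1)[-1].strip()
    return thoughts, answer

thoughts, answer = split_thoughts(response)  # response as in Example Usage below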

System Messages

The model continues to support various system prompts:

  1. Basic prompt:
{"role": "system", "content": "You are a helpful assistant. Think before responding."}
  2. Specific thought count:
{"role": "system", "content": "You are a helpful assistant. Think 3 thoughts before responding."}
  3. Standard helper:
{"role": "system", "content": "You are a helpful assistant."}

Technical Details

Base Architecture

  • Base Model: Llama 3.2 3B
  • Initial Training: 2,500 carefully selected examples with up to 6 levels of thought chains
  • Thought Selection: Multi-level thought generation with an external ranking system (sketched below)
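The card doesn't spell out the selection algorithm, but a greedy reading of "multi-level thought generation with external ranking" can be sketched as follows; `generate_candidates` and `rank` are hypothetical stand-ins for the sampling model and the external ranker:

def build_thought_chain(question, generate_candidates, rank, max_levels=6):
    """Greedily grow a thought chain: at each level, sample candidate
    thoughts, keep the one the external ranker scores highest, and
    condition the next level on the chain so far."""
    chain = []
    for _ in range(max_levels):
        candidates = generate_candidates(question, chain)
        if not candidates:
            break
        chain.append(max(candidates, key=lambda c: rank(question, chain, c)))
    return chain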

RL Fine-tuning

  • Framework: OpenRLHF
  • Algorithm: REINFORCE
  • Judge Model: Gemini 1.5 Flash 8B
  • Training Parameters:
    • Actor Learning Rate: 5e-7
    • Critic Learning Rate: 9e-6
    • Initial KL Coefficient: 0.01
    • Batch Size: 128
    • Max Epochs: 1
    • Prompt/Generation Max Length: 1024
    • BF16 Precision
    • Flash Attention enabled
    • Gradient Checkpointing
  • Training Data: OpenRLHF/prompt-collection-v0.1
  • Infrastructure: Ray distributed training with VLLM acceleration
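OpenRLHF handles the optimization internally, so none of the following is needed to reproduce the run, but a toy version of the REINFORCE objective with the listed initial KL coefficient shows how the judge scores drive the policy update (the tensor shapes and batch-mean baseline are our assumptions):

import torch

def reinforce_loss(logprobs, ref_logprobs, rewards, kl_coef=0.01):
    """Toy REINFORCE step: logprobs and ref_logprobs are (batch,) summed
    sequence log-probabilities under the actor and the frozen reference
    policy; rewards are (batch,) judge scores."""
    kl = logprobs - ref_logprobs              # per-sequence KL estimate
    shaped = rewards - kl_coef * kl.detach()  # KL-penalized reward
    advantage = shaped - shaped.mean()        # batch-mean baseline
    return -(advantage.detach() * logprobs).mean()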

What's It Good For?

The model excels at tasks requiring careful thinking and high-quality outputs:

✅ Breaking down complex problems with logical progression
✅ Step-by-step mathematical solutions with clear explanations
✅ Detailed analysis with well-structured arguments
✅ Clear and appropriate explanations of complicated concepts
✅ Well-reasoned decision-making with supporting evidence

Limitations

  • May still occasionally overthink simple problems
  • Bounded by base Llama 3.2 3B model capabilities
  • Not suitable for critical decisions without human oversight
  • Could generate irrelevant thought chains in edge cases
  • RL training might lead to occasional reward hacking behaviors

Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ericflo/Llama-3.2-3B-COT-v3.0")
tokenizer = AutoTokenizer.from_pretrained("ericflo/Llama-3.2-3B-COT-v3.0")

messages = [
    {"role": "system", "content": "You are a helpful assistant. Think 3 thoughts before responding."},
    {"role": "user", "content": "How would you teach a child to ride a bike?"}
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(
    input_ids,
    do_sample=True,       # sampling must be enabled for temperature to apply
    temperature=1.0,
    max_new_tokens=1024,  # matches the 1024-token generation length used in training
)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

Citation

@misc{thought-ranked-llama-v3,
  title={Thought-Ranked Llama 3.2 v3: RL-Optimized Hierarchical Chain-of-Thought Generation},
  author={Eric Florenzano},
  year={2024},
  howpublished={\url{https://huggingface.co/ericflo/Llama-3.2-3B-COT-v3}}
}

Acknowledgments

This model builds on the Llama 3.2 3B base model from Meta and incorporates RL training using Google's Gemini 1.5 Flash 8B as a judge. Special thanks to the open-source AI community for their contributions to chain-of-thought prompting techniques and reinforcement learning frameworks.
