
Language:

  • English

License: Apache-2.0

Tags:

  • causal-lm
  • text-generation
  • 14b
  • ozone-research
  • resonance
  • open-source

Base Model:

  • Qwen/Qwen2.5-14B-Instruct

Resonance-01: Ozone Research

Model Description

Resonance-01 is a 14 billion parameter language model developed by Ozone Research, built on the Qwen/Qwen2.5-14B-Instruct framework. It’s a causal language model optimized for advanced text generation and reasoning.

Major Highlight: Chain-of-Thought (CoT) Reasoning

Resonance-01 breaks complex problems down into clear, step-by-step solutions, the capability that most distinguishes it from comparable models. This is Ozone Research's most advanced release to date.

Join Our Community

https://discord.gg/ozone

Intended Uses & Limitations

Resonance-01 leverages its CoT reasoning for research, content creation, and enterprise applications. Key uses include:

  • Text generation with logical flow
  • Question answering with detailed, step-by-step explanations (see the sketch after this list)
  • Creative writing with coherent narratives
  • Technical analysis requiring structured problem-solving
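
As an illustration of the question-answering use case, the sketch below asks for a step-by-step answer through the transformers pipeline API. The prompt wording and generation settings here are illustrative assumptions, not published recommendations for Resonance-01.

from transformers import pipeline

# Prompt wording and settings below are illustrative assumptions,
# not officially recommended values for Resonance-01.
generator = pipeline(
    "text-generation",
    model="ozone-research/Resonance-01",
    torch_dtype="auto",  # load in the checkpoint's native precision
)

question = "A train travels 120 km in 1.5 hours. What is its average speed?"
prompt = f"Answer step by step.\n\nQuestion: {question}\nAnswer:"

result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])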

Limitations:

  • May generate biased or inaccurate content; safeguards are recommended.
  • Performance varies by task and input complexity.

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ozone-research/Resonance-01"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# torch_dtype="auto" loads the checkpoint in its native precision
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

# The model is built on Qwen2.5-14B-Instruct, so format prompts with the chat template
messages = [{"role": "user", "content": "Explain why the sky is blue."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# max_new_tokens bounds the reply length without counting the prompt
generation_output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(generation_output[0][input_ids.shape[-1]:], skip_special_tokens=True))
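
For longer reasoning chains, generation can be tuned with standard sampling parameters. The values below are illustrative assumptions, not published defaults for this model:

generation_output = model.generate(
    input_ids,
    max_new_tokens=512,  # leave room for multi-step reasoning
    do_sample=True,      # sample rather than decode greedily
    temperature=0.7,     # assumed values for illustration only
    top_p=0.9,
)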

Evaluation

Benchmarks

Benchmark data is pending. Resonance-01's CoT reasoning is expected to perform well on MMLU-Pro, particularly on problem-solving tasks; full results will be published soon.

Training Details

  • Training Infrastructure: 1× NVIDIA H100 GPU
  • Training Procedure: 3 epochs of LoRA fine-tuning (see the sketch below)
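
The LoRA hyperparameters have not been published. As a rough sketch of what such a setup looks like with the peft library, the configuration below uses assumed values for rank, alpha, dropout, and target modules; none of these are confirmed for Resonance-01.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct", torch_dtype="auto"
)

# Every hyperparameter below is an assumption for illustration;
# the actual Resonance-01 LoRA configuration is not published.
lora_config = LoraConfig(
    r=16,               # adapter rank (assumed)
    lora_alpha=32,      # scaling factor (assumed)
    lora_dropout=0.05,  # adapter dropout (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # shows the small fraction of trainable weights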

Contact

For inquiries, reach out at info@ozone-ai.com or visit https://www.ozone-ai.com. You can also reach us on Discord at https://discord.gg/ozone.

Attribution

Built with Qwen. Users must comply with the Qwen license agreement.
