I created this model with the primary intent of reviewing Python code. It is an ongoing work in progress, and I will update it over time.

Give the model Python code and it will return an analysis, including suggestions for improving the code.
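For example, here is a minimal sketch of prompting the model for a review with the transformers library. The example snippet, prompt wording, and generation settings are illustrative assumptions rather than part of this card, and loading with `device_map="auto"` assumes accelerate is installed:

```python
# Minimal sketch: load the model and ask it to review a Python snippet.
# The prompt wording and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dbands/tantrum_16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# A small example snippet to review (hypothetical, for illustration only).
code_to_review = '''
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)
'''

messages = [
    {
        "role": "user",
        "content": f"Review the following Python code and suggest improvements:\n{code_to_review}",
    }
]

# Assumes the tokenizer ships a chat template (as the Dolphin base models do).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens (the model's review).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```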

This model can also be used to generate prompts that, in turn, produce "clean" synthetic code bases for fine-tuning models that generate Python code for tool use, as sketched below.
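A rough sketch of that workflow, assuming a recent transformers version whose text-generation pipeline accepts chat-style messages; the instruction text and settings are illustrative, not part of this card:

```python
# Sketch: use the model to draft a prompt for synthetic code-base generation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="dbands/tantrum_16bit",
    torch_dtype="auto",
    device_map="auto",
)

# Hypothetical instruction asking the model to write a data-generation prompt.
request = (
    "Write a detailed prompt that asks a code-generation model to produce a small, "
    "clean Python code base for a command-line weather tool, including type hints, "
    "docstrings, and unit tests."
)

result = generator([{"role": "user", "content": request}], max_new_tokens=400)
# With chat-style input, the pipeline returns the full conversation;
# the last message is the assistant's generated prompt.
print(result[0]["generated_text"][-1]["content"])
```

The generated prompt can then be fed to a separate code-generation model to build the synthetic code base.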


---
base_model: cognitivecomputations/dolphin-2.9.2-qwen2-7b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---

Uploaded model

- Developed by: dbands
- License: apache-2.0
- Finetuned from model: cognitivecomputations/dolphin-2.9.2-qwen2-7b

This Qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

