
Model Card for tinymistral-v2-pycoder-instruct-248m

This model card is for tinymistral-v2-pycoder-instruct, a Python-specific code generation model fine-tuned on top of Locutusque/TinyMistral-248M-v2-Instruct.

Model Details

This instruct model follows the base model in using the ChatML prompt format.

An empty prompt will return assorted output from the base model, but using the instruct format will produce Python code of varying quality.

Model Description

Both this model and its base model are under active development, and all outputs should be treated with caution.

Uses

Generate Python code from instruction-style prompts.

Direct Use

The model could probably be fine-tuned further with a more comprehensive dataset; experiments are in progress.

How to Get Started with the Model

Use the prompt format below to get started with the model.

<|im_start|>user
Write a function for multiplying two numbers, from variables 'a' and 'b'.<|im_end|>
<|im_start|>assistant
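
As a concrete starting point, here is a minimal sketch of loading the model with the Hugging Face transformers library and generating from the prompt format above; the generation settings (such as max_new_tokens) are illustrative assumptions, not recommended values.

```python
# Minimal usage sketch with the transformers library; generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jtatman/tinymistral-v2-pycoder-instruct-248m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ChatML-style prompt, as shown above.
prompt = (
    "<|im_start|>user\n"
    "Write a function for multiplying two numbers, from variables 'a' and 'b'.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```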

Training Details

Training Data

Custom-formatted existing Python datasets.

Training Procedure

Training is repeated, depending on the available compute budget.

Preprocessing

Conversion of the source data to Alpaca/instruct format.
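
For illustration, here is a minimal sketch of the kind of conversion described above, turning an Alpaca-style record into a ChatML prompt; the field names and the example record are assumptions, not the actual dataset schema.

```python
# Illustrative sketch: convert an Alpaca-style record into a ChatML training string.
# Field names ("instruction", "input", "output") are assumed, not the real schema.
def alpaca_to_chatml(record):
    instruction = record["instruction"]
    if record.get("input"):
        instruction = f"{instruction}\n{record['input']}"
    return (
        "<|im_start|>user\n"
        f"{instruction}<|im_end|>\n"
        "<|im_start|>assistant\n"
        f"{record['output']}<|im_end|>\n"
    )

example = {
    "instruction": "Write a function for multiplying two numbers, from variables 'a' and 'b'.",
    "input": "",
    "output": "def multiply(a, b):\n    return a * b",
}
print(alpaca_to_chatml(example))
```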

Training Hyperparameters

  • Training regime: fp16 mixed precision; parameter-efficient fine-tuning adapters are merged into the model when necessary and helpful (see the sketch below).
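
As an illustration of the adapter-merging step, here is a minimal sketch using the peft library; the base checkpoint is the one named in this card, but the adapter path and output directory are hypothetical.

```python
# Sketch of merging a PEFT (LoRA-style) adapter into the base model.
# The adapter path and output directory below are hypothetical placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Locutusque/TinyMistral-248M-v2-Instruct")
adapter = PeftModel.from_pretrained(base, "path/to/pycoder-adapter")  # hypothetical adapter path

# Fold the adapter weights into the base parameters and save a standalone model.
merged = adapter.merge_and_unload()
merged.save_pretrained("tinymistral-v2-pycoder-instruct-248m-merged")
```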

Evaluation

Metrics

Latest metrics:

  • epoch: 4.87
  • global_step: 220
  • learning_rate: 0.00006713780918727916
  • loss: 2.3736