Model Card: Minimalist Assistant

Model Details

  • Base Model: Mistral Instruct v2
  • Tokenizer: based on the Mistral Instruct tokenizer
  • Model Size: 7.24B parameters (Safetensors; F32/FP16 tensors)
  • Repository: kevin009/minirewrite

Intended Use

  • An editing assistant for revision and paraphrasing (see the usage sketch below)
  • Avoids technical jargon in favor of clear, accessible language
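
The snippet below is a minimal usage sketch with the Hugging Face transformers library. It assumes the model inherits the Mistral Instruct chat template; the prompt text is only illustrative.

```python
# Minimal usage sketch (assumes the transformers library and that the chat
# template is inherited from Mistral Instruct v2; the prompt is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevin009/minirewrite"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user",
     "content": "Rewrite this in plain language: The aforementioned individual proceeded to exit the premises."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Greedy decoding keeps the revision close to the source text.
outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```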

Training Data

  • Initial fine-tuning: 14,000 conversations written in a minimalist style with more accessible language
    • Dataset: kevin009/system-defined-sft-llama3-14k (see the loading sketch after this list)
  • Further fine-tuning: 8,000 revision conversations to strengthen rewriting and paraphrasing.
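
To inspect the SFT data, the sketch below loads it with the datasets library; the "train" split name is an assumption, as the card does not document the dataset's layout.

```python
# Minimal inspection sketch (assumes the dataset is publicly accessible and has
# a "train" split; column names are not documented on this card).
from datasets import load_dataset

sft_data = load_dataset("kevin009/system-defined-sft-llama3-14k", split="train")
print(sft_data)      # available columns and row count
print(sft_data[0])   # a single conversation example
```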

Performance and Limitations

  • Limitations:
    • May produce shorter outputs than the base model.
    • May reflect biases present in the training data.

Ethical Considerations

  • Designed for everyday use; potential biases inherited from the training data should be considered.
  • The model has no built-in safety measures to prevent the generation of harmful or offensive content.

Additional Information

  • Fine-tuned with supervised fine-tuning (SFT)
  • Developed to address limitations in writing tasks observed in existing models
  • Tailored for everyday use cases