Model Card: LsTam/test_Qwen_fr_14b

Model Overview

  • Base Model: Qwen2.5-14B-Instruct
  • Target Language: French
  • Use Case: Natural Language Processing tasks including text generation, translation, comprehension, and instruction-following in French.

Intended Use

  • Primary Application: Testing and evaluating the performance of the Qwen2.5-14B-Instruct model on various NLP tasks in the French language.
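
A minimal inference sketch using the Hugging Face transformers library is shown below. The French prompt and generation settings are illustrative assumptions rather than settings published by the model authors, and the standard Qwen2.5 chat template is assumed to apply.

```python
# Minimal inference sketch (assumes transformers and accelerate are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LsTam/test_Qwen_fr_14b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A French instruction-following prompt (illustrative).
messages = [
    {"role": "user", "content": "Explique la photosynthèse en deux phrases."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```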

Model Details

  • Architecture: Transformer-based large language model.
  • Parameters: approximately 14.8 billion, stored as F32 safetensors.
  • Fine-tuning Data: Derived from a diverse set of French corpora (an illustrative data-format sketch follows this list):
    • French-language multiple-choice (MCQ), true/false, and fill-in-the-blank exercises
    • French instruction datasets such as angeluriot/french_instruct
    • Other French corpora and internal data
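
The exact fine-tuning data format is not published. As an illustration only, the sketch below shows one plausible way a French MCQ exercise could be rendered as a chat-style training example; the schema, the helper mcq_to_chat_example, and the wording are hypothetical.

```python
# Hypothetical sketch: converting a French MCQ exercise into a chat-format
# fine-tuning example. The actual format used for this model is not published.
def mcq_to_chat_example(question: str, choices: list[str], answer: str) -> dict:
    lettered = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    prompt = (
        "Réponds au QCM suivant en donnant la lettre de la bonne réponse.\n\n"
        f"{question}\n{lettered}"
    )
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": answer},
        ]
    }

example = mcq_to_chat_example(
    question="Quelle est la capitale de la France ?",
    choices=["Lyon", "Paris", "Marseille"],
    answer="B. Paris",
)
print(example)
```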

Limitations

  • Bias: A bias analysis has not yet been conducted and is forthcoming.