Model Description

FALLAMA is a specialized model fine-tuned to detect and classify logical fallacies in text. Built on top of the LLAMA 3.1 8B architecture, this model has been carefully trained to recognize a wide range of logical fallacies and categorize them according to a predefined taxonomy.

  • Developed by: Elia Mengozzi
  • Model type: LLAMA 3.1 8B
  • License: Apache 2.0
  • Finetuned from model: LLAMA 3.1 8B

Uses

FALLAMA is intended for detecting logical fallacies in text: given a statement, it indicates whether a fallacy is present and classifies it into one of the predefined categories of its taxonomy.
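The following is a minimal usage sketch, assuming the weights are published as a standard causal-LM checkpoint under the repo ID EliaMengozzi/fallama and loadable with the transformers library; the prompt format shown is an assumption, not a documented template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo ID and checkpoint format; adjust if the weights are hosted differently.
model_id = "EliaMengozzi/fallama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

statement = "Everyone I know loves this product, so it must be the best on the market."
# Hypothetical prompt wording; the model card does not specify an expected template.
prompt = f"Identify and classify the logical fallacy in the following statement:\n{statement}\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```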

Bias, Risks, and Limitations

FALLAMA is not a serious model and was developed solely for educational purposes. The results produced by this model should not be considered valid or reliable, and they should always be manually checked and verified. The model's outputs are intended to illustrate the process of detecting and classifying logical fallacies, but they are not suitable for any critical or professional use.

Training Data

FALLAMA was fine-tuned on a dataset specifically curated for logical fallacies, covering a diverse range of fallacy types. The model was trained not only to identify the presence of a fallacy in a statement but also to classify it into one of the predefined categories.
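Purely for illustration, a single training record might look like the sketch below; the actual dataset schema and label set are not documented here, so every field name and value is a hypothetical example.

```python
# Hypothetical structure of one fine-tuning record (field names and labels are assumptions).
example_record = {
    "statement": "You can't trust his argument about climate policy; he isn't even a scientist.",
    "contains_fallacy": True,
    "fallacy_type": "ad hominem",  # one of the predefined categories in the taxonomy
}
```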
