---
metrics:
- accuracy
library_name: peft
pipeline_tag: image-classification
tags:
- orange disease classification
- leaves classifier
- orange diseases
---

# ViTOrangeLeafDiseaseClassifier

This model is a fine-tuned version of the Vision Transformer (ViT) model [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k), tailored for detecting diseases in orange leaves. It was fine-tuned on a dataset of 5,185 images of orange leaves categorized into ten classes.

## Model Description

The ViTOrangeLeafDiseaseClassifier model classifies an orange-leaf image into one of the following ten categories (English glosses in parentheses):

- Aleurocanthus spiniferus (orange spiny whitefly)
- Chancre citrique (citrus canker)
- Cochenille blanche (white scale insect)
- Dépérissement des agrumes (citrus dieback)
- Feuille saine (healthy leaf)
- Jaunissement des feuilles (leaf yellowing)
- Maladie de l'oïdium (powdery mildew)
- Maladie du dragon jaune (huanglongbing, or citrus greening)
- Mineuse des agrumes (citrus leafminer)
- Trou de balle (shot hole)

A minimal inference sketch appears in the "How to Use" section at the end of this card.

## Intended Uses & Limitations

### Intended Uses

This model is intended to help farmers, agricultural researchers, and agronomists diagnose diseases in orange leaves from images. Use cases include:

- Early detection of diseases to prevent their spread and reduce crop loss.
- Assisting in field research and agricultural studies.

### Limitations

- The model is only as good as the dataset it was trained on; it may not perform well on images that differ significantly from the training data.
- Environmental factors such as lighting, leaf condition, and background can affect the model's accuracy.
- The model should not be used as the sole diagnostic tool. It is recommended to use it alongside other diagnostic methods.

## Training Data

The model was trained on a custom dataset of 5,185 images of orange leaves, categorized into the ten classes listed above. The images cover a range of disease conditions as well as healthy leaves, collected from different sources.

## Training Procedure

### Hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch appears at the end of this card):

- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 6
- num_train_epochs: 15
- weight_decay: 1e-5
- logging_steps: 10
- fp16: True (mixed-precision training)
- save_strategy: "epoch"
- eval_strategy: "epoch"
- load_best_model_at_end: True
- metric_for_best_model: "accuracy"

### Training Results

The model achieved the following results on the evaluation set:

- Training Loss: 0.004
- Validation Loss: 0.005
- Accuracy: 99.65%

## Framework Versions

- PEFT: 0.11.1
- Transformers: 4.41.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.20.0
- Tokenizers: 0.19.1

## Citation

If you use this model, please cite the original Vision Transformer paper (Dosovitskiy et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", arXiv:2010.11929) and acknowledge the dataset contributors.

## Contact

For any further questions or support, contact khadijaasehnoune@gmail.com.
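## How to Use

The snippet below is a minimal inference sketch, not verified loading code for this repository: the adapter repo id and image path are placeholders, and the label order is assumed to follow the class list above. It also assumes the classification head was saved alongside the adapter (e.g., via PEFT's `modules_to_save`); verify the label mapping against the checkpoint's config before relying on the output.

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

BASE_MODEL = "google/vit-base-patch16-224-in21k"
ADAPTER_REPO = "your-username/ViTOrangeLeafDiseaseClassifier"  # placeholder repo id

# Assumed class order; check it against the adapter's saved config.
LABELS = [
    "Aleurocanthus spiniferus", "Chancre citrique", "Cochenille blanche",
    "Dépérissement des agrumes", "Feuille saine", "Jaunissement des feuilles",
    "Maladie de l'oïdium", "Maladie du dragon jaune", "Mineuse des agrumes",
    "Trou de balle",
]

# Load the base ViT with a 10-class head, then attach the PEFT adapter.
processor = AutoImageProcessor.from_pretrained(BASE_MODEL)
base = AutoModelForImageClassification.from_pretrained(
    BASE_MODEL,
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={label: i for i, label in enumerate(LABELS)},
)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)
model.eval()

# Preprocess an image and take the highest-scoring class.
image = Image.open("leaf.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(f"Predicted class: {LABELS[pred]}")
```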
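## Training Configuration Sketch

For reference, this is how the hyperparameters above could map onto `transformers.TrainingArguments` with the standard `Trainer` API (argument names follow Transformers 4.41, where `eval_strategy` is the supported spelling). The `output_dir` is a placeholder, and the surrounding `Trainer` setup (model, datasets, collator) is omitted; this is a reconstruction from the card, not the original training script.

```python
import numpy as np
from transformers import TrainingArguments

def compute_metrics(eval_pred):
    # Accuracy used for checkpoint selection (metric_for_best_model).
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": (preds == labels).mean()}

training_args = TrainingArguments(
    output_dir="vit-orange-leaf-disease",  # placeholder output path
    learning_rate=0.01,
    per_device_train_batch_size=16,   # card's train_batch_size
    per_device_eval_batch_size=16,    # card's eval_batch_size
    gradient_accumulation_steps=6,
    num_train_epochs=15,
    weight_decay=1e-5,
    logging_steps=10,
    fp16=True,                        # mixed-precision training
    save_strategy="epoch",
    eval_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
```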