---
model: vgg16-fruits-classifier
base_model: VGG16
dataset: fruits-360
metrics:
  - accuracy: 96.02%
  - loss: 0.1585
license: mit
tags:
  - image-classification
  - food
  - fruit-recognition
  - transfer-learning
library_name: keras
---

# VGG16 Fruits Classifier

## Model Description

This model is a fine-tuned version of the VGG16 architecture on the Fruits-360 dataset from Kaggle. VGG16 is a convolutional neural network proposed by K. Simonyan and A. Zisserman of the University of Oxford. The fine-tuned model recognizes 131 different types of fruit with high accuracy.

## Intended Uses & Limitations

### Intended Uses

- **Food Recognition**: The model classifies images of fruits into one of 131 categories. It is particularly useful for applications such as grocery-store product recognition, dietary monitoring, and educational tools about fruits.

### Limitations

- **Generalization**: The model is trained specifically on the Fruits-360 dataset and may not generalize well to fruits outside this dataset or to images with significantly different characteristics (e.g., lighting, angles, or backgrounds).
- **Biases**: Performance may vary across fruit categories due to the distribution and quality of the dataset images.

## Training and Evaluation Data

The model was trained and evaluated on the Fruits-360 dataset, which contains more than 90,000 images of fruits split into training and test sets.

### Dataset Statistics

- **Training Set**: 54,190 images
- **Validation Set**: 13,502 images
- **Test Set**: 22,688 images

Each image is labeled with one of 131 fruit categories.

## Training Procedure

### Data Augmentation

- **Preprocessing**: All images were preprocessed with the `preprocess_input` function from the VGG16 module.
- **Augmentation**: Images were augmented with random rotations, shifts, zooms, flips, and similar transformations to improve generalization (see the data-pipeline sketch at the end of this card).

### Model Architecture

The model consists of the VGG16 base with the top fully connected layers removed, followed by a global average pooling layer and a dense output layer with softmax activation. A construction sketch is given at the end of this card.

### Hyperparameters

- **Optimizer**: Adam with a learning rate of 0.0001
- **Loss Function**: Categorical Crossentropy
- **Batch Size**: 64
- **Epochs**: 5

### Training Results

After 5 epochs of training, the model reached a validation accuracy of 97.04% and a test accuracy of 96.02%.

## Evaluation Metrics

- **Accuracy**: 96.02%
- **Loss**: 0.1585

These metrics indicate that the model performs very well on the test set of the Fruits-360 dataset.

## Framework Versions

- **TensorFlow**: 2.x
- **Keras**: 2.x

## Example Code

You can run inference with the model using the following example code. Note that the input image must be resized to 100×100 pixels and passed through the same `preprocess_input` function used during training:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input

# Load the saved model
model = tf.keras.models.load_model('path_to_model')

# Load and preprocess the image you want to classify
img_path = 'path_to_image'
img = image.load_img(img_path, target_size=(100, 100))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)   # add a batch dimension
x = preprocess_input(x)         # same preprocessing as during training

# Predict the class of the image
predictions = model.predict(x)
predicted_class = int(np.argmax(predictions, axis=1)[0])
print(f"Predicted class index: {predicted_class}")
```
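
## Reference Sketches

The snippets below are not the original training script; they are minimal sketches of how the procedure described in this card could be reproduced with Keras.

### Data Pipeline

The preprocessing and augmentation described above map naturally onto Keras' `ImageDataGenerator`. In the sketch below, the directory paths, the specific augmentation ranges, and the use of `flow_from_directory` are assumptions; the `preprocess_input` call, the 100×100 input size, the batch size of 64, and the roughly 80/20 training/validation split (54,190 vs. 13,502 images) come from this card.

```python
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmenting generator for the training subset.
# Ranges are illustrative; the card only names the transformation types.
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # same preprocessing as at inference time
    rotation_range=20,        # random rotations
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    zoom_range=0.1,           # random zooms
    horizontal_flip=True,     # random flips
    validation_split=0.2,     # ~20% of training images held out, matching the card's split
)

# Non-augmenting generators for validation and test data.
val_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    validation_split=0.2,
)
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

# Placeholder paths; Fruits-360 ships one subfolder per class.
train_gen = train_datagen.flow_from_directory(
    'fruits-360/Training',
    target_size=(100, 100), batch_size=64,
    class_mode='categorical', subset='training',
)
val_gen = val_datagen.flow_from_directory(
    'fruits-360/Training',
    target_size=(100, 100), batch_size=64,
    class_mode='categorical', subset='validation',
)
test_gen = test_datagen.flow_from_directory(
    'fruits-360/Test',
    target_size=(100, 100), batch_size=64,
    class_mode='categorical', shuffle=False,
)
```

Two generators with the same `validation_split` are used so that validation images receive only preprocessing, not augmentation.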
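
### Model Construction and Training

The architecture and hyperparameters listed above translate into the following Keras code. This is again a sketch: the card does not state whether the VGG16 convolutional base was frozen during fine-tuning, so the snippet freezes it as a common transfer-learning default. The optimizer, learning rate, loss function, and number of epochs are taken from the card.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# VGG16 base without the top fully connected layers
base = VGG16(weights='imagenet', include_top=False, input_shape=(100, 100, 3))
base.trainable = False  # assumption: frozen convolutional base

# Classification head: global average pooling + softmax over the 131 classes
x = GlobalAveragePooling2D()(base.output)
outputs = Dense(131, activation='softmax')(x)
model = Model(inputs=base.input, outputs=outputs)

model.compile(
    optimizer=Adam(learning_rate=1e-4),   # Adam, learning rate 0.0001 (from the card)
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)

# Train for 5 epochs using the generators from the data-pipeline sketch
model.fit(train_gen, validation_data=val_gen, epochs=5)

# Evaluate on the held-out test set
test_loss, test_acc = model.evaluate(test_gen)
print(f"Test accuracy: {test_acc:.4f}, test loss: {test_loss:.4f}")
```

Global average pooling keeps the head small compared with VGG16's original dense layers, which helps when fine-tuning on 100×100 inputs with only a few epochs.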
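
### Mapping Predictions to Class Names

The inference example above prints a class index. To map that index back to a fruit name, the class ordering produced by `flow_from_directory` can be reused; the snippet below assumes a generator like `train_gen` from the data-pipeline sketch and the `predicted_class` variable from the example code.

```python
# Build an index -> class-name lookup from the training generator's class mapping
index_to_class = {v: k for k, v in train_gen.class_indices.items()}
print(index_to_class[predicted_class])  # e.g. the folder name of the predicted fruit
```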