---
title: Image Classification Cifar10 GradCAM
emoji: 💻
colorFrom: pink
colorTo: indigo
sdk: gradio
sdk_version: 3.39.0
app_file: app.py
pinned: false
license: apache-2.0
---
# IMAGE CLASSIFICATION with GradCAM
A simple Gradio interface to visualize the output of a CNN trained on the **CIFAR10** dataset, with **GradCAM** heatmaps and **misclassified images**.
The architecture is inspired by David Page's (myrtle.ai) DAWNBench-winning model architecture. Please refer to https://myrtle.ai/learn/how-to-train-your-resnet-8-bag-of-tricks/ to learn more.
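Below is a rough sketch of a ResNet-9-style topology in the spirit of that DAWNBench model. The layer names, channel widths, and helper function are illustrative assumptions, not the exact training code; see the training repo linked below for the real definition.

```python
import torch.nn as nn

def conv_bn_act(in_ch, out_ch, pool=False):
    """3x3 conv -> BatchNorm -> ReLU, with an optional 2x2 max-pool."""
    layers = [
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    ]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class ResNet9(nn.Module):
    """Illustrative ResNet-9-style CIFAR-10 model (not the exact trained network)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.prep   = conv_bn_act(3, 64)
        self.layer1 = conv_bn_act(64, 128, pool=True)
        self.res1   = nn.Sequential(conv_bn_act(128, 128), conv_bn_act(128, 128))
        self.layer2 = conv_bn_act(128, 256, pool=True)
        self.layer3 = conv_bn_act(256, 512, pool=True)
        self.res2   = nn.Sequential(conv_bn_act(512, 512), conv_bn_act(512, 512))
        self.head   = nn.Sequential(nn.MaxPool2d(4), nn.Flatten(), nn.Linear(512, num_classes))

    def forward(self, x):
        x = self.prep(x)
        x = self.layer1(x)
        x = x + self.res1(x)   # residual branch
        x = self.layer2(x)
        x = self.layer3(x)
        x = x + self.res2(x)   # residual branch
        return self.head(x)
```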
### Instructions
1. Upload an image and select the number of top predictions to display - you will see the top predicted classes and their corresponding confidence scores.
2. You can also choose whether to show GradCAM for the image. GradCAM uses the gradients of the classification score with respect to a convolutional feature map to identify the parts of the input image that most influence the prediction.
3. Select the model layer from which the gradients are taken - this controls how coarse or fine the GradCAM heatmap is (a rough interface sketch is shown after this list).
4. You can also choose whether to show misclassified images - test-set images that the model got wrong. Select how many to display; the pipeline samples them randomly from the misclassified images in the test set.
5. Some examples are provided in the examples tab.
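A minimal sketch of how such an interface can be wired up with Gradio is shown below. The function name, component labels, layer choices, and return values are illustrative assumptions, not the actual `app.py`.

```python
import gradio as gr

def classify(image, top_k, show_gradcam, layer_name, show_misclassified, num_misclassified):
    """Hypothetical inference function: returns top-k class confidences,
    an (optional) GradCAM overlay, and a gallery of misclassified test images."""
    # ... run the model, compute GradCAM on `layer_name`, sample misclassified images ...
    return {"cat": 0.91, "dog": 0.05}, image, []

demo = gr.Interface(
    fn=classify,
    inputs=[
        gr.Image(label="Input image"),
        gr.Slider(1, 10, value=3, step=1, label="Number of top predictions"),
        gr.Checkbox(label="Show GradCAM"),
        gr.Dropdown(["layer1", "layer2", "layer3"], value="layer3", label="GradCAM layer"),
        gr.Checkbox(label="Show misclassified images"),
        gr.Slider(1, 20, value=10, step=1, label="Number of misclassified images"),
    ],
    outputs=[
        gr.Label(label="Top predictions"),
        gr.Image(label="GradCAM overlay"),
        gr.Gallery(label="Misclassified images"),
    ],
)

if __name__ == "__main__":
    demo.launch()
```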
>Please refer to the training repo - https://github.com/Madhur-1/ERA-v1/tree/master/S12 - for more details on the training.
## CIFAR-10 Dataset
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
![Data Samples](Store/image.png)
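For reference, the dataset can be loaded with torchvision as sketched below. The normalization values are the commonly used CIFAR-10 channel statistics and are an assumption here; the training repo may use different preprocessing.

```python
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Commonly used CIFAR-10 channel statistics (assumed; the training code may differ).
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

train_set = datasets.CIFAR10(root="./data", train=True,  download=True, transform=transform)
test_set  = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader  = DataLoader(test_set,  batch_size=128, shuffle=False)
```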
## Model Metrics
| Train Acc (%) | Test Acc (%) | Train Loss | Test Loss |
|---------------|--------------|------------|-----------|
| 96.47 | 92.50 | 0.10 | 0.23 |
![image](https://github.com/Madhur-1/ERA-v1/assets/64495917/99f9bb9d-d907-41f5-b134-a214750b1c4b)
## Grad-CAM
Note: The following has been taken from https://towardsdatascience.com/understand-your-algorithm-with-grad-cam-d3b62fce353
Gradient-weighted Class Activation Mapping (Grad-CAM) uses the gradients of any target concept (say 'dog' in a classification network, or a sequence of words in a captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept.
![Alt text](Store/image-2.png)
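A minimal sketch of the Grad-CAM computation using forward/backward hooks is shown below. The function and argument names are assumptions for illustration; the app itself may compute the heatmap differently (for example, via a dedicated Grad-CAM library).

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Compute a Grad-CAM heatmap for `image` (shape 1xCxHxW) at `target_layer`."""
    activations, gradients = {}, {}

    def fwd_hook(_module, _inputs, output):
        activations["value"] = output

    def bwd_hook(_module, _grad_input, grad_output):
        gradients["value"] = grad_output[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        scores = model(image)                        # forward pass
        if class_idx is None:
            class_idx = scores.argmax(dim=1).item()  # explain the predicted class
        model.zero_grad()
        scores[0, class_idx].backward()              # gradients of the class score

        # Global-average-pool the gradients to get one weight per channel, then take a
        # weighted sum of the feature maps and keep only positive evidence (ReLU).
        weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))

        # Upsample to the input resolution and normalize to [0, 1].
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return cam.squeeze().detach()
    finally:
        h1.remove()
        h2.remove()
```

Choosing an earlier layer gives a finer but less class-specific map, while the last convolutional layer gives the coarse, semantically focused heatmap described above.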
------
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference