# Image Classifier Documentation

## Introduction

This documentation outlines the steps involved in building and training an image classifier with TensorFlow. The classifier performs binary classification, labeling images as either "pepe" or "not pepe", and is trained on a dataset containing images from both categories.
## Requirements

- TensorFlow
- OpenCV (cv2)
- Matplotlib
- NumPy
- imghdr
- scikit-learn (for evaluation metrics)
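
As a quick sanity check of the environment, the imports below should all succeed. This is a minimal sketch; the `sklearn` import is only needed for the offline metric computation described later.

```python
# Sanity check: each import below corresponds to a requirement listed above.
import imghdr                     # image-type validation used during data cleaning

import cv2                        # OpenCV, used to read and inspect individual images
import matplotlib.pyplot as plt   # loss-curve plots
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score  # optional offline evaluation metric

print("TensorFlow version:", tf.__version__)
```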
## Data Collection and Preprocessing

1. **Data Augmentation**: The dataset is augmented using the `ImageDataGenerator` from TensorFlow's `preprocessing.image` module. Augmentation techniques include rotation, shifting, shearing, zooming, and flipping.

2. **Data Validation**: Images are checked for compatibility with the dataset. Files with extensions other than JPEG, JPG, BMP, and PNG, as well as corrupted images, are removed.

3. **Data Loading**: The augmented dataset is loaded using `tf.keras.utils.image_dataset_from_directory`. This function creates a TensorFlow dataset from image files arranged in directories corresponding to class labels.

4. **Data Splitting**: The dataset is split into training, validation, and test sets using TensorFlow's dataset manipulation functions (a sketch of all four steps follows this list).
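
The snippet below is a sketch of these four steps, not the exact pipeline: the directory name, augmentation parameters, and the 70/20/10 split fractions are assumptions made for illustration.

```python
import os
import imghdr

import cv2
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

DATA_DIR = "data"  # assumed layout: data/pepe/... and data/not_pepe/...
VALID_EXTS = ["jpeg", "jpg", "bmp", "png"]

# 1. Augmentation: parameters are illustrative; the generator can be applied with
#    augmenter.flow_from_directory(DATA_DIR, ...) or used to write augmented copies to disk.
augmenter = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)

# 2. Validation: drop files with unsupported extensions or that OpenCV cannot read.
for cls in os.listdir(DATA_DIR):
    for name in os.listdir(os.path.join(DATA_DIR, cls)):
        path = os.path.join(DATA_DIR, cls, name)
        try:
            if cv2.imread(path) is None or imghdr.what(path) not in VALID_EXTS:
                os.remove(path)
        except Exception:
            os.remove(path)

# 3. Loading: build a batched tf.data pipeline and scale pixel values to [0, 1].
data = tf.keras.utils.image_dataset_from_directory(DATA_DIR, image_size=(256, 256))
data = data.map(lambda x, y: (x / 255.0, y))

# 4. Splitting: take/skip whole batches for the train, validation, and test sets.
n_batches = len(data)
train_size = int(n_batches * 0.7)
val_size = int(n_batches * 0.2)
train = data.take(train_size)
val = data.skip(train_size).take(val_size)
test = data.skip(train_size + val_size)
```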
## Model Architecture

The image classifier model consists of several layers (a code sketch follows the list):

- Input Layer: Accepts images of size 256x256 pixels with 3 color channels.
- Convolutional Layers: Three blocks of convolution, batch normalization, and max-pooling.
- Flatten Layer: Flattens the output of the convolutional blocks.
- Dense Layers: Two dense layers with ReLU activation.
- Output Layer: A dense layer with a sigmoid activation function for binary classification.
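
A minimal Keras sketch of this architecture is shown below. The documentation does not specify filter counts, kernel sizes, or dense-layer widths, so those values are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of the documented architecture; filter counts, kernel sizes, and dense
# widths are assumed values, not the exact configuration used in training.
model = models.Sequential([
    layers.Input(shape=(256, 256, 3)),          # 256x256 RGB input

    # Three convolution -> batch normalization -> max-pooling blocks.
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),

    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),

    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),

    layers.Flatten(),

    # Two dense layers with ReLU, then a single sigmoid unit for binary output.
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.summary()
```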
## Model Training

The model is compiled with the Adam optimizer at a specified learning rate and a binary cross-entropy loss function. Training runs for a fixed number of epochs, with the validation set used to monitor performance and watch for overfitting.
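
A sketch of the compile and fit calls, assuming the `model`, `train`, and `val` objects defined above; the learning rate and epoch count are illustrative.

```python
# Learning rate and epoch count are placeholders, not the documented values.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=["accuracy"],
)

history = model.fit(
    train,                  # training batches from the split above
    validation_data=val,    # validation batches checked after each epoch
    epochs=20,
)
```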
## Evaluation Metrics

- **Precision**: The ratio of true positives to the sum of true positives and false positives (TP / (TP + FP)). It measures the accuracy of positive predictions.
- **Recall**: The ratio of true positives to the sum of true positives and false negatives (TP / (TP + FN)). It measures the model's ability to identify positive instances.
- **Binary Accuracy**: The fraction of binary predictions that match the true labels.

The F1 score, the harmonic mean of precision and recall (F1 = 2 × precision × recall / (precision + recall)), is also calculated as an evaluation metric.
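
These metrics can be accumulated over the test set with `tf.keras.metrics`, roughly as sketched below; the batch-wise loop and the manual F1 computation follow directly from the definitions above.

```python
from tensorflow.keras.metrics import BinaryAccuracy, Precision, Recall

precision = Precision()
recall = Recall()
accuracy = BinaryAccuracy()

# Accumulate metric state over every batch of the held-out test set.
for images, labels in test:
    preds = model.predict(images, verbose=0)
    precision.update_state(labels, preds)
    recall.update_state(labels, preds)
    accuracy.update_state(labels, preds)

p = precision.result().numpy()
r = recall.result().numpy()
f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0  # harmonic mean of precision and recall

print(f"Precision: {p:.3f}  Recall: {r:.3f}  "
      f"Binary accuracy: {accuracy.result().numpy():.3f}  F1: {f1:.3f}")
```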
## Note

The model may be slightly overfit due to the small size of the dataset.
## Performance

- Highest F1 score: 96.7%
- Accuracy: 92.2%
## Test Loss and Visualization

The test loss and additional evaluation metrics are computed using the trained model on the test dataset. The loss curves (training loss and validation loss) are plotted to visualize the model's performance during training.
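
A minimal plotting and evaluation sketch, assuming the `history` object returned by `model.fit` and the `test` split defined earlier.

```python
import matplotlib.pyplot as plt

# Plot training vs. validation loss recorded in the History object.
plt.figure()
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.title("Loss curves")
plt.xlabel("Epoch")
plt.ylabel("Binary cross-entropy")
plt.legend(loc="upper right")
plt.show()

# Test loss (plus the accuracy metric passed to compile) on the held-out test batches.
test_loss, test_acc = model.evaluate(test, verbose=0)
print("Test loss:", test_loss)
```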
## Conclusion

This documentation provides an overview of an image classifier built with TensorFlow. It covers data collection and preprocessing, the model architecture, training, and evaluation. By following these steps, users can train and evaluate their own image classifiers for various applications.