sdk_version: 3.39.0
app_file: app.py
pinned: false
license: mit
---

# S12

# CIFAR10 Image Classification with PyTorch Lightning

This project implements an image classifier trained on the CIFAR10 dataset using PyTorch Lightning. It showcases a ResNet architecture, data augmentation, custom dataset classes, and learning rate schedulers.

## Project Structure

The project is structured as follows:

1. Data loading and preprocessing
2. Dataset statistics calculation
3. Data augmentation
4. Model creation
5. Training and evaluation

### Data Loading and Preprocessing

The data for this project is the CIFAR10 dataset, loaded using PyTorch's built-in datasets. To help the model generalize, we apply several augmentations to the training set, including normalization, padding, random cropping, and horizontal flipping. A sketch of the custom dataset class that applies these transforms follows.
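
Because albumentations operates on NumPy arrays while torchvision's CIFAR10 returns PIL images, the custom dataset class is typically a thin wrapper bridging the two. A hypothetical sketch (class and argument names are illustrative, not the project's exact code):

```python
from torchvision import datasets


class CIFAR10Albumentations(datasets.CIFAR10):
    """CIFAR10 wrapper that applies an albumentations transform."""

    def __init__(self, root, train=True, download=True, transform=None):
        # Deliberately do not pass `transform` to the base class, so the
        # albumentations pipeline is the only transform applied.
        super().__init__(root=root, train=train, download=download)
        self.albumentations_transform = transform

    def __getitem__(self, index):
        image, label = self.data[index], self.targets[index]  # HWC uint8 array
        if self.albumentations_transform is not None:
            image = self.albumentations_transform(image=image)["image"]
        return image, label
```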

### Dataset Statistics Calculation

Before training the model, we calculate the per-channel mean and standard deviation of the dataset. These statistics are used to normalize the data, which helps keep the training process stable.

```
Dataset Mean - [0.49139968 0.48215841 0.44653091]
Dataset Std  - [0.24703223 0.24348513 0.26158784]
```
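
A minimal sketch of how these statistics can be computed (an illustration, not necessarily the project's exact code):

```python
from torchvision import datasets

# Load the raw training images (uint8 array of shape (50000, 32, 32, 3))
# and scale to [0, 1].
train_set = datasets.CIFAR10(root="./data", train=True, download=True)
data = train_set.data / 255.0

# Reduce over all images and pixel positions, keeping the channel axis.
means = data.mean(axis=(0, 1, 2))
stds = data.std(axis=(0, 1, 2))
print("Dataset Mean -", means)
print("Dataset Std  -", stds)
```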

### Dataset Augmentation

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2


def get_transforms(means, stds):
    train_transforms = A.Compose(
        [
            A.Normalize(mean=means, std=stds, always_apply=True),
            # Pad by 4 px, then randomly crop back to 32x32.
            A.PadIfNeeded(min_height=40, min_width=40, always_apply=True),
            A.RandomCrop(height=32, width=32, always_apply=True),
            A.HorizontalFlip(),
            A.Cutout(fill_value=means),
            ToTensorV2(),
        ]
    )

    test_transforms = A.Compose(
        [
            A.Normalize(mean=means, std=stds, always_apply=True),
            ToTensorV2(),
        ]
    )

    return train_transforms, test_transforms
```
![image](https://github.com/Delve-ERAV1/S10/assets/11761529/a0098b5b-e9d4-448b-a6c1-4b24ea9bdd98)
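
A hypothetical usage example tying the pieces together (`means`, `stds`, and `CIFAR10Albumentations` refer to the sketches above):

```python
# Build the transforms from the computed statistics and attach them
# to the train/test datasets via the wrapper class.
train_transforms, test_transforms = get_transforms(means, stds)
train_ds = CIFAR10Albumentations("./data", train=True, transform=train_transforms)
test_ds = CIFAR10Albumentations("./data", train=False, transform=test_transforms)
```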

### Model Creation

The model we use for this project is a custom ResNet, a type of convolutional neural network known for its high performance on image classification tasks.

```
  | Name        | Type               | Params
---------------------------------------------------
0 | criterion   | CrossEntropyLoss   | 0
1 | accuracy    | MulticlassAccuracy | 0
2 | prep_layer  | Sequential         | 1.9 K
3 | layer_one   | Sequential         | 74.0 K
4 | res_block1  | ResBlock           | 295 K
5 | layer_two   | Sequential         | 295 K
6 | layer_three | Sequential         | 1.2 M
7 | res_block2  | ResBlock           | 4.7 M
8 | max_pool    | MaxPool2d          | 0
9 | fc          | Linear             | 5.1 K
---------------------------------------------------
6.6 M     Trainable params
0         Non-trainable params
6.6 M     Total params
26.292    Total estimated model params size (MB)
```

#### ResNet Architecture and Residual Blocks

The defining feature of the ResNet architecture is its use of residual blocks with skip connections. Each residual block consists of a series of convolutional layers followed by a skip connection that adds the block's input to its output. These connections let the network fall back on identity mappings, which makes complex patterns easier to learn and helps alleviate the vanishing gradient problem in deeper networks.
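
As an illustration, here is a minimal residual block of the kind described above (a sketch in PyTorch, not necessarily the project's exact `ResBlock`):

```python
import torch.nn as nn


class ResBlock(nn.Module):
    """Two conv-BN-ReLU layers plus an identity skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
        )

    def forward(self, x):
        # Skip connection: add the block's input to its output.
        return x + self.block(x)
```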

### Training and Evaluation

To train our model, we use the Adam optimizer with a OneCycle learning rate scheduler.
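
A rough sketch of how such a training setup might be expressed as a LightningModule (the class name and wiring are assumptions; only the loss, accuracy metric, and logged keys mirror the summary and logs shown in this README):

```python
import pytorch_lightning as pl
import torch.nn as nn
import torchmetrics


class LitClassifier(pl.LightningModule):
    """Hypothetical skeleton; `net` stands in for the custom ResNet above."""

    def __init__(self, net, num_classes=10):
        super().__init__()
        self.net = net
        self.criterion = nn.CrossEntropyLoss()
        self.accuracy = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.net(x)
        loss = self.criterion(logits, y)
        self.log("train_loss", loss, prog_bar=True)
        self.log("train_acc", self.accuracy(logits, y), prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self.net(x)
        self.log("val_loss", self.criterion(logits, y), prog_bar=True)
        self.log("val_acc", self.accuracy(logits, y), prog_bar=True)

    # configure_optimizers (Adam + OneCycleLR) is sketched in the
    # scheduler section further down.
```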

```
Epoch 23: 100%
196/196 [00:27<00:00, 7.11it/s, v_num=0, val_loss=0.639, val_acc=0.776, train_loss=0.686, train_acc=0.762]

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃        Test metric        ┃       DataLoader 0        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│         test_acc          │    0.8758000135421753     │
│         test_loss         │    0.39947837591171265    │
└───────────────────────────┴───────────────────────────┘
```

#### OneCycle Learning Rate Scheduler

The OneCycle learning rate scheduler ramps the learning rate up from a low initial value to a maximum, then anneals it back down over the course of training. This schedule can speed up convergence and improve final performance. We train our model for a total of 24 epochs.
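
A minimal sketch of how Adam and OneCycleLR might be wired together in the LightningModule's `configure_optimizers` (the attribute names and initial-LR choice are assumptions, not the project's verbatim code):

```python
import torch


# Sketch of a LightningModule method; `self.max_lr` is assumed to come
# from the LR finder described in the next section.
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=self.max_lr / 10)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=self.max_lr,
        total_steps=self.trainer.estimated_stepping_batches,
    )
    # Step the scheduler every batch, as OneCycle expects.
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
    }
```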

### Learning Rate Finder

We run an LR range test to pick the peak learning rate (`max_lr`) for the OneCycle schedule.

```python
from torch_lr_finder import LRFinder


def LR_Finder(model, criterion, optimizer, trainloader):
    lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
    lr_finder.range_test(trainloader, end_lr=10, num_iter=200, step_mode="exp")
    # With suggest_lr=True, plot() returns both the axes and the suggested LR.
    ax, max_lr = lr_finder.plot(suggest_lr=True, skip_start=0, skip_end=0)
    lr_finder.reset()

    return max_lr
```

![image](https://github.com/Delve-ERAV1/S12/assets/11761529/7f86fde6-532a-4c58-be91-5252216e125b)

## Dependencies

This project requires the following dependencies:

- torch
- torchvision
- pytorch-lightning
- numpy
- albumentations
- matplotlib
- torchsummary
- torch-lr-finder
- gradio

## Usage

To run this project, clone the repository and launch the Gradio app:

```bash
git clone https://github.com/Delve-ERAV1/S12.git
cd S12
gradio app.py
```

### Upload New Image

![cam](https://github.com/Delve-ERAV1/S12/assets/11761529/465538f3-884e-4446-8e9b-ed0824dd5670)

### View Misclassified Images

![upload](https://github.com/Delve-ERAV1/S12/assets/11761529/cbb2cb46-21ee-420b-af93-e08f5a1f4505)

## Results

![image](https://github.com/Delve-ERAV1/S12/assets/11761529/9f8843f5-9465-445c-9068-50b3197ea371)

## References

- Kaiming He et al., "Deep Residual Learning for Image Recognition"
- Leslie N. Smith, "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates"