\documentclass[main.tex]{subfiles}
\begin{document}

\subsection{Forms of imbalance}
\label{sec:imbalance}

Class imbalance can take many forms, particularly in the context of multiclass classification, which is typical in CNNs. In some problems, only one class might be under- or overrepresented, while in others every class will have a different number of examples. In this study we define and investigate two types of imbalance that we believe are representative of most real-world cases.

The first type is \textit{step imbalance}. In \textit{step imbalance}, the number of examples is equal within the minority classes and equal within the majority classes, but differs between the majority and minority classes. This type of imbalance is characterized by two parameters. One is the fraction of minority classes, defined by
\begin{equation}
\mu = \frac{|\{i \in \{1, \ldots, N\}: C_i \text{ is minority}\}|}{N},
\label{eq:minority}
\end{equation}
where $C_i$ is the set of examples in class $i$ and $N$ is the total number of classes. The other parameter is the ratio between the number of examples in the majority classes and the number of examples in the minority classes, defined as follows.
\begin{equation}
\rho = \frac{\max_{i}\{|C_i|\}}{\min_{i}\{|C_i|\}}
\label{eq:ratio}
\end{equation}
An example of this type of imbalance is a situation when, among a total of 10 classes, 5 have 500 training examples each and the other 5 have 5\,000 each. In this case $\rho = 10$ and $\mu = 0.5$, as shown in Figure~\ref{fig:imbalance-step10}. A dataset with the same total number of examples but a smaller imbalance ratio, $\rho = 2$, and more minority classes, $\mu = 0.9$, is presented in Figure~\ref{fig:imbalance-step2}.

\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\linewidth]{imbalance/imbalance_step10}
\caption{${\rho = 10, \mu = 0.5}$}
\label{fig:imbalance-step10}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\linewidth]{imbalance/imbalance_step2}
\caption{${\rho = 2, \mu = 0.9}$}
\label{fig:imbalance-step2}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\linewidth]{imbalance/imbalance_line10}
\caption{${\rho = 10}$}
\label{fig:imbalance-line10}
\end{subfigure}
\caption{Example class distributions of imbalanced datasets, together with the corresponding values of the parameters $\rho$ and $\mu$ for \textit{step imbalance} (a--b) and $\rho$ for \textit{linear imbalance} (c).}
\label{fig:imbalance}
\end{figure}

The second type of imbalance we call \textit{linear imbalance}. It is defined by a single parameter, the ratio between the maximum and minimum number of examples among all classes, as in Equation~\ref{eq:ratio} for the imbalance ratio in \textit{step imbalance}. The number of examples in the remaining classes is interpolated linearly such that the difference between consecutive classes is constant. An example of a linear imbalance distribution with $\rho = 10$ is shown in Figure~\ref{fig:imbalance-line10}.
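Both types of class distribution can be generated from the parameters above alone. The following Python sketch illustrates this; the function names, and the assumption that the majority classes keep a full class size of \texttt{n\_max} examples, are ours and not part of any benchmark API.
\begin{verbatim}
import numpy as np

def step_imbalance(n_classes, n_max, rho, mu):
    """Step imbalance: a fraction mu of the classes are minority
    with n_max / rho examples each; the rest keep n_max."""
    n_minority = int(round(mu * n_classes))
    counts = np.full(n_classes, n_max, dtype=int)
    counts[:n_minority] = n_max // rho  # minority classes first
    return counts

def linear_imbalance(n_classes, n_max, rho):
    """Linear imbalance: counts interpolated linearly between
    n_max / rho and n_max (constant consecutive difference)."""
    return np.linspace(n_max / rho, n_max, n_classes).round().astype(int)

# The examples discussed in the text:
print(step_imbalance(10, 5000, 10, 0.5))
# [ 500  500  500  500  500 5000 5000 5000 5000 5000]
print(linear_imbalance(10, 5000, 10))
# [ 500 1000 1500 2000 2500 3000 3500 4000 4500 5000]
\end{verbatim}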
\subsection{Methods of addressing imbalance compared in this study}

In total, we examine seven methods of handling CNN training on a dataset with class imbalance, which cover most of the approaches commonly used in the context of deep learning:
\begin{enumerate}
\itemsep0em
\item Random minority oversampling
\item Random majority undersampling
\item Two-phase training with pre-training on a randomly oversampled dataset
\item Two-phase training with pre-training on a randomly undersampled dataset
\item Thresholding with prior class probabilities
\item Oversampling with thresholding
\item Undersampling with thresholding
\end{enumerate}

We examine two variants of the two-phase training method: one with pre-training on the oversampled dataset and the other on the undersampled dataset. For the second phase, we keep the same hyperparameters and learning rate decay policy as in the first phase; only the base learning rate from the first phase is multiplied by a factor of $10^{-1}$. Thresholding, in its original formulation, uses the imbalanced training set to train a neural network; in addition, we combine it with oversampling and undersampling (minimal sketches of the sampling and thresholding procedures are given below).

The selected methods are representative of the available approaches. Sampling can be used to explicitly incorporate the cost of examples through their frequency of appearance, which makes it one of many implementations of cost-sensitive learning~\cite{zhou2006training}. Thresholding is another way of applying cost-sensitivity: the output threshold is moved such that examples with a higher cost are harder to misclassify. Ensemble methods require training multiple classifiers; because of the considerable time needed to train deep models, this is often impractical and may even be infeasible. One-class methods have very limited application to datasets with extremely high imbalance and, moreover, are applied to anomaly detection problems that are beyond the scope of our study. Importantly, we focused on methods that are widely used and relatively straightforward to implement, as our aim is to draw conclusions that are practical and serve as guidance to a large number of deep learning researchers and engineers.

\subsection{Datasets and models}
\label{sec:data}

In our study, we used three benchmark datasets: MNIST~\cite{lecun1998gradient}, CIFAR\nobreakdash-10~\cite{krizhevsky2009learning} and ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012~\cite{russakovsky2015imagenet}. All of them are provided with a split into a training and a test set, both labeled. For each dataset we choose a different model, together with a set of training hyperparameters that is known to perform well based on the literature. The datasets and their corresponding models are of increasing complexity, which allows us to draw conclusions on a simple task and then verify how they scale to more complex ones. All networks for the same dataset were trained for an equal number of iterations; this means that the number of epochs differs between the imbalanced versions of a dataset, but the number of weight updates stays constant. Also, all networks were trained from a random initialization of weights and no pretraining was applied. An overview of the datasets and their corresponding models is given in Table~\ref{tab:datasets}. All experiments were implemented in the deep learning framework \textit{Caffe}~\cite{jia2014caffe}.
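Although training itself was done in \textit{Caffe}, the sampling logic is framework-independent. The following NumPy sketch shows one way to implement random minority oversampling and random majority undersampling as index selection over an array of per-example labels; the function names and structure are our own.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)

def oversample_indices(labels):
    """Keep every example once and randomly replicate minority
    examples (with replacement) until all classes match the
    largest class."""
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    extra = [rng.choice(np.flatnonzero(labels == c),
                        size=target - n, replace=True)
             for c, n in zip(classes, counts) if n < target]
    return np.concatenate([np.arange(len(labels))] + extra)

def undersample_indices(labels):
    """Randomly discard majority examples (without replacement)
    until all classes match the smallest class."""
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.min()
    keep = [rng.choice(np.flatnonzero(labels == c),
                       size=target, replace=False)
            for c in classes]
    return np.concatenate(keep)
\end{verbatim}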
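Thresholding with prior class probabilities can likewise be stated compactly: the network's softmax outputs are divided by the class priors estimated from the training set, so that minority classes are no longer penalized by their low prior. The sketch below, continuing the one above, is one common formulation of this idea rather than a definitive implementation.
\begin{verbatim}
def threshold_by_priors(probs, train_counts):
    """Rescale softmax outputs (shape: n_examples x n_classes)
    by the inverse of the class priors estimated from the
    imbalanced training set, then predict the argmax."""
    priors = train_counts / train_counts.sum()
    return (probs / priors).argmax(axis=1)
\end{verbatim}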
\begin{table}[!ht]
\centering
\begin{tabular}{c c c c c c c c}
\toprule
& \multicolumn{3}{c}{Image dimensions} & & \multicolumn{2}{c}{Images per class} & \\
\cmidrule{2-4} \cmidrule{6-7}
Dataset & Width & Height & Depth & No. classes & Training & Test & CNN model \\
\midrule
MNIST & 28 & 28 & 1 & 10 & 5\,000 & 1\,000 & LeNet-5 \\
CIFAR\nobreakdash-10 & 32 & 32 & 3 & 10 & 5\,000 & 1\,000 & All-CNN \\
ILSVRC-2012 & $\geq 256$ & $\geq 256$ & 3 & 1\,000 & 1\,000 & 50 & ResNet-10 \\
\bottomrule
\end{tabular}
\caption{Summary of the datasets used. The number of images per class refers to the perfectly balanced subsets used for the experiments. Image dimensions for ImageNet are given after rescaling.}
\label{tab:datasets}
\end{table}

\subsubsection{MNIST}

MNIST is considered a simple, essentially solved problem of classifying images of digits. The dataset consists of grayscale images of size $28\times28$. There are ten classes, corresponding to the digits from 0 to 9. The number of examples per class in the original training dataset ranges from 5\,421 in class 5 to 6\,742 in class 1. In the artificially imbalanced versions, we subsample each class uniformly at random so that it contains no more than 5\,000 examples.

The CNN model that we use for MNIST is a modern version of LeNet\nobreakdash-5~\cite{lecun1998gradient}. The network architecture is presented in Table~\ref{tab:lenet}. All networks for this dataset were trained for 10\,000 iterations. The optimization algorithm is stochastic gradient descent (SGD) with a momentum of $0.9$~\cite{qian1999momentum}. The learning rate decay policy is defined as ${\eta_t = \eta_0 \cdot \left( 1 + \gamma \cdot t \right)^{-\alpha}}$, where ${\eta_0 = 0.01}$ is the base learning rate, ${\gamma = 0.0001}$ and ${\alpha = 0.75}$ are decay parameters, and $t$ is the current iteration. Furthermore, we used a batch size of 64 and a weight decay value of $\lambda = 0.0005$. Network weights were initialized randomly from a uniform distribution with Xavier variance~\cite{glorot2010understanding}, whereas the biases were initialized to zero. No data augmentation was used. The test error of the model trained as described above on the original MNIST dataset was below 1\%.

\begin{table}[!ht]
\centering
\begin{tabular}{c c c c c c}
\toprule
& \multicolumn{3}{c}{Data dimensions} & & \\
\cmidrule{2-4}
Layer & Width & Height & Depth & Kernel size & Stride \\
\midrule
Input & 28 & 28 & 1 & - & - \\
Convolution & 24 & 24 & 20 & 5 & 1 \\
Max Pooling & 12 & 12 & 20 & 2 & 2 \\
Convolution & 8 & 8 & 50 & 5 & 1 \\
Max Pooling & 4 & 4 & 50 & 2 & 2 \\
Fully Connected & 1 & 1 & 500 & - & - \\
ReLU & 1 & 1 & 500 & - & - \\
Fully Connected & 1 & 1 & 10 & - & - \\
Softmax & 1 & 1 & 10 & - & - \\
\bottomrule
\end{tabular}
\caption{Architecture of the LeNet-5 CNN used in the MNIST experiments.}
\label{tab:lenet}
\end{table}

Experiments on the MNIST dataset are performed on the following imbalance parameter space. For \textit{linear imbalance} we test values of ${\rho \in \{ 10, 25, 50, 100, 250, 500, 1\,000, 2\,500, 5\,000 \}}$. For \textit{step imbalance} the set of $\rho$ values is the same, and for each value we use every possible number of minority classes from 1 to 9, which corresponds to ${\mu \in \{ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 \}}$. The experiment for each combination of parameters is repeated 50 times, each time with a randomized subset of minority classes. This way, we have created 4\,050 artificially imbalanced training sets for \textit{step imbalance} and 450 for \textit{linear imbalance}.
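These counts, as well as the total number of trained networks given next, can be verified with a few lines of bookkeeping; the sketch below is our own and was not part of the experimental pipeline.
\begin{verbatim}
rhos = [10, 25, 50, 100, 250, 500, 1000, 2500, 5000]
mus = [i / 10 for i in range(1, 10)]  # 1 to 9 minority classes
repeats = 50

step_sets = len(rhos) * len(mus) * repeats  # 9 * 9 * 50 = 4050
linear_sets = len(rhos) * repeats           # 9 * 50     =  450

# Baseline plus the four methods that require training a model
# (over-/undersampling and the two two-phase variants):
trained = (step_sets + linear_sets) * 5     # 22500
\end{verbatim}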
As we have evaluated four methods that require training a model, the total number of trained networks, including the baseline, is 22\,500.

\subsubsection{CIFAR-10}

CIFAR\nobreakdash-10 is a significantly more complex image classification problem than MNIST. It contains $32\times32$ color images from ten classes of natural objects. The dataset has no natural imbalance: there are exactly 5\,000 training and 1\,000 test examples in each class. We do not use any data augmentation, but follow the standard preprocessing comprising global contrast normalization and ZCA whitening~\cite{goodfellow2013maxout}.

For the CIFAR\nobreakdash-10 experiments we use one of the best-performing types of CNN model on this dataset, namely All-CNN~\cite{springenberg2014striving}. The network architecture is presented in Table~\ref{tab:allcnn}. The networks were trained for 70\,000 iterations using SGD with a momentum of $0.9$. The base learning rate was multiplied by a fixed multiplier of 0.1 after 40\,000, 50\,000 and 60\,000 iterations. The batch size was 256 and the weight decay value was $\lambda = 0.001$. Network weights were initialized with the Xavier procedure and the biases were set to zero. The test error of the model trained as described above on the original CIFAR\nobreakdash-10 dataset was 9.75\%.

We have found the network training to be quite sensitive to the initialization and the choice of the base learning rate; sometimes the network gets stuck in a very poor local minimum. Also, for more imbalanced datasets the training required a lower base learning rate to converge at all. Therefore, for each case we searched for the best base learning rate from the fixed set ${\eta_0 \in \{ 0.05, 0.005, 0.0005, 0.00005 \}}$. A similar procedure was used by the authors of the model architecture~\cite{springenberg2014striving}. Moreover, each training was repeated twice on the same dataset. For a particular method and imbalanced dataset, we pick the model with the best score on the test set over all eight runs.

\begin{table}[!ht]
\centering
\begin{tabular}{c c c c c c c}
\toprule
& \multicolumn{3}{c}{Data dimensions} & & & \\
\cmidrule{2-4}
Layer & Width & Height & Depth & Kernel size & Stride & Padding \\
\midrule
Input & 32 & 32 & 3 & - & - & - \\
Dropout (0.2) & 32 & 32 & 3 & - & - & - \\
$2\times$(Convolution + ReLU) & 32 & 32 & 96 & 3 & 1 & 1 \\
Convolution + ReLU & 16 & 16 & 96 & 3 & 2 & 1 \\
Dropout (0.5) & 16 & 16 & 96 & - & - & - \\
$2\times$(Convolution + ReLU) & 16 & 16 & 192 & 3 & 1 & 1 \\
Convolution + ReLU & 8 & 8 & 192 & 3 & 2 & 1 \\
Dropout (0.5) & 8 & 8 & 192 & - & - & - \\
Convolution + ReLU & 6 & 6 & 192 & 3 & 1 & 0 \\
Convolution + ReLU & 6 & 6 & 192 & 1 & 1 & 0 \\
Convolution + ReLU & 6 & 6 & 10 & 1 & 1 & 0 \\
Average Pooling & 1 & 1 & 10 & 6 & - & - \\
Softmax & 1 & 1 & 10 & - & - & -\\
\bottomrule
\end{tabular}
\caption{Architecture of the All-CNN network used in the CIFAR\nobreakdash-10 experiments.}
\label{tab:allcnn}
\end{table}

The network architecture does not have any fully connected layers. Therefore, during the fine-tuning phase of the two-phase training method, we update the weights of the last two convolutional layers, which have kernels of size 1.

The imbalance parameter space used in the CIFAR\nobreakdash-10 experiments is considerably sparser than the one used for MNIST, due to the significantly longer time required to train one network. The set of tested values was narrowed to make the experiment run in a reasonable time. For both \textit{linear} and \textit{step imbalance}, we test values of ${\rho \in \{ 2, 10, 20, 50 \}}$.
In \textit{step imbalance}, for each value of $\rho$, the set of values of the parameter $\mu$ was ${\mu \in \{ 0.2, 0.5, 0.8 \}}$, which corresponds to having two, five and eight minority classes, respectively. In all cases, the classes chosen to be minority were the ones with the lowest label values. This means that for a fixed number of minority classes the same classes were always picked as minority, and every smaller set of minority classes was contained in each larger one. In total, we trained 640 networks on this dataset (16 imbalanced datasets, five trained models per dataset, and eight runs per model).

\subsubsection{ImageNet}

For evaluation we use the ILSVRC\nobreakdash-2012 competition subset of ImageNet, widely used as a benchmark to compare classifiers' performance. The number of examples in the majority classes was reduced from 1\,200 to 1\,000. Classes with fewer than 1\,000 cases were always chosen as minority ones in the imbalanced subsets. The only data preprocessing applied is resizing such that the smaller dimension is 256 pixels long and the aspect ratio is preserved. During training, we use a randomly cropped $224\times224$ pixel square patch as input, and a single centered crop in the test phase. Moreover, during training we randomly mirror images, but there is no color, scale or aspect ratio augmentation.

The model architecture employed for this dataset is ResNet\nobreakdash-10~\cite{simon2016imagenet}, i.e. a residual network~\cite{he2016deep} with batch normalization layers, which are known to accelerate the training of deep networks~\cite{ioffe2015batch}. It consists of four residual blocks, which gives nine convolutional layers and one fully connected layer. The first residual block outputs a data tensor of depth 64, and each subsequent block increases the depth by a factor of two. The fully connected layer outputs 1\,000 values to a softmax, which transforms them into class probabilities. The architecture of a single residual block is presented in Figure~\ref{fig:resnet}.

\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{nets/resnet2}
\caption{Architecture of a single residual block in the ResNet used in the ILSVRC-2012 experiments.}
\label{fig:resnet}
\end{figure}

The networks were trained for 320\,000 iterations using SGD with a momentum of $0.9$. The base learning rate is set to ${\eta_0 = 0.1}$ and decays linearly to 0 in the last iteration. The batch size was 256 and the weight decay was $\lambda = 0.0001$. Network weights were initialized with the Kaiming (also known as MSRA) initialization procedure~\cite{he2015delving}. The top-1 test error of the model trained as described above on the original ILSVRC\nobreakdash-2012 dataset was 62.56\%, and the multi-class ROC AUC was 99.50\% for a single centered crop. We have chosen a relatively small ResNet for the sake of faster training.\footnote{It takes five days to train one ResNet\nobreakdash-10 network on an Nvidia GTX 1070 GPU.}

We test only one case of small and two cases of large \textit{step imbalance}, and run them with the baseline, undersampling and oversampling methods. Specifically, the three \textit{step imbalanced} subsets are defined by ${\mu = 0.1, \rho = 10}$, ${\mu = 0.8, \rho = 50}$ and ${\mu = 0.9, \rho = 100}$. They correspond to 100 minority classes with an imbalance ratio of 10, 800 minority classes with an imbalance ratio of 50, and 900 minority classes with an imbalance ratio of 100, respectively. Moreover, for the highest imbalance, we train three networks for each method, each time with a randomized selection of minority classes and a subsampled set of examples in each class. This is done in order to estimate the variability in the performance of the methods.
In total, this gives us 15 ResNet\nobreakdash-10 networks trained on five artificially imbalanced subsets of ILSVRC\nobreakdash-2012.

\subsection{Evaluation metrics and testing}
\label{sec:evaluation}

The metric most widely used to evaluate classifier performance in the context of multiclass classification with CNNs is overall accuracy, i.e. the proportion of test examples that are correctly classified. However, it has some significant and long-acknowledged limitations, particularly in the context of imbalanced datasets~\cite{chawla2005data}. Specifically, when the test set is imbalanced, accuracy favors the overrepresented classes, in some cases leading to a highly misleading assessment. An example of this is a situation in which the majority class represents 99\% of all cases and the classifier assigns the majority label to all test cases: a misleading accuracy of 99\% is assigned to a classifier that has very limited use. Another issue might arise when the test set is balanced and the training set is imbalanced. In this situation, the decision threshold may be moved to reflect the estimated class prior probabilities, causing a low accuracy on the test set even though the true discriminative power of the classifier has not changed.

A measure that addresses these issues is the area under the receiver operating characteristic curve (ROC AUC)~\cite{bradley1997use}; the ROC curve plots the true positive rate against the false positive rate for all possible prediction thresholds. We used the implementation of ROC AUC available in the scikit-learn Python package~\cite{pedregosa2011scikit}. It calculates sensitivities and specificities at all thresholds defined by the responses of the classifier on the test set, followed by the AUC calculation using the trapezoid rule. ROC AUC is a well-studied and sound measure of discrimination~\cite{ling2003auc} and has been widely used as an evaluation metric for classifiers. ROC analysis has also been used to compare the performance of classifiers trained on imbalanced datasets~\cite{mazurowski2008training, maloof2003learning}. Since the basic version of ROC is only suitable for binary classification, we use a multi-class modification of it~\cite{provost2003tree}. The multi-class ROC AUC is calculated by taking the average of the AUCs obtained independently for each class on the binary task of distinguishing that class from all the others.

The test sets of all used datasets have an equal number of examples in each class. Usually, it is assumed that the class distribution of the test set follows that of the training set; we, however, do not change the test set to match the artificially imbalanced training sets. The reason is that scores achieved by different classifiers on the same test set are directly comparable, and retaining the largest number of cases in each class provides the most accurate performance estimate.
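For completeness, this averaging scheme can be written out explicitly. The sketch below uses scikit-learn's binary \texttt{roc\_auc\_score}, with a function name and variable names of our own choosing.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def multiclass_roc_auc(y_true, y_score):
    """Average of one-vs-rest binary AUCs, one per class.
    y_true: (n,) integer labels in {0, ..., n_classes - 1};
    y_score: (n, n_classes) array of softmax outputs."""
    n_classes = y_score.shape[1]
    aucs = [roc_auc_score((y_true == c).astype(int), y_score[:, c])
            for c in range(n_classes)]
    return float(np.mean(aucs))
\end{verbatim}
In recent versions of scikit-learn, the same quantity can also be obtained directly with \texttt{roc\_auc\_score(y\_true, y\_score, multi\_class='ovr', average='macro')}.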
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\includegraphics[width=\linewidth]{legend/step_multi_roc_auc_legend}
\end{subfigure}
\vspace{0.5em}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\linewidth]{mnist/step_multi_roc_auc_r2_75}
\caption{2 minority classes}
\label{fig:mnist-step_multi_roc_auc_2min}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\linewidth]{mnist/step_multi_roc_auc_r5_75}
\caption{5 minority classes}
\label{fig:mnist-step_multi_roc_auc_5min}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\linewidth]{mnist/step_multi_roc_auc_r8_75}
\caption{8 minority classes}
\label{fig:mnist-step_multi_roc_auc_8min}
\end{subfigure}
\vspace{0.5em}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\linewidth]{cifar/step_multi_roc_auc_r2_84}
\caption{2 minority classes}
\label{fig:cifar-step_multi_roc_auc_2min}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\linewidth]{cifar/step_multi_roc_auc_r5_84}
\caption{5 minority classes}
\label{fig:cifar-step_multi_roc_auc_5min}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\linewidth]{cifar/step_multi_roc_auc_r8_84}
\caption{8 minority classes}
\label{fig:cifar-step_multi_roc_auc_8min}
\end{subfigure}
\caption{Comparison of the methods with respect to multi-class ROC AUC on MNIST (a--c) and CIFAR\nobreakdash-10 (d--f) for \textit{step imbalance} with a fixed number of minority classes.}
\label{fig:step_multi_roc_auc_min}
\end{figure}

\end{document}