Master's Thesis

Compressed Deep Supervision and Residual Learning Network for Scene Recognition

One promising way to raise the accuracy of convolutional neural networks is to increase their depth. However, increasing the depth adds more layers, and therefore more parameters, which makes a deep convolutional neural network slow to converge during backpropagation and prone to overfitting and degradation. We combined two techniques, residual learning and deep supervision, to build our models. We trained the models to classify the large-scale scene datasets MIT Places 205 and MIT Places 365-Standard. The experimental results show that the proposed models, named Residual-CNDS, address the problems of overfitting, slow convergence, and degradation. The proposed architecture comes in two variants, Residual-CNDS8 and Residual-CNDS10, which contain eight and ten convolutional layers, respectively. Furthermore, we reformed Residual-CNDS8 by applying a compression method to reduce its size and training time. The resulting model, Residual Squeeze CNDS, addresses speed and size while still mitigating overfitting, slow convergence, and degradation. While matching the accuracy of Residual-CNDS8 on the MIT Places 365-Standard scene dataset, Residual Squeeze CNDS is 87.64% smaller and 13.33% faster to train.
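To illustrate the general idea of combining residual learning with deep supervision, the sketch below shows a minimal network in which a residual block is followed by an auxiliary classifier whose loss is added to the main classification loss during training. This is only an assumed, simplified illustration of the two techniques; the layer counts, auxiliary-loss weight, and class names here are hypothetical and do not reproduce the exact Residual-CNDS architecture described in the thesis.

    # Hypothetical sketch: residual block + deep supervision (auxiliary loss).
    # Not the thesis' exact Residual-CNDS definition.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

        def forward(self, x):
            # Identity shortcut: output = F(x) + x
            out = F.relu(self.conv1(x))
            out = self.conv2(out)
            return F.relu(out + x)

    class DeeplySupervisedNet(nn.Module):
        def __init__(self, num_classes=365):  # e.g., MIT Places 365-Standard
            super().__init__()
            self.stem = nn.Conv2d(3, 64, 7, stride=2, padding=3)
            self.block1 = ResidualBlock(64)
            self.block2 = ResidualBlock(64)
            # Auxiliary classifier attached to an intermediate layer
            self.aux_head = nn.Linear(64, num_classes)
            self.main_head = nn.Linear(64, num_classes)

        def forward(self, x):
            x = F.relu(self.stem(x))
            x = self.block1(x)
            aux_logits = self.aux_head(F.adaptive_avg_pool2d(x, 1).flatten(1))
            x = self.block2(x)
            main_logits = self.main_head(F.adaptive_avg_pool2d(x, 1).flatten(1))
            return main_logits, aux_logits

    # The auxiliary (supervision) loss at the intermediate layer is weighted
    # and added to the main loss; the weight 0.3 is an assumed value.
    def loss_fn(main_logits, aux_logits, targets, aux_weight=0.3):
        return (F.cross_entropy(main_logits, targets)
                + aux_weight * F.cross_entropy(aux_logits, targets))

The auxiliary loss injects gradient directly at the intermediate layer, which is the mechanism by which deep supervision speeds convergence, while the identity shortcut in the residual block counteracts degradation as depth grows.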
