A New Optimizer for Image Classification using Wide ResNet (WRN)

Authors

  • Arman I. Mohammed College of Science, Duhok University, Kurdistan Region, Iraq
  • Ahmed AK. Tahir College of Science, Duhok University, Kurdistan Region, Iraq

DOI:

https://doi.org/10.25007/ajnu.v9n4a858

Abstract

A new optimization algorithm called Adam Merged with AMSgrad (AMAMSgrad) is modified and used for training a convolutional neural network of the Wide Residual Network type, Wide ResNet (WRN), for image classification. The modification combines the second-moment estimate of AMSgrad with the Adam updating rule, but with   and 2 as the power of the denominator. The main aim is to improve the performance of the AMAMSgrad optimizer through a proper selection of   and of the power of the denominator. Applying AMAMSgrad and the two established methods (Adam and AMSgrad) to the Wide ResNet on the CIFAR-10 dataset for image classification shows that WRN performs better with the AMAMSgrad optimizer than with Adam or AMSgrad. The training, validation and testing accuracies are all improved with AMAMSgrad over Adam and AMSgrad, and AMAMSgrad needs fewer epochs to reach maximum performance. With AMAMSgrad, the training accuracies are 90.45%, 97.79%, 99.98% and 99.99% at epochs 60, 120, 160 and 200 respectively, while the validation accuracies at the same epochs are 84.89%, 91.53%, 95.05% and 95.23%. For testing, the WRN with AMAMSgrad achieves an overall accuracy of 94.8%. All of these accuracies exceed those obtained by WRN with Adam and AMSgrad. The classification metrics indicate that the given WRN architecture performs well with all three optimizers, and with especially high confidence with the AMAMSgrad optimizer.
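The abstract builds on the standard Adam and AMSgrad update rules. As a point of reference, here is a minimal sketch of those two baselines in plain Python; the AMAMSgrad modification itself (the altered hyperparameter choice and the power of the denominator) is only partially specified in the abstract, so it is noted in a comment rather than implemented. The function name `adam_step` and the hyperparameter defaults are the common ones from the Adam paper, not taken from this article.

```python
def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8,
              amsgrad=False, v_hat_max=0.0):
    """One scalar-parameter update for Adam, or AMSgrad when amsgrad=True.

    AMAMSgrad (per the abstract) keeps AMSgrad's max-based second moment
    but changes a hyperparameter and uses 2 as the power of the
    denominator; those details are left to the full paper.
    """
    m = b1 * m + (1 - b1) * grad          # first moment: running mean of gradients
    v = b2 * v + (1 - b2) * grad * grad   # second moment: running mean of squares
    m_hat = m / (1 - b1 ** t)             # bias correction for the warm-up phase
    v_hat = v / (1 - b2 ** t)
    if amsgrad:
        # AMSgrad: take the running maximum of v_hat, so the effective
        # step size is non-increasing between iterations.
        v_hat_max = max(v_hat_max, v_hat)
        denom = v_hat_max ** 0.5 + eps
    else:
        denom = v_hat ** 0.5 + eps        # Adam: denominator is v_hat to the power 1/2
    theta = theta - lr * m_hat / denom
    return theta, m, v, v_hat_max
```

For example, iterating this step with `grad = 2 * theta` (the gradient of f(x) = x²) drives `theta` toward the minimum at zero under both variants.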

Published

2020-09-02

How to Cite

Mohammed, A. I., & Tahir, A. A. (2020). A New Optimizer for Image Classification using Wide ResNet (WRN). Academic Journal of Nawroz University, 9(4), 1–13. https://doi.org/10.25007/ajnu.v9n4a858
