Enhancing Breast Cancer Classification using a Modified GoogLeNet Architecture with Attention Mechanism
DOI:
https://doi.org/10.58564/IJSER.3.1.2024.145

Keywords:
Breast cancer, computer-aided diagnosis, deep learning, attention mechanism, spatial transformer network

Abstract
Breast cancer incidence has been rising sharply, causing grave concern worldwide due to its high mortality rates. Accurate diagnosis in the early stages is essential for achieving better patient outcomes. Over the last decade, there has been great demand for AI-based diagnosis systems for breast cancer detection and classification. These systems use deep learning algorithms to analyze medical scans, allowing subtle abnormalities to be recognized and malignant tumors to be distinguished from benign ones. Computer-aided diagnosis (CAD) systems can help radiologists and pathologists make more precise diagnoses while increasing productivity. Furthermore, recent advances in CNN architectures coupled with attention mechanisms have further improved CAD systems for breast cancer diagnosis. Attention-based CNN models focus on crucial image regions, enhancing classification accuracy and reliability. In this study, we introduce a new approach that improves breast cancer classification using a GoogLeNet architecture modified with a region-based attention mechanism. The modified GoogLeNet incorporates a spatial transformer network (STN), which allows it to focus selectively on significant areas of breast histopathology images. Through the attention mechanism, the model becomes better at learning discriminative features that distinguish different subtypes of breast cancer. To evaluate the effectiveness of this method, we carried out experiments on the BreaKHis dataset for classifying breast carcinomas. This dataset was deliberately collected at various magnifications to support both binary and multi-class classification tasks. The results clearly show that the modified GoogLeNet with attention outperforms the original GoogLeNet architecture in terms of accuracy.
For binary classification, the proposed model achieved an accuracy of 98.08%, whereas GoogLeNet's rate was 94.99%. For multi-class classification at 100x magnification, the model achieved an accuracy of 94.63%, while the original GoogLeNet reached 85.06%. These findings show that the proposed approach significantly improves the efficiency of breast cancer diagnosis and that incorporating the modified GoogLeNet framework with an attention mechanism into CAD systems for breast cancer detection can improve their performance. Combining deep learning and attention models can lead to more accurate treatment decisions and better patient outcomes. Continued development of CAD systems in this area can support ongoing efforts to improve them and ultimately contribute to saving lives in the fight against breast cancer.
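The spatial transformer attention described above is not specified in code in this abstract. As a rough, hedged illustration of the underlying mechanism only, the NumPy sketch below shows the two differentiable steps an STN performs to attend to a region of an input: generating a sampling grid from a 2x3 affine matrix theta (which, in a full STN, a small localization network would predict from the image) and bilinearly resampling the image at that grid. Function names and the single-channel simplification are illustrative, not the authors' implementation.

```python
import numpy as np

def affine_grid(theta, H, W):
    """Map each output pixel to normalized input coordinates via a 2x3 affine theta."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W) homogeneous coords
    return (theta @ coords).reshape(2, H, W)                     # (2, H, W) sampling grid

def bilinear_sample(img, grid):
    """Sample a single-channel image at the grid's normalized (x, y) locations."""
    H, W = img.shape
    x = (grid[0] + 1) * (W - 1) / 2   # back to pixel coordinates
    y = (grid[1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0           # bilinear interpolation weights
    return (img[y0, x0] * (1 - wx) * (1 - wy) + img[y0, x0 + 1] * wx * (1 - wy)
            + img[y0 + 1, x0] * (1 - wx) * wy + img[y0 + 1, x0 + 1] * wx * wy)

# Identity theta reproduces the input; a scaled theta zooms into the central region,
# which is how a learned localization network focuses on a discriminative area.
img = np.arange(16.0).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
zoom = np.array([[0.5, 0.0, 0.0], [0.0, 0.5, 0.0]])  # attend to the central half
out_id = bilinear_sample(img, affine_grid(identity, 4, 4))
out_zoom = bilinear_sample(img, affine_grid(zoom, 4, 4))
```

Because both steps are differentiable in theta and in the image, gradients flow through the sampler, so the attention region can be trained end to end with the rest of the network (the property the STN of Jaderberg et al. relies on).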
References
M. C. Chun. (2018). Breast Cancer: Symptoms, Risk Factors, and Treatment, Medical News Today. Accessed: Mar. 10, 2018. [Online]. Available: https://www.medicalnewstoday.com/articles/37136.php
World Health Organization. Accessed: Mar. 10, 2018. [Online]. Available: http://www.who.int/en/
P. Boyle and B. Levin. (2008). World Cancer Report. [Online]. Available: http://www.iarc.fr/en/publications/pdfs-online/wcr/2008/wcr2008.pdf
World Health Organization. Breast Cancer. Available online: https://www.who.int/news-room/fact-sheets/detail/breast-cancer (accessed on 15 June 2022).
Wilkinson, Louise, and Toral Gathani. "Understanding breast cancer as a global health concern." The British Journal of Radiology 95.1130 (2022): 20211033.
M. L. Giger, H. Chan, and J. Boone, “Anniversary Paper: History and status of CAD and quantitative image analysis: The role of medical physics and AAPM,” Med. Phys., vol. 35, no. 12, pp. 5799–5820, 2008, doi: 10.1118/1.3013555.
Q. Li and R. M. Nishikawa, “Computer aided detection and diagnosis in medical imaging: A review of clinical and educational applications,” in Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality - TEEM ’16, 2016, pp. 1–425, doi: 10.1145/3012430.3012567.
M. Kashif, K. R. Malik, S. Jabbar, and J. Chaudhry, Application of machine learning and image processing for detection of breast cancer. Elsevier Inc., 2020.
P. Taylor, J. Champness, K. Johnston, and H. Potts, “Impact of computer-aided detection prompts on screening mammography,” 2005. doi: 10.3310/hta9060.
Zhang H, Han L, Chen K, et al. Diagnostic efficiency of the breast ultrasound computer-aided prediction model based on convolutional neural network in breast cancer. J Digit Imaging. 2020;33:1218–23.
Araújo T, Aresta G, et al. Classification of breast cancer histology images using convolutional neural networks. PLoS ONE. 2017;12:e0177544.
Nahid AA, Mehrabi MA, Kong Y. Histopathological breast cancer image classification by deep neural network techniques guided by local clustering. Biomed Res Int. 2018;2018:2362108.
Arevalo J, González FA, Ramos-Pollán R, Oliveira JL, Lopez MAG. Representation learning for mammography mass lesion classification with convolutional neural networks. Comput Methods Programs Biomed. 2016;127:248–57.
Huynh BQ, Li H, Giger ML. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. J Med Imaging. 2016;3:034501.
Yeşim E, Muhammed Y, Ahmet C. Convolutional neural networks based classification of breast ultrasonography images by hybrid method with respect to benign, malignant, and normal using mRMR. Comput Biol Med. 2021;133:104407.
Zheng Y, et al. Feature extraction from histopathological images based on nucleus-guided convolutional neural network for breast lesion classification. Pattern Recogn. 2017;71:14–25.
Van Eycke YR, et al. Segmentation of glandular epithelium in colorectal tumours to automatically compartmentalise IHC biomarker quantification: a deep learning approach. Med Image Anal. 2018;49:35–45.
Sudharshan PJ, et al. Multiple instance learning for histopathological breast cancer image classification. Expert Syst Appl. 2019;117:103–11.
Xu J, Luo X, Wang G, Gilmore H, Madabhushi A. A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomput. 2016;191:214–23.
Zhang X, et al. High-throughput histopathological image analysis via robust cell segmentation and hashing. Med Image Anal. 2015;26:306–15.
Al-Kadi OS. Texture measures combination for improved meningioma classification of histopathological images. Pattern Recogn. 2010;43:2043–53.
Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the Inception architecture for computer vision. arXiv 2015; preprint arXiv:1512.00567.
Zoph B, Vasudevan V, Shlens J, Le QV. Learning transferable architectures for scalable image recognition. arXiv 2017; preprint arXiv:1707.07012.
G. E. Hinton. A parallel computation that assigns canonical object-based frames of reference. In IJCAI, 1981.
G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In ICANN, 2011.
T. Tieleman. Optimizing Neural Networks that Generate Images. PhD thesis, University of Toronto, 2014.
J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE PAMI, 35(8):1872–1886, 2013.
T. S. Cohen and M. Welling. Transformation properties of learned visual representations. ICLR, 2015.
R. Gens and P. M. Domingos. Deep symmetry networks. In NIPS, 2014.
A. Kanazawa, A. Sharma, and D. Jacobs. Locally scale-invariant convolutional neural networks. In NIPS, 2014.
K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. CVPR, 2015.
K. Sohn and H. Lee. Learning invariant representations with local transformations. arXiv:1206.6418, 2012.
J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. ICLR, 2015.
D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In CVPR, 2014.
R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. ICML, 2015.
P. Sermanet, A. Frome, and E. Real. Attention for fine-grained categorization. arXiv:1412.7054, 2014.
M. F. Stollenga, J. Masci, F. Gomez, and J. Schmidhuber. Deep networks with internal selective attention through feedback connections. In NIPS, 2014.
B. J. Frey and N. Jojic. Fast, large-scale transformation-invariant clustering. In NIPS, 2001.
Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. "Spatial transformer networks." Advances in neural information processing systems 28 (2015).
Spanhol F. A., Oliveira L. S., Petitjean C., Heutte L., "Breast cancer histopathological image classification using Convolutional Neural Networks," 2016 International Joint Conference on Neural Networks (IJCNN), 2016, pages 2560-2567.
Sharma, S., Mehra, R., & Kumar, S. (2021). Optimised CNN in conjunction with efficient pooling strategy for the multi-classification of breast cancer. IET Image Processing, 15(4), 936-946.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” CVPR, 2015.
Yang, Le, et al. "GoogLeNet based on residual network and attention mechanism identification of rice leaf diseases." Computers and Electronics in Agriculture 204 (2023): 107543.
Zhu, Xizhou, et al. "An empirical study of spatial attention mechanisms in deep networks." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
Liu, Dichao, Yu Wang, and Jien Kato. "Supervised Spatial Transformer Networks for Attention Learning in Fine-grained Action Recognition." VISIGRAPP (4: VISAPP). 2019.
Sun, Jiamei, and Alexander Binder. "Comparison of deep learning architectures for H&E histopathology images." 2017 IEEE Conference on Big Data and Analytics (ICBDA). IEEE, 2017.
Hardt, Moritz, Ben Recht, and Yoram Singer. "Train faster, generalize better: Stability of stochastic gradient descent." International conference on machine learning. PMLR, 2016.
D. M. W. Powers, “Evaluation: From precision, recall and F-measure to ROC, informedness, markedness & correlation,” J. Mach. Learn. Technol., vol. 2, no. 1, pp. 37–63, 2011.
Kelleher, John D., Brian Tierney, and Aoife D'Arcy. "Chapter 8 - Evaluating Models." Data Science An Introduction, 2nd ed., Elsevier, 2018, pp. 189-214.
Zerouaoui, Hasnae, and Ali Idri. "Deep hybrid architectures for binary classification of medical breast cancer images." Biomedical Signal Processing and Control 71 (2022): 103226.
Abdulaal, Alaa Hussein, et al. "A self-learning deep neural network for classification of breast histopathological images." Biomedical Signal Processing and Control 87 (2024): 105418.
Nawaz, Majid, Adel A. Sewissy, and Taysir Hassan A. Soliman. "Multi-class breast cancer classification using deep learning convolutional neural network." Int. J. Adv. Comput. Sci. Appl 9.6 (2018): 316-332.
Heenaye-Mamode Khan, Maleika, et al. "Multi-class classification of breast cancer abnormalities using Deep Convolutional Neural Network (CNN)." Plos one 16.8 (2021): e0256500.
Mi, Weiming, et al. "Deep learning-based multi-class classification of breast digital pathology images." Cancer Management and Research (2021): 4605-4617.

License
Copyright (c) 2024 Alaa Hussein Abdulaal , Morteza Valizadeh , Baraa M. Albaker , Riyam Ali Yassin , Mehdi Chehel Amirani , A. F. M. Shahen Shah

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.