Feature Extraction Techniques for Facial Expression Recognition (FER)
DOI: https://doi.org/10.58564/IJSER.2.3.2023.85

Keywords: Feature extraction, Handcrafted method, Deep learning method, Fisher Vector Encoding

Abstract
Facial expression recognition (FER) is a significant area of study in computer vision and affective computing, with numerous applications such as human-computer interaction, emotion detection, and behavior analysis. Feature extraction is a crucial stage in FER systems, as it extracts pertinent information from facial images in order to accurately represent the various facial expressions. The purpose of this paper is to investigate and compare the feature extraction techniques used in facial expression recognition, along with their merits, limitations, and impact on overall system performance. Using benchmark datasets and performance metrics, the evaluation provides insight into the efficacy of the various feature extraction methods. We also propose a facial expression recognition method that integrates deep learning, Principal Component Analysis (PCA), and the Gray Level Co-occurrence Matrix (GLCM), with a 1D-CNN used for classification.
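The pipeline outlined in the abstract chains GLCM texture features, PCA dimensionality reduction, and a 1D-CNN classifier. A minimal NumPy sketch of the first two stages is shown below; the quantization level, pixel offset, chosen GLCM statistics, and PCA dimensionality are illustrative assumptions, not the settings reported in the paper, and the final 1D-CNN stage is only indicated in a comment.

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for a single pixel offset."""
    q = (img.astype(np.float64) / 256 * levels).astype(int)  # quantize to `levels` gray bins
    m = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[q[r, c], q[r + dr, c + dc]] += 1  # count co-occurring gray-level pairs
    return m / m.sum()

def glcm_features(img):
    """Contrast, energy, and homogeneity: common Haralick-style GLCM statistics."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def pca_reduce(X, k):
    """Project mean-centered feature vectors onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

# Toy usage: 10 random 48x48 grayscale "face" images -> 3 GLCM features -> 2 PCA dims.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(10, 48, 48))
feats = np.stack([glcm_features(im) for im in imgs])
reduced = pca_reduce(feats, k=2)   # these rows would then feed the 1D-CNN classifier
print(reduced.shape)  # -> (10, 2)
```

In practice the GLCM is usually computed over several offsets and directions and the statistics concatenated before PCA; a library such as scikit-image provides optimized equivalents of the hand-rolled `glcm` above.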
License
Copyright (c) 2023 Hadeel Mohammed, Mohammed Nasser Hussain, Faiz Al Alawy

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.