A DEEP LEARNING FRAMEWORK FOR REAL-TIME STRESS DETECTION USING FACIAL EXPRESSIONS

Authors

  • N. Nithya, Department of Data Science, SRM Institute of Science and Technology, Ramapuram Campus, Chennai, India
  • A. Althaf Ali, Department of MCA, Madanapalle Institute of Technology & Science (MITS), Andhra Pradesh, India
  • S. Parvathi, Department of CSE, Erode Sengunthar Engineering College, Perundurai, India
  • P. Mohamed Sajid, Department of EEE, C. Abdul Hakeem College of Engineering and Technology, Melvisharam, India

DOI:

https://doi.org/10.30572/2018/KJE/170239

Keywords:

Stress detection, facial expression recognition, VGG, attention, multi-task learning, computer vision, affective computing

Abstract

Given its direct effects on human health, workplace productivity, and general well-being, stress detection has emerged as a crucial area of affective computing research. Traditional stress assessment methods frequently rely on self-reports and physiological sensors, which are subjective, invasive, or expensive to deploy at scale. This study proposes an Enhanced VGG-Based Approach for Facial Expression-Driven Stress Detection (EVSD-Net), which uses subtle facial cues to identify stress levels accurately and thereby overcome these limitations. The proposed framework enhances the standard VGG16 architecture by adding lightweight convolutional blocks to eliminate unnecessary parameters, a hybrid channel–spatial attention mechanism to highlight stress-relevant regions (such as forehead wrinkles, tight lips, and strained eyes), and a fine-tuned transfer learning strategy initialized with pretrained ImageNet weights. A hybrid classification layer that integrates fully connected (FC) layers with a Softmax classifier supports both binary (stressed vs. non-stressed) and multi-class stress classification (high, medium, and low). The methodology comprises four main stages: preprocessing and augmentation of the facial datasets, feature extraction using the modified VGG network, dimensionality reduction and regularization for better generalization, and Softmax classification. Experimental results were evaluated on four benchmark facial expression and stress-related datasets using Accuracy, Precision, Recall, F1-score, and AUC metrics. The results show that EVSD-Net outperforms baseline models such as standard VGG16, ResNet18, and MobileNetV3, with an overall accuracy of 97.84%. A comparison with previous research further confirms the superiority of the proposed model, showing significant improvements in detection accuracy and robustness under varying illumination, pose, and demographic conditions.
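The hybrid channel–spatial attention mechanism described in the abstract can be sketched in a minimal form. This is an illustrative sketch only: the pooling choices, sigmoid gating, and feature-map dimensions below are assumptions for demonstration, not the authors' EVSD-Net implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap):
    """Gate each channel by a weight derived from its global average."""
    pooled = fmap.mean(axis=(1, 2))        # (C,) one scalar per channel
    weights = sigmoid(pooled)              # squash to (0, 1)
    return fmap * weights[:, None, None]   # rescale every channel

def spatial_attention(fmap):
    """Gate each spatial location by a weight from the channel-averaged map."""
    pooled = fmap.mean(axis=0)             # (H, W) one scalar per location
    weights = sigmoid(pooled)
    return fmap * weights[None, :, :]      # rescale every location

def hybrid_attention(fmap):
    """Apply channel gating first, then spatial gating, on a (C, H, W) map."""
    return spatial_attention(channel_attention(fmap))

# Toy feature map standing in for a VGG16 convolutional output.
fmap = np.random.default_rng(0).normal(size=(8, 14, 14))
out = hybrid_attention(fmap)
```

Because both gates lie in (0, 1), the block can only attenuate, never amplify, activations; stress-relevant channels and regions (those with stronger pooled responses) are suppressed the least, which is the intuition behind emphasizing cues such as forehead wrinkles or strained eyes.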

References

Abbosh, Y., et al. (2025). Keratoconus detection using deep learning. Kufa Journal of Engineering, 16(2), 280–294. https://doi.org/10.30572/2018/KJE/160217

Banskota, N., Alsadoon, A., Prasad, P., Dawoud, A., Rashid, T., & Alsadoon, O. (2023). A novel enhanced convolution neural network with extreme learning machine: Facial emotional recognition in psychology practices. Multimedia Tools and Applications, 82(5), 6479–6503. https://doi.org/10.1007/s11042-022-13517-5

Ghosh, S., Priyankar, A., Ekbal, A., & Bhattacharyya, P. (2023). Multitasking of sentiment detection and emotion recognition in code-mixed Hinglish data. Knowledge-Based Systems, 260, 110182. https://doi.org/10.1016/j.knosys.2022.110182

Gupta, S., Kumar, P., & Tekchandani, R. (2023). Facial emotion recognition based real-time learner engagement detection system in online learning context using deep learning models. Multimedia Tools and Applications, 82(8), 11365–11394. https://doi.org/10.1007/s11042-022-13856-3

Halder, S., & Afsari, K. (2023). Robots in inspection and monitoring of buildings and infrastructure: A systematic review. Applied Sciences, 13(4), 2304. https://doi.org/10.3390/app13042304

Hassan, R., & Saleh, I. A. (2025). Prediction of software anomalies methods based on ensemble learning methods. Kufa Journal of Engineering, 16(3), 639–657. https://doi.org/10.30572/2018/KJE/160336

Helaly, R., Messaoud, S., Bouaafia, S., Hajjaji, M., & Mtibaa, A. (2023). DTL-I-ResNet18: Facial emotion recognition based on deep transfer learning and improved ResNet18. Signal, Image and Video Processing, 17(8), 2731–2744. https://doi.org/10.1007/s11760-023-02489-8

Jang, G., Kim, D., Lee, I., & Jung, H. (2023). Cooperative beamforming with artificial noise injection for physical-layer security. IEEE Access, 11, 22553–22573. https://doi.org/10.1109/ACCESS.2023.3248220

Karilingappa, K., Jayadevappa, D., & Ganganna, S. (2023). Human emotion detection and classification using modified Viola-Jones and convolution neural network. IAES International Journal of Artificial Intelligence, 12(1), 79–87. https://doi.org/10.11591/ijai.v12.i1.pp79-87

Kumar, R., Corvisieri, G., Fici, T. F., Hussain, S. I., Tegolo, D., & Valenti, C. (2025). Transfer learning for facial expression recognition. Information, 16(4), 320. https://doi.org/10.3390/info16040320

Li, X., Xiao, Z., Li, C., Li, C., Liu, H., & Fan, G. (2023). Facial expression recognition network with slow convolution and zero-parameter attention mechanism. Optik, 283, 170892. https://doi.org/10.1016/j.ijleo.2023.170892

Manjunath, R., et al. (2023). A smart biomedical healthcare system to detect stress using Internet of Medical Things, machine learning and artificial intelligence. International Journal of Intelligent Systems and Applications in Engineering, 11(4), 335–343. https://ijisae.org/index.php/IJISAE/article/view/3531

Mansour, H. S., et al. (2025). A novel deep 2D-CNN model for ECG-based arrhythmia diagnosis with selective attention mechanism and CWT integration. Kufa Journal of Engineering, 16(2), 423–444. https://doi.org/10.30572/2018/KJE/160225

Miolla, A., Cardaioli, M., & Scarpazza, C. (2023). Padova Emotional Dataset of Facial Expressions (PEDFE): A unique dataset of genuine and posed emotional facial expressions. Behavior Research Methods, 55(5), 2559–2574. https://doi.org/10.3758/s13428-022-02072-y

Sadhsaivam, J., Garg, S., V. A., Eakambaram, S., Dayal, S., & Kalia, A. (2024). Real-time stress detection via facial recognition using VGG16 CNN: A non-intrusive approach. In Proceedings of the 2024 3rd International Conference on Automation, Computing and Renewable Systems (ICACRS) (pp. 1012–1018). IEEE. https://doi.org/10.1109/ICACRS62842.2024.10841664

Sasikala, V., Rajeswari, T., Begum, S. N., Sri, C. D., & Sravya, M. (2022). Stress detection from sensor data using machine learning algorithms. In Proceedings of the 2022 International Conference on Electronics and Renewable Systems (ICEARS) (pp. 1335–1340). IEEE. https://doi.org/10.1109/ICEARS53579.2022.9751881

Shruti, M., Harshini, M., & Haritha, I. V. S. L. (2022). Stress level detection of IT professionals using machine learning. International Journal of Creative Research Thoughts, 10(4), 2856–2860. https://ijcrt.org/viewfull.php?p_id=IJCRT2204384

Sridhar, P., Jahnavi Pramodhani, R., Priya, S. P., & Kumar, C. K. (2023). Human stress detection using deep learning. International Journal of Biomedical Engineering and Technology, 14(2), 144–159. https://doi.org/10.1504/IJBET.2023.129654

Zainudin, Z., Hasan, S., Shamsuddin, S. M., & Argawal, S. (2021). Stress detection using machine learning and deep learning. Journal of Physics: Conference Series, 1997(1), 012019. https://doi.org/10.1088/1742-6596/1997/1/012019

Published

2026-05-02

How to Cite

Nithya, N., et al. “A DEEP LEARNING FRAMEWORK FOR REAL-TIME STRESS DETECTION USING FACIAL EXPRESSIONS”. Kufa Journal of Engineering, vol. 17, no. 2, May 2026, pp. 647-68, https://doi.org/10.30572/2018/KJE/170239.
