KUMAR, Rajesh (2025). Development of a Facial Expression Recognition System for the DoctorLINK Telemedicine Platform Using Transfer Learning.
Development of a Facial Expression Recognition System for the DoctorLINK Telemedicine Platform Using Transfer Learning
KUMAR, Rajesh
2025-02-27
Abstract
A person’s facial expressions reflect their psychological state when a strong emotion is experienced. This research aims to detect emotions and characterize human behavior using deep learning techniques. Facial expressions are currently the most frequently used signal among these techniques, since they convey a person’s mental state and thoughts. Distinguishing between different emotions remains challenging, however, as there is often no established model or framework to rely on. Owing to complicating factors such as changes in illumination, noise, occlusion, and over-fitting, accurately detecting and classifying human facial expressions is also a demanding task in image analysis.

In addition to the challenges mentioned above, integrating facial expression recognition into real-world applications, such as remote patient assessments, presents further complexities. This research presents advancements in this area, specifically focusing on integrating a trained model into a real-time facial expression detection system within the DoctorLINK (eHealth Products by Italtel, n.d.) web-based application. DoctorLINK is the telemedicine platform created by ITALTEL, targeted at healthcare institutions, to monitor the health status of patients by acquiring clinical parameters with medical sensors and through real-time video communications, by which doctors can interact via video calls with patients and specialists. Furthermore, DoctorLINK seamlessly integrates medical sensors into its video call software, allowing comprehensive remote evaluations of patients’ health conditions, including monitoring of blood pressure, ECG (electrocardiogram) readings, POW (pulse oximeter waveform), temperature, oxygen saturation levels, respiratory rate, breathing patterns, and spirometry.

Firstly, this thesis achieves promising results by applying deep learning algorithms across a wide variety of applications. Secondly, the work reported here constitutes an improvement over existing ResNet-152- and VGG-19-based techniques, which can readily be applied to large regions such as the face. Thirdly, we applied a range of deep learning approaches, including convolutional neural networks (CNNs) as well as specific architectures such as VGG-19 and ResNet-152, across multiple datasets. Through this approach, we substantially improved image recognition capability by leveraging the strengths of these models in extracting features and detecting patterns relevant to emotion recognition. This study uses the CK+, FER2013, and JAFFE datasets for emotion detection and classification.

Finally, the simulation results demonstrate that the proposed VGG-19 strategy can achieve up to 98% accuracy on the CK+ dataset. This method can be used to examine an individual’s emotional patterns and psychological state. The thesis presents an effective method for detecting anger, disgust, happiness, fear, sadness, neutrality, and surprise using convolutional neural networks. The model employs a Haar cascade classifier to detect faces and recognize universal facial expressions in real time. Following this approach, we implemented our facial expression detection system within DoctorLINK.
File | Description | Type | Access | Size | Format
---|---|---|---|---|---
Thesis.pdf | PhD Thesis | Doctoral thesis | Open access | 10.66 MB | Adobe PDF
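The classification stage the abstract describes maps a CNN's seven-way output to an emotion label. The following is a minimal sketch of that decoding step only, assuming a model that emits one raw score (logit) per class; the label names, their ordering, and the function names here are illustrative assumptions, not code from the thesis:

```python
import math

# Seven emotion classes commonly used with CK+/FER2013
# (names and ordering are illustrative assumptions).
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def softmax(scores):
    """Convert raw model scores (logits) into probabilities."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decode_emotion(scores):
    """Map a 7-dimensional score vector to (label, confidence)."""
    probs = softmax(scores)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[idx], probs[idx]

# Example: a hypothetical logit vector where "happiness" dominates.
label, confidence = decode_emotion([0.1, -1.2, 0.3, 4.0, 0.2, 1.1, 0.0])
```

In the real-time setting described above, each face region found by the Haar cascade detector would be cropped, resized to the network's input shape, and passed through the trained model before this decoding step is applied to its output.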
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.