
MICRO-EXPRESSION IMAGE ANALYSIS

Autism Severity Level using Micro-Expressions

  

Attention Residual Network for Micro-Expression Recognition using Image Analysis. A Deep Learning technique to recognize ASD intensity from children's Micro-Expressions.

 

PUBLICATION

Springer Journal

Status - Accepted, awaiting publication

FACIAL EXPRESSION IMAGE ANALYSIS TO CLASSIFY HIGH AND LOW LEVEL AUTISM SPECTRUM DISORDERED KIDS USING ATTENTION MECHANISM EMBEDDED DEEP LEARNING TECHNIQUE

The Second International Conference on Advances in Electrical and Computer Technologies, 2020 - Springer

 


Abstract

Autism Spectrum Disorder (ASD) is one of the developmental disorders found in early childhood. Children suffering from ASD are affected in the way they act in society and interact with others, and they are usually associated with either excessive or diminished emotional facial expressions. The primary focus of this paper is to use advanced deep learning techniques to classify ASD kids into two classes, namely Low ASD kids and High ASD kids, where Low and High denote the intensity of the disorder. The proposed work achieves this classification with computer vision techniques by learning from the children's facial expressions. Several videos of Low ASD and High ASD kids were collected, and each frame of these videos was parsed into images to train and test an Attention-based Residual Neural Network. The proposed model introduces a novel method of embedding an attention mechanism in a Residual Convolutional Neural Network, which carries the most significant features from the initial layers through to the very end with very little distortion. This is done by the attention block, which weights every feature according to its significance; features with higher weights are passed to the deep layers of the network without loss of information. Thus, the proposed work effectively classifies Low and High ASD kids from the collected videos, yielding a state-of-the-art accuracy of around 94%.
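The attention block described above can be sketched as a residual block whose output channels are re-weighted by learned significance scores before the skip connection. The following PyTorch snippet is an illustrative sketch only, assuming a squeeze-and-excitation style channel-attention gate; the paper's exact layer configuration, filter counts, and attention design are not specified here, so all names and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn


class AttentionResidualBlock(nn.Module):
    """Residual block with a channel-attention gate (illustrative sketch).

    The attention branch produces per-channel weights in [0, 1], so the
    most significant features are emphasised before being added back
    through the skip connection, which carries early-layer features to
    deeper layers with little distortion.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Attention gate: global pooling -> bottleneck MLP -> sigmoid weights
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv(x)
        out = out * self.attention(out)  # weight each channel by significance
        return self.relu(out + x)        # skip connection preserves features


# Hypothetical binary head for the Low-vs-High ASD classification task
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    AttentionResidualBlock(32),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),  # two classes: Low ASD, High ASD
)
```

A batch of RGB face frames of shape `(N, 3, H, W)` passes through the block unchanged in spatial size, and the final linear layer emits two logits per frame for the Low/High decision.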

 

@SundarAnand