Gesture recognition algorithm using OpenCV, a string of feature graphs, HyperNEAT with novelty search, and resilient backpropagation

Fig. 1: Gesture classifier
Figure 1 shows a gesture classifier that processes raw video input in four stages.
  1. Pre-defined features are extracted from every frame of the video with OpenCV (see the first sketch after this list).
  2. The features of each frame are compared with those of every other frame, and the differences are recorded in an affinity matrix that describes the similarity of every frame to every other frame in the video.
  3. Single-layer detector neural networks are evolved with novelty search to extract distinctive features from the affinity matrix (see the second sketch after this list).
  4. The detector outputs are fed into a final classifier neural network that is trained to classify the gestures in the video (see the third sketch after this list).
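
The text does not specify which pre-defined features are used or how frame differences are measured, so the following is a minimal sketch of stages 1 and 2, assuming per-frame HOG descriptors and a Gaussian similarity over pairwise Euclidean distances; the function names and parameters are illustrative, not the pipeline's actual choices.

```python
import cv2
import numpy as np
from scipy.spatial.distance import cdist

def frame_features(video_path):
    """Extract one HOG descriptor per video frame (assumed feature choice)."""
    hog = cv2.HOGDescriptor()            # default 64x128 detection window
    cap = cv2.VideoCapture(video_path)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (64, 128))      # match the HOG window size
        feats.append(hog.compute(gray).ravel())
    cap.release()
    return np.array(feats)

def affinity_matrix(feats):
    """Similarity of every frame with every other frame (stage 2)."""
    d = cdist(feats, feats)                   # pairwise Euclidean distances
    return np.exp(-d / (d.mean() + 1e-9))     # larger value = more similar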
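
Stage 3 in the actual pipeline evolves the detectors with HyperNEAT; the stand-in below keeps the topology fixed (one sigmoid unit applied to the rows of the affinity matrix) and only illustrates the novelty-search loop, where fitness is replaced by distance to the k nearest behaviours seen so far. All names and hyper-parameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_output(w, A):
    """Single-layer detector: one sigmoid unit per row of the affinity matrix A."""
    return 1.0 / (1.0 + np.exp(-A @ w))

def novelty(behavior, others, k=5):
    """Mean distance to the k nearest behaviours (population + archive)."""
    d = np.sort([np.linalg.norm(behavior - o) for o in others])
    return d[:k].mean()

def evolve_detectors(A, pop_size=20, generations=50, n_keep=8):
    """Evolve detector weights whose responses to A are maximally novel."""
    pop = [rng.normal(size=A.shape[1]) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        behaviors = [detector_output(w, A) for w in pop]
        scores = [novelty(b, [x for x in behaviors if x is not b] + archive)
                  for b in behaviors]
        order = np.argsort(scores)[::-1]
        archive.append(behaviors[order[0]])         # most novel joins the archive
        parents = [pop[i] for i in order[:pop_size // 2]]
        pop = parents + [p + rng.normal(scale=0.1, size=p.shape) for p in parents]
    return archive[-n_keep:]   # behaviours used as features for the classifier
```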
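
The final classifier's training method is named only as resilient backpropagation; below is the core per-weight update of the Rprop- variant, which adapts an individual step size for every weight from the sign of successive gradients. How the gradients are obtained (network architecture, loss function) is not specified and is left out.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One Rprop- update: adapt per-weight step sizes from gradient sign changes."""
    sign = grad * prev_grad
    step = np.where(sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign < 0, 0.0, grad)   # Rprop-: skip the update after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step
```

The function would be called once per epoch with the full-batch gradient, carrying `prev_grad` and `step` over between calls (with `step` initialised to a small constant such as 0.1).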

Fig. 2: Gesture classifier evaluation

Figure 2 shows the evaluation results of the gesture classifier. 
  • Small datasets of human and robot subjects were created for controlled testing of the algorithm.
  • The algorithm was also tested on the public ChaLearn [3] dataset.
  • The algorithm was not tailored to any of the datasets.

Implementation: https://github.com/mocialov/MSc-in-Robotics-and-Autonomous-Systems/tree/master/gesture_recognition_pipeline