Fig. 1: Gesture classifier
- Pre-defined features are extracted from the video.
- The extracted features of every frame are compared pairwise, and the differences are recorded in an affinity matrix that describes the similarity of every frame to every other frame in the video (see the first sketch after this list).
- Single-layer detector neural networks are evolved using novelty search to extract distinctive features from the affinity matrix (see the second sketch after this list).
- The features extracted by the detectors are fed into the final classifier neural network, which is trained to classify the gestures in the video.
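
A minimal sketch of how such an affinity matrix could be built from per-frame feature vectors. The Euclidean distance and the max-normalisation are assumptions chosen for illustration, not necessarily the exact formulation used in the implementation:

```python
import numpy as np

def affinity_matrix(frame_features: np.ndarray) -> np.ndarray:
    """Build an N x N affinity matrix from per-frame feature vectors.

    frame_features: array of shape (n_frames, n_features), one row per frame.
    Returns a matrix whose entry (i, j) measures the similarity of frame i
    to frame j (1.0 on the diagonal, smaller for dissimilar frames).
    """
    # Pairwise Euclidean distances between all frames.
    diffs = frame_features[:, None, :] - frame_features[None, :, :]
    distances = np.linalg.norm(diffs, axis=-1)

    # Convert distances to similarities in [0, 1]; this scaling is an
    # assumption for illustration, not the repository's exact formula.
    scale = distances.max() or 1.0
    return 1.0 - distances / scale


# Example: 10 frames, each described by 5 pre-defined features.
features = np.random.rand(10, 5)
A = affinity_matrix(features)
print(A.shape)  # (10, 10)
```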
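A loose, illustrative sketch of evolving single-layer detectors with novelty search over the affinity matrix. The detector shape (a tanh layer over the flattened matrix), the mutation operator, and the k-nearest-neighbour novelty score are all assumptions; the evolutionary operators in the actual implementation may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_output(weights: np.ndarray, affinity: np.ndarray) -> np.ndarray:
    """Single-layer detector: weighted sums of the flattened affinity
    matrix passed through a tanh non-linearity."""
    return np.tanh(affinity.reshape(-1) @ weights)

def novelty(behaviour: np.ndarray, archive: list, k: int = 5) -> float:
    """Novelty score: mean distance to the k nearest behaviours seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(np.linalg.norm(behaviour - b) for b in archive)
    return float(np.mean(dists[:k]))

def evolve_detectors(affinity: np.ndarray, n_detectors: int = 4,
                     pop_size: int = 20, generations: int = 30,
                     out_dim: int = 8) -> list:
    """Keep the detectors whose responses to the affinity matrix are most
    novel relative to an archive of previously seen responses."""
    in_dim = affinity.size
    population = [rng.normal(size=(in_dim, out_dim)) for _ in range(pop_size)]
    archive, detectors = [], []
    for _ in range(generations):
        behaviours = [detector_output(w, affinity) for w in population]
        scores = [novelty(b, archive) for b in behaviours]
        best = int(np.argmax(scores))
        archive.append(behaviours[best])
        detectors.append(population[best])
        # Next generation: mutate the most novel individual.
        population = [population[best] + 0.1 * rng.normal(size=(in_dim, out_dim))
                      for _ in range(pop_size)]
    return detectors[-n_detectors:]
```

The outputs of the kept detectors would then be concatenated into a fixed-length feature vector and fed to the final classifier network.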
Fig. 2: Gesture classifier evaluation
Figure 2 shows the evaluation results of the gesture classifier.
- Small datasets of human and robot subjects were created for controlled testing of the algorithm.
- The algorithm was also tested on the public ChaLearn [3] dataset.
- The algorithm was not tailored to any of the individual datasets.
Implementation: https://github.com/mocialov/MSc-in-Robotics-and-Autonomous-Systems/tree/master/gesture_recognition_pipeline