Gesture recognition learning paper now on TechRxiv
Andy’s paper on incremental learning of gesture recognition using hyperdimensional computing is now available on TechRxiv. Read it here:
A. Zhou, R. Muller, and J. Rabaey, “Incremental Learning in Multiple Limb Positions for Electromyography-Based Gesture Recognition using Hyperdimensional Computing”. TechRxiv, 29-Sep-2021, doi: 10.36227/techrxiv.16643257.v1.
Prosthetic control for rehabilitation, among many other applications, can leverage in-sensor hand gesture recognition, in which lightweight machine learning models for classifying electromyogram (EMG) signals are embedded on miniature, low-power devices. While research efforts have demonstrated high accuracy in controlled settings, these methods have yet to make a significant commercial or clinical impact due to the wide variety of scenarios and situational contexts faced during everyday use. Typical static models suffer from EMG signal variation caused by the changing contexts in which they are deployed. Here, we propose an incremental learning algorithm using hyperdimensional (HD) computing that can efficiently learn gesture patterns performed in new limb positions, a context change that normally degrades classification accuracy significantly. As a prototype-based learning algorithm, HD computing enables memory- and computation-efficient incorporation of new training examples into the model while preserving information about already learned contexts. We present various configurations of the incremental HD classifier, allowing system designers to trade classification performance for implementation efficiency as measured through memory footprint. Incremental learning experiments with data from five subjects show that HD computing can achieve accuracies similar to incrementally trained SVM and LDA classifiers while requiring a fraction of the memory allocation.
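For readers unfamiliar with HD computing, the prototype-based learning the abstract describes can be sketched roughly as follows. This is a toy illustration under assumed simplifications, not the paper's implementation: the dimensionality, the symbol-bundling encoder, and the class names are all placeholders, and the paper's EMG feature encoding and model configurations are not reproduced here.

```python
import random

D = 1000  # hypervector dimensionality (kept small here; HD systems often use ~10,000)

def random_hv(rng):
    """Random bipolar hypervector with entries drawn from {-1, +1}."""
    return [rng.choice((-1, 1)) for _ in range(D)]

class IncrementalHDClassifier:
    """Toy prototype-based HD classifier.

    Each class is represented by a prototype: the running element-wise sum
    of the encoded hypervectors of its training examples. Incremental
    learning then amounts to adding newly encoded examples into the
    existing prototype, which is why previously learned contexts are
    preserved rather than overwritten.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.item_memory = {}   # symbol -> fixed random hypervector
        self.prototypes = {}    # class label -> accumulated sum vector

    def _encode(self, symbols):
        # Bundle (element-wise add) the item hypervectors of the input symbols.
        total = [0] * D
        for s in symbols:
            if s not in self.item_memory:
                self.item_memory[s] = random_hv(self.rng)
            total = [t + h for t, h in zip(total, self.item_memory[s])]
        return total

    def update(self, symbols, label):
        # Incremental update: fold the encoded example into the class prototype.
        enc = self._encode(symbols)
        proto = self.prototypes.setdefault(label, [0] * D)
        self.prototypes[label] = [p + e for p, e in zip(proto, enc)]

    def classify(self, symbols):
        # Nearest prototype by dot-product similarity.
        enc = self._encode(symbols)
        return max(
            self.prototypes,
            key=lambda lbl: sum(p * e for p, e in zip(self.prototypes[lbl], enc)),
        )

# Hypothetical usage: symbols stand in for quantized EMG features,
# labels for gestures.
clf = IncrementalHDClassifier(seed=42)
clf.update(["a", "b", "c"], "fist")
clf.update(["x", "y", "z"], "open")
print(clf.classify(["a", "b"]))  # similar to the "fist" prototype
```

Because the model state is just one integer vector per class (plus the item memory), both the memory footprint and the cost of an update grow slowly, which is the property the abstract trades against classification accuracy.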