Graph Convolutional Networks for Exercise Motion Classification
Event Type
Technical Groups
Human Performance Modeling
System Development
Usability and System Evaluation
Time: Thursday, October 28th, 1:00pm - 1:15pm EDT
Location: Virtual 1
Description: The growth of self-fitness mobile applications has encouraged people to pursue personal fitness, creating a need to integrate self-tracking applications with exercise motion data to reduce fatigue and mitigate the risk of injury. Advances in computer vision and motion capture technologies hold great promise for improving exercise classification performance. This study investigates the performance of a supervised deep learning model, a Graph Convolutional Network (GCN), in classifying three workouts using motion data from the Azure Kinect device. The model represents the skeleton as a graph and combines GCN layers, a readout layer, and multi-layer perceptrons into an end-to-end framework for graph classification. The model achieves an accuracy of 95.86% in classifying 19,442 frames. The current model exchanges feature information only between each joint and its 1-nearest neighbor, an effect that fades in graph-level classification. Therefore, future work on improved feature utilization could enhance the model's performance in classifying inter-user exercise variation.
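The pipeline the abstract describes (skeleton as graph, GCN layers, a readout layer, and an MLP head for graph classification) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the two-layer depth, mean-pooling readout, single-linear head, and layer sizes are all assumptions made for the example.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    # With an unweighted skeleton adjacency, each joint aggregates from
    # itself and its 1-nearest neighbors, as the abstract notes.
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    # One GCN layer: propagate joint features over the normalized
    # adjacency, then apply a linear transform and ReLU.
    return np.maximum(A_hat @ H @ W, 0.0)

def classify_skeleton(A, X, W1, W2, W_out):
    # End-to-end graph classification: two GCN layers, mean readout,
    # then a linear classification head (stand-in for the MLP).
    A_hat = normalize_adjacency(A)
    H = gcn_layer(A_hat, X, W1)
    H = gcn_layer(A_hat, H, W2)
    g = H.mean(axis=0)        # readout: pool joint features into one graph vector
    return g @ W_out          # class logits, one per workout

# Toy example: a 4-joint chain skeleton with 3-D joint coordinates,
# classified into 3 hypothetical workout classes.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 8))
W_out = rng.normal(size=(8, 3))
logits = classify_skeleton(A, X, W1, W2, W_out)
print(logits.shape)  # one logit per class
```

Because the readout averages over all joints, information exchanged only between 1-nearest neighbors is diluted at the graph level, which is the limitation the abstract identifies as motivation for future work on feature utilization.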