University of Twente Student Theses

Automatic IMU-to-segment labelling using deep learning approaches

Li, R. (2022) Automatic IMU-to-segment labelling using deep learning approaches.

Full text not available from this repository.

Full Text Status: Access to this publication is restricted
Embargo date: 15 December 2024
Abstract: Human motion capture, the process of recording people's movements, contributes to kinematics research, medical rehabilitation, and augmented reality, and is commercially successful in video game development, the film-making industry, and beyond. The captured information is used to animate 2-D or 3-D character models. Our work focuses on inertial motion tracking systems composed of miniature inertial sensors, biomechanical models, and sensor fusion algorithms. An inertial measurement unit (IMU) consists of an accelerometer and a gyroscope, and some include a magnetometer as well. IMUs are valued in motion tracking for being affordable, reliable, and energy-efficient, and they have been under development since the early 1930s. A common wearable motion capture set contains several wireless IMUs that stream real-time data once installed.

However, the current installation process for a wearable full-body IMU set is inefficient and prone to human error. Because sensor-to-segment placement and alignment are crucial to the reliability and informativeness of the recording, each wearable motion tracker must be placed at a predefined location with a specific orientation relative to its segment. Assigning the numbered IMUs to their corresponding segments can be time-consuming and error-prone when, for example, 20 IMUs are involved. With the Xsens MTw Awinda system used in this project, attaching all 17 IMUs to their designated segments takes 250 seconds, whereas random assignment, with no restriction on which sensor pairs with which body segment, takes 180 seconds. On the one hand, incorrect placement due to unintentional human error directly causes the visualization of the sensor data in the supporting software to fail (twisted and misplaced body parts, etc.). On the other hand, if calibration passes because the switched IMUs are kinematically similar (for example, those on the left and right shoulder), the subsequent recording is meaningless because it rests on a mislabelled data source. Moreover, mislabelled data are difficult to detect without referring back to the previously used hardware.

Therefore, to streamline the installation process, automatic IMU-to-segment (I2S) assignment methods based on recorded inertial data have been proposed. Traditional methods rely on a large number of manually selected features and shallow machine learning. Shallow machine learning, however, has some common shortcomings: (a) manually selecting features is time-consuming; (b) the feature set is case-specific and subjective, requiring prior knowledge; (c) commonly used features, such as the magnitude of acceleration, vary from individual to individual and are not discriminative enough, which makes shallow machine learning-based models less robust. Deep learning methods, which learn features directly from the data, are therefore considered to address these problems. Previous research has sought to minimize the manual labour in the installation stage of wearing IMUs, and earlier work successfully applied deep learning to the I2S assignment task. However, the CNN+GRU model that handles an arbitrary number of IMUs has only been applied to lower-body configurations. Another method, combining PointNet with an attention model to extract sensor-wise interdependencies, surpasses the CNN+GRU model and works for full-body configurations as well, but it lacks flexibility in the number of IMUs.
In this project, we explore whether convolution-based models can reduce manual feature selection while remaining flexible in the number of test IMUs in full-body I2S assignment tasks, based on acceleration, angular velocity, and rotation quaternion. The full-body motion capture system used, Xsens MTw Awinda, consists of 17 IMUs; currently, each IMU carries a sticker indicating its corresponding segment. The training dataset, XsensMotion, includes 69 trials from around 30 subjects, and the self-collected test set comprises 30 trials from 5 subjects who do not appear in XsensMotion. To increase prediction accuracy, we also apply data processing methods to the dataset, including heading correction and a walking-motion filter. Long trials are sliced into shorter sequences of 2 seconds using the sliding window method. The proposed model comprises three convolutional layers, one GRU layer, and three linear layers; dissimilar hyper-parameter settings across the convolutional layers are designed to realize hierarchical feature merging. Besides the inputs usually used in existing approaches (acceleration and angular velocity), this project explores the effectiveness of the rotation quaternion, which to the best of our knowledge has not been done by previous researchers. Biased trials and special segments are studied specifically. We show that the hierarchical feature merging model with the walking-motion filter performs best among the evaluated models under all three configurations, and its performance is comparable to previous research without losing system flexibility. Neither adding the rotation quaternion to the input nor applying heading correction improves the overall performance. To further enhance performance, we apply majority voting to the predictions of the sliced windows to generate one final label for the whole trial. The trial-wise performance on the lower-body configuration (left and right foot, upper leg, lower leg, and pelvis) reaches 100% accuracy with the proposed model.
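To make the slicing step concrete, a minimal sketch of the sliding-window segmentation follows. The 60 Hz sampling rate and the 1-second stride are illustrative assumptions; the abstract only fixes the 2-second window length.

```python
import numpy as np

def slice_windows(trial, fs=60, win_sec=2.0, stride_sec=1.0):
    """Slice one long trial into fixed-length windows.

    trial: array of shape (T, n_imus, n_channels), e.g. acc + gyro.
    fs (60 Hz) and stride_sec (1 s) are illustrative assumptions;
    only the 2-second window length comes from the abstract.
    """
    win = int(win_sec * fs)      # samples per window (120 here)
    step = int(stride_sec * fs)  # hop between window starts
    starts = range(0, trial.shape[0] - win + 1, step)
    return np.stack([trial[s:s + win] for s in starts])
```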
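The described network (three convolutional layers, one GRU layer, three linear layers) could look roughly like the PyTorch sketch below. Channel counts, kernel sizes, and the hidden size are guesses, as is classifying each IMU's window independently; the abstract fixes only the layer counts, the 17 segment classes, and the idea of dissimilar convolutional settings for hierarchical feature merging.

```python
import torch
import torch.nn as nn

class ConvGRUClassifier(nn.Module):
    """Sketch of a 3xConv + GRU + 3xLinear segment classifier.

    One window of one IMU (6 channels: acc + gyro) is mapped to one
    of 17 segment labels, so the model stays agnostic to how many
    IMUs are worn. All sizes below are illustrative assumptions.
    """
    def __init__(self, in_ch=6, n_segments=17):
        super().__init__()
        self.conv = nn.Sequential(  # dissimilar kernel sizes (assumed)
            nn.Conv1d(in_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(128, 64, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_segments),
        )

    def forward(self, x):                  # x: (batch, in_ch, time)
        h = self.conv(x).transpose(1, 2)   # -> (batch, time, 128)
        _, last = self.gru(h)              # last: (1, batch, 64)
        return self.head(last.squeeze(0))  # -> (batch, n_segments)
```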
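Finally, the trial-level majority vote over window predictions can be expressed in a few lines. The `window_logits` interface and the argmax-then-bincount scheme are assumptions for illustration, not the thesis code.

```python
import numpy as np

def trial_label(window_logits):
    """Majority vote over per-window predictions for a single IMU.

    window_logits: (n_windows, n_segments) model outputs for all
    windows of one trial (an assumed interface).
    """
    votes = window_logits.argmax(axis=1)      # hard label per window
    return int(np.bincount(votes).argmax())   # most frequent label wins
```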
Item Type: Essay (Master)
Faculty: EEMCS: Electrical Engineering, Mathematics and Computer Science
Programme: Computer Science MSc (60300)
Link to this item: https://purl.utwente.nl/essays/93919
