Estimating Ground Reaction Forces (GRFs) and Lower Extremity Joint Moments in Multiple Walking Environments Using Deep Learning

INTRODUCTION
Gait analyses provide useful information for assessing the performance of dynamic activities. Specifically, ground reaction forces (GRFs) and lower extremity joint moments can provide valuable information for clinicians to assess patients' walking performance in relation to their treatments. Traditional techniques for measuring GRFs and joint moments require a strict laboratory setup (motion capture and force plates) and expert post-processing, making it difficult to assess these biomechanical parameters outside the lab. To address these issues, Inertial Measurement Unit (IMU) sensors combined with machine learning algorithms have been suggested for estimating GRFs and joint moments [1]. However, these studies were confined to highly repetitive motions (treadmill and/or level-ground walking), and it remains unclear whether their algorithms can reliably estimate GRFs and joint moments in other walking conditions, including stair and slope walking. Thus, we propose a novel deep learning algorithm that enables accurate real-time estimation of GRFs and joint moments in various walking conditions using IMU sensors.

CLINICAL SIGNIFICANCE
Estimating GRFs and joint moments in various walking conditions outside the lab is essential to help clinicians assess and treat patients' pathological walking function.

METHODS
We leveraged a publicly available dataset to validate our algorithm [2]. The dataset contains multiple walking scenarios (i.e., treadmill, level-ground, slope, and stair walking) from 20 participants. We applied segmentation (gait cycle extraction) to the dataset to obtain GRFs and the corresponding joint moments. As IMU sensors were only available on the right side of the leg, we estimated GRFs and joint moments for the right leg only. We used three IMU sensors at the thigh, shank, and foot as inputs to predict the 3D GRF and the joint moments of the hip, knee, and ankle. Our deep learning model, Kinetics-Net (FM), is shown in Figure 1 with its primary building blocks. In Kinetics-Net (FM), we utilized different layers (i.e., GRU, Conv1D, Conv2D, and Fully Connected (FC)) to construct three subnetworks: GRU-Net, GRU-Conv1D-Net, and GRU-Conv2D-Net. Building subnetworks introduces diversity into the predictions and improves performance through ensembling. These subnetworks are ensembled and trained end-to-end to achieve better performance than any subnetwork alone. We also designed a Fusion Module (FM) to combine the outputs of the three subnetworks. The FM learns to weight the subnetworks appropriately to optimize the performance of Kinetics-Net, which in turn performs better than simple model averaging. In the demonstration section, we compare the results of Kinetics-Net (FM) with a simple model-averaging approach, Kinetics-Net (A).
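As a concrete illustration, the sketch below outlines one plausible PyTorch implementation of the three-subnetwork ensemble with a learned fusion module described above. It is a minimal sketch, not the published configuration: the layer and kernel sizes, the input channel count (three 6-axis IMUs, assumed 18 channels), and the output dimension (assumed 3 GRF components plus 3D moments for the hip, knee, and ankle, i.e., 12 outputs) are our assumptions for illustration.

import torch
import torch.nn as nn

class GRUNet(nn.Module):
    """Subnetwork 1: GRU followed by a fully connected output layer."""
    def __init__(self, in_ch, hidden, out_dim):
        super().__init__()
        self.gru = nn.GRU(in_ch, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, out_dim)

    def forward(self, x):                    # x: (batch, time, channels)
        h, _ = self.gru(x)
        return self.fc(h)                    # per-time-step prediction

class GRUConv1DNet(nn.Module):
    """Subnetwork 2: temporal Conv1D features fed into a GRU."""
    def __init__(self, in_ch, hidden, out_dim):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, hidden, kernel_size=5, padding=2)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, out_dim)

    def forward(self, x):
        z = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.gru(z)
        return self.fc(h)

class GRUConv2DNet(nn.Module):
    """Subnetwork 3: Conv2D over the (channel x time) plane, then a GRU."""
    def __init__(self, in_ch, hidden, out_dim):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=(3, 5), padding=(1, 2))
        self.gru = nn.GRU(8 * in_ch, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, out_dim)

    def forward(self, x):                    # x: (batch, time, channels)
        z = torch.relu(self.conv(x.transpose(1, 2).unsqueeze(1)))  # (B, 8, C, T)
        z = z.flatten(1, 2).transpose(1, 2)                        # (B, T, 8*C)
        h, _ = self.gru(z)
        return self.fc(h)

class FusionModule(nn.Module):
    """Learned per-output weighting of the subnetwork predictions
    (one hypothetical realization; the abstract does not specify the FM's form)."""
    def __init__(self, n_subnets, out_dim):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_subnets, out_dim))

    def forward(self, preds):                # preds: (B, T, out_dim, n_subnets)
        w = torch.softmax(self.logits, dim=0)       # weights sum to 1 per output
        return (preds * w.t()).sum(dim=-1)          # weighted combination

class KineticsNetFM(nn.Module):
    """Three subnetworks ensembled end-to-end through the Fusion Module."""
    def __init__(self, in_ch=18, hidden=64, out_dim=12):
        super().__init__()
        self.subnets = nn.ModuleList([
            GRUNet(in_ch, hidden, out_dim),
            GRUConv1DNet(in_ch, hidden, out_dim),
            GRUConv2DNet(in_ch, hidden, out_dim),
        ])
        self.fusion = FusionModule(len(self.subnets), out_dim)

    def forward(self, x):
        preds = torch.stack([net(x) for net in self.subnets], dim=-1)
        return self.fusion(preds)

# Example: a batch of 4 windows, 200 time steps, 18 assumed IMU channels.
model = KineticsNetFM()
y = model(torch.randn(4, 200, 18))           # y: (4, 200, 12)

Replacing the FM with a plain mean over the subnetwork outputs recovers the simple model-averaging baseline, Kinetics-Net (A).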

DEMONSTRATION
We used the Normalized Root Mean Square Error (NRMSE) and the Pearson Correlation Coefficient (PCC) as evaluation metrics for the proposed method. Table 1 reports the mean and standard deviation of the NRMSE and PCC of Kinetics-Net (FM) and Kinetics-Net (A) across all participants and all walking conditions. Kinetics-Net (FM) outperforms Kinetics-Net (A) for all joint moments and GRF components, which demonstrates the effectiveness of the fusion module over simple averaging for ensembling the models.
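For reference, a minimal sketch of the two metrics is shown below. Normalizing the RMSE by the range of the reference signal is our assumption; the abstract does not specify the normalization scheme, and the example signals are synthetic stand-ins.

import numpy as np

def nrmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """RMSE normalized by the range of the reference signal (assumed scheme)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (np.max(y_true) - np.min(y_true))

def pcc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Pearson correlation coefficient between reference and prediction."""
    return np.corrcoef(y_true, y_pred)[0, 1]

# Example: compare one estimated GRF curve against a force-plate reference.
t = np.linspace(0, 1, 101)                   # one normalized gait cycle
grf_ref = np.sin(np.pi * t)                  # synthetic reference curve
grf_est = grf_ref + 0.05 * np.random.randn(t.size)
print(f"NRMSE: {nrmse(grf_ref, grf_est):.3f}, PCC: {pcc(grf_ref, grf_est):.3f}")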

SUMMARY
Our end-to-end trained ensemble model with a fusion module can reliably predict lower extremity joint moments and GRFs in different walking conditions using three IMU sensors. The estimated moments and GRFs can help clinicians monitor patients without a constrained lab environment or expertise in data processing.