
A-Deep-Learning-Framework-for-Assessing-Physical-Rehabilitation-Exercises


The codes in this repository are based on the eponymous research project A Deep Learning Framework for Assessing Physical Rehabilitation Exercises. The proposed framework for automated quality assessment of physical rehabilitation exercises encompasses metrics for quantifying movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, techniques for dimensionality reduction, and deep neural network models for regressing quality scores of input movements via supervised learning.
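
As a toy illustration of the metric-to-score mapping described above, the sketch below fits a Gaussian mixture model to encoded repetitions and squashes the resulting log-likelihoods into (0, 1). The synthetic data, the mixture size, and the logistic scoring function are illustrative assumptions, not the paper's exact choices.

```python
# Illustrative sketch only: the synthetic data, the 3-component GMM, and the
# logistic scoring function are assumptions, not the paper's exact choices.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
correct = rng.normal(0.0, 1.0, size=(90, 4))     # stand-ins for encoded repetitions
incorrect = rng.normal(1.5, 1.0, size=(90, 4))

# Performance metric: per-repetition log-likelihood under a GMM of correct movements
gmm = GaussianMixture(n_components=3, random_state=0).fit(correct)
loglik = gmm.score_samples(np.vstack([correct, incorrect]))

# Scoring function: a monotonic map of the metric into (0, 1)
scores = 1.0 / (1.0 + np.exp(-(loglik - loglik.mean()) / loglik.std()))
print(scores[:90].mean(), scores[90:].mean())    # correct reps score higher
```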

Data

The UI-PRMD dataset of rehabilitation movements is used. It contains full-body skeletal joint displacements for 10 movements performed by 10 healthy subjects. The codes employ the 117-dimensional skeletal angles acquired with a Vicon optical tracker for the deep squat exercise. The subset of movement data used in the paper can be downloaded from the "Reduced Data Set" section of the dataset website.
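
As a hedged sketch, the preprocessed CSVs in the Data folder can be reshaped into a (repetitions, timesteps, dimensions) array; this assumes frames are stacked row-wise, and the aligned length T below is an assumption, not a value taken from the repository.

```python
# Minimal loading sketch; the aligned length T is an assumption,
# not a value taken from the repository.
import numpy as np

n_dims = 117                              # Vicon skeletal angles, deep squat
T = 240                                   # assumed common length after alignment
data = np.loadtxt("Data/Data_Correct.csv", delimiter=",")
episodes = data.reshape(-1, T, n_dims)    # (repetitions, timesteps, dimensions)
print(episodes.shape)
```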

Neural Network Codes

The codes were developed using the Keras library.

  • SpatioTemporalNN_Vicon - the proposed deep spatio-temporal model in the paper (a minimal Keras sketch in this spirit appears after this list).
  • CNN_Vicon - a basic convolutional neural network for predicting movement quality scores.
  • RNN_Vicon - a basic recurrent neural network for predicting movement quality scores.
  • Autoencoder_Dims_Reduction - a model for reducing the dimensionality of Vicon-captured movement data.
  • SpatioTemporalNN_Kinect - implementation of the proposed deep learning model for predicting quality scores on Kinect-captured data.
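
For orientation, here is a minimal Keras sketch in the spirit of a spatio-temporal regression model: temporal convolutions over the movement sequence, a recurrent layer, and a single quality-score output. The layer types and sizes are illustrative assumptions and do not reproduce the paper's multi-branch architecture.

```python
# Illustrative spatio-temporal regressor; layer sizes are assumptions,
# not the paper's architecture.
from tensorflow.keras import layers, models

timesteps, n_dims = 240, 117   # assumed aligned length and Vicon dimensionality

inputs = layers.Input(shape=(timesteps, n_dims))
x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(pool_size=2)(x)          # coarser temporal scale
x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(x)
x = layers.LSTM(80)(x)                           # summarize the temporal dynamics
output = layers.Dense(1, activation="linear")(x) # movement quality score

model = models.Model(inputs, output)
model.compile(optimizer="adam", loss="mse")
model.summary()
```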

Distance Functions

The codes were developed using MATLAB; a Python sketch of one representative sequence distance appears after the list below.

  • Maximum Variance - distance functions on reduced-dimensionality data using the maximum variance approach.
  • PCA - distance functions on reduced-dimensionality data using PCA.
  • Autoencoder - distance functions on reduced-dimensionality data using an autoencoder neural network.
  • No Dimensionality Reduction - distance functions on full-body skeletal data (117 dimensions).
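
The repository's distance functions are MATLAB implementations. The following Python sketch of dynamic time warping illustrates what a sequence distance on (frames, dimensions) data computes; it is not necessarily one of the repository's distance functions, and is not a port of its code.

```python
# Illustrative sequence distance (dynamic time warping), shown only to
# demonstrate the idea; not a port of the repository's MATLAB code.
import numpy as np

def dtw_distance(a, b):
    """DTW between sequences a (n, d) and b (m, d) with Euclidean frame cost."""
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

rng = np.random.default_rng(0)
print(dtw_distance(rng.normal(size=(50, 4)), rng.normal(size=(60, 4))))
```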

Please see the List of Files and Functions document for a complete list and brief descriptions of all files in the repository.

Use

  • Run "Prepare_Data_for_NN" to read the movements data, and perform pre-processing steps, such as length alignment and centering. Alternatively, skip this step, the outputs are saved in the Data folder (Data_Correct.csv and Data_Incorrect.csv).
  • Run "Autoencoder_Dims_Reduction" to reduce the dimensionality of the movement data. Alternatively, skip this step, the outputs are saved in the Data folder (Autoencoder_Output_Correct.csv and Autoencoder_Output_Incorrect.csv).
  • Run "Prepare_Labels_for_NN" to generate quality scores for the individual movement repetitions. Alternatively, skip this step, the outputs are saved in the Data folder (Labels_Correct.csv and Labels_Incorrect.csv)
  • Run "SpatioTemporalNN_Vicon" to train the model and predict movement quality scores on the Vicon-captured movement data.
  • Run "SpatioTemporalNN_Kinect" to train the model and predict movement quality scores on Kinect-captured movement data.

A slightly different version of the codes with verified reproducibility is also published on Code Ocean and can be accessed via the following link: https://codeocean.com/capsule/7213982/tree/v3

Citation

If you use the codes or the methods in your work, please cite the following article:

@ARTICLE{Liao2020,
title={A Deep Learning Framework for Assessing Physical Rehabilitation Exercises},
author={Liao, Y. and Vakanski, A. and Xian, M.},
journal={IEEE Transactions on Neural Systems and Rehabilitation Engineering}, 
year={2020},
month={Feb.},
volume={28},
number={2},
pages={468-477},
}

License

MIT License

Acknowledgments

This work was supported by the Institute for Modeling Collaboration and Innovation (IMCI) at the University of Idaho through NIH Award #P20GM104420.

Contact or Questions

A. Vakanski, e-mail: vakanski at uidaho.edu.


Issues

Cannot be reproduced

Dear author, could you share the data and score labels for the 10 movements? My reproduction results are poor; there may be a problem with my data. I processed the Reduced Data with the same method but got bad results, for example the results for EX1 below.
(Attached plots: GMM_Loglikelihood_Scores_test, GMM_Movement_Quality_Scores_test.)

problem when executing SpatioTemporalNN_Vicon

Hi,
I have read your paper and I am working with your code. I ran all the necessary functions to execute the Vicon pipeline in Google Colab. Unfortunately, the last one, SpatioTemporalNN_Vicon, raises an error in part 13 of the code that I cannot solve.

While executing this line:

```python
concat_trunk = TempPyramid(seq_input_trunk, seq_input_trunk_2, seq_input_trunk_4, seq_input_trunk_8, timesteps, n_dim1)
```

the following exception is raised:
```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
     54 print(conv)
     55
---> 56 concat_trunk = TempPyramid(seq_input_trunk, seq_input_trunk_2, seq_input_trunk_4, seq_input_trunk_8, timesteps, n_dim1)
     57 concat_left_arm = TempPyramid(seq_input_left_arm, seq_input_left_arm_2, seq_input_left_arm_4, seq_input_left_arm_8, timesteps, n_dim2)
     58 concat_right_arm = TempPyramid(seq_input_right_arm, seq_input_right_arm_2, seq_input_right_arm_4, seq_input_right_arm_8, timesteps, n_dim2)

6 frames
<ipython-input> in TempPyramid(input_f, input_2, input_4, input_8, seq_len, n_dims)
     16
     17 #### Recurrent layers
---> 18 x = concatenate([conv1, conv2, conv3, upsample1], axis=-1)
     19 return x

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/merge.py in concatenate(inputs, axis, **kwargs)
    929     A tensor, the concatenation of the inputs alongside axis `axis`.
    930     """
--> 931   return Concatenate(axis=axis, **kwargs)(inputs)
    932
    933

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    924     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
    925       return self._functional_construction_call(inputs, args, kwargs,
--> 926                                                 input_list)
    927
    928     # Maintains info about the `Layer.call` stack.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1096         # Build layer if applicable (if the `build` method has been
   1097         # overridden).
-> 1098         self._maybe_build(inputs)
   1099         cast_inputs = self._maybe_cast_inputs(inputs, input_list)
   1100

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _maybe_build(self, inputs)
   2641       # operations.
   2642       with tf_utils.maybe_init_scope(self):
-> 2643         self.build(input_shapes)  # pylint:disable=not-callable
   2644       # We must set also ensure that the layer is marked as built, and the build
   2645       # shape is stored since user defined build functions may not be calling

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/tf_utils.py in wrapper(instance, input_shape)
    321   if input_shape is not None:
    322     input_shape = convert_shapes(input_shape, to_tuples=True)
--> 323   output_shape = fn(instance, input_shape)
    324   # Return shapes from `fn` as TensorShapes.
    325   if output_shape is not None:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/merge.py in build(self, input_shape)
    517           shape[axis] for shape in shape_set if shape[axis] is not None)
    518       if len(unique_dims) > 1:
--> 519         raise ValueError(err_msg)
    520
    521   def _merge_function(self, inputs):

ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 68, 192), (None, 68, 192), (None, 34, 192), (None, 34, 192)]
```

Would you please help me solve this problem?
Thanks for your attention.
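
For context, the error reflects Keras' concatenation shape rule: all inputs must match on every axis except the concat axis, and here two branches have 68 timesteps while two have 34, likely because a pooled branch was not upsampled back to the original length or the sequence length is not divisible by the pyramid's pooling factors. A hedged reproduction, with the sizes (68, 192) taken from the trace and everything else illustrative:

```python
# Hedged reproduction of the shape rule behind the error above; the sizes
# (68, 192) come from the trace, everything else is illustrative.
from tensorflow.keras import Input, layers

t = Input(shape=(68, 192))
half = layers.MaxPooling1D(pool_size=2)(t)   # (None, 34, 192)
# layers.concatenate([t, half], axis=-1)     # would raise: 68 vs 34 on axis 1
back = layers.UpSampling1D(size=2)(half)     # (None, 68, 192): lengths match again
merged = layers.concatenate([t, back], axis=-1)
print(merged.shape)                          # (None, 68, 384)
```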

About the joints used in the data

For the Vicon data, 117 dimensions (30 joints) are used, and for the Kinect data, 88 dimensions (22 joints) are used. Can you tell me which joints are selected here? I could only find this kind of picture on the Internet, but it seems to have a different number of joints from your research.
(Attached: skeleton joint diagram.)
