

RadHAR

RadHAR: Human Activity Recognition from Point Clouds Generated through a Millimeter-wave Radar

This repo contains the implementation for the paper: RadHAR: Human Activity Recognition from Point Clouds Generated through a Millimeter-wave Radar presented at the 3rd ACM Workshop on Millimeter-Wave Networks and Sensing Systems (mmNets) 2019 (co-located with MobiCom).

Authors

Akash Deep Singh, Sandeep Singh Sandha, Luis Garcia, Mani Srivastava

Dataset

Point cloud dataset collected using a TI mmWave radar. The data was collected with the ROS package for TI mmWave radars developed by radar-lab. The raw dataset is available in this repo and is partitioned into train and test folders, each of which contains one subfolder per activity class.

Format

header: 
  seq: 6264
  stamp: 
    secs: 1538888235
    nsecs: 712113897
  frame_id: "ti_mmwave"   # Frame ID used for multi-sensor scenarios
point_id: 17              # Point ID of the detecting frame (Every frame starts with 0)
x: 8.650390625            # Point x coordinates in m (front from antenna)
y: 6.92578125             # Point y coordinates in m (left/right from antenna, right positive)
z: 0.0                    # Point z coordinates in m (up/down from antenna, up positive)
range: 11.067276001       # Radar measured range in m
velocity: 0.0             # Radar measured range rate in m/s
doppler_bin: 8            # Doppler bin location of the point (total bins = num of chirps)
bearing: 38.6818885803    # Radar measured angle in degrees (right positive)
intensity: 13.6172780991  # Radar measured intensity in dB
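Each txt file in the dataset is a sequence of these messages. As a rough parsing sketch (not the repo's official loader; it assumes the key: value layout above and that a new frame begins whenever point_id resets to 0, as noted in the comment):

# Sketch: group a RadHAR-style txt dump into per-frame point lists (Python).
# Assumptions: "key: value" lines as shown above; frames delimited by point_id == 0.
def parse_frames(path):
    frames, current, point = [], [], {}
    with open(path) as f:
        for line in f:
            if ':' not in line:
                continue
            key, _, value = line.partition(':')
            key, value = key.strip(), value.split('#')[0].strip()
            if key == 'point_id':
                if point:
                    current.append(point)
                point = {}
                if value == '0' and current:   # a new frame starts at point_id 0
                    frames.append(current)
                    current = []
            elif key in ('x', 'y', 'z', 'range', 'velocity', 'bearing', 'intensity'):
                point[key] = float(value)
    if point:
        current.append(point)
    if current:
        frames.append(current)
    return frames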

For more information about the ROS package used to collect the data and a description of its message format, please see here.

Data Preprocessing

Data preprocessing is done by voxelizing the point clouds. The voxel extraction code is available here. The file has variables that must be set to the paths of the raw data folders; the paths are controlled using the variables below.

parent_dir = 'Path_to_training_or_test_data'   # path to the raw train or test folder
sub_dirs=['boxing','jack','jump','squats','walk']   # activity class subfolders
extract_path = 'Path_to_put_extracted_data'   # where the extracted voxel files are written
  • Separate train and test files, with 71.6 minutes of data in train and 21.4 minutes of data in test.
  • Voxels of dimensions 10x32x32 (where 10 is the depth).
  • Windows of 2 seconds (60 frames) having a sliding factor of 0.33 seconds (10 frames).
  • We finally get 12097 samples in training and 3538 samples in testing.
  • For deep learning classifiers, we use 20% of the training samples for validation.

Finally, the voxels have the format 60*10*32*32, where 60 represents time, 10 represents depth, and 32*32 represents the x and y dimensions.
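As an illustration of how these samples are formed (a sketch only; make_windows and frame_voxels are hypothetical names, and it assumes per-frame 10x32x32 voxel grids have already been extracted):

import numpy as np

# Sketch: turn a sequence of per-frame voxel grids (each 10x32x32) into
# overlapping samples of shape (60, 10, 32, 32).
# window = 60 frames (2 s), stride = 10 frames (0.33 s), as described above.
def make_windows(frame_voxels, window=60, stride=10):
    frames = np.asarray(frame_voxels)          # shape: (num_frames, 10, 32, 32)
    samples = [frames[start:start + window]
               for start in range(0, len(frames) - window + 1, stride)]
    return np.stack(samples)                   # shape: (num_samples, 60, 10, 32, 32)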

Classifiers:

  • SVM Classifier: Code
  • Multi-layer Perceptron (MLP) Classifier: Code
  • Bi-directional LSTM Classifier: Code
  • Time-distributed CNN + Bi-directional LSTM Classifier: Code
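As an illustration only (not the authors' exact architecture; layer sizes are placeholders), a time-distributed CNN + Bi-directional LSTM over the 60*10*32*32 voxels could be sketched in Keras like this:

from tensorflow.keras import layers, models

# Sketch of a time-distributed CNN + Bi-LSTM classifier for voxel sequences.
# Input: 60 time steps of 10x32x32 voxels with 1 channel; 5 activity classes.
# Layer sizes are illustrative, not the values used in the paper.
model = models.Sequential([
    layers.Input(shape=(60, 10, 32, 32, 1)),
    layers.TimeDistributed(layers.Conv3D(16, kernel_size=3, padding='same', activation='relu')),
    layers.TimeDistributed(layers.MaxPooling3D(pool_size=2)),
    layers.TimeDistributed(layers.Flatten()),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(128, activation='relu'),
    layers.Dense(5, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])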

Pretrained Classifiers and Preprocessed Dataset:

The pretrained Bi-directional LSTM classifier (90% accuracy) and Time-distributed CNN + Bi-directional LSTM classifier (92% accuracy), along with the preprocessed training and test datasets, are available here. The preprocessed dataset is around 70 GB. The classifier code in this repo includes the data-loading code for the preprocessed dataset. The trained classifiers expect the test data in the same format as in the classifier training code files, and a classifier can be loaded using the Keras load_model function.
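For example, loading and evaluating a pretrained classifier might look like this (a sketch; the file names and array shapes are placeholders, not the actual files in the release):

import numpy as np
from tensorflow.keras.models import load_model

# Load a pretrained classifier and evaluate it on preprocessed test voxels.
# 'TD_CNN_LSTM.h5', 'test_voxels.npy', and 'test_labels.npy' are hypothetical names.
model = load_model('TD_CNN_LSTM.h5')
X_test = np.load('test_voxels.npy')    # expected shape: (num_samples, 60, 10, 32, 32, 1)
y_test = np.load('test_labels.npy')    # one-hot labels for the 5 activity classes
loss, accuracy = model.evaluate(X_test, y_test)
print('Test accuracy: %.3f' % accuracy)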

Cite:

If you have used this code in any of your projects, please cite our paper:

@inproceedings{singh2019radhar,
  title={RadHAR: Human Activity Recognition from Point Clouds Generated through a Millimeter-wave Radar},
  author={Singh, Akash Deep and Sandha, Sandeep Singh and Garcia, Luis and Srivastava, Mani},
  booktitle={Proceedings of the 3rd ACM Workshop on Millimeter-wave Networks and Sensing Systems},
  pages={51--56},
  year={2019},
  organization={ACM}
}

radhar's People

Contributors

ozanbaskan, sandeep-iitr, thecyclone


radhar's Issues

Are there any preprocess methods adopted before voxelization?

Hi there! Your work is awe-inspiring, and the code is quite clear. I have some questions about the preprocessing. Did you apply any preprocessing methods, such as CFAR or clustering algorithms, to remove the 3D points caused by the static background and keep only the points from body movement? I also checked the ROS package you mentioned, and there is no information about this there either. I am trying to collect point clouds using a TI radar, but I find many noisy points. If the dataset is not processed, could you give me some tips on setting the radar parameters to collect such clean data?

I am looking forward to your reply. Thank you very much.

Test results

Hello, do you have the code to test the training results?

slowness on voxelization of 3D points

The voxel function takes a long time because of unnecessary looping for coordinate mapping (scaling); it takes O(N^3) time to scale all the 3D points, which slows down overall performance. I have optimized the voxel function to O(N log N) time complexity, but I don't know how to contribute since I am a beginner at open source.
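For reference, the usual way to avoid per-point Python loops is vectorized NumPy binning. A general sketch of that idea (not the optimization proposed in this issue; grid extents and names are illustrative):

import numpy as np

# Sketch: vectorized voxelization of one frame of points into a 10x32x32 count grid.
def voxelize(points, grid=(10, 32, 32), mins=None, maxs=None):
    points = np.asarray(points, dtype=float)          # shape: (N, 3)
    mins = points.min(axis=0) if mins is None else np.asarray(mins, dtype=float)
    maxs = points.max(axis=0) if maxs is None else np.asarray(maxs, dtype=float)
    scale = (np.asarray(grid) - 1) / np.maximum(maxs - mins, 1e-9)
    idx = ((points - mins) * scale).astype(int)
    idx = np.clip(idx, 0, np.asarray(grid) - 1)       # keep boundary points inside the grid
    voxels = np.zeros(grid, dtype=np.int32)
    np.add.at(voxels, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return voxels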

Classifiers Accuracy

I have trouble getting the classifiers' accuracy. Is there a way to obtain the accuracies?

Testing script

Hey author,
Could you please provide the working script to use the TD_CNN_LSTM model for prediction?

An issue in voxel.py

Hi, I found a bug in the function voxalize that leads to incorrect results. I checked the value of np.sum(pixel) before the return, which should equal the number of points in the frame, but it is always smaller. So I guess some points are not correctly taken into account. I checked the code line by line and found that line 78 does not work well when a point is close to the maximum boundary (in one of the x, y, z dimensions). Could you please fix this issue?
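The symptom described is consistent with a point that sits exactly on the maximum boundary being scaled to a bin index equal to the grid size, which falls outside the grid and is silently dropped. A common fix is to clip the computed index (a sketch of the idea, not the repo's actual line 78):

grid_size = 32
x, x_min, x_max = 10.0, 0.0, 10.0                      # point exactly on the upper boundary
i = int((x - x_min) / (x_max - x_min) * grid_size)     # i == 32, outside a 0..31 grid
i = min(i, grid_size - 1)                              # clipped index: 31, point is kept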

How to use the code from radar lab to capture the point cloud data?

Hello, your work is impressive, and we also want to use the TI mmWave radar to collect some raw point cloud data. However, we are not so familiar with the ROS package developed by radar-lab and don't know how to capture the messages or write them to a txt file. Could you kindly provide some instructions on using that ROS package to obtain the txt files in this repository? Looking forward to your reply, thanks!
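For readers with the same question: one lightweight option is to pipe the topic to a file with rostopic echo, and another is a small rospy subscriber. The sketch below makes assumptions: the topic name /ti_mmwave/radar_scan and the RadarScan message type are guesses based on the radar-lab package, not instructions from the authors.

#!/usr/bin/env python
# Sketch: log each radar message to a txt file in the same key: value layout shown above.
# Topic and message names are assumptions about the radar-lab ti_mmwave_rospkg package.
import rospy
from ti_mmwave_rospkg.msg import RadarScan

def callback(msg, outfile):
    outfile.write(str(msg) + '\n---\n')   # str(msg) prints the message fields as key: value lines

if __name__ == '__main__':
    rospy.init_node('radar_logger')
    with open('radar_points.txt', 'w') as f:
        rospy.Subscriber('/ti_mmwave/radar_scan', RadarScan, callback, callback_args=f)
        rospy.spin()

Equivalently, rostopic echo /ti_mmwave/radar_scan > radar_points.txt logs the same stream from the command line, again assuming that topic name.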

Why Is the Velocity of the Points 0.0?

Hello,
Your work and the dataset are impressive.
I am going to create a Range-Doppler map from the provided point cloud frames (I do not have concrete knowledge of FMCW; it is a naive idea).
However, I found that the velocity of the data points (I have checked some of the files) is always 0.0, which is weird based on my naive understanding. Could you please tell me something about it?
Looking forward to your reply. Thank you!
