
3dscanning's People

Contributors

barisyazici, juanraul8, mirjang, radialdistortion, zielon


3dscanning's Issues

Creation of a normal map

Adapt the code from Exercise 3 PointCloud.h for generating a normal map from a depth map.
Check out the constructor of the class.
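As a sketch of how this could look (assuming a pinhole camera with hypothetical intrinsics fx, fy, cx, cy and a row-major depth map in meters), the normal at each pixel can be taken as the normalized cross product of central differences of the back-projected neighbours:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Hypothetical intrinsics struct; real values come from the sensor calibration.
struct Intrinsics { float fx, fy, cx, cy; };

using Vec3 = std::array<float, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}

// Back-project pixel (u, v) with depth z (meters) into camera space.
static Vec3 backProject(const Intrinsics& K, int u, int v, float z) {
    return { (u - K.cx) * z / K.fx, (v - K.cy) * z / K.fy, z };
}

// Compute per-pixel normals from a row-major depth map.
// Border pixels and pixels with an invalid neighbour (depth <= 0) keep a zero normal.
std::vector<Vec3> computeNormalMap(const std::vector<float>& depth,
                                   int width, int height, const Intrinsics& K) {
    std::vector<Vec3> normals(depth.size(), Vec3{0, 0, 0});
    for (int v = 1; v < height - 1; ++v) {
        for (int u = 1; u < width - 1; ++u) {
            float zl = depth[v * width + u - 1], zr = depth[v * width + u + 1];
            float zu = depth[(v - 1) * width + u], zd = depth[(v + 1) * width + u];
            if (zl <= 0 || zr <= 0 || zu <= 0 || zd <= 0) continue;
            Vec3 pr = backProject(K, u + 1, v, zr), pl = backProject(K, u - 1, v, zl);
            Vec3 pd = backProject(K, u, v + 1, zd), pu = backProject(K, u, v - 1, zu);
            Vec3 du, dv;
            for (int i = 0; i < 3; ++i) { du[i] = pr[i] - pl[i]; dv[i] = pd[i] - pu[i]; }
            Vec3 n = cross(du, dv);
            float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
            if (len > 0) for (int i = 0; i < 3; ++i) normals[v * width + u][i] = n[i] / len;
        }
    }
    return normals;
}
```

For a flat wall parallel to the image plane this produces normals pointing along +z, which is a quick sanity check before wiring it into the PointCloud constructor.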

Complex AR Animation

Task: Implement a complex AR animation in Unity using the color map, the camera pose, and the scene mesh given by the tracker.

The animation should consist of basic rigid-body collisions between virtual objects (e.g. spheres) and physical objects (e.g. a table).

Unity modules:

  • Graphics.
  • Animation.
  • Physics.

Basic ICP implementation

Task: Implement a basic tracker using the ICP algorithm.

The implementation will be based on Exercise 3 of the course [1].

References:
[0] Lecture 5: Rigid Surface Tracking & Reconstruction (3D Scanning/Justus Thies Slides).
[1] Exercise 3: Registration – Procrustes & ICP

Normal maps.

Task: Show the normal maps derived from the point clouds in a new test or an existing one.

Flipped Mesh

Bug: mesh generated by reconstruction is mirrored on the x/z plane.

SDF Computation: Volumetric Fusion.

Task: Implement surface integration to compute the SDF (voxel) representation of the world.

@Mirjang support.

References:
[0] Lecture 5: Rigid Surface Tracking & Reconstruction (3D Scanning/Justus Thies Slides).
[1] Exercise 2: Implicit Surfaces - Marching Cubes.
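A minimal sketch of the per-voxel TSDF update (the weighted running average described in the KinectFusion paper; the projection of the voxel into the depth map that yields `signedDistance` is omitted here):

```cpp
#include <algorithm>
#include <cmath>

// One voxel of the truncated signed distance volume.
struct Voxel { float sdf = 0.0f; float weight = 0.0f; };

// Weighted running-average TSDF update for a single voxel.
// signedDistance = observed depth minus the voxel's camera-space depth.
void integrateMeasurement(Voxel& v, float signedDistance, float truncation) {
    // Voxels far behind the observed surface are occluded: skip them.
    if (signedDistance < -truncation) return;
    // Truncate to [-1, 1] and blend with the previous estimate.
    float tsdf = std::max(-1.0f, std::min(1.0f, signedDistance / truncation));
    v.sdf = (v.sdf * v.weight + tsdf) / (v.weight + 1.0f);
    v.weight += 1.0f;
}
```

The per-measurement weight is fixed to 1 here; KinectFusion allows weighting by viewing angle, which would be a drop-in change.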

Tracker skeleton in C++

Task: Define and implement the tracker skeleton in C++.

The implementation will be based on Exercise 3 of the course [1].

References:
[0] Lecture 5: Rigid Surface Tracking & Reconstruction (3D Scanning/Justus Thies Slides).
[1] Exercise 3: Registration – Procrustes & ICP

Depth Map Conversion

Task: Apply a bilateral filter to the raw depth map, as proposed in the KinectFusion paper [0].

The depth values should also be scaled to real-world meters.

For example, the factor 5000 is used in the datasets [1]: a raw value of 5000 in the depth map corresponds to 1 meter.

Reference:
[0] KinectFusion: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ismar2011.pdf
[1] RGB-D SLAM Dataset and Benchmark : https://vision.in.tum.de/data/datasets/rgbd-dataset

Sensor Installation

Task: Install Sensor software and libraries.

You can find the instructions in the README.

I am planning to integrate the sensor into our project, so I need you to install the dependencies so that you can compile the project with the sensor.

Implement basic unity scene

Task: Read the current RGB camera frame from the tracker DLL and set it as the texture of the sprite attached to the camera.

The screen should show the image created by the Tracker.

Prepare progress presentation

1. The first slide explains the current level of progress.
2. The second slide includes a road map for the next two weeks.

PS: For the current status, include everything before fusion.
For the second slide: mesh integration into Unity, sensor integration with correct pose.

Demo Setup

Task: Install the sensor functionality on @Mirjang's computer for the first demo version.

We should create the functionality needed to store our tests so that they can be loaded and executed in real time in Unity (currently our reconstruction is not real time).

The experiment dataset requires:

  • RGB capture.
  • Depth capture.
  • Pose capture (Trajectory in SLAM datasets).
  • Meshes.

@juanraul8 support.

Camera Tracking: ICP Implementation (Optimization)

Task: Implement the optimization part of the ICP algorithm to estimate the camera pose of each frame.

The implementation will be based on Exercises 3 [1] and 4 [2] of the course.

References:
[0] Lecture 5: Rigid Surface Tracking & Reconstruction (3D Scanning/Justus Thies Slides).
[1] Exercise 3: Registration – Procrustes & ICP.
[2] Exercise 4: Feature Matching – Bundle Adjustment.

Camera Tracking: ICP Implementation (Feature Detection)

Task: Implement the correspondence matching between frames in order to estimate the camera pose of each frame.

The implementation will be based on Exercises 3 [1] and 4 [2] of the course.

References:
[0] Lecture 5: Rigid Surface Tracking & Reconstruction (3D Scanning/Justus Thies Slides).
[1] Exercise 3: Registration – Procrustes & ICP.
[2] Exercise 4: Feature Matching – Bundle Adjustment.

Point Cloud Test

Task: Implement a test (WindowTests) to check the point cloud previous to the volumetric fusion.

Similar to Exercise 1 [0], we can generate a simple mesh from the point cloud. Then we can check the result in MeshLab [1].

@juanraul8 support.

References:
[0] Exercise 1: Camera Intrinsics, Back-projection, Meshes.
[1] MeshLab: http://www.meshlab.net/
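A minimal sketch of dumping the point cloud in a format MeshLab can open directly (OFF with vertices only, no faces); the test would write this string to a file:

```cpp
#include <array>
#include <sstream>
#include <string>
#include <vector>

using Vec3 = std::array<float, 3>;

// Serialize a point cloud as an OFF file (vertex list only, zero faces),
// which MeshLab opens directly for visual inspection.
std::string pointCloudToOff(const std::vector<Vec3>& points) {
    std::ostringstream out;
    out << "OFF\n" << points.size() << " 0 0\n";
    for (const auto& p : points)
        out << p[0] << ' ' << p[1] << ' ' << p[2] << '\n';
    return out.str();
}
```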

Processing a scene offline

In Unity, a user can select an option to record/scan the scene (let's say a maximum of a few minutes of recording).
Then the scene has to be processed offline. In the UI the user will see some kind of progress bar, with no possibility to use any buttons. Once the processing is finished and the mesh is loaded into our scene, the UI becomes available again. The user can then use the camera and our ICP pose estimation to look at the reconstructed mesh from different angles.

Linked: #86

Basic AR Animation

Task: Implement a basic AR animation in Unity using the color map and the camera pose given by the tracker.

The animation should consist of placing virtual objects at points selected by the user (e.g. via mouse interaction).

Unity modules:

  • Graphics.
  • Animation.

@Mirjang support.

Hardware requirements

Task: Obtain an iPad from the 3D Scanning Group.

Relevant questions:
- What is the policy for renting the devices?
- How many devices can we get?

Efficient Marching Cubes Implementation

Task: Implement an efficient Marching Cubes solution in order to extract surfaces in real time.

I recommend checking the existing solutions (libraries, frameworks, etc.); then we can use one directly.

It would be nice, though, to look at the code and try to understand it (at least in general terms).

NB: Tasks that were already implemented in the exercises can be replaced with efficient external code (per the professor).

The implementation should be a class so we can easily switch between the naive implementation and the GPU implementation.

Library References:
[0] PCL: http://www.pointclouds.org/
[1] GPU-Marching-Cubes: https://github.com/smistad/GPU-Marching-Cubes
[2] GPU-accelerated data expansion for the Marching Cubes algorithm :https://www.nvidia.com/content/gtc-2010/pdfs/gtc10_session2020_hp5mc_dyken_ziegler.pdf

Realtime Volumetric Fusion Implementation

Task: Implement an efficient Volumetric Fusion solution in order to compute the SDF in real time.

I recommend checking the existing solutions (libraries, frameworks, etc.) in order to adapt our current solution and improve quality and performance.

References:
[0] CUDA TSDF: https://github.com/Scoobadood/TSDF
[1] Kinfu: http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php
[2] Open3D: http://www.open3d.org/docs/tutorial/Advanced/rgbd_integration.html#tsdf-volume-integration
[3] Elastic Reconstruction: https://github.com/qianyizh/ElasticReconstruction

Demo Test

Task: Run a real-time test on the professor's computer.

On Tuesday, we should install and run our solution on the professor's laptop.

Frame-to-model Cuda ICP

Task: Use the depth map estimate of the model (the SDF function) instead of previous depth maps.

The depth map estimate can be obtained by raycasting the SDF function with the previous pose.
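A sketch of the raycasting step as sphere tracing: step along the ray by the SDF value, which can never overshoot the nearest surface. The SDF is passed as a callable here; in the tracker it would be trilinear interpolation of the TSDF volume, and `maxDepth`/`eps` are hypothetical parameters.

```cpp
#include <array>
#include <cmath>
#include <functional>

using Vec3 = std::array<float, 3>;

// Sphere-trace an SDF along a ray: advance by the SDF value until the
// surface is reached (|sdf| below eps). Returns the hit depth along the
// (unit) direction, or -1 on a miss.
float raycastSdf(const std::function<float(const Vec3&)>& sdf,
                 const Vec3& origin, const Vec3& dir,
                 float maxDepth = 10.0f, float eps = 1e-4f) {
    float t = 0.0f;
    while (t < maxDepth) {
        Vec3 p{origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]};
        float d = sdf(p);
        if (d < eps) return t;  // surface reached
        t += d;                 // safe step: no surface closer than d
    }
    return -1.0f;               // ray left the volume without a hit
}
```

Collecting the hit depth per pixel yields the model depth map that frame-to-model ICP aligns against.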

Surface extraction: Marching Cubes.

Task: Implement the Marching Cubes algorithm to extract a surface from an SDF, i.e. obtain a semi-dense mesh of the scene.

@Zielon support.

References:
[0] Lecture 5: Rigid Surface Tracking & Reconstruction (3D Scanning/Justus Thies Slides).
[1] Exercise 2: Implicit Surfaces - Marching Cubes.
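The standard 256-entry triangle table is too large to reproduce here, but the first step of Marching Cubes, computing a cell's case index from the signs of its eight corner SDF values, can be sketched as:

```cpp
#include <array>
#include <cstdint>

// Build the Marching Cubes case index for one voxel cell: one bit per
// corner, set when the SDF value at that corner is negative (inside the
// surface). The resulting index (0..255) selects the triangulation from
// the standard lookup table (not reproduced here).
uint8_t marchingCubesCaseIndex(const std::array<float, 8>& cornerSdf) {
    uint8_t index = 0;
    for (int i = 0; i < 8; ++i)
        if (cornerSdf[i] < 0.0f) index |= (1u << i);
    return index;
}
```

Index 0 (all outside) and 255 (all inside) produce no triangles, which is why sparse volumes march quickly.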

Artifacts Volumetric Fusion

Bug: the mesh generated using volumetric fusion has significant holes.

Possible solutions:

  • Implement hole filling in the volumetric fusion.
  • Smooth the meshes as a post-processing step.

Wrong Camera Movement

Task: Fix the bug related to the camera movement.

The camera's up and down movement works in the opposite direction.

DatasetReader simplification

The dataset reader has a few flaws.

  1. It skips the first frame. The iterator is set to i + 1 in getSequentialFrame(), where i = 0 at the start, so the first read call returns frame number 1.
  2. I want to be able to simply pass the index of the frame I am interested in. The whole logic for skipping, waiting, and so on is pointless, because I cannot read, for instance, every 10th frame. Moreover, the read call is still in a loop, so I already have the index there.

It is a quick fix. @Mirjang, can you make those changes ASAP?
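A sketch of the requested index-based interface (the `Frame` struct and member names are hypothetical; the real reader pairs color and depth file paths):

```cpp
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

// Hypothetical frame record; the real reader pairs color and depth paths.
struct Frame { std::string colorPath, depthPath; };

class DatasetReader {
public:
    explicit DatasetReader(std::vector<Frame> frames)
        : m_frames(std::move(frames)) {}

    // Random access by index, replacing the internal-iterator logic.
    // The caller owns the loop, so reading e.g. every 10th frame is
    // trivial, and frame 0 is no longer skipped.
    const Frame& getFrame(size_t index) const {
        if (index >= m_frames.size())
            throw std::out_of_range("frame index out of range");
        return m_frames[index];
    }

    size_t frameCount() const { return m_frames.size(); }

private:
    std::vector<Frame> m_frames;
};
```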

Efficient ICP Implementation

Task: Implement an efficient ICP solution in order to estimate the pose in real time.

I recommend checking the existing solutions (libraries, frameworks, etc.); then we can use one directly.

It would be nice, though, to look at the code and try to understand it (at least in general terms).

NB: Tasks that were already implemented in the exercises can be replaced with efficient external code (per the professor).

We should try to integrate the ICPCUDA library into our Tracker solution. The implementation should be a class so we can easily switch between the naive implementation and the CUDA implementation.

Library References:
[0] PCL: http://www.pointclouds.org/
[1] ICPCUDA: https://github.com/mp3guy/ICPCUDA.

Camera Tracking: ICP Implementation (Outliers)

Task: Implement outlier pruning in order to estimate the camera pose of each frame.

The implementation will be based on Exercises 3 [1] and 4 [2] of the course.

References:
[0] Lecture 5: Rigid Surface Tracking & Reconstruction (3D Scanning/Justus Thies Slides).
[1] Exercise 3: Registration – Procrustes & ICP.
[2] Exercise 4: Feature Matching – Bundle Adjustment.
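A sketch of a common pruning rule: reject a correspondence when the matched points are too far apart or their unit normals disagree too much (both thresholds are hypothetical tuning parameters):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Accept the correspondence (p, q) with unit normals (np, nq) only when
// the points are within maxDistance and the normals differ by at most
// maxAngleRad.
bool isInlier(const Vec3& p, const Vec3& q,
              const Vec3& np, const Vec3& nq,
              float maxDistance, float maxAngleRad) {
    float d2 = 0, dot = 0;
    for (int k = 0; k < 3; ++k) {
        float d = p[k] - q[k];
        d2 += d * d;
        dot += np[k] * nq[k];
    }
    if (d2 > maxDistance * maxDistance) return false;
    // Clamp before acos to guard against floating-point drift.
    return std::acos(std::max(-1.0f, std::min(1.0f, dot))) <= maxAngleRad;
}
```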

Software Stack

Task: Choose the software technologies and tools and decide how to integrate them depending on the task.

Sensor data for 640 x 480 resolution.

The 320 x 240 resolution works fine for the color map and the depth map. However, the depth maps at 640 x 480 resolution do not work properly; depth map processing seems to be significantly slower.

There is no problem when only depth maps are used.

Cleanup the testing class

The testing class has too many methods. We should split them into separate files.

  1. Add a readme file
  2. Create a testing class for each component
