
Free2CAD: Parsing Freehand Drawings into CAD Commands

Introduction

This repository contains the implementation of Free2CAD proposed in our SIGGRAPH 2022 paper.

It contains two parts: 1) network training, 2) training dataset and trained network deployment (e.g., for interactive modeling).

The code is released under the MIT license.

Network training

This part contains the Python code for building, training, and testing the neural network using TensorFlow.

💡 Great news: we have released the Docker image used for training to ease the configuration burden. Please read the README file within the networkTraining folder for more details.

Training dataset and network deployment

This part contains the code for deploying the trained network in a C++ project, e.g., an interactive 3D modeling application. It also provides instructions for downloading the training dataset we generated and our trained networks.

Please read the README file in the dataAndModel folder for more details.

🔴 IMPORTANT about data downloading❗🔴

The current data hosting is broken; we are working on finding a more permanent alternative. Hopefully, the data will be available again soon.

Citation

If you use our code or model, please cite our paper:

@article{Li:2022:Free2CAD,
	title     = {Free2CAD: Parsing Freehand Drawings into CAD Commands},
	author    = {Changjian Li and Hao Pan and Adrien Bousseau and Niloy J. Mitra},
	journal   = {ACM Trans. Graph. (Proceedings of SIGGRAPH 2022)},
	year      = {2022},
	volume    = {41},
	number    = {4},
	pages     = {93:1--93:16},
	numpages  = {16},
	doi       = {10.1145/3528223.3530133},
	publisher = {ACM}
}

Contact

For any questions, please contact Changjian Li ([email protected]) or Hao Pan ([email protected]).


free2cad's Issues

About sketch rendering

Thanks for your great work!

However, I have a question about sketch rendering: I'm wondering how a line is perturbed when noise is applied only to its endpoints.

Is there any code I can refer to?
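One plausible interpretation of endpoint-only perturbation (a sketch for illustration only, not the authors' actual implementation): jitter the two endpoints with Gaussian noise, then resample the stroke as a straight polyline between the perturbed endpoints.

```python
import numpy as np

def perturb_line(p0, p1, sigma=2.0, n_samples=32, rng=None):
    """Jitter the endpoints of a 2D line segment with Gaussian noise,
    then resample n_samples points along the perturbed segment."""
    rng = np.random.default_rng(rng)
    p0 = np.asarray(p0, dtype=float) + rng.normal(0.0, sigma, size=2)
    p1 = np.asarray(p1, dtype=float) + rng.normal(0.0, sigma, size=2)
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Linear interpolation between the two noisy endpoints -> (n_samples, 2)
    return (1.0 - t) * p0 + t * p1

pts = perturb_line((0, 0), (100, 0), sigma=2.0, rng=0)
```

More elaborate variants could perturb intermediate points too (e.g., with a smooth noise offset along the stroke), but the endpoint-only scheme above keeps the line straight while displacing where it starts and ends.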

How to Generate 3D CAD Models from 2D Sketches Using Given Model Configuration?

Hello,

I've successfully obtained the configuration details for several models, but I'm unclear on how to utilize them for converting 2D sketches into 3D CAD models. Specifically, I'm puzzled about where to input my 2D sketch and how the regNet_300k model can generate a 3D CAD model from a sketch, especially when dealing with unusual shapes.

Here are the input and output specifications for each model:

embedSNet Input and Output Names and Shapes:

  • Signature: serving_default
    • Input Name: input_tensor, Shape: (None, 256, 256, 1), Dtype: <dtype: 'float32'>
    • Output Name: output_recons, Shape: (None, 256, 256, 1), Dtype: <dtype: 'float32'>
    • Output Name: output_code, Shape: (None, 256), Dtype: <dtype: 'float32'>

gpTFNet_250k Input and Output Names and Shapes:

  • Signature: serving_default
    • Input Name: dec_padding_mask, Shape: (1, 1, 1, None), Dtype: <dtype: 'float32'>
    • Input Name: look_ahead_mask, Shape: (1, 1, None, None), Dtype: <dtype: 'float32'>
    • Input Name: inp, Shape: (1, None, 256), Dtype: <dtype: 'float32'>
    • Input Name: tar, Shape: (1, None, 256), Dtype: <dtype: 'float32'>
    • Input Name: enc_padding_mask, Shape: (1, 1, 1, None), Dtype: <dtype: 'float32'>
    • Output Name: res_group_label, Shape: (None, None, None), Dtype: <dtype: 'float32'>

regNet_300k Input and Output Names and Shapes:

  • Signature: serving_default
    • Input Name: gp_S_map, Shape: (1, None, 256, 256, 1), Dtype: <dtype: 'float32'>
    • Input Name: gp_ND_maps, Shape: (1, None, 256, 256, 4), Dtype: <dtype: 'float32'>
    • Output Name: base_curve, Shape: (None, None, 256, 256, 1), Dtype: <dtype: 'float32'>
    • Output Name: face_map, Shape: (None, None, 256, 256, 1), Dtype: <dtype: 'float32'>

Given these details, could you guide me on the correct process for feeding my 2D sketch into the system and using regNet_300k to create a 3D CAD model? Any insights or examples would be greatly appreciated.

Thank you!

Inference Guide

Hello,
Firstly, I appreciate your commendable work on this project.

I've successfully set up the environment and verified it by running python train_AES_SG.py without any issues. Upon reviewing the supplementary material provided at this link, I noticed the use of Fusion360 for testing. However, I'm uncertain about how to leverage your pretrained network within Fusion360.

Could you kindly provide detailed instructions on this? It would be of immense help.

Thank you!

I encountered the following error while running the build.sh script in the Docker image you provided. What should I do to resolve it?

2023-09-01 08:06:15.849558: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-09-01 08:06:15.849640: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-09-01 08:06:15.849653: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-09-01 08:06:18.093492: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-09-01 08:06:18.093573: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-09-01 08:06:18.093588: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
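For context, these libnvinfer messages are warnings rather than fatal errors: TensorFlow disables its optional TensorRT integration and continues without it. If TensorRT is actually wanted, the version-6 runtime libraries could be installed inside the container; the package names below are an assumption for an Ubuntu base image with NVIDIA's CUDA apt repository already configured.

```shell
# Warnings only: TF falls back to running without TensorRT.
# To provide the missing libnvinfer.so.6 / libnvinfer_plugin.so.6
# (assumed package names, Ubuntu + NVIDIA apt repo):
apt-get update
apt-get install -y libnvinfer6 libnvinfer-plugin6
```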

Command Sequences from provided Dataset into CAD

Hello,

I've successfully set up and executed the Fusion360 Python plugin available at AutodeskAILab/Fusion360GalleryDataset. However, I am encountering two challenges:

Dataset Compatibility: I'm unsure how to modify the Fusion360GalleryDataset so that it becomes compatible with the format you require for your plugin or tool. Could you provide a step-by-step guide or outline the necessary transformations?

Command Sequences to CAD: I need detailed instructions on how to take the command sequences from the dataset provided and use them to generate the corresponding CAD models in Fusion360. A walkthrough on this process would be greatly appreciated.

(image attached)

Question about "util_functions.h" in trained_model.h

Hi, I'm currently working with your source code, and I came across the trained_model.h file. In that file, I noticed a reference to "util_functions.h". I'm wondering whether this header file is something you wrote yourself or whether it belongs to a specific library.

Could you provide more information about the origin and content of "util_functions.h"? It would be helpful for me to understand its purpose and whether it's part of your codebase or an external dependency.

Thank you for your time and assistance!

How to download 'Training data' and 'checkpoint'

Thank you for sharing the code for this wonderful paper! I cannot download the 'Training data' and 'checkpoint' mentioned in the README of the dataAndModel directory because the links are not working. Could you please tell me how to download them?

Trained network request

Hello,

I hope this message finds you well. First and foremost, I would like to extend my gratitude for making these training scripts (train_AES_SG.py, train_grouper.py, train_Regressor.py) available. I have been diligently working to train them on my end.

Despite achieving a relatively low loss, I have observed that the prediction results still deviate significantly from the Ground Truth. I understand that you have kindly provided a checkpoint file, available at this link. Upon inspection, I suspect that the content might be the test results instead of the network weights, though I stand to be corrected.

I am reaching out to kindly request if it would be possible for you to share the trained network weights resulting from all three scripts mentioned earlier. Having access to these would be immensely helpful and would aid me significantly in understanding the expected outcomes of the training process.

Thank you very much for your time and assistance in advance. I am looking forward to your positive response and am more than willing to provide any additional information that might be required.
