pixel2meshplusplus's People

Contributors

topinfrassi01, walsvid

pixel2meshplusplus's Issues

What's the cm_mat?

Hello, I think I have a problem when reading the code. What is "cm_mat" in [../utils/tools.py cameraMat(param)]? Is it the rotation matrix "R"? I can't make sense of it if it is "R". Could you explain the coordinate transformation used in your model?
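
For reference, here is my current guess as a minimal sketch, assuming the usual ShapeNet-rendering parameter layout (azimuth, elevation, in-plane rotation, distance, field of view); this is an assumption about what cameraMat computes, not a verbatim copy of utils/tools.py:

    import numpy as np

    def camera_mat_sketch(param):
        # assumed layout: (azimuth_deg, elevation_deg, in_plane_rot, distance, fov);
        # only azimuth, elevation and distance are needed for the extrinsics
        az, el, dist = np.deg2rad(param[0]), np.deg2rad(param[1]), param[3]

        # camera centre on a sphere of radius `dist` around the object origin
        cam_pos = dist * np.array([np.cos(el) * np.cos(az),
                                   np.sin(el),
                                   np.cos(el) * np.sin(az)])

        # look-at frame: the camera looks at the origin, world up is +Y
        z_axis = cam_pos / np.linalg.norm(cam_pos)
        x_axis = np.cross(np.array([0.0, 1.0, 0.0]), z_axis)
        x_axis /= np.linalg.norm(x_axis)
        y_axis = np.cross(z_axis, x_axis)

        # rows are the camera axes, so the matrix acts as a world-to-camera rotation
        R = np.stack([x_axis, y_axis, z_axis])
        return R, cam_pos

Under this reading, cm_mat would be the rotation part R of the extrinsics, and the camera position supplies the translation, i.e. p_cam = R @ (p_world - cam_pos).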

indices[0,7] = -1 is not in [0,157]

Hello! Have you run train.py successfully? I get "InvalidArgumentError: indices[0,7] = -1 is not in [0,157]" when running sess.run(model, feed_dict=feed_dict).

The training data is the data provided in this GitHub repository. This has been troubling me for many days.
Hope for your reply.

Questions about input images

Thank you for sharing this code. I have some questions: can the input images be any three views of the object, or must the input images match specific camera parameters?

A problem about running "demo.py"

Hello, I am using a 2080 Ti, tensorflow-gpu 1.12.0, CUDA 10.0.130, and cuDNN 7.6.4 to run demo.py, but I get the following error:

Traceback (most recent call last):
File "E:/pythondata/Pixel2MeshPlusPlus-master/demo.py", line 12, in <module>
from modules.models_mvp2m import MeshNetMVP2M as MVP2MNet
File "E:\pythondata\Pixel2MeshPlusPlus-master\modules\models_mvp2m.py", line 9, in <module>
from modules.losses import mesh_loss, laplace_loss
File "E:\pythondata\Pixel2MeshPlusPlus-master\modules\losses.py", line 6, in <module>
from modules.chamfer import nn_distance
File "E:\pythondata\Pixel2MeshPlusPlus-master\modules\chamfer.py", line 5, in <module>
nn_distance_module = tf.load_op_library('./external/tf_nndistance_so.so')
File "D:\anaconda\anaconda\envs\tf1.12\lib\site-packages\tensorflow\python\framework\load_library.py", line 60, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: .\external\tf_nndistance_so.so not found

Process finished with exit code 1

I also tried loading the file with an absolute path, but that didn't work either. Below are the packages in my environment:
absl-py (0.9.0)
astor (0.8.1)
certifi (2016.2.28)
gast (0.3.3)
grpcio (1.30.0)
h5py (2.10.0)
importlib-metadata (1.7.0)
Keras-Applications (1.0.8)
Keras-Preprocessing (1.1.2)
Markdown (3.2.2)
numpy (1.19.0)
opencv-python (4.3.0.36)
Pillow (7.2.0)
pip (9.0.1)
protobuf (3.12.2)
scipy (1.5.1)
setuptools (36.4.0)
six (1.15.0)
tensorboard (1.12.2)
tensorflow (1.12.0)
termcolor (1.1.0)
tflearn (0.3.2)
Werkzeug (1.0.1)
wheel (0.29.0)
wincertstore (0.2)
zipp (3.1.0)
Hope for your reply~

Some trouble with a NotFoundError

Hello, I have set up the same environment as you, but I ran into some trouble.

 nn_distance_module = tf.load_op_library('./external/tf_nndistance_so.so')
 lib_handle = py_tf.TF_LoadLibrary(library_filename)

 error:     tensorflow.python.framework.errors_impl.NotFoundError: ./external/tf_nndistance_so.so: undefined symbol: _ZN10tensorflow12OpDefBuilder5InputESs

The file named tf_nndistance_so.so does exist, and I tried Google but did not find any useful information. Could you give me a hand? Thank you!
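
For reference, this undefined-symbol pattern usually means the custom op was compiled against headers or a C++ ABI setting that do not match the installed TensorFlow wheel. A minimal check, assuming the .so was built with the Makefile shipped in external/ (so the fix would be rebuilding it with matching flags):

    import tensorflow as tf

    # Compare these with the flags hard-coded in the Makefile used to build the
    # ops; in particular the -D_GLIBCXX_USE_CXX11_ABI=... value printed here must
    # match the one used when tf_nndistance_so.so was compiled, and the
    # include/lib paths must point at this TensorFlow installation.
    print(tf.__version__)
    print(tf.sysconfig.get_include())        # header directory for building the op
    print(tf.sysconfig.get_lib())            # directory with libtensorflow_framework.so
    print(tf.sysconfig.get_compile_flags())  # includes the ABI define
    print(tf.sysconfig.get_link_flags())

If those values disagree with the Makefile, rebuilding the ops in external/ with the printed flags is the usual remedy.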

About support_number=2

Thanks for sharing! In the train.py file, why do you set support_number=2? Is it because of the pooling layer, where the number of vertices doubles?
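
For context, my understanding of the common GCN convention (an assumption about the usual Kipf-style code, not a statement about this repository's exact layers) is that support_number counts how many support matrices each graph-convolution layer consumes, e.g. the identity plus one normalized adjacency matrix, each with its own weight matrix, summed together:

    import numpy as np

    # a graph-convolution layer that consumes a list of "support" matrices,
    # e.g. identity + normalized adjacency (hence support_number = 2),
    # each paired with its own weight matrix; the per-support results are summed
    def graph_conv(X, supports, weights):
        # X: [N, F_in]; supports: list of [N, N]; weights: list of [F_in, F_out]
        out = np.zeros((X.shape[0], weights[0].shape[1]))
        for S, W in zip(supports, weights):
            out += S @ (X @ W)
        return out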

"ValueError: No variables to save" when loading pre-trained CNN

As the title states, I'm trying to run train_p2mpp.py and I have a problem when it comes to loading the pre-trained CNN from checkpoint.

With the original configuration, where I know the path is good:

  pre_trained_cnn_path: dir/models/coarse_mvp2m
  cnn_step: 50

Inside the load_cnn function of MeshNet:

    def loadcnn(self, sess=None, ckpt_path=None, step=None):
        if not sess:
            raise AttributeError('TensorFlow session not provided.')
            
        variables_to_restore = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='meshnetmvp2m/cnn')
        var_list = {var.name: var for var in variables_to_restore}
        saver = tf.train.Saver(var_list)
        save_path = os.path.join(ckpt_path, '{}.ckpt-{}'.format(self.name, step))
        saver.restore(sess, save_path)
        print('=> !!CNN restored from file: {}, epoch {}'.format(save_path, step))

With the original configuration, I get the following stack trace:

=> load data
=> initialize session
=> load pre-trained cnn
Traceback (most recent call last):
  File "train_p2mpp.py", line 154, in <module>
    main(args)
  File "train_p2mpp.py", line 100, in main
    model.loadcnn(sess=sess, ckpt_path=cfg.p2mpp.pre_trained_cnn_path, step=cfg.p2mpp.cnn_step)
  File "/pix2meshpp/source/modules/models_p2mpp.py", line 109, in loadcnn
    saver = tf.train.Saver(var_list)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py", line 832, in __init__
    self.build()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py", line 844, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py", line 869, in _build
    raise ValueError("No variables to save")
ValueError: No variables to save

Here's what I've tried that didn't work:

  1. Changing meshnet to meshnetmvp2m/cnn as proposed in this issue
  2. Changing self.name to meshnetmvp2m, as this is the name of the ckpt file inside the coarse_mvp2m folder

I have noticed, when printing the variables of MeshNetMVP2M, that the variable names and scope match what is written above: meshnetmvp2m/cnn/*.

However, I've made it work by loading the CNN from refine_p2mpp by changing the configuration file to:

pre_trained_cnn_path: dir/models/refine_p2mpp
cnn_step: 10

Which feels "hacky" since I'm trying to re-train that same network.

Am I missing something or is there a bug where it seems like load_cnn is written to load from refine_p2mpp but the configuration file is written to load from coarse_mvp2m?
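
One way to narrow this down (a debugging sketch of my own, assuming TensorFlow 1.x and that the model graph has already been built) is to compare the variable names stored in the checkpoint with the ones the graph exposes under the scope passed to the Saver:

    import tensorflow as tf

    # list what the checkpoint actually stores; list_variables accepts a
    # checkpoint prefix or a directory that contains a `checkpoint` index file
    for name, shape in tf.train.list_variables('dir/models/coarse_mvp2m'):
        print('ckpt :', name, shape)

    # list what the current graph exposes under the scope used in load_cnn;
    # if this collection comes back empty, "No variables to save" follows
    for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='meshnetmvp2m/cnn'):
        print('graph:', var.name)

If the checkpoint names carry a different prefix than meshnetmvp2m/cnn, the var_list keys passed to tf.train.Saver would need to be remapped accordingly.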

If I figure out the issue before I have feedback I'll push a PR.

Thanks for your support!

About /data/demo/cameras.txt

Thank you for sharing this amazing work! I'm a beginner in 3D shape generation.

The question I want to ask is: what do the five numbers in the cameras file represent?

Thanks!

Question about final mesh results

Greetings. I can generate the predicted mesh with demo.py, but do you show the ground truth as a mesh directly from the original .obj file in the ShapeNet dataset, or do you also regenerate the ground truth from the corresponding sampled ground-truth data using the demo.py approach? Thanks in advance.

Run on single image

Hi, does this work for a single image? How can I run it on a single RGB image with camera parameters? Thanks.

Undefined symbol issue while running demo.py

While running demo.py on Google Colaboratory, I got this error:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "demo.py", line 12, in <module>
from modules.models_mvp2m import MeshNetMVP2M as MVP2MNet
File "/content/Pixel2MeshPlusPlus/modules/models_mvp2m.py", line 9, in <module>
from modules.losses import mesh_loss, laplace_loss
File "/content/Pixel2MeshPlusPlus/modules/losses.py", line 6, in <module>
from modules.chamfer import nn_distance
File "/content/Pixel2MeshPlusPlus/modules/chamfer.py", line 4, in <module>
nn_distance_module = tf.load_op_library('./external/tf_nndistance_so.so')
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/load_library.py", line 60, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)

tensorflow.python.framework.errors_impl.NotFoundError: ./external/tf_nndistance_so.so: undefined symbol: _ZN10tensorflow14kernel_factory17OpKernelRegistrar12InitInternalEPKNS_9KernelDefEN4absl11string_viewESt10unique_ptrINS0_15OpKernelFactoryESt14default_deleteIS8_EE

Python -> 3.6
TensorFlow -> 1.12.0
CUDA - 10.0.130
gcc-7 g++-7

Can anybody help me solve this issue?

Compatibility issue

I am using Python 3.6 and TensorFlow 1.12.0 on Ubuntu 16.04. I have installed all the requirements accordingly, except CUDA. I want to test this code, and I get this error when I run it:
Traceback (most recent call last):
File "demo.py", line 12, in <module>
from modules.models_mvp2m import MeshNetMVP2M as MVP2MNet
File "/home/bharadwaj/Pixel2MeshPlusPlus/modules/models_mvp2m.py", line 9, in <module>
from modules.losses import mesh_loss, laplace_loss
File "/home/bharadwaj/Pixel2MeshPlusPlus/modules/losses.py", line 6, in <module>
from modules.chamfer import nn_distance
File "/home/bharadwaj/Pixel2MeshPlusPlus/modules/chamfer.py", line 4, in <module>
nn_distance_module = tf.load_op_library('./external/tf_nndistance_so.so')
File "/home/bharadwaj/anaconda3/envs/Pixel2Mesh++/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 60, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: libcudart.so.9.0: cannot open shared object file: No such file or directory
Can this code be made to run on the CPU? What changes would be needed to make it CPU compatible? Thanks in advance.

Error when running 'make' on the Makefile

I changed the paths in the Makefile:
nvcc=/usr/local/cuda-10.0/bin/nvcc
cudalib=/usr/local/cuda-10.0/lib64
and then ran make.
I got the following output:
make: Circular dependency tf_approxmatch_g.cu <- tf_approxmatch_g.cu.o dropped.
g++ -std=c++11 tf_approxmatch.cpp tf_approxmatch_g.cu.o -o tf_approxmatch_so.so -shared -fPIC -lcudart -L /usr/local/cuda-10.0/lib64 -O2 -D_GLIBCXX_USE_CXX11_ABI=0
tf_approxmatch.cpp:1:10: fatal error: tensorflow/core/framework/op.h: No such file or directory
 #include "tensorflow/core/framework/op.h"
compilation terminated.
Makefile:18: recipe for target 'tf_approxmatch_so.so' failed
make: *** [tf_approxmatch_so.so] Error 1

I'm confused: where can I set the path to the header file?
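
For reference, the missing header ships with the TensorFlow wheel itself; assuming the Makefile follows the usual custom-op layout, the include and library directories to pass to g++/nvcc can be read from the installed package:

    import tensorflow as tf

    # these go into the Makefile's -I and -L flags (e.g. next to the existing
    # nvcc= and cudalib= variables); the include directory is the one that
    # contains tensorflow/core/framework/op.h
    print('TF include dir:', tf.sysconfig.get_include())
    print('TF library dir:', tf.sysconfig.get_lib())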

About running demo

Hi. First, thanks for sharing this!
I am new to TensorFlow and not familiar with CUDA. I am using a MacBook Pro and want to get the demo running first, but when I run the code it says:
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(./external/tf_nndistance_so.so, 6): image not found

I think it is a problem with the CUDA implementations. Is there a way to avoid CUDA if I just want to use the pre-trained model to get a result first?

About the input shape of the model

Thank you for sharing this amazing work!

One question I wanted to ask: I see you use the "batch" dimension of the input to hold the multi-view input images, which means this model can never really be used with a batch size larger than 1, right?

Thanks!

Question regarding loss regularization

Hi,

I was intrigued by the constants in the mesh_loss function (for example the *500 on edge_loss), so I looked at your paper, but I didn't find any mention of them; I assume they are regularization constants.

I have two questions regarding these constants:

  • It seems like a factor of 3000 appears in every loss (3000 for chamfer_loss, which isn't regularized, and 1500 for laplace_loss, which would indicate a regularization factor of 0.5). What is the number 3000 based on?
  • The same goes for the 0.55 factor on tf.reduce_mean(dist2) in the chamfer loss. What does this number represent?

def laplace_loss_2(pred1, pred2, placeholders, block_id):
    # laplace term
    lap1 = laplace_coord(pred1, placeholders, block_id)
    lap2 = laplace_coord(pred2, placeholders, block_id)
    laplace_loss = tf.reduce_mean(tf.reduce_sum(tf.square(tf.subtract(lap1, lap2)), 1)) * 1500
    move_loss = tf.reduce_mean(tf.reduce_sum(tf.square(tf.subtract(pred1, pred2)), 1)) * 100
    return laplace_loss + move_loss

def mesh_loss_2(pred, placeholders, block_id):
    gt_pt = placeholders['labels'][:, :3]  # gt points
    gt_nm = placeholders['labels'][:, 3:]  # gt normals

    # edge in graph
    nod1 = tf.gather(pred, placeholders['edges'][block_id - 1][:, 0])
    nod2 = tf.gather(pred, placeholders['edges'][block_id - 1][:, 1])
    edge = tf.subtract(nod1, nod2)

    # edge length loss
    edge_length = tf.reduce_sum(tf.square(edge), 1)
    edge_loss = tf.reduce_mean(edge_length) * 500

    # chamfer distance
    sample_pt = sample(pred, placeholders, block_id)
    sample_pred = tf.concat([pred, sample_pt], axis=0)
    dist1, idx1, dist2, idx2 = nn_distance(gt_pt, sample_pred)
    point_loss = (tf.reduce_mean(dist1) + 0.55 * tf.reduce_mean(dist2)) * 3000

    # normal cosine loss
    normal = tf.gather(gt_nm, tf.squeeze(idx2, 0))
    normal = tf.gather(normal, placeholders['edges'][block_id - 1][:, 0])
    cosine = tf.abs(tf.reduce_sum(tf.multiply(unit(normal), unit(edge)), 1))
    normal_loss = tf.reduce_mean(cosine) * 0.5

    total_loss = point_loss + edge_loss + normal_loss
    return total_loss

I noticed the same question was asked on the Pixel2Mesh repository here, but there is no answer. Hopefully we can kill two birds with one stone with an answer here or there :)

Camera Parameters

What are the values in the camera parameters? How do I define these values for a new test case? For example, if I have 3 images of an object from different viewpoints, how do I define camera.txt?

about the initial template

Hello!
I have a question about the initial template.
In P2M the mesh is always deformed from the same initial template, an ellipsoid.
Is it possible to deform it from a different template?

Questions about the unpooling layer

Hello, thank you for sharing this code!
I understand that the unpooling layer is used for adding vertices, but I have some questions about the code.

  1. In the unpooling layer in layer.py, the input is the coordinates of the vertices of the deformed model. I have trouble understanding this line: add_feat = (1 / 2.0) * tf.reduce_sum(tf.gather(X, self.pool_idx), 1). Does it add the new vertices and give each one a feature that is the mean of the coordinates of its adjacent vertices?
  2. Is the output of the unpooling layer simply a concatenation of the vertex coordinates of the deformed model and the coordinate features of the added vertices, i.e. outputs = tf.concat([X, add_feat], 0)?

Sorry to bother you, hope to get your answer!
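
For what it's worth, here is how I read those two lines, as a small NumPy sketch (my own interpretation, not an authoritative answer): each new vertex created on an edge receives the mean of its two parent vertices' features, and the result is appended after the existing vertices.

    import numpy as np

    # X: [num_vertices, feat_dim] features (here coordinates) of the coarse mesh
    # pool_idx: [num_new_vertices, 2] the two parent vertices of each new vertex
    X = np.random.rand(156, 3).astype(np.float32)
    pool_idx = np.array([[0, 1], [1, 2], [2, 0]])

    # mirrors add_feat = (1 / 2.0) * tf.reduce_sum(tf.gather(X, self.pool_idx), 1)
    add_feat = 0.5 * X[pool_idx].sum(axis=1)           # mean of the two parents

    # mirrors outputs = tf.concat([X, add_feat], 0)
    outputs = np.concatenate([X, add_feat], axis=0)    # old vertices first, then new

    assert outputs.shape == (X.shape[0] + pool_idx.shape[0], X.shape[1])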

indices[3304] = [2, 44, 112] does not index into param shape [3,112,112,32]

Hi! When I run step 3 (train_p2mpp.py) I get this error:
2020-11-22 17:39:06.603485: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at gather_nd_op.cc:50 : Invalid argument: indices[3304] = [2, 44, 112] does not index into param shape [3,112,112,32]
2020-11-22 17:39:06.608624: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at gather_nd_op.cc:50 : Invalid argument: indices[3304] = [2, 45, 112] does not index into param shape [3,112,112,32]
2020-11-22 17:39:06.681729: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at gather_nd_op.cc:50 : Invalid argument: indices[3304] = [2, 23, 56] does not index into param shape [3,56,56,64]
2020-11-22 17:39:06.684780: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at gather_nd_op.cc:50 : Invalid argument: indices[3304] = [2, 22, 56] does not index into param shape [3,56,56,64]
Traceback (most recent call last):
File "/home/fullo/.conda/envs/p2mplus/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/fullo/.conda/envs/p2mplus/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/fullo/.conda/envs/p2mplus/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[3304] = [2, 44, 112] does not index into param shape [3,112,112,32]
[[{{node graph_localproj_1_layer_1/GatherNd_29}} = GatherNd[Tindices=DT_INT32, Tparams=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](meshnet/pixel2mesh/cnn/conv2d_5/Relu, graph_localproj_1_layer_1/stack_47)]]

I tried Google but did not find any useful information. Can you help me?
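
If the root cause turns out to be projected pixel coordinates that land exactly on the image border (index 112 is one past the last valid index of a 112-wide feature map), one generic workaround I am considering, offered only as an assumption since I have not traced this repository's projection layer, is to clamp the indices before tf.gather_nd:

    import tensorflow as tf

    def clamped_gather(img_feat, view_idx, y, x):
        # img_feat: [num_views, H, W, C] CNN feature map
        # view_idx, y, x: per-vertex view index and projected pixel coordinates
        # (hypothetical helper; names and shapes are illustrative only)
        h = tf.shape(img_feat)[1]
        w = tf.shape(img_feat)[2]
        vi = tf.cast(view_idx, tf.int32)
        yi = tf.clip_by_value(tf.cast(tf.floor(y), tf.int32), 0, h - 1)
        xi = tf.clip_by_value(tf.cast(tf.floor(x), tf.int32), 0, w - 1)
        idx = tf.stack([vi, yi, xi], axis=1)
        return tf.gather_nd(img_feat, idx)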

Running Demo File Shared Libraries issue

I am facing this shared library error when I try to start demo.py:

If you depend on functionality not listed there, please file an issue.

Traceback (most recent call last):
File "demo.py", line 12, in <module>
from modules.models_mvp2m import MeshNetMVP2M as MVP2MNet
File "/content/gdrive/MyDrive/new/Pixel2MeshPlusPlus/modules/models_mvp2m.py", line 9, in <module>
from modules.losses import mesh_loss, laplace_loss
File "/content/gdrive/MyDrive/new/Pixel2MeshPlusPlus/modules/losses.py", line 6, in <module>
from modules.chamfer import nn_distance
File "/content/gdrive/MyDrive/new/Pixel2MeshPlusPlus/modules/chamfer.py", line 4, in <module>
nn_distance_module = tf.load_op_library('./external/tf_nndistance_so.so')
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/load_library.py", line 61, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: ./external/tf_nndistance_so.so: cannot open shared object file: No such file or directory

The dat file of ShapeNet

Hello, I want to train the network with my own dataset. May I ask how the ShapeNet .dat files that you use are generated?
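
My current understanding, which is only an assumption based on the original Pixel2Mesh data and not something confirmed in this repository, is that each .dat file stores points sampled on the ShapeNet mesh surface together with their normals, serialized with pickle. A sketch of how I would generate one with trimesh (the exact array layout should be checked against a released .dat file before training):

    import pickle
    import numpy as np
    import trimesh

    # sample surface points and their normals from a ShapeNet model
    # ('model_normalized.obj' is a placeholder path)
    mesh = trimesh.load('model_normalized.obj', force='mesh')
    points, face_idx = trimesh.sample.sample_surface(mesh, 10000)
    normals = mesh.face_normals[face_idx]

    # assumed layout: an (N, 6) float32 array of [x, y, z, nx, ny, nz], pickled
    data = np.hstack([points, normals]).astype(np.float32)
    with open('model.dat', 'wb') as f:
        pickle.dump(data, f, protocol=2)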

Some problems running the test part

Hello, I used a 2080 Ti, tensorflow-gpu 1.12.0, CUDA 9.0, and cuDNN 7.6.5 to reproduce the test part of the code, but I encountered a bug partway through (screenshot omitted).
I found that the cause was a crash when running on the 2080 Ti, so I switched to the CPU to reproduce it. Step 1 of the test part runs correctly, but there is a problem when running step 2:

WARNING:tensorflow:From /root/anaconda3/envs/python3.6/lib/python3.6/site-packages/tflearn/initializations.py:119: UniformUnitScaling.__init__ (from tensorflow.python.ops.init_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.initializers.variance_scaling instead with distribution=uniform to get equivalent behavior.
Traceback (most recent call last):
  File "/root/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 3 but is rank 4 for 'graph_localproj_1_layer_1/Pad' (op: 'Pad') with input shapes: [3,224,224,16], [3,2].

Would this problem only appear in the CPU version and not in the GPU version? Do you have a good solution?

Custom input for demo

For giving custom input to the demo, what does cameras.txt mean? Can you please explain whether those values are bounding box values and a class, or something else?
