bop19_cdpn_2019iccv's Issues
Trained models
Hi! Thank you for making your great work available!
I was wondering if you could also make your trained models available on a platform other than Baidu; it is quite difficult to download files from there.
Thank you in advance!
Detection problem when calling mmdetection
Current Dataset: lmo
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
[ ] 0/95, elapsed: 0s, ETA:
Traceback (most recent call last):
File "inference.py", line 170, in
main()
File "inference.py", line 162, in main
result_dict = single_gpu_test(model, data_loader)
File "inference.py", line 61, in single_gpu_test
result = model(return_loss=False, rescale=True, **data)
File "/home/zhuyu/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/zhuyu/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/zhuyu/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/D/lxj/PoseEst/mmdetection-1.2.0/mmdet/core/fp16/decorators.py", line 49, in new_func
return old_func(*args, **kwargs)
TypeError: forward() missing 1 required positional argument: 'img_metas'
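This error usually means the batch dict unpacked into model(**data) has no 'img_metas' key, which in practice points to a dataset/config mismatch or an incompatible mmdetection version. A minimal, dependency-free sketch of the failure mode (FakeDetector is a hypothetical stand-in for the mmdetection model, not its real API):

```python
class FakeDetector:
    def forward(self, img, img_metas, return_loss=True, rescale=False):
        # A real detector would run inference here; we just echo the inputs.
        return {"num_imgs": len(img), "img_metas": img_metas}

    __call__ = forward


det = FakeDetector()
data = {"img": [[0.0, 0.1]]}  # batch dict missing the 'img_metas' key
try:
    det(return_loss=False, rescale=True, **data)
except TypeError as err:
    # e.g. "...forward() missing 1 required positional argument: 'img_metas'"
    print(err)

# Supplying img_metas (as the data pipeline normally would) makes the call work:
result = det(return_loss=False, rescale=True, img=[[0.0, 0.1]], img_metas=[{}])
```

So the fix is on the data side: make sure the test pipeline/collate step actually produces 'img_metas' for the mmdetection release you installed (1.2.0 here), rather than changing the model code.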
A question about training
When I tried to write train.py by referring to test.py, I couldn't make it work: my loss.backward() fails!
output_coor_x_ = output_coor_x_.squeeze()
output_coor_y_ = output_coor_y_.squeeze()
output_coor_z_ = output_coor_z_.squeeze()
####
output_coor_ = torch.stack([torch.argmax(output_coor_x_, axis=0),
torch.argmax(output_coor_y_, axis=0),
torch.argmax(output_coor_z_, axis=0)], axis=2)
output_coor_[output_coor_ == cfg.network.coor_bin] = 0
output_coor_ = 2.0 * output_coor_.float() / (63.0-1.0) - 1.0 # [-1,1]
If I compute the loss from the variables before the '####' line, it works; but when I use the variables after it, backward fails with the following output:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I am sure that the model's parameters have requires_grad=True.
Maybe I shouldn't use torch.argmax()?
Could you help me solve this problem, or tell me how you train it?
Thank you!
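torch.argmax is indeed the culprit: it is a discrete operation, so it cuts the autograd graph and the result has no grad_fn. The snippet above comes from test-time decoding; for training, a common pattern (a sketch under my own assumptions, not the authors' training code) is to keep the per-bin logits and supervise them with a classification loss, or use a soft-argmax (softmax-weighted expectation) if a differentiable continuous coordinate is needed:

```python
import torch

bins = 64
# Hypothetical per-pixel bin logits for one coordinate axis on an 8x8 map.
logits = torch.randn(bins, 8, 8, requires_grad=True)

# Hard argmax: integer bin indices, gradient is lost.
hard = torch.argmax(logits, dim=0)
print(hard.requires_grad)  # False -> a loss built on this cannot backward()

# Soft-argmax: expectation over bin centers under softmax keeps the graph.
centers = torch.arange(bins, dtype=torch.float32).view(bins, 1, 1)
soft = (torch.softmax(logits, dim=0) * centers).sum(dim=0)
coords = 2.0 * soft / (bins - 1.0) - 1.0   # map to [-1, 1] as in test.py
loss = coords.pow(2).mean()
loss.backward()                            # works: logits.grad is populated
print(logits.grad is not None)  # True
```

Equivalently, you can skip the coordinate reconstruction entirely at train time and apply torch.nn.CrossEntropyLoss directly to the 65-channel logits against ground-truth bin indices; argmax then only appears at test time.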
There is no translation head in the code?
Hello, Mr. Li. I can't find the translation head in resnet18, and the translation vector comes from PnP. Why isn't this highlight of your paper in the code?
Problem on _worker_init_fn() in main.py
Thanks for your code first. I encountered the error below when trying to run the code.
Traceback (most recent call last):
File "main.py", line 132, in
main()
File "main.py", line 125, in main
worker_init_fn=_worker_init_fn()
File "main.py", line 118, in _worker_init_fn
np.random.seed(np_seed)
File "mtrand.pyx", line 244, in numpy.random.mtrand.RandomState.seed
File "_mt19937.pyx", line 166, in numpy.random._mt19937.MT19937._legacy_seeding
File "_mt19937.pyx", line 180, in numpy.random._mt19937.MT19937._legacy_seeding
ValueError: Seed must be between 0 and 2**32 - 1
Then I printed torch_seed and np_seed and found torch_seed = 1648085986 and np_seed = -1.
I think it's because of line 116 in _worker_init_fn() of main.py:
np_seed = torch_seed // 2**32 - 1
Since // binds tighter than -, this evaluates as (torch_seed // 2**32) - 1, which is -1 whenever torch_seed is below 2**32.
Maybe it can be solved using np_seed = torch_seed // (2**32 - 1)
or np_seed = torch_seed % 2**32?
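The precedence issue and the modulo fix can be checked in isolation (using the seed value reported above; numpy requires seeds in [0, 2**32 - 1]):

```python
torch_seed = 1648085986  # value reported in the traceback above

# Buggy line: '//' binds tighter than '-', so this is (torch_seed // 2**32) - 1.
np_seed_buggy = torch_seed // 2**32 - 1
print(np_seed_buggy)  # -1, outside numpy's valid seed range [0, 2**32 - 1]

# Fix: fold the torch seed into numpy's valid range with a modulo.
np_seed = torch_seed % 2**32
print(np_seed)  # 1648085986
```

The modulo form is preferable to // (2**32 - 1): it is valid for every possible torch_seed and keeps distinct small seeds distinct instead of collapsing them all to 0.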
Detection question
Is detection/annotations/test_lmo.json incomplete?
A question about the output of the network
Hello, Mr. Li. Sorry to bother you. I want to ask why the output channel count of the network is 2 + 195 = 197. I read the code carefully; if I understand correctly, the first 2 channels are the confidence map and the remaining 195 = 65*3 channels are the coordinate maps for x, y, and z respectively. Why are there 65 channels for each of the x, y, z maps?
At test time, I find that you get the final coordinate-confidence map using argmax like this:
output_coor_ = np.stack([np.argmax(output_coor_x_, axis=0), np.argmax(output_coor_y_, axis=0), np.argmax(output_coor_z_, axis=0)], axis=2)
output_conf_ = np.argmax(output_conf_, axis=0)
I observed that the file 'cfg.yaml' contains 'coor_bin=64'; how should I understand this setting? My final question is how to set the ground truth when training the network. Looking forward to your reply.
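From the test-time snippet above, the code appears to treat each coordinate axis as a 64-way bin classification with one extra channel (hence 65 per axis), where channel index 64 (coor_bin) acts as an out-of-range/background bin that gets zeroed out. A numpy sketch of that decoding step, with made-up shapes and random logits standing in for real network outputs:

```python
import numpy as np

coor_bin = 64                       # number of coordinate bins per axis
H, W = 8, 8
rng = np.random.default_rng(0)

# Hypothetical network outputs: 65 channels per axis (64 bins + 1 extra bin).
output_coor_x_ = rng.standard_normal((coor_bin + 1, H, W))
output_coor_y_ = rng.standard_normal((coor_bin + 1, H, W))
output_coor_z_ = rng.standard_normal((coor_bin + 1, H, W))

# Per-pixel winning bin index for each axis, stacked into an (H, W, 3) map.
output_coor_ = np.stack([np.argmax(output_coor_x_, axis=0),
                         np.argmax(output_coor_y_, axis=0),
                         np.argmax(output_coor_z_, axis=0)], axis=2)

# Bin index 64 is the "not on the object" bin: zero it, then map the
# remaining indices 0..63 to normalized coordinates in [-1, 1].
output_coor_[output_coor_ == coor_bin] = 0
output_coor_ = 2.0 * output_coor_.astype(np.float64) / (coor_bin - 1.0) - 1.0
print(output_coor_.shape)  # (8, 8, 3)
```

Under this reading, the ground truth for training would be the per-pixel bin index obtained by quantizing each normalized object coordinate into the 64 bins (plus the extra bin for background pixels), which is what a cross-entropy loss over the 65 channels would consume; this is my interpretation of the code, not a confirmed description from the authors.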