[ICLR 2023] Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?
Home Page: https://openreview.net/forum?id=8Oun8ZUVe8N
License: MIT License
Python 98.15%
Cuda 1.42%
C++ 0.25%
Shell 0.18%
act's Issues
When we carried out point cloud visualization, the point cloud angle was wrong. Could you please provide the code for point cloud rotation?
Looking forward to your reply.
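For anyone hitting the same orientation issue, here is a minimal sketch of rotating a point cloud before plotting. The axis convention and angle are assumptions to adjust for your viewer, not the authors' exact preprocessing:

```python
import numpy as np

def rotate_point_cloud(points, angle_deg=90.0, axis="x"):
    """Rotate an (N, 3) point cloud about a coordinate axis.

    The axis and angle here are guesses; tweak them until the
    rendered cloud is upright in your visualizer.
    """
    theta = np.deg2rad(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        rot = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    elif axis == "y":
        rot = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    else:  # "z"
        rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return points @ rot.T
```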
First, thanks for sharing such outstanding work!
I want to try to train ACT on S3DIS using xyz+rgb,
but the paper and code seem to train using only xyz.
Could you tell me how to do this?
Thanks a lot!
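A minimal sketch of how one might feed xyz+rgb into the pipeline, assuming each S3DIS sample stores per-point color alongside coordinates; the file layout and function below are hypothetical, not the repo's actual loader:

```python
import numpy as np

def load_s3dis_sample(npy_path, num_points=2048, use_rgb=True):
    # Hypothetical layout: each row is [x, y, z, r, g, b, label].
    data = np.load(npy_path)
    idx = np.random.choice(len(data), num_points,
                           replace=len(data) < num_points)
    sample = data[idx]
    xyz = sample[:, :3]
    if use_rgb:
        rgb = sample[:, 3:6] / 255.0  # normalize colors to [0, 1]
        features = np.concatenate([xyz, rgb], axis=1)  # (N, 6)
    else:
        features = xyz  # (N, 3), matching the paper's setup
    labels = sample[:, 6].astype(np.int64)
    return features, labels
```

Note that the model's first embedding layer would also need its input channels changed from 3 to 6 to accept the extra color features.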
How can I visualize the reconstructed results?
Looking forward to your reply!
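One way to inspect reconstructions, assuming you can run the autoencoder forward and obtain an (N, 3) output cloud; the plotting below is plain matplotlib, not code from this repo:

```python
import matplotlib.pyplot as plt

def show_clouds(original, reconstructed):
    """Side-by-side 3D scatter of input vs. reconstructed points."""
    fig = plt.figure(figsize=(8, 4))
    for i, (pts, title) in enumerate([(original, "input"),
                                      (reconstructed, "reconstruction")]):
        ax = fig.add_subplot(1, 2, i + 1, projection="3d")
        ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=1)
        ax.set_title(title)
        ax.set_axis_off()
    plt.tight_layout()
    plt.show()
```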
First, thanks for sharing your outstanding work!
Why is the number of prompt tokens zero in ./cfgs/autoencoder/act_dvae_with_pretrained_transformer.yaml?
It reads as follows:
num_prompt_token: 0
use_deep_prompt: false
Could you provide the cfg file you used when training the dVAE?
Thank you!
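For context, num_prompt_token: 0 with use_deep_prompt: false plausibly means prompt tuning is disabled, so the pretrained transformer runs without any prompt tokens. A minimal sketch of what such a switch might control; the wrapper below is a hypothetical illustration, not the repo's implementation:

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    # Hypothetical wrapper: prepends learnable prompt tokens to the
    # input tokens when num_prompt_token > 0; with 0 it reduces to
    # the plain pretrained transformer.
    def __init__(self, transformer, embed_dim, num_prompt_token=0):
        super().__init__()
        self.transformer = transformer
        self.num_prompt_token = num_prompt_token
        if num_prompt_token > 0:
            self.prompts = nn.Parameter(
                torch.zeros(1, num_prompt_token, embed_dim))

    def forward(self, tokens):  # tokens: (B, N, C)
        if self.num_prompt_token > 0:
            prompts = self.prompts.expand(tokens.size(0), -1, -1)
            tokens = torch.cat([prompts, tokens], dim=1)
        return self.transformer(tokens)
```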
Could you please tell me why the results on ModelNet40 classification and few-shot classification cannot be reproduced? Is specific data augmentation or random seeding used?
For the MLP-LINEAR and MLP-3 results in Table 2 of the paper, I would like to know the number of runs used to compute the mean and variance. Thank you!
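On the seeding question: a common way to pin down randomness in PyTorch runs is sketched below; whether the released code does exactly this is an assumption. Reported mean and variance typically come from repeating training or evaluation over several such seeds:

```python
import random
import numpy as np
import torch

def set_seed(seed=0):
    """Fix the common sources of randomness for repeatable runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```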
Hi, thanks for your earlier reply!
I have some questions.
Are the S3DIS results reported in Table 4 from main.py (running semantic_segmentation/main.py) or main_test.py (semantic_segmentation/main_test.py)?
Is the number of points used in the S3DIS test 2048?
Why not use all the points of one room?
Looking forward to your reply~
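On the 2048-point question: a minimal sketch of fixed-size sampling, which keeps batch shapes uniform for the network; evaluating on all points of a room would instead require tiling it into blocks and merging per-block predictions. The function and point count below are illustrative assumptions:

```python
import numpy as np

def sample_room(points, num_points=2048):
    """Subsample (or pad by repetition) a room to a fixed point count."""
    n = len(points)
    if n >= num_points:
        idx = np.random.choice(n, num_points, replace=False)
    else:
        # Fewer points than requested: repeat some to pad the sample.
        idx = np.random.choice(n, num_points, replace=True)
    return points[idx]
```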
Excellent work.
How do I get the results of t-SNE feature visualization?
Looking forward to your reply!
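A minimal t-SNE sketch with scikit-learn, assuming you have already extracted one feature vector per shape (e.g., the encoder's global feature); the extraction step itself is repo-specific:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels):
    """Project (N, D) features to 2D and color points by class label."""
    emb = TSNE(n_components=2, perplexity=30, init="pca",
               random_state=0).fit_transform(features)
    plt.figure(figsize=(6, 6))
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap="tab20")
    plt.axis("off")
    plt.show()
```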