Thank you for your work on the paper and the code release. Is there already a pre-trained model or a pipeline setup for KITTI-360? I am currently setting up this dataset to obtain occupancy (density field) results from its input-view setup. Since Fig. 14 in the Additional Qualitative Results shows an experiment on KITTI-360, I was hoping a pre-trained model might be available.
Would you share some insights regarding the training setup for KITTI-360? :)
Where can I find the correspondence between semantic indices and classes? It looks like 5 corresponds to 'cars'; what about the others?
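In case it helps while waiting for an answer, a lookup table is the usual way such mappings are exposed. The only pair confirmed in this thread is 5 → 'car'; every other entry below is a placeholder, not the repo's actual mapping.

```python
# Hypothetical semantic-index -> class-name table.
# Only index 5 ('car') is confirmed above; fill in the rest
# from the dataset's official label definitions.
SEMANTIC_CLASSES = {
    5: "car",
}

def index_to_class(idx: int) -> str:
    """Return the class name for a semantic index, or 'unlabeled' if unknown."""
    return SEMANTIC_CLASSES.get(idx, "unlabeled")

print(index_to_class(5))   # car
print(index_to_class(99))  # unlabeled
```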
Thank you in advance,
Hi! This is great work! I have recently been investigating efficient generalizable NeRF, and my research is closely related to NeO-360. I really appreciate this work and would like to build my research on it. When do you plan to release the full code?
Thanks for your work!
I have a question: in the paper, a single image generates the three planes, and at inference time the residual feature f_r can be obtained from a training image. But when I have multi-view images, how do I use them to generate the three planes? And when I want to render a novel view, how do I obtain f_r?
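For what it's worth, one common way to extend a single-view tri-plane encoder to multiple views is to encode each view independently and aggregate (e.g. mean-pool) the per-view plane features. This is only a sketch of that generic idea, not NeO-360's actual implementation; the shapes and the `encode_view` stub are assumptions.

```python
import numpy as np

def encode_view(image: np.ndarray, C: int = 8, R: int = 16) -> np.ndarray:
    """Stand-in for a per-view tri-plane encoder.

    Returns features of shape (3, C, R, R): one C x R x R feature map
    per plane (xy, xz, yz). A real encoder would be a CNN.
    """
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    return rng.standard_normal((3, C, R, R)).astype(np.float32)

def fuse_triplanes(images: list) -> np.ndarray:
    """Encode each view, then mean-pool the per-view tri-plane features."""
    per_view = np.stack([encode_view(img) for img in images])  # (V, 3, C, R, R)
    return per_view.mean(axis=0)                               # (3, C, R, R)

views = [np.ones((64, 64, 3)) * i for i in range(4)]
planes = fuse_triplanes(views)
print(planes.shape)  # (3, 8, 16, 16)
```

Mean pooling is permutation-invariant, so the fused planes do not depend on view order; attention-based fusion is a heavier alternative.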
Hello, NeO-360 is a good paper, but I still have a question. If the tri-plane is removed from your pipeline and the 3D sample point is projected directly into the 3D feature grid, without the 3D CNN and tri-plane, does the performance of NeO-360 degrade? Your ablation studies show the importance of the 3D feature grid, but not of the tri-plane. Could you explain why the tri-plane matters?
Could you share the axes convention you used for the world and camera frames?
For example:
- the x-axis points to the camera's right
- the y-axis points forward from the camera
- the z-axis points up from the camera
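If the repo's convention turns out to differ from your data's (say, OpenCV-style x-right / y-down / z-forward versus the x-right / y-forward / z-up guess above), converting between them is a fixed axis permutation. A sketch, assuming exactly those two conventions:

```python
import numpy as np

# Change of basis from an x-right / y-forward / z-up camera frame
# (the guess above) to OpenCV's x-right / y-down / z-forward frame.
# Each row expresses a new axis in the old basis.
RFU_TO_OPENCV = np.array([
    [1,  0,  0],   # new x =  old x (right)
    [0,  0, -1],   # new y = -old z (down)
    [0,  1,  0],   # new z =  old y (forward)
], dtype=float)

p_old = np.array([0.0, 2.0, 1.0])   # 2 m forward, 1 m up in the old frame
p_new = RFU_TO_OPENCV @ p_old
print(p_new)  # [ 0. -1.  2.]
```

The matrix has determinant +1, so it is a proper rotation and preserves handedness; apply it to camera-to-world rotations as well as to points.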
Hello! Since I only have 4 A100s available, I reduced the chunk size from 16*64 to 256, but the out-of-memory error still appears. Do you have any ideas for fixing it?
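While waiting for a reply: one generic way to trade speed for memory in NeRF-style rendering is to evaluate rays in small chunks and concatenate the results, so peak memory is bounded by the chunk size rather than the full ray batch. A toy sketch; the `render_fn` here is a placeholder, not NeO-360's renderer.

```python
import numpy as np

def render_chunked(rays: np.ndarray, render_fn, chunk: int = 256) -> np.ndarray:
    """Apply render_fn to rays in chunks of `chunk` rays to bound peak memory."""
    outs = [render_fn(rays[i:i + chunk]) for i in range(0, len(rays), chunk)]
    return np.concatenate(outs, axis=0)

# Placeholder renderer: maps each 6-dim ray (origin + direction) to RGB.
render_fn = lambda r: r[:, :3] * 0.5

rays = np.random.rand(1000, 6)
rgb = render_chunked(rays, render_fn)
print(rgb.shape)  # (1000, 3)
```

If the OOM occurs during training rather than rendering, gradient checkpointing and mixed precision are other generic levers that are independent of the chunk size.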