[AAAI 2024] The official implementation of the paper "3D-STMN: Dependency-Driven Superpoint-Text Matching Network for End-to-End 3D Referring Expression Segmentation"
I notice that the visual data augmentation is not applied during training, even though you provide the augmentation measures in the "data_aug" function. Is there any code that makes the corresponding change to the textual description when "aug==True" is set? Thanks!
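For context on why the textual description would need to change: if the scene is mirrored, spatial words like "left" and "right" in the referring expression no longer match the flipped geometry. Below is a minimal, hypothetical sketch of such a text adjustment — it is not from the 3D-STMN codebase, just an illustration of the idea being asked about.

```python
import re

# Hypothetical sketch (not actual 3D-STMN code): after a horizontal flip
# of the point cloud, swap "left"/"right" in the referring expression so
# the text still matches the mirrored scene.
SWAP = {"left": "right", "right": "left"}

def flip_description(text: str) -> str:
    """Swap 'left'/'right' tokens to match a horizontally flipped scene."""
    def repl(match: "re.Match") -> str:
        word = match.group(0)
        swapped = SWAP[word.lower()]
        # Preserve the capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped

    return re.sub(r"\b(left|right)\b", repl, text, flags=re.IGNORECASE)

print(flip_description("the chair left of the Left table"))
# -> the chair right of the Right table
```

A real implementation would also need to handle other view-dependent phrases (e.g. "in front of" under rotation), which is presumably why pairing visual and textual augmentation is non-trivial.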
After inference, we get some txt files. Is there a method for visualizing the inference results among the repos you list in the "Acknowledgement" section? I read those repos but found that they use ply files for visualization. Thanks!
Hello,
I have read your code but cannot find where the "preprocessed features" are used during training.
If I want to use them, should I change the code here? Should I use the "ann_ids, scan_ids, object_id" to load the corresponding feature files?
Thanks!
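To make the question above concrete, here is a minimal sketch of what "using the ids to load the corresponding feature files" could look like. The directory layout, file naming scheme, and `.npy` format are all assumptions for illustration, not the actual 3D-STMN preprocessing format.

```python
import os
import numpy as np

# Hypothetical sketch: load one object's precomputed feature vector,
# keyed by (scan_id, object_id). The "{scan_id}_{object_id}.npy" naming
# convention is an assumption, not the repo's actual layout.
def load_object_feature(feat_dir: str, scan_id: str, object_id: int) -> np.ndarray:
    """Load a preprocessed feature array for one object in one scan."""
    path = os.path.join(feat_dir, f"{scan_id}_{object_id}.npy")
    return np.load(path)
```

In a dataset `__getitem__`, such a helper would be called with the sample's `scan_ids` and `object_id` fields so each training example picks up its matching feature file.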
Install segmentator from this repo (We wrap the segmentator in ScanNet).
Install Stanford CoreNLP toolkit from the official website.
These two lines of instructions are difficult for me as a newcomer to this field.
Could you provide more detailed, step-by-step instructions that are easier to follow?
Thank you very much.