ylabbe / robopose
Code for "Single-view robot pose and joint angle estimation via render & compare", CVPR 2021 (Oral).
License: MIT License
When I try to re-train the DREAM models, I can't seem to download the datasets.
Below is the terminal output I get when trying to download.
Do you know why the download fails, or how I can work around it?
By the way, thanks for your fantastic project.
$ python -m robopose.scripts.download --datasets=dream.train
Setting OMP and MKL num threads to 1.
0:00:00.001091 - Copying robopose:zip_files/dream/synthetic/panda_synth_train_dr.zip to /home/********/robopose/local_data/downloads/panda_synth_train_dr.zip
2023-07-31 17:03:48 ERROR : Google drive root 'zip_files/dream/synthetic/panda_synth_train_dr.zip': error reading source root directory: directory not found
2023-07-31 17:03:48 ERROR : Attempt 1/3 failed with 1 errors and: directory not found
2023-07-31 17:03:49 ERROR : Google drive root 'zip_files/dream/synthetic/panda_synth_train_dr.zip': error reading source root directory: directory not found
2023-07-31 17:03:49 ERROR : Attempt 2/3 failed with 1 errors and: directory not found
2023-07-31 17:03:49 ERROR : Google drive root 'zip_files/dream/synthetic/panda_synth_train_dr.zip': error reading source root directory: directory not found
2023-07-31 17:03:49 ERROR : Attempt 3/3 failed with 1 errors and: directory not found
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Errors: 1 (retrying may help)
Elapsed time: 3.3s
2023/07/31 17:03:49 Failed to copyto: directory not found
0:00:03.341509 - Extracting dataset panda_synth_train_dr.zip...
Traceback (most recent call last):
File "/home/********/anaconda3/envs/robopose/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/********/anaconda3/envs/robopose/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/********/robopose/robopose/scripts/download.py", line 205, in <module>
main()
File "/home/********/robopose/robopose/scripts/download.py", line 79, in main
download_dream_dataset(synt_or_real, ds_name)
File "/home/********/robopose/robopose/scripts/download.py", line 33, in download_dream_dataset
zipfile.ZipFile(DOWNLOAD_DIR / zip_name).extractall(LOCAL_DATA_DIR / 'dream_datasets' / real_or_synt)
File "/home/********/anaconda3/envs/robopose/lib/python3.7/zipfile.py", line 1240, in __init__
self.fp = io.open(file, filemode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/********/robopose/local_data/downloads/panda_synth_train_dr.zip'
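The traceback shows that the script proceeds to extraction even though the rclone copy failed, so the final FileNotFoundError hides the real problem (the Google Drive source directory was not found). A defensive guard around the extraction step might look like the sketch below; this is not the repository's actual download.py code, and extract_if_present is a hypothetical helper:

```python
import zipfile
from pathlib import Path

def extract_if_present(zip_path: Path, dest_dir: Path) -> bool:
    """Extract the archive only if the download actually produced a file.

    Returns False (instead of raising FileNotFoundError) when the
    download step failed and left no zip behind.
    """
    if not zip_path.exists():
        print(f"Download missing: {zip_path} -- skipping extraction.")
        return False
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
    return True
```

With a guard like this, the script would surface the rclone error ("directory not found") as the actionable failure instead of a misleading traceback.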
Hi! This is really fantastic work and I enjoyed reading your paper. I would like to know whether your ADD metric includes keypoints that fall outside the image / camera viewing frustum.
From your paper, I understand that you slightly modified the ADD evaluation used in DREAM by considering all images, regardless of how many keypoints are visible, instead of discarding some. The DREAM paper states that keypoints outside the image are excluded. Since your paper says you only changed the set of images considered, may I assume that you also exclude out-of-image keypoints when computing ADD, just as the DREAM paper states?
I'm having a hard time finding the answer by running your code. From the results, it seems that all keypoints are included whether or not they are captured in the image (I didn't manage to locate anything similar to a "valid_mask" marking keypoints as inside/outside the image).
If my understanding of your paper or of my experiments is wrong, I'd really appreciate it if you could point it out for me!
Thanks~~
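For concreteness, the two evaluation variants can be sketched as follows. This is a generic illustration of ADD with an optional visibility mask, not RoboPose's or DREAM's actual evaluation code; add_metric, in_frustum_mask, and all parameters are illustrative names:

```python
import numpy as np

def add_metric(kps_pred, kps_gt, valid_mask=None):
    """Average 3D distance between predicted and ground-truth keypoints.

    If valid_mask is given, keypoints outside the image (DREAM-style
    evaluation) are excluded from the average.
    """
    d = np.linalg.norm(kps_pred - kps_gt, axis=-1)
    if valid_mask is not None:
        d = d[valid_mask]
    return d.mean()

def in_frustum_mask(kps_3d, K, width, height):
    """True for keypoints whose projection falls inside the image."""
    uvw = (K @ kps_3d.T).T                    # project with intrinsics K
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective divide
    return (
        (uvw[:, 2] > 0)                       # in front of the camera
        & (uv[:, 0] >= 0) & (uv[:, 0] < width)
        & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    )
```

Whether the mask is applied or not is exactly the distinction the question is about: with it, out-of-image keypoints are dropped; without it, every keypoint contributes to the average.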
Hi! This is really fantastic work and I enjoyed reading your paper. I have some questions about the indices of the validation and training sets.
In robopose/training/train_articulated.py, the sampler for the training dataset covers all indices of the training dataset, while the sampler for the validation dataset randomly picks 10% of the indices of the whole training dataset. Doesn't this introduce a data leak, since the validation set shares data with the training set?
I printed all the view_id fields in the frame_index while iterating over ds_val_iter and ds_train_iter, and found that most of the ids in the validation set also appear in the training set, which seems to result from the code above.
Is there a specific reason for not splitting them completely? Perhaps the intent is to leverage the whole training set, with the validation set serving only for monitoring? We are working with the same dataset and feel that the training set is a bit small.
If our observation is correct, we are also wondering how you prevented overfitting during training, given that the two sets overlap.
Thank you!! :)
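If a fully disjoint split were desired, one standard approach is to permute the indices once and carve out non-overlapping subsets. A minimal sketch with a hypothetical helper, not the repository's train_articulated.py code:

```python
import numpy as np

def disjoint_split(n_items, val_fraction=0.1, seed=0):
    """Shuffle indices once, then carve out disjoint train/val subsets.

    Because both subsets come from a single permutation, no index can
    appear in both, avoiding the train/val overlap described above.
    """
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n_items)
    n_val = int(round(n_items * val_fraction))
    return idx[n_val:], idx[:n_val]  # (train_indices, val_indices)
```

The resulting index arrays could then be passed to samplers for the two data loaders.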
Hi, this is really great work on estimating the camera-to-robot pose from a single RGB image. The source code of DREAM provides a ROS demo for fast deployment in the real world. Could you provide a demo script for quickly testing on a single image from a webcam or our own camera, rather than an image from the DREAM dataset?