
h36m-fetch's Introduction

Human3.6M dataset fetcher

Human3.6M is a 3D human pose dataset containing 3.6 million human poses and corresponding images. The scripts in this repository make it easy to download, extract, and preprocess the images and annotations from Human3.6M.

Please do not ask me for a copy of the Human3.6M dataset. I do not own the data, nor do I have permission to redistribute it. Please visit http://vision.imar.ro/human3.6m/ in order to request access and contact the maintainers of the dataset.

Requirements

  • Python 3
  • axel
  • CDF
  • ffmpeg 3.2.4

Alternatively, a Dockerfile is provided which has all of the requirements set up. You can use it to run scripts like so:

$ docker-compose run --rm --user="$(id -u):$(id -g)" main python3 <script>
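For example, to run the download script inside the container:

$ docker-compose run --rm --user="$(id -u):$(id -g)" main python3 download_all.py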

Usage

  1. Firstly, you will need to create an account at http://vision.imar.ro/human3.6m/ to gain access to the dataset.
  2. Once your account has been approved, log in and inspect your cookies to find your PHPSESSID.
  3. Copy the configuration file config.ini.example to config.ini and fill in your PHPSESSID.
  4. Use the download_all.py script to download the dataset, extract_all.py to extract the downloaded archives, and process_all.py to preprocess the dataset into an easier-to-use format (see the example invocations below).
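For example, with the requirements installed locally (or with each command prefixed by the docker-compose invocation shown above), the full pipeline is run in this order:

$ python3 download_all.py
$ python3 extract_all.py
$ python3 process_all.py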

Frame sampling

Not all frames are selected during the preprocessing step. We assume that the data will be used in the Protocol #2 setup (see "Compositional Human Pose Regression"), so for subjects S9 and S11 every 64th frame is used. For the training subjects (S1, S5, S6, S7, and S8), only "interesting" frames are used. That is, near-duplicate frames during periods of low movement are skipped.

You can edit select_frame_indices_to_include() in process_all.py to change this behaviour.
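For reference, here is a rough sketch of this kind of selection logic. It is illustrative only and not the repository's exact implementation: the movement threshold and the assumed pose array shape (frames x joints x 3) are placeholders.

import numpy as np

def select_frame_indices_sketch(subject, poses_3d):
    # Evaluation subjects (Protocol #2): keep every 64th frame.
    if subject in ('S9', 'S11'):
        return np.arange(0, len(poses_3d), 64)
    # Training subjects: keep a frame only once the pose has moved
    # noticeably since the last kept frame, skipping near-duplicates.
    # The threshold below is a made-up placeholder value.
    threshold = 1000.0
    selected = [0]
    prev_joints3d = poses_3d[0]
    for i in range(1, len(poses_3d)):
        joints3d = poses_3d[i]
        max_move = ((joints3d - prev_joints3d) ** 2).sum(axis=-1).max()
        if max_move >= threshold:
            selected.append(i)
            prev_joints3d = joints3d
    return np.array(selected)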

License

The code in this repository is licensed under the terms of the Apache License, Version 2.0.

Please read the license agreement for the Human3.6M dataset itself, which specifies citations you must make when using the data in your own research. The file metadata.xml is directly copied from the "Visualisation and large scale prediction software" bundle from the Human3.6M website, and is subject to the same license agreement.

h36m-fetch's People

Contributors

anibali, danbmh, dependabot[bot], fwilliams


h36m-fetch's Issues

The metadata XML

Hello, could you please provide metadata.xml v1.2? I registered an account on the official website a long time ago, but it has never been approved, so I can't download the file through the official website. Do you have the v1.2 version?

Too many redirects

The issue is the same as this one ->
#1

But that issue was closed with no solution mentioned. Can you please help? It is showing a 'Too many redirects' error.

I get error with Axel when using download_all.py

Hi, thanks for your repo. I've been trying to use your code to fetch the data. However, I encounter this error:
FileNotFoundError: [Errno 2] No such file or directory: 'axel'
I double-checked the file directory. Do you know what caused this issue? Thanks.
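This error usually means the axel binary cannot be found on the PATH of the environment running download_all.py. A quick check, assuming a Debian/Ubuntu system:

$ which axel
$ sudo apt-get install axel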

Too many redirects.

It says too many redirects. Which version of axel do you use?

Initializing download: http://vision.imar.ro/human3.6m/filebrowser.php?download=1&filepath=Poses/D2_Positions&filename=SubjectSpecific_3.tgz
Too many redirects.

About Frame sampling

In process_all.py, why is frames = frame_indices + 1?
Why do the frame indices begin from 1 and not 0?
Thanks!

where to find ground truth of 3d

I decoded the H3.6M dataset to HDF5 format. For each data subset there are 4 views and 500 images per view, 2000 images in total.
In an HDF viewer (to read the data inside the .h5 file), I can see arrays of shape 2000x32x2 for 2D and 2000x32x3 for 3D. So my question is where to find the ground truth of the 3D joint positions; in my understanding it should be 500x32x3 for 3D. Am I right or wrong?

process_all.py error

When I try to run process_all.py, I get this error message: "spacepy.pycdf.CDFError: NO_SUCH_CDF: The specified CDF does not exist". I have already installed requirements.txt using pip. Can someone help me, please?

mask file in dataset

Hello, I wonder how the masks in H3.6M are generated. Are they produced by some automatic labelling method?

Error processing sequence, skipping

The docker command works for extract_all.py, but when I run process_all.py it prints "Error processing sequence, skipping". When does this occur, and how can it be solved?

Thanks.

Asking the meaning of annotations

Can you please explain the differences between D3_positions, D3_position_mono and D3_position_mono_universal? What do they mean?
Thank you so much!

Image sequences not created

I have (it seems) successfully run the download_all.py and extract_all.py scripts.
But when I run the process_all.py script, it runs without producing any files in the 'imageSequence' folders.

A 'processed' folder is created and has folders within it, but no image files.

Running Ubuntu 18.04 LTS.

Any ideas why?

Errors:
Error processing sequence, skipping: ('S11', '16', '1', '54138969')
Error processing sequence, skipping: ('S11', '16', '1', '55011271')
Error processing sequence, skipping: ('S11', '16', '1', '58860488')
Error processing sequence, skipping: ('S11', '16', '1', '60457274')
Error processing sequence, skipping: ('S11', '16', '2', '54138969')
Error processing sequence, skipping: ('S11', '16', '2', '55011271')
Error processing sequence, skipping: ('S11', '16', '2', '58860488')
Error processing sequence, skipping: ('S11', '16', '2', '60457274')

What is the data structure and meaning of Poses_D2_Positions_S1

I downloaded Poses_D2_Positions_S1 from the official website of Human3.6M. After parsing, I got an array of 1x1383x64. I wonder if this means that the data has 1383 frames, each containing 32 key points. If so, I would like to know what the positions of these 32 key points are.

about the dataset

Hi, I am a developer, not a student. It is hard for me to apply for the Human3.6M dataset from the website. Can you share the dataset with me? Thanks~

TypeError: float() argument must be a string or a number, not '_NoValueType'

Traceback (most recent call last):
  File "D:/limengyi/learning/motioNet/h36m-fetch/process_all.py", line 146, in process_subaction
    annots = process_view(out_dir, subject, action, subaction, camera)
  File "D:/limengyi/learning/motioNet/h36m-fetch/process_all.py", line 90, in process_view
    frame_indices = select_frame_indices_to_include(subject, poses_3d_univ)
  File "D:/limengyi/learning/motioNet/h36m-fetch/process_all.py", line 49, in select_frame_indices_to_include
    max_move = ((joints3d - prev_joints3d) ** 2).sum(axis=-1).max()
  File "D:\anaconda\envs\python36\lib\site-packages\numpy\core\_methods.py", line 47, in _sum
    return umr_sum(a, axis, dtype, out, keepdims, initial, where)
TypeError: float() argument must be a string or a number, not '_NoValueType'

Where can I find the extrinsic parameters?

Thank you for providing this useful fetcher! I want to project the 3D poses into the image; however, I could not find the extrinsic parameters to do so. Would you be able to provide those as well?

dataset

Hi,
I can't download Human3.6M. Can you provide the Human3.6M dataset?
Thanks in advance! >.<

fail to run download_all.py

Hello,
I installed python3, axel, and CDF, then I ran download_all.py. However, I got the following error.
Traceback (most recent call last):
  File "download_all.py", line 99, in <module>
    download_all(phpsessid)
  File "download_all.py", line 93, in download_all
    download_file(BASE_URL + '?' + query, out_file, phpsessid)
  File "download_all.py", line 38, in download_file
    url])
  File "/home/user1/anaconda3/lib/python3.5/subprocess.py", line 247, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/home/user1/anaconda3/lib/python3.5/subprocess.py", line 676, in __init__
    restore_signals, start_new_session)
  File "/home/user1/anaconda3/lib/python3.5/subprocess.py", line 1289, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'axel'

missing data

Hi, I ran the script but encountered the following problem:

missing data for S11/WalkingDog.54138969
missing data for S11/WalkingDog.55011271
missing data for S11/WalkingDog.58860488
missing data for S11/WalkingDog.60457274

ffmpeg: error while loading shared libraries: libx264.so.138

When running the process_all.py script, it returned the following error: ffmpeg: error while loading shared libraries: libx264.so.138: cannot open shared object file: No such file or directory

Full error log:

ffmpeg: error while loading shared libraries: libx264.so.138: cannot open shared object file: No such file or directory
!!! Error processing sequence, skipping: ('S11', '16', '2', '54138969')
Traceback (most recent call last):
  File "/home/user/miniconda/lib/python3.6/shutil.py", line 544, in move
    os.rename(src, real_dst)
OSError: [Errno 18] Invalid cross-device link: '/tmp/tmp_eyben20/img_000001.jpg' -> 'processed/S11/WalkingTogether-2/imageSequence/54138969/img_000001.jpg'

I was able to solve it by deleting the version tag from ffmpeg=3.2.4 in the Dockerfile and rebuilding the container afterwards. The version that was then installed was ffmpeg 4.1.

Package resolution with current requirements

Pip is having a hard time resolving the dependencies in the current requirements.txt. I'd suggest either a more permissive one, or an entirely frozen one with all versions already resolved.

Account creation on http://vision.imar.ro/human3.6m/

I have been trying for the last 15 days to access the dataset. I created an account on the site, but no one has granted me access to the dataset yet.
Do you know what the situation is with this site? Is it still active? Has anyone recently managed to download the dataset as described in the Usage section?

Looking forward to your reply and help.

Error processing sequence

!!! Error processing sequence, skipping: ('S5', '15', '2', '58860488')
27%|###########3 | 57/210 [00:08<00:22, 6.79it/s]

Traceback (most recent call last):
  File "/home/disk_share/hoh/data/learnable-triangulation/data/h36m-fetch/process_all.py", line 144, in process_subaction
    annots = process_view(out_dir, subject, action, subaction, camera)
  File "/home/disk_share/hoh/data/learnable-triangulation/data/h36m-fetch/process_all.py", line 106, in process_view
    call([
  File "/home/hoh/disk/anaconda3/envs/torch/lib/python3.8/subprocess.py", line 340, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/home/hoh/disk/anaconda3/envs/torch/lib/python3.8/subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/hoh/disk/anaconda3/envs/torch/lib/python3.8/subprocess.py", line 1706, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'

27%|###########3 | 57/210 [00:08<00:22, 6.79it/s]

!!! Error processing sequence, skipping: ('S5', '15', '2', '60457274')
27%|###########3 | 57/210 [00:08<00:22, 6.79it/s]

Traceback (most recent call last):
  File "/home/disk_share/hoh/data/learnable-triangulation/data/h36m-fetch/process_all.py", line 144, in process_subaction
    annots = process_view(out_dir, subject, action, subaction, camera)
  File "/home/disk_share/hoh/data/learnable-triangulation/data/h36m-fetch/process_all.py", line 106, in process_view
    call([
  File "/home/hoh/disk/anaconda3/envs/torch/lib/python3.8/subprocess.py", line 340, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/home/hoh/disk/anaconda3/envs/torch/lib/python3.8/subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/hoh/disk/anaconda3/envs/torch/lib/python3.8/subprocess.py", line 1706, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'

Processing appears to run through to the end, but the corresponding output folders under processed/ contain no results.
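As with the axel errors above, FileNotFoundError for 'ffmpeg' means the ffmpeg binary is not on the PATH of the environment running process_all.py. A quick sanity check, assuming a Unix-like shell:

$ which ffmpeg
$ ffmpeg -version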

Cannot load CDF C library; checked . Try 'os.environ["CDF_LIB"] = library_directory'

python process_all.py
Traceback (most recent call last):
  File "process_all.py", line 4, in <module>
    from spacepy import pycdf
  File "/home/chuanjiang/anaconda3/lib/python3.7/site-packages/spacepy/pycdf/__init__.py", line 1258, in <module>
    'before import.').format(', '.join(_libpath)))
Exception: Cannot load CDF C library; checked . Try 'os.environ["CDF_LIB"] = library_directory' before import.
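As the exception text itself suggests, spacepy must be told where the CDF C library lives before pycdf is imported. A minimal sketch, assuming the NASA CDF distribution was installed under /usr/local/cdf (adjust the path to your actual installation):

import os

# Point spacepy at the directory containing the CDF shared library
# (hypothetical path; use the lib directory of your own CDF install).
os.environ["CDF_LIB"] = "/usr/local/cdf/lib"

from spacepy import pycdf  # import only after CDF_LIB is set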

Failed to verify your PHPSESSID

╰─➤ python download_all.py
Could not read PHPSESSID from config.ini.
Enter PHPSESSID: YunYang
Traceback (most recent call last):
  File "download_all.py", line 106, in <module>
    verify_phpsessid(phpsessid)
  File "download_all.py", line 59, in verify_phpsessid
    assert resp.url == test_url, fail_message
AssertionError: Failed to verify your PHPSESSID. Please ensure that you are currently logged in at http://vision.imar.ro/human3.6m/ and that you have copied the PHPSESSID cookie correctly.
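The check that fails here can be reproduced by hand to debug the cookie. Below is a minimal sketch using requests; the test URL is an assumption for illustration, not necessarily the one download_all.py uses:

import requests

# Hypothetical URL of a page that requires a logged-in session.
test_url = 'http://vision.imar.ro/human3.6m/filebrowser.php'
phpsessid = 'paste-your-PHPSESSID-cookie-value-here'

resp = requests.get(test_url, cookies={'PHPSESSID': phpsessid})
# If the session is not accepted, the server redirects elsewhere and
# resp.url will differ from test_url (which is what the assertion checks).
print(resp.url == test_url)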

Error running extract_all.py

Hi there

I am running through the preprocessing scripts. download_all.py works OK, but I am getting error messages when running extract_all.py.
Only one folder, 'S1', is created, and I get these error messages (am I missing a module?):

python3 extract_all.py
0%| | 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/lib/python3.6/tarfile.py", line 1643, in gzopen
    t = cls.taropen(name, mode, fileobj, **kwargs)
  File "/usr/lib/python3.6/tarfile.py", line 1619, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/usr/lib/python3.6/tarfile.py", line 1482, in __init__
    self.firstmember = self.next()
  File "/usr/lib/python3.6/tarfile.py", line 2297, in next
    tarinfo = self.tarinfo.fromtarfile(self)
  File "/usr/lib/python3.6/tarfile.py", line 1092, in fromtarfile
    buf = tarfile.fileobj.read(BLOCKSIZE)
  File "/usr/lib/python3.6/gzip.py", line 276, in read
    return self._buffer.read(size)
  File "/usr/lib/python3.6/_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
  File "/usr/lib/python3.6/gzip.py", line 463, in read
    if not self._read_gzip_header():
  File "/usr/lib/python3.6/gzip.py", line 411, in _read_gzip_header
    raise OSError('Not a gzipped file (%r)' % magic)
OSError: Not a gzipped file (b'\x00\x00')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "extract_all.py", line 50, in <module>
    extract_all()
  File "extract_all.py", line 46, in extract_all
    path.join(out_dir, 'Videos'))
  File "extract_all.py", line 24, in extract_tgz
    with tarfile.open(tgz_file, 'r:gz') as tar:
  File "/usr/lib/python3.6/tarfile.py", line 1589, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/usr/lib/python3.6/tarfile.py", line 1647, in gzopen
    raise ReadError("not a gzip file")
tarfile.ReadError: not a gzip file

preprocess dataset

Since I can't ask you for the dataset (you don't have permission to redistribute it), could I have the preprocessed dataset? I registered at vision.imar.ro quite a long time ago, but they still have not verified my account.

PHPSESSID

Nobody around me has an account, so I can't download the dataset.
Could you give me a PHPSESSID?
Thank you very much!

Access to Human3.6M

The maintainers are no longer active; is there any other way to get access to the dataset?

FileNotFoundError when processing sequence

!!! Error processing sequence, skipping: ('S1', '16', '1', '55011271')
Traceback (most recent call last):
  File ".\process_all.py", line 150, in process_subaction
    annots = process_view(out_dir, subject, action, subaction, camera)
  File ".\process_all.py", line 117, in process_view
    path.join(tmp_dir, 'img_%06d.jpg')
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 339, in call
    with Popen(*popenargs, **kwargs) as p:
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

!!! Error processing sequence, skipping: ('S1', '16', '1', '58860488')
Traceback (most recent call last):
  File ".\process_all.py", line 150, in process_subaction
    annots = process_view(out_dir, subject, action, subaction, camera)
  File ".\process_all.py", line 117, in process_view
    path.join(tmp_dir, 'img_%06d.jpg')
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 339, in call
    with Popen(*popenargs, **kwargs) as p:
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

!!! Error processing sequence, skipping: ('S1', '16', '1', '60457274')
Traceback (most recent call last):
  File ".\process_all.py", line 150, in process_subaction
    annots = process_view(out_dir, subject, action, subaction, camera)
  File ".\process_all.py", line 117, in process_view
    path.join(tmp_dir, 'img_%06d.jpg')
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 339, in call
    with Popen(*popenargs, **kwargs) as p:
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "C:\Users\DELL\anaconda3\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

I am getting the above error message when I run process_all.py. I am pretty sure that I have extracted all the files correctly. @anibali could you please suggest a solution? Thanks.

Originally posted by @yerzhan7orazayev in #22 (comment)

Processing all frames in process_all.py

The process_all.py script processes only every 64th frame for the subjects 'S9' and 'S11', in these lines:

    if subject == 'S9' or subject == 'S11':
        return np.arange(0, len(poses_3d_univ), 64)

How can I change the above code to process all frames, without skipping any?

Tried: replacing 64 with 1, but it did not work.
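For what it is worth, a sketch of a select_frame_indices_to_include() that keeps every frame would simply return all indices regardless of subject. This is illustrative only and does not explain why the reporter's attempt failed:

import numpy as np

def select_frame_indices_to_include(subject, poses_3d_univ):
    # Keep every frame for every subject: no 64-frame subsampling and
    # no skipping of low-movement frames.
    return np.arange(len(poses_3d_univ))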
