
deepfashion3d's People

Contributors

kv2000


deepfashion3d's Issues

How does the GCN handle different sets of input feature lines?

Hi @kv2000 ,
Superb work and dataset, which in my opinion is a big contribution to this research area.
Meanwhile, I have several questions.
[1] Some inconsistencies
As mentioned in the abstract, there are 563 garment instances, but according to Table 1 the total number is 599.
I suppose some garment instances were removed, but why? For example, because of poor SMPL fitting results?

How many random views are used for rendering the synthetic images: 5 (Section 4.1) or 3 (Section 4.3)?
I admit this does not really matter, but it is somewhat confusing.

[2] Why is the shape (i.e., beta) parameter of the SMPL model ignored in the whole pipeline?
The scale of a kid's clothing is different from that of an adult's.

[3] The GCN part. In the original Pixel2Mesh paper, the input is always the fixed ellipsoid from which the deformations start.
But in your paper, the input is a varying set of feature lines depending on the category of the garment (as shown in Fig. 3), right? How does the GCN manage to handle a varying set of feature lines?
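For context on why a GCN can, in principle, consume graphs of different sizes: the learned weights act per node and are shared across the whole graph, so only the adjacency changes between inputs. A minimal NumPy sketch of this property (my own illustration, not the authors' code):

```python
import numpy as np

def gcn_layer(x, adj, w):
    # x: (N, d_in) node features, adj: (N, N) normalized adjacency,
    # w: (d_in, d_out) learned weights. w is shared across nodes, so the
    # same layer applies to feature-line graphs with any number of nodes N.
    return np.maximum(adj @ x @ w, 0.0)  # ReLU(A X W)

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 8))           # one set of weights ...
for n in (50, 120):                       # ... two graphs of different sizes
    x = rng.standard_normal((n, 3))
    adj = np.eye(n)                       # placeholder adjacency
    assert gcn_layer(x, adj, w).shape == (n, 8)
```

Under this view, the per-category part would just be which adjacency (which feature-line template) is fed in, not a different network.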

Thanks very much!

Where are the RGB images?

Hi,
thank you for your great work! I can't find the RGB images when using the DeepFashion3D dataset. Where can I get them?

How to load the DeepFashion3D data?

Hi! I have a very basic question on loading the data for viewing. I was trying to view the point cloud, mesh, and body (from the v2 release) using meshplot, but ran into the following issues:

  1. I am not sure which SMPL .pkl to use; where can I find whether the body model is female, male, or neutral?
  2. The packed_pose has fields "pose", "scale", and "trans". I used "pose" and "trans" to initialize the SMPL model and then scaled it with the "scale" float, but the resulting body mesh and garment mesh do not line up when I load them into the same meshplot plot:
[Screenshot 2023-07-03: the body mesh and garment mesh do not line up]

The code I used to create this plot is as follows:

import os  # was missing, but os.path/os.listdir are used below
import pickle

import smplx
import torch
import numpy as np
import meshplot as mp
import open3d as o3d
import igl

df3d_folder = "../../../Downloads/data/deepfashion3dv2"
pose_folder = os.path.join(df3d_folder, "packed_pose")
pc_folder = os.path.join(df3d_folder, "point_cloud")
fl_folder = os.path.join(df3d_folder, "featureline_annotation")
mesh_folder = os.path.join(df3d_folder, "filtered_registered_mesh")

garment_id = 19 # the first in the short-sleeve upper category
garment_id = str(garment_id)

available_poses = [filename.split(".")[0] for filename in os.listdir(os.path.join(pose_folder, garment_id))]
available_poses = sorted(available_poses)
# available_poses

pose_id = available_poses[0]
fl_id = "_".join(pose_id.split("-"))
# pose_id, fl_id

# load pose and see if I can reconstruct the body
with open(os.path.join(pose_folder, garment_id, pose_id + ".pkl"), "rb") as f:
    packed_pose = pickle.load(f)
# packed_pose

model_n = smplx.create("./models/", model_type="smpl", gender="female",
                       global_orient=packed_pose["pose"][:3].reshape(1, 3),
                       body_pose=packed_pose["pose"][3:].reshape(1, 69),
                       transl=packed_pose["trans"].reshape(1, 3),
                       dtype=torch.float64)

output_n = model_n()
verts_n = output_n.vertices.detach().cpu().numpy().squeeze()
faces_n = model_n.faces

# newvn = (verts_n - offset) * packed_pose["scale"]# + offset
newvn = verts_n * packed_pose["scale"]

# next try to load the .ply files
pcd = o3d.io.read_point_cloud(os.path.join(pc_folder, garment_id, pose_id + ".ply"))

# visualize the garment mesh on body
v, f = igl.read_triangle_mesh(os.path.join(mesh_folder, pose_id, "model_cleaned.obj"))

p = mp.plot(newvn, faces_n, c=np.array([0.8,1,0.8]))  # plot the scaled vertices, not verts_n
p.add_mesh(v, f)
p.add_points(np.asarray(pcd.points))

If anyone could help me understand how to correctly load the data for viewing, that would be great! Thanks!
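One thing worth checking (this is my assumption, not something confirmed by the dataset docs): whether "scale" should be applied before the translation rather than after. Scaling an already-translated mesh also scales the translation, which shifts the body away from the garment. A quick NumPy illustration of why the order matters:

```python
import numpy as np

verts = np.array([[0.0, 1.0, 0.0]])    # a toy vertex
trans = np.array([0.0, 0.5, 0.0])      # SMPL-style global translation
scale = 1.2

after = (verts + trans) * scale        # scale applied after translation
before = verts * scale + trans         # scale applied before translation
assert not np.allclose(after, before)  # the two orders place the body differently
```

If the meshes are misaligned, it may be worth running the SMPL model with transl left out, then applying v * scale + trans (or the reverse order) manually and seeing which variant lines up with the garment.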

The form link is invalid

Thank you for your work! The link is invalid (or inaccessible from China); please update it.

Cannot find multi-view real images

Thanks for your great contribution.
After I downloaded the dataset, I could not find the multi-view real images advertised on the homepage.

Signed Form

Hi,

The Google form has a field labeled "Signed". May I know what this means?

Thanks,
Sai

Hi, I would say there is still a problem. I want to use the data as quantitative results to benchmark some models, but the SMPL body is projecting outside the cloth. Most of the clothes are actually for women, yet only the male SMPL fits in terms of height, and the garment is colliding with it.


Is there a better way so that the SMPL body doesn't intersect the garment?
If not, the dataset won't be of any help to me.


Originally posted by @msverma101 in #12 (comment)

Cannot find the "deep_fashion_3d_point_cloud_nnnotations" folder

Hello, thanks for sharing the amazing dataset.

I have downloaded "deep_fashion_3d_point_cloud.rar" and successfully unzipped the file with the passcode provided via email.

However, I could not find the folder named "deep_fashion_3d_point_cloud_nnnotations" mentioned in the README.

Maybe it hasn't been published yet?

Can we use it in Unity?

Hi, just a quick question: I know this was made with point clouds, but is there any way I can use the results in Unity?

How did you annotate?

I would like to know what software you used to complete the annotation work.

Regarding SMPL Shape parameters.

Hi,
Thanks for making this excellent dataset public!

Since your SMPL models are not optimized for shape, the bodies fit the provided garments only loosely or tightly. If we want to optimize each SMPL model's shape parameters w.r.t. its garment, how can we do that?

Any tips or tricks from anyone would be helpful.
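One possible approach (a toy sketch of the idea, not a tested recipe): treat the shape parameters as free variables and minimize a chamfer-style distance between the body surface and the garment. Below, a single scalar stands in for the betas and is fitted with numeric gradients; in a real setup you would run the SMPL forward pass inside the loop and optimize the 10 betas with PyTorch autograd:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.standard_normal((100, 3))                    # toy "template body"
garment = base * 1.3 + rng.normal(0, 0.01, base.shape)  # toy garment scan

def chamfer(a, b):
    # one-sided squared chamfer: mean squared distance from each point in a
    # to its nearest neighbor in b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

beta = 1.0                      # stand-in for the SMPL shape parameters
lr, eps = 0.05, 1e-4
for _ in range(100):
    # numeric gradient of the loss w.r.t. beta (autograd in a real setup)
    g = (chamfer(base * (beta + eps), garment)
         - chamfer(base * (beta - eps), garment)) / (2 * eps)
    beta -= lr * g
# beta should approach the "true" scale 1.3 of the toy garment
```

A real implementation would also need the pose held fixed and probably a penetration penalty, so the optimized body stays inside the garment rather than merely close to it.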

Expected date of annotation release?

Hello,

According to your paper, there are four types of annotation in the dataset: "GT point cloud", "3D pose", "feature lines" and "multi-view real images".

I believe the first two annotations are already provided, but I couldn't find the "feature lines" and "multi-view real images".

I would very much appreciate it if you could notify me of the expected date for the annotation data release.

Thanks

Unaligned point clouds

Hi @kv2000, in the provided data, why are the feature lines and the scan not aligned? How can this be solved?

Generally speaking, it should be enough to record the corresponding vertex indices on the scan data. Why are the feature lines provided as absolute point clouds here?
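If only vertex indices are needed, one workaround (my own sketch, assuming the feature-line points were sampled from the scan and any misalignment is small or already corrected) is to snap each feature-line point to its nearest scan vertex:

```python
import numpy as np

def nearest_indices(feature_pts, scan_verts):
    # for each feature-line point, return the index of the nearest scan vertex
    # (brute force; for large scans, scipy.spatial.cKDTree would be faster)
    d2 = ((feature_pts[:, None, :] - scan_verts[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
scan = rng.standard_normal((1000, 3))          # toy scan vertices
idx_true = rng.choice(1000, 20, replace=False)
feature = scan[idx_true] + 1e-4                # feature line = perturbed scan points
assert (nearest_indices(feature, scan) == idx_true).all()
```

If the feature lines really are in a different frame than the scan (as the screenshots suggest), a rigid alignment step (e.g., Procrustes or ICP) would be needed before snapping.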


Correspondence of feature-line vertex locations in the dataset to those on the mesh template generated from SMPL?

Hi,

Thanks for this amazing dataset. While going through it, I ran into the following query; could you help me with it?

The SMPL template is deformed based on feature-line vertices, using handle-based deformation.

Assuming the handle points are the boundary points on the garment template: through the segmentation masks of SMPL, we know which one is the neckline, hemline, etc.
However, how do we know the correspondence between such a point on the template and the corresponding feature-line vertex locations provided in the dataset? How can we set hard constraints if we don't know the correspondence?
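For the handle-based deformation itself, once a correspondence is fixed, the usual setup minimizes a Laplacian smoothness term subject to the handles being pinned at the feature-line positions. A tiny 1-D illustration (my own sketch, not the paper's solver), with hard constraints approximated by heavily weighted least squares:

```python
import numpy as np

n = 10
v0 = np.linspace(0.0, 1.0, n)          # rest positions of a 1-D "template"

# uniform Laplacian of a chain: rows compute second differences
L = np.zeros((n - 2, n))
for i in range(n - 2):
    L[i, i], L[i, i + 1], L[i, i + 2] = 1.0, -2.0, 1.0

handles = {0: 0.0, n - 1: 2.0}         # pin endpoints at "feature line" targets
w = 1e4                                # large weight approximates a hard constraint
C = np.zeros((len(handles), n))
h = np.zeros(len(handles))
for r, (i, t) in enumerate(handles.items()):
    C[r, i], h[r] = 1.0, t

# minimize ||L v - L v0||^2 + w^2 ||C v - h||^2
A = np.vstack([L, w * C])
b = np.concatenate([L @ v0, w * h])
v = np.linalg.lstsq(A, b, rcond=None)[0]
# handles land on their targets; interior vertices deform smoothly between them
```

On a real mesh the same structure applies with the cotangent Laplacian of the SMPL template and 3-D vertex positions; without a known correspondence, the constraint rows C cannot be built, which is exactly the problem raised above.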
