hrn's People

Contributors

younglbw

hrn's Issues

Video?

Hi, can this run on every frame of a video?
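For illustration, a minimal per-frame sketch of what this could look like, assuming the `Reconstructor.predict(img, visualize=..., save_name=..., out_dir=...)` interface that demo.py uses (the signature appears in tracebacks elsewhere in this tracker). Note there is no temporal smoothing, so per-frame results may jitter:

```python
# Hedged sketch: run HRN independently on every video frame with OpenCV.
# Untested; the input path is hypothetical.
import cv2
from models.hrn import Reconstructor

reconstructor = Reconstructor(params)  # params: the options object demo.py builds

cap = cv2.VideoCapture('input.mp4')
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # one independent reconstruction per frame; no temporal consistency
    reconstructor.predict(frame, visualize=False,
                          save_name=f'frame_{idx:05d}',
                          out_dir='./video_results')
    idx += 1
cap.release()
```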

Is your method animatable?

The detail capture is excellent. Does your method scale to animating different expressions, or to expression transfer?

A few questions

The supplementary material shows a full-head reconstruction and mentions "In addition, we unwrap the new head model and recalculate a new set of UV coordinates." May I ask which software was used for this unwrapping?

vertex

As I understand this project, the high-frequency facial details such as wrinkles and dimples seem to be obtained by rendering the face reconstructed from the 3DMM and then computing against the original image. Would it be possible to apply these high-frequency results back onto the 3DMM to correct its vertex positions? If the 3DMM vertex positions could be corrected, the project's value would take another big step forward.

Multi view reconstruction

Dear author,
When I run the demo for multi-view reconstruction, I get results for each view: 5 different meshes.
Is there a way to obtain one final mesh for each subject?
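Not an answer from the author, but for a quick joint visualization the per-view OBJ files can be naively concatenated with trimesh (already in the requirements); a true single fused mesh would require registering the five meshes first. The paths below are hypothetical:

```python
# Naive concatenation of the per-view meshes for visualization only;
# this does NOT fuse overlapping geometry. Paths are hypothetical, and
# trimesh.load may return a Scene for textured OBJs (this assumes plain meshes).
import glob
import trimesh

meshes = [trimesh.load(p) for p in sorted(glob.glob('./results/subject01/*_mesh.obj'))]
merged = trimesh.util.concatenate(meshes)
merged.export('./results/subject01/merged.obj')
```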

During testing, reconstruction of the first two faces takes far longer than reconstruction of later faces. What is the reason?

Hello, and thank you very much for open-sourcing the model!
While using it, I found that the average reconstruction time for the first two input faces is about 15 seconds, whereas subsequent faces take only 1 to 2 seconds, consistent with the README. See the log below (I modified the result-saving code so it does not store all results, so the save time is short):

  0%|                                                               | 0/19 [00:00<?, ?it/s]save results 0.21760940551757812
  5%|█████▌                                                 | 1/19 [00:15<04:45, 15.87s/it]save results 0.2519359588623047
 11%|███████████                                            | 2/19 [00:30<04:19, 15.24s/it]save results 0.21797394752502441
 16%|████████████████▌                                      | 3/19 [00:32<02:25,  9.11s/it]save results 0.2182457447052002
 21%|██████████████████████                                 | 4/19 [00:34<01:33,  6.21s/it]save results 0.22073841094970703
 26%|███████████████████████████▋                           | 5/19 [00:36<01:04,  4.63s/it]

What exactly causes this, and is there a way to avoid it?
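One plausible explanation (an assumption, not confirmed by the author) is one-time startup cost: CUDA context creation and kernel/JIT compilation during the first forward passes. If so, a throwaway warm-up prediction before timing real inputs should hide it:

```python
# Hedged warm-up sketch: pay one-time costs (CUDA context init, kernel
# compilation) on a throwaway example image before processing real inputs.
# Assumes `reconstructor` is the Reconstructor instance built as in demo.py.
import glob
import cv2

warmup_path = sorted(glob.glob('./assets/examples/single_view_image/*'))[0]
_ = reconstructor.predict(cv2.imread(warmup_path), visualize=False,
                          save_name='warmup', out_dir='/tmp/hrn_warmup')
```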

About face masking

Can you tell me how to get a face mask? The cited article provides no code or pre-trained models. Could you give me some advice, or suggest alternatives?
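As a rough stand-in (not the segmentation method the paper cites), the convex hull of 68 facial landmarks gives a crude face mask, using the face_alignment package already pinned in this repo's requirements:

```python
# Crude landmark-based face mask: fill the convex hull of 68 2D landmarks.
# A rough alternative only; a learned face-parsing model gives cleaner boundaries.
import cv2
import numpy as np
import face_alignment

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
                                  flip_input=False)  # enum name per face-alignment 1.3.x

def convex_hull_face_mask(img_bgr):
    lms = fa.get_landmarks_from_image(img_bgr[..., ::-1])  # detector expects RGB
    if not lms:
        return None  # no face found
    hull = cv2.convexHull(lms[0][:, :2].astype(np.int32))
    mask = np.zeros(img_bgr.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)
    return mask
```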

Training code

Hello, will the training code be open-sourced later?

extra_results is empty

Great work!
Running notebooks/HRN_inference.ipynb produces results normally,
but demo.py hits an empty extra_results in both single and multi mode.

Texture map resolution

Thank you for open-sourcing the model; it has helped me a lot. I have a few questions.

I previously used the model released on ModelScope. It is very high-resolution, but the eye texture looks as if it were pasted on.
Running the code from GitHub directly, the eye-texture problem disappears, but the resolution is only 256 and rather blurry. What is the difference between these two models, and is there a way to increase the texture resolution?

Thanks!!

Question about downloading the pretrained models

Through the Google Drive link I downloaded ICCV2021_model, which contains snapshot_99.pth.tar rather than the contents of the pretrained_models directory. Why is that, and what is the download link for pretrained_models?

Replace nvdiffrast with pytorch3d or something else

Thanks for your great work. Could you replace nvdiffrast with pytorch3d or something else in util/nv_diffrast.py?
Some people have tried this replacement for Deep3DFaceRecon_pytorch in https://github.com/ryanhe312/Deep3DFaceRecon_pytorch, but your version of nv_diffrast.py is more complicated because it uses the dr.texture function from nvdiffrast.
Could you give me some advice, or add a way to choose between pytorch3d and nvdiffrast?
Many thanks.

A little question about the texture fitting and the 'fat_face' function

Great work! Thanks for sharing the code.
I noticed that during the texture fitting in the final visualization stage (at https://github.com/youngLBW/HRN/blob/main/models/facerecon_model.py#L417), the high color is further fitted to the tensor self.input_img_for_tex to improve the output's look.
Checking the code, I found that self.input_img_for_tex is generated by a fat_face function (at https://github.com/youngLBW/HRN/blob/623155deab5883a25f884e4ce72ede34f2ca8f4b/facelandmark/large_model_infer.py#L347) taking the original image as input. If I understand correctly, self.input_img_for_tex is the same input image with the face expanded.
My question is why this face-expanding step is needed. Thanks for your answer.

AttributeError: module 'torch' has no attribute 'complex'

(HRN) yc@razer3080ti:~/testing3dai/HRN/HRN$ pip install torch-complex
Keyring is skipped due to an exception: 'keyring.backends'
Requirement already satisfied: torch-complex in /home/yc/anaconda3/lib/python3.7/site-packages (0.4.3)
Requirement already satisfied: numpy in /home/yc/anaconda3/lib/python3.7/site-packages (from torch-complex) (1.18.1)
(HRN) yc@razer3080ti:~/testing3dai/HRN/HRN$ CUDA_VISIBLE_DEVICES=0 python demo.py --input_type single_view --input_root ./assets/examples/single_view_image --output_root ./assets/examples/single_view_image_results
Traceback (most recent call last):
  File "demo.py", line 2, in <module>
    from models.hrn import Reconstructor
  File "/home/yc/testing3dai/HRN/HRN/models/__init__.py", line 22, in <module>
    from models.base_model import BaseModel
  File "/home/yc/testing3dai/HRN/HRN/models/base_model.py", line 6, in <module>
    from . import networks
  File "/home/yc/testing3dai/HRN/HRN/models/networks.py", line 16, in <module>
    from kornia.geometry import warp_affine
  File "/home/yc/anaconda3/lib/python3.7/site-packages/kornia/__init__.py", line 3, in <module>
    from . import filters
  File "/home/yc/anaconda3/lib/python3.7/site-packages/kornia/filters/__init__.py", line 3, in <module>
    from .bilateral import BilateralBlur, JointBilateralBlur, bilateral_blur, joint_bilateral_blur
  File "/home/yc/anaconda3/lib/python3.7/site-packages/kornia/filters/bilateral.py", line 3, in <module>
    from kornia.core import Module, Tensor, pad
  File "/home/yc/anaconda3/lib/python3.7/site-packages/kornia/core/__init__.py", line 1, in <module>
    from ._backend import (
  File "/home/yc/anaconda3/lib/python3.7/site-packages/kornia/core/_backend.py", line 24, in <module>
    complex = torch.complex
AttributeError: module 'torch' has no attribute 'complex'
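For context (my reading of the traceback, not an official fix): torch 1.6 has no torch.complex attribute, so any kornia build that references it at import time cannot load. The environment list posted in another issue in this tracker runs kornia 0.5.11 with torch 1.6.0, which suggests pinning an older kornia. A quick check:

```python
# Quick diagnostic: confirm whether the installed torch exposes torch.complex.
import torch

print(torch.__version__)
print(hasattr(torch, 'complex'))  # False on torch 1.6, which triggers this error
```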

Colab: AttributeError: FaceReconstructionPipeline: _3D

Hi, thank you for sharing your amazing work!

I'm currently trying to run the Colab notebook demo, but I get the following error when running the 2nd cell:

The code:

import os
import cv2
from moviepy.editor import ImageSequenceClip
from modelscope.models.cv.face_reconstruction.utils import write_obj
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

face_reconstruction = pipeline(Tasks.face_reconstruction, model='damo/cv_resnet50_face-reconstruction', model_revision='v2.0.0-HRN')

The error:

2023-06-15 17:24:06,122 - modelscope - INFO - PyTorch version 2.0.1+cu118 Found.
2023-06-15 17:24:06,137 - modelscope - INFO - TensorFlow version 2.12.0 Found.
2023-06-15 17:24:06,140 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2023-06-15 17:24:06,256 - modelscope - INFO - Loading done! Current index file version is 1.6.1, with md5 5703f34095fb68603c387701a17ebb07 and a total number of 849 components indexed
2023-06-15 17:24:10,793 - modelscope - INFO - Use user-specified model revision: v2.0.0-HRN
2023-06-15 17:24:18,423 - modelscope - INFO - initiate model from /root/.cache/modelscope/hub/damo/cv_resnet50_face-reconstruction
2023-06-15 17:24:18,425 - modelscope - INFO - initiate model from location /root/.cache/modelscope/hub/damo/cv_resnet50_face-reconstruction.
2023-06-15 17:24:18,429 - modelscope - INFO - initialize model from /root/.cache/modelscope/hub/damo/cv_resnet50_face-reconstruction
initialize network with normal
initialize network with normal
2023-06-15 17:24:27,319 - modelscope - WARNING - No preprocessor field found in cfg.
2023-06-15 17:24:27,322 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-06-15 17:24:27,324 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/root/.cache/modelscope/hub/damo/cv_resnet50_face-reconstruction'}. trying to build by task and model information.
2023-06-15 17:24:27,326 - modelscope - WARNING - No preprocessor key ('face_reconstruction', 'face-reconstruction') found in PREPROCESSOR_MAP, skip building preprocessor.
loading the model from /root/.cache/modelscope/hub/damo/cv_resnet50_face-reconstruction/pytorch_model.pt
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /usr/local/lib/python3.10/dist-packages/modelscope/utils/registry.py:212 in build_from_cfg       │
│                                                                                                  │
│   209 │   │   if hasattr(obj_cls, '_instantiate'):                                               │
│   210 │   │   │   return obj_cls._instantiate(**args)                                            │
│   211 │   │   else:                                                                              │
│ ❱ 212 │   │   │   return obj_cls(**args)                                                         │
│   213 │   except Exception as e:                                                                 │
│   214 │   │   # Normal TypeError does not print class name.                                      │
│   215 │   │   raise type(e)(f'{obj_cls.__name__}: {e}')                                          │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/modelscope/pipelines/cv/face_reconstruction_pipeline.py: │
│ 106 in __init__                                                                                  │
│                                                                                                  │
│   103 │   │   │   os.path.join(model_root, 'face_alignment', 'depth-6c4283c0e0.zip'),            │
│   104 │   │   │   save_ckpt_dir)                                                                 │
│   105 │   │   self.lm_sess = face_alignment.FaceAlignment(                                       │
│ ❱ 106 │   │   │   face_alignment.LandmarksType._3D, flip_input=False)                            │
│   107 │   │                                                                                      │
│   108 │   │   config = tf.ConfigProto(allow_soft_placement=True)                                 │
│   109 │   │   config.gpu_options.per_process_gpu_memory_fraction = 0.2                           │
│                                                                                                  │
│ /usr/lib/python3.10/enum.py:437 in __getattr__                                                   │
│                                                                                                  │
│    434 │   │   try:                                                                              │
│    435 │   │   │   return cls._member_map_[name]                                                 │
│    436 │   │   except KeyError:                                                                  │
│ ❱  437 │   │   │   raise AttributeError(name) from None                                          │
│    438 │                                                                                         │
│    439 │   def __getitem__(cls, name):                                                           │
│    440 │   │   return cls._member_map_[name]                                                     │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: _3D

During handling of the above exception, another exception occurred:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 9>:9                                                                              │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/modelscope/pipelines/builder.py:140 in pipeline          │
│                                                                                                  │
│   137 │   if preprocessor is not None:                                                           │
│   138 │   │   cfg.preprocessor = preprocessor                                                    │
│   139 │                                                                                          │
│ ❱ 140 │   return build_pipeline(cfg, task_name=task)                                             │
│   141                                                                                            │
│   142                                                                                            │
│   143 def add_default_pipeline_info(task: str,                                                   │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/modelscope/pipelines/builder.py:56 in build_pipeline     │
│                                                                                                  │
│    53 │   │   │   :obj:`Tasks` for more details.                                                 │
│    54 │   │   default_args (dict, optional): Default initialization arguments.                   │
│    55 │   """                                                                                    │
│ ❱  56 │   return build_from_cfg(                                                                 │
│    57 │   │   cfg, PIPELINES, group_key=task_name, default_args=default_args)                    │
│    58                                                                                            │
│    59                                                                                            │
│                                                                                                  │
│ /usr/local/lib/python3.10/dist-packages/modelscope/utils/registry.py:215 in build_from_cfg       │
│                                                                                                  │
│   212 │   │   │   return obj_cls(**args)                                                         │
│   213 │   except Exception as e:                                                                 │
│   214 │   │   # Normal TypeError does not print class name.                                      │
│ ❱ 215 │   │   raise type(e)(f'{obj_cls.__name__}: {e}')                                          │
│   216                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: FaceReconstructionPipeline: _3D

I successfully installed all libraries and made sure to restart the runtime. I'm also using a GPU runtime, and CUDA seems to be loading without issues. The same error also occurs when running the notebook locally, and it also seems to occur with the updated notebook mentioned in issue #22.

Please let me know if there's a fix for the issue.

Thanks!
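For reference (an observation about the face-alignment package, not an author-confirmed fix): face-alignment 1.4+ renamed the LandmarksType enum members, which is exactly what this traceback trips over, while the environment list elsewhere in this tracker pins face-alignment 1.3.5, where the old _3D name still exists. A version-agnostic call site could look like:

```python
# Hedged sketch: resolve the landmarks enum across face-alignment versions
# (pre-1.4 uses _3D, 1.4+ uses THREE_D).
import face_alignment

lm_type = getattr(face_alignment.LandmarksType, '_3D', None) \
    or getattr(face_alignment.LandmarksType, 'THREE_D')
fa = face_alignment.FaceAlignment(lm_type, flip_input=False)
```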

How to save deformation map and displacement map as texture? (.png or .jpg)

Hi!

My goal is to import highly detailed meshes and textures into 3D software like Blender or Unreal for rendering.

I found displacement_map and deformation_map in the output:

output['displacement_map'] (64, 64, 3)
output['deformation_map'] (256, 256, 1)

How can I save the deformation map and displacement map as a texture (.png or .jpg)?

Thank you very much!
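A minimal sketch for dumping those float arrays to 8-bit images, assuming the output keys and shapes quoted above; min-max normalization is for inspection only, since 8-bit PNG loses the absolute scale that a renderer's displacement input usually needs:

```python
# Normalize a float map to [0, 255] and write it with OpenCV. For actual
# displacement in Blender/Unreal, prefer a 16-bit or EXR export to keep range.
import cv2
import numpy as np

def save_map_as_png(arr, path):
    arr = np.asarray(arr, dtype=np.float32)
    norm = (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)
    img = (norm * 255.0).round().astype(np.uint8)
    if img.ndim == 3 and img.shape[2] == 1:
        img = img[..., 0]                      # (H, W, 1) -> (H, W)
    cv2.imwrite(path, img)

save_map_as_png(output['displacement_map'], 'displacement_map.png')
save_map_as_png(output['deformation_map'], 'deformation_map.png')
```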


list(list(float64)<iv=None>)<iv=None> cannot be represented as a Numpy dtype

2023-10-24 09:48:59.912303: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 1697970000 Hz
predict ./assets/examples/single_view_image
0%| | 0/1 [00:07<?, ?it/s]
Traceback (most recent call last):
  File "/home/ty/0_server_code/li/01.3dface/demo.py", line 78, in <module>
    run_hrn(args)
  File "/home/ty/0_server_code/li/01.3dface/demo.py", line 26, in run_hrn
    output = reconstructor.predict(img, visualize=True, save_name=save_name, out_dir=out_dir)
  File "/home/ty/0_server_code/li/01.3dface/models/hrn.py", line 243, in predict
    output = self.predict_base(img)
  File "/home/ty/0_server_code/li/01.3dface/models/hrn.py", line 153, in predict_base
    landmarks = self.prepare_data(img, self.lm_sess, five_points=landmarks)
  File "/home/ty/0_server_code/li/01.3dface/models/hrn.py", line 98, in prepare_data
    landmark = lm_sess.get_landmarks_from_image(input_img)[0]
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/face_alignment/api.py", line 120, in get_landmarks_from_image
    detected_faces = self.face_detector.detect_from_image(image.copy())
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/face_alignment/detection/sfd/sfd_detector.py", line 44, in detect_from_image
    bboxlist = detect(self.face_detector, image, device=self.device)[0]
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/face_alignment/detection/sfd/detect.py", line 19, in detect
    return batch_detect(net, img, device)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/face_alignment/detection/sfd/detect.py", line 45, in batch_detect
    bboxlists = get_predictions(List(olist), batch_size)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/dispatcher.py", line 501, in _compile_for_args
    raise e
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/dispatcher.py", line 434, in _compile_for_args
    return_val = self.compile(tuple(argtypes))
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/dispatcher.py", line 979, in compile
    cres = self._compiler.compile(args, return_type)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/dispatcher.py", line 141, in compile
    status, retval = self._compile_cached(args, return_type)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/dispatcher.py", line 155, in _compile_cached
    retval = self._compile_core(args, return_type)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/dispatcher.py", line 168, in _compile_core
    cres = compiler.compile_extra(self.targetdescr.typing_context,
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler.py", line 686, in compile_extra
    return pipeline.compile_extra(func)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler.py", line 428, in compile_extra
    return self._compile_bytecode()
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler.py", line 492, in _compile_bytecode
    return self._compile_core()
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler.py", line 471, in _compile_core
    raise e
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler.py", line 462, in _compile_core
    pm.run(self.state)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler_machinery.py", line 343, in run
    raise patched_exception
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler_machinery.py", line 334, in run
    self._runPass(idx, pass_inst, state)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler_lock.py", line 35, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler_machinery.py", line 289, in _runPass
    mutated |= check(pss.run_pass, internal_state)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/compiler_machinery.py", line 262, in check
    mangled = func(compiler_state)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/typed_passes.py", line 398, in run_pass
    lower.create_cpython_wrapper(flags.release_gil)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/lowering.py", line 247, in create_cpython_wrapper
    self.context.create_cpython_wrapper(self.library, self.fndesc,
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/cpu.py", line 183, in create_cpython_wrapper
    builder.build()
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/callwrapper.py", line 123, in build
    self.build_wrapper(api, builder, closure, args, kws)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/callwrapper.py", line 188, in build_wrapper
    obj = api.from_native_return(retty, retval, env_manager)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/pythonapi.py", line 1402, in from_native_return
    out = self.from_native_value(typ, val, env_manager)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/pythonapi.py", line 1416, in from_native_value
    return impl(typ, val, c)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/core/boxing.py", line 399, in box_array
    np_dtype = numpy_support.as_dtype(typ.dtype)
  File "/home/ty/1_server_software/anaconda3/envs/3dface/lib/python3.9/site-packages/numba/np/numpy_support.py", line 156, in as_dtype
    raise NotImplementedError("%r cannot be represented as a Numpy dtype"
NotImplementedError: Failed in nopython mode pipeline (step: native lowering)
list(list(float64)<iv=None>)<iv=None> cannot be represented as a Numpy dtype
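One hedged workaround (an assumption based on the face_alignment API, not author guidance): the failure comes from numba compiling inside face_alignment's default SFD detector post-processing, so selecting a different detector backend where models/hrn.py constructs its lm_sess sidesteps that code path, provided the installed build ships it:

```python
# Hedged sketch: avoid the numba-compiled SFD post-processing by choosing
# another detector backend (assumes 'blazeface' is available in this build).
import face_alignment

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
                                  flip_input=False,
                                  face_detector='blazeface')
```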

training code

Great job on this! When will the training code be released?

Putting texture onto mesh for multiview reconstruction

Thank you for your great work and for publishing it!
For single-view image reconstruction, I could use the texture JPEG in the results and set the texture using MeshLab, but when I try to do the same for the resulting mesh of multi-view reconstruction, I get the following error:

[screenshot of the MeshLab error]
Could you please explain how you put the texture on the resulting face meshes as in the demos?
Thank you a lot!

cmd process takes too long

Hi, thanks for your contribution. I tested the project but got stuck as follows; it takes a very long time. Do you know why?

CUDA_VISIBLE_DEVICES=0 python demo.py --input_type single_view --input_root ./assets/examples/single_view_image --output_root ./assets/examples/single_view_image_results
2023-04-19 21:23:19.692314: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/home/eason/anaconda3/envs/HRN/lib/python3.6/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
----------------- Options ---------------
add_image: True
base_ckpt_path:
bfm_folder: assets/3dmm_assets/BFM
bfm_model: BFM_model_front.mat
camera_d: 10.0
center: 112.0
checkpoints_dir: assets/pretrained_models
dataset_mode: None
ddp_port: 12355
display_per_batch: True
epoch: 10
eval_batch_nums: inf
focal: 1015.0
gpu_ids: 0
img_folder: examples
init_path: checkpoints/init_model/resnet50-0676ba61.pth
isTrain: False [default: None]
model: facerecon
name: hrn_v1.1
net_recog: r50
net_recog_path: ../pretrained_models/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth
net_recon: resnet50
phase: test
rot_angle: 10.0
scale_delta: 0.1
shift_pixs: 10.0
suffix:
use_crop_face: True
use_ddp: False [default: True]
use_last_fc: False
use_predef_M: False
verbose: False
vis_batch_nums: 1
w_adv: 1.0
w_color: 1.92
w_contour: 20.0
w_dis_reg: 10.0
w_exp: 0.8
w_feat: 0.2
w_gamma: 10.0
w_id: 1.0
w_lm: 0.0016
w_reflc: 5.0
w_reg: 0.0003
w_smooth: 5000.0
w_tex: 0.017
world_size: 1
z_far: 15.0
z_near: 5.0
----------------- End -------------------

environment:
Package Version


absl-py 1.4.0
albumentations 1.3.0
astunparse 1.6.3
cachetools 4.2.4
certifi 2021.5.30
charset-normalizer 2.0.12
colorama 0.4.5
cycler 0.11.0
dataclasses 0.8
decorator 4.4.2
einops 0.4.1
face-alignment 1.3.5
future 0.18.3
fvcore 0.1.5.post20210915
gast 0.3.3
google-auth 2.17.3
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
grpcio 1.48.2
h5py 2.10.0
idna 3.4
imageio 2.15.0
importlib-metadata 4.8.3
importlib-resources 5.4.0
iopath 0.1.9
joblib 1.1.1
Keras-Preprocessing 1.1.2
kiwisolver 1.3.1
kornia 0.5.11
llvmlite 0.36.0
Markdown 3.3.7
matplotlib 3.3.4
mkl-fft 1.3.0
mkl-random 1.1.1
mkl-service 2.3.0
networkx 2.5.1
ninja 1.11.1
numba 0.53.1
numpy 1.18.1
nvdiffrast 0.3.0
oauthlib 3.2.2
olefile 0.46
opencv-python 4.5.5.64
opencv-python-headless 4.5.5.64
opt-einsum 3.3.0
packaging 21.3
Pillow 8.4.0
pip 21.2.2
portalocker 2.3.2
protobuf 3.19.6
pyasn1 0.4.8
pyasn1-modules 0.2.8
pyparsing 3.0.9
python-dateutil 2.8.2
pytorch3d 0.6.1
PyWavelets 1.1.1
PyYAML 6.0
qudida 0.0.4
requests 2.27.1
requests-oauthlib 1.3.1
rsa 4.9
scikit-image 0.17.2
scikit-learn 0.24.2
scipy 1.4.1
setuptools 58.0.4
six 1.16.0
tabulate 0.8.10
tensorboard 2.10.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow-estimator 2.3.0
tensorflow-gpu 2.3.0
termcolor 1.1.0
threadpoolctl 3.1.0
tifffile 2020.9.3
torch 1.6.0
torchsummary 1.5.1
torchvision 0.7.0
tqdm 4.64.1
trimesh 3.21.5
typing_extensions 4.1.1
urllib3 1.26.15
Werkzeug 2.0.3
wheel 0.37.1
wrapt 1.15.0
yacs 0.1.8
zipp 3.6.0

How can it be used for face recognition tasks?

Thanks so much for the wonderful work! Here are some questions I would like to ask:

  1. The work achieves good results on the 3D face reconstruction task; is it valid for the face recognition task as well?
  2. How can it be used for face recognition tasks? Is using output['canonical_deformation_map'] sufficient?
  3. If I want to use multiple views to improve face recognition, how should I use them?

img and img_hd

Can you tell me what it means to replace img with img_hd in the source code?

How can the ground-truth map labels be obtained?

Given an initial input image, the corresponding face-scan mesh, and the mesh M_0 produced by the low-frequency part:
after registering the scan mesh with M_0 and aligning with the original image, the paper says the deformation map and displacement map are obtained by "fitting the image and scan using the loss
functions mentioned in Sec. 3.2".
Following that description, I cannot work out how to implement this fitting in code.
I am also puzzled that the two sides of the fitting are the image and the scan; for instance, since the deformation map is meant to guide the deformation of M_0 into M_1, shouldn't the ground-truth deformation map intuitively be obtained by some operation between the scan and M_0?

Bad results when face has bangs

As the title says, if the face has bangs, especially for girls, the high-frequency mesh gives very bad results on the forehead. However, I know you scored very highly on the forehead region in REALY. Why am I seeing this, and do you have any suggestions for solving it?

The following pictures show some examples.

[screenshots of the failure cases]

Request for experimental results

Hi, this is excellent work. Could you release your prediction results on FaceScape-lab at each rotation angle? Thank you.

A problem encountered during use

Thank you for your hard work. After setting up the environment, I get an error during use:
[screenshot of the error]
Could you help me figure it out?

about the rasterizer and dense mesh texture

Congrats, it's great work!
I just spent some time installing PyTorch3D on Windows, which is really painful. From a user-friendliness perspective, FYI, it would be much easier to use nvdiffrast instead of PyTorch3D, since you already use nvdiffrast in this project. Also, if you have time, please consider porting segment_face from TensorFlow to PyTorch (since PyTorch is already used in the project).
These two improvements could help others build applications and further research projects on top of HRN.

BTW, I spent a day reading your code and adding the UV data to the dense mesh; that is also very important for many users.

Cmd command takes 25 sec while modelscope takes 1 sec

Hi, I'm new to Python. The command below takes 25 seconds to produce results: 100% 1/1 [00:26<00:00, 26.75s/it]
CUDA_VISIBLE_DEVICES=0 python demo.py --input_type single_view --input_root ./assets/examples/single_view_image --output_root ./assets/examples/single_view_image_results

But with the ModelScope pipeline you provide in the Google Colab linked on the homepage, it takes 1 second to produce results.

face_reconstruction = pipeline(Tasks.face_reconstruction, model='damo/cv_resnet50_face-reconstruction', model_revision='v2.0.0-HRN')
result = face_reconstruction('first_frame.jpg')

Neither case was the first run.

  1. Why does the first method take so long to run?
  2. The first method also accepts multiple_view, while the second method accepts only one image. Is there any way to give it multiple images?
  3. The first method creates both the mid- and high-frequency meshes, while the second method creates only the mid-frequency mesh. Is there a way to generate the high-frequency mesh with the second method? Any code changes?
  4. My nose is crooked. HRN does not correctly represent highly curved features like my nose from the front view. I know the faces in the training dataset were well-proportioned, but wouldn't it also need to adequately represent people with facial paralysis?
  5. Item 4 concerned photographs taken from the front. In side view, if the nose is arched, the representation does not fit properly. [screenshot]

Thanks.

What is the purpose of uv_rasterizer?

Hi,
I have looked into the source code but am confused by the uv_rasterizer. The input mesh to uv_rasterizer is actually the original UV map, and the original UV map is rasterized into another matrix. What is the purpose of this operation?
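My reading (not the author's explanation): rasterizing the mesh in UV space, i.e. treating each vertex's UV coordinate as its screen position, yields for every texel the covering triangle and its barycentrics, which is the standard trick for baking per-vertex attributes into UV-space maps. A conceptual sketch with nvdiffrast:

```python
# Conceptual sketch: rasterize in UV space to get a texel -> surface lookup.
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeCudaContext()

def rasterize_in_uv_space(uvs, faces, resolution=256):
    """uvs: (V, 2) float32 in [0, 1] on the GPU; faces: (F, 3) int32."""
    pos = torch.cat([uvs * 2.0 - 1.0,                 # x, y in clip space
                     torch.zeros_like(uvs[:, :1]),    # z = 0
                     torch.ones_like(uvs[:, :1])],    # w = 1
                    dim=1)[None]                      # (1, V, 4)
    rast, _ = dr.rasterize(glctx, pos, faces, (resolution, resolution))
    return rast  # (1, H, W, 4): barycentrics, depth, triangle id
```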

About BFM to FLAME

The supplementary material of the paper says: "Given a face model from BFM database, we firstly use a template FLAME model and apply the flame-fitting algorithm to fit the face model." For the face model obtained by HRN, which landmarks are used in the flame-fitting?

CUDA 11 support?

/home/yc/anaconda3/envs/HRN/lib/python3.8/site-packages/h5py/__init__.py:46: DeprecationWarning: `np.typeDict` is a deprecated alias for `np.sctypeDict`.
  from ._conv import register_converters as _register_converters
Traceback (most recent call last):
  File "demo.py", line 78, in <module>
    run_hrn(args)
  File "demo.py", line 15, in run_hrn
    reconstructor = Reconstructor(params)
  File "/home/yc/testing3dai/HRN/HRN/models/hrn.py", line 25, in __init__
    opt = TestOptions().parse(params)
  File "/home/yc/testing3dai/HRN/HRN/options/base_options.py", line 119, in parse
    opt = self.gather_options(params)
  File "/home/yc/testing3dai/HRN/HRN/options/base_options.py", line 68, in gather_options
    model_option_setter = models.get_option_setter(model_name)
  File "/home/yc/testing3dai/HRN/HRN/models/__init__.py", line 50, in get_option_setter
    model_class = find_model_using_name(model_name)
  File "/home/yc/testing3dai/HRN/HRN/models/__init__.py", line 33, in find_model_using_name
    modellib = importlib.import_module(model_filename)
  File "/home/yc/anaconda3/envs/HRN/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/yc/testing3dai/HRN/HRN/models/facerecon_model.py", line 7, in <module>
    from .losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss, TVLoss, TVLoss_std, contour_aware_loss
  File "/home/yc/testing3dai/HRN/HRN/models/losses.py", line 6, in <module>
    from pytorch3d.ops import (
  File "/home/yc/anaconda3/envs/HRN/lib/python3.8/site-packages/pytorch3d/ops/__init__.py", line 5, in <module>
    from .graph_conv import GraphConv
  File "/home/yc/anaconda3/envs/HRN/lib/python3.8/site-packages/pytorch3d/ops/graph_conv.py", line 8, in <module>
    from pytorch3d import _C
ImportError: libc10_cuda.so: cannot open shared object file: No such file or directory


training code

Hello, thank you for the really nice work. Do you have plans to release the training code?

colab

Can you create a Colab to test this, please?

Is there a way to export the 3D reconstructed face after applying the deformation and displacement maps?

Thank you for sharing this beautiful project.

The current exported 3D mesh I get when running the single-view task is hrn_mid_mesh.obj, which does not include all the details seen in the final 2D output.

By backtracing through https://github.com/youngLBW/HRN/blob/main/models/facerecon_model.py#L823
and
https://github.com/youngLBW/HRN/blob/main/models/facerecon_model.py#L364

I am trying to export the 3D face after applying the deformation and displacement maps, but I am not sure it is possible.
It seems that everything is applied frame by frame in 2D space inside render_uv_texture (https://github.com/youngLBW/HRN/blob/main/util/nv_diffrast.py#L150); am I right?

If there is a way to export the final deformation and displacement in 3D space, could you share it?

Thanks
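Purely for illustration, a speculative vertex-space approximation (not the repo's pipeline, which as noted above works per frame in 2D UV space inside render_uv_texture): sample the maps at each vertex's UV coordinate and offset the mid-frequency vertices along their normals. This will not exactly reproduce the 2D output:

```python
# Speculative sketch: bake UV-space maps back onto vertices.
# Assumes verts (V, 3), normals (V, 3), uvs (V, 2) in [0, 1] as numpy arrays,
# plus the map arrays quoted elsewhere in this tracker; all names hypothetical.
import numpy as np

def sample_map_at_uvs(tex, uvs):
    """Nearest-neighbour lookup of tex (H, W, C) at per-vertex UVs."""
    h, w = tex.shape[:2]
    x = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    # flip v, assuming the map is stored with the image origin at the top left
    y = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return tex[y, x]

disp = sample_map_at_uvs(displacement_map, uvs)               # (V, C)
verts_hi = verts + normals * disp.mean(axis=-1, keepdims=True)
```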

When I run multi-view I get an error, but single view works fine

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 35709]],
which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed
to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Is there any conflict in the provided environment?

Requirements
This implementation is only tested under Ubuntu/CentOS environment with Nvidia GPUs and CUDA installed.

Python >= 3.8

PyTorch >= 1.6

requirements.txt:
numpy==1.18.1
torch==1.6.0
torchvision==0.7.0
tensorflow-gpu==2.3.0
opencv-python==4.5.5.64
opencv-python-headless==4.5.5.64
protobuf==3.20.1
tqdm
kornia
pillow
scipy
tensorboard
scikit-image
albumentations
torchsummary
numba
einops
trimesh
face-alignment
ninja
imageio

Is there any conflict?

Could not find "01_MorphableModel.mat" in the Google Drive link

Hey,

I got an error saying "No such file or directory: 'assets/3dmm_assets/BFM/01_MorphableModel.mat' "

I have seen the Google Drive link for assets/BFM, but it does not contain 01_MorphableModel.mat.
The folder assets/BFM only has the following two files:

  1. similarity_Lm3D_all.mat
  2. BFM_model_front.mat

Please help by identifying the issue at your earliest convenience.
