lakonik / ssdnerf
[ICCV 2023] Single-Stage Diffusion NeRF
Home Page: https://lakonik.github.io/ssdnerf/
License: MIT License
I am very interested in your work; thanks for releasing the code. Your paper shows many extracted-mesh results, but I cannot figure out how to obtain those meshes using your code.
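A common recipe for this (a hedged sketch of the general technique, not necessarily the authors' pipeline) is to query the decoder's density on a dense 3D grid and run marching cubes on the result. `query_density` below is a hypothetical stand-in for evaluating the NeRF decoder at a batch of points:

```python
import numpy as np

def density_grid(query_density, res=64, bound=1.0):
    # Sample a res^3 grid of points inside [-bound, bound]^3 and evaluate
    # the density at each point. `query_density` maps (N, 3) -> (N,).
    xs = np.linspace(-bound, bound, res)
    pts = np.stack(np.meshgrid(xs, xs, xs, indexing='ij'), -1).reshape(-1, 3)
    return query_density(pts).reshape(res, res, res)

# Toy density (a Gaussian blob) standing in for the trained decoder.
grid = density_grid(lambda p: np.exp(-np.square(p).sum(-1)), res=32)
# A mesh can then be extracted with marching cubes at a chosen threshold,
# e.g. mcubes.marching_cubes(grid, thresh) with the PyMCubes package.
```

The density threshold and grid resolution are the main knobs: higher resolution gives smoother meshes at cubic cost.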
Thank you for the awesome library!
I encountered the following error while executing the inference code
python test.py ./configs/paper_cfgs/ssdnerf_cars_recons1v.py ./checkpoints/ssdnerf_cars_recons1v_80k_emaonly.pth --gpu-ids 0
2023-12-24 16:08:56,824 - mmgen - INFO - Try to load Tero's Inception Model from 'work_dirs/cache/inception-2015-12-05.pt'.
2023-12-24 16:08:56,824 - mmgen - INFO - Load Tero's Inception Model failed. 'The provided filename work_dirs/cache/inception-2015-12-05.pt does not exist' occurs.
2023-12-24 16:08:56,824 - mmgen - INFO - Try to download Inception Model from work_dirs/cache/inception-2015-12-05.pt...
Download Failed. Invalid URL 'work_dirs/cache/inception-2015-12-05.pt': No scheme supplied. Perhaps you meant https://work_dirs/cache/inception-2015-12-05.pt? occurs.
Traceback (most recent call last):
File "/root/anaconda3/envs/ssdnerf/lib/python3.8/site-packages/mmcv/utils/registry.py", line 69, in build_from_cfg
return obj_cls(**args)
File "/data/lgh/mmgeneration/mmgen/core/evaluation/metrics.py", line 494, in __init__
self.inception_net, self.inception_style = load_inception(
File "/data/lgh/mmgeneration/mmgen/core/evaluation/metrics.py", line 85, in load_inception
raise RuntimeError('Cannot Load Inception Model, please check the input '
RuntimeError: Cannot Load Inception Model, please check the input `inception_args`: {'type': 'StyleGAN', 'inception_path': 'work_dirs/cache/inception-2015-12-05.pt'}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 75, in <module>
main()
File "test.py", line 58, in main
tools.test.main()
File "/data/lgh/SSDNeRF/tools/test.py", line 200, in main
metrics = [build_metric(metric) for metric in metrics]
File "/data/lgh/SSDNeRF/tools/test.py", line 200, in <listcomp>
metrics = [build_metric(metric) for metric in metrics]
File "/data/lgh/mmgeneration/mmgen/core/registry.py", line 30, in build_metric
return build(cfg, METRICS)
File "/data/lgh/mmgeneration/mmgen/core/registry.py", line 25, in build
return build_from_cfg(cfg, registry, default_args)
File "/root/anaconda3/envs/ssdnerf/lib/python3.8/site-packages/mmcv/utils/registry.py", line 72, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
RuntimeError: FID: Cannot Load Inception Model, please check the input `inception_args`: {'type': 'StyleGAN', 'inception_path': 'work_dirs/cache/inception-2015-12-05.pt'}
Where do I need to get this file work_dirs/cache/inception-2015-12-05.pt? Is it generated during training?
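For reference, the error message shows that mmgen's FID metric looks for the checkpoint at the configured `inception_path` before falling back to a download. A minimal sketch of the relevant config fragment (only the keys that appear in the error message are shown; other FID fields are omitted):

```python
# Config fragment (sketch): FID resolves inception_path locally first, so
# placing the .pt file at this path avoids the failed download attempt.
metrics = dict(
    fid=dict(
        type='FID',
        inception_args=dict(
            type='StyleGAN',
            inception_path='work_dirs/cache/inception-2015-12-05.pt')))
```

The file itself is a pretrained Inception network used only for evaluation; it is not produced by training.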
Thanks a lot!
Hi! Thanks for releasing the code of your wonderful work, including both training and inference. I want to train SSDNeRF on my own dataset and have prepared it in the format of the ShapeNet SRN dataset. However, after two days of training, the results are still poor. Here is a visualization of one tri-plane representation.
As can be seen, there is a lot of noise in the feature planes, which is also reflected in the final geometry reconstruction.
And the test rendering result is also not good.
I used the config file ssdnerf_cars_uncond for training. What should I pay attention to during the training process? Could you give some suggestions based on the current training results? Thank you very much!
FileNotFoundError: GenerativeEvalHook3D: [Errno 2] No such file or directory: 'work_dirs/cache/chairs_test_inception_stylegan.pkl'
Hello Hansheng,
thank you very much for this clean codebase, great work!
If I am not mistaken, the denoising UNet is the typical DDPM architecture but expecting concatenated triplanes instead of images.
Geometrically, this concatenation and the resulting kernel sharing within the convolutional layers are not intuitive, in my opinion.
Do you see what I mean or should I elaborate on this?
In the code, I have seen that you have overridden all the mmgen modules (MultiHeadAttention, DenoisingResBlock, etc.) to turn them into grouped operations. It seems you have also tried denoising the planes individually.
If this is the case, I am very curious about the results, how they compare with denoising the triplanes jointly, and your interpretation of them :)
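For concreteness, the difference between denoising the concatenated triplanes jointly and per-plane can be sketched with grouped convolutions (a toy sketch, not the repo's actual modules):

```python
import torch
import torch.nn as nn

# Three triplanes with 6 channels each, concatenated along the channel axis.
planes = torch.randn(1, 3 * 6, 64, 64)

joint = nn.Conv2d(3 * 6, 3 * 6, 3, padding=1)              # kernels mix all three planes
grouped = nn.Conv2d(3 * 6, 3 * 6, 3, padding=1, groups=3)  # each plane processed independently

out_joint, out_grouped = joint(planes), grouped(planes)

# With groups=3, perturbing plane 0 leaves the outputs for planes 1 and 2 unchanged.
perturbed = planes.clone()
perturbed[:, :6] += 1.0
delta = (grouped(perturbed) - out_grouped).abs()
assert delta[:, 6:].max() == 0
```

With groups=1 the same perturbation would change all output channels, since every kernel sees all three planes.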
Again, thanks for your efforts.
Best regards
Chris
Hello, Thanks for sharing great work!
MMGeneration does not seem to have been updated recently, and many diffusion repositories now use the diffusers library.
My questions are:
Thanks!
A very impressive work. I saw in the explanation that there are two stages, training and testing, and that the provided checkpoint files each carry a category name such as "car" or "chair". Does the method in your paper train a separate model for each object category in order to generate new instances of it? I think this is a good idea, and I would also like to ask whether it is possible to use the training code to train a model on an object of my choosing, such as a robot. Thank you.
Hi! Thank you for sharing great work!
I have a question about this function.
https://github.com/Lakonik/SSDNeRF/blob/3e50d1d9287d92ae40b5831fd6933ac64e125577/lib/models/autodecoders/base_nerf.py#L401C16-L491
What is this function training for?
As I understand it, there are three components in SSDNeRF:
Diffusion U-Net
triplane (code)
MLP decoder
and this function is expected to train the triplane code and the MLP decoder.
First of all, this function doesn't seem to train the MLP decoder, because of SSDNeRF/lib/models/autodecoders/base_nerf.py, line 412 (at 3e50d1d).
In this function, the rendering loss is calculated and the triplane code is updated, but the gradient of the rendering loss is overwritten due to the prior gradient caching you mentioned in the paper (SSDNeRF/lib/models/autodecoders/base_nerf.py, lines 456 to 467 at 3e50d1d).
If this part (SSDNeRF/lib/models/autodecoders/base_nerf.py, line 412 at 3e50d1d)
Am I misunderstanding something about prior gradient caching?
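My reading of prior gradient caching, as a toy sketch (the real code interleaves a diffusion-prior loss with rendering losses; `prior_grad` below is a hypothetical stand-in for the expensive prior gradient):

```python
import torch

# Toy illustration of prior gradient caching: the expensive prior gradient is
# computed once, cached, and re-added at every inner rendering step, instead
# of being recomputed each step.
code = torch.zeros(4, requires_grad=True)
target = torch.ones(4)
opt = torch.optim.SGD([code], lr=0.1)

def prior_grad(c):
    # hypothetical stand-in for the diffusion-prior gradient w.r.t. the code
    return 0.01 * c.detach()

cached = prior_grad(code)            # computed once, reused across inner steps
for step in range(8):
    opt.zero_grad()
    rend_loss = (code - target).pow(2).mean()
    rend_loss.backward()             # rendering-loss gradient
    code.grad.add_(cached)           # re-add the cached prior gradient
    opt.step()
    if step % 4 == 3:
        cached = prior_grad(code)    # occasionally refresh the cache
```

Under this reading, the rendering gradient is not discarded; it is combined with a stale but cheap copy of the prior gradient at each inner step.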
I want to report the following error when using the GUI to display results.
python demo/ssdnerf_gui.py configs/paper_cfgs/ssdnerf_cars_uncond.py work_dirs/ssdnerf_cars_uncond/save/0000.pth --fp16
2023-08-14 17:52:19,420 - mmgen - INFO - Apply 'timestep_weight' rescale_mode for loss_ddpm_mse. Please make sure the passed weight can be updated by external functions.
load checkpoint from local path: work_dirs/ssdnerf_cars_uncond/save/0000.pth
The model and loaded state dict do not match exactly
unexpected key in source state_dict: scene_name, param
In addition, I would also like to know whether I can export a textured model; the surface of the exported STL file is a bit rough. Is there a way to improve the accuracy of the mesh?
Hi,
I have a naive question about Tab. 1 in the paper. Does A±B represent mean±std, i.e., did you run the experiments multiple times and report the mean and standard deviation?
Best,
Hello, would it be possible to provide us with an official Dockerfile?
Why is multi-GPU training with your code (2x3090) only as fast as single-GPU training?
Thank you so much for an awesome code library!
I am trying to train a neural network to predict triplane codes from a reference image view of an object. I am using your triplane NeRF library for rendering, and it works pretty well, but I am seeing some odd pixelation and artifacts even after training to convergence. Below is a very brief code description of the optimization procedure I follow during training. The parameters of decoder and predictor_net are optimized. Am I doing anything wrong here? I've included a visualization of the predicted (rendered) image vs. the target image at the bottom of this message.
I noticed that the output density_bitfield from nerf.get_density does not have grad. Don't we need gradients to flow through the density MLP in order to facilitate proper training? Is there a way to do this with grad?
from lib.models.autodecoders.base_nerf import BaseNeRF
from lib.models.decoders.triplane_decoder import TriPlaneDecoder
from lib.core.utils.nerf_utils import get_cam_rays

decoder = TriPlaneDecoder(
    base_layers=[3 * 6, 64],
    density_layers=[64, 1],
    color_layers=[64, 3],
    dir_layers=[16, 64],
)
nerf = BaseNeRF(code_size=(3, 6, 64, 64), grid_size=64)

def render(code, density_bitfield, h, w, intrinsics, poses):
    rays_o, rays_d = get_cam_rays(poses, intrinsics, h=h, w=w)
    batch_size, height, width, channels = rays_o.shape
    rays_o = rays_o.view(batch_size, height * width, channels)
    rays_d = rays_d.view(batch_size, height * width, channels)
    outputs = decoder(rays_o, rays_d, code, density_bitfield, nerf.grid_size)
    image = outputs['image'] + nerf.bg_color * (1 - outputs['weights_sum'].unsqueeze(-1))
    return image.reshape(batch_size, h, w, 3)

for _ in range(iterations):
    # reference_img is size 128 x 128
    triplane_code = predictor_net(reference_img, reference_intrinsics, reference_poses)
    _, density_bitfield = nerf.get_density(
        decoder, triplane_code, cfg=dict(density_thresh=0.1, density_step=16))
    pred_img = render(
        triplane_code, density_bitfield, h=128, w=128,
        intrinsics=target_intrinsics, poses=target_poses)
    loss = (pred_img - target_img).pow(2).mean()
    loss.backward()
    # optimizer.step() ... etc.
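On the density_bitfield question: an occupancy bitfield is produced by thresholding densities, which is non-differentiable by construction; in a typical NeRF pipeline it only culls empty space, while gradients flow through the densities the decoder outputs along the sampled rays. A minimal illustration of why thresholding cannot carry gradient:

```python
import torch

sigma = torch.rand(8, requires_grad=True)   # densities from the density MLP
occupied = sigma.detach() > 0.1             # boolean occupancy mask: no gradient, by design
loss = (sigma * occupied).sum()             # gradient still flows through sigma itself
loss.backward()
```

So the bitfield having no grad is expected; training signal reaches the density MLP through the rendered colors and weights, not through the mask.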
Hi, I wonder whether it's possible to apply your work to indoor scene reconstruction. I plan to train the code on the ScanNet dataset, which provides images of indoor scenes. After training on many scenes from the dataset, at test time I want to obtain the 3D mesh of an unseen room from 4 images taken from its 4 corners (not from the dataset).
I mainly consider two issues.
First of all, thank you so much for sharing your great work!
I have a question about the types of data needed for single-view reconstruction.
In the paper, it appears that 3D can be reconstructed with just a single view image.
However, in the code, poses and intrinsics are also needed, and the chairs_test_cache.pkl file seems to be required for chair reconstruction. Are poses and intrinsics also necessary for the single-view reconstruction test? And what does the pickle file do?
Lastly, I would really appreciate it if you could tell me what steps must be taken to test single view reconstruction from an arbitrary real (chair) image.
Thanks
@Lakonik Thank you for your great SSDNeRF work.
And I have one point of confusion about your work: why is the iteration count 1M for unconditional generation training, which is significantly larger than the 80K used for reconstruction training?
Thank you for your great work! It reports an error: 'No such file: 'abotables_inception_stylegan.pkl''. How can I obtain 'abotables_inception_stylegan.pkl'?
Hi,
Thanks for sharing your great work! I would like to know when you will release the code. I can't wait to try it.
Best,
Thanks for open-sourcing the code.
I'm running a Docker container with CUDA 11.7 and Ubuntu 22.04 on a machine with four RTX 3090 24 GB cards.
When I followed the environment setup instructions, I ran into some problems:
AssertionError: MMCV==1.6.0 is used but incompatible. Please install mmcv>=1.3.16, <=1.5.0.
ModuleNotFoundError: No module named 'mmgen.models.architectures.stylegan.ada'
Hello. Can you release the new checkpoints for new configs?
Thank you for sharing the code. The work is very interesting.
Have you tried training an image-conditioned diffusion model for sparse-view reconstruction?
As I understand the paper, SSDNeRF uses unconditional diffusion model + guidance, instead of a conditional diffusion model.
I found concatenation conditioning in the code, but it seems to be unused.
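As a toy sketch of guidance with an unconditional model (this is not the repo's actual update rule; the identity `denoise` stands in for the diffusion model, and `guidance_loss` for the rendering loss on the observed views):

```python
import torch

def guided_step(x_t, denoise, guidance_loss, weight=0.25):
    # One guided update: denoise the current code, then nudge it against
    # the gradient of the guidance loss evaluated at the denoised estimate.
    x_t = x_t.detach().requires_grad_(True)
    x0 = denoise(x_t)
    grad = torch.autograd.grad(guidance_loss(x0), x_t)[0]
    return (x0 - weight * grad).detach()

target = torch.ones(4)
x = torch.zeros(4)
for _ in range(10):
    x = guided_step(x, denoise=lambda z: z,
                    guidance_loss=lambda x0: (x0 - target).pow(2).sum())
```

The point is that an unconditional model plus a per-step guidance gradient can steer samples toward observations without any conditioning input.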
Hi, thanks for your work.
When I try to run "python train.py configs/supp_cfgs/ssdnerf_cars_reconskitti.py",
it fails with "[Errno 2] No such file or directory: 'data/shapenet/cars_test/1a3782ae4bd711b66b418c7d9fedcaa9/rgb'".
After checking the config file, it seems I am missing the "test_pose_override", "cars_train" and "cars_test" files (I only have cars_kitti).
How can I get these files?
Looking forward to your reply, thanks.
I have read your paper on SSDNeRF and found it very interesting. I have a few questions about the implementation details that I could not find in the main text or supplementary materials. I would greatly appreciate it if you could provide some clarification.
Could you please explain how the denoising UNet is designed in your approach? Is it a traditional 2D UNet architecture?
How do you aggregate the information from the three planes in the triplane during your approach, while ensuring that the network is 3D-aware?
Could you provide more information on how the NeRF decoder is designed and implemented? If possible, sharing some relevant code snippets would be very helpful.
Thank you for your time and effort in addressing these questions. I am looking forward to learning more about your work and its underlying techniques.
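Regarding triplane aggregation, a common pattern (a sketch of the general technique, not necessarily SSDNeRF's exact decoder) projects each 3D point onto the three axis-aligned planes, bilinearly samples a feature from each, and aggregates them before a small MLP predicts density and color:

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, xyz):
    """planes: (3, C, H, W); xyz: (N, 3) in [-1, 1] -> features (N, C)."""
    coords = torch.stack([xyz[:, [0, 1]],    # projection onto the xy plane
                          xyz[:, [0, 2]],    # projection onto the xz plane
                          xyz[:, [1, 2]]])   # projection onto the yz plane -> (3, N, 2)
    feats = F.grid_sample(planes, coords.unsqueeze(2), align_corners=False)  # (3, C, N, 1)
    return feats.squeeze(-1).sum(0).t()      # aggregate by summation (an assumption)

feats = sample_triplane(torch.randn(3, 6, 64, 64), torch.rand(100, 3) * 2 - 1)
```

The sampled feature then feeds the MLP, which is what makes the otherwise 2D representation 3D-aware: every 3D point gets a unique combination of three plane features.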
Apologies if I missed it in your paper. But could you estimate the training time / GPUs required for your model?
Why am I getting an error 'data/abo/tables_test/B072ZMSBQT_2/pose/.ipynb_checkpoints.txt not found' when I run 'python tools/inception_stat.py configs/paper_cfgs/ssdnerf_abotables_uncond.py'?
Hello, may I ask how long it takes to reconstruct a textured mesh from a picture?
Thank you very much for your work. I now want to use the model (ssdnerf_chairs_recons1v_80k_emaonly.pth) you provided to reconstruct my own data. I reconstructed a mesh from the chair image provided below, but it has some discrepancies with reality.
Therefore, I would like to test with two or four images. I have a few questions:
2023-11-08 13:30:22,626 - mmgen - INFO - evaluation
2023-11-08 13:30:22,626 - mmgen - INFO - Set random seed to 2021, deterministic: False, use_rank_shift: False
2023-11-08 13:30:23,048 - mmgen - INFO - Apply 'timestep_weight' rescale_mode for loss_ddpm_mse. Please make sure the passed weight can be updated by external functions.
load checkpoint from local path: work_dirs/cache/ssdnerf_chairs_recons1v_80k_emaonly.pth
The model and loaded state dict do not match exactly
missing keys in source state_dict: decoder.aabb, decoder.base_net.0.weight, decoder.base_net.0.bias, decoder.density_net.0.weight, decoder.density_net.0.bias, decoder.dir_net.0.weight, decoder.dir_net.0.bias, decoder.color_net.0.weight, decoder.color_net.0.bias, followed by the full set of diffusion.denoising.* weights and biases (time_embedding, in_blocks, mid_blocks and out_blocks; list truncated)
diffusion.denoising.out_blocks.12.0.conv_1.2.weight, diffusion.denoising.out_blocks.12.0.conv_1.2.bias, diffusion.denoising.out_blocks.12.0.norm_with_embedding.norm.weight, diffusion.denoising.out_blocks.12.0.norm_with_embedding.norm.bias, diffusion.denoising.out_blocks.12.0.norm_with_embedding.embedding_layer.1.weight, diffusion.denoising.out_blocks.12.0.norm_with_embedding.embedding_layer.1.bias, diffusion.denoising.out_blocks.12.0.conv_2.2.weight, diffusion.denoising.out_blocks.12.0.conv_2.2.bias, diffusion.denoising.out_blocks.12.0.shortcut.weight, diffusion.denoising.out_blocks.12.0.shortcut.bias, diffusion.denoising.out_blocks.13.0.conv_1.0.weight, diffusion.denoising.out_blocks.13.0.conv_1.0.bias, diffusion.denoising.out_blocks.13.0.conv_1.2.weight, diffusion.denoising.out_blocks.13.0.conv_1.2.bias, diffusion.denoising.out_blocks.13.0.norm_with_embedding.norm.weight, diffusion.denoising.out_blocks.13.0.norm_with_embedding.norm.bias, diffusion.denoising.out_blocks.13.0.norm_with_embedding.embedding_layer.1.weight, diffusion.denoising.out_blocks.13.0.norm_with_embedding.embedding_layer.1.bias, diffusion.denoising.out_blocks.13.0.conv_2.2.weight, diffusion.denoising.out_blocks.13.0.conv_2.2.bias, diffusion.denoising.out_blocks.13.0.shortcut.weight, diffusion.denoising.out_blocks.13.0.shortcut.bias, diffusion.denoising.out_blocks.14.0.conv_1.0.weight, diffusion.denoising.out_blocks.14.0.conv_1.0.bias, diffusion.denoising.out_blocks.14.0.conv_1.2.weight, diffusion.denoising.out_blocks.14.0.conv_1.2.bias, diffusion.denoising.out_blocks.14.0.norm_with_embedding.norm.weight, diffusion.denoising.out_blocks.14.0.norm_with_embedding.norm.bias, diffusion.denoising.out_blocks.14.0.norm_with_embedding.embedding_layer.1.weight, diffusion.denoising.out_blocks.14.0.norm_with_embedding.embedding_layer.1.bias, diffusion.denoising.out_blocks.14.0.conv_2.2.weight, diffusion.denoising.out_blocks.14.0.conv_2.2.bias, diffusion.denoising.out_blocks.14.0.shortcut.weight, 
diffusion.denoising.out_blocks.14.0.shortcut.bias, diffusion.denoising.out.conv.weight, diffusion.denoising.out.conv.bias, diffusion.denoising.out.gn.weight, diffusion.denoising.out.gn.bias, diffusion.ddpm_loss.norm_factor
2023-11-08 13:30:24,918 - mmgen - INFO - Try to load Tero's Inception Model from 'work_dirs/cache/inception-2015-12-05.pt'.
2023-11-08 13:30:25,052 - mmgen - INFO - Load Tero's Inception Model successfully.
2023-11-08 13:30:25,090 - mmgen - INFO - FID: Adopt Inception in StyleGAN style
2023-11-08 13:30:25,145 - mmgen - INFO - Load reference inception pkl from work_dirs/cache/chairs_test_inception_stylegan.pkl
2023-11-08 13:30:25,201 - mmgen - INFO - Sample 8 fake scenes for evaluation
However, considering that the "chairs" test dataset still yields reasonable results, I believe this may not have much impact; I'm not sure whether my understanding is correct.
If my data is not suitable, please let me know; after all, it contains much more noise than the standard datasets. Looking forward to your response.
Thank you for the awesome library!
I am wondering: is it possible to use your CUDA extensions (e.g. raymarching and shencoder) with CUDA 12? I have been using your extensions in my project and am trying to accelerate my compute with CUDA 12. Do you have any advice or suggestions? Are there any extra flags/args that need to be added in setup.py, or would this just work out of the box?
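In case it helps frame the question: when an extension fails to build or load under a newer toolkit, a common fix is to pass explicit architecture flags to nvcc. Below is a hypothetical sketch of such a setup.py; the extension name, source files, and flag choices are placeholders, not the actual ones from this repo.

```python
# Hypothetical sketch: explicit nvcc arch flags when building a CUDA extension
# under CUDA 12. Names and source paths are illustrative placeholders.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

nvcc_flags = [
    '-O3', '-std=c++17',
    # Generate code for the GPUs you target; sm_89/sm_90 need CUDA 11.8+/12.x.
    '-gencode=arch=compute_80,code=sm_80',
    '-gencode=arch=compute_89,code=sm_89',
]

setup(
    name='raymarching',
    ext_modules=[
        CUDAExtension(
            name='_raymarching',
            sources=['src/raymarching.cu', 'src/bindings.cpp'],
            extra_compile_args={'cxx': ['-O3'], 'nvcc': nvcc_flags},
        )
    ],
    cmdclass={'build_ext': BuildExtension},
)
```

Alternatively, setting `TORCH_CUDA_ARCH_LIST` in the environment before building achieves a similar effect without editing setup.py.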
Very impressive work. I tested some cars, and the generated OBJ files capture the outline of the car, but the surface shows ridged artifacts resembling contour lines. I noticed that the training images are 128x128, which is quite small; such a low resolution may not capture enough detail of the car. Would increasing the training image resolution improve the precision of the model? Thank you!
Nice work! I'm trying to train on a custom face dataset (FaceScape) using the one-view reconstruction setting. Since the cameras in this dataset are already in OpenCV format, no conversion is needed. I have tuned the radius parameter to reflect the change in object size. However, the test PSNR never improves, although the training PSNR does.
The rendered faces also do not seem to be correctly localized. However, I double-checked the camera parameters and made sure they follow the same convention as the SRN car dataset, and that the loaded poses are c2w matrices. The only difference is that the intrinsics differ per camera, but I've modified the dataloader accordingly.
I'm a bit lost as to what could be wrong. Do you make other assumptions about the camera parameters? Are there any other parameters I should tune?
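For what it's worth, here is the sanity check I ran on my poses; it is only a sketch, assuming an object centered at the origin with the camera looking at it (the SRN-style setup), OpenCV convention (+z is the viewing direction), and a known camera radius:

```python
import numpy as np

def check_c2w(c2w, expected_radius, tol=0.05):
    """Sanity-check a 4x4 OpenCV-convention camera-to-world pose.

    Assumes the object is centered at the origin and the camera looks at it.
    `expected_radius` is the intended camera-to-origin distance.
    """
    R, t = c2w[:3, :3], c2w[:3, 3]
    # Rotation must be orthonormal with determinant +1.
    assert np.allclose(R @ R.T, np.eye(3), atol=1e-4), 'R is not orthonormal'
    assert np.isclose(np.linalg.det(R), 1.0, atol=1e-4), 'det(R) != +1'
    # Camera center should sit near the expected radius.
    assert abs(np.linalg.norm(t) - expected_radius) < tol * expected_radius, \
        'camera is not at the expected distance from the origin'
    # In OpenCV convention +z is the viewing direction, so the camera's
    # z-axis should point from the camera toward the origin.
    view_dir = R[:, 2]
    to_origin = -t / np.linalg.norm(t)
    assert view_dir @ to_origin > 0.99, 'camera does not look at the origin'

# Example: a camera 1.3 units away on the +x axis, looking at the origin.
z = np.array([-1.0, 0.0, 0.0])      # viewing direction (toward origin)
x = np.array([0.0, -1.0, 0.0])      # right vector, orthogonal to z
y = np.cross(z, x)                  # completes a right-handed frame
c2w = np.eye(4)
c2w[:3, :3] = np.stack([x, y, z], axis=1)
c2w[:3, 3] = [1.3, 0.0, 0.0]
check_c2w(c2w, expected_radius=1.3)
```

My poses pass these checks, so I suspect the issue is elsewhere (e.g. the per-camera intrinsics handling).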
Hi, when I train the model with the stage1_***.py configs, it reports:
Traceback (most recent call last):
  File "train.py", line 81, in <module>
    main()
  File "train.py", line 63, in main
    tools.train.main()
  File "/mnt/workspace/SSDNeRF/tools/train.py", line 228, in main
    train_model(
  File "/mnt/workspace/SSDNeRF/lib/apis/train.py", line 199, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_iters)
  File "/mnt/workspace/mmgeneration/mmgen/core/runners/dynamic_iterbased_runner.py", line 285, in run
    iter_runner(iter_loaders[i], **kwargs)
  File "/mnt/workspace/mmgeneration/mmgen/core/runners/dynamic_iterbased_runner.py", line 236, in train
    self.call_hook('after_train_iter')
  File "/mnt/workspace/mmgeneration/mmgen/core/runners/dynamic_iterbased_runner.py", line 181, in call_hook
    getattr(hook, fn_name)(self)
  File "/mnt/workspace/miniconda3/envs/geom/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/workspace/SSDNeRF/lib/core/evaluation/eval_hooks.py", line 38, in after_train_iter
    log_vars = evaluate_3d(
  File "/mnt/workspace/SSDNeRF/lib/apis/test.py", line 31, in evaluate_3d
    outputs_dict = model.val_step(
  File "/mnt/workspace/miniconda3/envs/geom/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 99, in val_step
    return self.module.val_step(*inputs[0], **kwargs[0])
  File "/mnt/workspace/SSDNeRF/lib/models/autodecoders/base_nerf.py", line 642, in val_step
    cond_imgs = data['cond_imgs']  # (num_scenes, num_imgs, h, w, 3)
KeyError: 'cond_imgs'
I think stage-1 training does not need cond_imgs, but val_step unconditionally loads cond_imgs from data.
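A possible workaround, sketched below, is to make the validation step tolerate batches without conditioning images instead of indexing data['cond_imgs'] directly. This is a simplified stand-in for val_step in base_nerf.py, not the actual implementation:

```python
# Hypothetical sketch: tolerate missing conditioning images in validation.
# The function and the returned dict are simplified placeholders.

def val_step(data):
    cond_imgs = data.get('cond_imgs')  # (num_scenes, num_imgs, h, w, 3) or None
    if cond_imgs is None:
        # Stage-1 (auto-decoder) evaluation has no conditioning views;
        # skip the branch that needs them.
        return {'mode': 'unconditional'}
    return {'mode': 'conditional', 'num_imgs': len(cond_imgs[0])}

# Stage-1 style batch without cond_imgs:
print(val_step({'scene_id': [0]}))            # {'mode': 'unconditional'}
# Reconstruction-style batch with dummy conditioning images:
print(val_step({'cond_imgs': [[0, 1, 2]]}))   # {'mode': 'conditional', 'num_imgs': 3}
```

Using `dict.get` instead of `data['cond_imgs']` turns the hard KeyError into an explicit branch, though the proper fix may be to use an unconditional evaluation config for stage 1.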
Hi Hansheng,
How did you get the triplane visualization shown in the paper?
Best,
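While waiting for an answer: one common way to visualize high-dimensional feature planes (not necessarily the authors' method) is to project each plane's channels to three PCA components and map them to RGB. A minimal sketch, with illustrative shapes:

```python
# Sketch: visualize a (C, H, W) triplane feature plane as RGB via PCA.
# Not the paper's actual procedure; shapes and names are assumptions.
import numpy as np

def triplane_to_rgb(plane):
    """Project a (C, H, W) feature plane to an (H, W, 3) image in [0, 1]."""
    c, h, w = plane.shape
    flat = plane.reshape(c, -1).T                    # (H*W, C) feature vectors
    flat = flat - flat.mean(axis=0)                  # center the features
    # Principal directions from SVD of the centered feature matrix.
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    rgb = flat @ vt[:3].T                            # top-3 components per pixel
    rgb = (rgb - rgb.min()) / (np.ptp(rgb) + 1e-8)   # normalize to [0, 1]
    return rgb.reshape(h, w, 3)

# Example with a random 6-channel 32x32 plane:
img = triplane_to_rgb(np.random.default_rng(0).normal(size=(6, 32, 32)))
print(img.shape)  # (32, 32, 3)
```

The resulting array can be saved with any image library; repeating this for the xy, xz, and yz planes gives the three panels typically shown in such figures.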