RichDreamer

RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D.

Lingteng Qiu*, Guanying Chen*, Xiaodong Gu*, Qi Zuo, Mutian Xu, Yushuang Wu, Weihao Yuan, Zilong Dong, Liefeng Bo, Xiaoguang Han

If you are familiar with Chinese, you can read the Chinese version of this README.

Our method is based on the Normal-Depth Diffusion Model; for more details, please refer to normal-depth-diffusion.

(figure: RichDreamer results)

TODO 🚩

  • Text to ND Diffusion Model
  • Multiview-ND and Multiview-Albedo Diffusion Models
  • Release code
  • Provide the generation trial on ModelScope's 3D Object Generation
  • Docker image

Architecture

(figure: architecture overview)

Install

  • System requirement: Ubuntu 20.04
  • Tested GPUs: RTX 4090, A100

Install the requirements using the following script.

git clone https://github.com/modelscope/RichDreamer.git --recursive
conda create -n rd
conda activate rd
# install the dependencies of threestudio
pip install -r requirements_3d.txt
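
After installing, a quick sanity check that the GPU stack works can save debugging time later. A minimal Python sketch, assuming PyTorch is installed by requirements_3d.txt (the threestudio stack depends on it):

import torch  # assumed to be installed via requirements_3d.txt

# Verify that a CUDA device is visible before launching any generation script.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())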

We also provide a Dockerfile to build a Docker image, or you can use our pre-built Docker image.

sudo docker build -t mv3dengine_22.04:cu118 -f docker/Dockerfile .

Download the pretrained weights:

  1. MultiView Normal-Depth Diffusion Model (ND-MV)
  2. MultiView Depth-conditioned Albedo Diffusion Model (Albedo-MV)

Or you can download the weights using the following script.

python tools/download_nd_models.py
# copy the 256_tets file for DMTet
cp ./pretrained_models/Damo_XR_Lab/Normal-Depth-Diffusion-Model/256_tets.npz ./load/tets/
# link your huggingface models to ./pretrained_models/huggingface
cd pretrained_models && ln -s ~/.cache/huggingface ./

If you cannot access Hugging Face to download SD 1.5, SD 2.1, and the CLIP models, you can download the SD and CLIP models from Aliyun, then put the ${download_sd_clip} archive into pretrained_models/huggingface/hub/.

mkdir -p pretrained_models/huggingface/hub/
cd pretrained_models/huggingface/hub/
mv /path/to/${download_sd_clip} ./
tar -xvf ${download_sd_clip} -C ./

Generation

Make sure you have the following models.

RichDreamer
|-- pretrained_models
    |-- Damo_XR_Lab
        |-- Normal-Depth-Diffusion-Model
            |-- nd_mv_ema.ckpt
            |-- albedo_mv_ema.ckpt
    
    |-- huggingface
        |-- hub
            |-- models--runwayml--stable-diffusion-v1-5
            |-- models--openai--clip-vit-large-patch14
            |-- models--stabilityai--stable-diffusion-2-1-base
            |-- models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K
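
If in doubt about where a file belongs, the layout above can be checked programmatically. A minimal Python sketch that verifies it, with paths taken verbatim from the tree (nothing else is assumed):

import os

# Expected checkpoints and Hugging Face snapshots, per the tree above.
required = [
    "pretrained_models/Damo_XR_Lab/Normal-Depth-Diffusion-Model/nd_mv_ema.ckpt",
    "pretrained_models/Damo_XR_Lab/Normal-Depth-Diffusion-Model/albedo_mv_ema.ckpt",
    "pretrained_models/huggingface/hub/models--runwayml--stable-diffusion-v1-5",
    "pretrained_models/huggingface/hub/models--openai--clip-vit-large-patch14",
    "pretrained_models/huggingface/hub/models--stabilityai--stable-diffusion-2-1-base",
    "pretrained_models/huggingface/hub/models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K",
]
missing = [p for p in required if not os.path.exists(p)]
print("all models present" if not missing else "missing: " + ", ".join(missing))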

Note that we set the environment variables TRANSFORMERS_OFFLINE=1 DIFFUSERS_OFFLINE=1 HF_HUB_OFFLINE=1 in all *.sh files before running the commands, to avoid connecting to Hugging Face on every run.

If you used the script above to download the SD and CLIP models, nothing more is needed. If you download via the Hugging Face API instead, set TRANSFORMERS_OFFLINE=0 DIFFUSERS_OFFLINE=0 HF_HUB_OFFLINE=0 in the *.sh files for the first run; the models will then be fetched from Hugging Face automatically.
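
For reference, these switches must be set before transformers/diffusers are imported, which is why the scripts export them up front. A minimal Python equivalent of what the *.sh files do (the HF_HOME line is an assumption about pointing the hub cache at the local directory, not something the scripts are confirmed to set):

import os

# Must run before importing transformers / diffusers; both read these at import time.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["DIFFUSERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"
# Assumption: point the hub cache at the local copy (the cache lives in $HF_HOME/hub).
os.environ["HF_HOME"] = os.path.abspath("pretrained_models/huggingface")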

Ours (NeRF)

# Quick start: single A100 80G
python3 ./run_nerf.py -t "$prompt" -o $output

# Run from a prompt list
# e.g. bash ./scripts/nerf/run_batch.sh 0 1 ./prompts_nerf.txt
# We also provide run_batch_res256.sh, which optimizes with higher-resolution rendered images for better results, at the cost of more memory and time.
bash ./scripts/nerf/run_batch.sh $start_id $end_id $prompt_file

# If you don't have an A100, we provide a memory-saving version.
# For a single RTX 3090/4090 with 24GB of GPU memory:
# e.g. bash ./scripts/nerf/run_batch_fast.sh 0 1 ./prompts_nerf.txt
# or e.g. python3 ./run_nerf.py -t "a dog, 3d asset" -o ./outputs/nerf --img_res 128 --save_mem 1
bash ./scripts/nerf/run_batch_fast.sh $start_id $end_id $prompt_file
# or:
python3 ./run_nerf.py -t "$prompt" -o $output --img_res 128 --save_mem 1
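
If you prefer driving batches from Python instead of the shell scripts, the sketch below does roughly what run_batch_fast.sh does: it runs run_nerf.py over a slice of a prompt list (one prompt per line). The CLI flags match the examples above; the [start_id, end_id) slicing convention and the batch_nerf.py filename are assumptions.

import subprocess
import sys

# Usage (hypothetical helper): python batch_nerf.py <start_id> <end_id> <prompt_file>
start_id, end_id, prompt_file = int(sys.argv[1]), int(sys.argv[2]), sys.argv[3]

with open(prompt_file) as f:
    prompts = [line.strip() for line in f if line.strip()]

for prompt in prompts[start_id:end_id]:
    # Memory-saving settings from the example above (24GB GPUs).
    subprocess.run(
        ["python3", "./run_nerf.py", "-t", prompt, "-o", "./outputs/nerf",
         "--img_res", "128", "--save_mem", "1"],
        check=True,
    )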

Ours (DMTet)

Tips for Training DMTet

1. High-resolution rendering:

We found that directly optimizing a high-resolution DMTet sphere is more challenging than the NeRF approach. For example, both Fantasia3D and SweetDreamer require 4 or 8 GPUs for optimization, which is beyond the reach of most individuals. During our experiments, we observed that optimization becomes significantly more stable when we increase the rendering resolution of DMTet, e.g. to 1024. This trick allows us to optimize DMTet using only a single GPU, which was previously not feasible.
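
The intuition for the memory/time cost is simple pixel arithmetic: each 1024x1024 view carries 16x the pixels (and thus per-view gradient signal) of a 256x256 view. A trivial illustration:

# Pixel-count comparison for the rendering resolutions discussed above.
for res in (256, 512, 1024):
    ratio = (res * res) / (256 * 256)
    print(f"{res}x{res}: {ratio:.0f}x the pixels of 256x256")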

2. PBR Modeling:

Fantasia3D offers three strategies for PBR modeling. If you do not need models that support relighting and only aim for enhanced realism, we recommend the fantasia3d_2 sampling strategy. Otherwise, we suggest using fantasia3d strategy_0 together with our depth-conditioned albedo SDS.

# Quick start: single A100 80G
python3 ./run_dmtet.py -t "$prompt" -o $output

# Run from a prompt list
# e.g. bash ./scripts/dmtet/run_batch.sh 0 1 ./prompts_dmtet.txt
bash ./scripts/dmtet/run_batch.sh $start_id $end_id $prompt_file

# If you don't have an A100, we provide a memory-saving version.
# For a single RTX 3090/4090 with 24GB of GPU memory:
# e.g. bash ./scripts/dmtet/run_batch_fast.sh 0 1 ./prompts_dmtet.txt
bash ./scripts/dmtet/run_batch_fast.sh $start_id $end_id $prompt_file

Acknowledgement

This work is built on many amazing research works and open-source projects. Thanks for their excellent work and great contributions to the 3D generation area.

We would like to express our special gratitude to Rui Chen for valuable discussions on training Fantasia3D and PBR modeling.

Additionally, we extend our heartfelt thanks to Chao Xu for his assistance in conducting relighting experiments.

Citation

@article{qiu2023richdreamer,
    title={RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D},
    author={Lingteng Qiu and Guanying Chen and Xiaodong Gu and Qi Zuo and Mutian Xu and Yushuang Wu and Weihao Yuan and Zilong Dong and Liefeng Bo and Xiaoguang Han},
    year={2023},
    journal={arXiv preprint arXiv:2311.16918}
}
