
Comments (4)

Megidd commented on September 28, 2024

On an Ubuntu server with a CUDA GPU like this:

$ nvidia-smi
Sun Jun  9 09:43:21 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060        On  | 00000000:41:00.0 Off |                  N/A |
|  0%   39C    P8              14W / 170W |    114MiB / 12288MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      1122      G   /usr/lib/xorg/Xorg                           38MiB |
|    0   N/A  N/A      1337      G   /usr/bin/gnome-shell                         56MiB |
+---------------------------------------------------------------------------------------+

Here are the steps we took.

Step 1

Install some required drivers and packages.

sudo apt-get install libegl1-mesa-dev
nvidia-smi
sudo apt-get install libnvidia-gl-545

Step 2

Install Conda.

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
sudo bash miniconda.sh -b -u -p /usr/local
conda update -y conda

Step 3

Clone the repository and go inside it.

git clone https://github.com/pengHTYX/Era3D.git
cd Era3D # Make sure you are inside the repository

Step 4

Create the Python environment (only once; for future runs, just activate it).

eval "$(conda shell.bash hook)"
conda create -n Era3D python=3.9 -y

Step 5

Activate the Python environment. It has to be activated every time before you run anything.

eval "$(conda shell.bash hook)"
conda activate Era3D

Step 6

Install the Python environment dependencies, only once. For future runs, they will already be satisfied once the environment is activated.

pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 xformers
# Path to your nvcc, just in case it's not already on the PATH:
export PATH=$PATH:/usr/local/cuda-12.3/bin/
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install git+https://github.com/NVlabs/nvdiffrast
pip install -r requirements.txt
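As a quick sanity check (a sketch, not part of the original steps), you can verify inside the activated environment that the key packages resolve before moving on:

```python
import importlib.util

# Check that the main dependencies resolve in the active interpreter.
mods = ("torch", "torchvision", "torchaudio", "xformers")
results = {m: importlib.util.find_spec(m) is not None for m in mods}
for m, ok in results.items():
    print(f"{m}: {'found' if ok else 'MISSING'}")
```

Anything reported MISSING means the corresponding pip install step did not take effect in this environment.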

Step 7

Download the model/weights.

rm -rf download_model.py
touch download_model.py
echo 'from huggingface_hub import snapshot_download' >> download_model.py
echo 'snapshot_download(repo_id="pengHTYX/MacLab-Era3D-512-6view", local_dir="./pengHTYX/MacLab-Era3D-512-6view/")' >> download_model.py
python download_model.py
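The same download script can be written in one step with a heredoc instead of repeated echo lines (equivalent to the snippet above):

```shell
# Write download_model.py in one shot; the quoted 'EOF' disables shell expansion.
cat > download_model.py <<'EOF'
from huggingface_hub import snapshot_download
snapshot_download(repo_id="pengHTYX/MacLab-Era3D-512-6view",
                  local_dir="./pengHTYX/MacLab-Era3D-512-6view/")
EOF
```

Then run `python download_model.py` as before.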

Step 8

Run the examples. Note that when `(Pdb)` appears on your console, that's a debugging breakpoint. Type `continue` to move on.

python test_mvdiffusion_unclip.py --config configs/test_unclip-512-6view.yaml \
    pretrained_model_name_or_path='pengHTYX/MacLab-Era3D-512-6view' \
    validation_dataset.crop_size='420' \
    validation_dataset.root_dir=examples \
    seed=600 \
    save_dir='mv_res' \
    save_mode='rgb'
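For unattended runs, note that the `(Pdb)` prompt reads its commands from stdin, so `continue` can be piped in (e.g. `yes continue | python test_mvdiffusion_unclip.py ...`). A minimal demonstration of the mechanism, assuming the script only stops at plain `pdb.set_trace()` breakpoints:

```python
import subprocess
import sys
import textwrap

# A stand-in script with a breakpoint, like the ones in the Era3D test scripts.
script = textwrap.dedent("""
    import pdb
    pdb.set_trace()
    print("done")
""")

# Feeding "continue" on stdin answers the (Pdb) prompt non-interactively.
out = subprocess.run(
    [sys.executable, "-c", script],
    input="continue\n",
    capture_output=True,
    text=True,
)
print(out.stdout)
```

The child script runs to completion and prints "done" without anyone typing at the debugger prompt.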


Megidd commented on September 28, 2024

This is how we are trying to run it on Google Colab:

# First: open Runtime in the top menu > Change runtime type > Select T4 GPU

# Then do the following sections on separate notebook code blocks:

!wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
!bash miniconda.sh -b -u -p /usr/local
!rm miniconda.sh
!conda update -y conda

%cd /content/
!rm -rf Era3D
!git clone https://github.com/pengHTYX/Era3D.git
%cd Era3D

!sudo apt-get install libegl1-mesa-dev
!nvidia-smi
!sudo apt-get install libnvidia-gl-535

%%shell
eval "$(conda shell.bash hook)"
conda create -n Era3D python=3.9 -y
conda activate Era3D
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2
wget https://download.pytorch.org/whl/cu121/xformers-0.0.23.post1-cp39-cp39-manylinux2014_x86_64.whl#sha256=a117e4cc835d9a19c653d79b5c66e37c72f713241e2d85b6561a15006f84b6e6
pip install xformers-0.0.23.post1-cp39-cp39-manylinux2014_x86_64.whl
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install git+https://github.com/NVlabs/nvdiffrast
pip install -r requirements.txt

%%shell
eval "$(conda shell.bash hook)"
conda activate Era3D
rm -rf download_model.py
touch download_model.py
echo 'from huggingface_hub import snapshot_download' >> download_model.py
echo 'snapshot_download(repo_id="pengHTYX/MacLab-Era3D-512-6view", local_dir="./pengHTYX/MacLab-Era3D-512-6view/")' >> download_model.py
python download_model.py
ls -lhrtc

%%shell
ls pengHTYX/MacLab-Era3D-512-6view/

%%shell
eval "$(conda shell.bash hook)"
conda activate Era3D
python test_mvdiffusion_unclip.py --config configs/test_unclip-512-6view.yaml \
    pretrained_model_name_or_path='pengHTYX/MacLab-Era3D-512-6view' \
    validation_dataset.crop_size='420' \
    validation_dataset.root_dir=examples \
    seed=600 \
    save_dir='mv_res'  \
    save_mode='rgb'


ibaraki-douji commented on September 28, 2024

First, thanks for your Colab workflow. (For people using it: there are 5 cells, which all start with `%%shell` except the first one.)

As for a normal server, I used Vast AI with the nvidia/cuda:11.8.0-devel-ubuntu22.04 image.

And here is what I did:

# install conda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
chmod +x miniconda.sh
./miniconda.sh # follow the conda installer (say yes to letting it initialize your shell)

# restart your shell (or reconnect over ssh, for example)

conda create -n Era3D python=3.9
conda activate Era3D

# install the deps
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118

pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install git+https://github.com/NVlabs/nvdiffrast

pip install -r requirements.txt

# clone the repo
git clone https://github.com/pengHTYX/Era3D
cd Era3D

# download the model
touch download_model.py
echo 'from huggingface_hub import snapshot_download' >> download_model.py
echo 'snapshot_download(repo_id="pengHTYX/MacLab-Era3D-512-6view", local_dir="./pengHTYX/MacLab-Era3D-512-6view/")' >> download_model.py
python download_model.py

# in case they're not installed, install these for the GL libs
apt-get update && apt-get install ffmpeg libsm6 libxext6  -y

# run the thing
python test_mvdiffusion_unclip.py --config configs/test_unclip-512-6view.yaml \
    pretrained_model_name_or_path='pengHTYX/MacLab-Era3D-512-6view' \
    validation_dataset.crop_size='420' \
    validation_dataset.root_dir=examples \
    seed=600 \
    save_dir='mv_res'  \
    save_mode='rgb'

# once you get the `(Pdb)` prompt, type `continue` to run the first image, and type `help` to see the available commands

Currently I'm trying to get NSR working on Vast AI.

from era3d.

ibaraki-douji commented on September 28, 2024

For the NSR step, I was missing this:

apt-get install libegl1-mesa-dev

Also, to avoid typing `continue` every time at the `(Pdb)` prompt, type `pdb.set_trace = lambda: None` followed by `continue`, and it will no longer stop you.
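As a standalone illustration of that trick (the import is only needed outside the `(Pdb)` prompt, where `pdb` is already loaded; the `*args, **kwargs` variant is slightly more defensive than the bare lambda):

```python
import pdb

# Replace set_trace with a no-op so every later breakpoint is skipped.
pdb.set_trace = lambda *args, **kwargs: None

result = pdb.set_trace()  # returns immediately instead of opening the debugger
print("no debugger prompt:", result)
```

This only affects calls that go through `pdb.set_trace`; it won't disable the builtin `breakpoint()` if that has been rebound elsewhere.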

