
text2video-zero's Introduction

Text2Video-Zero

This repository is the official implementation of Text2Video-Zero.

Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators
Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi

Paper | Video | Hugging Face Spaces | Project


Our method Text2Video-Zero enables zero-shot video generation using (i) a textual prompt (see rows 1, 2), (ii) a prompt combined with guidance from poses or edges (see lower right), and (iii) Video Instruct-Pix2Pix, i.e., instruction-guided video editing (see lower left). Results are temporally consistent and closely follow the guidance and textual prompts.

News

  • [03/23/2023] Paper Text2Video-Zero released!
  • [03/25/2023] The first version of our huggingface demo (containing zero-shot text-to-video generation and Video Instruct Pix2Pix) released!
  • [03/27/2023] The full version of our huggingface demo released! Now also included: text and pose conditional video generation, text and edge conditional video generation, and text, edge and dreambooth conditional video generation.
  • [03/28/2023] Code for all our generation methods released! We added a new low-memory setup. Minimum required GPU VRAM is currently 12 GB. It will be further reduced in the upcoming releases.
  • [03/29/2023] Improved Huggingface demo! (i) For text-to-video generation, any base model for stable diffusion and any dreambooth model hosted on huggingface can now be loaded! (ii) We improved the quality of Video Instruct-Pix2Pix. (iii) We added two longer examples for Video Instruct-Pix2Pix.
  • [03/30/2023] New code released! It includes all improvements of our latest huggingface iteration. See the news update from 03/29/2023. In addition, generated videos (text-to-video) can have arbitrary length.
  • [04/06/2023] We integrated Token Merging into our code. When the highest compression is used and chunk size set to 2, our code can run with less than 7 GB VRAM.
  • [04/11/2023] New code and Huggingface demo released! We integrated depth control, based on MiDaS.
  • [04/13/2023] Our method has been integrated into 🧨 Diffusers!

Contribute

We are on a journey to democratize AI and empower the creativity of everyone, and we believe Text2Video-Zero is a great research direction to unleash the zero-shot video generation and editing capacity of the amazing text-to-image models!

To achieve this goal, all contributions are welcome. Please check out the external implementations and extensions of Text2Video-Zero. We thank the authors for their efforts and contributions.

Setup

  1. Clone this repository and enter the directory:
git clone https://github.com/Picsart-AI-Research/Text2Video-Zero.git
cd Text2Video-Zero/
  2. Install requirements using Python 3.9 and CUDA >= 11.6:
virtualenv --system-site-packages -p python3.9 venv
source venv/bin/activate
pip install -r requirements.txt

Inference API

To run inference, create an instance of the Model class:

import torch
from model import Model

model = Model(device = "cuda", dtype = torch.float16)

Text-To-Video

To directly call our text-to-video generator, run this python command, which stores the result in ./text2video_A_horse_galloping_on_a_street.mp4:

prompt = "A horse galloping on a street"
params = {"t0": 44, "t1": 47 , "motion_field_strength_x" : 12, "motion_field_strength_y" : 12, "video_length": 8}

out_path, fps = f"./text2video_{prompt.replace(' ','_')}.mp4", 4
model.process_text2video(prompt, fps = fps, path = out_path, **params)

To use a different stable diffusion base model, run this python command:

from hf_utils import get_model_list
model_list = get_model_list()
for idx, name in enumerate(model_list):
  print(idx, name)
idx = int(input("Select the model by the listed number: ")) # select the model of your choice
model.process_text2video(prompt, model_name = model_list[idx], fps = fps, path = out_path, **params)

Hyperparameters (Optional)

You can define the following hyperparameters:

  • Motion field strength: motion_field_strength_x = $\delta_x$ and motion_field_strength_y = $\delta_y$ (see our paper, Sect. 3.3.1). Default: motion_field_strength_x=motion_field_strength_y= 12.
  • $T$ and $T'$ (see our paper, Sect. 3.3.1). Define values t0 and t1 in the range {0,...,50}. Default: t0=44, t1=47 (DDIM steps). Corresponds to timesteps 881 and 941, respectively.
  • Video length: Define the number of frames video_length to be generated. Default: video_length=8.
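For example, these hyperparameters can be passed explicitly. The following is a minimal sketch reusing the model, prompt, and out_path variables from the Text-To-Video example above; the values shown are just the documented defaults:

# Explicitly set the documented hyperparameters (values shown are the defaults)
params = {
    "t0": 44,                       # DDIM step, corresponds to timestep 881
    "t1": 47,                       # DDIM step, corresponds to timestep 941
    "motion_field_strength_x": 12,  # delta_x, global motion in x direction (Sect. 3.3.1)
    "motion_field_strength_y": 12,  # delta_y, global motion in y direction (Sect. 3.3.1)
    "video_length": 8,              # number of frames to generate
}
model.process_text2video(prompt, fps=4, path=out_path, **params)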

Text-To-Video with Pose Control

To directly call our text-to-video generator with pose control, run this python command:

prompt = 'an astronaut dancing in outer space'
motion_path = '__assets__/poses_skeleton_gifs/dance1_corr.mp4'
out_path = f"./text2video_pose_guidance_{prompt.replace(' ','_')}.gif"
model.process_controlnet_pose(motion_path, prompt=prompt, save_path=out_path)

Text-To-Video with Edge Control

To directly call our text-to-video generator with edge control, run this python command:

prompt = 'oil painting of a deer, a high-quality, detailed, and professional photo'
video_path = '__assets__/canny_videos_mp4/deer.mp4'
out_path = f'./text2video_edge_guidance_{prompt}.mp4'
model.process_controlnet_canny(video_path, prompt=prompt, save_path=out_path)

Hyperparameters

You can define the following hyperparameters for Canny edge detection:

  • low threshold. Define value low_threshold in the range $(0, 255)$. Default: low_threshold=100.
  • high threshold. Define value high_threshold in the range $(0, 255)$. Default: high_threshold=200. Make sure that high_threshold > low_threshold.

You can pass these hyperparameters as arguments to model.process_controlnet_canny, as shown below.
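A minimal sketch, reusing the variables from the edge-control example above (the threshold values shown are just the documented defaults):

# Pass the Canny thresholds explicitly
model.process_controlnet_canny(
    video_path,
    prompt=prompt,
    save_path=out_path,
    low_threshold=100,   # must be in (0, 255)
    high_threshold=200,  # must be in (0, 255) and greater than low_threshold
)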


Text-To-Video with Edge Guidance and Dreambooth specialization

Load a dreambooth model, then proceed as described in Text-To-Video with Edge Guidance:

prompt = 'your prompt'
video_path = 'path/to/your/video'
dreambooth_model_path = 'path/to/your/dreambooth/model'
out_path = f'./text2video_edge_db_{prompt}.gif'
model.process_controlnet_canny_db(dreambooth_model_path, video_path, prompt=prompt, save_path=out_path)

The value video_path can be the path to an mp4 file. To use one of the example videos provided, set video_path="woman1", video_path="woman2", video_path="woman3", or video_path="man1".

The value dreambooth_model_path can either be a link to a diffusers model file or the name of one of the dreambooth models provided. For the latter, set dreambooth_model_path = "Anime DB", dreambooth_model_path = "Avatar DB", dreambooth_model_path = "GTA-5 DB", or dreambooth_model_path = "Arcane DB". The corresponding keywords are: 1girl (for Anime DB), arcane style (for Arcane DB), avatar style (for Avatar DB), and gtav style (for GTA-5 DB).
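For example, to combine one of the provided example videos with the provided Arcane dreambooth model, include the model's keyword in the prompt (the prompt text itself is only illustrative):

# "Arcane DB" selects the provided Arcane dreambooth model; "woman1" selects an example video
prompt = 'arcane style, a woman dancing'  # "arcane style" is the keyword for Arcane DB
out_path = './text2video_edge_db_arcane_woman1.gif'
model.process_controlnet_canny_db("Arcane DB", "woman1", prompt=prompt, save_path=out_path)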

Custom Dreambooth Models

To load a custom dreambooth model, convert it to diffusers format; the value of dreambooth_model_path must then point to the folder containing the converted diffusers model files. Dreambooth models can be obtained, for instance, from CIVITAI.
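A sketch of how such a model would then be used, assuming the checkpoint has already been converted to diffusers format and saved locally (the folder path and prompt below are hypothetical):

# dreambooth_model_path points to the folder containing the converted diffusers model
dreambooth_model_path = './dreambooth_models/my_custom_model'  # hypothetical local folder
video_path = '__assets__/canny_videos_mp4/deer.mp4'
prompt = 'your prompt, including the keyword your dreambooth model was trained on'
out_path = './text2video_edge_custom_db.gif'
model.process_controlnet_canny_db(dreambooth_model_path, video_path, prompt=prompt, save_path=out_path)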


Video Instruct-Pix2Pix

To perform pix2pix video editing, run this python command:

prompt = 'make it Van Gogh Starry Night'
video_path = '__assets__/pix2pix video/camel.mp4'
out_path = f'./video_instruct_pix2pix_{prompt}.mp4'
model.process_pix2pix(video_path, prompt=prompt, save_path=out_path)

Text-To-Video with Depth Control

To directly call our text-to-video generator with depth control, run this python command:

prompt = 'oil painting of a deer, a high-quality, detailed, and professional photo'
video_path = '__assets__/depth_videos/deer.mp4'
out_path = f'./text2video_depth_control_{prompt}.mp4'
model.process_controlnet_depth(video_path, prompt=prompt, save_path=out_path)

Low Memory Inference

Each of the interfaces introduced above can be run in a low-memory setup. In the minimal setup, a GPU with 12 GB of VRAM is sufficient.

To reduce memory usage, add chunk_size=k as an additional parameter when calling one of the inference APIs defined above. The integer value k must be in the range {2,...,video_length}. It defines the number of frames that are processed at once (without any loss in quality); the lower the value, the less memory is needed.

When using the gradio app, set chunk_size in the Advanced options.

Thanks to the great work of Token Merging, memory usage can be reduced further. Token Merging is configured via the merging_ratio parameter, which takes values in [0,1]. The higher the value, the more compression is applied, leading to faster inference and lower memory requirements. Be aware that excessively high values will degrade image quality.
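For example, a low-memory variant of the text-to-video call from above might look as follows (a sketch reusing the earlier variables; the exact savings depend on your GPU and settings):

# Process only 2 frames at a time and apply strong Token Merging compression
model.process_text2video(
    prompt,
    fps=4,
    path=out_path,
    chunk_size=2,       # frames processed at once, in {2, ..., video_length}
    merging_ratio=0.9,  # Token Merging compression in [0, 1]; higher = less memory, possibly lower quality
    **params,
)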

We plan to continue optimizing our code to enable even lower memory consumption.


Ablation Study

To replicate the ablation study, add the following parameters when calling the inference APIs defined above (see the example below):

  • To deactivate cross-frame attention: Add use_cf_attn=False to the parameter list.
  • To deactivate enriching latent codes with motion dynamics: Add use_motion_field=False to the parameter list.

Note: Adding smooth_bg=True activates background smoothing. However, our code does not include the salient object detector necessary to run that code.
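For instance, a sketch of an ablation run of the text-to-video generator with both components disabled, reusing the variables from the earlier example:

# Ablation: disable cross-frame attention and the motion-dynamics enrichment of latent codes
model.process_text2video(
    prompt,
    fps=4,
    path=out_path,
    use_cf_attn=False,       # fall back to standard self-attention
    use_motion_field=False,  # do not enrich latent codes with motion dynamics
    **params,
)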


Inference using Gradio


From the project root folder, run this shell command:

python app.py

Then access the app locally with a browser.

To access the app remotely, run this shell command:

python app.py --public_access

For security considerations regarding public access, we refer to the Gradio documentation.


Results

Text-To-Video

"A cat is running on the grass" "A panda is playing guitar on times square" "A man is running in the snow" "An astronaut is skiing down the hill"
"A panda surfing on a wakeboard" "A bear dancing on times square" "A man is riding a bicycle in the sunshine" "A horse galloping on a street"
"A tiger walking alone down the street" "A panda surfing on a wakeboard" "A horse galloping on a street" "A cute cat running in a beautiful meadow"
"A horse galloping on a street" "A panda walking alone down the street" "A dog is walking down the street" "An astronaut is waving his hands on the moon"

Text-To-Video with Pose Guidance

"A bear dancing on the concrete" "An alien dancing under a flying saucer" "A panda dancing in Antarctica" "An astronaut dancing in the outer space"

Text-To-Video with Edge Guidance

"White butterfly" "Beautiful girl" "A jellyfish" "beautiful girl halloween style"
"Wild fox is walking" "Oil painting of a beautiful girl close-up" "A santa claus" "A deer"

Text-To-Video with Edge Guidance and Dreambooth specialization

"anime style" "arcane style" "gta-5 man" "avatar style"

Video Instruct Pix2Pix

"Replace man with chimpanze" "Make it Van Gogh Starry Night style" "Make it Picasso style"
"Make it Expressionism style" "Make it night" "Make it autumn"

Related Links

License

Our code is published under the CreativeML Open RAIL-M license. The license provided in this repository applies to all additions and contributions we make on top of the original stable diffusion code. The original stable diffusion code is itself under the CreativeML Open RAIL-M license, which can be found here.

BibTeX

If you use our work in your research, please cite our publication:

@article{text2video-zero,
    title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
    author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
    journal={arXiv preprint arXiv:2303.13439},
    year={2023}
}

Alternative ways to use Text2Video-Zero

Text2Video-Zero can alternatively be used via the 🧨 Diffusers library.


Text2Video-Zero in 🧨 Diffusers Library

Text2Video-Zero is available in 🧨 Diffusers, starting from version 0.15.0!

Diffusers can be installed using the following command:

virtualenv --system-site-packages -p python3.9 venv
source venv/bin/activate
pip install diffusers torch imageio

To generate a video from a text prompt, run the following command:

import torch
import imageio
from diffusers import TextToVideoZeroPipeline

# load stable diffusion model weights
model_id = "runwayml/stable-diffusion-v1-5"

# create a TextToVideoZero pipeline
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# define the text prompt
prompt = "A panda is playing guitar on times square"

# generate the video using our pipeline
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]

# save the resulting frames as a video
imageio.mimsave("video.mp4", result, fps=4)

For more information, including how to run text-and-pose conditional video generation, text-and-edge conditional video generation, and text, edge, and dreambooth conditional video generation, please check the documentation.

text2video-zero's People

Contributors

erjanmx, honghuis, johndpope, levon-khachatryan, mickelliu, rob-hen


text2video-zero's Issues

About the detailed motivation of the work.

Hi there,

It is really a nice work. I'm very curious about why the motion dynamics and background smoothing can work and the mechanism behind them. Specifically, since the operations are applied in the latent space, wouldn't translation and blending destroy the structure of the final decoded image and turn it into a completely different image?

Does anyone have any ideas?

Condition Text To Video generation on first frame

Hi!

This is a great work with amazing results, good job!

I was hoping you could provide some guidance on the following issue. I'm trying to condition the video generation by providing the first frame for text to video generation.

I noticed that the inference scheme for TextToVideoPipeline accepts xT as parameter, so I've made some modifications to the code and I'm providing the latent encoding of the first frame instead of sampling random latents.

I'm using the VAE and the SD preprocessing scheme. I've tested that encoding the frame and decoding the latents produces (practically) the same image and it works.

My issue is that the full generation produces low quality results, with a diagonal camera movement from left to right and low resolution and weird "filters". I assume it has to do with the backward diffusion steps on the first frame, but I'm kind of stuck on what to do next.

I would really appreciate your input on this one.
Thanks!

Here's snippets of code on how I encode:

def encode_latents(self, image: PIL.Image):
    img = np.array(image.convert("RGB"))
    img = img[None].transpose(0, 3, 1, 2)
    img = torch.from_numpy(img).to(dtype=torch.float32) / 127.5 - 1.0
    masked_image_latents = self.vae.encode(img.to(device=self.device, dtype=torch.float16)).latent_dist.sample()
    return torch.unsqueeze(masked_image_latents, 2) * 0.18215

Results (all parameters are left to default of original text to video):
Input image (text = "horse galloping"):
horse_resized
Produced output:

text2video_A_horse_galloping.mp4

Input image (first frame of previous generated gif - "a horse galloping on a street"):
generated_horse

Produced output:

text2video_A_horse_galloping_on_a_street_nice.mp4

Set NSFW option

How can I turn ON the requires_safety_checker in model.process_text2video?
I thought it was True by default but I can still create NSFW videos.

the first generated image of each chunk is a computational waste

**kwargs).images[1:])

Currently, the first image of each processed chunk is wasted (discarded). Would it be possible to avoid computing it? In a scenario where several chunks are processed, the first image is always identical: it does not reflect the scene changes, which only appear from the second frame onwards. It would be good to avoid the computational cost of that first image, if possible.

Temporal inconsistency

Hi, thanks for open-sourcing the code. I'm currently using Video Instruct-Pix2Pix and have noticed temporal inconsistency between the generated frames. This is much worse in videos longer than 16 frames. May I know how to fix this?

video_instruct_pix2pix_make.it.in.cartoon.style.mp4

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacty of 12.00 GiB

Could someone give me a hand? I have not been able to get this working. I am using a 12 GB RTX 2060 with miniconda3 for the env, on Windows. I tried to find out what the error is but did not find anything clear enough; it seems to be related to reserved memory, and I don't know how to change that. I share the nvidia-smi info,
the demo.py that I put together, pip list, and what it tells me.

how to install it
conda create -n textovideo python=3.9 pip -y
conda activate textovideo
pip install -r requirements.txt
pip3 install numpy --pre torch torchvision torchaudio --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu118

pip install kwargs ¿?¿??
pip install ffprobe ?¿?


(demo.py)
....................................................................
import torch
from model import Model
model = Model(device = "cuda", dtype = torch.float16)

prompt = 'oil painting of a deer, a high-quality, detailed, and professional photo'
video_path = 'assets/depth_videos/deer.mp4'
out_path = f'./text2video_depth_control_{prompt}.mp4'
model.process_controlnet_depth(video_path, prompt=prompt, save_path=out_path)
....................................................................
nvidia-smi
Mon May 8 06:18:38 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.14 Driver Version: 531.14 CUDA Version: 12.1 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 2060 WDDM | 00000000:07:00.0 On | N/A |
| 38% 50C P0 36W / 184W| 381MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1700 C+G C:\Windows\System32\dwm.exe N/A |
| 0 N/A N/A 1964 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
| 0 N/A N/A 3328 C+G ...0_x64__8wekyb3d8bbwe\Calculator.exe N/A |
| 0 N/A N/A 7772 C+G ...0_x64__pwbj9vvecjh7j\PrimeVideo.exe N/A |
| 0 N/A N/A 7900 C+G ....Experiences.TextInput.InputApp.exe N/A |
| 0 N/A N/A 7992 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 8984 C+G ....Cortana_cw5n1h2txyewy\SearchUI.exe N/A |
| 0 N/A N/A 11796 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
| 0 N/A N/A 13252 C+G ...5n1h2txyewy\ShellExperienceHost.exe N/A |
| 0 N/A N/A 13996 C+G ....Cortana_cw5n1h2txyewy\SearchUI.exe N/A |
| 0 N/A N/A 14864 C+G ...siveControlPanel\SystemSettings.exe N/A |
| 0 N/A N/A 16188 C+G ....0_x64__8wekyb3d8bbwe\YourPhone.exe N/A |
| 0 N/A N/A 18040 C+G ...t.LockApp_cw5n1h2txyewy\LockApp.exe N/A |
| 0 N/A N/A 35656 C+G ...YourPhoneServer\YourPhoneServer.exe N/A |
| 0 N/A N/A 52948 C+G ...inaries\Win64\EpicGamesLauncher.exe N/A |
| 0 N/A N/A 57284 C+G C:\Windows\System32\WWAHost.exe N/A |
| 0 N/A N/A 59664 C+G ...ne\Binaries\Win64\EpicWebHelper.exe N/A |
| 0 N/A N/A 65604 C+G ...Brave-Browser\Application\brave.exe N/A |
| 0 N/A N/A 98620 C+G ...61.0_x64__8wekyb3d8bbwe\GameBar.exe N/A |
+---------------------------------------------------------------------------------------+
pip list
Package Version


absl-py 1.4.0
accelerate 0.16.0
addict 2.4.0
aiofiles 23.1.0
aiohttp 3.8.4
aiosignal 1.3.1
albumentations 1.3.0
altair 4.2.2
antlr4-python3-runtime 4.9.3
anyio 3.6.2
args 0.1.0
async-timeout 4.0.2
attrs 23.1.0
basicsr 1.4.2
beautifulsoup4 4.12.2
braceexpand 0.1.7
bs4 0.0.1
cachetools 5.3.0
certifi 2022.12.7
charset-normalizer 2.1.1
click 8.1.3
colorama 0.4.6
coloredlogs 15.0.1
contourpy 1.0.7
cycler 0.11.0
decorator 4.4.2
decord 0.6.0
diffusers 0.14.0
einops 0.6.0
entrypoints 0.4
fastapi 0.95.1
ffmpy 0.3.0
ffprobe 0.5
filelock 3.9.0
flatbuffers 23.3.3
fonttools 4.39.3
frozenlist 1.3.3
fsspec 2023.4.0
ftfy 6.1.1
future 0.18.3
google-auth 2.17.3
google-auth-oauthlib 1.0.0
gradio 3.23.0
grpcio 1.54.0
h11 0.14.0
httpcore 0.17.0
httpx 0.24.0
huggingface-hub 0.14.1
humanfriendly 10.0
idna 3.4
imageio 2.9.0
imageio-ffmpeg 0.4.2
importlib-metadata 6.6.0
importlib-resources 5.12.0
invisible-watermark 0.1.5
Jinja2 3.1.2
joblib 1.2.0
jsonschema 4.17.3
kiwisolver 1.4.4
kornia 0.6.0
kwargs 1.0.1
linkify-it-py 2.0.2
lmdb 1.4.1
Markdown 3.4.3
markdown-it-py 2.2.0
MarkupSafe 2.1.2
matplotlib 3.7.1
mdit-py-plugins 0.3.3
mdurl 0.1.2
moviepy 1.0.3
mpmath 1.2.1
multidict 6.0.4
networkx 3.0rc1
numpy 1.24.1
oauthlib 3.2.2
omegaconf 2.3.0
onnx 1.14.0
onnxruntime 1.14.1
open-clip-torch 2.16.0
opencv-contrib-python 4.7.0.72
opencv-python 4.7.0.72
opencv-python-headless 4.7.0.72
orjson 3.8.11
packaging 23.1
pandas 2.0.1
Pillow 9.3.0
pip 23.0.1
prettytable 3.6.0
proglog 0.1.10
protobuf 3.20.3
psutil 5.9.5
pyasn1 0.5.0
pyasn1-modules 0.3.0
pydantic 1.10.7
pyDeprecate 0.3.1
pydub 0.25.1
pyparsing 3.0.9
pyreadline3 3.4.1
pyrsistent 0.19.3
python-dateutil 2.8.2
python-multipart 0.0.6
pytorch-lightning 1.5.0
pytz 2023.3
PyWavelets 1.4.1
PyYAML 6.0
qudida 0.0.4
regex 2023.5.5
requests 2.28.1
requests-oauthlib 1.3.1
rsa 4.9
safetensors 0.2.7
scikit-image 0.19.3
scikit-learn 1.2.2
scipy 1.10.1
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 66.0.0
six 1.16.0
sniffio 1.3.0
soupsieve 2.4.1
starlette 0.26.1
sympy 1.11.1
tb-nightly 2.14.0a20230506
tensorboard 2.13.0
tensorboard-data-server 0.7.0
tensorboardX 2.6
test-tube 0.7.5
threadpoolctl 3.1.0
tifffile 2023.4.12
timm 0.6.12
tokenizers 0.13.3
tomesd 0.1.2
toolz 0.12.0
torch 2.1.0.dev20230506+cu118
torchaudio 2.1.0.dev20230507+cu118
torchmetrics 0.6.0
torchvision 0.16.0.dev20230507+cu118
tqdm 4.64.1
transformers 4.26.0
typing_extensions 4.4.0
tzdata 2023.3
uc-micro-py 1.0.2
urllib3 1.26.13
uvicorn 0.22.0
wcwidth 0.2.6
webdataset 0.2.5
websockets 11.0.2
Werkzeug 2.3.3
wheel 0.38.4
yapf 0.32.0
yarl 1.9.2
zipp 3.15.0

the errors

(textovideo) PS H:\ia\Text2Video-Zero-main> python demo.py
H:\ia\Text2Video-Zero-main\annotator\openpose\body.py:5: DeprecationWarning: Please use gaussian_filter from the scipy.ndimage namespace, the scipy.ndimage.filters namespace is deprecated.
from scipy.ndimage.filters import gaussian_filter
H:\ia\Text2Video-Zero-main\annotator\openpose\hand.py:6: DeprecationWarning: Please use gaussian_filter from the scipy.ndimage namespace, the scipy.ndimage.filters namespace is deprecated.
from scipy.ndimage.filters import gaussian_filter
C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\skimage\util\dtype.py:27: DeprecationWarning: np.bool8 is a deprecated alias for np.bool_. (Deprecated NumPy 1.24)
np.bool8: (False, True),
cuda
cuda
Module Depth
C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\safetensors\torch.py:98: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(filename, framework="pt", device=device) as f:
text_encoder\model.safetensors not found
Fetching 15 files: 100%|████████████████| 15/15 [00:00<?, ?it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet.StableDiffusionControlNetPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
Processing chunk 1 / 2
0%| | 0/20 [00:01<?, ?it/s]
Traceback (most recent call last):
File "H:\ia\Text2Video-Zero-main\demo.py", line 8, in
model.process_controlnet_depth(video_path, prompt=prompt, save_path=out_path)
File "H:\ia\Text2Video-Zero-main\model.py", line 243, in process_controlnet_depth
result = self.inference(image=control,
File "H:\ia\Text2Video-Zero-main\model.py", line 120, in inference
result.append(self.inference_chunk(frame_ids=frame_ids,
File "H:\ia\Text2Video-Zero-main\model.py", line 79, in inference_chunk
return self.pipe(prompt=prompt[frame_ids].tolist(),
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_controlnet.py", line 749, in call
down_block_res_samples, mid_block_res_sample = self.controlnet(
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\diffusers\models\controlnet.py", line 461, in forward
sample, res_samples = downsample_block(
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 837, in forward
hidden_states = attn(
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\diffusers\models\transformer_2d.py", line 265, in forward
hidden_states = block(
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\diffusers\models\attention.py", line 291, in forward
attn_output = self.attn1(
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1502, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\torch\nn\modules\module.py", line 1511, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\diffusers\models\cross_attention.py", line 205, in forward
return self.processor(
File "H:\ia\Text2Video-Zero-main\utils.py", line 218, in call
attention_probs = attn.get_attention_scores(query, key, attention_mask)
File "C:\Users\ultim\miniconda3\envs\textovideo\lib\site-packages\diffusers\models\cross_attention.py", line 242, in get_attention_scores
attention_scores = torch.baddbmm(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacty of 12.00 GiB of which 555.75 MiB is free. Of the allocated memory 7.91 GiB is allocated by PyTorch, and 658.32 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
(textovideo) PS H:\ia\Text2Video-Zero-main>


Thank you very much in advance if you can answer and guide me a bit. With other things I have not had problems, e.g. stable diffusion and stable diffusion fast. When I ran the demos, a number of things were downloaded:

07/05/2023 06:51 209.267.595 body_pose_model.pth
07/05/2023 06:32 13 ckpts.txt
07/05/2023 07:01 492.757.791 dpt_hybrid-midas-501f0c75.pt
07/05/2023 06:53 147.341.049 hand_pose_model.pth

about model load: does it need more time?

I have downloaded the model locally, and it takes a particularly long time to load. How can I load the local cache directly without connecting to the network?

(screenshots attached showing the long model-loading time)

How to get this file: models/text-to-video/model_index.json

"model.process_text2video()" is working perfectly now, but "model.process_controlnet_pose()" reports an error:
OSError: Error no file named model_index.json found in directory models/text-to-video.
I think it's possible that this file can't be found. Where can I get this file?

Code Release?

Edge guidance is cool, how long of a video can I create?

How to select the best model?

Hi there, thank you for your wonderful work. I have a naive question.

There are 154 models in the process_text2video model zoo. How can we select the most suitable pre-trained model for it?

Thank you.

Device type CUDA is not supported for torch.Generator() api

I tried running the inference API script and I get an error message saying that the CUDA device type is not supported.

import torch
from model import Model

model = Model(device = "cuda", dtype = torch.float16)

error message:

File "C:\...\model.py", line 30, in __init__
self.generator = torch.Generator(device = device)

RuntimeError: Device CUDA is not supported for torch.Generator( ) api.

System parameters:
Windows 10 enterprise
Display Adapters: Intel(R) iris(R) XE Graphics and NVIDIA RTX A1000 GPU
32G RAM

My thoughts and fixes to get this running

Right, here are the REAL instructions for using this repo;

python3.9 -m venv ./.venv
source ./.venv/bin/activate
pip install --upgrade pip
pip install wheel

make the following change in requirements.txt;

-opencv-contrib-python==4.3.0.36
+opencv-contrib-python==4.4.0.46

install requirements;

pip install -r requirements.txt

disable the app from sharing itself and your machine to the world, without your permission or knowledge of doing so;
ARE YOU BEING SERIOUS?!?

-_,_,link = demo.queue(api_open=False).launch(file_directories=['temporal'], share=True)
+_,_,link = demo.queue(api_open=False).launch(file_directories=['temporal'], share=False)

then, it should launch with python app.py, enter a prompt and wait for the models to download..... somewhere?!?!

Inference begins and then you end up with something that resembles 8 SD images stitched together with ffmpeg, a gif basically.


No requirements.txt file in repository

I was looking to experiment with this platform, but per the readme I need the requirements.txt file to install the dependencies. After further searching, the requirements.txt file is not located in the repository, nor is there a list of the dependencies required to operate this framework. Unless I'm missing something, could you add the requirements.txt file to the main repository?
Thanks for releasing this, I look forward to experimenting around with it!

about deflicker

Do you have any good solutions for the flickering issue in generated videos?

Face expression does not change

Hi, I am trying text2video-zero using pose/edge with a Dreambooth model, while the motion is good, I find that the output face expression doesn't change. Do you have any idea about what's going on?

install requirements.txt error

An error is reported when I execute: pip install -r requirements.txt
Error message:
ERROR: Could not find a version that satisfies the requirement decord==0.6.0 (from versions: none)
ERROR: No matching distribution found for decord==0.6.0

I am sure my Python version is 3.9.
I tried switching pip sources, but that didn't work either.

What should I do? Thanks!

Cross Frame Attention vs Sparse-Causal Attention

Hi, your work is amazing!

After reading your paper, I have one question. What exactly is the difference between Cross Frame Attention and the Sparse-Causal Attention from the Tune-A-Video paper?

Thank you.

custom model directory

Hi:
This is great work with amazing results, good job!
Do models have to be downloaded from Hugging Face?
Is it possible to support a custom model directory?

loading dreambooth model fails.

(sadtalker) ➜  Text2Video-Zero git:(main) ✗ python test.py
/home/oem/Documents/gitWorkspace/Text2Video-Zero/annotator/openpose/body.py:5: DeprecationWarning: Please use `gaussian_filter` from the `scipy.ndimage` namespace, the `scipy.ndimage.filters` namespace is deprecated.
  from scipy.ndimage.filters import gaussian_filter
/home/oem/Documents/gitWorkspace/Text2Video-Zero/annotator/openpose/hand.py:6: DeprecationWarning: Please use `gaussian_filter` from the `scipy.ndimage` namespace, the `scipy.ndimage.filters` namespace is deprecated.
  from scipy.ndimage.filters import gaussian_filter
/home/oem/miniconda3/envs/sadtalker/lib/python3.8/site-packages/skimage/util/dtype.py:27: DeprecationWarning: `np.bool8` is a deprecated alias for `np.bool_`.  (Deprecated NumPy 1.24)
  np.bool8: (False, True),
cuda
cuda
Traceback (most recent call last):
  File "test.py", line 10, in <module>
    model.process_controlnet_canny_db(dreambooth_model_path.as_posix(), video_path.as_posix(), prompt=prompt, save_path=out_path.as_posix())
  File "/home/oem/Documents/gitWorkspace/Text2Video-Zero/model.py", line 246, in process_controlnet_canny_db
    db_path = gradio_utils.get_model_from_db_selection(db_path)
  File "/home/oem/Documents/gitWorkspace/Text2Video-Zero/gradio_utils.py", line 68, in get_model_from_db_selection
    raise Exception
Exception

This specific model uses an accompanying realisticVisionV20_v20.vae.pt file.

from pathlib import Path
import torch
from model import Model

model = Model(device = "cuda", dtype = torch.float16)
prompt = 'sexy asian'
video_path = Path('./crop.mp4')
dreambooth_model_path = Path('/home/oem/Documents/gitWorkspace/stable-diffusion-webui/models/Stable-diffusion/realisticVisionV20_v20.safetensors')
out_path = Path(f'./{prompt}.gif')
model.process_controlnet_canny_db(dreambooth_model_path.as_posix(), video_path.as_posix(), prompt=prompt, save_path=out_path.as_posix())

seems like this needs to be more flexible.

does 'PAIR/controlnet-canny-anime' correspond to a .pt file?

def get_model_from_db_selection(db_selection):
    if db_selection == "Anime DB":
        input_video_path = 'PAIR/controlnet-canny-anime'
    elif db_selection == "Avatar DB":
        input_video_path = 'PAIR/controlnet-canny-avatar'
    elif db_selection == "GTA-5 DB":
        input_video_path = 'PAIR/controlnet-canny-gta5'
    elif db_selection == "Arcane DB":
        input_video_path = 'PAIR/controlnet-canny-arcane'
    else:
        raise Exception
    return input_video_path
 

OSError when using pose control

When I was testing the t2v with pose control, I meet this error:
OSError: fusing/stable-diffusion-v1-5-controlnet-openpose does not appear to have a file named diffusion_pytorch_model.bin.
It seems like there is file missing, how can I get it? Thanks!

about custom dreambooth model

Thank you for your good work, but I'm not sure how to use a custom dreambooth model.
Can you give me an example, please?

environment setup

I think it should be conda env create -f environment.yaml, but the readme says pip install -r requirements.txt.

Production questions; AutoSave and multiple runs and prompt lists?

It would be so great to have a video slave turning out versions all night long. For this, an autosave after every finished render would be great, plus the possibility to select a number of runs with different seeds. And perhaps most important: being able to input a prompt list from an external text file.

And again: Thank you ever so much!

torch.cuda.OutOfMemoryError

Thank you very much for your release. What are the graphics card configuration requirements?
torch.cuda.OutOfMemoryError appears when my RTX 3060 runs the example.

Using multiple GPUs?

Incredible project! I was working on something for editing video and had something working locally, but you released first. Anyway, is it possible to use more than one GPU to speed up the processing? E.g., in your example:

import torch
from model import Model

model = Model(device = "cuda", dtype = torch.float16)

prompt = "A horse galloping on a street"
params = {"t0": 44, "t1": 47 , "motion_field_strength_x" : 12, "motion_field_strength_y" : 12, "video_length": 8}

out_path, fps = f"./text2video_{prompt.replace(' ','_')}.mp4", 4
model.process_text2video(prompt, fps = fps, path = out_path, **params)

Could you possibly offer something like device = ["cuda:0", "cuda:1"]?

And if this isn't trivial, could you point me in the direction of where I should look to try and implement it, so I can hopefully submit a pull request with it?

class TextToVideoPipeline missing

I think text_to_video_pipeline.py is missing from the main branch. It might have to be added back; I had issues loading the model.

Using custom model

It seems a model_index.json file is required if I want to try a dreambooth model.
Most available models don't have this file. Is there a way to load a model without the json file or a way to generate it?

Control Nets Not creating Moving images

Hello, I am trying to create a moving image based on a depth map, but it seems there is very little movement, and the image does not match the depth map.

this is the original depth map
Imgur

this is an example of the output when typing prompt " A girl walks around New York City"
Imgur

Do others get similar results, or are you able to get a working video? Thank you for your help! I'm using Windows 10 and have had success with the text2video features.

about t2v timestep

Hello, thank you for your excellent work. I am curious why t0=881 and t1=941 in the text-to-video code. I mean, shouldn't x_T take the output at t=981?

Frame rate

How can we change the FPS when making a video?

RuntimeError: Device type CUDA is not supported for torch.Generator() api.

Traceback (most recent call last):
  File "H:\AI video\t2v0\Text2Video-Zero\app.py", line 14, in <module>
    model = Model(device='cuda', dtype=torch.float16)
  File "H:\AI video\t2v0\Text2Video-Zero\model.py", line 27, in __init__
    self.generator = torch.Generator(device=device)
RuntimeError: Device type CUDA is not supported for torch.Generator() api.

It worked fine before and I'm not sure what's changed; I tried reinstalling torch but it didn't help.
