
AnimateDiff for Stable Diffusion WebUI

This extension aims to integrate AnimateDiff w/ CLI into AUTOMATIC1111 Stable Diffusion WebUI w/ ControlNet. You can generate GIFs in exactly the same way as you generate images after enabling this extension.

This extension implements AnimateDiff in a different way. It does not require you to clone the whole SD1.5 repository. It also applies (probably) the fewest modifications to ldm, so you do not need to reload your model weights if you don't want to.

You might also be interested in another extension I created: Segment Anything for Stable Diffusion WebUI.


Update

  • 2023/07/20: v1.1.0: Fix GIF duration, add loop number, remove auto-download, remove xformers, remove instructions on gradio UI, refactor README, add sponsor QR code.
  • 2023/07/24: v1.2.0: Fix incorrect insertion of motion modules, add option to change the path to motion modules in Settings/AnimateDiff, fix loading different motion modules.
  • 2023/09/04: v1.3.0: Support any community models with the same architecture; fix grey problem via #63.
  • 2023/09/11: v1.4.0: Support official v2 motion module (different architecture: GroupNorm not hacked, UNet middle layer has motion module).
  • 2023/09/14: v1.4.1: Always change beta, alphas_cumprod and alphas_cumprod_prev to resolve grey problem in other samplers.
  • 2023/09/16: v1.5.0: Randomize init latent to support better img2gif; add other output formats and infotext output; add appending reversed frames; refactor code to ease maintaining.
  • 2023/09/19: v1.5.1: Support xformers, sdp, sub-quadratic attention optimization - VRAM usage decrease to 5.60GB with default setting.
  • 2023/09/22: v1.5.2: Option to disable xformers at Settings/AnimateDiff due to a bug in xformers, API support, option to enable GIF palette optimization at Settings/AnimateDiff, gifsicle optimization moved to Settings/AnimateDiff.
  • 2023/09/25: v1.6.0: Motion LoRA supported. See Motion Lora for more information.
  • 2023/09/27: v1.7.0: ControlNet supported. See ControlNet V2V for more information. Safetensors for some motion modules are also available now.
  • 2023/09/29: v1.8.0: Infinite generation supported. See WebUI Parameters for more information.
  • 2023/10/01: v1.8.1: Now you can uncheck Batch cond/uncond in Settings/Optimization if you want. This will reduce your VRAM (5.31GB -> 4.21GB for SDP) but take longer time.
  • 2023/10/08: v1.9.0: Prompt travel supported. You must have ControlNet installed (you do not need to enable ControlNet) to try it. See Prompt Travel for how to trigger this feature.
  • 2023/10/11: v1.9.1: Use state_dict key to guess mm version, replace match case with if else to support python<3.10, option to save PNG to custom dir (see Settings/AnimateDiff for detail), move hints to js, install imageio[ffmpeg] automatically when MP4 save fails.
  • 2023/10/16: v1.9.2: Add context generator to completely remove any closed loop, prompt travel supports closed loop, infotext fully supported including prompt travel, README refactor.
  • 2023/10/19: v1.9.3: Support webp output format. See #233 for more information.
  • 2023/10/21: v1.9.4: Save prompt travel to output images, Reverse merged to Closed loop (See WebUI Parameters), remove TimestepEmbedSequential hijack, remove hints.js, better explanation of several context-related parameters.
  • 2023/10/25: v1.10.0: Support img2img batch. You need ControlNet installed to make it work properly (you do not need to enable ControlNet). See ControlNet V2V for more information.

For the future update plan, please see here.

How to Use

  1. Update your WebUI to v1.6.0 and ControlNet to v1.1.410, then install this extension via link. I do not plan to support older versions.
  2. Download motion modules and put the model weights under stable-diffusion-webui/extensions/sd-webui-animatediff/model/. If you want to use another directory to save model weights, please go to Settings/AnimateDiff. See model zoo for a list of available motion modules.
  3. Enable Pad prompt/negative prompt to be same length in Settings/Optimization and click Apply settings. You must do this to prevent generating two separate, unrelated GIFs. Checking Batch cond/uncond is optional; it can improve speed at the cost of higher VRAM usage.
  4. DO NOT disable hash calculation; otherwise AnimateDiff will have trouble figuring out when you switch motion modules.

WebUI

  1. Go to txt2img if you want to try txt2gif and img2img if you want to try img2gif.
  2. Choose an SD1.5 checkpoint, write prompts, and set configurations such as image width/height. If you want to generate multiple GIFs at once, please change the batch number instead of the batch size.
  3. Enable AnimateDiff extension, set up each parameter, then click Generate.
  4. You should see the output GIF in the output gallery. You can access the GIF output at stable-diffusion-webui/outputs/{txt2img or img2img}-images/AnimateDiff. You can also access the image frames at stable-diffusion-webui/outputs/{txt2img or img2img}-images/{date}. You may choose to save frames for each generation into separate directories in Settings/AnimateDiff.

API

Use it just like you use ControlNet. Here is a sample. Due to a limitation of WebUI, you will not get a video back, only a list of generated frames. You will have to view the GIF in your file system, as mentioned in WebUI item 4. For the most up-to-date parameters, please read here.

'alwayson_scripts': {
  'AnimateDiff': {
    'args': [{
      'model': 'mm_sd_v15_v2.ckpt',   # Motion module
      'format': ['GIF'],      # Save format, 'GIF' | 'MP4' | 'PNG' | 'WEBP' | 'TXT'
      'enable': True,         # Enable AnimateDiff
      'video_length': 16,     # Number of frames
      'fps': 8,               # FPS
      'loop_number': 0,       # Display loop number
      'closed_loop': 'R+P',   # Closed loop, 'N' | 'R-P' | 'R+P' | 'A'
      'batch_size': 16,       # Context batch size
      'stride': 1,            # Stride
      'overlap': -1,          # Overlap
      'interp': 'Off',        # Frame interpolation, 'Off' | 'FILM'
      'interp_x': 10,         # Interp X
      'video_source': 'path/to/video.mp4',  # Video source
      'video_path': 'path/to/frames',       # Video path
      'latent_power': 1,      # Latent power
      'latent_scale': 32,     # Latent scale
      'last_frame': None,     # Optional last frame
      'latent_power_last': 1, # Optional latent power for last frame
      'latent_scale_last': 32 # Optional latent scale for last frame
    }]
  }
},
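
For reference, below is a minimal sketch of a complete txt2img API call carrying this payload. It assumes WebUI is running locally with --api enabled; the prompt and sampling values are placeholders. The /sdapi/v1/txt2img endpoint and the base64-encoded images field are standard A1111 API behavior.

import base64
import requests

payload = {
    'prompt': '1girl, walking on the beach',  # placeholder prompt
    'steps': 20,
    'width': 512,
    'height': 512,
    'alwayson_scripts': {
        'AnimateDiff': {
            'args': [{
                'enable': True,
                'model': 'mm_sd_v15_v2.ckpt',
                'format': ['GIF'],
                'video_length': 16,
                'fps': 8,
            }]
        }
    },
}

response = requests.post('http://127.0.0.1:7860/sdapi/v1/txt2img', json=payload)
# As noted above, the API returns individual frames (base64 strings), not a
# video; the assembled GIF is written to the output directory on disk.
for i, image in enumerate(response.json()['images']):
    with open(f'frame_{i:02d}.png', 'wb') as f:
        f.write(base64.b64decode(image))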

WebUI Parameters

  1. Save format — Format of the output. Choose at least one of "GIF" | "MP4" | "WEBP" | "PNG". Check "TXT" if you want infotext, which will be saved in the same directory as the output GIF. Infotext is also accessible via stable-diffusion-webui/params.txt and is embedded in outputs of all formats.

    1. You can optimize GIFs with gifsicle (apt install gifsicle required; read #91 for more information) and/or palette (read #104 for more information). Go to Settings/AnimateDiff to enable them.
    2. You can set quality and lossless for WEBP via Settings/AnimateDiff. Read #233 for more information.
  2. Number of frames — Choose whatever number you like.

    If you enter 0 (default):

    • If you submit a video via Video source, enter a video path via Video path, or enable ANY batch ControlNet, the number of frames will be the number of frames in the video (the shortest if more than one video is submitted).
    • Otherwise, the number of frames will be your Context batch size described below.

    If you enter a non-zero number smaller than your Context batch size: you will get the first Number of frames frames as your output GIF from your whole generation. The following frames will not appear in your generated GIF, but will be saved as PNGs as usual. Do not set Number of frames to a non-zero value smaller than Context batch size because of #213.

  3. FPS — Frames per second, which is how many frames (images) are shown every second. If 16 frames are generated at 8 frames per second, your GIF’s duration is 2 seconds. If you submit a source video, your FPS will be the same as the source video.

  4. Display loop number — How many times the GIF is played. A value of 0 means the GIF never stops playing.

  5. Context batch size — How many frames will be passed into the motion module at once. The model is trained with 16 frames, so it’ll give the best results when the number of frames is set to 16. Choose [1, 24] for V1 motion modules and [1, 32] for V2 motion modules.

  6. Closed loop — Closed loop means that this extension will try to make the last frame the same as the first frame.

    1. When Number of frames > Context batch size (including when ControlNet is enabled, the source video frame number > Context batch size, and Number of frames is 0), closed loop will be performed by AnimateDiff's infinite context generator.
    2. When Number of frames <= Context batch size, the infinite context generator will not be effective. Only when you choose A will AnimateDiff append a reversed list of frames to the original list of frames to form a closed loop.

    See below for explanation of each choice:

    • N means absolutely no closed loop - this is the only available option if Number of frames is a non-zero value smaller than Context batch size.
    • R-P means that the extension will try to reduce the number of closed loop contexts. Prompt travel will not be interpolated to form a closed loop.
    • R+P means that the extension will try to reduce the number of closed loop contexts. Prompt travel will be interpolated to form a closed loop.
    • A means that the extension will aggressively try to make the last frame the same as the first frame. Prompt travel will be interpolated to form a closed loop.
  7. Stride — Max motion stride as a power of 2 (default: 1). See the sketch after this list for how frames are grouped.

    1. Due to a limitation of the infinite context generator, this parameter is effective only when Number of frames > Context batch size (including when ControlNet is enabled, the source video frame number > Context batch size, and Number of frames is 0).
    2. "Absolutely no closed loop" is only possible when Stride is 1.
    3. For each 1 <= $2^i$ <= Stride, the infinite context generator will try to make frames $2^i$ apart temporally consistent. For example, if Stride is 4 and Number of frames is 8, it will make the following frames temporally consistent:
      • Stride == 1: [0, 1, 2, 3, 4, 5, 6, 7]
      • Stride == 2: [0, 2, 4, 6], [1, 3, 5, 7]
      • Stride == 4: [0, 4], [1, 5], [2, 6], [3, 7]
  8. Overlap — Number of frames to overlap in context. If overlap is -1 (default), your overlap will be Context batch size // 4.

    1. Due to a limitation of the infinite context generator, this parameter is effective only when Number of frames > Context batch size (including when ControlNet is enabled, the source video frame number > Context batch size, and Number of frames is 0).
  9. Frame Interpolation — Interpolate between frames with Deforum's FILM implementation. Requires the Deforum extension. #128

  10. Interp X — Replace each input frame with X interpolated output frames. #128.

  11. Video source — [Optional] Video source file for ControlNet V2V. You MUST enable ControlNet. It will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to ControlNet panel. You can of course submit one control image via Single Image tab or an input directory via Batch tab, which will override this video source input and work as usual.

  12. Video path — [Optional] Folder for source frames for ControlNet V2V, but lower priority than Video source. You MUST enable ControlNet. It will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to ControlNet. You can of course submit one control image via Single Image tab or an input directory via Batch tab, which will override this video path input and work as usual.

    • For people who want to inpaint videos: enter a folder which contains two sub-folders, image and mask, in the ControlNet inpainting unit. These two sub-folders should contain the same number of images; the extension will match them in sequence. Using my Segment Anything extension can make your life much easier.
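
As referenced in the Stride parameter above, here is an illustrative sketch (hypothetical helper, not the extension's actual code) of how frames are grouped for each power-of-two stride up to the Stride setting:

def stride_groups(num_frames: int, max_stride: int) -> list:
    # For each stride 2^i <= max_stride, frames that are 2^i apart are
    # grouped together so the motion module can make them temporally consistent.
    groups = []
    stride = 1
    while stride <= max_stride:
        for offset in range(stride):
            groups.append(list(range(offset, num_frames, stride)))
        stride *= 2
    return groups

# stride_groups(8, 4) reproduces the Stride example above:
# [[0, 1, 2, 3, 4, 5, 6, 7], [0, 2, 4, 6], [1, 3, 5, 7], [0, 4], [1, 5], [2, 6], [3, 7]]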


Img2GIF

You need to go to img2img and submit an init frame via the A1111 panel. You can optionally submit a last frame via the extension panel.

By default, your init_latent will be changed to:

init_alpha = 1 - frame_number ** latent_power / latent_scale
init_latent = init_latent * init_alpha + random_tensor * (1 - init_alpha)

If you upload a last frame: your init_latent will be changed in a similar way. Read this code to understand how it works.
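
For illustration, here is a minimal sketch of the default blending described above (hypothetical names and shapes; the clamp at 0 is an assumption, so read the linked code for the actual behavior):

import torch

def blend_init_latent(init_latent: torch.Tensor, video_length: int,
                      latent_power: float = 1.0, latent_scale: float = 32.0) -> torch.Tensor:
    # init_latent: [4, H//8, W//8] latent of the init frame
    frame_number = torch.arange(video_length, dtype=torch.float32)
    init_alpha = 1.0 - frame_number ** latent_power / latent_scale
    init_alpha = init_alpha.clamp(min=0.0).view(-1, 1, 1, 1)  # assumed clamp at 0
    random_tensor = torch.randn(video_length, *init_latent.shape)
    # Early frames keep more of the init latent; later frames get more noise.
    return init_latent.unsqueeze(0) * init_alpha + random_tensor * (1 - init_alpha)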

Motion LoRA

Download and use them like any other LoRA (example: download a motion LoRA to stable-diffusion-webui/models/Lora and add <lora:v2_lora_PanDown:0.8> to your positive prompt). Motion LoRAs only support V2 motion modules.

Prompt Travel

Write your positive prompt following the example below.

The first line is the head prompt, which is optional. You can write no, single, or multiple lines of head prompts.

The second and third lines are for prompt interpolation, in the format frame number: prompt. Your frame numbers should be in ascending order and smaller than the total Number of frames. Frame numbers are 0-indexed.

The last line is the tail prompt, which is optional. You can write no, single, or multiple lines of tail prompts. If you don't need this feature, just write prompts in the old way.

1girl, yoimiya (genshin impact), origen, line, comet, wink, Masterpiece, BestQuality. UltraDetailed, <lora:LineLine2D:0.7>,  <lora:yoimiya:0.8>, 
0: closed mouth
8: open mouth
smile
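
A rough sketch of how such a prompt can be split into head, per-frame, and tail parts (illustrative only; this is not the extension's actual parser):

def parse_prompt_travel(prompt: str):
    head, frames, tail = [], {}, []
    for line in prompt.splitlines():
        line = line.strip()
        if not line:
            continue
        left, sep, right = line.partition(':')
        if sep and left.strip().isdigit():
            frames[int(left)] = right.strip()  # "frame number: prompt" line
        elif frames:
            tail.append(line)  # lines after the numbered block are tail prompts
        else:
            head.append(line)  # lines before the numbered block are head prompts
    return head, frames, tail

# For the example above:
# head   -> the first (LoRA-laden) line
# frames -> {0: 'closed mouth', 8: 'open mouth'}
# tail   -> ['smile']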

ControlNet V2V

You need to go to txt2img / img2img batch and submit a source video or a path to frames. Each ControlNet unit will find control images according to this priority:

  1. ControlNet Single Image tab or Batch tab. Simply uploading a control image or a directory of control frames is enough.
  2. Img2img Batch tab Input directory if you are using img2img batch. If you upload a directory of control frames, it will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to ControlNet panel.
  3. AnimateDiff Video Source. If you upload a video through Video Source, it will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to ControlNet panel.
  4. AnimateDiff Video Path. If you upload a path to frames through Video Path, it will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to ControlNet panel.

Number of frames will be capped to the minimum number of images among all folders you provide. Each control image in each folder will be applied to one single frame. If you upload one single image for a ControlNet unit, that image will control ALL frames.
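
The priority above can be summarized in a short sketch (hypothetical names; not the extension's actual code):

def resolve_control_source(unit_input, img2img_batch_dir, video_source, video_path):
    # Returns the control source a ControlNet unit will use, in priority order.
    for source in (unit_input,          # 1. unit's Single Image / Batch tab
                   img2img_batch_dir,   # 2. img2img Batch Input directory
                   video_source,        # 3. AnimateDiff Video source
                   video_path):         # 4. AnimateDiff Video path
        if source:
            return source
    return None  # no control input for this unit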

For people who want to inpaint videos: enter a folder which contains two sub-folders, image and mask, in the ControlNet inpainting unit. These two sub-folders should contain the same number of images; the extension will match them in sequence. Using my Segment Anything extension can make your life much easier.

AnimateDiff in img2img batch is available since v1.10.0.

Model Zoo

VRAM

Actual VRAM usage depends on your image size and context batch size. You can reduce image size or context batch size to reduce VRAM usage. The data below were tested on Ubuntu 22.04, NVIDIA 4090, torch 2.0.1+cu117, H=W=512, frame=16 (default setting). "w/" and "w/o" mean Batch cond/uncond in Settings/Optimization is checked and unchecked, respectively.

Optimization      VRAM w/   VRAM w/o
No optimization   12.13GB   -
xformers/sdp      5.60GB    4.21GB
sub-quadratic     10.39GB   -

Batch Size

Batch size on WebUI will be replaced by the GIF frame number internally: 1 full GIF is generated per batch. If you want to generate multiple GIFs at once, please change the batch number.

Batch number is NOT the same as batch size. In A1111 WebUI, batch number sits above batch size. Batch number means the number of sequential steps, while batch size means the number of parallel steps. You do not have to worry much when you increase the batch number, but you do need to worry about your VRAM when you increase the batch size (which, in this extension, is the video frame number). You do not need to change the batch size at all when you are using this extension.

We are currently developing an approach to support batch size on WebUI.

FAQ

  1. Q: Will ADetailer be supported?

    A: I'm not planning to support ADetailer. However, I plan to refactor my Segment Anything to achieve similar effects.

Demo

Basic Usage

[Demo GIFs: AnimateDiff extension and img2img outputs]

Motion LoRA

[Demo GIFs: no LoRA, PanDown, PanLeft]

Prompt Travel

[Demo GIF: prompt travel]

The prompt is similar to the example above.

ControlNet V2V

TODO

Tutorial

TODO

Thanks

I thank the researchers from Shanghai AI Lab, especially @guoyww, for creating AnimateDiff. I also thank @neggles and @s9roll7 for creating and improving AnimateDiff CLI Prompt Travel. This extension would not have been possible without these creative works.

I also thank the community developers and the many others who have contributed to this extension.

I also thank community users, especially @streamline, who provided the dataset and workflow for ControlNet V2V. His workflow is amazing and definitely worth checking out.

Star History

[Star History Chart]

Sponsor

You can sponsor me via WeChat, AliPay or PayPal. You can also support me via patreon, ko-fi or afdian.

[QR codes: WeChat, AliPay, PayPal]

