
comfyui-dynamicrafterwrapper's People

Contributors

haohaocreates, kijai, naomi-ken-korem, painebenjamin, phr00t


comfyui-dynamicrafterwrapper's Issues

Out of Memory issue

Hi,
Thank you for your great work! I ran into an out-of-memory error when trying to use the dynamicrafter_1024 model to generate video from 1024x576 images on a 48GB RTX 6000. Also, when I tried to generate video from smaller-resolution images such as 320x320, it failed with:

AssertionError: Resulting tensor containts NaNs. I'm unsure why this happens, changing step count and/or image dimensions might help.

Thank you again.

Batch Prompt Scheduling with Batch Interpolation?


I think this is how batch image interpolation could work, right? Ultimately I want to interpolate between 3 frames, with 2 "transition prompts" that go from frame 1->2 and then 2->3. I converted the "prompt" input to a string input and I want to feed it a batch of strings.

I'm a developer myself, so I may be able to help implement this. Is this already supposed to work somehow, or is it planned? I don't see how batch interpolation could work without a batch prompt input of some kind; a sketch of what I mean follows this paragraph.
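A minimal sketch of the idea, assuming a hypothetical helper inside the node (the split_batch_prompts name and the newline-separated prompt convention are mine, not the wrapper's actual API): map a newline-separated "batch prompt" string onto the transitions between consecutive keyframes, so N keyframes get N-1 transition prompts.

# Illustrative only: split a batch prompt string into per-transition prompts.
def split_batch_prompts(prompt: str, num_keyframes: int) -> list[str]:
    prompts = [p.strip() for p in prompt.splitlines() if p.strip()]
    num_transitions = num_keyframes - 1
    if len(prompts) == 1:
        return prompts * num_transitions  # reuse a single prompt for every segment
    if len(prompts) != num_transitions:
        raise ValueError(f"Expected {num_transitions} prompts, got {len(prompts)}")
    return prompts

# Example: 3 keyframes -> 2 transition prompts (frame 1->2, then frame 2->3)
print(split_batch_prompts("a cat turns its head\nthe cat closes its eyes", 3))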

DownloadAndLoadDynamiCrafterModel: Repository Not Found for url: https://huggingface.co/api/models/Kijai/DynamiCrafter_pruned/revision/main.

When trying the tooncrafter_example_01 workflow, I can't get past DownloadAndLoadDynamiCrafterModel even though I already downloaded the models into the checkpoints folder.

401 Client Error. (Request ID: Root=1-555..3)

Repository Not Found for url: https://huggingface.co/api/models/Kijai/DynamiCrafter_pruned/revision/main.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
User Access Token "testing" is expired

  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 81, in loadmodel
    snapshot_download(repo_id="Kijai/DynamiCrafter_pruned",
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 119, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\_snapshot_download.py", line 255, in snapshot_download
    raise api_call_error
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\_snapshot_download.py", line 186, in snapshot_download
    repo_info = api.repo_info(repo_id=repo_id, repo_type=repo_type, revision=revision, token=token)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 119, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\hf_api.py", line 2418, in repo_info
    return method(
           ^^^^^^^
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 119, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\hf_api.py", line 2228, in model_info
    hf_raise_for_status(r)
  File "C:\Users\Desktop\ComfyCloud\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_errors.py", line 352, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
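The 401 above comes from the expired cached access token ("testing") rather than from the repository actually being missing; Kijai/DynamiCrafter_pruned is public. A minimal workaround sketch, assuming the huggingface_hub API and an illustrative target folder (not necessarily where the node expects the files): download anonymously so the stale token is not sent, or log out with huggingface-cli logout and retry.

# Illustrative workaround, not the node's code: download anonymously so the
# expired cached token is not attached to the request.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Kijai/DynamiCrafter_pruned",
    local_dir="ComfyUI/models/checkpoints/dynamicrafter",  # illustrative path
    token=False,  # do not send the cached (expired) token
)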


ToonCrafter is obfuscated by being implemented in this repo

Hey Kij! Thanks so much for the work you already put into this repo. My worry is that it isn't obvious from the title that ToonCrafter is included here, and it's especially not obvious when the repo is marked as a wrapper. This repo is probably going to be the first one people see, since it follows the standard naming convention in the title. I'll be migrating the Comfy implementation code over to that repo instead of this one, so things are less confusing. I'll make sure you are properly attributed, since you already put so much into it. Let me know if there are any issues with that before I start machine-gunning some PRs.

AttributeError: 'LatentVisualDiffusion' object has no attribute 'unsqueeze'

!!! Exception during processing !!!
Traceback (most recent call last):
  File "E:\IMAGE\ComfyUI-base-py311\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\IMAGE\ComfyUI-base-py311\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\IMAGE\ComfyUI-base-py311\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\IMAGE\ComfyUI-base-py311\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 237, in process
    mask = F.interpolate(mask.unsqueeze(0), size=(H // 8, W // 8), mode="nearest")
  File "E:\IMAGE\ComfyUI-base-py311\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LatentVisualDiffusion' object has no attribute 'unsqueeze'

[profiler] #125 LayerUtility: ImageScaleByAspectRatio V2: 0.0 seconds, total 0.0 seconds(#118)
[profiler] #130 LayerUtility: ImageMaskScaleAs: 0.0 seconds, total 0.0 seconds(#125 #116)
[profiler] #129 ChinesePrompt_Mix: 1.7494 seconds, total 1.7494 seconds
[profiler] #121 DynamiCrafterI2V: 1.4825 seconds, total 3.2319 seconds(#129 #119 #125 #130 #143)
[profiler] #122 GetImageRangeFromBatch: 0.001 seconds, total 3.2329 seconds(#121)
[profiler] #123 RIFE VFI: 0.0009 seconds, total 3.2338 seconds(#122)
[profiler] #120 VHS_VideoCombine: 0.0007 seconds, total 3.2345 seconds(#123)
'🔥 - 11 Nodes not included in prompt but is activated'
[rgthree] Using rgthree's optimized recursive execution.
input prompt 风吹着树叶,眨眼,迷人的微笑
output prompt 风吹着树叶, 眨眼, 迷人的微笑
correct_prompt_syntax:: ['风吹着树叶, 眨眼, 迷人的微笑']
test en_text ['♪ The wind blows the leaves, blinks, smiles ♪']
input prompt ♪ The wind blows the leaves, blinks, smiles ♪
output prompt ♪ The wind blows the leaves, blinks, smiles ♪
VAE using dtype: torch.bfloat16
!!! Exception during processing !!!
Traceback (most recent call last):
  File "E:\IMAGE\ComfyUI-base-py311\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\IMAGE\ComfyUI-base-py311\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\IMAGE\ComfyUI-base-py311\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\IMAGE\ComfyUI-base-py311\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 237, in process
    mask = F.interpolate(mask.unsqueeze(0), size=(H // 8, W // 8), mode="nearest")
  File "E:\IMAGE\ComfyUI-base-py311\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LatentVisualDiffusion' object has no attribute 'unsqueeze'
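For reference, the failing line is the node's mask handling; the traceback shows that the object arriving in the mask variable is the LatentVisualDiffusion model itself, which is why .unsqueeze() does not exist on it. A hypothetical guard (not the repo's actual code) that would make the misconnection explicit instead of failing inside PyTorch:

# Illustrative only: validate the mask input before resizing it.
import torch
import torch.nn.functional as F

def resize_mask(mask, H, W):
    if not isinstance(mask, torch.Tensor):
        raise TypeError(
            f"Expected a mask tensor, got {type(mask).__name__}; "
            "check that the node's mask input is connected to a MASK, not the model."
        )
    return F.interpolate(mask.unsqueeze(0), size=(H // 8, W // 8), mode="nearest")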

I encountered an error in the first step

Error occurred when executing DownloadAndLoadDynamiCrafterModel:

No module named 'pytorch_lightning'

File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper-main\nodes.py", line 129, in loadmodel self.model = instantiate_from_config(model_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper-main\utils\utils.py", line 33, in instantiate_from_config return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper-main\utils\utils.py", line 42, in get_obj_from_str return getattr(importlib.import_module(module, package=package_directory_name), cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib_init_.py", line 126, in import_module File "", line 1204, in _gcd_import File "", line 1176, in _find_and_load File "", line 1147, in _find_and_load_unlocked File "", line 690, in _load_unlocked File "", line 940, in exec_module File "", line 241, in _call_with_frames_removed File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper-main\lvdm\models\ddpm3d.py", line 19, in import pytorch_lightning as pl

Request to add a video resolution option

Although the official requirement is a fixed width and height, I have tried other sizes, for example 576 wide by 1024 high, and the result is also okay.

00067-1266358409_sample0.mp4

Inconsistent Black Output


I haven't figured out why, but sometimes I'll just get black output from DynamiCrafter. As you can see, DynamiCrafter did get 3 proper 512x512 input images and the parameters all looked OK. It still took a long time to process, only to produce the black output.


That above "RuntimeWarning" on line 1433 (the first one) is related to saving the images, as I suspect they are not being outputted correctly from the DynamiCrafter node (so it has nothing proper to save).

I've noticed this a few times; closing the server, reopening it, and trying a different seed makes the problem go away... but it keeps coming back randomly.
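A small diagnostic sketch (not part of the node) that would make this failure explicit instead of silently saving black frames; the assumption here is that black output corresponds to NaN or near-zero decoded frames, which matches the NaN assert other users report:

# Illustrative check on the decoded frames before handing them to a save node.
import torch

def check_frames(frames: torch.Tensor) -> None:
    if torch.isnan(frames).any():
        raise RuntimeError("Decoded frames contain NaNs; try a different seed, step count, or dtype.")
    if frames.abs().max() < 1e-4:
        raise RuntimeError("Decoded frames are essentially black; the sampler likely produced degenerate latents.")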

Latest version gives OOM

After updating to the latest version, I am getting out-of-memory errors, but I did manage to use it successfully many times before I updated this node. Is there a way to reinstall the previous version of the node?

No operator found for memory_efficient_attention_forward with inputs:

I have this error, can anybody help me?

Error occurred when executing DynamiCrafterInterp Simple:

No operator found for memory_efficient_attention_forward with inputs:
query : shape=(80, 2560, 1, 64) (torch.float16)
key : shape=(80, 2560, 1, 64) (torch.float16)
value : shape=(80, 2560, 1, 64) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
decoderF is not supported because:
xFormers wasn't build with CUDA support
attn_bias type is <class 'NoneType'>
operator wasn't built - see python -m xformers.info for more info
[email protected] is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
cutlassF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
xFormers wasn't build with CUDA support
dtype=torch.float16 (supported: {torch.float32})
operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 64

File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\nodes.py", line 103, in run_inference
imgs= model.get_image(image, prompt, steps, cfg_scale, eta, motion, seed,image1)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\scripts\gradio\i2v_test_application.py", line 106, in get_image
batch_samples = batch_ddim_sampling(model, cond, noise_shape, n_samples=1, ddim_steps=steps, ddim_eta=eta, cfg_scale=cfg_scale)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\scripts\evaluation\funcs.py", line 59, in batch_ddim_sampling
samples, _ = ddim_sampler.sample(S=ddim_steps,
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\lvdm\models\samplers\ddim.py", line 113, in sample
samples, intermediates = self.ddim_sampling(conditioning, size,
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\lvdm\models\samplers\ddim.py", line 186, in ddim_sampling
outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\lvdm\models\samplers\ddim.py", line 222, in p_sample_ddim
e_t_cond = self.model.apply_model(x, t, c, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\models\ddpm3d.py", line 551, in apply_model
x_recon = self.model(x_noisy, t, **cond, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\models\ddpm3d.py", line 714, in forward
out = self.diffusion_model(xc, t, context=cc, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\modules\networks\openaimodel3d.py", line 583, in forward
h = module(h, emb, context=context, batch_size=b)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\modules\networks\openaimodel3d.py", line 41, in forward
x = layer(x, context)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\modules\attention.py", line 304, in forward
x = block(x, context=context, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\modules\attention.py", line 239, in forward
return checkpoint(self._forward, input_tuple, self.parameters(), self.checkpoint)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\common.py", line 94, in checkpoint
return func(*inputs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\modules\attention.py", line 243, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None, mask=mask) + x
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in call_impl
return forward_call(*args, **kwargs)
File "D:\deepfake\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-image-picker....\custom_nodes\ComfyUI-DynamiCrafter\lvdm\modules\attention.py", line 175, in efficient_forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=None)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha_init.py", line 247, in memory_efficient_attention
return memory_efficient_attention(
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha_init.py", line 365, in _memory_efficient_attention
return memory_efficient_attention_forward(
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha_init.py", line 381, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\dispatch.py", line 125, in _dispatch_fw
return _run_priority_list(
File "D:\deepfake\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\dispatch.py", line 65, in _run_priority_list
raise NotImplementedError(msg)
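Every backend above reports "xFormers wasn't build with CUDA support", so the installed xformers wheel is a CPU-only build rather than a genuinely missing operator. A quick check, assuming it is run with the same Python environment ComfyUI uses (python -m xformers.info, mentioned in the error, prints the same information):

# If this raises NotImplementedError, the xformers wheel has no CUDA kernels and
# needs to be reinstalled with a build that matches your torch/CUDA versions.
import torch
import xformers.ops

q = k = v = torch.randn(1, 128, 1, 64, device="cuda", dtype=torch.float16)
out = xformers.ops.memory_efficient_attention(q, k, v)
print("xformers CUDA attention OK:", out.shape)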


ToonCrafter flash between image transitions

I'm noticing a dulling/fading between the image transitions in ToonCrafter outputs

Example video:

AnimateDiff_00003.mp4

Each frame (with and without last image pruning)

Screenshot 2024-06-01 at 23 21 35 Screenshot 2024-06-01 at 23 22 23

Workflow (just a screenshot):

Screenshot 2024-06-01 at 23 24 49

XFormers not available

I tried to install xformers, and the terminal reported that the installation was successful. Why is there still an error when running the DynamiCrafterWrapper workflow? I have tried various methods, but there is no solution. I would like to ask for your help.

How to make 1024x576 video?

How can I make a video sized 1024x576?

I tried putting a size of 1024x576 using the 512 checkpoint, and the generated result does not move.
Besides, only the 512 interpolation works... the 1024 checkpoint produces garbage.

ModuleNotFoundError: No module named 'decord'

File "G:\AI\ComfyUI\ComfyUI\nodes.py", line 1888, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 940, in exec_module
File "", line 241, in call_with_frames_removed
File "G:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper_init
.py", line 1, in
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "G:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 5, in
from .scripts.evaluation.funcs import load_model_checkpoint, get_latent_z
File "G:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\scripts\evaluation\funcs.py", line 4, in
from decord import VideoReader, cpu
ModuleNotFoundError: No module named 'decord'

Cannot import G:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper module for custom nodes: No module named 'decord'

Error when executing ToonCrafterDecode: "no XFormers" even though I do have it.

I already had xformers installed, and I reinstalled it too, but it still gives me this error.

For info, I am running this on an RTX 4050 laptop GPU with 6 GB VRAM:
LoadDynamiCrafterModel: ToonCrafter fp16
dtype: bf16
fp8 UNet: enabled

Image size: 384x216

The first generation makes it to ToonCrafterInterpolation, but when it gets to ToonCrafterDecode it gives me this:

Screenshot 2024-06-02 135733

Error occurred when executing ToonCrafterDecode:

XFormers not available, it is required for ToonCrafter decoder. Alternatively you can use a standard VAE Decode -node instead, but this has a negative effect on the image quality though.

File "H:\AI\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\AI\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 670, in process
raise Exception("XFormers not available, it is required for ToonCrafter decoder. Alternatively you can use a standard VAE Decode -node instead, but this has a negative effect on the image quality though.")

Does the recent refactor compromise output quality?

After the recent update, it looks like the overall quality of the animation has dropped. In the example below there is very minimal change in the background, yet the generation is frantic in its transition. In my testing 2 days earlier, it was simply a nice sideways-moving background transition.

2024-06-04.17-46-41.mp4

Even when there is no background movement, the character movement has also become much less consistent.

Is it possible to revert to how it worked before?

TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

Trying to run ToonCrafter through the node on an M1 Max MBP, using the example workflow in the examples folder. With auto dtype it runs the model in fp16 and the VAE in fp32, which causes this error: RuntimeError: Input type (c10::Half) and bias type (float) should be the same. Running it with fp32 set for both gave me a CUDA error, but I simply swapped the device in "comfyui.git/app/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/models/samplers/ddim.py", line 21, in register_buffer (attr = attr.to(torch.device("cuda"))) from cuda to mps. Doing this, I ran into another error:
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

Using fp16 for vae dtype causes the following error (all other models are also set to fp16 here):
comfyui.git/app/env/lib/python3.10/site-packages/torch/nn/functional.py", line 4089, in interpolate
return torch._C._nn._upsample_bicubic2d_aa(input, output_size, align_corners, scale_factors)
RuntimeError: "compute_index_ranges_weights" not implemented for 'Half'

I'm fine with any method of running it, but I just can't seem to do so currently. Any idea how to fix it?
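A hypothetical patch sketch for the register_buffer helper in lvdm/models/samplers/ddim.py on Apple Silicon (illustrative, not the repo's code): besides moving buffers to "mps" instead of "cuda", any float64 buffers have to be downcast to float32 first, since MPS has no float64 support, which is what the TypeError above is complaining about.

# Illustrative MPS-friendly variant of the sampler's register_buffer helper.
import torch

def register_buffer(self, name, attr):
    if isinstance(attr, torch.Tensor):
        if attr.dtype == torch.float64:
            attr = attr.to(torch.float32)  # MPS cannot hold float64 tensors
        device = "mps" if torch.backends.mps.is_available() else "cuda"
        attr = attr.to(torch.device(device))
    setattr(self, name, attr)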

Have you considered doing a node for "SEINE" ? - Very similar IMG+TXT-->Vid

Hey Kijai,

Really appreciate your work lately.

Wasn't aware of this project, but it reminds me a lot of SEINE, which really flew under the radar imho. It was released the same week as SVD, so perhaps that's why, but it also does image-to-video and uses a text prompt. I find the results pretty good, but my only method of using it (as a non-developer) is via the command line.

Thanks for considering, and sorry if a wrapper/node is already out there somewhere - I have looked.

Repo: https://github.com/Vchitect/SEINE

Example vid that was on reddit: https://old.reddit.com/r/StableDiffusion/comments/182w6ab/seine_imgtxtprompt_sd_video_model_that_also_came/

ERROR:assert not torch.isnan(samples).any().item(), "Resulting tensor containts NaNs. I'm unsure why this happens, changing step count and/or image dimensions might help."

ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 277, in process
assert not torch.isnan(samples).any().item(), "Resulting tensor containts NaNs. I'm unsure why this happens, changing step count and/or image dimensions might help."
AssertionError: Resulting tensor containts NaNs. I'm unsure why this happens, changing step count and/or image dimensions might help.

Error occurred when executing DynamiCrafterModelLoader:

Error occurred when executing DynamiCrafterModelLoader:

cannot import name '_is_local_file_protocol' from 'lightning_fabric.utilities.cloud_io' (E:\ComfyUI\venv\lib\site-packages\lightning_fabric\utilities\cloud_io.py)

File "E:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 81, in loadmodel
self.model = instantiate_from_config(model_config)
File "E:\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\utils\utils.py", line 33, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "E:\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\utils\utils.py", line 42, in get_obj_from_str
return getattr(importlib.import_module(module, package=package_directory_name), cls)
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\importlib_init_.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1050, in _gcd_import
File "", line 1027, in _find_and_load
File "", line 1006, in _find_and_load_unlocked
File "", line 688, in load_unlocked
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "E:\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\models\ddpm3d.py", line 19, in
import pytorch_lightning as pl
File "E:\ComfyUI\venv\lib\site-packages\pytorch_lightning_init
.py", line 27, in
from pytorch_lightning.callbacks import Callback # noqa: E402
File "E:\ComfyUI\venv\lib\site-packages\pytorch_lightning\callbacks_init
.py", line 24, in
from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint
File "E:\ComfyUI\venv\lib\site-packages\pytorch_lightning\callbacks\model_checkpoint.py", line 37, in
from lightning_fabric.utilities.cloud_io import _is_dir, _is_local_file_protocol, get_filesystem

Getting a Giant Error Message When Attempting to Run

This is the error I receive in my Comfy cmd window. It's not pretty. 😂

I'm trying to run the "workflow.json" file installed with the custom node. I've downloaded both the 1024 and 512 models and placed them into my ComfyUI/models/DynamiCrafter folder. Does any of this gibberish ring a bell for you, oh Code Master?

got prompt
LatentVisualDiffusion: Running in v-prediction mode
AE working on z of shape (1, 4, 32, 32) = 4096 dimensions.
Loaded ViT-H-14 model config.
Loading pretrained ViT-H-14 weights (laion2b_s32b_b79k).
!!! Exception during processing !!!
Traceback (most recent call last):
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\nodes.py", line 45, in run_inference
    image2video = Image2Video('./tmp/', resolution=resolution,frame_length=frame_length)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\scripts\gradio\i2v_test.py", line 33, in __init__
    model = instantiate_from_config(model_config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafter\utils\utils.py", line 34, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\..\..\custom_nodes\ComfyUI-DynamiCrafter\lvdm\models\ddpm3d.py", line 680, in __init__
    super().__init__(*args, **kwargs)
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\..\..\custom_nodes\ComfyUI-DynamiCrafter\lvdm\models\ddpm3d.py", line 414, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\..\..\custom_nodes\ComfyUI-DynamiCrafter\lvdm\models\ddpm3d.py", line 447, in instantiate_cond_stage
    model = instantiate_from_config(config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\..\..\custom_nodes\ComfyUI-DynamiCrafter\utils\utils.py", line 34, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\..\..\custom_nodes\ComfyUI-DynamiCrafter\lvdm\modules\encoders\condition.py", line 188, in __init__
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\factory.py", line 324, in create_model_and_transforms
    model = create_model(
            ^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\factory.py", line 237, in create_model
    load_checkpoint(model, checkpoint_path)
  File "H:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\factory.py", line 112, in load_checkpoint
    incompatible_keys = model.load_state_dict(state_dict, strict=strict)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIP:
        Unexpected key(s) in state_dict: "visual.class_embedding", "visual.positional_embedding", "visual.proj", "visual.conv1.weight", "visual.ln_pre.weight", "visual.ln_pre.bias", "visual.transformer.resblocks.0.ln_1.weight", "visual.transformer.resblocks.0.ln_1.bias", "visual.transformer.resblocks.0.attn.in_proj_weight", "visual.transformer.resblocks.0.attn.in_proj_bias", "visual.transformer.resblocks.0.attn.out_proj.weight", "visual.transformer.resblocks.0.attn.out_proj.bias", "visual.transformer.resblocks.0.ln_2.weight", "visual.transformer.resblocks.0.ln_2.bias", "visual.transformer.resblocks.0.mlp.c_fc.weight", "visual.transformer.resblocks.0.mlp.c_fc.bias", "visual.transformer.resblocks.0.mlp.c_proj.weight", "visual.transformer.resblocks.0.mlp.c_proj.bias", [the same ln_1/attn/ln_2/mlp keys repeat for visual.transformer.resblocks.1 through visual.transformer.resblocks.31], "visual.ln_post.weight", "visual.ln_post.bias".

Prompt executed in 25.48 seconds

Add a "Image Resolution Standardization" node.

Hi, may I ask if it's possible to add an image scaling node that standardizes the resolution of the scaled image, in order to avoid situations like the following after scaling?

Error occurred when executing VHS_VideoCombine:

An error occured in the ffmpeg subprocess:
[libx264 @ 0x55cb2a2dedc0] width not divisible by 2 (759x1024)
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height


File "/mnt/workspace/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/mnt/workspace/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/mnt/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/mnt/workspace/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py", line 358, in combine_video
output_process.send(images.tobytes())
File "/mnt/workspace/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py", line 130, in ffmpeg_process
raise Exception("An error occured in the ffmpeg subprocess:\n" \

I think an image preprocessing method like the one in the official demo would also be good: filling the sides of the image with black so that the resolution is standardized. Of course, in ComfyUI this resolution can be relatively free, as long as it is a multiple of 32 (not sure, maybe multiples of 16 are also acceptable?). A padding sketch follows below.
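A minimal sketch of the kind of preprocessing meant here (illustrative, not an existing node): pad the image with black on the right and bottom so both dimensions become divisible by a chosen multiple, which also avoids ffmpeg's "width not divisible by 2" failure above. The pad_to_multiple name and the multiple of 16 are assumptions for the example.

# Illustrative padding helper for ComfyUI-style IMAGE tensors laid out as (B, H, W, C).
import torch
import torch.nn.functional as F

def pad_to_multiple(image: torch.Tensor, multiple: int = 16) -> torch.Tensor:
    _, h, w, _ = image.shape
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    x = image.permute(0, 3, 1, 2)                  # -> (B, C, H, W) for F.pad
    x = F.pad(x, (0, pad_w, 0, pad_h), value=0.0)  # black padding on right/bottom
    return x.permute(0, 2, 3, 1)                   # back to (B, H, W, C)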

Errors during model loading

Looks like I ran into the same set of errors that occur when using the other fork as well.
Interestingly, in this case I had to queue a prompt before the error started appearing, while with the other fork it appeared while ComfyUI was loading.

I did do the requirements.txt installation as well.

Error occurred when executing DynamiCrafterModelLoader:
cannot import name '_TORCH_GREATER_EQUAL_1_13' from 'lightning_fabric.utilities.imports' (C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\lightning_fabric\utilities\imports.py)

File "C:\tools\StabilityMatrix\Packages\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\tools\StabilityMatrix\Packages\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\tools\StabilityMatrix\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\tools\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 76, in loadmodel
self.model = instantiate_from_config(model_config)
File "C:\tools\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\utils\utils.py", line 33, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\tools\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\utils\utils.py", line 42, in get_obj_from_str
return getattr(importlib.import_module(module, package=package_directory_name), cls)
File "importlib_init_.py", line 126, in import_module
File "", line 1050, in _gcd_import
File "", line 1027, in find_and_load
File "", line 1006, in find_and_load_unlocked
File "", line 688, in load_unlocked
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "C:\tools\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\models\ddpm3d.py", line 19, in
import pytorch_lightning as pl
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning_init
.py", line 27, in
from pytorch_lightning.callbacks import Callback # noqa: E402
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\callbacks_init
.py", line 29, in
from pytorch_lightning.callbacks.pruning import ModelPruning
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\callbacks\pruning.py", line 31, in
from pytorch_lightning.core.module import LightningModule
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\core_init
.py", line 16, in
from pytorch_lightning.core.module import LightningModule
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\core\module.py", line 62, in
from pytorch_lightning.trainer import call
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\trainer_init
.py", line 17, in
from pytorch_lightning.trainer.trainer import Trainer
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 46, in
from pytorch_lightning.loops import _PredictionLoop, TrainingEpochLoop
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\loops_init
.py", line 15, in
from pytorch_lightning.loops.evaluation_loop import _EvaluationLoop # noqa: F401
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\loops\evaluation_loop.py", line 29, in
from pytorch_lightning.loops.utilities import _no_grad_context, _select_data_fetcher, _verify_dataloader_idx_requirement
File "C:\tools\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\pytorch_lightning\loops\utilities.py", line 24, in
from lightning_fabric.utilities.imports import _TORCH_EQUAL_2_0, _TORCH_GREATER_EQUAL_1_13
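Not an official fix, but this ImportError usually means pytorch_lightning and lightning_fabric were installed from mismatched releases. A quick check of what is actually installed before reinstalling (the names below are the PyPI distribution names):

from importlib.metadata import version, PackageNotFoundError

for dist in ("pytorch-lightning", "lightning-fabric", "lightning", "torch"):
    try:
        print(dist, version(dist))
    except PackageNotFoundError:
        print(dist, "not installed")

# If the two lightning packages report different versions, reinstalling
# pytorch_lightning (which pulls a matching lightning_fabric) usually
# resolves the missing _TORCH_GREATER_EQUAL_1_13 symbol.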

blurriness in frames?

In some transitions, the output gets blurry and then clears up again. Is there a way to fix this?

Error occurred when executing DynamiCrafterModelLoader:

Error occurred when executing DynamiCrafterModelLoader:

An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 82, in loadmodel
self.model = instantiate_from_config(model_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\utils\utils.py", line 33, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\models\ddpm3d.py", line 680, in init
super().init(*args, **kwargs)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\models\ddpm3d.py", line 414, in init
self.instantiate_cond_stage(cond_stage_config)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\models\ddpm3d.py", line 447, in instantiate_cond_stage
model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\utils\utils.py", line 33, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\modules\encoders\condition.py", line 188, in init
model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\factory.py", line 324, in create_model_and_transforms
model = create_model(
^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\factory.py", line 231, in create_model
checkpoint_path = download_pretrained(pretrained_cfg, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\pretrained.py", line 434, in download_pretrained
target = download_pretrained_from_hf(model_id, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\pretrained.py", line 404, in download_pretrained_from_hf
cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 1325, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 1826, in _raise_on_head_call_error
raise LocalEntryNotFoundError(

When I try the workflow in ComfyUI, this error always occurs. Can anyone help me? Many thanks.
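Not a confirmed fix, but the traceback shows open_clip failing to download the pretrained ViT-H-14 (laion2b_s32b_b79k) weights from Hugging Face, so pre-fetching them into the local HF cache on a working connection usually gets past this. The repo id and filename below are what open_clip uses for that config in current releases; treat them as an assumption and check against your installed version:

from huggingface_hub import hf_hub_download

# Downloads (or reuses) the open_clip ViT-H-14 weights in the shared HF cache.
path = hf_hub_download(
    repo_id="laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
    filename="open_clip_pytorch_model.bin",
)
print("cached at:", path)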

How to use your updated bf16 pruned models?

How do I use your updated bf16 pruned models? In my experience they cause a significant increase in GPU memory usage and run extremely slowly. Is there any trick to using them more efficiently, or do I need to modify the code? Thanks.

Error with DynamiCrafterI2V

Kijai,
(you're amazing!!)
I'm testing your implementation of ToonCrafter, and I'm receiving this error. How can I fix it?

(snip)

Error occurred when executing DynamiCrafterI2V:

mat1 and mat2 shapes cannot be multiplied (257x1664 and 1280x1024)

File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 394, in process
img_emb = self.model.image_proj_model(cond_images)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\lvdm\modules\encoders\resampler.py", line 136, in forward
x = self.proj_in(x)
^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\COMFYUI_BETA\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
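Not certain, but the 1664 in the shape error is the token width of CLIP ViT-bigG (the SDXL vision encoder), while the image projector here expects ViT-H tokens of width 1280, so this usually means the wrong clip_vision checkpoint is selected. A rough way to check which encoder a clip_vision safetensors file actually contains (the path is just an example, adjust to yours):

from safetensors import safe_open

with safe_open("models/clip_vision/CLIP-ViT-H-fp16.safetensors", framework="pt") as f:
    for key in f.keys():
        if key.endswith("class_embedding"):
            width = f.get_slice(key).get_shape()[-1]
            print(key, "->", width)   # 1280 = ViT-H (expected), 1664 = ViT-bigG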

DynamiCrafterI2V fails with "Input type (float) and bias type (c10::BFloat16) should be the same"

Using latest commit 6d2b666
Error occurred when executing DynamiCrafterI2V:

Input type (float) and bias type (c10::BFloat16) should be the same

File "/home/aigc/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/aigc/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/aigc/ComfyUI/custom_nodes/ComfyUI-0246/utils.py", line 381, in new_func
res_value = old_func(*final_args, **kwargs)
File "/home/aigc/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/aigc/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/nodes.py", line 372, in process
z = get_latent_z(self.model, image.unsqueeze(2)) #bc,1,hw
File "/home/aigc/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/scripts/evaluation/funcs.py", line 66, in get_latent_z
z = model.encode_first_stage(x)
File "/home/aigc/ComfyUI/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/aigc/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/models/ddpm3d.py", line 504, in encode_first_stage
frame_batch = self.first_stage_model.encode(x[index:index+1,:,:,:])
File "/home/aigc/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/models/autoencoder_old.py", line 99, in encode
h = self.encoder(x)
File "/home/aigc/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/aigc/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/aigc/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/networks/ae_modules.py", line 436, in forward
hs = [self.conv_in(x)]
File "/home/aigc/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/aigc/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/aigc/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/aigc/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
image
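Not the exact node code, just a sketch of the usual workaround for this class of error, assuming the VAE weights were loaded in bfloat16 while the incoming image tensor is still float32; casting the input to the weights' dtype and device before encoding avoids the mismatch (names mirror the traceback but are illustrative):

import torch

def encode_first_frame(model, image: torch.Tensor) -> torch.Tensor:
    # Match the dtype/device of the first-stage (VAE) weights before encoding.
    weight = model.first_stage_model.encoder.conv_in.weight
    image = image.to(device=weight.device, dtype=weight.dtype)
    return model.encode_first_stage(image.unsqueeze(2))   # b,c,1,h,w as in funcs.py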

Garbage interpolation at frame counts <16

Is there any way to successfully run interpolation with fewer than 16 frames? I see the option for frame count, but 16 seems to be the only selection that works. If I try 4 or 8 frames, the result is just a noisy mess no matter how many steps are used:

image

Is there a trick to getting fewer frames to work? Generation is so much faster with fewer frames.

Unable to export videos of larger size

I found the following error during use.
The following error occurs when using the dynamicrafter_1024_v1_bf16 model:
image

The following error occurs when using the dynamicrafter_512_interp_v1_bf16 model:
image

When I want to export larger, higher-resolution videos, none of these three models (dynamicrafter_1024_v1_bf16, dynamicrafter_512_interp_v1_bf16, dynamicrafter_512_interp-fp16) work, and errors appear:
image

Better Prompting (please merge included commit)

For some reason when I try to make a pull request, it wants to remove some code and I don't know why...

Anyway, this is a better way of handling prompts! Now if you have lots of frames and you want a "global" prompt to apply to all of the transitions, you can easily do that by giving a "master" prompt separated with the ":" character. Then you list all of your individual prompts, separated with "|", and each of them is appended to the "master" prompt. Here is a simple example of how it works:

image

I already did all this and tested it; the commit is here, ready for merging:

42514f6

I fall back to the last available prompt if there are not enough, and I also handle the case where no "master" prompt is used. It should be good to go!
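For reference, a minimal sketch of the scheme as described above (my reading of the behaviour, not necessarily the exact code in the commit): an optional master prompt before ":" is prepended to every transition prompt, transition prompts are split on "|", and the last one is reused when there are fewer prompts than transitions.

def expand_prompts(prompt: str, num_transitions: int) -> list[str]:
    master, sep, rest = prompt.partition(":")
    if not sep:                       # no ":" means there is no master prompt
        master, rest = "", prompt
    parts = [p.strip() for p in rest.split("|") if p.strip()] or [""]
    prompts = []
    for i in range(num_transitions):
        local = parts[min(i, len(parts) - 1)]   # fall back to the last prompt
        prompts.append(f"{master.strip()} {local}".strip())
    return prompts

# expand_prompts("anime style: she grabs her hat | she puts it back on", 3)
# -> ['anime style she grabs her hat',
#     'anime style she puts it back on',
#     'anime style she puts it back on']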

Exception occurred: No module named 'lvdm' when executing `dynamicrafter_looping_example_01.json`

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
got prompt
!!! Exception during processing !!!
Traceback (most recent call last):
File "E:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\nodes.py", line 82, in loadmodel
self.model = instantiate_from_config(model_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\utils\utils.py", line 33, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper\utils\utils.py", line 42, in get_obj_from_str
return getattr(importlib.import_module(module, package=package_directory_name), cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib_init_.py", line 126, in import_module
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1126, in _find_and_load_unlocked
File "", line 241, in _call_with_frames_removed
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1126, in _find_and_load_unlocked
File "", line 241, in _call_with_frames_removed
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'lvdm'

I tried installing https://github.com/YingqingHe/LVDM, but it didn't help (the 'lvdm' module here is the one bundled inside this custom node).

Prompt executed in 0.02 seconds

AttributeError: module 'comfy.sd' has no attribute 'CLIPType'

Kijai,
Thanks for your contribution.

I'm trying the latest version of the [dynamicrafter_i2v_example_01.json] example and I got a CLIP error.

model checkpoint loaded.
Model using dtype: torch.float16
Loading model from: /home/my_wsl/ComfyUI/models/clip_vision/CLIP-ViT-H-fp16.safetensors
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/home/my_wsl/ComfyUI/execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/my_wsl/ComfyUI/execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/my_wsl/ComfyUI/custom_nodes/ComfyUI-0246/utils.py", line 363, in new_func
res_value = old_func(*final_args, **kwargs)
File "/home/my_wsl/ComfyUI/execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/my_wsl/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/nodes.py", line 164, in loadmodel
clip_type = comfy.sd.CLIPType.STABLE_DIFFUSION
AttributeError: module 'comfy.sd' has no attribute 'CLIPType'
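Not a fix in the node itself: comfy.sd.CLIPType only exists in newer ComfyUI builds, so this error normally just means the ComfyUI install is older than the wrapper expects and needs updating. A quick check from inside the ComfyUI environment (illustrative only):

import comfy.sd

if hasattr(comfy.sd, "CLIPType"):
    print("CLIPType available:", list(comfy.sd.CLIPType.__members__))
else:
    print("comfy.sd has no CLIPType - this ComfyUI is too old for the wrapper")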

Always says "CUDA Out of Memory"

image

I have a 12GB 4080 and I'm trying to run the 512x512 interpolation model on a pair of 512x512 images. It says it is requesting only 5GB of VRAM, which I should definitely have.

I'm trying to use the most basic workflow:

image

Error occurred when executing DynamiCrafterModelLoader

I got this error:

local variable 'config_file' referenced before assignment

File "/home/priya/PycharmProjects/ComfyUI/comfyui/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/priya/PycharmProjects/ComfyUI/comfyui/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/priya/PycharmProjects/ComfyUI/comfyui/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/priya/PycharmProjects/ComfyUI/comfyui/custom_nodes/ComfyUI-DynamiCrafterWrapper/nodes.py", line 79, in loadmodel
config = OmegaConf.load(config_file)
Screenshot from 2024-05-17 15-34-45

How do I solve this? TIA
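A guess at what is happening, based on the traceback rather than the actual node source: the loader picks a YAML config by matching substrings in the checkpoint filename, and when nothing matches, config_file is never assigned, hence the UnboundLocalError. Renaming the checkpoint so it contains the expected tag (e.g. "512", "1024", "interp") should avoid it; defensively, the selection could fail with a clear message along these lines (config filenames here are illustrative):

import os

def pick_config(ckpt_name: str, config_dir: str) -> str:
    # Infer the matching YAML config from the checkpoint filename.
    name = ckpt_name.lower()
    if "1024" in name:
        config_file = os.path.join(config_dir, "dynamicrafter_1024_v1.yaml")
    elif "interp" in name:
        config_file = os.path.join(config_dir, "dynamicrafter_512_interp_v1.yaml")
    elif "512" in name:
        config_file = os.path.join(config_dir, "dynamicrafter_512_v1.yaml")
    else:
        raise ValueError(f"Cannot infer a config from checkpoint name: {ckpt_name}")
    return config_file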

'VisionTransformer' object has no attribute 'input_patchnorm'

image

/mnt/workspace/ComfyUI
** ComfyUI startup time: 2024-03-17 16:08:59.078604
** Platform: Linux
** Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
** Python executable: /opt/conda/bin/python
** Log path: /mnt/workspace/ComfyUI/comfyui.log

Prestartup times for custom nodes:
0.0 seconds: /mnt/workspace/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 22732 MB, total RAM 30179 MB
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA A10 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using xformers cross attention

Loading: ComfyUI-Manager (V2.10.1)

ComfyUI Revision: 2066 [f2fe635c] | Released on '2024-03-15'

Import times for custom nodes:
0.0 seconds: /mnt/workspace/ComfyUI/custom_nodes/AIGODLIKE-COMFYUI-TRANSLATION
0.0 seconds: /mnt/workspace/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper
0.1 seconds: /mnt/workspace/ComfyUI/custom_nodes/ComfyUI-Manager
0.2 seconds: /mnt/workspace/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite

Starting server

To see the GUI go to: [http://127.0.0.1:8188]

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
Client protocols [''] don’t overlap server-known ones ()
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH DATA from: /mnt/workspace/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
Client protocols [''] don’t overlap server-known ones ()
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
got prompt
[2024-03-17 16:09:18,404] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
LatentVisualDiffusion: Running in v-prediction mode
AE working on z of shape (1, 4, 32, 32) = 4096 dimensions.
Loaded ViT-H-14 model config.
Loading pretrained ViT-H-14 weights (laion2b_s32b_b79k).
Loaded ViT-H-14 model config.
Loading pretrained ViT-H-14 weights (laion2b_s32b_b79k).
Client protocols [''] don’t overlap server-known ones ()

model checkpoint loaded.
Using torch.bfloat16 VAE
!!! Exception during processing !!!
Traceback (most recent call last):
File "/mnt/workspace/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/mnt/workspace/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/mnt/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/mnt/workspace/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/nodes.py", line 176, in process
cond_images = self.model.embedder(image)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/workspace/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/encoders/condition.py", line 339, in forward
z = self.encode_with_vision_transformer(image)
File "/mnt/workspace/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/encoders/condition.py", line 346, in encode_with_vision_transformer
if self.model.visual.input_patchnorm:
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in getattr
raise AttributeError(f"'{type(self).name}' object has no attribute '{name}'")
AttributeError: 'VisionTransformer' object has no attribute 'input_patchnorm'

Prompt executed in 97.39 seconds
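For what it's worth, input_patchnorm is only present on the open_clip VisionTransformer in some releases, so this looks like an open_clip version mismatch with the bundled encoder code. A hedged compatibility sketch (visual stands for the loaded ViT, mirroring the failing line in condition.py; this is not the repo's actual fix):

def uses_input_patchnorm(visual) -> bool:
    # Treat a missing attribute as "no patchnorm" instead of raising.
    return bool(getattr(visual, "input_patchnorm", False))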

[FEATURE REQUEST] Removing frames around keyframes to make for faster transitions

Right now, lots of frames are made right around middle keyframes, causing significant pauses in the motion that might not be desired. I think there should be an option to remove frames around keyframes in the middle of the animation. For example:

image

6 or 7 of the frames in the middle are nearly identical, so she doesn't transition smoothly from grabbing her hat to taking it off. I may try and tackle this one myself.
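A rough sketch of one way this could work (an assumption on my part, not a planned implementation): drop interpolated frames that are nearly identical to the previously kept frame, so the motion doesn't stall around the middle keyframes. frames is assumed to be an (N, H, W, C) float tensor in 0..1, as the wrapper outputs.

import torch

def drop_static_frames(frames: torch.Tensor, threshold: float = 0.01) -> torch.Tensor:
    keep = [0]
    for i in range(1, frames.shape[0]):
        diff = (frames[i] - frames[keep[-1]]).abs().mean().item()
        if diff > threshold:              # keep the frame only if it moved enough
            keep.append(i)
    if keep[-1] != frames.shape[0] - 1:   # always keep the final keyframe
        keep.append(frames.shape[0] - 1)
    return frames[keep]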
