
comfyui-easyanimatewrapper's People

Contributors

kijai


Forkers

kustomzone

comfyui-easyanimatewrapper's Issues

How to use with full_input_video?

I noticed the EasyAnimate Sampler includes a full_input_video input. However, no matter what I try, I get this error if I pass a video as input.

RuntimeError: The size of tensor a (7) must match the size of tensor b (6) at non-singleton dimension 2
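
A sketch of one possible workaround, assuming the mismatch comes from the input video's frame count not lining up with the sampler's video_length setting (match_frame_count and video_length below are illustrative names, not part of the node):

# Sketch only: trim or pad the input frames so their count matches the sampler's
# video_length setting; "frames" is assumed to be a ComfyUI IMAGE batch tensor
# of shape (num_frames, H, W, C).
import torch

def match_frame_count(frames: torch.Tensor, video_length: int) -> torch.Tensor:
    n = frames.shape[0]
    if n > video_length:            # too many frames: trim the end
        return frames[:video_length]
    if n < video_length:            # too few frames: repeat the last frame to pad
        pad = frames[-1:].repeat(video_length - n, 1, 1, 1)
        return torch.cat([frames, pad], dim=0)
    return frames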

config.json does not exist

I downloaded everything as described in the folder structure. The JSON file is there, but it gives me these errors.

Error occurred when executing DownloadAndLoadEasyAnimateModel:

K:\ComfyUI\ComfyUI_Ex\ComfyUI\models\easyanimate\common\vae\config.json does not exist

File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper\nodes.py", line 113, in loadmodel
vae = Choosen_AutoencoderKL.from_pretrained(common_path, subfolder="vae").to(dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI_Ex\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper\easyanimate\models\autoencoder_magvit.py", line 491, in from_pretrained
raise RuntimeError(f"{config_file} does not exist")
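
A minimal sanity check, assuming the loader resolves the VAE config under <common_path>\vae\ as the path in the error suggests; adjust the root directory to your own install:

# Sketch only: verify the files the loader appears to look for actually exist.
# A common cause of this error is an extra nested folder left over from extraction.
import os

vae_dir = r"K:\ComfyUI\ComfyUI_Ex\ComfyUI\models\easyanimate\common\vae"
for name in ("config.json", "diffusion_pytorch_model.bin"):
    path = os.path.join(vae_dir, name)
    print(path, "found" if os.path.isfile(path) else "MISSING")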

Error occurred when executing EasyAnimateSampler

I'm a technical novice. Is there any solution to the error below? Thanks.


[Attached WeChat screenshots: 微信截图_20240711174114, 微信截图_20240711182903]

Error occurred when executing EasyAnimateSampler:

The image to be converted to a PIL image contains values outside the range [0, 1], got [-0.07665368914604187, 1.0637787580490112] which cannot be converted to uint8.

File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper\nodes.py", line 227, in process
sample = pipeline(
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper\easyanimate\pipeline\pipeline_easyanimate_inpaint.py", line 999, in call
inputs = self.clip_image_processor(images=clip_image, return_tensors="pt")
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\image_processing_utils.py", line 551, in call
return self.preprocess(images, **kwargs)
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\models\clip\image_processing_clip.py", line 323, in preprocess
images = [
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\models\clip\image_processing_clip.py", line 324, in
self.resize(image=image, size=size, resample=resample, input_data_format=input_data_format)
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\models\clip\image_processing_clip.py", line 191, in resize
return resize(
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\image_transforms.py", line 326, in resize
do_rescale = _rescale_for_pil_conversion(image)
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\image_transforms.py", line 150, in _rescale_for_pil_conversion
raise ValueError(
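
The traceback shows the CLIP image processor rejecting pixel values outside [0, 1]. A minimal sketch, assuming the out-of-range values come from an upstream node (e.g. sharpening or color correction) rather than from the wrapper itself, is to clamp the conditioning image before it reaches the sampler (clamp_image below is an illustrative helper, not part of the node):

# Sketch only: force the IMAGE tensor back into [0, 1] before it is passed on.
import torch

def clamp_image(image: torch.Tensor) -> torch.Tensor:
    # ComfyUI IMAGE tensors are floats in [0, 1]; values like -0.0766 / 1.0637
    # in the error message suggest that range was exceeded upstream.
    return image.clamp(0.0, 1.0)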

I did run it successfully but the results were very poor

I did run it successfully but the results were very poor, so I'm wondering if there might be something wrong with the parameters?

missing keys: 0;

unexpected keys: 96;

[] ['loss.discriminator.main.0.bias', 'loss.discriminator.main.0.weight', 'loss.discriminator.main.11.bias', 'loss.discriminator.main.11.weight', 'loss.discriminator.main.2.weight', 'loss.discriminator.main.3.bias', 'loss.discriminator.main.3.num_batches_tracked', 'loss.discriminator.main.3.running_mean', 'loss.discriminator.main.3.running_var', 'loss.discriminator.main.3.weight', 'loss.discriminator.main.5.weight', 'loss.discriminator.main.6.bias', 'loss.discriminator.main.6.num_batches_tracked', 'loss.discriminator.main.6.running_mean', 'loss.discriminator.main.6.running_var', 'loss.discriminator.main.6.weight', 'loss.discriminator.main.8.weight', 'loss.discriminator.main.9.bias', 'loss.discriminator.main.9.num_batches_tracked', 'loss.discriminator.main.9.running_mean', 'loss.discriminator.main.9.running_var', 'loss.discriminator.main.9.weight', 'loss.discriminator3d.blocks.0.conv1.bias', 'loss.discriminator3d.blocks.0.conv1.weight', 'loss.discriminator3d.blocks.0.conv2.bias', 'loss.discriminator3d.blocks.0.conv2.weight', 'loss.discriminator3d.blocks.0.downsampler.filt', 'loss.discriminator3d.blocks.0.norm1.bias', 'loss.discriminator3d.blocks.0.norm1.weight', 'loss.discriminator3d.blocks.0.norm2.bias', 'loss.discriminator3d.blocks.0.norm2.weight', 'loss.discriminator3d.blocks.0.shortcut.0.filt', 'loss.discriminator3d.blocks.0.shortcut.1.bias', 'loss.discriminator3d.blocks.0.shortcut.1.weight', 'loss.discriminator3d.blocks.1.conv1.bias', 'loss.discriminator3d.blocks.1.conv1.weight', 'loss.discriminator3d.blocks.1.conv2.bias', 'loss.discriminator3d.blocks.1.conv2.weight', 'loss.discriminator3d.blocks.1.downsampler.filt', 'loss.discriminator3d.blocks.1.norm1.bias', 'loss.discriminator3d.blocks.1.norm1.weight', 'loss.discriminator3d.blocks.1.norm2.bias', 'loss.discriminator3d.blocks.1.norm2.weight', 'loss.discriminator3d.blocks.1.shortcut.0.filt', 'loss.discriminator3d.blocks.1.shortcut.1.bias', 'loss.discriminator3d.blocks.1.shortcut.1.weight', 'loss.discriminator3d.blocks.2.conv1.bias', 'loss.discriminator3d.blocks.2.conv1.weight', 'loss.discriminator3d.blocks.2.conv2.bias', 'loss.discriminator3d.blocks.2.conv2.weight', 'loss.discriminator3d.blocks.2.norm1.bias', 'loss.discriminator3d.blocks.2.norm1.weight', 'loss.discriminator3d.blocks.2.norm2.bias', 'loss.discriminator3d.blocks.2.norm2.weight', 'loss.discriminator3d.blocks.2.shortcut.0.bias', 'loss.discriminator3d.blocks.2.shortcut.0.weight', 'loss.discriminator3d.conv_in.bias', 'loss.discriminator3d.conv_in.weight', 'loss.discriminator3d.conv_norm_out.bias', 'loss.discriminator3d.conv_norm_out.weight', 'loss.discriminator3d.conv_out.bias', 'loss.discriminator3d.conv_out.weight', 'loss.logvar', 'loss.perceptual_loss.lin0.model.1.weight', 'loss.perceptual_loss.lin1.model.1.weight', 'loss.perceptual_loss.lin2.model.1.weight', 'loss.perceptual_loss.lin3.model.1.weight', 'loss.perceptual_loss.lin4.model.1.weight', 'loss.perceptual_loss.net.slice1.0.bias', 'loss.perceptual_loss.net.slice1.0.weight', 'loss.perceptual_loss.net.slice1.2.bias', 'loss.perceptual_loss.net.slice1.2.weight', 'loss.perceptual_loss.net.slice2.5.bias', 'loss.perceptual_loss.net.slice2.5.weight', 'loss.perceptual_loss.net.slice2.7.bias', 'loss.perceptual_loss.net.slice2.7.weight', 'loss.perceptual_loss.net.slice3.10.bias', 'loss.perceptual_loss.net.slice3.10.weight', 'loss.perceptual_loss.net.slice3.12.bias', 'loss.perceptual_loss.net.slice3.12.weight', 'loss.perceptual_loss.net.slice3.14.bias', 'loss.perceptual_loss.net.slice3.14.weight', 
'loss.perceptual_loss.net.slice4.17.bias', 'loss.perceptual_loss.net.slice4.17.weight', 'loss.perceptual_loss.net.slice4.19.bias', 'loss.perceptual_loss.net.slice4.19.weight', 'loss.perceptual_loss.net.slice4.21.bias', 'loss.perceptual_loss.net.slice4.21.weight', 'loss.perceptual_loss.net.slice5.24.bias', 'loss.perceptual_loss.net.slice5.24.weight', 'loss.perceptual_loss.net.slice5.26.bias', 'loss.perceptual_loss.net.slice5.26.weight', 'loss.perceptual_loss.net.slice5.28.bias', 'loss.perceptual_loss.net.slice5.28.weight', 'loss.perceptual_loss.scaling_layer.scale', 'loss.perceptual_loss.scaling_layer.shift']
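
These unexpected keys all start with loss. and look like training-time components (discriminators and an LPIPS-style perceptual loss) stored inside the VAE checkpoint; inference does not use them, so the warning is most likely harmless. If you wanted to silence it, a sketch under that assumption would filter the keys before loading:

# Sketch only: drop the training-only "loss.*" entries before load_state_dict.
import torch

state_dict = torch.load("diffusion_pytorch_model.bin", map_location="cpu")
state_dict = {k: v for k, v in state_dict.items() if not k.startswith("loss.")}
# vae.load_state_dict(state_dict, strict=False)  # vae: the loaded AutoencoderKLMagvit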

diffusion_pytorch_model.bin does not exist

Error occurred when executing DownloadAndLoadEasyAnimateModel:

C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\models\easyanimate\common\vae\diffusion_pytorch_model.bin does not exist

File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper\nodes.py", line 105, in loadmodel
vae = AutoencoderKLMagvit.from_pretrained(common_path, subfolder="vae").to(dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper\easyanimate\models\autoencoder_magvit.py", line 504, in from_pretrained
raise RuntimeError(f"{model_file} does not exist")
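
One possible cause, offered as an assumption: only the .safetensors variant of the VAE weights was downloaded, while this loader looks for the .bin filename shown in the error. If that is the case, converting the file is a quick workaround:

# Sketch only: convert diffusion_pytorch_model.safetensors to the .bin filename
# the loader looks for; assumes the safetensors variant is what was downloaded.
import os
import torch
from safetensors.torch import load_file

vae_dir = r"C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\models\easyanimate\common\vae"
src = os.path.join(vae_dir, "diffusion_pytorch_model.safetensors")
dst = os.path.join(vae_dir, "diffusion_pytorch_model.bin")
if os.path.isfile(src) and not os.path.isfile(dst):
    torch.save(load_file(src), dst)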

Will support for 60-second video generation be available in the future?

Hi, thanks for your great node!
Does this support long video generation? I noticed that the maximum frame count in the ComfyUI node is 144 frames (6 seconds), while the official Gradio demo allows 1440 frames (60 seconds), though I haven't tested that yet. I've been using your ComfyUI node, which generates videos quickly, whereas the official Gradio demo is very slow and inefficient. Your node has greatly improved efficiency.
