
comfyui-diffusersstablecascade's Issues

Mac M1 error: low_cpu_mem_usage=True

Hi,
After installing, I get this error:

Error occurred when executing DiffusersStableCascade: Using low_cpu_mem_usage=True or a device_map requires Accelerate: pip install accelerate

Thanks in advance.
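A minimal check, not from the repo, assuming the error simply means the accelerate package is missing from the Python environment ComfyUI runs in. Run it with that same interpreter; if the module is absent, installing it as the error message suggests should clear the exception.

import importlib.util
import sys

if importlib.util.find_spec("accelerate") is None:
    # per the error message: install with this interpreter's pip
    print(f"accelerate not found; try: {sys.executable} -m pip install accelerate")
else:
    from importlib.metadata import version
    print("accelerate", version("accelerate"), "is already installed")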

High VRAM use - ComfyUI won't unload models

Because the models are loaded directly, ComfyUI's model manager doesn't know about them and can't unload them. There are probably better ways to deal with this, and once ComfyUI adds a native implementation it shouldn't matter, but to run this on my 12GB GPU I had to unload the models between the two phases. I'm still pretty new to ComfyUI development, so this was my solution for now. Since loading/unloading the models takes some time anyway, this works well for batches, and because the latents are small I could run 3-4 images at a time without trouble.

# imports assumed at the top of nodes.py for this snippet
import gc

import torch
from torchvision.transforms import ToTensor
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

import comfy.model_management

# process() method of the node class
def process(self, width, height, seed, steps, guidance_scale, prompt, negative_prompt, batch_size, decoder_steps, image=None):

    # free whatever ComfyUI itself has loaded before we start
    comfy.model_management.unload_all_models()
    torch.manual_seed(seed)

    device = comfy.model_management.get_torch_device()

    # load the prior (stage C)
    if getattr(self, 'prior', None) is None:
        self.prior = StableCascadePriorPipeline.from_pretrained(
            "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
        ).to(device)

    prior_output = self.prior(
        image=image,
        prompt=prompt,
        height=height,
        width=width,
        negative_prompt=negative_prompt,
        guidance_scale=guidance_scale,
        num_images_per_prompt=batch_size,
        num_inference_steps=steps
    )

    # unload the prior before loading the decoder
    self.prior = None
    gc.collect()

    # load the decoder (stage B)
    if getattr(self, 'decoder', None) is None:
        self.decoder = StableCascadeDecoderPipeline.from_pretrained(
            "stabilityai/stable-cascade", torch_dtype=torch.float16
        ).to(device)

    decoder_output = self.decoder(
        image_embeddings=prior_output.image_embeddings.half(),
        prompt=prompt,
        negative_prompt=negative_prompt,
        guidance_scale=0.0,
        output_type="pil",
        num_inference_steps=decoder_steps
    ).images

    # unload the decoder
    self.decoder = None
    gc.collect()

    # PIL images -> ComfyUI image batch (B, H, W, C) on CPU
    tensors = [ToTensor()(img) for img in decoder_output]
    batch_tensor = torch.stack(tensors).permute(0, 2, 3, 1).cpu()

    return (batch_tensor, image)

Error with directML

Hi,
Thanks for the node.
When I use the DirectML option in ComfyUI I get this error.
Do you plan a version of the node that supports it?
Regards

got prompt
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:01<00:00, 3.56it/s]
[F D:\a_work\1\s\pytorch-directml-plugin\torch_directml\csrc\engine\dml_util.cc:118] Invalid or unsupported data type.
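Only a sketch of what a DirectML-friendly load might look like, not something the node currently does: nodes.py hard-codes torch.bfloat16, which torch-directml appears to reject ("Invalid or unsupported data type"). The torch_directml package and the float16 choice are assumptions here; the repo ID and pipeline class are the ones the node already uses.

import torch
import torch_directml  # assumption: the torch-directml plugin is installed
from diffusers import StableCascadePriorPipeline

device = torch_directml.device()  # DML device instead of CUDA
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior",
    torch_dtype=torch.float16,  # assumption: fp16 in place of the unsupported bf16
).to(device)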

Installation ?

Wanted to test this before ComfyUI implements it natively.
To recap:
install the folder inside ComfyUI's custom_nodes,
install the requirements,
download the model (took the big Stage C),
relaunch ComfyUI.

It still gives me errors when loading the model. Is this the right way to install?
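A guess at what "download model" should mean here, sketched under the assumption that nodes.py loads the models the same way as the snippet in the VRAM issue above: the node calls from_pretrained with Hub repo IDs, so on first run it downloads the diffusers-format weights into the Hugging Face cache itself and does not read a manually downloaded stage_c checkpoint.

import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

# these two calls are what trigger the downloads (repo IDs as used in nodes.py)
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
)
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.float16
)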

Errors regarding torch.Size, "expecting 64, got 16..." Ideas how to resolve?

As posted on Reddit:

https://www.reddit.com/r/comfyui/comments/1arh2du/stable_cascade_working_yesterday_not_working/

Using this custom node:
ComfyUI-DiffusersStableCascade

With the workflow detailed here:
https://civitai.com/models/306016?modelVersionId=343529

Win10, 64GB RAM, 3090

The install went well. The workflow produces excellent images. I got embeddings to work but couldn't figure out LoRAs. I started working on that today (after updating via ComfyUI Manager) and suddenly nothing works for Stable Cascade. So I went back to the original workflow from Civitai, and that doesn't work either. Apparently changes in the Stable Cascade custom node have altered its parameters. I now see "SEGM_DETECTOR" has a red "X" next to it. (See image in original reddit post.)

I also get an error that pops up in a red box. (See code below.)

The linked reddit post above contains an image created with stable cascade and the Civitai workflow with prompt:
"A strawberry frog in a cranberry bog on a log in the fog."

I'd be grateful for ideas on how to fix this.

C

Error occurred when executing DiffusersStableCascade:

Cannot load C:\Users\Chris\.cache\huggingface\hub\models--stabilityai--stable-cascade\snapshots\f2a84281d6f8db3c757195dd0c9a38dbdea90bb4\decoder because embedding.1.weight expected shape tensor(..., device='meta', size=(320, 64, 1, 1)), but got torch.Size([320, 16, 1, 1]). If you want to instead overwrite randomly initialized weights, please make sure to pass both low_cpu_mem_usage=False and ignore_mismatched_sizes=True. For more information, see also: huggingface/diffusers#1619 (comment) as an example.

File "D:\work\ai\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\work\ai\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\work\ai\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\work\ai\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\nodes.py", line 44, in process
self.decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", torch_dtype=torch.float16).to(device)
File "C:\Users\Chris\miniconda3\envs\sd\lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "D:\work\ai\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\src\diffusers\src\diffusers\pipelines\pipeline_utils.py", line 1263, in from_pretrained
loaded_sub_model = load_sub_model(
File "D:\work\ai\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\src\diffusers\src\diffusers\pipelines\pipeline_utils.py", line 531, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "C:\Users\Chris\miniconda3\envs\sd\lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "D:\work\ai\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\src\diffusers\src\diffusers\models\modeling_utils.py", line 669, in from_pretrained
unexpected_keys = load_model_dict_into_meta(
File "D:\work\ai\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\src\diffusers\src\diffusers\models\modeling_utils.py", line 154, in load_model_dict_into_meta
raise ValueError(
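Not a confirmed fix, just a hedged guess: the shape mismatch reads as if the cached stabilityai/stable-cascade snapshot and the pinned diffusers branch disagree about the decoder architecture. Forcing a fresh download of the decoder repo is a cheap way to rule out a stale cache (force_download is a standard from_pretrained option):

import torch
from diffusers import StableCascadeDecoderPipeline

decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade",
    torch_dtype=torch.float16,
    force_download=True,  # ignore the cached snapshot and fetch the files again
)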

ERROR: Invalid requirement:

ERROR: Invalid requirement: 'Files\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\requirements.txt'
Hint: It looks like a path. File 'Files\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\requirements.txt' does not exist.
This happens while installing the requirements.
I can't fix it. Could you be so kind as to explain the reason for this?
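A guess at the cause, not a verified answer: pip prints only "Files\SD\..." which suggests the full path contains a space, so everything after the space was treated as a separate, invalid requirement. The path below is purely hypothetical; the point is that passing the requirements file as one quoted argument in the shell, or as a single list element as here, avoids the splitting.

import subprocess
import sys

# hypothetical path containing a space, for illustration only
req = r"C:\AI Tools\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\requirements.txt"
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", req])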

cannot import name 'List' from 'typing_extensions'

Getting this error

Cannot import /mnt/home/Applications/ComfyUI/custom_nodes/ComfyUI-DiffusersStableCascade module for custom nodes: Failed to import diffusers.pipelines.stable_cascade.pipeline_stable_cascade because of the following error (look up to see its traceback):
cannot import name 'List' from 'typing_extensions'

I installed this through ComfyUI Manager.

Things I've tried so far:
a git pull within the folder

a manual pip install git+https://github.com/kashif/diffusers.git@wuerstchen-v3, although pip install -r requirements.txt seems to have already pulled that in

I do have this dependency error with pip as well, if it's relevant

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lama-cleaner 1.2.1 requires diffusers==0.16.1, but you have diffusers 0.27.0.dev0 which is incompatible.
lama-cleaner 1.2.1 requires transformers==4.27.4, but you have transformers 4.37.2 which is incompatible.

I assume it may be a package incompatibility with another custom node I have? All my other nodes seem to load fine.

using python-3.10.11 in the venv
using pip 24.0
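A quick check, under the assumption that an old typing_extensions in the venv is the culprit (newer releases re-export typing names such as List, older ones do not). This prints which copy is actually imported; upgrading it inside the same venv, e.g. ./venv/bin/python -m pip install -U typing_extensions, is the usual next step.

import typing_extensions
from importlib.metadata import version

print("loaded from:", typing_extensions.__file__)
print("version:", version("typing_extensions"))
print("has List:", hasattr(typing_extensions, "List"))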

System configuration

OS: EndeavourOS Linux x86_64
Kernel: 6.7.4-arch1-1
Uptime: 1 day, 21 hours, 54 mins
Packages: 1812 (pacman), 13 (flatpak)
Shell: zsh 5.9
Resolution: 3440x1440
DE: Hyprland
WM: sway
Theme: Breeze-Dark [GTK2/3]
Icons: Breeze-Noir-White-Blue [GTK2/3]
Terminal: terminology
Terminal Font: MesloLGSDZ Nerd Font
CPU: AMD Ryzen 7 5800X (16) @ 3.800GHz
GPU: NVIDIA GeForce RTX 4090
Memory: 12241MiB / 96467MiB

Installed according to your requirements, but the diffusers version still doesn't match

Error occurred when executing DiffusersStableCascade:

Cannot load H:\ComfyUI-ainewsto\ComfyUI\.cache\huggingface\hub\models--stabilityai--stable-cascade\snapshots\f2a84281d6f8db3c757195dd0c9a38dbdea90bb4\decoder because embedding.1.weight expected shape tensor(..., device='meta', size=(320, 64, 1, 1)), but got torch.Size([320, 16, 1, 1]). If you want to instead overwrite randomly initialized weights, please make sure to pass both low_cpu_mem_usage=False and ignore_mismatched_sizes=True. For more information, see also: huggingface/diffusers#1619 (comment) as an example.

The diffusers version does not match, but I downloaded it according to your requirements.
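A small check, assuming the error means some other diffusers build is being imported instead of the pinned wuerstchen-v3 branch. Printing which diffusers is actually picked up, and from where, usually narrows it down:

import diffusers

print(diffusers.__version__)  # the pinned branch reports 0.27.0.dev0
print(diffusers.__file__)     # shows whether the node's src/diffusers checkout is the one in use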

[Request] Change the width and height steps to 128

Because of the way sizes are used in Cascade, an error is thrown if the resolution isn't right. If the width and height widgets step in units of 128 instead of 8, the errors are bypassed; a sketch of that snapping follows the example error below.

Example error:
RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=277 is not divisible by 2
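A sketch of the requested behavior, not code from the node: snap the requested width/height to the nearest multiple of 128 before they reach the pipeline, so pixel_unshuffle never sees an indivisible size.

def snap_to_multiple(value: int, multiple: int = 128) -> int:
    # round to the nearest multiple, never below one full step
    return max(multiple, round(value / multiple) * multiple)

print(snap_to_multiple(277))   # 256
print(snap_to_multiple(1030))  # 1024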

Hello author, could you please explain the reason behind this error message in the code?

Error occurred when executing DiffusersStableCascade:

cutlassF: no kernel found to launch!

File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\nodes.py", line 49, in process
prior_output = self.prior(
^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "j:\comfyui_windows_portable\python_embeded\src\diffusers\src\diffusers\pipelines\stable_cascade\pipeline_stable_cascade_prior.py", line 556, in call
predicted_image_embedding = self.prior(
^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "j:\comfyui_windows_portable\python_embeded\src\diffusers\src\diffusers\pipelines\stable_cascade\modeling_stable_cascade_common.py", line 316, in forward
level_outputs = self._down_encode(x, r_embed, clip)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "j:\comfyui_windows_portable\python_embeded\src\diffusers\src\diffusers\pipelines\stable_cascade\modeling_stable_cascade_common.py", line 255, in _down_encode
x = block(x, clip)
^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "j:\comfyui_windows_portable\python_embeded\src\diffusers\src\diffusers\pipelines\wuerstchen\modeling_wuerstchen_common.py", line 108, in forward
x = x + self.attention(norm_x, encoder_hidden_states=kv)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "j:\comfyui_windows_portable\python_embeded\src\diffusers\src\diffusers\models\attention_processor.py", line 522, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "j:\comfyui_windows_portable\python_embeded\src\diffusers\src\diffusers\models\attention_processor.py", line 1254, in call
hidden_states = F.scaled_dot_product_attention(
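A hedged workaround sketch, not from the repo: "cutlassF: no kernel found to launch!" comes from F.scaled_dot_product_attention when none of the fused CUDA kernels (flash / memory-efficient) support the current GPU and dtype combination. Forcing the plain math kernel is slower but always available; the same context manager could wrap the self.prior(...) call in nodes.py. The dummy tensors below are only there to keep the snippet self-contained.

import torch
import torch.nn.functional as F

if torch.cuda.is_available():
    q = torch.randn(1, 8, 77, 64, device="cuda", dtype=torch.float16)  # dummy q/k/v
    with torch.backends.cuda.sdp_kernel(
        enable_flash=False, enable_mem_efficient=False, enable_math=True
    ):
        out = F.scaled_dot_product_attention(q, q, q)  # falls back to the math kernel
    print(out.shape)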

Issue on Mac M3: BFloat16 is not supported on MPS

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/Users/dfl/sd/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dfl/sd/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dfl/sd/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dfl/sd/ComfyUI/custom_nodes/ComfyUI-DiffusersStableCascade/nodes.py", line 44, in process
    self.prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16).to(device)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dfl/sd/ComfyUI/venv/src/diffusers/src/diffusers/pipelines/pipeline_utils.py", line 862, in to
    module.to(device, dtype)
  File "/Users/dfl/sd/ComfyUI/venv/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2595, in to
    return super().to(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dfl/sd/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/dfl/sd/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/dfl/sd/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/dfl/sd/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/dfl/sd/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/Users/dfl/sd/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: BFloat16 is not supported on MPS

I start ComfyUI with this script:

PYTORCH_ENABLE_MPS_FALLBACK=1
./venv/bin/python main.py --force-fp16

I tried changing various permutations, to no effect.
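A sketch only, assuming the fix is simply to avoid bfloat16 on Apple Silicon: MPS cannot hold bf16 tensors, so the prior would have to be loaded in float16 (or float32) when the device is mps. The repo ID and pipeline class are the ones nodes.py already uses; the device/dtype selection is the assumption.

import torch
from diffusers import StableCascadePriorPipeline

use_mps = torch.backends.mps.is_available()
device = "mps" if use_mps else "cuda"
dtype = torch.float16 if use_mps else torch.bfloat16  # bf16 only off MPS

prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=dtype
).to(device)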
