
comfyui-ccsr's People

Contributors

csslc, haohaocreates, kijai, melmass


comfyui-ccsr's Issues

Size Mismatch error

Even with a basic 512x512 image, I'm getting this size mismatch error.

[Screenshot of the size mismatch error]

Using just this workflow:
[Screenshot of the workflow]

Allow for optional prompting

Since the first-stage sampler allows passing positive and negative prompts, why not let the user write these prompts instead of leaving them blank (which in fact probably induces a bias)?

I don't know whether it would have much influence on the result, but it could probably help handle different textures slightly better, for instance by prompting "skin texture" for portraits and "tree, leaves" for forest landscapes.

However, I do understand that adding two more parameters that probably aren't that useful could lead to a heavier node, so you might have left them out by design.
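For illustration, ComfyUI nodes declare optional inputs under an "optional" key in INPUT_TYPES, so exposing the prompts would look roughly like the sketch below (hypothetical class and field names, not the node's actual definition):

    # hypothetical sketch of exposing optional prompt inputs on a ComfyUI node
    class CCSRUpscaleWithPrompts:
        RETURN_TYPES = ("IMAGE",)
        FUNCTION = "process"

        @classmethod
        def INPUT_TYPES(cls):
            return {
                "required": {
                    "image": ("IMAGE",),
                },
                "optional": {
                    # empty defaults would preserve the current behaviour
                    "positive_prompt": ("STRING", {"default": "", "multiline": True}),
                    "negative_prompt": ("STRING", {"default": "", "multiline": True}),
                },
            }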

Installed on RunPod and received this error

Hey hey!
Great node for upscaling; I've used it on my local machine (Win11). However, when using RunPod, I get this error:

How do I fix it?

Thanks in advance!

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/workspace/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/workspace/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/workspace/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/workspace/ComfyUI/custom_nodes/ComfyUI-CCSR/nodes.py", line 76, in process
self.model = instantiate_from_config(config)
File "/workspace/ComfyUI/custom_nodes/ComfyUI-CCSR/utils/common.py", line 18, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/workspace/ComfyUI/custom_nodes/ComfyUI-CCSR/model/ccsr_stage2.py", line 297, in init
super().init(*args, **kwargs)
File "/workspace/ComfyUI/custom_nodes/ComfyUI-CCSR/ldm/models/diffusion/ddpm_ccsr_stage2.py", line 666, in init
self.instantiate_cond_stage(cond_stage_config)
File "/workspace/ComfyUI/custom_nodes/ComfyUI-CCSR/ldm/models/diffusion/ddpm_ccsr_stage2.py", line 779, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/workspace/ComfyUI/custom_nodes/ComfyUI-CCSR/ldm/util.py", line 80, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/workspace/ComfyUI/custom_nodes/ComfyUI-CCSR/ldm/modules/encoders/modules.py", line 148, in init
model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
File "/usr/local/lib/python3.10/dist-packages/open_clip/factory.py", line 384, in create_model_and_transforms
model = create_model(
File "/usr/local/lib/python3.10/dist-packages/open_clip/factory.py", line 290, in create_model
load_checkpoint(model, checkpoint_path)
File "/usr/local/lib/python3.10/dist-packages/open_clip/factory.py", line 161, in load_checkpoint
incompatible_keys = model.load_state_dict(state_dict, strict=strict)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIP:
Unexpected key(s) in state_dict: "visual.class_embedding", "visual.positional_embedding", "visual.proj", "visual.conv1.weight", "visual.ln_pre.weight", "visual.ln_pre.bias", "visual.transformer.resblocks.{0..31}.ln_1.{weight,bias}", "visual.transformer.resblocks.{0..31}.attn.{in_proj_weight,in_proj_bias}", "visual.transformer.resblocks.{0..31}.attn.out_proj.{weight,bias}", "visual.transformer.resblocks.{0..31}.ln_2.{weight,bias}", "visual.transformer.resblocks.{0..31}.mlp.{c_fc,c_proj}.{weight,bias}", "visual.ln_post.weight", "visual.ln_post.bias".

Prompt executed in 5.71 seconds
got prompt
Requested to load SD1ClipModel
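For reference, the failing call reduces to the standalone snippet below. If it also fails outside ComfyUI, the installed open_clip_torch release is incompatible with the bundled LDM code, and pinning a different open_clip_torch version is the usual workaround. A minimal sketch, assuming the common ViT-H-14/laion2b_s32b_b79k pair (the traceback doesn't show which arch the config actually requests):

    import torch
    import open_clip

    # downloads several GB on first run, then loads from the local HF cache
    model, _, _ = open_clip.create_model_and_transforms(
        "ViT-H-14", pretrained="laion2b_s32b_b79k", device=torch.device("cpu"))
    print("loaded OK:", sum(p.numel() for p in model.parameters()), "parameters")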

HELP: Error occurred when executing CCSR_Upscale: (ProtocolError('Connection aborted.', ConnectionResetError

Using ComfyUI Portable for Windows
** Python version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]

I'm getting the error message below. I have updated both ComfyUI and the node.

DiffusionWrapper has 865.91 M params.
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connectionpool.py", line 404, in _make_request
self._validate_conn(conn)
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connectionpool.py", line 1058, in validate_conn
conn.connect()
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connection.py", line 419, in connect
self.sock = ssl_wrap_socket(
^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\util\ssl
.py", line 449, in ssl_wrap_socket
ssl_sock = ssl_wrap_socket_impl(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\util\ssl
.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "ssl.py", line 517, in wrap_socket
File "ssl.py", line 1108, in _create
File "ssl.py", line 1379, in do_handshake
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connectionpool.py", line 799, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\util\retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\packages\six.py", line 769, in reraise
raise value.with_traceback(tb)
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connectionpool.py", line 404, in _make_request
self._validate_conn(conn)
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connectionpool.py", line 1058, in validate_conn
conn.connect()
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\connection.py", line 419, in connect
self.sock = ssl_wrap_socket(
^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\util\ssl
.py", line 449, in ssl_wrap_socket
ssl_sock = ssl_wrap_socket_impl(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\urllib3\util\ssl
.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "ssl.py", line 517, in wrap_socket
File "ssl.py", line 1108, in _create
File "ssl.py", line 1379, in do_handshake
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\nodes.py", line 76, in process
self.model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\utils\common.py", line 18, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\model\ccsr_stage2.py", line 297, in init
super().init(*args, **kwargs)
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\models\diffusion\ddpm_ccsr_stage2.py", line 666, in init
self.instantiate_cond_stage(cond_stage_config)
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\models\diffusion\ddpm_ccsr_stage2.py", line 779, in instantiate_cond_stage
model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\util.py", line 80, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\modules\encoders\modules.py", line 148, in init
model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\factory.py", line 384, in create_model_and_transforms
model = create_model(
^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\factory.py", line 283, in create_model
checkpoint_path = download_pretrained(pretrained_cfg, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\pretrained.py", line 582, in download_pretrained
target = download_pretrained_from_hf(model_id, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\open_clip\pretrained.py", line 552, in download_pretrained_from_hf
cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 1461, in hf_hub_download
http_get(
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 468, in http_get
r = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 425, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils_http.py", line 63, in send
return super().send(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxx\Downloads\ComfyUI_windows_portable\python_embeded\Lib\site-packages\requests\adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)), '(Request ID: dfb6486b-a7f1-47e6-a8fe-b9d1d15ea4a8)')

Prompt executed in 10.29 seconds
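The root failure here is a load-time download: open_clip fetches the CLIP text-encoder weights from the Hugging Face Hub the first time the node runs, and the connection was reset mid-transfer. One workaround sketch is to pre-download the file on a stable connection so later runs hit the local cache; the repo id and filename below are assumptions based on the usual laion ViT-H checkpoint, since the traceback doesn't name them:

    from huggingface_hub import hf_hub_download

    # fetch once into the local HF cache; subsequent loads are offline
    path = hf_hub_download(
        "laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
        "open_clip_pytorch_model.bin",
    )
    print("cached at:", path)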

RTX 4080 - Allocation on device 0 would exceed allowed memory. (out of memory)

When I try to upscale a 512x512 image x4 at default settings, I receive this error:

Error occurred when executing CCSR_Upscale:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 9.28 GiB
Requested : 4.00 GiB
Device limit : 16.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

File "C:\Comfy\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\custom_nodes\ComfyUI-CCSR\nodes.py", line 140, in process
samples = sampler.sample_ccsr(
^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\custom_nodes\ComfyUI-CCSR\model\q_sampler.py", line 1052, in sample_ccsr
img_pixel = (self.model.decode_first_stage(img) + 1) / 2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\models\diffusion\ddpm_ccsr_stage2.py", line 977, in decode_first_stage
return self.first_stage_model.decode(z)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\models\autoencoder.py", line 90, in decode
dec = self.decoder(z)
^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 635, in forward
h = self.up[i_level].block[i_block](h, temb, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 140, in forward
h = self.norm1(h)
^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI\comfy\ops.py", line 90, in forward
return super().forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\nn\modules\normalization.py", line 287, in forward
return F.group_norm(
^^^^^^^^^^^^^
File "C:\Comfy\python_embeded\Lib\site-packages\torch\nn\functional.py", line 2561, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is no issue when I upscale the same image x3. The system has 32 GB RAM.
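The allocation fails inside the final VAE decode, whose activations at a 2048x2048 output no longer fit in 16 GB alongside the diffusion model. A generic way around that (a sketch, not the node's actual code) is to decode the latent in tiles so peak VRAM stays bounded; seam blending is omitted for brevity:

    import torch

    # a sketch: decode latent z (B, 4, h, w) in square tiles, assuming an
    # SD-style VAE decoder with an 8x spatial factor. Tiles are moved to CPU
    # as they finish, capping GPU memory; no seam blending, so visible seams
    # are possible at tile borders.
    @torch.no_grad()
    def decode_in_tiles(z, decode_fn, tile=48):
        b, _, h, w = z.shape
        out = torch.zeros(b, 3, h * 8, w * 8)
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                dec = decode_fn(z[:, :, y:y + tile, x:x + tile]).float().cpu()
                out[:, :, y * 8:y * 8 + dec.shape[2], x * 8:x * 8 + dec.shape[3]] = dec
        return out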

cannot import name '_parse_tpu_devices' from 'lightning_fabric._graveyard.tpu'

Cannot load the module. Please help.

(IMPORT FAILED): D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR

ImportError: cannot import name '_parse_tpu_devices' from 'lightning_fabric._graveyard.tpu' (D:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\lightning_fabric\_graveyard\tpu.py)

Cannot import D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR module for custom nodes: cannot import name '_parse_tpu_devices' from 'lightning_fabric._graveyard.tpu' (D:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\lightning_fabric\_graveyard\tpu.py)
Traceback (most recent call last):

No such file or directory: ".../custom_nodes/ComfyUI-CCSR/empty_text_embed.safetensors"

Hi, after I updated to the latest version, an error message appears when I try to run this custom node:

Error occurred when executing CCSR_Upscale:

No such file or directory: "/home/silver/Playgrounds/AI/sd-comfyui/custom_nodes/ComfyUI-CCSR/empty_text_embed.safetensors"

File "/home/silver/Playgrounds/AI/sd-comfyui/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/silver/Playgrounds/AI/sd-comfyui/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/silver/Playgrounds/AI/sd-comfyui/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/silver/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/silver/Playgrounds/AI/sd-comfyui/custom_nodes/ComfyUI-CCSR/nodes.py", line 77, in process
empty_text_embed_sd = comfy.utils.load_torch_file(os.path.join(script_directory, "empty_text_embed.safetensors"))
File "/home/silver/Playgrounds/AI/sd-comfyui/comfy/utils.py", line 15, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
File "/home/silver/.local/lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:

I tried to find this safetensors file everywhere but had no luck. Do you have any idea what went wrong? Thank you for your time.
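If updating or re-cloning the node doesn't restore the file (it is expected to ship with the repo, though that's an assumption), an equivalent embedding could in principle be regenerated. A heavily hedged sketch; the architecture, pretrained tag, tensor key, and use of the pooled embedding are all guesses, not the repo's actual format:

    import torch
    import open_clip
    from safetensors.torch import save_file

    # encode the empty prompt "" with OpenCLIP and save it under the
    # filename the node looks for (tensor key name is a guess)
    model, _, _ = open_clip.create_model_and_transforms(
        "ViT-H-14", pretrained="laion2b_s32b_b79k", device="cpu")
    tokenizer = open_clip.get_tokenizer("ViT-H-14")
    with torch.no_grad():
        emb = model.encode_text(tokenizer([""]))
    save_file({"empty_text_embed": emb.contiguous()}, "empty_text_embed.safetensors")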

Error occurred when executing CCSR_Upscale:

Not really sure what this issue is. I tried to delete everything and install from scratch, but the issue persists.

I can confirm I put real-world_ccsr.ckpt in \ComfyUI\models\checkpoints

Error occurred when executing CCSR_Upscale:

invalid load key, '\xd0'.

File "C:\Users\josep\Downloads\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\josep\Downloads\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\josep\Downloads\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
res_value = old_func(*final_args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\josep\Downloads\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\josep\Downloads\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\josep\Downloads\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\nodes.py", line 78, in process
load_state_dict(self.model, torch.load(ccsr_model, map_location="cpu"), strict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\josep\Downloads\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1040, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\josep\Downloads\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1258, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

"upsample_nearest2d_out_frame" not implemented for 'BFloat16' in the decode image process

Error occurred when executing VAEDecodeTiled_TiledDiffusion:

"upsample_nearest2d_out_frame" not implemented for 'BFloat16'

File "/root/autodl-tmp/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/root/autodl-tmp/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/root/autodl-tmp/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/root/autodl-tmp/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/tiled_vae.py", line 815, in process
ret = VAEDecode().decode(_vae, samples) if is_decoder else VAEEncode().encode(_vae, samples)
File "/root/autodl-tmp/ComfyUI/nodes.py", line 287, in decode
return (vae.decode(samples["samples"]), )
File "/root/autodl-tmp/ComfyUI/comfy/sd.py", line 254, in decode
pixel_samples[x:x+batch_number] = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
File "/root/autodl-tmp/ComfyUI/comfy/ldm/models/autoencoder.py", line 202, in decode
dec = self.decoder(dec, **decoder_kwargs)
File "/root/anaconda3/envs/sd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/autodl-tmp/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/tiled_vae.py", line 474, in call
return self.vae_tile_forward(x)
File "/root/autodl-tmp/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/tiled_vae.py", line 360, in wrapper
ret = fn(*args, **kwargs)
File "/root/anaconda3/envs/sd/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/root/autodl-tmp/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/tiled_vae.py", line 635, in vae_tile_forward
downsampled_z = F.interpolate(z, scale_factor=scale_factor, mode='nearest-exact')
File "/root/anaconda3/envs/sd/lib/python3.10/site-packages/torch/nn/functional.py", line 3938, in interpolate
return torch._C._nn._upsample_nearest_exact2d(input, output_size, scale_factors)
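This one is raised from ComfyUI-TiledDiffusion's tiled VAE rather than CCSR itself: on that torch build, nearest-neighbor interpolation has no BFloat16 kernel. Casting around the op sidesteps it; a workaround sketch, not the extension's actual fix (running the VAE in fp32 or fp16, e.g. via ComfyUI's --fp32-vae flag, may also avoid it):

    import torch
    import torch.nn.functional as F

    z = torch.randn(1, 4, 64, 64, dtype=torch.bfloat16)
    # upcast for the unsupported op, then cast back to the original dtype
    out = F.interpolate(z.float(), scale_factor=0.5, mode="nearest-exact").to(z.dtype)
    print(out.dtype, tuple(out.shape))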

Error occurred when executing CCSR_Upscale

Error occurred when executing CCSR_Upscale:

An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

File "/mnt/drive0/AIGC/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI_ENV/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI/custom_nodes/ComfyUI-CCSR/nodes.py", line 76, in process
self.model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI/custom_nodes/ComfyUI-CCSR/utils/common.py", line 18, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI/custom_nodes/ComfyUI-CCSR/model/ccsr_stage2.py", line 297, in init
super().init(*args, **kwargs)
File "/mnt/drive0/AIGC/ComfyUI/custom_nodes/ComfyUI-CCSR/ldm/models/diffusion/ddpm_ccsr_stage2.py", line 666, in init
self.instantiate_cond_stage(cond_stage_config)
File "/mnt/drive0/AIGC/ComfyUI/custom_nodes/ComfyUI-CCSR/ldm/models/diffusion/ddpm_ccsr_stage2.py", line 779, in instantiate_cond_stage
model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI/custom_nodes/ComfyUI-CCSR/ldm/util.py", line 80, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI/custom_nodes/ComfyUI-CCSR/ldm/modules/encoders/modules.py", line 148, in init
model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI_ENV/lib/python3.11/site-packages/open_clip/factory.py", line 384, in create_model_and_transforms
model = create_model(
^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI_ENV/lib/python3.11/site-packages/open_clip/factory.py", line 283, in create_model
checkpoint_path = download_pretrained(pretrained_cfg, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI_ENV/lib/python3.11/site-packages/open_clip/pretrained.py", line 582, in download_pretrained
target = download_pretrained_from_hf(model_id, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI_ENV/lib/python3.11/site-packages/open_clip/pretrained.py", line 552, in download_pretrained_from_hf
cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI_ENV/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/mnt/drive0/AIGC/ComfyUI_ENV/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1371, in hf_hub_download
raise LocalEntryNotFoundError(
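LocalEntryNotFoundError means the CLIP weights are neither downloadable right now nor present in the local Hugging Face cache. A quick way to see what is cached (a sketch using huggingface_hub's cache scanner):

    from huggingface_hub import scan_cache_dir

    # list cached repos and their sizes; check whether the laion CLIP
    # checkpoint the node needs is among them
    for repo in scan_cache_dir().repos:
        print(repo.repo_id, f"{repo.size_on_disk / 1e9:.2f} GB")

If it is missing, pre-downloading it on a working connection (see the hf_hub_download sketch earlier) avoids the load-time fetch.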

ComfyUI after update


Cannot import D:\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-CCSR module for custom nodes: 'ComfyUIManagerLogger' object has no attribute 'isatty'

Error occurred when executing CCSR_Upscale:

Error occurred when executing CCSR_Upscale:

No operator found for memory_efficient_attention_forward with inputs:
query : shape=(5, 4096, 1, 64) (torch.float16)
key : shape=(5, 4096, 1, 64) (torch.float16)
value : shape=(5, 4096, 1, 64) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
decoderF is not supported because:
xFormers wasn't build with CUDA support
attn_bias type is <class 'NoneType'>
operator wasn't built - see python -m xformers.info for more info
flshattF@0.0.0 is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
triton is not available
cutlassF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
xFormers wasn't build with CUDA support
dtype=torch.float16 (supported: {torch.float32})
operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 64

What should I do? I don't have much experience with Python or installing things via the console, only basic commands from git and such, so I don't understand anything from this error 🥲
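For context, the log means xformers is installed but was built without CUDA kernels, so memory_efficient_attention has no usable backend; reinstalling a CUDA build of xformers (or removing it so ComfyUI falls back to built-in attention) is the usual cure. Functionally, the failing call is equivalent to PyTorch's own SDPA, as in this sketch (note xformers uses a (batch, seq, heads, dim) layout while SDPA wants (batch, heads, seq, dim), hence the transposes):

    import torch
    import torch.nn.functional as F

    q = torch.randn(5, 4096, 1, 64, dtype=torch.float16, device="cuda")
    k, v = torch.randn_like(q), torch.randn_like(q)
    # equivalent of xformers.ops.memory_efficient_attention(q, k, v)
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
    ).transpose(1, 2)
    print(out.shape)  # torch.Size([5, 4096, 1, 64])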

Error occurred when executing SamplerCustom:

Error occurred when executing SamplerCustom:

The size of tensor a (768) must match the size of tensor b (1280) at non-singleton dimension 1

File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 271, in sample
samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 113, in sample_custom
samples = comfy.samplers.sample(real_model, noise, positive_copy, negative_copy, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 595, in sample
positive = encode_model_conds(model.extra_conds, positive, noise, device, "positive", latent_image=latent_image, denoise_mask=denoise_mask, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 499, in encode_model_conds
out = model_function(**params)
^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 158, in extra_conds
adm = self.encode_adm(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 312, in encode_adm
clip_pooled = sdxl_pooled(kwargs, self.noise_augmentor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 274, in sdxl_pooled
return unclip_adm(args.get("unclip_conditioning", None), args["device"], noise_augmentor, seed=args.get("seed", 0) - 10)[:,:1280]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 244, in unclip_adm
c_adm, noise_level_emb = noise_augmentor(adm_cond.to(device), noise_level=torch.tensor([noise_level], device=device), seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\encoders\noise_aug_modules.py", line 31, in forward

I'm getting this error. I believe it's a model problem, but I have no clue.

Error during install: triton version not found

Trying to install CCSR through ComfyUI Manager, the installation returns this message.
Is there any solution?

## Execute install/(de)activation script for 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\CCSR'
[!] ERROR: Could not find a version that satisfies the requirement triton==2.0.0 (from versions: none)
[!] ERROR: No matching distribution found for triton==2.0.0
[!]
[!] [notice] A new release of pip is available: 23.3.1 -> 24.0
[!] [notice] To update, run: D:\ComfyUI_windows_portable\python_embeded\python.exe -m pip install --upgrade pip
install/(de)activation script failed: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\CCSR

ERROR:AttributeError: module 'comfy.model_management' has no attribute 'unload_all_models'

When I use this node, my ComfyUI gets this error:
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/home/jienigui/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/jienigui/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/jienigui/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/jienigui/ComfyUI/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/jienigui/ComfyUI/custom_nodes/ComfyUI-CCSR/nodes.py", line 69, in process
comfy.model_management.unload_all_models()
AttributeError: module 'comfy.model_management' has no attribute 'unload_all_models'
I have downloaded the model real-world_ccsr.ckpt into ComfyUI/models/checkpoints.
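unload_all_models was added to comfy.model_management in newer ComfyUI builds, so updating ComfyUI itself is the proper fix. As a stopgap, a hedged compatibility guard could be patched into nodes.py; the soft_empty_cache fallback is an assumption about what an older build exposes:

# Minimal sketch: guard against older ComfyUI builds that lack
# unload_all_models. Updating ComfyUI is the real fix.
import comfy.model_management as mm

if hasattr(mm, "unload_all_models"):
    mm.unload_all_models()
else:
    mm.soft_empty_cache()  # weaker fallback; assumption about the older API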

Can't import in ComfyUI

Traceback (most recent call last):
File "G:\AI\ComfyUI-independent\ComfyUI\nodes.py", line 1888, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "G:\AI\ComfyUI-independent\ComfyUI\custom_nodes\ComfyUI-CCSR\__init__.py", line 1, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "G:\AI\ComfyUI-independent\ComfyUI\custom_nodes\ComfyUI-CCSR\nodes.py", line 8, in <module>
from .model.ccsr_stage1 import ControlLDM
File "G:\AI\ComfyUI-independent\ComfyUI\custom_nodes\ComfyUI-CCSR\model\ccsr_stage1.py", line 18, in <module>
from ..ldm.models.diffusion.ddpm_ccsr_stage1 import LatentDiffusion
File "G:\AI\ComfyUI-independent\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\models\diffusion\ddpm_ccsr_stage1.py", line 12, in <module>
import pytorch_lightning as pl
File "G:\AI\ComfyUI-independent\python_embeded\Lib\site-packages\pytorch_lightning\__init__.py", line 27, in <module>
from pytorch_lightning.callbacks import Callback # noqa: E402
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\AI\ComfyUI-independent\python_embeded\Lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 24, in <module>
from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint
File "G:\AI\ComfyUI-independent\python_embeded\Lib\site-packages\pytorch_lightning\callbacks\model_checkpoint.py", line 37, in <module>
from lightning_fabric.utilities.cloud_io import _is_dir, _is_local_file_protocol, get_filesystem
ImportError: cannot import name '_is_local_file_protocol' from 'lightning_fabric.utilities.cloud_io' (G:\AI\ComfyUI-independent\python_embeded\Lib\site-packages\lightning_fabric\utilities\cloud_io.py)

Cannot import G:\AI\ComfyUI-independent\ComfyUI\custom_nodes\ComfyUI-CCSR module for custom nodes: cannot import name '_is_local_file_protocol' from 'lightning_fabric.utilities.cloud_io' (G:\AI\ComfyUI-independent\python_embeded\Lib\site-packages\lightning_fabric\utilities\cloud_io.py)

What should I do?
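This report and the two that follow all stem from the same root cause: pytorch_lightning and lightning_fabric installed from mismatched releases, so one imports private names the other no longer (or does not yet) define. A minimal diagnostic sketch to confirm the mismatch:

# Minimal sketch: print the installed versions of the two packages; if
# they come from different releases, these ImportErrors are expected.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("pytorch-lightning", "lightning-fabric"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed as its own distribution")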

import failing - new issue

Getting this error when trying to install (I'm running Python 3.11):

 File "C:\AI\ComfyUI\python_embeded\Lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 14, in <module>
    from pytorch_lightning.callbacks.batch_size_finder import BatchSizeFinder
  File "C:\AI\ComfyUI\python_embeded\Lib\site-packages\pytorch_lightning\callbacks\batch_size_finder.py", line 26, in <module>
    from pytorch_lightning.callbacks.callback import Callback
  File "C:\AI\ComfyUI\python_embeded\Lib\site-packages\pytorch_lightning\callbacks\callback.py", line 22, in <module>
    from pytorch_lightning.utilities.types import STEP_OUTPUT
  File "C:\AI\ComfyUI\python_embeded\Lib\site-packages\pytorch_lightning\utilities\__init__.py", line 18, in <module>
    from lightning_fabric.utilities import (
ImportError: cannot import name 'measure_flops' from 'lightning_fabric.utilities' (C:\AI\ComfyUI\python_embeded\Lib\site-packages\lightning_fabric\utilities\__init__.py)

Cannot import C:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-CCSR module for custom nodes: cannot import name 'measure_flops' from 'lightning_fabric.utilities' (C:\AI\ComfyUI\python_embeded\Lib\site-packages\lightning_fabric\utilities\__init__.py)

cannot import name '_TORCH_GREATER_EQUAL_1_13' from 'lightning_fabric.utilities.

Traceback (most recent call last):
File "D:\ComfyUI\nodes.py", line 1893, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\ComfyUI\custom_nodes\ComfyUI-CCSR\__init__.py", line 1, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "D:\ComfyUI\custom_nodes\ComfyUI-CCSR\nodes.py", line 8, in <module>
from .model.ccsr_stage1 import ControlLDM
File "D:\ComfyUI\custom_nodes\ComfyUI-CCSR\model\ccsr_stage1.py", line 18, in <module>
from ..ldm.models.diffusion.ddpm_ccsr_stage1 import LatentDiffusion
File "D:\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\models\diffusion\ddpm_ccsr_stage1.py", line 12, in <module>
import pytorch_lightning as pl
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\__init__.py", line 27, in <module>
from pytorch_lightning.callbacks import Callback # noqa: E402
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 29, in <module>
from pytorch_lightning.callbacks.pruning import ModelPruning
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\callbacks\pruning.py", line 31, in <module>
from pytorch_lightning.core.module import LightningModule
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\core\__init__.py", line 16, in <module>
from pytorch_lightning.core.module import LightningModule
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\core\module.py", line 62, in <module>
from pytorch_lightning.trainer import call
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\trainer\__init__.py", line 17, in <module>
from pytorch_lightning.trainer.trainer import Trainer
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 46, in <module>
from pytorch_lightning.loops import _PredictionLoop, TrainingEpochLoop
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\loops\__init__.py", line 15, in <module>
from pytorch_lightning.loops.evaluation_loop import _EvaluationLoop # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\loops\evaluation_loop.py", line 29, in <module>
from pytorch_lightning.loops.utilities import _no_grad_context, _select_data_fetcher, _verify_dataloader_idx_requirement
File "D:\ComfyUI\venv\Lib\site-packages\pytorch_lightning\loops\utilities.py", line 24, in <module>
from lightning_fabric.utilities.imports import _TORCH_EQUAL_2_0, _TORCH_GREATER_EQUAL_1_13
ImportError: cannot import name '_TORCH_GREATER_EQUAL_1_13' from 'lightning_fabric.utilities.imports' (D:\ComfyUI\venv\Lib\site-packages\lightning_fabric\utilities\imports.py)

Cannot import D:\ComfyUI\custom_nodes\ComfyUI-CCSR module for custom nodes: cannot import name '_TORCH_GREATER_EQUAL_1_13' from 'lightning_fabric.utilities.imports' (D:\ComfyUI\venv\Lib\site-packages\lightning_fabric\utilities\imports.py)
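Same root cause as the two reports above. Reinstalling pytorch-lightning so that a matching lightning_fabric comes with it usually resolves this; a sketch of doing so from the interpreter ComfyUI actually uses (the 2.1.* pin is an assumption, not a version verified against this repo's requirements):

# Minimal sketch: reinstall pytorch-lightning with the same interpreter
# that runs ComfyUI, pulling in a matching lightning_fabric.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--upgrade",
    "pytorch-lightning==2.1.*",  # assumed pin; check requirements.txt
])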

Input type (struct c10::Half) and bias type (float) should be the same

Hello, it gives an error and I can't figure out what the problem is:

Error occurred when executing CCSR_Upscale:

Input type (struct c10::Half) and bias type (float) should be the same

File "Z:\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-CCSR\nodes.py", line 120, in process
samples = sampler.sample_with_mixdiff_ccsr(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-CCSR\model\q_sampler.py", line 718, in sample_with_mixdiff_ccsr
"c_latent": [self.model.apply_condition_encoder(tile_cond_img)],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-CCSR\model\ccsr_stage2.py", line 320, in apply_condition_encoder
c_latent_meanvar = self.cond_encoder(control * 2 - 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 520, in forward
h = self.conv_in(x)
^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\ComfyUI\comfy\ops.py", line 42, in forward
return super().forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
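This is the classic fp16-input-into-fp32-module mismatch: the tile tensor arrives in half precision while the condition encoder's weights are float32. A minimal sketch of the usual fix, casting the input to the module's parameter dtype before the call (the function name is illustrative, not CCSR's actual API):

# Minimal sketch: cast the conditioning image to the encoder's own dtype
# so conv2d sees matching input and bias types. Names are illustrative.
import torch

def encode_condition_safe(cond_encoder: torch.nn.Module,
                          control: torch.Tensor) -> torch.Tensor:
    param_dtype = next(cond_encoder.parameters()).dtype
    return cond_encoder(control.to(param_dtype) * 2 - 1)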
