comfyui-custom-scripts's Issues

Limit the trigger of autocomplete window a lil bit?

The new autocomplete feature is awesome!
But would it be possible to limit when it appears a little bit? For example, if I just navigate through my prompt with the arrow keys, it appears even when I'm just passing through a word, and for certain directions (up/down) it even hinders further movement.

For example, here I am pressing only arrow keys (mostly only up/down):

NewVideo1-1.mp4

So movement through the text gets "stuck" in the autocomplete.

I'm not sure what's best; maybe exclude the arrow keys from triggering the initial appearance?

FaceDetailer Broken after installation

I had no issues using my workflow; then I installed this extension and now I get an error when trying to use FaceDetailer. I've tried it in a few different known-good workflows and it always breaks.

Error occurred when executing FaceDetailer:

Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]

File "E:\ComfyXL\ComfyUI\execution.py", line 144, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyXL\ComfyUI\execution.py", line 74, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyXL\ComfyUI\execution.py", line 67, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyXL\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 843, in doit
enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask = FaceDetailer.enhance_face(
File "E:\ComfyXL\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 806, in enhance_face
segs = bbox_detector.detect(image, bbox_threshold, bbox_dilation, bbox_crop_factor, drop_size)
File "E:\ComfyXL\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\subpack\impact\subcore.py", line 93, in detect
detected_results = inference_bbox(self.bbox_model, core.tensor2pil(image), threshold)
File "E:\ComfyXL\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\subpack\impact\subcore.py", line 27, in inference_bbox
pred = model(image, conf=confidence, device=device)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\model.py", line 97, in __call__
return self.predict(source, stream, **kwargs)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\model.py", line 245, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\predictor.py", line 195, in __call__
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
response = gen.send(None)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\predictor.py", line 255, in stream_inference
self.results = self.postprocess(preds, im, im0s)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\models\yolo\detect\predict.py", line 14, in postprocess
preds = ops.non_max_suppression(preds,
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\utils\ops.py", line 261, in non_max_suppression
i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_ops.py", line 502, in __call__
return self._op(*args, **kwargs or {})
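For reference, this error usually means the installed torchvision wheel has no CUDA kernels, i.e. a CPU-only torchvision ended up next to a CUDA-enabled torch (custom-node installs sometimes reinstall torchvision from the default index). A minimal sketch of the check, assuming pip wheels that carry local build tags (the version strings below are illustrative):

```python
def cuda_tags_match(torch_ver: str, torchvision_ver: str) -> bool:
    """PyTorch pip wheels carry a local build tag after '+' (e.g. '+cu118'
    or '+cpu'); a '+cpu' torchvision beside a '+cu118' torch cannot dispatch
    torchvision::nms on the CUDA backend."""
    def tag(version: str) -> str:
        return version.split("+", 1)[1] if "+" in version else ""
    return tag(torch_ver) == tag(torchvision_ver)

# Illustrative version strings:
cuda_tags_match("2.0.1+cu118", "0.15.2+cu118")  # -> True
cuda_tags_match("2.0.1+cu118", "0.15.2+cpu")    # -> False (this failure mode)
```

If the tags differ (compare `torch.__version__` and `torchvision.__version__`), reinstalling torchvision from the same CUDA index as torch normally resolves it.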

Changed - remaining issue is image feed continues to pop up, Favicons don't show on browser

Hello, I just updated ComfyUI-Custom-Scripts yesterday. I noticed that I no longer have the favicon showing on the browser tab, and my preferences for node wiring are not being saved. Also, my image feed has only white rectangles for the left and right toggles, where I believe there used to be white arrows. I would like a way to turn off the image feed altogether: while I do use other features, I find the image feed unneeded.

Custom notes issues

Thanks a lot for these scripts and especially for the recently added custom notes on Loras, these are very useful!

I found a few issues with the custom notes feature.

If my note contains:

goompa
https://civitai.com/models/55475/sxz-slavic-fantasy-style

The link is not clickable. It would be nice if it were.

Additionally, if I change the note to be:

https://civitai.com/models/55475/sxz-slavic-fantasy-style
goompa

, then the link is clickable, but "goompa" is concatenated onto the URL:

image

I assume this is unintentional?

(I realize there is no need for the civitai link in this particular case, since your extension detects it automatically, but I would still like to be able to store links in a custom note for cases where a lora is only on Hugging Face or some other site.)
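For reference, the concatenation suggests the link matcher runs past line boundaries. A minimal sketch of the intended behaviour (the regex and names are assumptions for illustration, not the extension's actual code), where the URL match stops at the first whitespace character:

```python
import re

# Hypothetical linkifier pattern: stop the URL at whitespace, so a
# following line like "goompa" is not swallowed into the link.
URL_RE = re.compile(r"https?://\S+")

note = "https://civitai.com/models/55475/sxz-slavic-fantasy-style\ngoompa"
URL_RE.findall(note)
# -> ['https://civitai.com/models/55475/sxz-slavic-fantasy-style']
```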

As a side question, are you also planning to add the civitai link and custom notes to your checkpoint loader too? :)

"glob() got an unexpected keyword argument 'root_dir'"

Hi, I love these tools, but for some reason I can't get them to load when using Paperspace. I used the ComfyUI Manager to install, and when running ComfyUI I get:

Traceback (most recent call last):
  File "/notebooks/ComfyUI/nodes.py", line 1693, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/notebooks/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts/__init__.py", line 12, in <module>
    files = glob.glob("*.py", root_dir=py, recursive=False)
TypeError: glob() got an unexpected keyword argument 'root_dir'

Cannot import /notebooks/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts module for custom nodes: glob() got an unexpected keyword argument 'root_dir'

Import times for custom nodes:
   0.0 seconds: /notebooks/ComfyUI/custom_nodes/ComfyUI-Bmad-DirtyUndoRedo
   0.0 seconds (IMPORT FAILED): /notebooks/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
   0.0 seconds: /notebooks/ComfyUI/custom_nodes/SeargeSDXL
   0.0 seconds: /notebooks/ComfyUI/custom_nodes/comfyui_controlnet_aux
   0.1 seconds: /notebooks/ComfyUI/custom_nodes/ComfyUI-Manager
   0.6 seconds: /notebooks/ComfyUI/custom_nodes/efficiency-nodes-comfyui
   1.2 seconds: /notebooks/ComfyUI/custom_nodes/comfy_controlnet_preprocessors

Previously I would get this error but still have access to a few options like your image feed. Now it seems like none of your scripts show up. Any idea how to get around this 'root_dir' issue on Paperspace?

thanks!
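For reference, `glob.glob`'s `root_dir` keyword was only added in Python 3.10, so older interpreters (common on hosted notebooks such as Paperspace) raise exactly this TypeError. A backwards-compatible sketch (the function name is illustrative):

```python
import glob
import os

def list_py_files(py_dir):
    """Equivalent of glob.glob('*.py', root_dir=py_dir) on Python < 3.10:
    join the directory into the pattern, then strip it from the matches."""
    pattern = os.path.join(py_dir, "*.py")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```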

Can no longer Right-Click as of f819039

Last working commit: f6c29ac

ComfyUI version 95d796fc85608272a9bf06a8c6c1f45912179118
Python 3.11.3

Just walked down the tree, and as of f819039 I stopped being able to right click (confirmed on two different workstations/browsers).

I'm guessing it has something to do with that slick new js/contextMenuHook.js, but I can't say for sure. Nothing is showing up in the logs, so I'm not sure what I can provide that would be useful. Happy to give more if you point me in the right direction.

On f819039 it is technically working, but an odd quirk now is that when you go down a menu you have to click it (hovering won't spawn the submenu), and when you click it, it kills the old menu and spawns the submenu in its place.

Cannot import nodes

Hello author, the following problem occurred during installation and startup. How can I solve it?
This issue does not occur when using ComfyUI Revision 1218, however.
——————

ComfyUI Revision: 1338 [e3d0a9a4]

Traceback (most recent call last):
File "/root/ComfyUI/nodes.py", line 1693, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/root/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts/__init__.py", line 12, in <module>
files = glob.glob("*.py", root_dir=py, recursive=False)
TypeError: glob() got an unexpected keyword argument 'root_dir'

Cannot import /root/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts module for custom nodes: glob() got an unexpected keyword argument 'root_dir'

I need Install.py

It would be convenient to have an install.py, so installing this is as easy as other custom nodes and we can quickly set up all the scripts.

how to install ?

What is the installation procedure? Which directory should the scripts go in?

A few small issues with viewing lora info

First, it doesn't seem to be able to load the metadata from all models. This is fairly rare, but it affects a few models and I don't know what's different about them. They can be read in A1111 (well, SD.Next to be exact, but it should behave the same).
I get the following error message in the console when hitting the View Info button:

ERROR:aiohttp.server:Error handling request
Traceback (most recent call last):
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\aiohttp\web_protocol.py", line 433, in _handle_request
    resp = await request_handler(request)
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\aiohttp\web_app.py", line 504, in _handle
    resp = await handler(request)
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\aiohttp\web_middlewares.py", line 117, in impl
    return await handler(request)
  File "C:\AI\ComfyUI\ComfyUI\server.py", line 43, in cache_control
    response: web.Response = await handler(request)
  File "C:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\py\lora_info.py", line 52, in load_metadata
    meta["pysssss.sha256"] = hashlib.sha256(f.read()).hexdigest()
TypeError: 'NoneType' object does not support item assignment
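For reference, the crash suggests the safetensors header for these files has no `__metadata__` entry, so `meta` comes back as `None` before the sha256 is assigned into it. A hedged sketch of a tolerant reader (not the extension's actual code); safetensors files start with an 8-byte little-endian header length followed by a JSON header whose `__metadata__` key is optional:

```python
import hashlib
import json
import struct

def load_metadata(path):
    """Read the safetensors JSON header; fall back to an empty dict when
    __metadata__ is absent, instead of assigning into None."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
        meta = header.get("__metadata__") or {}
        f.seek(0)
        meta["pysssss.sha256"] = hashlib.sha256(f.read()).hexdigest()
    return meta
```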

Also, when there's no metadata at all I get the following error message in the Comfy log:

TypeError
Cannot destructure property 'buckets' of 'JSON.parse(...)' as it is null.
TypeError: Cannot destructure property 'buckets' of 'JSON.parse(...)' as it is null.
    at get resolutions [as resolutions] (http://127.0.0.1:8188/extensions/pysssss/CustomScripts/loraInfo.js:54:12)
    at new LoraInfoDialog (http://127.0.0.1:8188/extensions/pysssss/CustomScripts/loraInfo.js:143:10)
    at HTMLDivElement.callback (http://127.0.0.1:8188/extensions/pysssss/CustomScripts/loraInfo.js:304:7)

Submenus not working

Hello,

I have enabled the "Enable submenu in Custom Nodes" option
image

But when I am trying to select LoRAs/Checkpoints, it is still in a plain list:
image

Am I missing some configuration?

math expression node question: add support for "round" and/or make a rounding node

First, thank you for making these. Secondly, other than PEMDAS, what other math can I do with this node?

The reason is that I am making a node setup that needs to make an image divisible by 16 on each side before passing it to P2LDGAN to create line art. It seems that a Python-style round() won't work inside this node.

I used the Math Expression node to try to do "round(a / 16) * 16" for the X and Y, after first resizing the image to 1024 on the X side. In this way, I hope to deliver to P2LDGAN a number divisible by 16, and to use this integer for the latent image size as well.

I realize that rounding happens when the value is converted from a float to an int, but that doesn't work for this situation. For example, the image shown here, of a Chun-Li I drew and want to restyle, reduces (without an initial round) to 680x1024, and 680 divided by 16 is 42.5.

My nodes look like so:

image

And when I run them, I get the error below in the console. It dies at the Math Expression node due to the round(). If I remove round, it progresses, but then the number isn't divisible by 16 anymore.

Any idea on how I might round the value of (a/16)?

got prompt
X val Float is:: 1024.0
Y val Float is:: 1542.0
!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\py\math_expression.py", line 86, in evaluate
    r = eval_expr(node)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\py\math_expression.py", line 70, in eval_expr
    return operators[type(node.op)](eval_expr(node.left), eval_expr(node.right))
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts\py\math_expression.py", line 84, in eval_expr
    raise TypeError(node)
TypeError: <ast.Call object at 0x0000029285A96A70>
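The arithmetic the expression is trying to do can be sketched in plain Python (note that Python's built-in round() uses banker's rounding, so 42.5 rounds to the even 42, not 43):

```python
def to_multiple_of_16(x):
    """Snap a dimension to the nearest multiple of 16. To force a direction
    instead, use int(x / 16) * 16 (down) or -(-int(x) // 16) * 16 (up)."""
    return round(x / 16) * 16

to_multiple_of_16(680)   # -> 672 (42.5 rounds to even: 42 * 16)
to_multiple_of_16(1542)  # -> 1536
```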

BUG: Image Feed, Firefox won't close Image at X

After clicking on one of the images in the image feed to bring it up, I cannot get the new pop-up view to close. The red X in the top right of the canvas/workspace appears but doesn't respond when I click it.

When I hit the red X for the image feed bar itself, the bar still closes, but the image remains.

Instead, when I click or drag, the only thing that reacts is the default ComfyUI menu with the Queue Prompt button. Esc does nothing, and the buttons on that menu don't respond. I can still generate a new image with Ctrl+Enter, so the page is responsive, but I can't close the popped-up image.

I can toggle between the images in the image feed, but the side arrows are not interactive either.

My pop-up blocker is disabled. Flash is enabled. Latest version of Firefox.

Thanks for looking into this. I really like the idea of having some sort of image browsing built into ComfyUI.

Import isn't failing but some of the nodes are just missing

I have access to a lot of the features and it doesn't show any errors on loading, but I'm missing the sound node. The lora loader shows up in the menu when I right-click on a checkpoint loader, but it won't actually add the lora. They also don't show up in the node search menu, except for PresetText | pysssss, which loads fine. Weird issue, but this is my favorite extension!

[Error] Loading aborted due to error reloading workflow data

I'm trying to get my local installation of ComfyUI up to date after using it in Colab for some time. Whenever I try to load a workflow, I get the following error. I am using Win11 and the most up-to-date version of Custom Scripts as of writing this (32c9381). Besides Custom Scripts, I'm using the following custom nodes:

Loading aborted due to error reloading workflow data

TypeError: can't access property "submenu", v is null

set@http://127.0.0.1:8188/extensions/pysssss/CustomScripts/betterCombos.js:160:11
LGraphNode.prototype.configure@http://127.0.0.1:8188/lib/litegraph.core.js:2542:7
LGraph.prototype.configure@http://127.0.0.1:8188/lib/litegraph.core.js:2240:26
init/LGraph.prototype.configure@http://127.0.0.1:8188/extensions/pysssss/CustomScripts/reroutePrimitive.js:14:29
init/LGraph.prototype.configure@http://127.0.0.1:8188/extensions/pysssss/CustomScripts/snapToGrid.js:53:21
loadGraphData@http://127.0.0.1:8188/scripts/app.js:1227:15
handleFile/reader.onload@http://127.0.0.1:8188/scripts/app.js:1535:10

This may be due to the following script:
/extensions/pysssss/CustomScripts/betterCombos.js

Edit: it seems to only happen when I load some workflows, this one for example.

Edit 2: It has something to do with the LoraLoader. I don't have the Lora that I used in the workflow installed locally, and even after installing it, it didn't work; only after removing the LoraLoader could I import the workflow locally.
image

Anime-Segmentation Node Crashing

Hi, thanks for your scripts. This anime-segmentation one seems very useful, but I'm getting this error. Do you have any idea why?

Traceback (most recent call last):
  File "C:\Users\Marcus\Documents\Ferramentas\ComfyUI\ComfyUI\execution.py", line 174, in execute
    executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
  File "C:\Users\Marcus\Documents\Ferramentas\ComfyUI\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "C:\Users\Marcus\Documents\Ferramentas\ComfyUI\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "C:\Users\Marcus\Documents\Ferramentas\ComfyUI\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  [Previous line repeated 3 more times]
  File "C:\Users\Marcus\Documents\Ferramentas\ComfyUI\ComfyUI\execution.py", line 63, in recursive_execute
    outputs[unique_id] = getattr(obj, obj.FUNCTION)(**input_data_all)
  File "C:\Users\Marcus\Documents\Ferramentas\ComfyUI\ComfyUI\custom_nodes\anime_segmentation.py", line 59, in segment
    model = AnimeSegmentation.try_load(net, ckpt, device)
  File "C:\Users\Marcus\Documents\Ferramentas\ComfyUI\ComfyUI\comfy_extras\anime_segmentation\train.py", line 72, in try_load
    model.load_state_dict(state_dict)
  File "C:\Users\Marcus\Documents\Ferramentas\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AnimeSegmentation:
        Unexpected key(s) in state_dict: "gt_encoder.conv_in.conv.weight", "gt_encoder.conv_in.conv.bias", "gt_encoder.conv_in.bn.weight", "gt_encoder.conv_in.bn.bias", "gt_encoder.conv_in.bn.running_mean", "gt_encoder.conv_in.bn.running_var", "gt_encoder.conv_in.bn.num_batches_tracked", "gt_encoder.stage1.rebnconvin.conv_s1.weight", "gt_encoder.stage1.rebnconvin.conv_s1.bias", "gt_encoder.stage1.rebnconvin.bn_s1.weight", "gt_encoder.stage1.rebnconvin.bn_s1.bias", "gt_encoder.stage1.rebnconvin.bn_s1.running_mean", "gt_encoder.stage1.rebnconvin.bn_s1.running_var", "gt_encoder.stage1.rebnconvin.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv1.conv_s1.weight", "gt_encoder.stage1.rebnconv1.conv_s1.bias", "gt_encoder.stage1.rebnconv1.bn_s1.weight", "gt_encoder.stage1.rebnconv1.bn_s1.bias", "gt_encoder.stage1.rebnconv1.bn_s1.running_mean", "gt_encoder.stage1.rebnconv1.bn_s1.running_var", "gt_encoder.stage1.rebnconv1.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv2.conv_s1.weight", "gt_encoder.stage1.rebnconv2.conv_s1.bias", "gt_encoder.stage1.rebnconv2.bn_s1.weight", "gt_encoder.stage1.rebnconv2.bn_s1.bias", "gt_encoder.stage1.rebnconv2.bn_s1.running_mean", "gt_encoder.stage1.rebnconv2.bn_s1.running_var", "gt_encoder.stage1.rebnconv2.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv3.conv_s1.weight", "gt_encoder.stage1.rebnconv3.conv_s1.bias", "gt_encoder.stage1.rebnconv3.bn_s1.weight", "gt_encoder.stage1.rebnconv3.bn_s1.bias", "gt_encoder.stage1.rebnconv3.bn_s1.running_mean", "gt_encoder.stage1.rebnconv3.bn_s1.running_var", "gt_encoder.stage1.rebnconv3.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv4.conv_s1.weight", "gt_encoder.stage1.rebnconv4.conv_s1.bias", "gt_encoder.stage1.rebnconv4.bn_s1.weight", "gt_encoder.stage1.rebnconv4.bn_s1.bias", "gt_encoder.stage1.rebnconv4.bn_s1.running_mean", "gt_encoder.stage1.rebnconv4.bn_s1.running_var", "gt_encoder.stage1.rebnconv4.bn_s1.num_batches_tracked", 
"gt_encoder.stage1.rebnconv5.conv_s1.weight", "gt_encoder.stage1.rebnconv5.conv_s1.bias", "gt_encoder.stage1.rebnconv5.bn_s1.weight", "gt_encoder.stage1.rebnconv5.bn_s1.bias", "gt_encoder.stage1.rebnconv5.bn_s1.running_mean", "gt_encoder.stage1.rebnconv5.bn_s1.running_var", "gt_encoder.stage1.rebnconv5.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv6.conv_s1.weight", "gt_encoder.stage1.rebnconv6.conv_s1.bias", "gt_encoder.stage1.rebnconv6.bn_s1.weight", "gt_encoder.stage1.rebnconv6.bn_s1.bias", "gt_encoder.stage1.rebnconv6.bn_s1.running_mean", "gt_encoder.stage1.rebnconv6.bn_s1.running_var", "gt_encoder.stage1.rebnconv6.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv7.conv_s1.weight", "gt_encoder.stage1.rebnconv7.conv_s1.bias", "gt_encoder.stage1.rebnconv7.bn_s1.weight", "gt_encoder.stage1.rebnconv7.bn_s1.bias", "gt_encoder.stage1.rebnconv7.bn_s1.running_mean", "gt_encoder.stage1.rebnconv7.bn_s1.running_var", "gt_encoder.stage1.rebnconv7.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv6d.conv_s1.weight", "gt_encoder.stage1.rebnconv6d.conv_s1.bias", "gt_encoder.stage1.rebnconv6d.bn_s1.weight", "gt_encoder.stage1.rebnconv6d.bn_s1.bias", "gt_encoder.stage1.rebnconv6d.bn_s1.running_mean", "gt_encoder.stage1.rebnconv6d.bn_s1.running_var", "gt_encoder.stage1.rebnconv6d.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv5d.conv_s1.weight", "gt_encoder.stage1.rebnconv5d.conv_s1.bias", "gt_encoder.stage1.rebnconv5d.bn_s1.weight", "gt_encoder.stage1.rebnconv5d.bn_s1.bias", "gt_encoder.stage1.rebnconv5d.bn_s1.running_mean", "gt_encoder.stage1.rebnconv5d.bn_s1.running_var", "gt_encoder.stage1.rebnconv5d.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv4d.conv_s1.weight", "gt_encoder.stage1.rebnconv4d.conv_s1.bias", "gt_encoder.stage1.rebnconv4d.bn_s1.weight", "gt_encoder.stage1.rebnconv4d.bn_s1.bias", "gt_encoder.stage1.rebnconv4d.bn_s1.running_mean", "gt_encoder.stage1.rebnconv4d.bn_s1.running_var", 
"gt_encoder.stage1.rebnconv4d.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv3d.conv_s1.weight", "gt_encoder.stage1.rebnconv3d.conv_s1.bias", "gt_encoder.stage1.rebnconv3d.bn_s1.weight", "gt_encoder.stage1.rebnconv3d.bn_s1.bias", "gt_encoder.stage1.rebnconv3d.bn_s1.running_mean", "gt_encoder.stage1.rebnconv3d.bn_s1.running_var", "gt_encoder.stage1.rebnconv3d.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv2d.conv_s1.weight", "gt_encoder.stage1.rebnconv2d.conv_s1.bias", "gt_encoder.stage1.rebnconv2d.bn_s1.weight", "gt_encoder.stage1.rebnconv2d.bn_s1.bias", "gt_encoder.stage1.rebnconv2d.bn_s1.running_mean", "gt_encoder.stage1.rebnconv2d.bn_s1.running_var", "gt_encoder.stage1.rebnconv2d.bn_s1.num_batches_tracked", "gt_encoder.stage1.rebnconv1d.conv_s1.weight", "gt_encoder.stage1.rebnconv1d.conv_s1.bias", "gt_encoder.stage1.rebnconv1d.bn_s1.weight", "gt_encoder.stage1.rebnconv1d.bn_s1.bias", "gt_encoder.stage1.rebnconv1d.bn_s1.running_mean", "gt_encoder.stage1.rebnconv1d.bn_s1.running_var", "gt_encoder.stage1.rebnconv1d.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconvin.conv_s1.weight", "gt_encoder.stage2.rebnconvin.conv_s1.bias", "gt_encoder.stage2.rebnconvin.bn_s1.weight", "gt_encoder.stage2.rebnconvin.bn_s1.bias", "gt_encoder.stage2.rebnconvin.bn_s1.running_mean", "gt_encoder.stage2.rebnconvin.bn_s1.running_var", "gt_encoder.stage2.rebnconvin.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv1.conv_s1.weight", "gt_encoder.stage2.rebnconv1.conv_s1.bias", "gt_encoder.stage2.rebnconv1.bn_s1.weight", "gt_encoder.stage2.rebnconv1.bn_s1.bias", "gt_encoder.stage2.rebnconv1.bn_s1.running_mean", "gt_encoder.stage2.rebnconv1.bn_s1.running_var", "gt_encoder.stage2.rebnconv1.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv2.conv_s1.weight", "gt_encoder.stage2.rebnconv2.conv_s1.bias", "gt_encoder.stage2.rebnconv2.bn_s1.weight", "gt_encoder.stage2.rebnconv2.bn_s1.bias", "gt_encoder.stage2.rebnconv2.bn_s1.running_mean", 
"gt_encoder.stage2.rebnconv2.bn_s1.running_var", "gt_encoder.stage2.rebnconv2.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv3.conv_s1.weight", "gt_encoder.stage2.rebnconv3.conv_s1.bias", "gt_encoder.stage2.rebnconv3.bn_s1.weight", "gt_encoder.stage2.rebnconv3.bn_s1.bias", "gt_encoder.stage2.rebnconv3.bn_s1.running_mean", "gt_encoder.stage2.rebnconv3.bn_s1.running_var", "gt_encoder.stage2.rebnconv3.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv4.conv_s1.weight", "gt_encoder.stage2.rebnconv4.conv_s1.bias", "gt_encoder.stage2.rebnconv4.bn_s1.weight", "gt_encoder.stage2.rebnconv4.bn_s1.bias", "gt_encoder.stage2.rebnconv4.bn_s1.running_mean", "gt_encoder.stage2.rebnconv4.bn_s1.running_var", "gt_encoder.stage2.rebnconv4.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv5.conv_s1.weight", "gt_encoder.stage2.rebnconv5.conv_s1.bias", "gt_encoder.stage2.rebnconv5.bn_s1.weight", "gt_encoder.stage2.rebnconv5.bn_s1.bias", "gt_encoder.stage2.rebnconv5.bn_s1.running_mean", "gt_encoder.stage2.rebnconv5.bn_s1.running_var", "gt_encoder.stage2.rebnconv5.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv6.conv_s1.weight", "gt_encoder.stage2.rebnconv6.conv_s1.bias", "gt_encoder.stage2.rebnconv6.bn_s1.weight", "gt_encoder.stage2.rebnconv6.bn_s1.bias", "gt_encoder.stage2.rebnconv6.bn_s1.running_mean", "gt_encoder.stage2.rebnconv6.bn_s1.running_var", "gt_encoder.stage2.rebnconv6.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv5d.conv_s1.weight", "gt_encoder.stage2.rebnconv5d.conv_s1.bias", "gt_encoder.stage2.rebnconv5d.bn_s1.weight", "gt_encoder.stage2.rebnconv5d.bn_s1.bias", "gt_encoder.stage2.rebnconv5d.bn_s1.running_mean", "gt_encoder.stage2.rebnconv5d.bn_s1.running_var", "gt_encoder.stage2.rebnconv5d.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv4d.conv_s1.weight", "gt_encoder.stage2.rebnconv4d.conv_s1.bias", "gt_encoder.stage2.rebnconv4d.bn_s1.weight", "gt_encoder.stage2.rebnconv4d.bn_s1.bias", 
"gt_encoder.stage2.rebnconv4d.bn_s1.running_mean", "gt_encoder.stage2.rebnconv4d.bn_s1.running_var", "gt_encoder.stage2.rebnconv4d.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv3d.conv_s1.weight", "gt_encoder.stage2.rebnconv3d.conv_s1.bias", "gt_encoder.stage2.rebnconv3d.bn_s1.weight", "gt_encoder.stage2.rebnconv3d.bn_s1.bias", "gt_encoder.stage2.rebnconv3d.bn_s1.running_mean", "gt_encoder.stage2.rebnconv3d.bn_s1.running_var", "gt_encoder.stage2.rebnconv3d.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv2d.conv_s1.weight", "gt_encoder.stage2.rebnconv2d.conv_s1.bias", "gt_encoder.stage2.rebnconv2d.bn_s1.weight", "gt_encoder.stage2.rebnconv2d.bn_s1.bias", "gt_encoder.stage2.rebnconv2d.bn_s1.running_mean", "gt_encoder.stage2.rebnconv2d.bn_s1.running_var", "gt_encoder.stage2.rebnconv2d.bn_s1.num_batches_tracked", "gt_encoder.stage2.rebnconv1d.conv_s1.weight", "gt_encoder.stage2.rebnconv1d.conv_s1.bias", "gt_encoder.stage2.rebnconv1d.bn_s1.weight", "gt_encoder.stage2.rebnconv1d.bn_s1.bias", "gt_encoder.stage2.rebnconv1d.bn_s1.running_mean", "gt_encoder.stage2.rebnconv1d.bn_s1.running_var", "gt_encoder.stage2.rebnconv1d.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconvin.conv_s1.weight", "gt_encoder.stage3.rebnconvin.conv_s1.bias", "gt_encoder.stage3.rebnconvin.bn_s1.weight", "gt_encoder.stage3.rebnconvin.bn_s1.bias", "gt_encoder.stage3.rebnconvin.bn_s1.running_mean", "gt_encoder.stage3.rebnconvin.bn_s1.running_var", "gt_encoder.stage3.rebnconvin.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv1.conv_s1.weight", "gt_encoder.stage3.rebnconv1.conv_s1.bias", "gt_encoder.stage3.rebnconv1.bn_s1.weight", "gt_encoder.stage3.rebnconv1.bn_s1.bias", "gt_encoder.stage3.rebnconv1.bn_s1.running_mean", "gt_encoder.stage3.rebnconv1.bn_s1.running_var", "gt_encoder.stage3.rebnconv1.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv2.conv_s1.weight", "gt_encoder.stage3.rebnconv2.conv_s1.bias", "gt_encoder.stage3.rebnconv2.bn_s1.weight", 
"gt_encoder.stage3.rebnconv2.bn_s1.bias", "gt_encoder.stage3.rebnconv2.bn_s1.running_mean", "gt_encoder.stage3.rebnconv2.bn_s1.running_var", "gt_encoder.stage3.rebnconv2.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv3.conv_s1.weight", "gt_encoder.stage3.rebnconv3.conv_s1.bias", "gt_encoder.stage3.rebnconv3.bn_s1.weight", "gt_encoder.stage3.rebnconv3.bn_s1.bias", "gt_encoder.stage3.rebnconv3.bn_s1.running_mean", "gt_encoder.stage3.rebnconv3.bn_s1.running_var", "gt_encoder.stage3.rebnconv3.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv4.conv_s1.weight", "gt_encoder.stage3.rebnconv4.conv_s1.bias", "gt_encoder.stage3.rebnconv4.bn_s1.weight", "gt_encoder.stage3.rebnconv4.bn_s1.bias", "gt_encoder.stage3.rebnconv4.bn_s1.running_mean", "gt_encoder.stage3.rebnconv4.bn_s1.running_var", "gt_encoder.stage3.rebnconv4.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv5.conv_s1.weight", "gt_encoder.stage3.rebnconv5.conv_s1.bias", "gt_encoder.stage3.rebnconv5.bn_s1.weight", "gt_encoder.stage3.rebnconv5.bn_s1.bias", "gt_encoder.stage3.rebnconv5.bn_s1.running_mean", "gt_encoder.stage3.rebnconv5.bn_s1.running_var", "gt_encoder.stage3.rebnconv5.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv4d.conv_s1.weight", "gt_encoder.stage3.rebnconv4d.conv_s1.bias", "gt_encoder.stage3.rebnconv4d.bn_s1.weight", "gt_encoder.stage3.rebnconv4d.bn_s1.bias", "gt_encoder.stage3.rebnconv4d.bn_s1.running_mean", "gt_encoder.stage3.rebnconv4d.bn_s1.running_var", "gt_encoder.stage3.rebnconv4d.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv3d.conv_s1.weight", "gt_encoder.stage3.rebnconv3d.conv_s1.bias", "gt_encoder.stage3.rebnconv3d.bn_s1.weight", "gt_encoder.stage3.rebnconv3d.bn_s1.bias", "gt_encoder.stage3.rebnconv3d.bn_s1.running_mean", "gt_encoder.stage3.rebnconv3d.bn_s1.running_var", "gt_encoder.stage3.rebnconv3d.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv2d.conv_s1.weight", "gt_encoder.stage3.rebnconv2d.conv_s1.bias", 
"gt_encoder.stage3.rebnconv2d.bn_s1.weight", "gt_encoder.stage3.rebnconv2d.bn_s1.bias", "gt_encoder.stage3.rebnconv2d.bn_s1.running_mean", "gt_encoder.stage3.rebnconv2d.bn_s1.running_var", "gt_encoder.stage3.rebnconv2d.bn_s1.num_batches_tracked", "gt_encoder.stage3.rebnconv1d.conv_s1.weight", "gt_encoder.stage3.rebnconv1d.conv_s1.bias", "gt_encoder.stage3.rebnconv1d.bn_s1.weight", "gt_encoder.stage3.rebnconv1d.bn_s1.bias", "gt_encoder.stage3.rebnconv1d.bn_s1.running_mean", "gt_encoder.stage3.rebnconv1d.bn_s1.running_var", "gt_encoder.stage3.rebnconv1d.bn_s1.num_batches_tracked", "gt_encoder.stage4.rebnconvin.conv_s1.weight", "gt_encoder.stage4.rebnconvin.conv_s1.bias", "gt_encoder.stage4.rebnconvin.bn_s1.weight", "gt_encoder.stage4.rebnconvin.bn_s1.bias", "gt_encoder.stage4.rebnconvin.bn_s1.running_mean", "gt_encoder.stage4.rebnconvin.bn_s1.running_var", "gt_encoder.stage4.rebnconvin.bn_s1.num_batches_tracked", "gt_encoder.stage4.rebnconv1.conv_s1.weight", "gt_encoder.stage4.rebnconv1.conv_s1.bias", "gt_encoder.stage4.rebnconv1.bn_s1.weight", "gt_encoder.stage4.rebnconv1.bn_s1.bias", "gt_encoder.stage4.rebnconv1.bn_s1.running_mean", "gt_encoder.stage4.rebnconv1.bn_s1.running_var", "gt_encoder.stage4.rebnconv1.bn_s1.num_batches_tracked", "gt_encoder.stage4.rebnconv2.conv_s1.weight", "gt_encoder.stage4.rebnconv2.conv_s1.bias", "gt_encoder.stage4.rebnconv2.bn_s1.weight", "gt_encoder.stage4.rebnconv2.bn_s1.bias", "gt_encoder.stage4.rebnconv2.bn_s1.running_mean", "gt_encoder.stage4.rebnconv2.bn_s1.running_var", "gt_encoder.stage4.rebnconv2.bn_s1.num_batches_tracked", "gt_encoder.stage4.rebnconv3.conv_s1.weight", "gt_encoder.stage4.rebnconv3.conv_s1.bias", "gt_encoder.stage4.rebnconv3.bn_s1.weight", "gt_encoder.stage4.rebnconv3.bn_s1.bias", "gt_encoder.stage4.rebnconv3.bn_s1.running_mean", "gt_encoder.stage4.rebnconv3.bn_s1.running_var", "gt_encoder.stage4.rebnconv3.bn_s1.num_batches_tracked", "gt_encoder.stage4.rebnconv4.conv_s1.weight", 
"gt_encoder.stage4.rebnconv4.conv_s1.bias", "gt_encoder.stage4.rebnconv4.bn_s1.weight", "gt_encoder.stage4.rebnconv4.bn_s1.bias", "gt_encoder.stage4.rebnconv4.bn_s1.running_mean", "gt_encoder.stage4.rebnconv4.bn_s1.running_var", "gt_encoder.stage4.rebnconv4.bn_s1.num_batches_tracked", "gt_encoder.stage4.rebnconv3d.conv_s1.weight", "gt_encoder.stage4.rebnconv3d.conv_s1.bias", "gt_encoder.stage4.rebnconv3d.bn_s1.weight", "gt_encoder.stage4.rebnconv3d.bn_s1.bias", "gt_encoder.stage4.rebnconv3d.bn_s1.running_mean", "gt_encoder.stage4.rebnconv3d.bn_s1.running_var", "gt_encoder.stage4.rebnconv3d.bn_s1.num_batches_tracked", "gt_encoder.stage4.rebnconv2d.conv_s1.weight", "gt_encoder.stage4.rebnconv2d.conv_s1.bias", "gt_encoder.stage4.rebnconv2d.bn_s1.weight", "gt_encoder.stage4.rebnconv2d.bn_s1.bias", "gt_encoder.stage4.rebnconv2d.bn_s1.running_mean", "gt_encoder.stage4.rebnconv2d.bn_s1.running_var", "gt_encoder.stage4.rebnconv2d.bn_s1.num_batches_tracked", "gt_encoder.stage4.rebnconv1d.conv_s1.weight", "gt_encoder.stage4.rebnconv1d.conv_s1.bias", "gt_encoder.stage4.rebnconv1d.bn_s1.weight", "gt_encoder.stage4.rebnconv1d.bn_s1.bias", "gt_encoder.stage4.rebnconv1d.bn_s1.running_mean", "gt_encoder.stage4.rebnconv1d.bn_s1.running_var", "gt_encoder.stage4.rebnconv1d.bn_s1.num_batches_tracked", "gt_encoder.stage5.rebnconvin.conv_s1.weight", "gt_encoder.stage5.rebnconvin.conv_s1.bias", "gt_encoder.stage5.rebnconvin.bn_s1.weight", "gt_encoder.stage5.rebnconvin.bn_s1.bias", "gt_encoder.stage5.rebnconvin.bn_s1.running_mean", "gt_encoder.stage5.rebnconvin.bn_s1.running_var", "gt_encoder.stage5.rebnconvin.bn_s1.num_batches_tracked", "gt_encoder.stage5.rebnconv1.conv_s1.weight", "gt_encoder.stage5.rebnconv1.conv_s1.bias", "gt_encoder.stage5.rebnconv1.bn_s1.weight", "gt_encoder.stage5.rebnconv1.bn_s1.bias", "gt_encoder.stage5.rebnconv1.bn_s1.running_mean", "gt_encoder.stage5.rebnconv1.bn_s1.running_var", "gt_encoder.stage5.rebnconv1.bn_s1.num_batches_tracked", 
"gt_encoder.stage5.rebnconv2.conv_s1.weight", "gt_encoder.stage5.rebnconv2.conv_s1.bias", "gt_encoder.stage5.rebnconv2.bn_s1.weight", "gt_encoder.stage5.rebnconv2.bn_s1.bias", "gt_encoder.stage5.rebnconv2.bn_s1.running_mean", "gt_encoder.stage5.rebnconv2.bn_s1.running_var", "gt_encoder.stage5.rebnconv2.bn_s1.num_batches_tracked", "gt_encoder.stage5.rebnconv3.conv_s1.weight", "gt_encoder.stage5.rebnconv3.conv_s1.bias", "gt_encoder.stage5.rebnconv3.bn_s1.weight", "gt_encoder.stage5.rebnconv3.bn_s1.bias", "gt_encoder.stage5.rebnconv3.bn_s1.running_mean", "gt_encoder.stage5.rebnconv3.bn_s1.running_var", "gt_encoder.stage5.rebnconv3.bn_s1.num_batches_tracked", "gt_encoder.stage5.rebnconv4.conv_s1.weight", "gt_encoder.stage5.rebnconv4.conv_s1.bias", "gt_encoder.stage5.rebnconv4.bn_s1.weight", "gt_encoder.stage5.rebnconv4.bn_s1.bias", "gt_encoder.stage5.rebnconv4.bn_s1.running_mean", "gt_encoder.stage5.rebnconv4.bn_s1.running_var", "gt_encoder.stage5.rebnconv4.bn_s1.num_batches_tracked", "gt_encoder.stage5.rebnconv3d.conv_s1.weight", "gt_encoder.stage5.rebnconv3d.conv_s1.bias", "gt_encoder.stage5.rebnconv3d.bn_s1.weight", "gt_encoder.stage5.rebnconv3d.bn_s1.bias", "gt_encoder.stage5.rebnconv3d.bn_s1.running_mean", "gt_encoder.stage5.rebnconv3d.bn_s1.running_var", "gt_encoder.stage5.rebnconv3d.bn_s1.num_batches_tracked", "gt_encoder.stage5.rebnconv2d.conv_s1.weight", "gt_encoder.stage5.rebnconv2d.conv_s1.bias", "gt_encoder.stage5.rebnconv2d.bn_s1.weight", "gt_encoder.stage5.rebnconv2d.bn_s1.bias", "gt_encoder.stage5.rebnconv2d.bn_s1.running_mean", "gt_encoder.stage5.rebnconv2d.bn_s1.running_var", "gt_encoder.stage5.rebnconv2d.bn_s1.num_batches_tracked", "gt_encoder.stage5.rebnconv1d.conv_s1.weight", "gt_encoder.stage5.rebnconv1d.conv_s1.bias", "gt_encoder.stage5.rebnconv1d.bn_s1.weight", "gt_encoder.stage5.rebnconv1d.bn_s1.bias", "gt_encoder.stage5.rebnconv1d.bn_s1.running_mean", "gt_encoder.stage5.rebnconv1d.bn_s1.running_var", 
"gt_encoder.stage5.rebnconv1d.bn_s1.num_batches_tracked", "gt_encoder.stage6.rebnconvin.conv_s1.weight", "gt_encoder.stage6.rebnconvin.conv_s1.bias", "gt_encoder.stage6.rebnconvin.bn_s1.weight", "gt_encoder.stage6.rebnconvin.bn_s1.bias", "gt_encoder.stage6.rebnconvin.bn_s1.running_mean", "gt_encoder.stage6.rebnconvin.bn_s1.running_var", "gt_encoder.stage6.rebnconvin.bn_s1.num_batches_tracked", "gt_encoder.stage6.rebnconv1.conv_s1.weight", "gt_encoder.stage6.rebnconv1.conv_s1.bias", "gt_encoder.stage6.rebnconv1.bn_s1.weight", "gt_encoder.stage6.rebnconv1.bn_s1.bias", "gt_encoder.stage6.rebnconv1.bn_s1.running_mean", "gt_encoder.stage6.rebnconv1.bn_s1.running_var", "gt_encoder.stage6.rebnconv1.bn_s1.num_batches_tracked", "gt_encoder.stage6.rebnconv2.conv_s1.weight", "gt_encoder.stage6.rebnconv2.conv_s1.bias", "gt_encoder.stage6.rebnconv2.bn_s1.weight", "gt_encoder.stage6.rebnconv2.bn_s1.bias", "gt_encoder.stage6.rebnconv2.bn_s1.running_mean", "gt_encoder.stage6.rebnconv2.bn_s1.running_var", "gt_encoder.stage6.rebnconv2.bn_s1.num_batches_tracked", "gt_encoder.stage6.rebnconv3.conv_s1.weight", "gt_encoder.stage6.rebnconv3.conv_s1.bias", "gt_encoder.stage6.rebnconv3.bn_s1.weight", "gt_encoder.stage6.rebnconv3.bn_s1.bias", "gt_encoder.stage6.rebnconv3.bn_s1.running_mean", "gt_encoder.stage6.rebnconv3.bn_s1.running_var", "gt_encoder.stage6.rebnconv3.bn_s1.num_batches_tracked", "gt_encoder.stage6.rebnconv4.conv_s1.weight", "gt_encoder.stage6.rebnconv4.conv_s1.bias", "gt_encoder.stage6.rebnconv4.bn_s1.weight", "gt_encoder.stage6.rebnconv4.bn_s1.bias", "gt_encoder.stage6.rebnconv4.bn_s1.running_mean", "gt_encoder.stage6.rebnconv4.bn_s1.running_var", "gt_encoder.stage6.rebnconv4.bn_s1.num_batches_tracked", "gt_encoder.stage6.rebnconv3d.conv_s1.weight", "gt_encoder.stage6.rebnconv3d.conv_s1.bias", "gt_encoder.stage6.rebnconv3d.bn_s1.weight", "gt_encoder.stage6.rebnconv3d.bn_s1.bias", "gt_encoder.stage6.rebnconv3d.bn_s1.running_mean", 
"gt_encoder.stage6.rebnconv3d.bn_s1.running_var", "gt_encoder.stage6.rebnconv3d.bn_s1.num_batches_tracked", "gt_encoder.stage6.rebnconv2d.conv_s1.weight", "gt_encoder.stage6.rebnconv2d.conv_s1.bias", "gt_encoder.stage6.rebnconv2d.bn_s1.weight", "gt_encoder.stage6.rebnconv2d.bn_s1.bias", "gt_encoder.stage6.rebnconv2d.bn_s1.running_mean", "gt_encoder.stage6.rebnconv2d.bn_s1.running_var", "gt_encoder.stage6.rebnconv2d.bn_s1.num_batches_tracked", "gt_encoder.stage6.rebnconv1d.conv_s1.weight", "gt_encoder.stage6.rebnconv1d.conv_s1.bias", "gt_encoder.stage6.rebnconv1d.bn_s1.weight", "gt_encoder.stage6.rebnconv1d.bn_s1.bias", "gt_encoder.stage6.rebnconv1d.bn_s1.running_mean", "gt_encoder.stage6.rebnconv1d.bn_s1.running_var", "gt_encoder.stage6.rebnconv1d.bn_s1.num_batches_tracked", "gt_encoder.side1.weight", "gt_encoder.side1.bias", "gt_encoder.side2.weight", "gt_encoder.side2.bias", "gt_encoder.side3.weight", "gt_encoder.side3.bias", "gt_encoder.side4.weight", "gt_encoder.side4.bias", "gt_encoder.side5.weight", "gt_encoder.side5.bias", "gt_encoder.side6.weight", "gt_encoder.side6.bias".

New Idea

Hello, friend, I like your work very much. Could you add reference snap lines for nodes, like the guide lines that appear when snapping during a Photoshop transform? Thank you!

Add icon to identify locked nodes

The lock node feature is very useful; however, it's not easy to remember which nodes are locked. I'm constantly trying to move or resize locked nodes, only to realize they can't be moved or resized.

Adding a small icon to the locked nodes would be really helpful, here's a sample:

image

[Feature Request] Switching position of the preview gallery to any side of the page

Currently it sits nicely at the bottom of the browser page, but it would be really helpful if you could also toggle it to a left vertical scroll.

I imagine the options would be:

  1. Default bottom row
  2. Left vertical
  3. Right vertical
  4. Top row
  5. Maybe detached and movable like the main ComfyUI toolbar, with resizing capabilities
  6. Reversing the order from new-to-old to old-to-new

LoadLora suggestion

Love your LoraLoader, but sometimes I want to turn a lora on and off without unwiring it. Setting up switches seems like too much trouble, so I made an edit to your code that you might want to consider including in the future.

```python
class LoraLoaderWithImages(LoraLoader):
    @classmethod
    def INPUT_TYPES(s):
        types = super().INPUT_TYPES()
        names = types["required"]["lora_name"][0]
        populate_items(names, "loras")
        names.insert(0, {"content": "None", "image": None})
        return types

    def load_lora(self, **kwargs):
        kwargs["lora_name"] = kwargs["lora_name"]["content"]
        if kwargs["lora_name"] == "None":
            kwargs["strength_model"] = 0
            kwargs["strength_clip"] = 0
        return super().load_lora(**kwargs)
```
This allows you to select None and bypass the Lora without having to rewire around it or set the weights to zero.

The problem with this extension.

Today this extension stopped working in my existing ComfyUI install. I set up a clean ComfyUI, installed the Manager and this extension, and on loading I get a window with this error:


Loading aborted due to error reloading workflow data

TypeError: Cannot read properties of undefined (reading 'apply')
at app.graph.onNodeAdded (http://127.0.0.1:8188/extensions/pysssss/CustomScripts/snapToGrid.js:38:27)
at LGraph.add (http://127.0.0.1:8188/lib/litegraph.core.js:1465:18)
at LGraph.configure (http://127.0.0.1:8188/lib/litegraph.core.js:2232:22)
at LGraph.configure (http://127.0.0.1:8188/extensions/pysssss/CustomScripts/reroutePrimitive.js:14:29)
at LGraph.configure (http://127.0.0.1:8188/extensions/pysssss/CustomScripts/snapToGrid.js:53:21)
at ComfyApp.loadGraphData (http://127.0.0.1:8188/scripts/app.js:1230:15)
at HTMLButtonElement.onclick (http://127.0.0.1:8188/scripts/ui.js:755:11)
This may be due to the following script:
/extensions/pysssss/CustomScripts/snapToGrid.js
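The crash comes from a callback-chaining pattern: a previous `onNodeAdded` handler is assumed to exist and `.apply` is called on `undefined`. A minimal sketch of the defensive version of that pattern (written in Python for brevity; all names are illustrative, not the extension's actual code):

```python
# Sketch of safely chaining an optional callback, the pattern behind the
# "reading 'apply'" crash: the previous handler may not exist, so guard
# before calling it (all names illustrative, not the extension's code).
class Graph:
    on_node_added = None


def chain_on_node_added(graph, extra):
    prev = graph.on_node_added

    def handler(node):
        if prev is not None:  # guard against a missing previous handler
            prev(node)
        extra(node)

    graph.on_node_added = handler


g = Graph()
added = []
chain_on_node_added(g, added.append)
g.on_node_added("node1")
print(added)  # ['node1']
```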

Repeatedly clicking "Go to node" opens duplicate menus

A quite simple one: each time you click "Go to node" in the context menu, an additional node type menu pops up (the previous one isn't closed). Similarly, each time you click a node type, an additional node list pops up, again without closing the previous one.

Cannot use WD14Tagger

Startup is fine and the node displays fine, but when I click Run, no tag text appears under WD14Tagger and the console shows an error.

Both CUDA and cuDNN are the latest versions, and I have also installed TensorRT-8.6.0.12.Windows10.x86_64.Cuda-12.0. What else do I need to do to use this tagger? (Two screenshots attached.)

got prompt
Traceback (most recent call last):
  File "F:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 182, in execute
    executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
  File "F:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 67, in recursive_execute
    outputs[unique_id] = getattr(obj, obj.FUNCTION)(**input_data_all)
  File "F:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\wd14-tagger.py", line 133, in tag
    res = tag(image, model, threshold, character_threshold, exclude_tags)
  File "F:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\wd14-tagger.py", line 37, in tag
    model = InferenceSession(name, providers=ort.get_available_providers())
  File "F:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "F:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 408, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1106 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "F:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
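The failure happens before the tagger runs: `InferenceSession` is handed every provider that `ort.get_available_providers()` reports, and the TensorRT provider's native DLL fails to load. One workaround is to filter the provider list down to entries whose libraries are actually usable. A minimal sketch of that filtering (`pick_providers` is a hypothetical helper, not part of the tagger; the provider strings are real ONNX Runtime identifiers):

```python
# Sketch: pick ONNX Runtime execution providers defensively, preferring
# CUDA and falling back to CPU, so a broken TensorRT install is skipped.
def pick_providers(available):
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    usable = [p for p in preferred if p in available]
    return usable or ["CPUExecutionProvider"]


print(pick_providers(
    ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
))  # ['CUDAExecutionProvider', 'CPUExecutionProvider']
```

The filtered list would then be passed as the `providers` argument instead of the raw `ort.get_available_providers()` result.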

[Feature Request] Adding thumbnail preview to lora list (not sure if possible)

The lora folder listing is really helpful, but I thought it would be even more helpful if a thumbnail could be shown when you hover over the lora you are trying to pick from the list. I have no idea whether this is possible, but it would really help a lot if you have tons of loras in multiple folders.

This is how I usually save thumbnails with my loras and other checkpoint models: same filename, but .jpg.
image
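If the thumbnail shares the model's file stem as described above, the lookup itself is cheap. A minimal sketch of resolving such a sibling thumbnail (the helper name and file names are illustrative):

```python
import os
import tempfile


# Hypothetical helper: a thumbnail saved next to a model with the same
# file stem ("myLora.safetensors" -> "myLora.jpg") can be found by
# swapping the extension and checking which candidate exists.
def find_thumbnail(model_path, exts=(".jpg", ".jpeg", ".png")):
    stem, _ = os.path.splitext(model_path)
    for ext in exts:
        candidate = stem + ext
        if os.path.exists(candidate):
            return candidate
    return None


with tempfile.TemporaryDirectory() as d:  # illustrative files
    open(os.path.join(d, "myLora.safetensors"), "w").close()
    open(os.path.join(d, "myLora.jpg"), "w").close()
    found = find_thumbnail(os.path.join(d, "myLora.safetensors"))
    result = os.path.basename(found)
    print(result)  # myLora.jpg
```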

The folder list is already so helpful for selecting loras
image

I'm thinking of something like a small thumbnail of the jpg showing up when you hover over an entry in the lora list
image

(simple) FEATURE REQUEST: Toggle Image Feed - do not want

I do not want the Image Feed, but I want everything else. How can I toggle/remove just that?

Heck, or pick and choose any of the features if I didn't want them? I renamed the Image Feed js to have a .old extension, but it still loaded and ComfyUI gave me lots of errors (of course).

I hope this is a simple thing

NODE_CLASS_MAPPINGS

Error when I load ComfyUI:

"D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\anime-segmentation.py module for custom nodes due to the lack of NODE_CLASS_MAPPINGS"

Did I do something wrong?
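For context, ComfyUI skips a custom node module when it does not export a `NODE_CLASS_MAPPINGS` dictionary at module level, which is exactly what the message above reports. A minimal sketch of the names it looks for (the node class itself is purely illustrative):

```python
# Sketch of the module-level names ComfyUI looks for when importing a
# custom node package; the node class here is purely illustrative.
class ExampleNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {}}

    RETURN_TYPES = ()
    FUNCTION = "run"
    CATEGORY = "examples"

    def run(self):
        return ()


# Without this dictionary, ComfyUI logs "Skip ... due to the lack of
# NODE_CLASS_MAPPINGS" and ignores the module.
NODE_CLASS_MAPPINGS = {"ExampleNode": ExampleNode}
NODE_DISPLAY_NAME_MAPPINGS = {"ExampleNode": "Example Node"}
```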

Error - mmcv module not found, installing manually windows python 310 venv

The following error appeared when running install.py in a venv install; it was resolved locally when I installed mmcv manually.

ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'F:\\Generative AI\\ComfyUI\\venv\\Lib\\site-packages\\cv2\\cv2.pyd' Check the permissions.

Traceback (most recent call last):
  File "F:\Generative AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\install.py", line 86, in ensure_mmdet_package
    import mmcv
ModuleNotFoundError: No module named 'mmcv'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\Generative AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\install.py", line 127, in <module>
    install()
  File "F:\Generative AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\install.py", line 99, in install
    ensure_mmdet_package()
  File "F:\Generative AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\install.py", line 91, in ensure_mmdet_package
    subprocess.check_call(mim_install + ['mmcv==2.0.0'])
  File "C:\Program Files\Python310\lib\subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['F:\\Generative AI\\ComfyUI\\venv\\Scripts\\python.exe', '-m', 'mim', 'install', 'mmcv==2.0.0']' returned non-zero exit status 1.

module 'PIL.Image' has no attribute 'ANTIALIAS'

resized_image = img.resize((constrained_width, constrained_height), Image.ANTIALIAS)

!!! Exception during processing !!!
Traceback (most recent call last):
  File "/mnt/media/media/art/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/media/media/art/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/media/media/art/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/media/media/art/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts/py/constrain_image.py", line 50, in constrain_image
    resized_image = img.resize((constrained_width, constrained_height), Image.ANTIALIAS)
                                                                        ^^^^^^^^^^^^^^^
AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'

https://stackoverflow.com/questions/76616042/attributeerror-module-pil-image-has-no-attribute-antialias

Propose a change to:

    resized_image = img.resize((constrained_width, constrained_height), Image.LANCZOS)
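For reference, `Image.ANTIALIAS` was removed in Pillow 10; `Image.LANCZOS` (or `Image.Resampling.LANCZOS` on Pillow 9.1+) is the equivalent high-quality filter. A small compatibility sketch (the image and sizes are placeholders for the node's real inputs):

```python
from PIL import Image

# Pillow 10 removed the Image.ANTIALIAS alias; Image.Resampling.LANCZOS
# (Pillow 9.1+) or Image.LANCZOS is the equivalent high-quality filter.
try:
    LANCZOS = Image.Resampling.LANCZOS
except AttributeError:  # very old Pillow without the Resampling enum
    LANCZOS = Image.ANTIALIAS

img = Image.new("RGB", (512, 768))  # placeholder for the real input image
resized = img.resize((256, 384), LANCZOS)
print(resized.size)  # (256, 384)
```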

[Suggestion] Bring favicon extension to Comfy

ComfyUI lacks an official favicon. Every time we open the console, we see a failed request to fetch it. Your extension is amazing and fits Comfy very well! It would be worth merging this one into web/extensions/core, because every website needs a favicon and users wouldn't have to search for and install an extension.

This would require:

  • Moving the two favicons to web/
  • Updating line 26 of faviconStatus.js to ``link.href = new URL(`${favicon}.ico`, window.location);``

Text fields of text encoders are messed up in exported workflow images

The text input fields for CLIP_G and CLIP_L of CLIPTextEncode and CLIPTextEncodeSDXL usually get messed up in various ways when exporting workflow images. They can appear below the node box or overlaid on top of node inputs. This happens for both png and svg exports, although in slightly different ways. They also appear to remember their previous in-widget values after being changed to an external string primitive.

Below are some simple examples. In image 1 the three text fields contain aaaa, bbbb and cccc. Then I switch to external inputs and put dddd, eeee and ffff in the text inputs.

workflow (16)

workflow (17)

Feature request: Snap nodes to grid

Currently, to snap nodes to the grid the user must hold down Shift, which is a pain because of course I would have to keep the Shift key glued down while using Comfy. I like to have my nodes properly aligned for a cleaner layout; if this plugin could add an option to snap nodes without Shift, that would be great.
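Snapping itself is just rounding each coordinate to the nearest multiple of the grid size; a minimal sketch (the grid size is illustrative):

```python
# Snap a node position to the grid by rounding each coordinate to the
# nearest multiple of the grid size.
def snap(pos, grid=10):
    return [round(c / grid) * grid for c in pos]


print(snap([103, 57]))  # [100, 60]
```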

Cannot import: unexpected keyword argument 'root_dir'

File "C:\Users\NAME\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts_init_.py", line 12, in
files = glob.glob("*.py", root_dir=py, recursive=False)
TypeError: glob() got an unexpected keyword argument 'root_dir'

Cannot import C:\Users\NAME\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts module for custom nodes: glob() got an unexpected keyword argument 'root_dir'
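For context, `glob.glob`'s `root_dir` parameter was added in Python 3.10, so this error points at an older interpreter. A backwards-compatible sketch (the file created here is purely illustrative):

```python
import glob
import os
import sys
import tempfile

# glob.glob(..., root_dir=...) exists only on Python 3.10+; on older
# versions, join the directory into the pattern and strip it afterwards.
with tempfile.TemporaryDirectory() as py:
    open(os.path.join(py, "example.py"), "w").close()  # illustrative file
    if sys.version_info >= (3, 10):
        files = glob.glob("*.py", root_dir=py)
    else:
        files = [os.path.relpath(p, py)
                 for p in glob.glob(os.path.join(py, "*.py"))]
    print(files)  # ['example.py']
```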

Missing menus

Thank you for the extended functionality. My menus got mucked up when using your custom node: it only lets you view one level of menu at a time. In the Chrome console I see the following error.

parentMenu must be of class ContextMenu, ignoring it.

missingMenus.mov

Lack of __init__.py

I'm sorry, but there is no __init__.py in this package.
An empty one will cause "Skip /path/to/ComfyUI-Custom-Scripts module for custom nodes due to the lack of NODE_CLASS_MAPPINGS."
