rgthree / rgthree-comfy
Making ComfyUI more comfortable!
License: MIT License
When I want to use OpenPose after an Upscale, I get this error, but if I take the image before the Upscale, I don't get any error.
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run
File "D:\ComfyUI_windows_portable\ComfyUI\main.py", line 95, in prompt_worker
    e.execute(item[2], prompt_id, item[3], item[4])
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy\__init__.py", line 173, in rgthree_execute
    return self.old_execute(*args, **kwargs)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 371, in execute
    to_execute = sorted(list(map(lambda a: (len(recursive_will_execute(prompt, self.outputs, a[-1])), a[-1]), to_execute)))
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 371, in <lambda>
    to_execute = sorted(list(map(lambda a: (len(recursive_will_execute(prompt, self.outputs, a[-1])), a[-1]), to_execute)))
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy\__init__.py", line 195, in rgthree_recursive_will_execute
    will_execute_value = execution.recursive_will_execute(prompt, outputs, input_unique_id, *args, **kwargs)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy\__init__.py", line 195, in rgthree_recursive_will_execute
    will_execute_value = execution.recursive_will_execute(prompt, outputs, input_unique_id, *args, **kwargs)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy\__init__.py", line 195, in rgthree_recursive_will_execute
    will_execute_value = execution.recursive_will_execute(prompt, outputs, input_unique_id, *args, **kwargs)
[Previous line repeated 988 more times]
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy\__init__.py", line 179, in rgthree_recursive_will_execute
    will_execute = RgthreePatchRecursiveExecute_Set_patch_recursive_execution_to_false_if_not_working(unique_id)
RecursionError: maximum recursion depth exceeded
Here's the link to my workflow : https://drive.google.com/drive/folders/1-Gt4BCxEeBnFwUrVlITmr6SLwAtIarsG?usp=sharing
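For what it's worth, the traceback shows recursive_will_execute re-entering itself once per upstream link until Python's recursion limit is hit. One general workaround for this class of crash is to walk the dependency graph iteratively with an explicit stack, which cannot raise RecursionError no matter how deep the node chain is. This is an illustrative sketch of that idea, not rgthree's or ComfyUI's actual implementation; it assumes a simplified prompt format where each linked input is encoded as a [origin_node_id, output_slot] pair, as in ComfyUI API prompts:

```python
def collect_upstream(prompt, node_id):
    """Return the set of node ids that node_id transitively depends on,
    using an explicit stack instead of recursion so that very deep
    graphs cannot exhaust Python's recursion limit."""
    seen = set()
    stack = [node_id]
    while stack:
        current = stack.pop()
        if current in seen:
            continue  # already visited; also guards against cycles
        seen.add(current)
        for input_value in prompt.get(current, {}).get("inputs", {}).values():
            # A linked input is a [origin_node_id, output_slot] pair.
            if isinstance(input_value, list):
                stack.append(input_value[0])
    return seen
```

The same traversal written recursively would fail at roughly 1000 linked nodes (Python's default limit), which matches the "repeated 988 more times" in the traceback above.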
Hi, I am very new to your extension and ComfyUI.
I just installed it to use your seed node, but I cannot find it.
I only get Reroute, Node Combiner, Fast Bypasser and Fast Muter in the node menu.
Where are Seed, Lora Loader Stack and Context?
When will Power Prompt be released?
I would suggest that the Reroute node show a quick toggle to change its direction, instead of doing it from the node menu.
I just downloaded your plugins for ComfyUI (I'm still new to Comfy, btw) and followed your instructions for the simple git pull, but the import failed. Here is what it shows in the console:
Error:
[WinError 1314] A required privilege is not held by the client: 'F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyLiterals\js' -> 'F:\ComfyUI_windows_portable\ComfyUI\web\extensions\ComfyLiterals'
Failed to create symlink to F:\ComfyUI_windows_portable\ComfyUI\web\extensions\ComfyLiterals. Please copy the folder manually.
Source: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyLiterals\js
Target: F:\ComfyUI_windows_portable\ComfyUI\web\extensions\ComfyLiterals
INFO:comfyui-prompt-control:Use STYLE:weight_interpretation:normalization at the start of a prompt to use advanced encodings
INFO:comfyui-prompt-control:Weight interpretations available: comfy,A1111,compel,comfy++,down_weight,perp
INFO:comfyui-prompt-control:Normalization types available: none,mean,length,length+mean
Traceback (most recent call last):
  File "F:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1725, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy\__init__.py", line 92, in <module>
    rgthree_config_user = json.load(file)
  File "json\__init__.py", line 293, in load
  File "json\__init__.py", line 346, in loads
  File "json\decoder.py", line 337, in decode
  File "json\decoder.py", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Cannot import F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy module for custom nodes: Expecting value: line 1 column 1 (char 0)
It says to copy the ComfyLiterals folder manually, but I'm not sure what that means, since all I did was git pull. Do I just re-download that folder from GitHub and put it in there, or...?
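A "JSONDecodeError: Expecting value: line 1 column 1 (char 0)" at import time usually means the user config file exists but is empty or truncated, and the unguarded json.load call aborts the whole module load. A hedged sketch of a more forgiving loader (the function name, signature, and fallback behavior are illustrative assumptions, not the extension's actual code):

```python
import json


def load_user_config(path, defaults=None):
    """Load a JSON config file, falling back to the given defaults when
    the file is missing, unreadable, or not valid JSON (e.g. empty)."""
    config = dict(defaults or {})
    try:
        with open(path, "r", encoding="utf-8") as file:
            config.update(json.load(file))
    except (OSError, json.JSONDecodeError):
        pass  # keep the defaults rather than aborting the import
    return config
```

With a loader like this, a corrupt rgthree_config.json would cost the user their settings but not the whole node pack.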
Is this button really necessary to confirm the choices?
Indeed, I noticed that when the workflow loads, I have to start by pressing this button to validate the displayed choices. If I don't do that, the actions don't proceed as displayed.
Furthermore, I often forget (my fault) to confirm my choices because, for me, it is logical that when I see 'enable,' for example, the action should be triggered without the need to validate it.
So my question is: could we do without this button?
Or better, I probably didn't understand well enough how it should work.
I have the following situation, where 4 input images have to arrive to an upscaler. They come from 4 different functions of the workflow.
At any given time, only one of the 4 inputs is really carrying an image.
I can't solve this problem with ordinary switches as, I'm told, ComfyUI will not work if inputs are empty. So, I thought your Context Switch would work as "automatically choose the first non-null context to continue onward with".
Hence the configuration:
Now, let's say that the only non-null image in my case is input 2, which carries the Refiner image. I'd expect that to be selected and passed to the Context Switch. Instead, I receive this error:
Traceback (most recent call last):
File "/Users/xyz/Desktop/AI/Tools/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Desktop/AI/Tools/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Desktop/AI/Tools/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Desktop/AI/Tools/ComfyUI/comfy_extras/nodes_upscale_model.py", line 39, in upscale
in_img = image.movedim(-1,-3).to(device)
^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'movedim'
I also attach the whole workflow for more context:
Not sure why this is happening; it may have started after I installed mtb, but I installed a few extensions, so it could be anything. I wanted to let you know this error is being thrown.
[comfy_mtb] | INFO -> loaded 52 nodes successfuly
[comfy_mtb] | INFO -> Some nodes (3) could not be loaded. This can be ignored, but go to http://none:6006/mtb if you want more information.
Traceback (most recent call last):
  File "/notebooks/ComfyUI/nodes.py", line 1735, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/notebooks/ComfyUI/custom_nodes/rgthree-comfy/__init__.py", line 98, in <module>
    for file in glob.glob('*.py', root_dir=DIR_PY) + glob.glob('*.js', root_dir=os.path.join(DIR_DEV_WEB, 'js')):
TypeError: glob() got an unexpected keyword argument 'root_dir'
Cannot import /notebooks/ComfyUI/custom_nodes/rgthree-comfy module for custom nodes: glob() got an unexpected keyword argument 'root_dir'
I was able to fix this by replacing that line (line 98 at commit a197b65) with:

for file in glob.glob(os.path.join(DIR_PY, '*.py')) + glob.glob(os.path.join(DIR_DEV_WEB, 'js', '*.js')):

Not tested on Windows.
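For context, glob.glob only gained the root_dir keyword in Python 3.10, which is why older interpreters raise this TypeError. Joining the directory into the pattern, as in the fix above, works on every version; if the bare filenames that root_dir would have returned are needed, they can be recovered afterwards. A small illustrative helper (not the extension's code):

```python
import glob
import os


def glob_in(directory, pattern):
    """Match pattern inside directory on any Python version, returning
    paths relative to directory -- the same shape that the Python 3.10+
    call glob.glob(pattern, root_dir=directory) would have returned."""
    matches = glob.glob(os.path.join(directory, pattern))
    return sorted(os.path.relpath(match, directory) for match in matches)
```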
Hey all,
I have a carp load of LoRAs and sometimes their function escapes me. I was hoping to add a hover-over option when navigating through them in the LoRA Stacker. Hoping I could get help with that (or if that's something RG3 wanted to tackle/add for everyone (=
I tried to figure this out myself, but I'm not an expert coder and have no experience with Python, so I'm not sure how that plays into things, if at all. I looked through the HTML, TS, and JS files, particularly the Constants.* files, but couldn't find any HTML/JS specific to the LoRA Stacker.
I would just copy-paste some "Image Popup" code from the interwebs, but I couldn't find any code specific to the LoRA navigation menu. I figured I could just throw in a .jpg with the name attached to the LoRA file to keep things organized/easy.
Thanks for any help! (And thanks for your amazing work RG3, you're a legend, even if your QB counterpart isn't lol)
Edit: Silly me, I forgot to use the inspect tool to investigate. I see it is a part of the LLL submenu system, so hopefully now I can toy around with this!
This extension interferes with the --cuda-device flag. If I start with --cuda-device <nr>, images are generated on GPU 0 instead. I've bisected all my extensions and this is the culprit; without this extension, it just works.
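One common workaround, if this turns out to be an initialization-order problem (an extension touching CUDA before the flag is applied): restricting device visibility at the environment level, before any CUDA-using library is imported, forces every library in the process to see only the chosen GPU, which then appears as device 0. This is a general CUDA technique, not a confirmed fix for this extension:

```python
import os


def pin_cuda_device(index):
    """Expose only one physical GPU to this process. Must run (or be
    exported in the shell) before any CUDA-using library is imported,
    because visibility is read once at CUDA initialization."""
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
```

Equivalently, launch ComfyUI with CUDA_VISIBLE_DEVICES=1 set in the shell instead of relying on --cuda-device.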
For some reasons, two reroute nodes keep appearing disconnected whenever I refresh the browser or I reload my workflow. These two, and only these two:
I even tried to delete them and recreate them, but the problem persists. Even more strangely, sometimes they remain connected upon refresh, and the CLIPTextEncodeSDXL node they connect to seems to be working fine even when they are disconnected.
Notice that this issue has existed for a long time, not just in the current version of the workflow. I just forgot to report it.
The JSON is here.
Just like Context Big now passes sampler and scheduler name, it would be great if it could pass the checkpoint name, too, and separately from the model value. So I could address this unfortunate situation:
In reality, that checkpoint name doesn't have to be transported any further than the Loader node right there, but, for the sake of having a separate control panel, it would be extremely useful.
Thank you
Not a big issue, but when downloading the zip linked here, it saves with an incorrect folder name; this throws errors with ComfyUI, as it can't find the path it is trying to run.
Deleting -main from the end of the folder name fixes it.
That's if you're installing it this way instead of following the instructions, btw. :P
So the seed is actually present in the json workflow/image but when I load it onto the canvas, the last queued seed button is greyed out.
This is quite annoying, as I have to manually use external tools to look for the seed whenever I want to reproduce my image.
When there are multiple DisplayInt nodes in a graph, whenever any of them gets executed, the text widget of the most recently created node gets updated instead of the text widget of the executing node, because the widget variable showValueWidget in js/display_int.js is shared between all the nodes. This can be fixed by attaching it to the node object (this) instead:
import { app } from "../../scripts/app.js";
import { ComfyWidgets } from "../../scripts/widgets.js";
import { addConnectionLayoutSupport } from "./utils.js";

app.registerExtension({
  name: "rgthree.DisplayInt",
  async beforeRegisterNodeDef(nodeType, nodeData, app) {
    if (nodeData.name === "Display Int (rgthree)") {
-     let showValueWidget;
      nodeType.title_mode = LiteGraph.NO_TITLE;
      const onNodeCreated = nodeType.prototype.onNodeCreated;
      nodeType.prototype.onNodeCreated = function () {
        onNodeCreated ? onNodeCreated.apply(this, []) : undefined;
-       showValueWidget = ComfyWidgets["STRING"](this, "output", ["STRING", { multiline: true }], app).widget;
-       showValueWidget.inputEl.readOnly = true;
-       showValueWidget.serializeValue = async (node, index) => {
+       this.showValueWidget = ComfyWidgets["STRING"](this, "output", ["STRING", { multiline: true }], app).widget;
+       this.showValueWidget.inputEl.readOnly = true;
+       this.showValueWidget.serializeValue = async (node, index) => {
          node.widgets_values[index] = '';
          return '';
        };
      };
      addConnectionLayoutSupport(nodeType, app, [['Left'], ['Right']]);
      const onExecuted = nodeType.prototype.onExecuted;
      nodeType.prototype.onExecuted = function (message) {
        onExecuted === null || onExecuted === void 0 ? void 0 : onExecuted.apply(this, [message]);
-       showValueWidget.value = message.text[0];
+       this.showValueWidget.value = message.text[0];
      };
    }
  },
});
This explains why so many people on Reddit told me that whenever they load my workflow they find a bunch of nodes disconnected, and why, on random reloads, I experience the same with two nodes:
However, I cannot fix it. When I click the Fix and Save button, nothing happens. I tried on Vivaldi and Safari, with no ad blockers.
I suppose that asking for the Context Big node to transport around both positive and negative aesthetic scores would be abusing your patience, wouldn't it?
I don't know who you are, but thank you so much for making these nodes; they make ComfyUI much easier to use!
Hey, thanks for your excellent work! Just recently (3hrs ago), you released some non-backwards compatible changes to the repo. That's fine and you can do what you want. Could you please tag your versions though so I can version lock this repo for my workflow?
Error occurred when executing KSampler:
module 'comfy.sample' has no attribute 'broadcast_cond'
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1237, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIT\AITemplate\AITemplate.py", line 176, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 109, in animatediff_sample
return orig_comfy_sample(model, *args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIT\AITemplate\AITemplate.py", line 290, in sample
positive_copy = comfy.sample.broadcast_cond(positive, noise.shape[0], device)
Partially connected to #24 (the quest to transmit the checkpoint name over long distances), I'm encountering challenges in connecting a Checkpoint Selector node, from Comfy Image Saver suite, to their Save Image w/Metadata node, passing through your reroute node.
I opened an Issue with them:
This is for visibility and documentation only. I'll close it once it gets solved on the other side.
Many thanks for this useful extension for ComfyUI. Please add LoRA block weight support in Power Prompt:
https://github.com/ltdrdata/ComfyUI-Inspire-Pack
https://github.com/hako-mikan/sd-webui-lora-block-weight
The 'pipe' nodes already exist, but they are predefined. I rarely use them because they don't allow me to transmit all the data I want.
The idea would be to connect any node to the 'custom pipe' node in order to truly have a workflow with as few lines as possible.
Do you think such a node makes sense, and does it appeal to you?
There is a problem when I load some LoRAs (it finds another LoRA even though the path is right):
rgthree Power Prompt: Found "Cloth\Rich.safetensors" for "XQ\ziedRichDota2d" in prompt
rgthree Power Prompt: Loaded "Cloth\Rich.safetensors" from prompt
rgthree Power Prompt: 1 Loras processed; stripping tags for TEXT output.
Your 'reroute' node is great, but would it be possible to display the title?
Title says it all. I want to combine a bunch of inputs into one 'button' to press to be able to enable on/off certain generations and features.
With the current implementation, I don't see a way to do this: once combined, each input is still individually selectable.
Personally, I would implement it as a toggle to the combiner -> (combine inputs into 1)
Bug: when using the Eff. KSampler XL and the Eff. KSampler together, the second KSampler stops or becomes very slow.
Please add wildcard support in Power Prompt.
Unfortunately, I have to report that the recent change did not fix the issue. After the Fix in Place of last week, I still have a few nodes that keep disconnecting randomly every time I reload the workflow or restart ComfyUI.
What's worse, these nodes would be very hard, if not impossible, to troubleshoot for anyone not intimately familiar with the design decisions I made for my workflow. And the new fix function doesn't even detect the issue.
Version 4.0 is ready to go, but I cannot release it until the reroute nodes issue is solved, and I only use your reroute nodes :)
Happy to share the new workflow, if you need it.
This is not really an issue, more of an inconsistency..
Your excellent nodes don't take into account the user's "Link Render Mode" setting and draw things as if it were always set to "Spline".
I use "Linear", and it draws the cable's middle dot at an offset.
The reason must be that it assumes the cables are splines.
Not a big deal, just a glitch...
...and another thanks for your really great nodes!
Hey, here is a fix to avoid accidentally clearing the inserted text selection when I press Enter in the filter text field.
Hi,
I am a big fan of this node and use it extensively but I noticed this issue recently on a number of my workflows.
Can you check it?
thanks
MokkaBoss1
The left side works as expected, but when I save this as a template, many of the effects nodes become disconnected from the bypassers, requiring a manual re-attachment for the bypasser to work again. At first, I put all the connections into a node collector, but then thinking that might be the issue, I hooked them right into the bypass repeater itself, which are connected to a "disable all fx" switches at the top, and then I also hooked them up so each effect can be bypassed/enabled individually.
All my reroutes are rerouteprimitive/pyssss versus native, as I read that disconnections can happen with the regular reroute.
When saved as a workflow, all is well - no issues. The only difference is that in the upper left there's a connection point for an image, whereas in the full "workflow" version there's an Input LoadImage (the template version is designed to be chained, hence the difference).
Attached is the JSON from the Template if that helps.
AegisFlow_MaskFX_Template.json
Hello, is it possible to choose a standard reroute size without changing the code?
At 23 seconds, you can notice the problem with the small reroute that I encounter.
Almost every day, I find myself wanting a node that:
For example, right now, I'd really need it to pick the first non-null image among many possible generated by different parts of my workflow and send that image to another node to be saved with metadata readable by A1111/SD.Next/SD Prompt Reader:
Maybe it's the wrong way to think about the flow, but it's very intuitive to me. And I had other use cases for such a node.
Thanks!
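The requested behavior could be sketched as a tiny custom node in ComfyUI's node-class style. Everything below is an assumption for illustration - the class name, the informal "*" any-type wildcard that some custom node packs use, and the four-input limit - and is not rgthree's Context Switch implementation:

```python
class FirstNonNull:
    """Pass through the first connected input that is not None."""

    @classmethod
    def INPUT_TYPES(cls):
        # "*" is the informal wildcard some node packs use to accept any type.
        any_type = ("*",)
        return {"optional": {"input_a": any_type, "input_b": any_type,
                             "input_c": any_type, "input_d": any_type}}

    RETURN_TYPES = ("*",)
    FUNCTION = "pick"
    CATEGORY = "utils"

    def pick(self, input_a=None, input_b=None, input_c=None, input_d=None):
        # Inputs are checked in order; disconnected inputs arrive as None.
        for value in (input_a, input_b, input_c, input_d):
            if value is not None:
                return (value,)
        return (None,)
```

A node like this would sit between the four image branches and the metadata-aware saver, forwarding whichever branch actually produced an image.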
Hi,
I have a node that picks a prompt process based on a random integer. Given an input, the workflow runs whichever node corresponds to the integer picked. However, in the console, I see the non-selected nodes are still doing their prompting. Is it possible to use the mute/bypass nodes to turn them off? The workflow is shown below:
So really what I want to do is if Wildcards is picked, to bypass/mute the other three, for example.
I'm starting to experiment with the Mute / Bypass Repeater node, using it to bypass the entire Detailer function of my workflow, rather than just some portions of it. It's very useful to save computing cycles, but I'm afraid I've not completely grasped how it works.
In the situation below, I have two main functions generating an image: either the SDXL Base+Refiner or the ReVision. They end up into a context switch which passes the non-null image to the Detailer and the Upscaler, organized in a chain:
I hook all the nodes of the Detailer to a Mute / Bypass Repeater node and that node to a Bypasser node.
At this point, I ask to bypass the Detailer, but not the Upscaler:
I would expect that the generated image (coming either from the Refiner or ReVision) would pass through all the Detailer nodes and end up being processed by the Upscaler. But no.
If the Detailer is bypassed, ComfyUI simply refuses to work. I press Queue Prompt and nothing happens. No errors, no warnings, no console messages. It simply doesn't do anything.
Similarly, if I want to bypass both the Detailer and the Upscaler, and simply keep my image as it comes out of the Refiner or ReVision, I can't. ComfyUI doesn't operate.
For the workflow to work, I have to un-bypass at least the Detailer. Then, it does its thing splendidly. All your nodes are very powerful and useful, thank you.
As usual, the full workflow for reference:
It'd be nice if you incorporated the original Lora Loader's View Info feature, which looks up the LoRA's info on Civitai to glean trigger words.
You could have a submenu:
I have an idea: add a node that collects Fast Actions buttons for centralized control of actions.
It would highlight the most recently run action (highlighting the first action if none has been run), like this, and it could be set as a floating panel, like this.
It would be better if you could add an animation for the running action.
In the example above, it works only if the first LoRA stacker is enabled. Enabling any other stacker, or any combination of multiple stackers, does not work (or only the first one is applied, if it is enabled).
In fact, when any stacker other than the first is enabled, the output is no longer a lora_stack, as shown by the Apply Lora Stack node throwing a warning.
Normally, I can turn off functionality by breaking the input connection to a part of the circuit I don't want to run. I can't do that with just a Context Switch by disabling the inputs, though: it will stop the process and throw an error. Instead, I'm required to put another context block after the switch and mute that. This is overly complicated and frustrating, especially without any logic blocks or the ability to disable multiple blocks with one click to work around it more easily.
Since ComfyUI update from Oct 5, the KSampler Config Module is no longer able to connect sampler_name or scheduler with any other module. Tested with default KSamplers and a few other like Impact FaceDetailer and EfficientNode.
Perhaps I'm just oblivious to an obvious logic problem
Contexts connected to the output of a muted context are still processed, resulting in errors.
While attempting to expand my workflow with toggle-able options via muting, I tried using context muting at the end of a mutable section of workflow. Once I muted said section, I would receive errors upon attempting to run. Is this something that can be done as is, or would this require additions to the context module on your part? As I prefaced, perhaps I'm just missing something.
In my use case, I was attempting to create toggles allowing for saving multiple image outputs in multiple locations, using a mutable context into the WAS Suite's Image Save node. Here's a mock-up example where the second upscaler (U2) workflow is muted:
Obviously I could mute each of the contexts corresponding to a muted section, but in a more complex non mock-up workflow this would quickly become tedious.
Would it be possible to add a 'Mute' boolean to the I/O of contexts? If the context is able to read and change its mute state according to the boolean it would allow for more complex workflows; especially if it can be carried by the pipe and overridden by new input of the boolean like other I/O. Here's another basic mock-up with U2 disabled showing how such a workflow could be beneficial:
This functionality would nicely complement the logic nodes of other custom nodes packs.
Seriously I can not thank you enough for the amazing work you've done in releasing these nodes, they really are a game changer!
is there a way to convert bypass state to mute state and vice versa?
As the title says: the new Display Any node is very useful for debugging, but it would be tremendously more useful if it did two additional things:
1. Print the checkpoint name in the same way it does the sampler name and the scheduler name. Right now, an attempt to link the Model output of a Checkpoint Loader node to the input of a Display Any node generates something like this: <comfy.model_patcher.ModelPatcher object at 0x2d0f6b950>
2. Pass any received input through to other nodes. The printing within the node is much needed (I use the ttN textDebug node for this, but it stopped printing the name inside the node weeks ago, and it's not optimal). However, values like the checkpoint name, sampler name, and scheduler name I'd like to print to the terminal, concatenate with my labels, and save to files for various reasons.
Thanks!