
comfyui-inpaint-nodes's Introduction

ComfyUI Inpaint Nodes

Nodes for better inpainting with ComfyUI: Fooocus inpaint model for SDXL, LaMa, MAT, and various other tools for pre-filling inpaint & outpaint areas.

Fooocus Inpaint

Adds two nodes which allow using the Fooocus inpaint model. It is a small and flexible patch which can be applied to any SDXL checkpoint, transforming it into an inpaint model. The result can then be used like any other inpaint model and provides the same benefits. Read more

Download models from lllyasviel/fooocus_inpaint to ComfyUI/models/inpaint.
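If you prefer to script the download, the huggingface_hub package can fetch the whole repository; a minimal sketch, assuming huggingface_hub is installed and the path matches your ComfyUI location:

from huggingface_hub import snapshot_download

# downloads fooocus_inpaint_head.pth and the inpaint patch files
snapshot_download(
    repo_id="lllyasviel/fooocus_inpaint",
    local_dir="ComfyUI/models/inpaint",
)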

Inpaint workflow

Note: The implementation is somewhat hacky, as it monkey-patches ComfyUI's ModelPatcher to support the custom LoRA format the model uses.
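For illustration, such a monkey-patch boils down to swapping in a wrapper around the LoRA weight-merging routine. This is a rough sketch of the idea only; the exact attribute and signature vary across ComfyUI versions:

import comfy.lora

_original_calculate_weight = comfy.lora.calculate_weight  # location varies by ComfyUI version

def _fooocus_calculate_weight(*args, **kwargs):
    # handle patches in the custom Fooocus format here, then defer
    # to ComfyUI's stock implementation for everything else
    return _original_calculate_weight(*args, **kwargs)

comfy.lora.calculate_weight = _fooocus_calculate_weight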

Inpaint Conditioning

Fooocus inpaint can be used with ComfyUI's VAE Encode (for Inpainting) directly. However, this discards existing content in the masked area, and the denoise strength must be 1.0.

InpaintModelConditioning can be used to combine inpaint models with existing content. However, the resulting latent cannot be used directly to patch the model via Apply Fooocus Inpaint. This repository adds a new node, VAE Encode & Inpaint Conditioning, which provides two outputs: latent_inpaint (connect this to Apply Fooocus Inpaint) and latent_samples (connect this to KSampler).

It's the same as using both VAE Encode (for Inpainting) and InpaintModelConditioning, but with less overhead because it avoids VAE-encoding the image twice. Example workflow
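Conceptually, the node encodes the image once and exposes both results. A hypothetical skeleton in ComfyUI node style (the names below are illustrative, not the repository's actual implementation):

class VAEEncodeInpaintConditioning:
    RETURN_TYPES = ("CONDITIONING", "CONDITIONING", "LATENT", "LATENT")
    RETURN_NAMES = ("positive", "negative", "latent_inpaint", "latent_samples")
    FUNCTION = "encode"
    CATEGORY = "inpaint"

    def encode(self, positive, negative, vae, pixels, mask):
        # encode once, then derive the inpaint latent (for Apply Fooocus Inpaint)
        # and the sampling latent (for KSampler) from the same encoding
        ...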

Inpaint Pre-processing

Several nodes are available to fill the masked area prior to inpainting. They avoid seams as long as the input mask is large enough.

Fill Masked

This fills the masked area with a smooth transition at the border. It has three modes:

  • neutral: fills with grey, good for adding entirely new content
  • telea: fills with colors from the surrounding border (based on an algorithm by Alexandru Telea)
  • navier-stokes: fills with colors from the surrounding border (based on the Navier-Stokes equations from fluid dynamics)
(Comparison images: input, neutral, telea, navier-stokes.)
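The telea and navier-stokes modes correspond to OpenCV's two inpainting algorithms. A minimal sketch of the underlying calls, assuming opencv-python is installed and input.png/mask.png exist:

import cv2

image = cv2.imread("input.png")                      # BGR, uint8
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # 255 = area to fill

# inpaintRadius is the neighborhood considered around each masked pixel
telea = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
ns = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_NS)

cv2.imwrite("filled-telea.png", telea)
cv2.imwrite("filled-ns.png", ns)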

Blur Masked

This blurs the image into the masked area. The blur is weaker near the borders of the mask. Good for keeping the general colors the same.

(Comparison images: input, blur radius 17, blur radius 65.)
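One way to approximate this effect is to blur the whole image and blend it back in with a feathered mask, so the blur fades in toward the mask border. A sketch, not the repository's exact algorithm:

import torch
from torchvision.transforms.functional import gaussian_blur

def blur_masked(image: torch.Tensor, mask: torch.Tensor, radius: int) -> torch.Tensor:
    # image: [B, C, H, W] in 0..1, mask: [B, 1, H, W] with 1 = area to fill
    kernel = radius * 2 + 1
    blurred = gaussian_blur(image, kernel_size=kernel)
    weight = gaussian_blur(mask, kernel_size=kernel)  # feathered mask
    return image * (1 - weight) + blurred * weight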

Inpaint Models (LaMA, MAT)

This runs a small, fast inpaint model on the masked area. Models can be loaded with Load Inpaint Model and are applied with the Inpaint (using Model) node. This works well for outpainting or object removal.

The following inpaint models are supported; place them in ComfyUI/models/inpaint:

  • LaMa
  • MAT

(Comparison images: input, LaMa, MAT.)
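Both models are distributed as TorchScript files (the node loads them via torch.jit.load, as the tracebacks further below show). Loading one outside ComfyUI looks roughly like this; a sketch only, since the forward signature depends on how the model was exported:

import torch

model = torch.jit.load("ComfyUI/models/inpaint/big-lama.pt", map_location="cpu")
model.eval()

image = torch.rand(1, 3, 512, 512)   # RGB in 0..1
mask = torch.ones(1, 1, 512, 512)    # 1 = area to repaint

with torch.no_grad():
    result = model(image, mask)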

Inpaint Post-processing

Denoise to Compositing Mask

Takes a mask, an offset (default 0.1) and a threshold (default 0.2). Maps mask values in the range [offset → threshold] to [0 → 1]. Values below offset are clamped to 0, values above threshold to 1.

This is particularly useful in combination with ComfyUI's "Differential Diffusion" node, which allows using a mask as per-pixel denoise strength. Using the same mask for compositing (alpha blending) defeats the purpose, but no blending at all degrades quality in regions with zero or very low strength. This node creates a mask suitable for blending from the denoise mask.
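The mapping is a linear rescale with clamping; in code it amounts to something like:

import torch

def denoise_to_compositing_mask(mask: torch.Tensor, offset: float = 0.1, threshold: float = 0.2) -> torch.Tensor:
    # rescale [offset, threshold] linearly to [0, 1], clamping values outside the range
    return ((mask - offset) / (threshold - offset)).clamp(0.0, 1.0)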

Example Workflows

Example workflows can be found in workflows.

  • Simple: basic workflow, ignore previous content, 100% replacement
  • Refine: advanced workflow, refine existing content, 1-100% denoise strength
  • Outpaint: workflow for outpainting with pre-processing
  • Pre-process: complex workflow for experimenting with pre-processors
  • Promptless: same as above but without text prompt, requires IP-Adapter

Installation

Use ComfyUI Manager and search for "ComfyUI Inpaint Nodes".

or download the repository and put the folder into ComfyUI/custom_nodes.

or use Git:

cd ComfyUI/custom_nodes
git clone https://github.com/Acly/comfyui-inpaint-nodes.git

Restart ComfyUI after installing!


OpenCV is required for the telea and navier-stokes fill modes:

pip install opencv-python

Acknowledgements

comfyui-inpaint-nodes's People

Contributors: acly, spacepxl

comfyui-inpaint-nodes's Issues

"Soft Inpainting" feature from A1111 dev branch

Thank you for greatly expanding the inpainting for ComfyUI.

It is not really an issue but I don't know how to raise this suggestion otherwise.

I was thinking that you may be interested in the new "soft inpainting" feature from the latest dev branch of Automatic1111. It seems that ComfyUI is currently lacking such a feature.

Here's more details on it:
AUTOMATIC1111/stable-diffusion-webui#14208

Error when executing INPAINT_LoadFooocusInpaint:

Error occurred when executing INPAINT_LoadFooocusInpaint:

Weights only load failed. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 118

raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None

I had downloaded the head file from here: https://huggingface.co/lllyasviel/fooocus_inpaint/blob/main/fooocus_inpaint_head.pth

Earlier it showed "pickle data was truncated", so I thought it had some issues and downloaded the files manually and uploaded them to the folder, as opposed to using curl.

Now the new error is shown as pickle.UnpicklingError().
Am I on the right path to debugging? Or would you suggest reinstalling the nodes/ComfyUI/etc.?

Please do a workflow to add objects

My dearest friend, please add Krita AI's built-in Add Object workflow to the documentation. It may need a manual, but we will learn it. Thanks!

Error occurred when executing INPAINT_ApplyFooocusInpaint:

Error occurred when executing INPAINT_ApplyFooocusInpaint:

[Errno 2] No such file or directory: 'C:\Dev\samples.png'

File "D:\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\custom_nodes\comfyui-inpaint-nodes\nodes.py", line 148, in patch
save_image(latent["samples"], "C:\Dev\samples.png")
File "D:\ComfyUI\venv\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\venv\Lib\site-packages\torchvision\utils.py", line 150, in save_image
im.save(fp, format=format)
File "D:\ComfyUI\venv\Lib\site-packages\PIL\Image.py", line 2429, in save
fp = builtins.open(filename, "w+b")

Issue with the sampler

Hello!

I noticed that the inpaint doesn't work correctly when I use other KSamplers from other packs.

Check this out, the hair is all a noisy mass:


Batch size issue

I noticed I cannot create a batch with the node Repeat Latent Batch.

Is it possible to achieve this in another way?

comfyui-inpaint-nodes/util.py: resize_square cannot handle w/h >= 2 or w/h <= 0.5

I use this in ComfyUI; it raises an error when I input an image (width=724, height=1448):

Argument #4: padding size should be less than the corresponding input dimension, but got: padding (0, 724) at dimension 3 of input [1,3,1448,724]

File "/mnt/sda/comfyUI/custom nodes/comfyui-inpaint-nodes/util.py", line 37, in resize_square
image = F.pad(image,(0, pad_w, 0, pad_h), mode="reflect')

I modified the resize_square code and it runs normally:

def resize_square(image: Tensor, mask: Tensor, size: int):
    _, _, h, w = image.shape
    pad_w, pad_h, prev_size = 0, 0, w
    if w == size and h == size:
        return image, mask, (pad_w, pad_h, prev_size)

    if w < h:
        pad_w = h - w
        prev_size = h
    elif h < w:
        pad_h = w - h
        prev_size = w

    # reflection padding cannot exceed the input dimension, so pad in steps
    pad_w2, pad_h2 = pad_w, pad_h
    while pad_w2 > 0 or pad_h2 > 0:
        _pad_w = w - 1 if pad_w2 >= w else pad_w2
        pad_w2 -= _pad_w
        _pad_h = h - 1 if pad_h2 >= h else pad_h2
        pad_h2 -= _pad_h

        image = F.pad(image, (0, _pad_w, 0, _pad_h), mode="reflect")
        mask = F.pad(mask, (0, _pad_w, 0, _pad_h), mode="reflect")
    # original single-step padding, which fails when padding >= input size:
    # image = F.pad(image, (0, pad_w, 0, pad_h), mode="reflect")
    # mask = F.pad(mask, (0, pad_w, 0, pad_h), mode="reflect")

    if image.shape[-1] != size:
        image = F.interpolate(image, size=size, mode="nearest-exact")
        mask = F.interpolate(mask, size=size, mode="nearest-exact")

    return image, mask, (pad_w, pad_h, prev_size)

"Weights only load failed." when using the outpaint example

Error occurred when executing INPAINT_LoadFooocusInpaint:

Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 60

Everything else is unmodified, and I have all the same checkpoints AFAIK.


Which directory does OpenCV need to be installed in?

I ran pip install opencv-python from CMD in the ComfyUI\custom_nodes\comfyui-inpaint-nodes directory and got the following errors:
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1007)'))': /simple/opencv-python/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1007)'))': /simple/opencv-python/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1007)'))': /simple/opencv-python/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1007)'))': /simple/opencv-python/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1007)'))': /simple/opencv-python/
Could not fetch URL https://pypi.org/simple/opencv-python/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/opencv-python/ (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1007)'))) - skipping
ERROR: Could not find a version that satisfies the requirement opencv-python (from versions: none)
ERROR: No matching distribution found for opencv-python
Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1007)'))) - skipping

I have modified ApplyFooocusInpaint to handle video frames

class ApplyFooocusInpaint:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "model": ("MODEL",),
                "patch": ("INPAINT_PATCH",),
                "latent": ("LATENT",),
            }
        }

    RETURN_TYPES = ("MODEL",)
    CATEGORY = "inpaint"
    FUNCTION = "patch"

    def patch(self, model: ModelPatcher, patch: tuple[InpaintHead, dict[str, Tensor]], latent: dict[str, Any]):
        base_model: BaseModel = model.model
        latent_pixels = base_model.process_latent_in(latent["samples"])
        noise_mask = latent["noise_mask"].round()

        latent_mask = F.max_pool2d(noise_mask, (8, 8)).round().to(latent_pixels)

        inpaint_head_model, inpaint_lora = patch
        feed = torch.cat([latent_mask, latent_pixels], dim=1)
        inpaint_head_model.to(device=feed.device, dtype=feed.dtype)
        inpaint_head_feature = inpaint_head_model(feed)

        def input_block_patch(h, transformer_options, inpaint_head_feature):
            # ensure batch-size consistency here
            scale_factor = h.size(0) // inpaint_head_feature.size(0)
            if scale_factor != 1:
                inpaint_head_feature = inpaint_head_feature.repeat(scale_factor, 1, 1, 1)
            if transformer_options["block"][1] == 0:
                h = h + inpaint_head_feature.to(h)
            return h

        lora_keys = comfy.lora.model_lora_keys_unet(model.model, {})
        lora_keys.update({x: x for x in base_model.state_dict().keys()})
        loaded_lora = load_fooocus_patch(inpaint_lora, lora_keys)

        models = []
        # assumes model.clone() is a lightweight operation suitable for batching
        for i in range(feed.shape[0]):
            m = model.clone()
            m.set_model_input_block_patch(lambda h, opts: input_block_patch(h, opts, inpaint_head_feature))
            patched = m.add_patches(loaded_lora, 1.0)
            models.append(m)

        not_patched_count = sum(1 for x in loaded_lora if x not in patched)
        if not_patched_count > 0:
            print(f"[ApplyFooocusInpaint] Failed to patch {not_patched_count} keys")

        inject_patched_calculate_weight()
        return models  # return the list of patched models for batch processing

I have modified ApplyFooocusInpaint to handle video frames, but the mask needs to be extracted and appended behind the external drawing board so that the mask tensor is consistent with the image tensor; then the code runs normally. It now processes video frames without problems, but frame-to-frame consistency cannot be guaranteed, so the results are not good. The results when incorporating AnimateDiff are also not great. I kindly request that the author adjust the mask handling when he has time, to see whether image consistency can be maintained when extending to video frames. Thank you.

Error occurred :module 'nodes' has no attribute 'InpaintModelConditioning'

Error occurred when executing INPAINT_VAEEncodeInpaintConditioning:

module 'nodes' has no attribute 'InpaintModelConditioning'

File "I:\tgd\Blender_ComfyUI\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "I:\tgd\Blender_ComfyUI\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "I:\tgd\Blender_ComfyUI\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "I:\tgd\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui-inpaint-nodes\nodes.py", line 197, in encode

I can't get it to run; it reports the errors below. Can anyone help me?

(Also, when running inpaint-simple.json, the image is unchanged before and after. Could someone give me the original image from the example? I'd like to reproduce the case shown in inpaint.png to see where the problem is.)
Running inpaint-preprocess.json gives the following error:
Error occurred when executing INPAINT_MaskedFill:

The size of tensor a (512) must match the size of tensor b (64) at non-singleton dimension 2

File "D:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI\custom_nodes\comfyui-inpaint-nodes\nodes.py", line 237, in fill
image[:, :, :, i] *= m

Error occurred when executing INPAINT_ApplyFooocusInpaint: Sizes of tensors must match except in dimension 1. Expected size 1 but got size 2 for tensor number 1 in the list.

Error occurred when executing INPAINT_ApplyFooocusInpaint:

Sizes of tensors must match except in dimension 1. Expected size 1 but got size 2 for tensor number 1 in the list.

File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes\nodes.py", line 155, in patch
feed = torch.cat([latent_mask, latent_pixels], dim=1)

Stuck at BlurMask Node

This is great. However, it only works when I bypass the Blur Masked node. There is no command prompt error; it just gets stuck, and the RAM (not VRAM) goes to 100% after 2-3 minutes. I have to close cmd and relaunch.


I am on a 4090. Can you tell me what I am doing wrong here? Thank you!

Edit: This is what happens: it works with a blur mask value of 3 up to 5. At 6, it starts giving errors and Comfy crashes with a reconnecting message in the floating menu. Nothing in the command prompt.

Fails with AnimateDiff > 16 frames

Maybe it fails if it's not exactly 16 frames. Here's an example with 32 frames.

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-0246/utils.py", line 373, in new_func
    res_value = old_func(*final_args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/nodes.py", line 1409, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/nodes.py", line 1345, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 334, in motion_sample
    latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/utils_model.py", line 216, in wrapped_function
    return function_to_wrap(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/__init__.py", line 130, in KSampler_sample
    return _KSampler_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 712, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/__init__.py", line 149, in sample
    return _sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 618, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 557, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/sampling.py", line 745, in sample_lcm
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 281, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 271, in forward
    return self.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1030, in apply_model
    out = super().apply_model(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 268, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 370, in evolved_sampling_function
    cond_pred, uncond_pred = sliding_calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 478, in sliding_calc_cond_uncond_batch
    sub_cond_out, sub_uncond_out = comfy.samplers.calc_cond_uncond_batch(model, sub_cond, sub_uncond, sub_x, sub_timestep, model_options)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/.patches.py", line 4, in calc_cond_uncond_batch
    return calc_cond_uncond_batch_original_tiled_diffusion_25a5a0a7(model, cond, uncond, x_in, timestep, model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 222, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/model_base.py", line 85, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/SeargeSDXL/modules/custom_sdxl_ksampler.py", line 70, in new_unet_forward
    x0 = old_unet_forward(self, x, timesteps, context, y, control, transformer_options, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 852, in forward
    h = p(h, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/comfyui-inpaint-nodes/nodes.py", line 159, in input_block_patch
    h = h + inpaint_head_feature.to(h)
        ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~
RuntimeError: The size of tensor a (16) must match the size of tensor b (32) at non-singleton dimension 0

Prompt executed in 6.68 seconds

fill masked area is actually smaller than the input mask

Especially when I use the fill method, which keeps the pixels at the edge of the original image, it is difficult for me to change the color of a piece of clothing this way: white clothes will always be filled with white pixels.

falloff should at least support negative input, to offset the result by shrinking the mask range inward. If I want to fill with the background color to change the image, this would be more reasonable.
In most cases, neutral produces a relatively obvious boundary at the background edge.

I make a mask like this (see the attached screenshots), but I get poor results. I have to grow the mask to get the expected result.
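For reference, growing (dilating) a mask can be done with max pooling; a small standalone sketch, not part of this repository:

import torch
import torch.nn.functional as F

def grow_mask(mask: torch.Tensor, pixels: int) -> torch.Tensor:
    # dilate a [B, 1, H, W] binary mask outward by `pixels` via max pooling
    k = 2 * pixels + 1
    return F.max_pool2d(mask, kernel_size=k, stride=1, padding=pixels)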

Shape mismatch diffusion_model

Hi, I get the following printed out hundreds of times to the console when running this, although I still get an output, which is better than running without this.
Is this intended behavior?

[ApplyFooocusInpaint] Shape mismatch diffusion_model.middle_block.1.proj_out.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.middle_block.2.in_layers.2.weight, weight not merged (torch.Size([1280, 1280, 3, 3]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.middle_block.2.emb_layers.1.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.middle_block.2.out_layers.3.weight, weight not merged (torch.Size([1280, 1280, 3, 3]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.0.in_layers.2.weight, weight not merged (torch.Size([1280, 2560, 3, 3]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.0.emb_layers.1.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.0.out_layers.3.weight, weight not merged (torch.Size([1280, 1280, 3, 3]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.0.skip_connection.weight, weight not merged (torch.Size([1280, 2560, 1, 1]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.proj_in.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.0.attn1.to_out.0.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.0.attn2.to_out.0.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.0.ff.net.0.proj.weight, weight not merged (torch.Size([10240, 1280]) != torch.Size([10240]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.0.ff.net.2.weight, weight not merged (torch.Size([1280, 5120]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.1.attn1.to_out.0.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.1.attn2.to_out.0.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.1.ff.net.0.proj.weight, weight not merged (torch.Size([10240, 1280]) != torch.Size([10240]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.1.ff.net.2.weight, weight not merged (torch.Size([1280, 5120]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.2.attn1.to_out.0.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.2.attn2.to_out.0.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.2.ff.net.0.proj.weight, weight not merged (torch.Size([10240, 1280]) != torch.Size([10240]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.2.ff.net.2.weight, weight not merged (torch.Size([1280, 5120]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.3.attn1.to_out.0.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.3.attn2.to_out.0.weight, weight not merged (torch.Size([1280, 1280]) != torch.Size([1280]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.3.ff.net.0.proj.weight, weight not merged (torch.Size([10240, 1280]) != torch.Size([10240]))
[ApplyFooocusInpaint] Shape mismatch diffusion_model.output_blocks.0.1.transformer_blocks.3.ff.net.2.weight, weight not merged (torch.Size([1280, 5120]) != torch.Size([1280]))

Inpaint (Using Model) Node is Not Functioning Properly

If I use the built in mask editor in the Load Image node I get a vastly superior output versus using a mask that I load from an external source.

This basically makes this tool unusable for a large quantity of frames.


Module 'nodes' has no attribute 'InpaintModelConditioning'

I get this error when trying to use the example workflow for Inpaint Model Conditioning with SDXL. I also get it when trying to use SDXL inpaint in Krita AI Diffusion.
I have a very rudimentary understanding of Python, and after checking the contents of nodes.py I see that InpaintModelConditioning is indeed mentioned only once: it is called but never defined.

RuntimeError: PytorchStreamReader failed locating file constants.pkl: file not found

lama: E:\workplace\ComfyUI\models\inpaint\big-lama.pt
!!! Exception during processing!!! PytorchStreamReader failed locating file constants.pkl: file not found
Traceback (most recent call last):
File "E:\workplace\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\workplace\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\workplace\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\workplace\ComfyUI\custom_nodes\comfyui-inpaint-nodes\nodes.py", line 312, in load
sd = torch.jit.load(model_file, map_location="cpu").state_dict()
File "E:\workplace\ComfyUI\venvs\lib\site-packages\torch\jit_serialization.py", line 159, in load
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files, _restore_shapes) # type: ignore[call-arg]
RuntimeError: PytorchStreamReader failed locating file constants.pkl: file not found

Startup Error: Cannot import comfyui-inpaint-nodes module for custom nodes

I have installed your GitHub and downloaded the models you suggested. But when I restart Comfy, I get an error message about Fooocus Inpaint and when the workflow opens, your nodes are red saying that Comfy could not find them. If I look into Comfy Manager, it says that they are installed, but not able to be imported. Any ideas? Here is the error I see in the Comfy startup log:

Traceback (most recent call last):

File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\[nodes.py](https://nodes.py/)", line 1893, in load_custom_node

module_spec.loader.exec_module(module)

File "<frozen importlib._bootstrap_external>", line 940, in exec_module

File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed

File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes\__init__.py", line 6, in <module>

)[1].extend([".pt", ".pth", ".safetensors", ".patch"])

^^^^^^

AttributeError: 'set' object has no attribute 'extend'

Cannot import H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes module for custom nodes: 'set' object has no attribute 'extend'

RuntimeError: shape '[1, 1, 1408, 938]' is invalid for input of size 21131264

In a workflow, I used the node "LoadImagesFromDir //Inspire" to pass in multiple images. When execution reached the node "INPAINT_InpaintWithModel", this error occurred:

(attached workflow: 图像擦除_fooocus_seg 2.json)

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes\nodes.py", line 343, in inpaint
image, mask = to_torch(image, mask)
^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes\util.py", line 13, in to_torch
mask = mask.reshape(1, 1, mask.shape[-2], mask.shape[-1])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: shape '[1, 1, 1408, 938]' is invalid for input of size 21131264

Prompt executed in 59.13 seconds

Accept Batch

Hi there

Fantastic work!

Could this be modified to accept a batch of images, though? At the moment, if I feed it any more than one image I get a size-of-tensors error.

Would be great to be able to use this with Hotshot / AnimateDiff.
