bing-su / adetailer
Auto detecting, masking and inpainting with detection model.
License: GNU Affero General Public License v3.0
awesome extension :) thank you.
Would be great to be able to do multiple passes, and choose which model each pass is utilizing.
Could probably do this by just duplicating the extension and picking different models for each instance, but it would be nicer to have it built in.
When using ControlNet (via alwayson_scripts) and After Detailer together, the API does not return the ControlNet processed image. The final image is returned, but the canny/depth/pose reference image is not.
EDIT: I might have answered my own question, I see additional settings in the A1111 Settings:
ad_save_previews
ad_save_images_before
I'll see if these change the expected behavior.
EDIT 2: Nope, unfortunately the API still only returns the final image.
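For reference, every image the server does return comes back base64-encoded in the `images` array of the `/sdapi/v1/txt2img` response; a minimal sketch for inspecting them (the helper name is mine, not part of the API):

```python
import base64

def extract_images(response_json):
    """Decode every base64-encoded image in a txt2img API response.

    If the server chose to return ControlNet detection maps, they would
    show up as extra entries here; per the report above, only the final
    image is present.
    """
    return [base64.b64decode(s) for s in response_json.get("images", [])]

# toy response with one fake "image" payload
fake = {"images": [base64.b64encode(b"png-bytes").decode("ascii")]}
print(extract_images(fake))  # [b'png-bytes']
```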
Would applying face detailer and hand detailer at the same time be possible in one go using automatic1111?
Can we have the power to use a custom steps count for the adetailer pass instead of directly inheriting from the original image pass?
Would be really great to be able to store the positive and negative prompts for both 1st and 2nd passes in a text file after each generate action, and have a button in ADetailer to restore the last used prompts from previous sessions in A1111.
Especially now with the cool new multiple passes, there are 4 prompt fields to manually finagle back into place each new session.
As written in the title:
1st tab full body
2nd tab face
(3rd tab hand)
This extension is amazing. I think the only thing left would be a way to do just background. The main prompt can be the main subjects, then adetailer for face, hands and hopefully background in the future for really smooth workflow.
Would be awesome to be able to tick a box and have intermediate versions saved, so a before adetailer, after 1st pass and after 2nd pass, to make comparison of each step easier when tweaking params
There is an error when I install it; I'd like to know how to deal with it:
Error loading script: !adetailer.py
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui/modules/scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/root/autodl-tmp/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 848, in exec_module
File "", line 219, in _call_with_frames_removed
File "/root/autodl-tmp/stable-diffusion-webui/extensions/adetailer-main/scripts/!adetailer.py", line 15, in
from adetailer import (
File "/root/autodl-tmp/stable-diffusion-webui/extensions/adetailer-main/adetailer/init.py", line 5, in
from .ultralytics import ultralytics_predict
File "/root/autodl-tmp/stable-diffusion-webui/extensions/adetailer-main/adetailer/ultralytics.py", line 59, in
def mask_to_pil(masks, shape: tuple[int, int]) -> list[Image.Image]:
TypeError: 'type' object is not subscriptable
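For context, `tuple[int, int]` as a built-in generic annotation only works on Python 3.9+; on a 3.8 interpreter the module raises exactly this TypeError at import time. A common workaround (sketched with a hypothetical function, not the extension's actual fix) is to defer annotation evaluation:

```python
from __future__ import annotations
# The future import must be the first statement: it makes annotations
# lazy strings, so tuple[int, int] is never evaluated on Python 3.8.

def describe_shape(shape: tuple[int, int]) -> list[str]:
    # the annotation above is stored as text and skipped at runtime
    return [f"{shape[0]}x{shape[1]}"]

print(describe_shape((512, 768)))  # ['512x768']
```

The alternative is to use `typing.Tuple` / `typing.List` instead of the built-in types.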
Enable is always checked.
Other extensions have to be turned on when you want to use them,
but AD has to be turned off when you don't want to use it... is this intended?
If it's intended: even when it was always on before, the model was set to None, so it wasn't applied.
How about making the default None?
Great addon! I am wondering if the A1111 API would support this?
I read these docs and it got me excited that it might be possible?
https://github.com/mix1009/sdwebuiapi#scripts-support
Any hints you could give would be super awesome!!!
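For what it's worth, `alwayson_scripts` is the hook those docs describe. A minimal sketch of the request payload shape (the script name and the positional-vs-dict argument format depend on the installed ADetailer version, so treat the keys below as assumptions, not a documented contract):

```python
def build_txt2img_payload(prompt, ad_model="face_yolov8n.pt"):
    """Sketch of a /sdapi/v1/txt2img payload that enables ADetailer.

    The first entry in `args` is the enable flag; the remaining args
    must match whatever argument list your ADetailer version expects
    (an assumption here).
    """
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "ADetailer": {
                "args": [True, {"ad_model": ad_model}],
            },
        },
    }

payload = build_txt2img_payload("1girl, best quality")
print(payload["alwayson_scripts"]["ADetailer"]["args"][0])  # True
```

The dict would then be POSTed as JSON to the running webui's `/sdapi/v1/txt2img` endpoint.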
After the update, was there perhaps a change to the args? Calling the API the existing way now throws an error, so I rolled back.
The parameters I currently pass via the API are as follows.
alwayson_scripts = {
"After Detailer":
{
"args": [
true,
args[1]["ad_model"],
args[1]["ad_prompt"],
args[1]["ad_negative_prompt"],
args[1]["ad_conf"].to_i,
args[1]["ad_dilate_erode"].to_i,
args[1]["ad_x_offset"].to_i,
args[1]["ad_y_offset"].to_i,
args[1]["ad_mask_blur"].to_i,
args[1]["ad_denoising_strength"].to_f,
args[1]["ad_inpaint_full_res"],
args[1]["ad_inpaint_full_res_padding"].to_i,
args[1]["ad_use_inpaint_width_height"],
args[1]["ad_inpaint_width"].to_i,
args[1]["ad_inpaint_height"].to_i,
args[1]["ad_cfg_scale"].to_f,
"None",
1,
]
}
}
It works fine at commit 8c525a3c. If something was added since, please let me know and I'll try fixing it on my side.
Error message attached.
May 10 14:38:34 rubyon.co.kr sh[100987]: Error running process: /home/ai/stable-diffusion-webui/extensions/adetailer/scripts/!adetailer.py
May 10 14:38:34 rubyon.co.kr sh[100987]: Traceback (most recent call last):
May 10 14:38:34 rubyon.co.kr sh[100987]: File "/home/ai/stable-diffusion-webui/extensions/adetailer/scripts/!adetailer.py", line 346, in get_args
May 10 14:38:34 rubyon.co.kr sh[100987]: inp = get_one_args(enabled, *args)
May 10 14:38:34 rubyon.co.kr sh[100987]: File "/home/ai/stable-diffusion-webui/extensions/adetailer/adetailer/args.py", line 117, in get_one_args
May 10 14:38:34 rubyon.co.kr sh[100987]: return ADetailerArgs(**arg_dict)
May 10 14:38:34 rubyon.co.kr sh[100987]: File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
May 10 14:38:34 rubyon.co.kr sh[100987]: pydantic.error_wrappers.ValidationError: 2 validation errors for ADetailerArgs
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_use_cfg_scale
May 10 14:38:34 rubyon.co.kr sh[100987]: value could not be parsed to a boolean (type=type_error.bool)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_cfg_scale
May 10 14:38:34 rubyon.co.kr sh[100987]: value is not a valid float (type=type_error.float)
May 10 14:38:34 rubyon.co.kr sh[100987]: The above exception was the direct cause of the following exception:
May 10 14:38:34 rubyon.co.kr sh[100987]: Traceback (most recent call last):
May 10 14:38:34 rubyon.co.kr sh[100987]: File "/home/ai/stable-diffusion-webui/modules/scripts.py", line 417, in process
May 10 14:38:34 rubyon.co.kr sh[100987]: script.process(p, *script_args)
May 10 14:38:34 rubyon.co.kr sh[100987]: File "/home/ai/stable-diffusion-webui/extensions/adetailer/scripts/!adetailer.py", line 558, in process
May 10 14:38:34 rubyon.co.kr sh[100987]: arg_list = self.get_args(*args_)
May 10 14:38:34 rubyon.co.kr sh[100987]: File "/home/ai/stable-diffusion-webui/extensions/adetailer/scripts/!adetailer.py", line 355, in get_args
May 10 14:38:34 rubyon.co.kr sh[100987]: raise ValueError("\n".join(message)) from e
May 10 14:38:34 rubyon.co.kr sh[100987]: ValueError: [-] ADetailer: ValidationError when validating 1st arguments: 2 validation errors for ADetailerArgs
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_use_cfg_scale
May 10 14:38:34 rubyon.co.kr sh[100987]: value could not be parsed to a boolean (type=type_error.bool)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_cfg_scale
May 10 14:38:34 rubyon.co.kr sh[100987]: value is not a valid float (type=type_error.float)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_model: 'face_yolov8s.pt' (<class 'str'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_prompt: '' (<class 'str'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_negative_prompt: '' (<class 'str'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_conf: 30 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_dilate_erode: 32 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_x_offset: 0 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_y_offset: 0 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_mask_blur: 4 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_denoising_strength: 0.4 (<class 'float'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_inpaint_full_res: True (<class 'bool'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_inpaint_full_res_padding: 0 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_use_inpaint_width_height: False (<class 'bool'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_inpaint_width: 512 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_inpaint_height: 512 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_use_cfg_scale: 7.0 (<class 'float'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_cfg_scale: 'None' (<class 'str'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_controlnet_model: 1 (<class 'int'>)
May 10 14:38:34 rubyon.co.kr sh[100987]: ad_controlnet_weight: 1.0 (<class 'float'>)
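The dump above shows the positional args shifted by one slot: 7.0 landed in `ad_use_cfg_scale` and the string 'None' in `ad_cfg_scale`, because a new field was inserted before `ad_cfg_scale`. A quick way to spot that kind of drift is to zip the expected field order (taken from the dump above; the ControlNet fields are omitted for brevity) against the values being sent:

```python
EXPECTED_FIELDS = [
    "ad_model", "ad_prompt", "ad_negative_prompt", "ad_conf",
    "ad_dilate_erode", "ad_x_offset", "ad_y_offset", "ad_mask_blur",
    "ad_denoising_strength", "ad_inpaint_full_res",
    "ad_inpaint_full_res_padding", "ad_use_inpaint_width_height",
    "ad_inpaint_width", "ad_inpaint_height",
    "ad_use_cfg_scale", "ad_cfg_scale",  # new field inserted here
]

def diff_args(values):
    """Pair each expected field with the value that would land in it."""
    return dict(zip(EXPECTED_FIELDS, values))

# the values from the old-style API call in the report above
sent = ["face_yolov8s.pt", "", "", 30, 32, 0, 0, 4, 0.4, True, 0,
        False, 512, 512, 7.0, "None"]
mapping = diff_args(sent)
print(mapping["ad_use_cfg_scale"], mapping["ad_cfg_scale"])  # 7.0 None
```

Printing the full mapping makes it obvious which values slid into the wrong fields after the update.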
Error loading script: !adetailer.py
Traceback (most recent call last):
File "/home/*/stable-diffusion-webui/modules/scripts.py", line 256, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/*/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/j*/stable-diffusion-webui/extensions/adetailer/scripts/!adetailer.py", line 17, in <module>
from adetailer import (
File "/home/*/stable-diffusion-webui/extensions/adetailer/adetailer/__init__.py", line 2, in <module>
from .args import AD_ENABLE, ALL_ARGS, ADetailerArgs, enable_check
File "/home/*/stable-diffusion-webui/extensions/adetailer/adetailer/args.py", line 23, in <module>
class ArgsList(UserList):
File "/home/*/stable-diffusion-webui/extensions/adetailer/adetailer/args.py", line 25, in ArgsList
def attrs(self) -> tuple[str]:
TypeError: 'type' object is not subscriptable
i have the latest AUTOMATIC1111/stable-diffusion-webui
The latest adetailer a0cd6f1 (Sat May 13 06:23:56 2023)
Linux Mint OS
Feature request to support optional usage of the ControlNet inpainting model when this extension is enabled, as I believe this could improve results, especially at higher denoise values. More info here: Mikubill/sd-webui-controlnet#968
This could be handled with the external code API: https://github.com/Mikubill/sd-webui-controlnet/wiki/API#external-code-api
When I use openpose and ADetailer inpaints the face, I notice it tries to use openpose's full frame instead of just the face portion,
resulting in the faces regularly having tiny bodies inside,
because it is using the full-scale openpose input instead of its face portion.
Love this extension; hopefully it's something that can be fixed in the future.
Thanks for creating and sharing it!
I have previews enabled, showing progress for every step (0).
But during the adetailer pass at the end of regular generation, previews are completely frozen until the very last step.
I'm using the current commit (d95d82c) with the current (5ab7f213) version of sd-webui (torch2, xformers 0.0.17, video card is a 1070 and running ubuntu 22.10).
There are no errors in stdout and everything else functions properly.
Error running postprocess_batch: F:\AI\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py
Traceback (most recent call last):
File "F:\AI\stable-diffusion-webui\modules\scripts.py", line 463, in postprocess_image
script.postprocess_image(p, pp, *script_args)
File "F:\AI\stable-diffusion-webui\extensions\adetailer\scripts!adetailer.py", line 615, in postprocess_image
is_processed |= self._postprocess_image(p, pp, args, n=n)
File "F:\AI\stable-diffusion-webui\extensions\adetailer\scripts!adetailer.py", line 586, in _postprocess_image
processed = process_images(p2)
File "F:\AI\stable-diffusion-webui\modules\processing.py", line 526, in process_images
res = process_images_inner(p)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "F:\AI\stable-diffusion-webui\modules\processing.py", line 615, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "F:\AI\stable-diffusion-webui\modules\processing.py", line 1065, in init
image_masked.paste(image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(self.mask_for_overlay.convert('L')))
File "F:\AI\stable-diffusion-webui\venv\lib\site-packages\PIL\Image.py", line 1731, in paste
self.im.paste(im, box, mask.im)
ValueError: images do not match
any ideas?
With ddetailer I could send an image to inpaint, press "only masked", and set width/height to e.g. 768/768, and it would inpaint the faces at that resolution. When doing the same with adetailer I get an error. Is there a different way to do it with adetailer that I'm missing?
Arguments: ('task(2rp4l4pchnhlx65)', 2, 'absurdres, highres, best quality, 1girl, ', '(worst quality:2), (low quality:2), (normal quality:2), lowres, jpeg artifacts, signature, watermark, username, blurry, artist name, text, error', [], <PIL.Image.Image image mode=RGBA size=1024x1536 at 0x1ADA5407BB0>, None, {'image': <PIL.Image.Image image mode=RGBA size=1024x1536 at 0x1ADA54074F0>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1024x1536 at 0x1ADA54075B0>}, None, None, None, None, 20, 15, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.5, -1.0, -1.0, 0, 0, 0, False, 0, 1536, 1024, 1, 0, 1, 32, 0, '', '', '', [], 0, True, 'None', '', '', 30, 32, 0, 0, 4, 0.4, True, 0, True, 528, 1024, 7, 'control_v11p_sd15_inpaint [ebff9138]', 1, False, 'MultiDiffusion', False, 10, 10, 1, 64, False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, False, True, True, False, 3072, 192, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <controlnet.py.UiControlNetUnit object at 0x000001AC28652980>, 
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, False, 50) {}
Traceback (most recent call last):
File "C:\Users\apyr\Documents\auto1111\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\Users\apyr\Documents\auto1111\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\apyr\Documents\auto1111\stable-diffusion-webui\modules\img2img.py", line 181, in img2img
processed = process_images(p)
File "C:\Users\apyr\Documents\auto1111\stable-diffusion-webui\modules\processing.py", line 515, in process_images
res = process_images_inner(p)
File "C:\Users\apyr\Documents\auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\Users\apyr\Documents\auto1111\stable-diffusion-webui\modules\processing.py", line 604, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "C:\Users\apyr\Documents\auto1111\stable-diffusion-webui\modules\processing.py", line 1014, in init
mask = mask.crop(crop_region)
File "C:\Users\apyr\Documents\auto1111\stable-diffusion-webui\venv\lib\site-packages\PIL\Image.py", line 1228, in crop
raise ValueError(msg)
ValueError: Coordinate 'right' is less than 'left'
Just a question, but does the prompt during the AD pass take into account TI embedding and LORA placeholder texts?
I installed the extension through A1111's Extensions tab and clicked reload; the CMD output hung at "Restarting WebUI". I left it for a few hours before forcing it closed.
When running webui-user.bat it hangs at:
warnings.warn(
I deleted the extension folder then ran it again and I'm back to normal. I noticed the next line for me would normally be:
ControlNet v1.1.133
So maybe there's a conflict with ControlNet?
I have the latest version of SD and Adetailer.
I can't get it to run properly.
I restarted the webUI, restarted SD.
I enabled both settings for saving copies of mask and before-Adetailer.
I am using the default settings of SD.
SD Prompt: person (doesn't matter what I choose)
Adetailer prompt: face
Model: 8n (doesn't matter which I choose)
Adetailer enabled, doesn't matter what other settings (I have also tried checking Use Inpaint Width/height).
I get to 95%:
In my console I have this (typically the middle bar shows as 95%, same as WebUI image above):
When I Ctrl + C to kill it, I get:
Interrupted with signal 2 in <frame at 0x000001FFA0CE89B0, file 'c:\\webui\\webui.py', line 266, code wait_on_server>
After updating to Torch 2, I did a fresh install of SD. In img2img I use ControlNet Tile + Ultimate SD Upscale (4x-UltraSharp) and ADetailer.
When I turn on ADetailer, the faces are a garbled mess now; when I turn it off, it works the same as it did before the Torch update.
Is there some incompatibility or change that you know of that would cause this?
I downloaded face_yolov8n_v2.pt but it didn't show up in the model list.
There is currently a huge downside for adetailer, and that is we cannot use it on images with multiple subjects.
For example if we have 5 people in an image, adetailer face will modify all subjects.
If we can have an option to only target subject at index N (of the list of detected targets), this will greatly enhance the usability of adetailer.
For example if our image has a man and a woman, we can limit 1st tab to index 0 and inpaint a man, and limit 2nd tab to index 1 to inpaint the woman.
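The selection itself would be cheap once detection has run; a rough sketch of the idea (hypothetical helper, with bounding boxes as (x1, y1, x2, y2) tuples ordered left-to-right):

```python
def select_detection(bboxes, index):
    """Keep only the detection at `index` after ordering boxes
    left-to-right by their x1 coordinate; empty list if out of range."""
    ordered = sorted(bboxes, key=lambda box: box[0])
    if 0 <= index < len(ordered):
        return [ordered[index]]
    return []

# two faces: one box on the right, one on the left of the frame
boxes = [(300, 40, 380, 120), (60, 50, 140, 130)]
print(select_detection(boxes, 0))  # [(60, 50, 140, 130)]
print(select_detection(boxes, 5))  # []
```

With something like this, each ADetailer tab could filter the detector's output down to a single subject before masking.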
I have another extension similar to adetailer; however, when trying to do comparison runs with adetailer (latest version) disabled, it causes issues anyway:
Error running postprocess_batch: X:\STABLEDIFFUSION\AUTOMATIC111\extensions\adetailer\scripts\!adetailer.py
Traceback (most recent call last):
File "X:\STABLEDIFFUSION\AUTOMATIC111\modules\scripts.py", line 462, in postprocess_image
script.postprocess_image(p, pp, *script_args)
File "X:\STABLEDIFFUSION\AUTOMATIC111\extensions\adetailer\scripts!adetailer.py", line 438, in postprocess_image
args = get_args(*args_)
File "X:\STABLEDIFFUSION\AUTOMATIC111\extensions\adetailer\adetailer\args.py", line 73, in get_args
arg_dict = {all_args.attr: args[i] for i, all_args in enumerate(ALL_ARGS)}
File "X:\STABLEDIFFUSION\AUTOMATIC111\extensions\adetailer\adetailer\args.py", line 73, in
arg_dict = {all_args.attr: args[i] for i, all_args in enumerate(ALL_ARGS)}
IndexError: tuple index out of range
This causes Automatic1111 to become unresponsive unless closed.
On Windows, models are downloaded into C:\Users\Administrator\.cache\huggingface\hub\models--Bingsu--adetailer\snapshots\403e56c1b291da2145da824f3fde640bbe08de2f with a shortcut to C:\Users\Administrator\.cache\huggingface\hub\models--Bingsu--adetailer\blobs.
Is there any way to use ./models/adetailer directly? My network connection to huggingface is poor.
I have this installed, but I have no idea how to use it properly. Can you explain?
Hi. Thank you for the very cool extension, it helps unbelievably!
But I don't fully understand what each slider is responsible for.
Can you please describe each function briefly and in simple terms? A couple of words would be enough. Or, even better, release an update where this explanation pops up when you hover your cursor over it.
I don't really understand neural network theory, but I wish I understood what I was poking at. XD
If it's not too much trouble for you, of course.
(Sorry for the bad English)
Error running postprocess_batch: /root/stable-diffusion-webui/extensions/adetailer/scripts/!adetailer.py
Traceback (most recent call last):
File "/root/stable-diffusion-webui/modules/scripts.py", line 462, in postprocess_image
script.postprocess_image(p, pp, *script_args)
File "/root/stable-diffusion-webui/extensions/adetailer/scripts/!adetailer.py", line 642, in postprocess_image
is_processed |= self._postprocess_image(p, pp, args, n=n)
File "/root/stable-diffusion-webui/extensions/adetailer/scripts/!adetailer.py", line 587, in _postprocess_image
pred = predictor(ad_model, pp.image, args.ad_conf, **kwargs)
File "/root/stable-diffusion-webui/extensions/adetailer/adetailer/ultralytics.py", line 25, in ultralytics_predict
from ultralytics import YOLO
ModuleNotFoundError: No module named 'ultralytics'
Basically what the title says, I want to generate a completely new generation with completely different features but I want a face from existing images I have to this new generation.
Side note: I suggest you enable the Discussions tab, since this question doesn't really belong in Issues.
I have put face_yolo8s.pt into ./models/adetailer/face_yolov8s.pt, but when i enable adetailer, it said ValueError: [-] ADetailer: Model 'face_yolo8s.pt' not found. Available models: ['face_yolov8n.pt', 'face_yolov8s.pt', 'mediapipe_face_full', 'mediapipe_face_short', 'hand_yolov8n.pt']
Closed by restarting sd-webui.
Currently, the img2img process always uses the same width/height as the txt2img process:
adetailer/scripts/!adetailer.py
Lines 398 to 399 in 6cc6dbe
When the "Inpaint at full resolution" option is enabled, this should probably be unlinked and made controllable by the user. Quite often the purpose of using that option in img2img is to inpaint at a much higher resolution than the source image, such as 768x768 or beyond, since the image gets cropped to the masked area, upscaled, and then denoised. This is especially useful when running this extension with hires fix, as we may want to inpaint at a much higher resolution than the pre-hires-fix resolution.
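The "inpaint at full resolution" flow described above can be sketched with toy geometry (this is only an illustration of the crop/upscale math, not the webui implementation):

```python
def full_res_crop(mask_bbox, padding, target_w, target_h):
    """Given a mask bounding box (x1, y1, x2, y2), return the padded
    crop region and the scale factors applied when that region is
    resized to the target inpainting resolution."""
    x1, y1, x2, y2 = mask_bbox
    x1, y1 = x1 - padding, y1 - padding
    x2, y2 = x2 + padding, y2 + padding
    crop_w, crop_h = x2 - x1, y2 - y1
    return (x1, y1, x2, y2), (target_w / crop_w, target_h / crop_h)

# a 128px face box padded by 32px, inpainted at 768x768
region, scale = full_res_crop((100, 100, 228, 228), 32, 768, 768)
print(region, scale)  # (68, 68, 260, 260) (4.0, 4.0)
```

The point of unlinking the target size is that it controls the effective detail density: a larger target means the masked region is denoised at a higher resolution before being pasted back.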
Hello, yesterday everything was working great, but today when I started my webui, ADetailer was gone. I updated it, but it's still not showing, and I get this error in my terminal window. By the way, I am on a Mac:
Error loading script: !adetailer.py
It was a really great extension and I don't know what happened!?
As the title says, would be great to have a setting similar to 'Settings>Image Settings>Save a copy of image before doing face restoration'.
When enabled, it will save a copy before applying the face restoration.
This is good for comparisons, especially in batch processing.
I'm aware of the 'save mask previews' setting, but it would be great to also have an option for it to spit out a clean image for comparison.
Thanks!
Hello, I'm the one who opened the earlier issue about prompts.
Looking at the last commit, the default model value was changed from "None" to the first entry in the model array,
and because of this, ADetailer gets applied to every image generated through the API... so I watched it for a while...
:(
It seems to be applied unconditionally even though the ADetailer checkbox is unchecked.
Previously it was also always applied, but since the default model was "None" it seems it was bypassed.
For now I've hastily patched just the default-model selection part of the last commit and am using that; I'm leaving this note in the hope that you can take a look.
Receiving the following issue:
Error running postprocess_batch: C:\Users\User\Documents\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py
Traceback (most recent call last):
File "C:\Users\User\Documents\stable-diffusion-webui\modules\scripts.py", line 462, in postprocess_image
script.postprocess_image(p, pp, *script_args)
File "C:\Users\User\Documents\stable-diffusion-webui\extensions\adetailer\scripts!adetailer.py", line 446, in postprocess_image
i2i = self.get_i2i_p(p, args, pp.image)
File "C:\Users\User\Documents\stable-diffusion-webui\extensions\adetailer\scripts!adetailer.py", line 409, in get_i2i_p
self.update_controlnet_args(i2i, args)
File "C:\Users\User\Documents\stable-diffusion-webui\extensions\adetailer\scripts!adetailer.py", line 424, in update_controlnet_args
self.controlnet_ext.update_scripts_args(
File "C:\Users\User\Documents\stable-diffusion-webui\extensions\adetailer\controlnet_ext\controlnet_ext.py", line 67, in update_scripts_args
self._update_scripts_args(p, model, weight)
File "C:\Users\User\Documents\stable-diffusion-webui\extensions\adetailer\controlnet_ext\controlnet_ext.py", line 57, in _update_scripts_args
control_mode=self.external_cn.ControlMode.BALANCED,
AttributeError: module 'extensions.sd-webui-controlnet.scripts.external_code' has no attribute 'ControlMode'
Please advise!
Adetailer being on by default now means that I often miss the setting on my initial generation on a webui load. Not always so bad when using txt2img on a low batch count, but not good when using img2img on an image with many people.
I've looked through the ui-config.json for an option to disable by default but cannot find it. This was not an issue when the default model was none.
I would request restoring ADetailer to disabled by default, or adding a "None" model selection so that it isn't triggered when undesired. Additionally, could you add an easily identifiable setting (e.g. adetailer.py/XXX2img/ADetailer enabled: true) in ui-config.json, or point out which one it is currently?
Usage question: after installing the extension it shows up below the SEED field; I check it to enable and select a model. Below that, do I also need to check "INPAINT" and "CON V11p SD15" with all modes enabled? As for the red face tracking box, I found the option in SD's global settings and enabled it, so the output folder now contains 2 identical images, one of which has a red box on the face; otherwise it doesn't show.
Where is the red face tracking box configured? And where is the "CON V11p SD15" all-modes option?
For models downloaded directly from huggingface, where should they go: stable-diffusion-webui\extensions\adetailer or stable-diffusion-webui\models\adetailer?
I tried to use this + ControlNet openpose to produce batches of images, but I found only the first image could be controlled by the ControlNet model; the others got random poses.
I'm using ControlNet 1.125 with model sd15_openpose.
"Thank you very much for your sharing. Could you please provide details on how to use it? I installed the plugin and launched it in T2IMG or IMAG2IMG, but the red facial detection box did not appear on the image. Thank you."