
sd-webui-lora-block-weight's Introduction

  • 🔭 I’m currently working on Generative AI
  • 🌱 I’m currently learning Python, Neural Networks, Generative AI

sd-webui-lora-block-weight's People

Contributors

akegarasu, alulkesh, daniel-poke, hako-mikan, manaball123, naokisato102, nonnonstop, oedosoldier, sometimesacoder, storyicon, torara46, zeng-hq


sd-webui-lora-block-weight's Issues

When installed as an extension, the txt file cannot be used

This is a great extension. I have found only a few guides on what the LoRA blocks are doing; there has been more independent research on the U-Net.

Maybe we can use this as an extension instead? The paths are currently set to look in the custom scripts folder.

Or, could this LoRA block merging feature be added to https://github.com/hako-mikan/sd-webui-supermerger?

(I'm sorry if it's already been added. It looked like it is only for extracting LoRAs from models, choosing block weights, and saving to a file.)

IndexError: list index out of range

No matter how I adjust the position, I get this error:

Error running process_batch: D:\AI\Stable Diffusion\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "D:\AI\Stable Diffusion\modules\scripts.py", line 435, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "D:\AI\Stable Diffusion\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 280, in process_batch
loradealer(o_prompts ,self.lratios,self.elementals)
File "D:\AI\Stable Diffusion\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 503, in loradealer
if len(lorars) > 0: load_loras_blocks(lorans,lorars,multipliers,elements,ltype)
File "D:\AI\Stable Diffusion\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 529, in load_loras_blocks
lbw(lora.loaded_loras[l],lwei[n],elements[n])
IndexError: list index out of range

I got an error when doing Reload Presets in Elemental

If I do Reload Presets without entering anything additional, I get an error.
The same result occurs when I enter a new preset and click Save Presets, then Reload Presets.
Also, it seems that I have to press Shift+Enter to start a new line in the Elemental preset entry field. Sorry if this is intended behavior.


M1 MacBook Pro, Google Chrome
python: 3.10.9  •  torch: 2.1.0.dev20230323  •  xformers: N/A  •  gradio: 3.23.0  •  commit: [22bcc7be]

Error when pressing Reload Presets:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
TypeError: Script.ui.<locals>.reloadpresets() takes 0 positional arguments but 1 was given

Error when pressing Save Presets:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
TypeError: Script.ui.<locals>.savepresets() takes 1 positional argument but 2 were given

XYZ plot does not work

The default XYZ plot works, but this one does not.
Generate produces a single image and that's it.
I am trying with the latest updated A1111.

UnboundLocalError: local variable 'xst' referenced before assignment

Since the newest update, when doing a simple X plot with the original weight, I get this error when the plot finishes and should display the grid. This worked before the update.

I would also like to ask a question: is it possible to plot the strength value together with the original block weight? For example, NONE,ALL,INS,IND,INALL,MIDD,OUTD,OUTS,OUTALL,ALL0.5 on X,
and different strength values on Y, from 0 to 1 for example. Thank you for this extension.

File "C:\Users\Computer\webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "C:\Users\Computer\webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\Computer\webui\modules\txt2img.py", line 53, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "C:\Users\Computer\webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 605, in newrun
processed = script.run(p, *script_args)
File "C:\Users\Computer\webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 445, in run
grids.append(smakegrid(images,xst,yst,origin,p))
UnboundLocalError: local variable 'xst' referenced before assignment

[Bug?] List Index out of Range under Certain Conditions

When the lora with block weight preset is not the first lora in the prompt, the following exception is raised:

Error running process_batch: sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "modules\scripts.py", line 395, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 280, in process_batch
loradealer(o_prompts ,self.lratios,self.elementals)
File "sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 496, in loradealer
if len(lorars) > 0: load_loras_blocks(lorans,lorars,multipliers,elements)
File "sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 535, in load_loras_blocks
locallora = lbw(locallora,lwei[i],elements[i])
IndexError: list index out of range

ValueError: 'IN00' is not in list

X:Block ID, BASE,Y: values,0, Z:none,, base:0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 (1/130)
locon load lora method
LoRA Block weight: Ralph_222-000007: 0.7 x [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
locon load lora method

Error completing request
Arguments: ('task(1kwi8o2hu3jdb9b)', '222, 1girl, solo,realistic, black hair, black eyes, looking at viewer, white background, simple background, middle hair, portrait, smile, lips, freckles, makeup,lora:Ralph_222-000007:0.7:XYZ, ', 'EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans,extra fingers,fewer fingers,((watermark:2)),(white letters:1), (multi nipples), lowres, bad anatomy, bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worst quality, low qualitynormal quality, jpeg artifacts, signature, watermark, username,bad feet, {Multiple people},lowres,bad anatomy,bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worstquality, low quality, normal quality,jpegartifacts,signature, watermark, blurry,bad feet,cropped,poorly drawn hands,poorly drawn face,mutation,deformed,worst quality,low quality,normal quality,jpeg artifacts,signature,extra fingers,fewer digits,extra limbs,extra arms,extra legs,malformed limbs,fused fingers,too many fingers,long neck,cross-eyed,mutated hands,polar lowres,bad body,bad proportions,gross proportions,text,error,missing fingers,missing arms,missing legs,extra digit,', [], 30, 0, False, False, 1, 1, 7, 1531129687.0, -1.0, 0, 0, 0, False, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, 'MultiDiffusion', False, 10, 10, 1, 64, False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, False, True, True, 0, 2048, 128, False, '', 0, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x00000205B72FD4B0>, <scripts.external_code.ControlNetUnit object at 0x00000205B72FFF10>, <scripts.external_code.ControlNetUnit object at 0x00000205B72FFCD0>, False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Pooling Max', False, 'Lerp', '', '', False, 
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nFace_Strong:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0\nFace_weak:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0.2,0,0,0,0,0\nMan1:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0\nMan2:1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1\nStyle1:1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0\nStyle2:1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\nStyle3:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1\nStyle4:1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1', True, 1, 'Block ID', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 'values', '0,0.25,0.5,0.75,1', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, 3, 0, False, False, 0, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 7, 'ALL,INS,IND,INALL,MIDD,OUTD,OUTS,OUTALL', 0, '', 0, '', True, False, False, False, 0, 'Blur First V1', 0.25, 10, 10, 10, 10, 1, False, '', '', 0.5, 1, False, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "J:\AI\novelai-webui\novelai-webui-aki-v3\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "J:\AI\novelai-webui\novelai-webui-aki-v3\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "J:\AI\novelai-webui\novelai-webui-aki-v3\modules\txt2img.py", line 53, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "J:\AI\novelai-webui\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 595, in newrun
processed = script.run(p, *script_args)
File "J:\AI\novelai-webui\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 402, in run
if "values" in ytype:c_base = weightsdealer(y,x,base)
File "J:\AI\novelai-webui\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 364, in weightsdealer
flagger[blockid.index(id)] =changer
ValueError: 'IN00' is not in list

Draw grid error

When trying to run the Effective Block Analyzer, I get this error:

Traceback (most recent call last):
  File "D:\GitProjects\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\GitProjects\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\GitProjects\stable-diffusion-webui\modules\txt2img.py", line 53, in txt2img
    processed = modules.scripts.scripts_txt2img.run(p, *args)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 815, in newrun
    processed = script.run(p, *script_args)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 343, in run
    grids.append(smakegrid(images,xs,ys,origin,p))
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 771, in smakegrid
    grid = images.draw_grid_annotations(grid,int(p.width), int(p.height), hor_texts, ver_texts)
  File "D:\GitProjects\stable-diffusion-webui\modules\images.py", line 177, in draw_grid_annotations
    assert cols == len(hor_texts), f'bad number of horizontal texts: {len(hor_texts)}; must be {cols}'
AssertionError: bad number of horizontal texts: 3; must be 6

If you need any other info, feel free to ask. Ty.

Only the strength of the last LoRA in the prompt counts (and it also applies to all the other LoRAs in the prompt)

I was confused about why my images were coming out differently than in previous versions, but I finally figured out that the strength of the last LoRA applies to ALL the other LoRAs.

For example if the loras are
<lora:exampleX:1:INALL>, <lora:exampleY:0.5:OUTS>, <lora:exampleZ:0.8:MIDD>
Every LoRA will have the strength of lora:exampleZ, i.e. the above prompt is the same as
<lora:exampleX:0.8:INALL>, <lora:exampleY:0.8:OUTS>, <lora:exampleZ:0.8:MIDD>

It doesn't matter which LoRAs I use; the last one always dictates the strength of the others:
<lora:exampleZ:1:INALL>, <lora:exampleY:0.5:OUTS>, <lora:exampleX:2:MIDD>
will produce the same result as
<lora:exampleZ:2:INALL>, <lora:exampleY:2:OUTS>, <lora:exampleX:2:MIDD>

How to block weight IA3 and LoKr?

For a LoKr LyCORIS, I use lyco:1.5lokrrryu:0.8:0.8 or lyco:1.5lokrrryu:0.8:0.8:LOHA1
LOHA1:1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1
(two output images attached for comparison)

I don't know; the difference is small compared to LoRA block weight.

lyco:1.5lokrrryu:0.8:0.8:0:LOHA1 shows:
lbw(lycomo.loaded_lycos[l],lwei[n],elements[n])
IndexError: list index out of range
I don't use DyLoRA.

Error running the extension?

Sorry, but after I installed the extension and used it with the default settings, it always reports an error:

Error running process_batch: H:\stable-diffusion-webui-directml\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "H:\stable-diffusion-webui-directml\modules\scripts.py", line 395, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "H:\stable-diffusion-webui-directml\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 226, in process_batch
loradealer(self.newprompts ,self.lratios)
AttributeError: 'Script' object has no attribute 'newprompts'

Whether I write the tag as lora:virtualgirlAim_v20:0.7:ALL or lora:virtualgirlAim_v20:0.7:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, the result is the same.

Please help, thanks a lot.

Request: make preset files support comments

If this is already possible, please document how; if not, I would like it to be implemented.
Without notes, I can't remember what each preset was for.

I think it only takes a small change to the loading code, so I will try to do it myself.

I am calling the API interface of the WebUI; how should I add your function in the script?

I am calling the API interface of the WebUI; how should I add your function in the script?

import requests
import cv2
from base64 import b64encode

def readImage(path):
    img = cv2.imread(path)
    retval, buffer = cv2.imencode('.jpg', img)
    b64img = b64encode(buffer).decode("utf-8")
    return b64img

b64img = readImage(r"C:\Users\Administrator\Desktop\test\demo1.jpg")

class controlnetRequest():
    def __init__(self, prompt):
        self.url = "http://localhost:7860/sdapi/v1/txt2img"
        self.body = {
            "prompt": "lora:akemiTakada1980sStyle_1:0.75:OUTALL,takada akemi, 1980s (style),painting (medium), retro artstylewatercolor,looking at viewer,solo,upbody,zz00,eyes_zz00,hair_zz00,(medium)1girl,woman,famale,skyzz00,lora:zz00:0.6:MIDD,portrait_zz00,lips_zz00,",
            "negative_prompt": "(painting by bad-artist-anime:0.9), (painting by bad-artist:0.9), watermark, text, error, blurry, jpeg artifacts, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, artist name, (worst quality, low quality:1.7), bad anatomy,",
            "seed": -1,
            "subseed": -1,
            "subseed_strength": 0,
            "batch_size": 1,
            "n_iter": 1,
            "steps": 20,
            "cfg_scale": 7,
            "width": 512,
            "height": 512,
            "restore_faces": True,
            "eta": 0,
            "sampler_index": "DPM++ 2M Karras",
            "alwayson_scripts": {
                "LoRA Block Weight": {
                    "Active": True,
                },
                "ControlNet": {
                    "args": [
                        {
                            "enabled": False,
                            "input_image": [b64img],
                            "module": 'softedge_hed',
                            "model": 'control_v11p_sd15_softedge [a8575a2a]'
                        },
                        {
                            "enabled": False,
                            "input_image": "",
                            "module": "",
                            "model": ""
                        }
                    ]
                }
            }
        }

    def sendRequest(self):
        r = requests.post(self.url, json=self.body)
        response = r.json()
        print(response)
        return response

js = controlnetRequest("walter white").sendRequest()
print(js)

import io, base64
from PIL import Image

#pil_img = Image.open(r"C:\Users\Administrator\Desktop\test\demo1.jpg")
image1 = Image.open(io.BytesIO(base64.b64decode(js["images"][0])))
#image2 = Image.open(io.BytesIO(base64.b64decode(js["images"][1])))
print(image1)
#print(image2)

# Save the Pillow image object as a JPEG file
image1.save(r"C:\Users\Administrator\Desktop\test\test1.jpg", format='JPEG')
#image2.save(r"C:\Users\Administrator\Desktop\test\test2.jpg", format='JPEG')
image1.show()
#image2.show()

This code will report an error:
Error running process: D:\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "D:\novelai-webui-aki-v3\modules\scripts.py", line 417, in process
script.process(p, *script_args)
File "D:\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 208, in process
loraratios=loraratios.splitlines()
AttributeError: 'dict' object has no attribute 'splitlines'
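For reference, the WebUI API expects each alwayson_scripts entry to carry a positional "args" list that mirrors the script's ui() components (as the ControlNet entry above already does), rather than named keys such as "Active". Below is a minimal sketch of that shape only; the number, order, and meaning of the entries are assumptions that must be checked against the installed extension's ui().

payload = {
    "prompt": "masterpiece, 1girl, <lora:example:1:MIDD>",
    "steps": 20,
    "alwayson_scripts": {
        "LoRA Block Weight": {
            # Positional args, not named fields; the values below are placeholders.
            "args": [
                True,  # hypothetical: the Active checkbox
                # ...remaining values in whatever order the extension's ui() defines
            ],
        },
    },
}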

Printed Weights Bug

<lora:xxxx:0.5:ALL> and <lora:xxxx:1:ALL0.5>

print the same weights to the console [0.5,0.5......]

But 0.5:ALL actually applies at full weight, which had me chasing ghosts for ages.

Enhancement: specify the weight of all 17 blocks directly in the prompt

The end result I'm asking for:
<lora:loraname:1:1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1>
This should generate a picture with weights of 0.5 at the OUT4 and OUT8 blocks, and all other blocks at 1.

Long version:
After using XYZ plotting, I found that I want to reduce the OUT4 and OUT8 blocks to 0.5.

None of the standard tags work for me, so I need to create a new tag, such as CREATIVENAME:1,1,1,1,1,1,1,1,1,0.5,1,1,0.5,1,1,1,1,1

The query string will be
<lora:loraname:1:CREATIVENAME>

The problem is that the CREATIVENAME value may not be saved or may be accidentally changed the next day.

You can give more meaningful names to tags, for example:
NOUT04to05OUT08to05:1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1

This solves the comprehension problem, but you can still accidentally mess up or erase saved weights.

Then you can make the tag name and its values almost identical, for example:
1_1_1_1_1_1_1_1_0.5_1_1_1_0.5_1_1_1_1:1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1

The prompt will then say:
<lora:loraname:1:1_1_1_1_1_1_1_1_0.5_1_1_1_0.5_1_1_1_1>

And it will work and will be stored in the EXIF of the finished image. After all, the most important part of this whole issue is the lack of repeatability due to insufficient information in the EXIF.

But there is still the inconvenience of having to replace _ with , and vice versa. Why not instead add the ability to specify weights directly in the prompt?
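As a sketch of the requested behaviour (hypothetical parsing, not the extension's current code), the fourth field of the tag could be treated as literal weights whenever it contains commas, and as a preset name otherwise:

def resolve_ratios(field, presets):
    # Hypothetical: literal comma-separated weights take precedence over preset names.
    if "," in field:
        return [float(w) for w in field.split(",")]   # <lora:loraname:1:1,1,...,0.5,...>
    return presets[field]                             # <lora:loraname:1:CREATIVENAME>

presets = {"CREATIVENAME": [1, 1, 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 0.5, 1, 1, 1, 1]}
print(resolve_ratios("1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1", presets))
print(resolve_ratios("CREATIVENAME", presets))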

Thank you so much for the wonderful expansion, it opens up amazing possibilities!

LoRA block weight changes have NO EFFECT in new versions of stable diffusion webui

LoRA block weight changes have NO EFFECT after commit 80b26d2a of sd webui.

Here is the link to a commit: 80b26d2a commit

Commit message:
apply Lora by altering layer's weights instead of adding more calculations in forward()

What's the problem?

First, generate an image with a lora prompt: <lora:example:1:ALL>->img1
Then, fix seed, and generate another image with a lora prompt: <lora:example:1:NONE>->img2
You will find that img2 is the same as img1.❌
But if you change the multiplier of added lora, just a little bit: <lora:example:0.99:NONE>->img3
img3 is the result that img2 should be.✔

So, why?

This commit changed the behaviour of the builtin lora extension. In previous versions, the LoRA was applied to the model by adding a "lora forward" step to the forward pass. After this commit, the LoRA is applied by altering the layers' weights, and the author added a cache mechanism to avoid reapplying the LoRA every time.

This change leads to a problem:
If this script changes the weights of a LoRA layer (for example, from <lora:example:1:ALL> to <lora:example:1:NONE>), the changed values will NOT take effect, because the cache thinks the changed LoRA layer is the same as the previous one, so the lora extension does NOTHING.
But if you change the multiplier a little bit (for example, from <lora:example:1:NONE> to <lora:example:0.99:NONE>), the cache is dropped, and the weight changes of the LoRA blocks take effect.

Solution

Here is a workaround:
Change line 262 in stable-diffusion-webui/extensions-builtin/lora/lora.py
from if current_names != wanted_names: to if True:
That will disable the cache and force the lora extension to reapply the LoRA every time.
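To make the idea concrete, here is a simplified illustration of the cache check (not the exact webui source; the attribute and helper names are illustrative):

def maybe_reapply(layer, loaded_loras):
    # Simplified illustration: the cache key covers only (name, multiplier),
    # so changing block weights alone leaves the key unchanged and the
    # layer weights are never rebuilt.
    wanted_names = tuple((l.name, l.multiplier) for l in loaded_loras)
    current_names = getattr(layer, "lora_current_names", ())
    if current_names != wanted_names:  # the workaround above replaces this check with "if True:"
        reapply_lora_weights(layer)    # hypothetical helper that rebuilds the layer's weights
        layer.lora_current_names = wanted_names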

But I don't think this is a good solution; maybe we can modify this script to fix the problem instead.

Forgive my poor English, and feel free to ask me more about this problem.🙂

Memory leak

Every time an image is generated with a LoRA, a bit of memory leaks; eventually you get an out-of-memory error.
With extension disabled (steady memory usage):
(memory usage screenshots attached)

With extension enabled (memory usage increases every time after you hit generate):
(memory usage screenshots attached)

To reproduce this easily, set the resolution to 64x64 with 1 step and add a LoRA to the prompt; the bigger the LoRA, the faster you will notice it.

The default value of DyLoRA in LyCORIS should be of type None, rather than [0]

Hi,

Regarding the README instructions for using LyCORIS from a1111-sd-webui-lycoris, the recommendation to set DyLoRA to [ :0: ] may not be correct.

README.md
For LyCORIS using a1111-sd-webui-lycoris, the syntax is different (:1:1:0:IN02): you need to input two values for the text encoder and U-Net, and :0: for DyLoRA. a1111-sd-webui-lycoris is under development, so this syntax might change.

This is because the initial value of DyLoRA in LyCORIS is of type None, and setting it to a value (e.g. 0) may result in different behavior compared to the default value.

In the lycoris.py code, the default value of dyn_dims is None, as specified on line 471.

a1111-sd-webui-lycoris/lycoris.py
...
def load_lycos(names, te_multipliers=None, unet_multipliers=None, dyn_dims=None):
...
lyco.dyn_dim = dyn_dims[i] if dyn_dims else None
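To illustrate the point about the default: a non-empty list such as [0] is truthy in Python, so the quoted expression assigns 0 instead of falling back to None.

def pick_dyn_dim(dyn_dims, i=0):
    # Mirrors the quoted expression: dyn_dims[i] if dyn_dims else None
    return dyn_dims[i] if dyn_dims else None

print(pick_dyn_dim(None))  # None -> the library's default behaviour
print(pick_dyn_dim([0]))   # 0    -> presumably what an explicit ":0:" in the prompt produces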

Can you please confirm? Thank you!

Feature Request - Inherit Special Character

I've been playing around with this for a little while now and I think I'm getting the hang of the syntax. One low-hanging fruit that I think would be great to implement is a variable syntax that inherits the original weight from the prompt and substitutes it for a special character.

For example: <lora:my_lora:0.8:DEMO> where DEMO:X,0,0,0,0,0,0,0,X,X,X,X,0,0,0,0,0. This would take the 0.8 from the prompt and substitute it for each X (or whatever character you choose), becoming 0.8,0,0,0,0,0,0,0,0.8,0.8,0.8,0.8,0,0,0,0,0 at runtime.

This saves making multiple permutations of the same structure that differ only in a single weight applied identically across the whole array.

Additionally, I thought you could perhaps take this a step further and allow an offset to the replace syntax.

For example: <lora:my_lora:0.8:DEMO> where DEMO:X,0,0,0,0,0,0,0,X,X+0.1,X+0.2,X+0.1,0,0,0,0,0. This would take the 0.8 from the prompt as previously suggested, but also apply any + or - offsets placed next to each X, so the weights would become 0.8,0,0,0,0,0,0,0,0.8,0.9,1,0.9,0,0,0,0,0 at runtime.
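A rough sketch of the proposed substitution (hypothetical helper, not part of the extension): every X token, optionally followed by a +/- offset, is replaced by the prompt strength at runtime.

import re

def expand_preset(preset, strength):
    # Hypothetical: substitute the prompt strength for X, applying any +/- offset.
    weights = []
    for token in preset.split(","):
        m = re.fullmatch(r"X([+-]\d*\.?\d+)?", token.strip())
        if m:
            weights.append(strength + (float(m.group(1)) if m.group(1) else 0.0))
        else:
            weights.append(float(token))
    return weights

# <lora:my_lora:0.8:DEMO> with DEMO:X,0,0,0,0,0,0,0,X,X+0.1,X+0.2,X+0.1,0,0,0,0,0
print(expand_preset("X,0,0,0,0,0,0,0,X,X+0.1,X+0.2,X+0.1,0,0,0,0,0", 0.8))
# -> approximately [0.8, 0, 0, 0, 0, 0, 0, 0, 0.8, 0.9, 1.0, 0.9, 0, 0, 0, 0, 0]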

What do you think?

Bug: Extension stops generating images when using hires. fix

First of all thank you for this wonderful extension!
There is a bug related to image grid generation that makes this extension stop working when it is used together with AUTOMATIC1111's WebUI hires. fix option.
The error is: AssertionError: bad number of horizontal texts: 5; must be 7.
This is probably due to a difference in grid parameter processing between the WebUI and the LoRA code.
(screenshots attached)

specs as per WebUI:
python: 3.10.7  •  torch: 2.0.0+cu118  •  xformers: N/A  •  gradio: 3.16.2  •  commit: a9fed7c3  •  checkpoint: 89d59c3dde

When I run this without hires. fix, there are no issues.
I have tried changing the settings in "Settings" -> "User interface" -> "Show grid in results for web"
and "Saving images/grids" -> "Always save all generated image grids"
to make this work without generating a grid image, but the error still occurs. So it seems that trying to skip grid image generation doesn't help; the code encounters an error and stops working whether the user wants to skip grid generation or not.

There is a similar error here: AUTOMATIC1111/stable-diffusion-webui#6866
But I am not sure why the error still occurs.

With my limited Python knowledge, it looks like images.py expects 7 but the LoRA code outputs 5.

The plugin suddenly crashed today

These tags suddenly cannot be read, and the script reports an error.
(Screenshot 2023-04-14 153808)
After tossing and turning for a while, I replaced LoCon with LyCORIS, which resulted in more serious errors.
(Screenshot 2023-04-14 153844)

Incompatibility after update: UnboundLocalError: local variable 'output_shape' referenced before assignment

Webui version: 955df77
Locon/Lycoris extension version: 0224f1ad

Traceback (most recent call last):
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 625, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "F:\stable-diffusion-webui\modules\processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "F:\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 305, in lora_Linear_forward
    lora_apply_weights(self)
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 273, in lora_apply_weights
    self.weight += lora_calc_updown(lora, module, self.weight)
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 564, in lora_calc_updown
    updown = rebuild_weight(module, target)
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 557, in rebuild_weight
    if len(output_shape) == 4:
UnboundLocalError: local variable 'output_shape' referenced before assignment

Suggestion: XYZ Plot for img2img

Hello!

Would it be possible to add the XYZ functionality for img2img? It currently doesn't seem to be working, but returns:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui\stable-diffusion-webui\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
  File "C:\stable-diffusion-webui\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 177, in process
    lratios["ZYX"] = lzyx
NameError: name 'lzyx' is not defined

RuntimeError: mat1 and mat2 shapes cannot be multiplied

Hi, I was trying to play with the LoRA block weights but got this error; I'm not sure what the reason is.

base model: sd v1-5-pruned-emaonly.ckpt [cc6cb27103]
lora:lora:Moxin_10

Any help would be appreciated, thanks!

Loading weights [e1441589a6] from /root/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt
Loading VAE weights specified in settings: /root/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying xformers cross attention optimization.
Weights loaded in 4.8s (load weights from disk: 3.7s, apply weights to model: 0.4s, move model to device: 0.6s).
LoRA Block weight: Moxin_10: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Error completing request
Arguments: ('task(g3376f1vixk2y3g)', 'ultra high res, best quality, photo, 4k, (photorealistic:1.4), (8k, best quality, masterpiece:1.2), (realistic, photo-realistic:1.37), ultra-detailed, 1 girl, cute, solo, (nose blush),(smile:1.15),(closed mouth), beautiful detailed eyes, (long hair:1.2), (elegant pose), (Smile), solo, plaid, plaid_skirt, skirt, <lora:Moxin_10:1:NONE>\n', 'nsfw, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (outdoor:1.6), manboobs, (backlight:1.2), double navel, mutad arms, hused arms, neck lace, analog, analog effects, letters, less fingers, extra fingers, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (outdoor:1.6), manboobs, backlight,(ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, (bad anatomy:1.21), (bad proportions:1.331), extra limbs, (disfigured:1.331), (more than 2 nipples:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), bad hands, missing fingers, extra digit, (futa:1.1), bad body, NG_DeepNegative_V1_75T,pubic hair, glans', [], 20, 0, False, False, 1, 1, 7, 1204502509.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <scripts.external_code.ControlNetUnit object at 0x7f4ed0b98250>, <scripts.external_code.ControlNetUnit object at 0x7f4ed0b984f0>, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 'black', '20', False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, 50) {}
Traceback (most recent call last):
  File "/root/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/root/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/root/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/root/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/root/stable-diffusion-webui/modules/processing.py", line 625, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "/root/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/root/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/root/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "/root/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "/root/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 389, in forward
    hidden_states = self.mlp(hidden_states)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 344, in forward
    hidden_states = self.fc1(hidden_states)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 197, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "/root/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 191, in lora_forward
    res = res + module.up(module.down(input)) * lora.multiplier * (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 513, in forward
    return self.func(x)
  File "/root/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 545, in inference
    return self.up_model(self.down_model(x))
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 197, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x128)

'int' object has no attribute 'startswith': what's the problem?

LoRA Block weight :meruTheSuccubus_v116000: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
LoRA Block weight :koreanDollLikeness_v15: [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8, 1.0, 1.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]
Error running process: D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\scripts.py", line 409, in process
script.process(p, *script_args)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 217, in process
loradealer(p,lratios)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 413, in loradealer
if len(lorars) > 0: load_loras_blocks(lorans,lorars,multiple)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 783, in load_loras_blocks
locallora = load_lora(name, lora_on_disk.filename,lwei[i])
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 611, in load_lora
sd = sd_models.read_state_dict(filename)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 248, in read_state_dict
sd = get_state_dict_from_checkpoint(pl_sd)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 202, in get_state_dict_from_checkpoint
new_key = transform_checkpoint_dict_key(k)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 190, in transform_checkpoint_dict_key
if k.startswith(text):
AttributeError: 'int' object has no attribute 'startswith'

activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x000002B0FEDA2E30>, <modules.extra_networks.ExtraNetworkParams object at 0x000002B0FEDA2F80>]: AttributeError
Traceback (most recent call last):
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\extra_networks.py", line 75, in activate
extra_network.activate(p, extra_network_args)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions-builtin\Lora\extra_networks_lora.py", line 23, in activate
lora.load_loras(names, multipliers)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions-builtin\Lora\lora.py", line 170, in load_loras
lora = load_lora(name, lora_on_disk.filename)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\a1111-sd-webui-locon\scripts\main.py", line 273, in load_lora
sd = sd_models.read_state_dict(filename)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 248, in read_state_dict
sd = get_state_dict_from_checkpoint(pl_sd)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 202, in get_state_dict_from_checkpoint
new_key = transform_checkpoint_dict_key(k)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 190, in transform_checkpoint_dict_key
if k.startswith(text):
AttributeError: 'int' object has no attribute 'startswith'

Memory Error when running effective Block Analyzer

I wanted to try the Effective Block Analyzer function with no luck.
As soon as I press "Generate" the python process eats all my 32GB of RAM. It stays there a bit and errors out with a Memory Error.
I don't know if this is normal behavior for this function, so I felt the need to share.

I am running an experimental installation of torch 2.0.0+cu118, but everything else works as expected.

Console output about the error:

Traceback (most recent call last):
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\modules\txt2img.py", line 53, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 595, in newrun processed = script.run(p, *script_args)
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 312, in run
zmen = ",".join([str(random.randrange(4294967294)) for x in range(int(ecount))])
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 312, in
zmen = ",".join([str(random.randrange(4294967294)) for x in range(int(ecount))])
MemoryError

Enhancement: ZYX Tag

It would be really nice to have a ZYX tag = 1 - XYZ
basically
XYZ = 1,0.75,0.5,0.25...
ZYX = 0,0.25,0.5,0.75...
Then, when running grids, you could "fill in" blocks of LoRA1 with blocks of LoRA2.
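In other words, each ZYX value would simply be the complement of the corresponding XYZ value; a one-line sketch:

xyz = [1, 0.75, 0.5, 0.25, 0]
zyx = [1 - v for v in xyz]   # -> [0, 0.25, 0.5, 0.75, 1]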

I am using this extension to search for LoRA merge settings for Supermerger.

This extension has a conflict with "sd-webui-locon"

Hello, I think this extension conflicts with "sd-webui-locon":
https://github.com/KohakuBlueleaf/a1111-sd-webui-locon

If you have both installed and reload your UI, your LoRA cannot be used/loaded;
the cmd will throw:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm)

Unchecking the extensions won't fix the issue until you relaunch your WebUI.

When generating with a batch count of 2 or more, the results gradually degrade

This may just be my environment, but when I specify block weights and set the batch count to 2 or more, the generated results become increasingly garbled as the batch progresses.
The first image is generated normally, but each subsequent image gets progressively worse.

No problems occurred when block weights were not applied, so I thought this might be a bug and am reporting it.

Loras not working without presets.

It appears LoRA weights are not applied properly without using block weights.
For example, <lora:artist:1> is not working as intended, but <lora:artist:1:ALL> works just fine.

ComfyUI requires lora-block-weight

Hello, lora-block-weight is a good extension. Recently, due to work reasons, we have to move our workflow from AUTOMATIC1111 to ComfyUI, but lora-block-weight is essential. If the author or another developer has time, please create a lora-block-weight node for ComfyUI.

Thank you, and I wish you a nice day!

Tags are not in list

When trying an XYZ plot with the example values from the fun usage section, I get this error:

Traceback (most recent call last):
  File "D:\GitProjects\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\GitProjects\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\GitProjects\stable-diffusion-webui\modules\txt2img.py", line 53, in txt2img
    processed = modules.scripts.scripts_txt2img.run(p, *args)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 815, in newrun
    processed = script.run(p, *script_args)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 314, in run
    if "values" in xtype:c_base = weightsdealer(x,y,base)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 277, in weightsdealer
    flagger[blockid.index(id)] =changer
ValueError: 'ALL' is not in list

I am putting NOT,ALL,INS,IND,INALL,MIDD,OUTD,OUTS,OUTALL in the box to the right of the "Active" checkbox and in "Y values".

And putting these:

NOT:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
ALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
INS:1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0
IND:1,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0
INALL:1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0
MIDD:1,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0
OUTD:1,0,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0,0
OUTS:1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1
OUTALL:1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1

into the Weights setting and clicking "Save presets".

And if I click "Reload Tags", it erases everything in the box to the right of the "Active" checkbox.

If you need any other info, feel free to ask. Ty.

README error? in the number of weight inputs for LyCORIS

In README line 37,
<lora:"lora name":1:0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0>. (LyCORIS, etc.)
you have entered 30 weights, but it should be 26 (BASE, IN00-IN11, M00, OUT00-OUT11).
The same error exists in the Japanese version.

TypeError: gradio.components.Textbox.__init__() got multiple values for keyword argument 'lines'

Error calling: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py/ui
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui/modules/scripts.py", line 262, in wrap_call
res = func(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 135, in ui
bw_ratiotags= gr.TextArea(label="",lines=2,value=ratiostags,visible =True,interactive =True,elem_id="lbw_ratios")
File "/root/miniconda3/envs/xl_env/lib/python3.10/site-packages/gradio/templates.py", line 23, in init
super().init(lines=7, **kwargs)
TypeError: gradio.components.Textbox.init() got multiple values for keyword argument 'lines'

Error calling: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py/ui
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui/modules/scripts.py", line 262, in wrap_call
res = func(*args, **kwargs)
File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 135, in ui
bw_ratiotags= gr.TextArea(label="",lines=2,value=ratiostags,visible =True,interactive =True,elem_id="lbw_ratios")
File "/root/miniconda3/envs/xl_env/lib/python3.10/site-packages/gradio/templates.py", line 23, in init
super().init(lines=7, **kwargs)
TypeError: gradio.components.Textbox.init() got multiple values for keyword argument 'lines'

setting weights does not appear to work with API

Despite the prompt sent in the POST request to the API being identical to the one used in the WebUI, the latter writes a log line in the cmd window saying "LoRA Block weight :(lora name): [(weights here)]" while the former does not; more notably, the end results of the two generations are very different despite all other parameters being identical.
I would assume this is caused by the weight list (i.e. the [1,1,1,1,1,0,0,0] or whatever) not being parsed properly when the prompt is loaded through the API, so it only takes effect when the prompt is loaded from the web UI.
Is there any way to work around this? If not, do you plan on making this possible?

Misspelling a LoRA name or specifying a LoRA that doesn't exist causes an error and disables all LoRAs

As the title says, if the prompt references a LoRA I don't have, the error below appears; the image is still generated, but all of the LoRAs in use are disabled.

Couldn't find Lora with name XXX(name of the LoRA I don't have)XXX
Error running process_batch: D:\data\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "D:\data\stable-diffusion-webui\modules\scripts.py", line 435, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "D:\data\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 226, in process_batch
loradealer(self.newprompts ,self.lratios)
File "D:\data\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 425, in loradealer
if len(lorars) > 0: load_loras_blocks(lorans,lorars,multipliers)
File "D:\data\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 450, in load_loras_blocks
locallora = lora.load_lora(name, lora_on_disk.filename)
AttributeError: 'NoneType' object has no attribute 'filename'

17 lora block explanation

Sorry in advance: I see no Discussions tab here, so I'm posting this as an issue.

I still can't understand what these weights mean.
So a LoRA has 17 blocks in it. Okay?
These 17 blocks, what exactly do they specify?
Is it the image divided into 17 areas based on its dimensions? E.g. would 512x512 be divided into 1 block for the whole 512x512 and 16 blocks as tiles?
(sketch attached)
Or maybe like this:
(sketch attached)
Or are they not based on dimensions at all, but rather on things like RGB, hue, value, saturation, etc.?

Seeing the examples with so many IN/MID/OUT blocks only confuses me even more.

Also, what happens if I turn off / uncheck the block weight option but leave the block weights filled in, then generate an image?
(screenshots attached)

If all of my hypotheses/inferences are wrong,
what do these mean: ALL, NOT, BASE, and the IN/MID/OUT blocks?

Doesn't work with the latest locon update

New feature added to kohya (LoCon) breaks it

  File "F:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 621, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "F:\stable-diffusion-webui\modules\processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "F:\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 178, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 271, in lora_forward
    scale = lora_m.multiplier * (module.alpha / module.dim if module.alpha else 1.0)
AttributeError: 'LoraUpDownModule' object has no attribute 'dim'
