qdiffusion's Issues

Generation start errors in Google Colab

Hello,
I noticed you pushed an update; everything worked fine in Colab half a day ago, and now generation crashes with an error.
Do you have any suggestions on how this can be fixed?
Thanks for your work.

SERVER Traceback (most recent call last):
  File "/content/sd-inference-server/server.py", line 108, in run
    self.wrapper.img2img()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/sd-inference-server/wrapper.py", line 752, in img2img
    metadata = self.get_metadata("img2img",  width, height, batch_size, self.prompt, seeds, subseeds)
  File "/content/sd-inference-server/wrapper.py", line 466, in get_metadata
    m["strength"] = format_float(self.strength)
  File "/content/sd-inference-server/wrapper.py", line 92, in format_float
    return f"{x:.4f}".rstrip('0').rstrip('.')
ValueError: Unknown format code 'f' for object of type 'str'
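
The traceback shows format_float receiving a string (for example "0.75") rather than a number, which breaks the 'f' format code. A minimal defensive sketch of the function, assuming coercing the value is acceptable; this is an assumed workaround, not necessarily the maintainer's fix:

def format_float(x):
    # x can arrive as a str according to the traceback above; coerce it
    # before formatting so the 'f' format code gets a real float.
    x = float(x)
    return f"{x:.4f}".rstrip('0').rstrip('.')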

Stuck Encoding

I am using Arch and an AMD GPU, and generation seems to be stuck at the encoding step.
(screenshot attached)

Need help with wildcards.

What do you need to put in the wildcards folder to make wildcards work, and how do you use them in your prompts? Also, please update the guide to mention wildcards.

Generating multiple images error

After the last update, if Batch size is set to more than 1, the program crashes with this log:

GUI 2023-06-25 18:10:20.100167
Traceback (most recent call last):
File "F:\qDiffusion-master\source\tabs\basic\basic.py", line 348, in result
metadata = metadata[i] if metadata else None
KeyError: 1
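
The KeyError suggests the metadata returned for the batch has fewer entries than the batch size, so index 1 is missing when more than one image is generated. A hedged sketch of a tolerant lookup; the helper name and surrounding logic are assumptions, not the project's actual code:

def metadata_for_index(metadata, i):
    # Tolerant lookup: return None instead of raising when the i-th
    # metadata entry is missing. The real fix may instead be to return
    # one metadata entry per batch image from the server.
    if isinstance(metadata, dict):
        return metadata.get(i)
    if isinstance(metadata, (list, tuple)) and i < len(metadata):
        return metadata[i]
    return None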

Feature Request/Question: Regional Prompter?

Hey, I have a question, and a feature request in case what I suspect is missing isn't actually implemented yet.
Is Regional Prompter already a feature in this tool? If yes, where can I find it/enable it?

If it's not a feature, could it be added? It's a very good extension for Automatic1111, and with my low-end hardware it's hard to get by without paying for Google Colab Pro (or restarting after every second prompt, which takes minutes), which is rather expensive each month.

_ in the models path

Describe the bug
Having _ in the model path prevents the GUI from seeing the models.

Screenshots

(screenshots attached)

System:

  • Mode: [Local]
  • OS: [Windows]
  • GPU: [Nvidia]

ControlNet preprocessor?

I can't find the appropriate setting. Has it been implemented?

Also, when you select Control, a gear icon appears on the tile, but it does not work. Is this a stub for the future?

bug: Subprompts don't work with upscale

Using subprompts raises an error on upscale. Without upscale, subprompts work normally.

Traceback from a 540x540 generation with upscale factor 2.25 and subprompts:

INFERENCE 2023-07-23 06:36:56.336172
Traceback (most recent call last):
  File "F:\NeuralNetworks\Apps\qDiffusion\source\local.py", line 77, in run
    self.wrapper.txt2img()
  File "F:\NeuralNetworks\Apps\qDiffusion\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\NeuralNetworks\Apps\qDiffusion\source\sd-inference-server\wrapper.py", line 720, in txt2img
    latents = inference.img2img(latents, denoiser, sampler, noise, self.hr_steps, True, self.hr_strength, self.on_step)
  File "F:\NeuralNetworks\Apps\qDiffusion\source\sd-inference-server\inference.py", line 34, in img2img
    latents = sampler.step(latents, schedule, i, noise)
  File "F:\NeuralNetworks\Apps\qDiffusion\source\sd-inference-server\samplers_k.py", line 154, in step
    denoised = self.predict(x, sigmas[i])
  File "F:\NeuralNetworks\Apps\qDiffusion\source\sd-inference-server\samplers_k.py", line 57, in predict
    original = self.model.predict_original(latents, timestep, sigma)
  File "F:\NeuralNetworks\Apps\qDiffusion\source\sd-inference-server\guidance.py", line 143, in predict_original
    composed_pred = self.compose_predictions(original_pred)
  File "F:\NeuralNetworks\Apps\qDiffusion\source\sd-inference-server\guidance.py", line 83, in compose_predictions
    pos = pos * masks + (neg * (1 - masks))
RuntimeError: The size of tensor a (151) must match the size of tensor b (67) at non-singleton dimension 3
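
The mismatch (151 vs 67 in the last dimension) suggests the subprompt masks were prepared for the base resolution and never resized for the hires pass. A minimal sketch of the kind of fix that would make the composition broadcast, assuming the masks are (B, 1, H, W) tensors that should match the latent size; the names are illustrative, not the project's API:

import torch
import torch.nn.functional as F

def resize_masks_to_latents(masks: torch.Tensor, latents: torch.Tensor) -> torch.Tensor:
    # Interpolate subprompt masks to the spatial size of the current latents
    # so that `pos * masks + neg * (1 - masks)` broadcasts at every pass.
    if masks.shape[-2:] != latents.shape[-2:]:
        masks = F.interpolate(masks, size=latents.shape[-2:], mode="nearest")
    return masks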

Bug: [Remote] VAEs don't load

This time only one of my VAEs works; for the rest of them it throws the following message:

File "/content/sd-inference-server/server.py", line 112, in run
self.wrapper.txt2img()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/sd-inference-server/wrapper.py", line 611, in txt2img
self.load_models(*initial_networks)
File "/content/sd-inference-server/wrapper.py", line 274, in load_models
self.vae = self.storage.get_vae(self.vae_name, self.device)
File "/content/sd-inference-server/storage.py", line 313, in get_vae
return self.get_component(name, "VAE", device)
File "/content/sd-inference-server/storage.py", line 269, in get_component
self.file_cache[file] = self.load_file(file, comp)
File "/content/sd-inference-server/storage.py", line 378, in load_file
state_dict, metadata = convert.convert(file)
File "/content/sd-inference-server/convert.py", line 390, in convert
return convert_checkpoint(model_path)
File "/content/sd-inference-server/convert.py", line 276, in convert_checkpoint
state_dict = torch.load(in_file, map_location="cpu")
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1172, in _load
result = unpickler.load()
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1165, in find_class
return super().find_class(mod_name, name)
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/pytorch_lightning/init.py", line 34, in
from pytorch_lightning.callbacks import Callback # noqa: E402
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/init.py", line 25, in
from pytorch_lightning.callbacks.progress import ProgressBarBase, RichProgressBar, TQDMProgressBar
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/init.py", line 22, in
from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar # noqa: F401
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/rich_progress.py", line 20, in
from torchmetrics.utilities.imports import _compare_version
ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/usr/local/lib/python3.10/dist-packages/torchmetrics/utilities/imports.py)
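
This ImportError usually comes from a version mismatch: newer torchmetrics releases dropped `_compare_version`, which the bundled pytorch_lightning still imports. A hedged workaround sketch for the Colab cell, assuming (not confirmed by the maintainer) that pinning torchmetrics below 1.0 restores the missing symbol:

import subprocess, sys

# Pin torchmetrics to a pre-1.0 release that still exposes _compare_version.
subprocess.check_call([sys.executable, "-m", "pip", "install", "torchmetrics<1.0"])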

Client crashes when attempting to draw mask

Whenever I create a mask, and try to edit it by clicking on it twice, the GUI crashes. For a split second, I do see the first image in the img2img window, as if it was ready to be inpainted, but it crashes before I can do anything. The error that it keeps spitting out into crash.log is as follows:

GUI 2023-06-22 11:26:44.956255
Traceback (most recent call last):
File "C:\Users\ryanh\Downloads\qDiffusion-master\source\canvas\renderer.py", line 201, in render
gl.glGetError()
File "C:\Users\ryanh\Downloads\qDiffusion-master\venv\lib\site-packages\OpenGL\platform\baseplatform.py", line 415, in call
return self( *args, **named )
File "C:\Users\ryanh\Downloads\qDiffusion-master\venv\lib\site-packages\OpenGL\error.py", line 230, in glCheckError
raise self._errorClass(
OpenGL.error.GLError: GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glGetError,
cArguments = (),
result = 1282
)

How do I fix this? My computer is a 2011 HP ProBook 6560b, with an Intel Core i5-2420m, and a crappy Intel HD Graphics 3000.
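
Intel HD Graphics 3000 only exposes a very old OpenGL level, which is a plausible cause of the invalid-operation error above. A hedged sketch of forcing Qt onto its software OpenGL rasterizer before the application starts; whether qDiffusion's canvas works under software rendering is an assumption I have not verified:

import os
# Must be set before any Qt OpenGL context is created.
os.environ.setdefault("QT_OPENGL", "software")

from PyQt5.QtCore import Qt, QCoreApplication
# Must be called before the QApplication is constructed.
QCoreApplication.setAttribute(Qt.AA_UseSoftwareOpenGL)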

Bug/Error: Memory allocation while merging models/inserting LoRAs

After several updates, merging models has become impossible due to excessive memory usage. My 32 GB of RAM were eaten in the first minute; after that it swaps to the SSD for 2-5 minutes and then raises this error. Restarting didn't help.

(error screenshot attached)

I tried merging 2 models and inserting 4 LoRAs in a single merge batch.
Generation parameters were standard; nothing was changed after launch.

Where to Upload LyCORIS (LoCon/LoHA)

Hi,

There is a folder where I can upload LoRA files, but I cannot find a folder for LyCORIS (LoCon/LoHA). Should I upload these to the LoRA folder, and will they work?

(screenshot attached)

Clear my Local Disk after closing program

I want to clear my local disk (C:) after running qDiffusion locally (it is installed on another local disk, F:). I noticed that after closing the program there is 1.7-2.5 GB less free space on the disk. Where does qDiffusion save its files?

Difference in generation result in qDiffusion and Automatic

This is not a suggestion or a bug (unless you think otherwise), it's just curiosity because I haven't been able to figure out why this happens.

When generating with the same model, the same embeddings, LoRAs, and all other parameters (literally all of them), the images produced in your GUI and in Automatic are different. Sometimes, as with the v-pred model EasyFluff PreRelease v2, they are dramatically different.

If you have the opportunity and time, could you explain what this might be related to? This is pure curiosity, since it doesn’t really interfere with using your GUI, which in my eyes is much more convenient, I just want to know the answer to the question that’s stuck in my head.

VAE

Is it possible to import a VAE into qDiffusion in remote mode? If so, how do I do this?

External VAEs don't load on remote mode

Trying to use a different VAE with any model in remote mode (using Colab) throws the following error:

File "/content/sd-inference-server/server.py", line 110, in run
self.wrapper.txt2img()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/sd-inference-server/wrapper.py", line 563, in txt2img
self.load_models()
File "/content/sd-inference-server/wrapper.py", line 242, in load_models
self.vae = self.storage.get_vae(self.vae_name, self.device)
File "/content/sd-inference-server/storage.py", line 252, in get_vae
return self.get_component(name, "VAE", device)
File "/content/sd-inference-server/storage.py", line 221, in get_component
self.file_cache[file] = self.load_file(file, comp)
File "/content/sd-inference-server/storage.py", line 295, in load_file
state_dict = convert.convert_checkpoint(file)
File "/content/sd-inference-server/convert.py", line 195, in convert_checkpoint
state_dict = torch.load(in_file, map_location="cpu")
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1172, in _load
result = unpickler.load()
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1165, in find_class
return super().find_class(mod_name, name)
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/pytorch_lightning/init.py", line 34, in
from pytorch_lightning.callbacks import Callback # noqa: E402
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/init.py", line 14, in
from pytorch_lightning.callbacks.callback import Callback
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/callback.py", line 25, in
from pytorch_lightning.utilities.types import STEP_OUTPUT
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/pytorch_lightning/utilities/types.py", line 28, in
from torchmetrics import Metric
ModuleNotFoundError: No module named 'torchmetrics'

SwinIR upscaler

Describe the bug
I am trying to install the SwinIR upscaler and get an error. I also can't use BSRGAN and others, but UniScale works. What am I doing wrong?

Traceback
Traceback (most recent call last):
File "F:\qDiffusion-master\source\local.py", line 78, in run
self.wrapper.txt2img()
File "F:\qDiffusion-master\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "F:\qDiffusion-master\source\sd-inference-server\wrapper.py", line 635, in txt2img
self.load_models(*initial_networks)
File "F:\qDiffusion-master\source\sd-inference-server\wrapper.py", line 301, in load_models
self.upscale_model = self.storage.get_upscaler(self.hr_upscaler, self.device)
File "F:\qDiffusion-master\source\sd-inference-server\storage.py", line 326, in get_upscaler
return self.get_component(name, "SR", device)
File "F:\qDiffusion-master\source\sd-inference-server\storage.py", line 282, in get_component
model = self.classes[comp].from_model(name, self.file_cache[file][comp], dtype)
File "F:\qDiffusion-master\source\sd-inference-server\upscalers.py", line 75, in from_model
model.load_state_dict(state_dict)
File "F:\qDiffusion-master\venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SR:
Missing key(s) in state_dict: "body.0.rdb1.conv1.weight", "body.0.rdb1.conv1.bias", "body.0.rdb1.conv2.weight", "body.0.rdb1.conv2.bias", "body.0.rdb1.conv3.weight", "body.0.rdb1.conv3.bias", "body.0.rdb1.conv4.weight", "body.0.rdb1.conv4.bias", "body.0.rdb1.conv5.weight", "body.0.rdb1.conv5.bias", "body.0.rdb2.conv1.weight", "body.0.rdb2.conv1.bias", "body.0.rdb2.conv2.weight", "body.0.rdb2.conv2.bias", "body.0.rdb2.conv3.weight", "body.0.rdb2.conv3.bias", "body.0.rdb2.conv4.weight", "body.0.rdb2.conv4.bias", "body.0.rdb2.conv5.weight", "body.0.rdb2.conv5.bias", "body.0.rdb3.conv1.weight", "body.0.rdb3.conv1.bias", "body.0.rdb3.conv2.weight", "body.0.rdb3.conv2.bias", "body.0.rdb3.conv3.weight", "body.0.rdb3.conv3.bias", "body.0.rdb3.conv4.weight", "body.0.rdb3.conv4.bias", "body.0.rdb3.conv5.weight", "body.0.rdb3.conv5.bias", "conv_body.weight", "conv_body.bias", "conv_up1.weight", "conv_up1.bias", "conv_up2.weight", "conv_up2.bias", "conv_hr.weight", "conv_hr.bias".
Unexpected key(s) in state_dict: "conv_after_body.weight", "conv_after_body.bias", "conv_before_upsample.0.weight", "conv_before_upsample.0.bias", "upsample.0.weight", "upsample.0.bias", "upsample.2.weight", "upsample.2.bias", "patch_embed.norm.weight", "patch_embed.norm.bias", "layers.0.residual_group.blocks.0.norm1.weight", "layers.0.residual_group.blocks.0.norm1.bias", "layers.0.residual_group.blocks.0.attn.relative_position_bias_table", "layers.0.residual_group.blocks.0.attn.relative_position_index", "layers.0.residual_group.blocks.0.attn.qkv.weight", "layers.0.residual_group.blocks.0.attn.qkv.bias", "layers.0.residual_group.blocks.0.attn.proj.weight", "layers.0.residual_group.blocks.0.attn.proj.bias", "layers.0.residual_group.blocks.0.norm2.weight", "layers.0.residual_group.blocks.0.norm2.bias", "layers.0.residual_group.blocks.0.mlp.fc1.weight", "layers.0.residual_group.blocks.0.mlp.fc1.bias", "layers.0.residual_group.blocks.0.mlp.fc2.weight", "layers.0.residual_group.blocks.0.mlp.fc2.bias", "layers.0.residual_group.blocks.1.attn_mask", "layers.0.residual_group.blocks.1.norm1.weight", "layers.0.residual_group.blocks.1.norm1.bias", "layers.0.residual_group.blocks.1.attn.relative_position_bias_table", "layers.0.residual_group.blocks.1.attn.relative_position_index", "layers.0.residual_group.blocks.1.attn.qkv.weight", "layers.0.residual_group.blocks.1.attn.qkv.bias", "layers.0.residual_group.blocks.1.attn.proj.weight", "layers.0.residual_group.blocks.1.attn.proj.bias", "layers.0.residual_group.blocks.1.norm2.weight", "layers.0.residual_group.blocks.1.norm2.bias", "layers.0.residual_group.blocks.1.mlp.fc1.weight", "layers.0.residual_group.blocks.1.mlp.fc1.bias", "layers.0.residual_group.blocks.1.mlp.fc2.weight", "layers.0.residual_group.blocks.1.mlp.fc2.bias", "layers.0.residual_group.blocks.2.norm1.weight", "layers.0.residual_group.blocks.2.norm1.bias", "layers.0.residual_group.blocks.2.attn.relative_position_bias_table", "layers.0.residual_group.blocks.2.attn.relative_position_index", "layers.0.residual_group.blocks.2.attn.qkv.weight", "layers.0.residual_group.blocks.2.attn.qkv.bias", "layers.0.residual_group.blocks.2.attn.proj.weight", "layers.0.residual_group.blocks.2.attn.proj.bias", "layers.0.residual_group.blocks.2.norm2.weight", "layers.0.residual_group.blocks.2.norm2.bias", "layers.0.residual_group.blocks.2.mlp.fc1.weight", "layers.0.residual_group.blocks.2.mlp.fc1.bias", "layers.0.residual_group.blocks.2.mlp.fc2.weight", "layers.0.residual_group.blocks.2.mlp.fc2.bias", "layers.0.residual_group.blocks.3.attn_mask", "layers.0.residual_group.blocks.3.norm1.weight", "layers.0.residual_group.blocks.3.norm1.bias", "layers.0.residual_group.blocks.3.attn.relative_position_bias_table", "layers.0.residual_group.blocks.3.attn.relative_position_index", "layers.0.residual_group.blocks.3.attn.qkv.weight", "layers.0.residual_group.blocks.3.attn.qkv.bias", "layers.0.residual_group.blocks.3.attn.proj.weight", "layers.0.residual_group.blocks.3.attn.proj.bias", "layers.0.residual_group.blocks.3.norm2.weight", "layers.0.residual_group.blocks.3.norm2.bias", "layers.0.residual_group.blocks.3.mlp.fc1.weight", "layers.0.residual_group.blocks.3.mlp.fc1.bias", "layers.0.residual_group.blocks.3.mlp.fc2.weight", "layers.0.residual_group.blocks.3.mlp.fc2.bias", "layers.0.residual_group.blocks.4.norm1.weight", "layers.0.residual_group.blocks.4.norm1.bias", "layers.0.residual_group.blocks.4.attn.relative_position_bias_table", "layers.0.residual_group.blocks.4.attn.relative_position_index", 
"layers.0.residual_group.blocks.4.attn.qkv.weight", "layers.0.residual_group.blocks.4.attn.qkv.bias", "layers.0.residual_group.blocks.4.attn.proj.weight", "layers.0.residual_group.blocks.4.attn.proj.bias", "layers.0.residual_group.blocks.4.norm2.weight", "layers.0.residual_group.blocks.4.norm2.bias", "layers.0.residual_group.blocks.4.mlp.fc1.weight", "layers.0.residual_group.blocks.4.mlp.fc1.bias", "layers.0.residual_group.blocks.4.mlp.fc2.weight", "layers.0.residual_group.blocks.4.mlp.fc2.bias", "layers.0.residual_group.blocks.5.attn_mask", "layers.0.residual_group.blocks.5.norm1.weight", "layers.0.residual_group.blocks.5.norm1.bias", "layers.0.residual_group.blocks.5.attn.relative_position_bias_table", "layers.0.residual_group.blocks.5.attn.relative_position_index", "layers.0.residual_group.blocks.5.attn.qkv.weight", "layers.0.residual_group.blocks.5.attn.qkv.bias", "layers.0.residual_group.blocks.5.attn.proj.weight", "layers.0.residual_group.blocks.5.attn.proj.bias", "layers.0.residual_group.blocks.5.norm2.weight", "layers.0.residual_group.blocks.5.norm2.bias", "layers.0.residual_group.blocks.5.mlp.fc1.weight", "layers.0.residual_group.blocks.5.mlp.fc1.bias", "layers.0.residual_group.blocks.5.mlp.fc2.weight", "layers.0.residual_group.blocks.5.mlp.fc2.bias", "layers.0.conv.weight", "layers.0.conv.bias", "layers.1.residual_group.blocks.0.norm1.weight", "layers.1.residual_group.blocks.0.norm1.bias", "layers.1.residual_group.blocks.0.attn.relative_position_bias_table", "layers.1.residual_group.blocks.0.attn.relative_position_index", "layers.1.residual_group.blocks.0.attn.qkv.weight", "layers.1.residual_group.blocks.0.attn.qkv.bias", "layers.1.residual_group.blocks.0.attn.proj.weight", "layers.1.residual_group.blocks.0.attn.proj.bias", "layers.1.residual_group.blocks.0.norm2.weight", "layers.1.residual_group.blocks.0.norm2.bias", "layers.1.residual_group.blocks.0.mlp.fc1.weight", "layers.1.residual_group.blocks.0.mlp.fc1.bias", "layers.1.residual_group.blocks.0.mlp.fc2.weight", "layers.1.residual_group.blocks.0.mlp.fc2.bias", "layers.1.residual_group.blocks.1.attn_mask", "layers.1.residual_group.blocks.1.norm1.weight", "layers.1.residual_group.blocks.1.norm1.bias", "layers.1.residual_group.blocks.1.attn.relative_position_bias_table", "layers.1.residual_group.blocks.1.attn.relative_position_index", "layers.1.residual_group.blocks.1.attn.qkv.weight", "layers.1.residual_group.blocks.1.attn.qkv.bias", "layers.1.residual_group.blocks.1.attn.proj.weight", "layers.1.residual_group.blocks.1.attn.proj.bias", "layers.1.residual_group.blocks.1.norm2.weight", "layers.1.residual_group.blocks.1.norm2.bias", "layers.1.residual_group.blocks.1.mlp.fc1.weight", "layers.1.residual_group.blocks.1.mlp.fc1.bias", "layers.1.residual_group.blocks.1.mlp.fc2.weight", "layers.1.residual_group.blocks.1.mlp.fc2.bias", "layers.1.residual_group.blocks.2.norm1.weight", "layers.1.residual_group.blocks.2.norm1.bias", "layers.1.residual_group.blocks.2.attn.relative_position_bias_table", "layers.1.residual_group.blocks.2.attn.relative_position_index", "layers.1.residual_group.blocks.2.attn.qkv.weight", "layers.1.residual_group.blocks.2.attn.qkv.bias", "layers.1.residual_group.blocks.2.attn.proj.weight", "layers.1.residual_group.blocks.2.attn.proj.bias", "layers.1.residual_group.blocks.2.norm2.weight", "layers.1.residual_group.blocks.2.norm2.bias", "layers.1.residual_group.blocks.2.mlp.fc1.weight", "layers.1.residual_group.blocks.2.mlp.fc1.bias", "layers.1.residual_group.blocks.2.mlp.fc2.weight", 
"layers.1.residual_group.blocks.2.mlp.fc2.bias", "layers.1.residual_group.blocks.3.attn_mask", "layers.1.residual_group.blocks.3.norm1.weight", "layers.1.residual_group.blocks.3.norm1.bias", "layers.1.residual_group.blocks.3.attn.relative_position_bias_table", "layers.1.residual_group.blocks.3.attn.relative_position_index", "layers.1.residual_group.blocks.3.attn.qkv.weight", "layers.1.residual_group.blocks.3.attn.qkv.bias", "layers.1.residual_group.blocks.3.attn.proj.weight", "layers.1.residual_group.blocks.3.attn.proj.bias", "layers.1.residual_group.blocks.3.norm2.weight", "layers.1.residual_group.blocks.3.norm2.bias", "layers.1.residual_group.blocks.3.mlp.fc1.weight", "layers.1.residual_group.blocks.3.mlp.fc1.bias", "layers.1.residual_group.blocks.3.mlp.fc2.weight", "layers.1.residual_group.blocks.3.mlp.fc2.bias", "layers.1.residual_group.blocks.4.norm1.weight", "layers.1.residual_group.blocks.4.norm1.bias", "layers.1.residual_group.blocks.4.attn.relative_position_bias_table", "layers.1.residual_group.blocks.4.attn.relative_position_index", "layers.1.residual_group.blocks.4.attn.qkv.weight", "layers.1.residual_group.blocks.4.attn.qkv.bias", "layers.1.residual_group.blocks.4.attn.proj.weight", "layers.1.residual_group.blocks.4.attn.proj.bias", "layers.1.residual_group.blocks.4.norm2.weight", "layers.1.residual_group.blocks.4.norm2.bias", "layers.1.residual_group.blocks.4.mlp.fc1.weight", "layers.1.residual_group.blocks.4.mlp.fc1.bias", "layers.1.residual_group.blocks.4.mlp.fc2.weight", "layers.1.residual_group.blocks.4.mlp.fc2.bias", "layers.1.residual_group.blocks.5.attn_mask", "layers.1.residual_group.blocks.5.norm1.weight", "layers.1.residual_group.blocks.5.norm1.bias", "layers.1.residual_group.blocks.5.attn.relative_position_bias_table", "layers.1.residual_group.blocks.5.attn.relative_position_index", "layers.1.residual_group.blocks.5.attn.qkv.weight", "layers.1.residual_group.blocks.5.attn.qkv.bias", "layers.1.residual_group.blocks.5.attn.proj.weight", "layers.1.residual_group.blocks.5.attn.proj.bias", "layers.1.residual_group.blocks.5.norm2.weight", "layers.1.residual_group.blocks.5.norm2.bias", "layers.1.residual_group.blocks.5.mlp.fc1.weight", "layers.1.residual_group.blocks.5.mlp.fc1.bias", "layers.1.residual_group.blocks.5.mlp.fc2.weight", "layers.1.residual_group.blocks.5.mlp.fc2.bias", "layers.1.conv.weight", "layers.1.conv.bias", "layers.2.residual_group.blocks.0.norm1.weight", "layers.2.residual_group.blocks.0.norm1.bias", "layers.2.residual_group.blocks.0.attn.relative_position_bias_table", "layers.2.residual_group.blocks.0.attn.relative_position_index", "layers.2.residual_group.blocks.0.attn.qkv.weight", "layers.2.residual_group.blocks.0.attn.qkv.bias", "layers.2.residual_group.blocks.0.attn.proj.weight", "layers.2.residual_group.blocks.0.attn.proj.bias", "layers.2.residual_group.blocks.0.norm2.weight", "layers.2.residual_group.blocks.0.norm2.bias", "layers.2.residual_group.blocks.0.mlp.fc1.weight", "layers.2.residual_group.blocks.0.mlp.fc1.bias", "layers.2.residual_group.blocks.0.mlp.fc2.weight", "layers.2.residual_group.blocks.0.mlp.fc2.bias", "layers.2.residual_group.blocks.1.attn_mask", "layers.2.residual_group.blocks.1.norm1.weight", "layers.2.residual_group.blocks.1.norm1.bias", "layers.2.residual_group.blocks.1.attn.relative_position_bias_table", "layers.2.residual_group.blocks.1.attn.relative_position_index", "layers.2.residual_group.blocks.1.attn.qkv.weight", "layers.2.residual_group.blocks.1.attn.qkv.bias", "layers.2.residual_group.blocks.1.attn.proj.weight", 
"layers.2.residual_group.blocks.1.attn.proj.bias", "layers.2.residual_group.blocks.1.norm2.weight", "layers.2.residual_group.blocks.1.norm2.bias", "layers.2.residual_group.blocks.1.mlp.fc1.weight", "layers.2.residual_group.blocks.1.mlp.fc1.bias", "layers.2.residual_group.blocks.1.mlp.fc2.weight", "layers.2.residual_group.blocks.1.mlp.fc2.bias", "layers.2.residual_group.blocks.2.norm1.weight", "layers.2.residual_group.blocks.2.norm1.bias", "layers.2.residual_group.blocks.2.attn.relative_position_bias_table", "layers.2.residual_group.blocks.2.attn.relative_position_index", "layers.2.residual_group.blocks.2.attn.qkv.weight", "layers.2.residual_group.blocks.2.attn.qkv.bias", "layers.2.residual_group.blocks.2.attn.proj.weight", "layers.2.residual_group.blocks.2.attn.proj.bias", "layers.2.residual_group.blocks.2.norm2.weight", "layers.2.residual_group.blocks.2.norm2.bias", "layers.2.residual_group.blocks.2.mlp.fc1.weight", "layers.2.residual_group.blocks.2.mlp.fc1.bias", "layers.2.residual_group.blocks.2.mlp.fc2.weight", "layers.2.residual_group.blocks.2.mlp.fc2.bias", "layers.2.residual_group.blocks.3.attn_mask", "layers.2.residual_group.blocks.3.norm1.weight", "layers.2.residual_group.blocks.3.norm1.bias", "layers.2.residual_group.blocks.3.attn.relative_position_bias_table", "layers.2.residual_group.blocks.3.attn.relative_position_index", "layers.2.residual_group.blocks.3.attn.qkv.weight", "layers.2.residual_group.blocks.3.attn.qkv.bias", "layers.2.residual_group.blocks.3.attn.proj.weight", "layers.2.residual_group.blocks.3.attn.proj.bias", "layers.2.residual_group.blocks.3.norm2.weight", "layers.2.residual_group.blocks.3.norm2.bias", "layers.2.residual_group.blocks.3.mlp.fc1.weight", "layers.2.residual_group.blocks.3.mlp.fc1.bias", "layers.2.residual_group.blocks.3.mlp.fc2.weight", "layers.2.residual_group.blocks.3.mlp.fc2.bias", "layers.2.residual_group.blocks.4.norm1.weight", "layers.2.residual_group.blocks.4.norm1.bias", "layers.2.residual_group.blocks.4.attn.relative_position_bias_table", "layers.2.residual_group.blocks.4.attn.relative_position_index", "layers.2.residual_group.blocks.4.attn.qkv.weight", "layers.2.residual_group.blocks.4.attn.qkv.bias", "layers.2.residual_group.blocks.4.attn.proj.weight", "layers.2.residual_group.blocks.4.attn.proj.bias", "layers.2.residual_group.blocks.4.norm2.weight", "layers.2.residual_group.blocks.4.norm2.bias", "layers.2.residual_group.blocks.4.mlp.fc1.weight", "layers.2.residual_group.blocks.4.mlp.fc1.bias", "layers.2.residual_group.blocks.4.mlp.fc2.weight", "layers.2.residual_group.blocks.4.mlp.fc2.bias", "layers.2.residual_group.blocks.5.attn_mask", "layers.2.residual_group.blocks.5.norm1.weight", "layers.2.residual_group.blocks.5.norm1.bias", "layers.2.residual_group.blocks.5.attn.relative_position_bias_table", "layers.2.residual_group.blocks.5.attn.relative_position_index", "layers.2.residual_group.blocks.5.attn.qkv.weight", "layers.2.residual_group.blocks.5.attn.qkv.bias", "layers.2.residual_group.blocks.5.attn.proj.weight", "layers.2.residual_group.blocks.5.attn.proj.bias", "layers.2.residual_group.blocks.5.norm2.weight", "layers.2.residual_group.blocks.5.norm2.bias", "layers.2.residual_group.blocks.5.mlp.fc1.weight", "layers.2.residual_group.blocks.5.mlp.fc1.bias", "layers.2.residual_group.blocks.5.mlp.fc2.weight", "layers.2.residual_group.blocks.5.mlp.fc2.bias", "layers.2.conv.weight", "layers.2.conv.bias", "layers.3.residual_group.blocks.0.norm1.weight", "layers.3.residual_group.blocks.0.norm1.bias", 
"layers.3.residual_group.blocks.0.attn.relative_position_bias_table", "layers.3.residual_group.blocks.0.attn.relative_position_index", "layers.3.residual_group.blocks.0.attn.qkv.weight", "layers.3.residual_group.blocks.0.attn.qkv.bias", "layers.3.residual_group.blocks.0.attn.proj.weight", "layers.3.residual_group.blocks.0.attn.proj.bias", "layers.3.residual_group.blocks.0.norm2.weight", "layers.3.residual_group.blocks.0.norm2.bias", "layers.3.residual_group.blocks.0.mlp.fc1.weight", "layers.3.residual_group.blocks.0.mlp.fc1.bias", "layers.3.residual_group.blocks.0.mlp.fc2.weight", "layers.3.residual_group.blocks.0.mlp.fc2.bias", "layers.3.residual_group.blocks.1.attn_mask", "layers.3.residual_group.blocks.1.norm1.weight", "layers.3.residual_group.blocks.1.norm1.bias", "layers.3.residual_group.blocks.1.attn.relative_position_bias_table", "layers.3.residual_group.blocks.1.attn.relative_position_index", "layers.3.residual_group.blocks.1.attn.qkv.weight", "layers.3.residual_group.blocks.1.attn.qkv.bias", "layers.3.residual_group.blocks.1.attn.proj.weight", "layers.3.residual_group.blocks.1.attn.proj.bias", "layers.3.residual_group.blocks.1.norm2.weight", "layers.3.residual_group.blocks.1.norm2.bias", "layers.3.residual_group.blocks.1.mlp.fc1.weight", "layers.3.residual_group.blocks.1.mlp.fc1.bias", "layers.3.residual_group.blocks.1.mlp.fc2.weight", "layers.3.residual_group.blocks.1.mlp.fc2.bias", "layers.3.residual_group.blocks.2.norm1.weight", "layers.3.residual_group.blocks.2.norm1.bias", "layers.3.residual_group.blocks.2.attn.relative_position_bias_table", "layers.3.residual_group.blocks.2.attn.relative_position_index", "layers.3.residual_group.blocks.2.attn.qkv.weight", "layers.3.residual_group.blocks.2.attn.qkv.bias", "layers.3.residual_group.blocks.2.attn.proj.weight", "layers.3.residual_group.blocks.2.attn.proj.bias", "layers.3.residual_group.blocks.2.norm2.weight", "layers.3.residual_group.blocks.2.norm2.bias", "layers.3.residual_group.blocks.2.mlp.fc1.weight", "layers.3.residual_group.blocks.2.mlp.fc1.bias", "layers.3.residual_group.blocks.2.mlp.fc2.weight", "layers.3.residual_group.blocks.2.mlp.fc2.bias", "layers.3.residual_group.blocks.3.attn_mask", "layers.3.residual_group.blocks.3.norm1.weight", "layers.3.residual_group.blocks.3.norm1.bias", "layers.3.residual_group.blocks.3.attn.relative_position_bias_table", "layers.3.residual_group.blocks.3.attn.relative_position_index", "layers.3.residual_group.blocks.3.attn.qkv.weight", "layers.3.residual_group.blocks.3.attn.qkv.bias", "layers.3.residual_group.blocks.3.attn.proj.weight", "layers.3.residual_group.blocks.3.attn.proj.bias", "layers.3.residual_group.blocks.3.norm2.weight", "layers.3.residual_group.blocks.3.norm2.bias", "layers.3.residual_group.blocks.3.mlp.fc1.weight", "layers.3.residual_group.blocks.3.mlp.fc1.bias", "layers.3.residual_group.blocks.3.mlp.fc2.weight", "layers.3.residual_group.blocks.3.mlp.fc2.bias", "layers.3.residual_group.blocks.4.norm1.weight", "layers.3.residual_group.blocks.4.norm1.bias", "layers.3.residual_group.blocks.4.attn.relative_position_bias_table", "layers.3.residual_group.blocks.4.attn.relative_position_index", "layers.3.residual_group.blocks.4.attn.qkv.weight", "layers.3.residual_group.blocks.4.attn.qkv.bias", "layers.3.residual_group.blocks.4.attn.proj.weight", "layers.3.residual_group.blocks.4.attn.proj.bias", "layers.3.residual_group.blocks.4.norm2.weight", "layers.3.residual_group.blocks.4.norm2.bias", "layers.3.residual_group.blocks.4.mlp.fc1.weight", 
"layers.3.residual_group.blocks.4.mlp.fc1.bias", "layers.3.residual_group.blocks.4.mlp.fc2.weight", "layers.3.residual_group.blocks.4.mlp.fc2.bias", "layers.3.residual_group.blocks.5.attn_mask", "layers.3.residual_group.blocks.5.norm1.weight", "layers.3.residual_group.blocks.5.norm1.bias", "layers.3.residual_group.blocks.5.attn.relative_position_bias_table", "layers.3.residual_group.blocks.5.attn.relative_position_index", "layers.3.residual_group.blocks.5.attn.qkv.weight", "layers.3.residual_group.blocks.5.attn.qkv.bias", "layers.3.residual_group.blocks.5.attn.proj.weight", "layers.3.residual_group.blocks.5.attn.proj.bias", "layers.3.residual_group.blocks.5.norm2.weight", "layers.3.residual_group.blocks.5.norm2.bias", "layers.3.residual_group.blocks.5.mlp.fc1.weight", "layers.3.residual_group.blocks.5.mlp.fc1.bias", "layers.3.residual_group.blocks.5.mlp.fc2.weight", "layers.3.residual_group.blocks.5.mlp.fc2.bias", "layers.3.conv.weight", "layers.3.conv.bias", "layers.4.residual_group.blocks.0.norm1.weight", "layers.4.residual_group.blocks.0.norm1.bias", "layers.4.residual_group.blocks.0.attn.relative_position_bias_table", "layers.4.residual_group.blocks.0.attn.relative_position_index", "layers.4.residual_group.blocks.0.attn.qkv.weight", "layers.4.residual_group.blocks.0.attn.qkv.bias", "layers.4.residual_group.blocks.0.attn.proj.weight", "layers.4.residual_group.blocks.0.attn.proj.bias", "layers.4.residual_group.blocks.0.norm2.weight", "layers.4.residual_group.blocks.0.norm2.bias", "layers.4.residual_group.blocks.0.mlp.fc1.weight", "layers.4.residual_group.blocks.0.mlp.fc1.bias", "layers.4.residual_group.blocks.0.mlp.fc2.weight", "layers.4.residual_group.blocks.0.mlp.fc2.bias", "layers.4.residual_group.blocks.1.attn_mask", "layers.4.residual_group.blocks.1.norm1.weight", "layers.4.residual_group.blocks.1.norm1.bias", "layers.4.residual_group.blocks.1.attn.relative_position_bias_table", "layers.4.residual_group.blocks.1.attn.relative_position_index", "layers.4.residual_group.blocks.1.attn.qkv.weight", "layers.4.residual_group.blocks.1.attn.qkv.bias", "layers.4.residual_group.blocks.1.attn.proj.weight", "layers.4.residual_group.blocks.1.attn.proj.bias", "layers.4.residual_group.blocks.1.norm2.weight", "layers.4.residual_group.blocks.1.norm2.bias", "layers.4.residual_group.blocks.1.mlp.fc1.weight", "layers.4.residual_group.blocks.1.mlp.fc1.bias", "layers.4.residual_group.blocks.1.mlp.fc2.weight", "layers.4.residual_group.blocks.1.mlp.fc2.bias", "layers.4.residual_group.blocks.2.norm1.weight", "layers.4.residual_group.blocks.2.norm1.bias", "layers.4.residual_group.blocks.2.attn.relative_position_bias_table", "layers.4.residual_group.blocks.2.attn.relative_position_index", "layers.4.residual_group.blocks.2.attn.qkv.weight", "layers.4.residual_group.blocks.2.attn.qkv.bias", "layers.4.residual_group.blocks.2.attn.proj.weight", "layers.4.residual_group.blocks.2.attn.proj.bias", "layers.4.residual_group.blocks.2.norm2.weight", "layers.4.residual_group.blocks.2.norm2.bias", "layers.4.residual_group.blocks.2.mlp.fc1.weight", "layers.4.residual_group.blocks.2.mlp.fc1.bias", "layers.4.residual_group.blocks.2.mlp.fc2.weight", "layers.4.residual_group.blocks.2.mlp.fc2.bias", "layers.4.residual_group.blocks.3.attn_mask", "layers.4.residual_group.blocks.3.norm1.weight", "layers.4.residual_group.blocks.3.norm1.bias", "layers.4.residual_group.blocks.3.attn.relative_position_bias_table", "layers.4.residual_group.blocks.3.attn.relative_position_index", "layers.4.residual_group.blocks.3.attn.qkv.weight", 
"layers.4.residual_group.blocks.3.attn.qkv.bias", "layers.4.residual_group.blocks.3.attn.proj.weight", "layers.4.residual_group.blocks.3.attn.proj.bias", "layers.4.residual_group.blocks.3.norm2.weight", "layers.4.residual_group.blocks.3.norm2.bias", "layers.4.residual_group.blocks.3.mlp.fc1.weight", "layers.4.residual_group.blocks.3.mlp.fc1.bias", "layers.4.residual_group.blocks.3.mlp.fc2.weight", "layers.4.residual_group.blocks.3.mlp.fc2.bias", "layers.4.residual_group.blocks.4.norm1.weight", "layers.4.residual_group.blocks.4.norm1.bias", "layers.4.residual_group.blocks.4.attn.relative_position_bias_table", "layers.4.residual_group.blocks.4.attn.relative_position_index", "layers.4.residual_group.blocks.4.attn.qkv.weight", "layers.4.residual_group.blocks.4.attn.qkv.bias", "layers.4.residual_group.blocks.4.attn.proj.weight", "layers.4.residual_group.blocks.4.attn.proj.bias", "layers.4.residual_group.blocks.4.norm2.weight", "layers.4.residual_group.blocks.4.norm2.bias", "layers.4.residual_group.blocks.4.mlp.fc1.weight", "layers.4.residual_group.blocks.4.mlp.fc1.bias", "layers.4.residual_group.blocks.4.mlp.fc2.weight", "layers.4.residual_group.blocks.4.mlp.fc2.bias", "layers.4.residual_group.blocks.5.attn_mask", "layers.4.residual_group.blocks.5.norm1.weight", "layers.4.residual_group.blocks.5.norm1.bias", "layers.4.residual_group.blocks.5.attn.relative_position_bias_table", "layers.4.residual_group.blocks.5.attn.relative_position_index", "layers.4.residual_group.blocks.5.attn.qkv.weight", "layers.4.residual_group.blocks.5.attn.qkv.bias", "layers.4.residual_group.blocks.5.attn.proj.weight", "layers.4.residual_group.blocks.5.attn.proj.bias", "layers.4.residual_group.blocks.5.norm2.weight", "layers.4.residual_group.blocks.5.norm2.bias", "layers.4.residual_group.blocks.5.mlp.fc1.weight", "layers.4.residual_group.blocks.5.mlp.fc1.bias", "layers.4.residual_group.blocks.5.mlp.fc2.weight", "layers.4.residual_group.blocks.5.mlp.fc2.bias", "layers.4.conv.weight", "layers.4.conv.bias", "layers.5.residual_group.blocks.0.norm1.weight", "layers.5.residual_group.blocks.0.norm1.bias", "layers.5.residual_group.blocks.0.attn.relative_position_bias_table", "layers.5.residual_group.blocks.0.attn.relative_position_index", "layers.5.residual_group.blocks.0.attn.qkv.weight", "layers.5.residual_group.blocks.0.attn.qkv.bias", "layers.5.residual_group.blocks.0.attn.proj.weight", "layers.5.residual_group.blocks.0.attn.proj.bias", "layers.5.residual_group.blocks.0.norm2.weight", "layers.5.residual_group.blocks.0.norm2.bias", "layers.5.residual_group.blocks.0.mlp.fc1.weight", "layers.5.residual_group.blocks.0.mlp.fc1.bias", "layers.5.residual_group.blocks.0.mlp.fc2.weight", "layers.5.residual_group.blocks.0.mlp.fc2.bias", "layers.5.residual_group.blocks.1.attn_mask", "layers.5.residual_group.blocks.1.norm1.weight", "layers.5.residual_group.blocks.1.norm1.bias", "layers.5.residual_group.blocks.1.attn.relative_position_bias_table", "layers.5.residual_group.blocks.1.attn.relative_position_index", "layers.5.residual_group.blocks.1.attn.qkv.weight", "layers.5.residual_group.blocks.1.attn.qkv.bias", "layers.5.residual_group.blocks.1.attn.proj.weight", "layers.5.residual_group.blocks.1.attn.proj.bias", "layers.5.residual_group.blocks.1.norm2.weight", "layers.5.residual_group.blocks.1.norm2.bias", "layers.5.residual_group.blocks.1.mlp.fc1.weight", "layers.5.residual_group.blocks.1.mlp.fc1.bias", "layers.5.residual_group.blocks.1.mlp.fc2.weight", "layers.5.residual_group.blocks.1.mlp.fc2.bias", 
"layers.5.residual_group.blocks.2.norm1.weight", "layers.5.residual_group.blocks.2.norm1.bias", "layers.5.residual_group.blocks.2.attn.relative_position_bias_table", "layers.5.residual_group.blocks.2.attn.relative_position_index", "layers.5.residual_group.blocks.2.attn.qkv.weight", "layers.5.residual_group.blocks.2.attn.qkv.bias", "layers.5.residual_group.blocks.2.attn.proj.weight", "layers.5.residual_group.blocks.2.attn.proj.bias", "layers.5.residual_group.blocks.2.norm2.weight", "layers.5.residual_group.blocks.2.norm2.bias", "layers.5.residual_group.blocks.2.mlp.fc1.weight", "layers.5.residual_group.blocks.2.mlp.fc1.bias", "layers.5.residual_group.blocks.2.mlp.fc2.weight", "layers.5.residual_group.blocks.2.mlp.fc2.bias", "layers.5.residual_group.blocks.3.attn_mask", "layers.5.residual_group.blocks.3.norm1.weight", "layers.5.residual_group.blocks.3.norm1.bias", "layers.5.residual_group.blocks.3.attn.relative_position_bias_table", "layers.5.residual_group.blocks.3.attn.relative_position_index", "layers.5.residual_group.blocks.3.attn.qkv.weight", "layers.5.residual_group.blocks.3.attn.qkv.bias", "layers.5.residual_group.blocks.3.attn.proj.weight", "layers.5.residual_group.blocks.3.attn.proj.bias", "layers.5.residual_group.blocks.3.norm2.weight", "layers.5.residual_group.blocks.3.norm2.bias", "layers.5.residual_group.blocks.3.mlp.fc1.weight", "layers.5.residual_group.blocks.3.mlp.fc1.bias", "layers.5.residual_group.blocks.3.mlp.fc2.weight", "layers.5.residual_group.blocks.3.mlp.fc2.bias", "layers.5.residual_group.blocks.4.norm1.weight", "layers.5.residual_group.blocks.4.norm1.bias", "layers.5.residual_group.blocks.4.attn.relative_position_bias_table", "layers.5.residual_group.blocks.4.attn.relative_position_index", "layers.5.residual_group.blocks.4.attn.qkv.weight", "layers.5.residual_group.blocks.4.attn.qkv.bias", "layers.5.residual_group.blocks.4.attn.proj.weight", "layers.5.residual_group.blocks.4.attn.proj.bias", "layers.5.residual_group.blocks.4.norm2.weight", "layers.5.residual_group.blocks.4.norm2.bias", "layers.5.residual_group.blocks.4.mlp.fc1.weight", "layers.5.residual_group.blocks.4.mlp.fc1.bias", "layers.5.residual_group.blocks.4.mlp.fc2.weight", "layers.5.residual_group.blocks.4.mlp.fc2.bias", "layers.5.residual_group.blocks.5.attn_mask", "layers.5.residual_group.blocks.5.norm1.weight", "layers.5.residual_group.blocks.5.norm1.bias", "layers.5.residual_group.blocks.5.attn.relative_position_bias_table", "layers.5.residual_group.blocks.5.attn.relative_position_index", "layers.5.residual_group.blocks.5.attn.qkv.weight", "layers.5.residual_group.blocks.5.attn.qkv.bias", "layers.5.residual_group.blocks.5.attn.proj.weight", "layers.5.residual_group.blocks.5.attn.proj.bias", "layers.5.residual_group.blocks.5.norm2.weight", "layers.5.residual_group.blocks.5.norm2.bias", "layers.5.residual_group.blocks.5.mlp.fc1.weight", "layers.5.residual_group.blocks.5.mlp.fc1.bias", "layers.5.residual_group.blocks.5.mlp.fc2.weight", "layers.5.residual_group.blocks.5.mlp.fc2.bias", "layers.5.conv.weight", "layers.5.conv.bias", "norm.weight", "norm.bias".
size mismatch for conv_first.weight: copying a param with shape torch.Size([180, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3]).
size mismatch for conv_first.bias: copying a param with shape torch.Size([180]) from checkpoint, the shape in current model is torch.Size([64]).

System:

  • Mode: [Local]
  • OS: [Windows]
  • GPU: [Nvidia]
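
The missing keys ("body.0.rdb1...") belong to an ESRGAN/RRDB-style network, while the unexpected keys ("layers.0.residual_group...") are SwinIR's, so the loader is apparently building the wrong architecture for this checkpoint. A sketch of how the architecture could be guessed from the state dict, purely illustrative and not the project's actual loader:

def detect_upscaler_arch(state_dict: dict) -> str:
    # SwinIR checkpoints carry transformer-style residual groups, while
    # ESRGAN/RRDB checkpoints carry dense blocks named rdb1..rdb3.
    keys = state_dict.keys()
    if any(k.startswith("layers.0.residual_group") for k in keys):
        return "SwinIR"
    if any(".rdb1." in k for k in keys):
        return "ESRGAN"
    return "unknown"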

Bug report

Describe the bug
Error while Generating.
'UNET' object has no attribute 'determine_prediction_type' (module.py:1614)

Traceback
Traceback (most recent call last):
File "/content/sd-inference-server/server.py", line 209, in run
self.wrapper.txt2img()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/sd-inference-server/wrapper.py", line 705, in txt2img
latents = inference.txt2img(denoiser, sampler, noise, self.steps, self.on_step)
File "/content/sd-inference-server/inference.py", line 13, in txt2img
latents = sampler.step(latents, schedule, i, noise)
File "/content/sd-inference-server/samplers_k.py", line 298, in step
denoised = self.predict(x, sigmas[i])
File "/content/sd-inference-server/samplers_k.py", line 57, in predict
original = self.model.predict_original(latents, timestep, sigma)
File "/content/sd-inference-server/guidance.py", line 134, in predict_original
self.unet.determine_prediction_type()
File "/content/sd-inference-server/venv/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 186, in getattr
return super().getattr(name)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1614, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'UNET' object has no attribute 'determine_prediction_type'

Screenshots

System:

  • Mode: [Remote]
  • OS: [Windows]
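
The AttributeError points to a client/server version mismatch: guidance.py calls `determine_prediction_type()` on a UNET wrapper that does not define it. A hedged sketch of a tolerant caller; the fallback attribute and default value are assumptions for illustration, and the practical fix is most likely just updating the server checkout:

def safe_determine_prediction_type(unet, default: str = "epsilon") -> str:
    # Prefer the method if this build of the server provides it,
    # otherwise fall back to a stored attribute or a default.
    fn = getattr(unet, "determine_prediction_type", None)
    if callable(fn):
        return fn()
    return getattr(unet, "prediction_type", default)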

Problems with "mask" and "inpaint".

I have a problem with "Inpaint":

Error while Generating.
The size of tensor a (53) must match the size of tensor b (54) at non-singleton dimension 3 (controlnet.py:769)

And also when using "Mask":

Error while Decoding.
images do not match (Image.py:1889)

How can I fix it?

Recommendation - Stop Button

I wish there was a big, circular, skeuomorphic red stop button below the generate button. Or, if that would clash too much with the UI, just a normal stop button below the generate button. Thank you.

Some recommendations

Highres - the interface needs to be more understandable here, or at least covered by a tutorial. At the moment you have to apply a mask and regenerate, rather than just running an upscale.

HR fix - judging by the README it is present, but there is no setting for it in the interface.

For reasons that are not entirely clear, the same generation parameters and the same seed produce different results. This happens consistently when generating several images at once in one batch.

I hope I'm not pointing out anything obvious or stupid, and thank you for your hard work - other than the above, everything else works great.

p.s. Being able to erase a badly drawn mask with RMB is brilliant

How to use ControlNet

I am unable to understand how to use ControlNet, as there is no option for it.

1. I uploaded the ControlNet model to the correct folder

(screenshot attached)

2. There is no option to enable ControlNet

(screenshot attached)

3. There is no option to choose the installed ControlNet

(screenshot attached)

I want to do something like that with my own custom image: add sunglasses, change the hair colour to red, remove a hand tattoo, etc., without changing the face or the unmasked area.

Program settings confusion (Colab)

(screenshot attached)

  1. Advanced Parameters: I changed Advanced Parameters to Show, but I did not see any changes. What is it for?
  2. Model folders: It says Model folders. Does this mean model folders on the local PC or on Colab?

Tile ControlNet upscaler error

Describe the bug
If a third-party upscaler is set (UniScale, LSDIR and others), I get an error during the upscaling process.

Traceback
Traceback (most recent call last):
File "F:\qDiffusion-master\source\local.py", line 81, in run
self.wrapper.img2img()
File "F:\qDiffusion-master\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "F:\qDiffusion-master\source\sd-inference-server\wrapper.py", line 790, in img2img
return self.tiled_img2img()
File "F:\qDiffusion-master\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "F:\qDiffusion-master\source\sd-inference-server\wrapper.py", line 981, in tiled_img2img
upscaled_images = upscalers.upscale(images, UPSCALERS_PIXEL[self.img2img_upscaler], width, height)
KeyError: 'SR\4x-UniScale-Balanced [72000g].pth'

System:

  • Mode: [Local]
  • OS: [Windows]
  • GPU: [Nvidia]
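
The KeyError suggests UPSCALERS_PIXEL only maps the built-in pixel resamplers, so an SR checkpoint name such as "SR\4x-UniScale-Balanced [72000g].pth" falls through. A hedged sketch of a more tolerant dispatch; the dictionary contents and helper name are assumptions for illustration:

# Illustrative stand-in for the real mapping of built-in pixel resamplers.
UPSCALERS_PIXEL = {"Lanczos": "lanczos", "Nearest": "nearest"}

def resolve_upscaler(name: str):
    # Route unknown names to the SR-model path instead of raising KeyError.
    if name in UPSCALERS_PIXEL:
        return ("pixel", UPSCALERS_PIXEL[name])
    return ("model", name)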

Is v-pred a thing?

I am using the v-pred version of the furryrock model and this is all I get
(attached result image: 00000118-06270317)

bug: Inpaint with mask and padding -> freeze

When using inpainting with a mask and padding (not full image), the GUI freezes, showing "Upscaling" and "Working..." as the current state; after this it either crashes or keeps running but no longer responds to any actions. After closing the GUI, the Python process is still hanging on to all the memory it used.

Image size: 864x1152

Inpaint settings:

  • Strength: 0.85
  • Upscaler: UltraSharp-4x
  • Padding: 112
  • Mask blur: 4
  • Mask expand: 0
  • Mask fill: Original

GUI 2023-08-03 11:46:29.340816

Traceback (most recent call last):
  File "F:\NeuralNetworks\Apps\qDiffusion\source\parameters.py", line 744, in sync
    closest_match = self.gui.closestModel(value, available) or available[0]
IndexError: list index out of range
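
The IndexError means `available` is empty when `closestModel(value, available) or available[0]` runs, i.e. no model of the requested kind is installed, and the GUI hangs instead of reporting it. A hedged sketch of a guard; the names mirror the traceback, but the surrounding behaviour is an assumption:

def pick_closest(closest_model_fn, value, available):
    # Return None (and let the caller show a readable error) rather than
    # indexing into an empty candidate list.
    if not available:
        return None
    return closest_model_fn(value, available) or available[0]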

Feature Request: Sampler (DPM Solver) & ControlNet

1. Sampler Request :

  1. DPM Solver++
  2. DPM Karras

2. Concurrent Image Generation

During image generation it uses less than 15% of the GPU (Colab) and generates images one by one. Could you add an option to generate multiple images simultaneously?

You could add an option where the user selects how much concurrency they want.

3. LoHA :

LoHA is not supported under LoRA in qDiffusion.

4. ControlNet Support

Many ControlNet models are currently not supported by qDiffusion. Could you make it support a wider range of ControlNets?

Endpoint not generating normally (Colab)

Describe the bug
I have an issue with remote mode: when I start the notebook I get the password, but the endpoint it generates is just wss://api.trycloudflare.com. Can anyone fix this?

Traceback
Error while Connecting.
server rejected WebSocket connection: HTTP 400

Screenshots
(screenshots attached)

System:

  • Mode: Remote
  • OS: Windows
  • GPU: T4

Bug: Can't set a 10-digit seed [Remote mode]

At first I thought that pasting a seed sometimes set it back to random, but then when I tried to manually type a 10-digit seed, the last digit gets deleted when I leave the input box.

Suggestion: saving Attention settings

At the moment, the Attention setting in the Operation section is reset to Default every time the program is restarted. A suggestion is to save this setting, just like VRAM or Preview mode.

[Bug]: Client crashes after pressing the Generate button

What happened

The current version of the client crashes as soon as I try to generate anything, whereas it was previously working on a version from a few days ago.

Steps to Reproduce

  1. Launch the start script
  2. Enter a prompt in the UI
  3. Click on the Generate button

Actual Results

The client closes and a crash.log file is generated.

Expected result

The generated image appears.

Contents of crash.log

Traceback (most recent call last):
  File "/home/gradient/Documents/qDiffusion/source/tabs/basic/basic.py", line 738, in generate
    request = self.buildRequest()
  File "/home/gradient/Documents/qDiffusion/source/tabs/basic/basic.py", line 726, in buildRequest
    self._requests += [self._parameters.buildRequest(size, batch_images, batch_masks, batch_areas, controls)]
  File "/home/gradient/Documents/qDiffusion/source/parameters.py", line 643, in buildRequest
    del data[k]
KeyError: 'hr_tome_ratio'
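
The crash happens because buildRequest deletes a fixed set of keys from the request dictionary and 'hr_tome_ratio' is absent in this configuration. A hedged sketch of a tolerant variant using pop; whether this matches the maintainer's actual fix is an assumption:

def strip_keys(data: dict, keys) -> dict:
    # dict.pop with a default is a no-op for missing keys,
    # unlike `del data[k]`, which raises KeyError.
    for k in keys:
        data.pop(k, None)
    return data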

Tile ControlNet workflow?

This ControlNet, as far as I know, lets you upscale images in tiles, which bypasses the memory limitation when upscaling to high (2-4k) resolutions. However, when used in qDiffusion it immediately tries to upscale the whole image, which causes torch.cuda.OutOfMemoryError.

So my question is: is this how the ControlNet feature is meant to work, implemented without an analogue of SD Upscale/Ultimate SD Upscale from A1111, or is it a bug?

Synchronous workloads for 2 different devices

Hi, I have a setup with 2 GPUs; however, even if I select another device, the clients wait until the first request is completed. Is it possible to use 2 devices at the same time for 2 different generations on 2 GPUs on the same server?

Pip index error

GUI 2023-10-08 20:49:45.776689
Traceback (most recent call last):
  File "E:\AI\qDiffusion-master\source\main.py", line 175, in run
    raise RuntimeError("Failed to install: ", p, "\n", output)
RuntimeError: ('Failed to install: ', 'basicsr==1.4.2', '\n', "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\nCollecting basicsr==1.4.2\nUsing cached https://pypi.tuna.tsinghua.edu.cn/packages/86/41/00a6b000f222f0fa4c6d9e1d6dcc9811a374cabb8abb9d408b77de39648c/basicsr-1.4.2.tar.gz (172 kB)\nPreparing metadata (setup.py): started\nPreparing metadata (setup.py): finished with status 'done'\nCollecting addict (from basicsr==1.4.2)\nUsing cached https://pypi.tuna.tsinghua.edu.cn/packages/6a/00/b08f23b7d7e1e14ce01419a467b583edbb93c6cdb8654e54a9cc579cd61f/addict-2.4.0-py3-none-any.whl (3.8 kB)\nCollecting future (from basicsr==1.4.2)\nUsing cached future-0.18.3-py3-none-any.whl\nCollecting lmdb (from basicsr==1.4.2)\nUsing cached https://pypi.tuna.tsinghua.edu.cn/packages/66/05/21a93eed7ff800f7c3b0538eb12bde89660a44693624cd0e49141beccb8b/lmdb-1.4.1-cp310-cp310-win_amd64.whl (100 kB)\nRequirement already satisfied: numpy>=1.17 in e:\\ai\\qdiffusion-master\\venv\\lib\\site-packages (from basicsr==1.4.2) (1.24.1)\nCollecting opencv-python (from basicsr==1.4.2)\nUsing cached https://pypi.tuna.tsinghua.edu.cn/packages/38/d2/3e8c13ffc37ca5ebc6f382b242b44acb43eb489042e1728407ac3904e72f/opencv_python-4.8.1.78-cp37-abi3-win_amd64.whl (38.1 MB)\nRequirement already satisfied: Pillow in e:\\ai\\qdiffusion-master\\venv\\lib\\site-packages (from basicsr==1.4.2) (9.3.0)\nRequirement already satisfied: pyyaml in e:\\ai\\qdiffusion-master\\venv\\lib\\site-packages (from basicsr==1.4.2) (6.0.1)\nRequirement already satisfied: requests in e:\\ai\\qdiffusion-master\\venv\\lib\\site-packages (from basicsr==1.4.2) (2.28.1)\nRequirement already satisfied: scikit-image in e:\\ai\\qdiffusion-master\\venv\\lib\\site-packages (from basicsr==1.4.2) (0.22.0)\nRequirement already satisfied: scipy in e:\\ai\\qdiffusion-master\\venv\\lib\\site-packages (from basicsr==1.4.2) (1.11.3)\nINFO: pip is looking at multiple versions of basicsr to determine which version is compatible with other requirements. This could take a while.\nERROR: Could not find a version that satisfies the requirement tb-nightly (from basicsr) (from versions: none)\nERROR: No matching distribution found for tb-nightly\n")

Subprompts guide

May I ask you to write a precise guide for using subprompts, if possible? When I try to use them (background prompt AND area 1 prompt AND area 2 prompt) during generation, the result is very unpredictable.
Tell me, what am I doing wrong?

Example (settings screenshot attached), with resulting images: 00000214-11051528, 00000215-11051529, 00000216-11051529

LoRA problem

It seems like LoRAs don't work at all. I'm getting the default model's results, and no prompt helps solve this. Yesterday everything was fine. Any ideas what's going on and how to fix this?

Error while Sending

I got an error while generating. Description of the error:

Error while Sending
Unable to serialize: key 'preview_interval' value: <PyQt5.QtCore.QVariant object at 0x000002B1B203FED0> type: <class 'PyQt5.QtCore.QVariant'>
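
The serializer is being handed a raw PyQt5 QVariant for 'preview_interval' instead of a plain Python value. A hedged sketch of unwrapping QVariants before building the request; the surrounding request-building code is illustrative, not qDiffusion's actual implementation:

from PyQt5.QtCore import QVariant

def to_plain(value):
    # QVariant.value() returns the underlying Python object (e.g. an int),
    # which standard serializers can handle.
    return value.value() if isinstance(value, QVariant) else value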

[AMD] Error when loading UNET.

When I try to generate images, I keep getting this error message:
(error screenshot attached)

trace:
Traceback (most recent call last):
File "D:\Other\sd\ui\qDiffusion\source\local.py", line 77, in run
self.wrapper.txt2img()
File "D:\Other\sd\ui\qDiffusion\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Other\sd\ui\qDiffusion\source\sd-inference-server\wrapper.py", line 611, in txt2img
self.load_models(*initial_networks)
File "D:\Other\sd\ui\qDiffusion\source\sd-inference-server\wrapper.py", line 263, in load_models
self.unet = self.storage.get_unet(self.unet_name, self.device, unet_nets)
File "D:\Other\sd\ui\qDiffusion\source\sd-inference-server\storage.py", line 302, in get_unet
unet = self.get_component(name, "UNET", device)
File "D:\Other\sd\ui\qDiffusion\source\sd-inference-server\storage.py", line 269, in get_component
self.file_cache[file] = self.load_file(file, comp)
File "D:\Other\sd\ui\qDiffusion\source\sd-inference-server\storage.py", line 378, in load_file
state_dict, metadata = convert.convert(file)
File "D:\Other\sd\ui\qDiffusion\source\sd-inference-server\convert.py", line 392, in convert
return convert_checkpoint(model_path)
File "D:\Other\sd\ui\qDiffusion\source\sd-inference-server\convert.py", line 278, in convert_checkpoint
state_dict = utils.load_pickle(in_file, map_location="cpu")
File "D:\Other\sd\ui\qDiffusion\source\sd-inference-server\utils.py", line 376, in load_pickle
return torch.load(file, map_location=map_location, pickle_module=SafeUnpickler)
File "D:\Other\sd\ui\qDiffusion\venv\lib\site-packages\torch\serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "D:\Other\sd\ui\qDiffusion\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
result = unpickler.load()
_pickle.UnpicklingError: state is not a dictionary

Can't connect to remote qDiffusion

I connect according to the guide, but I see this error:

"Error while Connecting.
server rejected WebSocket connection: HTTP 530"

What should I do to fix this problem?

P.S. I didn't use qDiffusion for a month; maybe I need to reinstall the program?

Update: Yes, reinstalling fixed the problem.
