Comments (42)
2.1 must have changed something again. I'll check out the SD thread
from diffusion-colabui.
Alright
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20
It says it should be identical. Make sure the 2.1 model you're using is the 768 version rather than the 512, as both are available.
Yeah, it is 768. It's a Dreambooth model I trained using TheLastBen's colab, and I selected the 768 version.
Can you send the full log? I need to check if it's applying the JSON/YAML properly.
If it is, it could be Stability-AI/stablediffusion@c12d960, which is an inherent flaw of 2.1. I might need to change the code to fix that.
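For context, the webui associates a per-model config with a checkpoint by basename: a `sicke.yaml` sitting next to `sicke.ckpt` is picked up automatically when the model loads. A minimal sketch of that convention (the helper name is mine, not webui code):

```python
from pathlib import Path

def sidecar_config(ckpt_path: str) -> str:
    """Path the webui checks for a per-model config: the checkpoint
    path with its extension swapped for .yaml (hypothetical helper,
    mirroring the webui's basename-matching behaviour)."""
    return str(Path(ckpt_path).with_suffix(".yaml"))

print(sidecar_config(
    "/content/stable-diffusion-webui/models/Stable-diffusion/sicke.ckpt"))
```

That is why the download cell in the log below writes `sicke.yaml` into the same `models/Stable-diffusion` folder as `sicke.ckpt`.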
I tried it again after getting the error, so it says it already exists. I'm trying once again from fresh:
/content
/content
fatal: destination path 'stable-diffusion-webui' already exists and is not an empty directory.
/content/stable-diffusion-webui
Already up to date.
/content
--2022-12-26 12:56:03-- https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1815 (1.8K) [text/plain]
Saving to: ‘/content/stable-diffusion-webui/models/Stable-diffusion/sicke.yaml’
/content/stable-dif 100%[===================>] 1.77K --.-KB/s in 0s
2022-12-26 12:56:03 (37.6 MB/s) - ‘/content/stable-diffusion-webui/models/Stable-diffusion/sicke.yaml’ saved [1815/1815]
--2022-12-26 12:56:03-- https://huggingface.co/Z3R069/sick/resolve/main/Sickers.ckpt
Resolving huggingface.co (huggingface.co)... 54.152.211.32, 23.22.186.9, 18.235.116.140, ...
Connecting to huggingface.co (huggingface.co)|54.152.211.32|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://cdn-lfs.huggingface.co/repos/5d/e5/5de5b77fc3658dbfb8e3d3791a2d5f39558f5e5a6ab977a3d558f13d4387c18a/14eed25869f498e82a72f009db7cfcc63c05d401e1698e174c1931580312faf0?response-content-disposition=attachment%3B%20filename%3D%22Sickers.ckpt%22&Expires=1672318564&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzVkL2U1LzVkZTViNzdmYzM2NThkYmZiOGUzZDM3OTFhMmQ1ZjM5NTU4ZjVlNWE2YWI5NzdhM2Q1NThmMTNkNDM4N2MxOGEvMTRlZWQyNTg2OWY0OThlODJhNzJmMDA5ZGI3Y2ZjYzYzYzA1ZDQwMWUxNjk4ZTE3NGMxOTMxNTgwMzEyZmFmMD9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0IlMjBmaWxlbmFtZSUzRCUyMlNpY2tlcnMuY2twdCUyMiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3MjMxODU2NH19fV19&Signature=zAN18-rQUPD6Tn27K3~cY8wfGux2wF60-AWsdZsMj4KGOnRKAfTC6E3VTDuKHbTKPwJ2Ct~Zkw2UqEr4bNrBjf6WFUscWo6dr5pQdy4Ut8OGg1JEduC6eTTK97Xrl5bP~Io~DBx1STlqigbHHAG9eQ7wma8m4uB4FIzKajLi9OglOsL54fBUdSw2d28~iHZp6XnXfVBeLH3lDeyUpL-RqXcaMzfHXreKNGspGqymaMW~wokJyVWK3rdjXshs1rrHAQriaj~L3CKFYJJ3v1dx69sV8Uh8r5Jtekn8DrXSMQ75B~LN1wWhXsemohleOHp2DEBrloM2hiAmmq78mRGdLA__&Key-Pair-Id=KVTP0A1DKRTAX [following]
--2022-12-26 12:56:04-- https://cdn-lfs.huggingface.co/repos/5d/e5/5de5b77fc3658dbfb8e3d3791a2d5f39558f5e5a6ab977a3d558f13d4387c18a/14eed25869f498e82a72f009db7cfcc63c05d401e1698e174c1931580312faf0?response-content-disposition=attachment%3B%20filename%3D%22Sickers.ckpt%22&Expires=1672318564&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzVkL2U1LzVkZTViNzdmYzM2NThkYmZiOGUzZDM3OTFhMmQ1ZjM5NTU4ZjVlNWE2YWI5NzdhM2Q1NThmMTNkNDM4N2MxOGEvMTRlZWQyNTg2OWY0OThlODJhNzJmMDA5ZGI3Y2ZjYzYzYzA1ZDQwMWUxNjk4ZTE3NGMxOTMxNTgwMzEyZmFmMD9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0IlMjBmaWxlbmFtZSUzRCUyMlNpY2tlcnMuY2twdCUyMiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3MjMxODU2NH19fV19&Signature=zAN18-rQUPD6Tn27K3~cY8wfGux2wF60-AWsdZsMj4KGOnRKAfTC6E3VTDuKHbTKPwJ2Ct~Zkw2UqEr4bNrBjf6WFUscWo6dr5pQdy4Ut8OGg1JEduC6eTTK97Xrl5bP~Io~DBx1STlqigbHHAG9eQ7wma8m4uB4FIzKajLi9OglOsL54fBUdSw2d28~iHZp6XnXfVBeLH3lDeyUpL-RqXcaMzfHXreKNGspGqymaMW~wokJyVWK3rdjXshs1rrHAQriaj~L3CKFYJJ3v1dx69sV8Uh8r5Jtekn8DrXSMQ75B~LN1wWhXsemohleOHp2DEBrloM2hiAmmq78mRGdLA__&Key-Pair-Id=KVTP0A1DKRTAX
Resolving cdn-lfs.huggingface.co (cdn-lfs.huggingface.co)... 13.227.254.33, 13.227.254.123, 13.227.254.52, ...
Connecting to cdn-lfs.huggingface.co (cdn-lfs.huggingface.co)|13.227.254.33|:443... connected.
HTTP request sent, awaiting response... 416 Requested Range Not Satisfiable
The file is already fully retrieved; nothing to do.
/content/stable-diffusion-webui/extensions
fatal: destination path 'stable-diffusion-webui-images-browser' already exists and is not an empty directory.
/content
/content/stable-diffusion-webui
|████████████████████████████████| 102.9 MB 46 kB/s
Python 3.8.16 (default, Dec 7 2022, 01:12:13)
[GCC 7.5.0]
Commit hash: b85d055af7475bd8536b2cb42e0a0b0635cbc583
Installing requirements for Web UI
Launching Web UI with arguments: --share --xformers --enable-insecure-extension-access --gradio-auth a:b
Loading config from: /content/stable-diffusion-webui/models/Stable-diffusion/sicke.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Loading weights [1118ae92] from /content/stable-diffusion-webui/models/Stable-diffusion/sicke.ckpt
Traceback (most recent call last):
File "launch.py", line 295, in <module>
start()
File "launch.py", line 290, in start
webui.webui()
File "/content/stable-diffusion-webui/webui.py", line 133, in webui
initialize()
File "/content/stable-diffusion-webui/webui.py", line 63, in initialize
modules.sd_models.load_model()
File "/content/stable-diffusion-webui/modules/sd_models.py", line 313, in load_model
load_model_weights(sd_model, checkpoint_info)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 197, in load_model_weights
model.load_state_dict(sd, strict=False)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.input_blocks.1.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.input_blocks.2.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.input_blocks.2.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.input_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.input_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.middle_block.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.middle_block.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 1024]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.6.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.output_blocks.6.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.output_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.output_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.output_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 1024]).
size mismatch for model.diffusion_model.output_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for model.diffusion_model.output_blocks.9.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.output_blocks.9.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.output_blocks.10.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.output_blocks.10.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.output_blocks.11.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
size mismatch for model.diffusion_model.output_blocks.11.1.proj_out.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).
> Loading config from: /content/stable-diffusion-webui/models/Stable-diffusion/sicke.yaml

It's definitely applying the config. I might need to do some fixes. Is the model public?
Made it public, here https://huggingface.co/Z3R069/sick/resolve/main/Sickers.ckpt
Replicated the issue. Trying to fix it.
Ugh, mine's shutting down with ^C every time.
Google is limiting your access; you were assigned low VRAM.
I used a different Google account with a VPN, but it's still shutting down.
I'm getting the same issue now. Google might just be limiting everyone due to high demand
Oh
It's working now, btw, but I'm still getting the error.
It'll take some time before I can get around to fixing it. My workload is a bit high at the moment with my ChatGPT repos.
It's alright, take your time 😅
Ello?
The error seems to be the same as AUTOMATIC1111/stable-diffusion-webui#5745.
I still haven't found a solution yet because everything looks to be working, but the sizes being applied are wrong.
oh
Could it be an issue with the Dreambooth model? I'll try to remake the model then.
Hi! I got the same issue here. Did any of you find a solution?
nope, not working still
Is it only with 2.1 models?
Yeah so far
They might have a different number of weights than 2.0 models. I'm not sure what config they use.
I'll make a bug report on @AUTOMATIC1111 's repo
Looking at this line:

> size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

It seems that you are not using a 768 model.
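The 768-vs-1024 distinction comes from the text encoder: SD 1.x conditions on 768-dimensional CLIP ViT-L embeddings, while SD 2.x uses 1024-dimensional OpenCLIP embeddings, and that is exactly the second dimension of the `attn2.to_k`/`to_v` tensors in the traceback. A hedged heuristic (this function is mine, not a webui API) that reads the lineage off that shape:

```python
# Heuristic: map the second dimension (context width) of a UNet
# cross-attention attn2.to_k.weight tensor to an SD lineage.
# 768 -> SD 1.x text encoder (CLIP ViT-L); 1024 -> SD 2.x (OpenCLIP).
def guess_lineage(to_k_shape) -> str:
    _out_dim, ctx_dim = to_k_shape
    return {768: "SD 1.x", 1024: "SD 2.x"}.get(ctx_dim, f"unknown ({ctx_dim})")

print(guess_lineage((320, 768)))   # the shape found in the checkpoint
print(guess_lineage((320, 1024)))  # the shape the 2.x config expects
```

So a `[320, 768]` checkpoint tensor against a `[320, 1024]` model slot means the checkpoint is 1.x-shaped while the config built a 2.x model.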
That's weird, I'm pretty sure I used a 768 model.
I'll try with the 512 option then and see if it works
@RayZ3R0 It is not a 2.1 model
It worked for me without any config. (Or maybe it is compatible with 1.5)
Looking at your config: https://huggingface.co/Z3R069/sick/blob/main/unet/config.json
{
  "_class_name": "UNet2DConditionModel",
  "_diffusers_version": "0.9.0.dev0",
  "_name_or_path": "/content/stable-diffusion-v1-5",
  "act_fn": "silu",
  "attention_head_dim": 8,
  "block_out_channels": [
    320,
    640,
    1280,
    1280
  ],
  "center_input_sample": false,
  "cross_attention_dim": 768,
  "down_block_types": [
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "DownBlock2D"
  ],
  "downsample_padding": 1,
  "dual_cross_attention": false,
  "flip_sin_to_cos": true,
  "freq_shift": 0,
  "in_channels": 4,
  "layers_per_block": 2,
  "mid_block_scale_factor": 1,
  "norm_eps": 1e-05,
  "norm_num_groups": 32,
  "num_class_embeds": null,
  "only_cross_attention": false,
  "out_channels": 4,
  "sample_size": 64,
  "up_block_types": [
    "UpBlock2D",
    "CrossAttnUpBlock2D",
    "CrossAttnUpBlock2D",
    "CrossAttnUpBlock2D"
  ],
  "use_linear_projection": false
}
It is 1.5-based.
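If you have the Dreambooth output in diffusers format, the same check can be scripted against `unet/config.json` before converting or uploading. A sketch (assuming only that the file looks like the one quoted above; the function name is mine):

```python
import json

def base_from_unet_config(text: str) -> str:
    """Guess the base model from a diffusers unet/config.json:
    cross_attention_dim is the text-embedding width (768 = SD 1.x,
    1024 = SD 2.x); _name_or_path often names the base outright."""
    cfg = json.loads(text)
    dim = cfg.get("cross_attention_dim")
    base = {768: "SD 1.x", 1024: "SD 2.x"}.get(dim, "unknown")
    return f"{base} (cross_attention_dim={dim}, from {cfg.get('_name_or_path')})"

print(base_from_unet_config(
    '{"cross_attention_dim": 768, '
    '"_name_or_path": "/content/stable-diffusion-v1-5"}'
))
```

On the config above this reports SD 1.x, which matches both the `cross_attention_dim` of 768 and the `/content/stable-diffusion-v1-5` base path.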
That's really weird; I definitely selected the 2.1 768px model in the Dreambooth colab. But thanks!
Seems to be a bug with @TheLastBen. You should open up an issue there
Alright, thanks
I'm getting the error running locally
Are you running the notebook locally or https://github.com/AUTOMATIC1111/stable-diffusion-webui?
> I'm getting the error running locally

Depending on the model you're using, you need to choose the right inference configuration.
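Concretely, the rule of thumb for local installs: SD 1.x checkpoints need no sidecar config in the webui, SD 2.x 512 (eps) models pair with `v2-inference.yaml`, and SD 2.x 768 (v-prediction) models pair with `v2-inference-v.yaml` from the Stability-AI repo, saved next to the checkpoint under the same basename. A small sketch of that mapping (the labels are mine):

```python
# Which Stability-AI inference config matches which model family.
# SD 1.x checkpoints need no sidecar config in the webui.
CONFIG_FOR = {
    "2.x 512 (eps)": "v2-inference.yaml",
    "2.x 768 (v-prediction)": "v2-inference-v.yaml",
}
BASE = ("https://raw.githubusercontent.com/Stability-AI/stablediffusion/"
        "main/configs/stable-diffusion/")

def config_url(kind: str) -> str:
    """Return the raw URL of the config to save as <model>.yaml."""
    return BASE + CONFIG_FOR[kind]

print(config_url("2.x 768 (v-prediction)"))
```

This mirrors what the colab's download cell does in the log above: it fetches `v2-inference-v.yaml` and saves it as `sicke.yaml` beside `sicke.ckpt`.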
This fixed it for me: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon#existing-install