
koi's Introduction

koi 🎣

Open In Colab | Join us on Discord | GitHub

koi is an open source plug-in for Krita that allows you to use AI to accelerate your art workflow!

Disclaimer ✋

In the interest of getting the open source community on board, I have released this plug-in early. In its current state you may run into issues (particularly during the setup process). If you do, I encourage you to open an issue here on GitHub and describe your problem so that it can be fixed for you and others!


Overview 😄

The goal of this repository is to serve as a starting point for building increasingly useful tools for artists of all levels of experience to use.

Link to original twitter thread

This plug-in serves as a working example of how new A.I. models like Stable Diffusion can lower the barrier of entry to art so that anyone can enjoy making their dreams a reality!

Because this is an open source project I encourage you to try it out, break things, and come back with suggestions!

Getting Started 🏁

If you are new to git, or get stuck during the installation process, Lewington made a nice step-by-step video.

The easiest way to get started is to follow the plug-in installation process for Krita, then use the Google Colab backend server (button at the top of this README). This should give you a good introduction to the setup process and get you up and running fast!


Installation 🔨

Krita has a few plug-in installation methods; however, I will refer you to the one I use.

  • Step 1: Find your operating system's pykrita folder reference
  • Step 2: Clone the repository, then copy the koi folder and koi.desktop to pykrita (restart Krita now if it is open).
  • Step 3: Open Krita and navigate to the python plug-in menu reference
  • Step 4: Enable the koi plugin and restart Krita to load the plug-in.

The next thing you will need to do is set up the backend server that does all the computation!

  • Step 0: Ensure you have a GPU-accelerated installation of PyTorch (you can skip this step if you are using Colab or already have it); a quick sanity check is sketched after this list.

    • Follow the installation instructions in PyTorch's official getting-started guide.
  • Step 1: Get the latest version of HuggingFace's diffusers from source by going to the GitHub repo

    • From here you can clone the repo with git clone https://github.com/huggingface/diffusers.git and cd diffusers to move into the directory.
    • Install the package with pip install -e .
  • Step 2: Install this package! I recommend moving out of the diffusers folder if you haven't already (e.g. cd ..)

    • git clone https://github.com/nousr/koi.git, then cd koi and pip install -e .
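
If you want to double-check Step 0 before starting the server, a minimal sanity check (not part of koi itself) is to ask PyTorch whether it can see your GPU:

# Quick Step 0 sanity check: confirms PyTorch was installed with CUDA support.
# Not needed if you are using the Colab backend.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))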

Note 🙋

Before continuing, make sure you accept the terms of service for the diffusers model (link to do so here).

Next, inside your terminal run the huggingface-cli login command and paste a token generated from here. If you don't want to repeat this step in the future you can then run git config --global credential.helper store. (note: only do this on a computer you trust)

  • Step 3: Run the server by typing python server.py
    • If you did everything correctly, you should see an address printed after some time (e.g. 127.0.0.1:8888)
  • Step 4: Open Krita, if you haven't already, and paste your address into the endpoint field of the plugin
    • You will also need to append the actual API endpoint you are using. By default this is /api/img2img
    • If you are using all of the default settings, your endpoint field will look something like this: http://127.0.0.1:8888/api/img2img (a quick connectivity check is sketched below)
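
Before pasting the address into Krita, you can optionally confirm the backend is reachable. This is a small sketch using only the standard library, not something the plug-in requires; any HTTP response, even an error code, means the Flask server is up, while a connection error means it is not:

from urllib import request, error

ENDPOINT = "http://127.0.0.1:8888/api/img2img"  # the default example from the steps above

try:
    request.urlopen(ENDPOINT, timeout=5)
    print("Server responded.")
except error.HTTPError as e:
    # e.g. 405 or 500: the route exists but expects a POST with image data
    print("Server is up, returned HTTP", e.code)
except error.URLError as e:
    print("Could not reach the server:", e.reason)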

Inference πŸ–ŒοΈ

This part is easy!

  • Step 1: Create a new canvas that is 512 x 512 (px) in size and make a single-layer sketch (note: these are temporary restrictions).
  • Step 2: Fill out the prompt field in the koi panel (default location is somewhere on the right of your screen).
  • Step 3: Make any additional changes you would like to the inference parameters (strength, steps, etc.)
  • Step 4: Copy and paste your server's endpoint to the associated field
  • Step 5: Click dream!

FAQ ❔

  • What does koi stand for?
    • Krita Open(source) Img2Img. While support for Stable Diffusion comes first, the goal is to have this plug-in be compatible with any model!
  • Why the client/server setup?
    • The goal is to make this as widely available as possible. The server can be run anywhere with a GPU (i.e. colab) and allow those with low-powered hardware to still use the plug-in!
  • I'm getting an error with "setsize"
    • This usually happens when you either forget "/api/img2img" at the end of your endpoint when copying it into the plugin OR you have some issue with your backend server (check the output on your server for more information).

TODO

  • Add colab backend example
  • Flesh out UI
  • Move to CompVis repo
  • Add CI
  • Abstract away drop-in models for the server
  • Improve documentation
  • Add DreamStudio API support
  • Add support for arbitrary canvas size & selection-based img2img
  • Add support for masks
  • Offer more configuration options

koi's People

Contributors

jpic, lewington-pitsos, nousr, tocram1


koi's Issues

NSFW filter doesn't allow a cat and returns a white image

Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed.

Can I fix it?

RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

This happens after following every instruction and trying to draw

I followed everything written in readme.md.

Full log:
[2022-09-06 03:37:08,913] ERROR in app: Exception on /api/img2img [POST]
Traceback (most recent call last):
  File "C:\Users\Shadow\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\Shadow\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Users\Shadow\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Users\Shadow\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "C:\Users\Shadow\Downloads\koi-main\koi-main\server.py", line 51, in img2img
    return_image = pipe(
  File "C:\Users\Shadow\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Shadow\Downloads\dif\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_img2img.py", line 108, in __call__
    init_latents = self.scheduler.add_noise(init_latents, noise, timesteps).to(self.device)
  File "C:\Users\Shadow\Downloads\dif\diffusers\src\diffusers\schedulers\scheduling_pndm.py", line 284, in add_noise
    sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

"BadZipFile: File is not a zip file" Error Message

Hi, I went through all the steps, and when I ask it to "dream" I get a popup error message; here are the contents:

BadZipFile
Python 3.8.1: C:\Program Files\Krita (x64)\bin\krita.exe
Tue Sep 6 23:31:46 2022

A problem occurred in a Python script. Here is the sequence of
function calls leading up to the error, in the order they occurred.

C:\Users\Desktop\AppData\Roaming\krita\pykrita\koi\koi.py in pingServer(self=<koi.koi.Koi object>)
157 # wait for response and read image
158 with request.urlopen(response_url, timeout=self._get_timeout()) as response:
159 archive = ZipFile(BytesIO(response.read()))
160 filenames = archive.namelist()
161 for name in filenames:
archive undefined
global ZipFile = <class 'zipfile.ZipFile'>
global BytesIO = <class '_io.BytesIO'>
response = <http.client.HTTPResponse object>
response.read = <bound method HTTPResponse.read of <http.client.HTTPResponse object>>

C:\Program Files\Krita (x64)\zipfile.py in __init__(self=<zipfile.ZipFile [closed]>, file=<_io.BytesIO object>, mode='r', compression=0, allowZip64=True, compresslevel=None, strict_timestamps=True)

C:\Program Files\Krita (x64)\zipfile.py in _RealGetContents(self=<zipfile.ZipFile [closed]>)

BadZipFile: File is not a zip file
__cause__ = None
__class__ = <class 'zipfile.BadZipFile'>
__context__ = None
__delattr__ = <method-wrapper '__delattr__' of BadZipFile object>
__dict__ = {}
__dir__ =
__doc__ = None
__eq__ = <method-wrapper '__eq__' of BadZipFile object>
__format__ =
__ge__ = <method-wrapper '__ge__' of BadZipFile object>
__getattribute__ = <method-wrapper '__getattribute__' of BadZipFile object>
__gt__ = <method-wrapper '__gt__' of BadZipFile object>
__hash__ = <method-wrapper '__hash__' of BadZipFile object>
__init__ = <method-wrapper '__init__' of BadZipFile object>
__init_subclass__ =
__le__ = <method-wrapper '__le__' of BadZipFile object>
__lt__ = <method-wrapper '__lt__' of BadZipFile object>
__module__ = 'zipfile'
__ne__ = <method-wrapper '__ne__' of BadZipFile object>
__new__ =
__reduce__ =
__reduce_ex__ =
__repr__ = <method-wrapper '__repr__' of BadZipFile object>
__setattr__ = <method-wrapper '__setattr__' of BadZipFile object>
__setstate__ =
__sizeof__ =
__str__ = <method-wrapper '__str__' of BadZipFile object>
__subclasshook__ =
__suppress_context__ = False
__traceback__ =
__weakref__ = None
args = ('File is not a zip file',)
with_traceback =

The above is a description of an error in a Python program. Here is
the original traceback:

Traceback (most recent call last):
File "C:\Users\Desktop\AppData\Roaming\krita\pykrita\koi\koi.py", line 159, in pingServer
archive = ZipFile(BytesIO(response.read()))
File "zipfile.py", line 1267, in __init__
File "zipfile.py", line 1334, in _RealGetContents
zipfile.BadZipFile: File is not a zip file

I really have no idea what to do, I appreciate any help you can offer!

koi_colab_backend in Colab says ""

When I call the Colab server from Krita, Colab says RuntimeError: CUDA out of memory.
I have no idea what to do.

ERROR:__main__:Exception on /api/img2img [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "<ipython-input-1-35b40b736f48>", line 75, in img2img
    num_inference_steps=int(headers["steps"]),
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py", line 272, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_2d_condition.py", line 237, in forward
    hidden_states=sample, temb=emb, encoder_hidden_states=encoder_hidden_states
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_blocks.py", line 550, in forward
    hidden_states = attn(hidden_states, context=encoder_hidden_states)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 149, in forward
    hidden_states = block(hidden_states, context=context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 198, in forward
    hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 268, in forward
    hidden_states = self._attention(query, key, value)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 275, in _attention
    attention_scores = torch.matmul(query, key.transpose(-1, -2)) * self.scale
RuntimeError: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 14.76 GiB total capacity; 10.75 GiB already allocated; 2.36 GiB free; 11.34 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
INFO:werkzeug:127.0.0.1 - - [18/Sep/2022 14:56:17] "POST /api/img2img HTTP/1.1" 500 -

Google Colab error

RuntimeError: Failed to import transformers.models.clip.feature_extraction_clip because of the following error (look up to see its traceback):
module 'PIL.Image' has no attribute 'Resampling'

[Feature Request] Add toggle to randomize the seed after generating

Doing multiple generations with the same prompt is problematic when using the same seed as it tends to "deep fry" the image. It would be significantly smoother if koi had a system for randomizing seeds.

I think the expected behavior for this toggle should be:

  • When you click "dream," if the toggle is on,
    • Read the current seed value and use that for generation
    • Replace the current seed value with a random seed
  • If the toggle is off, and you click it on,
    • Replace the current seed value with a random seed
  • If the toggle is on, and you edit the seed,
    • Turn the toggle off

This allows for manual intervention with seeds when desired, and maintains clarity as to which seed is being used while randomness is on.
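
A rough sketch of that logic (hypothetical names, not koi's actual widget code) could look like this:

import random
from dataclasses import dataclass

@dataclass
class SeedState:
    seed: int = 1337
    randomize: bool = False

def on_dream_clicked(state: SeedState) -> int:
    """Return the seed to generate with, then queue a fresh one if the toggle is on."""
    used = state.seed
    if state.randomize:
        state.seed = random.randint(0, 2**32 - 1)
    return used

def on_toggle_switched_on(state: SeedState) -> None:
    state.seed = random.randint(0, 2**32 - 1)

def on_seed_edited(state: SeedState, new_seed: int) -> None:
    state.seed = new_seed
    state.randomize = False  # manual edit switches randomization off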

Memory problem (how to fix it ?)

Hello, I have a problem at the final step: when I click on "dream" it doesn't work at all, and it says:

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 7.33 GiB already allocated; 0 bytes free; 7.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
127.0.0.1 - - [02/Sep/2022 18:02:39] "POST /api/img2img HTTP/1.1" 500 -

Do you know how to fix it? I don't understand Python or PyTorch very well; I really need help with this.

[Feature request] Add system to manage the server being active or not

Assuming that koi is meant to be used 1 server to 1 person, then it would be nice to be able to spin up the server on demand.

Perhaps if Krita is open and the user has clicked a "start server" button in the UI, then the server loads the model. When Krita closes or stops its connection to the server (via some keep-alive heartbeat), or the user clicks a "stop server" button, then the server unloads the model.

This prevents the koi server from holding on to VRAM while it's not in use.
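
One possible shape for this on the server side (a hedged sketch, not koi's current behavior; the /api/heartbeat route and idle timeout are made up for illustration) is to load the pipeline lazily and drop it after a quiet period:

import gc
import threading
import time

import torch
from flask import Flask

app = Flask(__name__)
pipe = None
last_seen = time.time()
IDLE_SECONDS = 300  # unload after 5 minutes without a heartbeat

def get_pipe():
    # called by the img2img handler; loads the model on first use
    global pipe
    if pipe is None:
        from diffusers import StableDiffusionImg2ImgPipeline
        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", use_auth_token=True
        ).to("cuda")
    return pipe

@app.route("/api/heartbeat", methods=["POST"])
def heartbeat():
    global last_seen
    last_seen = time.time()
    return "ok"

def unload_when_idle():
    global pipe
    while True:
        time.sleep(30)
        if pipe is not None and time.time() - last_seen > IDLE_SECONDS:
            pipe = None               # drop the model...
            gc.collect()
            torch.cuda.empty_cache()  # ...and release the cached VRAM

threading.Thread(target=unload_when_idle, daemon=True).start()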

[Feature Request] Show generation parameters in the layer name

This would help greatly when dialing in parameters and comparing results. Perhaps something like:

i32 ss0.18 ps0.75 S1337 "A beautiful mountain landscape..."

Further iterations might include toggles to add only certain parameters to the layer name.
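
A tiny helper in the spirit of that format (hypothetical, not something koi does today) might be:

def layer_name(steps, sketch_strength, prompt_strength, seed, prompt, max_len=40):
    """Build a layer name in the proposed i/ss/ps/S format."""
    short = prompt if len(prompt) <= max_len else prompt[:max_len] + "..."
    return f'i{steps} ss{sketch_strength:g} ps{prompt_strength:g} S{seed} "{short}"'

# layer_name(32, 0.18, 0.75, 1337, "A beautiful mountain landscape...")
# -> 'i32 ss0.18 ps0.75 S1337 "A beautiful mountain landscape..."'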

Fix discord button "online count"

Calling any Discord-familiar devs who know how to properly set badges for Discord servers!

If anyone knows how to fix the "online" count for the badge on the readme, please open a PR with the proper changes :)

"Port 5000 is in use by another program"

In Colab, stopping the Flask cell and then running it again results in the error "Port 5000 is in use by another program". Apparently, stopping the cell does not free up the port.

How do I set my ngrok authtoken?

I ran the whole notebook in Colab, but requests fail with responses like:

$ curl -X POST http://2e04-34-90-174-157.ngrok.io/api/img2img
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1201  100  1201    0     0   2327      0 --:--:-- --:--:-- --:--:--  2332<!DOCTYPE html>
<html lang="en-US">
  <head>
    <meta charset="utf-8">
    <meta name="author" content="ngrok">
    <meta name="description" content="ngrok is the fastest way to put anything on the internet with a single command.">
    <meta name="robots" content="noindex, nofollow">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link id="style" rel="stylesheet" href="https://cdn.ngrok.com/static/css/error.css">
    <noscript>Before you can serve HTML content, you must sign up for an ngrok account and install your authtoken. (ERR_NGROK_6022)</noscript>
    <script id="script" src="https://cdn.ngrok.com/static/js/error.js" type="text/javascript"></script>
  </head>
  ...
</html>

I've signed up for an ngrok account and got an authtoken, but how do I set it in this environment?

[Feature Request] Support dynamic images sizes

An idea for how to support dynamic image sizes (a rough sketch follows the list):

  • Capture the requested image/layer, scale it down as necessary to fit in 512x512 for generation, maintaining aspect ratio
  • Generate
  • Re-scale the output back to the input size by using realesrgan or another upscaler
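
A rough Pillow-only sketch of the first and last bullets (the plain resize here is just a stand-in for realesrgan or another proper upscaler):

from PIL import Image

def fit_within_512(img):
    """Downscale a copy of the layer so both sides are <= 512 px, keeping aspect ratio."""
    scaled = img.copy()
    scaled.thumbnail((512, 512), Image.LANCZOS)
    return scaled

def restore_size(generated, original_size):
    """Naive upscale of the generated result back to the original canvas size."""
    return generated.resize(original_size, Image.LANCZOS)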

AttributeError: 'NoneType' object has no attribute 'setsize'

Trying to get this up and running with Colab. I'm getting this error:

AttributeError
Python 3.8.1: C:\Program Files\Krita (x64)\bin\krita.exe
Mon Sep 5 10:32:53 2022

A problem occurred in a Python script. Here is the sequence of
function calls leading up to the error, in the order they occurred.

C:\Program Files\Krita (x64)\share\krita\pykrita\koi\koi.py in pingServer(self=<koi.koi.Koi object>)
133 # get a pointer to the image's bits and add them to the new layer
134 ptr = returned_image.bits()
135 ptr.setsize(returned_image.byteCount())
136 dream_layer.setPixelData(
137 QByteArray(ptr.asstring()),
ptr = None
ptr.setsize undefined
returned_image = <PyQt5.QtGui.QImage object>
returned_image.byteCount =
AttributeError: 'NoneType' object has no attribute 'setsize'
__cause__ = None
__class__ = <class 'AttributeError'>
__context__ = None
__delattr__ = <method-wrapper '__delattr__' of AttributeError object>
__dict__ = {}
__dir__ =
__doc__ = 'Attribute not found.'
__eq__ = <method-wrapper '__eq__' of AttributeError object>
__format__ =
__ge__ = <method-wrapper '__ge__' of AttributeError object>
__getattribute__ = <method-wrapper '__getattribute__' of AttributeError object>
__gt__ = <method-wrapper '__gt__' of AttributeError object>
__hash__ = <method-wrapper '__hash__' of AttributeError object>
__init__ = <method-wrapper '__init__' of AttributeError object>
__init_subclass__ =
__le__ = <method-wrapper '__le__' of AttributeError object>
__lt__ = <method-wrapper '__lt__' of AttributeError object>
__ne__ = <method-wrapper '__ne__' of AttributeError object>
__new__ =
__reduce__ =
__reduce_ex__ =
__repr__ = <method-wrapper '__repr__' of AttributeError object>
__setattr__ = <method-wrapper '__setattr__' of AttributeError object>
__setstate__ =
__sizeof__ =
__str__ = <method-wrapper '__str__' of AttributeError object>
__subclasshook__ =
__suppress_context__ = False
__traceback__ =
args = ("'NoneType' object has no attribute 'setsize'",)
with_traceback =

The above is a description of an error in a Python program. Here is
the original traceback:

Traceback (most recent call last):
File "C:\Program Files\Krita (x64)\share\krita\pykrita\koi\koi.py", line 135, in pingServer
ptr.setsize(returned_image.byteCount())
AttributeError: 'NoneType' object has no attribute 'setsize'

Colab is spitting out "INFO:werkzeug:127.0.0.1 - - [05/Sep/2022 14:34:43] "POST / HTTP/1.1" 404 -"

I am getting a new blank layer in Krita every time I try to run.

I can't figure out what I am doing wrong or if this is a bug or what.

Koi doesn't ignore hidden layers

Likely a common pitfall right now, but it definitely seems like a bug. When clicking dream, I expect it to use the visible canvas as the img2img source; instead, it acts as if all layers are visible and runs img2img from that.

Error using colab. Image unable to be transferred back to krita with 'BadZipFile' error.

Tried running the example notebook to use colab for the GPU compute. Setup of the server works just fine, opened fresh install of krita, pasted in the address for the server and clicked 'dream' with the default mountain landscape prompt. An error on colab and krita was produced (see below).

It seems that the inference runs fine, so the Stable Diffusion code and the connection to Krita seem to work. The error seems to occur when the generated image is passed back to Krita. I am running Krita version 5.1.1 (AppImage) and my OS is Fedora 36 with Linux kernel 5.19. The Colab notebook is an unmodified copy of the one included in the koi repo (https://github.com/nousr/koi/blob/main/koi_colab_backend.ipynb).

I note that in the Krita error message it is using the miniconda Python installed on my system; could that be an issue?
Any help appreciated!

Colab error message:

100%
9/9 [00:06<00:00, 2.94it/s]

ERROR:__main__:Exception on /api/img2img [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "<ipython-input-1-afcf33d6615b>", line 82, in img2img
    )["sample"][0]
  File "/usr/local/lib/python3.7/dist-packages/diffusers/utils/outputs.py", line 88, in __getitem__
    return inner_dict[k]
KeyError: 'sample'
INFO:werkzeug:127.0.0.1 - - [21/Oct/2022 13:08:02] "POST /api/img2img HTTP/1.1" 500 -

And then this error on krita:

BadZipFile
Python 3.8.1: /home/$USER/miniconda3/bin/python3
Fri Oct 21 14:08:02 2022

A problem occurred in a Python script.  Here is the sequence of
function calls leading up to the error, in the order they occurred.

 /home/$USER/.local/share/krita/pykrita/koi/koi.py in pingServer(self=<koi.koi.Koi object>)
  157         # wait for response and read image
  158         with request.urlopen(response_url, timeout=self._get_timeout()) as response:
  159             archive = ZipFile(BytesIO(response.read()))
  160             filenames = archive.namelist()
  161             for name in filenames:
archive undefined
global ZipFile = <class 'zipfile.ZipFile'>
global BytesIO = <class '_io.BytesIO'>
response = <http.client.HTTPResponse object>
response.read = <bound method HTTPResponse.read of <http.client.HTTPResponse object>>

 /tmp/.mount_krita-sL66r9/usr/lib/python3.8/zipfile.py in __init__(self=<zipfile.ZipFile [closed]>, file=<_io.BytesIO object>, mode='r', compression=0, allowZip64=True, compresslevel=None, strict_timestamps=True)
 1265         try:
 1266             if mode == 'r':
 1267                 self._RealGetContents()
 1268             elif mode in ('w', 'x'):
 1269                 # set the modified flag so central directory gets written
self = <zipfile.ZipFile [closed]>
self._RealGetContents = <bound method ZipFile._RealGetContents of <zipfile.ZipFile [closed]>>

 /tmp/.mount_krita-sL66r9/usr/lib/python3.8/zipfile.py in _RealGetContents(self=<zipfile.ZipFile [closed]>)
 1332             raise BadZipFile("File is not a zip file")
 1333         if not endrec:
 1334             raise BadZipFile("File is not a zip file")
 1335         if self.debug > 1:
 1336             print(endrec)
global BadZipFile = <class 'zipfile.BadZipFile'>
BadZipFile: File is not a zip file
    __cause__ = None
    __class__ = <class 'zipfile.BadZipFile'>
    __context__ = None
    __delattr__ = <method-wrapper '__delattr__' of BadZipFile object>
    __dict__ = {}
    __dir__ = <built-in method __dir__ of BadZipFile object>
    __doc__ = None
    __eq__ = <method-wrapper '__eq__' of BadZipFile object>
    __format__ = <built-in method __format__ of BadZipFile object>
    __ge__ = <method-wrapper '__ge__' of BadZipFile object>
    __getattribute__ = <method-wrapper '__getattribute__' of BadZipFile object>
    __gt__ = <method-wrapper '__gt__' of BadZipFile object>
    __hash__ = <method-wrapper '__hash__' of BadZipFile object>
    __init__ = <method-wrapper '__init__' of BadZipFile object>
    __init_subclass__ = <built-in method __init_subclass__ of type object>
    __le__ = <method-wrapper '__le__' of BadZipFile object>
    __lt__ = <method-wrapper '__lt__' of BadZipFile object>
    __module__ = 'zipfile'
    __ne__ = <method-wrapper '__ne__' of BadZipFile object>
    __new__ = <built-in method __new__ of type object>
    __reduce__ = <built-in method __reduce__ of BadZipFile object>
    __reduce_ex__ = <built-in method __reduce_ex__ of BadZipFile object>
    __repr__ = <method-wrapper '__repr__' of BadZipFile object>
    __setattr__ = <method-wrapper '__setattr__' of BadZipFile object>
    __setstate__ = <built-in method __setstate__ of BadZipFile object>
    __sizeof__ = <built-in method __sizeof__ of BadZipFile object>
    __str__ = <method-wrapper '__str__' of BadZipFile object>
    __subclasshook__ = <built-in method __subclasshook__ of type object>
    __suppress_context__ = False
    __traceback__ = <traceback object>
    __weakref__ = None
    args = ('File is not a zip file',)
    with_traceback = <built-in method with_traceback of BadZipFile object>

The above is a description of an error in a Python program.  Here is
the original traceback:

Traceback (most recent call last):
  File "/home/$USER/.local/share/krita/pykrita/koi/koi.py", line 159, in pingServer
    archive = ZipFile(BytesIO(response.read()))
  File "/tmp/.mount_krita-sL66r9/usr/lib/python3.8/zipfile.py", line 1267, in __init__
    self._RealGetContents()
  File "/tmp/.mount_krita-sL66r9/usr/lib/python3.8/zipfile.py", line 1334, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

ImportError from diffusers in Google Colab

Hi,

I'm trying to install the server instance on a Google Colab playbook (on a GPU instance), but somehow fail to do so.

When running server.py, I encounter this error: ImportError: cannot import name 'StableDiffusionImg2ImgPipeline' from 'diffusers', which seems to indicate a circular import error.

Here is what I came up with, with the help of a Colab showing how to use Diffusers:

# I check if the GPU is wired up to our instance
!nvidia-smi

# I install the basics
!pip install diffusers transformers scipy ftfy
!pip install "ipywidgets>=7,<8"

# I import Google Colab widgets to be able to authenticate to Hugging Face
from google.colab import output
output.enable_custom_widget_manager()
from huggingface_hub import notebook_login
notebook_login()

# I install your package and try to run it
!git clone https://github.com/nousr/koi.git
# (somehow, I need to "cd koi" every time, else I'm back to the root directory)
!cd koi; pip install -e .
!cd koi; python server.py

I tried disconnecting from the execution environment as I played with the example code from Hugging Face before, which would wipe every stored variable and every file... But sadly, that doesn't do the trick...

Trying to run this locally though, I managed to run your script as intended... until it obviously crashed, as I only have 2 GB of VRAM (you can't blame me for trying ahah). So it has to do with either my way of doing this, or with Google Colab (which I'm very new to) needing something else or something more to run this...

Do you have any clue as to what could fail?

Thank you in advance, and don't hesitate to use this base I wrote if you want to add a "plug-and-play playbook" to the repo, as you said on Twitter ;)

Cheers!

KeyError: 'HTTP_IMAGE_STRENGTH' on request

I'm getting this Python error on the Colab server on each request:

ERROR:__main__:Exception on /api/img2img [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "<ipython-input-2-d662ef371448>", line 62, in img2img
    strength=float(headers["image_strength"]),
  File "/usr/local/lib/python3.7/dist-packages/werkzeug/datastructures.py", line 1381, in __getitem__
    return _unicodify_header_value(self.environ[f"HTTP_{key}"])
KeyError: 'HTTP_IMAGE_STRENGTH'

These are the request headers sent from Krita:

HEADERS: 
Host: cc66-35-224-131-209.ngrok.io
User-Agent: Python-urllib/3.8
Transfer-Encoding: chunked
Accept-Encoding: identity
Content-Type: application/octet-stream
Prompt: A beautiful mountain landscape in the style of greg rutkowski, oils on canvas.
Prompt-Strength: 7.5
Seed: 1337
Sketch-Strength: 0.6000000000000001
Steps: 32
X-Forwarded-For: <MY_IP>
X-Forwarded-Proto: http

Any ideas on how to solve this?
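
A possible server-side workaround (a sketch only; the header names mirror the ones shown above, and this is not a confirmed fix) would be to accept either name for the strength value:

def read_strength(headers, default=0.6):
    """Accept either an "image_strength" or a "sketch_strength" header."""
    value = headers.get("image_strength") or headers.get("sketch_strength")
    return float(value) if value is not None else default

# in the Flask handler: strength=read_strength(request.headers)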

No CUDA GPUs available

When I run the last part of the code in koi_colab_backend.ipynb on free Google Colab, this error message is shown:

RuntimeError                              Traceback (most recent call last)

[<ipython-input-1-d8ad22a0a101>](https://localhost:8080/#) in <module>
     16 pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
     17     "CompVis/stable-diffusion-v1-4", use_auth_token=True
---> 18 ).to("cuda")
     19 
     20 secho("Finished!", fg="green")

6 frames

[/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py](https://localhost:8080/#) in _lazy_init()
    215         # This function throws if there's a driver initialization error, no GPUs
    216         # are found or any other error occurs
--> 217         torch._C._cuda_init()
    218         # Some of the queued calls may reentrantly call _lazy_init();
    219         # we need to just return without initializing in that case.

RuntimeError: No CUDA GPUs are available

That's strange, because I can run the pharma SD notebook without problems on free Google colab. This notebook uses CUDA too.

I then removed CUDA from this line, and no error occurs.

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", use_auth_token=True
)

Right now I just started the server, but didn't try to generate an image. So, I'm not sure if this still works after my change.

There are two more places where CUDA is used:

torch.cuda.manual_seed(seed)
  with autocast("cuda"):
        return_image = pipe(
            init_image=img,
            prompt=headers["prompt"],
            strength=float(headers["sketch_strength"]),
            guidance_scale=float(headers["prompt_strength"]),
            num_inference_steps=int(headers["steps"]),
        )["sample"][0]

I now have three questions:

  1. Why isn't it possible to use CUDA although the GPU supports it in general?
  2. Will image generation still work after my change described above?
  3. How do the code parts posted above have to be changed if CUDA can't be used? (A sketch follows below.)

Thanks!
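
Regarding question 3, a device-agnostic sketch of the posted snippets (assuming the rest of the notebook stays the same; a CPU-only runtime will be very slow):

import torch
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", use_auth_token=True
).to(device)

# torch.cuda.manual_seed(seed) and autocast("cuda") also assume a GPU; on CPU you
# could call torch.manual_seed(seed) instead and skip the autocast("cuda") block.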

[Bug] Encoding Non-Unicode Prompt Characters

If you put this in the prompt: "in the style of zdzisław beksiński", the plugin fails with this error:

UnicodeEncodeError
Python 3.8.1: D:\Program Files\Krita (x64)\bin\krita.exe
Sun Sep  4 13:54:27 2022

A problem occurred in a Python script.  Here is the sequence of
function calls leading up to the error, in the order they occurred.

 C:\Users\MY_USER\AppData\Roaming\krita\pykrita\koi\koi.py in pingServer(self=<koi.koi.Koi object>)
  121 
  122         # wait for response and read image
  123         with request.urlopen(r, timeout=60) as response:
  124             returned_image = QImage.fromData(response.read())
  125 
global request = <module 'urllib.request' from 'D:\\Program Files...x64)\\python\\python38.zip\\urllib\\request.pyc'>
request.urlopen = <function urlopen>
r = <urllib.request.Request object>
timeout undefined
response undefined

 d:\Program Files\Krita (x64)\urllib\request.py in urlopen(url=<urllib.request.Request object>, data=None, timeout=60, cafile=None, capath=None, cadefault=False, context=None)

...
UnicodeEncodeError: 'latin-1' codec can't encode character '\u0142' in position 78: ordinal not in range(256)
    __cause__ = None
    __class__ = <class 'UnicodeEncodeError'>
    __context__ = None
    __delattr__ = <method-wrapper '__delattr__' of UnicodeEncodeError object>
....
    __suppress_context__ = False
    __traceback__ = <traceback object>
    args = ('latin-1', 'a powerful wizard; cracked open sphere; glowing ...painting; magic the gathering art; fine detailed;', 78, 79, 'ordinal not in range(256)')
    encoding = 'latin-1'
    end = 79
    object = 'a powerful wizard; cracked open sphere; glowing ...painting; magic the gathering art; fine detailed;'
    reason = 'ordinal not in range(256)'
    start = 78
    with_traceback = <built-in method with_traceback of UnicodeEncodeError object>

The above is a description of an error in a Python program.  Here is
the original traceback:

Traceback (most recent call last):
  File "C:\Users\MY_USER\AppData\Roaming\krita\pykrita\koi\koi.py", line 123, in pingServer
    with request.urlopen(r, timeout=60) as response:
  File "urllib\request.py", line 222, in urlopen
  File "urllib\request.py", line 525, in open
  File "urllib\request.py", line 542, in _open
  File "urllib\request.py", line 502, in _call_chain
  File "urllib\request.py", line 1348, in http_open
  File "urllib\request.py", line 1319, in do_open
  File "http\client.py", line 1230, in request
  File "http\client.py", line 1271, in _send_request
  File "http\client.py", line 1203, in putheader
UnicodeEncodeError: 'latin-1' codec can't encode character '\u0142' in position 78: ordinal not in range(256)
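
One way this could be worked around (a sketch, not the plugin's actual fix) is to percent-encode the prompt before it goes into the HTTP header on the Krita side and decode it again on the server:

from urllib.parse import quote, unquote

prompt = "in the style of zdzisław beksiński"

safe_prompt = quote(prompt, safe="")   # ASCII-only, so latin-1 header encoding succeeds
# on the server, before passing the prompt to the pipeline:
print(unquote(safe_prompt) == prompt)  # True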

Hugging Face token not found

I am trying to run locally, but when I run python server.py I get this message:

OSError: You specified use_auth_token=True, but a Hugging Face token was not found.

I am searching for ways to set the auth token; any help would be appreciated.

Feature Roadmap

-- moved to unplanned in light of more pressing projects --

colab runtime error with pillow

RuntimeError: Failed to import transformers.models.clip.feature_extraction_clip because of the following error (look up to see its traceback):
module 'PIL.Image' has no attribute 'Resampling'

If you are getting this, stay tuned for a fix!

No external IP

I no longer get an external IP; it worked before. Instead, this error message is shown. I already restarted and also dropped the runtime, but it didn't help.

ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.

Finished!
 * Serving Flask app '__main__'
 * Debug mode: off

INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5000/
INFO:werkzeug:Press CTRL+C to quit
Exception in thread Thread-12:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.7/threading.py", line 1177, in run
    self.function(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.7/dist-packages/flask_ngrok.py", line 70, in start_ngrok
    ngrok_address = _run_ngrok()
  File "/usr/local/lib/python3.7/dist-packages/flask_ngrok.py", line 38, in _run_ngrok
    tunnel_url = j['tunnels'][0]['public_url']  # Do the parsing of the get
IndexError: list index out of range

Thanks!

CUDA error

I try to run dream and generate an image, I get this, and I don't know how to fix it:

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Generated image not shown in Krita

I have installed the koi plugin in Krita, and SD is running on Colab. I created a simple sketch in Krita and clicked on "Dream". I can see that SD is generating an image, but the resulting image isn't shown in Krita.

Am I right in guessing that a new layer showing the generated image should be created in Krita?

Any ideas what's wrong?

Thanks!

Error getting image to krita after successful generation on collab

After image generation on Colab I get this log:
100%
19/19 [00:05<00:00, 3.70it/s]
ERROR:__main__:Exception on /api/img2img [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "", line 76, in img2img
    )["sample"][0]
  File "/usr/local/lib/python3.7/dist-packages/diffusers/utils/outputs.py", line 88, in __getitem__
    return inner_dict[k]
KeyError: 'sample'
INFO:werkzeug:127.0.0.1 - - [03/Nov/2022 14:42:37] "POST /api/img2img HTTP/1.1" 500 -

I think the image generated successfully, but it somehow cannot be transferred to my PC. How can I fix this?
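
For reference, the KeyError comes from the Colab cell indexing the pipeline output with ["sample"], while newer diffusers versions expose the result as an images attribute instead. A small helper (a sketch, not a confirmed fix) could cover both:

def first_image(output):
    """Return the first generated image from a diffusers pipeline output, whether it
    exposes .images (newer diffusers) or a "sample" key (older diffusers)."""
    if hasattr(output, "images"):
        return output.images[0]
    return output["sample"][0]

# in the notebook cell: return_image = first_image(pipe(...))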

Error when running the script

Hey, first off, thanks for this contribution.
I tried running from Colab and I got this error as soon as I hit dream for the first time.

AttributeError
Python 3.8.1: C:\Program Files\Krita (x64)\bin\krita.exe
Sat Sep 3 13:17:42 2022

A problem occurred in a Python script. Here is the sequence of
function calls leading up to the error, in the order they occurred.

C:\Users\jason\AppData\Roaming\krita\pykrita\koi\koi.py in pingServer(self=<koi.koi.Koi object>)
133 # get a pointer to the image's bits and add them to the new layer
134 ptr = returned_image.bits()
135 ptr.setsize(returned_image.byteCount())
136 dream_layer.setPixelData(
137 QByteArray(ptr.asstring()),
ptr = None
ptr.setsize undefined
returned_image = <PyQt5.QtGui.QImage object>
returned_image.byteCount =
AttributeError: 'NoneType' object has no attribute 'setsize'
__cause__ = None
__class__ = <class 'AttributeError'>
__context__ = None
__delattr__ = <method-wrapper '__delattr__' of AttributeError object>
__dict__ = {}
__dir__ =
__doc__ = 'Attribute not found.'
__eq__ = <method-wrapper '__eq__' of AttributeError object>
__format__ =
__ge__ = <method-wrapper '__ge__' of AttributeError object>
__getattribute__ = <method-wrapper '__getattribute__' of AttributeError object>
__gt__ = <method-wrapper '__gt__' of AttributeError object>
__hash__ = <method-wrapper '__hash__' of AttributeError object>
__init__ = <method-wrapper '__init__' of AttributeError object>
__init_subclass__ =
__le__ = <method-wrapper '__le__' of AttributeError object>
__lt__ = <method-wrapper '__lt__' of AttributeError object>
__ne__ = <method-wrapper '__ne__' of AttributeError object>
__new__ =
__reduce__ =
__reduce_ex__ =
__repr__ = <method-wrapper '__repr__' of AttributeError object>
__setattr__ = <method-wrapper '__setattr__' of AttributeError object>
__setstate__ =
__sizeof__ =
__str__ = <method-wrapper '__str__' of AttributeError object>
__subclasshook__ =
__suppress_context__ = False
__traceback__ =
args = ("'NoneType' object has no attribute 'setsize'",)
with_traceback =

The above is a description of an error in a Python program. Here is
the original traceback:

Traceback (most recent call last):
File "C:\Users\jason\AppData\Roaming\krita\pykrita\koi\koi.py", line 135, in pingServer
ptr.setsize(returned_image.byteCount())
AttributeError: 'NoneType' object has no attribute 'setsize'

I don't have a clue about most of this stuff; I just followed along as best I could.
I would really appreciate some direction.
The canvas size is 512 by 512 and there is one layer, as suggested.

Running with local optimized installation

First of all, great work with the plugin! It really makes the process of working with img2img more frictionless and enjoyable. Thanks for the effort!

I'd like to ask if you think that an option to run with a local installation (not diffusers-based, like the default one from CompVis) would be feasible in the current state of the plugin.
I ask because my local setup is pretty limited (GTX 1050 Ti), but I've managed to run the optimized version without issues on my machine (basically like the original one, but with inference precision set to half; options for this include basujinda's optimized version, Waifu Diffusion, or hlky's fork), and I'd like to continue using it as the backend for koi. Since it also has options for other sampling methods, I think it would solve issue #6 as well, and also make offline use possible.

If you think that's possible, even with some work from my side, I'm willing to try. Just let me know :)

[Feature request] Inpainting/outpainting

It seems pretty important to be able to create a dynamic mask indicating where you want the AI to generate, then provide the whole image and that area as context.

Perhaps it could be accomplished with Krita's selection mask system, where you'd quickly paint over areas you want affected (with a soft brush to have it take certain areas into account more). Other inpainting tools work this way, I believe.

CUDA out of memory

I have a 6 GB VRAM card.

How do I fix this issue? RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 5.81 GiB total capacity; 3.14 GiB already allocated; 780.44 MiB free; 3.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
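
Two generic VRAM-saving tweaks that are often suggested for diffusers pipelines (a hedged sketch of the pipeline setup, not a change shipped with koi, and assuming a diffusers version that supports attention slicing and the fp16 weights branch):

import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",               # half-precision weights roughly halve VRAM use
    torch_dtype=torch.float16,
    use_auth_token=True,
).to("cuda")

pipe.enable_attention_slicing()    # compute attention in chunks for lower peak memory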
