
henk717 / koboldai


This project forked from koboldai/koboldai-client


KoboldAI is generative AI software optimized for fictional use, but capable of much more!

Home Page: http://koboldai.com

License: GNU Affero General Public License v3.0

Python 5.02% Batchfile 0.05% CSS 0.55% JavaScript 1.09% HTML 2.22% Dockerfile 0.01% Shell 0.12% Lua 0.40% Haxe 0.01% PowerShell 0.01% Jupyter Notebook 0.11% Less 0.11% SCSS 0.11% Stylus 0.10% CMake 0.10% Makefile 0.06% C 12.11% C++ 77.32% Swift 0.09% Objective-C 0.43%
generativeai story-generation text-generation text-generation-webui

koboldai's People

Contributors

0cc4m, alpindale, catboxanon, cohee1207, db0, disty0, ebolam, funkengine2023, guiaworld, henk717, javalar, jojorne, koboldai, lightsaveus, lostruins, marcusllewellyn, mightyalex200, mrseeker, nerodiafasciata, nkpz, one-some, pi6am, recoveredapparatus, scott-ca, uwuplus, vfbd, viningr, waffshappen, yellowrosecx, zurnaz



koboldai's Issues

Run KoboldAI without Cloudflare

Hey, I have built my own Docker container based on the standalone and ROCm containers from here, and it is working so far, but I would really like to run this without Cloudflare, just locally.
If I try to access the web UI locally with the local IP and the assigned port, I just get a "not found" error. Running it through the link from the logs (a trycloudflare link) works without problems, but I don't want this running through Cloudflare; I just want local access.

So how can i do this?

Here is the Dockerfile I used:
FROM debian
RUN apt update && apt install wget aria2 git bzip2 -y
RUN git clone --recursive https://github.com/henk717/koboldai /opt/koboldai
WORKDIR /opt/koboldai
RUN ./install_requirements.sh rocm
COPY docker-helper.sh /opt/koboldai/docker-helper.sh
RUN chmod +x /opt/koboldai/docker-helper.sh
EXPOSE 5000/tcp
CMD /opt/koboldai/docker-helper.sh

And the docker-helper.sh script:
#!/bin/bash
cd /opt/koboldai

if [[ -n update ]]; then
    git pull --recurse-submodules && ./install_requirements.sh rocm
    git submodule update --init --recursive
fi

if [[ ! -v KOBOLDAI_DATADIR ]]; then
    mkdir /content
    KOBOLDAI_DATADIR=/content
fi

mkdir $KOBOLDAI_DATADIR/stories
mkdir $KOBOLDAI_DATADIR/settings
mkdir $KOBOLDAI_DATADIR/softprompts
mkdir $KOBOLDAI_DATADIR/userscripts
mkdir $KOBOLDAI_DATADIR/presets
mkdir $KOBOLDAI_DATADIR/themes

cp -rn stories/* $KOBOLDAI_DATADIR/stories/
cp -rn userscripts/* $KOBOLDAI_DATADIR/userscripts/
cp -rn softprompts/* $KOBOLDAI_DATADIR/softprompts/
cp -rn presets/* $KOBOLDAI_DATADIR/presets/
cp -rn themes/* $KOBOLDAI_DATADIR/themes/

if [[ -v KOBOLDAI_MODELDIR ]]; then
    mkdir $KOBOLDAI_MODELDIR/models
    mkdir $KOBOLDAI_MODELDIR/functional_models
    rm models
    rm -rf models/
    ln -s $KOBOLDAI_MODELDIR/models/ models
    ln -s $KOBOLDAI_MODELDIR/functional_models/ functional_models
fi

for FILE in $KOBOLDAI_DATADIR*
do
    FILENAME="$(basename $FILE)"
    rm /opt/koboldai/$FILENAME
    rm -rf /opt/koboldai/$FILENAME
    ln -s $FILE /opt/koboldai/
done

PYTHONUNBUFFERED=1 ./play-rocm.sh --remote --quiet --override_delete --override_rename

The script is located right beside the Dockerfile.
The finished Docker container can be found at https://hub.docker.com/r/joly0/koboldai-rocm
And here is the docker run command:
docker run -d --name='KoboldAI' --net='bridge' -p '5000:5000/tcp' -v '/koboldai-content/':'/content':'rw' --device='/dev/kfd' --device='/dev/dri' 'joly0/koboldai-rocm'

Latest play.sh returns Flask Errors

Launching KoboldAI with the play.sh script on Linux installs the environment without errors; however, every request to the server then fails with an Error 500 and returns the following Flask stacktrace.
This is an Ubuntu machine.

AttributeError: 'Flask' object has no attribute 'session_cookie_name'
[2023-04-29 02:31:11,053] ERROR in app: Request finalizing failed with an error while handling an error
Traceback (most recent call last):
  File "/home/privateger/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/flask/app.py", line 2189, in wsgi_app
    ctx.push()
  File "/home/privateger/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/flask/ctx.py", line 377, in push
    self.session = session_interface.open_session(self.app, self.request)
  File "/home/privateger/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/flask_session/sessions.py", line 329, in open_session
    sid = request.cookies.get(app.session_cookie_name)
AttributeError: 'Flask' object has no attribute 'session_cookie_name'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/privateger/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/flask/app.py", line 1508, in finalize_request
    response = self.process_response(response)
  File "/home/privateger/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/flask/app.py", line 2005, in process_response
    self.session_interface.save_session(self, ctx.session, response)
  File "/home/privateger/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/flask_session/sessions.py", line 353, in save_session
    if session.modified:
AttributeError: 'NoneType' object has no attribute 'modified'
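
A likely cause (an assumption, not confirmed in this thread): Flask 2.3 removed the long-deprecated Flask.session_cookie_name attribute, which flask-session 0.4.0 still reads in open_session(), matching both tracebacks above. Pinning Flask below 2.3 (or upgrading flask-session) should avoid it; alternatively, a minimal compatibility shim along these lines restores the attribute:

# Hypothetical shim, assuming the failure is Flask 2.3's removal of the
# deprecated session_cookie_name attribute that flask-session 0.4.0 expects.
import flask

if not hasattr(flask.Flask, "session_cookie_name"):
    # Map the removed attribute back onto the config key Flask still supports.
    flask.Flask.session_cookie_name = property(
        lambda self: self.config["SESSION_COOKIE_NAME"]
    )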

Running it on Colab, settings don't get saved

For example, when I'm running the official version of KoboldAI, my active soft prompt is saved and activated automatically the next time I start it. But when I'm running this fork, the soft prompt always defaults to none and has to be activated manually.

Streaming does not work with SillyTavern

I have "token streaming" enabled but when connected to the api with sillytavern I get this error when trying to generate a response:
image

Streaming to sillytavern does work with koboldcpp

Edit: I've noticed that even though I have "token streaming" on, when I make a request to the api the token streaming field automatically switches back to off.

add CONTRIBUTING.md

Not sure if this should go to your fork or the main repo.

Some information about how to contribute to the project would make it a little easier for people to contribute and be confident that their PR could get accepted. I assume you're pretty busy, so I wouldn't mind writing up a draft and submitting a PR with a first pass.

Some information that should probably be included:

  • Where to submit PRs (This repo vs the main one)
  • PR format
  • Who to add as reviewers on a PR
  • Expectations for the review timeline ("It'll take at least a week before you hear back", etc.)
  • Who should potential changes be discussed with before submitting PRs?
  • Any code style requirements or a linter config
  • Where to go to chat about changes
  • Location of the roadmap if that exists
  • CI/CD details, if those exist
  • High level architecture pieces

Again, happy to draft this up if you're busy.

[Feature Request] Stream tokens character by character

As of right now, generated tokens are displayed immediately if token streaming is enabled. This, however, can look rather choppy on slower devices.

To counteract this, I'm proposing a small change to the token streaming feature: instead of showing the whole token immediately, show its characters one after another.

This change would provide a smoother and more consistent user experience, as well as create the illusion of faster generation, which benefits everyone.
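
A minimal sketch of the proposed behavior, assuming a callback-based UI update hook (the send callback and delay value are hypothetical, not the project's actual API):

import time

def emit_token_smoothly(token: str, send, char_delay: float = 0.02) -> None:
    # Instead of pushing the whole token to the UI at once, emit it one
    # character at a time with a small delay, which is the smoothing this
    # request proposes.
    for ch in token:
        send(ch)
        time.sleep(char_delay)

# Usage sketch: emit_token_smoothly(" world", send=print)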

Traceback when http:// not included in KoboldAI URL

requests.exceptions.InvalidSchema: No connection adapters were found for 'localhost:5001/api/v1/model'

The server really doesn't like it if you forget to specify http:// in the KoboldAI URL. The UI locks up with a generic "loading" message and the above error is emitted.
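
A hedged sketch of a defensive fix (the function name is hypothetical): normalize the user-supplied URL before handing it to requests, so a missing scheme no longer raises InvalidSchema:

def normalize_base_url(url: str) -> str:
    # Assume http:// when no scheme is given, so 'localhost:5001' becomes
    # 'http://localhost:5001' instead of crashing the request layer.
    if not url.startswith(("http://", "https://")):
        return "http://" + url
    return url

# normalize_base_url("localhost:5001/api/v1/model") -> "http://localhost:5001/api/v1/model"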

Verbose logging for GPU support

I have been attempting to get the ROCm version of Kobold running on Linux, but each time it says my GPU is not supported and switches to CPU-only mode. I'm fairly sure I have ROCm installed correctly, but even if I don't, I doubt any of the people who know about ROCm would be able to help with the Kobold use case.
Is it possible to get some verbose logging on why exactly the GPU isn't supported? The console just states that it isn't supported, rather than why and which error caused it. This would greatly improve the ability to debug problems with local hosting. If such a feature already exists, please let me know, because it isn't documented anywhere.
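
In the meantime, a short diagnostic along these lines (assuming the ROCm build of PyTorch is what the installer set up) can at least show whether PyTorch itself sees the GPU:

import torch

print("torch:", torch.__version__)
print("HIP runtime:", getattr(torch.version, "hip", None))  # None on CUDA/CPU builds
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))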

Max context length is broken

I directly used AutoTokenizer and GPT2Tokenizer from transformers (which KAI also uses, based on the source code) with from_pretrained() (which KAI also uses) and pointed them at the model I loaded.

I used the API to create a /generate request. I used the example text given there and simply copied it many times to get enough context length, so there is no escape character or anything else in it that could cause problems.

I added the line "max_context_length": 1500 (and various other numbers, to be sure KAI uses this parameter).

The result is always the same: no matter whether I use AutoTokenizer, GPT2Tokenizer, or any other tokenizer, I get 1419 tokens instead of 1500 in this example.

So it is clear KAI didn't cut at 1500 tokens where it should. I don't see where there could be an error in my test script:

token_count_test.py.txt
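
The attached script isn't reproduced here, but an independent count along these lines (the model path and prompt are placeholders) is enough to compare against what /api/v1/generate actually keeps:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("models/your-model")  # hypothetical path
prompt = "example text " * 500  # long enough to exceed max_context_length
ids = tokenizer(prompt)["input_ids"]
print(len(ids))  # compare against the context size KAI reports after truncation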

If OpenAI decides to send an empty response in UI1, it will error and load forever.

The error is below; it seems very similar to the issue on UI2.

Exception in thread Thread-114:
Traceback (most recent call last):
  File "/notebooks/runtime/envs/koboldai/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/notebooks/runtime/envs/koboldai/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/notebooks/runtime/envs/koboldai/lib/python3.8/site-packages/socketio/server.py", line 731, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
  File "/notebooks/runtime/envs/koboldai/lib/python3.8/site-packages/socketio/server.py", line 756, in _trigger_event
    return self.handlers[namespace][event](*args)
  File "/notebooks/runtime/envs/koboldai/lib/python3.8/site-packages/flask_socketio/__init__.py", line 282, in _handler
    return self._handle_event(handler, message, namespace, sid,
  File "/notebooks/runtime/envs/koboldai/lib/python3.8/site-packages/flask_socketio/__init__.py", line 828, in _handle_event
    ret = handler(*args)
  File "aiserver.py", line 592, in g
    return f(*a, **k)
  File "aiserver.py", line 4245, in get_message
    actionsubmit(msg['data'], actionmode=msg['actionmode'])
  File "aiserver.py", line 4974, in actionsubmit
    calcsubmit("")
  File "aiserver.py", line 5377, in calcsubmit
    generate(subtxt, min, max, found_entries)
  File "aiserver.py", line 6331, in generate
    koboldai_vars.lua_koboldbridge.generated[i+1][koboldai_vars.generated_tkns] = int(genout[i, -1].item())
IndexError: index -1 is out of bounds for axis 1 with size 0
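
A hedged sketch of a guard for the failing line (the tensor shape follows the traceback; the helper name is hypothetical): an empty generation would then raise a clear error instead of an IndexError inside the Lua bridge callback:

import torch

def last_token(genout: torch.Tensor, i: int) -> int:
    # Guard the size-0 final axis that produces the IndexError above.
    if genout.shape[-1] == 0:
        raise ValueError("backend returned an empty generation")
    return int(genout[i, -1].item())

# last_token(torch.empty(1, 0, dtype=torch.long), 0) raises ValueError, not IndexError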

TypeError: set_module_tensor_to_device() takes from 3 to 4 positional arguments but 5 were given

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/opt/conda/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/kaggle/tmp/KoboldAI/aiserver.py", line 3227, in load_model
    move_model_to_devices(model)
  File "/kaggle/tmp/KoboldAI/aiserver.py", line 1131, in move_model_to_devices
    accelerate.utils.set_module_tensor_to_device(model, key, torch.device(value.device), value, target_dtype)
TypeError: set_module_tensor_to_device() takes from 3 to 4 positional arguments but 5 were given

This occurs when trying to run aiserver.py.

I'm running it as follows: python /kaggle/tmp/KoboldAI/aiserver.py --model PygmalionAI/pygmalion-6b --quiet --breakmodel_gpulayers 14,14

I'm using Kaggle, but several users on Reddit have also encountered this error, at least one on their personal computer.
https://www.reddit.com/r/PygmalionAI/comments/12a1sig/help/

A user in the thread traced the issue back to 4a8b099. I haven't used it in a few weeks, so I can't personally speak to when it first occurred.
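
The message suggests a version mismatch (an assumption, not confirmed here): the installed accelerate predates the dtype parameter of set_module_tensor_to_device, while KoboldAI passes five positional arguments. A tolerant call might look like this sketch (the wrapper name is hypothetical):

import inspect

import accelerate

def move_tensor(model, key, device, value, target_dtype):
    # Pass dtype by keyword where the installed accelerate supports it,
    # and fall back to casting the value on older releases.
    fn = accelerate.utils.set_module_tensor_to_device
    if "dtype" in inspect.signature(fn).parameters:
        fn(model, key, device, value=value, dtype=target_dtype)
    else:
        fn(model, key, device, value=value.to(target_dtype))

Upgrading accelerate so it matches what KoboldAI's code expects is likely the simpler fix.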

[BUG] Stuck like this for the past 2 hours

Updating to the latest KoboldAI United for full support
If you like a different version run the updater again once the update is complete
Reinitialized existing Git repository in C:/KoboldAI/.git/
Fetching origin
remote: Enumerating objects: 792, done.
remote: Counting objects: 100% (451/451), done.
remote: Compressing objects: 100% (128/128), done.
remote: Total 792 (delta 350), reused 380 (delta 310), pack-reused 341
Receiving objects: 100% (792/792), 898.41 KiB | 2.53 MiB/s, done.
Resolving deltas: 100% (516/516), completed with 14 local objects.
From https://github.com/henk717/koboldai

 * [new branch]      united            -> origin/united
 * [new tag]         Snapshot-7-5-2023 -> Snapshot-7-5-2023
Fetching submodule KoboldAI-Horde-Bridge
From https://github.com/db0/KoboldAI-Horde-Bridge
   d9014eb..20e8701  master            -> origin/master
Already on 'united'
'origin' is not recognized as an internal or external command,
operable program or batch file.
Fetching origin
Already on 'united'
HEAD is now at b5bdb1d Merge pull request #377 from ebolam/Model_Plugins
'branch' is not recognized as an internal or external command,
operable program or batch file.
Reinitialized existing Git repository in C:/KoboldAI/.git/
Fetching origin
From https://github.com/henk717/koboldai
 * [new branch]      united            -> origin/united
Already on 'united'
HEAD is now at b5bdb1d Merge pull request #377 from ebolam/Model_Plugins
Submodule path 'KoboldAI-Horde-Bridge': checked out '20e8701dd27d478ff405f4ac6e2042edf06174df'

                                           __
          __  ______ ___  ____ _____ ___  / /_  ____ _
         / / / / __ `__ \/ __ `/ __ `__ \/ __ \/ __ `/
        / /_/ / / / / / / /_/ / / / / / / /_/ / /_/ /
       / .___/_/ /_/ /_/\__,_/_/ /_/ /_/_.___/\__,_/
      /_/

nvidia/noarch 4.4kB @ 6.6kB/s 0.7s
pytorch/noarch 10.2kB @ 15.0kB/s 0.7s
pytorch/win-64 125.0kB @ 173.9kB/s 0.7s
nvidia/win-64 93.5kB @ 128.6kB/s 0.7s
pkgs/msys2/win-64 39.8kB @ 36.8kB/s 0.4s
pkgs/r/noarch 1.3MB @ 795.9kB/s 1.0s
pkgs/main/noarch 850.6kB @ 454.5kB/s 0.8s
pkgs/msys2/noarch 111.0 B @ 55.0 B/s 0.1s
pkgs/r/win-64 742.8kB @ 360.8kB/s 0.4s
pkgs/main/win-64 5.2MB @ 1.8MB/s 2.4s
conda-forge/noarch 13.1MB @ 3.6MB/s 3.5s
[+] 55.9s
conda-forge/win-64 ----------------------------------------------------------------- 1.7MB / ??.?MB @ 29.7kB/s 55.9s

error

PROMPT     @ 2023-08-13 01:21:50 | test
ERROR      | __main__:generate:3893 - Traceback (most recent call last):
  File "aiserver.py", line 3880, in generate
    genout, already_generated = tpool.execute(model.core_generate, txt, found_entries, gen_mode=gen_mode)
  File "C:\KoboldAI\miniconda3\lib\site-packages\eventlet\tpool.py", line 132, in execute
    six.reraise(c, e, tb)
  File "C:\KoboldAI\miniconda3\lib\site-packages\six.py", line 719, in reraise
    raise value
  File "C:\KoboldAI\miniconda3\lib\site-packages\eventlet\tpool.py", line 86, in tworker
    rv = meth(*args, **kwargs)
  File "C:\KoboldAI\modeling\inference_model.py", line 356, in core_generate
    result = self.raw_generate(
  File "C:\KoboldAI\modeling\inference_model.py", line 629, in raw_generate
    result = self._raw_generate(
  File "C:\KoboldAI\modeling\inference_models\hf_torch.py", line 331, in _raw_generate
    genout = self.model.generate(
  File "C:\KoboldAI\modeling\inference_models\gptq_hf_torch\class.py", line 371, in generate
    with torch.inference_mode(), torch.amp.autocast(device_type=self.device.type):
  File "C:\KoboldAI\miniconda3\lib\site-packages\auto_gptq\modeling\_base.py", line 431, in device
    device = [d for d in self.hf_device_map.values() if d not in {'cpu', 'disk'}][0]
IndexError: list index out of range
kobold_debug.zip
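
The indexing that fails lives in auto_gptq's device property: an hf_device_map with everything on cpu/disk makes the list comprehension empty. A hedged sketch of a clearer guard (an assumption about the failure mode; the function name is hypothetical, but the lookup mirrors the traceback):

def model_device(hf_device_map: dict) -> str:
    # Mirror auto_gptq's lookup, but fail with a readable message when no
    # module was actually placed on a GPU.
    gpu_devices = [d for d in hf_device_map.values() if d not in {"cpu", "disk"}]
    if not gpu_devices:
        raise RuntimeError(
            "hf_device_map has no GPU entries; the model loaded entirely to "
            "CPU/disk, which this GPTQ backend cannot generate from"
        )
    return gpu_devices[0]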

hfdownloader integration (?)

Hello,

I stumbled upon a Hugging Face model downloader written in Go that uses git LFS. It's fast: I get considerably faster transfer speeds than with aria2 or git-lfs, and it doesn't take 1,000 hours to reassemble the file like git-lfs does. The author distributes binaries for a number of platforms as well (though IIRC they're not statically linked; not sure about that, though). You may wish to switch to this from aria2, or add support for it.

https://github.com/bodaay/HuggingFaceModelDownloader

New UI: Pasting is broken.

KoboldAI#278

Another update: pasting can work to some extent, but it still requires text to be selected from the click-menu. Pasting now causes posts to be duplicated and merged.

In Chat mode, AI responses split across two actions break the "Message" chat display

To reproduce:

  1. Launch the server and client, and load a model
  2. Set the mode to Chat
  3. Begin chatting using a chat-tailored prompt
  4. Set the token Output Length to a small value
  5. Keep chatting until the AI terminates its output in the middle of a response.
  6. Submit an empty action so that the AI finishes its response.
  7. Set the "Chat Style" to "Messages"
  8. Notice that the response is split into two parts, the second of which is attributed to "System"

Expected result:
Everything following "Speaker:" up to the next "Other:" should be concatenated into the same message.

See these attached screenshots for an example of how it looks in Legacy and Messages modes.

Correct conversation in "Legacy" mode.
(screenshot)

Second part of the response is attributed to "System" in the "Message" chat style.
(screenshot)
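
A hypothetical post-processing sketch of the expected behavior (the function and turn pattern are assumptions, not the project's actual chat parser): merge an action into the previous one when it doesn't open a new "Name:" turn:

import re
from typing import List

def merge_split_replies(actions: List[str]) -> List[str]:
    # Append an action to the previous one when it does not start a new
    # 'Name:' turn, so a reply split across two generations renders as a
    # single message in the Messages chat style.
    turn_start = re.compile(r"^\s*[\w ]+:")
    merged: List[str] = []
    for action in actions:
        if merged and not turn_start.match(action):
            merged[-1] += action
        else:
            merged.append(action)
    return merged

# merge_split_replies(["Speaker: Hello, I was", " just thinking about you."])
# -> ["Speaker: Hello, I was just thinking about you."]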

KoboldAI performance when splitting layers between GPUs

I'm having some really weird performance issues when loading KoboldAI models onto multiple GPUs. I'm using the United version from https://github.com/henk717/KoboldAI, on Linux, installed with play.sh.

For testing, I will just use PygmalionAI_pygmalion-350m, a very small model. I load the model using the old UI.

Here is the performance when loading all layers onto one GPU.

INIT       | Info       | Final device configuration:
       DEVICE ID  |  LAYERS  |  DEVICE NAME
   (primary)   0  |      24  |  NVIDIA RTX A5000
               1  |       0  |  NVIDIA RTX A5000
               2  |       0  |  NVIDIA RTX A5000
               3  |       0  |  NVIDIA RTX A5000
               4  |       0  |  NVIDIA RTX A5000
             N/A  |       0  |  (Disk cache)
             N/A  |       0  |  (CPU)

Loading model tensors: 100%|##########| 389/389 [00:01<00:00, 346.13it/s]INFO       | __main__:load_model:3260 - Pipeline created: PygmalionAI_pygmalion-350m
INIT       | Starting   | LUA bridge
INIT       | OK         | LUA bridge
INIT       | Starting   | LUA Scripts
INIT       | OK         | LUA Scripts
Setting Seed
INFO       | __main__:do_connect:4165 - Client connected! UI_1
PROMPT     @ 2023-02-25 17:50:48 | Hi
INFO       | __main__:raw_generate:5763 - Generated 80 tokens in 2.45 seconds, for an average rate of 32.65 tokens per second.

The performance seems really good, but when I try to split the layers between GPUs, it degrades badly (100x slower).

INIT       | Info       | Final device configuration:
       DEVICE ID  |  LAYERS  |  DEVICE NAME
   (primary)   0  |       7  |  NVIDIA RTX A5000
               1  |       7  |  NVIDIA RTX A5000
               2  |       4  |  NVIDIA RTX A5000
               3  |       2  |  NVIDIA RTX A5000
               4  |       4  |  NVIDIA RTX A5000
             N/A  |       0  |  (Disk cache)
             N/A  |       0  |  (CPU)

Loading model tensors: 100%|##########| 389/389 [00:01<00:00, 336.48it/s]INFO       | __main__:load_model:3260 - Pipeline created: PygmalionAI_pygmalion-350m
INIT       | Starting   | LUA bridge
INIT       | OK         | LUA bridge
INIT       | Starting   | LUA Scripts
INIT       | OK         | LUA Scripts
Setting Seed
INFO       | __main__:do_connect:4165 - Client connected! UI_1
INFO       | __main__:do_connect:4165 - Client connected! UI_1
PROMPT     @ 2023-02-25 17:51:44 | HiHi
INFO       | __main__:raw_generate:5763 - Generated 6 tokens in 25.14 seconds, for an average rate of 0.24 tokens per second.

I expect the performance to be somewhat worse due to the overhead of communication between GPUs, but should it be that much worse 😓?
Also, it seems like no layers are stored in the disk cache, RAM, or CPU. Is this the expected behavior of KoboldAI, or some problem with my setup 🤔?

Missing condabin directory

I am using Windows 10.
I cloned the repo and installed Miniconda to a subdirectory, but when I try to run it using play.bat, I get an error on call miniconda3\condabin\activate, because the miniconda3\condabin directory is missing.
The installation steps were:

git clone https://github.com/henk717/KoboldAI
cd KoboldAI
./install_requirements.bat

I ran it as an administrator.
The output of the installation process is:

Errors? Rerun this as admin so it can add the needed LongPathsEnabled registery tweak.
Installer failed or crashed? Run it again so it can continue.
Only Windows 10 and higher officially supported, older Windows installations can't handle the paths.

The operation completed successfully.
Delete existing installation?
This is required if you are switching modes, or if you get dependency errors in the game.
1. Yes
2. No
Type the number of the desired option and then press ENTER:1
Which installation mode would you like?
1. Temporary Drive Letter (Mounts the folder as drive B:, more stable and portable)
2. Subfolder (Traditional method, can't run in folder paths that contain spaces)

Type the number of the desired option and then press ENTER:2

                                           __
          __  ______ ___  ____ _____ ___  / /_  ____ _
         / / / / __ `__ \/ __ `/ __ `__ \/ __ \/ __ `/
        / /_/ / / / / / / /_/ / / / / / / /_/ / /_/ /
       / .___/_/ /_/ /_/\__,_/_/ /_/ /_/_.___/\__,_/
      /_/

Empty environment created at prefix: miniconda3

                                           __
          __  ______ ___  ____ _____ ___  / /_  ____ _
         / / / / __ `__ \/ __ `/ __ `__ \/ __ \/ __ `/
        / /_/ / / / / / / /_/ / / / / / / /_/ / /_/ /
       / .___/_/ /_/ /_/\__,_/_/ /_/ /_/_.___/\__,_/
      /_/

nvidia/noarch                                        4.4kB @  36.9kB/s  0.1s
pytorch/win-64                                     125.2kB @ 841.0kB/s  0.2s
pytorch/noarch                                      10.2kB @  69.1kB/s  0.2s
nvidia/win-64                                       98.2kB @ 559.9kB/s  0.2s
pkgs/msys2/win-64                                   39.8kB @ 160.4kB/s  0.1s
pkgs/main/noarch                                   850.6kB @   2.0MB/s  0.3s
pkgs/msys2/noarch                                  111.0 B @ 221.0 B/s  0.1s
pkgs/r/noarch                                        1.3MB @   2.3MB/s  0.4s
pkgs/r/win-64                                      742.8kB @   1.0MB/s  0.3s
pkgs/main/win-64                                     5.2MB @   2.7MB/s  2.2s
conda-forge/noarch                                  13.2MB @   3.7MB/s  4.4s
conda-forge/win-64                                  21.1MB @   3.8MB/s  7.4s

Transaction

  Prefix: D:\Projects\text_generation\Pygmalion\KoboldAI\miniconda3

  Updating specs:

   - colorama
   - flask=2.2.3
   - flask-socketio=5.3.2
   - flask-session=0.4.0
   - python-socketio=5.7.2
   - pytorch=2.0
   - python=3.8
   - pytorch-cuda=11.8
   - eventlet=0.33.3
   - dnspython=2.2.1
   - markdown
   - bleach=4.1.0
   - pip
   - git=2.35.1
   - sentencepiece
   - protobuf
   - marshmallow[version='>=3.13']
   - apispec-webframeworks
   - loguru
   - termcolor
   - Pillow
   - psutil


  Package                               Version  Build                    Channel                 Size
--------------------------------------------------------------------------------------------------------
  Install:
--------------------------------------------------------------------------------------------------------

  + apispec                               6.3.0  pyhd8ed1ab_0             conda-forge/noarch      30kB
  + apispec-webframeworks                 0.5.2  pyhd8ed1ab_4             conda-forge/noarch      13kB
  + bidict                               0.22.1  pyhd8ed1ab_0             conda-forge/noarch      33kB
  + blas                                  2.117  mkl                      conda-forge/win-64      17kB
  + blas-devel                            3.9.0  17_win64_mkl             conda-forge/win-64      16kB
  + bleach                                4.1.0  pyhd8ed1ab_0             conda-forge/noarch     124kB
  + bzip2                                 1.0.8  h8ffe710_4               conda-forge/win-64     152kB
  + ca-certificates                    2023.5.7  h56e8100_0               conda-forge/win-64     149kB
  + cachelib                             0.10.2  pyhd8ed1ab_0             conda-forge/noarch      20kB
  + cffi                                 1.15.1  py38h57701bc_3           conda-forge/win-64     230kB
  + click                                 8.1.3  win_pyhd8ed1ab_2         conda-forge/noarch      77kB
  + colorama                              0.4.6  pyhd8ed1ab_0             conda-forge/noarch      25kB
  + cryptography                         41.0.1  py38h95f5157_0           conda-forge/win-64       1MB
  + cuda-cccl                           12.2.53  0                        nvidia/win-64            1MB
  + cuda-cudart                         11.8.89  0                        nvidia/win-64            2MB
  + cuda-cudart-dev                     11.8.89  0                        nvidia/win-64          741kB
  + cuda-cupti                          11.8.87  0                        nvidia/win-64           12MB
  + cuda-libraries                       11.8.0  0                        nvidia/win-64            1kB
  + cuda-libraries-dev                   11.8.0  0                        nvidia/win-64            1kB
  + cuda-nvrtc                          11.8.89  0                        nvidia/win-64           76MB
  + cuda-nvrtc-dev                      11.8.89  0                        nvidia/win-64           17MB
  + cuda-nvtx                           11.8.86  0                        nvidia/win-64           44kB
  + cuda-profiler-api                   12.2.53  0                        nvidia/win-64           19kB
  + cuda-runtime                         11.8.0  0                        nvidia/win-64            1kB
  + dnspython                             2.2.1  pyhd8ed1ab_0             conda-forge/noarch     139kB
  + eventlet                             0.33.3  pyhd8ed1ab_0             conda-forge/noarch     173kB
  + filelock                             3.12.2  pyhd8ed1ab_0             conda-forge/noarch      15kB
  + flask                                 2.2.3  pyhd8ed1ab_0             conda-forge/noarch      83kB
  + flask-session                         0.4.0  pyhd8ed1ab_0             conda-forge/noarch      11kB
  + flask-socketio                        5.3.2  pyhd8ed1ab_0             conda-forge/noarch      21kB
  + freetype                             2.12.1  h546665d_1               conda-forge/win-64     497kB
  + git                                  2.35.1  h57928b3_0               conda-forge/win-64     101MB
  + greenlet                              2.0.2  py38hd3f51b4_1           conda-forge/win-64     178kB
  + idna                                    3.4  pyhd8ed1ab_0             conda-forge/noarch      57kB
  + importlib-metadata                    6.7.0  pyha770c72_0             conda-forge/noarch      26kB
  + intel-openmp                       2023.1.0  h57928b3_46319           conda-forge/win-64       3MB
  + itsdangerous                          2.1.2  pyhd8ed1ab_0             conda-forge/noarch      16kB
  + jinja2                                3.1.2  pyhd8ed1ab_1             conda-forge/noarch     101kB
  + lcms2                                  2.15  h3e3b177_1               conda-forge/win-64     499kB
  + lerc                                  4.0.0  h63175ca_0               conda-forge/win-64     194kB
  + libabseil                        20230125.3  cxx17_h63175ca_0         conda-forge/win-64       2MB
  + libblas                               3.9.0  17_win64_mkl             conda-forge/win-64       4MB
  + libcblas                              3.9.0  17_win64_mkl             conda-forge/win-64       4MB
  + libcublas                         11.11.3.6  0                        nvidia/win-64           33kB
  + libcublas-dev                     11.11.3.6  0                        nvidia/win-64          394MB
  + libcufft                          10.9.0.58  0                        nvidia/win-64            6kB
  + libcufft-dev                      10.9.0.58  0                        nvidia/win-64          152MB
  + libcurand                         10.3.3.53  0                        nvidia/win-64            3kB
  + libcurand-dev                     10.3.3.53  0                        nvidia/win-64           52MB
  + libcusolver                       11.4.1.48  0                        nvidia/win-64           30kB
  + libcusolver-dev                   11.4.1.48  0                        nvidia/win-64           99MB
  + libcusparse                       11.7.5.86  0                        nvidia/win-64           14kB
  + libcusparse-dev                   11.7.5.86  0                        nvidia/win-64          184MB
  + libdeflate                             1.18  hcfcfb64_0               conda-forge/win-64     152kB
  + libffi                                3.4.2  h8ffe710_5               conda-forge/win-64      42kB
  + libhwloc                              2.9.1  nocuda_h15da153_6        conda-forge/win-64       3MB
  + libiconv                               1.17  h8ffe710_0               conda-forge/win-64     715kB
  + libjpeg-turbo                       2.1.5.1  hcfcfb64_0               conda-forge/win-64     688kB
  + liblapack                             3.9.0  17_win64_mkl             conda-forge/win-64       4MB
  + liblapacke                            3.9.0  17_win64_mkl             conda-forge/win-64       4MB
  + libnpp                            11.8.0.86  0                        nvidia/win-64          301kB
  + libnpp-dev                        11.8.0.86  0                        nvidia/win-64          150MB
  + libnvjpeg                         11.9.0.86  0                        nvidia/win-64            5kB
  + libnvjpeg-dev                     11.9.0.86  0                        nvidia/win-64            2MB
  + libpng                               1.6.39  h19919ed_0               conda-forge/win-64     344kB
  + libprotobuf                         3.21.12  h12be248_0               conda-forge/win-64       2MB
  + libsentencepiece                     0.1.99  h47d101a_1               conda-forge/win-64       2MB
  + libsqlite                            3.42.0  hcfcfb64_0               conda-forge/win-64     840kB
  + libtiff                               4.5.1  h6c8260b_0               conda-forge/win-64     955kB
  + libuv                                1.44.2  h8ffe710_0               conda-forge/win-64     370kB
  + libwebp-base                          1.3.1  hcfcfb64_0               conda-forge/win-64     269kB
  + libxcb                                 1.15  hcd874cb_0               conda-forge/win-64     970kB
  + libxml2                              2.11.4  hc3477c8_0               conda-forge/win-64       2MB
  + libzlib                              1.2.13  hcfcfb64_5               conda-forge/win-64      56kB
  + loguru                                0.7.0  py38haa244fe_0           conda-forge/win-64      93kB
  + m2w64-gcc-libgfortran                 5.3.0  6                        conda-forge/win-64     351kB
  + m2w64-gcc-libs                        5.3.0  7                        conda-forge/win-64     532kB
  + m2w64-gcc-libs-core                   5.3.0  7                        conda-forge/win-64     219kB
  + m2w64-gmp                             6.1.0  2                        conda-forge/win-64     744kB
  + m2w64-libwinpthread-git  5.0.0.4634.697f757  2                        conda-forge/win-64      32kB
  + markdown                              3.4.3  pyhd8ed1ab_0             conda-forge/noarch      71kB
  + markupsafe                            2.1.3  py38h91455d4_0           conda-forge/win-64      26kB
  + marshmallow                          3.19.0  pyhd8ed1ab_0             conda-forge/noarch      85kB
  + mkl                                2022.1.0  h6a75c08_874             conda-forge/win-64     192MB
  + mkl-devel                          2022.1.0  h57928b3_875             conda-forge/win-64       7MB
  + mkl-include                        2022.1.0  h6a75c08_874             conda-forge/win-64     779kB
  + mpmath                                1.3.0  pyhd8ed1ab_0             conda-forge/noarch     438kB
  + msys2-conda-epoch                  20160418  1                        conda-forge/win-64       3kB
  + networkx                                3.1  pyhd8ed1ab_0             conda-forge/noarch       1MB
  + openjpeg                              2.5.0  ha2aaf27_2               conda-forge/win-64     237kB
  + openssl                               3.1.1  hcfcfb64_1               conda-forge/win-64       7MB
  + packaging                              23.1  pyhd8ed1ab_0             conda-forge/noarch      46kB
  + pillow                               10.0.0  py38ha7eb54a_0           conda-forge/win-64      47MB
  + pip                                  23.1.2  pyhd8ed1ab_0             conda-forge/noarch       1MB
  + protobuf                            4.21.12  py38hd3f51b4_0           conda-forge/win-64     616kB
  + psutil                                5.9.5  py38h91455d4_0           conda-forge/win-64     374kB
  + pthread-stubs                           0.4  hcd874cb_1001            conda-forge/win-64       6kB
  + pthreads-win32                        2.9.1  hfa6e2cd_3               conda-forge/win-64     144kB
  + pycparser                              2.21  pyhd8ed1ab_0             conda-forge/noarch     103kB
  + pyopenssl                            23.2.0  pyhd8ed1ab_1             conda-forge/noarch     129kB
  + python                               3.8.17  h4de0772_0_cpython       conda-forge/win-64      18MB
  + python-engineio                       4.4.1  pyhd8ed1ab_0             conda-forge/noarch      37kB
  + python-socketio                       5.7.2  pyhd8ed1ab_0             conda-forge/noarch      36kB
  + python_abi                              3.8  3_cp38                   conda-forge/win-64       6kB
  + pytorch                               2.0.1  py3.8_cuda11.8_cudnn8_0  pytorch/win-64           1GB
  + pytorch-cuda                           11.8  h24eeafa_5               pytorch/win-64           4kB
  + pytorch-mutex                           1.0  cuda                     pytorch/noarch           3kB
  + pyyaml                                  6.0  py38h91455d4_5           conda-forge/win-64     157kB
  + sentencepiece                        0.1.99  haa244fe_1               conda-forge/win-64      31kB
  + sentencepiece-python                 0.1.99  py38h4e1e770_1           conda-forge/win-64       3MB
  + sentencepiece-spm                    0.1.99  h47d101a_1               conda-forge/win-64     789kB
  + setuptools                           68.0.0  pyhd8ed1ab_0             conda-forge/noarch     464kB
  + six                                  1.16.0  pyh6c4a22f_0             conda-forge/noarch      14kB
  + sympy                                  1.12  pyh04b8f61_3             conda-forge/noarch       4MB
  + tbb                                2021.9.0  h91493d7_0               conda-forge/win-64     155kB
  + termcolor                             2.3.0  pyhd8ed1ab_0             conda-forge/noarch      12kB
  + tk                                   8.6.12  h8ffe710_0               conda-forge/win-64       4MB
  + typing_extensions                     4.7.1  pyha770c72_0             conda-forge/noarch      36kB
  + ucrt                           10.0.22621.0  h57928b3_0               conda-forge/win-64       1MB
  + vc                                     14.3  h64f974e_17              conda-forge/win-64      17kB
  + vc14_runtime                    14.36.32532  hfdfe4a8_17              conda-forge/win-64     741kB
  + vs2015_runtime                  14.36.32532  h05e6639_17              conda-forge/win-64      17kB
  + webencodings                          0.5.1  py_1                     conda-forge/noarch      12kB
  + werkzeug                              2.3.6  pyhd8ed1ab_0             conda-forge/noarch     254kB
  + wheel                                0.40.0  pyhd8ed1ab_0             conda-forge/noarch      56kB
  + win32_setctime                        1.1.0  pyhd8ed1ab_0             conda-forge/noarch       7kB
  + xorg-libxau                          1.0.11  hcd874cb_0               conda-forge/win-64      51kB
  + xorg-libxdmcp                         1.1.3  hcd874cb_0               conda-forge/win-64      68kB
  + xz                                    5.2.6  h8d14728_0               conda-forge/win-64     218kB
  + yaml                                  0.2.5  h8ffe710_2               conda-forge/win-64      63kB
  + zipp                                 3.15.0  pyhd8ed1ab_0             conda-forge/noarch      17kB
  + zstd                                  1.5.2  h12be248_6               conda-forge/win-64     288kB

  Summary:

  Install: 132 packages

  Total download: 3GB

--------------------------------------------------------------------------------------------------------


Transaction starting
python_abi                                           6.1kB @  53.3kB/s  0.1s
msys2-conda-epoch                                    3.2kB @  22.6kB/s  0.1s
ca-certificates                                    148.6kB @ 990.0kB/s  0.2s
vc                                                  17.2kB @ 110.5kB/s  0.0s
ucrt                                                 1.3MB @   6.7MB/s  0.2s
libwebp-base                                       268.6kB @   1.3MB/s  0.1s
pthreads-win32                                     144.3kB @ 615.5kB/s  0.1s
bzip2                                              152.2kB @ 595.9kB/s  0.0s
xz                                                 217.8kB @ 812.9kB/s  0.1s
libpng                                             343.9kB @ 913.9kB/s  0.1s
libjpeg-turbo                                      688.1kB @   1.4MB/s  0.2s
freetype                                           497.4kB @ 984.3kB/s  0.1s
libiconv                                           714.5kB @   1.4MB/s  0.4s
xorg-libxdmcp                                       67.9kB @ 122.8kB/s  0.1s
openjpeg                                           237.1kB @ 402.7kB/s  0.1s
tbb                                                154.7kB @ 236.9kB/s  0.1s
blas                                                16.8kB @  20.6kB/s  0.2s
libcblas                                             3.7MB @   2.0MB/s  1.2s
python                                              17.9MB @   9.7MB/s  1.6s
libcublas                                           33.3kB @  17.9kB/s  0.1s
libcusparse                                         13.8kB @   7.3kB/s  0.0s
mkl-devel                                            7.5MB @   3.6MB/s  1.5s
cuda-cccl                                            1.4MB @ 591.3kB/s  0.5s
libnvjpeg-dev                                        2.0MB @ 659.4kB/s  0.7s
cuda-libraries                                       1.5kB @ 478.0 B/s  0.1s
wheel                                               55.7kB @  17.9kB/s  0.1s
cuda-nvrtc-dev                                      16.9MB @   5.1MB/s  1.4s
mpmath                                             438.3kB @ 131.7kB/s  0.2s
filelock                                            14.9kB @   4.5kB/s  0.0s
packaging                                           46.1kB @  13.5kB/s  0.1s
six                                                 14.3kB @   4.1kB/s  0.0s
webencodings                                        11.9kB @   3.4kB/s  0.1s
termcolor                                           11.8kB @   3.4kB/s  0.0s
importlib-metadata                                  25.9kB @   7.3kB/s  0.0s
marshmallow                                         84.7kB @  23.9kB/s  0.1s
pytorch-cuda                                         3.6kB @   1.0kB/s  0.0s
protobuf                                           616.2kB @ 168.3kB/s  0.1s
greenlet                                           177.5kB @  48.4kB/s  0.1s
loguru                                              93.3kB @  25.1kB/s  0.1s
apispec                                             29.8kB @   8.0kB/s  0.1s
pyopenssl                                          129.0kB @  34.2kB/s  0.1s
flask                                               83.4kB @  22.0kB/s  0.1s
flask-socketio                                      21.2kB @   5.5kB/s  0.1s
m2w64-gmp                                          743.5kB @ 185.6kB/s  0.2s
cuda-cupti                                          12.0MB @   2.5MB/s  3.9s
libsqlite                                          839.8kB @ 169.8kB/s  0.2s
openssl                                              7.4MB @   1.5MB/s  1.0s
lerc                                               194.4kB @  38.0kB/s  0.1s
intel-openmp                                         2.6MB @ 500.7kB/s  1.4s
libprotobuf                                          2.1MB @ 387.3kB/s  0.3s
pthread-stubs                                        6.4kB @   1.2kB/s  0.1s
tk                                                   3.7MB @ 673.4kB/s  0.5s
sentencepiece-spm                                  788.5kB @ 141.1kB/s  0.2s
blas-devel                                          16.3kB @   2.9kB/s  0.1s
libtiff                                            954.8kB @ 168.5kB/s  0.5s
libcusolver                                         30.1kB @   5.2kB/s  0.1s
cuda-profiler-api                                   18.8kB @   3.2kB/s  0.1s
libblas                                              3.7MB @ 618.1kB/s  0.4s
cuda-cudart-dev                                    740.7kB @ 121.9kB/s  0.2s
cuda-runtime                                         1.4kB @ 224.0 B/s  0.1s
typing_extensions                                   36.3kB @   5.9kB/s  0.0s
win32_setctime                                       7.4kB @   1.2kB/s  0.1s
python-engineio                                     36.7kB @   5.8kB/s  0.1s
zipp                                                17.2kB @   2.7kB/s  0.1s
python-socketio                                     35.8kB @   5.5kB/s  0.1s
markdown                                            70.9kB @  10.9kB/s  0.1s
psutil                                             373.6kB @  56.2kB/s  0.1s
cffi                                               229.9kB @  34.1kB/s  0.1s
werkzeug                                           254.0kB @  37.1kB/s  0.1s
apispec-webframeworks                               12.8kB @   1.9kB/s  0.1s
git                                                100.5MB @  14.2MB/s  7.1s
m2w64-libwinpthread-git                             31.9kB @   4.5kB/s  0.1s
libzlib                                             55.8kB @   7.8kB/s  0.0s
libuv                                              370.3kB @  51.0kB/s  0.1s
m2w64-gcc-libgfortran                              350.7kB @  47.8kB/s  0.1s
m2w64-gcc-libs                                     532.4kB @  71.6kB/s  0.1s
xorg-libxau                                         51.3kB @   6.7kB/s  0.2s
cuda-nvrtc                                          75.6MB @   6.2MB/s  6.6s
pytorch-mutex                                        2.9kB @ 236.0 B/s  0.0s
libcufft                                             5.7kB @ 465.0 B/s  0.0s
libcurand                                            3.4kB @ 273.0 B/s  0.1s
libcusolver-dev                                     98.6MB @   7.4MB/s 11.3s
setuptools                                         463.7kB @  34.2kB/s  0.1s
pycparser                                          102.7kB @   7.5kB/s  0.1s
bidict                                              32.6kB @   2.3kB/s  0.3s
sympy                                                4.2MB @ 286.7kB/s  0.9s
pyyaml                                             156.5kB @  10.5kB/s  0.1s
sentencepiece-python                                 2.8MB @ 182.7kB/s  0.6s
jinja2                                             101.4kB @   6.5kB/s  0.1s
flask-session                                       11.3kB @ 720.0 B/s  0.1s
vs2015_runtime                                      17.2kB @   1.1kB/s  0.1s
yaml                                                63.3kB @   4.0kB/s  0.1s
zstd                                               288.4kB @  18.1kB/s  0.1s
libhwloc                                             2.5MB @ 152.6kB/s  0.7s
liblapack                                            3.7MB @ 206.3kB/s  1.1s
cuda-nvtx                                           44.0kB @   2.5kB/s  0.1s
mkl                                                191.6MB @   7.4MB/s 18.3s
cuda-libraries-dev                                   1.5kB @  57.0 B/s  0.1s
idna                                                56.7kB @   2.2kB/s  0.1s
colorama                                            25.2kB @ 963.0 B/s  0.1s
markupsafe                                          26.5kB @   1.0kB/s  0.1s
cryptography                                         1.1MB @  40.9kB/s  0.2s
mkl-include                                        778.6kB @  29.4kB/s  0.1s
libabseil                                            1.5MB @  56.5kB/s  0.2s
libxml2                                              1.7MB @  63.3kB/s  0.2s
libxcb                                             969.8kB @  35.9kB/s  0.1s
libnpp                                             300.9kB @  11.1kB/s  0.1s
libcurand-dev                                       52.4MB @   1.6MB/s  4.9s
cachelib                                            20.1kB @ 629.0 B/s  0.0s
click                                               76.5kB @   2.4kB/s  0.0s
dnspython                                          138.8kB @   4.3kB/s  0.0s
m2w64-gcc-libs-core                                219.2kB @   6.8kB/s  0.1s
libsentencepiece                                     1.9MB @  58.0kB/s  0.2s
cuda-cudart                                          1.5MB @  46.2kB/s  0.2s
pip                                                  1.4MB @  41.6kB/s  0.4s
bleach                                             124.1kB @   3.8kB/s  0.1s
eventlet                                           173.5kB @   5.3kB/s  0.1s
libdeflate                                         152.3kB @   4.6kB/s  0.1s
libnvjpeg                                            4.6kB @ 138.0 B/s  0.0s
itsdangerous                                        16.4kB @ 493.0 B/s  0.0s
vc14_runtime                                       740.6kB @  22.2kB/s  0.1s
liblapacke                                           3.7MB @ 108.4kB/s  0.4s
pillow                                              46.5MB @   1.3MB/s  3.4s
lcms2                                              499.1kB @  13.4kB/s  0.1s
sentencepiece                                       31.4kB @ 843.0 B/s  0.0s
libffi                                              42.1kB @   1.1kB/s  0.1s
libcufft-dev                                       151.6MB @   3.5MB/s 37.5s
networkx                                             1.5MB @  33.3kB/s  0.6s
libnpp-dev                                         150.1MB @   3.0MB/s 37.7s
libcusparse-dev                                    184.2MB @   3.6MB/s 13.4s
libcublas-dev                                      394.2MB @   6.6MB/s 42.2s
pytorch                                              1.5GB @  16.6MB/s 1m:21.3s
Linking git-2.35.1-h57928b3_0
Linking ucrt-10.0.22621.0-h57928b3_0
Linking python_abi-3.8-3_cp38
Linking ca-certificates-2023.5.7-h56e8100_0
Linking msys2-conda-epoch-20160418-1
Linking mkl-include-2022.1.0-h6a75c08_874
Linking intel-openmp-2023.1.0-h57928b3_46319
Linking vc14_runtime-14.36.32532-hfdfe4a8_17
Linking m2w64-libwinpthread-git-5.0.0.4634.697f757-2
Linking m2w64-gmp-6.1.0-2
Linking vc-14.3-h64f974e_17
Linking vs2015_runtime-14.36.32532-h05e6639_17
Linking m2w64-gcc-libs-core-5.3.0-7
Linking pthreads-win32-2.9.1-hfa6e2cd_3
Linking openssl-3.1.1-hcfcfb64_1
Linking libzlib-1.2.13-hcfcfb64_5
Linking libwebp-base-1.3.1-hcfcfb64_0
Linking libabseil-20230125.3-cxx17_h63175ca_0
Linking libsqlite-3.42.0-hcfcfb64_0
Linking libiconv-1.17-h8ffe710_0
Linking yaml-0.2.5-h8ffe710_2
Linking libuv-1.44.2-h8ffe710_0
Linking xz-5.2.6-h8d14728_0
Linking tk-8.6.12-h8ffe710_0
Linking libffi-3.4.2-h8ffe710_5
Linking bzip2-1.0.8-h8ffe710_4
Linking libdeflate-1.18-hcfcfb64_0
Linking lerc-4.0.0-h63175ca_0
Linking libjpeg-turbo-2.1.5.1-hcfcfb64_0
Linking m2w64-gcc-libgfortran-5.3.0-6
Linking zstd-1.5.2-h12be248_6
Linking libpng-1.6.39-h19919ed_0
Linking libprotobuf-3.21.12-h12be248_0
Linking libxml2-2.11.4-hc3477c8_0
Linking python-3.8.17-h4de0772_0_cpython
Linking m2w64-gcc-libs-5.3.0-7
Linking libtiff-4.5.1-h6c8260b_0
Linking freetype-2.12.1-h546665d_1
Linking libsentencepiece-0.1.99-h47d101a_1
Linking libhwloc-2.9.1-nocuda_h15da153_6
Linking xorg-libxdmcp-1.1.3-hcd874cb_0
Linking pthread-stubs-0.4-hcd874cb_1001
Linking xorg-libxau-1.0.11-hcd874cb_0
Linking openjpeg-2.5.0-ha2aaf27_2
Linking lcms2-2.15-h3e3b177_1
Linking sentencepiece-spm-0.1.99-h47d101a_1
Linking tbb-2021.9.0-h91493d7_0
Linking libxcb-1.15-hcd874cb_0
Linking mkl-2022.1.0-h6a75c08_874
Linking mkl-devel-2022.1.0-h57928b3_875
Linking libblas-3.9.0-17_win64_mkl
Linking liblapack-3.9.0-17_win64_mkl
Linking libcblas-3.9.0-17_win64_mkl
Linking liblapacke-3.9.0-17_win64_mkl
Linking blas-devel-3.9.0-17_win64_mkl
Linking blas-2.117-mkl
Linking pytorch-mutex-1.0-cuda
Linking cuda-cudart-11.8.89-0
Linking cuda-cupti-11.8.87-0
Linking cuda-nvrtc-11.8.89-0
Linking cuda-nvtx-11.8.86-0
Clobberwarning: $CONDA_PREFIX/LICENSE
warning  libmamba Clobberwarning: $CONDA_PREFIX/LICENSE
Clobberwarning: $CONDA_PREFIX/build_env_setup.bat
warning  libmamba Clobberwarning: $CONDA_PREFIX/build_env_setup.bat
Clobberwarning: $CONDA_PREFIX/conda_build.bat
warning  libmamba Clobberwarning: $CONDA_PREFIX/conda_build.bat
Clobberwarning: $CONDA_PREFIX/metadata_conda_debug.yaml
warning  libmamba Clobberwarning: $CONDA_PREFIX/metadata_conda_debug.yaml
Linking libcublas-11.11.3.6-0
Linking libcufft-10.9.0.58-0
Linking libcusolver-11.4.1.48-0
Linking libcusparse-11.7.5.86-0
Linking libnpp-11.8.0.86-0
Linking libnvjpeg-11.9.0.86-0
Linking cuda-cccl-12.2.53-0
Linking cuda-profiler-api-12.2.53-0
Clobberwarning: $CONDA_PREFIX/LICENSE
warning  libmamba Clobberwarning: $CONDA_PREFIX/LICENSE
Linking libcurand-10.3.3.53-0
Linking cuda-nvrtc-dev-11.8.89-0
Linking libcublas-dev-11.11.3.6-0
Linking libcufft-dev-10.9.0.58-0
Linking libcusolver-dev-11.4.1.48-0
Linking libcusparse-dev-11.7.5.86-0
Linking libnpp-dev-11.8.0.86-0
Linking libnvjpeg-dev-11.9.0.86-0
Linking cuda-cudart-dev-11.8.89-0
Linking libcurand-dev-10.3.3.53-0
Linking cuda-libraries-11.8.0-0
Linking cuda-libraries-dev-11.8.0-0
Linking cuda-runtime-11.8.0-0
Linking wheel-0.40.0-pyhd8ed1ab_0
Linking setuptools-68.0.0-pyhd8ed1ab_0
Linking pip-23.1.2-pyhd8ed1ab_0
Linking mpmath-1.3.0-pyhd8ed1ab_0
Linking typing_extensions-4.7.1-pyha770c72_0
Linking networkx-3.1-pyhd8ed1ab_0
Linking filelock-3.12.2-pyhd8ed1ab_0
Linking pycparser-2.21-pyhd8ed1ab_0
Linking win32_setctime-1.1.0-pyhd8ed1ab_0
Linking webencodings-0.5.1-py_1
Linking idna-3.4-pyhd8ed1ab_0
Linking cachelib-0.10.2-pyhd8ed1ab_0
Linking packaging-23.1-pyhd8ed1ab_0
Linking python-engineio-4.4.1-pyhd8ed1ab_0
Linking bidict-0.22.1-pyhd8ed1ab_0
Linking six-1.16.0-pyh6c4a22f_0
Linking itsdangerous-2.1.2-pyhd8ed1ab_0
Linking zipp-3.15.0-pyhd8ed1ab_0
Linking termcolor-2.3.0-pyhd8ed1ab_0
Linking colorama-0.4.6-pyhd8ed1ab_0
Linking sympy-1.12-pyh04b8f61_3
Linking marshmallow-3.19.0-pyhd8ed1ab_0
Linking python-socketio-5.7.2-pyhd8ed1ab_0
Linking bleach-4.1.0-pyhd8ed1ab_0
Linking importlib-metadata-6.7.0-pyha770c72_0
Linking click-8.1.3-win_pyhd8ed1ab_2
Linking markdown-3.4.3-pyhd8ed1ab_0
Linking pytorch-cuda-11.8-h24eeafa_5
Linking pyyaml-6.0-py38h91455d4_5
Linking markupsafe-2.1.3-py38h91455d4_0
Linking greenlet-2.0.2-py38hd3f51b4_1
Linking psutil-5.9.5-py38h91455d4_0
Linking pillow-10.0.0-py38ha7eb54a_0
Linking protobuf-4.21.12-py38hd3f51b4_0
Linking sentencepiece-python-0.1.99-py38h4e1e770_1
Linking cffi-1.15.1-py38h57701bc_3
Linking loguru-0.7.0-py38haa244fe_0
Linking sentencepiece-0.1.99-haa244fe_1
Linking cryptography-41.0.1-py38h95f5157_0
Linking apispec-6.3.0-pyhd8ed1ab_0
Linking werkzeug-2.3.6-pyhd8ed1ab_0
Linking jinja2-3.1.2-pyhd8ed1ab_1
Linking pyopenssl-23.2.0-pyhd8ed1ab_1
Linking dnspython-2.2.1-pyhd8ed1ab_0
Linking apispec-webframeworks-0.5.2-pyhd8ed1ab_4
Linking flask-2.2.3-pyhd8ed1ab_0
Linking eventlet-0.33.3-pyhd8ed1ab_0
Linking flask-session-0.4.0-pyhd8ed1ab_0
Linking flask-socketio-5.3.2-pyhd8ed1ab_0
Linking pytorch-2.0.1-py3.8_cuda11.8_cudnn8_0
Transaction finished

Installing pip packages: flask-cloudflared==0.0.10, flask-ngrok, flask-cors, lupa==1.10, transformers==4.28.*, huggingface_hub==0.15.1, safetensors==0.3.1, accelerate==0.18.0, git+https://github.com/VE-FORBRYDERNE/mkultra, flask-session, ansi2html, flask_compress, ijson, bitsandbytes, ftfy, pydub, diffusers, peft==0.3.0
Collecting git+https://github.com/VE-FORBRYDERNE/mkultra (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 9))
  Cloning https://github.com/VE-FORBRYDERNE/mkultra to d:\projects\text_generation\pygmalion\koboldai\miniconda3\pip-req-build-2nq0fxyr
  Running command git clone --filter=blob:none --quiet https://github.com/VE-FORBRYDERNE/mkultra 'D:\Projects\text_generation\Pygmalion\KoboldAI\miniconda3\pip-req-build-2nq0fxyr'
  Resolved https://github.com/VE-FORBRYDERNE/mkultra to commit ef544de73ec6a1a4bd55e824d0628fa0ef1323ac
  Preparing metadata (setup.py) ... done
Collecting flask-cloudflared==0.0.10 (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1))
  Using cached flask_cloudflared-0.0.10-py3-none-any.whl (5.9 kB)
Collecting flask-ngrok (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 2))
  Using cached flask_ngrok-0.0.25-py3-none-any.whl (3.1 kB)
Collecting flask-cors (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 3))
  Using cached Flask_Cors-4.0.0-py2.py3-none-any.whl (14 kB)
Collecting lupa==1.10 (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 4))
  Using cached lupa-1.10-cp38-cp38-win_amd64.whl (261 kB)
Collecting transformers==4.28.* (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5))
  Using cached transformers-4.28.1-py3-none-any.whl (7.0 MB)
Collecting huggingface_hub==0.15.1 (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 6))
  Using cached huggingface_hub-0.15.1-py3-none-any.whl (236 kB)
Collecting safetensors==0.3.1 (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 7))
  Using cached safetensors-0.3.1-cp38-cp38-win_amd64.whl (263 kB)
Collecting accelerate==0.18.0 (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 8))
  Using cached accelerate-0.18.0-py3-none-any.whl (215 kB)
Requirement already satisfied: flask-session in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 10)) (0.4.0)
Collecting ansi2html (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 11))
  Using cached ansi2html-1.8.0-py3-none-any.whl (16 kB)
Collecting flask_compress (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 12))
  Using cached Flask_Compress-1.13-py3-none-any.whl (7.9 kB)
Collecting ijson (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 13))
  Using cached ijson-3.2.2-cp38-cp38-win_amd64.whl (48 kB)
Collecting bitsandbytes (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 14))
  Using cached bitsandbytes-0.39.1-py3-none-any.whl (97.1 MB)
Collecting ftfy (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 15))
  Using cached ftfy-6.1.1-py3-none-any.whl (53 kB)
Collecting pydub (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 16))
  Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting diffusers (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 17))
  Using cached diffusers-0.17.1-py3-none-any.whl (1.1 MB)
Collecting peft==0.3.0 (from -r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 18))
  Using cached peft-0.3.0-py3-none-any.whl (56 kB)
Requirement already satisfied: Flask>=0.8 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1)) (2.2.3)
Collecting requests (from flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1))
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Requirement already satisfied: filelock in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from transformers==4.28.*->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5)) (3.12.2)
Collecting numpy>=1.17 (from transformers==4.28.*->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5))
  Using cached numpy-1.24.4-cp38-cp38-win_amd64.whl (14.9 MB)
Requirement already satisfied: packaging>=20.0 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from transformers==4.28.*->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5)) (23.1)
Requirement already satisfied: pyyaml>=5.1 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from transformers==4.28.*->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5)) (6.0)
Collecting regex!=2019.12.17 (from transformers==4.28.*->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5))
  Using cached regex-2023.6.3-cp38-cp38-win_amd64.whl (268 kB)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.28.*->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5))
  Using cached tokenizers-0.13.3-cp38-cp38-win_amd64.whl (3.5 MB)
Collecting tqdm>=4.27 (from transformers==4.28.*->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5))
  Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting fsspec (from huggingface_hub==0.15.1->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 6))
  Using cached fsspec-2023.6.0-py3-none-any.whl (163 kB)
Requirement already satisfied: typing-extensions>=3.7.4.3 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from huggingface_hub==0.15.1->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 6)) (4.7.1)
Requirement already satisfied: psutil in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from accelerate==0.18.0->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 8)) (5.9.5)
Requirement already satisfied: torch>=1.4.0 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from accelerate==0.18.0->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 8)) (2.0.1)
Requirement already satisfied: cachelib in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from flask-session->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 10)) (0.10.2)
Collecting brotli (from flask_compress->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 12))
  Using cached Brotli-1.0.9-cp38-cp38-win_amd64.whl (365 kB)
Collecting wcwidth>=0.2.5 (from ftfy->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 15))
  Using cached wcwidth-0.2.6-py2.py3-none-any.whl (29 kB)
Requirement already satisfied: Pillow in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from diffusers->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 17)) (10.0.0)
Requirement already satisfied: importlib-metadata in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from diffusers->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 17)) (6.7.0)
Requirement already satisfied: Werkzeug>=2.2.2 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from Flask>=0.8->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1)) (2.3.6)
Requirement already satisfied: Jinja2>=3.0 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from Flask>=0.8->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1)) (3.1.2)
Requirement already satisfied: itsdangerous>=2.0 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from Flask>=0.8->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1)) (2.1.2)
Requirement already satisfied: click>=8.0 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from Flask>=0.8->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1)) (8.1.3)
Requirement already satisfied: zipp>=0.5 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from importlib-metadata->diffusers->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 17)) (3.15.0)
Requirement already satisfied: sympy in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from torch>=1.4.0->accelerate==0.18.0->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 8)) (1.12)
Requirement already satisfied: networkx in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from torch>=1.4.0->accelerate==0.18.0->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 8)) (3.1)
Requirement already satisfied: colorama in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from tqdm>=4.27->transformers==4.28.*->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 5)) (0.4.6)
Collecting charset-normalizer<4,>=2 (from requests->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1))
  Using cached charset_normalizer-3.1.0-cp38-cp38-win_amd64.whl (96 kB)
Requirement already satisfied: idna<4,>=2.5 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from requests->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1)) (3.4)
Collecting urllib3<3,>=1.21.1 (from requests->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1))
  Using cached urllib3-2.0.3-py3-none-any.whl (123 kB)
Collecting certifi>=2017.4.17 (from requests->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1))
  Using cached certifi-2023.5.7-py3-none-any.whl (156 kB)
Requirement already satisfied: MarkupSafe>=2.0 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from Jinja2>=3.0->Flask>=0.8->flask-cloudflared==0.0.10->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 1)) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in d:\projects\text_generation\pygmalion\koboldai\miniconda3\lib\site-packages (from sympy->torch>=1.4.0->accelerate==0.18.0->-r D:\Projects\text_generation\Pygmalion\KoboldAI\MINICONDA3\mambafsOVREtEUBn (line 8)) (1.3.0)
Building wheels for collected packages: mkultra
  Building wheel for mkultra (setup.py) ... done
  Created wheel for mkultra: filename=mkultra-0.1-py3-none-any.whl size=10998 sha256=54230ec50305399ac4d3d7ede2ab6173f2e9b4bab6f5dc43a7a56063f5d56dff
  Stored in directory: D:\Projects\text_generation\Pygmalion\KoboldAI\miniconda3\pip-ephem-wheel-cache-3vwglnli\wheels\77\56\b4\04000e4fb8373f67036809acffcfae03323cb96f6694b26277
Successfully built mkultra
Installing collected packages: wcwidth, tokenizers, safetensors, pydub, mkultra, lupa, ijson, brotli, bitsandbytes, urllib3, tqdm, regex, numpy, ftfy, fsspec, charset-normalizer, certifi, ansi2html, requests, huggingface_hub, flask-ngrok, flask-cors, flask_compress, flask-cloudflared, accelerate, transformers, diffusers, peft
Successfully installed accelerate-0.18.0 ansi2html-1.8.0 bitsandbytes-0.39.1 brotli-1.0.9 certifi-2023.5.7 charset-normalizer-3.1.0 diffusers-0.17.1 flask-cloudflared-0.0.10 flask-cors-4.0.0 flask-ngrok-0.0.25 flask_compress-1.13 fsspec-2023.6.0 ftfy-6.1.1 huggingface_hub-0.15.1 ijson-3.2.2 lupa-1.10 mkultra-0.1 numpy-1.24.4 peft-0.3.0 pydub-0.25.1 regex-2023.6.3 requests-2.31.0 safetensors-0.3.1 tokenizers-0.13.3 tqdm-4.65.0 transformers-4.28.1 urllib3-2.0.3 wcwidth-0.2.6

                                           __
          __  ______ ___  ____ _____ ___  / /_  ____ _
         / / / / __ `__ \/ __ `/ __ `__ \/ __ \/ __ `/
        / /_/ / / / / / / /_/ / / / / / / /_/ / /_/ /
       / .___/_/ /_/ /_/\__,_/_/ /_/ /_/_.___/\__,_/
      /_/

Collect information..
Cleaning index cache..
Cleaning lock files..
Cleaning tarballs..
Cleaning packages..
The system cannot find the path specified.
Press any key to continue . . .

New UI - Formatting Option - API Bug

Kobold New UI
https://gyazo.com/c715b80b216cd194687a0f4fe321b702

When Trim Sentences, No Special Chars, Single Line, No Blank Lines, or Auto Spacing are enabled and you generate through the API to TavernAI, the buttons will flicker on and off.

The following Formatting options also don't appear to be working properly:
Trim Sentences - Doesn't work properly. See debug log
Single Line - Doesn't work properly. See debug log
No Blank Lines - Doesn't work properly. See debug log

Windows 10: 10.0.19045 Build 19045
Running Local
Model: pygmalion-6b
KoboldAI Version: cc01ad7

I can send any additional logs, just let me know!

No module named progressbar

Getting the following when trying to load any model:

File "KoboldAI/tpu_mtj_backend.py", line 35, in <module>
    import progressbar
ModuleNotFoundError: No module named 'progressbar'

This happened after the most recent git pull. I tried a fresh install with install_requirements.sh; tried setting up a venv within the KoboldAI folder to install the progressbar module; and attempted to manually install from requirements.txt (though that resulted in an error installing lupa).
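
As a quick sanity check, the snippet below can be run with the bundled runtime's Python to confirm whether the module is visible to it. This is a minimal sketch, assuming the pip package that provides "import progressbar" is progressbar2:

# Hedged check: confirm the runtime that launches aiserver.py can import the
# module. The "progressbar2" package name is an assumption about the fix.
import importlib.util
import sys

if importlib.util.find_spec("progressbar") is None:
    sys.exit("progressbar is missing - try: pip install progressbar2")
print("progressbar is importable")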

In chat mode, after the answer, the empty space is filled with random words and symbols.

I use the United version on Google Colab (TPU) with WizardLM-13B.

After the bot's answer I often get a sequence of random words or symbols, like this:
image

In the current situation I just got some "Java imports", which can still be explained, but more often I get absolutely absurd sequences.

image

At first glance it seems that the problem is in the model, but I don't get that problem in story or adventure mode.

It looks like after a normal response, the model is forced to fill the remaining space with any tokens up to the specified value in "Output Length".

I tried changing all kinds of settings, including sampler, formatting, and repetition. The only thing that changes this is "Output Length", and even then it only changes the amount of nonsense I receive in the chat.

As a result, I have to use adventure mode as a chat mode, and it's not very convenient. 😅

action_count misaligns over time

Moved from the ui2 branch for visibility -- from what I know this is super common on any story longer than ~30-50 actions

Sometimes action_count gets misaligned, causing new actions to be inserted in unexpected places.

Of note:

  • Of the 14 actions in my save, the 4th, 5th, 6th, 7th, 8th, 10th, and 14th were empty.
  • The old UI is seemingly unaffected by this
  • The bug can be forced by manually editing the action_count to be lower than it actually is (see the sketch after this list)
  • Lots of retries and multi-level undos were made in my test before the issue showed up
  • Seems to happen more often on longer stories
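
A minimal sketch of the suspected failure mode (this is not KoboldAI's actual data model, only an illustration of how a too-low counter misplaces new actions):

# Actions keyed by integer id plus a separate counter. Once the counter drifts
# below the highest existing id, the "new" action collides with an old chunk
# instead of appending at the end.
actions = {i: f"chunk {i}" for i in range(5)}  # ids 0..4 already exist
action_count = 3                               # desynced: the real max id is 4

new_id = action_count + 1                      # 4 -- collides with chunk 4
actions[new_id] = "new generation"             # overwrites instead of appending
print(actions[4])                              # "new generation"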

A user's detailed report:

I've seen it all 3 sessions I've used the new ui on colab. Specifically, it takes the latest generation and spits it out at somewhere that's not at the bottom. It is always between chunks, not in the middle of one, and has a heavy tendency to end up at the top, before the prompt (ends up there ~50% of the time, the other 50% being between any other chunk). It's easy to spot when it happens because when you submit, it ends loading but nothing has been added. And when you hit back, it "seems" like nothing is being removed (but it's actually removing that misplaced generation from somewhere else).

Working around it is a huge pain, but maybe something from my observations can help identify it. Hitting back and generating again will put the misplaced generation back in the same wrong spot (but with new text). Editing the previous output/submission does not fix this; it still goes to the same wrong spot. Closing and reopening the browser tab (or refreshing) doesn't fix it. I've found two ways around it that sometimes work: 1) copy the misplaced generation, edit-delete it from the passage (not using the Back function), paste it at the bottom, and pretend it never happened; 2) copy the latest generation before the misplaced one, delete the chunk, and paste it into the chunk before that one. Both of these methods have temporarily fixed the problem, but I've also ended up in scenarios where there was no way forward but starting a new story and copy/pasting plain text of the old one into it to continue.
I have no idea what triggers it, but it seems to happen once the story gets lengthy. Every time I've encountered it, it has been after 20 to 40 generations. I also always start from a fresh new story, not swapping UIs or loading saves, so it's not related to those things.

In Chat mode, undoing an action and then submitting a blank action corrupts the action history

This may be part of a larger issue of action indices getting corrupted, but it's 100% reproducible for me so it may be an easy way to debug that issue. It might instead be chat specific due to the way we decide when to insert "You:".

The issue happens in both classic and new UI mode, but the visuals manifest differently in classic UI, new UI with Legacy mode, and new UI with Messages mode.

To reproduce:

  1. Start the server and the client.
  2. Enable Chat mode.
  3. Use a chat-tailored prompt to begin chatting.
  4. Set the Output Length to a moderately low value (e.g. 20-50 tokens)
  5. Wait until the AI generates a partial response.
  6. Submit an empty action so that the AI continues its response.
  7. Use "Undo" to go back one action.
  8. Submit an empty action again.
  9. Note that the AI does not continue its response, or if it does generate text, it does so in a garbled manner. In the new UI with the Messages chat style, a response from "System" is generated with the text "You", and if you continue again this updates to become a message from You with the text "You".

At this point, if you undo, the "You" text isn't undone; instead the previous action is removed. Redoing and generating again will cause previous actions to be replaced. The more you undo and redo, the more corrupted the action state appears to become.

Screenshot of corrupted actions with a You message from You
image

Screenshot of the context. Notice that the speech attributions "You:" and "Eliza:" in the latter part of the context are garbled or missing. This results in the AI responses being incoherent.
image

CUDA error upon attempting to change the loaded model while using HF 4bit

Attempting to load a new model after the first while using HF 4bit results in a CUDA error:

ERROR      | modeling.inference_models.hf_torch:_get_model:402 - Lazyloader failed, falling back to stock HF load. You may run out of RAM here.
ERROR      | modeling.inference_models.hf_torch:_get_model:403 - CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

ERROR      | modeling.inference_models.hf_torch:_get_model:404 - Traceback (most recent call last):
  File "/home/***/AI/KoboldAI/modeling/inference_models/hf_torch.py", line 392, in _get_model
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/***/AI/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/hf_bleeding_edge/__init__.py", line 59, in from_pretrained
    return AM.from_pretrained(path, *args, **kwargs)
  File "/home/***/AI/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
    return model_class.from_pretrained(
  File "/home/***/AI/KoboldAI/modeling/patches.py", line 92, in new_from_pretrained
    return old_from_pretrained(
  File "/home/***/AI/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2903, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/***/AI/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3260, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/***/AI/KoboldAI/modeling/patches.py", line 302, in _load_state_dict_into_meta_model
    set_module_quantized_tensor_to_device(
  File "/home/***/AI/KoboldAI/runtime/envs/koboldai/lib/python3.8/site-packages/transformers/utils/bitsandbytes.py", line 109, in set_module_quantized_tensor_to_device
    new_value = value.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

If I try launching with CUDA_LAUNCH_BLOCKING=1, it just gets stuck loading the second model (no error) and never finishes.
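
A possible workaround to try before the second load, sketched under the assumption that the crash comes from stale allocations left behind by the first model (not a confirmed fix):

# Fully release the first model before loading the second so the new load
# cannot touch stale CUDA allocations. The Linear module stands in for the
# real model object.
import gc

import torch

model = torch.nn.Linear(8, 8)        # stand-in for the previously loaded model
if torch.cuda.is_available():
    model = model.cuda()

del model                            # drop the only reference
gc.collect()                         # collect lingering Python-side objects
if torch.cuda.is_available():
    torch.cuda.empty_cache()         # return cached blocks to the driver
    torch.cuda.synchronize()         # surface pending asynchronous errors now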

Old presets cause errors in model loading

Per Rip_gel on discord:

  File "B:\python\lib\site-packages\socketio\server.py", line 730, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
        │      │              │        │          │     └ ['load_model', {'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_l...
        │      │              │        │          └ 'AN9SpKFBcE-1zGG1AAAH'
        │      │              │        └ '/'
        │      │              └ ['load_model', {'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_l...
        │      └ <function Server._trigger_event at 0x000002214ED34160>
        └ <socketio.server.Server object at 0x0000022150224D90>

  File "B:\python\lib\site-packages\socketio\server.py", line 755, in _trigger_event
    return self.handlers[namespace][event](*args)
           │    │        │          │       └ ('AN9SpKFBcE-1zGG1AAAH', {'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': ...
           │    │        │          └ 'load_model'
           │    │        └ '/'
           │    └ {'/': {'OAI_Key_Update': <function get_oai_models at 0x0000022150368E50>, 'get_cluster_models': <function get_cluster_models ...
           └ <socketio.server.Server object at 0x0000022150224D90>

  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 282, in _handler
    return self._handle_event(handler, message, namespace, sid,
           │    │             │        │        │          └ 'AN9SpKFBcE-1zGG1AAAH'
           │    │             │        │        └ '/'
           │    │             │        └ 'load_model'
           │    │             └ <function UI_2_load_model at 0x000002215039B280>
           │    └ <function SocketIO._handle_event at 0x000002215014CD30>
           └ <flask_socketio.SocketIO object at 0x0000022150224CA0>

  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 826, in _handle_event
    ret = handler(*args)
          │        └ ({'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_layers': '10', ...
          └ <function UI_2_load_model at 0x000002215039B280>

> File "aiserver.py", line 589, in g
    return f(*a, **k)
           │  │    └ {}
           │  └ ({'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_layers': '10', ...
           └ <function UI_2_load_model at 0x0000022150399F70>

  File "aiserver.py", line 8697, in UI_2_load_model
    load_model(use_gpu=data['use_gpu'], gpu_layers=data['gpu_layers'], disk_layers=data['disk_layers'], online_model=data['online_model'], url=koboldai_vars.colaburl, use_8_bit=data['use_8_bit'])
    │                  │                           │                               │                                 │                         │                                 └ {'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_layers': '10', '...
    │                  │                           │                               │                                 │                         └ <koboldai_settings.koboldai_vars object at 0x000002215029A430>
    │                  │                           │                               │                                 └ {'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_layers': '10', '...
    │                  │                           │                               └ {'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_layers': '10', '...
    │                  │                           └ {'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_layers': '10', '...
    │                  └ {'model': 'NeoCustom', 'path': 'D:\\KoboldAI\\models\\gpt-neo-2.7B-Picard', 'use_gpu': True, 'key': '', 'gpu_layers': '10', '...
    └ <function load_model at 0x0000022150362280>

  File "aiserver.py", line 3354, in load_model
    if preset['Model Name'] == koboldai_vars.model:
       │                       └ <koboldai_settings.koboldai_vars object at 0x000002215029A430>
       └ {'genamt': 50, 'rep_pen': 1.1, 'rep_pen_range': 1476, 'rep_pen_slope': 1.3, 'sampler_order': [6, 5, 0, 2, 3, 1, 4], 'temp': 0...

KeyError: 'Model Name'
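
A minimal illustration of the crash and a tolerant lookup; the .get() guard is an assumption about a possible fix, not the project's actual patch:

# Old preset files predate the 'Model Name' key, so a plain subscript raises
# the KeyError above; .get() returns None instead of raising.
old_preset = {"genamt": 50, "rep_pen": 1.1}    # no 'Model Name' key

if old_preset.get("Model Name") == "gpt-neo-2.7B-Picard":
    print("preset matches the loaded model")
else:
    print("preset predates model names or does not match; skipping")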

ROCM does not work in docker container

Hey, I have built my own docker container based on the standalone and the ROCm container from here, and it is working so far, but I can't get the ROCm part to work. The logs keep outputting:
INIT | Searching | GPU support
INIT | Not Found | GPU support
WARNING | __main__:device_config:1101 - Nothing assigned to a GPU, reverting to CPU only mode
when I try to download a model.

Dockerfile:
FROM ghcr.io/linuxserver/baseimage-ubuntu:jammy
RUN apt-get update && apt install -y xorg libsqlite3-0 wget aria2 git bzip2 && rm -rf /tmp/* /var/lib/apt/lists/* /var/tmp/*
RUN git clone --recursive https://github.com/henk717/koboldai /opt/koboldai
WORKDIR /opt/koboldai
RUN ./install_requirements.sh rocm
COPY docker-helper.sh /opt/koboldai/docker-helper.sh
RUN chmod +x /opt/koboldai/docker-helper.sh
EXPOSE 5000/tcp
CMD /opt/koboldai/docker-helper.sh

Docker run command:
docker run
-d
--net='bridge'
-e 'DOCKER_MODS'='linuxserver/mods:jellyfin-amd'
-e 'KOBOLDAI_MODELDIR'='/content'
-p '5000:5000/tcp'
-v '/koboldai-content/':'/content':'rw'
--device='/dev/kfd'
--device='/dev/dri' 'joly0/koboldai-rocm'

I can run "/opt/rocm/bin/clinfo" in the container and get the correct output (showing the data of my AMD GPU), but KoboldAI isn't using it.
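
One way to narrow this down is to check whether PyTorch itself sees the GPU inside the container; a minimal sketch, assuming the bundled environment ships a ROCm build of torch:

# ROCm builds of PyTorch report through the CUDA API and expose a HIP version.
# False/None here means torch cannot see the GPU even though clinfo can.
import torch

print(torch.cuda.is_available())  # False matches the "Not Found" log line
print(torch.version.hip)          # None on CPU/CUDA builds, a version string on ROCm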

I have also tried this with my other docker container (details in #326) after installing the needed rocm-runtime, but that doesn't work either.
Any idea?

KoboldAI-Client is not a "client"?

I think that the documentation / readme may need significant improvement to clear up a possible misunderstanding.

First I attempted to learn how I can generate text with the KoboldAI Horde REST API in Python by using the "Full Documentation", but I failed to figure it out.

Then I thought I would install the "client" program and somehow intercept/debug how it uses the KoboldAI Horde REST API.

At https://koboldai.net/ this GitHub repo is recommended to GNU/Linux users, so I came here and downloaded the stuff. (I am using MX Linux Wildflower.)

Then after the download (I don't use git), I extracted "KoboldAI-united", opened a terminal in the folder, and simply ran ./play.sh in the hope that it would start.

I would have preferred to do a simple

"pip install KoboldAI-client"

instead, but because the readme explains that this thing has its own self-contained runtime (conda?), I chose to first just give it a try by simply running "play.sh".

I seriously had my doubts at the point when I saw in the readme that this (client?) has GPU requirements and so many dependencies, but regardless I just gave it a try anyway.

The result was, as I expected, a complete tragedy. The automatic script downloaded 17.6 GB of dependencies and the "Installation" failed. Moreover, it automatically opened the UI in my web browser (Chromium), but the UI was not functional for obvious reasons. I had to manually kill the running process in the terminal with CTRL+C.

I only intended to get some simple (client) program that would act as a gateway giving access to the KoboldAI Horde REST API, so I could get a working API URL that I can paste into the TavernAI UI and use the UI with the models provided by the Kobold Horde workers.

TavernUI-API-URL

Anyway, I just attempted to figure out a way to use the processing power of the Kobold Horde with the TavernUI for text generation.

I did not intend to start a "server" or "worker" node. I only have a low-end laptop; there is no way I could run anything demanding on it. And I also do not wish to use Google Colab. I intended to use the processing power of the Kobold Horde by using the public API combined with the TavernAI interface.

So something like https://lite.koboldai.net/ but with the TavernUI interface.
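
For reference, a thin Horde "client" can be a few lines of Python. The sketch below follows my reading of the public v2 API docs; the endpoint paths, payload fields, and the anonymous key are assumptions to verify against https://horde.koboldai.net:

# Submit a prompt to the KoboldAI Horde and poll until a worker finishes it.
import time

import requests

BASE = "https://horde.koboldai.net/api/v2"
HEADERS = {"apikey": "0000000000"}  # anonymous key per the Horde docs

job = requests.post(
    f"{BASE}/generate/text/async",
    json={"prompt": "Once upon a time", "params": {"max_length": 80}},
    headers=HEADERS,
).json()

while True:
    status = requests.get(f"{BASE}/generate/text/status/{job['id']}").json()
    if status.get("done"):
        print(status["generations"][0]["text"])
        break
    time.sleep(2)  # be polite to the public endpoint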

If this is NOT a simple "client" but rather an all-in-one combination of

Server + Worker + Config Interface + KoboldAI Lite user interface + whatever magic,

then the "KoboldAI-Client" name is very confusing for people who are just looking for a "client" program to gain access to the "Horde".

As an extra I attached the error log of the installation, but I opened this "issue" to point out that the documentation should make it clearer that this is NOT a simple "client".

error.log

Front end: wpp attributes shouldn't be stored as an object

This is minor, but it's something I've been thinking about: if you name a W++ attribute something like prototype or constructor, this will break things because W++ attributes are represented as an object. Maybe a Map or an array of key/value tuples would be a better fit. I might work on a PR for this when I have some time.

Using OpenAI API causes UI1 to error, and not load the API.

It would seem that loading an OpenAI API model while using UI1 on United causes a crash; it could be the same for the main branch, I don't really know. I've been informed this is undertested, so you might see a few more issues regarding OpenAI creep up. Hopefully we can improve Kobold's support for this, even if it's not frequently used.

Here is the error:

Traceback (most recent call last):
  File "B:\python\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "B:\python\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "B:\python\lib\site-packages\socketio\server.py", line 731, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
  File "B:\python\lib\site-packages\socketio\server.py", line 756, in _trigger_event
    return self.handlers[namespace][event](*args)
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 282, in _handler
    return self._handle_event(handler, message, namespace, sid,
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 828, in _handle_event
    ret = handler(*args)
  File "aiserver.py", line 591, in g
    return f(*a, **k)
  File "aiserver.py", line 4555, in get_message
    load_model(use_gpu=msg['use_gpu'], gpu_layers=msg['gpu_layers'], disk_layers=msg['disk_layers'], online_model=msg['online_model'])
  File "aiserver.py", line 2874, in load_model
    "CLUSTER": koboldai_vars.cluster_requested_models[0],
IndexError: list index out of range

Steps to reproduce: Simply try to load OpenAI API as the model while using UI1.
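
A minimal illustration of the IndexError and a defensive lookup; the guard is an assumption about a possible fix, not the project's actual patch:

# When the OpenAI API is selected, no cluster models were requested, so the
# list indexed at aiserver.py line 2874 is empty.
cluster_requested_models = []

label = cluster_requested_models[0] if cluster_requested_models else None
print(label)  # None instead of "IndexError: list index out of range"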

support for trust_remote_code / 8k context

Hello,

There are a number of models I'd like to try which require this. I know I asked you about this in the past, and IIRC you mentioned that you removed it because you wanted to implement it properly.
In the interim, would you kindly instruct me on what I have to change in order to pass this flag to the appropriate call(s)? You don't have to cover every conceivable situation or type of model, just hf or hf_torch or whichever is necessary (16-bit; don't worry about loading in 8 or 4 bit) to load e.g. llama-based models, maybe falcon, etc. I'd just as happily patch transformers itself; whatever gets it to work. I'm mostly trying to load models with increased context size.
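
For reference, what is being asked for is stock transformers usage, sketched below. The model id is hypothetical, and the kwarg would have to be threaded through wherever KoboldAI calls from_pretrained (e.g. modeling/inference_models/hf_torch.py, judging by other tracebacks on this page):

# Pass trust_remote_code=True so the model repo's own modeling code can run,
# which many extended-context models require. The model id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "some-org/custom-8k-model",   # hypothetical model shipping custom code
    trust_remote_code=True,
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "some-org/custom-8k-model", trust_remote_code=True
)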

Thanks.

[UI1] Save As and Autosave conflict.

If you have autosave on and you attempt to use Save As in UI1, hitting OK after specifying a name results in an error that the file already exists. An autosave is triggered with the new story name just before the requested save runs, resulting in two save attempts and the warning.

Google Colab Koboldai stuck at setting seed

When I load the Colab KoboldAI, it always gets stuck at setting the seed. I keep restarting the website, but it's still the same. I just want a solution to this problem, that's all. Thank you if you do help me; I appreciate it.

Softprompts cause generation errors

Not sure when this began; I think I remember softprompts being tested after the generation pipeline changes.

  File "aiserver.py", line 8619, in UI_2_submit
    actionsubmit(data['data'], actionmode=koboldai_vars.actionmode)
    │            │                        └ <koboldai_settings.koboldai_vars object at 0x7fe2c447d690>
    │            └ {'data': '', 'theme': ''}
    └ <function actionsubmit at 0x7fe2c436acb0>

  File "aiserver.py", line 4961, in actionsubmit
    calcsubmit("")
    └ <function calcsubmit at 0x7fe2c436c290>

  File "aiserver.py", line 5364, in calcsubmit
    generate(subtxt, min, max, found_entries)
    │        │       │    │    └ set()
    │        │       │    └ 109
    │        │       └ 79
    │        └ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, ...
    └ <function generate at 0x7fe2c436cf80>

  File "aiserver.py", line 6286, in generate
    logger.prompt(utils.decodenewlines(tokenizer.decode(txt)).encode("unicode_escape").decode("utf-8"))
    │      │      │     │              │         │      └ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, ...
    │      │      │     │              │         └ <function PreTrainedTokenizerBase.decode at 0x7fe2c643ab00>
    │      │      │     │              └ PreTrainedTokenizer(name_or_path='models/KoboldAI_OPT-6B-nerys-v2', vocab_size=50265, model_max_len=1000000000000000019884624...
    │      │      │     └ <function decodenewlines at 0x7fe2dc70a9e0>
    │      │      └ <module 'utils' from '/home/somebody/Repos/kobold-united/utils.py'>
    │      └ functools.partialmethod(<function Logger.log at 0x7fe3315c4c20>, 'PROMPT', )
    └ <loguru.logger handlers=[(id=1, level=10, sink=<stderr>), (id=2, level=23, sink=<stdout>), (id=3, level=31, sink=<stdout>), (...

  File "/home/somebody/miniconda3/envs/kobold/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3440, in decode
    **kwargs,
      └ {}
  File "/home/somebody/miniconda3/envs/kobold/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 949, in _decode
    sub_texts.append(self.convert_tokens_to_string(current_sub_text))
    │         │      │    │                        └ [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None...
    │         │      │    └ <function GPT2Tokenizer.convert_tokens_to_string at 0x7fe2c4be03b0>
    │         │      └ PreTrainedTokenizer(name_or_path='models/KoboldAI_OPT-6B-nerys-v2', vocab_size=50265, model_max_len=1000000000000000019884624...
    │         └ <method 'append' of 'list' objects>
    └ []
  File "/home/somebody/miniconda3/envs/kobold/lib/python3.7/site-packages/transformers/models/gpt2/tokenization_gpt2.py", line 316, in convert_tokens_to_string
    text = "".join(tokens)
                   └ [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None...

TypeError: sequence item 0: expected str instance, NoneType found
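
The failure reproduces outside KoboldAI; a minimal sketch, where gpt2 is only a stand-in for the model in the traceback and the slow tokenizer mirrors the tokenization_gpt2.py path above:

# Ids outside the vocabulary (the softprompt placeholders are all -1 above)
# convert to None tokens, and joining them raises the same TypeError.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2", use_fast=False)
tokens = tok.convert_ids_to_tokens([-1, -1, 50256])
print(tokens)    # [None, None, '<|endoftext|>']
"".join(tokens)  # TypeError: sequence item 0: expected str instance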

List of bugs found (1)

Bugs found as of 17 Apr, from commit f9fb5eb (Remove Debug) of one-some:model-structure-and-maybe-rwkv

  1. Comments are not filtered out in UI2 (even the old format <|comment|>) and are being sent to the generator.
  2. [UI2 Desync] - Ctrl+A delete on text, then entering new text into the edit window is ignored. The old context is resent instead. That context now becomes a ghost and can never be removed, being sent in all future generations.
  3. Copy and pasting text into the edit window in UI2 makes that chunk unselectable.
  4. Generating with a chunk larger than a specific size produces IndexError: index -1 is out of bounds for dimension 1 with size 0
  5. Perform these steps: load a story, edit it, save that story, then load the same story again. An empty welcome_container is generated that blocks half the UI.
  6. Perform these steps: start a new story, type some text, submit it, then delete that text, then press submit again. Instead of telling you your prompt is empty, you receive: Error at koboldai.js:3191 Uncaught TypeError: Cannot read properties of null (reading 'innerText')
  7. (Difficult to repro) There are times when the text displayed during token streaming does not match the final text shown (edit: I think it has something to do with multi-token Unicode decoding; try submitting some extended-Unicode-heavy text like Chinese).
  8. Submitting any prompt in ReadOnly causes an empty box to appear on the right side of the screen and does nothing else (is this intended?)

Hello, I have a problem with "AI automatically deletes generated content at the end of each content generation".

What I have observed so far is that when using models like HyperMantis and Chronos, it deletes all but the first paragraph of the current output after each generation.
When using Erebus and Hermes, it will most likely delete all of the content generated that time.
If I switch to the new UI, all the content generated by the latter is probably saved, but it is not possible to go backwards: only the commands and actions I entered are reverted, not the AI output.
I tried reinstalling KAI to reset the settings, but it still didn't work. Here are screenshots of the UI and server when this happened:
https://imgloc.com/i/VlgByq
https://imgloc.com/i/VlgXIz
But given that I am currently using KAI from the 4bit branch: if the issue has been fixed in the current update, could someone in the know please tell me what file I should download to fix it while ensuring that the 4bit version is not broken?

Image generation - how to select the text to be used for generation

I have tried clicking a paragraph in the text field to choose which text is used for image generation. I can see in the CLI that the selected text updates.

But the "Generate Image" button still uses an old selected paragraph (rewritten, of course) and generates the same image over and over.

How are we supposed to use this?

--cacheonly flag causes multiple versions of model to be stored

When using the --cacheonly flag, the most recent version of the model is downloaded, but old versions remain. For large models this is an issue for storage space. I would suggest either another flag or a modification of --cacheonly to delete old versions of a model when a new one is available.
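
A sketch of what such a cleanup could look like using huggingface_hub's cache utilities; scan_cache_dir and delete_revisions exist in recent releases, though exact version support is an assumption:

# Keep only the newest cached revision of each model and delete the rest.
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()
for repo in cache.repos:
    revisions = sorted(repo.revisions, key=lambda r: r.last_modified, reverse=True)
    stale = [r.commit_hash for r in revisions[1:]]  # everything but the newest
    if stale:
        strategy = cache.delete_revisions(*stale)
        print(f"{repo.repo_id}: freeing {strategy.expected_freed_size_str}")
        strategy.execute()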

Webserver is willing and able to share its port on ubuntu 22.04

I was able to start two instances of koboldai on one machine, and they happily shared port 5000.

While this is interesting behavior, it completely messed up the UI (both United and base).

To reproduce, simply start KAI. Then open another terminal and start another KAI instance. Then browse to localhost:5000 and try to use it.
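
A minimal demonstration of the likely mechanism; that the server's listener sets SO_REUSEPORT is my assumption about the cause:

# Run this script twice; with SO_REUSEPORT both binds succeed on Linux, and the
# kernel load-balances incoming connections between the two listeners -- which
# would scramble a socket.io-based UI exactly as described.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
s.bind(("127.0.0.1", 5000))
s.listen()
print("bound to 127.0.0.1:5000")
input("press Enter to exit")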

What is the meaning of this

Kobold returned error: 507 INSUFFICIENT STORAGE {"detail":{"msg":"KoboldAI ran out of memory: CUDA error: out of memory","type":"out_of_memory.unknown.unknown"}}

If OpenAI API decides to send empty response, UI2 errors, and loads forever.

This is a pretty common thing for Davinci-003 to do: if it feels like not continuing, it won't; it expects the user to write newlines or remove periods if they want the writing to continue. This seems to cause an issue with Kobold, however.

The error can be seen in the following hastebin: https://hastebin.com/raw/ogufuvinet

Steps to reproduce: Load OpenAI API as the model on UI2 and attempt to get the API to send an empty response; in my case I prompted with "Write the end of a story".

Debug Dump: https://files.catbox.moe/ie6aes.json

Importing aetherroom prompts with more than one variable fails

Attempting to import a prompt from aetherroom with multiple variables (ex. https://aetherroom.club/5010) fails with

TypeError: _replace_placeholders() missing 1 required positional argument: 'ph_ids'

Prompts with a single variable (e.g. 5072) work as intended.

Traceback:
ERROR | __main__:g:597 - An error has been caught in function 'g', process 'MainProcess' (3912), thread 'MainThread' (5656):
Traceback (most recent call last):

File "B:\python\lib\site-packages\eventlet\green\thread.py", line 43, in __thread_body
func(*args, **kwargs)
│ │ └ {}
│ └ ()
└ <bound method Thread._bootstrap of <Thread(Thread-93, started daemon 2002789609856)>>

File "B:\python\lib\threading.py", line 890, in _bootstrap
self._bootstrap_inner()
│ └ <function start_new_thread.<locals>.wrap_bootstrap_inner at 0x000001D24FC2F280>
└ <Thread(Thread-93, started daemon 2002789609856)>

File "B:\python\lib\site-packages\eventlet\green\thread.py", line 64, in wrap_bootstrap_inner
bootstrap_inner()
└ <bound method Thread._bootstrap_inner of <Thread(Thread-93, started daemon 2002789609856)>>

File "B:\python\lib\threading.py", line 932, in _bootstrap_inner
self.run()
│ └ <function Thread.run at 0x000001D23122FA60>
└ <Thread(Thread-93, started daemon 2002789609856)>

File "B:\python\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
│ │ │ │ │ └ {}
│ │ │ │ └ <Thread(Thread-93, started daemon 2002789609856)>
│ │ │ └ (<socketio.server.Server object at 0x000001D24F28BF40>, 'tgI1Q2tzD4j1zXTZAAAD', 'O_tReKAEGPbLdifiAAAC', ['configure_prompt', ...
│ │ └ <Thread(Thread-93, started daemon 2002789609856)>
│ └ <bound method Server._handle_event_internal of <socketio.server.Server object at 0x000001D24F28BF40>>
└ <Thread(Thread-93, started daemon 2002789609856)>

File "B:\python\lib\site-packages\socketio\server.py", line 731, in _handle_event_internal
r = server._trigger_event(data[0], namespace, sid, *data[1:])
│ │ │ │ │ └ ['configure_prompt', {'what is the room filled with': 'Warm Cookies', 'who are you locked in with': 'Little Caprice'}]
│ │ │ │ └ 'tgI1Q2tzD4j1zXTZAAAD'
│ │ │ └ '/'
│ │ └ ['configure_prompt', {'what is the room filled with': 'Warm Cookies', 'who are you locked in with': 'Little Caprice'}]
│ └ <function Server._trigger_event at 0x000001D24ED044C0>
└ <socketio.server.Server object at 0x000001D24F28BF40>

File "B:\python\lib\site-packages\socketio\server.py", line 756, in _trigger_event
return self.handlers[namespace][event](*args)
│ │ │ │ └ ('tgI1Q2tzD4j1zXTZAAAD', {'what is the room filled with': 'Warm Cookies', 'who are you locked in with': 'Little Caprice'})
│ │ │ └ 'configure_prompt'
│ │ └ '/'
│ └ {'/': {'get_model_info': <function get_model_info at 0x000001D24F42C940>, 'OAI_Key_Update': <function get_oai_models at 0x000...
└ <socketio.server.Server object at 0x000001D24F28BF40>

File "B:\python\lib\site-packages\flask_socketio_init_.py", line 282, in _handler
return self._handle_event(handler, message, namespace, sid,
│ │ │ │ │ └ 'tgI1Q2tzD4j1zXTZAAAD'
│ │ │ │ └ '/'
│ │ │ └ 'configure_prompt'
│ │ └ <function UI_2_configure_prompt at 0x000001D24F450C10>
│ └ <function SocketIO._handle_event at 0x000001D24F1125E0>
└ <flask_socketio.SocketIO object at 0x000001D24F28BE20>

File "B:\python\lib\site-packages\flask_socketio_init_.py", line 828, in _handle_event
ret = handler(*args)
│ └ ({'what is the room filled with': 'Warm Cookies', 'who are you locked in with': 'Little Caprice'},)
└ <function UI_2_configure_prompt at 0x000001D24F450C10>

File "aiserver.py", line 597, in g
return f(*a, **k)
│ │ └ {}
│ └ ({'what is the room filled with': 'Warm Cookies', 'who are you locked in with': 'Little Caprice'},)
└ <function UI_2_configure_prompt at 0x000001D24F450940>

File "aiserver.py", line 8494, in UI_2_configure_prompt
import_buffer.replace_placeholders(data)
│ │ └ {'what is the room filled with': 'Warm Cookies', 'who are you locked in with': 'Little Caprice'}
│ └ <function ImportBuffer.replace_placeholders at 0x000001D24F26C9D0>
└ ImportBuffer(prompt='You wake up and start to look around, you seem to be in a room filled with Warm Cookies, there is also a...

File "aiserver.py", line 465, in replace_placeholders
self.world_infos[i][key] = self._replace_placeholders(self.world_infos[i][key])
│ │ │ │ │ │ │ │ │ └ 'content'
│ │ │ │ │ │ │ │ └ 0
│ │ │ │ │ │ │ └ [{'key_list': ['The Room'], 'keysecondary': [], 'content': 'The room is locked, you cannot escape the room. It is not possibl...
│ │ │ │ │ │ └ ImportBuffer(prompt='You wake up and start to look around, you seem to be in a room filled with Warm Cookies, there is also a...
│ │ │ │ │ └ <function ImportBuffer._replace_placeholders at 0x000001D24F26C940>
│ │ │ │ └ ImportBuffer(prompt='You wake up and start to look around, you seem to be in a room filled with Warm Cookies, there is also a...
│ │ │ └ 'content'
│ │ └ 0
│ └ [{'key_list': ['The Room'], 'keysecondary': [], 'content': 'The room is locked, you cannot escape the room. It is not possibl...
└ ImportBuffer(prompt='You wake up and start to look around, you seem to be in a room filled with Warm Cookies, there is also a...

TypeError: _replace_placeholders() missing 1 required positional argument: 'ph_ids'

kobold_debug.zip

Errors when attempting to use a custom API endpoint.

If using UI2 and selecting Kobold API, an error pops up:

Error at new_ui:3385
Uncaught ReferenceError: check_enable_model_load is not defined
Please report this error to the developers.

Looking through the code, I can see that this function does not appear to be defined anywhere.
