
SillyTavern's Introduction

English | 中文 | 日本語 | Русский

Mobile-friendly layout, Multi-API (KoboldAI/CPP, Horde, NovelAI, Ooba, OpenAI, OpenRouter, Claude, Scale), VN-like Waifu Mode, Stable Diffusion, TTS, WorldInfo (lorebooks), customizable UI, auto-translate, and more prompt options than you'd ever want or need + ability to install third-party extensions.

Based on a fork of TavernAI 1.2.8

Important news!

  1. We have created a Documentation website to answer most of your questions and help you get started.

  2. Missing extensions after the update? Since the 1.10.6 release version, most of the previously built-in extensions have been converted to downloadable add-ons. You can download them via the built-in "Download Extensions and Assets" menu in the extensions panel (stacked blocks icon in the top bar).

  3. Unsupported platform: android arm LEtime-web. 32-bit Android requires an external dependency that can't be installed with npm. Use the following command to install it: pkg install esbuild. Then run the usual installation steps.

Brought to you by Cohee, RossAscends, and the SillyTavern community

What is SillyTavern or TavernAI?

SillyTavern is a user interface you can install on your computer (and Android phones) that allows you to interact with text generation AIs and chat/roleplay with characters you or the community create.

SillyTavern is a fork of TavernAI 1.2.8 which is under more active development and has added many major features. At this point, they can be thought of as completely independent programs.

Screenshots


Branches

SillyTavern is being developed using a two-branch system to ensure a smooth experience for all users.

  • release - 🌟 Recommended for most users. The most stable branch, updated only when major releases are pushed.
  • staging - ⚠️ Not recommended for casual use. This branch has the latest features, but be cautious as it may break at any time. Only for power users and enthusiasts.

If you're not familiar with using the git CLI or don't understand what a branch is, don't worry! The release branch is the better choice for you.

What do I need other than SillyTavern?

On its own, SillyTavern is useless, as it's just a user interface. You need access to an AI backend that can act as the roleplay character. There are various supported backends: the OpenAI API (GPT), KoboldAI (either running locally or on Google Colab), and more. You can read more about this in the FAQ.

Do I need a powerful PC to run SillyTavern?

Since SillyTavern is only a user interface, its hardware requirements are tiny; it will run on almost anything. It's the AI backend that needs to be powerful.

Questions or suggestions?

We now have a community Discord server

Join our Discord community! Get support, share favorite characters and prompts.

Or get in touch with the developers directly:

This version includes

  • A heavily modified TavernAI 1.2.8 (more than 50% of code rewritten or optimized)
  • Swipes
  • Group chats: multi-bot rooms for characters to talk to you or each other
  • Chat checkpoints / branching
  • Advanced KoboldAI / TextGen generation settings with a lot of community-made presets
  • World Info support: create rich lore or save tokens on your character card
  • OpenRouter connection for various APIs (Claude, GPT-4/3.5 and more)
  • Oobabooga's TextGen WebUI API connection
  • AI Horde connection
  • Prompt generation formatting tweaks

Extensions

SillyTavern supports extensions, with some additional AI modules hosted via the SillyTavern Extras API.

  • Author's Note / Character Bias
  • Character emotional expressions (sprites)
  • Auto-Summary of the chat history
  • Sending images to chat, and the AI interpreting the content
  • Stable Diffusion image generation (5 chat-related presets plus 'free mode')
  • Text-to-speech for AI response messages (via ElevenLabs, Silero, or the OS's System TTS)

A full list of included extensions and tutorials on how to use them can be found in the Docs.

UI/CSS/Quality of Life tweaks by RossAscends

  • Mobile UI optimized for iOS, and supports saving a shortcut to the home screen and opening in fullscreen mode.

  • HotKeys

    • Up = Edit last message in chat
    • Ctrl+Up = Edit last USER message in chat
    • Left = swipe left
    • Right = swipe right (NOTE: swipe hotkeys are disabled when the chat bar has something typed into it)
    • Ctrl+Left = view locally stored variables (in the browser console window)
    • Enter (with chat bar selected) = send your message to AI
    • Ctrl+Enter = Regenerate the last AI response
  • User Name Changes and Character Deletion no longer force the page to refresh.

  • Toggle option to automatically connect to API on page load.

  • Toggle option to automatically load the most recently viewed character on page load.

  • Better Token Counter - works on unsaved characters, and shows both permanent and temporary tokens.

  • Better Past Chats View

    • New Chat filenames are saved in a readable format of "(character) - (when it was created)"
    • Chat preview increased from 40 characters to 300.
    • Multiple options for characters list sorting (by name, creation date, chat sizes).
  • By default, the left and right settings panels will close when you click away from them.

  • Clicking the Lock on the nav panel will hold the panel open, and this setting will be remembered across sessions.

  • The nav panel's open or closed status will also be saved across sessions.

  • Customizable chat UI:

    • Play a sound when a new message arrives
    • Switch between round or rectangle avatar styles
    • Have a wider chat window on the desktop
    • Optional semi-transparent glass-like panels
    • Customizable page colors for 'main text', 'quoted text', and 'italics text'.
    • Customizable UI background color and blur amount

⌛ Installation

Warning

  • DO NOT INSTALL INTO ANY WINDOWS CONTROLLED FOLDER (Program Files, System32, etc).
  • DO NOT RUN START.BAT WITH ADMIN PERMISSIONS
  • INSTALLATION ON WINDOWS 7 IS IMPOSSIBLE AS IT CANNOT RUN NODEJS 18.16

🪟 Windows

Installing via Git

  1. Install NodeJS (latest LTS version is recommended)
  2. Install Git for Windows
  3. Open Windows Explorer (Win+E)
  4. Browse to or create a folder that is not controlled or monitored by Windows. (e.g. C:\MySpecialFolder)
  5. Open a Command Prompt inside that folder by clicking in the 'Address Bar' at the top, typing cmd, and pressing Enter.
  6. Once the black box (Command Prompt) pops up, type ONE of the following into it and press Enter:
  • for Release Branch: git clone https://github.com/SillyTavern/SillyTavern -b release

  • for Staging Branch: git clone https://github.com/SillyTavern/SillyTavern -b staging

  7. Once everything is cloned, double-click Start.bat to make NodeJS install its requirements.
  8. The server will then start, and SillyTavern will pop up in your browser.
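
If you prefer to stay in the Command Prompt instead of double-clicking, the same release-branch install can be run as one sequence. This is just a convenience sketch of the steps above, assuming the example folder C:\MySpecialFolder:

cd C:\MySpecialFolder
git clone https://github.com/SillyTavern/SillyTavern -b release
cd SillyTavern
Start.bat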

Installing via SillyTavern Launcher

  1. Install Git for Windows
  2. Open Windows Explorer (Win+E) and make or choose a folder where you want to install the launcher
  3. Open a Command Prompt inside that folder by clicking in the 'Address Bar' at the top, typing cmd, and pressing Enter.
  4. When you see a black box, insert the following command: git clone https://github.com/SillyTavern/SillyTavern-Launcher.git
  5. Double-click on installer.bat and choose what you want to install
  6. After installation, double-click on launcher.bat

Installing via GitHub Desktop

(This allows git usage only through GitHub Desktop; if you want to use git from the command line too, you also need to install Git for Windows.)

  1. Install NodeJS (latest LTS version is recommended)
  2. Install GitHub Desktop
  3. After installing GitHub Desktop, click on Clone a repository from the internet.... (Note: You do NOT need to create a GitHub account for this step)
  4. On the menu, click the URL tab, enter this URL https://github.com/SillyTavern/SillyTavern, and click Clone. You can change the Local path to change where SillyTavern is going to be downloaded.
  5. To open SillyTavern, use Windows Explorer to browse into the folder where you cloned the repository. By default, the repository will be cloned here: C:\Users\[Your Windows Username]\Documents\GitHub\SillyTavern
  6. Double-click on the start.bat file. (Note: the .bat part of the file name might be hidden by your OS; in that case, it will look like a file called "Start". This is what you double-click to run SillyTavern.)
  7. After double-clicking, a large black command console window should open and SillyTavern will begin to install what it needs to operate.
  8. After the installation process, if everything is working, the command console window should show the server running, and a SillyTavern tab should open in your browser.
  9. Connect to any of the supported APIs and start chatting!

🐧 Linux & 🍎 MacOS

For macOS / Linux, all of these steps are done in a Terminal.

  1. Install git and nodeJS (the method for doing this will vary depending on your OS)
  2. Clone the repo
  • for Release Branch: git clone https://github.com/SillyTavern/SillyTavern -b release
  • for Staging Branch: git clone https://github.com/SillyTavern/SillyTavern -b staging
  3. cd SillyTavern to navigate into the install folder.
  4. Run the start.sh script with one of these commands:
  • ./start.sh
  • bash start.sh
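
Putting those steps together, a typical full sequence for the release branch looks like this (a sketch assuming git and Node.js are already installed):

git clone https://github.com/SillyTavern/SillyTavern -b release
cd SillyTavern
./start.sh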

Installing via SillyTavern Launcher

For Linux users

  1. Open your favorite terminal and install git
  2. Download the SillyTavern Launcher with: git clone https://github.com/SillyTavern/SillyTavern-Launcher.git
  3. Navigate to the SillyTavern-Launcher with: cd SillyTavern-Launcher
  4. Start the install launcher with: chmod +x install.sh && ./install.sh and choose what you want to install
  5. After installation, start the launcher with: chmod +x launcher.sh && ./launcher.sh

For Mac users

  1. Open a terminal and install brew with: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Then install git with: brew install git
  3. Download the SillyTavern Launcher with: git clone https://github.com/SillyTavern/SillyTavern-Launcher.git
  4. Navigate to the SillyTavern-Launcher with: cd SillyTavern-Launcher
  5. Start the install launcher with: chmod +x install.sh && ./install.sh and choose what you want to install
  6. After installation, start the launcher with: chmod +x launcher.sh && ./launcher.sh

📱 Mobile - Installing via termux

Note

SillyTavern can be run natively on Android phones using Termux. Please refer to this guide by ArroganceComplex#2659:
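
The linked guide is the authoritative reference. As a rough sketch only, the typical Termux flow mirrors the Linux steps above (package names are the usual Termux ones and may differ on your setup):

pkg update
pkg install git nodejs
git clone https://github.com/SillyTavern/SillyTavern -b release
cd SillyTavern
./start.sh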

API keys management

SillyTavern saves your API keys to a secrets.json file in the server directory.

By default, they will not be exposed to the frontend after you enter them and reload the page.

In order to enable viewing your keys by clicking a button in the API block:

  1. Set the value of allowKeysExposure to true in config.yaml file.
  2. Restart the SillyTavern server.
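
For reference, the relevant entry looks like this (a minimal sketch; the key name comes from the step above, but the surrounding layout of your config.yaml may differ):

# config.yaml (in the SillyTavern server directory)
allowKeysExposure: true

After restarting the server, the API block will offer a button to view the saved keys.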

Remote connections

Most often this is for people who want to use SillyTavern on their mobile phones while their PC runs the ST server on the same wifi network.

However, it can be used to allow remote connections from anywhere as well.

IMPORTANT: SillyTavern is a single-user program, so anyone who logs in will be able to see all characters and chats, and be able to change any settings inside the UI.

1. Managing whitelisted IPs

  • Create a new text file inside your SillyTavern base install folder called whitelist.txt.
  • Open the file in a text editor, and add a list of IPs you want to be allowed to connect.

Both individual IPs and wildcard IP ranges are accepted. Examples:

192.168.0.1
192.168.0.20

or

192.168.0.*

(the above wildcard IP range will allow any device on the local network to connect)

CIDR masks are also accepted (eg. 10.0.0.0/24).

  • Save the whitelist.txt file.
  • Restart your ST server.

Now devices whose IPs are specified in the file will be able to connect.

Note: config.yaml also has a whitelist array, which you can use in the same way, but this array will be ignored if whitelist.txt exists.
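A sketch of the equivalent config.yaml entry (assuming standard YAML list syntax; remember it is ignored whenever whitelist.txt exists):

# config.yaml
whitelist:
  - 192.168.0.20
  - 192.168.0.*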

2. Getting the IP for the ST host machine

After the whitelist has been set up, you'll need the IP of the ST-hosting device.

If the connecting device is on the same wifi network as the ST host, use the ST host's internal wifi IP:

  • For Windows: windows button > type cmd.exe in the search bar > type ipconfig in the console, hit Enter > look for IPv4 listing.

If you (or someone else) want to connect to your hosted ST while not being on the same network, you will need the public IP of your ST-hosting device.

  • While using the ST-hosting device, access this page and look for IPv4. This is what you would use to connect from the remote device.

3. Connect the remote device to the ST host machine

Whichever IP applies to your situation, enter that IP address and port number into the remote device's web browser.

A typical address for an ST host on the same wifi network would look like this:

http://192.168.0.5:8000

Use http:// NOT https://

Opening your ST to all IPs

We do not recommend doing this, but you can open config.yaml and change whitelistMode to false.

You must remove (or rename) whitelist.txt in the SillyTavern base install folder if it exists.

This is usually an insecure practice, so we require you to set a username and password when you do this.

The username and password are set in config.yaml.

After restarting your ST server, any device will be able to connect to it, regardless of its IP, as long as it knows the username and password.
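
A sketch of what that looks like in config.yaml. whitelistMode is the key named above; the exact names of the basic-auth fields below are assumptions for illustration, so check your own config.yaml for the actual fields:

# config.yaml
whitelistMode: false
# The following key names are assumed, not confirmed by this README:
basicAuthMode: true
basicAuthUser:
  username: "your_username"
  password: "your_password"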

Still Unable To Connect?

  • Create an inbound/outbound firewall rule for the port found in config.yaml (an example command follows this list). Do NOT mistake this for port-forwarding on your router; otherwise, someone could find your chat logs, and that's a big no-no.
  • Enable the Private Network profile type in Settings > Network and Internet > Ethernet. This is VERY important for Windows 11, otherwise, you would be unable to connect even with the aforementioned firewall rules.
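
For the firewall rule on Windows, a command-line sketch (run in an elevated prompt, assuming the default port 8000; adjust to whatever port your config.yaml uses, and use dir=out for the outbound rule):

netsh advfirewall firewall add rule name="SillyTavern" dir=in action=allow protocol=TCP localport=8000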

Performance issues?

Try enabling the No Blur Effect (Fast UI) mode on the User settings panel.

I like your project! How do I contribute?

DO's

  1. Send pull requests
  2. Send feature suggestions and issue reports using established templates
  3. Read the readme file and built-in documentation before asking anything

DONT's

  1. Offer monetary donations
  2. Send bug reports without providing any context
  3. Ask questions that have already been answered numerous times

Where can I find the old backgrounds?

We're moving to a 100% original content only policy, so old background images have been removed from this repository.

You can find them archived here:

https://files.catbox.moe/1xevnc.zip

License and credits

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

  • TAI Base by Humi: Unknown license
  • Cohee's modifications and derived code: AGPL v3
  • RossAscends' additions: AGPL v3
  • Portions of CncAnon's TavernAITurbo mod: Unknown license
  • kingbri's various commits and suggestions (https://github.com/bdashore3)
  • city_unit's extensions and various QoL features (https://github.com/city-unit)
  • StefanDanielSchwarz's various commits and bug reports (https://github.com/StefanDanielSchwarz)
  • Waifu mode inspired by the work of PepperTaco (https://github.com/peppertaco/Tavern/)
  • Thanks Pygmalion University for being awesome testers and suggesting cool features!
  • Thanks oobabooga for compiling presets for TextGen
  • KoboldAI Presets from KAI Lite: https://lite.koboldai.net/
  • Noto Sans font by Google (OFL license)
  • Icon theme by Font Awesome https://fontawesome.com (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
  • AI Horde client library by ZeldaFan0225: https://github.com/ZeldaFan0225/ai_horde
  • Linux startup script by AlpinDale
  • Thanks paniphons for providing a FAQ document
  • 10K Discord Users Celebratory Background by @kallmeflocc
  • Default content (characters and lore books) provided by @OtisAlejandro, @RossAscends and @kallmeflocc
  • Korean translation by @doloroushyeonse
  • k_euler_a support for Horde by https://github.com/Teashrock
  • Chinese translation by @XXpE3, 中文 ISSUES 可以联系 @XXpE3

SillyTavern's People

Contributors

50h100a, aabushady, aisu-wata0, artisticmink, bdashore3, berbant, bronya-rand, city-unit, cohee1207, dakraid, deffcolony, donmoralez, dreamgenx, hirosekoichi, kalomaze, kingbased, lenanderson, majick, mweldon, niklaswilson, pyrater, realbeepmcjeep, rossascends, stefandanielschwarz, technologicat, thisispiri, tony-sama, valadaptive, wolfsblvt, yokayo


SillyTavern's Issues

Scale Support

Is your feature request related to a problem? Please describe.
Scale is still a functional API and is useful in many contexts. Would be nice to have official support like in other forks, especially if they walk their content moderation back. I can cobble it in myself but would prefer not to brute-force it.

[Request] Allow for image generation through A1111 / Oobabooga sd_api_picture

Is your feature request related to a problem? Please describe.
When using sd_api_picture in Oobabooga, the chatbot can generate images, but SillyTavern shows the HTML code instead.
When it receives an <img src=...> tag, it puts it inside double quotes, preventing the image from showing up.
Removing the quotes manually allows the image to display correctly, meaning the issue is in the formatting itself.

Describe the solution you'd like
When an <img src=...> tag is received from the connected API, it should not be wrapped in quotes.

Describe alternatives you've considered
//

Additional context

[Request] Option to keep sending request if fetched overloaded server error

Is your feature request related to a problem? Please describe.
GPT-3.5 Turbo works like trash right now. It sends the error below, which requires repeated clicks on the send button to get through.

Describe the solution you'd like
A checkbox that makes the client send the request in a loop if it gets the error about an overloaded server.

Describe alternatives you've considered
Manually clicking the send button

Additional context
error: {
  message: 'That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID %ID% in your message.)',
  type: 'server_error',
  param: null,
  code: null
}

[Request] Chat cloning functionality

Is your feature request related to a problem? Please describe.
Sometimes there are interesting moments in the chats that I would like to play out in different ways, i.e. the bot responds interestingly in the message history, or I get a good swipe and want to save it for reuse. For this I copy the chat file and then use the copy. But this is tedious, because you have to go into the tavern folder, click public, chats, the {{char}} folder, search for the latest file, then copy it and rename the copy.

Describe the solution you'd like
Chat cloning functionality in the options menu.

Describe alternatives you've considered
Manually copying chat files.

Add support for Claude API

Some users, including myself, were able to obtain access to the Anthropic Claude API.
I would like to request that you add a way to use it with Tavern, since some users prefer using it rather than Agnai.

[BUG] Ad blockers block the API connections icon.

Bug Description
I have noticed that certain web extensions, such as anti-banner features and ad blockers, cause the API connections icon to become disabled.

Reproduce the Bug

  1. Enable the anti-banner feature of the Kaspersky Chrome extension.
  2. Visit 127.0.0.1:8000 or localhost:8000


I solved this easily by adding an exception for localhost in the anti-banner feature.

I suspect that this issue may be caused by the hyperlinks inside the drawer.

Thanks as always. 👍

Add support for the AI Horde

AI Horde is a crowdsourced cluster of workers for text and image generation that can be used anonymously and/or for free. It provides a fully documented REST API that you can easily integrate into your front-end. It can also generate images for further immersion in the chat, and it can handle captioning and other image interrogation.

I am the creator, so feel free to hit me up with questions on Discord.

Could you let us save "Author's Note / Character Bias" input just like Main Prompt and NSFW Prompt?

Is your feature request related to a problem? Please describe.

  • I realized the [Author's Note / Character Bias] input is sometimes saved and sometimes not... not sure if this is intended

Describe the solution you'd like

  • It would be nice if we could have a save button for [Author's Note / Character Bias] to let us save it when needed

Describe alternatives you've considered

  • Or... maybe such a function can be a normal function like the world info main prompt.

Placeholders are not working in A/N / some extra suggestions about it

I don't know if I'm missing something or it is intended, but currently placeholders ({{char}}/{{user}}) are not working in the Author's Note.

This also happens with the NSFW/jailbreak prompt when 'Jailbreak as system message' is checked.

Also, when someone uses the A/N, it seems like they often place it after the user's last message, for the reason below. Please consider changing the default setting to that (In-chat, every 1 message, depth 0). The ability to create two A/Ns, to use After Scenario and In-chat simultaneously, would be a fine addition too.

Personally, I'd like to suggest a feature somewhat similar to the A/N but one that applies globally to all chats. This can be accomplished to some extent by checking Constant in World Info, but it's not really the same, because I can't strengthen its impact by placing it after the user's last message.

Thank you for your consideration!

[BUG] No response with Oobabooga 1-click installer on Windows 10

Describe the bug
SillyTavern correctly boots up, but the characters refuse to respond.

To Reproduce
Steps to reproduce the behavior:

  1. Install oobabooga 1-click-install text generation webui (and nodejs and sillytavern)
  2. edit "start-webui.bat" to "call python server.py --load-in-8bit --model pygmalion-6b --no-stream --extensions api"
  3. Launch oobabooga, if you test it by itself (against the default notebook bot), it works normally
  4. Try using SillyTavern, no response

Expected behavior
It should reply

Desktop (please complete the following information):

  • OS/Device: Windows 10
  • Environment: local
  • Browser: Brave (Chromium-based)
  • Generation API: Oobabooga
  • Branch: main
  • Model: Pygmalion 6b (default release, not dev nor first build)

Additional context
Error log on oobabooga side:

127.0.0.1 - - [11/Apr/2023 22:14:00] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [11/Apr/2023 22:14:00] code 404, message Not Found
127.0.0.1 - - [11/Apr/2023 22:14:00] "GET /api/v1/config/soft_prompts_list HTTP/1.1" 404 -
Traceback (most recent call last):
  File "C:\Program Files\oobabooga-windows\installer_files\env\lib\site-packages\gradio\routes.py", line 393, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Program Files\oobabooga-windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1108, in process_api
    result = await self.call_function(
  File "C:\Program Files\oobabooga-windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 929, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Program Files\oobabooga-windows\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Program Files\oobabooga-windows\installer_files\env\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Program Files\oobabooga-windows\installer_files\env\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Program Files\oobabooga-windows\installer_files\env\lib\site-packages\gradio\utils.py", line 490, in async_iteration
    return next(iterator)
  File "C:\Program Files\oobabooga-windows\text-generation-webui\modules\api.py", line 28, in generate_reply_wrapper
    for i in generate_reply(params[0], generate_params):
  File "C:\Program Files\oobabooga-windows\text-generation-webui\modules\text_generation.py", line 167, in generate_reply
    input_ids = encode(question, generate_state['max_new_tokens'], add_bos_token=generate_state['add_bos_token'])
KeyError: 'add_bos_token'

Error log on SillyTavern side:

{ error: null }

Another notable thing is that I previously ran oobabooga using the same settings on WSL2 and SillyTavern worked fine, but now that I've switched to running directly on Windows 10, it broke.

Saving prompts into presets and linking a character to a preset

Is your feature request related to a problem? Please describe.
Presets have a strong effect on a character and strongly influence roleplay. I'm asking for functionality that remembers prompts and adds them to a list, so I don't have to copy-paste text manually every time I want a different roleplay narration. For example, I could create a preset for vivid, verbose NSFW, a preset where the character initiates NSFW, a preset where I just want SFW slice of life, etc. I would also like a checkbox (or whatever you think would work best) that links the selected preset to the opened character.

Describe the solution you'd like
Ability to link a character to a preset. Ability to open, edit, add, and delete presets. Presets would contain the NSFW prompt and main prompt.

Describe alternatives you've considered
Manually copy-pasting prompts

Additional context

Jailbreak prompt

Redoing this since SillyLossy said OAI support is in the dev branch.
Someone from /aids/ made this jailbreak prompt.

TurboJailbreak

Needs testing to see how effective it is, but with the right tweaking it can work great. If not, a toggle for this would be great too.

[BUG] Invalid/Expired Token warning when using Poe's cookies when there are custom bots

Describe the bug
Invalid/Expired Token warning when using Poe's cookies and there are custom bots

To Reproduce
Steps to reproduce the behavior:

  1. Log in to Poe
  2. Make a custom bot
  3. Boot up SillyTavern
  4. Get Poe's p-b cookie value
  5. Insert Poe's p-b cookie value in SillyTavern
  6. Error Pop-up

Expected behavior
It should work regardless of whether the user has a custom bot or not.

Desktop (please complete the following information):

  • OS/Device: Windows 11
  • Environment: tested locally, untested on cloud
  • Browser: Firefox
  • Generation API: Poe
  • Branch: main
  • Model: none

Additional context
Works normally on Poe accounts without custom bots. SillyTavern seems to throw this error when it tries to download the custom bot after ChatGPT.

[Feature Request] Chat messages outside context window marked

Is your feature request related to a problem? Please describe.
There is no simple way inside the GUI to identify when a past chat message is truncated out of the prompt because of the user/model context token length restriction. This would be useful for visualizing how much of the chat the bot is aware of, and for manually managing bot knowledge, memories, story context, etc.

Describe the solution you'd like
I would like a checkbox inside the power-user section of the user tab with the option to grey out a box around messages outside of context. One way it could be done is by storing the data locally in a file, with the character and the number of messages that fit in the last prompt. Ideally it would update when the user changes the context size slider, but that may trigger a lot of recalculations at once.

Describe alternatives you've considered
You can scroll up in the terminal and see what the last message sent before the permanent block is, but that can be a lot of scrolling. You also have to send a message to see it; there is no way that I know of to do it for a character after restarting SillyTavern without sending a message.

Additional context

koboldcpp token usage

If I have, let's say, tokens set at 200, why does it fill the entire 200 tokens instead of cutting the reply when it finishes writing an answer? There is an option for single-line mode in both koboldcpp and SillyTavern, but it doesn't stop generation like it does in the ooba webui; it only stops the extra lines from appearing in the UI, so the wait times are long even if the returned reply is only 10 words. And if I set the token count too low, the replies have no potential to be longer.

Is there a way to stop it from having a conversation with itself to fill the entire token limit?

Also, what is the prompt-processing BLAS step that frequently takes 2-3 minutes to go through? And is there a way to enable text streaming?

Thanks!

[BUG] Local API problem - blocked by CORS policy

Describe the bug

Not sure if it is a bug, but it is definitely something. I run KoboldAI locally, and the KoboldAI API works fine with my local TavernAI, but it has issues with SillyTavern.

To Reproduce
Run KoboldAI locally and try to use its API with SillyTavern.

Expected behavior
TavernAI has no problem using the local API, so SillyTavern should not either?

Screenshots

Desktop (please complete the following information):

  • OS/Device: Windows 10
  • Environment: local
  • Browser Opera
  • Generation API: KoboldAI
  • Branch: main
  • Model: Pygmalion 6b

Anyway, thank you for the amazing job and for bringing TavernAI to life :)

[Feature Request] Allow Pyg to disable "Always add character's name to prompt"

Describe the bug

When Always add character's name to prompt is disabled, the character's name is still prepended in the prompt.

To Reproduce

Steps to reproduce the behavior:

  1. Disable Always add character's name to prompt
  2. Generate
  3. View context in terminal

Expected behavior

The character's name should not be prepended in the prompt.

Screenshots

Character's name is DEBUG: and Always add character's name to prompt was disabled:


Desktop (please complete the following information):

  • OS/Device: Windows 10
  • Environment: Local
  • Browser: Chrome
  • Generation API: KoboldAI
  • Branch: Dev
  • Model: Pygmalion 6B

[BUG] Cannot upload characters, nor create them.

Describe the bug
Using the Colab, no characters I upload will appear in the character list. I've tried importing them from both card.png and .json files, as well as placing the files in the MyDrive/TavernAI/characters folder outlined as the character directory by your code. Adding a character by manually filling it out also doesn't work, resulting in no saving occurring. My Drive is mounted, and the default three characters show up no matter what.

To Reproduce
Steps to reproduce the behavior:

  1. Go to the Colab linked in the Readme on this repo.
  2. Make copy in your Google Drive (I did this in order to save a custom model entry so I wouldn't have to redo it every time.)
  3. Load a model. I loaded a custom model from huggingface, not any of the default ones, if that matters.
  4. Wait for cell to run, get link.
  5. Connect to OAI API using key, success. GPT 3.5 AND 4 result in the issue, version does not matter.
  6. Go to character tab
  7. Attempt to upload character from computer
  8. Tavern pauses, then acts as if nothing has happened
  9. Check for characters; only Megumin, Aqua, and Darkness are there, still.
  10. Open the plus tab for new characters
  11. Fill out all sections
  12. Hit save, only for it to not.
  13. Disconnect and delete the runtime.
  14. Put the .png of the character card in MyDrive/TavernAI/Characters
  15. Relaunch, wait for it to run, click the link
  16. Check characters, only to still have the default three.

Expected behavior
I expected my character to upload in at least one of those instances, especially since it's in the folder with the rest of them, and in the same format.

Screenshots
If needed, I can, but it's less of a screenshotable issue, since it doesn't provide error codes or obviously break. It's more of a "nothing happens at all" error.

Desktop (please complete the following information):

  • OS/Device: Linux Mint / HP Pavilion 15a
  • Environment: Cloud, Colab
  • Browser: Firefox
  • Generation API: OpenAI
  • Branch: Unsure, probably main. I didn't specify on my Vicuna model; could that be a source of the problem?
  • Model: vicuna-13b-4bit-128g, tacked on at the end in the code since Pygmalion threatens banning and there's no implemented way to NOT load a model.

Additional context
Let me know if you'd like the stack.

give bot sorting by date added and amount of messages

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Suggestions for Improving Group Chat

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

It's difficult to make the desired character speak in the next conversation. Currently, we proceed with the conversation by excluding all other characters in the group, so the desired character can speak. However, as the number of characters increases, this becomes difficult.

Describe the solution you'd like
A clear and concise description of what you want to happen.

I would appreciate it if you could add a feature to the group chat that allows us to choose the character to speak next directly.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

I believe a feature in "Character Management" is necessary to set the order of the conversation. (I understand that in group chats, the character at the top of the list speaks first. It would also be helpful if we could change the character order by dragging, or set the order using numbers. Since there are cases where we want only one specific person to speak, it would be great if we could set the number of people speaking next.)

Additional context

"This translation has been done by Chat GPT-4."

[BUG] Output generation significantly slower if the chat window is active

Edit: So I just realized that Firefox is showing 100% GPU usage the second I submit my prompt. Tabbing out or minimizing the browser immediately drops the usage back down. Disabling hardware acceleration completely fixes the issue, but I have blur and chat bubbles disabled, so it's still odd it would be using this many resources.

Describe the bug
I have a bit of a strange issue going on. Output generation is extremely slow when I have the chat window open. I ran some tests by re-generating the same reply on the same seed. The difference in generation time between the window being visible or not ranges from ~5.5 to 6.5 seconds for a new chat and ~13 to 60+ seconds for a long-running one. This was tested with Firefox. I tried Edge as well. Strangely, the difference there is less significant but still noticeable.

Some generation times in seconds tracked with WSplit:

Fresh chat, first reply
Window Open
6.50 6.49 6.59 6.51
Tabbing Out
5.40 5.66 5.63 5.45

Long chat, 46th reply
Window Open
120.00+ (gave up) 120.00+ (gave up)
Tabbing Out
13.01 13.20

To Reproduce
Steps to reproduce the behavior:

  1. Launch SillyTavern, open it with Firefox, and connect to Oobabooga
  2. Pick a bot and generate a response
  3. Set to a fixed seed. Regenerate the response. Track generation time.
  4. Click regenerate again and immediately minimize the browser
  5. Check generation time difference

Expected behavior
Generation time with the window open is the same as minimized.

Desktop (please complete the following information):

  • OS/Device: [e.g. Windows 11]
  • Environment: [cloud, local]
  • Browser [e.g. chrome, safari]
  • Generation API [e.g. KoboldAI, OpenAI]
  • Branch [main, dev]
  • Model [e.g. Pygmalion 6b, LLaMa 13b]

Additional context

  • Just being out of focus isn't enough. I must switch to a different tab or minimize the browser to fix the generation speed
  • I am using this connected locally to Oobabooga, launching with the following parameters: --wbits 4 --groupsize 128 --auto-devices --notebook --auto-devices --no-stream --gpu-memory 9536MiB
  • Using an 11gb 1080TI on Windows 10
  • I noticed the issue when replies would take 4+ minutes to generate. Eventually I realized that I'd always hear the notification sound 10-15 seconds after changing tabs.
  • Oobabooga's "Output generated in x seconds" log is not accurate and is a bad metric for time tracking.
  • Using gpt-x-alpaca-13b-native-4bit-128g-cuda.pt

[BUG] Crash on startup

Describe the bug
Crash on startup. The webpage loads, but the server crashes immediately.

To Reproduce
Steps to reproduce the behavior:

  1. Run Start.bat
  2. Server crashes immediately after opening the webpage.

Expected behavior
Start the server without any issues.

Log

E:\ProgramsE\TavernAI-SillyLossy Mod>call npm install

up to date, audited 194 packages in 659ms

22 packages are looking for funding
  run `npm fund` for details

7 high severity vulnerabilities

To address all issues possible (including breaking changes), run:
  npm audit fix --force

Some issues need review, and may require choosing
a different dependency.

Run `npm audit` for details.
Launching...
TavernAI started: http://127.0.0.1:8000
E:\ProgramsE\TavernAI-SillyLossy Mod\server.js:680
    var base64DecodedData = Buffer.from(textChunks[0].text, 'base64').toString('utf8');
                                                      ^

TypeError: Cannot read properties of undefined (reading 'text')
    at charaRead (E:\ProgramsE\TavernAI-SillyLossy Mod\server.js:680:55)
    at E:\ProgramsE\TavernAI-SillyLossy Mod\server.js:700:28
    at Array.forEach (<anonymous>)
    at E:\ProgramsE\TavernAI-SillyLossy Mod\server.js:698:18
    at FSReqCallback.oncomplete (node:fs:198:23)

Node.js v18.15.0
Press any key to continue . . .

Desktop (please complete the following information):

  • OS/Device: Windows 10
  • Environment: local
  • Browser: Firefox
  • Generation API: KoboldAI
  • Branch: main
  • Model: KoboldAI_OPT-13B-Nerybus-Mix

Additional context
It was starting normally last week. It only started crashing when I launched it today, I guess because it updated itself to the latest version. It said that there was an NPM update and it updated itself.

OAI Streaming support

Is your feature request related to a problem? Please describe.
I liked the streaming implemented in TAI-Turbo since I could see the reply coming in which felt natural and intuitive.

Describe the solution you'd like
Snooping around the code reveals you were probably trying to get it working in your fork at some point. The solution would be to have that happen. (I wanted to look into it to maybe make a PR, but I know jack about JS.)

Describe alternatives you've considered
I tend to still use TAI-Turbo just out of liking that feature....

Additional context
(I was curious about the OAI model being used but seems you can change it just fine now.)

Import/Export data (Or inplace upgrade?)

I know you can drag and drop your characters, chat logs, and settings by copying the contents of the public folder from your old version to your new one, but it would be nice not to have to remote into the server to do that, and instead just do an import/export of all the data right from the website.

This might also tie in with the ability to update it from the website as well, but that may be a bit more involved, since it's essentially keeping a shell alive long enough to overwrite data files.

[BUG] OpenAI GPT-4 query throwing error saying no model

Describe the bug
When setting the OpenAI connection to GPT-4, then typing a chat, the console throws an error from OpenAI saying GPT-4 is not a valid model. The other model selections do not do this. I have an OpenAI subscription, and can manually choose GPT-4 on their website.

{
  error: {
    message: 'The model: gpt-4 does not exist',
    type: 'invalid_request_error',
    param: null,
    code: 'model_not_found'
  }
}

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'Settings'
  2. Click on 'Model' and choose GPT-4
  3. Close settings and start a chat
  4. When there is no reply, look at the console window and see the error

Expected behavior
A response.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS/Device: Windows 11
  • Environment: Local
  • Browser Edge or Chrome
  • Generation API OpenAI
  • Branch main
  • Model GPT-4

Additional context

Crash when connecting to koboldcpp

Heard from someone that they're doing this, so I set everything up to try it out, but when I go to connect the API, the running Tavern dies immediately. Any idea why that would be, or whether there are any logs anywhere that might give a hint?

Updated server.js from the edit 13 hours ago, and now I get this:

An error occurred: Error parsing response. response: [Error: HTTP Server is running, but this endpoint does not exist. Please check the URL.], error: [SyntaxError: Unexpected token E in JSON at position 0]

koboldcpp is spamming this

127.0.0.1 - - [09/Apr/2023 00:45:10] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:45:10] "GET /api/latest/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:45:11] "GET /sw.js HTTP/1.1" 404 -
127.0.0.1 - - [09/Apr/2023 00:45:11] "GET /manifest.json HTTP/1.1" 404 -
127.0.0.1 - - [09/Apr/2023 00:45:28] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:45:28] "GET /api/v1/config/soft_prompts_list HTTP/1.1" 404 -
127.0.0.1 - - [09/Apr/2023 00:49:02] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:49:02] "GET /api/v1/config/soft_prompts_list HTTP/1.1" 404 -
127.0.0.1 - - [09/Apr/2023 00:52:04] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:52:04] "GET /api/v1/config/soft_prompts_list HTTP/1.1" 404 -
127.0.0.1 - - [09/Apr/2023 00:52:07] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:52:10] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:52:13] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:52:16] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:52:19] "GET /api/v1/model HTTP/1.1" 200 -
127.0.0.1 - - [09/Apr/2023 00:52:22] "GET /api/v1/model HTTP/1.1" 200 -

Never mind, it's running even with the error message.

[Feature request] Insertion Depth Added to Character Card / Character Card Boxes / World Info

Is your feature request related to a problem? Please describe.

Entries added to the top of the context have less weight than entries towards the bottom of the context. If you have the context size set to 2048, then the character card has little weight after a while, and this means World Info entries have little weight as well.

Describe the solution you'd like

Adding insertion depth to the various context boxes/character card/World Info entries would provide greater control. This would also allow characters to stay 'in character' even when using a context size of 2048, as it would allow the character card to sit lower in the context. This also improves short-term memory, as the conversation would be allowed to "flow" over the character card. Right now, you would limit the context size on purpose to increase the strength of your character card, which is not ideal, as you lose short-term memory.


Describe alternatives you've considered

  • Insertion Depth could be just one setting applied to an entire character card itself (as one big block), so there'd be only one insertion depth box for it. Then WI with "After Chara" or "Before Chara" would automatically have a stronger effect accordingly.
  • Insertion Depth could be just added to WI, then people could use WI as a means to create their characters to have the same effect.

There could be other solutions, but this is the first one that comes to mind! Thank you!

[Feature Request] Resizable textareas? Or make them bigger at least.

Currently, the Description, Impersonation, Jailbreak, NSFW, Main, and First message textareas are too small for comfortable use (in my experience), especially the Description one.
Describe the solution you'd like
If it's possible, I'd like to have them resizable. If not, increase the default size of the Description field at least, since it doesn't really affect anything negatively.

[BUG]

The Poe integration seems to be broken. I get this error in the console:
"OPTIONS /api/poe/status HTTP/1.1" 404 -

Allow left and right menus (presets and characters) to overlap the message space

Is your feature request related to a problem? Please describe.
Prior to commit 0c8f068, the character menu was allowed to overlap the message area like so:

Commit 0c8f068 changed it so that this menu will not overlap the text area. Furthermore, the presets menu is now similarly locked to the margin left of the text area.

The problem I have is that now these two menus get squished and are very annoying to use because they are not allowed to overlap the message area anymore. See below for my experience (locks were activated just for taking the screenshot).


Describe the solution you'd like
Please allow these menus to overlap the message area like they did prior to 0c8f068.

Describe alternatives you've considered
I can zoom out my browser to increase the size of the menus, but this reduces the font size for the whole UI and makes things hard to read. My eyes are not very good, and I typically need 120% or more zoom.

Unable to run on Termux: address already in use

Hi! Yesterday I was running SillyTavern on my Android normally, but today I couldn't use it anymore. I'm using Termux as listed in the guide. Can you help me fix this issue?
~/TavernAI $ node server.js
node:events:490
throw er; // Unhandled 'error' event
^

Error: listen EADDRINUSE: address already in use 0.0.0.0:8000
at Server.setupListenHandle [as _listen2] (node:net:1740:16)
at listenInCluster (node:net:1788:12)
at doListen (node:net:1937:7)
at process.processTicksAndRejections (node:internal/process/task_queues:83:21)
Emitted 'error' event on Server instance at:
at emitErrorNT (node:net:1767:8)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
code: 'EADDRINUSE',
errno: -98,
syscall: 'listen',
address: '0.0.0.0',
port: 8000
}

Node.js v19.6.1

[BUG] when I try to use with oobabooga (text-generation-webui) I get a #### { error: 'This app has no endpoint /api/textgen/.' }

Describe the bug
When I try to use it with oobabooga, the first time I type something
I get #### { error: 'This app has no endpoint /api/textgen/.' } in the SillyTavern console window.
The UI just stays on a spinning circle but never generates anything.

No debug output in the text-generation-webui window

To Reproduce
Steps to reproduce the behavior:

  1. Attempt to connect to Text generation web ui
  2. Attempt to chat with a bot

Expected behavior
To respond

Screenshots
I'm pretty sure that before updating oobabooga to the latest version, it showed some information about what it connected to where it now says None, but it was still having the same exact issue.

Desktop (please complete the following information):

  • OS/Device: Windows 10
  • Environment: local
  • Browser: Brave (chromium)
  • Generation API: text-generation-webui
  • Branch: main and dev (just updated dev to latest before writing this bug report)
  • Model: Pygmalion 6b

Additional context
I made sure to run with --no-stream and to choose "notebook" under Interface Mode,
though there is another setting that lets you choose cai-chat, chat, or instruct in the actual chat tab of the UI, and I just chose cai-chat.
I tried updating text-generation-webui.

Feature request: option to play a notification sound when the AI responds

Is your feature request related to a problem? Please describe.
During congestion times it can take up to 2 minutes for a response to come from OAI. And sometimes when it's a gateway timeout, it takes like 5 minutes. For people using local models, a generation can also be very slow. During this time, the chat symbol is spinning, waiting for the response. I usually alt-tab to a different window and check back occasionally.

It would be nice if Tavern played a sound to notify the user that there's new activity.

Describe the solution you'd like
Allow the user to provide a file named notification.wav (or whatever format) and play it when a response arrives or fails. In Automatic's SD webui, if the user places a notification.mp3 in the root folder, it will play automatically when a generation completes.

[Request] Add support for AI Horde API

Info on AI Horde: https://horde.koboldai.net

Hello, I appreciate everything you've done so far; it's the best TavernAI fork I've found. I just wish I could use AI Horde with it.
I'm requesting it, if possible, so that all those without suitable GPUs, or who find CPU mode too slow, can utilize the faster AI Horde.

A fork from 3 weeks ago has done so, but I much prefer to use yours for its enhancements, features, and extensions.
https://github.com/Aspartame-e951/TavernAI/releases/tag/1.0

Thanks for any consideration.

AI Horde API

Dice not working?

Are the dice bugged at the moment? Clicking the icon does nothing for me.

Amazing UI, by the way. Very impressive how easy it was to connect to OpenAI or a locally run oobabooga model.

Options menu in the bottom left doesn't open [BUG]

Describe the bug
I click on the <div id="options_button"> and I get the error below. The menu for creating a new chat does not open.
bookmarks.js:69

Uncaught TypeError: characters[this_chid].chat.includes is not a function
    at showBookmarksButtons (bookmarks.js:69:42)
    at HTMLDivElement.<anonymous> (script.js:3811:13)
    at HTMLDivElement.dispatch (jquery-3.5.1.min.js:2:43090)
    at v.handle (jquery-3.5.1.min.js:2:41074)

To Reproduce
Steps to reproduce the behavior:

  1. Go to the TavernAI
  2. Load character
  3. Click options button in the left bottom
  4. See nothing

Expected behavior
Options menu opens, I can create new chat, regenerate, etc.

Screenshots
Nothing to show.

Desktop (please complete the following information):

  • OS/Device: Windows 10
  • Environment: local
  • Browser: Edge, Google Chrome, Firefox
  • Generation API: OpenAI
  • Branch: tried latest main and dev
  • Model: Turbo 0301

Additional context
I have uBlock, Hola VPN, and Dark Reader installed on Chrome.
This error happens with or without TavernAI-extensions.
I have many character cards. Possibly some of them are breaking your code. If I clone a clean repo, and add a few cards, I can't replicate this error.
I can't create new chats so it's hard to enjoy your Tavern 😢.

[BUG]

Describe the bug
When trying to connect to koboldcpp using the KoboldAI API, SillyTavern crashes/exits.

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'API Connections'
  2. Enter API url: 'http://localhost:5001/api'
  3. Click on 'Connect'
  4. The SillyTavern console window of Node.js disappears unexpectedly, i.e. SillyTavern crashed/exited

Expected behavior
Connection is established and SillyTavern doesn't crash or exit.

Desktop (please complete the following information):

Additional context
Regular TavernAI works with koboldcpp. No crashes/exits there.
The koboldcpp console log shows two entries:
"GET /api/v1/config/soft_prompts_list HTTP/1.1" 404 -
"GET /api/v1/model HTTP/1.1" 200 -
So SillyTavern is definitely able to access the endpoint.

High severity vulnerabilities

Got this when running Tavern now.

FUCK

Any attempt at dealing with this is either ineffective or makes the problem worse.
Any idea what it means? Could my IP and API keys be compromised?

[BUG] Cannot scroll on iOS mobile

Describe the bug
On iPad everything works, but on iPhone XS Max I am unable to scroll through chats in portrait mode (with or without full screen). In landscape on iPhone, scrolling works.

To Reproduce
Steps to reproduce the behavior:

  1. Go to local IP on iPhone
  2. Can browse menus and chats, but there is no scroll bar or ability to scroll through chat conversations while in portrait mode.

Expected behavior
Expect to be able to scroll normally in portrait on iPhone.

Screenshots
N/A

Desktop (please complete the following information):

  • OS/Device: [iPhone XS Max, iOS 14]
  • Environment: [local]
  • Browser [Safari]
  • Generation API [Text generation web UI]
  • Branch [main]
  • Model [Pygmalion 6b]

Additional context
Tried adding <meta name="viewport" content="width=device-width, initial-scale=1"> to index.html, but it did not help.

[BUG] Setting "Every N messages you send" in Author's Note to "0" does not disable it

Describe the bug
Disabling Author's Note, after it's previously been activated, doesn't properly disable it.

To Reproduce
Steps to reproduce the behavior:

  1. Enter text into "Append the following text:" box.
  2. "Every N messages you send" set to 1.
  3. Generate
  4. "Every N messages you send" set to 0.
  5. Generate or Regenerate

Expected behavior
Setting "Every N messages you send" to 0 should disable it and the Author's Note should not be sent to the context.

Screenshots

Desktop (please complete the following information):

  • OS/Device: Windows 10
  • Environment: Local
  • Browser: Chrome, Firefox, OperaGX (tested on all)
  • Generation API: KoboldAI
  • Branch: Dev
  • Model: pygmalion-6b
