
chatgpt-in-slack's Introduction

ChatGPT in Slack

Introducing a transformative app for Slack users, specifically designed to enhance your communication with ChatGPT! This app enables seamless interaction with ChatGPT via Slack channels, optimizing your planning and writing processes by leveraging AI technology.

Discover the app's functionality by installing the live demo from https://bit.ly/chat-gpt-in-slack. Keep in mind that the live demo is personally hosted by @seratch. For corporate Slack workspaces, we strongly advise deploying the app on your own infrastructure using the guidelines provided below.

If you're looking for a sample app operating on Slack's next-generation hosted platform, check out https://github.com/seratch/chatgpt-on-deno 🙌

How It Works

You can interact with ChatGPT just as you do on the website. Within the same thread, the bot remembers what you have already said.

Consider this realistic scenario: ask the bot to generate a business email for communication with your manager.

With ChatGPT, you don't need to ask a perfectly formulated question at first. Adjusting the details after receiving the bot's initial response is a great approach.

Doesn't that sound cool? 😎

Running the App on Your Local Machine

To run this app on your local machine, you only need to follow these simple steps:

# Create an app-level token with connections:write scope
export SLACK_APP_TOKEN=xapp-1-...
# Install the app into your workspace to grab this token
export SLACK_BOT_TOKEN=xoxb-...
# Visit https://platform.openai.com/account/api-keys for this token
export OPENAI_API_KEY=sk-...

# Optional: gpt-3.5-turbo and gpt-4 are currently supported (default: gpt-3.5-turbo)
export OPENAI_MODEL=gpt-4
# Optional: Model temperature between 0 and 2 (default: 1.0)
export OPENAI_TEMPERATURE=1
# Optional: You can adjust the timeout seconds for OpenAI calls (default: 30)
export OPENAI_TIMEOUT_SECONDS=60
# Optional: You can include priming instructions for ChatGPT to fine-tune the bot's purpose
export OPENAI_SYSTEM_TEXT="You proofread text. When you receive a message, you will check
for mistakes and make suggestions to improve the language of the given text"
# Optional: When the string is "true", this app translates ChatGPT prompts into a user's preferred language (default: true)
export USE_SLACK_LANGUAGE=true
# Optional: Adjust the app's logging level (default: DEBUG)
export SLACK_APP_LOG_LEVEL=INFO
# Optional: When the string is "true", translate between OpenAI markdown and Slack mrkdwn format (default: false)
export TRANSLATE_MARKDOWN=true
# Optional: When the string is "true", perform some basic redaction on prompts sent to OpenAI (default: false)
export REDACTION_ENABLED=true

# To use Azure OpenAI, set the following optional environment variables according to your environment
# default: None
export OPENAI_API_TYPE=azure
# default: https://api.openai.com/v1
export OPENAI_API_BASE=https://YOUR_RESOURCE_NAME.openai.azure.com
# default: None
export OPENAI_API_VERSION=2023-05-15
# default: None
export OPENAI_DEPLOYMENT_ID=YOUR-DEPLOYMENT-ID

# Experimental: You can try out the Function Calling feature (default: None)
export OPENAI_FUNCTION_CALL_MODULE_NAME=tests.function_call_example

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python main.py
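For reference, the optional model, temperature, and timeout settings above map onto a ChatCompletion request roughly as follows. This is a minimal sketch assuming the pre-1.0 `openai` Python SDK; `build_chat_kwargs` is a hypothetical helper for illustration, not part of the app:

```python
import os

def build_chat_kwargs(messages: list) -> dict:
    """Assemble ChatCompletion keyword arguments from the environment.

    Defaults mirror the ones documented above; the keys match the
    pre-1.0 openai SDK, where the timeout parameter is request_timeout.
    """
    return {
        "model": os.environ.get("OPENAI_MODEL", "gpt-3.5-turbo"),
        "temperature": float(os.environ.get("OPENAI_TEMPERATURE", "1.0")),
        "request_timeout": int(os.environ.get("OPENAI_TIMEOUT_SECONDS", "30")),
        "messages": messages,
    }

os.environ["OPENAI_MODEL"] = "gpt-4"
kwargs = build_chat_kwargs([{"role": "user", "content": "Hello"}])
print(kwargs["model"])  # gpt-4
```

The resulting `kwargs` would be passed to `openai.ChatCompletion.create(**kwargs)` in an actual call.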

Running the App for Company Workspaces

Confidentiality of information is a top priority for businesses.

This app is open source, so please feel free to fork it and deploy it onto infrastructure that you manage. After completing the local development process above, you can deploy the app using the Dockerfile located in the root directory.

The Dockerfile is designed to establish a WebSocket connection with Slack via Socket Mode. This means that there's no need to provide a public URL for communication with Slack.
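A typical build-and-run sequence might look like the following (the image name is illustrative; pass the same environment variables described in the local setup above):

```shell
# Build the image from the repository root
docker build -t chatgpt-in-slack .

# Socket Mode needs only outbound connectivity, so no ports are exposed
docker run -it \
  -e SLACK_APP_TOKEN=xapp-1-... \
  -e SLACK_BOT_TOKEN=xoxb-... \
  -e OPENAI_API_KEY=sk-... \
  chatgpt-in-slack
```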

Contributions

You're always welcome to contribute! 🙌 When you make changes to the code in this project, please keep these points in mind:

  • When making changes to the app, please avoid anything that could cause breaking behavior. If such changes are absolutely necessary due to critical reasons, like security issues, please start a discussion in GitHub Issues before making significant alterations.
  • When you have the chance, please write some unit tests. Especially when you touch internal utility modules (e.g., app/markdown.py) and add or edit code that does not call any web APIs, writing tests should be relatively easy.
  • Before committing your changes, be sure to run ./validate.sh. The script runs black (code formatter), flake8 and pytype (static code analyzers).

The License

The MIT License

chatgpt-in-slack's People

Contributors

budbach, c-renton, iwamot, masahiro-hamada, mopemope, seratch, sheremet-a, silviot


chatgpt-in-slack's Issues

Bot keeps saying it's based on GPT-3, even if I specify `gpt-4`

Sorry, this is probably not a bug report but just a question.

I use export OPENAI_MODEL=gpt-4 but the bot keeps saying:

I'm based on the GPT-3 model developed by OpenAI.

Is that ok?

(screenshot)

UPD: I run it using Docker, with OPENAI_MODEL set in the .env file:

docker run --env-file ./.env -e OPENAI_MODEL="gpt-4" -it myfork/chatgpt-in-slack

I duplicated OPENAI_MODEL in the command as well to ensure that it's not a .env file issue: the response is still the same.

I forked the repo about a week ago, so the code is fresh

[Feature Request] Add Token Usage Logging

Hey,

first of all, thank you for your work! By far the best solution out there right now :)

Would it be possible to add a few logging options, such as logging the token cost? Since OpenAI itself does not allow accounting for individual API keys, it would be nice to have an overview of how many tokens the individual bot instance has used/requested. This might be a very niche request, but I'd appreciate it.

Thanks!
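For context, ChatCompletion responses carry a `usage` block with per-request token counts, which is what such logging would draw on. A minimal sketch, assuming the response shape OpenAI's API returns (the `log_token_usage` helper itself is hypothetical):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("token-usage")

def log_token_usage(response: dict) -> int:
    """Log the token counts reported in a ChatCompletion-style response."""
    usage = response.get("usage", {})
    prompt = usage.get("prompt_tokens", 0)
    completion = usage.get("completion_tokens", 0)
    total = usage.get("total_tokens", prompt + completion)
    logger.info("tokens: prompt=%d completion=%d total=%d", prompt, completion, total)
    return total

# A fabricated response fragment for demonstration
fake_response = {"usage": {"prompt_tokens": 42, "completion_tokens": 18, "total_tokens": 60}}
total = log_token_usage(fake_response)  # logs and returns 60
```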

The bot does not respond in threads where there are messages from other bots with altered usernames

It occurs because the payload of messages from a bot with a modified username does not contain the 'user' key.
=> In lines 107 and 345, the execution will end with an error.
In my instance of your bot, I have corrected these parts of the code, and everything works fine for me now. It might be useful for you to make these adjustments as well so that others will also have the bot responding in threads with other bots.

Here is the diff output of the changes made:

@@ -104,11 +104,11 @@
                     {
                         "role": (
                             "assistant"
-                            if reply["user"] == context.bot_user_id
+                            if "user" in reply and reply["user"] == context.bot_user_id
                             else "user"
                         ),
                         "content": (
-                            f"<@{reply['user']}>: "
+                            f"<@{reply['user'] if 'user' in reply else reply['username']}>: "
                             + format_openai_message_content(
                                 reply_text, TRANSLATE_MARKDOWN
                             )
@@ -341,7 +341,9 @@
                 {
                     "content": f"<@{msg_user_id}>: "
                     + format_openai_message_content(reply_text, TRANSLATE_MARKDOWN),
                     "role": (
-                        "assistant" if reply["user"] == context.bot_user_id else "user"
+                        "assistant" if "user" in reply and reply["user"] == context.bot_user_id else "user"
                     ),
                 }
             )
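The idea behind the diff can be isolated into small helpers. A sketch under the same assumption (bot messages with a custom username carry `username` instead of `user`; function names here are illustrative, not the app's):

```python
def reply_author(reply: dict) -> str:
    """Return the author id for a Slack thread reply.

    Messages posted by bots with a custom username may lack the 'user'
    key and carry 'username' instead, so fall back accordingly.
    """
    return reply.get("user") or reply.get("username", "unknown")

def reply_role(reply: dict, bot_user_id: str) -> str:
    """Messages from this bot map to the 'assistant' role; everything else is 'user'."""
    return "assistant" if reply.get("user") == bot_user_id else "user"

human = {"user": "U1234", "text": "hi"}
webhook_bot = {"username": "deploy-bot", "text": "done"}
print(reply_author(human))        # U1234
print(reply_author(webhook_bot))  # deploy-bot
print(reply_role(human, "UBOT"))  # user
```

Using `.get()` avoids the KeyError entirely, whichever key the payload carries.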

Bot references itself in its reply

So, this might be down to the prompt, but I'm not sure. The bot keeps referencing itself with @ in its reply. Could I change something here so that it references the user instead? I added the "prepend" part in the prompt; is it because of that? (See the included screenshot of my bot and system prompt.)

(screenshot)

System message:

You are a strategist Slack chatbot designed to assist UX designers, copywriters, strategists, and product owners at a digital services and PR agency. Your role is to provide expert insights and recommendations on various digital strategies, including content strategy, social media strategy, search engine optimization (SEO), user experience (UX), user interface (UI), service design and website design. As a Slack chatbot, you should be able to engage in conversations with users, answer their questions, and provide personalized recommendations based on their specific needs and goals. Your ultimate goal is to be user-friendly, conversational, and a part of a Slack channel, so that users can seamlessly access your expertise and insights. You might receive messages from multiple people. Each message has the author id prepended, like this: "<@U1234> message text". You are called "Sensei".

[Feature Request] Invoke ChatGPT in a thread

Thanks for the project.

Would it be possible to invoke chatgpt in a thread which does not start with mentioning it? This is especially relevant for summarizing long threads.

Use of gpt-3.5-turbo-0301 hardcoded?

Is the use of "gpt-3.5-turbo-0301" hardcoded in the script? I see in the response/usage stats that it's using that -0301 beta model. Will I need to change the code once that specific version is no longer supported? (Thanks for the script; it works great in my Slack env, btw.)

pytype errors

./validate.sh will produce the following errors.

File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 476, in show_summarize_option_modal: No attribute 'get' on None [attribute-error]
  In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 598, in ack_summarize_options_modal_submission: No attribute 'get' on None [attribute-error]
  In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 651, in prepare_and_share_thread_summary: No attribute 'get' on None [attribute-error]
  In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 786, in ack_proofreading_modal_submission: No attribute 'split' on None [attribute-error]
  In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 817, in display_proofreading_result: No attribute 'split' on None [attribute-error]
  In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 860, in display_proofreading_result: Name 'text' is not defined [name-error]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 882, in display_proofreading_result: Name 'text' is not defined [name-error]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 934, in ack_chat_from_scratch_modal_submission: No attribute 'split' on None [attribute-error]
  In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 964, in display_chat_from_scratch_result: No attribute 'split' on None [attribute-error]
  In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 1003, in display_chat_from_scratch_result: Name 'text' is not defined [name-error]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 1023, in display_chat_from_scratch_result: Name 'text' is not defined [name-error]
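These attribute-errors all follow the same pattern: pytype cannot prove a value typed `Optional[...]` is non-None before `.get(...)` or `.split(...)` is called. A generic sketch of the fix (the function and keys here are hypothetical, not the actual `bolt_listeners.py` code):

```python
from typing import Optional

def first_line_of_text(payload: Optional[dict]) -> str:
    """Safely read nested state out of a possibly-None payload.

    Guarding with explicit None checks narrows the type, so pytype
    no longer reports 'No attribute ... on None'.
    """
    if payload is None:
        return ""
    value: Optional[str] = payload.get("text")
    if value is None:
        return ""
    # .split() is likewise only safe once value is known to be a str
    return value.split("\n")[0]

print(first_line_of_text({"text": "a\nb"}))  # a
print(first_line_of_text(None))              # (empty string)
```

The name-errors (`Name 'text' is not defined`) usually indicate a variable referenced on a code path where it was never assigned, fixed by initializing it before the branching logic.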

Using next generation chatgpt-in-slack causes token_revoke

Hello, I couldn't find an issues section for the Deno version of this, so I've decided to drop the message here.

First off, thank you for making this starter code; it's very helpful. One issue I have been encountering is the token getting revoked when the OpenAI call or any other awaited function takes too long (I haven't checked which API call is taking too long, but I've done enough debugging to know that the OpenAI call is most likely the culprit).

More specifically:

export default SlackFunction(def, async ({ inputs, env, token }) => {
  const client = new SlackAPIClient(token);
  if (!inputs.thread_ts) {
    return { outputs: {} };
  }

The token that is passed in eventually expires when it gets to the postMessage function:

  const replyResponse = await client.chat.postMessage({
    channel: inputs.channel_id,
    thread_ts: inputs.thread_ts,
    text: answer,
  });

I've looked around the documentation and couldn't find a way to lengthen the life of the token that is getting passed to the client. If there's documentation that I can follow that would be great!

The error is below:

2024-03-03 23:49:38 [error] [Wf06NCEUSHRN] (Trace=Tr06M9FPL39V) Trigger for workflow 'Post a ChatGPT reply within a discussion' failed: parameter_validation_failed
2024-03-03 23:49:38 [error] [Wf06NCEUSHRN] (Trace=Tr06M9FPL39V)   - Null value for non-nullable parameter `thread_ts`
2024-03-03 23:49:40 [error] [Fn06MS50HHHA] (Trace=Tr06M9FPGCDV) Function 'Discuss a topic in a Slack thread' (app function) failed
	event_dispatch_failed
2024-03-03 23:49:40 [error] [Wf06NCEUSHRN] (Trace=Tr06M9FPGCDV) Workflow step 'Discuss a topic in a Slack thread' failed
2024-03-03 23:49:40 [error] [Wf06NCEUSHRN] (Trace=Tr06M9FPGCDV) Workflow 'Post a ChatGPT reply within a discussion' failed
	Function failed to execute
error: Uncaught (in promise) SlackAPIError: Failed to call chat.postMessage due to token_revoked: {"ok":false,"error":"token_revoked","headers":{}}
throw new SlackAPIError(name, result.error, result);
            ^
at SlackAPIClient.call (https://deno.land/x/[email protected]/client/api-client.ts:530:13)
at eventLoopTick (ext:core/01_core.js:166:7)
at async AsyncFunction.<anonymous> (file://GitHub/chatgpt-on-deno/functions/discuss.ts:115:25)
at async Object.RunFunction [as function_executed] (https://deno.land/x/[email protected]/run-function.ts:28:53)
at async DispatchPayload (https://deno.land/x/[email protected]/dispatch-payload.ts:79:12)
at async runLocally (https://deno.land/x/[email protected]/local-run-function.ts:36:16)
at async https://deno.land/x/[email protected]/local-run-function.ts:55:3

Where do you get the "bot token" from?

Good evening, trying this bot out today and having trouble locating the "Bot Token".

According to the README.md you just: "# Install the app into your workspace to grab this token"

I've installed the app into a testing workspace but don't see anything about a bot token:

(Screenshot from 2024-04-13 14-50-07)

I can copy its name and link, and view its 'Member ID' and 'Channel ID' from inside my Slack client.

On the https://api.slack.com/ page I have the following, but apparently none of these is the elusive 'bot token':

  • Client Secret
  • Signing Secret
  • Verification Token

Sorry about the noob question, but where can I find this 'Bot Token' exactly?

Question on Socket Mode and Google Cloud VPC ingress controls

Hello, quick question on the use of socket mode in this slack bot: Can Slack access my Google Cloud VPC (which I'm routing all requests to my bot through) with Socket Mode switched on or do I need to configure additional ingress controls via a load balancer to allow for the bot to send requests into my VPC?

Also, I've encountered issues where my Slack bot stops listening to incoming messages after restarting my app (with Socket Mode enabled) when testing locally. I managed to get the bot to work after generating a new Slack app token and restarting the app with it. Could you advise on the cause of this issue?

[Feature Request] Override system prompt

We forked and implemented this ability for our own needs, but if you'd like to support it here then I can submit a PR. Ideally we'd contribute back to this repo instead of maintaining a fork.

Use of models other than GPT

It would be useful to be able to easily swap in a model other than GPT, such as Anthropic's Claude.

Should I fork this repository and extend the functionality myself?

Feature idea; tailored system text based on slack channel / context

Just leaving this as an idea: a config file, or similar, where I can easily set different system texts based on Slack channel ID or name, so that depending on channel context the bot has a specific system text. Yes, you could create multiple bots, but in our use case we work with different clients, and with this I would not need to prime the bot with a couple of messages; it would get that via the system text :)

New vs Old Slack API Questions

Hi, we've been using this repo for a while and recently also came across this: https://github.com/seratch/chatgpt-on-deno

Noticed a few differences

  • This one replies in threads; the new API version replies in the channel
  • New API bot works in public channels only
  • New API bot has no loading indicator

Are the discrepancies because the new API doesn't support those features? We are trying to decide which one to use as a starting point for development and would be happy to contribute back when our changes help the upstream's goals.

APIs to read files from slack?

Hi, thanks for the wonderful code. It's working well on my end!
A function I would like to add is this: when users in the channel upload a file (such as a PDF) while having a conversation with ChatGPT, I want the ChatGPT bot to be able to read it and answer related questions. What's a possible approach to that?
Does Slack have any Python API to get the uploaded file from the client, so that I can write some code to process the contents of the PDF (text, tables, figures, etc.) into a string to feed to ChatGPT?
Any discussion and suggestion is appreciated.

How to deploy the app onto render.com

Hi, first, thank you for building this; I really appreciate it and would be happy to help! I'm trying to use it, and it actually works perfectly for a few minutes, then the deploy fails on Render.com with the following error:

(Screenshot: CleanShot 2023-03-04 at 15 03 25@2x)

Do you have suggestions for fixing or a suggested method for deploying elsewhere? Thanks

[Feature Request] Support customizing chatgpt's personality with JSON file

Hello, I'm a user of your ChatGPT in Slack app, and I really appreciate your work. It's amazing to chat with ChatGPT in Slack. 🥰

I have a suggestion for a new feature that I think would make the app more fun and flexible. I wonder if you could support customizing ChatGPT's personality with a JSON file, like this:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
]

This file contains some predefined messages for the system, the user, and the assistant roles. The idea is to let the user choose a personality for chatgpt, and let chatgpt respond according to the messages in the file.

As a user, I want to chat with chatgpt with different personalities, so that I can have more fun and variety in the conversation.

For example, I can create a JSON file with a humorous personality, like this:

messages = [
    {"role": "system", "content": "You are a hilarious assistant."},
    {"role": "user", "content": "Tell me a joke."},
    {"role": "assistant", "content": "What do you call a fish wearing a bowtie? Sofishticated."},
    {"role": "user", "content": "That's funny."}
]

Then, when I chat with chatgpt, it will use this personality and make jokes.

I think this feature would make the app more customizable and entertaining. It would also allow users to create their own scenarios and stories with chatgpt.

What do you think about this idea? Is it possible to implement it in the future? Thank you for your time and attention. 🙏
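The feature being proposed could be sketched as a small loader: read a list of seed messages in the ChatCompletion format from a JSON file and use them to prime the conversation. A hypothetical sketch, not a committed design:

```python
import json
import tempfile

def load_personality(path: str) -> list:
    """Load a personality (a list of seed messages) from a JSON file."""
    with open(path, encoding="utf-8") as f:
        messages = json.load(f)
    # Every entry must at least carry the role and content keys
    assert all({"role", "content"} <= set(m) for m in messages)
    return messages

personality = [
    {"role": "system", "content": "You are a hilarious assistant."},
    {"role": "user", "content": "Tell me a joke."},
]

# Write an example file and read it back
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(personality, f)
    path = f.name

loaded = load_personality(path)
print(loaded[0]["role"])  # system
```

The loaded messages would then be prepended to the thread history before calling the ChatCompletion API.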

Reply in threads?

The bot will not react/reply inside a thread when @-pinged. (Scenario: users are discussing in a thread and then ping the bot.) I can see in the log that it reacts to the call, but it is not responding... I've added the rights in the .yml file.

No replies in threads

When mentioning the bot in a channel, the app_mention event is correctly triggered and I get a response from the bot, but my following messages in the resulting thread are ignored by the bot.

I'm running SLACK_APP_LOG_LEVEL=DEBUG, but sending messages in a thread does not even log anything.

My Slack APP allowed scopes are:

app_mentions:read
channels:history
channels:read
chat:write
chat:write.public
groups:history
groups:read
im:history
im:read
im:write
mpim:history
mpim:read
users:read

Socket Mode is activated

The live demo hosted by you works fine; just my local instance seems to have this problem. Any clue what could be happening?

OpenAI Error

Hi @seratch

Facing a context window limitation error.

Would recommend wrapping the openai base with something like reliableGPT to handle retries, model switching, etc.

import openai
from reliablegpt import reliableGPT

openai.ChatCompletion.create = reliableGPT(openai.ChatCompletion.create, user_email=...)

Source: https://github.com/BerriAI/reliableGPT

[Feature Request] Allow direct message conversation threads

Hi, thanks for the bot!

A request we had is to allow direct messages with the bot in the "Messages Tab". I see this is disabled for now.

Is there a way to support direct IMs with the bot using the Slack Conversations API, and respond in threads as if it was a normal channel?

Regards,
Zach

Support for "assistant" role for previous bot replies in a thread

ChatCompletion in ChatGPT has three roles, "system", "user" and "assistant".

Official documentation

It appears that a reply by the assistant in a thread is treated as the "user" role in subsequent conversations.
I have grep'd for "assistant" in this repository and have not been able to find any place in the code where the role is set.

It may be more accurate to include the assistant's reply in the thread as role: "assistant" in the ChatCompletion API messages.
Is this intentional or not?

[Feature Request] Close open tags while streaming response

First, awesome app that you built! 👍

I have only one feature request so far: while the response is being returned (the message is updated), code blocks don't look nice because they are opened but not yet closed. It would be awesome if code blocks (and potentially other formatting) already looked good while they are not fully complete. Do you think you can add this? Happy to donate something 😊

Improvements PR

Hello @seratch,

Thank you for your work. I want to put effort into this project for improvements. Here is my PR; please take a look and let me know if there are any concerns: #34

[Feature Request] Support for Function Calling

Hello,

First of all, I want to extend my gratitude for creating such useful software.

To further enhance this software, I am proposing the addition of Function Calling support.

I have already pushed my implementation of this feature at https://github.com/iwamot/ChatGPT-in-Slack/commits/function-calling. In this implementation, the OPENAI_FUNCTION_CALL_MODULE_NAME environment variable is used to specify the name of the module that contains the functions. If not specified, the application will retain its current behavior.

May I proceed to create a PR for this proposed addition? I look forward to your feedback. Thank you very much.

(screenshot)
