mattermost / mattermost-plugin-ai
Mattermost Copilot plugin supporting multiple LLMs
Home Page: https://mattermost.com/copilot
License: Apache License 2.0
We can run the AI bot on threads, but not on channels, and much of the useful data is spread across multiple posts in a channel.
In an initial implementation we could impose a hard limit, such as the last XX posts, with an option to include full threads or only the first post of each.
Importance: Medium
Use cases:
mattermost in the past week or month
Any examples of good implementations of this capability.
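The hard-limit idea above could be sketched roughly as follows. This is an illustrative sketch only: the `Post` struct and `lastN` function are hypothetical stand-ins, not the plugin's real data model.

```go
package main

import "fmt"

// Post is a minimal stand-in for a Mattermost post; the field names
// here are illustrative, not the real server model.
type Post struct {
	ID     string
	RootID string // empty for a root (non-reply) post
}

// lastN walks the channel newest-first (posts are assumed ordered
// oldest to newest) and keeps up to limit posts, optionally skipping
// thread replies so only the first post of each thread is included.
func lastN(posts []Post, limit int, rootsOnly bool) []Post {
	var out []Post
	for i := len(posts) - 1; i >= 0 && len(out) < limit; i-- {
		if rootsOnly && posts[i].RootID != "" {
			continue
		}
		out = append(out, posts[i])
	}
	return out
}

func main() {
	posts := []Post{{ID: "a"}, {ID: "b", RootID: "a"}, {ID: "c"}}
	fmt.Println(len(lastN(posts, 2, true)))
}
```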
See Figma Designs
For development purposes it would be great to have a setting to output exactly what was sent to the LLM and the response received.
Token counting is hard to get right. The current implementation for OpenAI is an approximation (see: https://github.com/mattermost/mattermost-plugin-ai/blob/master/server/ai/openai/openai.go#L296); other providers fare worse.
Implement better counting (or figure out another solution) for:
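For context, the kind of approximation the issue describes looks like the character heuristic below. This is an illustrative baseline (OpenAI's rule of thumb of ~4 characters per token for English text), not the plugin's actual code.

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// approxTokens is a rough token estimate for English-like text, using
// OpenAI's ~4 characters/token rule of thumb. A real fix would use a
// proper tokenizer for the model in question.
func approxTokens(s string) int {
	if s == "" {
		return 0
	}
	return utf8.RuneCountInString(s)/4 + 1
}

func main() {
	fmt.Println(approxTokens("The quick brown fox jumps over the lazy dog"))
}
```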
The plugin does not crash when a connection is severed.
While the plugin often restarts automatically, the downtime causes messages to be ignored and in-flight requests to die.
It would be great if the user could press a regenerate button if they don't like the response. Especially for creative tasks this can be valuable.
Add a feature to the postbox where after the user enters a post they can ask which channel the LLM thinks the post should go in. We can supply the draft post and all the channel names and descriptions to make this decision.
I would like to use direct links to messages/thread to add context to a discussion with the assistant. For example:
Please summarize this thread: <direct link>
Importance: High
This is necessary because conversations happening over a long period of time can become fragmented over more than one thread, or part of a conversation can predate the creation of a dedicated channel. This limits how useful the assistant can be.
Use cases:
Currently the response from the assistant when attempting to link to a specific message in Mattermost is as follows:
I'm sorry for any confusion, but as an AI, I currently do not have the capability to access and summarize external web content. However, if the document is added in a format that's feasible for me to analyze like a Word document (doc, docx), a text file (txt), or included directly in the Mattermost chat itself, then I'd be glad to help summarize the information for you.
When attempting to do the same from DMs with the assistant:
Hey <name>, I apologize for the inconvenience but unfortunately, I'm unable to access the thread content you referred to while we're in a Direct Message. Could you please make this request in a non-DM channel? That will allow me to see the thread content and assist you better.
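Resolving such links would start with extracting the post ID from the permalink. A minimal sketch, assuming the standard Mattermost permalink shape `https://<server>/<team-name>/pl/<post-id>`; real code would also validate the server host and ID format.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// postIDFromPermalink extracts the post ID from a Mattermost permalink
// of the form https://<server>/<team-name>/pl/<post-id>.
func postIDFromPermalink(raw string) (string, bool) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", false
	}
	parts := strings.Split(strings.Trim(u.Path, "/"), "/")
	if len(parts) == 3 && parts[1] == "pl" {
		return parts[2], true
	}
	return "", false
}

func main() {
	id, ok := postIDFromPermalink("https://chat.example.com/myteam/pl/abc123")
	fmt.Println(id, ok)
}
```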
The tar file of the 0.4.0 release is attached as an asset to the 0.5.0 release post.
Auto-suggest an emoji to prefix the channel name
Importance: Medium
Use cases:
Any examples of good implementations of this capability.
Add an apps bar icon for the AI plugin that opens a new chat with the AI Assistant. Analogous to a thread.
In the current implementation of the Mattermost AI plugin, when configuring the OpenAI engine, users can specify an API key, default model, and token limit. A valuable enhancement to this configuration would be the ability to also specify an organization, for those who have multiple organizations linked to their OpenAI account.
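A minimal sketch of what the change amounts to: OpenAI's documented `OpenAI-Organization` header scopes a request to one organization. The `buildRequest` helper is hypothetical, showing how a new config field could be threaded into the request; it is not the plugin's actual client code.

```go
package main

import (
	"fmt"
	"net/http"
)

// buildRequest attaches the API key and, optionally, an organization
// ID via the OpenAI-Organization header, which is OpenAI's documented
// mechanism for selecting an organization on a request.
func buildRequest(apiKey, orgID string) (*http.Request, error) {
	req, err := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	if orgID != "" {
		req.Header.Set("OpenAI-Organization", orgID)
	}
	return req, nil
}

func main() {
	req, _ := buildRequest("sk-example", "org-example")
	fmt.Println(req.Header.Get("OpenAI-Organization"))
}
```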
It would be nice if we could support using a local whisper implementation for transcriptions rather than relying on the OpenAI API.
Right now, I get:
error [2023-06-23 08:25:18.948 +02:00] Unable to activate plugin caller="app/plugin.go:171" plugin_id=mattermost-ai error="this plugin is only supported on postgres"
How can we allow users to reuse prompts? Provide a prompt library for them to access?
Expected: After clicking thumbs up/down on evaluating an AI bot response, there's a visual indicator representing the change.
Observed: No indication of button being pressed.
Importance: Low
This is kind of a nice-to-have, but also a nice starter ticket for contributors to add.
Use cases:
Any examples of good implementations of this capability.
It would be great if MM could support previews for Jupyter notebooks in file uploads. Maybe use something like nbconvert to display them?
Support supplying uploaded images to models that support it (e.g. GPT-4)
Importance: Medium
When used, diagrams tend to hold a lot of context in a particular discussion/thread. It might make summaries better if the diagrams/images were included in the context instead of being ignored.
Use cases:
I'm not aware of any but GPT-4 is capable of accepting input from images.
Users have requested that a simple "Copy to clipboard" button be added for AI responses. Clicking the button would copy the AI's response to the clipboard. An expansion of this might also include separate copy buttons for each code snippet, etc.
Importance: Low
Use cases:
The ChatGPT interface provides this.
The current content extraction in the MM server for PDFs doesn't work very well which creates some issues when the LLM tries to understand the files.
Is there an alternative library we could be using for PDF extraction?
--
ai bot responds to my question after querying LocalAI backend
Oct 05 08:59:01 chat mattermost[63827]: {"timestamp":"2023-10-05 08:59:01.347 +02:00","level":"error","msg":"Unable to get team for context","caller":"app/plugin_api.go:980","plugin_id":"mattermost-ai","error":"not found"}
Oct 05 08:59:01 chat mattermost[63827]: {"timestamp":"2023-10-05 08:59:01.353 +02:00","level":"error","msg":"failed to get github plugin status","caller":"app/plugin_api.go:980","plugin_id":"mattermost-ai","error":"not found"}
OpenAI Compatible API url: http://192.168.133.25:8080
OpenAI Compatible model: gpt4all-j
AI Large Language Model service: Open AI Compatible
AI to generate images: Open AI Compatible
Oct 05 09:06:15 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:15.669 +02:00","level":"error","msg":"Unable to get team for context","caller":"app/plugin_api.go:980","plugin_id":"mattermost-ai","error":"not found"}
Oct 05 09:06:15 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:15.672 +02:00","level":"error","msg":"failed to get github plugin status","caller":"app/plugin_api.go:980","plugin_id":"mattermost-ai","error":"not found"}
Oct 05 09:06:15 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:15.685 +02:00","level":"error","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:79","plugin_id":"mattermost-ai","wrapped_extras":"pathplugins/mattermost-ai/server/dist/plugin-linux-amd64pid67651errorexit status 2"}
Oct 05 09:06:15 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:15.685 +02:00","level":"error","msg":"RPC call MessageHasBeenPosted to plugin failed.","caller":"plugin/client_rpc_generated.go:241","plugin_id":"mattermost-ai","error":"unexpected EOF"}
Oct 05 09:06:16 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:16.961 +02:00","level":"warn","msg":"Health check failed for plugin","caller":"plugin/health_check.go:59","id":"mattermost-ai","error":"plugin RPC connection is not responding"}
Oct 05 09:06:16 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:16.961 +02:00","level":"warn","msg":"error closing client during Kill","caller":"plugin/hclog_adapter.go:70","plugin_id":"mattermost-ai","wrapped_extras":"errconnection is shut down"}
Oct 05 09:06:16 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:16.961 +02:00","level":"warn","msg":"plugin failed to exit gracefully","caller":"plugin/hclog_adapter.go:72","plugin_id":"mattermost-ai"}
Name: localai
AI Service: OpenAI Compatible
API URL: http://192.168.133.25:8080
Default Model: gpt4all-j
AI Large Language Model service: localai
It seems that there used to be a command for generating images in this plugin, but it was completely removed in commit c0d47b6. Despite the existence of configuration options, there doesn't appear to be a way to generate images.
Ideally, calling @ai should trigger image generation. Alternatively, a clear slash command would also work well.
Importance: Medium
While the lack of this feature may not be an immediate issue, I believe there is value in being able to easily generate images while chatting on Mattermost, especially for tasks like prototyping. Even though there are alternative ways to generate images outside of the plugin, the ability to chat and generate images on Mattermost concurrently is valuable.
Use cases:
Providing a place to interactively generate images, similar to Midjourney, can be extremely useful for sharing specific ideas in real-time with people present and for sparking new ideas.
Not sure exactly what is required here. Do we just need to modify our prompts asking it to answer in a specific language? Or maybe we need to localize the prompts themselves?
Currently the audio summarization functionality fails if the meeting recording is too long (over 25 MB).
The first step to fixing this is being able to split up longer recordings and send them to the whisper API in chunks to avoid the API limitations: https://platform.openai.com/docs/guides/speech-to-text/introduction
Currently compression is used in this case:
A native Go implementation would be preferable to using ffmpeg.
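The chunking arithmetic can be sketched as below. This is only the sizing math under a constant-byte-rate assumption (exact for PCM/WAV, approximate for compressed formats); actual splitting would have to cut on valid frame or sample boundaries, which is the hard part.

```go
package main

import "fmt"

// whisperMaxBytes is the Whisper API's 25 MB upload limit.
const whisperMaxBytes = 25 * 1024 * 1024

// chunkCount returns how many roughly equal pieces a recording must
// be split into to keep each piece under the limit.
func chunkCount(totalBytes int64) int {
	if totalBytes <= 0 {
		return 0
	}
	return int((totalBytes + whisperMaxBytes - 1) / whisperMaxBytes)
}

// chunkSeconds converts that into a per-chunk duration, assuming a
// constant byte rate (bytes per second).
func chunkSeconds(totalBytes, bytesPerSecond int64) float64 {
	n := chunkCount(totalBytes)
	if n == 0 || bytesPerSecond <= 0 {
		return 0
	}
	return float64(totalBytes) / float64(bytesPerSecond) / float64(n)
}

func main() {
	fmt.Println(chunkCount(60 * 1024 * 1024))
}
```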
Add the ability for the plugin system to add a menu item to code blocks.
This could support code related features in the future like editing and suggesting.
Some feature documentation needs to be revisited ahead of the v1.0 release.
In a channel with bot posts about articles from our user forum, end users can read the link previews, but the bot can't see the preview text and can't process it in responses. The bot should be able to see them.
Without this ability, the bot can't summarize content from link-preview posts that users can read.
Importance: Medium
Use cases:
Any examples of good implementations of this capability.
There are some alternate solutions to consider:
a) Have a function that ingests the full article into the post, whether visibly or invisibly (there are some security risks with this)
b) Have the function pull the link content at the time it makes the query (potentially slow, brittle, and a security issue)
Currently the LLM does not have access to file contents even though the MM server adds the extracted content.
This is not straightforward, as you need to balance what the LLM will pay attention to against the LLM's context limit. Many files will not fit in the context.
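One naive way to make that trade-off concrete is a token-budget truncation of the extracted text. A sketch only, using the crude ~4 characters/token estimate; real code would use the model's tokenizer and smarter selection than head truncation.

```go
package main

import "fmt"

// truncateToBudget keeps as much of the extracted file text as fits in
// roughly tokenBudget tokens, using the ~4 characters/token rule.
// The "[truncated]" marker tells the LLM the content is incomplete.
func truncateToBudget(text string, tokenBudget int) string {
	maxRunes := tokenBudget * 4
	runes := []rune(text)
	if len(runes) <= maxRunes {
		return text
	}
	return string(runes[:maxRunes]) + "\n[truncated]"
}

func main() {
	fmt.Println(truncateToBudget("abcdefgh", 1))
}
```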
"in team <name> channel <alerting channel> what is the most common alert?"
In log error: status code: 400, message: '<team>.LookupMattermostUser' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'messages.2.name'"}
Reply of what the most common alert is
Hi, I have a problem when uploading the plugin to my Mattermost.
Able to enable the plugin and configure the settings as desired.
We want the entry point into the calls recording flow to be a button on the calls recording post. However that is actually part of the calls plugin that we can't manipulate.
So this work will have to be done on the calls plugin side. We can detect if the AI plugin is present then show the button.
It would be nice if the user would get some kind of message saying the AI could be better if they added the GitHub plugin or connected their account. Particularly if the plugin is installed. Maybe some kind of detection of GitHub links and prompt engineering?
Add capabilities to the webapp for plugins to integrate with unread indicators. This might be the UI path for features like summarizing unreads.
For merges in a repository to qualify for Hacktoberfest, the following must be true:
Pull requests can be made in any GitHub or GitLab hosted project that's participating in Hacktoberfest (look for the "hacktoberfest" topic)
See: Hacktoberfest Participation Rules
To ensure this project is eligible for Hacktoberfest, add the "hacktoberfest" topic to the repository.
The current system console UI for configuration is a bit confusing. Replacing it with a custom component that hides fields not applicable to the currently selected LLM would be a good start.
GPT-4 Turbo with Vision can now be accessed via API as gpt-4-vision-preview
ref: https://www.datacamp.com/blog/gpt4-turbo
Importance: Medium
Use cases:
Example:
A team member intuitively tried to ask ChatGPT to assist based on images, which of course did not work (it seems the AI is not even aware that there were attachments).
We run multiple models that are fine-tuned for different tasks, as well as multiple model sizes. At the moment we have to pick one for our entire Mattermost instance. It would be useful to support several and be able to distinguish which one we use.
Importance: Medium - This is currently the largest blocker to increasing our usage of the AI plugin.
Use cases:
Preferably, we would like to specify which model in the 'tag' we use. For instance:
@ai-llama2-7b ...
@ai-gpt-4 ...
@ai-xyz ...
At the moment we have recompiled multiple versions of the plugin with different names and configuration, but this is not scaling well as more and more models are being introduced.
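The tag-based routing above could be sketched as a simple mention-to-model lookup. The mention and model names are illustrative; in practice the map would come from the plugin's configuration rather than being hard-coded.

```go
package main

import "fmt"

// modelForMention resolves a bot mention like "@ai-gpt-4" to a model
// name, falling back to a default when the mention is unknown.
func modelForMention(mention, fallback string) string {
	models := map[string]string{
		"@ai-llama2-7b": "llama2-7b",
		"@ai-gpt-4":     "gpt-4",
	}
	if m, ok := models[mention]; ok {
		return m
	}
	return fallback
}

func main() {
	fmt.Println(modelForMention("@ai-gpt-4", "default-model"))
}
```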
We want to be able to go back to a previously open thread from the plugin RHS. This is currently not possible as the back button system only supports things like search and pinned posts.
Allow the AI Assistant to process posts in another/multiple other channels, ideally allowing me to DM the assistant and ask it to search/process posts
I do not always want to interact with the AI assistant in the channel I need information from. I want to be able to have a DM with it to help me catch up, search, analyze activity happening across one or more channels, following up in a channel if needed
Importance: Medium
Use cases:
This could be seen for use case 2 as an extension of the native search in Mattermost, allowing me to have a natural language search/discovery via the assistant.
Big communities like the Mattermost community have too many channels and posts generated every day, summarizing them based on what I care about would be extremely helpful
Hi, I'm configuring the OpenAI API on my server but I'm facing an issue. When I used v0.3.2 there was no problem and everything ran well. However, with v0.4.0 the AI bot does not respond. I checked the log files and got the error messages below.
Could you please take a look and advise? Thank you so much.
{"timestamp":"2023-11-20 15:16:33.767 +07:00","level":"error","msg":"Unable to get team for context","caller":"app/plugin_api.go:984","plugin_id":"mattermost-ai","error":"not found"}
{"timestamp":"2023-11-20 15:16:33.769 +07:00","level":"error","msg":"failed to get github plugin status","caller":"app/plugin_api.go:984","plugin_id":"mattermost-ai","error":"not found"}
{"timestamp":"2023-11-20 15:16:33.773 +07:00","level":"error","msg":"RPC call MessageHasBeenPosted to plugin failed.","caller":"plugin/client_rpc_generated.go:241","plugin_id":"mattermost-ai","error":"unexpected EOF"}
{"timestamp":"2023-11-20 15:16:33.773 +07:00","level":"error","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:79","plugin_id":"mattermost-ai","wrapped_extras":"pathplugins/mattermost-ai/server/dist/plugin-linux-amd64pid259174errorexit status 2"}
Met with @crspeller on Dec 20th to discuss needed changes for the mid-January release. The README needs to be revisited and updated with the latest information for v1.0 of the plugin, as well as a more streamlined developer adoption experience.
Similarly, issues should be revisited and cleaned up to coincide with the v1.0 release, to set the community up for a successful next sprint.
This can be assigned to me (@azigler). PR pending.
The MM webapp does not export AdvancedCreateComment directly. Instead it sets a bunch of incorrect parameters and only allows changes to placeholder and onSubmit. See: https://github.com/mattermost/mattermost/blob/master/webapp/channels/src/plugins/exported_create_post.tsx#L15
This caused a bug in the RHS for non-admin users requiring an unpleasant workaround.
This ticket is to fix the webapp and come up with a migration path away from the workaround.
At the moment the plugin seems to fail with a generic error message when the conversation is too long. Ideally, some context pruning might take place to keep within a context limit defined by the plugin.
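The pruning idea could be sketched as dropping the oldest messages until the conversation fits the budget. This is illustrative only, using the rough ~4 characters/token estimate, not the plugin's actual behavior; it always keeps at least the newest message.

```go
package main

import "fmt"

// pruneOldest drops the oldest messages until the estimated token
// total fits tokenLimit, always keeping at least the newest message.
func pruneOldest(messages []string, tokenLimit int) []string {
	estimate := func(s string) int { return len([]rune(s))/4 + 1 }
	total := 0
	for _, m := range messages {
		total += estimate(m)
	}
	i := 0
	for total > tokenLimit && i < len(messages)-1 {
		total -= estimate(messages[i])
		i++
	}
	return messages[i:]
}

func main() {
	fmt.Println(len(pruneOldest([]string{"aaaa", "bbbb", "cccc"}, 4)))
}
```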
Provide direct Ollama support for a quick and easy self hosted setup.
Currently plugins do not have access to add buttons to the textbox.
For the plugin we would want the ability to add an AI button there and be able to manipulate the text within.
Should support something like this:
Figma Link
Adding LLM capabilities to https://github.com/mattermost/mattermost-plugin-playbooks/ seems like a natural fit.
What can we do to enhance the playbooks experience with LLMs? A good start might be to automatically fill in retrospectives.