
Mattermost Copilot Plugin

Mattermost plugin for local and third-party LLMs

The Mattermost Copilot AI Plugin is an extension for Mattermost that provides functionality for local and third-party LLMs.


Background

The Mattermost Copilot Plugin adds functionality for local (self-hosted) and third-party (vendor-hosted) LLMs within Mattermost v9.6 and above. This plugin is currently experimental.

Contributions and suggestions are welcome. See the Contributing section for more details!

Join the discussion in the ~AI-Exchange channel and explore the Discourse forum. 💬

Install

We recommend using Mattermost Server v9.6 or later for the best experience. Compatible Mattermost server versions include:

  • v9.6 or later
  • v9.5.2+ (ESR)
  • v9.4.4+
  • v9.3.3+
  • v8.1.11+ (ESR)

See the Mattermost Product Documentation for details on installing, configuring, enabling, and using this Mattermost integration.

Note: Installation instructions assume you already have Mattermost Server installed and configured with PostgreSQL.

How to Release

To trigger a release, run the command for the release type you want:

    make patch       # patch release
    make minor       # minor release
    make major       # major release
    make patch-rc    # patch release candidate
    make minor-rc    # minor release candidate
    make major-rc    # major release candidate

Contributing

Interested in contributing to our open source project? Start by reviewing the contributor guidelines for this repository. See the Developer Setup Guide for details on setting up a Mattermost instance for development.

License

This repository is licensed under the Apache License 2.0, except for the server/enterprise directory, which is licensed under the Mattermost Source Available License. See the Mattermost Source Available License to learn more.

mattermost-plugin-ai's People

Contributors

azigler, bartoszpijet, chenilim, crspeller, cwarnermm, fmartingr, iabdousd, it33, jasonblais, jespino, kaakaa, lieut-data, m-zubairahmed, nosyn, phoinixgrr, wiggin77


mattermost-plugin-ai's Issues

Support local whisper

It would be nice to support a local Whisper implementation for transcription rather than relying on the OpenAI API.

Bug: Unable to generate plugin webapp bundle

Steps to reproduce the behavior

Hi, I have a problem when uploading the plugin to my Mattermost server.

  1. Go to Mattermost Plugin Management
  2. Select upload plugin
  3. Choose File mattermost-plugin-ai-0.4.0.tar.gz
  4. See error: Unable to generate plugin webapp bundle.

Expected behavior

Able to Enable the plugin and configure the settings as desired.

Screenshots (optional)

If applicable, add screenshots or a screen recording to elaborate on the problem.

Version and Platform

  • Version: v0.4.0
  • Browser and OS: Mac
  • Mattermost Team Edition
    • Mattermost Version: 8.1.0
    • Database Schema Version: 109
    • Database: postgres

Feature Idea: Support multiple AI models

Summary

We run multiple models fine-tuned for different tasks, as well as multiple model sizes. At the moment, we have to pick one for our entire Mattermost instance. It would be useful to support several and be able to distinguish which one we use.

How important this is to me and why

Importance: Medium - This is currently the largest blocker to increasing our usage of the AI plugin.

Use cases:

  1. Support multiple models with different specialties
  2. Support multiple models with differing sizes
  3. Experiment with new models

Additional context/similar features

Preferably, we would like to specify which model to use in the mention. For instance:

@ai-llama2-7b ...
@ai-gpt-4 ...
@ai-xyz ...

At the moment we have recompiled multiple versions of the plugin with different names and configurations, but this does not scale well as more and more models are introduced.
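For illustration only, routing by mention suffix could be as simple as the sketch below. The `resolveModel` helper, the map, and the naming scheme are hypothetical; none of this exists in the plugin today.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveModel maps a bot mention like "@ai-gpt-4" to a configured model,
// falling back to a default for the plain "@ai" mention.
func resolveModel(mention string, models map[string]string, def string) string {
	name := strings.TrimPrefix(mention, "@ai")
	name = strings.TrimPrefix(name, "-")
	if model, ok := models[name]; ok {
		return model
	}
	return def
}

func main() {
	models := map[string]string{
		"llama2-7b": "llama2-7b",
		"gpt-4":     "gpt-4",
	}
	fmt.Println(resolveModel("@ai-gpt-4", models, "default-model"))
	fmt.Println(resolveModel("@ai", models, "default-model"))
}
```

A lookup like this would let one plugin instance serve several configured backends instead of recompiling per model.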

๐Ÿ› bug: Not able to active in Mattermost V6&V7

Description

I am using Mattermost v6. When I try to install the plugin, it fails to activate with the error message below.

I can't find any version limitation in this repository. Is this working as designed, or is there a version limitation?

Steps to reproduce

Upload the plugin and attempt to activate it.

Add ability for LLM to use files from content extraction.

Currently the LLM does not have access to file contents even though the MM server adds the extracted content.

This is not straightforward, as you need to balance what the LLM will pay attention to against the LLM's context limit. Many files will not fit in the context.
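One way to reason about that balance is a simple budget pass over the extracted snippets. This is a sketch under assumed names (`truncateToBudget` is not an existing function), using characters as a crude stand-in for tokens; a real implementation would use the model's tokenizer.

```go
package main

import "fmt"

// truncateToBudget keeps as many extracted file snippets as fit in a rough
// budget, standing in for the LLM context limit discussed above.
func truncateToBudget(snippets []string, budget int) []string {
	var kept []string
	used := 0
	for _, s := range snippets {
		if used+len(s) > budget {
			break // remaining files don't fit in the context
		}
		kept = append(kept, s)
		used += len(s)
	}
	return kept
}

func main() {
	files := []string{"aaaa", "bbbb", "cccc"}
	fmt.Println(len(truncateToBudget(files, 9)))
}
```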

Split longer recordings

Currently the audio summarization functionality will fail if the recording is too long (over 25 MB).
The first step to fixing this is being able to split up longer recordings and send them to the Whisper API in chunks to avoid the API limitations: https://platform.openai.com/docs/guides/speech-to-text/introduction

Currently compression is used in this case:

cmd = exec.Command(p.ffmpegPath, "-i", "pipe:0", "-ac", "1", "-map", "0:a:0", "-b:a", "32k", "-ar", "16000", "-f", "mp3", "pipe:1") //nolint:gosec

A native Go implementation would be preferable to shelling out to ffmpeg.
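The chunking half of this could look something like the sketch below. The names are illustrative, not the plugin's actual code, and splitting on raw bytes is a simplification: a real fix must cut on audio frame boundaries (or re-encode each piece) so every chunk remains a valid file.

```go
package main

import "fmt"

// splitAudio returns the recording in pieces no larger than limit bytes,
// matching the 25 MB upload limit mentioned above.
func splitAudio(data []byte, limit int) [][]byte {
	var chunks [][]byte
	for len(data) > 0 {
		n := limit
		if len(data) < n {
			n = len(data)
		}
		chunks = append(chunks, data[:n])
		data = data[n:]
	}
	return chunks
}

func main() {
	recording := make([]byte, 10) // stand-in for an encoded recording
	chunks := splitAudio(recording, 4)
	fmt.Println(len(chunks)) // chunks of 4, 4, and 2 bytes
}
```

Each chunk would then be transcribed in sequence and the transcripts concatenated before summarization.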

Help user decide what channel to make a post in.

Add a feature to the postbox where after the user enters a post they can ask which channel the LLM thinks the post should go in. We can supply the draft post and all the channel names and descriptions to make this decision.

Bug: Plugin crashes when API request faces network failure

Steps to reproduce the behavior

  1. Create an API that kills the network connection halfway through sending a response.
  2. Observe that when the plugin interacts with it, it crashes with an 'EOF' in the log.

Expected behavior

The plugin does not crash when a connection is severed.

Additional context (optional)

  • Sev 1: Affects critical functionality without a workaround

While the plugin often restarts automatically, the downtime causes messages to be ignored and ongoing requests to die.

Add call summarization button to the calls recording post

We want the entry point into the calls recording flow to be a button on the calls recording post. However, that post is created by the Calls plugin, which we can't manipulate from here.

So this work will have to be done on the Calls plugin side. It can detect whether the AI plugin is present and then show the button.

Figma

Feature Idea: Enable AI to process data in link previews

Summary

In a channel with bot posts about articles from our user forum, end users can read link previews; however, the bot can't see the preview text and can't process it in responses. The bot should be able to see them.


How important this is to me and why

Without this ability, the bot can't summarize content from link preview posts that users can read.

Importance: Medium

Use cases:

  1. Summarize channel including links

Additional context/similar features

Any examples of good implementations of this capability.

There are some alternate solutions to consider:

a) Have a function that ingests the full article into the post, whether visibly or invisibly (there are some security risks with this)
b) Have the function pull the link content at the time it makes the query (potentially slow, brittle, and a security concern)

Enhancement: Provide image as input to the model - gpt-4-vision-preview

Summary

GPT-4 Turbo with Vision can now be accessed via API as gpt-4-vision-preview
ref: https://www.datacamp.com/blog/gpt4-turbo

How important this is to me and why

Importance: Medium

Use cases:

  1. The AI can answer taking into account the content of images uploaded in a Mattermost thread
  2. Able to provide suggestions based on images
  3. No longer need to manually OCR or copy-paste content just for the model to read it

Additional context/similar features

Example:

A team member intuitively tried to ask ChatGPT to assist based on images, which of course did not work (it seems the AI is not even aware that there were attachments).

Feature Idea: Specific context by linking to chat messages

Summary

I would like to use direct links to messages/thread to add context to a discussion with the assistant. For example:

Please summarize this thread: <direct link>

How important this is to me and why

Importance: High

This is necessary because conversations happening over a long period of time can become fragmented over more than one thread, or part of a conversation can predate the creation of a dedicated channel. This limits how useful the assistant can be.

Use cases:

  1. Summarize a past conversation inline in a channel (i.e. not in DM with the assistant) for a specific past thread and disregarding other messages that may have been sent in the channel since.
  2. Explicitly collect multiple distinct threads (potentially across many channels) into the assistant's context.
  3. Use the assistant in DMs while being able to unambiguously specify the context.

Additional context/similar features

Currently the response from the assistant when attempting to link to a specific message in Mattermost is as follows:

I'm sorry for any confusion, but as an AI, I currently do not have the capability to access and summarize external web content. However, if the document is added in a format that's feasible for me to analyze like a Word document (doc, docx), a text file (txt), or included directly in the Mattermost chat itself, then I'd be glad to help summarize the information for you.

When attempting to do the same from DMs with the assistant:

Hey <name>, I apologize for the inconvenience but unfortunately, I'm unable to access the thread content you referred to while we're in a Direct Message. Could you please make this request in a non-DM channel? That will allow me to see the thread content and assist you better.

Bug: RPC call MessageHasBeenPosted to plugin failed.

Hi, I'm configuring the OpenAI API on my server but I'm facing an issue. With v0.3.2 there was no problem and everything ran well. However, with v0.4.0 the AI bot doesn't respond. I checked the log files and got the error messages below.
Could you please take a look and advise? Thank you so much.


{"timestamp":"2023-11-20 15:16:33.767 +07:00","level":"error","msg":"Unable to get team for context","caller":"app/plugin_api.go:984","plugin_id":"mattermost-ai","error":"not found"}
{"timestamp":"2023-11-20 15:16:33.769 +07:00","level":"error","msg":"failed to get github plugin status","caller":"app/plugin_api.go:984","plugin_id":"mattermost-ai","error":"not found"}
{"timestamp":"2023-11-20 15:16:33.773 +07:00","level":"error","msg":"RPC call MessageHasBeenPosted to plugin failed.","caller":"plugin/client_rpc_generated.go:241","plugin_id":"mattermost-ai","error":"unexpected EOF"}
{"timestamp":"2023-11-20 15:16:33.773 +07:00","level":"error","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:79","plugin_id":"mattermost-ai","wrapped_extras":"pathplugins/mattermost-ai/server/dist/plugin-linux-amd64pid259174errorexit status 2"}

Version and Platform

  • Mattermost Version: 9.2.2
  • Database Schema Version: 113
  • Browser and OS: Windows 11, Edge

📑 docs: Update README and issues ahead of v1.0 release

Met with @crspeller on Dec 20th to discuss the changes needed for the mid-January release. The README needs to be revisited and updated with the latest information for v1.0 of the plugin, as well as a more streamlined developer adoption experience.

Similarly, issues should be revisited and cleaned up to coincide with the v1.0 release, to set the community up for a successful next sprint.

This can be assigned to me (@azigler). PR pending.

Feature Idea: Ability to process a channel as if it were a thread

Summary

We can run the AI bot on threads but not on channels, and a lot of the useful data is spread across multiple posts in a channel.

An initial implementation could apply a hard limit, such as the last XX posts, with an option for whether to include whole threads or just first posts.

How important this is to me and why

Importance: Medium

Use cases:

  1. Ask the bot in a news feed channel to summarize all the mentions of Mattermost in the past week or month
  2. Ask the bot for a summary of PRs cross-posted to a channel, with the ability to interrogate the summaries, e.g. who made the most PRs? break out PRs by category, etc.

Additional context/similar features

Any examples of good implementations of this capability.

Webapp bad export of AdvancedCreateComment

Description

The MM Webapp does not export AdvancedCreateComment directly. Instead it sets a bunch of incorrect parameters and only allows changes to placeholder and onSubmit. See: https://github.com/mattermost/mattermost/blob/master/webapp/channels/src/plugins/exported_create_post.tsx#L15

This caused a bug in the RHS for non-admin users requiring an unpleasant workaround.

This ticket is to fix the webapp and come up with a migration path away from the workaround.

Support MySQL

Right now, I get:

    error [2023-06-23 08:25:18.948 +02:00] Unable to activate plugin caller="app/plugin.go:171" plugin_id=mattermost-ai error="this plugin is only supported on postgres"

Design Needed: Better system console UI

The current system console UI for configuration is a bit confusing. Replace with a custom component that hides fields not applicable to the currently selected LLMs would be a good start.

Webapp capability: Textbox controls

Currently, plugins do not have access to add buttons to the textbox.
For this plugin, we want the ability to add an AI button there and to manipulate the text within.

Should support something like this:
Figma Link

Feature Idea: Retrieve messages across channels for DMs with AI Assistant

Summary

Allow the AI Assistant to process posts in one or more other channels, ideally allowing me to DM the assistant and ask it to search or process posts.


How important this is to me and why

I do not always want to interact with the AI assistant in the channel I need information from. I want to be able to have a DM with it to help me catch up on, search, and analyze activity happening across one or more channels, following up in a channel if needed.

Importance: Medium

Use cases:

  1. Get a summary of activity for very active channels based on the last X posts or since I last read a channel
  2. Getting intelligent search across one or multiple channels to avoid having to process many posts "manually"

Additional context/similar features

For use case 2, this could be seen as an extension of the native search in Mattermost, allowing natural-language search and discovery via the assistant.
Big communities like the Mattermost community generate too many channels and posts every day; summarizing them based on what I care about would be extremely helpful.

Feature Idea: Auto-suggest emoji prefix for channel name

Summary

Auto-suggest an emoji prefix for channel names.

How important this is to me and why

Importance: Medium

Use cases:

  1. Replace the manual process of asking the bot for a suggested emoji when naming a channel
  2. A straightforward, value-add demo we can do with open-source LLMs that doesn't require the latest AI models

Additional context/similar features

Any examples of good implementations of this capability.

Feature Idea: Image Generation

Summary

It seems that there used to be a command for generating images in this plugin, but it was completely removed in commit c0d47b6. Despite the existence of configuration options, there doesn't appear to be a way to generate images.

Ideally, when calling @ai, I believe it should trigger image generation. Alternatively, having a clear slash command would also work well.

How important this is to me and why

Importance: Medium

While the lack of this feature may not be an immediate issue, I believe there is value in being able to easily generate images while chatting on Mattermost, especially for tasks like prototyping. Even though there are alternative ways to generate images outside of the plugin, the ability to chat and generate images on Mattermost concurrently is valuable.

Use cases:

  1. Use for prototyping and conveying rough ideas during chats with development team members.
  2. Use for aligning images with customers.

Additional context/similar features

Providing a place to interactively generate images, similar to Midjourney, can be extremely useful for sharing specific ideas in real-time with people present and for sparking new ideas.

Bug: Error when trying to get the most common alert from a channel

Steps to reproduce the behavior

  1. Go to @ai direct messages
  2. Ask to "in team <name> channel <alerting channel> what is the most common alert?"
  3. See error "Sorry! An error occoured while accessing the LLM. See server logs for details."

In the log: status code: 400, message: '<team>.LookupMattermostUser' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'messages.2.name'"}
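The 400 above suggests the OpenAI chat `name` field is being populated with a value that violates the `^[a-zA-Z0-9_-]{1,64}$` pattern. A hedged sketch of a fix, with a hypothetical `sanitizeName` helper and an illustrative input (the real value in the log is partially elided):

```go
package main

import (
	"fmt"
	"regexp"
)

// invalidChars matches everything outside the character class that the
// OpenAI API accepts for the chat `name` field, per the error above.
var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9_-]`)

// sanitizeName replaces disallowed characters and truncates to 64 bytes so
// a value like "team.LookupMattermostUser" no longer fails validation.
func sanitizeName(name string) string {
	s := invalidChars.ReplaceAllString(name, "_")
	if len(s) > 64 {
		s = s[:64]
	}
	return s
}

func main() {
	fmt.Println(sanitizeName("team.LookupMattermostUser"))
}
```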

Expected behavior

A reply stating what the most common alert is.

Version and Platform

  • Version: v0.3.2
  • Mattermost Version: 7.1.4
  • Database Schema Version: 89
  • Database: postgres

Feature Idea: Have thumbs up/down change color and visual after being toggled

Summary

Expected: After clicking thumbs up/down to evaluate an AI bot response, there's a visual indicator representing the change.
Observed: No indication of the button being pressed.

How important this is to me and why

Importance: Low

This is kind of a nice-to-have, but also a nice starter ticket for contributors to add.

Use cases:

  1. Toggling thumbs up/down in feedback on AI bot response

Additional context/similar features


Any examples of good implementations of this capability.

  • Pretty standard

Feature Idea: Allow image input

Summary

Support supplying uploaded images to models that support it (e.g. GPT-4)

How important this is to me and why

Importance: Medium

When used, diagrams tend to hold a lot of context in a particular discussion/thread. It might make summaries better if the diagrams/images were included in the context instead of being ignored.

Use cases:

  1. Direct summary of any image (e.g. to create image captions)
  2. Enhance existing ability to summarize content where there may be images inline

Additional context/similar features

I'm not aware of any, but GPT-4 is capable of accepting image input.

Feature Idea: Add in a "copy to clipboard" function onto AI messages

Summary

Users have requested a simple "Copy to clipboard" function for AI responses. Clicking the button would copy the AI's response to the clipboard. An expansion of this might also include separate copy buttons for each code snippet, etc.

How important this is to me and why

Importance: Low

Use cases:

  1. A user wishes to transfer data from the AI to other systems.

Additional context/similar features

The ChatGPT interface provides this.

Add regenerate button

It would be great if the user could press a regenerate button when they don't like the response. Especially for creative tasks, this can be valuable.

It can look like this:

Figma Link

Add tracing for LLM calls.

For development purposes it would be great to have a setting to output exactly what was sent to the LLM and the response received.
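As a sketch, such a setting could gate a thin wrapper around the completion call. The wrapper, callback, and flag are hypothetical names, not the plugin's actual API.

```go
package main

import (
	"fmt"
	"log"
)

// tracedCompletion wraps an LLM call and, when tracing is enabled, logs the
// exact prompt sent and the response received.
func tracedCompletion(trace bool, call func(string) string, prompt string) string {
	if trace {
		log.Printf("LLM request: %q", prompt)
	}
	resp := call(prompt)
	if trace {
		log.Printf("LLM response: %q", resp)
	}
	return resp
}

func main() {
	echo := func(p string) string { return "echo: " + p }
	fmt.Println(tracedCompletion(true, echo, "summarize this thread"))
}
```

In production the flag would stay off so prompts and responses are never written to the server logs.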

💡 idea: Add Organization Input Option to Open AI Engine Configuration Form

Description

In the current implementation of the Mattermost AI plugin, when configuring the OpenAI engine, users can specify an API key, default model, and token limit.

A valuable enhancement to this configuration would be the ability to specify an organization (for those who have multiple organizations linked to their OpenAI accounts).

Improve content extraction for PDFs

The current content extraction in the MM server for PDFs doesn't work very well, which creates issues when the LLM tries to understand the files.

Is there an alternative library we could use for PDF extraction?

Bug: crash with mattermost-server 9.0.0 and LocalAI backend

Steps to reproduce the behavior

  1. Install mattermost-plugin-ai v0.4.0 on mattermost-server 9.0.0
  2. Ask @ai a question
  3. No response
  4. crash error in the console

--

  1. Install mattermost-plugin-ai v0.3.2 on mattermost-server 9.0.0
  2. Ask @ai a question
  3. Started with empty response
  4. LocalAI is doing something (high cpu usage for several seconds)
  5. LocalAI is apparently done
  6. Empty response from ai bot stays, no error in the server logs

Expected behavior

ai bot responds to my question after querying LocalAI backend


Version and Platform

  • Version: 9.0.0 (server), 5.5.1 (desktop), 0.3.2/0.4.0 (ai plugin)
  • Server OS: Debian bullseye
  • Browser and OS: Firefox on Linux / Mattermost Desktop on Linux
  • LocalAI: v1.30.0 with gpt4all-j model (only CPU)

Additional context

Logs for v0.3.2:

Oct 05 08:59:01 chat mattermost[63827]: {"timestamp":"2023-10-05 08:59:01.347 +02:00","level":"error","msg":"Unable to get team for context","caller":"app/plugin_api.go:980","plugin_id":"mattermost-ai","error":"not found"}
Oct 05 08:59:01 chat mattermost[63827]: {"timestamp":"2023-10-05 08:59:01.353 +02:00","level":"error","msg":"failed to get github plugin status","caller":"app/plugin_api.go:980","plugin_id":"mattermost-ai","error":"not found"}

Configuration for v0.3.2:

OpenAI Compatible API url: http://192.168.133.25:8080
OpenAI Compatible model: gpt4all-j

AI Large Language Model service: Open AI Compatible
AI to generate images: Open AI Compatible

Logs for v0.4.0:

Oct 05 09:06:15 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:15.669 +02:00","level":"error","msg":"Unable to get team for context","caller":"app/plugin_api.go:980","plugin_id":"mattermost-ai","error":"not found"}
Oct 05 09:06:15 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:15.672 +02:00","level":"error","msg":"failed to get github plugin status","caller":"app/plugin_api.go:980","plugin_id":"mattermost-ai","error":"not found"}
Oct 05 09:06:15 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:15.685 +02:00","level":"error","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:79","plugin_id":"mattermost-ai","wrapped_extras":"pathplugins/mattermost-ai/server/dist/plugin-linux-amd64pid67651errorexit status 2"}
Oct 05 09:06:15 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:15.685 +02:00","level":"error","msg":"RPC call MessageHasBeenPosted to plugin failed.","caller":"plugin/client_rpc_generated.go:241","plugin_id":"mattermost-ai","error":"unexpected EOF"}
Oct 05 09:06:16 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:16.961 +02:00","level":"warn","msg":"Health check failed for plugin","caller":"plugin/health_check.go:59","id":"mattermost-ai","error":"plugin RPC connection is not responding"}
Oct 05 09:06:16 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:16.961 +02:00","level":"warn","msg":"error closing client during Kill","caller":"plugin/hclog_adapter.go:70","plugin_id":"mattermost-ai","wrapped_extras":"errconnection is shut down"}
Oct 05 09:06:16 chat mattermost[67410]: {"timestamp":"2023-10-05 09:06:16.961 +02:00","level":"warn","msg":"plugin failed to exit gracefully","caller":"plugin/hclog_adapter.go:72","plugin_id":"mattermost-ai"}

Configuration for v0.4.0:

Name: localai
AI Service: OpenAI Compatible
API URL: http://192.168.133.25:8080
Default Model: gpt4all-j

AI Large Language Model service: localai
