
mattermost-plugin-alertmanager's Introduction

AlertManager Plugin

This plugin is the AlertManager bot for Mattermost.

Forked from and inspired by https://github.com/metalmatze/alertmanager-bot, the Alertmanager bot for Telegram. Thanks so much, @metalmatze!

Some features:

  • Receive the Alerts via webhook
  • Can list existing alerts
  • Can list existing silences
  • Can expire a silence

TODO:

  • Create silences
  • Create alerts
  • List expired silences
  • Create and use a bot account
  • Allow multiple webhooks/channels

Supported Mattermost Server Versions: 5.37+

Installation

  1. Go to the releases page of this GitHub repository and download the latest release for your Mattermost server.
  2. Upload this file in the Mattermost System Console > Plugins > Management page to install the plugin, and enable it. To learn more about how to upload a plugin, see the documentation.

Next, to configure the plugin, follow these steps:

  1. After you've uploaded the plugin in System Console > Plugins > Management, go to the plugin's settings page at System Console > Plugins > AlertManager.
  2. Specify the team and channel to send messages to. For each, use the URL of the team or channel instead of their respective display names.
  3. Specify the AlertManager Server URL.
  4. Generate the token that will be used to validate the requests.
  5. Hit Save.
  6. Next, copy the token shown above the Save button; you will need it to configure the webhook on the Alertmanager side.
  7. Go to your Alertmanager configuration, paste the following webhook URL, and specify the name of the service and the token you copied in step 6.
  8. Invite the @alertmanagerbot user to your target team and channel.
https://SITEURL/plugins/alertmanager/api/webhook?token=TOKEN

Sometimes the token has to be quoted.

Example alertmanager config:

webhook_configs:
  - send_resolved: true
    url: "https://mattermost.example.org/plugins/alertmanager/api/webhook?token='xxxxxxxxxxxxxxxxxxx-yyyyyyy'"

Plugin in Action

(Screenshots: alertmanager-bot-1, alertmanager-bot-2, alertmanager-bot-3)

mattermost-plugin-alertmanager's People

Contributors

cmuench, cpanato, dependabot[bot], fschlich, gregharvey, hanzei, icelander, ltsavar, maxgorovenko, sumanpaikdev


mattermost-plugin-alertmanager's Issues

failed to start plugin 0.2.0

mattermost version: 6.0.2 (also tried on version 6.1.0)
plugin version: 0.2.0

I've followed the steps in the readme but am still not able to activate the plugin.
These are the errors I'm getting from the Mattermost container:

{"timestamp":"2021-12-08 11:10:28.882 Z","level":"debug","msg":"starting plugin","caller":"plugin/hclog_adapter.go:52","plugin_id":"alertmanager","wrapped_extras":"pathplugins/alertmanager/server/dist/plugin-linux-amd64args[plugins/alertmanager/server/dist/plugin-linux-amd64]"}
{"timestamp":"2021-12-08 11:10:28.883 Z","level":"debug","msg":"Next run time for scheduler","caller":"jobs/schedulers.go:216","scheduler_name":"ResendInvitationEmailJobScheduler","next_runtime":"2021-12-08 11:10:33.883363061 +0000 UTC m=+468.349421553"}
{"timestamp":"2021-12-08 11:10:28.884 Z","level":"debug","msg":"plugin started","caller":"plugin/hclog_adapter.go:52","plugin_id":"alertmanager","wrapped_extras":"pathplugins/alertmanager/server/dist/plugin-linux-amd64pid185"}
{"timestamp":"2021-12-08 11:10:28.884 Z","level":"debug","msg":"waiting for RPC address","caller":"plugin/hclog_adapter.go:52","plugin_id":"alertmanager","wrapped_extras":"pathplugins/alertmanager/server/dist/plugin-linux-amd64"}
{"timestamp":"2021-12-08 11:10:28.884 Z","level":"warn","msg":"plugin failed to exit gracefully","caller":"plugin/hclog_adapter.go:72","plugin_id":"alertmanager"}
{"timestamp":"2021-12-08 11:10:28.884 Z","level":"debug","msg":"Error relocating plugins/alertmanager/server/dist/plugin-linux-amd64: __vfprintf_chk: symbol not found","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2021-12-08 11:10:28.884 Z","level":"debug","msg":"Error relocating plugins/alertmanager/server/dist/plugin-linux-amd64: __fprintf_chk: symbol not found","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2021-12-08 11:10:28.884 Z","level":"debug","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:52","plugin_id":"alertmanager","wrapped_extras":"pathplugins/alertmanager/server/dist/plugin-linux-amd64pid185errorexit status 127"}
{"timestamp":"2021-12-08 11:10:28.884 Z","level":"error","msg":"Unable to activate plugin","caller":"app/plugin.go:146","plugin_id":"alertmanager","error":"unable to start plugin: alertmanager: Unrecognized remote plugin message: \n\nThis usually means that the plugin is either invalid or simply\nneeds to be recompiled to support the latest protocol."}

Potential Alertmanager plugin improvements

  • No proper colouring of alert levels. All alert levels (info, warning, high, critical) are shown in red, which makes it hard to focus on what matters.
  • The message should be larger and separated from the other entries in the attachment, so it is clear at first glance what the issue is.
  • Text like "Generated by a [Prometheus Alert]" is of no real use and only makes the alert noisier.

Rename the plugin and update the readme

Maybe this is far more obvious to the intended audience, but to people not familiar with Prometheus Alertmanager it may not be obvious what this plugin does.
I thought this plugin perhaps communicated with Prometheus directly, or maybe even with Grafana somehow (which also has an add-on called AlertManager). From what I see, the readme here https://github.com/cpanato/mattermost-plugin-alertmanager#readme does not explicitly say that it communicates with Prometheus Alertmanager; this is only implied by the screenshots.

Therefore I would suggest renaming the plugin to prometheus-alertmanager
and updating the top-level header in the screenshot here.
image

plugin does not create its own bot user

The Zoom, GitHub, GitLab, etc. plugins all create their own bot user for integration.

I will create a bot user for alertmanager manually, but this should not be necessary if creating one is the norm in the plugin universe.

Failed to start plugin 0.1.0

Server Version: 5.26.2
Plugin Version: 0.1.0
It fails to start. Here are some logs:

{"level":"error","ts":1617279277.5100725,"caller":"mlog/log.go:190","msg":"Unable to activate plugin","plugin_id":"com.cpanato.alertmanager","error":"SqlTeamStore.GetByName: Не удалось найти существующую команду., name=http://mt.lan:8065/team-dev,sql: no rows in result set"} {"level":"info","ts":1617279277.5121043,"caller":"bleveengine/bleve.go:267","msg":"UpdateConf Bleve"} {"level":"warn","ts":1617279277.5422955,"caller":"plugin/hclog_adapter.go:69","msg":"error closing client during Kill","plugin_id":"com.cpanato.alertmanager","wrapped_extras":"errunexpected EOF"} {"level":"warn","ts":1617279277.5427206,"caller":"plugin/hclog_adapter.go:71","msg":"plugin failed to exit gracefully","plugin_id":"com.cpanato.alertmanager"} {"level":"error","ts":1617279277.5431993,"caller":"mlog/log.go:190","msg":"Unable to activate plugin","plugin_id":"com.cpanato.alertmanager","error":"SqlTeamStore.GetByName: Не удалось найти существующую команду., name=http://mt.lan:8065/team-dev,sql: no rows in result set"} {"level":"error","ts":1617279277.603872,"caller":"mlog/log.go:190","msg":"Unable to activate plugin","plugin_id":"com.cpanato.alertmanager","error":"SqlTeamStore.GetByName: Не удалось найти существующую команду., name=http://mt.lan:8065/team-dev,sql: no rows in result set"} {"level":"info","ts":1617279277.6219296,"caller":"bleveengine/bleve.go:267","msg":"UpdateConf Bleve"} {"level":"error","ts":1617279277.6514676,"caller":"mlog/log.go:190","msg":"Unable to activate plugin","plugin_id":"com.cpanato.alertmanager","error":"SqlTeamStore.GetByName: Не удалось найти существующую команду., name=http://mt.lan:8065/team-dev,sql: no rows in result set"}

Not Working with Mattermost 5.17

The plugin is no longer working with newer versions of Mattermost.
Old versions have problems too, see #2.
In Mattermost 5.17.3 it won't start. I tried setting the team both as a URL and as a name; nothing worked.

The error message is:

{"level":"error","ts":1579856878.0848882,"caller":"mlog/log.go:174","msg":"Unable to activate plugin","plugin_id":"com.cpanato.alertmanager","error":"unable to start plugin: com.cpanato.alertmanager: SqlTeamStore.GetByName: Unable to find the existing team, name=URL,sql: no rows in result set","errorVerbose":"SqlTeamStore.GetByName: Unable to find the existing team, name=URL,sql: no rows in result set\nunable to start plugin: com.cpanato.alertmanager\ngithub.com/mattermost/mattermost-server/plugin.(*Environment).Activate\n\t/go/src/github.com/mattermost/mattermost-server/plugin/environment.go:251\ngithub.com/mattermost/mattermost-server/app.(*App).SyncPluginsActiveState\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:100\ngithub.com/mattermost/mattermost-server/app.(*App).InitPlugins.func2\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:183\ngithub.com/mattermost/mattermost-server/config.(*emitter).invokeConfigListeners.func1\n\t/go/src/github.com/mattermost/mattermost-server/config/emitter.go:35\nsync.(*Map).Range\n\t/usr/local/go/src/sync/map.go:333\ngithub.com/mattermost/mattermost-server/config.(*emitter).invokeConfigListeners\n\t/go/src/github.com/mattermost/mattermost-server/config/emitter.go:33\ngithub.com/mattermost/mattermost-server/config.(*commonStore).set\n\t/go/src/github.com/mattermost/mattermost-server/config/common.go:90\ngithub.com/mattermost/mattermost-server/config.(*FileStore).Set\n\t/go/src/github.com/mattermost/mattermost-server/config/file.go:107\ngithub.com/mattermost/mattermost-server/app.(*App).SaveConfig\n\t/go/src/github.com/mattermost/mattermost-server/app/config.go:378\ngithub.com/mattermost/mattermost-server/app.(*App).EnablePlugin\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:333\ngithub.com/mattermost/mattermost-server/api4.enablePlugin\n\t/go/src/github.com/mattermost/mattermost-server/api4/plugin.go:298\ngithub.com/mattermost/mattermost-server/web.Handler.ServeHTTP\n\t/go/src/github.com/mattermost/mattermost-server/web/handlers.go:148\ngithub.com/NYTimes/gziphandler.GzipHandlerWithOpts.func1.1\n\t/go/src/github.com/mattermost/mattermost-server/vendor/github.com/NYTimes/gziphandler/gzip.go:336\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:1995\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/src/github.com/mattermost/mattermost-server/vendor/github.com/gorilla/mux/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2774\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1878\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1337"}
{"level":"error","ts":1579856878.1300879,"caller":"mlog/log.go:174","msg":"Unable to activate plugin","plugin_id":"jenkins","error":"unable to start plugin: jenkins: Please add Jenkins URL in plugin settings","errorVerbose":"Please add Jenkins URL in plugin settings\nunable to start plugin: jenkins\ngithub.com/mattermost/mattermost-server/plugin.(*Environment).Activate\n\t/go/src/github.com/mattermost/mattermost-server/plugin/environment.go:251\ngithub.com/mattermost/mattermost-server/app.(*App).SyncPluginsActiveState\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:100\ngithub.com/mattermost/mattermost-server/app.(*App).InitPlugins.func2\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:183\ngithub.com/mattermost/mattermost-server/config.(*emitter).invokeConfigListeners.func1\n\t/go/src/github.com/mattermost/mattermost-server/config/emitter.go:35\nsync.(*Map).Range\n\t/usr/local/go/src/sync/map.go:333\ngithub.com/mattermost/mattermost-server/config.(*emitter).invokeConfigListeners\n\t/go/src/github.com/mattermost/mattermost-server/config/emitter.go:33\ngithub.com/mattermost/mattermost-server/config.(*commonStore).set\n\t/go/src/github.com/mattermost/mattermost-server/config/common.go:90\ngithub.com/mattermost/mattermost-server/config.(*FileStore).Set\n\t/go/src/github.com/mattermost/mattermost-server/config/file.go:107\ngithub.com/mattermost/mattermost-server/app.(*App).SaveConfig\n\t/go/src/github.com/mattermost/mattermost-server/app/config.go:378\ngithub.com/mattermost/mattermost-server/app.(*App).EnablePlugin\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:333\ngithub.com/mattermost/mattermost-server/api4.enablePlugin\n\t/go/src/github.com/mattermost/mattermost-server/api4/plugin.go:298\ngithub.com/mattermost/mattermost-server/web.Handler.ServeHTTP\n\t/go/src/github.com/mattermost/mattermost-server/web/handlers.go:148\ngithub.com/NYTimes/gziphandler.GzipHandlerWithOpts.func1.1\n\t/go/src/github.com/mattermost/mattermost-server/vendor/github.com/NYTimes/gziphandler/gzip.go:336\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:1995\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/src/github.com/mattermost/mattermost-server/vendor/github.com/gorilla/mux/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2774\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1878\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1337"}

Expiring a silence is a bit unclear as the configuration number is not obvious.

When running the slash command to expire a silence, providing the "configuration number" is confusing at first, because we don't use that term on the admin UI side and users may think the value comes from Alertmanager.

Also, it's confusing that the record set starts at ID 0. Developers tend to start counting at 0, but when a record set is shown in a UI it should start at 1.

We should add the "Configuration Number" label in the Admin UI to make this more obvious and start counting records at 1.

Mattermost 5.8: unable to start plugin: com.cpanato.alertmanager: Must set a Team

Trying to install this plugin with mattermost 5.8, I get the following error:

{
  "level": "error",
  "ts": 1578321163.0513644,
  "caller": "mlog/log.go:174",
  "msg": "Unable to activate plugin",
  "plugin_id": "com.cpanato.alertmanager",
  "error": "unable to start plugin: com.cpanato.alertmanager: Must set a Team",
  "errorVerbose": "Must set a Team\nunable to start plugin: com.cpanato.alertmanager\ngithub.com/mattermost/mattermost-server/v5/plugin.(*Environment).Activate\n\t/go/src/github.com/mattermost/mattermost-server/plugin/environment.go:251\ngithub.com/mattermost/mattermost-server/v5/app.(*App).SyncPluginsActiveState\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:106\ngithub.com/mattermost/mattermost-server/v5/app.(*App).InitPlugins.func2\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:197\ngithub.com/mattermost/mattermost-server/v5/config.(*emitter).invokeConfigListeners.func1\n\t/go/src/github.com/mattermost/mattermost-server/config/emitter.go:35\nsync.(*Map).Range\n\t/usr/local/go/src/sync/map.go:333\ngithub.com/mattermost/mattermost-server/v5/config.(*emitter).invokeConfigListeners\n\t/go/src/github.com/mattermost/mattermost-server/config/emitter.go:33\ngithub.com/mattermost/mattermost-server/v5/config.(*commonStore).set\n\t/go/src/github.com/mattermost/mattermost-server/config/common.go:90\ngithub.com/mattermost/mattermost-server/v5/config.(*FileStore).Set\n\t/go/src/github.com/mattermost/mattermost-server/config/file.go:107\ngithub.com/mattermost/mattermost-server/v5/app.(*Server).UpdateConfig\n\t/go/src/github.com/mattermost/mattermost-server/app/config.go:53\ngithub.com/mattermost/mattermost-server/v5/app.(*App).UpdateConfig\n\t/go/src/github.com/mattermost/mattermost-server/app/config.go:59\ngithub.com/mattermost/mattermost-server/v5/app.(*App).EnablePlugin\n\t/go/src/github.com/mattermost/mattermost-server/app/plugin.go:341\ngithub.com/mattermost/mattermost-server/v5/api4.enablePlugin\n\t/go/src/github.com/mattermost/mattermost-server/api4/plugin.go:305\ngithub.com/mattermost/mattermost-server/v5/web.Handler.ServeHTTP\n\t/go/src/github.com/mattermost/mattermost-server/web/handlers.go:163\ngithub.com/NYTimes/gziphandler.GzipHandlerWithOpts.func1.1\n\t/go/src/github.com/mattermost/mattermost-server/vendor/github.com/NYTimes/gziphandler/gzip.go:336\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/src/github.com/mattermost/mattermost-server/vendor/github.com/gorilla/mux/mux.go:212\ngithub.com/mattermost/mattermost-server/v5/app.(*RateLimiter).RateLimitHandler.func1\n\t/go/src/github.com/mattermost/mattermost-server/app/ratelimit.go:108\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2802\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1890\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"

Unable to enable the plugin on MM 9.4.1

We are unable to enable this plugin on the latest MM version, 9.4.1.

We get the following error:

This plugin failed to start. unable to start plugin: alertmanager: Unrecognized remote plugin message: This usually means the plugin was not compiled for this architecture, the plugin is missing dynamic-link libraries necessary to run, the plugin is not executable by this process due to file permissions, or the plugin failed to negotiate the initial go-plugin protocol handshake Additional notes about plugin: Path: plugins/alertmanager/server/dist/plugin-linux-amd64 Mode: -rwxr-xr-x Owner: 114 [mattermost] (current: 114 [mattermost]) Group: 122 [mattermost] (current: 122 [mattermost]) ELF architecture: EM_X86_64 (current architecture: amd64)

Can someone please help with this?

Support for Mattermost Cloud

Hello, is there a way to deploy this plugin in a Mattermost Cloud instance, or to self-host it independently and connect it to the cloud?

question about color bars

I noticed that the color is set dynamically, but being unfamiliar with Go code and classing, I wasn't sure whether it is possible to key off the "severity" label to set a color of #ffcc00 for warnings instead of red.

I will play with the code later if I have time.

Support for differentiating multiple clusters

Motivation

It appears to be possible to specify only one channel for alerts, but we have 3 different clusters, so we would like to send messages to different channels.
Alternatively, if only one channel can be used, there is no way to see which cluster an alert comes from. Example:

Default
Status: :fire: FIRING :fire:
summary: Job failed to complete.
description: Job openshift-logging/curator-1625119680 failed to complete. Removing failed job after investigation should clear this alert.
severity: warning
alertname: KubeJobFailed
condition: true
container: kube-rbac-proxy-main
namespace: openshift-logging
prometheus: openshift-monitoring/k8s
service: kube-state-metrics
endpoint: https-main
job: kube-state-metrics
job_name: curator-1625119680
Start At: 2 days 3 hours 31 minutes 33 seconds 61 milliseconds 452 microseconds
Ended At: 292 years 24 weeks 3 days 23 hours 47 minutes 16 seconds 854 milliseconds 775 microseconds
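One common way to tell clusters apart when they share a channel is to stamp a cluster label on every alert at its source, for example via external_labels in each cluster's Prometheus configuration. A sketch; the label name and value are placeholders, not something this plugin requires:

# prometheus.yml, set per cluster (sketch)
global:
  external_labels:
    cluster: openshift-prod    # appears as a label on every alert forwarded to Alertmanager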

alerts for different teams all ending up in the same channel

I'm testing the multiple teams feature, sending mock alerts to mattermost using curl like so:

$ curl -i -X POST 'https://mattermost.my.domain/plugins/alertmanager/api/webhook?token=LONG-TEAM-TOKEN' -d '{
  "version": "4",
  "status": "firing",
  "groupKey": "testing",
  "truncatedAlerts": 0,
  "alerts": [
    {
      "annotations": {
          "name": "TestAlert to itds team?"
      },
      "labels": {
          "alertname": "testalert1",
          "instance": "testinstance1",
          "source": "jenkins"
      }
    }
  ]
}'

I have created three teams with different names but identical channel names (all use the town-square channel). When I send the above alert using the different tokens, all alerts end up in the channel of the third team! (Note the different AlertManagerPluginId on the different alert messages):
multiple-teams

However, I find the plugin works well if I use the same team and different channels, or several teams with a differently named channel on each. So somewhere the "multiple teams" feature uses just the channel as a key, whereas different Alertmanagers should really be keyed by both team and channel, no?

When adding multiple Alertmanager configs in the plugin, messages are posted to a random channel

We're using Mattermost 9.4.1 and plugin 0.4.0

Setup:
We have configured four different Alertmanagers in the cluster. They all use the same team, but different channels and they each have a unique token.

Incorrect behavior:
When alerts are sent by the cluster (or when sending test messages) to the four URLs, the messages are being posted into one of the four configured channels randomly. When sending the same message over and over, the channel seems to be consistent, even if it is wrong. If the message is changed ever so slightly, it lands in a different, random channel, even though the token was not touched.

Expected behavior:
The message should always be posted to the channel associated with the token.

What else have I tried:
I looked at the code to determine whether there are debug logs that could be enabled to figure out what goes wrong; there are not. I also reviewed the relevant code sections, but nothing jumped out immediately; the code looks plausible.

What did I not try:
I am unable to add additional debug logs or play with the code, as I'm not in an environment where I can easily build Go projects.

Do you have any idea what's going wrong?

commands fail if alertmanager is behind basic auth

Hello

I've set up basic auth in front of my Alertmanager so it's not exposed naked on the internet, which makes the bot fail when it tries to fetch data for its commands.

image

The URL works when I open it myself, once I authenticate, of course.

Maybe you should allow adding headers in the bot settings, if such a customization is possible?

Working with multiple teams?

Hi, I am running a server with 2 teams, 1 for a web portal and 1 for a remote site. I was trying to figure out if it's possible to send alerts to both teams, but the configuration for this plugin seems to point to only alerting a single team.

Is there a way to send alerts to 2 different teams on the same Mattermost instance using different receivers or something?
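Assuming each team/channel pair has its own webhook token configured in the plugin (as the multiple-webhooks feature intends), Alertmanager routing can fan alerts out to separate receivers. A sketch only; the receiver names, tokens, and the team label are illustrative placeholders:

# alertmanager.yml (sketch)
route:
  receiver: team-portal
  routes:
    - matchers:
        - team = "remote-site"   # hypothetical label attached to the remote site's alerts
      receiver: team-remote
receivers:
  - name: team-portal
    webhook_configs:
      - url: "https://mattermost.example.org/plugins/alertmanager/api/webhook?token=PORTAL_TOKEN"
  - name: team-remote
    webhook_configs:
      - url: "https://mattermost.example.org/plugins/alertmanager/api/webhook?token=REMOTE_TOKEN"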

Unsure how to troubleshoot

Is there a way to see what is being sent out and the response (from either side) without doing tcpdump?

component=dispatcher msg="Error on notify" err="cancelling notify retry for "webhook" due to unrecoverable error: unexpected status code 405: https://chat.mergetb.net//plugins/com.cpanato.alertmanager/api/webhook?token=" context_err=null

Error with plugin

Hello!
I'm using Mattermost 7.1.2 and plugin 0.2.0, and I receive errors when Alertmanager sends alerts.

Aug 26 16:32:14 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:14.329 +03:00","level":"info","msg":"Received alertmanager notification","caller":"app/plugin_api.go:937","plugin_id":"alertmanager"}
Aug 26 16:32:14 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:14.337 +03:00","level":"error","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:79","plugin_id":"alertmanager","wrapped_extras":"pathplugins/alertmanager/server/dist/plugin-linux-amd64pid4809errorexit status 2"}
Aug 26 16:32:14 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:14.338 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"session shutdown"}
Aug 26 16:32:14 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:14.339 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"unexpected EOF"}
Aug 26 16:32:14 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:14.340 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"unexpected EOF"}
Aug 26 16:32:14 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:14.671 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:14 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:14.830 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:14 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:14.999 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:15 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:15.034 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:15 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:15.422 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:15 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:15.690 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:15 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:15.844 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:15 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:15.889 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:16 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:16.670 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:16 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:16.803 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:17 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:17.016 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:17 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:17.199 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:17 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:17.553 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:18 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:18.788 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.335 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.335 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":11,"error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.340 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.340 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":13,"error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.372 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.599 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.672 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.672 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":15,"error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.831 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.831 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":17,"error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.957 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.999 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":19,"error":"timeout waiting for accept"}
Aug 26 16:32:19 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:19.999 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.036 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.036 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":21,"error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.423 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.424 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":23,"error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.593 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.690 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":25,"error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.690 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.845 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.845 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":27,"error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.890 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:20 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:20.890 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":29,"error":"timeout waiting for accept"}
Aug 26 16:32:21 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:21.208 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:21 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:21.672 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:21 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:21.672 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":31,"error":"timeout waiting for accept"}
Aug 26 16:32:21 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:21.804 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:21 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:21.804 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":33,"error":"timeout waiting for accept"}
Aug 26 16:32:21 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:21.978 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:22 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:22.017 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:22 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:22.017 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":35,"error":"timeout waiting for accept"}
Aug 26 16:32:22 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:22.199 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:22 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:22.200 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":37,"error":"timeout waiting for accept"}
Aug 26 16:32:22 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:22.554 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:22 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:22.554 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":39,"error":"timeout waiting for accept"}
Aug 26 16:32:23 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:23.789 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:23 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:23.789 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":41,"error":"timeout waiting for accept"}
Aug 26 16:32:24 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:24.242 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:426","plugin_id":"alertmanager","error":"connection is shut down"}
Aug 26 16:32:24 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:24.373 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't Accept request body connection","caller":"plugin/client_rpc.go:400","plugin_id":"alertmanager","error":"timeout waiting for accept"}
Aug 26 16:32:24 mattermost-1.node.1520.consul mattermost[4617]: {"timestamp":"2022-08-26 16:32:24.373 +03:00","level":"error","msg":"Plugin failed to ServeHTTP, muxBroker couldn't accept connection","caller":"plugin/client_rpc.go:381","plugin_id":"alertmanager","serve_http_stream_id":43,"error":"timeout waiting for accept"}

alertmanager config

global:
  resolve_timeout: 3m
  slack_api_url: 'http://10.243.22.39:8065/plugins/alertmanager/api/webhook?token=REAL_TOKEN'
templates:
- '/etc/alertmanager/templates/*.tmpl'
receivers:
- name: slack
  slack_configs:
  - channel: alerts
    color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
    send_resolved: true
    text: "some text"
    title: '{{ if eq .Status "firing" }}Firing {{ .Alerts.Firing | len }}{{ else }}Resolved
      {{ .Alerts.Resolved | len }}{{ end }} alerts'

route:
  group_by:
  - alertname
  - instance
  - service
  - host_name
  group_interval: 5m
  group_wait: 30s
  receiver: slack
  repeat_interval: 3h

image
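Worth noting: the configuration above delivers to the plugin via slack_api_url/slack_configs, whereas the README configures the plugin as an Alertmanager webhook receiver. A webhook-style receiver pointed at the same endpoint would look roughly like this (a sketch; the receiver name is a placeholder and the token is the one from the plugin settings):

# alertmanager.yml (sketch)
route:
  receiver: mattermost
receivers:
  - name: mattermost
    webhook_configs:
      - send_resolved: true
        url: 'http://10.243.22.39:8065/plugins/alertmanager/api/webhook?token=REAL_TOKEN'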

Initial webhook event when an alert starts is not being delivered

I'm seeing an issue where I can run the alerts command and see all my alerts correctly. However, I don't get the initial webhook event when an alert first occurs.

Steps:

  • Set up and configure the AlertManager plugin
    image
  • Supply the webhook URL and token in alertmanager.yml

receivers:
  - name: 'web.hook'
    webhook_configs:
      - url: '<my-url>/plugins/alertmanager/api/webhook?token=<my-token>'

  • Trigger an alert in Prometheus
  • Watch the alert appear in the AlertManager UI
  • Return to the Mattermost channel

Observed:
A. No webhook event was posted when the alert started.
B. Running the alerts command shows the new alert listed.

Plugin crashes on notification

I have Mattermost v7.5.2 bundled with GitLab.
I'm using the AlertManager plugin v0.4.0.
I configured everything according to the readme.

When I open /plugins/alertmanager/api/webhook I get the text "Mattermost AlertManager Plugin".

But when I send a notification like this:

POST https://xxxxxx/plugins/alertmanager/api/webhook?token=yyyyyy
Content-type: application/json
{
    "text": "Hello, world."
}

I get a "500 internal server error" response.

Server logs:

{"timestamp":"2023-03-05 21:10:34.109 Z","level":"info","msg":"Received alertmanager notification","caller":"app/plugin_api.go:973","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:10:34.114 Z","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:423","plugin_id":"alertmanager","error":"unexpected EOF"}
{"timestamp":"2023-03-05 21:10:34.114 Z","level":"error","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:79","plugin_id":"alertmanager","wrapped_extras":"path/var/opt/gitlab/mattermost/plugins/alertmanager/server/dist/plugin-linux-amd64pid29375errorexit status 2"}

Debug logs:

{"timestamp":"2023-03-05 21:19:55.161 Z","level":"info","msg":"Received alertmanager notification","caller":"app/plugin_api.go:973","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"panic: runtime error: invalid memory address or nil pointer dereference","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0xee6fd5]","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"goroutine 27 [running]:","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"main.(*Plugin).handleWebhook(0xc000400900, {0x156b2e0, 0xc000420c60}, 0xc00050e200, {{0x1c8ce20, 0x1}, {0xc0007c84a0, 0x20}, {0xc00011ac78, 0x6}, ...})","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"\tgithub.com/cpanato/mattermost-plugin-alertmanager/server/webhook.go:33 +0x115","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"main.(*Plugin).ServeHTTP(0x7f2da5187f68?, 0x10d68c0?, {0x156b2e0, 0xc000420c60}, 0xc00050e200)","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"\tgithub.com/cpanato/mattermost-plugin-alertmanager/server/plugin.go:131 +0x3d5","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"github.com/mattermost/mattermost-server/v6/plugin.(*hooksRPCServer).ServeHTTP(0xc0004224a0, 0xc0003378a0, 0x1?)","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"\tgithub.com/mattermost/mattermost-server/[email protected]/plugin/client_rpc.go:453 +0x417","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"reflect.Value.call({0xc0000c1560?, 0xc0000bc868?, 0x13?}, {0x111d53e, 0x4}, {0xc000781ef8, 0x3, 0x3?})","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"\treflect/value.go:584 +0x8c5","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"reflect.Value.Call({0xc0000c1560?, 0xc0000bc868?, 0x0?}, {0xc0000db6f8?, 0x0?, 0x0?})","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"\treflect/value.go:368 +0xbc","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"net/rpc.(*service).call(0xc0000cc340, 0x0?, 0x0?, 0xc0000ce120, 0xc000334b80, 0x0?, {0xf51dc0?, 0xc0003378a0?, 0x0?}, {0xf5eb40, ...}, ...)","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"\tnet/rpc/server.go:382 +0x226","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"created by net/rpc.(*Server).ServeCodec","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.164 Z","level":"debug","msg":"\tnet/rpc/server.go:479 +0x3fe","caller":"plugin/hclog_adapter.go:54","plugin_id":"alertmanager"}
{"timestamp":"2023-03-05 21:19:55.166 Z","level":"error","msg":"Plugin failed to ServeHTTP, RPC call failed","caller":"plugin/client_rpc.go:423","plugin_id":"alertmanager","error":"unexpected EOF"}
{"timestamp":"2023-03-05 21:19:55.166 Z","level":"error","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:79","plugin_id":"alertmanager","wrapped_extras":"path/var/opt/gitlab/mattermost/plugins/alertmanager/server/dist/plugin-linux-amd64pid30003errorexit status 2"}

feature to add

Hello,
I think some features would be useful to add:

  • handle multiple Alertmanager servers
  • add a link to the alert and its source
  • if there are annotations, display them

but otherwise it's a nice plugin!

Custom Templates

Motivation
The Alertmanager messages displayed in Mattermost each take up a lot of screen space. I'd like to be able to customize the message so it provides a short summary, something like:
alertname=InstanceDown, instance=hostname, severity=warning
If colour could be used to denote Firing or Resolved, that would be ideal.

Feature
Ability to customize the output format of generated Mattermost messages. The Slack integration appears to include options for customized Go templates but I don't think this can be done with the webhook_config integration.

Additional context
The following blog post provides some examples of custom templates but I don't see a way to integrate these with the webhook_config.
https://prometheus.io/blog/2016/03/03/custom-alertmanager-templates/
