
smee.io's Introduction

Webhook payload delivery service
Usage • How it works • Deploying your own Smee.io • FAQ


Looking for probot/smee-client?

Usage

Smee is a webhook payload delivery service - it receives webhook payloads, and sends them to listening clients. You can generate a new channel by visiting https://smee.io, and get a unique URL to send payloads to.

Heads up! Smee.io is intended for use in development, not for production. It's a way to inspect payloads through a UI and receive them on a local machine, not as a proxy for production applications.
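For example, once you have a channel URL, you can POST a test payload to it from Node and watch it appear in the channel's UI. A minimal sketch, assuming Node 18+ for the built-in fetch (the channel path below is a placeholder):

// send-test-payload.js - post a sample payload to a Smee channel
const channelUrl = 'https://smee.io/your-channel-id'; // placeholder; use your own channel

fetch(channelUrl, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ hello: 'world' })
})
  .then((res) => console.log('Smee responded with status', res.status))
  .catch((err) => console.error('Failed to reach the channel:', err));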

How it works

Smee works with two components: the public website smee.io and the smee-client. They talk to each other via Server-Sent Events, a type of connection that allows for messages to be sent from a source to any clients listening.

This means that channels are just an abstraction - all Smee does is receive a payload and send it to any actively connected clients.
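As a rough illustration of what a listening client does, the sketch below subscribes to a channel with the eventsource npm package and logs each message it receives. This is a simplified stand-in for smee-client, and the exact shape of the message data is Smee's own format, so treat the parsing as an assumption:

// listen-sketch.js - a bare-bones Server-Sent Events listener (npm install eventsource)
const EventSource = require('eventsource');

const source = new EventSource('https://smee.io/your-channel-id'); // placeholder channel

source.onmessage = (message) => {
  // Each webhook arrives as one SSE message whose data field is a JSON string
  const payload = JSON.parse(message.data);
  console.log('Received payload:', payload);
};

source.onerror = (err) => console.error('Connection error:', err);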

Deploying your own Smee.io

Smee.io is a simple Node.js application. You can deploy it any way you would deploy any other Node app. The easiest solution is probably Heroku, or you can use Docker:

docker run -p 3000:3000 ghcr.io/probot/smee.io

Don't forget to point smee-client to your instance of smee.io:

smee --url https://your-smee.io/channel
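If you would rather run the client from Node than from the CLI, smee-client also exposes a programmatic interface (see probot/smee-client for the authoritative documentation). A sketch of how that typically looks:

// forward.js - run the client programmatically instead of via the CLI
const SmeeClient = require('smee-client');

const smee = new SmeeClient({
  source: 'https://your-smee.io/channel', // your self-hosted channel
  target: 'http://localhost:3000/events', // where payloads should be forwarded
  logger: console
});

const events = smee.start();

// Later, to stop forwarding:
// events.close();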

Running multiple instances of Smee.io

If you need to run multiple instances of the web app, you need a way to share events across those instances. A client may be connected to instance A, so if a relevant event is sent to instance B, instance A needs to know about it too.

For that reason, Smee.io has built-in support for Redis as a message bus. To enable it, just set a REDIS_URL environment variable. That will tell the app to use Redis when receiving payloads, and to publish them from each instance of the app.
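Conceptually, the fan-out looks like the sketch below. This is not Smee's actual code, just an illustration of the pub/sub pattern using the ioredis package; deliverToLocalClients is a hypothetical function standing in for pushing an event to the SSE clients connected to this instance:

// redis-bus-sketch.js - illustrative pub/sub fan-out across instances
const Redis = require('ioredis');

const publisher = new Redis(process.env.REDIS_URL);
const subscriber = new Redis(process.env.REDIS_URL); // subscriber connections are dedicated

// Every instance subscribes, so whichever instance holds a client's SSE connection can deliver.
subscriber.subscribe('smee-events');
subscriber.on('message', (topic, raw) => {
  const { channel, payload } = JSON.parse(raw);
  deliverToLocalClients(channel, payload); // hypothetical: push to clients on this instance
});

// When a webhook arrives at any instance, publish it so every instance sees it.
function onWebhookReceived(channel, payload) {
  publisher.publish('smee-events', JSON.stringify({ channel, payload }));
}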

FAQ

How long do channels live for?

Channels are always active - once a client is connected, Smee will send any payloads it gets at /:channel to those clients.

Should I use this in production?

No! Smee is not designed for production use - it is a development and testing tool. Note that channels are not authenticated, so anyone who has your channel ID can see the payloads being sent to it.

Are payloads ever stored?

Webhook payloads are never stored on the server, or in any database; the Smee.io server is simply a pass-through. However, we do store payloads in localStorage in your browser, so that revisiting https://smee.io/:channel will persist the payloads you saw there last.

smee.io's People

Contributors

andy-g, bkeepers, dependabot[bot], gr2m, jamesmgreene, jasonetco, jtyr, macklinu, mrchrisw, nordes, oscard0m, rpetti, tcbyrd, zeke

smee.io's Issues

Is Smee down?

I'm getting a 502 or a timeout when I try to visit any smee.io URL, and webhooks are no longer being received.

Refresh "time-since" on an interval

When a delivery sits around in the UI for a while without being expanded, the time-ago label gets stale and doesn't update. You can end up with payloads that look like this, until the bottom one is expanded and the component re-renders:

[screenshot]

I'd like to set a timer (setInterval) to forceUpdate() the list. Does this sound like the right way to do it? There's also the question of setting the timer on the list vs. on each ListItem; per-item timers would make for more efficient re-renders, but mean more intervals existing in the window.
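Something along these lines (a sketch of the list-level setInterval approach, not the actual Smee UI code; DeliveryList and ListItem are stand-in names) would keep the labels fresh:

// DeliveryList.jsx - re-render the list once a minute so "time ago" labels stay current
import React from 'react';

// Stand-in for the real list item that renders a delivery with its relative timestamp
const ListItem = ({ item }) => <div>{item.id}</div>;

class DeliveryList extends React.Component {
  componentDidMount() {
    // One interval on the list, rather than one per ListItem
    this.timer = setInterval(() => this.forceUpdate(), 60 * 1000);
  }

  componentWillUnmount() {
    clearInterval(this.timer);
  }

  render() {
    return this.props.items.map((item) => <ListItem key={item.id} item={item} />);
  }
}

export default DeliveryList;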

Error that certificate has expired

I've been using smee for a year, but since last week I'm getting this error: Event { type: 'error', message: 'certificate has expired' }
Smee runs as a service inside a Jenkins instance, but I have no idea which certificate has expired. Can someone help me with this issue?

smee.io uptime?

I've been trying to use your http://smee.io site today as a new user, and the app has gone down twice. Are the servers generally reliable, or is this par for the course? I can't use your proxy if it's that unreliable, but I'd rather not configure the firewall, so I'm posting an issue in case it goes somewhere. Sorry to report a 'site down' error on GitHub issues, but I'm not sure where else to go. If smee.io isn't known for its uptime, that would be appreciated information.

Github App Events forward unreliable since dependency upgrade

I was developing a GitHub App and used smee.io to forward the API calls to localhost.
This worked perfectly for the last week, including yesterday. Today, upon starting work, only about 25% of the calls were properly registered on the smee.io page and forwarded.
I tried multiple different smee channels and different repos.

GitHub does not seem to be the issue, as all events for the same repos were properly registered if I used my own server or ngrok as the endpoint instead of smee.

I presume this has to do with some of these updated dependencies from Pull Requests #23 -> #27.

502 and 503 errors

Currently using smee.io for local GitHub App testing, but recently noticed a lot of 502 and 503 errors on smee.io.

smee.io status

As has been pointed out in a few issues (#135, #125, #122), smee.io has become unstable because it's getting way more traffic than it was intended to handle for development purposes. I've scaled it up vertically as much as I can, but the service is no longer stable in its current form. We've attempted to implement mechanisms to prevent people from abusing the service, but it's become a game of whack-a-mole and implementing proper rate limiting and per-channel auth is beyond the scope of what we can do in a day or two to get things back online.

I have a few ideas for next steps, and if there's demand I can try to spin up a new and improved service on smee.io, but in the meantime, it will be offline until this latest wave of traffic dies down and the service can be brought back up reliably.

How long does a channel created on smee.io live?

Hi. Thanks for this awesome and easy-to-use program. This issue is just to ask a few questions about channels in the hosted version of smee.io deployed at https://smee.io.

  • For how long does the channel exist?
  • If there's no client connected to the channel, does it close sooner?
  • What happens if I'm connected to the channel 24/7 but only receive payloads every 2-3 hours?

smee channels seem to be down

Hi Team,

I am using smee for my GitHub App development, and starting from yesterday the channel is not working as expected; it seems to be down most of the time.


Sometimes I get a 502 upstream error, sometimes the channel shows but payloads fail to deliver, and sometimes the channel does not open at all.


Can anyone help me with this situation?

Regards,
Mebin Thomas

Payload Too Large

It seems like whatever proxy service smee is using blocks large payloads from GitHub.

[screenshot]

The above webhook does not register in the smee interface, nor does it hit my downstream server.

[DISCUSSION] Support for methods other than POST?

I understand that smee is a webhook payload forwarding service, but is there any way the rest of the basic HTTP methods for CRUD actions (GET, PUT, PATCH, DELETE) could be supported?

I have more ideas on this, but I'm interested to hear feedback first.

I'm aware that GET /:channel renders the very helpful UI. That UI could be hosted at GET /ui/:channel instead, which would free up GET requests to be forwarded on /:channel.

Docker build does not build with webpack

Since switching to npm ci in the Dockerfile, the command webpack -p is no longer called, so the files main.min.css and main.min.js are missing from the built image. Switching back to npm install --unsafe-perm fixes this issue. There may be more changes needed to fully switch to npm ci.

Convert paddle webhooks to use smee

#13 got merged recently, which means we can use smee with Paddle

  • Convert existing paddle stuff to use it
  • Add an env field to the Paddle passthrough so multiple devs can use the webhooks

Non-JSON content bodies do not get transferred (e.g. application/x-www-form-urlencoded)

Hi,

When I try to send data as application/x-www-form-urlencoded, smee does not forward it to the event client (my app).

I think it's because right now smee.io only considers JSON content types for the body.

It would be a nice addition to also support x-www-form-urlencoded, in order to integrate with Slack bots/commands.

I didn't try it, but in https://github.com/probot/smee.io/blob/master/server.js#L32 you add the JSON option; after that you could also add app.use(express.urlencoded()) in order to transform the form-encoded body into a JSON payload.
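For reference, wiring up both parsers in an Express app looks roughly like this; a sketch of the general pattern, not the actual smee.io server code:

// body-parsers-sketch.js - accept both JSON and form-encoded webhook bodies
const express = require('express');
const app = express();

app.use(express.json());                         // application/json
app.use(express.urlencoded({ extended: true })); // application/x-www-form-urlencoded

app.post('/:channel', (req, res) => {
  // With both parsers registered, req.body is populated for either content type
  console.log('Parsed body:', req.body);
  res.sendStatus(200);
});

app.listen(3000);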

Error verifying AWS simple notification service with 'smee' endpoint targeting to localhost api

Please find the below stack trace:

Forwarding https://smee.io/7qisD6OLdKOIcqPM to http://localhost:800/api/1/app/aws-email-webhook/
Connected https://smee.io/7qisD6OLdKOIcqPM
_http_outgoing.js:690
throw new ERR_INVALID_ARG_TYPE('chunk', ['string', 'Buffer'], chunk); ^

TypeError [ERR_INVALID_ARG_TYPE]: The "chunk" argument must be one of type string or Buffer. Received type object
at ClientRequest.end (_http_outgoing.js:690:13)
at Request._end (/usr/lib/node_modules/smee-client/node_modules/superagent/lib/node/index.js:1021:9)
at Request.end (/usr/lib/node_modules/smee-client/node_modules/superagent/lib/node/index.js:777:15)
at Client.onmessage (/usr/lib/node_modules/smee-client/index.js:35:9)
at EventSource.emit (events.js:198:13)
at _emit (/usr/lib/node_modules/smee-client/node_modules/eventsource/lib/eventsource.js:242:17)
at parseEventStreamLine (/usr/lib/node_modules/smee-client/node_modules/eventsource/lib/eventsource.js:257:9)
at IncomingMessage. (/usr/lib/node_modules/smee-client/node_modules/eventsource/lib/eventsource.js:217:11)
at IncomingMessage.emit (events.js:198:13)
at addChunk (_stream_readable.js:288:12)

SMEE proxy url - https://smee.io/7qisD6OLdKOIcqPM

I added the above URL as an AWS SNS endpoint.
As soon as I request confirmation on AWS for this endpoint, the above error is thrown. Please help me fix it!
System info: Ubuntu 18.10. I already ran npm update and then installed smee-client.
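For context, the error says the forwarded request body ended up as a plain JavaScript object where Node's HTTP layer expected a string or Buffer. A defensive pattern in any forwarding client is to serialize non-string bodies before writing them; a sketch of the idea, not a patch to smee-client itself:

// writable-body-sketch.js - ensure the body handed to the HTTP request is a string or Buffer
function toWritableBody(body) {
  if (typeof body === 'string' || Buffer.isBuffer(body)) {
    return body;
  }
  // An already-parsed object (e.g. a JSON payload) must be re-serialized first
  return JSON.stringify(body);
}

// hypothetical usage inside a forwarder:
// request.end(toWritableBody(event.body));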

Support X-Hub-Signature

If the server avoided parsing the JSON and sent it along as a string (or did both, parsing and sending, for backwards compatibility), then smee-client could verify the X-Hub-Signature header to check that the JSON was signed with the user's key before using it.
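For context, verifying that header requires the raw, unmodified request bytes, since the HMAC is computed over them. A sketch of the check a client could run, assuming GitHub's X-Hub-Signature-256 scheme (HMAC-SHA256 of the body, hex-encoded, prefixed with "sha256="):

// verify-signature-sketch.js - compare GitHub's signature header against the raw body
const crypto = require('crypto');

function verifySignature(rawBody, signatureHeader, secret) {
  const expected = 'sha256=' +
    crypto.createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader || '');
  // timingSafeEqual throws on length mismatch, so check lengths first
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}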

Can't send custom statuses

So I've been trying to send status code 500 to my GitHub webhook all day, but when I checked, GitHub said it received code 200. Sure enough, when I peek at Smee's source code, 200 is hardcoded as the only response code.

Is it possible to make it so that users can send custom responses to the server? I love Smee because I don't have to change the hook URL every time I want to code, but with other options like ngrok I could easily do this. I would much appreciate this feature being added.

Run in detached mode

To avoid keeping the terminal open and to allow multiple paths to listen, I was wondering if you could implement a -d option to keep smee running and listening in the background, detached from the terminal, and also a way to check all running smee processes, as Docker does with -d and ps (where d = detach and ps = process status).

user@user:~$ smee -d
Forwarding https://smee.io/1PUc3ipJCAKIMTaR to http://127.0.0.1:3000/
Connected https://smee.io/1PUc3ipJCAKIMTaR
Detached

user@user:~$ smee ps
URL                                     PATH      PORT
https://smee.io/1PUc3ipJCAKIMTaR         /        3000

OR:

user@user:~$ smee -d -u https://smee.io/new -P /custom_path/ -p 8080
Forwarding https://smee.io/new to http://127.0.0.1:8080/custom_path
Connected https://smee.io/new
Detached

user@user:~$ smee ps
URL                         PATH                PORT
https://smee.io/new         /custom_path        8080

Docker build needs python

The Docker build of the current Dockerfile no longer works because python is required.
Adding RUN apk add --no-cache make gcc g++ python before RUN npm ci fixes the issue.

big requests lead to 413 PayloadTooLargeError

Hi,

A colleague pushed a change with many changed files/folders (87) to GitHub.

As the list of files is part of the webhook event, this resulted in a quite large webhook event (request body) being delivered to smee.io.

The webhook was regular JSON and had a size of 156484 bytes.

I think it would be good for stability to also support larger pushes and increase the limit to maybe 1 MB.

Workaround: Push a smaller commit afterwards, then only the changes of the newer commit are part of the event.
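For anyone self-hosting, the JSON body-parser limit is the relevant knob; Express's default is 100kb. A minimal sketch of raising it (illustrative, not the actual smee.io configuration):

// raise-limit-sketch.js - accept webhook bodies up to 1 MB instead of the 100kb default
const express = require('express');
const app = express();

app.use(express.json({ limit: '1mb' }));

app.post('/:channel', (req, res) => res.sendStatus(200));
app.listen(3000);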

Regards,
Gernot

Patreon Webhook payload issue

When attempting to use the Patreon webhook with smee.io, the server ends up hanging while trying to parse the payload. I get no response from the server until I shut down the smee client; then the server spits out this error:
BadRequestError: request aborted
[server] at IncomingMessage.onAborted (...node_modules\raw-body\index.js:238:10)
[server] at IncomingMessage.emit (node:events:513:28)
[server] at IncomingMessage._destroy (node:_http_incoming:224:10)
[server] at _destroy (node:internal/streams/destroy:109:10)
[server] at IncomingMessage.destroy (node:internal/streams/destroy:71:5)
[server] at abortIncoming (node:_http_server:696:9)
[server] at socketOnClose (node:_http_server:690:3)
[server] at Socket.emit (node:events:525:35)
[server] at TCP.<anonymous> (node:net:757:14)

The exact same code works in production when getting the request directly from the Patreon webhook.
Also, the smee.io site is able to display the payload without issue; it just can't seem to forward it correctly to the target.

I created a super simple Express server to test and make sure there wasn't another setting on my server that was interfering, but got the same results:
import express from 'express'
import http from 'http'

const app = express()

app.post('/webhook/patreon', express.json(), (req, res) => {
  console.log('Received request')
  console.log(req.body)
  res.sendStatus(200)
})

const server = new http.Server(app)
const port = 3000
server.listen(port, () => {
  console.log('Listening on port', port)
})
I believe it has something to do with the payload being formatted differently from the original webhook request: removing the express.json() parsing does let the request go through, but the body is empty in that case, so something is having trouble parsing the payload correctly.
I would guess Patreon is doing something unusual with its payload that isn't being accounted for, since other webhooks don't seem to have this issue. I noticed a previous issue where the Content-Length was changed by stripping whitespace, causing a similar result, so perhaps it has something to do with that.

Missing licence

Hello Team.

I noticed that a licence is missing from this project. Could you please specify one?

local https with trusted certificate

Hello,

I am having an issue connecting to a local service running over SSL.

Forwarding https://smee.io/xxxx to https://localhost:44310/api/webhooks/incoming/github
Connected https://smee.io/xxxxx
Error: unable to verify the first certificate
    at TLSSocket.onConnectSecure (_tls_wrap.js:1475:34)
    at TLSSocket.emit (events.js:321:20)
    at TLSSocket._finishInit (_tls_wrap.js:918:8)
    at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:688:12) {
  code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE',
  response: undefined

I need to run this locally over SSL, as this service uses OAuth authentication with other services, which requires an SSL redirect.

Thanks a lot for the help. Roman

duplicate deliveries

I seem to be getting duplicate deliveries from a channel on smee.io. I have a simple test that repros it for me. See attached.

  1. get the zip, unzip, run npm install and update the channel URL to one of yours
  2. run node index.js
  3. Open your channel on smee.io
  4. Trigger events in GitHub. For me, I've been editing an issue comment.
  5. Observe that multiple (two in my case) entries are output on the console for each event triggered on GitHub.
  6. flip over to smee.io and redeliver the event. Notice that only one entry is output.

I don't see anything I'm doing wrong in my code, and AFAICT this is relatively recent behavior (whereas this code has been running for several months). I'm thinking it's smee because:

  • in the UI only one event is shown. That is, GitHub is only delivering once to smee
  • if I resend the event from the smee UI, I get just one in my app
  • if I do the action in the GitHub UI, I get two in my app and smee.io's UI correctly shows just one.

I've checked all my machines and do not appear to have any other clients running. Any next steps I can do for debugging this?

smeetest.zip

Multiple clients connecting to the same channel

Hi smee team,

I love this service and the idea behind it, thank you for your efforts! I found that if multiple clients happen to connect to the same channel ID, then all of those clients receive the payloads. Are there any concerns about this? Is there a feature available to lock down a channel ID once one has been generated, so that no two clients can connect at the same time?

I've attached a screenshot where I ran the client locally from my laptop against https://smee.io/conflicttest, then ran the client from a Docker container on a remote server, and then sent a test POST to see how the channel would behave.

Project does not work using a context path

The project uses absolute links, so using it with a context path is not possible. This is the case when the Docker image is deployed in Kubernetes and an ingress is used to set a context path.

You should use relative links instead of absolute.

Details on open connections

Does the connection die if I close smee.io, or do I have to uninstall the client to close the connection and avoid having my network exposed even when I'm not using it? In the docs I didn't find a close or stop command to stop the connection.

Allow linking to event

Feature Request

Is your feature request related to a problem? Please describe.
I would like to be able to share a link to a specific event. My specific use case is generating a link to the event in the Smee UI from code when the event is received, so the event payload can be easily inspected.

Describe the solution you'd like
Allow linking to event in smee ui, for example using query parameter after channel name: ?eventId=a73d1756-d5e5-4b9b-98a0-b078b0f8c033

Describe alternatives you've considered
I don't have a clear alternative for my specific use case. It is usually easy to find the latest event in the UI, but it's harder for old events (e.g. found via logs during further investigation) or when there are lots of events. Maybe the existing filtering syntax also allows finding an event by ID, but that would be much less convenient than being able to deep-link to a specific event.

Teachability, Documentation, Adoption, Migration Strategy
From a UI perspective, it would be nice if the UI added a link-sharing button (I'm not proposing an icon myself, just indicating the button location with text):
[screenshot]

smee blocks * origin for CORS preflight request

smee is set up to allow all CORS origins here:
https://github.com/probot/smee/blob/master/server/server.js#L75

but it rejects CORS preflight requests (cf CORS spec). Smee will therefore block servers or connections that expect the preflight request to pass before issuing further requests. To remedy, smee should respond with the correct headers to an OPTIONS method for preflight requests.

Example:

curl -i -X OPTIONS localhost:3000

currently gives

HTTP/1.1 200 OK
X-Powered-By: Express
Allow: GET,HEAD
Content-Type: text/html; charset=utf-8
Content-Length: 8
ETag: W/"8-ZRAf8oNBS3Bjb/SU2GYZCmbtmXg"
Date: Tue, 18 Dec 2018 22:42:29 GMT
Connection: keep-alive

GET,HEAD

The preflight response should include the headers

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET,HEAD,PUT,PATCH,POST,DELETE
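One way to do that in Express is a small middleware that sets the CORS headers and short-circuits OPTIONS requests; a sketch of the idea, not smee's actual code (the cors middleware package can achieve the same thing):

// preflight-sketch.js - answer OPTIONS preflights with permissive CORS headers
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Methods', 'GET,HEAD,PUT,PATCH,POST,DELETE');
  res.set('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') {
    return res.sendStatus(204); // answer the preflight without hitting other routes
  }
  next();
});

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);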

Request payload is different from the one sent, due to JSON parsing

When you hit smee with some json data, for example:

{
   "a": 2
}

smee parses the json and then reformats it when you get it out of the request:

data: {"host":"my-local-smee",...,"body":{"a":2},...}

This is unfortunate, as you don't get the exact same data back. In my case, I'm also receiving a shared-secret HMAC signature along with the payload, which no longer matches the re-serialized body. This means I can't test that my payload signature checking code works with the external service, and have to disable it, which is sad.
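On the receiving side, Express's json parser can at least preserve the exact bytes it was handed via its verify hook, so signature checks run against raw data rather than a re-serialized object. Note this does not undo Smee's own re-serialization, which is the root of this report; it is just the general pattern:

// raw-body-sketch.js - keep the original bytes alongside the parsed JSON body
const express = require('express');
const app = express();

app.use(express.json({
  verify: (req, res, buf) => {
    req.rawBody = buf; // Buffer with the exact bytes this server received
  }
}));

app.post('/webhook', (req, res) => {
  // req.body is the parsed object; req.rawBody is what any HMAC should be computed over
  res.sendStatus(200);
});

app.listen(3000);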

Dockerfile does not build

Using the instructions in the Dockerfile gives an error about webpack missing.

docker build --build-arg BUILD_VERSION=0.0.3 --build-arg BUILD_DATE=2019-05-19 -t probot/smee-io:latest .
Sending build context to Docker daemon  686.6kB
Step 1/23 : ARG BUILD_VERSION
Step 2/23 : ARG BUILD_DATE
Step 3/23 : ARG PORT=3000
Step 4/23 : FROM node:lts-alpine as build-env
lts-alpine: Pulling from library/node
c9b1b535fdd9: Pull complete
6ad7c27326e7: Pull complete
968d8b4948da: Pull complete
a1ec837a6331: Pull complete
Digest: sha256:bba77d0ca8820b43af898b3c50d4e8b68dc703ebbd958319af2f21f2d3c309f5
Status: Downloaded newer image for node:lts-alpine
 ---> 927d03058714
Step 5/23 : WORKDIR /source
 ---> Running in 10e71be9fbe3
Removing intermediate container 10e71be9fbe3
 ---> 4716a7e8bbe8
Step 6/23 : ADD . .
 ---> d32ee15aa2c2
Step 7/23 : RUN npm install --production --unsafe-perm
 ---> Running in b360d0043782

> [email protected] postinstall /source
> npm run build


> [email protected] build /source
> webpack -p

sh: webpack: not found
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! [email protected] build: `webpack -p`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2020-03-01T21_27_29_137Z-debug.log
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] postinstall: `npm run build`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2020-03-01T21_27_29_170Z-debug.log

Wrong Content-Length header when event has Content-Length header and its payload contains whitespace

I'm trying to use Smee.io to forward webhooks from Bitbucket Cloud to our Jenkins behind a firewall. Bitbucket sends webhook events with a Content-Length header, but the body it sends contains whitespace. The smee.io server correctly parses the incoming JSON, but then implicitly removes the whitespace without updating the Content-Length header when it forwards the event to the client. This causes Jenkins to fail to parse the request body, because the Content-Length header no longer matches the actual length of the content (see the sketch after the repro steps below).

To test:

  1. Create a fake endpoint for forwarded events, such as:
/** testserver.js */
const express = require('express');
const http = require('http');
const bodyParser = require('body-parser');
const app = express();

app.post('/', bodyParser.json(), (req, res) => {
    console.log("Received request");
    res.sendStatus(200);
});

const server = http.Server(app);
const port = 3000;
server.listen(port, () => {
    console.log('Listening on port', port);
});
  2. Run the fake endpoint with node testserver.js
  3. Run smee-client: smee.js --port 3000 (copy the channel ID from the output)
  4. curl -X POST --data '{ "ok": true }' -H "Content-Length: 14" -H "Content-Type: application/json" https://smee.io/[YOUR_CHANNEL_HERE]
  5. ERROR: The fake server never manages to parse the JSON because the content length doesn't match
  6. If you try curl -X POST --data '{"ok":true}' -H "Content-Length: 11" -H "Content-Type: application/json" https://smee.io/[YOUR_CHANNEL_HERE], things work as expected
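The size change is easy to demonstrate, which is why a proxy that re-serializes a parsed body must also recompute the header; a small illustration (not smee's code):

// content-length-sketch.js - re-serializing JSON changes the byte count
const original = '{ "ok": true }';                          // 14 bytes, as sent by the source
const reserialized = JSON.stringify(JSON.parse(original));  // '{"ok":true}', 11 bytes

console.log(Buffer.byteLength(original));      // 14
console.log(Buffer.byteLength(reserialized));  // 11

// A forwarder sending `reserialized` must set Content-Length to
// Buffer.byteLength(reserialized), not pass the original header value through.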

Download payloads?

I know I have the option to copy the body of each received message, but is there any way to download the full request? I'd like to be able to set up the services that eventually deliver webhooks to my application, poke around in the UI those services are connected to, then commit each webhook (headers and all) into my test suite so I can play them back in every test without needing to go through any third-party service.
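In the meantime, one workaround is to point the smee client at a tiny capture server that writes each delivery, headers and body, to disk for later replay. A sketch under those assumptions (file names and port are arbitrary):

// capture.js - record every forwarded webhook (headers + body) to a JSON file
const express = require('express');
const fs = require('fs');
const app = express();

app.use(express.raw({ type: '*/*' })); // keep the body as raw bytes, whatever the content type

app.post('/', (req, res) => {
  const record = {
    receivedAt: new Date().toISOString(),
    headers: req.headers,
    body: Buffer.isBuffer(req.body) ? req.body.toString('utf8') : ''
  };
  fs.writeFileSync(`delivery-${Date.now()}.json`, JSON.stringify(record, null, 2));
  res.sendStatus(200);
});

app.listen(3000, () => console.log('Capturing on port 3000'));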

Client in Dotnet

Hi @JasonEtco or any other maintainer here,

I don't really know how to contact you otherwise, but I've created a small client in Dotnet, available at https://github.com/Nordes/Smee.IO.Client. If it interests you, let me know.

It's not 100% perfect, but it works for what I actually need. Note that it does not forward what it receives to somewhere else. It's ready to become a NuGet package so it can be included in your own project(s). I am building a small bot for Slack, and since I am more or less behind proxies or a firewall, Smee.io is pretty much the only way to redirect without any hassle.

For example: Slack bot |> Smee.io |> MyServer does some reporting + images |> when ready, send the report/images to the Slack API. (I am currently writing a small blog article on that kind of flow.)

I will probably publish a Nuget package later once I complete the CI/CD setup.
