
terminus's Introduction

terminus

Adds graceful shutdown and Kubernetes readiness / liveness checks to any HTTP application.

Installation

Install via npm:

npm i @godaddy/terminus --save

Usage

const http = require('http');
const { createTerminus } = require('@godaddy/terminus');

function onSignal () {
  console.log('server is starting cleanup');
  return Promise.all([
    // your cleanup logic, like closing database connections
  ]);
}

function onShutdown () {
  console.log('cleanup finished, server is shutting down');
}

function healthCheck ({ state }) {
  // `state.isShuttingDown` (boolean) shows whether the server is shutting down or not
  return Promise.resolve(
    // optionally include a resolve value to be included as
    // info in the health check response
  )
}

const server = http.createServer((request, response) => {
  response.end(
    `<html>
      <body>
        <h1>Hello, World!</h1>
       </body>
     </html>`
   );
})

const options = {
  // health check options
  healthChecks: {
    '/healthcheck': healthCheck,    // a function accepting a state and returning a promise indicating service health,
    verbatim: true,                 // [optional = false] use object returned from /healthcheck verbatim in response,
    __unsafeExposeStackTraces: true // [optional = false] return stack traces in error response if healthchecks throw errors
  },
  caseInsensitive,                  // [optional] whether given health check routes are case insensitive (defaults to false)

  statusOk,                         // [optional = 200] status to be returned for successful healthchecks
  statusOkResponse,                 // [optional = { status: 'ok' }] status response to be returned for successful healthchecks
  statusError,                      // [optional = 503] status to be returned for unsuccessful healthchecks
  statusErrorResponse,              // [optional = { status: 'error' }] status response to be returned for unsuccessful healthchecks

  // cleanup options
  timeout: 1000,                    // [optional = 1000] number of milliseconds before forceful exiting
  signal,                           // [optional = 'SIGTERM'] what signal to listen for relative to shutdown
  signals,                          // [optional = []] array of signals to listen for relative to shutdown
  useExit0,                         // [optional = false] instead of re-sending the received signal uncaught, the process will exit(0)
  sendFailuresDuringShutdown,       // [optional = true] whether or not to send failure (503) during shutdown
  beforeShutdown,                   // [optional] called before the HTTP server starts its shutdown
  onSignal,                         // [optional] cleanup function, returning a promise (used to be onSigterm)
  onShutdown,                       // [optional] called right before exiting
  onSendFailureDuringShutdown,      // [optional] called before sending each 503 during shutdowns

  // both
  logger                            // [optional] logger function to be called with errors. Example logger call: ('error happened during shutdown', error). See terminus.js for more details.
};

createTerminus(server, options);

server.listen(process.env.PORT || 3000);

With custom error messages

const http = require('http');
const { createTerminus, HealthCheckError } = require('@godaddy/terminus');

createTerminus(server, {
  healthChecks: {
    '/healthcheck': async function () {
      const errors = []
      return Promise.all([
    // all your health checks go here
      ].map(p => p.catch((error) => {
        // silently collecting all the errors
        errors.push(error)
        return undefined
      }))).then(() => {
        if (errors.length) {
          throw new HealthCheckError('healthcheck failed', errors)
        }
      })
    }
  }
});

With custom headers

const http = require("http");
const express = require("express");
const { createTerminus, HealthCheckError } = require('@godaddy/terminus');
const app = express();

app.get("/", (req, res) => {
  res.send("ok");
});

const server = http.createServer(app);

function healthCheck({ state }) {
  return Promise.resolve();
}

const options = {
  healthChecks: {
    "/healthcheck": healthCheck,
    verbatim: true,
    __unsafeExposeStackTraces: true,
  },
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "OPTIONS, POST, GET",
  },
};

createTerminus(server, options);

server.listen(3000);

With express

const http = require('http');
const express = require('express');
const { createTerminus } = require('@godaddy/terminus');
const app = express();

app.get('/', (req, res) => {
  res.send('ok');
});

const server = http.createServer(app);

const options = {
  // opts
};

createTerminus(server, options);

server.listen(process.env.PORT || 3000);

With koa

const http = require('http');
const Koa = require('koa');
const { createTerminus } = require('@godaddy/terminus');
const app = new Koa();

const server = http.createServer(app.callback());

const options = {
  // opts
};

createTerminus(server, options);

server.listen(process.env.PORT || 3000);

With cluster (and eg. express)

If you want to use cluster (https://nodejs.org/api/cluster.html) to make use of more than one CPU, you need to set up terminus per worker. This is heavily inspired by https://medium.com/@gaurav.lahoti/graceful-shutdown-of-node-js-workers-dd58bbff9e30.

See example/express.cluster.js.

How to set Terminus up with Kubernetes?

When Kubernetes or a user deletes a Pod, Kubernetes will notify it and wait for gracePeriod seconds before killing it.

During that time window (30 seconds by default), the Pod is in the terminating state and will be removed from any Services by a controller. The Pod itself needs to catch the SIGTERM signal and start failing any readiness probes.

If the ingress controller you use routes via the Service, this is not an issue for your case. At the time of this writing, we use the nginx ingress controller, which routes traffic directly to the Pods.

During this time, it is possible that load-balancers (like the nginx ingress controller) don't remove the Pods "in time", and when the Pod dies, it kills live connections.

To make sure you don't lose any connections, we recommend delaying the shutdown by the number of milliseconds defined by the readiness probe in your deployment configuration. To help with this, terminus exposes an option called beforeShutdown that takes any Promise-returning function.

It also makes sense to use the useExit0 = true option to signal Kubernetes that the container exited gracefully; otherwise, APMs may send you alerts in some cases.

function beforeShutdown () {
  // given your readiness probes run every 5 seconds,
  // it may be worth using a bigger number so you won't
  // run into any race conditions
  return new Promise(resolve => {
    setTimeout(resolve, 5000)
  })
}
createTerminus(server, {
  beforeShutdown,
  useExit0: true
})

Learn more

Limited Windows support

Due to inherent platform limitations, terminus has limited support for Windows. You can expect SIGINT to work, as well as SIGBREAK and, to some extent, SIGHUP. However, SIGTERM will never work on Windows because killing a process from the task manager is unconditional, i.e., there's no way for an application to detect or prevent it. Here is some relevant libuv documentation on what SIGINT, SIGBREAK etc. signify and what's supported on Windows: http://docs.libuv.org/en/v1.x/signal.html. Also see https://nodejs.org/api/process.html#process_signal_events.
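
Given those constraints, a terminus options fragment for a Windows deployment might listen only for the signals that can actually be delivered there (the exact signal set below is an assumption; pick whatever your workflow needs):

```javascript
// SIGTERM never fires on Windows, so listen for the signals that do.
const options = {
  signals: ['SIGINT', 'SIGBREAK', 'SIGHUP'],
  // ...the rest of your terminus options
};
```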

terminus's People

Contributors

0xcafeadd1c7, aeimer, agerard-godaddy, antoooooooooooonie, bmeck, cailloumajor, charlyzzz, chriswiggins, decompil3d, dependabot[bot], disintegrator, gaillota, gdoron, gergelyke, greenkeeper[bot], itaditya, jacob-hd, jacopodaeli, jeffloo-ong, jgowdy-godaddy, lin826, lukaszewczak, mushketyk, rexxars, rxmarbles, ryuheechul, snyk-bot, valeriangalliat, vernak2539, zaninime

terminus's Issues

Missing background info and k8s configs for examples

I think a bit of background info about what actually happens on SIGTERM, and maybe some example k8s yaml files to get people going, would be great (readinessProbe, livenessProbe and terminationGracePeriodSeconds). Your blog post explains things a bit more in depth; without that background, things get confusing :)

Maybe even an example of it working in action with minikube would be great. How can you confirm that everything is working correctly?

Either way, great module. I replaced our custom code (which in the end did the same thing, but with more clutter). We added a const serverListenAsync = promisify(server.listen.bind(server)) so server startup steps can be nicely chained (i.e., db connection, redis, etc.).

Does godaddy use terminus in production? Maybe there are some tips and tricks you can share.

Use parameter destructuring with defaults for concise code

Consider a snippet of code taken from the module

function terminus(server, options = {}) {
  const signal = options.signal || 'SIGTERM';
  const timeout = options.timeout || 1000;
  // other stuff
}

Here the second argument of the terminus function is not destructured; instead, the values are unpacked in the first lines of the function body. This can definitely be shortened. Another thing: during destructuring we can supply default values right there, which has not been done here. If both of these facilities are used, the code looks much more concise. Check this:

function terminus(server, { signal='SIGTERM', timeout=1000 } = {}) {
  //other stuff
}

If you like this, I can make a PR.
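
A quick runnable check of the proposed pattern (the returned object is just a stand-in for the real function body):

```javascript
// Defaults from the destructuring pattern kick in per missing key.
function terminus(server, { signal = 'SIGTERM', timeout = 1000 } = {}) {
  return { signal, timeout }; // stand-in body, for illustration only
}

console.log(terminus(null));                    // { signal: 'SIGTERM', timeout: 1000 }
console.log(terminus(null, { timeout: 5000 })); // { signal: 'SIGTERM', timeout: 5000 }
```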

healthcheck responses

Is there any particular reason that healthcheck success responses repeat info in both the info and details fields here, and similarly for failure responses here?

On a related note, is the format of {info, details} and {error, details} some kind of de-facto standard from somewhere?

Generally wondering how you might feel about just using the object returned from the healthcheck verbatim instead of the current formatting, or perhaps allowing such behavior via some kind of opt-in?

Perhaps something closer to the following:

async function sendSuccess (res, options) {
  res.statusCode = 200
  res.setHeader('Content-Type', 'application/json')
  if (options) {
    return res.end(JSON.stringify({
      status: 'ok',
      ...options
    }))
  }
  res.end(SUCCESS_RESPONSE)
}

Doesn't work with PM2

Could this possibly get working with PM2?

From this post it states that once the hypervisor sends a SIGTERM, this module will facilitate graceful shutdown. PM2 seems to send a SIGINT instead (docs). Could this be added, or is it outside the scope of the project?

Would be happy to give it a go if the scope allows it
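
Based on the signals option shown in the README above, PM2's behavior could presumably be covered without any code change in terminus itself:

```javascript
// PM2 stops processes with SIGINT; include it alongside SIGTERM so
// both PM2 and orchestrator-driven stops trigger the same cleanup.
const options = {
  signals: ['SIGINT', 'SIGTERM'],
  // ...the rest of your terminus options
};
```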

"Detail" information of a health check should not change its attribute depending on the status

Current behavior

At the moment, when a health check fails, the response could look something like this:

{"status": "error", "error": {"db": { "status": "down" } } }

Whereas when the status is ok, the detailed information of the health check moves to the info attribute:

{"status": "ok", "info":  { "db": { "status": "down" } } }

Expected behavior

The detailed information should be accessible under the same attribute in both cases. Therefore I suggest a response where the detailed information is accessible under the detail attribute:

// Error
{"status": "error", "detail": {"db": { "status": "down" } } }
// Ok
{"status": "ok", "detail":  { "db": { "status": "down" } } }

cc @weeco

Version 10 of node.js has been released

Version 10 of Node.js (code name Dubnium) has been released! 🎊

To see what happens to your code in Node.js 10, Greenkeeper has created a branch with the following changes:

  • Added the new Node.js version to your .travis.yml

If you’re interested in upgrading this repo to Node.js 10, you can open a PR with these changes. Please note that this issue is just intended as a friendly reminder and the PR as a possible starting point for getting your code running on Node.js 10.

More information on this issue

Greenkeeper has checked the engines key in any package.json file, the .nvmrc file, and the .travis.yml file, if present.

  • engines was only updated if it defined a single version, not a range.
  • .nvmrc was updated to Node.js 10
  • .travis.yml was only changed if there was a root-level node_js that didn’t already include Node.js 10, such as node or lts/*. In this case, the new version was appended to the list. We didn’t touch job or matrix configurations because these tend to be quite specific and complex, and it’s difficult to infer what the intentions were.

For many simpler .travis.yml configurations, this PR should suffice as-is, but depending on what you’re doing it may require additional work or may not be applicable at all. We’re also aware that you may have good reasons to not update to Node.js 10, which is why this was sent as an issue and not a pull request. Feel free to delete it without comment, I’m a humble robot and won’t feel rejected 🤖


FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Do not send 503 on shutdown

According to https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-liveness-or-readiness-probes it is not necessary to send 503 on shutdown.

Note that if you just want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. The Pod remains in the unready state while it waits for the Containers in the Pod to stop.

See also https://freecontent.manning.com/handling-client-requests-properly-with-kubernetes/

I think it should be avoided, in order to have more meaningful event logs for the pods. Otherwise each shutdown generates logs of failed readiness requests.
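
The sendFailuresDuringShutdown option listed in the README above appears to address exactly this; a fragment might look like:

```javascript
// Suppress the 503 responses terminus would otherwise send to health
// checks during shutdown, keeping Pod event logs free of expected failures.
const options = {
  sendFailuresDuringShutdown: false,
  healthChecks: {
    '/healthcheck': () => Promise.resolve()
  }
};
```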

onSignal is not triggered when a puppeteer browser is running

Hi, it looks like there is an issue with terminus and puppeteer: onSignal is not triggered when a puppeteer browser is running.

There are two cases, and only the first one works. The only difference between the two cases is that a puppeteer browser is still open in the second case.

I prepared the following instructions to reproduce the bug.

Once Dockerfile and index.js are created, you can build the docker image with docker build -t terminus-issue-103 . (note the trailing dot: the build context)

Run the first case with docker run --rm -e CASE=1 terminus-issue-103

$ docker run --rm -e CASE=1 terminus-issue-103
Puppeteer browser launched: HeadlessChrome/73.0.3683.103
Case 1: We do not let puppeteer browser running
Stop it to continue the test (like a ctrl+c in the terminal)
^C
Shutdown requested
Shutdown requested: server is starting cleanup
Successfully stopped
$

Run the second case with docker run --rm -e CASE=2 terminus-issue-103

$ docker run --rm -e CASE=2 terminus-issue-103
Puppeteer browser launched: HeadlessChrome/73.0.3683.103
Stop it to continue the test (like a ctrl+c in the terminal)
^C$

The process is stopped without triggering the terminus functions :(

Dockerfile:

FROM zenika/alpine-chrome:with-puppeteer

# Clean image
WORKDIR /usr/src/app
RUN rm -rf src .dockerignore .gitignore Dockerfile package.json package-lock.json

RUN yarn add @godaddy/[email protected] [email protected] [email protected]

COPY index.js index.js

CMD ["node", "index.js"]

index.js:

const express = require('express')
const terminus = require('@godaddy/terminus')
const puppeteer = require('puppeteer')
const sleep = (ms) => { return new Promise(resolve => setTimeout(resolve, ms)) }

;(async () => {

  const app = express()
  const server = app.listen(9988)

  terminus.createTerminus(server, {
    // health check options
    healthChecks: {},
    // cleanup options
    // timeout: 1000, // [optional = 1000] number of milliseconds before forceful exiting
    // signal, // [optional = 'SIGTERM'] what signal to listen for relative to shutdown
    signals: ['SIGTERM', 'SIGINT'], // [optional = []] array of signals to listen for relative to shutdown
    beforeShutdown, // [optional] called before the HTTP server starts its shutdown
    onSignal, // [optional] cleanup function, returning a promise (used to be onSigterm)
    onShutdown, // [optional] called right before exiting
    onSendFailureDuringShutdown, // [optional] called before sending each 503 during shutdowns

    // both
    logger: console.error // [optional] logger function to be called with errors
  })

  const browser = await puppeteer.launch({ args: [ '--no-sandbox' ] })
  console.log(`Puppeteer browser launched: ${await browser.version()}`)

  if (process.env.CASE === '1') {
    console.log('Case 1: We do not let puppeteer browser running')
    await browser.close()
  }

  console.log('Stop it to continue the test (like a ctrl+c in the terminal)')

  async function beforeShutdown() {
    await sleep(500)
    console.log('\nShutdown requested')
  }

  async function onSignal() {
    await sleep(500)
    console.log('Shutdown requested: server is starting cleanup')
    await browser.close()
  }

  async function onShutdown() {
    await sleep(500)
    console.log(`Successfully stopped`)
  }

  async function onSendFailureDuringShutdown() {
    console.error('onSendFailureDuringShutdown')
  }

})()

"example" package flagged for CVE-2019-17426

Line ref:

"version": "1.0.0",

Scan result:
Severity | moderate

Package | mongoose

CVE | CVE-2019-17426

Fix Status | fixed in 5.7.5

Description |
Impacted versions: <5.7.5
Discovered: 12 days ago
Published: >12 months ago
Automattic Mongoose through 5.7.4 allows attackers to bypass access control (in some applications) because any query object with a _bsontype attribute is ignored. For example, adding "_bsontype":"a" can sometimes interfere with a query filter. NOTE: this CVE is about Mongoose's failure to work around this _bsontype special case that exists in older versions of the bson parser (aka the mongodb/js-bson project).

~~
Background:
We include the terminus package in some of our images; when scanned during the CI process, this image is flagged for including another package with a known CVE, which has fixes available.

Ideally, not including pinned packages in the example directory would fix this issue.

Upgrading it to a version that is not vulnerable would also suffice.

Export interfaces from typings

Hi everyone,

I am currently working on #70: nest-terminus. Good work with this repo, I like it!
I have a problem; I am exporting a terminus instance:

terminus-lib.provider.ts

import { TERMINUS_LIB } from './terminus.constants';
import * as terminus from '@godaddy/terminus';
import { TerminusOptions } from './interfaces/terminus-options';

export type TerminusLib = typeof terminus;

// Create a wrapper so it is injectable & easier to test
export const TerminusLibProvider = {
  provide: TERMINUS_LIB,
  useValue: terminus,
};

This small file is used so I can inject the terminus instance in Nest using the DI system. The problem is that TSC throws an error:

Exported variable 'TerminusLibProvider' has or is using name 'TerminusOptions' from external module "@godaddy/terminus" but cannot be named.

The problem here is that my export uses TerminusOptions without importing it. It is a strange error, I know, but it could be fixed if the typings were updated and the TerminusOptions interface were exported too (I tried it on my system and it worked)

Would you have a problem with exposing this interface?

Securing healthcheck endpoints

Hi,

Is there any way we could secure our health check endpoints, e.g. with Basic Auth, OAuth 2, etc.?
This could help the application withstand DDoS attacks on open health check endpoints if they directly check DB state, external services, etc.

If it is not present, is it there in the roadmap?

Thanks,

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Invalid npm token.

The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.

If you are using Two-Factor Authentication, make sure the auth-only level is configured; semantic-release cannot publish with the default auth-and-writes level.

Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.


Good luck with your project ✨

Your semantic-release bot 📦🚀

Listen for multiple signals

Greetings.

I have recently started using this library and I'm liking it very much. It makes the application a whole lot cleaner.

I think that a useful enhancement would be the ability to listen for multiple signals and trigger the cleanup mechanisms on any of the signals specified.

I make this suggestion because of my recent experience, which was the following:
In a production setting, I agree that listening just for SIGTERM is probably sufficient.
However during development, I just kill my application using control + c which sends SIGINT.
Now to cater for this was simple enough. Based on an environment variable, at start up, I am able to give Terminus the appropriate signal to listen for (SIGINT or SIGTERM).
Next I added Nodemon to my application, for use during development only. For those unfamiliar with Nodemon, it simply restarts your application when it detects a file change. Nodemon achieves this by sending a SIGUSR2. (their docs on this).
With this setup, the application needs to listen for at least SIGINT and SIGUSR2 in order for Nodemon to allow the application to cleanup properly before restarting it, and for the same to happen when control + c is used to kill the application.

I think that the above scenario is a pretty common setup used during development and it would be nice to be able to cater for similar scenarios without jumping through too many hoops (Eg. based on environment variables, do something "special" to make it all work). I also can't think of any downsides for the application cleanup logic to be triggered on multiple signals.

So if this is deemed to be a good idea, I am happy to attempt an implementation of this.

Thank you.
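
With the signals option terminus now documents in the README above, the scenario described here reduces to a single array (the exact set below is an assumption for a nodemon workflow):

```javascript
// Ctrl+C in development (SIGINT), orchestrator stops (SIGTERM),
// and nodemon restarts (SIGUSR2) all trigger the same cleanup.
const options = {
  signals: ['SIGINT', 'SIGTERM', 'SIGUSR2'],
  // ...the rest of your terminus options
};
```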

TypeScript support

I'm working on a TypeScript framework and this module would be a great addition but I need TypeScript support.

The required changes are:

  • Create a file /typings/index.d.ts (definitions)
  • Create a file /typings/index-test.ts (definitions tests)
  • Add a field to the package.json pointing to the /typings/index.d.ts file.
  • The /typings/index.d.ts file will require an update If you change the public interface of the module.

I'm happy to send a PR to add TypeScript support. Would you accept such PR?

Do not process.exit on signals

A process that reacts to a signal (and chooses not to ignore it, but to exit because of it) should not simply exit; instead it should send that signal to itself (after, of course, resetting the signal handler), so that calling programs (or other processes) can determine the cause of the exit correctly. See https://www.cons.org/cracauer/sigint.html, which talks about this in the context of SIGTERM specifically, but it applies to all signals, and also more generally WIFSIGNALED and WTERMSIG, https://www.gnu.org/software/libc/manual/html_node/Process-Completion-Status.html#Process-Completion-Status.

So, for example, instead of calling process.exit(0); in cleanup, do process.removeListener(…) (that's how I assume it works in Node), and then process.kill(process.pid, 'SIGTERM'); to kill the current process with the same signal that was received (so that the default process handling kicks in).

It's easy to test this on the shell: run a terminus example in the foreground, then kill $terminuspid from another shell. After it has quit, running echo $? in the first shell should display 128+$signalcode, where "$signalcode" is the number of the signal, in this case 15 (SIGTERM, the kill default), for a total exit status of 143.

Similarly, when running a program and then Ctrl+C-ing out, the exit status should not be 0, but instead 130: 128 plus 2 (the SIGINT code).

Note that process.exit(128+signal) is not a solution. It's just bash that displays the exit condition as 128+signal. Programs must send the kill command to self.

An example of a Unix program doing it correctly, demonstrated with SIGINT for simplicity:

$ sleep 2
^C
$ echo $?
130
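
In Node, the pattern described above can be sketched as follows (installGracefulHandler is a hypothetical helper name, not part of terminus):

```javascript
// On the given signal: run cleanup, restore default signal handling,
// then re-send the signal to ourselves so the exit status correctly
// reports death-by-signal (e.g. 143 for SIGTERM, 130 for SIGINT).
function installGracefulHandler(signal, cleanup) {
  const handler = async () => {
    await cleanup();
    process.removeListener(signal, handler); // restore default handling
    process.kill(process.pid, signal);       // re-raise the same signal
  };
  process.on(signal, handler);
  return handler; // returned so callers can detach it if needed
}
```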

License?

I have noticed that the current license is ISC. Would you consider the MIT license? I won't be able to use this module if it is not under the MIT license 😢

Delay needed before calling asyncServerStop()?

I tried to use terminus for some projects at work but during an application update some requests were getting dropped.

I was able to fix this by adding a delay before calling asyncServerStop().

Have you guys seen this as well?

I've created https://github.com/alanchristensen/kubeplayground to help you reproduce what I'm seeing.

Once you got the app running, throw some load at it using wrk:

wrk -d 2m http://localhost:8080/ -H 'Connection: Close'

and then run the update script to rebuild the image and deploy it.

In wrk I then see:

Running 2m test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   206.54ms    6.36ms 270.10ms   91.07%
    Req/Sec    23.85      9.68    50.00     64.11%
  1378 requests in 2.00m, 199.16KB read
  Socket errors: connect 0, read 14, write 0, timeout 0
Requests/sec:     11.47
Transfer/sec:      1.66KB

The 14 read errors are requests that get dropped. I think what's going on is that Kubernetes will continue to send traffic to a container for a short bit even after it sends the container SIGTERM.
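
terminus's beforeShutdown option (shown earlier in this README) targets exactly this window: holding the shutdown briefly gives Kubernetes time to take the Pod out of rotation before the listener closes. A minimal sketch, where the 10-second figure is an assumption to tune against your probe settings:

```javascript
// Hold the shutdown for a bit so load balancers stop routing to us
// before the server actually closes its listener.
function beforeShutdown() {
  return new Promise((resolve) => setTimeout(resolve, 10000));
}

const options = { beforeShutdown };
```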

Advertising package

It looks like this project depends on https://github.com/feross/funding, a project intended to show advertising in a projects build log.

I'm not familiar with your project, I'm just looking through a list of projects that depend on feross/funding. It's my understanding that right now there aren't any advertising messages actually showing, but if you're uncomfortable with the possibility of advertisements in build logs that's something you might want to look into.

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


The push permission to the Git repository is required.

semantic-release cannot push the version tag to the branch master on remote Git repository with URL https://github.com/godaddy/terminus.git.

Please refer to the authentication configuration documentation to configure the Git credentials on your CI environment and make sure the repositoryUrl is configured with a valid Git URL.


Good luck with your project ✨

Your semantic-release bot 📦🚀

Timeout not working

The timeout option of the library seems not to be working. Here is a simple example:

const http = require('http');
const { createTerminus } = require('@godaddy/terminus');

function onSignal () {
  console.log('server is starting cleanup');
}

function onShutdown () {
  console.log('cleanup finished, server is shutting down');
}

const server = http.createServer((request, response) => {
  response.end(
    `<html>
      <body>
        <h1>Hello, World!</h1>
       </body>
     </html>`
   );
})

const options = {
  timeout: 30000,
  onSignal: onSignal,                       
  onShutdown: onShutdown,          
};

createTerminus(server,options);

server.listen(8080);

From my understanding of the documentation, the timeout option should delay the shutdown, but if I run:
node app.js
and then kill -TERM <PID>
the logs show:

node app.js
server is starting cleanup
cleanup finished, server is shutting down
Terminated: 15

and there is no "sleep" of 30s.
Can someone please advise?

Request to release 4.4.2

Hey folks, I see that you tagged a new release version to resolve a vulnerability flag. We are encountering the same vulnerability; any chance you can release 4.4.2 to the registry? Much appreciated.

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


The push permission to the Git repository is required.

semantic-release cannot push the version tag to the branch master on remote Git repository with URL https://github.com/godaddy/terminus.git.

Please refer to the authentication configuration documentation to configure the Git credentials on your CI environment and make sure the repositoryUrl is configured with a valid Git URL.


Good luck with your project ✨

Your semantic-release bot 📦🚀

stoppable incorrect usage

It's more of a suggestion; I haven't figured out how to make a PR (if this repo is even open for PRs). Anyway:

  1. stoppable is a constructor that returns the server; it's not a function that will immediately stop the server: https://github.com/hunterloftis/stoppable/blob/master/lib/stoppable.js#L20

  2. the result needs to be assigned to a variable (the name below is just an example) and then used to stop the server.

  3. no need for any binding in this case.

const stoppable = require('stoppable')
const { promisify } = require('util')

const stoppableServer = stoppable(server, timeout)
const asyncServerStop = promisify(stoppableServer.stop)

An in-range update of semantic-release is breaking the build 🚨

Version 15.9.15 of semantic-release was just published.

Branch Build failing 🚨
Dependency semantic-release
Current Version 15.9.14
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

semantic-release is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • ❌ continuous-integration/travis-ci/push: The Travis CI build could not complete due to an error (Details).

Release Notes v15.9.15

15.9.15 (2018-09-11)

Bug Fixes

  • package: update debug to version 4.0.0 (7b8cd99)
Commits

The new version differs by 1 commit.

  • 7b8cd99 fix(package): update debug to version 4.0.0

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Return more info in the healthcheck endpoint

I was wondering if it'd be a good idea to allow the promise-returning health check function to resolve with an object that could contain more info about the application, to be included in the SUCCESS_RESPONSE.

Some of this info could include things like application version etc.

I understand that the info could be made available under a new endpoint, but I think it could make sense to include in the healthcheck one as well.

Please let me know what your thoughts are on this. If this seems reasonable, I'm more than happy to create a PR for it.
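To make the proposal concrete, a minimal sketch of what such a check could resolve (the field names here are illustrative, not an existing terminus API):

```javascript
// Sketch: a health check resolving an info object. Under this proposal,
// terminus would include the resolved value in the success response,
// e.g. { status: 'ok', version: ..., uptime: ... }.
// The fields below are illustrative only.
async function onHealthCheck () {
  return {
    version: process.env.npm_package_version || 'unknown', // app version, if exposed
    uptime: process.uptime()                               // seconds since process start
  };
}

onHealthCheck().then((info) => console.log(info));
```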

4 Tests failing on Windows

(screenshot of the test run with 4 failing tests)

I did read the Limited Windows support note, but it mentioned that only SIGTERM doesn't work. Are all the failing tests dependent on SIGTERM? If so, I think I'll need to run terminus in a Docker environment.

Add precommit githooks for consistent contributions

When contributors commit changes, they may forget to format or lint their code, and their PR may get rejected as a result. To avoid this and allow faster, smoother contributions, you should integrate pre-commit Git hooks into the project. That way, certain commands will run whenever someone makes a commit. I would love to make a PR for this.

Express server shutting down immediately on 'SIGINT' signal in MacOS or Linux, onSignal callback not called

Hi team,
I implemented terminus in my app as described in the docs.
It works fine on Windows: the onSignal callback is invoked and all my connections are shut down gracefully on SIGINT.
On Linux and macOS, however, when I fire a SIGINT from the terminal, the application terminates immediately and the onSignal callback is never invoked for a graceful shutdown of connections.

Please find below the config code I created.

import { logger } from './pino.config';
import mongoose from './db.config';
import redisClient from './redis.config';

const logError = (from, code, error) => logger.error({ from, code, error });
const logInfo = connectionName => logger.info({ message: `Connection with ${connectionName} shutdown gracefully!!!` });

const timeout = 1000;
const signals = ['SIGINT', 'SIGTERM', 'SIGHUP'];

const mongoHealthCheck = () => {
    const { readyState } = mongoose.connection;
    if (readyState === 0 || readyState === 3) {
        return 'ERR_CONNECTING_TO_MONGO';
    }
    if (readyState === 2) {
        return 'CONNECTING_TO_MONGO';
    }
    return 'CONNECTED_TO_MONGO';
};

const redisHealthCheck = () => (redisClient.connected ? 'CONNECTED_TO_REDIS' : 'ERR_CONNECTING_TO_REDIS');

const onHealthCheck = async () => {
    const mongoHealth = mongoHealthCheck();
    const redisHealth = redisHealthCheck();
    return {
        mongoHealth,
        redisHealth,
    };
};

const shutdownRedis = connectionName => redisClient.quit()
    .then(() => logInfo(connectionName))
    .catch(err => logError('terminus.config.shutdownRedis', 'ERR_DISCONNECTING_FROM_REDIS', err));

const shutdownMongoose = connectionName => mongoose.connection.close(false)
    .then(() => logInfo(connectionName))
    .catch(err => logError('terminus.config.shutdownMongoose', 'ERR_DISCONNECTING_FROM_MONGO', err));

const onSignal = () => Promise.all([shutdownRedis('Redis'), shutdownMongoose('Mongo')])
    .then(() => logger.info({ message: 'All services gracefully shutdown' }))
    .catch(err => logError('terminus.config.onSignal', 'ERR_DISCONNECTING_SERVICES', err));

const beforeShutdown = () => new Promise((resolve) => {
    setTimeout(resolve, 5000);
});

const healthChecks = {
    '/healthcheck': onHealthCheck,
};

const terminusConfig = {
    timeout,
    healthChecks,
    signals,
    beforeShutdown,
    onSignal,
};

export default terminusConfig;

Can anyone help me with this?
Thanks & regards,
Siddharth

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Invalid npm token.

The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.

If you are using Two-Factor Authentication, make sure the auth-only level is configured. semantic-release cannot publish with the default auth-and-writes level.

Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.


Good luck with your project ✨

Your semantic-release bot 📦🚀

beforeShutdown in example should return new Promise

Should be:

function beforeShutdown () {
  // given your readiness probes run every 5 seconds,
  // it may be worth using a bigger number so you won't
  // run into any race conditions
  return new Promise(resolve => {
    setTimeout(resolve, 5000)
  })
}
terminus(server, {
  beforeShutdown
})

or ES6:

const beforeShutdown = () => new Promise(resolve => setTimeout(resolve, 15000));

[Question] readiness checks

After configuring terminus for our needs and verifying with Postman that the basic health check works, I realized I did not understand how or where I am supposed to implement a basic readiness probe.
Is it supposed to be another route in the healthChecks object, or is that implemented for me?
I just wanted some clarification and examples on implementing a readiness route.
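Not a maintainer, but as I read the README, each key in the healthChecks object becomes its own route, so a readiness probe is just a second entry. A sketch (the route names and the dependency check are assumptions, mirroring the usual Kubernetes liveness/readiness split):

```javascript
// Assumed pattern: one entry per probe in the healthChecks map,
// passed to createTerminus(server, { healthChecks, ... }).
const healthChecks = {
  // liveness: the process is up and the event loop responds
  '/live': () => Promise.resolve(),
  // readiness: downstream dependencies are reachable
  '/ready': async () => {
    // e.g. await db.ping() — hypothetical dependency check;
    // rejecting here makes terminus answer with a 503
  }
};
```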

Get the health status without using a HTTP request

Issue

I would like to read the current health status without having to make an HTTP request.
The health status should therefore be readable programmatically through the public terminus API.

Something like this:

const { getHealthStatus } = require('@godaddy/terminus');

// Returns { status: 'ok' | 'error', info: {...} }
getHealthStatus('/health');

Implementation

Looking through your code, I realized the core (terminus.js) would have to be restructured to do this correctly.

When I tried to refactor the method decorateWithHealthCheck in terminus.js, which contains the desired functionality, I realized everything is passed around via parameters. My refactored function would look like the following:

function getHealthStatus(url, state, options) {
  const { healthChecks, logger } = options;
  if (state.isShuttingDown) {
    return sendFailure(res)
  }
  healthChecks[url]()
    .then((info) => {
      sendSuccess(res, info)
    })
    .catch((error) => {
      logger('healthcheck failed', error)
      sendFailure(res, error.causes)
    })
}

The problem is that a user should not pass state or options as parameters, yet both are still required for getHealthStatus to work. One way to solve this is to use a class instance for each createTerminus() call, which can then hold state and options as properties.
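To make that suggestion concrete, here is a rough sketch of the class-based shape (the class name, constructor signature, and response format are hypothetical, not the current terminus API):

```javascript
// Hypothetical class: each createTerminus() call would produce an
// instance that owns `state` and `options`, so getHealthStatus()
// needs no extra parameters.
class Terminus {
  constructor (server, options = {}) {
    this.options = options;
    this.state = { isShuttingDown: false };
    // ...decorating `server` with the HTTP routes is omitted in this sketch
  }

  async getHealthStatus (url) {
    if (this.state.isShuttingDown) {
      return { status: 'error' };
    }
    try {
      const info = await this.options.healthChecks[url]();
      return { status: 'ok', info };
    } catch (error) {
      (this.options.logger || (() => {}))('healthcheck failed', error);
      return { status: 'error', error: error.causes };
    }
  }
}
```

With that shape, getHealthStatus('/health') could return { status: 'ok', info: {...} } without any HTTP round trip.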

only usable with node versions >=7.6.0

It seems as if the stoppable module being used depends on the awaiting module, which only allows node versions >=7.6.0.

This seems like a big limitation, as v6 is an active LTS until April 2019. Unfortunately, we cannot update to 7+ yet, but we would really like to use this module.

Is it possible to use a different module, or to make stoppable not use awaiting? I see that all these modules seem to be made by the same people.

Example’s usage of healthcheck seems wrong

The onHealthCheck usage in the example should look like the following:

healthChecks: {
  '/healthz': () => Promise.resolve()
},

according to README.md and the code below

terminus/lib/terminus.js

Lines 67 to 79 in 8e822ba

const healthChecks = options.healthChecks || {};
const signal = options.signal || 'SIGTERM';
const timeout = options.timeout || 1000;
const onSignal = options.onSignal || options.onSigterm || noopResolves;
const onShutdown = options.onShutdown || noopResolves;
const logger = options.logger || noop;
decorateWithHealthCheck(server, {
  healthChecks,
  logger
});

Windows support, or the lack thereof

I was playing around with this repo and found that the tests fail on Windows:

verbose test report
λ yarn test
yarn run v1.3.2
$ mocha lib/**/*.spec.js && npm run test-typings

  Terminus
    1) runs onSigterm when getting the SIGTERM signal
    2) runs onShutdown after onSigterm
    3) runs onSigint when getting SIGINT signal
    4) runs onShutdown after onSigint
    supports onHealthcheck for the healthcheck route
      √ but keeps all the other endpoints
      √ returns 200 on resolve
      √ returns 503 on reject

  3 passing (1s)
  4 failing

  1) Terminus
       runs onSigterm when getting the SIGTERM signal:

      AssertionError: expected '' to deeply equal 'on-sigterm-runs'
      + expected - actual

      +on-sigterm-runs

      at Context.ex (lib\terminus.spec.js:89:46)

  2) Terminus
       runs onShutdown after onSigterm:

      AssertionError: expected '' to deeply equal 'on-sigterm-runs\non-shutdown-runs'
      + expected - actual

      +on-sigterm-runs
      +on-shutdown-runs

      at Context.ex (lib\terminus.spec.js:100:46)

  3) Terminus
       runs onSigint when getting SIGINT signal:

      AssertionError: expected '' to deeply equal 'on-sigint-runs'
      + expected - actual

      +on-sigint-runs

      at Context.ex (lib\terminus.spec.js:111:46)

  4) Terminus
       runs onShutdown after onSigint:

      AssertionError: expected '' to deeply equal 'on-sigint-runs\non-shutdown-runs'
      + expected - actual

      +on-sigint-runs
      +on-shutdown-runs

      at Context.ex (lib\terminus.spec.js:122:46)

error Command failed with exit code 4.

I tried to dig into this a bit and found that the whole concept of SIGTERM is not supported on Windows.

Short story: you can expect SIGINT to work now, as well as SIGBREAK and to some extent SIGHUP. However SIGTERM will never work because killing a process in the task manager is unconditional, e.g. there's no way for an application to detect or prevent it.

Also see nodejs/node#12378.

So can I make a PR that updates the documentation to note the lack of support on Windows due to inherent OS limitations?
