
Deprecation notice: This project, in its current form, is no longer maintained. The underlying infrastructure will be kept intact as long as it does not require extra maintenance, which is no longer provided. Please check out the Serverless Framework instead.


Serverless Components


English | 简体中文

Serverless Components are abstractions that enable developers to deploy serverless applications and use-cases more easily, all via the Serverless Framework.

Important Note: Serverless Components work differently from Serverless Framework's traditional local deployment model. To deliver a significantly faster development experience, your source code and temporary credentials will pass through an innovative, hosted deployment engine (similar to a CI/CD product). Learn more about our deployment engine's handling of credentials and source code here.

Serverless Components is now Generally Available. Click here for the Beta version.


  • Ease - Deploy entire serverless applications/use-cases via Components, without being a cloud expert.
  • Instant Deployments - Components deploy in ~8 seconds, making rapid development on the cloud possible.
  • Streaming Logs - Components stream logs from your app to your console in real-time, for fast debugging.
  • Automatic Metrics - Many Components auto-set-up metrics upon deployment.
  • Build Your Own - Components are easy to build.
  • Registry - Share your Components with your team and the world, via the Serverless Registry.

Deploy a serverless app rapidly, with any of these commands:

$ npx serverless init fullstack-app
$ npx serverless init express-starter
$ npx serverless init react-starter
$ npx serverless init vue-starter
$ npx serverless init graphql-starter

Documentation


Quick-Start

To get started with Serverless Components, install the latest version of the Serverless Framework:

$ npm i -g serverless

Log in to the Serverless Dashboard via the CLI:

$ serverless login

Before you proceed, make sure you connect your AWS account by creating a provider in the settings page on the Serverless Dashboard.

Then, run serverless registry to see many Component-based templates you can deploy, or see more in the Serverless Framework Dashboard. These contain Components as well as boilerplate code, to get you started quickly.

Install anything from the registry via $ serverless init <template>, like this:

$ serverless init express-starter

cd into the generated directory and deploy!

$ serverless deploy

After a few seconds, you should get a URL as output. If you visit that URL, you'll see a successful "Request Received" message.

Fetch your Component Instance's info:

$ serverless info

Run the serverless dev command to auto-deploy on save, and have logs and errors stream in real-time to your console (if supported by the Component)...

$ serverless dev

Deploy other Components that you may want to use with your Express Component. For example, you may want to give your Express application permissions to other resources on your AWS account via the aws-iam-role Component, or add an AWS DynamoDB table via the aws-dynamodb Component. You can initialize and deploy them just like the Express Component, then reference them like this:

org: your-org # Your Org
app: your-app # Your App
component: express
name: express-api

inputs:
  src: ./src
  roleName: ${output:my-role.name}
  env:
    dbTableName: ${output:${stage}:${app}:my-table.name}

Note: Serverless Components only supports Node.js applications at the moment.


Overview

We (Serverless Inc) made Serverless Framework Components because composing, configuring and managing low-level serverless infrastructure can be complicated for developers and teams.

Serverless Components are merely libraries of code that deploy use-cases onto serverless cloud infrastructure for you. Each Component contains the best infrastructure pattern for that use-case, for scale, performance, cost optimization, collaboration and more.

Use-Cases

You can use Serverless Components to abstract over anything, but these are the most common patterns:

  1. An entire application, like a blog, video streaming service, or landing page.
  2. A software feature, like user authentication, comments, or a payment system.
  3. A low-level use-case, like a data processing pipeline or microservice.

Features

Ease

Serverless Components are use-case first. Infrastructure details that aren't necessary for the use-case are hidden, and use-case focused configuration is offered instead.

Here's what it looks like to provision a serverless website hosted on AWS S3, delivered globally and quickly with AWS CloudFront, via a custom domain on AWS Route 53, secured by a free AWS ACM SSL certificate:

# serverless.yml

component: website # A Component in the Registry
name: my-website # The name of your Component Instance

inputs: # The configuration the Component accepts
  src:
    src: ./src
    hook: npm run build
    dist: ./dist
  domain: mystore.com

Instant Deployments

Serverless Components deploy fast (~8 seconds), removing the need to emulate cloud services locally for fast feedback during the development process.

$ serverless deploy

8s > my-express-app › Successfully deployed

Build Your Own

Serverless Components are easily written in JavaScript (serverless.js), with syntax inspired by component-based frameworks like React.

// serverless.js

const { Component } = require('@serverless/core');

class MyBlog extends Component {
  async deploy(inputs) {
    console.log('Deploying a serverless blog'); // Leave a status update for users deploying your Component with --debug
    const outputs = { url: 'https://myblog.example.com' }; // Deploy your infrastructure here and collect its outputs (illustrative value)
    this.state.url = outputs.url; // Save state
    return outputs;
  }
}

module.exports = MyBlog;

Registry

Anyone can build a Serverless Component and share it in our Registry.

$ serverless publish

express@0.0.4 › Published

Serverless

Serverless Components favor cloud infrastructure with serverless qualities. They are also entirely vendor agnostic, enabling you to easily use services from different vendors together, such as AWS Lambda, AWS S3, Azure Functions, Google BigQuery, Twilio, Stripe, Algolia, Cloudflare Workers and more.


Using Components

Serverless Framework

Serverless Components are a Serverless Framework feature. You use them with the Serverless Framework CLI. Install it via:

$ npm i serverless -g

serverless.yml

To use a Serverless Component, declare the name of one that exists in the Serverless Registry in your serverless.yml.

The syntax looks like this:

# serverless.yml

component: express@1.5.7 # The name and version of the Component in the Registry.  To always use the latest version, omit the '@' and version (e.g. 'component: express').
org: acme # The name of your Serverless Framework Org
app: fullstack # Optional. The name of a high-level app container.  Useful if you want to group apps together.
name: rest-api # The name of your Component Instance

inputs: # The parameters to send to the "deploy" action of the component.
  src: ./src
  domain: api.my-app.com

There is nothing to install when using Serverless Components. They live in the cloud. When you run deploy, the configuration you specify in serverless.yml will be sent to the Serverless Components Engine, along with the files or source code you specify in inputs.

Other Notes:

  • You cannot use Serverless Components within an existing Serverless Framework project file (i.e. a serverless.yml file that contains functions, events, resources and plugins).

  • You can only have one Serverless Component per serverless.yml. We encourage this because it's important to separate the resources in your Serverless Applications, rather than put all of them in one infrastructure stack.

Actions, Inputs & Outputs

Every Serverless Component can perform one or many Actions, which are functions containing logic that the Component can run for you, such as:

  • deploy - Deploy something onto cloud infrastructure.
  • remove - Remove something from cloud infrastructure.
  • test - Test some functionality provisioned by the Component, like an API endpoint.
  • metrics - Get metrics about the Component's performance.

Components ship with their own unique Actions, though all have deploy and remove. One way to think about it is to consider Components as JavaScript classes and Actions as their class methods.

You can run Component Actions via the Serverless Framework CLI or the Serverless Framework SDK.

All Actions accept parameters known as Inputs and return other parameters known as Outputs.

In serverless.yml, the inputs property is merely the set of Inputs you wish to send to the deploy Action of your Component.

Every Action has its own Inputs and Outputs.

When a Component Action is finished running, it returns an outputs object.

Outputs contain the most important information you need to know from a deployed Component Instance, like the URL of the API or website, or all of the API endpoints.

Outputs can be referenced easily in the inputs of other Components. Just use this syntax:

# Simpler Syntax - References the same "stage" and "app"
${output:[instance].[output]}

# More Configurable Syntax - Customize the "stage" and "app"
${output:[stage]:[app]:[instance].[output]}

  • stage - The stage that the referenced Component Instance was deployed to. It is the stage property in that Component Instance's serverless.yml file.
  • app - The app that the referenced Component Instance was deployed to. It is the app property in that Component Instance's serverless.yml file, which falls back to the name property if you did not specify it.
  • instance - The name of the Component Instance you are referencing. It is the name property in that Component Instance's serverless.yml file.
  • output - One of the Outputs of the Component Instance you are referencing. They are displayed in the CLI after deploying.

# Examples
${output:prod:ecommerce:products-api.url}
${output:prod:ecommerce:role.arn}
${output:prod:ecommerce:products-database.name}

Deploying

You can deploy Components easily via the Serverless Framework with the $ serverless deploy command.

$ serverless deploy

This command can be run in any directory containing a valid components serverless.yml as shown above. You can also run serverless deploy in any directory that contains multiple component directories, in which case it would deploy all these components in parallel whenever possible according to their output variable dependencies. If you'd like to make sure all these related components deploy to the same org, app & stage, you can create a serverless.yml file at the parent level that includes these properties. The fullstack-app template is a good example for all of this.
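As a sketch of that layout (the directory and component names are hypothetical), a parent-level serverless.yml might look like this:

```yaml
# ./serverless.yml - parent-level properties shared by all components below
org: acme
app: ecommerce
stage: dev

# Hypothetical directory layout:
#   ./serverless.yml            (this file)
#   ./database/serverless.yml   (e.g. an aws-dynamodb Component Instance)
#   ./api/serverless.yml        (e.g. an express Component Instance that
#                                references the database's outputs)
```

Running serverless deploy from the parent directory would then deploy database and api together, ordered by their output variable dependencies.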

While Serverless Components deploy incredibly fast, please note that first deployments can often be 2x slower because creating cloud resources takes much longer than updating them. Also note that some resources take a few minutes to become available. For example, APIs and website URLs may take 1-2 minutes before they are reachable.

State

Serverless Components automatically save their state remotely. This means you can easily push your Components to GitHub, GitLab, Bitbucket, etc., and collaborate on them with others, as long as the serverless.yml contains an org which your collaborators are added to:

org: acme-team # Your collaborators must be added at dashboard.serverless.com
app: ecommerce
component: my-component
name: rest-api

Further, your Component Instances can easily be deployed with CI/CD, as long as you make sure to include a SERVERLESS_ACCESS_KEY environment variable.

You can add collaborators and create access keys in the Serverless Framework Dashboard.
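For example, a minimal CI job might look like this (a sketch, assuming GitHub Actions; the workflow and secret names are your choice):

```yaml
# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm i -g serverless
      - run: serverless deploy
        env:
          # An access key created in the Serverless Framework Dashboard,
          # stored as an encrypted secret in your CI provider
          SERVERLESS_ACCESS_KEY: ${{ secrets.SERVERLESS_ACCESS_KEY }}
```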

Versioning

Serverless Components use semantic versioning.

When you specify a version, only that Component version is used. When you don't, the Serverless Framework will use the latest available version of that Component. We recommend always pinning your Component to a specific version.
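In serverless.yml, pinning looks like this (the component name and version are illustrative):

```yaml
component: express@1.5.7 # Pinned: this exact version is always used
# component: express     # Unpinned: the latest published version is used
org: acme
app: ecommerce
name: rest-api
```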

Providers

Upon deployment, Serverless Framework Components look for a provider connected to your service. If none is found, the default provider will be used. You can manage providers in the settings page on the Serverless Dashboard. To learn more about the Providers feature, check out its docs here.

Stages

Serverless Components have a Stages concept, which enables you to deploy entirely separate Component Instances and their cloud resources per stage.

The dev Stage is always used as the default stage. If you wish to change your stage, set it in serverless.yml, like this:

org: my-org
app: my-app
component: express@1.5.7
name: my-component-instance
stage: prod # Enter the stage here

You can also specify a Stage within the SERVERLESS_STAGE Environment Variable, which overrides the stage set in serverless.yml:

SERVERLESS_STAGE=prod

And, you can specify a Stage upon deployment via a CLI flag, which overrides anything set in serverless.yml AND an Environment Variable, like this:

$ serverless deploy --stage prod

Again, the CLI flag overrides both a stage in serverless.yml and an Environment Variable. Whereas an Environment Variable can only override the stage in serverless.yml.

Lastly, you can set stage-specific environment variables using separate .env files. Each file must be named in the following format: .env.STAGE. For example, if you run in the prod stage, the environment variables in .env.prod would be loaded, otherwise the default .env file (without stage extension) would be loaded. You can also put the .env.STAGE file in the immediate parent directory, in the case that you have a parent folder containing many Component Instances.
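For example, with a layout like this (values illustrative), serverless deploy --stage prod would load .env.prod, while a plain serverless deploy (defaulting to the dev stage) would fall back to .env, assuming no .env.dev exists:

```shell
# ./.env - the default file
REGION=us-east-1

# ./.env.prod - loaded when deploying to the "prod" stage
REGION=us-west-2
```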


Variables

You can use Variables within your Component Instance's serverless.yml to reference Environment Variables, values from within serverless.yml, and Outputs from other Serverless Component Instances that you've already deployed.

Here is a quick preview of possibilities:

org: acme
app: ecommerce
component: express
name: rest-api
stage: prod

inputs:
  name: ${org}-${stage}-${app}-${name} # Results in "acme-prod-ecommerce-rest-api"
  region: ${env:REGION} # Results in whatever your REGION environment variable is set to.
  roleArn: ${output:prod:my-app:role.arn} # Fetches an Output from another Component Instance that is already deployed
  # roleArn: ${output:${stage}:${app}:role.arn} # You can combine Variables too

Variables: Org

You can reference your org value in the inputs of your YAML in serverless.yml by using the ${org} Variable, like this:

org: acme
app: ecommerce
component: express
name: rest-api
stage: prod

inputs:
  name: ${org}-api # Results in "acme-api"

Note: If you didn't specify an org, the default org would be the first org you created when you first signed up. You can always overwrite the default org or the one specified in serverless.yml by passing the --org option on deploy:

$ serverless deploy --org my-other-org

Variables: Stage

You can reference your stage value in the inputs of your YAML in serverless.yml by using the ${stage} Variable, like this:

org: acme
app: ecommerce
component: express
name: rest-api
stage: prod

inputs:
  name: ${stage}-api # Results in "prod-api"

Note: If you didn't specify a stage, the default stage would be dev. You can always overwrite the default stage or the one specified in serverless.yml by passing the --stage option on deploy:

$ serverless deploy --stage prod

Variables: App

You can reference your app value in the inputs of your YAML in serverless.yml by using the ${app} Variable, like this:

org: acme
app: ecommerce
component: express
name: rest-api
stage: prod

inputs:
  name: ${app}-api # Results in "ecommerce-api"

Note: If you didn't specify an app, the default app name would be the instance name (the name property in serverless.yml). You can always overwrite the default app or the one specified in serverless.yml by passing the --app option on deploy:

$ serverless deploy --app my-other-app

Variables: Name

You can reference your name value in the inputs of your YAML in serverless.yml by using the ${name} Variable, like this:

org: acme
app: ecommerce
component: express
name: rest-api
stage: prod

inputs:
  name: ${name} # Results in "rest-api"

Variables: Environment Variables

You can reference Environment Variables (e.g. those that you defined in the .env file or that you've set in your environment manually) directly in serverless.yml by using the ${env} Variable.

For example, if you want to reference the REGION environment variable, you could do that with ${env:REGION}.

component: express
org: acme
app: ecommerce
name: rest-api
stage: prod

inputs:
  region: ${env:REGION}

Variables: Outputs

Perhaps one of the most useful Variables is the ability to reference Outputs from other Component Instances that you have already deployed. This allows you to share configuration/data easily across as many Component Instances as you'd like.

If you want to reference an Output of another Component Instance, use the ${output:[stage]:[app]:[instance].[output]} syntax, like this:

component: express
org: acme
app: ecommerce
name: rest-api
stage: prod

inputs:
  roleArn: ${output:[STAGE]:[APP]:[INSTANCE].arn} # Fetches an output from another component instance that is already deployed

You can access Outputs across any App and Instance, in any Stage, within the same Org.

A useful feature of this is the ability to share resources easily, and even do so across environments. This is useful when developers want to deploy a Component Instance in their own personal Stage, but access shared resources within a common "development" Stage, like a database. This way, the developers on your team do not have to recreate the entire development stage to perform their feature work or bug fix, only the Component Instance that needs changes.
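As a sketch (the org, app, stage and instance names are hypothetical), a developer's personal stage could reference a database deployed once to the shared dev stage like this:

```yaml
org: acme
app: ecommerce
component: express
name: rest-api
stage: dev-alice # The developer's personal stage

inputs:
  src: ./src
  env:
    # References the table instance deployed to the shared "dev" stage
    dbTableName: ${output:dev:ecommerce:my-table.name}
```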


Proxy

Problem: your environment cannot access the public network directly and can only reach it through a proxy, so a network failure is reported when sls deploy is executed.
Solution: add the following configuration to the .env file (requires Node.js >= 11.7.0):

HTTP_PROXY=http://127.0.0.1:12345 # Your proxy
HTTPS_PROXY=http://127.0.0.1:12345 # Your proxy

or:

http_proxy=http://127.0.0.1:12345 # Your proxy
https_proxy=http://127.0.0.1:12345 # Your proxy

Security Considerations

Serverless Framework Components are used via the Serverless Framework CLI, but they differ from Serverless Framework's Traditional experience in that deployment happens via an innovative hosted deployment engine (similar to a CI/CD product). When using Components, you will be prompted to log in, to ensure you're aware of this difference.

Here are the security implications of this.

Credentials

Serverless Framework Components rely completely on our secure Providers feature, which helps you create an AWS IAM Role that our hosted engine can assume to automatically generate temporary credentials before every action it performs. Read more about Providers here, or go to the Serverless Framework Dashboard and navigate to "Org" and "Providers" to create one.

The temporary credentials generated by your provider will pass through our company's hosted deployment engine. These credentials are not stored. This design enables 95% faster deployments, automatic metrics, real-time logging, and more, all accessible from multiple clients.

We also recommend you strictly limit the scope of your credentials or access role to allow only what each Serverless Framework Component needs. Each Component deploys a specific use-case and specific infrastructure, so permissions required are significantly reduced compared to what Serverless Framework Traditional requires. Further, clear permission policies for each Component will soon be available to help you understand what permissions are required.

Source Code

Your application source code will be uploaded and temporarily stored within our hosted deployment engine. This design enables 95% faster deployments, automatic metrics, real-time logging, and rollback features, all accessible from multiple clients. In the near future, we will enable storing code on your own account, but we have not yet reached this section of our roadmap.

CLI Commands

serverless registry

See available Components

serverless publish

Publish a Component to the Serverless Registry.

--dev - Publishes to the @dev version of your Component, for testing purposes.

serverless init <package-name>

Initializes the specified package (component or template) locally to get you started quickly.

--dir, -d - Specify destination directory.

serverless deploy

Deploys an Instance of a Component.

--debug - Lists console.log() statements left in your Component upon deploy or any action.

serverless remove

Removes an Instance of a Component.

--debug - Lists console.log() statements left in your Component upon remove or any action.

serverless info

Fetches information of an Instance of a Component.

--debug - Lists state.

serverless dev

Starts DEV MODE, which watches the Component for changes, auto-deploys on changes, and (if supported by the Component) streams logs, errors and transactions to the terminal.

serverless param - currently available to users in China only

Users can set and list secret values as parameters via the CLI, scoped by app and stage.

Set parameters

serverless param set id=param age=12 [--app test] [--stage dev]

  • You can set multiple parameters at once, using the paramName=paramValue syntax
  • If you do not set app or stage, the CLI will read them from the config file or use default values

List parameters

serverless param list [--app test] [--stage dev]

  • If you do not set app or stage, the CLI will read them from the config file or use default values
  • The CLI will show all parameters for the current stage and app

serverless <command> --inputs key=value foo=bar

Runs any component custom command, and passes inputs to it. The inputs you pass in the above syntax overwrite any inputs found in the serverless.yml file.

A few examples:

# simple example
serverless test --inputs domain=serverless.com

# passing objects with JSON syntax
serverless invoke --inputs env='{"LANG": "en"}'

# passing arrays with comma separation
serverless backup --inputs userIds=foo,bar

Building Components

If you want to build your own Serverless Component, there are 2 essential files you need to be aware of:

  • serverless.component.yml - This contains the definition of your Serverless Component.
  • serverless.js - This contains your Serverless Component's code.

One of the most important things to note is that Serverless Components only run in the cloud and do not run locally. That means, to run and test your Component, you must publish it first (publishing takes only a few seconds). We're continuing to improve this workflow. Here's how to do it...
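A typical publish-and-test loop might look like this (a sketch; the component and directory names are hypothetical):

```shell
# From your Component's directory (the one containing serverless.component.yml):
serverless publish --dev   # Publish a development version of your Component

# From a separate test directory containing a serverless.yml that declares
# "component: my-component@dev":
serverless deploy --debug  # Deploy a test instance and view its console.log output
```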

serverless.component.yml

To declare a Serverless Component and make it available within the Serverless Registry, you must create a serverless.component.yml file with the following properties:

# serverless.component.yml

name: express # Required. The name of the Component
version: 0.0.4 # Required. The version of the Component
author: eahefnawy # Required. The author of the Component
org: serverlessinc # Required. The Serverless Framework org which owns this Component
main: ./src # Required. The directory which contains the Component code
description: Deploys Serverless Express.js Apps # Optional. The description of the Component
keywords: aws, serverless, express # Optional. The keywords of the Component to make it easier to find at registry.serverless.com
repo: https://github.com/owner/project # Optional. The code repository of the Component
license: MIT # Optional. The license of the Component code

serverless.js

A serverless.js file contains the Serverless Component's code.

To make a bare minimum Serverless Component, create a serverless.js file, extend the Component class and add a deploy method like this:

// serverless.js

const { Component } = require('@serverless/core');

class MyComponent extends Component {
  async deploy(inputs = {}) {
    return {};
  } // The default functionality to run/provision/update your Component
}

module.exports = MyComponent;

Note: You do NOT need to install the @serverless/core package via npm. This package is automatically available to you in the cloud environment.

deploy() is always required. It contains the logic your Component runs to provision something. Whenever you run the $ serverless deploy command, the deploy() method is called.

You can also add other methods to this class. A remove() method is often the next logical choice, if you want your Serverless Component to remove the things it creates, via $ serverless remove.

You can add as many methods as you want. This is interesting because it enables you to ship more automation with your Component than logic that merely deploys and removes something.

It's still early days for Serverless Components, but we are starting to work on Components that ship with their own test() function, or their own logs() and metrics() functions, or seed() for establishing initial values in a database Component. Overall, there is a lot of opportunity here to deliver Components that are loaded with useful automation.

All methods other than the deploy() method are optional. All methods take a single inputs object, not individual arguments, and return a single outputs object.

Here is what it looks like to add a remove method, as well as a custom method.

// serverless.js

const { Component } = require('@serverless/core');

class MyComponent extends Component {
  /*
   * The default functionality to run/provision/update your Component
   * You can run this function by running the "$ serverless deploy" command
   */
  async deploy(inputs = {}) {
    return {};
  }

  /*
   * If your Component removes infrastructure, this is recommended.
   * You can run this function by running "$ serverless remove"
   */

  async remove(inputs = {}) {
    return {};
  }

  /*
   * If you want to ship your Component w/ extra functionality, put it in a method.
   * You can run this function by running "$ serverless anything"
   */

  async anything(inputs = {}) {
    return {};
  }
}

module.exports = MyComponent;

When inside a Component method, this comes with utilities which you can use. Here is a guide to what's available to you within the context of a Component.

// serverless.js

const AWS = require('aws-sdk'); // Like @serverless/core, assumed available in the cloud environment
const { Component } = require('@serverless/core');

class MyComponent extends Component {
  async deploy(inputs = {}) {
    // "this" features useful information
    console.log(this);

    // Common provider credentials are identified in the environment or .env file and added to this.credentials
    // when you run "serverless deploy", the credentials in .env will be used
    // when you run "serverless deploy --stage prod", the credentials in .env.prod will be used...etc
    // if you don't have any .env files, then global AWS credentials will be used
    const dynamodb = new AWS.DynamoDB({ credentials: this.credentials.aws });

    // You can easily create a random ID to name cloud infrastructure resources with using this utility.
    // This prevents name collisions.
    const s3BucketName = `my-bucket-${this.resourceId()}`;

    // Components have built-in state storage.
    // Here is how to save state to your Component:
    this.state.name = 'myComponent';

    // If you want to show a debug statement in the CLI, use console.log.
    console.log('this is a debug statement');

    // Return your outputs (the URL shown is illustrative)
    return { url: `http://${s3BucketName}.s3-website-us-east-1.amazonaws.com` };
  }
}

module.exports = MyComponent;

Input & Output Types

Every Serverless Component has Actions (which are merely functions, e.g. deploy, remove, metrics). Each Action accepts Inputs and returns Outputs. Serverless Components can optionally declare Types for the Inputs and Outputs of each Action in their serverless.component.yml, which makes them easier to write and use.

Input & Output Types are recommended because they offer the following benefits:

  • They validate an Action is supported by a Component before running it.
  • They validate user Inputs before they are sent to a Component's Actions.
  • They prevent Component authors from needing to write their own validation logic.
  • They offer helpful errors to Component users when they enter invalid Inputs.
  • They can automate documentation for your Component.
  • They are needed for upcoming Serverless Framework Dashboard features that will enable visualizing Input and Output data in special ways (e.g. form fields, charts, etc.).

Types are optionally declared in serverless.component.yml files.

You must first declare the Actions the Component uses, like this:

name: express
version: 1.5.7
org: serverlessinc
description: Deploy a serverless Express.js application onto AWS Lambda and AWS HTTP API.

actions:
  # deploy action
  deploy:
    # deploy action definition
    definition: Deploy your Express.js application to AWS Lambda, AWS HTTP API and more.
    inputs:
      # An array of Types goes here.
    outputs:
      # An array of Types goes here.

Below is a full example, which also details all supported Types (Disclaimer: This combines documentation and a real example. Hopefully it's more helpful!).

name: express
version: 1.5.7
org: serverlessinc
description: Deploy a serverless Express.js application onto AWS Lambda and AWS HTTP API.

actions:
  # deploy action
  deploy:
    # deploy action definition
    definition: Deploy your Express.js application to AWS Lambda, AWS HTTP API and more.
    inputs:
      #
      #
      # Primitive Types
      # These cover standard data types, like "string", "number", "object", etc.
      #
      #

      # Type: string

      name: # The name of the input/output
        type: string # The type
        # Optional
        required: true # Defaults to required: false
        default: my-app # The default value
        description: The name of the AWS Lambda function. # A description of this parameter
        min: 5 # Minimum number of characters
        max: 64 # Maximum number of characters
        regex: ^[a-z0-9-]*$ # A RegEx pattern to validate against.
        allow: # The values that are allowed for this
          - my-api
          - my-backend

      # Type: number

      memory: # The name of the input/output
        type: number # The type.  These can be integers or decimals.
        # Optional
        required: true # Defaults to required: false
        default: 2048 # The default value
        description: The memory size of the AWS Lambda function. # A description of this parameter
        min: 128 # Minimum number allowed
        max: 3008 # Maximum number allowed
        allow: # The values that are allowed for this
          - 128
          - 1024
          - 2048
          - 3008

      # Type: boolean

      delete: # The name of the input/output
        type: boolean # The type.
        # Optional
        required: true # Defaults to required: false
        description: Whether to delete this infrastructure resource when removed # A description of this parameter
        default: true # The default value

      # Type: object

      vpcConfig: # The name of the input/output
        type: object # The type
        # Optional
        required: true # Defaults to required: false
        description: The VPC configuration for your AWS Lambda function # A description of this input
        keys:
          # Add more Types in here
          securityGroupIds: # The name of the key
            type: string

      # Type: array

      mappingTemplates: # The name of the input/output
        type: array # The type
        # Optional
        required: true # Defaults to required: false
        description: The mapping templates for your GraphQL endpoints. # A description of this input
        min: 1 # Minimum array items
        max: 10 # Max array items
        items:
          # Add more Types in here, that you wish to allow, without "name" properties because they are array items.
          - type: string
            min: 5
            max: 13

          - type: object
            keys:
              # Add more standard Types in here, with "name" properties because they are object properties.
              - name: aws_lambda
                type: string

        default: # Default array items
          - '12345678'

      #
      #
      # Special Types
      # These are special types, they cover handling source code, and more
      #
      #

      # Type: src
      # This Type specifies a folder containing code or general files you wish to upload upon deployment, which the Component may need to deploy a specific outcome. Before running the Component in the cloud, the Serverless Framework will first upload any files specified in `src`. Generally, you want to keep the package size of your serverless applications small (<5MB) in order to have the best performance in serverless compute services. Larger package sizes will also make deployments slower since the upload process is dependent on your internet connection bandwidth. Consider a tool to build and minify your code first. You can specify a build hook to run and a `dist` folder to upload, via the `src` property.
      # This Type can either be a string containing a relative path to your source code, or an object.

      src: # The name "src" is reserved for this Type.  Your inputs can only have one of these.
        type: src # The type
        # Optional
        required: true # Defaults to required: false
        description: The source code of your application that will be uploaded to AWS Lambda. # A description of this parameter
        src: # A relative file path to the directory which contains your source code and any scripts you wish to run via the "hook" property.
        hook: # A script you wish to run before uploading your source code.
        dist: # The directory containing your built source code which you wish to upload.
        exclude: # An array of glob patterns of files/paths to exclude
          - .env # exclude .env file in ./src
          - '.git/**' # exclude .git folder and all subfolders and files inside it
          - '**/*.log' # exclude all files with .log extension in any folder under the ./src

      # Type: env
      # This Type is for an object of key-value pairs meant to contain sensitive information.  By using it, the Serverless Framework will treat this data more securely.

      env: # The name "env" is reserved for this Special Type.  Your params can only have one of these.
        type: env # The type
        # Optional
        description: Environment variables to include in AWS Lambda # A description of this input

      # Type: datetime
      # This Type is an ISO8601 string that contains a datetime.

      rangeStart: # The name of the input/output
        type: datetime
        # Optional
        required: true # Defaults to required: false
        description: The start date of your metrics timeframe. # A description of this input

      # Type: url
      # This Type is for a URL, often describing your root API URL or website URL.

      url: # The name of the input/output
        type: url
        # Optional
        required: true # Defaults to required: false
        description: The url of your website. # A description of this input

      # Type: api
      # This Type is for an OpenAPI specification.

      api: # The name of the input/output
        type: api
        # Optional
        required: true # Defaults to required: false
        description: The API from your Express.js app. # A description of this input

      # Type: metrics
      # This Type is for an array of supported Metrics widgets which can be rendered dynamically in GUIs.

      metrics: # The name of the input/output
        type: metrics
        # Optional
        required: true # Defaults to required: false
        description: API metrics from your back-end. # A description of this input

    outputs:
      # Another array of Types goes here.

  # remove action
  remove:
    # ... accepts config identical to the deploy action
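On the consumer side, a serverless.yml using the src Type described above might look like this (a hypothetical sketch; the component name, version, and paths are illustrative):

```yaml
# serverless.yml (consumer side) -- illustrative example
component: express@1.0.0
name: my-api

inputs:
  src:
    src: ./               # folder containing source and build scripts
    hook: npm run build   # runs before upload
    dist: ./dist          # built output that actually gets uploaded
    exclude:
      - .env
      - '.git/**'
```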

type metrics

If you use the metrics Type in your Component Outputs, you must return an array that contains one or many of the following data structures.

Each data structure corresponds to a widget that can be rendered in the Serverless Framework Dashboard.

type: 'bar-v1'

This is for displaying a bar chart. It can support multiple y data sets which cause the bar chart to stack.

In the Dashboard, the stat property of the first yDataSets item takes precedence.

{
  // Type: Name and version of this chart type.
  "type": "bar-v1",
  // Title: Name of the chart
  "title": "API Requests & Errors",
  // xData: The values along the bottom of the chart.  Must have the same quantity as yValues.
  "xData": [
    "2021-07-01T19:00:00.999Z",
    "2021-07-01T20:00:00.999Z",
    "2021-07-01T21:00:00.999Z",
    "2021-07-01T22:00:00.999Z"
  ],
  // yDataSets: An array of 1 or more items to include in order to stack the bar charts.
  "yDataSets": [
    {
      "title": "API Requests",
      // yData: An array of the values that correspond to the xData values
      "yData": [3, 43, 31, 65],
      // Color of bar chart.  Must be a hex value.
      "color": "#000000",
      // Stat: A large number to show at the top.  E.g., total api requests
      "stat": 142,
      // Stat Text: Shows next to the large number.  E.g., ms, seconds, requests, etc.  Default is null.
      "statText": "requests",
    },
    {
      "title": "API Errors",
      // yData: An array of the values that correspond to the xData values
      "yData": [2, 3, 1, 6],
      // Color of bar chart.  Must be a hex value.
      "color": "#FF5733",
      // Stat: A large number to show at the top.  E.g., total api errors
      "stat": 12,
      // Stat Text: Shows next to the large number.  E.g., ms, seconds, requests, etc.  Default is null.
      "statText": "errors",
    }
  ]
}
type: 'line-v1'

This is for displaying a line chart. It can support multiple y data sets which cause multiple lines on the chart.

In the Dashboard, the stat property of the first yDataSets item takes precedence.

{
  // Type: Name and version of this chart type.
  "type": "line-v1",
  // Title: Name of the chart
  "title": "API Latency",
  // xData: The values along the bottom of the chart.  Must have the same quantity as yValues.
  "xData": [
    "2021-07-01T19:00:00.999Z",
    "2021-07-01T20:00:00.999Z",
    "2021-07-01T21:00:00.999Z",
    "2021-07-01T22:00:00.999Z"
  ],
  // yDataSets: An array of 1 or more items to include for each line.
  "yDataSets": [
    {
      "title": "API P95 Latency",
      // yData: An array of the values that correspond to the xData values
      "yData": [3, 43, 31, 65],
      // Color of the line.  Must be a hex value.
      "color": "#000000",
      // Stat: A large number to show at the top.  E.g., average latency
      "stat": 142,
      // Stat Text: Shows next to the large number.  E.g., ms, seconds, requests, etc.  Default is null.
      "statText": "ms"
    },
    {
      "title": "API P99 Latency",
      // yData: An array of the values that correspond to the xData values
      "yData": [2, 3, 1, 6],
      // Color of the line.  Must be a hex value.
      "color": "#FF5733",
      // Stat: A large number to show at the top.  E.g., average latency
      "stat": 12,
      // Stat Text: Shows next to the large number.  E.g., ms, seconds, requests, etc.  Default is null.
      "statText": "ms"
    }
  ]
}
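As a sketch of how a Component might assemble one of these structures for its outputs (the helper name and data below are illustrative, not part of the Components API):

```javascript
// Sketch: building a "bar-v1" metrics widget inside a Component.
// buildRequestsWidget is a hypothetical helper; the data is placeholder data.
function buildRequestsWidget(xData, yData) {
  return {
    type: 'bar-v1',
    title: 'API Requests',
    xData,
    yDataSets: [
      {
        title: 'API Requests',
        yData,
        color: '#000000',
        // stat: show the total as the widget's headline number
        stat: yData.reduce((sum, n) => sum + n, 0),
        statText: 'requests'
      }
    ]
  }
}

const widget = buildRequestsWidget(
  ['2021-07-01T19:00:00.999Z', '2021-07-01T20:00:00.999Z'],
  [3, 43]
)
// A Component would then return { metrics: [widget] } from its outputs.
```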

Working With Source Code

When working with a Component that requires source code (e.g. you are creating a Component that will run on AWS Lambda), if you make src one of your inputs, anything specified there will be automatically uploaded and made available within the Component environment.

Within your Component, inputs.src will point to a zip file of the source files within your environment. If you wish to unzip the source files, use this helpful utility method:

async deploy(inputs = {}) {

  // Unzip the source files...
  const sourceDirectory = await this.unzip(inputs.src)

}

Now, you are free to manipulate the source files. When finished, you may want to use this utility method to zip them up again, since you will often need to upload the code to a compute service (e.g. AWS Lambda) next.

async deploy(inputs = {}) {

  // Zip up the source files again ("sourceDirectory" is the path returned by this.unzip() above)...
  const zipPath = await this.zip(sourceDirectory)

}

Adding The Serverless Agent

If your Component runs code, and you want to enable streaming logs, errors and transactions for your Component via Serverless Dev Mode (serverless dev), be sure to add the Serverless SDK into the deployed application/logic. We offer some helpful utility methods to make this possible:

// unzip source zip file
console.log(`Unzipping ${inputs.src}...`);
const sourceDirectory = await instance.unzip(inputs.src);
console.log(`Files unzipped into ${sourceDirectory}...`);

// add sdk to the source directory, add original handler
console.log(`Installing Serverless Framework SDK...`);
instance.state.handler = await instance.addSDK(sourceDirectory, '_express/handler.handler');

// zip the source directory with the shim and the sdk
console.log(`Zipping files...`);
const zipPath = await instance.zip(sourceDirectory);
console.log(`Files zipped into ${zipPath}...`);

After this, you'll likely want to upload the code to a compute service (e.g. AWS Lambda).

Development Workflow

Serverless Components only run in the cloud and cannot be run locally. This presents some tremendous advantages to Component consumers, and we've added some workflow tricks to make the authoring workflow easier. Here they are...

When you have added or updated the code of your Serverless Component and want to test the change, you will need to publish it first. Since you don't want to publish your changes to a proper version of your Component just for testing (because people may be using it), we allow you to publish a "dev" version of your Component.

Simply run the following command to publish your Serverless Component to the "dev" version:

$ serverless publish --dev

You can test the "dev" version of your Component in serverless.yml by adding @dev to your Component name, like this:

# serverless.yml

org: acme
app: fullstack
component: express@dev # Add "dev" as the version
name: rest-api

inputs:
  src: ./src

Run your Component command to test your changes:

$ serverless deploy --debug

When writing a Component, we recommend always using the --debug flag, so that the Component's console.log() statements are sent to the CLI. These are handy in Serverless Components, since they describe what the Component is doing. Add console.log() statements to your Component wherever you think they are useful.

class MyComponent extends Component {
  async deploy(inputs) {
    console.log(`Starting MyComponent.`);
    console.log(`Creating resources.`);
    console.log(`Waiting for resources to be provisioned.`);
    console.log(`Finished MyComponent.`);
    return {};
  }
}

When you're ready to publish a new version for others to use, update the version in serverless.component.yml, then run publish without the --dev flag.

# serverless.component.yml

name: [email protected]
$ serverless publish

Serverless: Successfully publish [email protected]

Development Tips

Here are some development tips when it comes to writing Serverless Components:

Start With The Outcome

We recommend starting with a focus on your desired outcome, rather than trying to break things down into multiple smaller Components from the start. Premature decomposition most often ends up as a distraction. Create a higher-level Component that solves your problem first. Use it. Learn from it. Then consider breaking things down into smaller Components if necessary. After all, high-level solutions are what Serverless Components are meant for: outcomes, with the lowest operational overhead.

Knowing The Outcome Is An Advantage

Provisioning infrastructure safely and reliably can be quite complicated. However, Serverless Components have a powerful advantage over general infrastructure provisioning tools that seek to enable every possible option and combination (e.g. AWS Cloudformation, Terraform) — Serverless Components know the specific use-case they are trying to deliver.

One of the most important lessons we've learned about software development tools is that once you know the use-case or specific goal, you can create a much better tool.

Components know their use-case. You can use that knowledge to: 1) provision infrastructure more reliably, because you have a clear provisioning path and you can program around the pitfalls. 2) provision infrastructure more quickly 3) add use-case specific automation to your Component in the form of custom methods.

Keep Most State On The Cloud Provider

Serverless Components save remarkably little state. In fact, many powerful Components have less than 10 properties in their state objects.

Components rely on the state saved within the cloud services they use as the source of truth. This prevents the drift issues that break infrastructure provisioning tools. It also opens up the possibility of working with existing resources that were not originally managed by Serverless Components.

Store State Immediately After A Successful Operation

If you do need to store state, try to store it immediately after a successful operation.

// Do something
this.state.id = 'updated or new id';
// Do something else
this.state.url = 'updated or new url';

This way, if anything after that operation fails, your Serverless Component can pick up where it left off, when the end user tries to deploy it again.
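For example (a hedged sketch; MyComponent, createResource and createUrl are hypothetical stand-ins for a real Component and its cloud SDK calls), a deploy method can use saved state to pick up where a failed run left off:

```javascript
// Sketch: resuming a partially-failed deployment using saved state.
class MyComponent {
  constructor() {
    this.state = {}
    this.createCalls = 0 // counts real "create" operations, for illustration
  }

  async deploy(inputs = {}) {
    // Only create the resource if a previous (possibly failed) run didn't already.
    if (!this.state.id) {
      this.state.id = await this.createResource(inputs)
      // State is stored immediately, so a failure below won't repeat this step.
    }
    // If this step fails, state.id survives and the next deploy resumes here.
    if (!this.state.url) {
      this.state.url = await this.createUrl(this.state.id)
    }
    return { url: this.state.url }
  }

  // Hypothetical stand-ins for real cloud SDK calls:
  async createResource() { this.createCalls++; return 'resource-123' }
  async createUrl(id) { return `https://example.com/${id}` }
}
```

Running deploy a second time skips the creation step entirely, because the id is already in state.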

Optimize For Accessibility

We believe serverless infrastructure and architectures will empower more people to develop software than ever before.

Because of this, we're designing all of our projects to be as approachable as possible. Please try to use simple, vanilla JavaScript. Additionally, to reduce security risks and general bloat, please use as few NPM dependencies as possible.

No Surprise Removals

Never surprise the user by deleting or fundamentally changing infrastructure within your Serverless Component, based on a configuration change in serverless.yml.

For example, if a user is changing their region, NEVER remove their infrastructure in one region and automatically recreate it in the new region upon their next $ serverless deploy.

Instead, throw an error with a warning about this:

$ serverless deploy
Error: Changing the region from us-east-1 to us-east-2 will remove your infrastructure. Please remove it manually, change the region, then re-deploy.
$
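In code, that guard might look like this (a sketch; the state/inputs shape is illustrative):

```javascript
// Sketch: refuse to silently recreate infrastructure when the region changes.
// Call this at the top of deploy(), before touching any resources.
function assertRegionUnchanged(state, inputs) {
  if (state.region && inputs.region && state.region !== inputs.region) {
    throw new Error(
      `Changing the region from ${state.region} to ${inputs.region} will remove your infrastructure. ` +
      'Please remove it manually, change the region, then re-deploy.'
    )
  }
}
```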

We have measured this user experience, and so far, 100% of the time the user simply removes their existing Component Instance and deploys another one. This works extremely well.

Write Integration Tests

We write integration tests in the tests/integration.tests.js file in each Component repo, and we run them on every push/merge to master with GitHub Actions. We recommend you do the same; as an example, see the integration tests in the website Component's repo.

Running these integration tests will most likely require AWS keys, which are stored as Github Secrets. So you'll most likely need write access to the repo to accomplish this.

F.A.Q.

How can I deploy and remove multiple Components at the same time?

A serverless.yml file can only hold 1 Component at this time. However, that does not mean you cannot deploy/remove multiple Components at the same time.

Simply navigate to a parent directory, and run serverless deploy to deploy any serverless.yml files in immediate subfolders. When this happens, the Serverless Framework will quickly create a graph based on the references your Component apps make to each other. Depending on those references, it will prioritize what needs to be deployed first; otherwise, it deploys things in parallel by default. This also works for serverless remove.
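For example, given a hypothetical layout like this, running serverless deploy from my-app/ deploys every child app, ordering them by any cross-references and otherwise deploying in parallel:

```
my-app/                     # parent directory -- run "serverless deploy" here
├── database/
│   └── serverless.yml      # deployed first if others reference its outputs
├── api/
│   └── serverless.yml
└── website/
    └── serverless.yml
```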

For context, here is why we designed serverless.yml to only hold 1 Component at a time:

  • We have a lot of advanced automation and other features in the works for Components that push the limits of how we think about infrastructure-as-code. Delivering those features is harder if serverless.yml contains multiple Components.

  • Many of our support requests come from users who deploy a lot of critical infrastructure together, and end up accidentally breaking that critical infrastructure, often while intending to push updates to one specific area. In response to this, we wanted to make sure there was an easy way to deploy things separately first, so that developers can deploy more safely. Generally, try to keep the things you deploy frequently (e.g. code, functions, APIs, etc.), separate from critical things that you deploy infrequently (e.g. VPCs, databases, S3 buckets). We get that it's convenient to deploy everything together (which is why we still enabled this via the method above), just be careful out there!

Where do Components run?

Components run in the cloud. Here's what that means and why it's important...

We've been working on the Serverless Framework for 5 years now. During that time, there have been many ways we've wanted to better solve user problems, improve the experience and innovate—but we've been limited by the project's design (e.g. requires local installation, hard to push updates, lack of error diagnostics, dealing with user environment quirks, etc.).

Over a year ago, we whiteboarded several groundbreaking ways we can push the boundaries of serverless dev tools (and infrastructure as code in general), and realized the only way to make that happen was to move the majority of work to the cloud.

Now, when you deploy, or perform any other Action of a Component, it happens in our "Components Engine", which we've spent 1.5+ years building. For clarity, this means your source code, environment variables and temporary credentials are passed through the Components Engine.

This is a complete change in how Serverless Framework traditionally worked. However, this is no different from how most build engines, CI/CD products, and cloud services work, as well as AWS CloudFormation, which Serverless Framework traditionally used. The "Components Engine" is a managed service, like AWS CloudFormation, CircleCI, Github, Github Actions, Hosted Gitlab, Terraform Cloud, etc.

As of today, the Components Engine has helped enable the fastest infrastructure deployments possible, streaming logs to your CLI, streaming metrics to the Dashboard, remote state storage and sharing, secrets injection, configuration validation, and much more. Please note, this is only about 25% of our vision for this effort; these are the table-stakes features. The real thought-provoking and groundbreaking developer productivity features are coming next...

Part of these features will enable greater security features than we've ever had in Serverless Framework. Features that involve making it easier to reduce the scope of your credentials, analyze/block everything passing through the Engine, rollback unsafe deployments, etc.

components's People

Contributors

ac360, agutoli, astuyve, brianneisler, bytekast, canmengfly, davidwells, dilantha111, donhui, eahefnawy, ebisbe, everlastingbugstopper, francismarcus, hkbarton, jfdesroches, laardee, medikoo, nikgraf, pavel910, pgrzesik, plfx, pmuens, raeesbhatti, rupakg, skierkowski, timqian, tinafangkunding, vkkis93, yugasun, zongumr


components's Issues

Serverless Variables - Improve Referencing Current Component Attributes

Problem

I tried using the AWS Lambda Component, however it currently hardcodes the current working directory as the folder to be packaged for the function, which is a bad practice. This forces the Lambda Component to package up EVERYTHING in my top-level Component, whether it's related to my function or not, resulting in a bloated, slow and potentially non-working Lambda Function. If users can't point to a specific directory where their Lambda code exists, then the Lambda Component isn't very usable.

Problem Example

  • Component A – Top-level component
    • Component B – Lambda code located here
      • Component C – Lambda component

I need to pass the Lambda Code in Component B via an input to Component C, the Lambda Component. Currently, the current working directory is hardcoded as the path of the Lambda code in the Lambda Component, meaning everything in Component A will be packaged into the function. This misses the Lambda code completely and instead fills it with unrelated files from Component A.

Potential Solution

There needs to be a way to pass in relative paths. If Serverless Variables had some built-in helpers, this would solve the problem well. Users can use them to address the above problem, like this:

Potential Solution Example

Component B

components:
  function:
    type: aws-lambda
    inputs:
      handler: index.code
      handlerRoot:  ${self.path}/code

${self} could be expanded upon to offer info about the current component, in a consistent way:

  • ${self.path} - Points to the path of the current component
  • ${self.inputs.foo} – Reference an input of the current component
  • ${self.name} – Reference the name of the current component
  • ${self.version} – Reference the version of the current component

AWS CloudFormation Stack component

Description

To help users integrate existing cloudformation stacks into a components deployment, we should implement an aws-cloudformation-stack component.

Here is the AWS JS SDK documentation for creating a CloudFormation Stack: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudFormation.html#createStack-property

inputTypes

stackName: string /* required */
capabilities: Array [ CAPABILITY_IAM | CAPABILITY_NAMED_IAM ],
clientRequestToken: string
disableRollback: boolean
enableTerminationProtection: boolean
notificationARNs: Array<string>,
onFailure: enum(DO_NOTHING | ROLLBACK | DELETE)
parameters: Array<{ ParameterKey: string, ParameterValue: string, ResolvedValue: string, UsePreviousValue: boolean }>
resourceTypes: Array<string>
roleArn: string
rollbackConfiguration: { MonitoringTimeInMinutes: integer, RollbackTriggers: Array<{ Arn: string /* required */, Type: string /* required */ }> }
stackPolicyBody: string
stackPolicyUrl: string
tags: Array<{ Key: string /* required */, Value: string /* required */ }>
templateBody: string
templateUrl: string
timeoutInMinutes: integer

outputTypes

stackId: string
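A sketch of how such a component might map the proposed inputs onto the SDK's createStack parameters (parameter names follow the AWS SDK for JavaScript; buildCreateStackParams and the input shape are hypothetical):

```javascript
// Sketch: translate component inputs into AWS.CloudFormation#createStack params.
// Only the params that were actually provided are included.
function buildCreateStackParams(inputs = {}) {
  const params = { StackName: inputs.stackName } // required
  if (inputs.capabilities) params.Capabilities = inputs.capabilities
  if (inputs.templateBody) params.TemplateBody = inputs.templateBody
  if (inputs.templateUrl) params.TemplateURL = inputs.templateUrl
  if (inputs.parameters) {
    params.Parameters = inputs.parameters.map(({ key, value }) => ({
      ParameterKey: key,
      ParameterValue: value
    }))
  }
  if (inputs.timeoutInMinutes) params.TimeoutInMinutes = inputs.timeoutInMinutes
  return params
}

// The component's deploy() could then call something like:
//   const res = await cloudFormation.createStack(params).promise()
//   this.state.stackId = res.StackId
```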

Overhaul Google Cloud Function component

Google Cloud Function component analysis

In #253 we've updated the GCF component so that it can be used to (re)deploy and remove Google Cloud Functions. While working on this we've found other issues / design flaws we need to tackle. Here's a quick writeup which describes improvements, problems and questions we've uncovered during this codebase analysis.


Implementation against new type system:

  • Add index.js file
  • Add serverless.yml file
  • Add package.json file
    • npm install archiver package
  • Add README.md file
    • Write documentation about inner-workings
    • Add reference to component in root README.md file
  • Implement getSinkConfig function
  • Implement pack function
    • Take care of proper shim handling
  • Implement deploy function
    • Implement create functionality
      • Implement bucket creation logic
    • Implement update functionality
      • Implement bucket cleanup logic
  • Implement remove function
    • Implement remove functionality
      • Implement "full" bucket cleanup logic
  • Write tests
    • index.test.js
    • ... (for all other files)

Todos (check after following implementation above):

  • Make it possible to switch from httpsTrigger to eventTrigger and vice versa
  • Make sure that component inputs support "all" possible configurations (according to GCF docs)
    • availableMemoryMb
    • entryPoint
    • description
    • environmentVariables
    • eventTrigger
    • httpsTrigger
    • labels
    • maxInstances
    • name
    • network
    • runtime
    • sourceArchiveUrl
    • sourceRepository
    • sourceUploadUrl
    • timeout
  • Revisit components outputs and figure out which ones make sense (remove others)
  • Await function creation (this might slow down function creation)
  • Replace keyfile from inputs with clientEmail and privateKey
  • Restructure / further refactor codebase to make it even easier to (unit) test

Questions:

  • Is it possible for a Google Cloud Function to switch from a httpsTrigger to an eventTrigger without re-deploying it?
  • Should we wait for the response of function creations / removals?
  • Should we use the storage bucket component programmatically behind the scenes or use our own, specialized implementation within our Google Cloud Function component?

Problems / Limitations:

  • One can only create a new bucket every 2 seconds (see: https://cloud.google.com/storage/quotas). How do we deal with that in a project with many Google Cloud Functions, where we create one bucket per function?

Problem with installation (Node v8.1.2)

Installation command:
npm install --global serverless-components

Didn't work with:
(1) Node v8.1.2
(2) NPM 5.6.0
(3) On MacOS Sierra

Solved after upgrading to Node v9.11.1, NPM 5.8.0.

Stack Trace:

components $ npm install --global serverless-components
/usr/local/bin/components -> /usr/local/lib/node_modules/serverless-components/bin/components

> [email protected] postinstall /usr/local/lib/node_modules/serverless-components
> node ./scripts/postinstall.js

/usr/local/lib/node_modules/serverless-components/src/utils/index.js:16
  ...components,
  ^^^

SyntaxError: Unexpected token ...
    at createScript (vm.js:74:10)
    at Object.runInThisContext (vm.js:116:10)
    at Module._compile (module.js:533:28)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:503:32)
    at tryModuleLoad (module.js:466:12)
    at Function.Module._load (module.js:458:3)
    at Module.require (module.js:513:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (/usr/local/lib/node_modules/serverless-components/scripts/postinstall.js:7:19)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] postinstall: `node ./scripts/postinstall.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:

AWS Components - Add credentials as inputs

Currently, the AWS Components do not have credentials as input types. They depend on the user to have AWS credentials as environment variables or the user to have their credentials stored locally in the root of their machine.

Enabling passing in credentials makes it explicit which credentials the component is using, which is especially helpful in a situation where the component has AWS components as children. It also enables components within the same project to be deployed to different AWS accounts.

Would love to see AWS credentials inputs on all AWS components as non-required input types.

Syntax Conventions Should Be Stated Clearly

Making sure everyone understands component syntax conventions will help collaboration.

Suggestion: List these in the README, like this:

Configuration

Use camel-case, even for acronyms.

Example:

inputs:
  name:
  endpointUrl: 

Component Types

Use lowercase with hyphens

Example:

type: aws-api-gateway

Component Aliases

Use camel-case, even for acronyms.

Example:

components:
  myApi:
    type: aws-api-gateway

Secrets and Encryption

Here are some thoughts about secrets and encryption that could be implemented to the Components. With encrypted state.json and encrypted inline secrets, working with version control would be easier.

By default, encryption could use Node.js native crypto module with aes-256-cbc algorithm. Optionally user could define AWS KMS, GCP KMS or similar service to handle the encryption and key. These optional methods could be implemented to the core or some kind of plugins or extensions.

Encrypted state.json

With the command components encrypt --state, the state file is encrypted; after that, it is encrypted every time before being written to disk or other storage.

example encrypted state.json

{
  "encrypted": "ckE+aBv0Ja9QK+GuvVMiq4+5O0OEk9+LFR1VBK+OUS4CXh02gFOJZyKF/qCFBOUcUYV6ho6ontyoQFBED6SjUbMywnS+gZ2wtyq7XMMzMhUVjtPMEO6cbalT4SRImXe5J7S0g66XnrFmIyMB8JWbkj3uqUvZgcXLSBOdAbHZZdx7z/ktXKAxtGjCu9NWK8eNFKpVV+5osnWqVMAoezTOYg=="
}

components decrypt --state would do the opposite.

Inline variables in serverless.yml

Using encrypted values in serverless.yml would allow it to be committed to version control with inline secrets.

Running components encrypt --variable some-secret-value would output the encrypted value, e.g. E5oQO0xd3nmkk/tKuaenjrMKS3XmCJIKa+wVgfzU7kw=. That value can then be used in serverless.yml via the !encrypted type.

type: my-service

inputs:
  secretToken: !encrypted |
    E5oQO0xd3nmkk/tKuaenjrMKS3XmCJIKa+wVgfzU7kw=

!encrypted type is implemented as a custom js-yaml type + schema in the Components codebase. The schema is passed to @serverless/utils readFile->parseFile, so that the decryption is done while parsing the yaml data to js object.

components decrypt --variable E5oQO0xd3nmkk/tKuaenjrMKS3XmCJIKa+wVgfzU7kw= decrypts the value and displays it as plain text.

There could be also an option to encrypt values that are defined in serverless.yml, e.g. components encrypt --path inputs.secretToken would replace the value of

inputs:
  secretToken: some-secret-value

with

inputs:
  secretToken: !encrypted E5oQO0xd3nmkk/tKuaenjrMKS3XmCJIKa+wVgfzU7kw=

Any thoughts?

@brianneisler @eahefnawy @pmuens @ac360

[WIP] Programmatic UX Improvements

Collecting thoughts on UX improvements around the programmatic usage of Components...

Current Implementation

  let iam      = await context.loadComponent('iam')
  let iamState = await iam.deploy(inputs, state, context, options)
  • User is responsible for passing state and context into component dependency.
  • Not sure how they get state for that dependency.

Proposal 1

  let myIamRole      = await context.loadComponent('myIamRole')
  let myIamRoleState = await myIamRole.deploy(inputs, options)
  • Order of arguments is switched to inputs, options, state, context
  • state and context are generated by framework middleware allowing them to be component-specific. The user doesn't need to worry about this.
  • Should load by alias, not by Component type, since there may be multiple instances.
  • Tricky part is the Framework needs to know where the dependency is in the hierarchy so that it can fetch the right state.

Middleware can be added like this:

Create a wrapper function over each component method that gets that component's state and prepares context specifically for that component.

component[method] = function(inputs, options) {
  // State
  let state = loadState(src)

  // Populate any serverless variables in inputs with any info that is currently available

  // Prep context with useful info that is component-specific
  let context = {
    alias:           alias,
    type:            type,
    version:         version,
    namespace:       namespace,
    action:          method,
    parentComponent: parentComponent,
    lastDeploy:      state.modified,
    load:            function(){},
    cliPrompt:       function(){},
    // etc...
  }

  return originalMethod(inputs, options, state, context)
}
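A runnable sketch of that wrapper idea, with state loading stubbed out. The component shape, loadState signature, and context fields here are assumptions for illustration, not the actual Framework implementation.

```javascript
// Wrap a component method so the author only sees
// (inputs, options, state, context); state and context are injected.
const wrapComponentMethod = (component, method, meta, loadState) => {
  const originalMethod = component[method]
  component[method] = function (inputs, options) {
    const state = loadState(meta.alias) // component-specific state
    const context = {
      alias: meta.alias,
      type: meta.type,
      action: method,
      lastDeploy: state.modified
    }
    return originalMethod(inputs, options, state, context)
  }
}

// A toy component whose deploy reads both injected arguments
const myIamRole = {
  deploy: (inputs, options, state, context) => ({
    deployedAs: context.alias,
    name: inputs.name,
    previous: state.name || null
  })
}

wrapComponentMethod(
  myIamRole,
  'deploy',
  { alias: 'myIamRole', type: 'iam' },
  () => ({ name: 'old-role', modified: 123 })
)
```

After wrapping, the caller invokes myIamRole.deploy(inputs, options) exactly as in Proposal 1.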

Add aws-sns-subscription component

Description

Add an aws-sns-subscription component for setting up a subscription on an SNS topic.

Inputs

  • topic: aws-sns-topic | arn
  • protocol: enum(http, https, email, email-json, sms, sqs, application, lambda)
  • endpoint: string
    For the http protocol, the endpoint is a URL beginning with "http://"
    For the https protocol, the endpoint is a URL beginning with "https://"
    For the email protocol, the endpoint is an email address
    For the email-json protocol, the endpoint is an email address
    For the sms protocol, the endpoint is a phone number of an SMS-enabled device
    For the sqs protocol, the endpoint is the ARN of an Amazon SQS queue
    For the application protocol, the endpoint is the EndpointArn of a mobile app and device.
    For the lambda protocol, the endpoint is the ARN of an AWS Lambda function.
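The per-protocol endpoint rules above could be enforced with a small validation helper. A sketch; the checks are illustrative approximations, not official AWS validation:

```javascript
// Map each supported protocol to a cheap endpoint sanity check
const endpointValidators = {
  http: (e) => e.startsWith('http://'),
  https: (e) => e.startsWith('https://'),
  email: (e) => /^[^@\s]+@[^@\s]+$/.test(e),
  'email-json': (e) => /^[^@\s]+@[^@\s]+$/.test(e),
  sms: (e) => /^\+?[0-9-]+$/.test(e),
  sqs: (e) => e.startsWith('arn:aws:sqs:'),
  application: (e) => e.startsWith('arn:aws:sns:'), // platform endpoint ARN
  lambda: (e) => e.startsWith('arn:aws:lambda:')
}

const validateSubscriptionInputs = ({ protocol, endpoint }) => {
  const validator = endpointValidators[protocol]
  if (!validator) throw new Error(`Unsupported protocol: ${protocol}`)
  if (!validator(endpoint)) {
    throw new Error(`Invalid endpoint "${endpoint}" for protocol "${protocol}"`)
  }
  return true
}
```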

Outputs

  • arn: AwsArn

Requirements

  • tests
  • documentation
  • examples

AWS VPC component

Description

Users may need to create a VPC in AWS, whether for their Lambda functions or for Fargate containers.

AWS SDK for JS documentation on CreateVPC call

inputTypes

cidrBlock - string
amazonProvidedIpv6CidrBlock - boolean
instanceTenancy - string. Choices: default, dedicated or host.

outputTypes

vpcId - string

Independently settable outputs

Description

Having outputs returned by every function in a component hasn't turned out to be the best implementation.

Outputs need to be persisted across calls to different functions and settable by any function, without requiring every function to return them.

Instead, we should introduce a function called setOutputs for setting the outputs. This method should accept an object, similar to saveState.

Requirements

  • Persist outputs in state across calls
  • Set outputs at the start, when state is loaded and the components are instantiated
  • Update the outputs when setOutputs is called
  • setOutputs should merge the given object with the existing outputs
  • setOutputs should return the context for chaining

Example

type: my-component
outputTypes:
  foo:
    type: string
    description: foo output
  bar:
    type: string
    description: bar output
async function deploy(inputs, context) {
  doSomething()
  context = context.setOutputs({
    foo: 'abc'
  })

  doSomethingElse()
  context = context.setOutputs({
    bar: 'def'
  })

  // no longer support returning outputs.
}
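A minimal sketch of a context satisfying the requirements above (seeded from state, merge semantics, chainable). The state shape and createContext name are assumptions for illustration:

```javascript
// Context factory: outputs live in state and are updated via setOutputs
const createContext = (state = {}) => {
  const context = {
    state,
    outputs: state.outputs || {}, // seeded from state when instantiated
    setOutputs(newOutputs) {
      // Merge the given object with the existing outputs...
      this.outputs = { ...this.outputs, ...newOutputs }
      // ...persist them in state across calls...
      this.state.outputs = this.outputs
      // ...and return the context for chaining
      return this
    }
  }
  return context
}
```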

Reformat types to be upper camel case

Description

Currently the component types are in kebab-case. This is limiting at the code level, since kebab-case names cannot be used as plain JavaScript property names.
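The proposed conversion is a one-liner; a sketch (the helper name is invented):

```javascript
// kebab-case type name -> UpperCamelCase, usable as a plain JS property
const toUpperCamelCase = (kebab) =>
  kebab
    .split('-')
    .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
    .join('')
```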

Cannot find module DTraceProviderBindings

I updated my brew packages on macOS and now it throws this error when executing the deploy. I have reinstalled node, npm, yarn and serverless-components, but the error still shows. I'm not sure what's broken or whether it's related to that package.

node: v10.1.0
npm: 5.60
yarn: 1.6.0
mac os : 10.13.4

⌈  ➜   ~/Development/decoralyte/products-rest-api
⌊☻ components deploy
(node:15393) ExperimentalWarning: The fs.promises API is experimental
{ Error: Cannot find module './build/Release/DTraceProviderBindings'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:571:15)
    at Function.Module._load (internal/modules/cjs/loader.js:497:25)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/node_modules/dtrace-provider/dtrace-provider.js:17:23)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/node_modules/bunyan/lib/bunyan.js:34:22)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/node_modules/dynamodb/lib/index.js:14:20)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/node_modules/dynamodb/index.js:3:18)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/index.js:3:16)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at getComponentFunctions (/Users/enricu/.config/yarn/global/node_modules/serverless-components/src/utils/components/getComponentFunctions.js:6:11)
    at getComponentsFromServerlessFile (/Users/enricu/.config/yarn/global/node_modules/serverless-components/src/utils/components/getComponentsFromServerlessFile.js:42:12)
    at process._tickCallback (internal/process/next_tick.js:68:7) code: 'MODULE_NOT_FOUND' }
{ Error: Cannot find module './build/default/DTraceProviderBindings'
    ... (same stack trace as above) ... code: 'MODULE_NOT_FOUND' }
{ Error: Cannot find module './build/Debug/DTraceProviderBindings'
    ... (same stack trace as above) ... code: 'MODULE_NOT_FOUND' }

aws-lambda runtime

Runtime is always node6

flickrPhotosFunction:
    type: aws-lambda
    inputs:
      name: enricbgFlickrPhotos
      runtime: nodejs8.10
      memory: 512
      timeout: 10
      handler: code/photos.handler

Despite running components deploy, the runtime on my lambdas is always the older version of Node (v6). I have to set it manually in the AWS console, and after each deploy it resets to v6.

component: netlify-site keys issue

I'm not sure whether it's the desired behaviour, but each time the netlify-site component is deployed it generates a new key in the GitHub account.

I'd expect it to reuse the first one created.

Custom typing system

Description

There are cases where we need to support custom types and custom validation rules for types.

raml-validate supports specifying custom types as well as specifying custom rules https://github.com/mulesoft-labs/node-raml-validate

However, it might make sense to swap this library out for a more robust one, like https://www.npmjs.com/package/raml-typesystem

We should add support for setting up custom types in YAML. This work is based on the RAML types spec https://github.com/raml-org/raml-spec/blob/master/versions/raml-10/raml-10.md#raml-data-types

Example

type: my-component

inputTypes:
  foo:
    type: Foo
    default:
      bar: 123

types:
  Foo:
    type: object
    properties:
      bar: string 

Getting started - slack, docs, dependency resolution, custom components, production

My main issue with the Serverless Framework is that .yml files get out of hand. In theory, serverless components should be able to solve this. I have three questions to which I was unable to find the answer:

  1. The readme says that the slack channel is public, but when attempting to join it requires a @serverless.com email address.

  2. Is there a reason why the serverless docs do not mention anything regarding components at https://serverless.com/framework/docs/getting-started/ ?

  3. Do serverless components use stack sets internally (when using AWS)? How do they orchestrate the order of deletion / creation and resolve dependencies? E.g. if I update component B's version inside component A, and component A depends on component B, and component B fails to upgrade resulting in a rollback, does component A roll back too? Is it possible for two components to be in an invalid state?

In the README I read:

However the framework ensures that your state file always reflects the correct state of your infrastructure setup (even if something goes wrong during deployment / removal).

How can a local state file work reliably when a team of engineers work together? What if 2 developers deploy at the same time? Will they have different state files? The actual state is on the cloud, not on a local development machine. Do you put the state file in version control?

  4. It seems like provisioning is put back in the hands of the developer:

From your README:

const deploy = (inputs, context) => {
  // lambda provisioning logic
  const res = doLambdaDeploy()

  // return outputs
  return {
    arn: res.FunctionArn,
    name: res.FunctionName
  }
}

However, aren't you re-inventing Terraform here? Also, isn't the point of using CloudFormation that custom provisioning logic isn't necessary? If I want to use AWS, can I still rely on CloudFormation to provision my lambda and return the ARN as an output type?

  5. Custom components & dependency resolution

if I create 2 components, and one component depends on the other:

type: someComponent

components:
  mySpecialThing:
    type: my-custom-component

How is someComponent able to resolve the location of my-custom-component, if I have not defined where it is? Is there a package.json file or something where you specify it as a dependency? Does it read from node_modules or similar?

rest-api: Deploying multiple http verbs for the same route deploys only the last verb

Deploying multiple HTTP verbs for the same route deploys only the last verb. In the example below, only GET /products will be available.
Taken from the example https://serverless.com/blog/how-create-rest-api-serverless-components/

# ... snip

components:
  # ...snip
  productsApi:
    type: rest-api
    inputs:
      gateway: aws-apigateway
      routes:
        /products:
          post:
            function: ${createProduct}
            cors: true
          get:
            function: ${listProducts}
            cors: true         
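The fix presumably lives wherever the routes map is converted into gateway resources: methods for the same path must be accumulated per verb rather than overwritten. A hypothetical sketch (toSwaggerPaths and the output shape are invented for illustration):

```javascript
// Build one paths entry per route, assigning per-method so a later
// verb never clobbers an earlier one on the same path.
const toSwaggerPaths = (routes) => {
  const paths = {}
  for (const [path, methods] of Object.entries(routes)) {
    paths[path] = paths[path] || {}
    for (const [method, conf] of Object.entries(methods)) {
      paths[path][method] = { 'x-function': conf.function, cors: !!conf.cors }
    }
  }
  return paths
}
```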

run error

Description

node: v10.0.0
npm: 5.6.0

Additional Data

{ Error: ENOENT: no such file or directory, open '/usr/local/lib/node_modules/serverless-components/registry/[email protected]/serverless.yml'
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '/usr/local/lib/node_modules/serverless-components/registry/[email protected]/serverless.yml' }

Add aws-sqs-fifo-queue component

Description

Add an aws-sqs-fifo-queue component for setting up an AWS SQS Fifo queue.

Inputs

  • name: string (max: 75 chars, [a-zA-Z0-9-_]; the .fifo suffix is added automatically)
  • policy: AWSPolicyDocument (optional)
  • delaySeconds: int (0-900, optional)
  • maximumMessageSize: int (1024-262144, optional, default: 262144)
  • receiveMessageWaitTimeSeconds: int (0-20, optional, default: 0)
  • redrivePolicy: RedrivePolicy (optional)
  • visibilityTimeout: int (0-43200, optional, default: 30)
  • kmsMasterKey: MasterKeyId | MasterKey
  • kmsDataKeyReusePeriodSeconds: int (60-86400, optional, default: 300)
  • contentBasedDeduplication: boolean
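The name handling above (character set, length limit, automatic .fifo suffix) could be sketched as follows; normalizeFifoQueueName is an invented helper name:

```javascript
// Validate the name input per the constraints above, then append the
// mandatory ".fifo" suffix (the input itself excludes it, since "." is
// not in the allowed character set).
const normalizeFifoQueueName = (name) => {
  if (!/^[a-zA-Z0-9-_]+$/.test(name) || name.length > 75) {
    throw new Error(`Invalid FIFO queue name: ${name}`)
  }
  return `${name}.fifo`
}
```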

Outputs

  • arn: AwsArn
  • url: URL

Requirements

  • tests
  • documentation
  • examples

Add aws-cloudwatch-metric-alarm component

Description

Add an aws-cloudwatch-metric-alarm component for setting up an AWS CloudWatch metric alarm.

Inputs

  • name: string

  • description: string

  • actionsEnabled: boolean

  • okActions: Array<string>

  • alarmActions: Array<string>

  • insufficientDataActions: Array<string>

  • metricName: string

  • namespace: string

  • statistic: string

  • extendedStatistic: string

  • dimensions: Array<{name: string (min:1, max: 255), value: string (min:1, max: 255)}>

  • period: int (10, 30, 60*x, max: 86400)

  • unit: enum
    "Seconds"
    "Microseconds"
    "Milliseconds"
    "Bytes"
    "Kilobytes"
    "Megabytes"
    "Gigabytes"
    "Terabytes"
    "Bits"
    "Kilobits"
    "Megabits"
    "Gigabits"
    "Terabits"
    "Percent"
    "Count"
    "Bytes/Second"
    "Kilobytes/Second"
    "Megabytes/Second"
    "Gigabytes/Second"
    "Terabytes/Second"
    "Bits/Second"
    "Kilobits/Second"
    "Megabits/Second"
    "Gigabits/Second"
    "Terabits/Second"
    "Count/Second"
    "None"

  • evaluationPeriods: int

  • datapointsToAlarm: int

  • threshold: float

  • comparisonOperator: enum
    "GreaterThanOrEqualToThreshold"
    "GreaterThanThreshold"
    "LessThanThreshold"
    "LessThanOrEqualToThreshold"

  • treatMissingData: enum (breaching, notBreaching, ignore, missing)

  • evaluateLowSampleCountPercentile: enum (evaluate, ignore)

Outputs

  • name: string

Requirements

  • tests
  • documentation
  • examples

Add support for an authorizer property to the rest-api component

Description

It would be great to have support for authorizers in the rest-api component. Our rest-api component dynamically creates either an aws-apigateway or an event-gateway under the hood, depending upon the gateway input.

https://github.com/serverless/components/blob/master/registry/rest-api/index.js#L162-L166

For now, we can only add support for this authorizer property to the api-gateway portion until we add support for authorizers to the event-gateway.

The aws-apigateway component is built using the importRestApi method in the sdk which uses swagger to define the api.

https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/APIGateway.html#importRestApi-property

The component takes in the inputs and converts them into a swagger definition.

https://github.com/serverless/components/blob/master/registry/aws-apigateway/index.js#L24-L27

The implementation would use the api gateway swagger extensions for adding authorizers to an api.

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions-authorizer.html

An example of what the final implementation would look like to use.

type: my-api
version: 0.0.1

components:
  createData:
    type: aws-lambda
    inputs:
      handler: index.create
      root: ${self.path}/code
  myAuthorizer:
    type: aws-lambda
    inputs:
      handler: index.authorizer
      root: ${self.path}/code

  myApi:
    type: rest-api
    inputs:
      gateway: aws-apigateway
      routes:
        /create:
          post:
            function: ${createData}
            authorizer: ${myAuthorizer}  # this can also be an arn or another type of authorizer
            cors: true
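Under the hood, the component would translate that authorizer input into an API Gateway Swagger security definition. A sketch of what it might emit, following the x-amazon-apigateway-authorizer extension; the helper name, TTL, and ARN are illustrative placeholders:

```javascript
// Build the securityDefinitions entry for a Lambda TOKEN authorizer
const buildAuthorizerSecurityDefinition = (name, lambdaInvokeArn) => ({
  [name]: {
    type: 'apiKey', // required by Swagger for custom authorizers
    name: 'Authorization',
    in: 'header',
    'x-amazon-apigateway-authtype': 'custom',
    'x-amazon-apigateway-authorizer': {
      type: 'token',
      authorizerUri: lambdaInvokeArn,
      authorizerResultTtlInSeconds: 300
    }
  }
})
```

Each protected route would then reference the definition by name in its security array.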

Add an aws-sns-topic component

Description

Implement a basic aws-sns-topic component.

Inputs

Outputs

  • arn: string

Types

SNSDeliveryPolicy
example pulled from here https://docs.aws.amazon.com/sns/latest/api/API_SetTopicAttributes.html

{ 
  "http": {
    "defaultHealthyRetryPolicy": { 
       "minDelayTarget": <int>,
       "maxDelayTarget": <int>,
       "numRetries": <int>, 
       "numMaxDelayRetries": <int>, 
       "backoffFunction": "<linear|arithmetic|geometric|exponential>" 
    }, 
    "disableSubscriptionOverrides": <boolean>, 
    "defaultThrottlePolicy": { 
      "maxReceivesPerSecond": <int> 
    }
  } 
}

AWSTopicPolicyDocument

  • TODO: need to figure out what this looks like

Requirements

  • tests
  • documentation
  • examples

Return state, not outputs

This is a minor, minor suggestion...

One aspect of this implementation that makes it simple is that components only return state. To ensure everyone is aware of this and how simple it is, the initial components we make should consider not introducing an outputs concept/variable and instead should just return state or updatedState. The initial components will serve as examples for others looking to author their own components.

Support a `stage` option on deploy

Description

Similar to the framework, components need to support the concept of stages. Unlike the framework, this should not be supported as a config option in the YAML, since the point of components is reusability and a hard-coded stage would prevent components from being reusable.

Gitlab Component

Description

I think it would be great to have GitLab and Netlify support for static websites.

Move version declarations out of component type property

Problem

Our current approach to version declarations has been to add them inline to the component usage like this...

components:
  myComponent:
     type: [email protected]

This has a number of issues associated with it.

  1. Versions suddenly need to be managed in multiple places when using the same component multiple times, and it becomes very easy to make a version mistake.

Example

components:
  foo1:
     type: [email protected]
  foo2:
    type: [email protected]
  foo3: 
    type: [email protected]

If I upgrade the component, it's possible to make a mistake and accidentally miss one of the component versions.

Example

components:
  foo1:
     type: [email protected]
  foo2:
    type: [email protected]
  foo3: 
    type: [email protected]   # whoops, forgot to update this
  2. If I'm using components programmatically within a component, I have no mechanism for statically declaring which components I'm using. This means components cannot download the necessary dependencies ahead of time; they actually have to download at runtime, which is suboptimal.

Example

// index.js
function deploy(inputs, context) {
  const foo = context.load('[email protected]', inputs)  // no way to discover this without running the actual code
}

Solution

This all results in a situation where dependency and version management is more difficult than it should be.

My suggestion for fixing this is to declare component dependencies separately from their actual usage.

type: my-component

dependencies:
  foo: ^1.0.0

components:
  myFoo:
    type: foo # this uses the version supplied in the dependencies above

This also gives us a mechanism for declaring components that are used programmatically

type: my-component

dependencies:
  bar: ^1.0.0 

# no declaration of components property

// index.js
function deploy(inputs, context) {
  const bar = context.load('bar', inputs)  // uses the version declared in serverless.yml dependencies
}
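Resolving an unversioned name against the dependencies map could be sketched like this. The registry lookup is stubbed, and createLoader plus the naive caret-range handling are invented for illustration (real code would use a semver library):

```javascript
// context.load('bar') resolves the version from the serverless.yml
// dependencies map instead of requiring it inline.
const createLoader = (dependencies, registry) => (name) => {
  const range = dependencies[name]
  if (!range) throw new Error(`"${name}" is not declared in dependencies`)
  // Naive resolution: pick the highest registry version whose major
  // matches the caret range's major version
  const major = range.replace('^', '').split('.')[0]
  const version = registry[name]
    .filter((v) => v.split('.')[0] === major)
    .sort()
    .pop()
  return { name, version }
}
```

This also makes undeclared programmatic loads fail fast, which is exactly the static-declaration guarantee the proposal is after.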

How to reuse a component in other projects?

Hi.

Thanks for working such a promising project. I've taken a look and have some questions regarding the example project retail-app :

  1. Can I use e.g. the productsDb component in other projects (whether made with the Serverless Framework or Serverless Components)? If so, how should I refer to productsDb? I mean, if I just use its ARN, can I use it inside my other projects?

  2. Can I modify IAM permissions to components? If so, how?

Feature Request: Auto-load .env file

Could we auto-load .env files in the root of the parent component? This would allow users to specify environment variables in their serverless.yml without having to save them to their bash shell.
Otherwise, the user has to create custom deployment logic.

Update the aws-s3-bucket component to mirror the AWS sdk

Description

The current implementation of aws-s3-bucket expects a name input. This does not match the Bucket property that AWS expects in its API. We are also missing support for many of the additional parameters that the SDK supports.

https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createBucket-property

params (Object)
  ACL — (String) The canned ACL to apply to the bucket. Possible values include:
    "private"
    "public-read"
    "public-read-write"
    "authenticated-read"
  Bucket — (String)
  CreateBucketConfiguration — (map)
  LocationConstraint — (String) Specifies the region where the bucket will be created. If you don't specify a region, the bucket will be created in US Standard. Possible values include:
    "EU"
    "eu-west-1"
    "us-west-1"
    "us-west-2"
    "ap-south-1"
    "ap-southeast-1"
    "ap-southeast-2"
    "ap-northeast-1"
    "sa-east-1"
    "cn-north-1"
    "eu-central-1"
  GrantFullControl — (String) Allows grantee the read, write, read ACP, and write ACP permissions on the bucket.
  GrantRead — (String) Allows grantee to list the objects in the bucket.
  GrantReadACP — (String) Allows grantee to read the bucket ACL.
  GrantWrite — (String) Allows grantee to create, overwrite, and delete any object in the bucket.
  GrantWriteACP — (String) Allows grantee to write the ACL for the applicable bucket.

In addition to supporting the above properties, we should also support blind pass-through of the "rest" of the inputs so that we have a fallback when our implementation drifts from the SDK.

const createBucket = async ({ name, ...rest }) => S3.createBucket({ Bucket: name, ...rest }).promise()

Something we will need to consider here is that all of our inputs use lower camel case, whereas most of AWS uses upper camel case. Perhaps we should also do an automatic conversion from lower to upper.

https://github.com/SamVerschueren/uppercamelcase

const createBucket = async ({ name, ...rest }) => {
  const params = reduceObjIndexed(
    (accum, value, key) => assoc(upperCamelCase(key), value, accum),
    {},
    rest
  )
  return S3.createBucket({ Bucket: name, ...params }).promise()
}
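The same key conversion without the reduceObjIndexed/assoc helpers is a short, dependency-free reduce; a sketch (toAwsParams is an invented name):

```javascript
// lowerCamelCase input keys -> UpperCamelCase SDK params, values untouched
const toAwsParams = (inputs) =>
  Object.entries(inputs).reduce((accum, [key, value]) => {
    accum[key.charAt(0).toUpperCase() + key.slice(1)] = value
    return accum
  }, {})
```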

Can upload zipped lambda package, but deploying as component throws RequestEntityTooLargeException on the CreateFunction operation.

Description

Strangely, I can add a lambda function to AWS manually, but using that function as a component in an app raises the following when running components deploy

RequestEntityTooLargeException: Request must be smaller than 69905067 bytes for the CreateFunction operation

Here is a link to the component's yml

The zipped serverless package is well below the CreateFunction limit of 69905067 bytes, so I was wondering if this was a components issue.

Additional Data

node: v10.5.0

Installation failing on postinstall script (Node v10.0.0 and NPM 6.0)

Installing via npm with npm i -g serverless-components fails due to the dtrace-provider module failing to build.

It seems like there could be conflicting dependencies for dtrace-provider. I also did some digging that suggested dtrace-provider may be looking in the wrong global node_modules folder (possibly an nvm issue?).

In any case, it doesn't seem like a problem specifically with serverless-components, but I'm curious to see if anyone has run into this issue and might be able to shed some light on it.

Stack trace follows:

node ./scripts/postinstall.js

npm WARN deprecated [email protected]: Use uuid module instead

  • @serverless-components/[email protected]
    added 12 packages from 11 contributors and updated 1 package in 55.13s

[email protected] install /Users/manafount/.nvm/versions/node/v10.0.0/lib/node_modules/serverless-components/registry/aws-dynamodb/node_modules/dtrace-provider
node scripts/install.js

---------------░░░░⸩ ⠏ postinstall: sill install executeActions
Building dtrace-provider failed with exit code 1 and signal 0
re-run install with environment variable V set to see the build output
---------------░░░░⸩ ⠏ postinstall: sill install executeActions

  • @serverless-components/[email protected] executeActions
    added 47 packages from 108 contributors and updated 1 package in 62.828s
  • @serverless-components/[email protected]
    added 16 packages from 70 contributors and updated 1 package in 64.346s
  • @serverless-components/[email protected]
    added 13 packages from 12 contributors and updated 1 package in 65.034s
  • @serverless-components/[email protected]
    added 1 package from 4 contributors and updated 1 package in 65.621s
  • @serverless/[email protected]
    updated 1 package in 65.717s
  • @serverless-components/[email protected]
    added 1 package from 5 contributors and updated 1 package in 66.089s
  • @serverless-components/[email protected]
    added 54 packages from 86 contributors and updated 1 package in 68.07s
  • @serverless-components/[email protected]
    added 1 package from 4 contributors and updated 1 package in 68.408s
  • @serverless-components/[email protected] install executeActions
    added 12 packages from 9 contributors and updated 1 package in 69.391s
  • [email protected]
    added 36 packages from 36 contributors and updated 1 package in 70.681s
  • [email protected]
    updated 3 packages in 81.275s

Google Cloud Storage Bucket component

Description

Implement a google-cloud-storage-bucket component. Specifically, this component will be used to set up and manage a Google Cloud Storage bucket. This is considered a "low level" component and should adhere as closely as possible to Google's API for inserting a new bucket.

API

The component inputTypes should match the parameters which can be defined in the API's insert operation.

The component's deploy and remove methods should behave as idempotent operations.

For deploy, if a bucket with the given name already exists, the component should attempt to update the bucket with the given inputs. If the bucket cannot be updated, an error should be thrown.

For remove, if a bucket with the given name does not exist, an error should not be thrown; instead, the error should be swallowed and execution allowed to proceed.
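
The idempotent remove behavior described above could be sketched as follows. `makeRemove` is an illustrative factory, and the injected `storage` object is a hypothetical stand-in for a real Cloud Storage client; neither is an existing component API:

```javascript
// Sketch of an idempotent remove, with the storage client injected so it
// can be stubbed in tests.
const makeRemove = (storage) => async ({ name }, context) => {
  try {
    await storage.bucket(name).delete()
  } catch (error) {
    // A missing bucket is fine: swallow "not found" so repeated
    // removals stay idempotent, but surface anything else.
    if (error.code !== 404) throw error
  }
  context.saveState({})
  return {}
}
```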

inputTypes

name - string

The name of the bucket. Must adhere to the bucket naming conventions

project - string

A valid API project identifier.

predefinedAcl - string (optional)

Apply a predefined set of access controls to this bucket.
Acceptable values are:

  • "authenticatedRead": Project team owners get OWNER access, and allAuthenticatedUsers get READER access.
  • "private": Project team owners get OWNER access.
  • "projectPrivate": Project team members get access according to their roles.
  • "publicRead": Project team owners get OWNER access, and allUsers get READER access.
  • "publicReadWrite": Project team owners get OWNER access, and allUsers get WRITER access.
predefinedDefaultObjectAcl - string (optional)

Apply a predefined set of default object access controls to this bucket.
Acceptable values are:

  • "authenticatedRead": Object owner gets OWNER access, and allAuthenticatedUsers get READER access.
  • "bucketOwnerFullControl": Object owner gets OWNER access, and project team owners get OWNER access.
  • "bucketOwnerRead": Object owner gets OWNER access, and project team owners get READER access.
  • "private": Object owner gets OWNER access.
  • "projectPrivate": Object owner gets OWNER access, and project team members get access according to their roles.
  • "publicRead": Object owner gets OWNER access, and allUsers get READER access.
projection - string (optional)

Set of properties to return. Defaults to noAcl, unless the bucket resource specifies acl or defaultObjectAcl properties, in which case it defaults to full.
Acceptable values are:

  • "full": Include all properties.
  • "noAcl": Omit owner, acl and defaultObjectAcl properties.
userProject - string (optional)

The project to be billed for this request.

acl - array (optional)

Access controls on the bucket, containing one or more bucketAccessControls Resources.

billing - object (optional)

The bucket's billing configuration.

billing.requesterPays - boolean (optional)

When set to true, Requester Pays is enabled for this bucket.

cors - array (optional)

The bucket's Cross-Origin Resource Sharing (CORS) configuration.

cors[].maxAgeSeconds - integer (optional)

The value, in seconds, to return in the Access-Control-Max-Age header used in preflight responses.

cors[].method - array<string> (optional)

The list of HTTP methods on which to include CORS response headers (e.g. GET, OPTIONS, POST). Note: "*" is permitted in the list of methods, and means "any method".

cors[].origin - array<string> (optional)

The list of Origins eligible to receive CORS response headers. Note: "*" is permitted in the list of origins, and means "any Origin".

cors[].responseHeader - array<string> (optional)

The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains.

defaultObjectAcl - array<Object> (optional)

Default access controls to apply to new objects when no ACL is provided.

defaultObjectAcl[].entity - string (optional)

The entity holding the permission, in one of the following forms: user-userId, user-email, group-groupId, group-email, domain-domain, project-team-projectId, allUsers, allAuthenticatedUsers. Examples: the user [email protected] would be [email protected]; the group [email protected] would be [email protected]; to refer to all members of the G Suite for Business domain example.com, the entity would be domain-example.com.

defaultObjectAcl[].role - string (optional)

The access permission for the entity. 
Acceptable values are:

  • "OWNER"
  • "READER"
encryption - object (optional)

Encryption configuration for a bucket.

encryption.defaultKmsKeyName (optional) - string

A Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified.

labels - object (optional)

User-provided labels, in key/value pairs.

labels.(key) - string (optional)

An individual label entry.

lifecycle - object (optional)

The bucket's lifecycle configuration. See lifecycle management for more information.

location - string (optional)

The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US. See the developer's guide for the authoritative list.

logging - object (optional)

The bucket's logging configuration, which defines the destination bucket and optional name prefix for the current bucket's logs.

logging.logBucket - string (optional)

The destination bucket where the current bucket's logs should be placed.

logging.logObjectPrefix - string (optional)

A prefix for log object names.

storageClass - string (optional)

The bucket's default storage class, used whenever no storageClass is specified for a newly-created object. This defines how objects in the bucket are stored and determines the SLA and the cost of storage.
Acceptable Values

  • MULTI_REGIONAL
  • REGIONAL
  • STANDARD
  • NEARLINE
  • COLDLINE
  • DURABLE_REDUCED_AVAILABILITY
    If this value is not specified when the bucket is created, it will default to STANDARD. For more information, see storage classes.
versioning - object (optional)

The bucket's versioning configuration.

versioning.enabled - boolean (optional)

While set to true, versioning is fully enabled for this bucket.

website - object (optional)

The bucket's website configuration, controlling how the service behaves when accessing bucket contents as a web site. See the Static Website Examples for more information.

website.mainPageSuffix - string (optional)

If the requested object path is missing, the service will ensure the path has a trailing '/', append this suffix, and attempt to retrieve the resulting object. This allows the creation of index.html objects to represent directory pages.

website.notFoundPage - string (optional)

If the requested object path is missing, and any mainPageSuffix object is missing, if applicable, the service will return the named object from this bucket as the content for a 404 Not Found result.

outputTypes

acl - array

Access controls on the bucket, containing one or more bucketAccessControls Resources.

billing - object

The bucket's billing configuration.

billing.requesterPays - boolean

When set to true, Requester Pays is enabled for this bucket.

cors - array<Object>

The bucket's Cross-Origin Resource Sharing (CORS) configuration.

cors[].maxAgeSeconds - integer

The value, in seconds, to return in the Access-Control-Max-Age header used in preflight responses.

cors[].method - array<string>

The list of HTTP methods on which to include CORS response headers (e.g. GET, OPTIONS, POST). Note: "*" is permitted in the list of methods, and means "any method".

cors[].origin - array<string>

The list of Origins eligible to receive CORS response headers. Note: "*" is permitted in the list of origins, and means "any Origin".

cors[].responseHeader - array<string>

The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains.

defaultObjectAcl - array<Object>

Default access controls to apply to new objects when no ACL is provided.

defaultObjectAcl[].bucket - string

The name of the bucket.

defaultObjectAcl[].domain - string

The domain associated with the entity, if any.
 

defaultObjectAcl[].email - string

The email address associated with the entity, if any.

defaultObjectAcl[].entity - string

The entity holding the permission, in one of the following forms: user-userId, user-email, group-groupId, group-email, domain-domain, project-team-projectId, allUsers, allAuthenticatedUsers. Examples: the user [email protected] would be [email protected]; the group [email protected] would be [email protected]; to refer to all members of the G Suite for Business domain example.com, the entity would be domain-example.com.

defaultObjectAcl[].entityId - string

The ID for the entity, if any.
 

defaultObjectAcl[].etag - string

HTTP 1.1 Entity tag for the access-control entry.

defaultObjectAcl[].generation - long

The content generation of the object, if applied to an object.
 

defaultObjectAcl[].id - string

The ID of the access-control entry.
 

defaultObjectAcl[].kind - string

The kind of item this is. For object access control entries, this is always storage#objectAccessControl.  

defaultObjectAcl[].object - string

The name of the object, if applied to an object.
 

defaultObjectAcl[].projectTeam - object

The project team associated with the entity, if any.
 

defaultObjectAcl[].projectTeam.projectNumber - string

The project number.
 

defaultObjectAcl[].projectTeam.team - string

The team. Acceptable values are: "editors", "owners", "viewers".

defaultObjectAcl[].role - string

The access permission for the entity. 
Acceptable values are:

  • "OWNER"
  • "READER"
defaultObjectAcl[].selfLink - string

The link to this access-control entry.

encryption - object

Encryption configuration for a bucket.

encryption.defaultKmsKeyName (optional) - string

A Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified

etag - string

HTTP 1.1 Entity tag for the bucket.  

id - string

The ID of the bucket. For buckets, the id and name properties are the same.
 

kind - string

The kind of item this is. For buckets, this is always storage#bucket.

labels - object

User-provided labels, in key/value pairs.

labels.(key) - string

An individual label entry.

lifecycle - object (optional)

The bucket's lifecycle configuration. See lifecycle management for more information.

lifecycle.rule - array<Object>

A lifecycle management rule, which is made of an action to take and the condition(s) under which the action will be taken.

lifecycle.rule[].action - object

The action to take.
 

lifecycle.rule[].action.storageClass - string

Target storage class. Required iff the type of the action is SetStorageClass.
 

lifecycle.rule[].action.type - string

Type of the action. Currently, only Delete and SetStorageClass are supported. Acceptable values are: "Delete", "SetStorageClass".
 

lifecycle.rule[].condition - object

The condition(s) under which the action will be taken.

lifecycle.rule[].condition.age - integer

Age of an object (in days). This condition is satisfied when an object reaches the specified age.

lifecycle.rule[].condition.createdBefore - date

A date in RFC 3339 format with only the date part (for instance, "2013-01-15"). This condition is satisfied when an object is created before midnight of the specified date in UTC.

lifecycle.rule[].condition.isLive - boolean

Relevant only for versioned objects. If the value is true, this condition matches live objects; if the value is false, it matches archived objects.

lifecycle.rule[].condition.matchesStorageClass - array<string>

Objects having any of the storage classes specified by this condition will be matched.
Values include 

  • MULTI_REGIONAL
  • REGIONAL
  • NEARLINE
  • COLDLINE
  • STANDARD
  • DURABLE_REDUCED_AVAILABILITY.
     
lifecycle.rule[].condition.numNewerVersions - integer

Relevant only for versioned objects. If the value is N, this condition is satisfied when there are at least N versions (including the live version) newer than this version of the object.

location - string

The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US. See the developer's guide for the authoritative list.

logging - object

The bucket's logging configuration, which defines the destination bucket and optional name prefix for the current bucket's logs.

logging.logBucket - string

The destination bucket where the current bucket's logs should be placed.

logging.logObjectPrefix - string

A prefix for log object names.

metageneration - long

The metadata generation of this bucket.

name - string

The name of the bucket.
 

owner - object

The owner of the bucket. This is always the project team's owner group.
 

owner.entity - string

The entity, in the form project-owner-projectId.

owner.entityId - string

The ID for the entity.
 

projectNumber - unsigned long

The project number of the project the bucket belongs to.
 

selfLink - string

The URI of this bucket.
 

storageClass - string (optional)

The bucket's default storage class, used whenever no storageClass is specified for a newly-created object. This defines how objects in the bucket are stored and determines the SLA and the cost of storage.
Acceptable Values

  • MULTI_REGIONAL
  • REGIONAL
  • STANDARD
  • NEARLINE
  • COLDLINE
  • DURABLE_REDUCED_AVAILABILITY
    If this value is not specified when the bucket is created, it will default to STANDARD. For more information, see storage classes.
timeCreated - datetime

The creation time of the bucket in RFC 3339 format.
 

updated - datetime

The modification time of the bucket in RFC 3339 format.

versioning - object (optional)

The bucket's versioning configuration.

versioning.enabled - boolean (optional)

While set to true, versioning is fully enabled for this bucket.

website - object (optional)

The bucket's website configuration, controlling how the service behaves when accessing bucket contents as a web site. See the Static Website Examples for more information.

website.mainPageSuffix - string (optional)

If the requested object path is missing, the service will ensure the path has a trailing '/', append this suffix, and attempt to retrieve the resulting object. This allows the creation of index.html objects to represent directory pages.

website.notFoundPage - string (optional)

If the requested object path is missing, and any mainPageSuffix object is missing, if applicable, the service will return the named object from this bucket as the content for a 404 Not Found result.

minimal serverless.yml example of using this component

type: my-application
version: 0.0.2

components:
  myGoogleCloudStorageBucket:
    type: google-cloud-storage-bucket
    inputs:
      name: my-unique-bucket
      project: my-project

Implementation

  • tests written
  • inputs and outputs documented
  • example of using component added to examples folder

Component Diff'ing

Component authors need a way to understand what inputs have changed by users, so they can write better deployment logic.

Creating this thread to start that conversation.
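
As one possible starting point for that conversation, a minimal input diff could look like the sketch below. `diffInputs` is an illustrative helper, not an existing framework API:

```javascript
// Sketch: compute which inputs changed between the previous deployment
// (as recorded in state) and the current one, so component authors can
// decide whether an update is needed.
const diffInputs = (prevInputs = {}, inputs = {}) => {
  const keys = new Set([...Object.keys(prevInputs), ...Object.keys(inputs)])
  return [...keys].filter(
    (key) => JSON.stringify(prevInputs[key]) !== JSON.stringify(inputs[key])
  )
}

// e.g. diffInputs({ name: 'a', acl: 'private' }, { name: 'b', acl: 'private' })
// returns ['name']
```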

Google Cloud Bigtable component

Description

Implement a google-cloud-bigtable component (Product link).

API

The component inputTypes should match the parameters which can be defined in the API's request body.

  • Where is the API endpoint for this?

serverless.yml

# TBD

Create set of generic component tests

Description

There are some things that all components should adhere to, and we need to ensure those don't regress.

It would be great to have a general set of tests that get run against every component during testing.

These tests should cover

idempotent removals

  • removing more than once shouldn't cause an error
  • removing when the state says that there's a resource but it's actually gone from the provider shouldn't cause an error
  • removing when the state says there's no resource but one actually exists should remove the resource?

idempotent deploys

  • deploying more than once with no changes shouldn't cause an error
  • deploying something when the state doesn't think that the resource exists
  • deploying something when the state thinks that the resource exists but it doesn't should result in the resource being created

inputs

  • expected inputs should throw an error when the type doesn't match
  • any unexpected inputs should be passed through to the deploy method

outputs

  • deploying should always result in the expected outputs listed in outputTypes

what else?

  • anything?
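
The idempotent-deploy check above could be sketched as a generic test helper that runs against any component. `checkIdempotentDeploy` is an illustrative name, not an existing API:

```javascript
// Sketch: deploying twice with identical inputs should not throw, and
// should produce identical outputs.
const checkIdempotentDeploy = async (component, inputs, context) => {
  const first = await component.deploy(inputs, context)
  const second = await component.deploy(inputs, context)
  if (JSON.stringify(first) !== JSON.stringify(second)) {
    throw new Error('deploy is not idempotent: outputs differ between runs')
  }
  return second
}
```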

Input type aliases

Description

It would be convenient to be able to have alternate names for inputs that are aliases of the primary name.

type: aws-s3-bucket

inputTypes:
  bucket:
    type: string
    alias: 
      - 'name'

Which could be used like this...

type: my-component

components:
  myBucket:
    type: aws-s3-bucket
    inputs:
      bucket: my-bucket

or like this...

type: my-component

components:
  myBucket:
    type: aws-s3-bucket
    inputs:
      name: my-bucket

This would then show up in the inputs under the primary input name:

const deploy = (inputs, context) => {
  console.log(inputs) // { bucket: 'my-bucket' }
}
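
The alias resolution itself could be sketched like this, assuming the inputTypes are available as a plain object. `resolveAliases` is an illustrative helper, not an existing framework API:

```javascript
// Sketch: map any aliased input names back to their primary names,
// using the alias lists declared in the component's inputTypes.
const resolveAliases = (inputTypes, inputs) => {
  // Build a lookup from every alias to its primary input name
  const aliasMap = {}
  for (const [primary, def] of Object.entries(inputTypes)) {
    for (const alias of def.alias || []) aliasMap[alias] = primary
  }
  return Object.entries(inputs).reduce(
    (accum, [key, value]) => ({ ...accum, [aliasMap[key] || key]: value }),
    {}
  )
}

// e.g. with inputTypes { bucket: { type: 'string', alias: ['name'] } },
// resolveAliases(inputTypes, { name: 'my-bucket' }) returns { bucket: 'my-bucket' }
```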

Add aws-sqs-queue component

Description

Add an aws-sqs-queue component for setting up an AWS SQS queue.

Inputs

  • name: string (max: 80 chars, [a-zA-Z0-9-_])
  • policy: AWSPolicyDocument (optional)
  • delaySeconds: int (0-900, optional)
  • maximumMessageSize: int (1024-262144, optional, default: 262144)
  • receiveMessageWaitTimeSeconds: int (0-20, optional, default: 0)
  • redrivePolicy: RedrivePolicy (optional)
  • visibilityTimeout: int (0-43200, optional, default: 30)
  • kmsMasterKey: MasterKeyId | MasterKey
  • kmsDataKeyReusePeriodSeconds: int (60-86400, optional, default: 300)
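
Mapping these inputs onto the params shape expected by SQS.createQueue (where every attribute value must be a string) could be sketched like this. `toCreateQueueParams` and `ATTRIBUTE_NAMES` are illustrative names, not an existing API:

```javascript
// Sketch: translate the component's lowerCamelCase inputs into the
// { QueueName, Attributes } shape SQS.createQueue expects.
const ATTRIBUTE_NAMES = {
  delaySeconds: 'DelaySeconds',
  maximumMessageSize: 'MaximumMessageSize',
  receiveMessageWaitTimeSeconds: 'ReceiveMessageWaitTimeSeconds',
  visibilityTimeout: 'VisibilityTimeout',
  policy: 'Policy',
  redrivePolicy: 'RedrivePolicy',
  kmsMasterKey: 'KmsMasterKeyId',
  kmsDataKeyReusePeriodSeconds: 'KmsDataKeyReusePeriodSeconds'
}

const toCreateQueueParams = ({ name, ...rest }) => {
  const Attributes = {}
  for (const [key, value] of Object.entries(rest)) {
    if (value === undefined || !ATTRIBUTE_NAMES[key]) continue
    // SQS requires string attribute values; policy documents are JSON
    Attributes[ATTRIBUTE_NAMES[key]] =
      typeof value === 'object' ? JSON.stringify(value) : String(value)
  }
  return { QueueName: name, Attributes }
}
```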

Outputs

  • arn: AwsARN
  • url: URL

Requirements

  • tests
  • documentation
  • examples
