
terraform-aws-alb-logs-to-elasticsearch's Introduction

terraform-aws-alb-logs-to-elasticsearch

Send ALB logs from an S3 bucket to Elasticsearch using AWS Lambda

This repository contains a Terraform module that sends ALB logs from an S3 bucket to Elasticsearch using AWS Lambda.

Specifically, it creates the following (a rough sketch of how these pieces are wired together follows the list):

  1. A Lambda function that does the sending
  2. An IAM role and policy that allow the function to access ES
  3. An S3 bucket notification that triggers the Lambda function when an S3 object is created in the bucket
  4. A security group for the Lambda function (only when your Lambda is deployed inside a VPC)
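
The sketch below outlines the kind of S3-to-Lambda wiring the module manages. It is illustrative only, not the module's actual source; the resource names (aws_lambda_function.alb_logs_to_es, aws_lambda_permission.allow_s3, aws_s3_bucket_notification.alb_logs) are hypothetical.

# Illustrative sketch of the S3 notification wiring (not the module's real internals).
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.alb_logs_to_es.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = var.s3_bucket_arn
}

resource "aws_s3_bucket_notification" "alb_logs" {
  bucket = var.s3_bucket_id

  # Invoke the Lambda whenever a new ALB log object lands in the bucket.
  lambda_function {
    lambda_function_arn = aws_lambda_function.alb_logs_to_es.arn
    events              = ["s3:ObjectCreated:*"]
  }
}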

Module Input Variables

| Variable Name | Example Value | Description | Default Value | Required |
|---|---|---|---|---|
| es_endpoint | search-es-demo-zveqnhnhjqm5flntemgmx5iuya.eu-west-1.es.amazonaws.com | AWS ES FQDN without http:// | None | True |
| index | alblogs | Index to create; a timestamp is appended, e.g. alblogs-2016.03.31 | alblogs | False |
| doctype | alb-access-logs | Document type | alb-access-logs | False |
| region | eu-west-1 | AWS region | None | True |
| nodejs_version | 16.x | Node.js version to be used | 14.x | False |
| prefix | public- | A prefix for the resource names; this helps create multiple instances of this stack for different environments | | False |
| s3_bucket_arn | arn:aws:s3:::alb-logs | The ARN of the S3 bucket containing the ALB logs | None | True |
| s3_bucket_id | alb-logs | The ID of the S3 bucket containing the ALB logs | None | True |
| subnet_ids | ["subnet-1111111", "subnet-222222"] | Subnet IDs to deploy the Lambda in. Only set this if you want to deploy your Lambda function inside a VPC. | | False |
| lambda_function_filename | lambda_code.zip | Filename of the zip containing the Lambda's source code | ${path.module}/alb-logs-to-elasticsearch.zip | False |

Example

provider "aws" {
  region = "eu-central-1"
}

module "public_alb_logs_to_elasticsearch" {
  source        = "neillturner/alb-logs-to-elasticsearch/aws"
  version       = "0.2.2"

  prefix        = "public_es_"
  es_endpoint   = "test-es-XXXXXXX.eu-central-1.es.amazonaws.com"
  s3_bucket_arn = "arn:aws:s3:::XXXXXXX-alb-logs-eu-west-1"
  s3_bucket_id  = "XXXXXXX-alb-logs-eu-west-1"
  region        = "eu-west-1"
}

module "vpc_alb_logs_to_elasticsearch" {
  source        = "neillturner/alb-logs-to-elasticsearch/aws"
  version       = "0.2.2"

  prefix        = "vpc_es_"
  es_endpoint   = "vpc-gc-demo-vpc-gloo5rzcdhyiykwdlots2hdjla.eu-central-1.es.amazonaws.com"
  s3_bucket_arn = "arn:aws:s3:::XXXXXXX-alb-logs-eu-west-1"
  s3_bucket_id  = "XXXXXXX-alb-logs-eu-west-1"
  subnet_ids    = ["subnet-d9990999"]
  region        = "eu-west-1"
}

Deployment Package Creation

The zip file alb-logs-to-elasticsearch.zip was built from neillturner/aws-alb-logs-to-elasticsearch.

  1. On your development machine, download and install Node.js.

  2. Go to the root folder of the repository and install the Node.js dependencies by running:

    npm install
    

    Verify that these are installed within the node_modules subdirectory.

  3. Create a zip file packaging index.js and the node_modules directory:

    zip -r9 alb-logs-to-elasticsearch.zip *
    

The zip file thus created is the Lambda Deployment Package.
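
If you build your own package this way (for example, to bundle updated dependencies), you can point the module at it through the lambda_function_filename input. A minimal sketch, reusing the placeholder values from the examples above; the zip path is an assumption about where you keep the file:

module "custom_package_alb_logs_to_elasticsearch" {
  source        = "neillturner/alb-logs-to-elasticsearch/aws"
  version       = "0.2.2"

  es_endpoint   = "test-es-XXXXXXX.eu-central-1.es.amazonaws.com"
  s3_bucket_arn = "arn:aws:s3:::XXXXXXX-alb-logs-eu-west-1"
  s3_bucket_id  = "XXXXXXX-alb-logs-eu-west-1"
  region        = "eu-west-1"

  # Hypothetical location of the zip built in the steps above.
  lambda_function_filename = "${path.root}/alb-logs-to-elasticsearch.zip"
}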

terraform-aws-alb-logs-to-elasticsearch's People

Contributors

fennnec, neillturner, tcarrondo, xsnrg

terraform-aws-alb-logs-to-elasticsearch's Issues

lambda execution error

The following stack trace is generated when the lambda tries to run:

errorMessage: Error: Cannot find module 'exports'
              Require stack:
              - /var/runtime/UserFunction.js
              - /var/runtime/index.js
errorType: Runtime.ImportModuleError
stack:
  Runtime.ImportModuleError: Error: Cannot find module 'exports'
  Require stack:
  - /var/runtime/UserFunction.js
  - /var/runtime/index.js
      at _loadUserApp (/var/runtime/UserFunction.js:100:13)
      at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)
      at Object.<anonymous> (/var/runtime/index.js:43:30)
      at Module._compile (internal/modules/cjs/loader.js:999:30)
      at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
      at Module.load (internal/modules/cjs/loader.js:863:32)
      at Function.Module._load (internal/modules/cjs/loader.js:708:14)
      at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
      at internal/main/run_main_module.js:17:47

Allow for managing the Cloudwatch logs created

Problem: CloudWatch logs are created by the Lambda in this module, but are not controlled by the user or Terraform.

Background:

By default, a CloudWatch log group named after the function (/aws/lambda/<function name>) gets created when the Lambda runs, outside of Terraform's control. It probably should not work that way, but it does. More information here: Lambda Cloudwatch config

Possible solutions:

If the log group were created as part of the module, the module user could control aspects of it, such as retention, e.g.:

resource "aws_cloudwatch_log_group" "lambda" {
  name              = var.cloudwatch_log_group_name
  retention_in_days = var.cloudwatch_retention_in_days
  skip_destroy      = var.skip_destroy
}

This resource could either be included in the module itself, alongside the aws_lambda_function it belongs to, or the module could export the Lambda function name so that the log group can be created outside the module, e.g.:

resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/${module.alb-logs-to-elasticsearch.aws_lambda_function.name}"
  retention_in_days = var.cloudwatch_retention_in_days
  skip_destroy      = var.skip_destroy
}

Today, without this functionality, the default retention is to keep the logs forever, costing more and more money over time.

Lambda reports success, no data in ES

I added additional logging; the script is missing some error handling. Any 4xx response should fail the script, but instead it reports a success message as if it had sent the data to ES.

ignore changes to filename

In a team setting with a shared state file, the module is loaded from the local .terraform cache folder when apply is run. This means that each member of the team has a unique path to the module cache, which makes Terraform detect a change to the Lambda's filename even though nothing in the Lambda has actually changed.

Instead, we should rely only on the source code hash to decide whether the Lambda needs to be updated by Terraform. To do this, we can use a lifecycle block to ignore changes to the filename while still relying on the hash for updates; see the sketch below. A PR is forthcoming.
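
A minimal sketch of the proposed change, assuming an aws_lambda_function resource shaped roughly like the one in this module; the resource, handler, and role names here are illustrative, not the module's actual ones:

resource "aws_lambda_function" "alb_logs_to_es" {
  function_name    = "${var.prefix}alb-logs-to-elasticsearch"
  filename         = var.lambda_function_filename
  # The hash of the package decides whether an update is needed.
  source_code_hash = filebase64sha256(var.lambda_function_filename)
  handler          = "index.handler"
  runtime          = "nodejs${var.nodejs_version}"
  role             = aws_iam_role.lambda.arn # hypothetical role reference

  lifecycle {
    # The filename contains each machine's local .terraform module cache path,
    # so ignore it and let source_code_hash drive updates.
    ignore_changes = [filename]
  }
}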
