
terraform-aws-bastion's Introduction

AWS Bastion Terraform module


Terraform module which creates a secure SSH bastion on AWS.

Mainly inspired by Securely Connect to Linux Instances Running in a Private Amazon VPC

Features

This module creates an SSH bastion host that lets you connect securely over SSH to your private instances. All SSH commands are logged to an S3 bucket, under the /logs path, for security compliance.

SSH users are managed by their public keys: simply drop a user's SSH public key into the /public-keys path of the bucket. Keys must be named 'username.pub'; this creates the user 'username' on the bastion server. Usernames must contain only alphanumeric characters.
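As an illustration, dropping a key into the bucket could look like the sketch below. The bucket name and key file are hypothetical, and the check mirrors the alphanumeric-only rule above; the actual upload line is commented out since it needs real AWS credentials:

```shell
#!/usr/bin/env bash
# Hypothetical example: derive the username from the key file name,
# validate it against the alphanumeric-only rule, then upload the key.
keyfile="alice.pub"            # assumption: key for user 'alice'
username="${keyfile%.pub}"     # strip the .pub suffix

if [[ "$username" =~ ^[a-zA-Z0-9]+$ ]]; then
  echo "uploading key for $username"
  # aws s3 cp "$keyfile" "s3://my_famous_bucket_name/public-keys/$keyfile"
else
  echo "invalid username: $username" >&2
fi
```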

You'll then be able to connect to the server with:

ssh [-i path_to_the_private_key] username@bastion-dns-name

From this bastion server, you'll be able to connect to all instances in the private subnet.
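For convenience, the hop through the bastion can be preconfigured in ~/.ssh/config. The host names and private IP range below are placeholders, and this relies on TCP forwarding being permitted on the bastion:

```
# ~/.ssh/config (sketch; replace names with your own)
Host bastion
    HostName bastion-dns-name
    User username

Host 10.0.*.*
    User ec2-user
    ProxyJump bastion
```

With this in place, ssh 10.0.1.10 hops through the bastion transparently.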

If there is a missing feature or a bug - open an issue.

Usage

module "bastion" {
  source = "Guimove/bastion/aws"
  bucket_name = "my_famous_bucket_name"
  region = "eu-west-1"
  vpc_id = "my_vpc_id"
  is_lb_private = "true|false"
  bastion_host_key_pair = "my_key_pair"
  create_dns_record = "true|false"
  hosted_zone_id = "my.hosted.zone.name."
  bastion_record_name = "bastion.my.hosted.zone.name."
  bastion_iam_policy_name = "myBastionHostPolicy"
  elb_subnets = [
    "subnet-id1a",
    "subnet-id1b"
  ]
  auto_scaling_group_subnets = [
    "subnet-id1a",
    "subnet-id1b"
  ]
  tags = {
    "name" = "my_bastion_name",
    "description" = "my_bastion_description"
  }
}
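The module's outputs can then be wired into the rest of your configuration; for example, to surface the bastion endpoint from the root module:

```hcl
# Sketch: expose the bastion's DNS name (elb_ip is the module output
# holding the DNS name of the bastion load balancer)
output "bastion_dns" {
  value = module.bastion.elb_ip
}
```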

Requirements

Name Version
terraform ~> 1.0
aws ~> 4.0

Providers

Name Version
aws ~> 4.0

Modules

No modules.

Resources

Name Type
aws_autoscaling_group.bastion_auto_scaling_group resource
aws_iam_instance_profile.bastion_host_profile resource
aws_iam_policy.bastion_host_policy resource
aws_iam_role.bastion_host_role resource
aws_iam_role_policy_attachment.bastion_host resource
aws_kms_alias.alias resource
aws_kms_key.key resource
aws_launch_template.bastion_launch_template resource
aws_lb.bastion_lb resource
aws_lb_listener.bastion_lb_listener_22 resource
aws_lb_target_group.bastion_lb_target_group resource
aws_route53_record.bastion_record_name resource
aws_s3_bucket.bucket resource
aws_s3_bucket_acl.bucket resource
aws_s3_bucket_lifecycle_configuration.bucket resource
aws_s3_bucket_ownership_controls.bucket-acl-ownership resource
aws_s3_bucket_server_side_encryption_configuration.bucket resource
aws_s3_bucket_versioning.bucket resource
aws_s3_object.bucket_public_keys_readme resource
aws_security_group.bastion_host_security_group resource
aws_security_group.private_instances_security_group resource
aws_security_group_rule.egress_bastion resource
aws_security_group_rule.ingress_bastion resource
aws_security_group_rule.ingress_instances resource
aws_ami.amazon-linux-2 data source
aws_iam_policy_document.assume_policy_document data source
aws_iam_policy_document.bastion_host_policy_document data source
aws_kms_alias.kms-ebs data source
aws_subnet.subnets data source

Inputs

Name Description Type Default Required
allow_ssh_commands Allows the SSH user to execute one-off commands. Pass true to enable. Warning: These commands are not logged and increase the vulnerability of the system. Use at your own discretion. bool false no
associate_public_ip_address n/a bool true no
auto_scaling_group_subnets List of subnets where the Auto Scaling Group will deploy the instances list(string) n/a yes
bastion_additional_security_groups List of additional security groups to attach to the launch template list(string) [] no
bastion_ami The AMI that the Bastion Host will use. string "" no
bastion_host_key_pair Select the key pair to use to launch the bastion host string n/a yes
bastion_iam_permissions_boundary IAM Role Permissions Boundary to constrain the bastion host role string "" no
bastion_iam_policy_name IAM policy name to create for granting the instance role access to the bucket string "BastionHost" no
bastion_iam_role_name IAM role name to create string null no
bastion_instance_count n/a number 1 no
bastion_launch_template_name Bastion Launch template Name, will also be used for the ASG string "bastion-lt" no
bastion_record_name DNS record name to use for the bastion string "" no
bastion_security_group_id Custom security group to use string "" no
bucket_force_destroy The bucket and all objects should be destroyed when using true bool false no
bucket_name Bucket name where the bastion will store the logs string n/a yes
bucket_versioning Enable bucket versioning or not bool true no
cidrs List of CIDRs that can access the bastion. Default: 0.0.0.0/0 list(string) ["0.0.0.0/0"] no
create_dns_record Choose if you want to create a record name for the bastion (LB). If true, 'hosted_zone_id' and 'bastion_record_name' are mandatory bool n/a yes
create_elb Choose if you want to deploy an ELB for accessing bastion hosts. Only select false if there is no need to SSH into bastion from outside. If true, you must set elb_subnets and is_lb_private bool true no
disk_encrypt Instance EBS encryption bool true no
disk_size Root EBS size in GB number 8 no
elb_subnets List of subnets where the ELB will be deployed list(string) [] no
enable_http_protocol_ipv6 Enables or disables the IPv6 endpoint for the instance metadata service bool false no
enable_instance_metadata_tags Enables or disables access to instance tags from the instance metadata service bool false no
enable_logs_s3_sync Enable cron job to copy logs to S3 bool true no
extra_user_data_content Additional scripting to pass to the bastion host. For example, this can include installing PostgreSQL for the psql command. string "" no
hosted_zone_id Name of the hosted zone where we'll register the bastion DNS name string "" no
http_endpoint Whether the metadata service is available bool true no
http_put_response_hop_limit The desired HTTP PUT response hop limit for instance metadata requests number 1 no
instance_type Instance size of the bastion string "t3.nano" no
ipv6_cidrs List of IPv6 CIDRs that can access the bastion. Default: ::/0 list(string) ["::/0"] no
is_lb_private If TRUE, the load balancer scheme will be "internal" else "internet-facing" bool null no
kms_enable_key_rotation Enable key rotation for the KMS key bool false no
log_auto_clean Enable or disable the lifecycle bool false no
log_expiry_days Number of days before logs expiration number 90 no
log_glacier_days Number of days before moving logs to Glacier number 60 no
log_standard_ia_days Number of days before moving logs to IA Storage number 30 no
private_ssh_port Set the SSH port to use between the bastion and private instance number 22 no
public_ssh_port Set the SSH port to use from desktop to the bastion number 22 no
region n/a string n/a yes
tags A mapping of tags to assign map(string) {} no
use_imds_v2 Use (IMDSv2) Instance Metadata Service V2 bool false no
vpc_id VPC ID where we'll deploy the bastion string n/a yes

Outputs

Name Description
bastion_auto_scaling_group_name The name of the Auto Scaling Group for bastion hosts
bastion_elb_id The ID of the ELB for bastion hosts
bastion_host_security_group The ID of the bastion host security group
bucket_arn The ARN of the S3 bucket
bucket_kms_key_alias The name of the KMS key alias for the bucket
bucket_kms_key_arn The ARN of the KMS key for the bucket
bucket_name The ID of the S3 bucket
elb_arn The ARN of the ELB for bastion hosts
elb_ip The DNS name of the ELB for bastion hosts
private_instances_security_group The ID of the security group for private instances
target_group_arn The ARN of the target group for the ELB

Known issues

Tags are not applied to the instances generated by the Auto Scaling Group due to a known Terraform issue: hashicorp/terraform-provider-aws#290

Changes to disk encryption are not propagated immediately. They have to be triggered manually from the AWS console or CLI: Auto Scaling Groups -> Instance refresh. Keep in mind that all data on the instances will be lost if they hold temporary or custom data.
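A refresh along those lines could be triggered from the AWS CLI as sketched below; the Auto Scaling Group name is an assumption and must be looked up in your account first:

```
# Sketch: roll the bastion instances so the new EBS encryption
# setting takes effect (all instance-local data is lost).
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name "bastion-asg" \
  --preferences '{"MinHealthyPercentage": 0}'
```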

Authors

Module managed by Guimove.

License

Apache 2 Licensed. See LICENSE for full details.

terraform-aws-bastion's People

Contributors

asagage, bpar476, cernytomas, dannyibishev, e-bourhis, figuerascarlos, fllaca, gowiem, guimove, henadzit, jeremietharaud, jirkabs, maartenvanderhoef, michal-adamek, naineel, ohayak, paullucas-iw, plmaheux, protip, rahloff, remusss, rob-unearth, robbiet480, sturman, timcosta, toddlj, va3093, willaustin, wobondar, zollie


terraform-aws-bastion's Issues

allow http egress from bastion hosts.

The main bastion security group doesn't allow HTTP egress, only HTTPS, which is causing yum to fail on startup.

e.g., I see this in the cloud-init output log:

+ yum -y update --security
Loaded plugins: priorities, update-motd, upgrade-helper


 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Disable the repository, so yum won't use it by default. Yum will then
        just ignore the repository until you permanently enable it again or use
        --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>

     4. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: amzn-main/latest
Could not retrieve mirrorlist http://repo.us-east-1.amazonaws.com/latest/main/mirror.list error was
12: Timeout on http://repo.us-east-1.amazonaws.com/latest/main/mirror.list: (28, 'Connection timed out after 10001 milliseconds')

I shouldn't have to allow it, but the update is failing because yum reaches the repositories over http, not https.
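A fix along the lines the reporter describes could be sketched as an extra rule attached to the module's bastion security group (using the module's real bastion_host_security_group output; the open CIDR is an assumption):

```hcl
# Sketch: permit outbound HTTP so yum can reach the Amazon mirrors
resource "aws_security_group_rule" "bastion_http_egress" {
  type              = "egress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = module.bastion.bastion_host_security_group
}
```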

can't tunnel to the instances

Host bastion
Hostname xxxxxx
User xxxx

Host mine
hostname xxxx
user xxx
ProxyCommand ssh bastion nc %h %p

ssh bastion ==> works fine

But ssh mine doesn't work.

Dependency violation when changing name_prefix.

I had trouble creating multiple bastions in the same AWS account but different Terraform workspaces, so I added the bastion_launch_configuration_name variable in order to fix that. Now, however, it's forcing recreation of some security groups, and as a result is giving a dependency violation. I think it will be fixed by adding create_before_destroy = true to the security group lifecycles.

module "bastion" {
  source                      = "Guimove/bastion/aws"
  bucket_name                 = "bastion-${var.customer}"
  region                      = var.region
  vpc_id                      = aws_vpc.main.id
  cidrs                       = var.bastion_allowed_cidrs
  is_lb_private               = false
  bastion_host_key_pair       = aws_key_pair.ec2key.id
  create_dns_record           = false
  elb_subnets                 = aws_subnet.pub_sn.*.id
  auto_scaling_group_subnets  = aws_subnet.pub_sn.*.id
  bastion_launch_configuration_name = "bastion-${var.customer}-${var.environment}"
  tags = {
    Name = "bastion-${var.customer}"
    Customer = var.customer
  }
}

Plan:

  # aws_instance.mgmt[1] will be updated in-place
  ~ resource "aws_instance" "mgmt" {
        ami                          = "ami-0dd655843c87b6930"
        arn                          = "arn:aws:ec2:us-west-1:redacted:instance/i-0441redacted"
        associate_public_ip_address  = false
        availability_zone            = "us-west-1b"
        cpu_core_count               = 2
        cpu_threads_per_core         = 1
        disable_api_termination      = false
        ebs_optimized                = false
        get_password_data            = false
        id                           = "i-0441redacted"
        instance_state               = "running"
        instance_type                = "t2.medium"
        ipv6_address_count           = 0
        ipv6_addresses               = []
        key_name                     = "key-dev"
        monitoring                   = false
        primary_network_interface_id = "eni-030bredacted"
        private_dns                  = "ip-172-22-2-22.us-west-1.compute.internal"
        private_ip                   = "172.22.2.22"
        security_groups              = []
        source_dest_check            = true
        subnet_id                    = "subnet-0cferedacted"
        tags                         = {
            "Customer" = "dev"
            "Name"     = "mgmt-ui-instance-dev"
        }
        tenancy                      = "default"
        volume_tags                  = {}
      ~ vpc_security_group_ids       = [
            "sg-07acredacted",
            "sg-0c439redacted",
        ] -> (known after apply)

        credit_specification {
            cpu_credits = "standard"
        }

        root_block_device {
            delete_on_termination = true
            encrypted             = false
            iops                  = 100
            volume_id             = "vol-03ddredacted"
            volume_size           = 8
            volume_type           = "gp2"
        }
    }

  # module.bastion.aws_security_group.bastion_host_security_group (deposed object cbcdd9a2) will be destroyed
  - resource "aws_security_group" "bastion_host_security_group" {
      - arn                    = "arn:aws:ec2:us-west-1:redacted:security-group/sg-0ed3redacted" -> null
      - description            = "Enable SSH access to the bastion host from external via SSH port" -> null
      - egress                 = [] -> null
      - id                     = "sg-0ed3redacted" -> null
      - ingress                = [] -> null
      - name                   = "lc-host" -> null
      - owner_id               = "redacted" -> null
      - revoke_rules_on_delete = false -> null
      - tags                   = {
          - "Customer" = "dev"
          - "Name"     = "bastion-dev"
        } -> null
      - vpc_id                 = "vpc-0e28redacted" -> null
    }

  # module.bastion.aws_security_group.private_instances_security_group must be replaced
-/+ resource "aws_security_group" "private_instances_security_group" {
      ~ arn                    = "arn:aws:ec2:us-west-1:redacted:security-group/sg-0c43redacted" -> (known after apply)
        description            = "Enable SSH access to the Private instances from the bastion via SSH port"
      ~ egress                 = [] -> (known after apply)
      ~ id                     = "sg-0c43redacted" -> (known after apply)
      ~ ingress                = [] -> (known after apply)
      ~ name                   = "lc-priv-instances" -> "bastion-dev-dev-priv-instances" # forces replacement
      ~ owner_id               = "redacted" -> (known after apply)
        revoke_rules_on_delete = false
        tags                   = {
            "Customer" = "dev"
            "Name"     = "bastion-dev"
        }
        vpc_id                 = "vpc-0e28redacted"
    }

  # module.bastion.aws_security_group_rule.ingress_instances will be created
  + resource "aws_security_group_rule" "ingress_instances" {
      + description              = "Incoming traffic from bastion"
      + from_port                = 22
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + self                     = false
      + source_security_group_id = "sg-09c2redacted"
      + to_port                  = 22
      + type                     = "ingress"
    }

Reason for load balancer/access through it?

Hi, I am a complete newbie to AWS resources and provisioning. I was wondering whether the network load balancer is necessary for the bastion host? I understand that the reason for using it is optimal access to hosts in every availability zone, but if I were creating a small network, would it still be necessary?

Support for Terraform 0.12

Hi!

We are using this awesome module, but we would like to migrate our Terraform code to 0.12. What's required should only be syntax changes; we could even help with a PR.

Unsupported argument "hosted_zone_name" for module "bastion"

I am getting an odd issue where my terraform plan step is now failing due to the unsupported argument "hosted_zone_name" in the "bastion" module. The configuration below previously worked. It seems that "hosted_zone_name" has been replaced with "hosted_zone_id", but I have not found any documentation indicating this change. Moreover, the Terraform registry page for this repo shows that "hosted_zone_name" is still a valid input, while the repo itself no longer lists "hosted_zone_name" as a variable. Is the answer that the "hosted_zone_name" variable should now be "hosted_zone_id"? Thank you!

Code:

module "bastion" {
  source = "git::https://github.com/Guimove/terraform-aws-bastion.git?ref=master"
  bucket_name = module.bastion_logs_s3.bucket_id
  region = var.region
  vpc_id = module.vpc.vpc_id
  is_lb_private = "false"
  bastion_host_key_pair = var.stage
  hosted_zone_name = var.hosted_zone_id
  bastion_record_name = local.bastion_hostname
  elb_subnets = module.subnets.public_subnet_ids
  auto_scaling_group_subnets = module.subnets.public_subnet_ids
  create_dns_record = "true"
  tags = var.tags
}

Error:
(screenshot of the "Unsupported argument" plan error, 2020-03-19)

Help me vpc -> bastion -> rds

I use this great Terraform module, but I don't know if my configuration is good.

  1. I created an AWS VPC on eu-west-2 with
  • public subnets : 10.0.x.0/24, with x between 0-2
  • private subnets : 10.0.10x.0/24 with x between 0-2
  • db subnets : 10.0.20x.0/24 with x between 0-2
  2. I created the following security groups:
  • sg "from-internet"
    input : 22/80/443 - 0.0.0.0/0
    output : All - 0.0.0.0/0

  • sg "from-public-subnet"
    input : 22/80/443 - 10.0.x.0/24 with x between 0-2
    output : All - 0.0.0.0/0

  • sg "from-private"
    input : 22/80/443 - 10.0.10x.0/24 with x between 0-2
    output : All - 0.0.0.0/0

  • sg "from-private-to-db"
    input : 5432 - 10.0.10x.0/24 with x between 0-2
    output : All - 0.0.0.0/0

  3. I created an RDS Postgres instance on the db subnets
    with security group from-private-to-db

I use bastion module with

  • ELB subnets in public subnets ,10.0.x.0/24 with x between 0-2
  • bastion EC2 in public subnets, 10.0.10x.0/24 with x between 0-2

I succeed in connecting to the bastion host (ssh -i ec2-user@), but I can't access RDS.

Do you have any ideas?

Remove `AllowTcpForwarding` addition

I assume there's a specific reason this is here? Not sure. In my situation, though, it causes my SSH connections to fail:

Host bastion.myhost.com
  	Hostname bastion.myhost.com
  	User aaron
    ForwardAgent no
  	IdentityFile ~/.ssh/id_rsa

Host 10.0.1.*
    User ubuntu
    IdentityFile ~/.ssh/my_key.pem
    ProxyCommand ssh -q -W %h:%p bastion.myhost.com

Everything I saw online said I need to remove it:

https://unix.stackexchange.com/a/58756
https://serverfault.com/questions/535412/how-to-solve-the-open-failed-administratively-prohibited-open-failed-when-us
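If forwarding is genuinely needed, one hypothetical workaround is to flip the setting back via the module's extra_user_data_content input. This assumes the module sets AllowTcpForwarding in /etc/ssh/sshd_config, and note it weakens the module's command-logging guarantees:

```hcl
module "bastion" {
  # ... other arguments as in the Usage section ...
  extra_user_data_content = <<-EOT
    # Hypothetical: re-enable TCP forwarding; reduces auditability
    sed -i 's/^AllowTcpForwarding no/AllowTcpForwarding yes/' /etc/ssh/sshd_config
    systemctl restart sshd
  EOT
}
```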

aws_kms_alias breaks on s3-domain names

Hi,

it seems with 1.2.1 the introduction of

resource "aws_kms_alias" "alias" {
  name          = "alias/${var.bucket_name}"
  target_key_id = aws_kms_key.key.arn
}

Error: "name" must begin with 'alias/' and be comprised of only [a-zA-Z0-9:/_-]

  on ../modules/terraform-aws-bastion/main.tf line 15, in resource "aws_kms_alias" "alias":
  15: resource "aws_kms_alias" "alias" {

Because of this restriction, the module errors out when the bucket name contains dots, e.g. 'bastion.vpc.company.net'.
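A hypothetical patch to the module's main.tf could sanitize the bucket name before building the alias (the local name is made up for this sketch):

```hcl
locals {
  # KMS alias names may only contain [a-zA-Z0-9:/_-], so replace dots
  kms_alias_name = "alias/${replace(var.bucket_name, ".", "-")}"
}

resource "aws_kms_alias" "alias" {
  name          = local.kms_alias_name
  target_key_id = aws_kms_key.key.arn
}
```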

Not recording sessions created with ProxyCommand and ssh -W option

Connecting to servers using ProxyCommand and ssh -W $host:$port bypasses the recording.

Example ssh config:

Host myserver.mycompany.com
    ProxyCommand ssh bastion.mycompany.com -W %h:%p

Is there a way to record the actions executed in the remote servers if connecting this way?

Thanks for this great project!

hosted_zone_name is needed even with create_dns_record = false

I'm using version 1.0.4 and I got the following error when I tried to use this module with the create_dns_record option set to false:

Error: module.bastion.aws_route53_record.bastion_record_name: zone_id must not be empty

Checking the code I saw that zone_id uses hosted_zone_name

[Issue/Enhancement] Wrong ami retrieved by the data source aws_ami

Hi,

The current filter on the aws_ami data source retrieves a dotnetcore Linux AMI:
amzn2-ami-hvm-2.0.20180622.1-x86_64-gp2-dotnetcore-2019.04.09

The following resource should fix the issue:

data "aws_ami" "ami" {
  most_recent = true
  owners      = ["amazon"]
  name_regex  = "^amzn2-ami-hvm.*-ebs"

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
}

So that the latest Amazon Linux 2 HVM EBS-based image will be retrieved.

Thank you.

Jeremie

tag soon ?

Nice work.
Do you plan to tag soon? I'd like to use the extra_user_data_content feature, but it seems module source only fetches tagged versions, not the latest.
Thanks.

sync_users script not able to create users as the keys_retrieved_from_s3 file is malformed

Hi,
Started using this module as it was exactly what I was looking for. However, I noticed that none of the user accounts I had put SSH keys for in our S3 bucket were being created.

I can still SSH to the bastion because I have the deployer key, and upon logging in and checking the log file for the sync_users script, I saw that it was able to create 1 user but not the rest (I have 4 SSH public keys in the S3 bucket).

When I looked into it, the contents of ~/keys_retrieved_from_s3 were:

public-keys/user1.pub
public-keys/user2.pub	public-keys/user3.pub	public-keys/user4.pub

i.e. it looks like the sed -e "s/\t/\n/" at the end of the s3api list-objects call was not able to separate out the returned S3 object keys.

Sure enough, if I just output the s3api call, its output shows a difference between the tab used between user1 and user2 and the rest (it's tricky to show here, but in my terminal I can visually see a difference).

As a quick fix, using tr '\t' '\n' instead produces the output I wanted:

public-keys/user1.pub
public-keys/user2.pub
public-keys/user3.pub
public-keys/user4.pub

Happy to PR this back, but I was wondering if I am the only one hitting this problem.
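The difference between the two approaches can be reproduced locally (the key names below are made up). Without the g flag, sed's s/\t/\n/ only replaces the first tab on each line, whereas tr translates every tab:

```shell
# Simulate the tab-separated `s3api list-objects` output and split it
# into one key per line, as the sync_users script expects.
printf 'public-keys/user2.pub\tpublic-keys/user3.pub\tpublic-keys/user4.pub\n' \
  | tr '\t' '\n'
```

Each key now lands on its own line, so the script's per-line loop sees every user.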

Issue with User Data script

Hey! Thanks for the module, it's super useful!

I ran into a little issue here: https://github.com/Guimove/terraform-aws-bastion/blob/master/user_data.sh#L116

The AWS CLI command can result in multiple objects being listed on the same line, which breaks the subsequent while read line loop.

I fixed it locally by changing it to:
aws s3api list-objects --bucket BUCKET --prefix public-keys/ --region REGION --output text --query 'Contents[?Size>0].Key' | sed -e 'y/\\t/\\n/' | tr '\t' '\n' > ~/keys_retrieved_from_s3

Note the replacement of tab characters with newlines: tr '\t' '\n'

I'd be happy to contribute a PR if it helps.

Feature: Log to Cloudwatch?

Hi,

it would be nice to be able to write the bastion logs to CloudWatch. That way it would be easier to set up alerts/monitoring of what's going on.

There is a nice bootstrap example here.

It probably only needs:

  • the cloudwatch-agent installed
  • iam-role with permissions to the log-group
  • a log-group

regards,
strowi

[Enhancement] Add description on inbound and outbound rules of SG

Hi,

The aws_security_group resource does not allow adding a description to the inbound and outbound rules.

To fix this, we can add two aws_security_group_rule resources with a description of each rule as follows:

resource "aws_security_group" "bastion_host_security_group" {
  description = "Enable SSH access to the bastion host from external via SSH port"
  vpc_id      = "${var.vpc_id}"

  tags = "${merge(var.tags)}"
}

resource "aws_security_group_rule" "ingress_bastion" {
  description = "Incoming traffic to bastion"
  type        = "ingress"
  from_port   = "${var.public_ssh_port}"
  to_port     = "${var.public_ssh_port}"
  protocol    = "TCP"
  cidr_blocks = "${var.cidrs}"

  security_group_id = "${aws_security_group.bastion_host_security_group.id}"
}

resource "aws_security_group_rule" "egress_bastion" {
  description = "Outgoing traffic from bastion to instances"
  type        = "egress"
  from_port   = "0"
  to_port     = "65535"
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]

  security_group_id = "${aws_security_group.bastion_host_security_group.id}"
}

Thank you.

Jeremie

Allow setting non-standard ssh port

To add extra security to the bastion host, it would be nice to be able to set the port that the LB uses to forward to the SSH port. Currently it's locked at 22, but according to my company's security policy we cannot have port 22 as the open port; we have to use a non-standard port.

Thanks.

Can't use bastion_host options on aws_instance with bastion created by module.

I'm trying to use this bastion to run provisioner commands on aws_instance resources created by Terraform, but I'm getting the error timeout - last error: ssh: rejected: administratively prohibited (open failed) when creating the instances. I assume this is somewhat related to #32, but I thought that was fixed by #33, so I'm not sure why this is still not working. Here's a snippet of my Terraform code:

module "bastion" {
  source                      = "Guimove/bastion/aws"
  bucket_name                 = "${var.bastion_bucket_name}"
  region                      = var.region
  vpc_id                      = aws_vpc.main.id
  is_lb_private               = false
  bastion_host_key_pair       = aws_key_pair.ec2key.id
  create_dns_record           = false
  elb_subnets                 = aws_subnet.pub_sn.*.id
  auto_scaling_group_subnets  = aws_subnet.pub_sn.*.id
  tags = {
    Name =  "${var.bastion_bucket_name}"
  }
}

resource "aws_s3_bucket_object" "bastion_key" {
  bucket  = module.bastion.bucket_name
  key     = "public-keys/${var.current_user}.pub"
  source  = var.public_key_path
}

resource "aws_instance" "web" {
  # Make one for each subnet. In the future we might want to do some sort of autoscaling
  # group, but I don't think the mgmt ui is customer facing so it may not be neccessary
  count = length(aws_subnet.priv_sn)

  # The connection block tells the provisioner how to communicate with the instance 
  connection {
    # The default username for our AMI
    user = "ubuntu"
    host = self.public_ip
    # The connection will use the local SSH agent for authentication.
    bastion_host = module.bastion.elb_ip
    bastion_user = var.current_user
    bastion_private_key = file(var.private_key_path)
  }

  instance_type = "t2.medium"

  # Lookup the correct AMI based on the region we specified
  ami = lookup(var.ec2amis, var.region)

  # The name of the SSH keypair we created above
  key_name = aws_key_pair.ec2key.id

  # Security group to allow HTTP and SSH access
  vpc_security_group_ids = [
    aws_security_group.web.id, 
    module.bastion.private_instances_security_group
  ]

  # We're going to define these guys in the "private" subnet, although
  # right now our configuration only has one set of subnets
  subnet_id = aws_subnet.priv_sn[count.index].id

  # Creates a folder with the code in it on the remote. 
  provisioner "file" {
    source = var.code_folder
    destination = "/home/ubuntu/code"
  }

  # We run a remote provisioner on the instance after creating it. By default, just
  # updates apt-get, but we can add whatever we want to it.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
    ]
  }

  tags = {
    Name = "web"
  }

  depends_on = [aws_s3_bucket_object.bastion_key]
}

I'm not seeing any logs for the connection, though that's to be expected I guess since tcp forwarding circumvents logging.

When I manually ssh into the bastion and then try to ssh into the instance (ssh -i ~/.ssh/mykey -A [email protected] followed by ssh [email protected]), I get a Permission denied (publickey) error as well, but that might just be because I haven't set up agent forwarding correctly.

Any idea why this wouldn't be working?

Add log lifecycle for S3 bucket (tunable)

Something like this: (or allow users to supply the bucket)

 lifecycle_rule {
    id      = "log"
    enabled = true

    prefix  = "log/"
    tags {
      "rule"      = "log"
      "autoclean" = "${var.log_auto_clean}"
    }

    transition {
      days          = "${var.log_standard_ia_days}"
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = "${var.log_glacier_days}"
      storage_class = "GLACIER"
    }

    expiration {
      days = "${var.log_expiry_days}"
    }
  }

unable to access private instances from bastion host

By default, there is no way to allow the bastion host to access the instances running in my private subnets. The way I've worked around this is to add an egress rule for port 22 to my private subnet CIDRs.

If there is another way to get around this, it would be helpful to know and document.

Thanks.
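The workaround described above could be sketched as an explicit rule on the module's bastion security group; the CIDR below is an assumption standing in for your private subnet range:

```hcl
# Sketch: allow SSH from the bastion out to the private subnets
resource "aws_security_group_rule" "bastion_ssh_to_private" {
  type              = "egress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"] # assumption: your private CIDRs
  security_group_id = module.bastion.bastion_host_security_group
}
```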

instance_type

Hi,

Can we change instance_type? Maybe it would be a good idea to expose instance_type as a variable.

Thanks

1.1.0 Does Not Work - Wrong AMI Architecture

There appears to have been a PR #35 that was merged in, but no new release was created that actually allows the use of this module in the terraform registry.

I know this because I am currently trying to use this module and am getting an obscure error in the activity log of the ASG. This error led me to do some investigation and the AMI that was selected for the launch configuration is arm64 architecture -- which is not compatible with an instance type of t2.nano.

I believe all that needs to be done is to create a new GitHub release that includes the recently merged changes to resolve this, unless I am misunderstanding.

Switching from Mac to Windows forces recreation of Launch configuration and ASG

Hi

I came across this issue when following these steps :

  1. I create a bastion from a Mac with terraform apply
  2. I perform terraform plan from a Windows PC
    -> ASG and Launch configuration are planned to be recreated (edited output below)

The launch config is planned for recreation because user_data has "changed"; user_data has "changed" because the template file user_data.sh has different line endings on Mac and on Windows.

I was able to reproduce this behaviour on a Mac by changing the line endings in user_data.sh.

I was able to fix it by adding lifecycle { ignore_changes = [ "user_data" ] } to resource "aws_launch_configuration" "bastion_launch_configuration". I can submit a PR with these changes.
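
In current (0.12+) syntax, the fix would look roughly like this:

```hcl
# Sketch of the fix: ignore user_data drift so differing line endings
# between operating systems do not force recreation.
resource "aws_launch_configuration" "bastion_launch_configuration" {
  # ... existing arguments unchanged ...

  lifecycle {
    ignore_changes = [user_data]
  }
}
```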

Edited terraform plan output :

  # module.bastion.aws_autoscaling_group.bastion_auto_scaling_group must be replaced
+/- resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
      ~ arn                       = "..." -> (known after apply)
      ~ availability_zones        = [ ... ] -> (known after apply)
        default_cooldown          = 180
        desired_capacity          = 1
      - enabled_metrics           = [] -> null
        force_delete              = false
        health_check_grace_period = 180
        health_check_type         = "EC2"
      ~ id                        = "..." -> (known after apply)
      ~ launch_configuration      = "..." -> (known after apply)
      ~ load_balancers            = [] -> (known after apply)
        max_size                  = 1
        metrics_granularity       = "1Minute"
        min_size                  = 1
      ~ name                      = "..." -> (known after apply) # forces replacement
        protect_from_scale_in     = false
      ~ service_linked_role_arn   = "..." -> (known after apply)
      - suspended_processes       = [] -> null
      ~ tags                      = [  ...  ] -> (known after apply)
        target_group_arns         = [
            "a...",
        ]
        termination_policies      = [
            "OldestLaunchConfiguration",
        ]
        vpc_zone_identifier       = [
            ...
        ]
        wait_for_capacity_timeout = "10m"
    }

  # module.bastion.aws_launch_configuration.bastion_launch_configuration must be replaced
+/- resource "aws_launch_configuration" "bastion_launch_configuration" {
        associate_public_ip_address      = true
      ~ ebs_optimized                    = false -> (known after apply)
        enable_monitoring                = true
        iam_instance_profile             = "..."
      ~ id                               = "..." -> (known after apply)
        image_id                         = "ami-010a7fa92957a6a71"
        instance_type                    = "t2.nano"
        key_name                         = "..."
      ~ name                             = "..." -> (known after apply)
        name_prefix                      = "..."
        security_groups                  = [ ... ]
      ~ user_data                        = "6a4294ecb9d7bcd7a1d6667c893e762a89cb637f" -> "bef2c6b2a35ff4b2641a209256fddb3bb0e0227f" # forces replacement
      - vpc_classic_link_security_groups = [] -> null

      + ebs_block_device { ... }

      + root_block_device { ... }
    }

CloudWatch Alarms?

Hello,
I am trying to figure out how I can add CloudWatch Alarms for my bastion host(s) but I do not see an output in the module that I can feed to the CloudWatch Alarm Terraform code. Would it be possible to add this functionality or provide a workaround?

Additionally it would be nice to include an option for cloudwatch alarms that will trigger an autoscaling event of the bastion host ASG.
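
As a sketch of what this could enable, assuming the module exposed the ASG name as a (currently hypothetical) output named `bastion_auto_scaling_group_name`:

```hcl
# Hypothetical: module.bastion.bastion_auto_scaling_group_name does not
# exist today; exposing it is what this issue asks for.
resource "aws_cloudwatch_metric_alarm" "bastion_cpu_high" {
  alarm_name          = "bastion-cpu-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 300
  statistic           = "Average"
  threshold           = 80

  dimensions = {
    AutoScalingGroupName = module.bastion.bastion_auto_scaling_group_name
  }

  # alarm_actions could point at an aws_autoscaling_policy on the same ASG
  # to trigger a scaling event, as suggested above.
}
```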

Thanks!

Route53 type CNAME?

Since Route53 creates a CNAME record for the bastion ELB, maybe a type 'A' record with an alias would be more efficient and cheaper?
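
An alias record would look roughly like this (`aws_lb.bastion_lb` is an assumption about the module's internal naming; note that Route53 does not charge for alias queries to AWS resources, unlike CNAME lookups):

```hcl
# Sketch: 'A' alias record pointing at the load balancer instead of a CNAME.
# aws_lb.bastion_lb is an assumption about the module's internal naming.
resource "aws_route53_record" "bastion_record_name" {
  zone_id = var.hosted_zone_id
  name    = var.bastion_record_name
  type    = "A"

  alias {
    name                   = aws_lb.bastion_lb.dns_name
    zone_id                = aws_lb.bastion_lb.zone_id
    evaluate_target_health = true
  }
}
```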

NLB health checks fail when adding a CIDR to access the bastion

Hi,

When we replace the default CIDR block for accessing the bastion with a more specific one, the Network Load Balancer health checks fail, resulting in unhealthy hosts.

To fix that, the CIDR blocks of the LB subnets have to be added to the ingress security group rule of the bastion, because LB health checks originate from IP addresses within those subnets:

data "aws_subnet" "subnets" {
  count = "${length(var.elb_subnets)}"
  id    = "${var.elb_subnets[count.index]}"
}

resource "aws_security_group_rule" "ingress_bastion" {
  description = "Incoming traffic to bastion"
  type        = "ingress"
  from_port   = "${var.public_ssh_port}"
  to_port     = "${var.public_ssh_port}"
  protocol    = "TCP"
  cidr_blocks = ["${concat(data.aws_subnet.subnets.*.cidr_block, var.cidrs)}"]

  security_group_id = "${aws_security_group.bastion_host_security_group.id}"
}

Unable to use `aws` CLI when behind proxy

Hello,

The aws CLI is unable to reach the S3 API when behind an HTTP proxy.

To allow the aws CLI to work, I had to manually declare the *_proxy environment variables inside the EC2 instance:

  • http_proxy : http://my-proxy:3128/
  • https_proxy : http://my-proxy:3128/
  • no_proxy : localhost,127.0.0.1,169.254.169.254

Is there a way to provide those environment variables via the Terraform module?
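
One possible shape for such support, as a sketch only (the `extra_user_data` variable is hypothetical; the module does not currently expose it):

```hcl
# Hypothetical sketch: if the module accepted extra user data, the proxy
# variables above could be appended to the instance bootstrap like this.
module "bastion" {
  source = "Guimove/bastion/aws"
  # ... existing arguments ...

  # Hypothetical variable, appended to the generated user_data script.
  extra_user_data = <<-EOF
    echo 'http_proxy=http://my-proxy:3128/'  >> /etc/environment
    echo 'https_proxy=http://my-proxy:3128/' >> /etc/environment
    echo 'no_proxy=localhost,127.0.0.1,169.254.169.254' >> /etc/environment
  EOF
}
```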

Thanks for your answer !

Add depends_on s3 bucket to ASG resource

In my case, I had given an S3 bucket name that was not available. However, the ASG was still created even though the S3 bucket was not, so the user data script never created /logs, and keys dropped in /public-keys didn't work.

I would suggest adding a depends_on to the ASG so that it is only created if the S3 bucket is successfully created. Let me know if you want a PR for this.
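
The suggestion, sketched against the module's resource names (`aws_s3_bucket.bucket` per the module's main.tf):

```hcl
# Sketch: create the ASG only after the log bucket exists, so the user
# data script can rely on the bucket being available.
resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
  # ... existing arguments unchanged ...

  depends_on = [aws_s3_bucket.bucket]
}
```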

Bastion security group egress is too restrictive

The current bastion SG only permits SSH (22), HTTP (80), and HTTPS (443) egress. We have additional ports we need to access, but cannot override this configuration.

I suggest the bastion SG ID also be exposed as an output so that people can customise the rules if required.
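
A sketch of the suggested output (the security group resource name is taken from the module's source):

```hcl
# Sketch: expose the SG ID so consumers can attach their own rules.
output "bastion_host_security_group" {
  value = aws_security_group.bastion_host_security_group.id
}
```

A consumer could then add extra `aws_security_group_rule` resources against `module.bastion.bastion_host_security_group`.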

Getting an error "Invalid argument name"

Terraform version - 0.12.8
module version - 1.1.2

Error: Invalid argument name

  on .terraform/modules/bastion/Guimove-terraform-aws-bastion-c1acebe/main.tf line 25, in resource "aws_s3_bucket" "bucket":
  25:       "rule"      = "log"

I'm not sure this module is compatible with the latest version of Terraform.
