
terraform-aws-eks-blueprints-teams's Introduction

Amazon EKS Blueprints Teams Terraform module

Terraform module which creates multi-tenancy resources on Amazon EKS.

Usage

See the tests directory for working examples to reference:

Standalone - Admin Team

module "admin_team" {
  source = "aws-ia/eks-blueprints-teams/aws"

  name = "admin-team"

  # Enables elevated, admin privileges for this team
  enable_admin = true
  users        = ["arn:aws:iam::111122223333:role/my-admin-role"]
  cluster_arn  = "arn:aws:eks:us-west-2:111122223333:cluster/my-cluster"

  tags = {
    Environment = "dev"
  }
}

Standalone - Developer Team

module "development_team" {
  source = "aws-ia/eks-blueprints-teams/aws"

  name = "development-team"

  users             = ["arn:aws:iam::012345678901:role/my-developer"]
  cluster_arn       = "arn:aws:eks:us-west-2:012345678901:cluster/my-cluster"
  oidc_provider_arn = "arn:aws:iam::012345678901:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/5C54DDF35ER19312844C7333374CC09D"

  # Labels applied to all Kubernetes resources
  # More specific labels can be applied to individual resources under `namespaces` below
  labels = {
    team = "development"
  }

  # Annotations applied to all Kubernetes resources
  # More specific annotations can be applied to individual resources under `namespaces` below
  annotations = {
    team = "development"
  }

  namespaces = {
    default = {
      # Provides access to an existing namespace
      create = false
    }

    development = {
      labels = {
        projectName = "project-awesome",
      }

      resource_quota = {
        hard = {
          "requests.cpu"    = "1000m",
          "requests.memory" = "4Gi",
          "limits.cpu"      = "2000m",
          "limits.memory"   = "8Gi",
          "pods"            = "10",
          "secrets"         = "10",
          "services"        = "10"
        }
      }

      limit_range = {
        limit = [
          {
            type = "Pod"
            max = {
              cpu    = "200m"
              memory = "1Gi"
            }
          },
          {
            type = "PersistentVolumeClaim"
            min = {
              storage = "24M"
            }
          },
          {
            type = "Container"
            default = {
              cpu    = "50m"
              memory = "24Mi"
            }
          }
        ]
      }

      network_policy = {
        pod_selector = {
          match_expressions = [{
            key      = "name"
            operator = "In"
            values   = ["webfront", "api"]
          }]
        }

        ingress = [{
          ports = [
            {
              port     = "http"
              protocol = "TCP"
            },
            {
              port     = "53"
              protocol = "TCP"
            },
            {
              port     = "53"
              protocol = "UDP"
            }
          ]

          from = [
            {
              namespace_selector = {
                match_labels = {
                  name = "default"
                }
              }
            },
            {
              ip_block = {
                cidr = "10.0.0.0/8"
                except = [
                  "10.0.0.0/24",
                  "10.0.1.0/24",
                ]
              }
            }
          ]
        }]

        egress = [] # single empty rule to allow all egress traffic

        policy_types = ["Ingress", "Egress"]
      }
    }
  }

  tags = {
    Environment = "dev"
  }
}

Multiple Teams

You can use a module-level for_each to create multiple teams with the same configuration, and even treat some of those values as shared defaults that individual teams can override (see the sketch after the example below).

module "development_team" {
  source = "aws-ia/eks-blueprints-teams/aws"

  for_each = {
    one = {
      # Add any additional variables here and update definition below to use
      users = ["arn:aws:iam::012345678901:role/developers-one"]
    }
    two = {
      users = ["arn:aws:iam::012345678901:role/developers-two"]
    }
    three = {
      users = ["arn:aws:iam::012345678901:role/developers-three"]
    }
  }

  name = "${each.key}-team"

  users             = each.value.users
  cluster_arn       = "arn:aws:eks:us-west-2:012345678901:cluster/my-cluster"
  oidc_provider_arn = "arn:aws:iam::012345678901:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/5C54DDF35ER19312844C7333374CC09D"

  # Labels applied to all Kubernetes resources
  # More specific labels can be applied to individual resources under `namespaces` below
  labels = {
    team = each.key
  }

  # Annotations applied to all Kubernetes resources
  # More specific annotations can be applied to individual resources under `namespaces` below
  annotations = {
    team = each.key
  }

  namespaces = {
    (each.key) = {
      labels = {
        projectName = "project-awesome",
      }

      resource_quota = {
        hard = {
          "requests.cpu"    = "1000m",
          "requests.memory" = "4Gi",
          "limits.cpu"      = "2000m",
          "limits.memory"   = "8Gi",
          "pods"            = "10",
          "secrets"         = "10",
          "services"        = "10"
        }
      }

      limit_range = {
        limit = [
          {
            type = "Pod"
            max = {
              cpu    = "200m"
              memory = "1Gi"
            }
          },
          {
            type = "PersistentVolumeClaim"
            min = {
              storage = "24M"
            }
          },
          {
            type = "Container"
            default = {
              cpu    = "50m"
              memory = "24Mi"
            }
          }
        ]
      }
    }
  }

  tags = {
    Environment = "dev"
  }
}
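
The example above hardcodes every shared value; to let an individual team override a shared default, a try() fallback can be used. A minimal sketch, assuming the same cluster as above (the enable_admin override for team "two" is purely illustrative):

module "teams_with_overrides" {
  source = "aws-ia/eks-blueprints-teams/aws"

  for_each = {
    one = { users = ["arn:aws:iam::012345678901:role/developers-one"] }
    # This team overrides the shared default set below
    two = { users = ["arn:aws:iam::012345678901:role/developers-two"], enable_admin = true }
  }

  name  = "${each.key}-team"
  users = each.value.users

  # Per-team override with a shared default
  enable_admin = try(each.value.enable_admin, false)

  cluster_arn       = "arn:aws:eks:us-west-2:012345678901:cluster/my-cluster"
  oidc_provider_arn = "arn:aws:iam::012345678901:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/5C54DDF35ER19312844C7333374CC09D"

  tags = {
    Environment = "dev"
  }
}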

Requirements

Name Version
terraform >= 1.0
aws >= 4.47
kubernetes >= 2.17

Providers

Name Version
aws >= 4.47
kubernetes >= 2.17

Modules

No modules.

Resources

Name Type
aws_iam_policy.admin resource
aws_iam_role.this resource
aws_iam_role_policy_attachment.admin resource
aws_iam_role_policy_attachment.this resource
kubernetes_cluster_role_binding_v1.this resource
kubernetes_cluster_role_v1.this resource
kubernetes_limit_range_v1.this resource
kubernetes_namespace_v1.this resource
kubernetes_network_policy_v1.this resource
kubernetes_resource_quota_v1.this resource
kubernetes_role_binding_v1.this resource
kubernetes_secret_v1.service_account_token resource
kubernetes_service_account_v1.this resource
aws_iam_policy_document.admin data source
aws_iam_policy_document.this data source

Inputs

Name Description Type Default Required
admin_policy_name Name to use on admin IAM policy created string "" no
annotations A map of Kubernetes annotations to add to all resources map(string) {} no
cluster_arn The Amazon Resource Name (ARN) of the cluster string "" no
cluster_role_name Name to use on Kubernetes cluster role created string "" no
create_cluster_role Determines whether a Kubernetes cluster role is created bool true no
create_iam_role Determines whether an IAM role is created or to use an existing IAM role bool true no
create_role Determines whether a Kubernetes role is created. Note: the role created is a cluster role, but it is bound only to namespaced role bindings bool true no
enable_admin Determines whether an IAM role policy is created to grant admin access to the Kubernetes cluster bool false no
iam_role_arn Existing IAM role ARN for the node group. Required if create_iam_role is set to false string null no
iam_role_description Description of the role string null no
iam_role_max_session_duration Maximum session duration (in seconds) that you want to set for the specified role. If you do not specify a value for this setting, the default maximum of one hour is applied. This setting can have a value from 1 hour to 12 hours number null no
iam_role_name Name to use on IAM role created string null no
iam_role_path IAM role path string null no
iam_role_permissions_boundary ARN of the policy that is used to set the permissions boundary for the IAM role string null no
iam_role_policies IAM policies to be added to the IAM role created map(string) {} no
iam_role_use_name_prefix Determines whether the IAM role name (iam_role_name) is used as a prefix bool true no
labels A map of Kubernetes labels to add to all resources map(string) {} no
name A common name used across resources created unless a more specific resource name is provided string "" no
namespaces A map of Kubernetes namespace definitions to create any {} no
oidc_provider_arn ARN of the OIDC provider created by the EKS cluster string "" no
principal_arns A list of IAM principal arns to support passing wildcards for AWS Identity Center (SSO) roles. Reference list(string) [] no
role_name Name to use on Kubernetes role created string "" no
tags A map of tags to add to all AWS resources map(string) {} no
users A list of IAM user and/or role ARNs that can assume the IAM role created list(string) [] no

Outputs

Name Description
aws_auth_configmap_role Dictionary containing the necessary details for adding the role created to the aws-auth configmap (see the sketch after this table)
iam_role_arn The Amazon Resource Name (ARN) specifying the IAM role
iam_role_name The name of the IAM role
iam_role_unique_id Stable and unique string identifying the IAM role
namespaces Map of Kubernetes namespaces created and their attributes
rbac_group The name of the Kubernetes RBAC group
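
For example, the aws_auth_configmap_role output is shaped so it can be passed to whatever manages the aws-auth ConfigMap. A minimal sketch, assuming the cluster is managed with the terraform-aws-modules/eks module and its aws-auth inputs, and referencing the admin_team module from the Usage section above:

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  # ... cluster configuration truncated ...

  # Merge the team's role mapping into the aws-auth ConfigMap
  manage_aws_auth_configmap = true
  aws_auth_roles = [
    module.admin_team.aws_auth_configmap_role,
  ]
}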

License

Apache-2.0 Licensed. See LICENSE

terraform-aws-eks-blueprints-teams's People

Contributors

askulkarni2, bersr-aws, bryantbiggs, carflo, csantanapr, leospyke, rodrigobersa, tbulding


terraform-aws-eks-blueprints-teams's Issues

Feature request: add Condition variable

Using AWS IAM Identity Center is, according to AWS, best practice for assigning permissions. However, it creates roles with unique names. To avoid hardcoding those unique names everywhere, AWS says you can use a condition with a wildcard in your trust policy. terraform-aws-eks-blueprints-teams does not currently support that.

If this repo added support, it would make our code a lot cleaner and more maintainable, as we wouldn't have to hardcode AWS role names for each AWS account.

For reference see bottom of https://docs.aws.amazon.com/singlesignon/latest/userguide/referencingpermissionsets.html
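
For illustration, a minimal sketch of the kind of trust policy that page describes, written outside this module (the account ID and permission set name are placeholders); the principal_arns input in the table above is described as supporting wildcards for AWS Identity Center (SSO) roles, which appears to target this case:

# Matches any Identity Center (SSO) role for a given permission set,
# regardless of the random suffix Identity Center appends to the role name.
data "aws_iam_policy_document" "sso_trust" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111122223333:root"]
    }

    condition {
      test     = "ArnLike"
      variable = "aws:PrincipalArn"
      values   = ["arn:aws:iam::111122223333:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess_*"]
    }
  }
}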

feature request: elastic admin team creation without relying on system:masters

As of today, the creation of a new admin team (enable_admin=true) eventually produces an aws_auth_configmap_role output, which contains the hardcoded group system:masters.
Creating additional administrative users belonging to the above-mentioned group (other than the IAM Principal used to initially bootstrap the cluster, which is neither visible nor editable) is against best practices and discouraged for security purposes; it is like using the root account in your AWS environment.

An improvement could be to let you choose whether the new team is added to system:masters or to an ad hoc group, as with the "Development Teams", creating a ClusterRoleBinding to the built-in cluster-admin ClusterRole. This has the same effect as using system:masters, but allows those rights to be removed later if necessary, by removing the group from the ClusterRoleBinding.
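
A minimal sketch of what that could look like with the kubernetes provider already used by this module (the binding and group names are hypothetical; the group would still need to be mapped to the team's IAM role in aws-auth):

# Hypothetical: bind the team's RBAC group to the built-in cluster-admin
# ClusterRole instead of placing it in system:masters.
resource "kubernetes_cluster_role_binding_v1" "admin_team" {
  metadata {
    name = "admin-team-cluster-admin"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = "admin-team"
  }
}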

developers need pods/exec

Hi,

I think the developer team uses the "view" ClusterRole,
but view does not include the "pods/exec" permission.

I think developers need it.
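
A minimal sketch of an extra ClusterRole that could be bound to the developer team's group alongside view (the resource name here is hypothetical, not something the module creates):

# Grants exec into pods; exec is a "create" on the pods/exec subresource.
resource "kubernetes_cluster_role_v1" "pod_exec" {
  metadata {
    name = "pod-exec"
  }

  rule {
    api_groups = [""]
    resources  = ["pods/exec"]
    verbs      = ["create"]
  }
}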

Thanks,

Team Management to support different personas and features

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

The current team management module is not flexible enough to let a team create customized RBAC roles/IAM roles for different personas, so it is not very easy to use.

We would like the team management module to support additional IAM roles/RBAC roles, cross-account assume roles, and network policies bootstrapped for each namespace.

Describe the solution you would like

We would like the solution to let users pass parameters that enable the additional personas and features mentioned above.

Describe alternatives you have considered

N/A

Additional context

bug: Error namespace not found

When creating namespaced resources (resource quotas, limit ranges, roles, network policies, service accounts) and the namespace is not created first, you will encounter an error because the namespace is not found:

│ Error: namespaces "backend-frontend" not found
│
│   with module.spoke_cluster.module.app_teams["frontend"].kubernetes_role_binding_v1.this["backend-frontend"],
│   on .terraform/modules/spoke_cluster.app_teams/main.tf line 344, in resource "kubernetes_role_binding_v1" "this":
│  344: resource "kubernetes_role_binding_v1" "this" {

I think (but I'm not 100% sure) the root cause is a race condition: the namespaces are being created in parallel with the other resources.

We could add a depends_on, or we could iterate over the map kubernetes_namespace_v1.this instead of var.namespaces,
like in the network policy in this case:

resource "kubernetes_network_policy_v1" "this" {
  for_each = { for k, v in var.namespaces : k => v if try(v.create, true) && length(try(v.network_policy, {})) > 0 }
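
A minimal sketch of that second option, keeping the rest of the resource unchanged; keying the for_each off the namespace resource makes Terraform create the namespace first (attribute handling assumed from the snippet above):

resource "kubernetes_network_policy_v1" "this" {
  # Iterate over the created namespaces so each policy implicitly depends on
  # its namespace; the definition is still looked up from var.namespaces.
  for_each = {
    for k, v in kubernetes_namespace_v1.this : k => var.namespaces[k]
    if length(try(var.namespaces[k].network_policy, {})) > 0
  }

  # ... metadata/spec unchanged ...
}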

We could do a retry, but I don't see an apply_retry_count option for the kubernetes Terraform provider like the one in the kubectl Terraform provider.

Here is the example I was trying:

module "app_teams" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints-teams"

  for_each = {
    frontend = {}
    crystal  = {}
    nodejs   = {}
  }
  name = "app-team-${each.key}"


  users             = [data.aws_caller_identity.current.arn]
  cluster_arn       = module.eks.cluster_arn
  oidc_provider_arn = module.eks.oidc_provider_arn

  namespaces = {

    "backend-${each.key}" = {
      create_service_account = false

      labels = {
        appName     = "eks-teams-app",
        projectName = "project--eks-blueprints",
      }

      resource_quota = {
        hard = {
          "limits.cpu"      = "4",
          "limits.memory"   = "16Gi",
          "requests.cpu"    = "2",
          "requests.memory" = "4Gi",
          "pods"            = "20",
          "secrets"         = "20",
          "services"        = "20"
        }
      }
      limit_range = {
        limit = [
          {
            type = "Pod"
            max = {
              cpu    = "2"
              memory = "1Gi"
            }
          },
          {
            type = "Container"
            default = {
              cpu    = "500m"
              memory = "512Mi"
            }
            default_request = {
              cpu    = "100m"
              memory = "128Mi"
            }
          }
        ]
      }
    }
  }


  tags = local.tags
}

Unreadable module directory

This little snippet from the Terraform registry, with the latest version, will fail:

% cat main.tf
module "eks-blueprints-teams_example_complete" {
  source  = "aws-ia/eks-blueprints-teams/aws//examples/complete"
  version = "1.1.0"
}

Error:

Initializing the backend...
Initializing modules...
Downloading registry.terraform.io/aws-ia/eks-blueprints-teams/aws 1.1.0 for eks-blueprints-teams_example_complete...
- eks-blueprints-teams_example_complete in .terraform/modules/eks-blueprints-teams_example_complete/examples/complete
╷
│ Error: Unreadable module directory
│
│ The directory .terraform/modules/eks-blueprints-teams_example_complete/examples/complete could not be read. This is a bug in
│ Terraform and should be reported.
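
One possible workaround (untested here) is to pull the example straight from the GitHub source instead of the registry package; the git tag name is assumed to match the release:

module "eks-blueprints-teams_example_complete" {
  # Git source with a subdirectory; the ref pins the same release
  source = "github.com/aws-ia/terraform-aws-eks-blueprints-teams//examples/complete?ref=v1.1.0"
}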

Option to use an inline policy instead of a managed policy

Relevant aws_iam_role code

resource "aws_iam_role" "this" {
count = var.create_iam_role ? 1 : 0
name = var.iam_role_use_name_prefix ? null : local.iam_role_name
name_prefix = var.iam_role_use_name_prefix ? "${local.iam_role_name}-" : null
path = var.iam_role_path
description = var.iam_role_description
assume_role_policy = data.aws_iam_policy_document.this[0].json
max_session_duration = var.iam_role_max_session_duration
permissions_boundary = var.iam_role_permissions_boundary
force_detach_policies = true
tags = var.tags
}


Compare this, which creates a managed IAM policy that can accidentally be reused by another role:

resource "aws_iam_policy" "admin" {
count = var.create_iam_role && var.enable_admin ? 1 : 0
name = var.iam_role_use_name_prefix ? null : local.admin_policy_name
name_prefix = var.iam_role_use_name_prefix ? "${local.admin_policy_name}-" : null
path = var.iam_role_path
description = var.iam_role_description
policy = data.aws_iam_policy_document.admin[0].json
tags = var.tags
}
resource "aws_iam_role_policy_attachment" "admin" {
count = var.create_iam_role && var.enable_admin ? 1 : 0
policy_arn = aws_iam_policy.admin[0].arn
role = aws_iam_role.this[0].name
}

vs this example of an inline policy, which:

  • uses 2 fewer resources
  • cannot be accidentally reused by another role
  • does not need a globally unique name to avoid a name conflict
  • does not need to be tagged

resource "aws_iam_role" "this" {
  count = var.create_iam_role ? 1 : 0

  # truncated for brevity

  dynamic "inline_policy" {
    for_each = var.iam_role_use_inline_policy ? [1] : []

    statement {
      name = local.admin_policy_name

      policy = data.aws_iam_policy_document.admin[0].json
    }
  }
}

Related

[TEAM] - Allow Fargate Profiles for teams

Today, when we create Fargate profiles, customers have to use the top-level fargate_profiles parameter:

module "eks_blueprints" {
  source = ...
  ...
  fargate_profile = {
      # Providing compute for default namespace
      default = {
        fargate_profile_name = "default"
        fargate_profile_namespaces = [
          {
            namespace = "default"
        }]
        subnet_ids = module.vpc.private_subnets
      }
      # Providing compute for kube-system namespace where core addons reside
      kube_system = {
        fargate_profile_name = "kube-system"
        fargate_profile_namespaces = [
          {
            namespace = "kube-system"
        }]
        subnet_ids = module.vpc.private_subnets
      }
    }
}

It would be nice if fargate_profiles could be created for teams. This fits nicely into the "namespace as a service" model where multiple teams live on the same cluster but use their own Fargate profiles. It would also enable customers who only want the Teams functionality to create Fargate profiles without using the core module. I believe the interface would then look something like this:

application_teams = {
  team-blue = {
    "labels" = {
      "appName"     = "example",
      "projectName" = "example",
      "environment" = "example",
      "domain"      = "example",
      "uuid"        = "example",
    }
    "quota" = {
      "requests.cpu"    = "1000m",
      "requests.memory" = "4Gi",
      "limits.cpu"      = "2000m",
      "limits.memory"   = "8Gi",
      "pods"            = "10",
      "secrets"         = "10",
      "services"        = "10"
    }
    manifests_dir = "./manifests"
    # Below are examples of IAM users and roles
    users = [
      "arn:aws:iam::123456789012:user/blue-team-user",
      "arn:aws:iam::123456789012:role/blue-team-sso-iam-role"
    ]
    fargate_profile =  {
      fargate_profile_name = "team-blue"
      fargate_profile_namespaces = [
        {
          namespace = "team-blue"
      }]
      subnet_ids = var.subnets
    }
  }
}

How to use with blueprints

Hi,

I've just tried setting up a new team for my cluster, and the cluster role bindings etc. all seem to be set up as expected. However, unlike when setting up a platform team in the eks-blueprints module, no role seems to have been created in the AWS IAM console.

I tried to look at the example provided in this repository, and this section seems relevant. However, the example is given using the eks module rather than the blueprints one, so I'm not sure how to apply it in my case.

Is this module compatible with the eks-blueprints module, and if so, how should they be used together?
