provider-aws's People

Contributors

actuallytrent, ajaykangare, alexlast, bassam, bobh66, chlunde, displague, edgej, enderv, haarchri, hasheddan, ichekrygin, jbw976, kelvinwijaya, krishchow, lukeweber, mistermx, muvaf, negz, patelronak, pintonunes, sahil-lakhwani, schroeder-paul, smcavallo, stevendborrelli, suskin, tnthornton, turkenh, ulucinar, zonybob


provider-aws's Issues

Utilize IAM Roles for AWS Authentication when running inside of AWS

Is this a bug report or feature request?

  • Feature Request

What should the feature do:
Utilize IAM Roles to authenticate to the AWS API when available.

What is use case behind this feature:
As an engineer, I don't want to concern myself with the details of securing and rotating security credentials. Instead, I want to use the IAM Roles assigned to the nodes Crossplane is executing on to authenticate to AWS services. Optionally, I want to use kube2iam or kiam to manage which IAM role Crossplane has access to.

Environment:
Crossplane running inside of a Kubernetes cluster on AWS, with or without kube2iam/kiam installed

EKS Cluster credentials expiration

Credentials in EKS are based on generating a token that has a maximum lifetime of 15 minutes. We need to consider whether we should integrate via kubeconfig credentials instead of ClientConfig(token, CA).

The kubeconfig integration would carry the baggage of supporting the gcloud auth and aws-iam-authenticator binaries.

Alternatively, we could create long-lived service accounts for the same purpose, but this seems less ideal.

Pass Domain through on EKS Provision for Route53 Permissions

When we provision an EKS cluster, we create a Route53NodeInstancePolicy in our CloudFormation script that grants the node full access to administer Route53 records.
See: https://github.com/crossplaneio/crossplane/blob/3bc975537fe11b104779c0deac5d57ed8bf53bd2/pkg/clients/aws/eks/eks.go#L252

See the configuration notes here:
https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md

We should improve the security model by limiting access to the domain the cluster should operate on:

  1. Pass the domain name through from the claim.
  2. Resolve the domain name to the actual hosted zone ID.
  3. Pass the hosted zone ID to the CloudFormation script in the eks.go client to limit the scope of the permissions in the AWS role.
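Step 2 of the list above (resolving a domain name to a hosted zone ID) could be sketched as a pure helper, assuming the caller has already fetched the zone list (for example via Route53's ListHostedZonesByName). The zone names and IDs below are made up for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// zoneIDForDomain returns the hosted zone ID whose name matches the given
// domain, preferring the longest (most specific) match. Route53 zone names
// carry a trailing dot, so both sides are normalized before comparing.
func zoneIDForDomain(domain string, zones map[string]string) (string, bool) {
	domain = strings.TrimSuffix(domain, ".") + "."
	bestName, bestID := "", ""
	for name, id := range zones {
		n := strings.TrimSuffix(name, ".") + "."
		if (domain == n || strings.HasSuffix(domain, "."+n)) && len(n) > len(bestName) {
			bestName, bestID = n, id
		}
	}
	return bestID, bestID != ""
}

func main() {
	zones := map[string]string{"example.com.": "Z111", "sub.example.com.": "Z222"}
	id, _ := zoneIDForDomain("app.sub.example.com", zones)
	fmt.Println(id) // prints "Z222"
}
```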

ServiceAccount integration for AWS

What problem are you facing?

Currently we are unable to create RDS instances using Crossplane. Due to security restrictions we are restricted to using ServiceAccounts.

How could Crossplane help solve your problem?

Supporting ServiceAccounts via annotations, or some other mechanism, would avoid handling AWS credentials manually.

nodeGroupName and clusterControlPlaneSecurityGroup should be marked required

Is this a bug report or feature request? Bug Report

Deviation from expected behavior:
I expected to be able to create an EKSCluster without specifying a nodeGroupName or clusterControlPlaneSecurityGroup because they are not required by our CRD. When I attempt to create the cluster I see:

Status:
  Conditions:
    Last Transition Time:  2019-03-26T23:48:31Z
    Message:               
    Reason:                
    Status:                True
    Type:                  Creating
    Last Transition Time:  2019-03-27T00:02:47Z
    Message:               ValidationError: Parameters: [NodeGroupName, ClusterControlPlaneSecurityGroup] must have values
                           status code: 400, request id: a768a023-5023-11e9-91bf-ffa4977742d2
    Reason:                Failed to sync cluster state
    Status:                True
    Type:                  Failed

Expected behavior:
nodeGroupName and clusterControlPlaneSecurityGroup should remove the omitempty JSON tag so that they're marked as required in our generated CRD.

How to reproduce it (minimal and precise):

Environment:

$ kubectl -n crossplane-system describe deploy crossplane|grep Image
    Image:      crossplane/crossplane:v0.1.0-171.g3f13ae6

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

$ ./cluster/local/minikube.sh ssh
# ...
$ cat /etc/os-release 
NAME=Buildroot
VERSION=2018.05
ID=buildroot
VERSION_ID=2018.05
PRETTY_NAME="Buildroot 2018.05"

$ uname -a
Linux minikube 4.15.0 #1 SMP Fri Jan 18 22:39:33 UTC 2019 x86_64 GNU/Linux

Examples for resources without claims

What problem are you facing?

We have some resources that do not have a corresponding claim, such as Network, Subnetwork, etc. Only end-to-end tutorials expose them to users as ready-to-use YAML files.

How could Crossplane help solve your problem?

Add examples for those resources.

AWS S3 resources to v1beta1

What problem are you facing?

We would like a v1beta1 version of the S3 Bucket resource currently in storage.aws.crossplane.io/v1alpha3.

How could Crossplane help solve your problem?

Move networking and S3 resources to v1beta1 standards

Additional AWS managed resources

What problem are you facing?

We are gathering community feedback to help us prioritize development of additional AWS managed services and maturing of existing service implementations.

  • What are the most important AWS services for you? Please share your service list in the comments.
  • What are your use cases? This will help us understand how to best support your situation.
  • Would you be interested in contributing? If so, in which capacity? This could take the form of usage and early feedback, code contributions, and improved documentation.

Crossplane currently supports 58+ AWS API types, see https://doc.crds.dev/github.com/crossplane/provider-aws.


Please drop us a comment with a list of the most important AWS services for your use cases.

How could Crossplane help solve your problem?

We will be prioritizing updates and additional services in Crossplane based on feedback.

Related Issues

Up Next

  • Add AWS ElasticSearch Service as managed resource #238
  • Add AWS EMR Cluster as a managed resource #239
  • DocumentDB (with MongoDB compatibility) #268
  • Cache Subnet Group #169, #95
  • Add AWS Lambda Function as a custom resource #234
  • Add CloudFront as a managed resource #236
  • Add Auto Scaling Group as a managed resource #237

Parked for Design Review

  • Kinesis

rds: IsErrorAlreadyExists does not appear to check for all error messages

What happened?

Somehow my local Crossplane got into a weird state where it is not able to successfully reconcile a mysqlinstance. Here's an example of the status of the object:

  status:
    conditions:
    - lastTransitionTime: "2019-09-25T20:05:18Z"
      reason: Managed resource is being created
      status: "False"
      type: Ready
    - lastTransitionTime: "2019-09-25T21:49:28Z"
      message: "DBInstanceAlreadyExists: DB Instance already exists\n\tstatus code:
        400, request id: 910d5710-7374-4e4e-ae3c-37e75a950dbe"
      reason: Encountered an error during managed resource reconciliation
      status: "False"
      type: Synced

It looks as though there is an issue with the IsErrorAlreadyExists function where it is only checking one of several error codes that could be returned in the case that the resource already exists.

Here is a link to the specific error message that was being returned, and the full list of error messages as well. It looks like there is at least one other case.

When looking into this issue, it may make sense to also check for more instances of this class of problem: only one error code being handled when several should be. On quick inspection, the IsErrorNotFound function may have a similar issue. I did not check the logic for other resource types.

How can we reproduce it?

To reproduce this the way I ran into it: create an instance using the Crossplane Stacks Guide with the AWS option, then get the AWS Stack to try to create the instance again. I believe this can be done by running kubectl edit on the object and removing the status.InstanceName, if it has one.

What environment did it happen in?

Crossplane version: helm crossplane/alpha:

$ helm list crossplane
NAME            REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
crossplane      1               Tue Sep 24 16:27:50 2019        DEPLOYED        crossplane-0.3.0        0.3.0           crossplane-system

This is running on the Kubernetes from Docker for Mac.

RDS Modify call is made in each reconcile when ApplyModificationsImmediately is true

What happened?

When spec.forProvider.ApplyModificationsImmediately is true, the patch object always includes ApplyImmediately, because that field does not exist on the corresponding SDK object. Details noted here: https://github.com/crossplaneio/stack-aws/blob/master/pkg/clients/rds/rds.go#L444
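One possible fix is to exclude request-only options from the comparison that decides whether a Modify call is needed. A minimal sketch with hypothetical types (the real code diffs full RDS SDK structs):

```go
package main

import "fmt"

// modifyInput mirrors the rough shape of an RDS modify request:
// ApplyImmediately is a request option, not state that exists on the
// observed instance, so it must not count as a pending modification.
type modifyInput struct {
	AllocatedStorage int
	ApplyImmediately bool
}

// needsUpdate compares only fields that correspond to observable state,
// clearing request-only options on both sides first.
func needsUpdate(desired, observed modifyInput) bool {
	desired.ApplyImmediately = false
	observed.ApplyImmediately = false
	return desired != observed
}

func main() {
	desired := modifyInput{AllocatedStorage: 20, ApplyImmediately: true}
	observed := modifyInput{AllocatedStorage: 20}
	fmt.Println(needsUpdate(desired, observed)) // prints false
}
```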

How can we reproduce it?

ApplyModificationsImmediately is true by default, so just creating an RDS instance and putting a breakpoint on the Update call of the ExternalClient will show it.

What environment did it happen in?

Crossplane version: 0.4.0

RDSInstance cannot be deleted

What happened?

Statically created an RDSInstance that provisioned successfully and reached steady state. On deletion, AWS returned the error InvalidParameterCombination: No modifications were requested and refused to delete it. Manually deleting it in the AWS console resulted in successful deletion and clean-up of the Crossplane resource.

How can we reproduce it?

Create an RDSInstance with the following configuration:

apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: rdsmysql
  labels:
    example: "true"
spec:
  forProvider:
    dbInstanceClass: db.t2.small
    masterUsername: masteruser
    allocatedStorage: 20
    engine: mysql
  writeConnectionSecretsToNamespace: crossplane-system
  providerRef:
    name: aws-provider
  reclaimPolicy: Delete

This is the late-initialized spec:

spec:
  forProvider:
    allocatedStorage: 20
    autoMinorVersionUpgrade: true
    availabilityZone: us-west-2d
    backupRetentionPeriod: 0
    caCertificateIdentifier: rds-ca-2019
    copyTagsToSnapshot: false
    dbInstanceClass: db.t2.small
    dbSubnetGroupName: default
    deletionProtection: false
    enableIAMDatabaseAuthentication: false
    enablePerformanceInsights: false
    engine: mysql
    engineVersion: 5.7.22
    licenseModel: general-public-license
    masterUsername: masteruser
    monitoringInterval: 0
    multiAZ: false
    port: 0
    preferredBackupWindow: 09:45-10:15
    preferredMaintenanceWindow: wed:12:30-wed:13:00
    publiclyAccessible: true
    storageEncrypted: false
    storageType: gp2
    vpcSecurityGroupIds:
    - <redacted>
  providerRef:
    name: aws-provider
  reclaimPolicy: Delete

After it reaches condition Ready: True, delete it. Error surfaces:

- lastTransitionTime: "2020-02-25T16:48:18Z"
  message: "delete failed: cannot modify RDS instance: InvalidParameterCombination:
    No modifications were requested\n\tstatus code: 400, request id: 9414996b-db29-40a9-9c0a-862bde997a43"
  reason: Encountered an error during resource reconciliation
  status: "False"
  type: Synced

What environment did it happen in?

Crossplane version: v0.8.0
stack-aws version: v0.6.0
Kubernetes version: 1.14
Kubernetes distro: EKS

Use Go modules instead of dep

What problem are you facing?

Go modules should be used instead of dep

How could Crossplane help solve your problem?

add support for custom subnets in EKS (secondary cidr ranges)

What problem are you facing?

The current EKS implementation does not allow creating pods in an additional subnet, as described in https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html

How could Crossplane help solve your problem?

I would like crossplane to create EKS clusters with custom network/secondary cidr ranges. Ideally I'd like to configure them like this:

apiVersion: compute.aws.crossplane.io/v1alpha3
kind: EKSClusterClass
metadata:
  name: custom-network
  labels:
    aws: "true"
    custom-network: "true"
specTemplate:
  # [...]
  subnetIds:
    - subnet-08a6e42f696140da4
    - subnet-074d2b84fc0fba006
  customSubnetIds:
    - subnet-026c046bb4fbd1468
    - subnet-065d50facdfe89127

Benign ReplicationGroup reconciliation errors

What happened?

I noticed the following benign reconcile errors while trying out the new v1beta1 ReplicationGroup resource.

While creating:

    Last Transition Time:  2019-10-24T00:34:21Z
    Message:               cannot modify ElastiCache replication group: InvalidReplicationGroupState: Replication group must be in available state to modify.
                           status code: 400, request id: acf70fbf-1b09-4b26-8916-8b27f4817251
    Reason:                Encountered an error during managed resource reconciliation
    Status:                False
    Type:                  Synced

While deleting:

    Last Transition Time:  2019-10-24T00:48:22Z
    Message:               cannot delete ElastiCache replication group: InvalidReplicationGroupState: Replication group default-app-redis-dr2rp has status deleting which is not valid for deletion.
                           status code: 400, request id: b4d9f4d2-f278-4f5b-a837-514f23ad1cf5
    Reason:                Encountered an error during managed resource reconciliation
    Status:                False
    Type:                  Synced

These errors seem benign: the ReplicationGroup is created and then deleted just fine. We could probably avoid them by making the ExternalClient Update and Delete methods no-ops when the ReplicationGroup under reconciliation is observed to be creating or deleting.

Relates to #30.
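The proposed no-op behavior could look roughly like this; shouldActOn is an illustrative helper, and only the creating and deleting states are taken from the errors above (others could be added if they turn out to reject calls too):

```go
package main

import "fmt"

// transientStates lists ElastiCache replication group states during which
// Modify/Delete calls are rejected with InvalidReplicationGroupState.
var transientStates = map[string]bool{
	"creating": true,
	"deleting": true,
}

// shouldActOn reports whether the external client may issue an Update or
// Delete for a replication group in the given status. Returning false
// turns the call into a no-op, avoiding the benign error conditions.
func shouldActOn(status string) bool {
	return !transientStates[status]
}

func main() {
	fmt.Println(shouldActOn("creating"))  // prints false
	fmt.Println(shouldActOn("available")) // prints true
}
```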

How can we reproduce it?

Create and then delete a ReplicationGroup using the following claim and class:

---
apiVersion: cache.aws.crossplane.io/v1beta1
kind: ReplicationGroupClass
metadata:
  name: aws-redis-standard
  labels:
    example: "true"
specTemplate:
  writeConnectionSecretsToNamespace: crossplane-system
  providerRef:
    name: example
  reclaimPolicy: Delete
  forProvider:
    replicationGroupDescription: "An example replication group"
    applyModificationsImmediately: true
    engine: "redis"
    engineVersion: "3.2.4"
    cacheParameterGroupName: default.redis3.2.cluster.on
    cacheNodeType: cache.t2.micro
    automaticFailoverEnabled: true
    numNodeGroups: 2
    replicasPerNodeGroup: 2
---
apiVersion: cache.crossplane.io/v1alpha1
kind: RedisCluster
metadata:
  name: app-redis
spec:
  classSelector:
    matchLabels:
      example: "true"
  writeConnectionSecretToRef:
    name: redissecret
  engineVersion: "3.2"

What environment did it happen in?

Crossplane version:

AWS SDK resources are not object-based as we assume

What problem are you facing?

In the Crossplane API Patterns doc, high fidelity means representing the resource in its CRD as exactly as possible. The assumption is that the cloud provider SDK has one struct with properties, plus API calls to manipulate that representation in the cloud provider. However, where the main object is very lean and properties are set and read through different API calls and objects, this assumption leads to an excessive number of CRDs, which means too much implementation effort and a bad user experience, since you have to deal with a lot of resources.

Let's take the case of IAM. In the AWS console, you see five main objects: Group, User, Policy, Role, and Identity Provider.

A usual workflow is that you create a Role, create a few Policies (or use built-in ones), then attach them to the Role. You'd add tags to the Role or perform other operations in its properties tab. To do that in Crossplane, you deal with three CRs: IAMRole, IAMRolePolicyAttachment, and IAMPolicy (not implemented yet). In fact, there are quite a few actions like policy attachment that would seem to warrant a separate CRD but are really just actions on a resource. Basically, the SDK is shaped such that you don't have a single object as the source of truth that you can modify and update; instead there is a collection of calls that get and set those properties. Not for every property (there are properties on the main struct that you can work with directly), but in most cases, especially when two resources have a relation such as an attachment, there are separate structs and calls that perform the operation, and our assumption about what constitutes a CRD leads us to implement a CRD for each of those actions.

When you look at other providers, such as GCP, the shape is closer to our assumption. You have a ServiceAccount endpoint and would construct a CRD for it, whose SDK calls cover most of the cases, though there are separate calls for resource relations there too, like setIamPolicy. My sense is that this is more verbose in AWS than in GCP: the main structs are leaner in AWS, where you do most operations via dedicated API calls, each with its own struct.

There are two problems with all this:

  • We're drawn to implement too many separate resources because of how the SDK is shaped. @hasheddan was saying that we'd need 27 different resources to mirror the IAM group.
  • User experience suffers from too many resources. The providers' management consoles are not designed in the bottom-up fashion we're attempting; they are decoupled and represent higher-level objects together with all the actions you can take on them.

In the console you don't see an IAMRolePolicyAttachment resource in the main widget; what you have is a tab on the Roles page where you select and attach. In my opinion, this is a better user experience: as a user I don't care how you attach it or whether it needs a separate call or struct. I'd like to do it on the Role's own page so that I know what I'm doing; in our case, I'd like to do it under the spec of IAMRole by adding a reference to the Policy in a list of attached policies.

The same problem exists in Terraform, too, in some cases. For IAM, they seem to have implemented those relational resources. However, for the S3 bucket, they decided to embed all the little property structs, like logging and CORS rules, into the main S3 Bucket resource. It seems they either drew a line (an inter-resource relation gets a separate resource, while something belonging solely to one resource is just a property) or this is simply an inconsistency for historical reasons.

How could Crossplane help solve your problem?

I believe we need some level of sacrifice in mirroring the API's structs, mainly because those structs are not meant to be interacted with at a level as high as a user-facing CRD, as we can see from how the providers design their consoles. Terraform seems to have decided to implement inter-resource relation resources but keep property resources on the object itself. In my opinion, we should definitely embed those properties into the main CRD just as they do, but we can go a step further and embed relational resources, too, when one side of the relation is not affected by it. For example, when you attach a Policy to a Role, nothing changes on the Policy, so we could have spec.attachedPoliciesRefs under Role, where the user specifies which Policies to attach to that Role.

I know we don't feel good about designing a CRD that does not fully mirror the SDK, but that is holding us back in major ways: verbose UX and an excessive number of CRDs (and the effort to implement each). What I propose is that we try to mirror the UX the providers give their end users instead of mirroring what they give us through their SDK. For the IAM group, we'd have only five CRDs, as shown in the console, and implement the separate actions as fields on the spec instead of a new CRD for each action.

Remove unnecessary `aws` folder, as the repository is specific to `aws`

What happened?

After migrating the AWS resources from Crossplane core, they still live under an aws folder in multiple places.

In addition, the following folder names need to be changed accordingly

  • ./cmd/crossplane -> ./cmd/stack

How can we reproduce it?

Current folder structure

What environment did it happen in?

Crossplane version:

Implement integration tests

Part of crossplane/crossplane#1033

What problem are you facing?

Currently, all integration testing is being run in a manual ad-hoc manner. It is desirable to automate this process and run the tests on a more frequent basis.

How could Crossplane help solve your problem?

Initial implementation should use the framework developed to create tests for a single managed resource, as well as a Jenkins stage / separate pipeline to execute the test.

Implement Simple Resource Class Selection for AWS

What problem are you facing?

Dynamic provisioning is complicated. See crossplane/crossplane#926 for full context.

How could Crossplane help solve your problem?

Implement the patterns described in crossplane/crossplane#926 for the AWS Stack. Specifically:

  • Make providers, classes, and managed resources cluster scoped.
  • Match classes to claims using label selectors
  • Fall back to using a resource class annotated as the default.

This will depend on crossplane/crossplane#927 and crossplane/crossplane-runtime#48.

AWS WordPress example: resources cannot be cleaned up, because of orphan resources

What happened?

After installing the WordPress example, when cleaning up the resource connectivity resources, deleting some of them fails because they depend on resources created directly by the KubernetesApplication. For instance, the application automatically creates an ELB load balancer, which blocks the deletion of the VPC resource.

How can we reproduce it?

  • Follow the WordPress workload example and install it
  • Remove the resources by running kubectl delete -f <directory>

What environment did it happen in?

Crossplane version: v0.3


Update:

I confirmed that if the KubernetesApplication is deleted before deleting EKSCluster, this problem won't happen.

Adopt new external name feature

What problem are you facing?

The new version of crossplane-runtime introduced an external name annotation that is used as the source of truth for the resource's name identifier on the provider's side. By default, the managed reconciler takes care of propagating it from claim to managed resource and vice versa.

However, resources that don't get to choose their own name, like VPC in AWS, or that can't work with the new <namespace>-<name>-<5char random> string, should opt out of the ManagedNameAsExternalName initializer and handle setting the external name in their own way. This could mean supplying their own Initializer or calling meta.SetExternalName(Managed, string) in their ExternalClient calls.

See:
https://github.com/crossplaneio/crossplane/blob/master/design/one-pager-managed-resource-api-design.md#external-resource-name
crossplane/crossplane-runtime#45
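As a rough sketch of the opt-out path, the controller records the provider-assigned identifier itself after creation. The object type and setExternalName helper below are hypothetical stand-ins for the Kubernetes object metadata and crossplane-runtime's meta.SetExternalName; the annotation key is the real crossplane.io/external-name convention:

```go
package main

import "fmt"

// externalNameKey is the annotation under which Crossplane stores the
// external resource's name identifier.
const externalNameKey = "crossplane.io/external-name"

// object is a stand-in for a managed resource's metadata.
type object struct {
	annotations map[string]string
}

// setExternalName mimics meta.SetExternalName: it records the
// provider-assigned identifier on the managed resource.
func setExternalName(o *object, name string) {
	if o.annotations == nil {
		o.annotations = map[string]string{}
	}
	o.annotations[externalNameKey] = name
}

func main() {
	// A VPC cannot choose its name, so the controller sets the external
	// name from the ID AWS returned on creation.
	vpc := &object{}
	setExternalName(vpc, "vpc-01b02d8940effbbad")
	fmt.Println(vpc.annotations[externalNameKey]) // prints vpc-01b02d8940effbbad
}
```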

How could Crossplane help solve your problem?

This issue tracks the adoption of this feature in this stack. Please close it once all resources that use the managed reconciler have adopted the feature.

Support Backup or Retention period for AWS DB instances

Is this a bug report or feature request?
Feature Request

What should the feature do:
The feature should allow admins to let users specify a backup retention period for AWS RDS DB instances.

What is the use case behind this feature:
I would like the data stored on the DB instance to be backed up for recovery.

Environment:

AWS

Fill random available zones if no subnet availability zone is provided in the Subnet resource

Can we add random availability zones as defaults, based on the region selected in the EKSClusterClass and Provider, when no availabilityZone is provided in the Subnet resource?

For example, if the region in the Provider and EKSClusterClass is us-east-1, and the Subnet resource's availabilityZone field is empty or not present, it could be populated with us-east-1a or us-east-1b at random, drawn from the availability zones of that region.

Upsides

  • One less thing to think about if I don't care about the availability zones of a subnet

Downsides

  • Might lead to confusion

TDE Encryption support for RDS resource

What problem are you facing?

RDSInstance spec does not have a way of configuring TDE encryption.

How could Crossplane help solve your problem?

TDECredentialPassword and TDECredentialArn fields are needed, but since sensitive information on a CR is not secure, Crossplane needs to allow a secret to be used as input, something like TDECredentialSecretRef.

Expanding IAM support

What problem are you facing?

The current AWS IAM support includes two resources: IAMRole and IAMRolePolicyAttachment. A high-fidelity implementation of the AWS APIs would involve adding more resources.

TODO: User story for statically provisioning a user. IAMUser is used to add any user of S3 buckets. IAMPolicy for bringing existing policy references.

How could Crossplane help solve your problem?

An implementation mapping the APIs to their declarative resource counterparts. Evaluate the APIs to be exposed.

Add the following general resources now:

  • IAMPolicy
  • IAMUser
  • IAMUserPolicyAttachment

We support AttachRolePolicy as IAMRolePolicyAttachment, so the equivalent for IAMUser is IAMUserPolicyAttachment.
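Mirroring the shape of the existing IAMRolePolicyAttachment, a hypothetical IAMUserPolicyAttachment might look like:

```yaml
# Hypothetical resource: field names mirror IAMRolePolicyAttachment
# and are illustrative only.
apiVersion: identity.aws.crossplane.io/v1beta1
kind: IAMUserPolicyAttachment
metadata:
  name: example-user-s3-read
spec:
  forProvider:
    userName: example-user
    policyArn: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```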

We’ll also want, probably as a top priority, to support modeling IAM roles. Today we can attach an IAM role, but we can’t actually create one in Crossplane. What we have today may be sufficient because there are quite a few baked-in roles. As a next step, we would support managing roles, then support managing users, then support attaching roles to users.

We have decided to defer work on all other resources until we have a community use cases.

Related Issues

For a full inventory of APIs to resources see this doc (both mapped and unmapped)

Implement CacheSubnetGroup resource for ReplicationGroup to be created in a VPC

What problem are you facing?

A ReplicationGroup resource can be created in a VPC only if you create a cache subnet group and supply its name during creation, as opposed to EKS, where you create individual subnets and supply their IDs.

How could Crossplane help solve your problem?

It seems like CacheSubnetGroup is a logical resource in which you can either create new subnets during creation or choose from existing ones. That's a bit tricky in Crossplane, where every managed resource is independently managed. An implementation of this resource would probably look like Azure's resource groups, whose sole interface is to list/add/remove downstream resources.
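Under that model, a hypothetical CacheSubnetGroup managed resource would reference existing subnets rather than create them:

```yaml
# Hypothetical resource: apiVersion and field names are illustrative.
apiVersion: cache.aws.crossplane.io/v1alpha1
kind: CacheSubnetGroup
metadata:
  name: example-subnet-group
spec:
  forProvider:
    description: Subnets for the example ReplicationGroup
    # References existing Subnet resources by ID; a higher-fidelity
    # implementation could also resolve these from Subnet references.
    subnetIds:
      - subnet-0123456789abcdef0
      - subnet-0fedcba9876543210
```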

Fail to delete a VPC if an EKS cluster was created in it

What happened?

Created an EKS deployment, including the corresponding infrastructure resources, and tried to delete it again. It is not possible to delete the VPC from Crossplane; the deletion hangs.

How can we reproduce it?

  1. Create a VPC with subnets, a route table, an internet gateway, and an IAM role.
  2. Create and delete an EKS cluster in this VPC
  3. Delete the VPC and corresponding resources.
  4. Everything will be deleted except the VPC, deleting hangs
    Message:               delete failed: failed to delete the VPC resource: DependencyViolation: The vpc 'vpc-01b02d8940effbbad' has dependencies and cannot be deleted.
                           status code: 400, request id: d6e70c68-c3b0-478c-82d5-2759c4078dda
    Reason:                Encountered an error during resource reconciliation

Manual deletion from the AWS Console works fine, and the VPC object then disappears in Crossplane.

What environment did it happen in?

Crossplane version: 0.8.0
AWS Stack: master from yesterday

EKSCluster failed to create, and now cannot be deleted from Kubernetes

Is this a bug report or feature request? Bug Report

Deviation from expected behavior:
I attempted to create an EKS cluster using the below resource class and claim.

---
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: kubernetes
  namespace: example
  labels:
    app: example
spec:
  classReference:
    name: kubernetes-eks-example
    namespace: crossplane-system
---
apiVersion: core.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
  name: kubernetes-eks-example
  namespace: crossplane-system
  labels:
    app: example
parameters:
  region: us-west-2
  roleARN: REDACTED
  vpcId: REDACTED
  subnetIds: subnet-REDACTED
  securityGroupIds: REDACTED
  workerKeyName: REDACTED
  workerNodeInstanceType: m3.medium
provisioner: ekscluster.compute.aws.crossplane.io/v1alpha1
providerRef:
  name: aws-example
reclaimPolicy: Delete

This resulted in an error:

Status:
  Conditions:                                                                                                                        
    Last Transition Time:  2019-03-26T23:17:11Z                     
    Message:               InvalidParameterException: Subnets specified must be in at least two different AZs
                           status code: 400, request id: 4814d859-501d-11e9-b671-2fdcdcd2cd1d
    Reason:                Failed to create new cluster
    Status:                True                          
    Type:                  Failed   

I attempted to delete the ekscluster in order to rectify my mistake, but encountered the following error:

$ kubectl -n gitlab delete kubernetescluster kubernetes-example
$ kubectl -n crossplane-system describe ekscluster kubernetes-eks-example
# ...
Status:
  Conditions:
    Last Transition Time:  2019-03-26T23:27:55Z
    Message:               Master Delete Error: AccessDeniedException: Unable to determine service/operation name to be authorized
                           status code: 403, request id: c8ca907b-501e-11e9-b126-bf9e02431975
    Reason:                Failed to delete cluster
    Status:                True
    Type:                  Failed

When I look in my AWS console I see no Kubernetes clusters to delete. I presume my provider credentials have the correct permissions given that my creation request failed for reasons other than authorization. My AWS provider is configured to use the access token of a user in the administrator AWS group, which has policy arn:aws:iam::aws:policy/AdministratorAccess.

Expected behavior:
Deleting the ekscluster from Kubernetes results in it being deleted from AWS.

Environment:

$ kubectl -n crossplane-system describe deploy crossplane|grep Image
    Image:      crossplane/crossplane:v0.1.0-171.g3f13ae6

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

$ ./cluster/local/minikube.sh ssh
# ...
$ cat /etc/os-release 
NAME=Buildroot
VERSION=2018.05
ID=buildroot
VERSION_ID=2018.05
PRETTY_NAME="Buildroot 2018.05"

$ uname -a
Linux minikube 4.15.0 #1 SMP Fri Jan 18 22:39:33 UTC 2019 x86_64 GNU/Linux

Dynamically list VPCs for a given Region in AWS

What problem are you facing?

Consider someone creating a UI around Crossplane for AWS resources: VPCs and Regions don't exist as explicit types, and show up in individual resource classes as strings. Further examples of things we might consider modeling are security groups, RDS subnet groups, and more. This concept probably has overlap with the design being considered in crossplane/crossplane#564

How could Crossplane help solve your problem?

Currently, when constructing AWS resource classes, a user has to enter a string for the VPC ID, and the VPC must also exist in the region you've configured, so the region is really implied once you've selected a VPC. It would be great if we could model the VPCs available in a given region and remove the user's ability to misconfigure. In an RDS resource class, one might select a region, then display a list of VPCs, pre-selecting the default.

The following sketch suggests useful fields and labels, although more consideration should be given to the exact labels and fields of these objects based on use cases.

Region:
  labels:
    region=us-west-1
    providerName=aws-creds-1
    provider=aws
  name: aws::us-west-1
  region: us-west-1
  provider: aws
  providerReference: aws-creds-1

AWSVPC:
  labels:
    region=us-west-1
    providerName=aws-creds-1
    provider=aws
  name: vpc-id-123456
  vpcID: vpc-id-123456
  isDefault: false
  subnets: xxx.xxx.xxx.xxx/xxx
  region: aws::us-west-1

AWSVPC:
  labels:
    region=us-west-1
    providerName=aws-creds-1
    provider=aws
  name: vpc-id-567890
  vpcID: vpc-id-567890
  isDefault: true
  subnets: xxx.xxx.xxx.xxx/xxx
  region: aws::us-west-1

ReplicationGroup does not have any cross-resource references implemented

What problem are you facing?

While RDS and EKS managed resources support cross-resource referencing, ReplicationGroup does not. For example, the SecurityGroupIDs field could refer to the SecurityGroup resource type that we have.

How could Crossplane help solve your problem?

Implement cross-resource references for ReplicationGroup parameters.
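Following the referencing pattern already used by RDS and EKS, the spec could accept reference fields alongside the raw IDs; the securityGroupIdRefs field below is hypothetical:

```yaml
apiVersion: cache.aws.crossplane.io/v1beta1
kind: ReplicationGroup
metadata:
  name: example
spec:
  forProvider:
    engine: redis
    cacheNodeType: cache.t2.micro
    # Hypothetical: resolve SecurityGroupIDs from SecurityGroup
    # managed resources instead of hard-coding sg-... IDs.
    securityGroupIdRefs:
      - name: example-security-group
```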

Make all controllers use the shared managed reconciler pattern

What problem are you facing?

Some of the earlier controllers for AWS are still using the non-managed reconciler of Kubebuilder (e.g. EKSCluster).

How could Crossplane help solve your problem?

All such controllers need to use the shared managed reconciler pattern.

Stack binary should be named "stack-aws"

What happened?

The entrypoint binary for stack-aws appears to be named crossplane. This breaks at least the make run command, which expects the output binary to be the same as the project name.

How can we reproduce it?

make run

What environment did it happen in?

Crossplane version:

Cannot delete S3 bucket after creation fails with name conflict

Is this a bug report or feature request? Bug Report

Deviation from expected behavior:
The S3 bucket namespace is global. If I create an S3Bucket that attempts to use a bucket name someone else has already taken, I get the following error:

Status:                                 
  Conditions:              
    Last Transition Time:  2019-03-27T00:38:40Z            
    Message:               BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
                           status code: 409, request id: 98896AAE32A152FC, host id: dD1d929ZBfgiAFCHGXebOv7B3NVgPP7sHXfmEyghq6Okgo5z6pYtSfEB9TtTKEsqtia3Ab5EDI0=
    Reason:                Failed to create resource
    Status:                True         
    Type:                  Failed

If I attempt to rectify this by deleting the S3Bucket in Kubernetes I cannot, because:

Status:
  Conditions:
    Last Transition Time:  2019-03-27T00:40:05Z
    Message:               BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region                                                                          
                           status code: 301, request id: , host id:
    Reason:                Failed to delete resource
    Status:                True
    Type:                  Failed

Expected behavior:
If Crossplane never actually created an S3 bucket for me, I should be able to delete the associated S3Bucket resource.

Environment:

$ kubectl -n crossplane-system describe deploy crossplane|grep Image
    Image:      crossplane/crossplane:v0.1.0-171.g3f13ae6

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

$ ./cluster/local/minikube.sh ssh
# ...
$ cat /etc/os-release 
NAME=Buildroot
VERSION=2018.05
ID=buildroot
VERSION_ID=2018.05
PRETTY_NAME="Buildroot 2018.05"

$ uname -a
Linux minikube 4.15.0 #1 SMP Fri Jan 18 22:39:33 UTC 2019 x86_64 GNU/Linux

Replace Web Identity Token logic on next release of aws-sdk-go-v2

What problem are you facing?

We currently assume IAM roles for authentication to the AWS API using our own custom logic to get Web Identity Tokens from the filesystem of the controller container. The v1 version of aws-sdk-go supported this use case via an identity provider, but the v2 version did not, so we were forced to use our own implementation in the short term.

How could Crossplane help solve your problem?

This logic has now been included in aws-sdk-go-v2 as part of aws/aws-sdk-go-v2#488. It should be available in the next release (v0.20.0), and we should move to using it instead of our own implementation.

Resources requeue in a tight loop when AWS errors are returned consistently

What happened?

When (at least some) AWS managed resources encounter errors they requeue in a tight loop because:

  • They include a unique request ID in the error string.
  • A new reconcile is queued implicitly for any update to the managed resource's status.
  • We update the managed resource's status whenever we encounter a reconcile error that was not identical to the most recent reconcile error (or lack thereof).
failed to delete the VPC resource: DependencyViolation: The vpc 'vpc-0dcc11278872d11b7' has dependencies and cannot be deleted.
        status code: 400, request id: f6f7e648-cbd1-40f3-8df4-5c4077c0c595
failed to delete the VPC resource: DependencyViolation: The vpc 'vpc-0dcc11278872d11b7' has dependencies and cannot be deleted.
        status code: 400, request id: c2f56437-66a8-48f2-9316-713f457004ed

The quick fix here would be to wrap these errors in a way that strips out the unique request ID.

How can we reproduce it?

  1. Create an AWS VPC and create a subnet in said VPC.
  2. Delete the VPC (but not the subnet) via Crossplane. The delete will fail due to the subnet still existing, and the deleting resource will go into a tight requeue loop.

What environment did it happen in?

Crossplane version:
