crossplane-contrib / provider-aws
Crossplane AWS Provider
License: Apache License 2.0
Would like a v1beta1 version of networking and VPC resources
Move networking and VPC resources to v1beta1 standards
Would like a v1beta1 version of DBSubnetGroup resources
Update it to follow v1beta1 standards
Is this a bug report or feature request?
What should the feature do:
Utilize IAM Roles to authenticate to the AWS API when available.
What is use case behind this feature:
As an engineer, I don't wish to concern myself with the details of securing and rotating security credentials. I therefore want to utilize IAM Roles assigned to the nodes that Crossplane is executing on to authenticate to AWS services. Optionally, I want to utilize kube2iam or kiam to manage which IAM role Crossplane has access to.
Environment:
Crossplane running inside of a Kubernetes cluster on AWS, with or without kube2iam/kiam installed
Credentials in EKS are based on creating a token that has a max life of 15 minutes. We need to consider whether we should be integrating with kubeconfig credentials instead of ClientConfig(token, CA).
The kubeconfig integration would carry the baggage of supporting the gcloud auth and aws-iam-authenticator binaries.
Alternatively, we could likely create service accounts that were long lived for the same purpose, but this seems less ideal.
When we provision an EKS cluster, we create a Route53NodeInstancePolicy in our cloudformation script that allows full access from the node to administer the route53 records.
See: https://github.com/crossplaneio/crossplane/blob/3bc975537fe11b104779c0deac5d57ed8bf53bd2/pkg/clients/aws/eks/eks.go#L252
See the configuration notes here:
https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md
We should improve the security model by limiting access to the domain that the cluster should operate on:
Currently we are unable to create RDS instances using Crossplane. Due to security restrictions we are limited to using ServiceAccounts.
The ability to support ServiceAccounts via annotations or some other mechanism would avoid having to handle AWS credentials manually.
Is this a bug report or feature request? Bug Report
Deviation from expected behavior:
I expected to be able to create an EKSCluster without specifying a nodeGroupName or clusterControlPlaneSecurityGroup, because they are not required by our CRD. When I attempt to create the cluster I see:
Status:
  Conditions:
    Last Transition Time:  2019-03-26T23:48:31Z
    Message:
    Reason:
    Status:                True
    Type:                  Creating
    Last Transition Time:  2019-03-27T00:02:47Z
    Message:               ValidationError: Parameters: [NodeGroupName, ClusterControlPlaneSecurityGroup] must have values
                           status code: 400, request id: a768a023-5023-11e9-91bf-ffa4977742d2
    Reason:                Failed to sync cluster state
    Status:                True
    Type:                  Failed
Expected behavior:
nodeGroupName and clusterControlPlaneSecurityGroup should drop the omitempty JSON tag so that they're marked as required in our generated CRD.
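The expected change can be sketched as follows. The struct and field names here are illustrative, not the provider's actual source; the point is how dropping omitempty changes the JSON tag that CRD generation consumes to decide whether a field is required.

```go
package main

import (
	"fmt"
	"reflect"
)

// specBefore models the current state: omitempty makes the field optional
// in the generated CRD schema. Names are hypothetical.
type specBefore struct {
	NodeGroupName string `json:"nodeGroupName,omitempty"`
}

// specAfter models the proposed state: without omitempty the CRD generator
// marks the field as required.
type specAfter struct {
	NodeGroupName string `json:"nodeGroupName"`
}

// jsonTag returns the json struct tag of the named field.
func jsonTag(v interface{}, field string) string {
	f, _ := reflect.TypeOf(v).FieldByName(field)
	return f.Tag.Get("json")
}

func main() {
	fmt.Println(jsonTag(specBefore{}, "NodeGroupName")) // nodeGroupName,omitempty
	fmt.Println(jsonTag(specAfter{}, "NodeGroupName"))  // nodeGroupName
}
```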
How to reproduce it (minimal and precise):
Environment:
$ kubectl -n crossplane-system describe deploy crossplane|grep Image
Image: crossplane/crossplane:v0.1.0-171.g3f13ae6
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
$ ./cluster/local/minikube.sh ssh
# ...
$ cat /etc/os-release
NAME=Buildroot
VERSION=2018.05
ID=buildroot
VERSION_ID=2018.05
PRETTY_NAME="Buildroot 2018.05"
$ uname -a
Linux minikube 4.15.0 #1 SMP Fri Jan 18 22:39:33 UTC 2019 x86_64 GNU/Linux
Since MachineInstance was implemented in crossplane/crossplane#942 in support of crossplane/crossplane#286, https://github.com/packethost/stack-packet remains the only provider to implement this claim kind.
EC2 VM Instances should be supported for abstracting MachineInstance resources between providers.
We have some resources that do not have corresponding claims, such as network, subnetwork, etc. Only end-to-end tutorials expose them to users as ready-to-use YAML files.
Add examples for those resources.
Support updates of the CloudFormation worker node configuration. Currently, changes to the CloudFormation worker node spec will not result in an update to the CloudFormation stack.
S3 buckets are currently older style low fidelity API objects with a previous generation of controller code. They should be updated to v1beta1 quality per https://github.com/crossplaneio/crossplane/blob/7420e6/design/one-pager-managed-resource-api-design.md.
Would like a v1beta1 version of the S3 bucket resources in storage.aws.crossplane.io/v1alpha3
Move S3 resources to v1beta1 standards
We are gathering community feedback to help us prioritize development of additional AWS managed services and maturing of existing service implementations.
Crossplane currently supports 58+ AWS API types, see https://doc.crds.dev/github.com/crossplane/provider-aws.
Please drop us a comment with a list of the most important AWS services for your use cases.
We will be prioritizing updates and additional services in Crossplane based on feedback.
Somehow my local Crossplane got into a weird state where it is not able to successfully reconcile a mysqlinstance. Here's an example of the status of the object:
status:
  conditions:
  - lastTransitionTime: "2019-09-25T20:05:18Z"
    reason: Managed resource is being created
    status: "False"
    type: Ready
  - lastTransitionTime: "2019-09-25T21:49:28Z"
    message: "DBInstanceAlreadyExists: DB Instance already exists\n\tstatus code:
      400, request id: 910d5710-7374-4e4e-ae3c-37e75a950dbe"
    reason: Encountered an error during managed resource reconciliation
    status: "False"
    type: Synced
It looks as though there is an issue with the IsErrorAlreadyExists function where it is only checking one of several error codes that could be returned in the case that the resource already exists.
Here is a link to the specific error message that was being returned, and the full list of error messages as well. It looks like there is at least one other case.
When looking into this issue, it may make sense to also check for more instances of this class of issue (the class is that only one error code is handled, when several should be handled). On quick inspection, it looks as though the IsErrorNotFound function may also have a similar issue. I did not check the logic for other resource types.
The gist of reproducing this the way that I ran into it is to create an instance using the Crossplane Stacks Guide with the AWS option, and then to get the AWS Stack to try to create the instance again. I believe this could be done by running kubectl edit on the object and removing the status.InstanceName, if it has one.
Crossplane version: helm crossplane/alpha:
$ helm list crossplane
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
crossplane 1 Tue Sep 24 16:27:50 2019 DEPLOYED crossplane-0.3.0 0.3.0 crossplane-system
This is running on the Kubernetes from Docker for Mac.
When spec.forProvider.ApplyModificationsImmediately is true, the patch object includes ApplyImmediately all the time, because that field doesn't exist in the corresponding SDK object. Details are noted here: https://github.com/crossplaneio/stack-aws/blob/master/pkg/clients/rds/rds.go#L444
ApplyModificationsImmediately is true by default, so just creating an RDS instance and putting a breakpoint on the Update call of the ExternalClient will show this.
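One possible guard can be sketched with a hypothetical patchIsEmpty helper over a generic patch map (the actual patch type in rds.go differs): a patch carrying nothing but the ApplyImmediately flag should be treated as "no changes", so no spurious ModifyDBInstance call is issued.

```go
package main

import "fmt"

// patchIsEmpty reports whether the patch carries any real modification.
// ApplyImmediately alone is only a flag about how to apply changes, so a
// patch containing nothing else should be considered empty. Field names
// are illustrative.
func patchIsEmpty(patch map[string]interface{}) bool {
	for k := range patch {
		if k != "ApplyImmediately" {
			return false // a real modification is present
		}
	}
	return true
}

func main() {
	fmt.Println(patchIsEmpty(map[string]interface{}{"ApplyImmediately": true}))                         // true
	fmt.Println(patchIsEmpty(map[string]interface{}{"ApplyImmediately": true, "AllocatedStorage": 30})) // false
}
```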
Crossplane version: 0.4.0
Statically created an RDSInstance that was provisioned successfully and reached steady state. On deletion, AWS returned the error InvalidParameterCombination: No modifications were requested and refused to delete. Manually deleting in the AWS console resulted in successful deletion and clean-up of the Crossplane resource.
Create an RDSInstance with the following configuration:
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: rdsmysql
  labels:
    example: "true"
spec:
  forProvider:
    dbInstanceClass: db.t2.small
    masterUsername: masteruser
    allocatedStorage: 20
    engine: mysql
  writeConnectionSecretsToNamespace: crossplane-system
  providerRef:
    name: aws-provider
  reclaimPolicy: Delete
This is the late-initialized spec:
spec:
  forProvider:
    allocatedStorage: 20
    autoMinorVersionUpgrade: true
    availabilityZone: us-west-2d
    backupRetentionPeriod: 0
    caCertificateIdentifier: rds-ca-2019
    copyTagsToSnapshot: false
    dbInstanceClass: db.t2.small
    dbSubnetGroupName: default
    deletionProtection: false
    enableIAMDatabaseAuthentication: false
    enablePerformanceInsights: false
    engine: mysql
    engineVersion: 5.7.22
    licenseModel: general-public-license
    masterUsername: masteruser
    monitoringInterval: 0
    multiAZ: false
    port: 0
    preferredBackupWindow: 09:45-10:15
    preferredMaintenanceWindow: wed:12:30-wed:13:00
    publiclyAccessible: true
    storageEncrypted: false
    storageType: gp2
    vpcSecurityGroupIds:
    - <redacted>
  providerRef:
    name: aws-provider
  reclaimPolicy: Delete
After it reaches condition Ready: True, delete it. The following error surfaces:
- lastTransitionTime: "2020-02-25T16:48:18Z"
  message: "delete failed: cannot modify RDS instance: InvalidParameterCombination:
    No modifications were requested\n\tstatus code: 400, request id: 9414996b-db29-40a9-9c0a-862bde997a43"
  reason: Encountered an error during resource reconciliation
  status: "False"
  type: Synced
Crossplane version: v0.8.0
stack-aws version: v0.6.0
Kubernetes version: 1.14
Kubernetes distro: EKS
Go modules should be used instead of dep
The current EKS implementation does not allow creating pods in an additional subnet, as described in https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html
I would like crossplane to create EKS clusters with custom network/secondary cidr ranges. Ideally I'd like to configure them like this:
apiVersion: compute.aws.crossplane.io/v1alpha3
kind: EKSClusterClass
metadata:
  name: custom-network
  labels:
    aws: "true"
    custom-network: "true"
specTemplate:
  # [...]
  subnetIds:
  - subnet-08a6e42f696140da4
  - subnet-074d2b84fc0fba006
  customSubnetIds:
  - subnet-026c046bb4fbd1468
  - subnet-065d50facdfe89127
I noticed the following benign reconcile errors while trying out the new v1beta1 ReplicationGroup resource.
While creating:
Last Transition Time: 2019-10-24T00:34:21Z
Message: cannot modify ElastiCache replication group: InvalidReplicationGroupState: Replication group must be in available state to modify.
status code: 400, request id: acf70fbf-1b09-4b26-8916-8b27f4817251
Reason: Encountered an error during managed resource reconciliation
Status: False
Type: Synced
While deleting:
Last Transition Time: 2019-10-24T00:48:22Z
Message: cannot delete ElastiCache replication group: InvalidReplicationGroupState: Replication group default-app-redis-dr2rp has status deleting which is not valid for deletion.
status code: 400, request id: b4d9f4d2-f278-4f5b-a837-514f23ad1cf5
Reason: Encountered an error during managed resource reconciliation
Status: False
Type: Synced
These errors seem benign - the ReplicationGroup is created and then deleted just fine. We could probably avoid them by having the ExternalClient Update and Delete methods be no-ops when the ReplicationGroup under reconciliation is observed to be creating or deleting.
Relates to #30.
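The proposed no-op behavior could hinge on a small guard like this sketch; the helper name is hypothetical, and the status strings are the ones visible in the messages above:

```go
package main

import "fmt"

// inTransitionalState reports whether the observed ReplicationGroup status
// means the group cannot currently be modified or deleted. When it returns
// true, Update and Delete would return early instead of calling the
// ElastiCache API and surfacing InvalidReplicationGroupState.
func inTransitionalState(status string) bool {
	switch status {
	case "creating", "deleting":
		return true
	}
	return false
}

func main() {
	fmt.Println(inTransitionalState("creating"))  // true
	fmt.Println(inTransitionalState("available")) // false
}
```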
Create and then delete a ReplicationGroup using the following claim and class:
---
apiVersion: cache.aws.crossplane.io/v1beta1
kind: ReplicationGroupClass
metadata:
  name: aws-redis-standard
  labels:
    example: "true"
specTemplate:
  writeConnectionSecretsToNamespace: crossplane-system
  providerRef:
    name: example
  reclaimPolicy: Delete
  forProvider:
    replicationGroupDescription: "An example replication group"
    applyModificationsImmediately: true
    engine: "redis"
    engineVersion: "3.2.4"
    cacheParameterGroupName: default.redis3.2.cluster.on
    cacheNodeType: cache.t2.micro
    automaticFailoverEnabled: true
    numNodeGroups: 2
    replicasPerNodeGroup: 2
---
apiVersion: cache.crossplane.io/v1alpha1
kind: RedisCluster
metadata:
  name: app-redis
spec:
  classSelector:
    matchLabels:
      example: "true"
  writeConnectionSecretToRef:
    name: redissecret
  engineVersion: "3.2"
Crossplane version:
In the Crossplane API Patterns doc, high fidelity refers to representing the resource in its CRD as exactly as possible. The assumption is that the cloud provider SDK has one struct with properties, plus API calls to manipulate that representation in the cloud provider. However, in cases where the main object is very lean and properties are set/get with different API calls and objects, this assumption leads to an excessive number of CRDs, resulting in too much implementation effort and bad UX, since you have to deal with a lot of resources.
Let's take the case of IAM. In the AWS console, you see 5 main objects: Group, User, Policy, Role, and Identity Provider.
A usual workflow is that you create a Role, create a few Policies (or use built-in ones), then attach them to the Role. You'd add tags to the Role or do other operations on it in its properties tab. If you'd like to do that in Crossplane, you deal with 3 CRs: IAMRole, IAMRolePolicyAttachment, and IAMPolicy (not implemented yet). In fact, there are quite a few actions, like policy attachment, that would seem to warrant a separate CRD but are merely actions on the resource. Basically, the SDK is implemented in such a way that you don't actually have a source-of-truth object that you can modify/update for changes, but rather a bunch of calls with which you set/get those properties. It's not all of the properties; there are properties on the main struct that you can work with, but in most cases, especially when two resources have a relation like attachment, there are other structs and calls that perform the operation, and our assumption about what constitutes a CRD causes us to implement a CRD for each of those actions.
When you look at other providers, such as GCP, it's closer to our assumption. You have a ServiceAccount endpoint and you'd construct a CRD for that, whose calls in the SDK would cover most of the cases. Though you have separate calls for resource relations here, too, like setIamPolicy. My impression is that this is more verbose in AWS compared to GCP; the main structs are leaner in AWS, where you do most of the operations via dedicated API calls that have their own structs.
There are two problems with all this:
In the console, you don't see a resource IAMRolePolicyAttachment in the main widget; what you have is a tab on the Roles page where you select and attach. In my opinion, this is a better user experience; as a user I don't care how you attach it or whether you need a separate call/struct for this. I would like to do it on the Role's own page so that I know what I'm doing; in our case, I'd like to do it under the spec of IAMRole by adding a reference to the Policy to the existing list of attached policies.
The same problem exists in Terraform, too, in some cases. For IAM, they seem to have implemented those relational resources. However, if you look at the S3 bucket, they decided to embed all of these little property structs into the main S3 Bucket resource, like logging, cors rule, etc. It seems like they either drew a line (if it is an inter-resource relation, then a separate resource, but if it belongs solely to one resource, then it is just a property) or this is just an inconsistency due to historical reasons.
I believe we need some level of sacrifice here in terms of mirroring the API's structs, and the main reason is that those structs are not meant to be interacted with at a level as high as a user-facing CRD, as we can see from how they designed their consoles. Terraform seems to have made the decision to implement inter-resource relation resources but keep the property resources on the object itself. In my opinion, we should definitely embed those properties in the main CRD just like they do, but we can go a step further and embed relational resources, too, when one side of the relation is not affected by the relation. For example, when you attach a Policy to a Role, there is no change on the Policy, so we can actually have spec.attachedPoliciesRefs under Role, where the user can specify which Policies they would like to attach to that Role.
I know that we don't feel good about designing CRDs in a way that doesn't fully mirror the SDK, but that's actually holding us back in major ways, like verbose UX and an excessive number of CRDs (and the effort to implement each). What I propose is that we try to mirror the UX that providers offer their end users instead of mirroring what they provide to us through their SDK. In the IAM group, we'd have only 5 different CRDs, as shown in the console, and implement the separate actions as fields on the spec instead of implementing a new CRD for each action.
After migrating the AWS resources from Crossplane core, they are still under an aws folder in multiple places:
In addition, the following folder names need to be changed accordingly
Current folder structure
Crossplane version:
Part of crossplane/crossplane#1033
Currently, all integration testing is being run in a manual ad-hoc manner. It is desirable to automate this process and run the tests on a more frequent basis.
Initial implementation should use the framework developed to create tests for a single managed resource, as well as a Jenkins stage / separate pipeline to execute the test.
Dynamic provisioning is complicated. See crossplane/crossplane#926 for full context.
Implement the patterns described in crossplane/crossplane#926 for the AWS Stack. Specifically:
This will depend on crossplane/crossplane#927 and crossplane/crossplane-runtime#48.
After installing the WordPress example, when cleaning up the resource connectivity resources, deleting some resources fails because they depend on resources that are created directly by the Kubernetes Application. For instance, the application automatically creates an ELB load balancer, which blocks the deletion of the VPC resource.
kubectl delete -f <directory>
Crossplane version: v0.3
I confirmed that if the KubernetesApplication is deleted before deleting the EKSCluster, this problem won't happen.
The new version of crossplane-runtime introduced an external name annotation that will be used as the source of truth for the name identifier on the provider's systems. By default, the managed reconciler takes care of the propagation logic from claim to managed resource and vice versa.
However, resources that don't get to choose their name, like VPC in AWS, or ones that can't work with the new <namespace>-<name>-<5char random> string, should opt out of the ManagedNameAsExternalName initializer and handle setting the external name in their own way. This could be done either by supplying their own Initializer or by calling meta.SetExternalName(Managed, string) in their ExternalClient.
See:
https://github.com/crossplaneio/crossplane/blob/master/design/one-pager-managed-resource-api-design.md#external-resource-name
crossplane/crossplane-runtime#45
This issue tracks the adoption of this feature in this stack. Please close it as soon as all resources that use the managed reconciler adopt the feature.
The EKSCluster resource is still v1alpha1.
Implement v1beta1 version with the following patterns: https://github.com/crossplane/crossplane/blob/master/design/one-pager-managed-resource-api-design.md
If you don't set nodeAutoScalingGroupMinSize, nodeAutoScalingGroupMaxSize, or nodeVolumeSize, they default to 1, 3, and 20 respectively, but those values aren't written back to the spec.
They should be set in the spec correctly.
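The expected behavior is classic late initialization: when the user left a field unset, copy the provider-observed default back into the spec. A minimal sketch with a hypothetical helper (the provider's real helpers may differ in name and shape):

```go
package main

import "fmt"

// lateInitInt64 returns the spec value if the user set it, otherwise the
// value observed from the provider, so that what AWS actually applied is
// reflected in the resource's spec.
func lateInitInt64(spec *int64, observed int64) *int64 {
	if spec != nil {
		return spec
	}
	return &observed
}

func main() {
	var minSize *int64                  // user did not set nodeAutoScalingGroupMinSize
	minSize = lateInitInt64(minSize, 1) // AWS defaulted it to 1
	fmt.Println(*minSize)               // 1
}
```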
Is this a bug report or feature request?
Feature Request
What should the feature do:
The feature should allow the admins to let users specify a backup/retention period for AWS RDS DB instances.
What is use case behind this feature:
I would like to backup my data stored on the DB instance for recovery.
Environment:
AWS
When creating an S3 bucket, I'd like it to be created with certain tags to help identify the owner, when it was created, the department, cost center, etc.
It'd be useful if I could specify tags in the YAML I use when claiming an S3 bucket. Maybe in https://crossplaneio.github.io/docs/v0.7/api/crossplaneio/stack-aws/storage-aws-crossplane-io-v1alpha3.html#S3BucketParameters ?
Can we add random availability zones as defaults, based on the region selected in EKSClusterClass and Provider, if no availabilityZone is provided in the Subnet resource?
For example, if the region in Provider and EKSClusterClass is us-east-1, the Subnet resource's availabilityZone can be populated with us-east-1a or us-east-1b, chosen randomly from the availability zones for the region, whenever the availabilityZone field is empty or not present.
RDSInstance spec does not have a way of configuring TDE encryption.
TDECredentialPassword and TDECredentialArn fields are needed, but since sensitive information on a CR is not secure, Crossplane needs to allow a secret to be used as input. Something like TDECredentialSecretRef.
The current AWS IAM support includes two resources: IAMRole and IAMRolePolicyAttachment. A high fidelity implementation of the AWS APIs would involve adding additional resources.
TODO: User story for statically provisioning a user. IAMUser is used to add any user of S3 buckets. IAMPolicy for bringing existing policy references.
An implementation mapping the APIs to their declarative resource counterparts. Evaluate the APIs to be exposed.
Add the following general resources now:
We support AttachRolePolicy as IAMRolePolicyAttachment, so the equivalent for IAMUser is IAMUserPolicyAttachment.
We’ll also want, probably as a top priority, to support modeling IAM roles. Today we can attach an IAM role, but we can’t actually create one in Crossplane. What we have today may be sufficient because there are quite a few baked in roles. As a next step, we would support managing roles, then support managing users, then support attaching roles to users.
We have decided to defer work on all other resources until we have community use cases.
For a full inventory of APIs to resources see this doc (both mapped and unmapped)
A Replication Group resource can be created in a VPC only if you create a cache subnet group and give its name during creation, as opposed to EKS, where you create individual subnets and give their IDs.
It seems like CacheSubnetGroup is a logical resource where you can either create new subnets during creation or choose from existing ones. So it's a bit tricky in Crossplane, where every managed resource is independently managed. An implementation of that resource would probably look like Azure's resource groups, whose sole interface is to list/add/remove the downstream resources.
Created an EKS deployment, including the corresponding infra resources, and tried to delete it again. It's not possible to delete the VPC from Crossplane; it hangs.
Message: delete failed: failed to delete the VPC resource: DependencyViolation: The vpc 'vpc-01b02d8940effbbad' has dependencies and cannot be deleted.
status code: 400, request id: d6e70c68-c3b0-478c-82d5-2759c4078dda
Reason: Encountered an error during resource reconciliation
Manual deletion from the AWS console works fine, and the VPC object disappears in Crossplane.
Crossplane version: 0.8.0
AWS Stack: master from yesterday
HasDirectClassReferenceKind does not support static provisioning. crossplane-runtime has been updated with a fix, but this stack must be updated. See crossplane/crossplane-runtime#28 for details.
Would like alpha support for DynamoDB
Add CRD for API types and controller that follows best practices in https://github.com/crossplaneio/crossplane/blob/7420e6/design/one-pager-managed-resource-api-design.md.
EKS clusters rotate their credentials frequently. The AWS stack currently does not propagate these updated credentials from the managed resource connection secret to its bound resource claim's connection secret. Full details in crossplane/crossplane-runtime#35.
See crossplane/crossplane-runtime#35
Crossplane version:
Is this a bug report or feature request? Bug Report
Deviation from expected behavior:
I attempted to create an EKS cluster using the below resource class and claim.
---
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: kubernetes
  namespace: example
  labels:
    app: example
spec:
  classReference:
    name: kubernetes-eks-example
    namespace: crossplane-system
---
apiVersion: core.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
  name: kubernetes-eks-example
  namespace: crossplane-system
  labels:
    app: example
parameters:
  region: us-west-2
  roleARN: REDACTED
  vpcId: REDACTED
  subnetIds: subnet-REDACTED
  securityGroupIds: REDACTED
  workerKeyName: REDACTED
  workerNodeInstanceType: m3.medium
provisioner: ekscluster.compute.aws.crossplane.io/v1alpha1
providerRef:
  name: aws-example
reclaimPolicy: Delete
This resulted in an error:
Status:
  Conditions:
    Last Transition Time:  2019-03-26T23:17:11Z
    Message:               InvalidParameterException: Subnets specified must be in at least two different AZs
                           status code: 400, request id: 4814d859-501d-11e9-b671-2fdcdcd2cd1d
    Reason:                Failed to create new cluster
    Status:                True
    Type:                  Failed
I attempted to delete the ekscluster in order to rectify my mistake, but encountered the following error:
$ kubectl -n gitlab delete kubernetescluster kubernetes-example
$ kubectl -n crossplane-system describe ekscluster kubernetes-eks-example
# ...
Status:
  Conditions:
    Last Transition Time:  2019-03-26T23:27:55Z
    Message:               Master Delete Error: AccessDeniedException: Unable to determine service/operation name to be authorized
                           status code: 403, request id: c8ca907b-501e-11e9-b126-bf9e02431975
    Reason:                Failed to delete cluster
    Status:                True
    Type:                  Failed
When I look in my AWS console I see no Kubernetes clusters to delete. I presume my provider credentials have the correct permissions, given that my creation request failed for reasons other than authorization. My AWS provider is configured to use the access token of a user in the administrator AWS group, which has the policy arn:aws:iam::aws:policy/AdministratorAccess.
Expected behavior:
Deleting the ekscluster from Kubernetes results in it being deleted from AWS.
Environment:
$ kubectl -n crossplane-system describe deploy crossplane|grep Image
Image: crossplane/crossplane:v0.1.0-171.g3f13ae6
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
$ ./cluster/local/minikube.sh ssh
# ...
$ cat /etc/os-release
NAME=Buildroot
VERSION=2018.05
ID=buildroot
VERSION_ID=2018.05
PRETTY_NAME="Buildroot 2018.05"
$ uname -a
Linux minikube 4.15.0 #1 SMP Fri Jan 18 22:39:33 UTC 2019 x86_64 GNU/Linux
Considering that someone might create a UI around Crossplane for AWS resources: VPCs and Regions don't exist as explicit types and show up in individual resource classes as strings. Further examples of things we might consider modeling are security groups, RDS subnet groups, and more. This concept probably has overlap with the design being considered in crossplane/crossplane#564
Currently, when constructing AWS resource classes, a user has to enter a string for the vpc-id, and it's further required that the VPC exists in the region you've configured, so the region is really implied once you've selected a VPC. It would be great if we could model the VPCs that are available in a given region and remove the user's ability to misconfigure. In an RDS resource class, one might select a region, then display a list of VPCs, pre-selecting the default.
This might represent useful fields and labels, although more consideration should be given to the exact labels and fields of the objects based on use cases.
Region:
  labels:
    region=us-west-1
    providerName=aws-creds-1
    provider=aws
  name: aws::us-west-1
  region: us-west-1
  provider: aws
  providerReference: aws-creds-1
AWSVPC:
  labels:
    region=us-west-1
    providerName=aws-creds-1
    provider=aws
  name: vpc-id-123456
  vpcID: vpc-id-123456
  isDefault: false
  subnets: xxx.xxx.xxx.xxx/xxx
  region: aws::us-west-1
AWSVPC:
  labels:
    region=us-west-1
    providerName=aws-creds-1
    provider=aws
  name: vpc-id-567890
  vpcID: vpc-id-567890
  isDefault: true
  subnets: xxx.xxx.xxx.xxx/xxx
  region: aws::us-west-1
We would like all existing v1alpha1 Crossplane IAM resources to be bumped to v1beta1 quality.
Move IAM resources to v1beta1 standards
HasDirectClassReferenceKind does not support static provisioning. crossplane-runtime has been updated with a fix, but this stack must be updated. See crossplane/crossplane-runtime#28 for details.
While the RDS and EKS managed resources support cross-resource referencing, ReplicationGroup does not. For example, the SecurityGroupIDs field could refer to the SecurityGroup resource type that we have.
Implement cross-resource references for ReplicationGroup parameters.
Some of the earlier controllers for AWS are still using the non-managed reconciler of Kubebuilder (e.g. EKSCluster).
All such controllers need to use the shared managed reconciler pattern.
The entrypoint binary for stack-aws appears to be named crossplane. This breaks at least the make run command, which expects the output binary to have the same name as the project.
make run
Crossplane version:
Is this a bug report or feature request? Bug Report
Deviation from expected behavior:
The S3 bucket namespace is global. If I create an S3Bucket that attempts to use a bucket name that someone else has taken, I get the following error:
Status:
  Conditions:
    Last Transition Time:  2019-03-27T00:38:40Z
    Message:               BucketAlreadyExists: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
                           status code: 409, request id: 98896AAE32A152FC, host id: dD1d929ZBfgiAFCHGXebOv7B3NVgPP7sHXfmEyghq6Okgo5z6pYtSfEB9TtTKEsqtia3Ab5EDI0=
    Reason:                Failed to create resource
    Status:                True
    Type:                  Failed
If I attempt to rectify this by deleting the S3Bucket in Kubernetes I cannot, because:
Status:
  Conditions:
    Last Transition Time:  2019-03-27T00:40:05Z
    Message:               BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region
                           status code: 301, request id: , host id:
    Reason:                Failed to delete resource
    Status:                True
    Type:                  Failed
Expected behavior:
If Crossplane never actually created an S3 bucket for me, I should be able to delete the associated S3Bucket resource.
Environment:
$ kubectl -n crossplane-system describe deploy crossplane|grep Image
Image: crossplane/crossplane:v0.1.0-171.g3f13ae6
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
$ ./cluster/local/minikube.sh ssh
# ...
$ cat /etc/os-release
NAME=Buildroot
VERSION=2018.05
ID=buildroot
VERSION_ID=2018.05
PRETTY_NAME="Buildroot 2018.05"
$ uname -a
Linux minikube 4.15.0 #1 SMP Fri Jan 18 22:39:33 UTC 2019 x86_64 GNU/Linux
We currently assume IAM roles for authentication to the AWS API using our own custom logic to get Web Identity Tokens from the file system of the controller container. The v1 version of aws-sdk-go supported this use case as an identity provider, but the v2 version did not, so we were forced to use our own implementation in the short term.
This logic has now been included in aws-sdk-go-v2 as part of aws/aws-sdk-go-v2#488. It should be available in the next release (v0.20.0), and we should move to using it instead of our own implementation.
We should be able to have multiple node pools per Kubernetes cluster, but the current spec assumes a single node pool.
Related to crossplane/crossplane#152
When (at least some) AWS managed resources encounter errors, they requeue in a tight loop because each occurrence of the error carries a unique request ID and therefore looks like a new event:
failed to delete the VPC resource: DependencyViolation: The vpc 'vpc-0dcc11278872d11b7' has dependencies and cannot be deleted.
status code: 400, request id: f6f7e648-cbd1-40f3-8df4-5c4077c0c595
failed to delete the VPC resource: DependencyViolation: The vpc 'vpc-0dcc11278872d11b7' has dependencies and cannot be deleted.
status code: 400, request id: c2f56437-66a8-48f2-9316-713f457004ed
The quick fix here would be to wrap these errors in a way that strips out the unique request ID.
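That quick fix can be sketched as follows; the helper name and the replacement token are illustrative. Once the unique suffix is stripped, the repeated errors compare equal and can be deduplicated.

```go
package main

import (
	"fmt"
	"regexp"
)

// requestID matches the per-call suffix AWS appends to error messages.
var requestID = regexp.MustCompile(`request id: [0-9a-f-]+`)

// stripRequestID removes the unique request ID so that repeated failures
// produce identical error strings and stop looking like new events.
func stripRequestID(msg string) string {
	return requestID.ReplaceAllString(msg, "request id: <redacted>")
}

func main() {
	a := stripRequestID("DependencyViolation: ... status code: 400, request id: f6f7e648-cbd1-40f3-8df4-5c4077c0c595")
	b := stripRequestID("DependencyViolation: ... status code: 400, request id: c2f56437-66a8-48f2-9316-713f457004ed")
	fmt.Println(a == b) // true
}
```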
Crossplane version:
The region used by a given resource should be late bound and not part of the AWS Provider object.
AWS RDS and ReplicationGroup should be refactored to use generic managed reconciler and adopt patterns in crossplane/crossplane#840
Part of crossplane/crossplane#863