
cluster-api-ipam-provider-in-cluster's Introduction

Cluster API IPAM Provider In Cluster

This is an IPAM provider for Cluster API that manages pools of IP addresses using Kubernetes resources. It serves as a reference implementation for IPAM providers, but can also be used as a simple replacement for DHCP.

IPAM providers allow you to control how IP addresses are assigned to Cluster API Machines. They are usually only useful for non-cloud deployments. The infrastructure provider in use must support IPAM providers in order to use this provider.

Features

  • Manages IP Addresses in-cluster using custom Kubernetes resources
  • Address pools can be cluster-wide or namespaced
  • Pools can consist of subnets, arbitrary address ranges and/or individual addresses
  • Both IPv4 and IPv6 are supported
  • Individual addresses, ranges and subnets can be excluded from a pool
  • Well-known reserved addresses are excluded by default, which can be configured per pool

Setup via clusterctl

This provider comes with clusterctl support. Since it's not added to the built-in list of providers yet, you'll need to add the following to your $XDG_CONFIG_HOME/cluster-api/clusterctl.yaml if you want to install it using clusterctl init --ipam in-cluster:

providers:
  - name: in-cluster
    url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
    type: IPAMProvider
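
With that entry in place, the provider can then be installed as mentioned above:

clusterctl init --ipam in-cluster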

Usage

This provider comes with two resources to specify pools from which addresses can be allocated: the InClusterIPPool and the GlobalInClusterIPPool. As the names suggest, the former is namespaced, the latter is cluster-wide. Otherwise they are identical. The following examples will all use the InClusterIPPool, but all examples work with the GlobalInClusterIPPool as well.

A simple pool that covers an entire /24 IPv4 network could look like this:

apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  addresses:
    - 10.0.0.0/24
  prefix: 24
  gateway: 10.0.0.1
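
The same pool as a cluster-wide resource differs only in its kind (and has no namespace); a sketch:

apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: globalinclusterippool-sample
spec:
  addresses:
    - 10.0.0.0/24
  prefix: 24
  gateway: 10.0.0.1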

IPv6 is also supported, but a single pool can only consist of v4 or v6 addresses, not both. For simplicity we'll stick to IPv4 in the examples.

The addresses field supports CIDR notation, as well as arbitrary ranges and individual addresses. Using the excludedAddresses field, addresses, ranges or subnets can be excluded from the pool.

apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  addresses:
    - 10.0.0.0/24
    - 10.0.1.10-10.0.1.100
    - 10.0.2.1
    - 10.0.2.2
  excludedAddresses:
    - 10.0.0.16/28
    - 10.0.0.242
    - 10.0.1.25-10.0.1.30
  prefix: 22
  gateway: 10.0.0.1

Be aware that the prefix needs to cover all addresses that are part of the pool. The prefix is derived from the first entry in the addresses list together with the prefix field, which specifies its length; in this example that results in 10.0.0.0/22. Adding 10.1.0.0/24 to the addresses list would therefore lead to a validation error, since it lies outside that prefix.

The gateway will never be allocated. By default, addresses that are usually reserved will not be allocated either. For v4 networks these are the first (network) and last (broadcast) addresses within the prefix. In the example above that would be 10.0.0.0 and 10.0.3.255 (the latter is not part of the pool's addresses anyway). For v6 networks only the first address is excluded.

If you want to use all addresses that are part of the prefix, you can set allocateReservedIPAddresses to true. In the example below, both 10.0.0.0 and 10.0.0.255 will be allocated. The gateway will still be excluded.

apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  addresses:
    - 10.0.0.0/24
  prefix: 24
  gateway: 10.0.0.1
  allocateReservedIPAddresses: true
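
How a Machine actually consumes addresses from a pool depends on the infrastructure provider. As a rough sketch based on the CAPV example further down in this document (abbreviated; other required VSphereMachineTemplate fields are omitted), a network device can reference the pool via addressesFromPools:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: example
spec:
  template:
    spec:
      network:
        devices:
        - dhcp4: false
          addressesFromPools:
          - apiGroup: ipam.cluster.x-k8s.io
            kind: InClusterIPPool
            name: inclusterippool-sample
          networkName: VM Network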

Community, discussion, contribution, and support

The in-cluster IPAM provider is part of the cluster-api project. Please refer to its README for information on how to connect with the project.

The best way to reach the maintainers of this sub-project is the #cluster-api channel on the Kubernetes Slack.

Pull Requests and feedback on issues are very welcome! See the issue tracker if you're unsure where to start, especially the Good first issue and Help wanted tags, and also feel free to reach out to discuss.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.


cluster-api-ipam-provider-in-cluster's Issues

in a "clusterctl move" the IPAM resources are not migrated to the new cluster

When a cluster is moved from a CAPI manager to another with
clusterctl move
the IPAM resources (GlobalInClusterIPPool, InClusterIPPool, IPAddressClaim and IPAddress) are not migrated to the destination CAPI management cluster.
The IPPool needs to be created beforehand on the destination, and the IPAddresses end up being different than on the source (luckily the IPAM controller does not try to change the VMs).

Pools fail to render the first/last - start/end IP addresses in the kubernetes cli

Pools fail to show the first/last or start/end values of pools.
There is also some mixing of first/last and start/end. Perhaps one set should be eliminated.

$ cat pool.yaml
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
  namespace: default
spec:
  subnet: 10.0.0.0/24
  gateway: 10.0.0.1
  start: 10.0.0.22
  end: 10.0.0.40
$ kubectl apply -f pool.yaml
inclusterippool.ipam.cluster.x-k8s.io/inclusterippool-sample created
$ kubectl get inclusterippool
NAME                     SUBNET        FIRST   LAST
inclusterippool-sample   10.0.0.0/24

Pool usage metrics

It would be useful if we could get some usage statistics from the pool without having to query the IPAddresses and count ourselves.

We are thinking of adding a total, used, free in the pool status. E.g.

---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: pool-name
  namespace: pool-ns
spec:
  subnet: 10.0.0.0/24
status:
  ipAddresses:
    # The total number of IPs the pool has.
    total: 255
    # The number of IPs that have already been allocated.
    used: 9
    # The number of IPs that are available to allocate.
    free: 250

IPAddressClaim controller should detect cluster paused consistently

Changes were recently made to the IPAddressClaim controller to prevent reconciliation when the Cluster object is paused. The Cluster object has a spec.Paused property, and can also have a Paused annotation. The code copied from CAPV and placed into the IPAddressClaim controller looks at the Cluster's spec.Paused property during updates, and looks at the Cluster's paused annotations during creates. It seems the Update and Create functions should both look at both the annotation and the property.

https://github.com/telekom/cluster-api-ipam-provider-in-cluster/blob/main/internal/controllers/ipaddressclaim.go#L76-L88

We thought this was odd and @srm09 agreed. We opened this issue on CAPV as a result of our discussion.

kubernetes-sigs/cluster-api-provider-vsphere#1890

Sagar suggests this paused detection predicate should be a shared library function owned by either CAPV or CAPI, and all controllers should detect paused in the same way. This issue is a reminder to update the IPAddressClaim controller when the CAPV issue is resolved.

Feature Request: Ability to configure the pool with a list of non-contiguous IP addresses

Feature Request:

We have a case where we're not able to obtain a contiguous block of IP addresses. We can, however, get a set of IPs that are on the same network. We'd like the ability to configure the pool with this list of IP.

We propose being able to set a list of IP in place of a CIDR or start & end:

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  prefix: 24
  gateway: 10.0.0.1
  addresses:
    - 10.0.0.5
    - 10.0.0.7
    - 10.0.0.9
    - 10.0.0.14
    - 10.0.0.6

It seems some validations would be prudent too, to ensure the IPs are within the gateway/prefix range.

cc @adobley @flawedmatrix @christianang

Claims with cluster label should not be reconciled when cluster cannot be retrieved

When using Velero to do a backup/restore, we see that claims are getting reconciled when the paused cluster has yet to be restored. We see that the cluster is always restored last, due to the dependency tree.

Our use case is Velero specific, but we think this is a more general bug too. If a claim is linked to a cluster and it can't be retrieved, we think it should skip reconciliation until the cluster can be evaluated.

Logs should include human readable timestamps

The current timestamps are in integer format, which is helpful for log aggregators but makes them hard for humans to decipher.

Proposal: Logs should be changed to ISO format, or should include both integer and ISO format timestamps.

2023-02-03T18:46:39Z

Proposal: Cross-Namespace Pools

Currently an InClusterIPPool is per namespace. We would like to have the ability to define a single pool of IPs that can fulfill claims for clusters in various namespaces.

Right now it is possible to define the same pool in multiple namespaces and get the behavior of a shared pool. This has the overhead of maintaining that same resource in multiple namespaces. When changing the size of the pool all instances would need to be updated or risk configuration drift.

We propose adding a reference field to the InClusterIPPool resource called poolPointer which would be an ObjectReference.

Using CAPV as an example users would create a template in the same way as before:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: VSphereMachineTemplate
metadata:
  name: example
  namespace: cluster-ns
spec:
  template:
    spec:
      cloneMode: FullClone
      numCPUs: 8
      memoryMiB: 8192
      diskGiB: 45
      network:
        devices:
        - dhcp4: false
          fromPool:
            group: ipam.cluster.x-k8s.io/v1alpha1
            kind: InClusterIPPool
            name: my-pool-pointer

Create the pool pointer:

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: my-pool-pointer
  namespace: cluster-ns
spec:
  poolPointer:
    - name: my-pool
      namespace: pool-ns
      kind: InClusterIPPool

Create the pool:

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: my-pool
  namespace: pool-ns
spec:
  pools:
    - subnet: 10.10.10.0/24
      start: 10.10.10.100
      end: 10.10.10.200

For pools that have a poolPointer, the controller will need to be updated to resolve the IPPool data from the pool at the referenced namespace/name.

The webhook for InClusterIPPool should reject any resources that provide both pools and poolPointer.

IPAddresses are already created in the namespace of the IPAddressClaim and the controller currently lists IPAddresses from all namespaces when determining what IP to assign based on the kind/name of the pool.

We're happy to implement this and are looking for feedback on our proposed approach.

cc/ @tylerschultz @flawedmatrix @christianang

clusterctl move fails due to IPAddresses Exists

Ref kubernetes-sigs/cluster-api#9478

Object already exists, updating IPAddress="mgmt-cluster-control-plane-g6ppb-net0-inet6" Namespace="default"
Retrying with backoff Cause="error updating \"ipam.cluster.x-k8s.io/v1alpha1, Kind=IPAddress\" default/mgmt-cluster-control-plane-g6ppb-net0-inet6: admission webhook \"validation.ipaddress.ipam.cluster.x-k8s.io\" denied the request: spec: Forbidden: the spec of IPAddress is immutable"

The provider configuration to add to clusterctl.yaml is wrong

The provider URL is no longer on telekom but has moved to kubernetes-sigs, so should be:

providers:
  - name: in-cluster
    url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
    type: IPAMProvider

otherwise clusterctl init errors with:

Fetching providers
Error: failed to get provider components for the "in-cluster" provider: target namespace can't be defaulted. Please specify a target namespace

IPAddresses should have their finalizer and owner refs restored if missing

In a scenario we've encountered, a backup/restore specifically, we see that the finalizer and owner refs are not restored. In the case they're missing, the controller should restore these owner references and finalizer. Currently, the finalizer is created when the controller creates the objects.

Claims that are stuck waiting for an address when a pool has no free addresses should be re-reconciled when addresses become available

I noticed an issue recently when creating multiple clusters with the same pool and forgetting that my pool size was too small for the number of machines I created. The observed behavior was that when the pool runs out of IPs, the cluster will never come up because it's waiting for a node to obtain an IP address. Even when deleting an unused cluster, this issue does not resolve itself. I have to manually delete the claim and its owner resources in order to get the deployment to progress.

Reproduction

  1. Create a pool with N addresses, and reserve all of them by creating N claims
  2. Attempt to create another claim
  3. This claim should be forever waiting for an IP address
  4. Delete some claims to free up capacity in this pool
  5. Observe that the N+1 claim will still be unable to reserve an IP address.

Proposed Solution

We should add a watch for when IPAddressClaims are deleted to trigger reconciles on other IPAddressClaims.

Addressing downscaling pools with addresses in range

Context

We see that pools can be updated without regard for what IPs are in use. This can lead to situations where IPs can become out of range of the pool's configuration.

Potential Solution

We are thinking of adding validation in the webhook that would check if there is an IP address that is already allocated before allowing it to be removed from the pool. This continues to allow configuration of the pool, but prevents an IP Address that is in use to be removed from the pool.

We are also thinking of adding an outOfRange status count on the pool status (similar to #112) to expose potential issues if it already happens to have out of range IPs. This is mostly considering the case of updates to the pool that may have occurred without the webhook validation.

In addition to the outOfRange status count on the pool we could also add an outOfRange status condition on the IPAddress.
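
A sketch of how that could surface on the pool status (the field name is part of this proposal, not the current API):

status:
  ipAddresses:
    # proposed: allocated addresses that no longer fall within the pool's configured ranges
    outOfRange: 2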

Misconfigured webhooks with the new api version

reproduction steps:

1) k3d cluster create --no-lb --k3s-arg "--disable=traefik,servicelb,metrics-server,local-storage@server:*"

2) clusterctl init

3) from root of this repo: make install && make deploy

4)
cat <<EOF | kubectl apply -f -
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: bar
  namespace: giantswarm
spec:
  addresses:
  - 10.10.225.232-10.10.225.238
  gateway: 10.10.225.0
  prefix: 24
EOF

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.globalinclusterippool.ipam.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource

workaround:

kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io caip-in-cluster-mutating-webhook-configuration
kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io caip-in-cluster-validating-webhook-configuration
cat <<EOF | kubectl apply -f -
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: bar
  namespace: giantswarm
spec:
  addresses:
  - 10.10.225.232-10.10.225.238
  gateway: 10.10.225.0
  prefix: 24
EOF
globalinclusterippool.ipam.cluster.x-k8s.io/baasd created

Pools should allow for NOT specifying gateways

The IPAM CRDs supplied by CAPI suggest that Gateway is not a required field. CAPV is written so that gateway can be supplied on the VSphereMachine, removing the need to configure it on the pool. CAIP Validation allows for an empty Gateway, but later fails to Parse the empty Gateway here:

https://github.com/telekom/cluster-api-ipam-provider-in-cluster/blob/main/internal/poolutil/pool.go#L59

CAIP should honor its validations and not error on an empty gateway.
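
For illustration, a hypothetical pool like the following currently passes validation without a gateway, but later hits the parse error linked above:

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: no-gateway-sample
spec:
  subnet: 10.0.0.0/24
  start: 10.0.0.10
  end: 10.0.0.20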

clusterctl move fails because of delete webhook

Context

When clusterctl move runs, it attempts to delete everything off the source cluster. This currently fails because it may attempt to delete the pool before it deletes all the IPAddresses, and there doesn't seem to be a way to control delete order. So it will error on the validate delete webhook, since #124 added functionality that checks there are no in-use IPs before allowing a pool to be deleted.

Potential solution

For future cluster-api releases (1.5+), there was an annotation added that a validate delete webhook can look for to potentially allow the validation to be skipped called: clusterctl.cluster.x-k8s.io/delete-for-move (via kubernetes-sigs/cluster-api#8322).

For the current cluster-api release and below (1.4.x), there doesn't seem to be a good way to do this so I think we need to add our own annotation that allows the delete validation to be skipped, which a user could have on their pool before a move e.g ipam.cluster.x-k8s.io/skip-validate-delete-webhook.
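
A sketch of what the proposed annotation might look like on a pool before a move (the annotation key is the one proposed above, not an existing API):

apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
  annotations:
    ipam.cluster.x-k8s.io/skip-validate-delete-webhook: ""
spec:
  addresses:
    - 10.0.0.0/24
  prefix: 24
  gateway: 10.0.0.1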

An `ipam-components.yaml` file should be provided for use with clusterctl

An ipam-components.yaml file should be provided for use with clusterctl to install CAIP as an IPAMProvider.

The clusterctl documentation suggests that as part of the provider contract:

The provider is required to generate a components YAML file and publish it to the provider’s repository. This file is a single YAML with all the components required for installing the provider itself (CRDs, Controller, RBAC etc.).

In the case of an IPAM Provider, the file should be called ipam-components.yaml

https://cluster-api.sigs.k8s.io/clusterctl/provider-contract.html

Pools should allow specifying `spec.Subnet` when also providing `spec.Addresses`

When support for spec.Addresses was added, providing spec.Subnet was not allowed. Users are forced to specify spec.Prefix when specifying spec.Addresses.

Pools should allow specifying spec.Subnet, and if not provided, it should be derived and defaulted. The default can be derived from the spec.Addresses and spec.Prefix.

Also, when specifying spec.Subnet, spec.Prefix should be allowed and should match the spec.Subnet. The spec.Prefix can be defaulted from the spec.Subnet.

We think this is useful when running kubectl get inclusterippool. Including the subnet in the output makes matching up the pool's configuration with the IaaS's configuration easier.
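
A sketch of what the proposal could look like on a pool (subnet alongside addresses is not accepted by the current API; field semantics as described above):

apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  subnet: 10.0.0.0/24
  prefix: 24
  addresses:
    - 10.0.0.10-10.0.0.100
  gateway: 10.0.0.1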

Cluster API with IPAM & InCluster compatibility

It looks like we are trying to solve the same issues related to IP address management at vSphere using Kubernetes Cluster API.

Is this component working with CAPV v1.5.1 and Cluster API 1.3.1 ?

Given the following template with reference to the cluster IP pool using the new feature "addressesFromPools" in CAPV 1.5.0:


apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: vmware-test-controlplane
spec:
  template:
    spec:
      cloneMode: linkedClone
      datacenter: dc1
      datastore: datastore5
      diskGiB: 10
      folder: vmware-test
      memoryMiB: 8192
      network:
        devices:
        - dhcp4: false
          dhcp6: false
          gateway4: 192.168.0.1
          addressesFromPools:
          - apiGroup: ipam.cluster.x-k8s.io
            kind: InClusterIPPool
            name: vmware-test-controlplane
          networkName: VM Network
     ...

and the IPAM:

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: vmware-test-controlplane
spec:
  start: 192.168.5.10
  end: 192.168.5.14
  prefix: 24
  gateway: 192.168.0.1

From this configuration we do get IPAddresses and IPAddressClaims as expected; however, the IPs are not propagated to the virtual machine.
I wonder if cluster-api-ipam-provider-in-cluster was written before the release of CAPV 1.5.0 and is thereby not compatible with the new CAPV feature.

Note, we are using Talos for bootstrap & control plane, however, that should not affect IPAM.

ipam provider does not support capi v1.6.0

ipam provider does not support capi v1.6.0, apparently because CAPI stopped serving the alpha API version

Is it possible to update the IPAM provider to support the new CAPI within half a year?

clusterctl init fails if wrong version of cert-manager is already installed

If cert-manager is already installed in the Cluster API management cluster (before it is initialized) but is not the version expected by clusterctl, then clusterctl init --ipam incluster --config clusterctl-IPAM.config.yaml, with clusterctl-IPAM.config.yaml being:

---
providers:
  - name: incluster
    url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
    type: IPAMProvider

fails with a strange error:

Error: failed to get provider components for the "incluster" provider: failed to read "ipam-components.yaml" from provider's repository "ipam-incluster": release not found for version download, 
please retry later or set "GOPROXY=off" to get the current stable release: 404 Not Found

If I specify an IPAM version, e.g. v0.1.0-alpha.2 (clusterctl init --ipam incluster:v0.1.0-alpha.2 --config clusterctl-IPAM.config.yaml) the error is:

Error: failed to get provider components for the "incluster:v0.1.0-alpha.2" provider: failed to read "ipam-components.yaml" from provider's repository "ipam-incluster": failed to download files 
from GitHub release v0.1.0-alpha.2: failed to get file "ipam-components.yaml" from "v0.1.0-alpha.2" release 

After I upgraded cert-manager to the version expected by clusterctl the init went smoothly.

Provider should vend IPs independent of pools with the same name in a different namespace

Example:

Here are two pools that are the same, except they're in different namespaces:

---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: common-name
  namespace: ns-a
spec:
  first: 192.168.0.2
  last: 192.168.0.3
  prefix: 24
  gateway: 192.168.0.1
  
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: common-name
  namespace: ns-b
spec:
  first: 192.168.0.2
  last: 192.168.0.3
  prefix: 24
  gateway: 192.168.0.1

If Pool A were to give out 192.168.0.2, then Pool B will not give out that same address, even though the pools are in different namespaces. The pools should give out IPs independent of one another, and IPs in use by a different pool should not affect any other pool.

Error in readme

The example

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  first: 10.0.0.10
  last: 10.10.0.42
  prefix: 24
  gateway: 10.0.0.1

should be

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  start: 10.0.0.10
  end: 10.10.0.42
  prefix: 24
  gateway: 10.0.0.1

Controller shouldn't allocate IP addresses that are "reserved" for the subnet

Context

Broken out of the conversation started in #123 (comment).

There are ip addresses that shouldn't be given out that are reserved for special cases in the subnet. This is typically the network address, the broadcast address, and the gateway address.

Proposed Solution

If I defined the following pool:

Gateway: 192.168.0.1
Prefix: 16
Addresses:
- 192.168.0.0/16

The controller should not allocate: 192.168.0.0 (network address), 192.168.0.1 (gateway address), 192.168.255.255 (broadcast address). It would therefore give out 192.168.0.2-192.168.255.254.

If a user wants to allocate reserved addresses, i.e. ignore the above functionality except for the gateway, they can set a new flag called allocatedReservedAddresses, which would allocate 192.168.0.0-192.168.255.255 (except for 192.168.0.1). If the user doesn't want to reserve the gateway, they should not set the gateway.

Errors in InClusterIPPools CRD

additionalPrinterColumns is currently

  - additionalPrinterColumns:
    - description: Subnet to allocate IPs from
      jsonPath: .spec.subnet
      name: Subnet
      type: string
    - description: First address of the range to allocate from
      jsonPath: .spec.first
      name: First
      type: string
    - description: Last address of the range to allocate from
      jsonPath: .spec.last
      name: Last
      type: string

but should be

  - additionalPrinterColumns:
    - description: Subnet to allocate IPs from
      jsonPath: .spec.subnet
      name: Subnet
      type: string
    - description: First address of the range to allocate from
      jsonPath: .spec.start
      name: Start
      type: string
    - description: Last address of the range to allocate from
      jsonPath: .spec.end
      name: End
      type: string

spec is currently

          spec:
            description: InClusterIPPoolSpec defines the desired state of InClusterIPPool.
            properties:
              addresses:
                description: Addresses is a list of IP addresses that can be assigned.
                  This set of addresses can be non-contiguous. Can be omitted if subnet,
                  or first and last is set.
                items:
                  type: string
                type: array
              end:
                description: Last is the last address that can be assigned. Must come
                  after first and needs to fit into a common subnet. If unset, the
                  second last address of subnet will be used.
                type: string
              gateway:
                description: Gateway
                type: string
              prefix:
                description: Prefix is the network prefix to use. If unset the prefix
                  from the subnet will be used.
                maximum: 128
                type: integer
              start:
                description: First is the first address that can be assigned. If unset,
                  the second address of subnet will be used.
                type: string
              subnet:
                description: Subnet is the subnet to assign IP addresses from. Can
                  be omitted if addresses or first, last and prefix are set.
                type: string

but needs to be

          spec:
            description: InClusterIPPoolSpec defines the desired state of InClusterIPPool.
            properties:
              addresses:
                description: Addresses is a list of IP addresses that can be assigned.
                  This set of addresses can be non-contiguous. Can be omitted if subnet,
                  or start and end are set.
                items:
                  type: string
                type: array
              end:
                description: End is the last address that can be assigned. Must come
                  after start and needs to fit into a common subnet. If unset, the
                  second to last address of subnet will be used.
                type: string
              gateway:
                description: Gateway
                type: string
              prefix:
                description: Prefix is the network prefix to use. If unset the prefix
                  from the subnet will be used.
                maximum: 128
                type: integer
              start:
                description: Start is the first address that can be assigned. If unset,
                  the second address of subnet will be used.
                type: string
              subnet:
                description: Subnet is the subnet to assign IP addresses from. Can
                  be omitted if addresses or start, end and prefix are set.
                type: string

unknown field "spec.excludedAddresses"

Hi. Awesome sig!
There seems to be a bug in ipam.cluster.x-k8s.io/v1alpha2 that is poorly documented.
It is no longer spec.exclude, now it's apparently spec.excludedAddresses that defines the excluded addresses.

However, neither of them work now.
I'm using the following configuration.

apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: ${CLUSTER_CLASS_NAME}
  namespace: ${NAMESPACE}
spec:
  prefix: 25
  addresses:
    - 10.1.157.0/25
  excludedAddresses:
    - 10.1.157.1-10.1.157.20
  gateway: 10.1.157.1

The full error:

Error from server (BadRequest): error when creating "STDIN": InClusterIPPool in version "v1alpha2" cannot be handled as a InClusterIPPool: strict decoding error: unknown field "spec.excludedAddresses"

Gateway should be optional

As it is written now, CAPV assumes that a Gateway will be present on IPAddress objects, and will fail to reconcile a VSphereVM if it's not present.
The webhook should validate the presence of the Gateway and reject pools that do not have a valid gateway. Gateways should also be required to be within the range of the provided subnet.
Alternatively, the gateway could be defaulted to the first IP in the subnet range.

Prepare Repo for a move to kubernetes-sigs

We've discussed in k8s slack and in the cluster-lifecycle-sigs meeting an intention to move this repo into kubernetes-sigs.

Assuming all interested parties approve this change there is some work we need to do to make it happen.

The k8s community lists out some rules for donated repositories that we need to ensure we satisfy. Some things must be done after a move, such as getting integrated with k8s CI.

I'll do my best to outline those requirements below so we can start to check them off. I'm fairly sure we already meet several of these requirements. I'll check those off as I validate them.

If there are any things that we may have missed, please let me know and I'll ensure we keep track of them.

Pre-Move

  • Must adopt the Kubernetes Code of Conduct
  • All code projects use the Apache License version 2.0. Documentation repositories must use the Creative Commons License version 4.0.
  • All OWNERS of the project must also be active SIG members.
    • I think we all are by contributing to a project that gets adopted by a SIG
  • Must be approved by the process spelled out in the SIG's charter and a publicly linkable written decision should be available for the same.
  • SIG must already have identified all of their existing subprojects and code, with valid OWNERS files, in sigs.yaml
  • All contributors must have signed the CNCF Individual CLA or CNCF Corporate CLA
    • If (a) contributor(s) have not signed the CLA and could not be reached, a NOTICE file should be added referencing section 7 of the CLA with a list of the developers who could not be reached
  • Boilerplate text across all files should attribute copyright as follows: "Copyright " if no CLA was in place prior to donation
  • Licenses of dependencies are acceptable; project owners can ping @caniszczyk for review of third party deps
  • Should contain template files as per the kubernetes-template-project.

Move

  • Open an issue in github.com/kubernetes/org to move the repo (see kubernetes/org#3762 for a recent example)
    • For now all repos will live in github.com/kubernetes-sigs/<project-name>

Post-Move

  • Additions of the standard Kubernetes header to code created by the contributors can occur post-transfer, but should ideally occur shortly thereafter.
    • Can we do this Pre-Move?
  • Must contain the topic for the sponsoring SIG - e.g. k8s-sig-api-machinery. (Added through the Manage topics link on the repo page.)
  • #148

Handle reserved IPs within an IP range

We are trying to integrate this provider as part of our infrastructure in Proxmox.

But we couldn't find any way of handling reserved IPs.

We use this IPAM provider to get IP addresses to assign to our machines, but there is no way to mark an already existing IP, which means we would end up with conflicts.

For example, we have the IP range 10.10.10.1/24 and gateway 10.10.10.1.

In our range there are already some machines running, e.g. jumphosts, CI, a DHCP server, or VMs used for maintenance.

So, if a machine has been manually provisioned on this network with, for example, the IP 10.10.10.3, how do we tell the IPAM provider not to use this IP?
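
One way to express this with the current API is the excludedAddresses field documented in the Usage section above; a sketch for this scenario (assuming the v1alpha2 pool shown earlier):

apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: proxmox-sample
spec:
  addresses:
    - 10.10.10.0/24
  prefix: 24
  gateway: 10.10.10.1
  excludedAddresses:
    - 10.10.10.3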

Add clusterctl support

Now that clusterctl supports ipam providers via kubernetes-sigs/cluster-api#7288, some work needs to be done in this provider to integrate with the clusterctl changes.

This provider-contract doc covers everything that needs to be done, but a summary of what I think needs to happen is:

  • A GitHub release of this IPAM provider.
  • The GitHub release contains:
    • metadata.yaml
    • ipam-provider.yaml (generated from ./config/default probably with some automation)
  • Update metadata.yaml to adhere to the latest cluster-api contract version

There may be more that needs to be done, but this should be a good starting point.

/kind feature

Prevent pool deletion if IPAddress exists for pool

Problem

I noticed that a pool can be deleted even if IPAddresses have been allocated for that pool. This seems undesirable.

Possible solutions

Option 1: Add a finalizer to the pool that can be used to ensure the pool isn't removed until all IPAddresses are deleted. Drawback here is that the pool is marked for deletion and this may be an undesirable state if a user decides that they can't clean up the IPAddresses and want to continue using the pool.

Option 2: Guard deletion of the pool with the webhook. More complex than a finalizer, but prevents users from potentially entering a bad state.

I'm leaning towards implementing option 2 if we do this, but open to hearing opinions here.

Gateway should be validated to be within inferred subnet when pool is IPv4

Currently, it is possible to create/update a pool where the Gateway address is not within the pool's inferred Subnet. It seems the pool webhook should validate that the Gateway is within the pool's subnet when the pool's IP family is detected to be IPv4.
Gateway is an IPv4 concept, and thus it doesn't make sense to make the suggested validation when the pool's IP family is IPv6.

Discussion: Should Pools be validated for overlapping ranges?

As of writing this issue, the provider has no validation for pools that overlap. This may cause the provider to give out duplicate IPs, which could result in serious issues. We'd like to start a discussion about why or why not validation would be something desired.

We're interested in cases where rejecting a pool that overlaps another pool would be incorrect behavior.

How should I use the cluster-api-ipam-provider-in-cluster?

I apologize for my lack of knowledge, but I am trying to use your cluster-api-ipam-provider-in-cluster and have encountered some issues.
I created a cluster using the instructions on this page about Docker:
https://cluster-api.sigs.k8s.io/user/quick-start.html

Then I ran the following commands and confirmed that caip-in-cluster-system is running:
git clone https://github.com/telekom/cluster-api-ipam-provider-in-cluster
cd cluster-api-ipam-provider-in-cluster
kubectl apply -k config/default

Based on the README.md you provided, I created an inclusterippool-sample. Here are the specific details I used:
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  subnet: 10.0.0.0/24
  gateway: 10.0.0.1

Next, I tried to create a pod to verify that its IP address is within the inclusterippool, but it is clear that the pod's IP is not within the inclusterippool. Here is an example of the pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: nginx

I have reviewed the information you provided and did not see any indication that it can only be used in specific environments, such as AWS or vSphere.
So, how can I verify that IPAM is functioning correctly? If there is any misunderstanding in the above content, please correct me. Alternatively, could you provide an example of how to use it so I can verify it? Thank you for your help.
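
Note that this provider allocates addresses for Cluster API Machines (through IPAddressClaims created by an infrastructure provider such as CAPV), not for Pods, so a Pod's IP will never come from the pool. As a hedged sketch, one way to check the provider in isolation is to create an IPAddressClaim referencing the pool and verify that a corresponding IPAddress is created:

apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: IPAddressClaim
metadata:
  name: test-claim
spec:
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: inclusterippool-sample

If the provider is working, kubectl get ipaddress should then show an address from 10.0.0.0/24 allocated to the claim.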
