
Comments (43)

emman27 avatar emman27 commented on May 16, 2024 17

Can confirm I am running into this issue as well, would love to hear from the AWS team if there's any updates.


monsterxx03 avatar monsterxx03 commented on May 16, 2024 9

I'm on EKS 1.13, CNI 1.5.3, and I'm still running into this leak issue.

I upgraded the CNI from 1.5.0 to 1.5.3.


metral avatar metral commented on May 16, 2024 7

I'm still hitting this issue of leaked ENIs using v1.5.0 on EKS v1.13 when decommissioning (kubectl drain & delete) a k8s node group ASG and then deleting it. I've seen it when:

  1. deleting the ASG outright, and
  2. scaling the ASG down to a desiredCount & min of 0, and then deleting it.

It does seem like the shutdown & cleanup in aws-cni still has issues cleaning up ENIs when workers are terminated.

Update:

This seems to primarily happen on ASG nodes in EKS clusters that run Pods for a LoadBalancer-type Service. The leaked ENIs are always associated with an instance from this ASG. FWIW, on teardown the LoadBalancer Service & Deployment are deleted before the ASG is removed.


bjethwan avatar bjethwan commented on May 16, 2024 6

I confirm running into this issue too.


pigri avatar pigri commented on May 16, 2024 6

We have a quick fix that we have been using for a few months.
Script: https://gist.github.com/pigri/c00ce2811a5954c89384925190827ab6

You need a Docker image with the AWS CLI, and a Kubernetes CronJob that runs it every 5 minutes.
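The gist itself isn't reproduced here, but a minimal Go sketch of that kind of cleanup job (assuming, as later comments note, that the CNI-created ENIs carry an aws-K8S-* description and that deleting every matching "available" ENI is acceptable in your account) might look like the following; pagination and IAM setup are left out:

```go
// Sketch only: find ENIs the CNI created (description "aws-K8S-*") that are
// now detached ("available") and delete them. Meant to run periodically,
// e.g. from a CronJob.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.DescribeNetworkInterfaces(&ec2.DescribeNetworkInterfacesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("status"), Values: []*string{aws.String("available")}},
			{Name: aws.String("description"), Values: []*string{aws.String("aws-K8S-*")}},
		},
	})
	if err != nil {
		log.Fatalf("describe network interfaces: %v", err)
	}

	for _, eni := range out.NetworkInterfaces {
		id := aws.StringValue(eni.NetworkInterfaceId)
		if _, err := svc.DeleteNetworkInterface(&ec2.DeleteNetworkInterfaceInput{
			NetworkInterfaceId: eni.NetworkInterfaceId,
		}); err != nil {
			log.Printf("failed to delete leaked ENI %s: %v", id, err)
			continue
		}
		log.Printf("deleted leaked ENI %s", id)
	}
}
```

Packaged into a container image, this is the sort of binary the CronJob described above could run every few minutes.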


nitrag avatar nitrag commented on May 16, 2024 6

After setting delete_on_termination to true on our Launch Template -> Network Interfaces (which I don't know why it's false by default) and using the latest 1.6.1 release, I think we're stable. Will report back if that's not the case. 👍
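For anyone wiring this up programmatically, a hedged sketch of what flipping that flag looks like with the AWS SDK for Go; the launch template ID and version here are placeholders, not values from this thread:

```go
// Sketch: create a new launch template version whose primary network
// interface (device index 0) is deleted when the instance terminates.
// The template ID below is a placeholder.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	_, err := svc.CreateLaunchTemplateVersion(&ec2.CreateLaunchTemplateVersionInput{
		LaunchTemplateId:   aws.String("lt-EXAMPLE"), // placeholder
		SourceVersion:      aws.String("$Latest"),
		VersionDescription: aws.String("delete primary ENI on termination"),
		LaunchTemplateData: &ec2.RequestLaunchTemplateData{
			NetworkInterfaces: []*ec2.LaunchTemplateInstanceNetworkInterfaceSpecificationRequest{
				{
					DeviceIndex:         aws.Int64(0),
					DeleteOnTermination: aws.Bool(true),
				},
			},
		},
	})
	if err != nil {
		log.Fatalf("create launch template version: %v", err)
	}
}
```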


robin-engineml avatar robin-engineml commented on May 16, 2024 5

@liwenwu-amazon Is there progress on this issue?


mogren avatar mogren commented on May 16, 2024 5

@robin-engineml We have identified an issue where ENIs might get leaked: when a node is drained and then terminated, there is a window after the call to ec2SVC.DetachNetworkInterface() but before ec2SVC.DeleteNetworkInterface() where the ENI is detached but not yet deleted.

We have seen a few cases where the first attempt to delete the ENI after the detach fails because the ENI is not yet fully detached, and before the first retry the pod or node gets terminated. The ENI is then left in a detached state. We are tracking that issue in #608.
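A rough sketch of the sequence being described, not the CNI's actual code; the function name freeENI and the retry/backoff values are illustrative only:

```go
// Sketch of the detach-then-delete sequence. Once DetachNetworkInterface
// returns, the ENI is no longer covered by delete-on-termination; if the
// node (and the ipamd pod with it) dies before DeleteNetworkInterface
// succeeds, the ENI is left behind in the "available" state.
package sketch

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func freeENI(svc *ec2.EC2, eniID, attachmentID string) error {
	if _, err := svc.DetachNetworkInterface(&ec2.DetachNetworkInterfaceInput{
		AttachmentId: aws.String(attachmentID),
	}); err != nil {
		return err
	}

	// <-- window starts here: the ENI is detached (or detaching) but not deleted.

	for attempt := 1; attempt <= 20; attempt++ {
		_, err := svc.DeleteNetworkInterface(&ec2.DeleteNetworkInterfaceInput{
			NetworkInterfaceId: aws.String(eniID),
		})
		if err == nil {
			return nil // window closed, nothing leaked
		}
		// Typically "InvalidParameterValue: ... currently in use" while the
		// detach is still in flight; back off and retry. If the instance is
		// terminated during these retries, the ENI is leaked.
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("giving up deleting ENI %s", eniID)
}
```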


nitrag avatar nitrag commented on May 16, 2024 5

Been using 1.6.0 since it came out in our test cluster. I disabled the manual ENI-cleanup pod we wrote as a workaround. I now have ~200 leaked (available) ENIs while the cluster is only actively using (in-use) ~30. We average about 8-10 nodes but spin up/down maybe 100 per week. I made sure to clear out the leaked/available ENIs when I upgraded to 1.6.

TL;DR: Not resolved in 1.6.


ibnsinha avatar ibnsinha commented on May 16, 2024 4

We are running into the same issue


mogren avatar mogren commented on May 16, 2024 4

So far it seems that the main cause of this issue is forced detaches of ENIs. v1.5.0-rc has #458, which does retries instead of forcing.


robin-engineml avatar robin-engineml commented on May 16, 2024 3

I continue to see this problem after 1.5.3.


mogopz avatar mogopz commented on May 16, 2024 3

Just adding our experience in here too - we've been running 1.6.0 for around a week and can also confirm we're still seeing it happen.

I'm not totally sure of the difference, but each of our nodes gets assigned two ENIs - one with no description and one with the description aws-K8S-<instance_id>. The one with the description always seems to get cleaned up, but the other one never does.


mogopz avatar mogopz commented on May 16, 2024 3

@mogren Wow, I totally missed that - looks like we were missing delete_on_termination too! 🤦
I've just added it, thanks for your help.


metral avatar metral commented on May 16, 2024 2

@imriss Yes but I also uncommented the readinessProbe and livenessProbes to enable their use since v1.5.2 now has the healthz endpoint.


robin-engineml avatar robin-engineml commented on May 16, 2024 2

I perform about 50 standup/teardown tests per week. This problem still occurs on 1.6.0rc4, but less frequently than previously, in my experience.


nickdgriffin avatar nickdgriffin commented on May 16, 2024 2

In our case I have just noticed that it is our primary interfaces that are leaking as they are missing the "delete on termination" option 🤦
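A small diagnostic sketch (not part of the CNI itself) that reports whether an instance's device-index-0 interface has that flag set; it assumes the instance ID is passed as the first command-line argument:

```go
// Sketch: print whether an instance's primary interface (device index 0)
// is set to be deleted when the instance terminates.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.DescribeNetworkInterfaces(&ec2.DescribeNetworkInterfacesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("attachment.instance-id"), Values: []*string{aws.String(os.Args[1])}},
			{Name: aws.String("attachment.device-index"), Values: []*string{aws.String("0")}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, eni := range out.NetworkInterfaces {
		fmt.Printf("%s delete_on_termination=%t\n",
			aws.StringValue(eni.NetworkInterfaceId),
			aws.BoolValue(eni.Attachment.DeleteOnTermination))
	}
}
```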


vipulsabhaya avatar vipulsabhaya commented on May 16, 2024 1

Thanks @monsterxx03 for the logs - I confirmed that in this specific case, Auto Scaling terminated the instance 2 seconds after the first deleteENI attempt. Because of this, ipamd was unable to retry the deletion and the ENI was leaked.

We will look into how to do a graceful cleanup in this case.


mogren avatar mogren commented on May 16, 2024 1

@mogggggg The non-tagged one is the default ENI that gets created with the EC2 instance. Could your issue be similar to the problem in the previous comment? That delete_on_termination was not set in the Launch Template for the worker nodes?


MitchyBAwesome avatar MitchyBAwesome commented on May 16, 2024

Further to this, when you remove the CNI plugin from the cluster using kubectl delete -f misc/aws-k8s-cni.yaml, any secondary ENIs and associated IP addresses are left on the instance.


jonmoter avatar jonmoter commented on May 16, 2024

What is responsible for releasing the ENIs? Is there some sort of shutdown procedure that needs to happen on the Nodes in order for it to release the ENIs properly?


liwenwu-amazon avatar liwenwu-amazon commented on May 16, 2024

@jonmoter Today, the ipamD daemonset on each node is responsible for releasing an ENI when there are too many free IPs in the IP warm pool. When the node is terminated (e.g. scaled down by the Auto Scaling group), the EC2 control plane releases the ENI and its IPs automatically.

In summary, there should NOT be any leaked ENIs, unless there is an unknown bug in ipamD, or the following situation occurs:

  • the Pods running on the node cause ipamD's free-IP warm pool to drop below its threshold, so ipamD starts allocating a new ENI, and
  • the node is killed (either by the user or by the Auto Scaling group) after ipamD has created the ENI but before ipamD is able to attach it to the instance; that ENI is leaked (see the sketch below).
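A minimal sketch of that allocation path, showing where the window sits; the function and parameter names here are illustrative, not the actual ipamD code:

```go
// Sketch: the ENI is created first and only attached afterwards. If the
// node is terminated between the two calls, the new ENI belongs to no
// instance and is leaked.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func allocENI(svc *ec2.EC2, subnetID, sgID, instanceID string, deviceIndex int64) (string, error) {
	createOut, err := svc.CreateNetworkInterface(&ec2.CreateNetworkInterfaceInput{
		SubnetId:    aws.String(subnetID),
		Groups:      []*string{aws.String(sgID)},
		Description: aws.String("aws-K8S-" + instanceID),
	})
	if err != nil {
		return "", err
	}
	eniID := aws.StringValue(createOut.NetworkInterface.NetworkInterfaceId)

	// <-- if the node dies here, the ENI exists but is attached to nothing.

	_, err = svc.AttachNetworkInterface(&ec2.AttachNetworkInterfaceInput{
		NetworkInterfaceId: aws.String(eniID),
		InstanceId:         aws.String(instanceID),
		DeviceIndex:        aws.Int64(deviceIndex),
	})
	return eniID, err
}
```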


jonmoter avatar jonmoter commented on May 16, 2024

Okay, thanks for the clarification. In the past I've had issues with Kubernetes nodes not cleaning up properly on termination, like running kubectl drain on shutdown, but that taking several minutes, hitting a timeout, and the instance not terminating gracefully.

That's why I was wondering if ipamD is responsible for releasing the ENIs on shutdown, or if something like the EC2 control plane should handle that.

Thanks!


liwenwu-amazon avatar liwenwu-amazon commented on May 16, 2024

@jonmoter In case an ENI is leaked in the situation mentioned above (the node gets killed after the ENI has been allocated by ipamD but before it has been attached to the node), you should be able to manually delete this ENI. Each such ENI has its description set to aws-K8S-<instance-id>.


oded-dd avatar oded-dd commented on May 16, 2024

@liwenwu-amazon When is a fix for this issue planned?


mattlandis avatar mattlandis commented on May 16, 2024

Given that we need to create the ENI, attach it to the instance, and then set the termination policy, the current design will always have a gap where the plugin is either creating (and has not yet set the termination policy) or deleting (termination policy removed, and possibly detached) when the process/instance is killed, causing the resource to be leaked.

There has been some talk of creating a centralized container to allocate ENIs and IPs and assign them from outside the local daemon. This would allow for cleanup after termination. There is some more design work that needs to happen, but it is something I think is worth the investment. I don't have a timeline for it at this point though.
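For reference, the "set the termination policy" step is its own EC2 call against the attachment, which is why there is always a window before it lands. A hedged sketch (illustrative function name, not the CNI's own code):

```go
// Sketch: after the ENI is attached, delete-on-termination has to be turned
// on with a separate call against the attachment. Until this call succeeds,
// EC2 will not clean the ENI up when the instance terminates.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func setDeleteOnTermination(svc *ec2.EC2, eniID, attachmentID string) error {
	_, err := svc.ModifyNetworkInterfaceAttribute(&ec2.ModifyNetworkInterfaceAttributeInput{
		NetworkInterfaceId: aws.String(eniID),
		Attachment: &ec2.NetworkInterfaceAttachmentChanges{
			AttachmentId:        aws.String(attachmentID),
			DeleteOnTermination: aws.Bool(true),
		},
	})
	return err
}
```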


trimbleAdam avatar trimbleAdam commented on May 16, 2024

What is the status of this? This issue is completely blocking us from moving off of our KOPS cluster, which has almost NO problems.


rizwan-kh avatar rizwan-kh commented on May 16, 2024

There is another bug that was closed, but it also seems to be related (I can confirm if required). Right now my cluster is working normally, but I will check the next time this happens and update with confirmation.


mogren avatar mogren commented on May 16, 2024

Resolving since v1.5.0 is released.


dylancaponi avatar dylancaponi commented on May 16, 2024

Still having this issue on EKS v1.12, CNI 1.5.0. Not deleting any nodes, just redeploying a Deployment with a large number of pods (~600). Every time I deploy, fewer pods become available because there are no free IPs.


metral avatar metral commented on May 16, 2024

Update: we were able to get past this issue of leaked ENIs by updating to v1.5.2.

The specific fixes in v1.5.2 were:

  • Detach the ENI before deleting it.
  • The addition of a healthz endpoint: #548 and #553. The healthz endpoint is used in the Pod start-up script, and in the readiness & liveness probes of the aws-cni DaemonSet for versions >= v1.5.2. By default the probes are currently not enabled; see more info.


imriss avatar imriss commented on May 16, 2024

@metral thanks. Is the following enough for that update?

kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.5/aws-k8s-cni.yaml


vipulsabhaya avatar vipulsabhaya commented on May 16, 2024

@monsterxx03 We are looking into this issue, do you happen to have ipamd logs from one of the nodes the ENI was attached to?

if so, please send to sabhayav[at]amazon


monsterxx03 avatar monsterxx03 commented on May 16, 2024

ipamd log sent @vipulsabhaya

Sharing some info here:

I use cluster-autoscaler to do autoscaling; the host nodes of all the leftover ENIs had already been deleted.

In the debug log, ipamd tries to detach the ENI and logs: Successfully detached ENI: eni-xxx, but when it tries to delete it, it reports: Not able to delete ENI yet (attempt 1/20): InvalidParameterValue: Network interface 'eni-xxx' is currently in use. And this is the last line of that node's ipamd log.

I guess cluster-autoscaler interrupted the ENI delete attempts, but my cluster-autoscaler's log retention is too short, so I can't verify the instance deletion time.

One more finding: most of the leftover ENIs have the tag node.k8s.amazonaws.com/instance_id, but some of them don't have any tags. Not sure whether that's another issue.


metral avatar metral commented on May 16, 2024

Update: we were able to get past this issue of leaked ENIs by updating to v1.5.2.

For context, these fixes worked in our specific test use case because on teardown we intentionally wait for the:

  • deletion of the namespace of the workloads that have LB Services, and the
  • deletion of the aws-cni DaemonSet.

We then sleep for a few minutes for good measure before tearing down the cluster. This gives the AWS resources and aws-cni sufficient time to gracefully shut down and clean up successfully.


nitrag avatar nitrag commented on May 16, 2024

@vipulsabhaya Any updates? This is causing IP exhaustion between our two /19 subnets (16k IPs)...


jlforester avatar jlforester commented on May 16, 2024

Given that we need to create the ENI, attach it to the instance, and then set the termination policy, the current design will always have a gap where the plugin is either creating (and has not yet set the termination policy) or deleting (termination policy removed, and possibly detached) when the process/instance is killed, causing the resource to be leaked.

There has been some talk of creating a centralized container to allocate ENIs and IPs and assign them from outside the local daemon. This would allow for cleanup after termination. There is some more design work that needs to happen, but it is something I think is worth the investment. I don't have a timeline for it at this point though.

I think I'm seeing the latter on node termination, but it doesn't always happen on every node in a node group. When I look at the ENIs after node termination, I see that they've had the termination policy removed.

What if you don't remove the termination policy on deletion? Would ipamd still be able to delete ENIs? It would seem to me that this would allow EC2 to clean up any ENIs that ipamd missed.


mogren avatar mogren commented on May 16, 2024

@jlforester You are correct that the ENIs will be cleaned up automatically when the EC2 instance terminates, as long as they are still attached. If the CNI detaches an ENI but gets terminated before it can delete it, we leak it. We don't explicitly remove the policy; I think that happens when the ENI is detached from the instance.


jlforester avatar jlforester commented on May 16, 2024

Just spitballing here, then...

What if, on instance termination, you DON'T try to clean it up and let EC2 do it, like with the eth0 ENI? Would that cause any problems like IP leakage? They're already set to delete on termination.

Or, what if we use an Auto Scaling lifecycle hook to give ipamD more time to clean up its ENIs? Just adding the lifecycle hook isn't enough, though. You'd have to watch for the notification from the Auto Scaling group that a scale-in event is occurring for that instance.
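A sketch of the tail end of that idea: once the terminating instance's ENIs have been cleaned up, the watcher tells Auto Scaling to continue the termination. The hook and group names are placeholders, and the cleanup itself is elided:

```go
// Sketch: complete an Auto Scaling lifecycle action after ENI cleanup,
// so the instance is only terminated once cleanup has had time to run.
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	instanceID := os.Args[1] // instance reported by the scale-in notification

	// ... clean up / wait for this instance's ENIs to be deleted here ...

	svc := autoscaling.New(session.Must(session.NewSession()))
	_, err := svc.CompleteLifecycleAction(&autoscaling.CompleteLifecycleActionInput{
		AutoScalingGroupName:  aws.String("my-node-group-asg"), // placeholder
		LifecycleHookName:     aws.String("eni-cleanup-hook"),  // placeholder
		InstanceId:            aws.String(instanceID),
		LifecycleActionResult: aws.String("CONTINUE"),
	})
	if err != nil {
		log.Fatalf("complete lifecycle action: %v", err)
	}
}
```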


mogren avatar mogren commented on May 16, 2024

@jlforester Well, in #645 I added a shut-down listener to do just that, but there will always be the case where the number of pods on the node goes down, we have too many IPs available, and we want to free an ENI. If the node gets killed just after ipamd detaches that ENI, we will lose it. There is also a background cleanup loop in the v1.6 branch, added in #624.
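For illustration only (the actual #624 implementation is not reproduced here), a background reconciler of that kind can be as simple as a ticker around a cleanup pass like the one sketched earlier in the thread; cleanupLeakedENIs is a placeholder for such a routine:

```go
// Sketch: run a cleanup pass on a fixed interval, logging failures and
// retrying on the next tick.
package sketch

import (
	"log"
	"time"
)

func runENIReconciler(interval time.Duration, cleanupLeakedENIs func() error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		if err := cleanupLeakedENIs(); err != nil {
			log.Printf("ENI cleanup pass failed: %v", err)
		}
	}
}
```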


jlforester avatar jlforester commented on May 16, 2024

@mogren I just ran a couple cluster standup/teardown tests using terraform, and it looks like 1.6.0rc4 resolves the leaking ENI issue for me.


tecnobrat avatar tecnobrat commented on May 16, 2024

We encountered this issue as well. It gets worse if you end up running out of IP space in your subnets, like we did.

It ended up causing an outage for us during a deploy, as new nodes/pods could not spin up to replace the old ones.


mogren avatar mogren commented on May 16, 2024

I added a follow up ticket to log a warning for the case where delete_on_termination is set to false. #1023

I think it's finally time to close this ticket. If issues with leaking ENIs come up again, please open a new ticket with the specific details for that case.

