
Comments (28)

blushingpenguin commented on May 28, 2024

v1.5.5 still looks a bit leaky, but nothing on the scale of before:

[memory usage graph]

I will switch to your patched version now and report back tomorrow. Thanks very much for following up on this!

blushingpenguin commented on May 28, 2024

@brunosmartin It's upstream that's at fault here, and it's possibly related to usage patterns (I was seeing much higher RAM usage with volumes that were being mounted/unmounted a lot). This is a problem of modern software development, I suppose -- pretty much all software is built of other components these days, and validating that they all work together is a pretty difficult task. I actually think the response has been excellent here; @derekbit was on the case immediately.

You can just patch your deployment as outlined in #8394 (comment) until the next release -- this has mostly fixed the problem for me (it still looks a bit leaky, but nothing like before).

blushingpenguin commented on May 28, 2024

@derekbit I'm happy to test a fixed build, however I think it's probably worth giving v1.5.5 another 24h to confirm that the leak doesn't occur with our weekday usage pattern. I'll check it tomorrow and report back.

Thanks,

Mark

brunosmartin commented on May 28, 2024

@brunosmartin

May I ask some questions about your setup?

Sure

* Is it correct that you have 5 deployments?

Yep, 3 in production

* Does each deployment use 1 RWX PVC?

Yes

* Does each deployment have 3 pods?

Exactly, scale = 3

* What are the sizes of the RWX PVCs?

The smallest is 200GB, the biggest is 500GB. They are WordPress sites with Cloudflare as a CDN (with page cache), so they are not expected to be IO intensive. Only the Apache pods have RWX PVCs; the database does the intensive IOPS operations on a ReadWriteOnce (single-node) PVC.

Also, do you know what happened at the timestamp in the picture below? Did your workload deployments restart? [screenshot]

Yes, they did restart at that point, but I'm not sure this was related to Longhorn; the patch had already been applied at that point (since the huge memory drop at 04/26 @ 12pm).

Hope it helps, good luck with this.

derekbit commented on May 28, 2024

cc @james-munson

blushingpenguin commented on May 28, 2024

@derekbit I can't be sure, but v1.5.5 looks better. Below is the memory usage with v1.6.1, jumping when the files are read by some nightly jobs. One thing I hadn't thought of is that those pods are all transient, and there are a few thousand of them, so this also corresponds to a lot of mounting and unmounting:
[memory usage graph]
and here is the comparison from yesterday's nightly jobs:
[memory usage graph]
You can see there is a small increase in memory usage, but it's nothing like before.

derekbit commented on May 28, 2024

@blushingpenguin
Thanks for the update.

I can build a customized share-manager image with the fix for the memory leak. Can you help test whether that is the culprit in v1.6.1? What do you think?

derekbit commented on May 28, 2024

Thanks @blushingpenguin for the quick update.
The fix will be included in v1.6.2. Temporarily, you can continue using derekbit/longhorn-share-manager:v1.6.1-fix-leak.
I also found nfs-ganesha still seems a bit leaky after applying the fix. It's minor. I will check this part later.

derekbit commented on May 28, 2024

This is a very, very, very bad situation! Will we have a release like... today?!

In my cluster with just 5 nodes and 12 disks, Longhorn is using more than 100GB of RAM across all nodes!!!

This response time will show us whether this is a reliable storage system. How can a critical bug like this happen in a release marked as stable?!

Please, just fix it!

Can you elaborate more on your use case?
The issue is triggered only when there are tons of unmount and mount operations.

I've provided a share-manager image for mitigating the issue before the v1.6.2 release. Please see #8394 (comment).

derekbit commented on May 28, 2024

@brunosmartin
Thank you for the update. We will review our strategy for marking stable releases. cc @innobead

The high memory usage in your cluster sounds like it was triggered by the same nfs-ganesha bug.
Can you elaborate more on the workload, such as the applications and IO patterns (read intensive or write intensive)? This information will help us define test cases and avoid falling into known issues again.

PhanLe1010 commented on May 28, 2024

@brunosmartin

May I ask some questions about your setup?

  • Is it correct that you have 5 deployments?
  • Does each deployment use 1 RWX PVC?
  • Does each deployment have 3 pods?
  • What are the sizes of the RWX PVCs?

Also, do you know what happened at the timestamp in the picture below? Did your workload deployments restart?
[screenshot]

brunosmartin commented on May 28, 2024

@PhanLe1010 just to clarify, I have 5 deployments with scale > 1, but I have many more RWX PVCs, I think about 50, and most of them are deployed with scale = 1.

blushingpenguin commented on May 28, 2024

Examples of the leak since the last share-manager PVC restart:
[memory usage graphs]

blushingpenguin commented on May 28, 2024

Not sure, but given the timings I think the big spikes may correspond to attaching the volume to a different pod.

blushingpenguin commented on May 28, 2024

supportbundle_5c8ad69a-4ef9-47a4-8c8c-6aaa06a38849_2024-04-19T06-15-54Z.zip
I've snipped some logs out of this that contain too much information (for example, the node syslogs). I can supply relevant sections if needed, or possibly share them privately (I'd need to get permission to do that).

blushingpenguin commented on May 28, 2024

From running top in the share-manager-pvc pods, it looks like ganesha.nfsd is at fault (nfs-ganesha/nfs-ganesha#1105?).
longhorn-share-manager's memory usage is ~13MB in each pod.

derekbit commented on May 28, 2024

Can you log into the share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f pod and provide us with the results of the following? (A sketch of the kubectl commands is shown after the list.)

  • ps aux
  • cat /proc/<PID of ganesha.nfsd>/status
  • cat /proc/1/status
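
A minimal sketch of how to collect these from outside the pod (assumptions: the share-manager pod runs in the longhorn-system namespace, the Longhorn default, and the pod name below is simply the one mentioned in this thread):

# Hypothetical example -- replace the pod name with your own share-manager pod.
POD=share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f

# Process list inside the share-manager pod
kubectl -n longhorn-system exec -it "$POD" -- ps aux

# Memory details of ganesha.nfsd (substitute the PID found above) and of PID 1
kubectl -n longhorn-system exec -it "$POD" -- cat /proc/<PID of ganesha.nfsd>/status
kubectl -n longhorn-system exec -it "$POD" -- cat /proc/1/status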

derekbit commented on May 28, 2024

@PhanLe1010 Have you ever observed the memory leak in the RWX performance investigation?

blushingpenguin commented on May 28, 2024
share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f:/ # ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0 1237096 13200 ?       Ssl  Apr14   2:23 /longhorn-share-manager --debug daemon --volume pvc-0cb784d8-46f9-42e1-9476-591
root          38  1.6  1.6 5015480 2145440 ?     Sl   Apr14 113:57 ganesha.nfsd -F -p /var/run/ganesha.pid -f /tmp/vfs.conf
root      610865  0.8  0.0   6956  4268 pts/0    Ss   07:30   0:00 bash
root      610893  133  0.0  13748  4048 pts/0    R+   07:30   0:00 ps aux
share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f:/ # cat /proc/38/status 
Name:   ganesha.nfsd
Umask:  0000
State:  S (sleeping)
Tgid:   38
Ngid:   0
Pid:    38
PPid:   1
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 64
Groups: 0 
NStgid: 38
NSpid:  38
NSpgid: 1
NSsid:  1
VmPeak:  5083072 kB
VmSize:  5015480 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:   2145444 kB
VmRSS:   2145444 kB
RssAnon:         2134608 kB
RssFile:           10836 kB
RssShmem:              0 kB
VmData:  2303600 kB
VmStk:       136 kB
VmExe:        16 kB
VmLib:     18704 kB
VmPTE:      4784 kB
VmSwap:        0 kB
HugetlbPages:          0 kB
CoreDumping:    0
THP_enabled:    1
Threads:        20
SigQ:   1/514185
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000005001
SigIgn: 0000000000000000
SigCgt: 0000000180000000
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
NoNewPrivs:     0
Seccomp:        0
Seccomp_filters:        0
Speculation_Store_Bypass:       thread vulnerable
SpeculationIndirectBranch:      conditional enabled
Cpus_allowed:   fff
Cpus_allowed_list:      0-11
Mems_allowed:   00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:      0
voluntary_ctxt_switches:        56
nonvoluntary_ctxt_switches:     109
share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f:/ # cat /proc/1/status
Name:   longhorn-share-
Umask:  0022
State:  S (sleeping)
Tgid:   1
Ngid:   0
Pid:    1
PPid:   0
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 64
Groups: 0 
NStgid: 1
NSpid:  1
NSpgid: 1
NSsid:  1
VmPeak:  1237096 kB
VmSize:  1237096 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:     15228 kB
VmRSS:     13200 kB
RssAnon:            4784 kB
RssFile:            8416 kB
RssShmem:              0 kB
VmData:    46880 kB
VmStk:       136 kB
VmExe:      4380 kB
VmLib:         8 kB
VmPTE:       120 kB
VmSwap:        0 kB
HugetlbPages:          0 kB
CoreDumping:    0
THP_enabled:    1
Threads:        18
SigQ:   1/514185
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffc3bba3a00
SigIgn: 0000000000000000
SigCgt: fffffffd7fc1feff
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
NoNewPrivs:     0
Seccomp:        0
Seccomp_filters:        0
Speculation_Store_Bypass:       thread vulnerable
SpeculationIndirectBranch:      conditional enabled
Cpus_allowed:   fff
Cpus_allowed_list:      0-11
Mems_allowed:   00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:      0
voluntary_ctxt_switches:        150
nonvoluntary_ctxt_switches:     59
share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f:/ # 

derekbit commented on May 28, 2024

The VmRSS of the nfs-ganesha process is too high (2145444 kB). The high VmRSS value was not observed in my cluster, and the value went back to a lower value after an IO intensive task.

To verify whether it is caused by the upstream regression you mentioned, could you provide us with the steps to reproduce as well as the following information:

  • What's your workload? (A manifest is appreciated.)
  • Is it IO intensive? Write or read?

blushingpenguin commented on May 28, 2024

I'm not sure if it's caused by that regression; I was just looking for leaks upstream, given that it's the NFS server using all the RAM.

The workload is that maybe 10-15 times a day rsync is run over ~4000 files and half a dozen files are copied to the NFS share (which in our case corresponds to releasing some code); then, a few hundred times a day, a few dozen of those files are read at a time. It's quite light use and mostly reads.

By manifest, are you after a test workload?
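
For anyone trying to reproduce this, here is a rough sketch of a script that approximates the access pattern described above (purely illustrative; /mnt/share and ./release are hypothetical names for the mounted RWX volume and a local tree of ~4000 files):

#!/bin/bash
# Illustrative approximation of the workload described above.
SHARE=/mnt/share   # hypothetical mount point of the RWX volume
SRC=./release      # hypothetical local tree with ~4000 files

# "Release" step (done 10-15 times a day in practice): rsync the tree onto the share.
rsync -a "$SRC/" "$SHARE/code/"

# "Read" step (done a few hundred times a day in practice): read a few dozen files at a time.
for f in $(ls "$SHARE/code" | head -n 30); do
    cat "$SHARE/code/$f" > /dev/null
done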

derekbit commented on May 28, 2024

I'm not sure if it's caused by that regression; I was just looking for leaks upstream, given that it's the NFS server using all the RAM.

The workload is that maybe 10-15 times a day rsync is run over ~4000 files and half a dozen files are copied to the NFS share (which in our case corresponds to releasing some code); then, a few hundred times a day, a few dozen of those files are read at a time. It's quite light use and mostly reads.

By manifest, are you after a test workload?

OK. I will try the steps in our lab.
If you are available, you can also try v1.5.5 and see whether it has the issue.

derekbit commented on May 28, 2024

@blushingpenguin
You can test the share-manager image derekbit/longhorn-share-manager:v1.6.1-fix-leak on Longhorn v1.6.1.

To replace the share-manager image, edit the longhorn-manager daemonset with kubectl -n longhorn-system edit daemonset longhorn-manager, then replace the share-manager image:

....
    spec:
      containers:
      - command:
        - longhorn-manager
        - -d
        - daemon
        - --engine-image
        - longhornio/longhorn-engine:v1.6.1
        - --instance-manager-image
        - longhornio/longhorn-instance-manager:v1.6.1
        - --share-manager-image
        - longhornio/longhorn-share-manager:v1.6.1 <---- Replace this image with derekbit/longhorn-share-manager:v1.6.1-fix-leak
        - --backing-image-manager-image
        - longhornio/backing-image-manager:v1.6.1
        - --support-bundle-manager-image
        - longhornio/support-bundle-kit:v0.0.36
        - --manager-image
        - longhornio/longhorn-manager:v1.6.1
        - --service-account
        - longhorn-service-account
        - --upgrade-version-check
...
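
Alternatively, the same change can be applied non-interactively with a JSON patch -- a sketch only, assuming the command array matches the snippet above exactly (so the share-manager image sits at index 8) and that longhorn-manager is the first container in the pod spec; verify the index in your own manifest before running it:

kubectl -n longhorn-system patch daemonset longhorn-manager --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/command/8", "value": "derekbit/longhorn-share-manager:v1.6.1-fix-leak"}]'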

Many thanks.

blushingpenguin commented on May 28, 2024

@derekbit yes, it looks like that was the major cause.
v1.6.1-fix-leak:
[memory usage graph]
(you can see the restart yesterday at around 05:30)
Thanks very much for digging into it so quickly.

longhorn-io-github-bot commented on May 28, 2024

Pre Ready-For-Testing Checklist

  • Where is the reproduce steps/test steps documented?
    The reproduce steps/test steps are at:
  1. Create a 3 node cluster
  2. Create first workload with a RWX volume by https://github.com/longhorn/longhorn/blob/master/examples/rwx/rwx-nginx-deployment.yaml
  3. Create second workload with the RWX volume.
  4. Scale down the second workload and scale up repeatedly 100 times
  5. Find the PID of the nfs-ganesha in the share-manager pod by ps aux
  6. Observe the VmRSS of nfs-ganesha in the share-manager pod by cat /proc/<nfs-ganesha PID>/status | grep VmRSS
  7. VmRSS in LH v1.6.1 is significantly larger than the value after applying the fix.
  • Does the PR include the explanation for the fix or the feature?

  • Have the backend code been merged (Manager, Engine, Instance Manager, BackupStore etc) (including backport-needed/*)?
    The PR is at

longhorn/nfs-ganesha#13
longhorn/longhorn-share-manager#203

  • Which areas/issues this PR might have potential impacts on?
    Area: RWX volume, memory leak, upstream
    Issues

brunosmartin commented on May 28, 2024

This is a very, very, very bad situation! Will we have a release like... today?!

In my cluster with just 5 nodes and 12 disks, Longhorn is using more than 100GB of RAM across all nodes!!!

This response time will show us whether this is a reliable storage system. How can a critical bug like this happen in a release marked as stable?!

Please, just fix it!

brunosmartin commented on May 28, 2024

This is a very, very, very bad situation! Will we have a release like... today?!
In my cluster with just 5 nodes and 12 disks, Longhorn is using more than 100GB of RAM across all nodes!!!
This response time will show us whether this is a reliable storage system. How can a critical bug like this happen in a release marked as stable?!
Please, just fix it!

Can you elaborate more on your use case? The issue is triggered only when there are tons of unmount and mount operations.

I've provided a share-manager image for mitigating the issue before the v1.6.2 release. Please see #8394 (comment).

Maybe this is underestimated. My cluster doesn't make "tons of unmount and mount operations", but see in the image below how it acted in the face of this bug:

[Screenshot from 2024-04-29 22:22:22]

We upgraded on 04/23 from 1.5.4 to 1.6.1, and the memory leak clearly consumed lots of RAM. Then we applied this fix on 04/26, and it's stable now. Every peak in this image was a very painful, huge outage!

Besides, we don't make "tons of unmount and mount operations"; we have about five workloads where 3 pods use the same volume, so maybe this is related somehow.

I repeat: this critical bug is underestimated. I don't have any special use case, just a normal (small) Kubernetes cluster with Rancher, and it made my company's services burn for a week. It's very likely there are tons of users affected, and after one week we don't even have an RC build.

@blushingpenguin for modern software development problems we have modern management methods to deal with them. My point was that this critical bug shows it was a mistake to call version 1.6.1 stable.

I also work on some open source projects, and my intention here is to help this project be more stable and reliable, as a storage system must be.

Please tell me if I can provide any additional information on this issue.

roger-ryao commented on May 28, 2024

Verified on master-head 20240507

The test steps
#8394 (comment)

  1. Create first workload with a RWX volume by https://github.com/longhorn/longhorn/blob/master/examples/rwx/rwx-nginx-deployment.yaml
  2. Scale up the replicas to 3.
  3. Check if 3 pods are in the "Running" state.
  4. Scale down the replicas to 1.
  5. Check if one pod is in the "Running" state.
    We can test steps 2-5 using the following shell script.
deployment_rwx_test.sh
#!/bin/bash

# Define the deployment name
DEPLOYMENT_NAME="rwx-test"
KUBECONFIF="/home/ryao/Desktop/note/longhorn-tool/ryao-161.yaml"

for ((i=1; i<=100; i++)); do
    # Scale deployment to 3 replicas
    kubectl --kubeconfig=$KUBECONFIF scale deployment $DEPLOYMENT_NAME --replicas=3

    # Wait for the deployment to have 3 ready replicas
    until [[ "$(kubectl --kubeconfig=$KUBECONFIF get deployment $DEPLOYMENT_NAME -o=jsonpath='{.status.readyReplicas}')" == "3" ]]; do
        ready_replicas=$(kubectl --kubeconfig=$KUBECONFIF get deployment $DEPLOYMENT_NAME -o=jsonpath='{.status.readyReplicas}')
        echo "Iteration #$i: $DEPLOYMENT_NAME has $ready_replicas ready replicas"
        sleep 1
    done

    # Check if all pods are in the "Running" state
    while [[ $(kubectl --kubeconfig=$KUBECONFIF get pods -l=app=$DEPLOYMENT_NAME -o=jsonpath='{.items[*].status.phase}') != "Running Running Running" ]]; do
        echo "Not all pods are in the 'Running' state. Waiting..."
        sleep 5
    done

    # Scale deployment down to 1 replica
    kubectl --kubeconfig=$KUBECONFIF scale deployment $DEPLOYMENT_NAME --replicas=1

    # Wait for the deployment to have 1 ready replica
    until [[ "$(kubectl --kubeconfig=$KUBECONFIF get deployment $DEPLOYMENT_NAME -o=jsonpath='{.status.readyReplicas}')" == "1" ]]; do
        ready_replicas=$(kubectl --kubeconfig=$KUBECONFIF get deployment $DEPLOYMENT_NAME -o=jsonpath='{.status.readyReplicas}')
        echo "Iteration #$i: $DEPLOYMENT_NAME has $ready_replicas ready replicas"
        sleep 1
    done

    # Check if all pods are in the "Running" state
    while [[ $(kubectl --kubeconfig=$KUBECONFIF get pods -l=app=$DEPLOYMENT_NAME -o=jsonpath='{.items[*].status.phase}') != "Running" ]]; do
        echo "Not all pods are in the 'Running' state. Waiting..."
        sleep 5
    done
done
  1. Find the PID of the nfs-ganesha in the share-manager pod by ps aux
  2. Observe the VmRSS of nfs-ganesha in the share-manager pod by cat /proc/<nfs-ganesha PID>/status | grep VmRSS
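
A sketch of a one-liner that combines both steps from outside the pod (assumptions: the pod runs in the longhorn-system namespace, pgrep is available inside the share-manager image, and the pod name is a placeholder):

kubectl -n longhorn-system exec share-manager-pvc-<volume-uuid> -- \
  sh -c 'grep VmRSS /proc/$(pgrep ganesha.nfsd)/status'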

Result: Passed

  1. We were also able to reproduce this issue on v1.6.1.
  2. After executing the script, the output for v1.6.1 is as follows:
Every 2.0s: cat /proc/29/status | grep VmRSS                     share-manager-pvc-119d403e-ae17-4f4f-aa7f-06e7bf40fca2: Tue May  7 09:54:38 2024

VmRSS:     47192 kB

For the master-head:

Every 2.0s: cat /proc/43/status | grep VmRSS                    share-manager-pvc-d0a41b4a-5bb8-4cd5-b60a-d260c6ce8a34: Tue May  7 10:08:18 2024

VmRSS:     42588 kB
