Comments (28)
v1.5.5 still looks a bit leaky, but nothing on the scale of before:
I will switch to your patched version now and report back tomorrow. Thanks very much for following up on this!
@brunosmartin It's upstream that's at fault here, and it's possibly related to usage patterns (I was seeing much higher RAM usage with volumes that were being mounted/unmounted a lot). This is a problem of modern software development, I suppose -- pretty much all software is built out of other components these days, and validating that they all work together is a pretty difficult task. I actually think the response has been excellent here; @derekbit was on the case immediately.
You can just patch your deployment as outlined in #8394 (comment) until the next release -- this has mostly fixed the problem for me (it still looks a bit leaky, but nothing like before).
@derekbit I'm happy to test a fixed build; however, I think it's probably worth giving v1.5.5 another 24h to confirm that the leak doesn't occur with our weekday usage pattern. I'll check it tomorrow and report back.
Thanks,
Mark
May I ask some questions about your setup?
Sure
* Is it correct that you have 5 deployments?
Yep, 3 production
* Each deployment uses 1 RWX PVC?
Yes
* Each deployment has 3 pods?
exactly, scale = 3
* What are the sizes of the RWX PVCs?
The smallest is 200GB, the biggest is 500GB. They are WordPress sites with Cloudflare as CDN (with page cache), so they are not expected to be IO intensive. Only the Apache pods use RWX PVCs; the database does the intensive IOPS operations on a single-node read-write PVC.
Also, do you know what happened at the timestamp in the picture below? Did your workload deployments restart?
Yes, they did restart at that point, but I'm not sure this was related to Longhorn; at that point the patch had already been applied (since the huge memory drop at 04/26 @ 12pm).
Hope it helps, good luck with this.
@derekbit I can't be sure, but v1.5.5 looks better; below is the memory usage with v1.6.1 jumping when the files are read by some nightly jobs. One thing I hadn't thought of is that those pods are all transient, and there are a few thousand of them, so this also corresponds to a lot of mounting and unmounting:
and here is the comparison from yesterday's nightly jobs:
you can see there is a small increase in memory usage but it's nothing like before
@blushingpenguin
Thanks for the update.
I can build a customized share-manager image with the fix of the memory leak. Can you help test if it is the culprit in v1.6.1? What do you think?
Thanks @blushingpenguin for the quick update.
The fix will be included in v1.6.2. Temporarily, you can continue using derekbit/longhorn-share-manager:v1.6.1-fix-leak.
I also found nfs-ganesha still seems a bit leaky after applying the fix. It's minor. I will check this part later.
This is a very, very, very bad situation! Will we have a release like... today?!
In my cluster with just 5 nodes and 12 disks, Longhorn is using more than 100GB of RAM across all nodes!!!
This response time will show us whether this is a reliable storage system; how can a critical bug like this happen in a release marked as stable?!
Please, just fix it!
Can you elaborate more on your use case?
The issue is triggered only when there are tons of unmount and mount operations.
I've provided a share-manager image for mitigating the issue before the v1.6.2 release. Please see #8394 (comment).
@brunosmartin
Thank you for the update. We will review the strategy for marking stable releases. cc @innobead
The high memory usage in your cluster sounds like it is triggered by the same nfs-ganesha bug.
Can you elaborate more on the workload, such as the applications, IO patterns, and whether it is read intensive or write intensive? This information will help us define test cases and avoid falling into known issues again.
May I ask some questions about your setup?
- Is it correct that you have 5 deployments?
- Each deployment uses 1 RWX PVC?
- Each deployment has 3 pods?
- What are the sizes of the RWX PVCs?
Also, do you know what happened at the timestamp in the picture below? Did your workload deployments restart?
@PhanLe1010 just to clarify, I have 5 deployments with scale > 1, but I have many more RWX PVCs; I think about 50 RWX PVCs, most of them deployed with scale = 1.
examples of the leak since last share-manager pvc restart:
not sure but I think the big spikes may correspond to attaching the volume to a different pod given the timings
supportbundle_5c8ad69a-4ef9-47a4-8c8c-6aaa06a38849_2024-04-19T06-15-54Z.zip
I've snipped some logs out of this that contain too much information (for example the node syslogs), I can supply relevant sections if needed or possibly share them privately (I'd need to get permission to do that)
From running top in the share-manager-pvc pods, it looks like it's ganesha.nfsd that's at fault (nfs-ganesha/nfs-ganesha#1105 ?)
longhorn-share-manager's memory usage is ~13MB in each pod
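For anyone wanting to repeat this check, here is a small sketch (not from the original report) that ranks processes by resident memory in every share-manager pod; it assumes kubectl access to the longhorn-system namespace and that the ps inside the image supports --sort (the procps-style output below suggests it does):

for pod in $(kubectl -n longhorn-system get pods -o name | grep share-manager-pvc); do
  echo "== $pod =="
  # largest RSS consumers first; ganesha.nfsd should top the list if it is leaking
  kubectl -n longhorn-system exec "$pod" -- ps aux --sort=-rss | head -n 5
done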
Can you log in to the share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f pod and provide us the results of
ps aux
cat /proc/<PID of ganesha.nfsd>/status
cat /proc/1/status
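The same commands can also be run non-interactively via kubectl exec; a sketch (the pod name is the one above, and the ganesha PID has to be read from the ps aux output first -- it is 38 in the listing below, but will differ per pod):

POD=share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f
kubectl -n longhorn-system exec "$POD" -- ps aux
kubectl -n longhorn-system exec "$POD" -- cat /proc/38/status   # PID of ganesha.nfsd taken from ps aux
kubectl -n longhorn-system exec "$POD" -- cat /proc/1/status    # PID 1 is longhorn-share-manager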
@PhanLe1010 Have you ever observed the memory leak in the RWX performance investigation?
share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f:/ # ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 1237096 13200 ? Ssl Apr14 2:23 /longhorn-share-manager --debug daemon --volume pvc-0cb784d8-46f9-42e1-9476-591
root 38 1.6 1.6 5015480 2145440 ? Sl Apr14 113:57 ganesha.nfsd -F -p /var/run/ganesha.pid -f /tmp/vfs.conf
root 610865 0.8 0.0 6956 4268 pts/0 Ss 07:30 0:00 bash
root 610893 133 0.0 13748 4048 pts/0 R+ 07:30 0:00 ps aux
share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f:/ # cat /proc/38/status
Name: ganesha.nfsd
Umask: 0000
State: S (sleeping)
Tgid: 38
Ngid: 0
Pid: 38
PPid: 1
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
FDSize: 64
Groups: 0
NStgid: 38
NSpid: 38
NSpgid: 1
NSsid: 1
VmPeak: 5083072 kB
VmSize: 5015480 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 2145444 kB
VmRSS: 2145444 kB
RssAnon: 2134608 kB
RssFile: 10836 kB
RssShmem: 0 kB
VmData: 2303600 kB
VmStk: 136 kB
VmExe: 16 kB
VmLib: 18704 kB
VmPTE: 4784 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
CoreDumping: 0
THP_enabled: 1
Threads: 20
SigQ: 1/514185
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000005001
SigIgn: 0000000000000000
SigCgt: 0000000180000000
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
NoNewPrivs: 0
Seccomp: 0
Seccomp_filters: 0
Speculation_Store_Bypass: thread vulnerable
SpeculationIndirectBranch: conditional enabled
Cpus_allowed: fff
Cpus_allowed_list: 0-11
Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 56
nonvoluntary_ctxt_switches: 109
share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f:/ # cat /proc/1/status
Name: longhorn-share-
Umask: 0022
State: S (sleeping)
Tgid: 1
Ngid: 0
Pid: 1
PPid: 0
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
FDSize: 64
Groups: 0
NStgid: 1
NSpid: 1
NSpgid: 1
NSsid: 1
VmPeak: 1237096 kB
VmSize: 1237096 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 15228 kB
VmRSS: 13200 kB
RssAnon: 4784 kB
RssFile: 8416 kB
RssShmem: 0 kB
VmData: 46880 kB
VmStk: 136 kB
VmExe: 4380 kB
VmLib: 8 kB
VmPTE: 120 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
CoreDumping: 0
THP_enabled: 1
Threads: 18
SigQ: 1/514185
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffc3bba3a00
SigIgn: 0000000000000000
SigCgt: fffffffd7fc1feff
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
NoNewPrivs: 0
Seccomp: 0
Seccomp_filters: 0
Speculation_Store_Bypass: thread vulnerable
SpeculationIndirectBranch: conditional enabled
Cpus_allowed: fff
Cpus_allowed_list: 0-11
Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 150
nonvoluntary_ctxt_switches: 59
share-manager-pvc-0cb784d8-46f9-42e1-9476-5918989ec94f:/ #
The VmRSS of the nfs-ganesha process is too high (2145444 kB). The high VmRSS value was not observed in my cluster, and the value went back to a lower value after an IO intensive task.
To verify whether it is caused by the upstream regression you mentioned, could you provide us with the steps to reproduce as well as the following information:
- What's your workload (manifest is appreciated)
- Is it IO intensive? Write or read?
I'm not sure if it's caused by the regression, I was just looking for leaks in upstream given it's the nfs server using all the ram.
The workload is that maybe 10-15 times a day rsync is run on ~4000 files, and half a dozen files will be copied to the nfs share (which in our case corresponds to releasing some code), then a few hundred times a day a few dozen of those files will be read at a time. It's quite light use and read mostly.
By manifest are you after a test workload?
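If a synthetic stand-in is useful, here is a rough sketch (entirely hypothetical, not our real manifest) that approximates the pattern against an RWX volume mounted at /mnt/share: an occasional rsync of a few thousand small files plus read-mostly traffic on a handful of them. Paths, counts and intervals are made up.

#!/bin/bash
SRC=/tmp/testdata
DST=/mnt/share/releases
mkdir -p "$SRC" "$DST"
# create ~4000 small files once
for i in $(seq 1 4000); do
  head -c 4096 /dev/urandom > "$SRC/file_$i"
done
while true; do
  # the occasional "release": rsync the whole tree onto the NFS share
  rsync -a "$SRC/" "$DST/"
  # read-mostly traffic: read a few dozen of the files at a time
  for n in $(seq 1 30); do
    cat "$DST/file_$((RANDOM % 4000 + 1))" > /dev/null
  done
  sleep 60
done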
I'm not sure if it's caused by the regression, I was just looking for leaks in upstream given it's the nfs server using all the ram.
The workload is that maybe 10-15 times a day rsync is run on ~4000 files, and half a dozen files will be copied to the nfs share (which in our case corresponds to releasing some code), then a few hundred times a day a few dozen of those files will be read at a time. It's quite light use and read mostly.
By manifest are you after a test workload?
OK. I will try the steps in our lab.
If you are available, you can also try v1.5.5 and see if it doesn't have the issue.
@blushingpenguin
You can test the share-manager image derekbit/longhorn-share-manager:v1.6.1-fix-leak on Longhorn v1.6.1.
To replace the share-manager image, edit the longhorn-manager daemonset by kubectl -n longhorn-system edit daemonset longhorn-manager, then replace the share-manager image:
....
spec:
  containers:
  - command:
    - longhorn-manager
    - -d
    - daemon
    - --engine-image
    - longhornio/longhorn-engine:v1.6.1
    - --instance-manager-image
    - longhornio/longhorn-instance-manager:v1.6.1
    - --share-manager-image
    - longhornio/longhorn-share-manager:v1.6.1   # <---- Replace this image with derekbit/longhorn-share-manager:v1.6.1-fix-leak
    - --backing-image-manager-image
    - longhornio/backing-image-manager:v1.6.1
    - --support-bundle-manager-image
    - longhornio/support-bundle-kit:v0.0.36
    - --manager-image
    - longhornio/longhorn-manager:v1.6.1
    - --service-account
    - longhorn-service-account
    - --upgrade-version-check
...
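A non-interactive alternative to the edit above, if that is more convenient (a sketch; double-check the image string against your own daemonset before piping the result to kubectl apply):

kubectl -n longhorn-system get daemonset longhorn-manager -o yaml \
  | sed 's|longhornio/longhorn-share-manager:v1.6.1|derekbit/longhorn-share-manager:v1.6.1-fix-leak|' \
  | kubectl apply -f -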
Many thanks.
@derekbit yes, it looks like that was the major cause.
v1.6.1-fix-leak:
(you can see the restart yesterday at around 05:30)
Thanks very much for digging into it so quickly.
Pre Ready-For-Testing Checklist
- Where are the reproduce steps/test steps documented?
The reproduce steps/test steps are at:
- Create a 3 node cluster
- Create the first workload with a RWX volume by https://github.com/longhorn/longhorn/blob/master/examples/rwx/rwx-nginx-deployment.yaml
- Create a second workload with the RWX volume.
- Scale the second workload down and back up repeatedly, 100 times
- Find the PID of nfs-ganesha in the share-manager pod by ps aux
- Observe the VmRSS of nfs-ganesha in the share-manager pod by cat /proc/<nfs-ganesha PID>/status | grep VmRSS
- VmRSS in LH v1.6.1 is significantly larger than the value after applying the fix.
- Does the PR include the explanation for the fix or the feature?
- Have the backend code been merged (Manager, Engine, Instance Manager, BackupStore etc) (including backport-needed/*)?
The PRs are at:
longhorn/nfs-ganesha#13
longhorn/longhorn-share-manager#203
- Which areas/issues this PR might have potential impacts on?
Area: RWX volume, memory leak, upstream
Issues
This is a very, very, very bad situation! Will we have a release like... today?!
In my cluster with just 5 nodes and 12 disks, Longhorn is using more than 100GB of RAM across all nodes!!!
This response time will show us whether this is a reliable storage system; how can a critical bug like this happen in a release marked as stable?!
Please, just fix it!

Can you elaborate more on your use case? The issue is triggered only when there are tons of unmount and mount operations.
I've provided a share-manager image for mitigating the issue before the v1.6.2 release. Please see #8394 (comment).

Maybe this is underestimated; my cluster doesn't make "tons of unmount and mount operations", but see in the images above how it behaved in the face of this bug:
We upgraded on 04/23 from 1.5.4 to 1.6.1, and the memory leak clearly consumed lots of RAM. Then we applied this fix on 04/26, and it's stable now. Every peak in this image was a very painful, huge outage!
Besides, we don't make "tons of unmount and mount operations"; we have around five workloads where 3 pods use the same volume, so maybe this is related somehow.
I repeat, this critical bug is underestimated. I don't have any special use case, just a normal (small) Kubernetes cluster with Rancher, and it made my company's services burn for a week. It's very likely there are tons of users affected, and after one week we don't even have an RC build.
@blushingpenguin for modern software development problems we have modern management methods to deal with them. My point was that this critical bug shows it was a mistake to call version 1.6.1 stable.
I also work on some open source projects, and my intention here is to help this project become more stable and reliable, as a storage system must be.
Please tell me if I can provide any additional information on this issue.
Verified on master-head 20240507
- longhorn master-head 8026d1a
- nfs-ganesha longhorn-ganesha-v5 longhorn/nfs-ganesha@996a59c
- longhorn-share-manager master-head longhorn/longhorn-share-manager@88d4db1
The test steps
#8394 (comment)
- Create first workload with a RWX volume by https://github.com/longhorn/longhorn/blob/master/examples/rwx/rwx-nginx-deployment.yaml
- Scale up the replicas to 3.
- Check if all 3 pods are in the "Running" state.
- Scale down the replicas to 1.
- Check if the one remaining pod is in the "Running" state.
We can test steps 2-5 using the following shell script.
deployment_rwx_test.sh
#!/bin/bash
# Define the deployment name and kubeconfig
DEPLOYMENT_NAME="rwx-test"
KUBECONFIG="/home/ryao/Desktop/note/longhorn-tool/ryao-161.yaml"

for ((i=1; i<=100; i++)); do
    # Scale deployment up to 3 replicas
    kubectl --kubeconfig=$KUBECONFIG scale deployment $DEPLOYMENT_NAME --replicas=3

    # Wait for the deployment to have 3 ready replicas
    until [[ "$(kubectl --kubeconfig=$KUBECONFIG get deployment $DEPLOYMENT_NAME -o=jsonpath='{.status.readyReplicas}')" == "3" ]]; do
        ready_replicas=$(kubectl --kubeconfig=$KUBECONFIG get deployment $DEPLOYMENT_NAME -o=jsonpath='{.status.readyReplicas}')
        echo "Iteration #$i: $DEPLOYMENT_NAME has $ready_replicas ready replicas"
        sleep 1
    done

    # Check if all pods are in the "Running" state
    while [[ $(kubectl --kubeconfig=$KUBECONFIG get pods -l=app=$DEPLOYMENT_NAME -o=jsonpath='{.items[*].status.phase}') != "Running Running Running" ]]; do
        echo "Not all pods are in the 'Running' state. Waiting..."
        sleep 5
    done

    # Scale deployment down to 1 replica
    kubectl --kubeconfig=$KUBECONFIG scale deployment $DEPLOYMENT_NAME --replicas=1

    # Wait for the deployment to have 1 ready replica
    until [[ "$(kubectl --kubeconfig=$KUBECONFIG get deployment $DEPLOYMENT_NAME -o=jsonpath='{.status.readyReplicas}')" == "1" ]]; do
        ready_replicas=$(kubectl --kubeconfig=$KUBECONFIG get deployment $DEPLOYMENT_NAME -o=jsonpath='{.status.readyReplicas}')
        echo "Iteration #$i: $DEPLOYMENT_NAME has $ready_replicas ready replicas"
        sleep 1
    done

    # Check if the remaining pod is in the "Running" state
    while [[ $(kubectl --kubeconfig=$KUBECONFIG get pods -l=app=$DEPLOYMENT_NAME -o=jsonpath='{.items[*].status.phase}') != "Running" ]]; do
        echo "Not all pods are in the 'Running' state. Waiting..."
        sleep 5
    done
done
- Find the PID of the nfs-ganesha process in the share-manager pod by ps aux
- Observe the VmRSS of nfs-ganesha in the share-manager pod by cat /proc/<nfs-ganesha PID>/status | grep VmRSS
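To keep that value on screen while the script runs, something like the following can be used (a sketch; the "Every 2.0s:" headers in the results below indicate watch was run inside the share-manager pod, and the pod name and PID placeholders need to be substituted):

kubectl -n longhorn-system exec -it share-manager-pvc-<pvc-uuid> -- \
  watch -n 2 "cat /proc/<nfs-ganesha PID>/status | grep VmRSS"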
Result: Passed
- We were also able to reproduce this issue on v1.6.1.
- After executing the script, the output for v1.6.1 is as follows:
Every 2.0s: cat /proc/29/status | grep VmRSS share-manager-pvc-119d403e-ae17-4f4f-aa7f-06e7bf40fca2: Tue May 7 09:54:38 2024
VmRSS: 47192 kB
For the master-head:
Every 2.0s: cat /proc/43/status | grep VmRSS share-manager-pvc-d0a41b4a-5bb8-4cd5-b60a-d260c6ce8a34: Tue May 7 10:08:18 2024
VmRSS: 42588 kB