Comments (8)
This could be related to #6329. Do you know how many blocks you have, approximately, in object storage?
from thanos.
That looks like the right addition. The metric thanos_bucket_store_blocks_loaded at its highest is 35,433. That value and others near it come from dev clusters whose Prometheus / Thanos components have been unstable during large performance tests; I'm not sure whether that contributes to the high block count. Does each interruption in Prometheus service produce a new block?
Is there a way to cache these lookups for older blocks that are unlikely to change? Or can you add a mechanism to turn off this information, either on a particular store or query component?
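One way such caching could work, purely as a sketch (the `BlockInfoCache` class and its field names are hypothetical, not Thanos APIs): compacted blocks are immutable, so per-block label lookups can be memoized indefinitely once a block's time range is sealed, with only recently written blocks kept on a short TTL.

```python
import time

class BlockInfoCache:
    """Illustrative cache for per-block label lookups.

    Sealed blocks (whose max time is in the past) never change, so
    their labels are cached forever; recent blocks get a short TTL.
    All names here are hypothetical, not Thanos's actual types.
    """

    def __init__(self, recent_ttl_seconds=300):
        self.recent_ttl = recent_ttl_seconds
        self._cache = {}  # block_id -> (labels, expires_at or None)

    def get(self, block_id, max_time_ms, now_ms, fetch):
        entry = self._cache.get(block_id)
        if entry is not None:
            labels, expires_at = entry
            if expires_at is None or expires_at > time.monotonic():
                return labels
        labels = fetch(block_id)  # expensive object-storage lookup
        if max_time_ms < now_ms:
            expires_at = None  # sealed block: cache indefinitely
        else:
            expires_at = time.monotonic() + self.recent_ttl
        self._cache[block_id] = (labels, expires_at)
        return labels
```

With 35k+ loaded blocks, a cache like this would turn repeated per-block metadata fetches into one fetch per block lifetime for everything except the newest blocks.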
I can reproduce Ben's findings -- I have a development environment on Thanos 0.34.1 and was experiencing the high network traffic noted above. The 100x factor is also true in my environment -- running an intensive query on 0.34.1 generates peak network activity of 40MB/s. I downgraded to 0.31.0 and the same query peaked at about 480 KB/s.
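Those two peaks are consistent with the ~100x factor reported above; a quick sanity check on the numbers:

```python
# Peak network activity observed for the same query on two versions.
v0341_bytes_per_s = 40 * 1000**2   # 40 MB/s on Thanos 0.34.1
v0310_bytes_per_s = 480 * 1000     # 480 KB/s on Thanos 0.31.0

factor = v0341_bytes_per_s / v0310_bytes_per_s
# factor is roughly 83x, i.e. the same order of magnitude as "100x"
```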
My Thanos queriers have three gRPC endpoints (two TLS/gRPC ingresses for Thanos sidecars, and a TLS/gRPC ingress for a Thanos store service). The development environment I reproduced this on has a small number of blocks in object storage due to limited retention time (230 blocks, each containing 1-4 chunks, 924 objects in total dating back to 03/12), but relatively high series cardinality (prometheus_tsdb_head_series totals 500,000 across 2 K8S clusters).
I was able to do a little bit more digging and think I found the cause!
I think the cause is actually #6317 -- as Douglas notes, this change causes the store/sidecar instances to send labels in their response for filtering purposes, which seems a likely cause for the extra traffic we're seeing. Digging through the PR a bit further, I noticed that the newFlushableServer function skips label flushing if --query.replica-label isn't specified. I verified that I could return to the pre-0.32 traffic volume by removing --query.replica-label.
In my case, the development environment is not using HA Prometheus and I do not need to use dedup. It may be worth calling out the network impacts of dedup because they were significant enough to be the cause of some instability in my development clusters. It's also not clear to me why removing --query.replica-label works in light of the changes made in #6706 -- I guess the label check ultimately moved from flushable.go to proxy_heap.go?
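For readers unfamiliar with what the flag does: conceptually, replica-label deduplication collapses series whose labelsets differ only in the replica labels. A minimal sketch of that idea (this is illustrative only, not Thanos's actual implementation, which also merges samples):

```python
def dedup_series(series, replica_labels):
    """Group series whose labels differ only in replica_labels,
    keeping one representative per group. Conceptual sketch only."""
    seen = {}
    for labels, samples in series:
        key = tuple(sorted(
            (k, v) for k, v in labels.items() if k not in replica_labels
        ))
        # Keep the first replica seen for each deduplicated labelset.
        seen.setdefault(key, (labels, samples))
    return list(seen.values())

series = [
    ({"__name__": "up", "job": "node", "replica": "a"}, [1, 1]),
    ({"__name__": "up", "job": "node", "replica": "b"}, [1, 1]),
    ({"__name__": "up", "job": "api", "replica": "a"}, [1]),
]
deduped = dedup_series(series, replica_labels={"replica"})
# The two HA replicas of the "node" series collapse into one,
# leaving two logical series.
```

This is also why the feature is pointless without HA Prometheus: with a single replica per labelset, every group has exactly one member and the extra label traffic buys nothing.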
@ben-nelson-nbcuni Would you be willing to test whether removing dedup improves matters for your development cluster?
@fpetkovski Am I right in understanding that a feature flag to disable the cuckoo filter would be duplicative, because without it you can't rely on --query.replica-label for deduplication? Also, that it should be sufficient to remove --query.replica-label from our deployments as long as our pods are uniquely identified, including external labels?
Thanks!
jtb
We have 2 Thanos queriers in the chain, one local and one central. Removing --query.replica-label from both the local and the central did not have any effect on the traffic volume spike. For this round of testing, I've attached all of our settings.
Local:
- args:
- query
- --log.level=info
- --log.format=json
- --grpc-address=0.0.0.0:10901
- --http-address=0.0.0.0:10902
- --query.auto-downsampling
- --endpoint=dnssrv+_grpc._tcp.thanos-store-gateway.monitoring.svc
- --endpoint=dnssrv+_grpc._tcp.prometheus-operated.monitoring.svc
Central:
- args:
- query
- --log.level=info
- --log.format=logfmt
- --grpc-address=0.0.0.0:10901
- --http-address=0.0.0.0:10902
- --query.auto-downsampling
- --grpc-client-tls-secure
- --grpc-compression=snappy
- --endpoint=...
Here is a screenshot of prometheus metrics.
- At 12:44, I upgraded the local thanos-query from 0.28.0 to 0.32.4 and removed --query.replica-label.
- At 12:50 (once it was clear network was still spiking), I updated the central thanos-query to remove --query.replica-label (the central is always on version 0.32.4).
- At 12:59, I downgraded the local thanos-query and re-added --query.replica-label.
- As of 13:04, the central thanos-query still doesn't have --query.replica-label.
![Prometheus network traffic during the upgrade/downgrade steps above](https://private-user-images.githubusercontent.com/104170881/322566667-6b018a51-f307-4dd2-835c-72952d13fa54.png)
I suggest we group all blocks by labels here https://github.com/thanos-io/thanos/blob/main/pkg/store/bucket.go#L873-L889 and return one TSDBInfo per stream rather than per block. @MichaHoffmann has noticed that network usage goes down as the number of blocks is reduced.
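The grouping suggested above could look roughly like this (a sketch only; the dict keys are illustrative stand-ins, not the actual fields of Thanos's TSDBInfo, and the real implementation lives in Go in pkg/store/bucket.go):

```python
from collections import defaultdict

def tsdb_infos_by_stream(blocks):
    """Collapse per-block metadata into one entry per external-label
    set ("stream"), widening the time range to cover all of the
    stream's blocks."""
    groups = defaultdict(list)
    for b in blocks:
        key = tuple(sorted(b["labels"].items()))
        groups[key].append(b)
    infos = []
    for key, group in groups.items():
        infos.append({
            "labels": dict(key),
            "min_time": min(b["min_time"] for b in group),
            "max_time": max(b["max_time"] for b in group),
        })
    return infos

blocks = [
    {"labels": {"cluster": "a"}, "min_time": 0, "max_time": 10},
    {"labels": {"cluster": "a"}, "min_time": 10, "max_time": 20},
    {"labels": {"cluster": "b"}, "min_time": 0, "max_time": 20},
]
infos = tsdb_infos_by_stream(blocks)
# 3 blocks collapse to 2 infos, one per external-label set.
```

Since the number of label sets is bounded by the number of Prometheus instances rather than retention length, the response size would stop growing with block count.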
@ben-nelson-nbcuni are you able to test #7308 by any chance?