Comments (7)
@mochi-co , sharing some real-world performance data. This is one node in a much larger cluster; the node has 4 vCPU and 32 GB memory. Each client sends ~2KB on average (some payloads can be much larger) once every minute.
V1 with `BufferSize=64K`, `BufferBlockSize=8K`; V2 with `FanPoolSize=128 * GOMAXPROCS`, `FanPoolQueueSize=1024`:

| | V1 | V2 |
|---|---|---|
| Clients | ~95K | ~190K |
| CPU (of 400%) | ~60% | ~120% |
| Memory (of 32GB) | ~21GB | ~11GB |
Per connection, the CPU utilization is on par with v1 (twice the clients at twice the CPU). The memory improvement in V2 is very dramatic, well done!
`./mqtt-stresser -broker tcp://192.168.11.52:1883 -num-clients=10 -num-messages=10000`
(v1.3.2)
Median: 15990 msg/sec
Median: 16551 msg/sec
Median: 13630 msg/sec
Median: 20391 msg/sec
(v2.2.1)
Median: 16386 msg/sec
Median: 15545 msg/sec
Median: 14253 msg/sec
Median: 18597 msg/sec
Now I don't see a performance loss versus v1.
I also see that the slow-consumer problem is resolved. Good job.
@izarraga Wonderful, thank you!
I get the following results:
(v1.3.2)
- Receiving Median: 59284 msg/sec, Publishing Median: 29438 msg/sec
- Receiving Median: 67485 msg/sec, Publishing Median: 33231 msg/sec
- Receiving Median: 54457 msg/sec, Publishing Median: 30674 msg/sec
- Receiving Median: 75374 msg/sec, Publishing Median: 34337 msg/sec
(v2.0.4)
- Receiving Median: 41456 msg/sec, Publishing Median: 31356 msg/sec
- Receiving Median: 32336 msg/sec, Publishing Median: 29485 msg/sec
- Receiving Median: 33827 msg/sec, Publishing Median: 33531 msg/sec
- Receiving Median: 35704 msg/sec, Publishing Median: 27839 msg/sec
(v2.0.4) with FanPoolSize=8*1024, FanPoolQueueSize=128*128
- Receiving Median: 76887 msg/sec, Publishing Median: 21007 msg/sec
- Receiving Median: 46100 msg/sec, Publishing Median: 26947 msg/sec
- Receiving Median: 51609 msg/sec, Publishing Median: 26087 msg/sec
- Receiving Median: 50639 msg/sec, Publishing Median: 26639 msg/sec
I agree that there is a performance difference between v1.3.2 and v2.0.4, and it appears to be on the receiving side. As I understand it, mqtt-stresser's publishing and receiving values refer to the client side (adding `time.Sleep(time.Millisecond * 50)` to `cl.WritePacket` reduces the receive throughput).
This suggests that v2.0.4 is slower at issuing packets than v1.3.2; however, the code controlling the receipt of packets from the client has remained largely the same, minus the circular buffer. As to why, I can't say yet. I will need to compare the code and see if there are any major differences.
One caveat is that although v1.3.2 was very fast, it was also very unstable and tended to freeze up under heavy or oversized traffic - this was one of the main motivations behind rewriting the packet processing in v2. Still, I think this is an unusual difference and deserves some investigation.
@thedevop Just seeing this now - this is wonderful to see! Hopefully the new changes we're discussing won't affect memory usage too severely 😅
Yeah, depending on the buffered channel size - at 1K it may add ~2GB for 200K connections. There are some areas we can potentially optimize to reduce overall memory, for example using `sync.Pool` (which can potentially reduce CPU as well). But that adds complexity; we can look into it in the future.
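As an illustration of the `sync.Pool` idea, here is a minimal sketch; `handlePacket` and the 2KB buffer size are hypothetical, not the broker's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool reuses packet-sized byte slices so each publish does not
// allocate a fresh buffer, reducing allocations and GC pressure.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 0, 2048) }, // ~2KB avg payload
}

// handlePacket copies a payload into a pooled buffer, does its work,
// and returns the buffer to the pool for reuse.
func handlePacket(payload []byte) int {
	buf := bufPool.Get().([]byte)[:0]
	buf = append(buf, payload...) // encode/copy into the pooled buffer
	n := len(buf)
	bufPool.Put(buf) // return for reuse once the write completes
	return n
}

func main() {
	fmt.Println(handlePacket([]byte("hello"))) // 5
}
```

The trade-off is exactly the complexity mentioned above: a pooled buffer must not be retained after `Put`, which is easy to get wrong across goroutines.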
@izarraga Would you be able to recheck the benchmark on your machine against v2.2.0? I would be interested to see if there has been an improvement... Thank you!
Related Issues (20)
- How to allow a specific username to read/write on a specific topic, and deny everything else? HOT 1
- After enabling badger, the vlog file grew to 700MB in one day and 4GB in one week HOT 9
- Race condition when running the redis example HOT 4
- How to check whether a Client is in a Disconnected state when iterating over Clients HOT 3
- Hello author, please take a look at this issue HOT 3
- Hi, what is the simplest way to restore messages after the server is cut off? HOT 5
- [badgerdb] vlog growing unbounded - consider adding GC and exposing options HOT 6
- The badger vlog file still keeps growing indefinitely HOT 7
- How to send topics posted by specific users only to specific subscribed users? HOT 11
- Has pebble persistence been released? HOT 5
- MQTTX cannot use Topic Alias. MQTT 5.0 topic-alias publishes get stuck; messages using a topic alias cannot be published HOT 2
- How to use the new persistent hook? HOT 1
- Reload auth file at runtime HOT 2
- Server-side subscription problem in InlineClient mode: inline subscribers do not receive messages HOT 5
- Merge 2 version of storm HOT 4
- Add Support for Disconnect With Will Message Reason Code
- Logging Level is not Configurable Via File Configuration
- Persistence storage did not work with SetCleanSession(false) HOT 3
- Don't allow inheriting session unless username matches HOT 5
- MessageExpiry Hook HOT 1