Comments (8)
Thanks for your explanation. I will try this package to see if it can be implemented in our system for messaging.
from bus.
hey @ilijaNL!
A queue per message type per service would look something like this:
If this is the case, there'd be a few limitations that make it impractical for most use cases:
- the service needs to open one channel per queue to read from it.
- if you want to enforce a concurrency limit (i.e. only process 1-n messages concurrently) then the service needs to round-robin read from each queue. This can lead to starvation of larger queues, as newer messages in shorter queues will be processed before older messages in longer queues.
- if you want to read all queues fairly, then you open all channels at once. At this point you can't enforce concurrency, so you could easily end up saturating the service, especially if you had a large number of queues.
- message order isn't adhered to, since messages in shorter queues will go through before older messages in longer queues. In normal circumstances this is dealt with using retries (as eventually the older message will be processed, providing the state that allows the newer message to succeed). However, if there's a backlog that prevents the older message from being processed, then the newer message could exhaust its retries and be dumped into the DLQ.
- I believe you'd still need the same message retry behaviour, as a queue-per-message-type-per-service doesn't provide a way to count handle attempts of the message.
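The round-robin ordering problem above can be seen in a toy in-memory model (this is an illustration of the failure mode, not the library's actual behaviour): one queue per message type, drained round-robin with a concurrency of 1.

```typescript
// Toy model: three per-message-type queues for one service.
// seq records the global publish order of each message.
type Message = { type: string; seq: number };

const queues: Record<string, Message[]> = {
  OrderPlaced: [],
  OrderPaid: [],
  OrderClosed: [],
};

// A backlog of three OrderPlaced messages published first...
let seq = 0;
for (let i = 0; i < 3; i++) {
  queues.OrderPlaced.push({ type: "OrderPlaced", seq: seq++ });
}
// ...then one newer message in each of the other queues.
queues.OrderPaid.push({ type: "OrderPaid", seq: seq++ });
queues.OrderClosed.push({ type: "OrderClosed", seq: seq++ });

// Round-robin: take at most one message from each queue per pass.
const processed: Message[] = [];
while (Object.values(queues).some((q) => q.length > 0)) {
  for (const q of Object.values(queues)) {
    const msg = q.shift();
    if (msg) processed.push(msg);
  }
}

// The newer OrderPaid/OrderClosed messages (seq 3 and 4) jump ahead of
// the older OrderPlaced backlog (seq 1 and 2): publish order is lost.
console.log(processed.map((m) => m.seq)); // [ 0, 3, 4, 1, 2 ]
```

Messages published later in short queues overtake older messages stuck in the long queue, which is exactly the starvation/ordering issue described above.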
Thanks for your reply.
- I wonder how you define concurrency. If you want to achieve concurrency at the service level, you are right: that can only be achieved with one queue per service. However, if you want to achieve concurrency at the handler level, then another approach should be used.
- It depends how, but every handler (per service) should have an equal chance of being served.
- It doesn't; you still need some (dynamic?) retry queues. However, when the message comes back to the queue, it wouldn't be blocked by other handlers' messages.
I made a small RabbitMQ playground showing what I mean by a queue per handler per service, which can be reproduced here: http://tryrabbitmq.com/
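A queue-per-handler-per-service topology can be sketched in memory roughly like this (service and handler names are illustrative, and this models plain fanout routing rather than any particular library's API):

```typescript
// Each event type has its own exchange; every handler in every service
// binds a dedicated queue to the exchanges it cares about.
type Binding = { exchange: string; queue: string };

const bindings: Binding[] = [
  // billing service: two handlers, two queues
  { exchange: "OrderPlaced", queue: "billing.OrderPlacedHandler" },
  { exchange: "OrderReturned", queue: "billing.OrderReturnedHandler" },
  // shipping service subscribes to OrderPlaced with its own queue
  { exchange: "OrderPlaced", queue: "shipping.OrderPlacedHandler" },
];

const queues = new Map<string, string[]>();
for (const b of bindings) queues.set(b.queue, []);

// Publishing fans a copy of the event out to every bound queue.
function publish(exchange: string, payload: string): void {
  for (const b of bindings) {
    if (b.exchange === exchange) queues.get(b.queue)!.push(payload);
  }
}

publish("OrderPlaced", "order-1");
publish("OrderReturned", "order-1");

console.log(queues.get("billing.OrderPlacedHandler")); // [ 'order-1' ]
console.log(queues.get("shipping.OrderPlacedHandler")); // [ 'order-1' ]
```

Because each handler owns its queue, a backlog in `billing.OrderPlacedHandler` never delays delivery to `billing.OrderReturnedHandler`.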
I'm struggling to understand what use case this would solve. Perhaps if you have an example then it might illustrate what needs to be achieved and how the queue-per-message-type approach would solve this?
I can share the perspective of how the transports are implemented -
Services can be written from a DDD perspective and handle messages from multiple domains, a single domain, or even just a single aggregate root. If we're just talking about a single aggregate root, like say a product order, the message stream might be:
PlaceOrder
PayOrder
ConfirmOrder
FulfilOrder
CloseOrder
If this is processed using a single queue then all messages will be processed in order. This will be the case even if there's a service queue backlog. This is also the case if the messages arrive immediately after one another and there are multiple instances of the service processing the queue.
Contrast this to a queue-per-message-type. If a PayOrder arrives before a PlaceOrder has been processed (as there's a huge backlog in that queue), there's a good chance that it'll get handled, throw an error, and retry until it's failed into the DLQ.
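That retry-then-DLQ failure mode can be sketched as follows (a toy model with an illustrative retry limit, not the library's retry mechanism):

```typescript
// With per-message-type queues, a PayOrder can be delivered while its
// PlaceOrder is still stuck in a backlogged queue. The PayOrder handler
// fails every attempt (no order state exists yet) and is dead-lettered.
const MAX_ATTEMPTS = 3; // illustrative retry limit

const orders = new Set<string>(); // state written by the PlaceOrder handler
const dlq: string[] = [];

function handlePayOrder(orderId: string): void {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    if (orders.has(orderId)) return; // success: the order now exists
    // otherwise: PlaceOrder is still backlogged, so this attempt fails
  }
  dlq.push(orderId); // retries exhausted before PlaceOrder was processed
}

handlePayOrder("order-1"); // PlaceOrder("order-1") never ran
console.log(dlq); // [ 'order-1' ]
```

With a single service queue, PlaceOrder would have been processed first and the PayOrder handler would have found the order state it needs.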
Thanks for your response. You are correct when we talk about commands. Commands indeed should arrive and be processed in order, hence one queue. However, in most cases you never dispatch many related commands at a time. Looking at your example, you need some choreography/orchestration process: for example, you start with a PlaceOrder command, an OrderPlaced event is published, and after that the PayOrder command will be sent, and so forth.
Now let's talk about publishing events. A big drawback of having one queue per service is that a message blocks all unrelated messages in the same queue. For example:
Let's say I have a service called billing with two event handlers: OrderPlaced and OrderReturned. Now let's say many OrderPlaced events are dispatched by some other service, and the OrderPlaced handler has a large processing time (e.g. I/O). Now consider that new OrderPlaced events are coming in faster than they are processed. When an OrderReturned event comes in, it will take an unnecessarily long time to reach the front of the queue and be processed, and perhaps because of that it will block other workflows. Now consider that this OrderReturned handler fails, which will put the event back at the end of the queue. Additionally, the queue can become unnecessarily large. RabbitMQ can also be sharded more easily with many small queues than with one large one.
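The head-of-line blocking in this billing example can be shown with a toy timing model (processing costs are arbitrary virtual units, not measurements):

```typescript
// A slow OrderPlaced handler and a fast OrderReturned handler, compared
// on one shared service queue vs. a queue per handler.
type Msg = { type: "OrderPlaced" | "OrderReturned" };
const cost = { OrderPlaced: 10, OrderReturned: 1 }; // virtual time units

const backlog: Msg[] = [
  ...Array.from({ length: 5 }, (): Msg => ({ type: "OrderPlaced" })),
  { type: "OrderReturned" }, // arrives behind the whole backlog
];

// One shared queue: OrderReturned waits behind every OrderPlaced.
let clock = 0;
let sharedLatency = 0;
for (const m of backlog) {
  clock += cost[m.type];
  if (m.type === "OrderReturned") sharedLatency = clock;
}

// Queue per handler: OrderReturned only waits behind other OrderReturned.
const returnedQueue = backlog.filter((m) => m.type === "OrderReturned");
let perHandlerLatency = 0;
for (const m of returnedQueue) perHandlerLatency += cost[m.type];

console.log(sharedLatency); // 51: stuck behind five slow OrderPlaced messages
console.log(perHandlerLatency); // 1: processed immediately
```

The gap grows with the backlog: every additional OrderPlaced message adds its full processing cost to the OrderReturned latency in the shared-queue case, and nothing in the per-handler case.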
Considering queues per handler per service for events, I don't see any drawbacks, only benefits. Perhaps you could give me an example where the order of events (not commands) does matter?
Thanks for the example. I totally agree with what you said here:
A big drawback of having one queue per service is that a message blocks all unrelated messages in the same queue
How you decide to group handlers into services really depends on your application. NServiceBus recommends the fewer message handlers per service the better.
Personally, I've found it useful to have a dedicated service & queue just for workflow orchestration. This avoids the issue of message backlogs delaying "next steps". Beyond that, I might start with one service per domain and if a message type is causing delays then it can be shaved off into a dedicated service and scaled independently.
If you find that a queue-per-handler with multiple handlers per service is best for you, you should be able to start multiple instances of the bus - each with a single handler. I haven't personally done this but I imagine it should be fine all things considered.
Thanks for the reply! Yes, starting multiple instances is a possibility; I didn't think about that, thanks! Talking about NServiceBus: is the exchange and queue setup comparable to NServiceBus's implementation of the RabbitMQ transport?
It's the same fanout pub/sub model as in NServiceBus, though from memory their implementation makes use of a database to polyfill some limitations around RabbitMQ, like retry backoffs, that this library doesn't yet have.
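For context, the delayed-retry behaviour being polyfilled is typically just an exponential backoff schedule; a minimal sketch, with illustrative constants that don't come from either library:

```typescript
// Exponential backoff: the delay doubles with each attempt, up to a cap.
// baseMs and capMs are illustrative defaults, not real library settings.
function retryDelayMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  return Math.min(capMs, baseMs * 2 ** (attempt - 1));
}

// attempts 1..5 back off as 1s, 2s, 4s, 8s, 16s; later attempts hit the cap
console.log([1, 2, 3, 4, 5, 10].map((a) => retryDelayMs(a)));
// [ 1000, 2000, 4000, 8000, 16000, 60000 ]
```

Since a plain RabbitMQ queue can't natively hold a message back for a computed delay, implementations usually persist the due time somewhere (e.g. a database, as described above) and republish when it elapses.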