
Comments (8)

ilijaNL commented on August 16, 2024

Thanks for your explanation. I will try this package to see whether it can be implemented in our system for messaging.


adenhertog commented on August 16, 2024

hey @ilijaNL!

A queue per message type per service would look something like this:

[image: queue-per-message-type-per-service topology diagram]

If this is the case, there'd be a few limitations that make it impractical for most use cases:

  • the service needs to open one channel per queue to read from it (see the sketch after this list).
    • if you want to enforce a concurrency limit (i.e. only process 1-n messages concurrently), the service needs to round-robin reads across the queues. This can lead to starvation of larger queues, as newer messages in shorter queues will be processed before older messages in longer queues.
    • if you want to read all queues fairly, you open all channels at once. At that point you can't enforce a concurrency limit, so you could easily end up saturating the service, especially with a large number of queues.
  • message order isn't adhered to, since messages in shorter queues will go through before older messages in longer queues. In normal circumstances this is dealt with using retries (eventually the older message is processed, providing the state that allows the newer message to succeed). However, if a backlog prevents the older message from being processed, the newer message could exhaust its retries and be dumped into the DLQ.
  • I believe you'd still need the same message retry behaviour, as a queue-per-message-type-per-service setup doesn't provide a way to count the handling attempts of a message.
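A minimal sketch of that channel-per-queue consumption, assuming plain amqplib rather than this library's own transport (the queue names are made up for illustration). It shows why a service-wide concurrency limit is awkward: prefetch only caps in-flight messages per channel, so three queues still mean up to three concurrent messages.

```typescript
// Sketch only: consuming a queue-per-message-type topology with plain amqplib.
// Queue names are illustrative assumptions, not this library's conventions.
import * as amqp from 'amqplib'

const queues = ['service.place-order', 'service.pay-order', 'service.confirm-order']

async function main() {
  const connection = await amqp.connect('amqp://localhost')

  // One channel per queue. prefetch(1) limits concurrency *per channel*,
  // so across 3 queues up to 3 messages can still be in flight at once --
  // there is no single knob for a service-wide concurrency limit.
  for (const queue of queues) {
    const channel = await connection.createChannel()
    await channel.assertQueue(queue, { durable: true })
    await channel.prefetch(1)
    await channel.consume(queue, async msg => {
      if (!msg) return
      try {
        await handle(queue, JSON.parse(msg.content.toString()))
        channel.ack(msg)
      } catch {
        // Requeue on failure; without a retry-count header there is no way to
        // tell how many attempts have already been made (the third bullet above).
        channel.nack(msg, false, true)
      }
    })
  }
}

async function handle(queue: string, message: unknown): Promise<void> {
  console.log('handling', queue, message)
}

main().catch(console.error)
```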


ilijaNL commented on August 16, 2024

Thanks for your reply.

  1. I wonder how you define concurrency. If you want to achieve concurrency at the service level, you are right that it can only be achieved with one queue per service. However, if you want to achieve concurrency at the handler level, then another approach should be used (see the sketch after this list).
  2. It depends on the implementation, but every handler (per service) should have an equal chance of being served.
  3. It doesn't; you still need some (dynamic?) retry queues. However, when the message comes back to the queue, it wouldn't be blocked by other handlers' messages.
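A short sketch of that distinction, again assuming plain amqplib (queue names and prefetch values are purely illustrative): a single channel's prefetch caps concurrency for the whole service, while per-handler channels each get their own window.

```typescript
// Sketch of service-level vs handler-level concurrency with plain amqplib.
import * as amqp from 'amqplib'

async function main() {
  const connection = await amqp.connect('amqp://localhost')

  // Service-level concurrency: one queue, one channel, one prefetch value
  // shared by every handler in the service.
  const serviceChannel = await connection.createChannel()
  await serviceChannel.assertQueue('billing', { durable: true })
  await serviceChannel.prefetch(5) // at most 5 unacked messages for the whole service

  // Handler-level concurrency: one queue and channel per handler, so each
  // handler gets its own independent prefetch window.
  for (const handlerQueue of ['billing.order-placed', 'billing.order-returned']) {
    const handlerChannel = await connection.createChannel()
    await handlerChannel.assertQueue(handlerQueue, { durable: true })
    await handlerChannel.prefetch(5) // 5 in-flight messages per handler
  }
}

main().catch(console.error)
```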

I made a small RabbitMQ playground to show what I mean by a queue per handler per service (a declaration sketch in code follows below):
[image: playground topology — one queue per handler per service]

It can be reproduced here: http://tryrabbitmq.com/
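For reference, a minimal sketch of how that playground topology could be declared with plain amqplib; the exchange and queue names are my own illustrative assumptions, not the library's naming convention.

```typescript
// Sketch: queue-per-handler-per-service topology, declared with amqplib.
import * as amqp from 'amqplib'

async function declareTopology() {
  const connection = await amqp.connect('amqp://localhost')
  const channel = await connection.createChannel()

  // One fanout exchange per event type; publishers only know the exchange.
  await channel.assertExchange('order-placed', 'fanout', { durable: true })
  await channel.assertExchange('order-returned', 'fanout', { durable: true })

  // One queue per handler per service, bound to the event it handles.
  // A backlog on billing.order-placed no longer delays order-returned.
  await channel.assertQueue('billing.order-placed', { durable: true })
  await channel.bindQueue('billing.order-placed', 'order-placed', '')

  await channel.assertQueue('billing.order-returned', { durable: true })
  await channel.bindQueue('billing.order-returned', 'order-returned', '')

  // Another service subscribing to the same event gets its own queue,
  // so each subscriber receives its own copy independently.
  await channel.assertQueue('shipping.order-placed', { durable: true })
  await channel.bindQueue('shipping.order-placed', 'order-placed', '')

  await connection.close()
}

declareTopology().catch(console.error)
```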


adenhertog commented on August 16, 2024

I'm struggling to understand what use case this would solve. Perhaps if you have an example, it might illustrate what needs to be achieved and how the queue-per-message-type approach would solve it?

I can share the perspective of how the transports are implemented -

Services can be written from a DDD perspective and handle messages for multiple domains, a single domain, or even just a single aggregate root. If we're just talking about a single aggregate root, say a product order, the message stream might be:

  • PlaceOrder
  • PayOrder
  • ConfirmOrder
  • FulfilOrder
  • CloseOrder

If this is processed using a single queue then all messages will be processed in order. This will be the case even if there's a service queue backlog. It's also the case if the messages arrive immediately after one another and there are multiple instances of the service processing the queue.

Contrast this with a queue-per-message-type. If a PayOrder arrives before a PlaceOrder has been processed (because there's a huge backlog in that queue), there's a good chance it'll be handled, throw an error, and be retried until it fails into the DLQ.
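To make the ordering point concrete, here's a minimal single-queue dispatcher sketch, assuming plain amqplib and made-up message and queue names: with one queue, a PayOrder can never overtake the PlaceOrder that precedes it, whereas with a queue per message type it can.

```typescript
// Sketch: one queue per service, with a single consumer dispatching by type.
import * as amqp from 'amqplib'

type OrderMessage = {
  type: 'PlaceOrder' | 'PayOrder' | 'ConfirmOrder' | 'FulfilOrder' | 'CloseOrder'
  orderId: string
}

const handlers: Record<OrderMessage['type'], (m: OrderMessage) => Promise<void>> = {
  PlaceOrder: async m => console.log('placing', m.orderId),
  PayOrder: async m => console.log('paying', m.orderId),
  ConfirmOrder: async m => console.log('confirming', m.orderId),
  FulfilOrder: async m => console.log('fulfilling', m.orderId),
  CloseOrder: async m => console.log('closing', m.orderId),
}

async function main() {
  const connection = await amqp.connect('amqp://localhost')
  const channel = await connection.createChannel()
  await channel.assertQueue('orders', { durable: true })
  await channel.prefetch(1) // one in-flight message keeps a single consumer strictly ordered

  await channel.consume('orders', async raw => {
    if (!raw) return
    const message = JSON.parse(raw.content.toString()) as OrderMessage
    // With a single queue, PayOrder cannot be handled before the PlaceOrder
    // that precedes it; with a queue per message type it could, and would then
    // fail and retry until the PlaceOrder backlog clears (or it hits the DLQ).
    await handlers[message.type](message)
    channel.ack(raw)
  })
}

main().catch(console.error)
```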


ilijaNL commented on August 16, 2024

Thanks for your response. You are correct when we talk about commands: commands indeed should arrive and be processed in order, hence the single queue. However, in most cases you never dispatch many related commands at a time. Looking at your example, you need some choreography/orchestration process. For example, you start with a PlaceOrder command, an OrderPlaced event is published, and after that the PayOrder command is sent, and so forth.

Now let's talk about publishing events. A big drawback of having one queue per service is that it blocks everything unrelated in the same queue. For example:
Let's say I have a service called billing and it has two event handlers, OrderPlaced and OrderReturned. Now let's say many OrderPlaced events are dispatched by some other service and the OrderPlaced handler has a long processing time (e.g. I/O). New OrderPlaced events are arriving faster than they can be processed. Now an OrderReturned event comes in. It will take an unnecessarily long time to reach the front of the queue and be processed, and perhaps because of that it will block other workflows. Now consider that the OrderReturned handler fails, which puts the event back at the end of the queue. Additionally, the queue can become unnecessarily large; RabbitMQ is also easier to shard across many queues than one large queue.

With a queue per handler per service for events, I don't see any drawbacks, only benefits. Perhaps you could give me an example where event order (not command order) does matter?


adenhertog commented on August 16, 2024

Thanks for the example. I totally agree with what you said here:

"A big drawback of having one queue per service is that it blocks everything unrelated in the same queue"

How you decide to group handlers into services really depends on your application. NServiceBus recommends that the fewer message handlers per service, the better.

Personally, I've found it useful to have a dedicated service & queue just for workflow orchestration. This avoids the issue of message backlogs delaying "next steps". Beyond that, I might start with one service per domain and if a message type is causing delays then it can be shaved off into a dedicated service and scaled independently.

If you find that a queue-per-handler setup with multiple handlers per service is best for you, you should be able to start multiple instances of the bus, each with a single handler. I haven't personally done this, but I imagine it should be fine all things considered.
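As a rough approximation of "multiple bus instances, each with a single handler" at the transport level, here's a sketch with plain amqplib; the bus library's actual configuration API isn't shown, and the queue names are made up for illustration.

```typescript
// Sketch: one independent consumer (queue + channel) per handler in one process.
import * as amqp from 'amqplib'

type Handler = { queue: string; handle: (message: unknown) => Promise<void> }

const handlers: Handler[] = [
  { queue: 'billing.order-placed', handle: async m => console.log('order placed', m) },
  { queue: 'billing.order-returned', handle: async m => console.log('order returned', m) },
]

async function main() {
  const connection = await amqp.connect('amqp://localhost')

  // Each handler gets its own queue, channel and prefetch, so a backlog on
  // one handler's queue never delays the other handler.
  for (const { queue, handle } of handlers) {
    const channel = await connection.createChannel()
    await channel.assertQueue(queue, { durable: true })
    await channel.prefetch(1)
    await channel.consume(queue, async raw => {
      if (!raw) return
      try {
        await handle(JSON.parse(raw.content.toString()))
        channel.ack(raw)
      } catch {
        channel.nack(raw, false, true) // requeue on failure
      }
    })
  }
}

main().catch(console.error)
```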


ilijaNL commented on August 16, 2024

Thanks for the reply! Yes, starting multiple instances is a possibility; I didn't think about that, thanks! Speaking of NServiceBus, is the exchange and queue setup comparable to NServiceBus's RabbitMQ transport implementation?


adenhertog commented on August 16, 2024

It's the same fanout pub/sub model as in NServiceBus, though from memory their implementation makes use of a database to polyfill some RabbitMQ limitations, like retry backoffs, that this library doesn't yet have.
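For context, retry backoff on RabbitMQ is commonly approximated with a parking queue that combines a message TTL with a dead-letter exchange. The sketch below shows that generic pattern only; it is not NServiceBus's or this library's implementation, and all names are illustrative.

```typescript
// Sketch: delayed retries via a TTL + dead-letter "parking" queue.
import * as amqp from 'amqplib'

async function setupDelayedRetry() {
  const connection = await amqp.connect('amqp://localhost')
  const channel = await connection.createChannel()

  // The work queue the service actually consumes from.
  await channel.assertQueue('billing.order-placed', { durable: true })

  // A parking queue with no consumers: messages sit here until their TTL
  // expires, then are dead-lettered back to the work queue.
  await channel.assertQueue('billing.order-placed.retry-10s', {
    durable: true,
    arguments: {
      'x-message-ttl': 10_000,                      // hold for 10 seconds
      'x-dead-letter-exchange': '',                 // default exchange
      'x-dead-letter-routing-key': 'billing.order-placed',
    },
  })

  return { connection, channel }
}

// On handler failure, publish the message to the retry queue instead of
// nack-requeueing it, tracking the attempt count in a header.
function retryLater(channel: amqp.Channel, msg: amqp.ConsumeMessage) {
  const attempts = Number(msg.properties.headers?.['x-attempts'] ?? 0) + 1
  channel.sendToQueue('billing.order-placed.retry-10s', msg.content, {
    persistent: true,
    headers: { ...(msg.properties.headers ?? {}), 'x-attempts': attempts },
  })
  channel.ack(msg)
}
```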

