
Comments (22)

wtgee commented on May 18, 2024

Do you have some example code that demonstrates the problem and/or solution? Are you using the threaded version of the FSM or just the regular one?

from transitions.

khigia commented on May 18, 2024

I am using the regular version (I believe the locked version would block).

Here is an example of code triggering this behaviour:

import logging
import transitions
transitions.logger.setLevel(logging.INFO)
logging.basicConfig()

class Model(object):

    def on_enter_B(self):
        print('begin enter B', self.state)
        self.e1()  # trigger the next transition from within this callback
        print('end enter B', self.state)


m = Model()

fsm = transitions.Machine(
    model=m,
    states=['A', 'B', 'C'],
    initial='A',
    transitions=[
        {'trigger': 'e0', 'source': 'A', 'dest': 'B'},
        {'trigger': 'e1', 'source': 'B', 'dest': 'C'},
    ],
)

m.e0()

And here is the output:

INFO:transitions.core:Initiating transition from state A to state B...
INFO:transitions.core:Exited state A
('begin enter B', 'B')
INFO:transitions.core:Initiating transition from state B to state C...
INFO:transitions.core:Exited state B
INFO:transitions.core:Entered state C
('end enter B', 'C')
INFO:transitions.core:Entered state B

As we can see, the state changes during the execution of on_enter_B. The logging also becomes rather confusing: 'Exited state B' is logged before 'Entered state B', and by the time that last line appears the state is already C. With more transition callbacks (before/after), this would probably be even harder to handle.

I have not implemented a solution, but I believe that queueing calls to the trigger method in transitions/core.py would solve the issue (by queueing I mean delaying the processing; this can be done on the same thread).
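The queueing idea can be illustrated with a minimal, self-contained sketch (hypothetical names, not the transitions implementation): nested trigger calls are appended to a FIFO queue and processed only after the current transition has completed, all on the same thread.

```python
from collections import deque

class TinyMachine:
    """Toy dispatcher illustrating same-thread event queueing."""

    def __init__(self):
        self._queue = deque()
        self._processing = False
        self.log = []

    def trigger(self, name):
        self._queue.append(name)
        if self._processing:
            return              # a transition is running: leave it queued
        self._processing = True
        try:
            while self._queue:
                self._run(self._queue.popleft())
        finally:
            self._processing = False

    def _run(self, name):
        self.log.append('begin ' + name)
        if name == 'e0':
            self.trigger('e1')  # nested trigger: deferred, not run inline
        self.log.append('end ' + name)

m = TinyMachine()
m.trigger('e0')
# m.log is ['begin e0', 'end e0', 'begin e1', 'end e1']:
# e1 runs only after e0 has fully completed
```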


tyarkoni commented on May 18, 2024

I agree that the ability to trigger a state change from within a callback is a point of confusion (see, e.g., #30), but I'm not sure it's something we want to explicitly either support or actively prevent. As a general rule, my feeling is that users shouldn't be putting transition logic inside their model callbacks; I have a hard time thinking of a case where one would need to do that. In your example, you could just call e0() followed by e1() from the main thread, and everything would work fine. So I guess the path of least resistance would be to maybe just add a note in the docs saying that users should be aware that triggering transitions from within callbacks is likely to lead to unpleasantness.

That said, I would potentially be okay adding queueing functionality if (a) you can point out a common use case where one would need to do this kind of thing and (b) the proposed implementation was relatively simple. Alternatively, if there are no obvious use cases, I would be equally comfortable raising an Exception when a user tries to trigger from within a callback (again, as long as the implementation is simple). If the complexity of a solution is high either way, my inclination would be to leave it as-is.

Thoughts?


aleneum commented on May 18, 2024

That's more a matter of "best practices", imho. A library like transitions should enable people to do things, not dictate how to do them. I agree that this use case looks weird, but after some thought I guess there are scenarios where you want some sort of pass-through states.

But that is just not the way it should be done (again, imho). Transitions offers after callbacks, and that is where self.e1() (in this example) belongs. It also eases code comprehension, because the transition logic stays on the surface rather than being spread into the state logic.

A queue might increase code complexity, and maybe someone out there is already bending transition logic the current way because of ... reasons. In the end it's 'just' the log that looks weird; model.state will return the right state.

I am using the regular version (I believe the locked version would block).

Good point. I haven't tried it out myself, but my guess is it should not block. RLocks just prevent other threads from executing machine methods. As long as you do not start spawning threads in on_enter and waiting for them to finish inside on_enter, you should be fine.

@khigia, in your example the state change always happens immediately. As mentioned earlier I'd suggest after in this case. If you were referring to a case as described in #30, the locked machine might be what you are looking for. You can start a worker thread in on_enter and even if it triggers the change immediately, it will be blocked until e0 is done.

If you want to avoid threading (which in my experience is never a bad thing :)), you can make the queue idea part of your model with minor overhead:

class Model:
    def __init__(self):
        self.queue = []
        # 'flush' runs after every state change and drains the queue
        self.machine = Machine(self, ..., after_state_change='flush')

    def on_enter_B(self):
        if condition:
            self.queue.append(self.e1)  # defer the trigger

    def flush(self):
        while len(self.queue) > 0:
            self.queue.pop(0)()  # fire queued triggers in FIFO order
...

model = Model()
...
model.e0()


wtgee commented on May 18, 2024

@khigia the other common idiom would be to have your state logic return the name of the next transition (or state), so that you are still making decisions within the state logic but only calling the actual transition after the state completes, as @aleneum suggests.
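That idiom could look something like this (a hypothetical sketch, not the transitions API): the callback only names the next trigger, and a small driver fires it once the current transition has finished.

```python
# Toy event -> destination-state table for the sketch
DEST = {'e0': 'B', 'e1': 'C'}

class Model:
    def on_enter_B(self):
        # decision logic lives in the state callback...
        return 'e1'           # ...but it only *names* the next trigger

    def on_enter_C(self):
        return None           # terminal state: nothing more to fire

def run(model, event):
    """Stand-in driver: fire one event at a time, each after the
    previous transition has fully completed."""
    visited = []
    while event is not None:
        state = DEST[event]
        visited.append(state)
        event = getattr(model, 'on_enter_' + state)()
    return visited

print(run(Model(), 'e0'))   # ['B', 'C']
```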

My feeling is we don't need to do anything with this at the moment. @khigia, let us know if that doesn't help; otherwise let's close.


khigia commented on May 18, 2024

Thanks everyone for your comments.

I think my example is a bit contrived, as I only wanted to highlight the behaviour. A more realistic use case is when on_enter_B triggers an asynchronous call which, once executed, can trigger an FSM event. As far as my model is concerned, it cannot assume when the async call will be done (it might well execute right away, e.g. if the async call is not really async because its result is cached). This use case exists even in a single-threaded context.

With this async call in mind, I believe the state-logic-returns-next approach proposed by @wtgee doesn't apply. Using the after callback as @aleneum suggested also doesn't solve the situation.

Supporting the use case, or preventing it, both seem like better ideas to me than doing nothing (less surprise for the user). Preventing seems simple: roughly, keep an executing flag in the machine, raise an exception in trigger if the flag is already set, and otherwise set/reset the flag around trigger execution. I believe queueing to support the use case is not much more involved: instead of a flag, keep a list of trigger calls; instead of raising, enqueue the current call; and instead of resetting the flag, execute the next queued call.
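The "preventing" option could be sketched roughly like this (hypothetical code, not the transitions implementation): an executing flag set around trigger processing, with a nested call raising instead of silently re-entering.

```python
class GuardedMachine:
    """Toy machine that rejects triggers fired from within a callback."""

    def __init__(self):
        self._executing = False

    def trigger(self, callback):
        if self._executing:
            raise RuntimeError('trigger called from within a callback')
        self._executing = True
        try:
            callback()              # run the transition's callbacks
        finally:
            self._executing = False

m = GuardedMachine()
m.trigger(lambda: None)             # top-level trigger: fine
try:
    # a callback that re-enters trigger: raises RuntimeError
    m.trigger(lambda: m.trigger(lambda: None))
except RuntimeError as exc:
    print('rejected:', exc)
```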

But first, does the async call use case make sense?


wtgee commented on May 18, 2024

If your async call is supposed to trigger an event, then you should also be able to wait or join from the point where it is called, which means you could return the name of the transition event rather than calling it from inside the async call.

To a large extent, this overlaps with the questions about a built-in event loop for this library. So far we have avoided that, but we have pointed people in the right direction of how to implement one. Perhaps we could add some example docs showing different ways of implementing an event loop. @khigia, are you just looking for a way to run things asynchronously in an event loop of some sort, or do you have more specific requirements?


khigia commented on May 18, 2024

I like the fact that the transitions lib does not have its own event loop. This allows users to embed it in many different contexts.

In my particular use case, I use it inside a code base which is all async (with futures). There is no way I can wait or join inside any function (so not inside a state function either). I could attach a continuation function, but this defeats the purpose of the FSM. The event could happen at any time, and I am using the FSM to track when it happens. As an example, let's say I trigger two async calls with timeouts; I want to use the FSM to keep track (via state) of what is happening (both succeed, one finishes first, one fails, etc.) and use FSM callbacks to trigger different behaviours for all combinations of possible events. If I knew the event ordering and could wait, I would not really need an FSM.

I think it is best for me to create a branch for this, so the discussion will be more concrete. I really appreciate all your comments!


khigia commented on May 18, 2024

Note that the proposed pull request changes behaviour: sending an event to the FSM cannot have a return value (since it can be queued). I believe this is ok, as an FSM is meant to handle behaviour with callbacks and should refrain from giving direct control to external interaction. However, I see lots of configurations (e.g. ignore_invalid_triggers raising an exception) where my assumption does not hold.


khigia commented on May 18, 2024

May I ask if there is any update on this?


aleneum commented on May 18, 2024

Sorry for the long delay. Unfortunately, I am still not sold on the queue idea in core. I still think this could be solved either by a locked machine or by an event queue within the model. Can you provide a minimal example of why both approaches do not work for you?

Remark: I do like the way the queue is implemented but I am just not convinced this has to be done in core.


khigia commented on May 18, 2024

I understand your concern; it does make the code more complicated.

But allow me to try to explain why I still think it is a good thing to have in core. Then you can decide one way or the other.

First, why the proposed solutions are not ideal:

  • the locked machine, in an event-based system, does not solve the issue: if the lock blocks the call, the event system dies; if the caller needs to use a thread, the event system is not single-threaded anymore, and this propagates the use of locks back everywhere; if the lock is reentrant, then it doesn't delay the event.
  • a queue in the model is nice, but it forces the caller to know when to send an event and when to queue it; this breaks the FSM abstraction, since the caller needs to understand the state. Imagine that on_enter_A triggers an async call; when this call completes, it triggers an event in the state machine. Since it is async, I don't know when the result will be available, so I need a way to decide whether the event should be queued or processed immediately. That function would need to understand the state processing of the FSM, which is why I did this in the core implementation, making it the default for all events.

Second, why it seems to me it should be in core.
An FSM is supposed to abstract state management, so I find it very surprising to run 'after_state_A' after 'enter_state_B' has run; I consider this "breaking the FSM contract". Imagine a case where the FSM processes multiple events: on_enter_A triggers an event, which moves the FSM to B, which triggers an event that moves the FSM to C, and so on up to the FSM's final state... and only then does after_state_A get triggered. Isn't it very difficult for developers to have to know that the FSM could be in any state when their 'after_state_A' function is triggered? From a conceptual point of view, I consider that the FSM has failed to help the developer manage state.

Again, just trying to explain my thoughts, no bad feeling if we close this pull request :).


aleneum commented on May 18, 2024

Again, just trying to explain my thoughts, no bad feeling if we close this pull request :).

Don't worry. I really appreciate your feedback and think you have a valid point. I am sold on the idea of having such a feature; I am just trying to figure out what the needs are and how to keep the impact on core as minimal as possible. Or let's say: to keep core as comprehensible as it is right now.

I do not want to give up the 'sync' behaviour, since having transitions return their success state is quite useful in some cases. Also, is_state becomes a bit squishy, because you never know if there are more transitions in the queue. We use transitions in an event-based architecture, and having instant feedback on whether a transition worked is quite crucial.

Nevertheless, having the option to process transitions asynchronously is a very neat extension, and I will create a dev-async branch to include your suggestions in core.


aleneum commented on May 18, 2024

@khigia, could you issue a pull request for your changes into dev-async?


aleneum commented on May 18, 2024

Hi @khigia,

please have a look at dev-0.4. I integrated your approach and also added a test_async. Asynchronous processing can now be enabled with Machine(..., async=True). What do you think?


khigia commented on May 18, 2024

Looks great, thanks! In fact I was almost going to write similar machine config to handle both cases :).

Only one small detail: instead of deciding early whether to call trigger_async or trigger_sync depending on the machine.async config, the transition could always call the machine's _process_async; in that function, depending on async, the machine could either process the transition (queue is empty), enqueue it (queue not empty and async is true), or raise (something is running and async is false). This would also add detection of suspect transitions in the non-async case.
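That three-way rule could be sketched in plain Python like this (hypothetical names, not the actual dev-0.4 code): process when idle, enqueue when busy in async mode, raise when busy in sync mode.

```python
from collections import deque

class Dispatcher:
    """Toy version of a single machine method deciding how to handle a trigger."""

    def __init__(self, queued):
        self.queued = queued        # async/queued mode on or off
        self._queue = deque()

    def process(self, event):
        if self._queue:             # a transition is already running
            if not self.queued:
                raise RuntimeError('re-entrant trigger in sync mode')
            self._queue.append(event)   # queued mode: defer
            return
        self._queue.append(event)
        while self._queue:
            self._queue[0]()        # may enqueue further events
            self._queue.popleft()

order = []
d = Dispatcher(queued=True)
def e1():
    order.append('e1')
def e0():
    order.append('e0 start')
    d.process(e1)                   # nested: deferred until e0 is done
    order.append('e0 end')
d.process(e0)
print(order)                        # ['e0 start', 'e0 end', 'e1']
```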

Anyway, this fulfils my use case very well!


aleneum commented on May 18, 2024

Yeah, good call. It's better to keep events agnostic and bundle all (a)sync logic in one Machine method. This also allows switching the mode of operation at runtime.


tyarkoni commented on May 18, 2024

Is there anything left to do here that isn't in #96, @khigia? If not, can we close this?


khigia commented on May 18, 2024

Yes, this can be closed; the complete functionality is in #96.


enjoysmath commented on May 18, 2024

The obvious thing I notice when I see an event called inside a callback is how useful I'm finding it (I'm using it pretty much everywhere to have a self-driving machine). But then I wonder: is this going to hit some recursion limit? Can someone (@khigia?) comment on this?


aleneum commented on May 18, 2024

Hello @enjoysmath,

if your callbacks regularly look like this:

#...

    def on_enter_A(self):
        # processing
        self.next_trigger()

I'd suggest using queued transitions by initialising the machine with Machine(..., queued=True, ...). This way, triggers can return immediately and you won't face recursion issues in the long run.

Since this issue was closed 8 years ago and most people here have moved on and probably cannot reliably recall their use case, I'd suggest opening a new discussion next time. You can still mention (@) people in the question if you are looking for a particular person's advice.


enjoysmath commented on May 18, 2024

