
Comments (12)

LucasAlegre avatar LucasAlegre commented on May 29, 2024 1

  • Hey, it's been a week so I'm just following up on this :)

Hey, I have just added an API to instantiate a few environments in the file https://github.com/LucasAlegre/sumo-rl/blob/master/sumo_rl/environment/resco_envs.py !

from sumo-rl.

LucasAlegre avatar LucasAlegre commented on May 29, 2024

Hi,

I'm glad that you are interested in using sumo-rl!
Sure, I could definitely do that. Do you have anything specific in mind? Maybe describe the default definition of states and rewards?
Notice that SumoEnvironment is generic and can be instantiated with any .net and .rou SUMO files. Also, you can visualize the networks directly on SUMO.


jkterry1 avatar jkterry1 commented on May 29, 2024

"Maybe describe the default definition of states and rewards?"
That, plus the action and observation spaces and images of what each looks like, would work, ya :)


LucasAlegre avatar LucasAlegre commented on May 29, 2024

I just updated the readme with the basic definitions, but I plan to add more details later!


jkterry1 avatar jkterry1 commented on May 29, 2024

Hey, I just sat down and looked at this. I'm someone who's fairly experienced in RL (and I wanted to use these environments as part of a set of many to test a general MARL algorithm I've been working on), but I'm not very experienced with traffic control/SUMO, so I have a few questions after reading:

-What does phase_one_hot mean?
-What does lane_1_queue mean?
-What does green phase mean?
-Could you please document the action space too?
-Could you elaborate a bit on why that specific reward function is the default? Is that the standard in the literature?
-Also, your new links to TrafficSignal are dead


LucasAlegre avatar LucasAlegre commented on May 29, 2024

Hey, I believe I have answered these questions in this commit f0b387f. (Also fixed the dead links)

Regarding the reward function, there is not really a standard in the literature.
Change in delay/waiting time is what has worked best in my experience. I can point you to some papers that use this reward:

  • Genders, W., Razavi, S. (2018). Evaluating reinforcement learning state representations for adaptive traffic signal control. Procedia Computer Science 130:26–33.
  • Alegre, L. N., Bazzan, A. L. C., da Silva, B. C. (2021). Quantifying the Impact of Non-Stationarity in Reinforcement Learning-Based Traffic Signal Control. PeerJ Computer Science.
  • Alegre, L. N., Ziemke, T., Bazzan, A. L. C. (2021). Using Reinforcement Learning to Control Traffic Signals in a Real-World Scenario: An Approach Based on Linear Function Approximation. IEEE Transactions on Intelligent Transportation Systems. doi: 10.1109/TITS.2021.3091014.

I have seen many papers using Pressure as reward (but I didn't get better results with this):

  • Hua Wei, Chacha Chen, Guanjie Zheng, Kan Wu, Vikash Gayah, Kai Xu, and Zhenhui Li. 2019. PressLight: Learning Max Pressure Control to Coordinate Traffic Signals in Arterial Network. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19). Association for Computing Machinery, New York, NY, USA, 1290–1298. DOI:https://doi.org/10.1145/3292500.3330949
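The "change in delay/waiting time" reward described above reduces to a one-line computation. A minimal sketch with illustrative names (not SUMO-RL's actual API; the real environment accumulates waiting times per lane via TraCI):

```python
def diff_waiting_time_reward(prev_total_wait, current_total_wait):
    """Reward = decrease in cumulative vehicle waiting time since the
    previous action. Positive when the chosen phase reduced queues,
    negative when they grew."""
    return prev_total_wait - current_total_wait

# Total waiting time dropped from 120 s to 90 s -> reward +30
assert diff_waiting_time_reward(120.0, 90.0) == 30.0
# Queues grew from 90 s to 140 s -> reward -50
assert diff_waiting_time_reward(90.0, 140.0) == -50.0
```

One plausible reason this tends to train more stably than raw delay: using the change keeps the signal roughly centered around zero and credits the most recent action rather than the whole accumulated history.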


jkterry1 avatar jkterry1 commented on May 29, 2024

Hey thanks a ton for that!

A few more questions:

  • You have a sentence "Obs: Every time a phase change occurs, the next phase is preeceded by a yellow phase lasting yellow_time seconds.". Either that's in the wrong section or I'm very confused.
  • I'm sure this is simply due to my unfamiliarity, but what's a "green phase"?
  • Would you also be willing to also clarify what the different built in nets are like in the readme? That'd also be super helpful


LucasAlegre avatar LucasAlegre commented on May 29, 2024


  • You have a sentence "Obs: Every time a phase change occurs, the next phase is preeceded by a yellow phase lasting yellow_time seconds.". Either that's in the wrong section or I'm very confused.

Oops, that "Obs:" should read "Ps:" :P It means that when your action changes the phase, the env inserts a yellow phase before actually setting the phase selected by the agent's action.
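That phase-change rule (a yellow interlude lasting yellow_time before the newly selected green phase takes effect) can be sketched as a tiny state machine. Class and attribute names here are illustrative, not the actual TrafficSignal implementation:

```python
class PhaseController:
    """Toy model of the rule: switching to a different green phase first
    passes through a yellow phase lasting yellow_time steps."""

    def __init__(self, yellow_time=3):
        self.yellow_time = yellow_time
        self.current = 0       # index of the active green phase
        self.yellow_left = 0   # remaining yellow steps (0 = green showing)
        self.pending = None    # green phase queued behind the yellow

    def act(self, next_phase):
        # A new phase request triggers a yellow interlude; re-selecting
        # the current phase simply keeps it green.
        if next_phase != self.current and self.yellow_left == 0:
            self.pending = next_phase
            self.yellow_left = self.yellow_time

    def step(self):
        if self.yellow_left > 0:
            self.yellow_left -= 1
            return "yellow"
        if self.pending is not None:
            self.current = self.pending
            self.pending = None
        return f"green_{self.current}"

ctrl = PhaseController(yellow_time=2)
ctrl.act(1)                      # agent selects phase 1
assert ctrl.step() == "yellow"   # yellow interlude, step 1
assert ctrl.step() == "yellow"   # yellow interlude, step 2
assert ctrl.step() == "green_1"  # yellow elapsed, new phase applied
```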

  • I'm sure this is simply due to my unfamiliarity, but what's a "green phase"?

The nomenclature for traffic signal control can be a bit confusing. By green phase I mean a phase configuration presenting green (permissive) movements. The 4 actions in the readme are examples of 4 green phases.
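Since a green phase is just one of those permissive configurations, the phase_one_hot observation component can be reconstructed as a plain one-hot vector over the green phases. A hedged sketch, not SUMO-RL's actual code:

```python
def phase_one_hot(current_green_phase, num_green_phases):
    """One-hot encoding of the active green phase, as it appears at the
    start of the observation vector (illustrative reconstruction)."""
    return [1 if i == current_green_phase else 0
            for i in range(num_green_phases)]

# With the 4 green phases of the readme's single-intersection example,
# phase 2 currently active:
assert phase_one_hot(2, 4) == [0, 0, 1, 0]
```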

  • Would you also be willing to also clarify what the different built in nets are like in the readme? That'd also be super helpful

Sure! I also intend to add more networks to the repository.


ahphan avatar ahphan commented on May 29, 2024

Hello,

I am really new to SUMO, but is there a way to deploy a trained agent with sumo-gui?

I was able to run the example experiments/ql_2way-single-intersection.py and plot the results. Then I tried to run "python experiments/ql_2way-single-intersection.py -gui", which provided visualizations in sumo-gui, but the terminal window wasn't updating the step number (usually increments to 100,000) so I'm not sure if this is actually training and visualizing at the same time.

In summary, I would like to know if I can save the trained agent, deploy it in an environment, and visualize it in sumo-gui. Also, when I use the "-gui" argument, is this still training the agent as it normally would if I ran "python experiments/ql_2way-single-intersection.py", but it just doesn't update the step number?

I really appreciate your contributions, thank you!


LucasAlegre avatar LucasAlegre commented on May 29, 2024

  • I am really new to SUMO, but is there a way to deploy a trained agent with sumo-gui?

  • In summary, I would like to know if I can save the trained agent, deploy it in an environment, and visualize it in sumo-gui. Also, when I use the "-gui" argument, is this still training the agent as it normally would, but just not updating the step number?

Hi,

Using -gui only activates the SUMO GUI, there is no effect on the training procedure.
Notice that training is part of the algorithm (not the environment), so you can use any algorithm you want, save the model, and then run it again with sumo-gui to visualize it.
In the ql example I did not implement a method to save the agent q-tables, but that should be easy to do.
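For anyone who wants that, pickling the tables could look like this. This is a hedged sketch: it assumes each agent exposes a plain-dict q_table attribute, which may not match the actual agent class in the ql example:

```python
import pickle

def save_q_tables(agents, path="q_tables.pkl"):
    """Pickle every agent's Q-table, keyed by traffic-signal id.
    (Assumes agent.q_table is a plain dict mapping states to
    action-value lists -- an illustrative assumption.)"""
    tables = {ts_id: agent.q_table for ts_id, agent in agents.items()}
    with open(path, "wb") as f:
        pickle.dump(tables, f)

def load_q_tables(agents, path="q_tables.pkl"):
    """Restore previously pickled Q-tables into existing agents."""
    with open(path, "rb") as f:
        tables = pickle.load(f)
    for ts_id, table in tables.items():
        agents[ts_id].q_table = table
```

Typical flow: train, call save_q_tables, then re-create the agents with exploration disabled, call load_q_tables, and rerun with -gui to watch the greedy policy.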


LucasAlegre avatar LucasAlegre commented on May 29, 2024

@jkterry1 I just added network and route files from RESCO (check the readme). Basically, RESCO is a set of benchmarks for traffic signal control that was built on top of SUMO-RL. In their paper you can find results for different algorithms.
Later this week I'll try to add more documentation and examples for these networks.


jkterry1 avatar jkterry1 commented on May 29, 2024

Hey, it's been a week so I'm just following up on this :)

