
Comments (6)

LucasAlegre commented on May 30, 2024

Hi,

> Hey,
>
> When I run a3c_4x4grid.py, I get the error "AttributeError: 'TrafficLightDomain' object has no attribute 'getAllProgramLogics'".

Can you check whether sumo and sumo-tools are updated to the latest version (1.10)?

> Can you please tell me how to make it work? Once this error is fixed, I would like to run a map imported from OpenStreetMap. Will it work if I just replace the net file?

Yes, the only restriction is that the traffic signals in your .net file must have phases of the form:

green_phase   (e.g. GGrr)
yellow_phase, (e.g. yyrr)
...,
green_phase, 
yellow_phase

That is, it currently does not support all-red phases or phase configurations without yellow_phases.
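If you are unsure whether a net follows this pattern, a quick sanity check is to walk over its <tlLogic> elements. The sketch below is not part of sumo-rl; it only uses the Python standard library and the standard SUMO state letters (G/g for green, y for yellow, r for red), and "my_net.net.xml" is a placeholder file name:

import xml.etree.ElementTree as ET

def check_tls_phases(net_file):
    """Report whether each <tlLogic> alternates green and yellow phases."""
    tree = ET.parse(net_file)
    for tl in tree.getroot().iter("tlLogic"):
        states = [phase.get("state") for phase in tl.iter("phase")]
        ok = (
            len(states) % 2 == 0
            # even slots must be green phases (some green, no yellow)
            and all(("G" in s or "g" in s) and "y" not in s for s in states[0::2])
            # odd slots must be the corresponding yellow phases
            and all("y" in s for s in states[1::2])
        )
        print(tl.get("id"), "OK" if ok else "does NOT match green/yellow alternation")

check_tls_phases("my_net.net.xml")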


ryryryy commented on May 30, 2024

The SUMO I had installed wasn't the latest version, so I upgraded it. Thanks!

My traffic lights are defined as follows, but when I run it I get the error below.

Traffic signal definitions:
<tlLogic id="cluster_1069973586_1069974226" type="static" programID="0" offset="0">
        <phase duration="40" state="GGGrrrGGG"/>
        <phase duration="5"  state="yyyrrryyy"/>
        <phase duration="40" state="rrrGGGGrr"/>
        <phase duration="5"  state="rrryyyyrr"/>
    </tlLogic>
    <tlLogic id="cluster_1437173920_345567619" type="static" programID="0" offset="0">
        <phase duration="33" state="GGGGGGrrrrr"/>
        <phase duration="6"  state="yyyyyyrrrrr"/>
        <phase duration="6"  state="rrrrGGGGrrr"/>
        <phase duration="6"  state="rrrryyyyrrr"/>
        <phase duration="33" state="GrrrrrrrGGG"/>
        <phase duration="6"  state="yrrrrrrryyy"/>
    </tlLogic>
    <tlLogic id="cluster_2925460530_2925460531" type="static" programID="0" offset="0">
        <phase duration="33" state="GGrrrrrGGGG"/>
        <phase duration="6"  state="yyrrrrryyyy"/>
        <phase duration="6"  state="GGGGrrrrrrr"/>
        <phase duration="6"  state="yyyyrrrrrrr"/>
        <phase duration="33" state="rrrrGGGGrrr"/>
        <phase duration="6"  state="rrrryyyyrrr"/>
    </tlLogic>
    <tlLogic id="cluster_2925460919_2925460921_7005430268_7005430269" type="static" programID="0" offset="0">
        <phase duration="33" state="GGGrrrrrGGGrrrrr"/>
        <phase duration="6"  state="yyyrrrrryyyrrrrr"/>
        <phase duration="6"  state="rrrGrrrrrrrGGrrr"/>
        <phase duration="6"  state="rrryrrrrrrryyrrr"/>
        <phase duration="33" state="rrrrGGGGrrrrrGGG"/>
        <phase duration="6"  state="rrrryyyyrrrrryyy"/>
    </tlLogic>
    <tlLogic id="cluster_2926983387_2926983388_2926984764" type="static" programID="0" offset="0">
        <phase duration="26" state="GGGGrrrrrrrr"/>
        <phase duration="4"  state="yyyyrrrrrrrr"/>
        <phase duration="26" state="rrrrGGGGGrrr"/>
        <phase duration="4"  state="rrrryyyyyrrr"/>
        <phase duration="26" state="GrrrrrrrGGGG"/>
        <phase duration="4"  state="yrrrrrrryyyy"/>
    </tlLogic>
    <tlLogic id="cluster_3191640061_384349108_384349109" type="static" programID="0" offset="0">
        <phase duration="39" state="GGG"/>
        <phase duration="6"  state="yyy"/>
        <phase duration="39" state="GGr"/>
        <phase duration="6"  state="yyr"/>
    </tlLogic>
    <tlLogic id="cluster_343364695_345567614" type="static" programID="0" offset="0">
        <phase duration="33" state="GGGrrrrrrGGGrrrrrr"/>
        <phase duration="6"  state="yyyrrrrrryyyrrrrrr"/>
        <phase duration="6"  state="rrrGGrrrrrrrGGrrrr"/>
        <phase duration="6"  state="rrryyrrrrrrryyrrrr"/>
        <phase duration="33" state="rrrrrGGGGrrrrrGGGG"/>
        <phase duration="6"  state="rrrrryyyyrrrrryyyy"/>
    </tlLogic>
    <tlLogic id="cluster_345565514_345567635" type="static" programID="0" offset="0">
        <phase duration="33" state="GGGGGGrrrrr"/>
        <phase duration="6"  state="yyyyyyrrrrr"/>
        <phase duration="6"  state="rrrrGGGGrrr"/>
        <phase duration="6"  state="rrrryyyyrrr"/>
        <phase duration="33" state="GrrrrrrrGGG"/>
        <phase duration="6"  state="yrrrrrrryyy"/>
    </tlLogic>

Error:

ValueError: ('Observation ({}) outside given space ({})!', array([1., 0., 0., 0., 0., 0., 0., 0.]), Box(0.0, 1.0, (20,), float32))


LucasAlegre commented on May 30, 2024

Hi, in your case you have traffic lights with different phase configurations (and therefore different observation/action spaces), so you need a different RLlib policy for each of them.

You can do something like:

# imports assuming ray[rllib] 1.x, where the old ray.rllib.agents API is available
from ray.rllib.agents.a3c.a3c import A3CTrainer
from ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy
from ray.rllib.env import PettingZooEnv
from ray.tune.registry import register_env
import sumo_rl

env = PettingZooEnv(sumo_rl.env(net_file='nets/4x4-Lucas/4x4.net.xml',
                                route_file='nets/4x4-Lucas/4x4c1c2c1c2.rou.xml',
                                out_csv_name='outputs/4x4grid/a3c',
                                use_gui=False,
                                num_seconds=80000,
                                max_depart_delay=0))
register_env("4x4grid", lambda _: PettingZooEnv(sumo_rl.env(net_file='nets/4x4-Lucas/4x4.net.xml',
                                                            route_file='nets/4x4-Lucas/4x4c1c2c1c2.rou.xml',
                                                            out_csv_name='outputs/4x4grid/a3c',
                                                            use_gui=False,
                                                            num_seconds=80000,
                                                            max_depart_delay=0)))
trainer = A3CTrainer(env="4x4grid", config={
    "multiagent": {
        "policies": {
            id: (A3CTFPolicy, env.observation_spaces[id], env.action_spaces[id], {}) for id in env.agents
        },
        "policy_mapping_fn": (lambda id: id)  # each traffic light is controlled by its own policy
    },
    "lr": 0.001,
    "no_done_at_end": True
})


ryryryy commented on May 30, 2024

I tried your advice and got the following error.
How can I fix this?

env = PettingZooEnv(sumo_rl.env(net_file='nets/OpenStreetMap/osm.net.xml',
                                route_file='nets/OpenStreetMap/osm.rou.xml',
                                out_csv_name='outputs/OpenStreetMap/osm',
                                use_gui=False,
                                num_seconds=80000,
                                max_depart_delay=0))
register_env("4x4grid", lambda _: PettingZooEnv(sumo_rl.env(net_file='nets/OpenStreetMap/osm.net.xml',
                                                            route_file='nets/OpenStreetMap/osm.rou.xml',
                                                            out_csv_name='outputs/OpenStreetMap/osm',
                                                            use_gui=False,
                                                            num_seconds=80000,
                                                            max_depart_delay=0)))

trainer = A3CTrainer(env="4x4grid", config={
    "multiagent": {
        "policies": {
            id: (A3CTFPolicy, env.observation_spaces[id], env.action_spaces[id], {}) for id in env.agents
        },
        "policy_mapping_fn": (lambda id: id)  # each traffic light is controlled by its own policy
    },
    "lr": 0.001,
    "no_done_at_end": True
})
while True:
    print(trainer.train())  # distributed training step

Error:

AssertionError: Observation spaces for all agents must be identical. Perhaps SuperSuit's pad_observations wrapper can help (useage: supersuit.aec_wrappers.pad_observations(env))
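For reference, a minimal sketch of how the padding wrappers suggested by that message could be applied before handing the env to RLlib. This assumes a newer SuperSuit release, where the wrappers are exposed as supersuit.pad_observations_v0 and supersuit.pad_action_space_v0 (older versions used the supersuit.aec_wrappers module named in the message); the file paths are just the placeholders from the snippet above:

import supersuit as ss
import sumo_rl
from ray.rllib.env import PettingZooEnv
from ray.tune.registry import register_env

def make_env(_):
    env = sumo_rl.env(net_file='nets/OpenStreetMap/osm.net.xml',
                      route_file='nets/OpenStreetMap/osm.rou.xml',
                      use_gui=False,
                      num_seconds=80000)
    # Pad every agent's observation and action space up to the largest one,
    # so the "identical spaces" assertion in PettingZooEnv is satisfied.
    env = ss.pad_observations_v0(env)
    env = ss.pad_action_space_v0(env)
    return PettingZooEnv(env)

register_env("osm", make_env)

Note that padding makes every agent's observation and action space identical, so it is usually combined with a single shared policy rather than the per-agent policies configured above.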


LucasAlegre commented on May 30, 2024

Closing as these are outdated.


JenniferHahn commented on May 30, 2024

I wanted to open this thread again since I am facing similar issues. I am trying to run multi-agent RL with a parallel PettingZoo env, which I am wrapping with SuperSuit (which pads every agent to the largest observation space). However, I would actually like to have a different policy for every agent, since the action and observation spaces are initially different:

register_env(env_name, lambda config: ParallelPettingZooEnv(env_creator(config)))
env = PettingZooEnv(env_creator({}))
...
Within the config I would like to set:

.multiagent(
    policies={id: (PPOTF1Policy, env.observation_spaces[id], env.action_spaces[id], {}) for id in env.agents},
    policy_mapping_fn=(lambda id: id)
)

Is there any way to achieve both - use the (wrapped) environment but implement different policies?

