
CybORG's Introduction

Copyright DST Group. Licensed under the MIT license.

Cyber Operations Research Gym (CybORG)

A cyber security research environment for the training and development of human and autonomous security agents. It provides a common interface for both emulated network environments, using cloud-based virtual machines, and simulated network environments.

Installation

Install CybORG locally using pip from the main directory that contains this readme:

pip install -e .

Creating the environment

Create a CybORG environment with the DroneSwarm Scenario that is used for CAGE Challenge 3:

from CybORG import CybORG
from CybORG.Simulator.Scenarios.DroneSwarmScenarioGenerator import DroneSwarmScenarioGenerator

sg = DroneSwarmScenarioGenerator()
cyborg = CybORG(sg, 'sim')

The default_red_agent parameter of the DroneSwarmScenarioGenerator allows you to alter the red agent's behaviour. Here is an example of a red agent that randomly selects a drone to exploit and seize control of:

from CybORG import CybORG
from CybORG.Simulator.Scenarios.DroneSwarmScenarioGenerator import DroneSwarmScenarioGenerator
from CybORG.Agents.SimpleAgents.DroneRedAgent import DroneRedAgent

red_agent = DroneRedAgent
sg = DroneSwarmScenarioGenerator(default_red_agent=red_agent)
cyborg = CybORG(sg, 'sim')

Wrappers

Wrappers are available to alter the interface with CybORG.

  • OpenAIGymWrapper - alters the interface to conform to the OpenAI Gym specification. Requires the observation to be changed into a fixed size array.
  • FixedFlatWrapper - converts the observation from a dictionary format into a fixed-size 1-dimensional vector of floats.
  • PettingZooParallelWrapper - alters the interface to conform to the PettingZoo Parallel specification.

How to Use

OpenAI Gym Wrapper

The OpenAI Gym Wrapper allows interaction with a single external agent. The name of that external agent must be specified at the creation of the OpenAI Gym Wrapper.

from CybORG import CybORG
from CybORG.Simulator.Scenarios.DroneSwarmScenarioGenerator import DroneSwarmScenarioGenerator
from CybORG.Agents.Wrappers.OpenAIGymWrapper import OpenAIGymWrapper
from CybORG.Agents.Wrappers.FixedFlatWrapper import FixedFlatWrapper

sg = DroneSwarmScenarioGenerator()
cyborg = CybORG(sg, 'sim')
agent_name = 'blue_agent_0'
open_ai_wrapped_cyborg = OpenAIGymWrapper(agent_name=agent_name, env=FixedFlatWrapper(cyborg))
observation, reward, done, info = open_ai_wrapped_cyborg.step(0)

PettingZoo Parallel Wrapper

The PettingZoo Parallel Wrapper allows multiple agents to interact with the environment simultaneously.

from CybORG import CybORG
from CybORG.Simulator.Scenarios.DroneSwarmScenarioGenerator import DroneSwarmScenarioGenerator
from CybORG.Agents.Wrappers.PettingZooParallelWrapper import PettingZooParallelWrapper

sg = DroneSwarmScenarioGenerator()
cyborg = CybORG(sg, 'sim')
pettingzoo_wrapped_cyborg = PettingZooParallelWrapper(cyborg)
observations, rewards, dones, infos = pettingzoo_wrapped_cyborg.step({'blue_agent_0': 0, 'blue_agent_1': 0})

Ray/RLLib wrapper

from CybORG import CybORG
from CybORG.Simulator.Scenarios.DroneSwarmScenarioGenerator import DroneSwarmScenarioGenerator
from CybORG.Agents.Wrappers.PettingZooParallelWrapper import PettingZooParallelWrapper
from ray.rllib.env import ParallelPettingZooEnv
from ray.tune import register_env

def env_creator_CC3(env_config: dict):
    sg = DroneSwarmScenarioGenerator()
    cyborg = CybORG(scenario_generator=sg, environment='sim')
    env = ParallelPettingZooEnv(PettingZooParallelWrapper(env=cyborg))
    return env

register_env(name="CC3", env_creator=env_creator_CC3)
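
The registered environment can then be trained with RLlib's PPO. A minimal sketch, assuming the older ray.rllib.agents API that appears elsewhere in this repository (newer Ray versions moved this under ray.rllib.algorithms):

from ray.rllib.agents import ppo

# Train PPO on the environment registered above under the name "CC3"
config = ppo.DEFAULT_CONFIG.copy()
agent = ppo.PPOTrainer(config=config, env="CC3")
results = agent.train()  # one training iteration
print(results["episode_reward_mean"])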

Evaluating agent performance

To evaluate an agent's performance please use the evaluation script and the submission file.

Please see the submission instructions for further information on submission and evaluation of agents.

Additional Readings

For further guidance on the CybORG environment, please refer to the tutorial notebook series. For information on the CAGE challenges, please refer to the following pages: CAGE Challenge 1, CAGE Challenge 2, CAGE Challenge 3, and CAGE Challenge 4.

Citing this project

@misc{cage_cyborg_2022, 
  Title = {Cyber Operations Research Gym}, 
  Note = {Created by Maxwell Standen, David Bowman, Son Hoang, Toby Richer, Martin Lucas, Richard Van Tassel, Phillip Vu, Mitchell Kiely, KC C., Natalie Konschnik, Joshua Collyer}, 
  Publisher = {GitHub}, 
  Howpublished = {\url{https://github.com/cage-challenge/CybORG}}, 
  Year = {2022} 
}


CybORG's Issues

Inconsistent session info between Hosts and State

Certain action sequences produce inconsistencies between the sessions recorded by the Hosts and State objects. Using a small 3-drone environment with max_length_data_links=300 as an example, a sequence that reproduces the inconsistency is:

ActivateTrojan('Red_Trojan', 'drone_0') -> ActivateTrojan('Red_Trojan', 'drone_1') -> ExploitDroneVulnerability(0, 'red_agent_1', IPv4Address('<ip address of drone_0>')) -> SeizeControl(IPv4Address('<ip address of drone_0>'), 0, 'red_agent_1')

This produces a host.sessions dictionary for drone_0 as:

{'red_agent_0': [0, 0], 'blue_agent_0': [0], 'green_agent_0': [0], 'red_agent_1': [], ... }

Notice that there are 2 sessions for red_agent_0 on drone_0 with the same session identifier, while state.sessions shows:

{'red_agent_0': {0: ...}, ..., 'red_agent_1': {0: ...},  ...}

The key problem arises when a successful RetakeControl action is executed on drone_0. This results in the following host.sessions dictionary for drone_0:

{'red_agent_0': [0], 'blue_agent_0': [0], 'green_agent_0': [0], 'red_agent_1': [], ... }

but state.sessions shows:

{'red_agent_0': {}, ..., 'red_agent_1': {0: ...},  ...}

This is an inconsistency: the sessions in the state show that red_agent_0 no longer has any sessions, but the host object for drone_0 shows that red_agent_0 still has a session on it.

Code snippet to reproduce the issue:

from CybORG import CybORG
from CybORG.Simulator.Scenarios.DroneSwarmScenarioGenerator import DroneSwarmScenarioGenerator
from CybORG.Simulator.Actions.ConcreteActions.ActivateTrojan import ActivateTrojan
from CybORG.Simulator.Actions.ConcreteActions.EscalateActions.SeizeControl import SeizeControl
from CybORG.Simulator.Actions.ConcreteActions.ExploitActions.ExploitDroneVulnerability import ExploitDroneVulnerability
from CybORG.Simulator.Actions.ConcreteActions.ExploitActions.RetakeControl import RetakeControl
from ipaddress import IPv4Address

sg = DroneSwarmScenarioGenerator(num_drones=3, max_length_data_links=300)
cyborg = CybORG(scenario_generator=sg, environment='sim')
cyborg.reset()

ip_map = cyborg.get_ip_map()

state = cyborg.environment_controller.state

actions = [
    ActivateTrojan('Red_Trojan', 'drone_0'),
    ActivateTrojan('Red_Trojan', 'drone_1'),
    ExploitDroneVulnerability(0, 'red_agent_1', ip_map['drone_0']),
    SeizeControl(ip_map['drone_0'], 0, 'red_agent_1'),
    RetakeControl(0, 'blue_agent_2', ip_map['drone_0'])
]

for a in actions:
    a.execute(state)

host_sessions = {hostname: host.sessions for hostname, host in cyborg.environment_controller.state.hosts.items()}
state_sessions = cyborg.environment_controller.state.sessions

for drone, sessions in host_sessions.items():
    print(drone, sessions)
print()
print({agent_name: {hostname: session.hostname for hostname, session in sessions.items()} for agent_name, sessions in state_sessions.items()})

Malicious network events flagged for target drone instead of originating drone

It appears that malicious network events are being flagged for the target drone of an action, rather than the originating drone it was sent from.

Could you confirm that this is intended? It seems counterintuitive and contradicts the documentation ("the number of malicious network events from drone i"). Thanks!

Example:
This appears to be due to what is considered the local_address versus the remote_address.

As an example with FloodBandwidth, below is what is recorded when it is run from drone_12 against drone_2.

  • hostname - drone_12
  • self.ip_address - 10.0.78.141, which is the IP address of drone_2
  • route - ['drone_12', 'drone_9', 'drone_4', 'drone_6', 'drone_2']
                event = {
                    'local_address': self.ip_address,
                    'remote_port': 8888,
                    'remote_address': {h_name: ip_addr for ip_addr, h_name in state.ip_addresses.items()}[hostname]
                }
  • output flagged event to host.events['NetworkConnections'] - [{'local_address': IPv4Address('10.0.78.141'), 'remote_port': 8888, 'remote_address': IPv4Address('10.0.78.142')}]
                # add flagged messages
                for i, ip in enumerate(self.ip_addresses):
                    new_obs[index + i] = 1 if ip in [network_conn['local_address']
                                                     for interface in obs[own_host_name]['Interface']
                                                     if 'NetworkConnections' in interface
                                                     for network_conn in interface['NetworkConnections']] \
                        else 0
                index += len(self.ip_addresses)
  • local_address is then used to identify the drone when adding flagged messages in PettingZooParallelWrapper.py, so the target drone is flagged rather than the originating drone (a sketch of a possible fix follows).
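
A minimal sketch of one possible fix, swapping which address is recorded as local; the variable names follow the excerpt above, and this is an assumption about the intended semantics, not the actual implementation:

# Hypothetical sketch: record the originating drone as local_address
# and the target as remote_address (the reverse of the excerpt above).
event = {
    'local_address': {h_name: ip_addr for ip_addr, h_name in state.ip_addresses.items()}[hostname],  # originating drone
    'remote_port': 8888,
    'remote_address': self.ip_address,  # target drone
}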

Installation Test

Testing with pytest after installing cage-challenge-3.

pytest cage-challenge-3/CybORG/CybORG/Tests/test_sim
Platform: Ubuntu 22.04, Python 3.9.16, in a conda virtual environment.

The test results are as follows:
================ 77 failed, 390 passed, 104 skipped, 250486 warnings, 1886 errors in 496.15s (0:08:16) ================

Most of the failures are TypeErrors from mismatched incoming parameters.

Could it be that these tests were written for cage-challenge-1 and were never adapted to cage-challenge-3?

If you need more detailed information about the error, please leave a message and I will try to provide it.

Communication wrappers not working

When using a communication wrapper (ActionComms/ObsComms), both fail to initialise. They call

super(ObsCommsPettingZooParallelWrapper, self).__init__(env, max_steps)

however, the __init__ function of their parent class (PettingZooParallelWrapper) takes only env as an argument (two possible fixes are sketched below).
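
Two possible fixes, as minimal sketches (assuming nothing else about the wrapper internals): either stop passing the extra argument, or extend the parent signature.

# Option 1: drop max_steps from the subclass's super() call
super(ObsCommsPettingZooParallelWrapper, self).__init__(env)

# Option 2: let the parent accept the extra argument
# def __init__(self, env, max_steps=None):
#     ...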

Missing rewards when all drones compromised

In evaluation.py, the calculation of rewards appears to be broken when the episode ends before the full 500 steps (to my understanding, when all drones have been compromised).

After the break, the rewards for the steps the episode actually ran are added to total_reward via sum(r), but no (negative) reward is appended to r for the remaining time steps (one possible fix is sketched below).
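
A minimal sketch of one possible fix; the names r, max_steps and step_penalty are assumptions for illustration, not the actual evaluation.py variables:

# Hypothetical sketch: pad the truncated episode with a per-step penalty
# so that early termination does not inflate the score.
if len(r) < max_steps:
    r.extend([step_penalty] * (max_steps - len(r)))
total_reward.append(sum(r))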

Bug in observation vector

The observation vector only ever returns 0 for the component of the observation space which indicates if a new session was created on drone d.

I believe this bug occurs where the observation vector is created, in PettingZooParallelWrapper's observation_change method. From my understanding, the flag for a new session being created on drone d would be set if 'Sessions' in obs[hostname] ever evaluated to True. However, the key 'Sessions' is never added to the observation dictionary, so this statement is never True.

Actions not converted correctly using PettingZooParallelWrapper

Using PettingZooParallelWrapper, actions are not converted from (gym) integers into CybORG actions, so all actions end up as InvalidAction (since int isn't a valid action class). This could be fixed with some extra logic in the wrapper that instantiates the CybORG actions before calling self.env.parallel_step, as sketched below.
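
A minimal sketch of that extra logic; treating self.possible_actions as a per-agent list mapping integers to action objects is an assumption about the wrapper's internals:

# Hypothetical sketch: translate gym integers into CybORG action objects
# before handing them to the underlying environment.
def _convert_action(self, agent, action):
    if isinstance(action, int):
        return self.possible_actions[agent][action]
    return action

actions = {agent: self._convert_action(agent, act) for agent, act in actions.items()}
return self.env.parallel_step(actions)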

Main README: Evaluating agent performance

Regarding the evaluation script, the main README suggests:

The agent under evaluation is defined on line 35.
To evaluate an agent, extend the BaseAgent.
We have included the RandomAgent as an example of an agent that performs random actions.

# Change this line to load your agent
agents = {agent: RandomAgent() for agent in wrapped_cyborg.possible_agents}

But line 35 is not that, and the evaluation script does not seem to include an example of loading an agent.

"IndexError: list index out of range" caused by `_process_priv_esc`

I am trying to train and evaluate a PPO agent based on stable_baselines3 for Scenario2 running CybORG 2.1.

Note: for testing purposes I use a reduced scenario file with only a red agent, in a flat network configuration, without the Sleep and Impact actions:

Agents:
  Red:
    AllowedSubnets:
      - Flat
    INT:
      Hosts:
        User0:
          Interfaces: All
          System info: All
    actions:
    - DiscoverRemoteSystems
    - DiscoverNetworkServices
    - ExploitRemoteService
    - BlueKeep
    - EternalBlue
    - FTPDirectoryTraversal
    - HarakaRCE
    - HTTPRFI
    - HTTPSRFI
    - SQLInjection
    - PrivilegeEscalate
    - SSHBruteForce
    agent_type: SleepAgent
    reward_calculator_type: HybridImpactPwn
    starting_sessions:
    - hostname: User0
      name: RedPhish
      type: RedAbstractSession
      username: SYSTEM
    wrappers: []
Hosts:
  User0:
    AWS_Info: []
    image: windows_user_host1
    info:
      User0:
        Interfaces: All
      User1:
        Interfaces: IP Address
      User2:
        Interfaces: IP Address
      User3:
        Interfaces: IP Address
      User4:
        Interfaces: IP Address
      Enterprise0:
        Interfaces: IP Address
      Op_Server0:
        Interfaces: IP Address
    ConfidentialityValue: None
    AvailabilityValue: None
  User1:
    AWS_Info: []
    image: windows_user_host1
    info:
      User0:
        Interfaces: IP Address
      User1:
        Interfaces: All
      User2:
        Interfaces: IP Address
      User3:
        Interfaces: IP Address
      User4:
        Interfaces: IP Address
      Enterprise0:
        Interfaces: IP Address
      Op_Server0:
        Interfaces: IP Address
    ConfidentialityValue: Low
    AvailabilityValue: None
  User2:
    AWS_Info: []
    image: windows_user_host2
    info:
      User0:
        Interfaces: IP Address
      User1:
        Interfaces: IP Address
      User2:
        Interfaces: All
      User3:
        Interfaces: IP Address
      User4:
        Interfaces: IP Address
      Enterprise0:
        Interfaces: IP Address
      Op_Server0:
        Interfaces: IP Address
    ConfidentialityValue: Low
    AvailabilityValue: None
  User3:
    AWS_Info: []
    image: linux_user_host1
    info:
      User0:
        Interfaces: IP Address
      User1:
        Interfaces: IP Address
      User2:
        Interfaces: IP Address
      User3:
        Interfaces: All
      User4:
        Interfaces: IP Address
      Enterprise0:
        Interfaces: IP Address
      Op_Server0:
        Interfaces: IP Address
    ConfidentialityValue: Low
    AvailabilityValue: None
  User4:
    AWS_Info: []
    image: linux_user_host2
    info:
      User0:
        Interfaces: IP Address
      User1:
        Interfaces: IP Address
      User2:
        Interfaces: IP Address
      User3:
        Interfaces: IP Address
      User4:
        Interfaces: All
      Enterprise0:
        Interfaces: IP Address
      Op_Server0:
        Interfaces: IP Address
    ConfidentialityValue: Low
    AvailabilityValue: None
  Enterprise0:
    AWS_Info: []
    image: Gateway
    info:
      User0:
        Interfaces: IP Address
      User1:
        Interfaces: IP Address
      User2:
        Interfaces: IP Address
      User3:
        Interfaces: IP Address
      User4:
        Interfaces: IP Address
      Enterprise0:
        Interfaces: All
      Op_Server0:
        Interfaces: IP Address
    ConfidentialityValue: Medium
    AvailabilityValue: Medium
  Op_Server0:
    AWS_Info: []
    image: OP_Server
    info:
      User0:
        Interfaces: IP Address
      User1:
        Interfaces: IP Address
      User2:
        Interfaces: IP Address
      User3:
        Interfaces: IP Address
      User4:
        Interfaces: IP Address
      Enterprise0:
        Interfaces: IP Address
      Op_Server0:
        Interfaces: All
        Services:
        - OTService
    ConfidentialityValue: High
    AvailabilityValue: High
Subnets:
  Flat:
    Hosts:
    - User0
    - User1
    - User2
    - User3
    - User4
    - Enterprise0
    - Op_Server0
    NACLs:
      all:
        in: all
        out: all
    Size: 7

Currently, I am receiving the following error:

Traceback (most recent call last):
  File "/usr/lib64/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib64/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "[...]/cage-challenge-2/venv-3.10/lib64/python3.10/site-packages/stable_baselines3/common/vec_env/subproc_vec_env.py", line 30, in _worker
    observation, reward, done, info = env.step(data)
  File "[...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 27, in step
    result = self.env.step(self.agent_name, action)
  File "[...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/EnumActionWrapper.py", line 20, in step
    return super().step(agent, action)
  File "[...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/BaseWrapper.py", line 16, in step
    result = self.env.step(agent, action)
  File "[...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/BaseWrapper.py", line 17, in step
    result.observation = self.observation_change(result.observation)
  File "[...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/RedTableWrapper.py", line 48, in observation_change
    self._update_red_info(observation)
  File "[...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/RedTableWrapper.py", line 85, in _update_red_info
    self._process_priv_esc(obs, hostname)
  File "[...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/RedTableWrapper.py", line 132, in _process_priv_esc
    [info for info in self.red_info.values() if info[2] == hostname][0][4] = 'None'
IndexError: list index out of range

The problem seems to stem from the list comprehension inside the _process_priv_esc() function:

[info for info in self.red_info.values() if info[2] == hostname][0][4] = 'None'

After adding an additional check on the list length, the agent model learns without errors:

def _process_priv_esc(self, obs, hostname):
    if obs['success'] == False:
        red_info = [info for info in self.red_info.values() if info[2] == hostname]
        if len(red_info) > 0:
            red_info[0][4] = 'None'
[...]

Training and evaluating PPO agent

I am trying to train and evaluate a PPO agent based on stable_baselines3 for Scenario2 running CybORG 2.1.

For training I use the following code snippet:

import inspect

from CybORG import CybORG
from CybORG.Agents.SimpleAgents.RedPPOAgent import RedPPOAgent
from CybORG.Agents.Wrappers import ChallengeWrapper

from stable_baselines3.common.callbacks import CheckpointCallback

path = str(inspect.getfile(CybORG))
path = path[:-10] + '/Shared/Scenarios/Scenario2.yaml'
env = ChallengeWrapper(env=CybORG(path, 'sim'), agent_name='Red')

# Save a checkpoint every 500 steps
checkpoint_callback = CheckpointCallback(
    save_freq=500,
    save_path="models/Scenario2/checkpoints/",
    name_prefix="red_ppo_agent",
    save_replay_buffer=True,
    save_vecnormalize=True,
)

agent = RedPPOAgent(env)
agent.model.learn(total_timesteps=10000, callback=checkpoint_callback)
agent.model.save("models/Scenario2/red_ppo_agent")

The RedPPOAgent class is implemented like this:

from stable_baselines3 import PPO

from CybORG import CybORG
from CybORG.Agents.SimpleAgents.BaseAgent import BaseAgent

class RedPPOAgent(BaseAgent):
    def train(self, results):
        #self.model.learn(total_timesteps=25000)
        pass

    def end_episode(self):
        pass

    def set_initial_values(self, action_space, observation):
        pass

    def __init__(self, env, model_file: str = None):
        if model_file is not None:
            self.model = PPO.load(model_file)
        else:
            self.model = PPO('MlpPolicy', env, verbose=1, n_steps=100, tensorboard_log="logs")

    def get_action(self, observation, action_space):
        """gets an action from the agent that should be performed based on the agent's internal state and provided observation and action space"""            
        action, _states = self.model.predict(observation)
        return action

After that I evaluate the model with:

MAX_STEPS = 50

agent = RedPPOAgent(env, "models/Scenario2/red_ppo_agent")
observation = env.reset()
action_space = env.get_action_space()

total_reward = 0
for step in range(MAX_STEPS):
    action = agent.get_action(observation,action_space)
    next_observation, reward, done, info = env.step(action=action)
    action_space = info.get('action_space')
    total_reward += reward
    observation = next_observation

    print(info['action'])
    
    if done or step == MAX_STEPS - 1:
        print(f"Training reward: {total_reward}")
        break

Unfortunately I receive a lot of InvalidActions:

InvalidAction
[...]
InvalidAction
ExploitRemoteService 10.0.150.169
InvalidAction
[...]
InvalidAction
ExploitRemoteService 10.0.150.169
ExploitRemoteService 10.0.150.169
InvalidAction
[...]
InvalidAction

Is there something I am missing?

Renderer missing

The renderer code used for the CybORG object in demo.py is missing.

Rewards for last drone seem broken

The last agent/drone in the simulation seems to have a very broken reward signal: it quickly converges to -1000 over the course of training. Here is an example of the rewards from one training run:

First log:
    policy_reward_max:
      blue_agent_0: 0.0
      blue_agent_1: -44.0
    policy_reward_mean:
      blue_agent_0: -12.0
      blue_agent_1: -56.0
    policy_reward_min:
      blue_agent_0: -36.0
      blue_agent_1: -78.0
Second log:
    policy_reward_max:
      blue_agent_0: 0.0
      blue_agent_1: -34.0
    policy_reward_mean:
      blue_agent_0: -13.777777777777779
      blue_agent_1: -125.0
    policy_reward_min:
      blue_agent_0: -36.0
      blue_agent_1: -444.0
Third log:
    policy_reward_max:
      blue_agent_0: 0.0
      blue_agent_1: -4.0
    policy_reward_mean:
      blue_agent_0: -21.428571428571427
      blue_agent_1: -681.625
    policy_reward_min:
      blue_agent_0: -68.0
      blue_agent_1: -1000.0

I tested this with 2 and 3 drones, and it always seems to affect the last agent. Any ideas?

AllowTraffic incorrect behaviour

In the AllowTraffic action in ControlTraffic.py, if there is an attempt to allow traffic from a target IP but the host does not currently block any IPs (including the target), then the target IP is blocked instead. I believe this is incorrect behaviour? If so, I think a fix would be to remove lines 45 and 46 of ControlTraffic.py (see the sketch below). Many thanks.
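
A minimal sketch of the suggested behaviour; host.blocked_ips is an assumed attribute name for illustration:

# Hypothetical sketch: only unblock an IP that is actually blocked,
# and never add a block as a side effect of AllowTraffic.
if target_ip in host.blocked_ips:
    host.blocked_ips.remove(target_ip)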

Pettingzoo wrapper for observation change broken

There are a few issues with the observation_change() function in the PettingZooParallelWrapper.py file.

  1. When adding the blocked IPs to the observation (new_obs[index + i] = 1 if ... on line 228), the if statement checks for the ip_addresses in a list; however, the list comprehension returns a nested list (a list containing a list of IP addresses). The if statement only checks the elements of the outer list and thus never finds an IP address, so blocked IPs never show up in an agent's observation.

  2. Adding flagged malicious processes to the observation rewrites over the first n_drones elements of the observation tensor, as the indexing on line 239 is incorrect. Instead of new_obs[i] = 1 if ..., it should read new_obs[index + i].

  3. The conditional for checking whether there are flagged processes also fails in the same way as point (1): a list of lists rather than a list of IP addresses.

  4. Issue (2) was hiding a problem with the success value on line 222, new_obs[index] = obs['success'].value. obs['success'].value can return values in the range [0-3], which violates the observation space constraints (value too high) checked on line 275 (assert self._observation_spaces[agent].contains(...)). This was hidden because issues (2) and (3) always wrote a 0 at index 0, so the value was never too high for that index of the observation space.

I've got some code to fix issues 1-3 that I can make a PR for, though I'm unsure how you'd like to fix issue 4, and my fixes for 1-3 will mean the function always fails the assertion on line 275. A sketch of the fixes for issues (1) and (2) follows.
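
A minimal sketch of the fixes for issues (1) and (2); blocked_lists stands in for the list-of-lists the current comprehension produces:

# Hypothetical sketch: flatten the nested list before the membership test
# (issue 1) and offset the index correctly (issue 2).
blocked = [ip for ips in blocked_lists for ip in ips]
for i, ip in enumerate(self.ip_addresses):
    new_obs[index + i] = 1 if ip in blocked else 0
index += len(self.ip_addresses)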

Support for other `gym` versions

This package requires gym==0.23.1, influencing which third-party RL libraries are compatible.

Is there potential support for the latest gym version, v0.26, or better still the new gymnasium package that is replacing gym?

Also, the gym v0.23 API is not too dissimilar from the v0.21 API. Could that be supported as well? Pre-v0.22 gym is still commonplace and is required by the popular Stable Baselines 3.

Thanks for the info!

State of emulator.

The paper that presented CybORG states that it contains both a simulator and an emulator, a statement that has been propagated in related work, such as this recent paper. To my knowledge, the emulator has not been publicly released.

According to the authors of this paper, written correspondence with the CybORG developers indicates that development of the emulator is inactive or abandoned.

Could someone confirm this claim?

Problem with render

When I run the code, the line stop = cyborg.render() raises NotImplementedError("Rendering functionality is not currently available") and I don't know why. Maybe I don't have a suitable gym version? Could you tell me which versions of gym and Python you use? Thank you very much. My Python version is 3.8.10 and my gym version is 0.23.1.

Potential bug in agent actions

Sometimes, it looks like both red and blue agents are taking actions on the same drones.

E.g., after 48 time steps in one of my episodes, drones 1, 8, and 14 all have both red and blue agents executing actions on them.

{key: value for key, value in wrapped_cyborg.env.environment_controller.action.items() if key.split("_")[0] == "blue" or (key.split("_")[0] == "red" and str(value) not in ["InvalidAction", "Sleep"])}

{'blue_agent_0': BlockTraffic,
 'blue_agent_2': RemoveOtherSessions,
 'blue_agent_3': RetakeControl,
 'blue_agent_4': RetakeControl,
 'blue_agent_5': AllowTraffic,
 'blue_agent_6': AllowTraffic,
 'blue_agent_7': RetakeControl,
 'blue_agent_9': RetakeControl,
 'blue_agent_10': BlockTraffic,
 'blue_agent_11': RetakeControl,
 'blue_agent_12': RemoveOtherSessions,
 'blue_agent_13': BlockTraffic,
 'blue_agent_14': RetakeControl,
 'blue_agent_15': BlockTraffic,
 'blue_agent_16': AllowTraffic,
 'blue_agent_17': RetakeControl,
 'red_agent_1': BlockTraffic,
 'blue_agent_1': Sleep,
 'red_agent_8': BlockTraffic,
 'blue_agent_8': Sleep,
 'red_agent_14': FloodBandwidth}

In another episode, after 186 time steps, on drone 3, red agent is flooding bandwidth and blue agent is blocking traffic.

{key: value for key, value in wrapped_cyborg.env.environment_controller.action.items() if key.split("_")[0] == "blue" or (key.split("_")[0] == "red" and str(value) not in ["InvalidAction", "Sleep"])}

{'blue_agent_0': RetakeControl,
 'blue_agent_1': BlockTraffic,
 'blue_agent_2': AllowTraffic,
 'blue_agent_3': BlockTraffic,
 'blue_agent_5': RetakeControl,
 'blue_agent_6': BlockTraffic,
 'blue_agent_8': AllowTraffic,
 'blue_agent_9': RetakeControl,
 'blue_agent_10': BlockTraffic,
 'blue_agent_11': AllowTraffic,
 'blue_agent_12': BlockTraffic,
 'blue_agent_13': AllowTraffic,
 'blue_agent_14': AllowTraffic,
 'blue_agent_15': BlockTraffic,
 'blue_agent_17': AllowTraffic,
 'red_agent_3': FloodBandwidth,
 'red_agent_4': FloodBandwidth,
 'blue_agent_4': Sleep,
 'blue_agent_7': Sleep,
 'red_agent_16': FloodBandwidth,
 'blue_agent_16': Sleep}

Evaluate agent in emulation mode

If I understand correctly, the scenarios in which the agents train are simulated environments, either specified through yaml files or generated through scripts.

Is it possible to evaluate a trained agent in a virtualized emulation environment?

RLLIBWrapper not working

RLLIBWrapper.py contains path and import issues. When these are resolved, the wrapper still yields an assertion error:

Traceback (most recent call last):
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 418, in setup
    self.workers = WorkerSet(
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/evaluation/worker_set.py", line 125, in __init__
    self.add_workers(
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/evaluation/worker_set.py", line 269, in add_workers
    self.foreach_worker(lambda w: w.assert_healthy())
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/evaluation/worker_set.py", line 391, in foreach_worker
    remote_results = ray.get([w.apply.remote(func) for w in self.remote_workers()])
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/_private/worker.py", line 2277, in get
    raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=15343, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fbcc4ecb8b0>)
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/evaluation/rollout_worker.py", line 490, in __init__
    self.env = env_creator(copy.deepcopy(self.env_context))
  File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/RLLIB_training.py", line 33, in env_creator
    env = RLlibWrapper(env=cyborg, agent_name="Blue", max_steps=100)
  File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/ChallengeWrapper.py", line 19, in __init__
    env = OpenAIGymWrapper(agent_name=agent_name, env=env)
  File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 19, in __init__
    if isinstance(self.get_action_space(self.agent_name), list):
  File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 101, in get_action_space
    return self.action_space_change(self.env.get_action_space(agent))
  File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 115, in action_space_change
    assert type(action_space) is dict, \
AssertionError: Wrapper required a dictionary action space. Please check that the wrappers below return the action space as a dict

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/RLLIB_training.py", line 47, in <module>
    agent = ppo.PPOTrainer(config=config, env="CybORG")
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 308, in __init__
    super().__init__(config=config, logger_creator=logger_creator, **kwargs)
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/tune/trainable/trainable.py", line 157, in __init__
    self.setup(copy.deepcopy(self.config))
  File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 443, in setup
    raise e.args[0].args[2]
AssertionError: Wrapper required a dictionary action space. Please check that the wrappers below return the action space as a dict
(RolloutWorker pid=15343) /usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/gym/utils/seeding.py:47: DeprecationWarning: WARN: Function `rng.randint(low, [high, size, dtype])` is marked as deprecated and will be removed in the future. Please use `rng.integers(low, [high, size, dtype])` instead.
(RolloutWorker pid=15343)   deprecation(
(RolloutWorker pid=15343) 2022-10-12 18:45:56,027	ERROR worker.py:756 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=15343, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fbcc4ecb8b0>)
(RolloutWorker pid=15343)   File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/evaluation/rollout_worker.py", line 490, in __init__
(RolloutWorker pid=15343)     self.env = env_creator(copy.deepcopy(self.env_context))
(RolloutWorker pid=15343)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/RLLIB_training.py", line 33, in env_creator
(RolloutWorker pid=15343)     env = RLlibWrapper(env=cyborg, agent_name="Blue", max_steps=100)
(RolloutWorker pid=15343)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/ChallengeWrapper.py", line 19, in __init__
(RolloutWorker pid=15343)     env = OpenAIGymWrapper(agent_name=agent_name, env=env)
(RolloutWorker pid=15343)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 19, in __init__
(RolloutWorker pid=15343)     if isinstance(self.get_action_space(self.agent_name), list):
(RolloutWorker pid=15343)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 101, in get_action_space
(RolloutWorker pid=15343)     return self.action_space_change(self.env.get_action_space(agent))
(RolloutWorker pid=15343)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 115, in action_space_change
(RolloutWorker pid=15343)     assert type(action_space) is dict, \
(RolloutWorker pid=15343) AssertionError: Wrapper required a dictionary action space. Please check that the wrappers below return the action space as a dict
(RolloutWorker pid=15342) /usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/gym/utils/seeding.py:47: DeprecationWarning: WARN: Function `rng.randint(low, [high, size, dtype])` is marked as deprecated and will be removed in the future. Please use `rng.integers(low, [high, size, dtype])` instead.
(RolloutWorker pid=15342)   deprecation(
(RolloutWorker pid=15342) 2022-10-12 18:45:56,027	ERROR worker.py:756 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=15342, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fbea06ef880>)
(RolloutWorker pid=15342)   File "/usr/local/anaconda3/envs/rl_lib/lib/python3.10/site-packages/ray/rllib/evaluation/rollout_worker.py", line 490, in __init__
(RolloutWorker pid=15342)     self.env = env_creator(copy.deepcopy(self.env_context))
(RolloutWorker pid=15342)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/RLLIB_training.py", line 33, in env_creator
(RolloutWorker pid=15342)     env = RLlibWrapper(env=cyborg, agent_name="Blue", max_steps=100)
(RolloutWorker pid=15342)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/ChallengeWrapper.py", line 19, in __init__
(RolloutWorker pid=15342)     env = OpenAIGymWrapper(agent_name=agent_name, env=env)
(RolloutWorker pid=15342)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 19, in __init__
(RolloutWorker pid=15342)     if isinstance(self.get_action_space(self.agent_name), list):
(RolloutWorker pid=15342)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 101, in get_action_space
(RolloutWorker pid=15342)     return self.action_space_change(self.env.get_action_space(agent))
(RolloutWorker pid=15342)   File "/Users/4lk/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 115, in action_space_change
(RolloutWorker pid=15342)     assert type(action_space) is dict, \
(RolloutWorker pid=15342) AssertionError: Wrapper required a dictionary action space. Please check that the wrappers below return the action space as a dict

Output of the diff command showing the path and import changes: diff Wrappers/RLLIBWrapper.py RLLIBWrapper_altered.py

6,7c6,9
< from CybORG.agents import B_lineAgent, GreenAgent
< from CybORG.agents.wrappers import ChallengeWrapper
---
> from CybORG.Agents import B_lineAgent, GreenAgent
> from CybORG.Agents.Wrappers import ChallengeWrapper
> from CybORG.Simulator.Scenarios.FileReaderScenarioGenerator import FileReaderScenarioGenerator
>
27c29
<     path = path[:-7] + f'/Shared/Scenarios/Scenario1b.yaml'
---
>     path = path[:-7] + f'/Simulator/Scenarios/scenario_files/scenario1b.yaml'
29,30c31,33
<     cyborg = CybORG(scenario_file=path, environment='sim', agents=agents)
<     env = RLlibWrapper(env=cyborg, agent_name="Blue,", max_steps=100)
---
>     sg = FileReaderScenarioGenerator(path)
>     cyborg = CybORG(scenario_generator=sg)
>     env = RLlibWrapper(env=cyborg, agent_name="Blue", max_steps=100)

RedDroneWormAgent incorrect behaviour

In the RedDroneWormAgent's get_action function, behaviour 3 is supposed to exploit neighbouring drones, but it instead attempts to flood drones that are as far away as possible (as in behaviour 1).

DroneTrojanAgent does not ActivateTrojan on Drone 17

In CybORG.Agents.SimpleAgents.DroneTrojanAgent, on lines 20-21:

if self.np_random.random() < self.spawn_rate:
    return ActivateTrojan(hostname=f'drone_{self.np_random.randint(0, self.num_drones-1)}', agent=self.name)

randint is exclusive of the high parameter, so the trojan is only ever activated on drones 0 to self.num_drones - 2. In other words, in the challenge scenario, drone 17 never gets infected.

The fix is to select a target using randint(0, self.num_drones), as shown below.
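
Applied to the snippet above, the corrected call would read:

if self.np_random.random() < self.spawn_rate:
    return ActivateTrojan(hostname=f'drone_{self.np_random.randint(0, self.num_drones)}', agent=self.name)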

AssertionError in OpenAIGymWrapper

MWE (This is just the code from README.md):

from CybORG import CybORG
from CybORG.Agents import RedMeanderAgent, B_lineAgent, SleepAgent
from CybORG.Agents.Wrappers.OpenAIGymWrapper import OpenAIGymWrapper
from CybORG.Agents.Wrappers.PettingZooParallelWrapper import PettingZooParallelWrapper
from CybORG.Simulator.Scenarios.DroneSwarmScenarioGenerator import DroneSwarmScenarioGenerator

sg = DroneSwarmScenarioGenerator()
cyborg = CybORG(sg, 'sim')

# Petting Zoo Parallel Wrapper
open_ai_wrapped_cyborg = PettingZooParallelWrapper(cyborg)
observations, rewards, dones, infos = open_ai_wrapped_cyborg.step({'blue_agent_0': 0, 'blue_agent_1': 0})

# OpenAI Gym Wrapper
agent_name = 'blue_agent_0'
open_ai_wrapped_cyborg = OpenAIGymWrapper(cyborg, agent_name)
observation, reward, done, info = open_ai_wrapped_cyborg.step(0)

Generates:

Traceback (most recent call last):
  File "/Users/user/Documents/Development/CAGE/cage-challenge-3/agents/baseline/baseline.py", line 17, in <module>
    open_ai_wrapped_cyborg = OpenAIGymWrapper(cyborg, agent_name)
  File "/Users/user/Documents/Development/CybORG/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/OpenAIGymWrapper.py", line 19, in __init__
    assert isinstance(self.get_action_space(self.agent_name), int)
AssertionError

It seems self.get_action_space(self.agent_name) is a dict, e.g.:

print(self.get_action_space(self.agent_name))

{'action': {<class 'CybORG.Simulator.Actions.ConcreteActions.ExploitActions.RetakeControl.RetakeControl'>: True, <class 'CybORG.Simulator.Actions.ConcreteActions.RemoveOtherSessions.RemoveOtherSessions'>: True, <class 'CybORG.Simulator.Actions.ConcreteActions.ControlTraffic.BlockTraffic'>: True, <class 'CybORG.Simulator.Actions.ConcreteActions.ControlTraffic.AllowTraffic'>: True, <class 'CybORG.Simulator.Actions.Action.Sleep'>: True}, 'subnet': {IPv4Network('10.0.247.96/27'): True}, 'ip_address': {IPv4Address('10.0.247.121'): True, IPv4Address('10.0.247.119'): True, IPv4Address('10.0.247.123'): True, IPv4Address('10.0.247.126'): True, IPv4Address('10.0.247.111'): True, IPv4Address('10.0.247.124'): True, IPv4Address('10.0.247.118'): True, IPv4Address('10.0.247.101'): True, IPv4Address('10.0.247.114'): True, IPv4Address('10.0.247.104'): True, IPv4Address('10.0.247.105'): True, IPv4Address('10.0.247.122'): True, IPv4Address('10.0.247.102'): True, IPv4Address('10.0.247.125'): True, IPv4Address('10.0.247.107'): True, IPv4Address('10.0.247.113'): True, IPv4Address('10.0.247.108'): True, IPv4Address('10.0.247.116'): True}, 'session': {0: True}, 'username': {'root': True, 'drone_user': True}, 'password': {}, 'process': {1056: False, 1091: False, 1099: True, 1106: False, 1094: False, 1103: False, 1093: False, 1098: False, 1097: False, 1096: False, 1105: False, 1104: False, 1101: False, 1095: False, 1100: False, 1107: False, 1092: False, 1102: False}, 'port': {8888: False, 22: False}, 'target_session': {0: True, 1: False, 2: False, 3: False, 4: False, 5: False, 6: False, 7: False}, 'agent': {'blue_agent_0': True}, 'hostname': {'drone_0': True, 'drone_1': True, 'drone_2': True, 'drone_3': True, 'drone_4': True, 'drone_5': True, 'drone_6': True, 'drone_7': True, 'drone_8': True, 'drone_9': True, 'drone_10': True, 'drone_11': True, 'drone_12': True, 'drone_13': True, 'drone_14': True, 'drone_15': True, 'drone_16': True, 'drone_17': True}}
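
For comparison, the working invocation in the README section above passes keyword arguments and wraps the environment in FixedFlatWrapper, which may resolve this:

from CybORG.Agents.Wrappers.FixedFlatWrapper import FixedFlatWrapper

agent_name = 'blue_agent_0'
open_ai_wrapped_cyborg = OpenAIGymWrapper(agent_name=agent_name, env=FixedFlatWrapper(cyborg))
observation, reward, done, info = open_ai_wrapped_cyborg.step(0)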

Environment randomization

When viewing the defined scenarios, they seem fairly static, so I would expect agents to overfit to the scenario they train in. Is there a method for randomizing the environment?

Feature request: update PettingZoo version

Hi, would it be possible to update this repo to use the most recent version of PettingZoo? We want to list this project in PettingZoo's third-party-environments, but we can only include environments which work with the current version.

If you need any help working out issues due to different versions, feel free to ask. There were some breaking changes in version 1.22, so it requires a bit of code change to adapt: the previous API returned done from the step() function, whereas the new one returns truncated and terminated (matching gymnasium). There is a migration guide for gymnasium explaining the changes further, and the steps should be basically the same (we're working on resources for updating old PettingZoo repositories as well): https://gymnasium.farama.org/content/migration-guide/ A rough sketch of the step() change is below.
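
A rough sketch of the step() change described in the migration guide (names are illustrative):

# Old Parallel API:
# observations, rewards, dones, infos = env.step(actions)

# New Parallel API (matching gymnasium):
observations, rewards, terminations, truncations, infos = env.step(actions)
dones = {agent: terminations[agent] or truncations[agent] for agent in terminations}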

"IndexError: list index out of range" possibly caused by _update_red_info in RedTableWrapper

When I try running the cell in the Wrappers tutorial notebook for the RedTableWrapper, and I set the number of steps to be 6 or greater, I get the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
[...]/cage-challenge-2/CybORG/CybORG/Tutorial/5. Wrappers.ipynb Cell 19 line 7
      4 print(results.observation)
      6 for i in range(6):
----> 7     results = env.step(agent='Red')
      8     red_obs = results.observation
      9     print("obs: ", red_obs)

File [...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/BaseWrapper.py:16, in BaseWrapper.step(self, agent, action)
     15 def step(self, agent=None, action=None) -> Results:
---> 16     result = self.env.step(agent, action)
     17     result.observation = self.observation_change(result.observation)
     18     result.action_space = self.action_space_change(result.action_space)

File [...]/cage-challenge-2/CybORG/CybORG/Agents/Wrappers/BaseWrapper.py:16, in BaseWrapper.step(self, agent, action)
     15 def step(self, agent=None, action=None) -> Results:
---> 16     result = self.env.step(agent, action)
     17     result.observation = self.observation_change(result.observation)
     18     result.action_space = self.action_space_change(result.action_space)

File [...]/cage-challenge-2/CybORG/CybORG/CybORG.py:104, in CybORG.step(self, agent, action, skip_valid_action_check)
     88 def step(self, agent: str = None, action=None, skip_valid_action_check: bool = False) -> Results:
     89     """Performs a step in CybORG for the given agent.
     90 
     91     Parameters
   (...)
    102         the result of agent performing the action
    103     """
--> 104     return self.environment_controller.step(agent, action, skip_valid_action_check)

File [...]/cage-challenge-2/CybORG/CybORG/Shared/EnvironmentController.py:120, in EnvironmentController.step(self, agent, action, skip_valid_action_check)
    117 for agent_name, agent_object in self.agent_interfaces.items():
    118     # pass observation to agent to get action
    119     if agent is None or action is None or agent != agent_name:
--> 120         agent_action = agent_object.get_action(self.observation[agent_name])
    122     else:
    123         agent_action = action

File [...]/cage-challenge-2/CybORG/CybORG/Shared/AgentInterface.py:90, in AgentInterface.get_action(self, observation, action_space)
     88 if action_space is None:
     89     action_space = self.action_space.get_action_space()
---> 90 self.last_action = self.agent.get_action(observation, action_space)
     91 return self.last_action

File [...]/cage-challenge-2/CybORG/CybORG/Agents/SimpleAgents/B_line.py:66, in B_lineAgent.get_action(self, observation, action_space)
     64 # Exploit- Enterprise Host
     65 elif self.action == 5:
---> 66     self.target_ip_address = [value for key, value in observation.items() if key != 'success'][0]['Interface'][0]['IP Address']
     67     action = ExploitRemoteService(session=session, agent='Red', ip_address=self.target_ip_address)
     69 # Privilege escalation on Enterprise Host

IndexError: list index out of range

This problem doesn't happen with any of the other wrappers.

The observations and actions from running the RedTableWrapper cell are below:

{'success': <TrinaryEnum.UNKNOWN: 2>, 'User0': {'Interface': [{'Interface Name': 'eth0', 'IP Address': IPv4Address('10.0.74.1'), 'Subnet': IPv4Network('10.0.74.0/28')}], 'Sessions': [{'Username': 'SYSTEM', 'ID': 0, 'Timeout': 0, 'PID': 2448, 'Type': <SessionType.RED_ABSTRACT_SESSION: 10>, 'Agent': 'Red'}], 'Processes': [{'PID': 2448, 'Username': 'SYSTEM'}], 'System info': {'Hostname': 'User0', 'OSType': <OperatingSystemType.WINDOWS: 2>, 'OSDistribution': <OperatingSystemDistribution.WINDOWS_SVR_2008: 4>, 'OSVersion': <OperatingSystemVersion.W6_1_7601: 13>, 'Architecture': <Architecture.x64: 2>}}}
obs:  {'success': <TrinaryEnum.TRUE: 1>, '10.0.74.1': {'Interface': [{'IP Address': IPv4Address('10.0.74.1'), 'Subnet': IPv4Network('10.0.74.0/28')}]}, '10.0.74.2': {'Interface': [{'IP Address': IPv4Address('10.0.74.2'), 'Subnet': IPv4Network('10.0.74.0/28')}]}, '10.0.74.12': {'Interface': [{'IP Address': IPv4Address('10.0.74.12'), 'Subnet': IPv4Network('10.0.74.0/28')}]}, '10.0.74.3': {'Interface': [{'IP Address': IPv4Address('10.0.74.3'), 'Subnet': IPv4Network('10.0.74.0/28')}]}, '10.0.74.6': {'Interface': [{'IP Address': IPv4Address('10.0.74.6'), 'Subnet': IPv4Network('10.0.74.0/28')}]}}
action:  DiscoverRemoteSystems 10.0.74.0/28
obs:  {'success': <TrinaryEnum.TRUE: 1>}
action:  DiscoverNetworkServices 10.0.74.6
obs:  {'success': <TrinaryEnum.TRUE: 1>, '10.0.74.1': {'Processes': [{'Connections': [{'local_port': 4444, 'remote_port': 56112, 'local_address': IPv4Address('10.0.74.1'), 'remote_address': IPv4Address('10.0.74.6')}], 'Process Type': <ProcessType.REVERSE_SESSION_HANDLER: 10>}], 'Interface': [{'IP Address': IPv4Address('10.0.74.1')}]}, '10.0.74.6': {'Processes': [{'Connections': [{'local_port': 56112, 'remote_port': 4444, 'local_address': IPv4Address('10.0.74.6'), 'remote_address': IPv4Address('10.0.74.1')}], 'Process Type': <ProcessType.REVERSE_SESSION: 11>}, {'Connections': [{'local_port': 25, 'local_address': IPv4Address('10.0.74.6'), 'Status': <ProcessState.OPEN: 2>}], 'Process Type': <ProcessType.SMTP: 5>}], 'Interface': [{'IP Address': IPv4Address('10.0.74.6')}], 'Sessions': [{'ID': 1, 'Type': <SessionType.RED_REVERSE_SHELL: 11>, 'Agent': 'Red'}], 'System info': {'Hostname': 'User4', 'OSType': <OperatingSystemType.LINUX: 3>}}}
action:  ExploitRemoteService 10.0.74.6
obs:  {'success': <TrinaryEnum.TRUE: 1>, 'User4': {'Sessions': [{'Username': 'root', 'ID': 1, 'Timeout': 0, 'PID': 2369, 'Type': <SessionType.RED_REVERSE_SHELL: 11>, 'Agent': 'Red'}], 'Processes': [{'PID': 2369, 'Username': 'root'}], 'Interface': [{'Interface Name': 'eth0', 'IP Address': IPv4Address('10.0.74.6'), 'Subnet': IPv4Network('10.0.74.0/28')}]}, 'Enterprise0': {'Interface': [{'IP Address': IPv4Address('10.0.81.167')}]}}
action:  PrivilegeEscalate User4
obs:  {'success': <TrinaryEnum.TRUE: 1>}
action:  DiscoverNetworkServices 10.0.81.167

The action that comes before the error is always DiscoverNetworkServices.

Could this be because _update_red_info in RedTableWrapper uses popitem() to get the IP address, mutating the observation?

ip = str(obs.popitem()[1]['Interface'][0]['IP Address'])

When I change the method to read the same element without removing it from obs, the cell runs for at least 6 steps without errors:

def _update_red_info(self, obs):
    action = self.get_last_action(agent='Red')
    name = action.__class__.__name__
    if name == 'DiscoverRemoteSystems':
        self._add_ips(obs)
    elif name == 'DiscoverNetworkServices':
        red_obs = deepcopy(obs)
        ip = str(red_obs.popitem()[1]['Interface'][0]['IP Address'])
        self.red_info[ip][3] = True
[...]

`typing` syntax error

I'm using Python 3.9 (apparently supported), but I'm seeing some syntax errors in the package for typing. Specifically,

def step(self, action: Union[int, List[int]] = None) -> (object, float, bool, dict):

throws an error,

Tuple expression not allowed in type annotation
  Use Tuple[T1, ..., Tn] to indicate a tuple type or Union[T1, T2] to indicate a union type

This error seems consistent with the Python typing docs. I'm not sure whether that tuple syntax was supported in a previous Python version; a corrected annotation is sketched below.
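
Following the suggestion in the error message, the annotation can be written with typing.Tuple instead of a parenthesised tuple:

from typing import List, Tuple, Union

def step(self, action: Union[int, List[int]] = None) -> Tuple[object, float, bool, dict]:
    ...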

Accidental print statement?

It seems a print statement was accidentally left in by commit e1573b7, on line 173 of PettingZooParallelWrapper.py.

How to use the trained model?

In the following code (CybORG/Agents/Wrappers/RLLibWrapper.py), we can get a trained model, but I don't know how to use the model in a new env. Could you give me some suggestions? Thanks!
import inspect
import numpy as np
from ray.rllib.agents import ppo
from ray.rllib.env import ParallelPettingZooEnv
from ray.tune import register_env

from CybORG import CybORG
from CybORG.Agents import B_lineAgent, GreenAgent
from CybORG.Agents.Wrappers import ChallengeWrapper
from CybORG.Agents.Wrappers.PettingZooParallelWrapper import PettingZooParallelWrapper
from CybORG.Simulator.Scenarios import FileReaderScenarioGenerator, DroneSwarmScenarioGenerator

class RLLibWrapper(ChallengeWrapper):
    def __init__(self, agent_name, env, reward_threshold=None, max_steps=None):
        super().__init__(agent_name, env, reward_threshold, max_steps)

    def step(self, action=None):
        obs, reward, done, info = self.env.step(action=action)
        self.step_counter += 1
        if self.max_steps is not None and self.step_counter >= self.max_steps:
            done = True
        return np.float32(obs), reward, done, info

    def reset(self):
        self.step_counter = 0
        obs = self.env.reset()
        return np.float32(obs)

def env_creator_CC1(env_config: dict):
    path = str(inspect.getfile(CybORG))
    path = path[:-7] + '/Simulator/Scenarios/scenario_files/Scenario1b.yaml'
    sg = FileReaderScenarioGenerator(path)
    agents = {"Red": B_lineAgent(), "Green": GreenAgent()}
    cyborg = CybORG(scenario_generator=sg, environment='sim', agents=agents)
    env = RLLibWrapper(env=cyborg, agent_name="Blue", max_steps=100)
    return env

def env_creator_CC2(env_config: dict):
    path = str(inspect.getfile(CybORG))
    path = path[:-7] + '/Simulator/Scenarios/scenario_files/Scenario2.yaml'
    sg = FileReaderScenarioGenerator(path)
    agents = {"Red": B_lineAgent(), "Green": GreenAgent()}
    cyborg = CybORG(scenario_generator=sg, environment='sim', agents=agents)
    env = RLLibWrapper(env=cyborg, agent_name="Blue", max_steps=100)
    return env

def env_creator_CC3(env_config: dict):
    sg = DroneSwarmScenarioGenerator()
    cyborg = CybORG(scenario_generator=sg, environment='sim')
    env = ParallelPettingZooEnv(PettingZooParallelWrapper(env=cyborg))
    return env

def print_results(results_dict):
    train_iter = results_dict["training_iteration"]
    r_mean = results_dict["episode_reward_mean"]
    r_max = results_dict["episode_reward_max"]
    r_min = results_dict["episode_reward_min"]
    print(f"{train_iter:4d} \tr_mean: {r_mean:.1f} \tr_max: {r_max:.1f} \tr_min: {r_min: .1f}")

if __name__ == "__main__":
    register_env(name="CC1", env_creator=env_creator_CC1)
    register_env(name="CC2", env_creator=env_creator_CC2)
    register_env(name="CC3", env_creator=env_creator_CC3)
    config = ppo.DEFAULT_CONFIG.copy()
    for env in ['CC1', 'CC2', 'CC3']:
        agent = ppo.PPOTrainer(config=config, env=env)

        train_steps = 1e2
        total_steps = 0
        while total_steps < train_steps:
            results = agent.train()
            print_results(results)
            total_steps = results["timesteps_total"]
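
One way to reuse the trained model, as a minimal sketch assuming the same older ray.rllib.agents API as the snippet above (on very old Ray versions compute_single_action is called compute_action):

# Hypothetical sketch: save a checkpoint, restore it into a fresh trainer,
# then query the policy in a new environment instance.
checkpoint = agent.save()  # returns a checkpoint path

new_agent = ppo.PPOTrainer(config=config, env="CC3")
new_agent.restore(checkpoint)

env = env_creator_CC3({})
obs = env.reset()
# For the multi-agent CC3 environment, a policy_id argument may also be
# needed depending on your RLlib configuration.
actions = {agent_id: new_agent.compute_single_action(agent_obs)
           for agent_id, agent_obs in obs.items()}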

The code in the tutorial does not work correctly.

I'm new to this repo, and I set up and tested CybORG following the instructions. But when I follow the code in the tutorial, I get a TypeError; it seems like something is wrong with the scenario generator in CybORG.

import random
import inspect
from os.path import dirname
from pprint import pprint

from CybORG import CybORG
from CybORG.Simulator.Scenarios import FileReaderScenarioGenerator

path = inspect.getfile(CybORG)
path = dirname(path) + f'/Simulator/Scenarios/scenario_files/Scenario1.yaml'
sg = FileReaderScenarioGenerator(path)
env = CybORG(scenario_generator=sg)

env = CybORG(sg)

results = env.reset(agent='Red')
obs = results.observation
pprint(obs)

TypeError                                 Traceback (most recent call last)
Cell In[9], line 12
     10 path = dirname(path) + f'/Simulator/Scenarios/scenario_files/Scenario1.yaml'
     11 sg = FileReaderScenarioGenerator(path)
---> 12 env = CybORG(scenario_generator=sg)
     14 env = CybORG(sg)
     16 results = env.reset(agent='Red')

File e:\jupyter notebook\cyborg\cyborg\CybORG\env.py:80, in CybORG.__init__(self, scenario_generator, environment, env_config, agents, seed)
     78 else:
     79     self.np_random = seed
---> 80 self.environment_controller = self._create_env_controller(env_config, agents)

File e:\jupyter notebook\cyborg\cyborg\CybORG\env.py:95, in CybORG._create_env_controller(self, env_config, agents)
     93 if self.env == 'sim':
     94     from CybORG.Simulator.SimulationController import SimulationController
---> 95     return SimulationController(self.scenario_generator, agents, self.np_random)
     96 raise NotImplementedError(
     97     f"Unsupported environment '{self.env}'. Currently supported "
     98     f"environments are: {self.supported_envs}"
     99 )

File e:\jupyter notebook\cyborg\cyborg\CybORG\Simulator\SimulationController.py:26, in SimulationController.__init__(self, scenario_generator, agents, np_random)
     24 self.routeless_actions = []
     25 self.blocked_actions = []
---> 26 super().__init__(scenario_generator, agents, np_random)

File e:\jupyter notebook\cyborg\cyborg\CybORG\Shared\EnvironmentController.py:49, in EnvironmentController.__init__(self, scenario_generator, agents, np_random)
     47 self.scenario_generator = scenario_generator
     48 self.np_random = np_random
---> 49 scenario = scenario_generator.create_scenario(np_random)
     50 self._create_environment(scenario)
     51 self.max_bandwidth = scenario.max_bandwidth

File e:\jupyter notebook\cyborg\cyborg\CybORG\Simulator\Scenarios\FileReaderScenarioGenerator.py:74, in FileReaderScenarioGenerator.create_scenario(self, np_random)
     73 def create_scenario(self, np_random) -> Scenario:
---> 74     scenario = copy.deepcopy(self.scenario)
     76     count = 0
     77     # randomly generate subnets cidrs for all subnets in scenario and IP addresses for all hosts in those subnets and create Subnet objects
     78     # using fixed size subnets (VLSM maybe viable alternative if required)

[... repeated deepcopy / _reconstruct / _deepcopy_dict frames in copy.py elided ...]

File e:\Anaconda3\envs\cyborg\lib\copy.py:264, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
    262 if deep and args:
    263     args = (deepcopy(arg, memo) for arg in args)
--> 264 y = func(*args)
    265 if deep:
    266     memo[id(x)] = y

TypeError: _generator_ctor() takes from 0 to 1 positional arguments but 2 were given

Constructing CybORG with the DroneSwarmScenarioGenerator works correctly, so I suspect the YAML-based construction path used by FileReaderScenarioGenerator no longer matches the current implementation. I would greatly appreciate it if someone could tell me what is wrong.
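For anyone triaging this: the traceback bottoms out in copy.deepcopy of the scenario, whose state appears to include a numpy random Generator. Here is a minimal check, independent of CybORG, of whether deepcopying a Generator is what fails under your installed numpy (an assumption drawn from the traceback, not from the CybORG source):

from copy import deepcopy

import numpy as np

# If this standalone deepcopy raises the same
# "TypeError: _generator_ctor() takes from 0 to 1 positional arguments ..."
# then the culprit is the installed numpy version rather than the YAML
# scenario itself, and aligning numpy with the version in CybORG's
# requirements is a plausible workaround.
rng = np.random.default_rng(0)
copied = deepcopy(rng)  # should return a new, independent Generator
print('deepcopy of numpy Generator OK:', copied is not rng)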

Erroneous agent `__init__` signatures

There are several issues with agent instantiation signatures. BaseAgent expects a name, but many subclasses don't pass one. Worse, classes like DroneTrojanAgent use positional arguments in their super().__init__() calls, accidentally passing np_random to BaseAgent as name (see the sketch below).

😬
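A minimal sketch of the mismatch described above; the signatures are illustrative assumptions based on the report, not code copied from the repo:

class BaseAgent:
    def __init__(self, name):
        self.name = name  # expects a string identifier for the agent

class DroneTrojanAgent(BaseAgent):
    def __init__(self, num_drones, name, np_random=None):
        # BUG: the positional call binds np_random to BaseAgent's name
        super().__init__(np_random)
        # Fix: pass the intended argument explicitly by keyword:
        # super().__init__(name=name)
        self.np_random = np_random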

Bug in Observation Vector?

Hello, I think there might be an issue with how the observation vector is created in the PettingZooParallelWrapper. The IP-range and malicious-process bits always appear to be 0. I have added a code snippet to show this; it prints nothing for me. I have found that this causes the agents to learn a 'noop' action, as they can't make informed decisions about the environment. I ran this on the most recent commit, including the updates to the PettingZooParallelWrapper. Please let me know if this is just an error in my setup. Cheers.

from CybORG.Agents.Wrappers.PettingZooParallelWrapper import PettingZooParallelWrapper
from ray.rllib.env import ParallelPettingZooEnv
from CybORG import CybORG
from CybORG.Simulator.Scenarios import DroneSwarmScenarioGenerator

MAX_EPS = 2

sg = DroneSwarmScenarioGenerator()
cyborg = CybORG(sg, 'sim')
env = ParallelPettingZooEnv(PettingZooParallelWrapper(env=cyborg))

for i in range(MAX_EPS):
    observations = env.reset()
    for j in range(500):
        # Give every agent the same fixed action index
        actions = {agent_name: 42 for agent_name in cyborg.agents}
        observations, rew, done, info = env.step(actions)
        # Print any observation whose IP-range/malicious-process bits are set
        for agent in rew.keys():
            if observations[agent][2:38].sum() != 0:
                print(observations[agent][2:38])
        if all(done.values()):
            break

Edit: updated the code snippet. It appears that index 6 is the only index in this range that is never equal to 1. That makes sense, as we aren't blocking IPs; however, I would expect malicious processes to pop up at index 19. I see that all the drones detect a malicious process on reset, but it doesn't stay flagged over subsequent timesteps or appear again after the reset. Is this by design?

Output:

[0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
(the same vector is printed 36 times in total)

P.S. Apologies for reposting this from another branch; I'm not sure which branch is the 'main' one.

SendData actions return Observation(False) when any drone is red

The SendData actions appear to return Observation(False) whenever any of the 18 agents is red, rather than only when an agent on the route is red.

This results in a score of -18 at every step, as soon as red team takes over a single agent.

This seems to be caused by the check for red agents looping through host_agents, which contains all agents rather than just the agents on the route. The fix should be to correct the for agent in host_agents: loop.

        for other_hostname in route:
            # Get host object for corresponding hostname
            host = state.hosts[other_hostname]
            # Get the list of agents mapped to sessions for the host
            host_agents = host.sessions.keys()
            # Iterate through list of agents operating session
            for agent in host_agents:
                # Check that agent's team name contains 'red', assume modification if true
                if 'red' in agent.lower():
                    # Iterate through list of session objects under agent
                    for session in state.sessions[agent].values():
                        # Check if agent has escalated privileges within session
                        if session.username == 'root' or session.username == 'SYSTEM':
                            return obs
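A possible correction, sketched under the assumption that host.sessions maps every agent name to a (possibly empty) list of session ids, which would explain why host_agents contains all agents:

        for other_hostname in route:
            # Get host object for corresponding hostname
            host = state.hosts[other_hostname]
            # Only consider agents that actually hold a session on this host
            for agent, session_ids in host.sessions.items():
                if not session_ids:
                    continue  # no live session on this host; skip the agent
                # Check that the agent's team name contains 'red'
                if 'red' in agent.lower():
                    for session in state.sessions[agent].values():
                        # Check if the red agent has escalated privileges
                        if session.username in ('root', 'SYSTEM'):
                            return obs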

Why am I getting an error when executing the demo after successfully installing CybORG in Anaconda?

Traceback (most recent call last):
  File "c:/Users/HP/Documents/Tmpcode/CybORG/demo.py", line 44, in <module>
    cyborg = CybORG(sg, 'sim', agents={'Red': red_agent})
  File "D:\Users\HP\anaconda3\envs\CybORG\lib\site-packages\cyborg-3.1-py3.8.egg\CybORG\env.py", line 80, in __init__
    self.environment_controller = self._create_env_controller(env_config, agents)
  File "D:\Users\HP\anaconda3\envs\CybORG\lib\site-packages\cyborg-3.1-py3.8.egg\CybORG\env.py", line 95, in _create_env_controller
    return SimulationController(self.scenario_generator, agents, self.np_random)
  File "D:\Users\HP\anaconda3\envs\CybORG\lib\site-packages\cyborg-3.1-py3.8.egg\CybORG\Simulator\SimulationController.py", line 26, in __init__
    super().__init__(scenario_generator, agents, np_random)
  File "D:\Users\HP\anaconda3\envs\CybORG\lib\site-packages\cyborg-3.1-py3.8.egg\CybORG\Shared\EnvironmentController.py", line 49, in __init__
    scenario = scenario_generator.create_scenario(np_random)
  File "D:\Users\HP\anaconda3\envs\CybORG\lib\site-packages\cyborg-3.1-py3.8.egg\CybORG\Simulator\Scenarios\FileReaderScenarioGenerator.py", line 74, in create_scenario
    scenario = copy.deepcopy(self.scenario)
  [... repeated deepcopy / _reconstruct / _deepcopy_dict frames in copy.py elided ...]
  File "D:\Users\HP\anaconda3\envs\CybORG\lib\copy.py", line 264, in _reconstruct
    y = func(*args)
TypeError: _generator_ctor() takes from 0 to 1 positional arguments but 2 were given

[bug] true_obs_to_table: KeyError

I ran the following code, with reference to the 4th tutorial (on debugging):
from CybORG.Agents.Wrappers.TrueTableWrapper import true_obs_to_table

true_table = true_obs_to_table(true_state,env)
print(true_table)
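For context, true_state and env here come from earlier cells in the tutorial; presumably something like the following, where 'True' is CybORG's conventional name for the ground-truth view (an assumption about the tutorial setup, not quoted from it):

from CybORG import CybORG
from CybORG.Simulator.Scenarios import DroneSwarmScenarioGenerator

# Assumed setup: build a simulated environment, then pull the true state
sg = DroneSwarmScenarioGenerator()
env = CybORG(sg, 'sim')
true_state = env.get_agent_state('True')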

The error is as follows:
1 from CybORG.Agents.Wrappers.TrueTableWrapper import true_obs_to_table
----> 3 true_table = true_obs_to_table(true_state,env)
4 print(true_table)

File ~/pettingzooAndOrg/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/TrueTableWrapper.py:118, in true_obs_to_table(true_obs, env)
116 wrapper = TrueTableWrapper(env,observer_mode=False)
117 wrapper.step_counter = 1
--> 118 return wrapper.observation_change(agent=None, observation=true_obs)

File ~/pettingzooAndOrg/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/TrueTableWrapper.py:26, in TrueTableWrapper.observation_change(self, agent, observation)
23 self.step_counter +=1
24 self._update_scanned()
---> 26 return observation if self.observer_mode else self._create_true_table()

File ~/pettingzooAndOrg/cage-challenge-3/CybORG/CybORG/Agents/Wrappers/TrueTableWrapper.py:68, in TrueTableWrapper._create_true_table(self)
66 hostname = host['System info']['Hostname']
67 action_space = self.get_action_space(agent = 'Red')
---> 68 known = action_space['ip_address'][ip]
69 scanned = True if str(ip) in self.scanned_ips else False
70 access = self._determine_red_access(host['Sessions'])

KeyError: IPv4Address('10.0.214.136')

I don't know what this KeyError means or how to fix it.
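A hedged reading of the error: in CybORG, action_space['ip_address'] is a dictionary keyed by IP address, so the lookup fails because the true state reports an IP that Red's action space has no entry for. A quick probe to confirm, assuming the standard CybORG accessors:

# List any IPs present in the true state but missing from Red's action
# space; a non-empty result would explain the KeyError above.
red_ips = set(env.get_action_space('Red')['ip_address'].keys())
true_ips = {interface['IP Address']
            for host in true_state.values() if isinstance(host, dict)
            for interface in host.get('Interface', [])}
print(true_ips - red_ips)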
