
ovn-scale-test's People

Contributors

dceara, flavio-fernandes, huikang, hzhou8, jayhawk87, l8huang, linup2011, lorenzobianconi, mestery, noah8713, numansiddique, putnopvut

ovn-scale-test's Issues

Support unbalanced logical port binding mechanism

In the current rally_ovs implementation, the logical ports are evenly distributed onto all the emulated chassis.

The pseudocode of the existing implementation in rally_ovs is as follows:

num_lports_per_network = ...  # some number taken from the rally task file
for network in logical_networks:
    for i in range(num_lports_per_network):
        bind(lport[i], chassis[i])  # bind lport[i] to chassis[i]

Therefore, for each virtual network (e.g., one with 400 lports), the current binding strategy binds one lport to one chassis. If the number of lports is smaller than the number of chassis, the allocation tends to be unbalanced. That is the main reason I chose 400 ports per network.

The binding strategy is hard-coded in rally-ovs. If we want to try a different binding strategy, we need to add a new parameter to the rally task file.
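To illustrate, here is a minimal sketch of what a configurable strategy could look like; the strategy names and the helper function are hypothetical and not part of the current rally-ovs code:

def assign_chassis(lports, chassis, strategy="round_robin"):
    """Return a list of (lport, chassis) pairs according to the strategy."""
    if strategy == "round_robin":
        # Spread ports evenly, wrapping around when there are more ports
        # than chassis.
        return [(p, chassis[i % len(chassis)]) for i, p in enumerate(lports)]
    if strategy == "pack":
        # Concentrate all ports of the network on the first chassis.
        return [(p, chassis[0]) for p in lports]
    raise ValueError("unknown binding strategy: %s" % strategy)

A new field in the task file (say, "port_bind_strategy") could then select which strategy to use.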

How to use ovn-scale-test to test a raft cluster database

Hi Team,

I studied some slides from Aliasgar, which were very helpful for running standalone testing.
For now, I want to use 3 KVM VMs as the cluster and several bare-metal machines as test farms.
Could someone help me with how to modify the JSON files in deployment/task to finish the scale test?

Regards,
Winson

rally Dockerfile does not pick up committed changes

Problem description: if a commit changes anything in rally-ovs, the rally Docker image does not pick up the change, because the rally Dockerfile clones the repo from upstream.

A follow-up PR is then needed to fix this, e.g., #53.

One possible solution is to generate the Dockerfile dynamically, using a Jinja2 template.
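A rough sketch of that idea using jinja2 directly; the template contents and names below are placeholders, not the actual CI layout:

from jinja2 import Template

DOCKERFILE_TEMPLATE = """\
FROM ubuntu:16.04
# Clone the repo/commit under test instead of hard-coding upstream.
RUN git clone {{ repo_url }} /opt/ovn-scale-test
"""

def render_dockerfile(repo_url, out_path="Dockerfile"):
    # Render the Dockerfile so CI can point it at the change under test.
    with open(out_path, "w") as f:
        f.write(Template(DOCKERFILE_TEMPLATE).render(repo_url=repo_url))

render_dockerfile("https://github.com/openvswitch/ovn-scale-test.git")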

Add error checking to the CI tools

Currently, the ovn-scale-test CI scripts in the ci/ directory do not handle errors when things fail; adding error checking would allow the underlying CI system to conclude that a run was in error.

[RFC] hybrid scalability test deployment

@muradkablan proposed a hybrid deployment for ovn-scale-test. The motivation is that we can test how the control plane impacts the data plane.

[ovn-emulation-host]
host-1
host-2

[ovn-real-chassis]
host-3
host-4

Comments?

pure-ovn: same ip address in multiple containers?

Hello @huikang @l8huang and @mestery,

It is very possible that I'm not looking at this the right way, but after deploying pure-ovn containers via ".../ci/scale-pure-ovn-hosts.sh", it seems that the containers are configured with addresses that are not unique. Does this mean all of them share the same network namespace?

See this gist -- or below -- for the output.

Assuming this is working as expected, if there is a doc or a piece of code that talks about this, please forward it to me!

Thanks,

vagrant@pureovn:~$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS               NAMES
952ee145831b        ovn-scale-test-ovn   "ovn_set_chassis 172."   27 hours ago        Up 27 hours                             sandbox-172.16.200.14
403d0150b91e        ovn-scale-test-ovn   "ovn_set_chassis 172."   27 hours ago        Up 27 hours                             sandbox-172.16.200.13
8f76eeec9c1b        ovn-scale-test-ovn   "ovn_set_chassis 172."   27 hours ago        Up 27 hours                             sandbox-172.16.200.12
65cb610cb928        ovn-scale-test-ovn   "ovn_set_chassis 172."   27 hours ago        Up 27 hours                             sandbox-172.16.200.11
352ea12d2df5        ovn-scale-test-ovn   "ovn_set_chassis 172."   27 hours ago        Up 27 hours                             sandbox-172.16.200.10
8c3298e8a3bf        ovn-scale-test-ovn   "ovn-sandbox-northd.s"   27 hours ago        Up 27 hours                             ovn-northd
97db778bd646        ovn-scale-test-ovn   "ovn-sandbox-south-ov"   27 hours ago        Up 27 hours                             ovn-south-database
57ef825504ec        ovn-scale-test-ovn   "ovn-sandbox-north-ov"   27 hours ago        Up 27 hours                             ovn-north-database
vagrant@pureovn:~$
vagrant@pureovn:~$
vagrant@pureovn:~$ for cid in $(docker ps -q) ; do docker inspect --format='{{.Name}} - {{.Path}}' $cid ; docker exec $cid ip a ; echo --- ; done
/sandbox-172.16.200.14 - ovn_set_chassis
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:1a:e9:1a brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.20.100/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.200.10/16 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 172.16.200.11/16 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 172.16.200.12/16 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 172.16.200.13/16 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 172.16.200.14/16 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe1a:e91a/64 scope link
       valid_lft forever preferred_lft forever
3: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:57:f4:4f:75 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:57ff:fef4:4f75/64 scope link
       valid_lft forever preferred_lft forever

---
(The ip a output for each of the remaining containers, sandbox-172.16.200.13 through sandbox-172.16.200.10, ovn-northd, ovn-south-database, and ovn-north-database, is identical to the output shown above for sandbox-172.16.200.14.)
---
vagrant@pureovn:~$

ovn-rally is still running when running cleanup

Running this:

sudo /usr/local/bin/ansible-playbook -i $OVN_DOCKER_HOSTS ansible/site.yml -e @$OVN_DOCKER_VARS -e action=clean

leaves ovn-rally running:

ubuntu@ovn-rally-2:~/ovn-scale-test/ci$ docker ps
CONTAINER ID        IMAGE                  COMMAND               CREATED             STATUS              PORTS               NAMES
cf5db6185290        ovn-scale-test-rally   "/usr/sbin/sshd -D"   8 minutes ago       Up 8 minutes                            ovn-rally
ubuntu@ovn-rally-2:~/ovn-scale-test/ci$ 

Support global opts for ovs/ovn commands in batch mode

There are two kinds of opts for ovs/ovn command lines, global and command-specific, as documented in the ovs-vsctl man page:
"The ovs-vsctl command line begins with global options (see OPTIONS below for details). The global options are followed by one or more commands. Each command should begin with -- by itself as a command-line argument, to separate it from the following commands. (The -- before the first command is optional.) The command itself starts with command-specific options, if any, followed by the command name and any arguments."

Right now there is no proper way to supply opts for batch_mode in ovsclients_impl.py; the parameters taken are for command-specific opts only. This needs to be fixed so that we can add global options for batch-mode execution.
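As an illustration of the distinction, here is a small sketch of how a batch command could be assembled with both kinds of options; the function is hypothetical and not the existing batch_mode code:

def build_batch_cmd(global_opts, commands, binary="ovn-nbctl"):
    """Build e.g.: ovn-nbctl --timeout=10 -- ls-add sw0 -- lsp-add sw0 p0

    global_opts apply to the whole invocation; each command may carry its
    own command-specific options before its name.
    """
    parts = [binary] + list(global_opts)
    for cmd in commands:
        parts.append("--")
        parts.extend(cmd)
    return " ".join(parts)

print(build_batch_cmd(["--timeout=10"],
                      [["ls-add", "sw0"], ["lsp-add", "sw0", "p0"]]))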

Make localnet port creation optional in _create_networks()

Now that the logical router is added to the test, _create_networks() should be refactored so that localnet port creation can be disabled in lrouter-related scenarios. L2 scenarios should still keep the localnet port as the default behavior.

With a localnet port, an lswitch behaves in bridged mode instead of overlay mode. This may have little impact on control-plane scalability with the current implementation, but it could have a huge impact in the future if we try to optimize the bridged mode by not installing overlay-related flows for bridged L2 datapaths.
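A possible shape for the refactoring; this is illustrative only, and the real _create_networks() signature and helper names in the scenario code differ:

def _create_networks(self, network_create_args, create_localnet=True):
    # Hypothetical flag: L2 scenarios keep the default (localnet port
    # created); lrouter scenarios would pass create_localnet=False.
    lswitches = self._create_lswitches(network_create_args)
    if create_localnet:
        for lswitch in lswitches:
            self._create_localnet_port(lswitch)
    return lswitches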

Add logging support to ovsclients_impl.py

Right now there's no easy way to inspect the commands that were executed by OvsClient instances in ovsclients_impl.py.

Adding support to log the commands would make it easier to debug tests and/or replay commands executed by the test without having to rerun the whole scenario.

One potential option would be to add command logging before calls to self.ssh.run(..).
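A minimal sketch of that option using the standard library logger; the class below is a stand-in for the real clients in ovsclients_impl.py, not their actual structure:

import logging

LOG = logging.getLogger(__name__)

class SshClientStub(object):
    """Stand-in for an SSH-backed client like those in ovsclients_impl.py."""

    def __init__(self, ssh):
        self.ssh = ssh

    def run(self, cmd, stdin=None):
        # Log the exact command before it goes over SSH so a failed test can
        # be debugged or the command replayed by hand.
        LOG.debug("Running command: %s", cmd)
        return self.ssh.run(cmd, stdin=stdin)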

ovn-scale-test-ovn gone from docker repository

I'm seeing this as of today:

TASK [ovn : start OVN northbound database] *************************************
fatal: [192.168.0.37]: FAILED! => {"changed": false, "changes": ["{"status":"Pulling repository docker.io/library/ovn-scale-test-ovn"}\r\n", "{"errorDetail":{"message":"
Error: image library/ovn-scale-test-ovn not found"},"error":"Error: image library/ovn-scale-test-ovn not found"}\r\n"], "failed": true, "msg": "Unrecognized status from pull
.", "status": ""}
to retry, use: --limit @ansible/site.retry

PLAY RECAP *********************************************************************
192.168.0.37 : ok=12 changed=2 unreachable=0 failed=1

Due to this, ovn-scale-test seems to fail.

Make network policy (ACL/PG/AS) configuration more generic.

We currently encode all the logic for simulating network policies, i.e., ACLs, Port_Groups, and Address_Sets, in Python scenarios. It might be better and more flexible to have a generic mechanism to specify such configuration externally. One option is described here:

"For the network policy related port-groups and ACLs, it seems to me better to be created as configuration (in JSON format) instead of adding to the code implementation. In the code we can add the support to apply whatever port-groups and ACLs that is configured, so that it is easier to test scalability of different port-group/ACL configurations."

CC: @hzhou8
CC: @LorenzoBianconi

Make scenarios that create lports usable with Rally runner for repetitions and parallelization

Currently any scenario that involves port creation cannot be used with the Rally serial or constant runner when the number of repetitions (times) is greater than 1 and/or the number of parallel runners (concurrency) is greater than 1, unless we ignore the fact that multiple ports will be assigned the same IP address.

We could address this either by picking IP addresses at random (from the subnet range) or by keeping global state to track the next not-yet-assigned IP address from the pool.
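A small sketch of the second option using only the standard library; the class and how it would be wired into the scenario are illustrative:

import ipaddress
import threading

class IPAllocator(object):
    """Hand out unique host addresses from a subnet across iterations."""

    def __init__(self, cidr):
        self._hosts = ipaddress.ip_network(cidr).hosts()
        self._lock = threading.Lock()   # shared by concurrent runners

    def next_ip(self):
        with self._lock:
            return str(next(self._hosts))

alloc = IPAllocator(u"10.1.0.0/16")
print(alloc.next_ip())   # 10.1.0.1
print(alloc.next_ip())   # 10.1.0.2

Note that with Rally's process-based runners this state would need to live somewhere shared, or each runner would need a disjoint range, rather than keeping it in a per-process object.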

Not using the Rally runner for repetitions prevents us from getting a timing profile from Rally, i.e. duration over iterations for each atomic task. An example of such a profile can be seen here:

https://people.redhat.com/jsitnick/ovn/create-and-bind-200-ports.html#/OvnNetwork.create_and_bind_ports/details

docker-py doesn't seem to be installed error.

Hello, I'm new to ovn-scale-test.
I followed the tutorial but got stuck with the below error when I run:
ansible-playbook -i ansible/inventory/ovn-hosts ansible/site.yml -e action=deploy

failed: [11.11.11.11-> 11.11.11.11](item=[0, '172.16.200.10']) => {"failed": true, "item": [0, "172.16.200.10"], "msg": "docker-py doesn't seem to be installed, but is required for the Ansible Docker module."}

I did install docker-py by running:
pip install -U docker-py

These are my system specifications:
Ubuntu 4.4.0-34-generic
Docker version 1.11.2, build b9f10c9
Python 2.7.12

Please advise,

Thanks

Create/delete sandboxes from the context

@jtaleric points out that Rally scenarios should be autonomous, that is, they should not depend on the system state created by any scenarios that have run before them.

This is currently not true for the scenarios that the Rally OVS plugin provides, because they depend on having a running controller sandbox (one with ovn-northd and the NB/SB databases). Also, scenarios that involve port binding assume that fake chassis sandboxes are running.

Instead of having scenarios for creating/destroying sandboxes that we need to run before/after the actual workload scenario, it would make sense to extend the existing context(s) or create a new one that creates sandboxes before the scenario runs and cleans them up afterwards.

An example of such a context can be found in the Rally OpenStack plugin:
https://github.com/openstack/rally/blob/master/rally/plugins/openstack/context/nova/servers.py
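A skeleton of what such a context could look like, following the Rally context plugin interface; the context name and the setup/cleanup bodies are placeholders rather than a working implementation:

from rally.task import context

@context.configure(name="ovn_sandboxes", order=110)
class SandboxContext(context.Context):
    """Create controller/fake-chassis sandboxes before the scenario runs."""

    CONFIG_SCHEMA = {"type": "object",
                     "properties": {"farm_nodes": {"type": "integer"}}}

    def setup(self):
        self.context["sandboxes"] = []
        for i in range(self.config.get("farm_nodes", 1)):
            # Placeholder: the real context would start a fake chassis
            # sandbox here (and, once, the ovn-northd + NB/SB controller
            # sandbox), then record what it created.
            self.context["sandboxes"].append("sandbox-%d" % i)

    def cleanup(self):
        for name in self.context["sandboxes"]:
            # Placeholder: tear down whatever setup() created.
            pass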

Document clarifications

Hello again,
Thanks for providing ovn-scale-test. I believe it will bring great value to the research community.
As a first-time user of the tool, I faced some difficulties installing it and playing with it, mostly due to some ambiguity in the instructions.

These are some issues I faced, and I think it will be helpful to add some text to clarify them:
1- Installing and running the code as root.
The instructions don't say that you must download, install, and run the test as the root user. If you don't run as root, you will have a hard time figuring out why things go wrong.

2- SSH connections between hosts and between containers and hosts.
This is also related to running as root. In ansible/group_vars/all.yml there is a variable "deploy_user" whose default is "rally". I was able to deploy but couldn't run any workload and was getting SSH failure errors because my machine was using the user name rally. I had to change it to "root" and everything worked after that.

3- Docker images.
It seems that the Docker images used for the emulated chassis are not up to date with the ovs repository, or, to be more specific, not in sync with the image used to build the ovn-rally container. This causes problems if the two images contain different versions of the ovs commands.

4- Terminologies.

  • What is a farm node? And why do I have to register them, as it says here:
    "In addition, to register the hosts and sandboxes in the rally-ovs database, the create-sandbox task should be executed for individual farm nodes."
  • Is an emulated chassis the same as a sandbox?

5- What is the difference between a node that runs the ovn-rally workload and one that runs the rally container, as mentioned here?

Use ovn-nbctl --wait hv to get end-to-end port binding time

Currently the scale test waits for port state UP in ovn-nb and then starts the next round of port creation and binding. This is inaccurate: when CPU is at 100% in the test farms, it does not reflect the real time the ovn-controllers spend completing each round of processing.

With the new feature "--wait hv" [1], we can now wait for the port bindings to be processed and reflected on all HVs before starting the next round, so that the real processing time is captured and we get a more accurate measurement of port-binding performance.

[1] openvswitch/ovs@fa183ac
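A rough illustration of how the scenario could use this; the run() callable is a stand-in for however rally-ovs executes commands on the controller node, and the helper is not existing code:

import time

def bind_ports_and_wait(run, lswitch, ports):
    """Create a batch of ports, then block until every hypervisor has
    processed the change, so the measured time is end-to-end."""
    start = time.time()
    for port in ports:
        run("ovn-nbctl lsp-add %s %s" % (lswitch, port))
    # 'sync' with --wait=hv returns only after all HVs have caught up.
    run("ovn-nbctl --wait=hv sync")
    return time.time() - start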

Merge code from ovs-scale-cicd

Folks, I have a repository here which has some automation around the Ansible work done in the ovn-scale-test repository. I'd like to merge this back into the ovn-scale-test repository, but I'd like feedback on where to merge it. The idea is that this can be used to automate the Ansible pieces here in a CI/CD environment. We'll be implementing this downstream in our IBM Public Cloud CI/CD, and I'd like to get it implemented upstream here as well eventually.

So feedback on where this lives is greatly appreciated!

Add skydive

Install the skydive controller into its own container, and also add the skydive agents into each of the containers running ovn-controller.

ovn-sbctl gets stuck when the raft leader changes

When I run the scale test, ovn-sbctl gets stuck. I found that the ovnsb raft leader had changed from central-1 to central-2; after that, the command "ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-4-55" on central-1 hangs.

rally log: END: Error SSHTimeout: Timeout executing command 'sudo docker exec ovn-central-1 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-4-55' on host...
Processes on central-1:
[root@ovn-central-1 ovn]# ps aux | grep ovn-sbctl
root 16818 0.0 0.0 41800 6268 ? Ss 10:14 0:01 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-4-55
root 16946 0.0 0.0 41800 6128 ? Ss 11:14 0:01 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-1-56
root 17072 0.0 0.0 41800 6256 ? Ss 12:14 0:00 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-4-56
root 17198 0.0 0.0 41708 5876 ? Ss 13:14 0:00 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-1-57

rally-ovs add acl task failed

The error message is

2016-06-28 21:35:08.282 54 INFO rally_ovs.plugins.ovs.scenarios.ovn [-] create 1 ACLs on lswitch lswitch_841788_YtXfYU
2016-06-28 21:35:08.284 54 INFO rally.task.runner [-] Task 84178824-0d05-4c85-a6e1-d36ea8999f7c | ITER: 0 END: Error NameError: global name 'pipes' is not defined
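The traceback suggests the code that builds the ACL command uses something from the pipes module (presumably pipes.quote for shell-escaping the match string) without importing it; if that is the cause, the fix is just the missing import, along the lines of:

import pipes   # provides pipes.quote() for shell-escaping arguments

# e.g. escape the ACL match string before embedding it in the command line:
match = pipes.quote("ip4.src == 10.0.0.0/24")
print(match)   # -> 'ip4.src == 10.0.0.0/24'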

ovn-scale-test requires non-system installation of rally

Hi, Lei
I found two unclear places in the ovn rally installation guide.

  • Since system-wide ovn-scale-test is not supported yet [1], installing rally should be done without root privileges. I suggest adding a note to this section [2].
  • install_rally.sh pulls the repo into the /home/{user}/rally directory [2]. Therefore, "git clone https://github.com/l8huang/rally.git" should be executed somewhere other than /home/{user}/. If this is the case, I suggest adding a note there as well.

My two cents. Thanks. - Hui

[1] https://github.com/openvswitch/ovn-scale-test/blob/master/doc/source/install.rst#install-ovn-scale-test
[2] https://github.com/openvswitch/ovn-scale-test/blob/master/doc/source/install.rst#install-rally

Consider switching from ovn-nbctl to ovsdbapp

Using ovn-nbctl to talk to NBDB has drawbacks:

  • we have to scrape the free-form output of ovn-nbctl, and the output format can change,
  • we have to spawn a new process and a new connection for each batch of requests,
  • we cannot take advantage of incremental notifications from OVSDB ("update2").

If parsing the output of ovn-nbctl becomes too cumbersome, or generating workload with it turns out not to be efficient enough, we should consider switching to ovsdbapp, a client library for OVSDB used by OpenStack:

https://github.com/openstack/ovsdbapp
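For reference, a rough sketch of what using ovsdbapp could look like; the class and method names follow the ovsdbapp OVN Northbound API as used in OpenStack, but exact calls may differ between versions, and the connection string and switch/port names are placeholders:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_northbound import impl_idl

idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6641", "OVN_Northbound")
api = impl_idl.OvnNbApiIdlImpl(connection.Connection(idl, timeout=60))

# Batch several operations into one OVSDB transaction instead of spawning
# one ovn-nbctl process per request.
with api.transaction(check_error=True) as txn:
    txn.add(api.ls_add("lswitch_demo"))
    txn.add(api.lsp_add("lswitch_demo", "lport_demo"))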
