ovn-org / ovn-scale-test
License: Apache License 2.0
There is a file that shows how to create sandboxes on a single farm node after deployment.
However, it is unclear how to create sandboxes when multiple farm nodes are registered.
Our local Rally changes should be pushed upstream and we should not rely on a fork [1].
[1] http://ovn-scale-test.readthedocs.io/en/latest/install.html
In the current rally_ovs implementation, the logical ports are evenly distributed across all the emulated chassis.
The pseudocode of the existing implementation in rally_ovs is as follows:
num_lports_per_network = ...  # some number from the rally task file
for network in logical_networks:
    for i in range(num_lports_per_network):
        # bind lport[i] to chassis[i], wrapping around the chassis list
        bind_port(network.lports[i], chassis[i % len(chassis)])
Therefore, for each virtual network (e.g., one with 400 lports), the
current binding strategy binds one lport to one chassis. If the
number of lports is smaller than the number of chassis, the allocation tends to
be unbalanced. That is the main reason I chose 400 ports per network.
The binding strategy is hard-coded in rally-ovs. If we want to try a
different binding strategy, we need to add a new parameter to the rally
task file; a sketch of what that could look like is below.
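A possible shape for such a knob (the strategy names and the pick_chassis() helper are illustrative, not existing rally-ovs code):

import random

def pick_chassis(strategy, i, chassis_list):
    """Return the chassis that lport i of a network should bind to."""
    if strategy == "round-robin":   # the current hard-coded behavior
        return chassis_list[i % len(chassis_list)]
    if strategy == "random":        # one possible alternative
        return random.choice(chassis_list)
    raise ValueError("unknown binding strategy: %s" % strategy)

The scenario would read the strategy string from the new task-file parameter and call pick_chassis() inside the binding loop above.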
Hi Team,
I studied some slides from Aliasgar, which were very helpful for running standalone testing.
For now, I want to use 3 KVM VMs as the cluster and several bare-metal machines as test farms.
Could someone help me modify the deployment/task JSON files to complete the scale test?
Regards,
Winson
We need to set up a CI/CD environment (e.g., a Jenkins job) to automatically test PRs.
Problem description: if a commit changes anything in rally-ovs, the rally Dockerfile does not pick up the change, because the rally Dockerfile clones the repo from upstream.
A follow-up PR is then needed to fix the issue, e.g., #53.
One possible solution is to generate the Dockerfile dynamically, using a jinja2 template.
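As a minimal sketch of that idea (the "Dockerfile.j2" template and its repo_url/branch variables are assumptions for illustration, not files in the repo today):

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("."))
dockerfile = env.get_template("Dockerfile.j2").render(
    repo_url="https://github.com/openvswitch/ovn-scale-test.git",
    branch="master")
with open("Dockerfile", "w") as f:
    f.write(dockerfile)

The CI job could then point repo_url/branch at the PR under test instead of at upstream.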
Currently, the ovn-scale-test CI scripts in the ci/ directory do not handle errors when things fail; they should propagate failures so that the underlying CI system can conclude the run was in error.
@muradkablan proposed a hybrid deployment for the OVN scale test. The motivation is that it lets us test how the control plane impacts the data plane.
[ovn-emulation-host]
host-1
host-2
[ovn-real-chassis]
host-3
host-4
Comments?
Hello @huikang @l8huang and @mestery,
It is very possible that I'm not looking at this the right way, but after deploying pure-OVN containers via ".../ci/scale-pure-ovn-hosts.sh", it seems that the containers are configured
with addresses that are not unique. Does that mean they all share the same network namespace?
See this gist -- or below -- for the output.
If that is working as expected, and there is a doc or a piece of code that explains
this, please forward it to me!
Thanks,
vagrant@pureovn:~$ docker ps
CONTAINER ID  IMAGE               COMMAND                 CREATED       STATUS       NAMES
952ee145831b  ovn-scale-test-ovn  "ovn_set_chassis 172."  27 hours ago  Up 27 hours  sandbox-172.16.200.14
403d0150b91e  ovn-scale-test-ovn  "ovn_set_chassis 172."  27 hours ago  Up 27 hours  sandbox-172.16.200.13
8f76eeec9c1b  ovn-scale-test-ovn  "ovn_set_chassis 172."  27 hours ago  Up 27 hours  sandbox-172.16.200.12
65cb610cb928  ovn-scale-test-ovn  "ovn_set_chassis 172."  27 hours ago  Up 27 hours  sandbox-172.16.200.11
352ea12d2df5  ovn-scale-test-ovn  "ovn_set_chassis 172."  27 hours ago  Up 27 hours  sandbox-172.16.200.10
8c3298e8a3bf  ovn-scale-test-ovn  "ovn-sandbox-northd.s"  27 hours ago  Up 27 hours  ovn-northd
97db778bd646  ovn-scale-test-ovn  "ovn-sandbox-south-ov"  27 hours ago  Up 27 hours  ovn-south-database
57ef825504ec  ovn-scale-test-ovn  "ovn-sandbox-north-ov"  27 hours ago  Up 27 hours  ovn-north-database

vagrant@pureovn:~$ for cid in $(docker ps -q) ; do docker inspect --format='{{.Name}} - {{.Path}}' $cid ; docker exec $cid ip a ; echo --- ; done

Every container -- the five sandboxes, /ovn-northd, /ovn-south-database, and /ovn-north-database -- prints the identical interface list (repeated output trimmed):

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:1a:e9:1a brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
    inet 172.16.20.100/16 scope global eth0
    inet 172.16.200.10/16 scope global secondary eth0
    inet 172.16.200.11/16 scope global secondary eth0
    inet 172.16.200.12/16 scope global secondary eth0
    inet 172.16.200.13/16 scope global secondary eth0
    inet 172.16.200.14/16 scope global secondary eth0
    inet6 fe80::a00:27ff:fe1a:e91a/64 scope link
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:57:f4:4f:75 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
    inet6 fe80::42:57ff:fef4:4f75/64 scope link
Running this:
sudo /usr/local/bin/ansible-playbook -i $OVN_DOCKER_HOSTS ansible/site.yml -e @$OVN_DOCKER_VARS -e action=clean
leaves ovn-rally running:
ubuntu@ovn-rally-2:~/ovn-scale-test/ci$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf5db6185290 ovn-scale-test-rally "/usr/sbin/sshd -D" 8 minutes ago Up 8 minutes ovn-rally
ubuntu@ovn-rally-2:~/ovn-scale-test/ci$
There are two kinds of options for the ovs/ovn command lines, global and command-specific, as documented in the ovs-vsctl man page:
"The ovs-vsctl command line begins with global options (see OPTIONS below for details). The global options are followed by one or more commands. Each command should begin with -- by itself as a command-line argument, to separate it from the following commands. (The
-- before the first command is optional.) The command itself starts with command-specific options, if any, followed by the command name and any arguments."
Right now there is no proper way to supply options for batch_mode in ovsclients_impl.py; the parameters it takes are command-specific options only. This needs to be fixed so that we can pass global options for batch-mode execution.
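As an illustration, batch execution could accept a separate list of global options and place them before the first "--" separator (the global_opts parameter is a hypothetical addition, not the current API):

def build_batch_cmdline(commands, global_opts=None):
    # global options come first; each batched command is introduced
    # by '--' followed by its own command-specific options
    parts = ["ovs-vsctl"] + list(global_opts or [])
    for cmd_opts, name, args in commands:
        parts += ["--"] + list(cmd_opts) + [name] + list(args)
    return " ".join(parts)

For example, build_batch_cmdline([([], "add-port", ["br0", "p1"])], ["--timeout=10"]) yields "ovs-vsctl --timeout=10 -- add-port br0 p1".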
Now that the logical router is added to the test, _create_networks() should be refactored so that localnet port creation can be disabled in lrouter-related scenarios. L2 scenarios should still keep the localnet port as the default behavior.
With a localnet port, an lswitch behaves in bridge mode instead of overlay mode. This may have little impact on control-plane scalability in the current implementation, but the impact could be huge in the future if we optimize bridge mode by not installing overlay-related flows for bridged L2 datapaths.
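A rough sketch of how such a switch could look (the create_localnet flag and the helper names are hypothetical, not the current rally_ovs code):

def _create_networks(self, network_create_args, create_localnet=True):
    networks = []
    for _ in range(network_create_args.get("networks_per_sandbox", 1)):
        net = self._create_lswitch(network_create_args)
        if create_localnet:
            # default L2 behavior: attach a localnet port to the lswitch
            self._create_localnet_port(net)
        networks.append(net)
    return networks

lrouter scenarios would then call _create_networks(..., create_localnet=False).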
Right now there is no easy way to inspect the commands that were executed by OvsClient instances in ovsclients_impl.py.
Adding support for logging the commands would make it easier to debug tests and/or replay the commands a test executed, without rerunning the whole scenario.
One potential option would be to add command logging before calls to self.ssh.run(..).
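A minimal sketch, assuming the stdlib logging module (the run() signature shown is illustrative, not the exact one in ovsclients_impl.py):

import logging

LOG = logging.getLogger(__name__)

def run(self, cmd, stdin=None):
    # log every command before it goes over SSH, so a failed test run
    # can be debugged or replayed command-by-command
    LOG.debug("executing: %s", cmd)
    return self.ssh.run(cmd, stdin=stdin)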
I'm seeing this as of today:
TASK [ovn : start OVN northbound database] *************************************
fatal: [192.168.0.37]: FAILED! => {"changed": false, "changes": ["{\"status\":\"Pulling repository docker.io/library/ovn-scale-test-ovn\"}\r\n", "{\"errorDetail\":{\"message\":\"Error: image library/ovn-scale-test-ovn not found\"},\"error\":\"Error: image library/ovn-scale-test-ovn not found\"}\r\n"], "failed": true, "msg": "Unrecognized status from pull.", "status": ""}
to retry, use: --limit @ansible/site.retry
PLAY RECAP *********************************************************************
192.168.0.37 : ok=12 changed=2 unreachable=0 failed=1
Due to this, ovn-scale-test seems to fail.
From the docs on the Ansible site, it appears that the docker module used in
roles/ovn is deprecated.
It should be changed to use docker_container and docker_image instead.
We currently encode all the logic for simulating network policies (i.e., ACL, Port_Group, Address_Set) in Python scenarios. It might be better and more flexible to have a generic mechanism for specifying such configuration externally. One option is described here:
"For the network policy related port-groups and ACLs, it seems to me better to be created as configuration (in JSON format) instead of adding to the code implementation. In the code we can add the support to apply whatever port-groups and ACLs that is configured, so that it is easier to test scalability of different port-group/ACL configurations."
CC: @hzhou8
CC: @LorenzoBianconi
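As an illustration, consuming such a JSON policy file could look like the sketch below (the "policy.json" name, its schema, and the nbctl.run() helper are all assumptions, not an agreed format):

import json

def apply_policy(nbctl, path="policy.json"):
    with open(path) as f:
        policy = json.load(f)
    for pg in policy.get("port_groups", []):
        # ovn-nbctl pg-add <group> [<port>...]
        nbctl.run("pg-add", args=[pg["name"]] + pg.get("ports", []))
    for acl in policy.get("acls", []):
        # ovn-nbctl acl-add <entity> <direction> <priority> <match> <verdict>
        nbctl.run("acl-add", args=[acl["entity"], acl["direction"],
                                   str(acl["priority"]), acl["match"],
                                   acl["action"]])

Scenarios would then only need apply_policy(), and testing different port-group/ACL configurations becomes a matter of editing the JSON.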
Currently, any scenario that involves port creation cannot be used with the Rally serial or constant runner when the number of repetitions (times) is greater than 1 and/or the number of parallel runners (concurrency) is greater than 1, unless we ignore the fact that multiple ports will be assigned the same IP address.
We could address this either by picking IP addresses at random (from the subnet range) or by keeping global state that tracks the next not-yet-assigned IP address in the pool; both options are sketched below.
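A minimal stdlib sketch of both (the /16 subnet is an example value; the counter would need a lock or shared store to be safe across concurrent runners):

import itertools
import random
from ipaddress import ip_network

SUBNET = ip_network(u"172.16.0.0/16")

def random_ip():
    # random pick from the subnet range: simple, but collisions remain possible
    return SUBNET[random.randrange(1, SUBNET.num_addresses - 1)]

_counter = itertools.count(1)

def next_ip():
    # global state: hands out each address exactly once
    return SUBNET[next(_counter)]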
Not using a Rally runner for repetitions prevents us from getting a timing profile from Rally, i.e., duration over iterations for each atomic task. An example of such a profile can be seen here:
Hello, I'm new to ovn-scale-test.
I followed the tutorial but got stuck with the below error when I ran:
ansible-playbook -i ansible/inventory/ovn-hosts ansible/site.yml -e action=deploy
failed: [11.11.11.11 -> 11.11.11.11] (item=[0, '172.16.200.10']) => {"failed": true, "item": [0, "172.16.200.10"], "msg": "docker-py doesn't seem to be installed, but is required for the Ansible Docker module."}
I did install docker-py by running:
pip install -U docker-py
These are my system specifications:
Ubuntu 4.4.0-34-generic
Docker version 1.11.2, build b9f10c9
Python 2.7.12
Please advise,
Thanks
@jtaleric points out that Rally scenarios should be autonomous, that is, they should not depend on system state created by scenarios that ran before them.
This is currently not true for the scenarios the Rally ovs plugin provides, because they depend on a running controller sandbox (one with ovn-northd and the NBDB/SBDB databases). Also, scenarios that involve port binding assume that fake chassis sandboxes are running.
Instead of having scenarios for creating/destroying sandboxes that we need to run before/after the actual workload scenario, it would make sense to extend existing context(s) or create a new one that will create sandboxes before the scenario runs and clean them up afterwards.
An example of such a context can be found in the Rally openstack plugin:
https://github.com/openstack/rally/blob/master/rally/plugins/openstack/context/nova/servers.py
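A rough sketch of such a context, modeled on Rally's context plugin interface (the sandbox helpers below are placeholders for the real creation/teardown logic in rally_ovs):

from rally.task import context

def create_sandboxes(config):
    # placeholder for the real controller/chassis sandbox creation
    return []

def destroy_sandbox(sandbox):
    # placeholder for the real teardown
    pass

@context.configure(name="sandboxes", order=100)
class SandboxContext(context.Context):
    def setup(self):
        # runs before the scenario: bring up the sandboxes it needs
        self.context["sandboxes"] = create_sandboxes(self.config)

    def cleanup(self):
        # runs after the scenario: tear the sandboxes down again
        for sandbox in self.context.get("sandboxes", []):
            destroy_sandbox(sandbox)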
Hello again,
Thanks for providing ovn-scale-test. I believe it will bring great value to the research community.
As a first-time user of the tool, I faced some difficulties installing it and playing with it, mostly due to some ambiguity in the instructions.
These are some of the issues I faced; I think it would be helpful to add some text clarifying them:
1- Installing and running the code as root.
The instructions don't say that you must download, install, and run the test as the root user. If you don't run as root, you will have a hard time figuring out why things go wrong.
2- SSH connections between hosts, and between containers and hosts.
This is also related to running as root. In ansible/group_vars/all.yml there is a variable "deploy_user" whose default is "rally". I was able to deploy but couldn't run any workload and kept getting SSH failure errors because my machine was using the user name rally. I had to change it to "root", and everything worked after that.
3- Docker images.
It seems that the Docker images used for the emulated chassis are not up to date with the ovs repository, or, more specifically, not in sync with the image used to build the ovn-rally container. This causes problems when the two images contain different versions of the ovs commands.
4- Terminologies.
5- What is the difference between a node that runs the ovn-rally workload and one that runs the rally container, as mentioned here?
Currently, the scale testing waits for port state UP in ovn-nb and then starts the next round of port creation and binding. This is inaccurate, because when CPU is at 100% on the test farms it does not reflect the real time the ovn-controllers spent completing each round of processing.
With the new "--wait hv" feature [1], we can now wait for the port bindings to be processed and reflected on all HVs before starting the next round, so that the real processing time is captured and we get a more accurate result for port-binding performance.
There are limited workload scenarios in ovn-scale-test, e.g., creating networks, creating and binding ports, and ACLs [1]. We need to add more workload scenarios.
Let's use this thread to collect ideas about useful scenarios. Thanks.
[1]https://github.com/openvswitch/ovn-scale-test/tree/master/ansible/roles/rally/templates
Folks, I have a repository here with some automation around the Ansible work done in the ovn-scale-test repository. I'd like to merge this back into the ovn-scale-test repository, but I'd like feedback on where to merge it. The idea is that this can be used to automate the Ansible pieces here in a CI/CD environment. We'll be implementing this downstream in our IBM Public Cloud CI/CD, and I'd like to eventually get it implemented upstream here as well.
So feedback on where this lives is greatly appreciated!
Install the skydive controller into its own container, and also add the skydive agents into each of the containers running ovn-controller.
When I run the scale test, ovn-sbctl gets stuck. I found that the ovnsb raft leader had changed from central-1 to central-2; after that, the command "ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-4-55" on central-1 hangs.
rally log: END: Error SSHTimeout: Timeout executing command 'sudo docker exec ovn-central-1 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-4-55' on host...
Processes on central-1:
[root@ovn-central-1 ovn]# ps aux | grep ovn-sbctl
root 16818 0.0 0.0 41800 6268 ? Ss 10:14 0:01 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-4-55
root 16946 0.0 0.0 41800 6128 ? Ss 11:14 0:01 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-1-56
root 17072 0.0 0.0 41800 6256 ? Ss 12:14 0:00 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-4-56
root 17198 0.0 0.0 41708 5876 ? Ss 13:14 0:00 ovn-sbctl --no-leader-only --bare --columns _uuid find chassis name=ovn-scale-1-57
The error message is:
2016-06-28 21:35:08.282 54 INFO rally_ovs.plugins.ovs.scenarios.ovn [-] create 1 ACLs on lswitch lswitch_841788_YtXfYU
2016-06-28 21:35:08.284 54 INFO rally.task.runner [-] Task 84178824-0d05-4c85-a6e1-d36ea8999f7c | ITER: 0 END: Error NameError: global name 'pipes' is not defined
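That NameError usually means the module calls pipes.quote() without importing the module; on Python 2 the likely fix is just adding the import (the exact location in rally_ovs is an assumption):

import pipes  # provides pipes.quote() for shell-safe quoting

pipes.quote("name with spaces")  # -> "'name with spaces'"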
The current CI job script runs all emulated chassis on a single host. We need to distribute them onto multiple hosts.
Hi, Lei
I found two unclear places in the ovn rally installation guide.
My two cents. Thanks. - Hui
[1] https://github.com/openvswitch/ovn-scale-test/blob/master/doc/source/install.rst#install-ovn-scale-test
[2] https://github.com/openvswitch/ovn-scale-test/blob/master/doc/source/install.rst#install-rally
Using ovn-nbctl to talk to NBDB has drawbacks: with ovn-nbctl, the output format can change, so parsing it is fragile. If parsing the output of ovn-nbctl becomes too cumbersome, or generating workload with it turns out to be not efficient enough, we should consider switching to ovsdbapp, a client library for OVSDB used by OpenStack.
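A rough, untested sketch of the ovsdbapp equivalent (the connection string and switch name are example values; the calls follow ovsdbapp's OVN_Northbound bindings):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_northbound import impl_idl

idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6641", "OVN_Northbound")
conn = connection.Connection(idl=idl, timeout=60)
nb = impl_idl.OvnNbApiIdlImpl(conn)

nb.ls_add("lswitch_demo").execute(check_error=True)  # structured results,
switches = nb.ls_list().execute(check_error=True)    # no text parsing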