
ovn-kubernetes's Introduction


ovn-kubernetes: A robust Kubernetes networking platform


Welcome to ovn-kubernetes

OVN-Kubernetes (Open Virtual Networking - Kubernetes) is an open-source project that provides a robust networking solution for Kubernetes clusters, with OVN (Open Virtual Networking) and Open vSwitch at its core. It is a conformant Kubernetes networking plugin written according to the CNI (Container Network Interface) specification.

Here are some links to help in your ovn-kubernetes journey:

License

Everything is distributed under the terms of the Apache License, Version 2.0.

Who uses OVN-Kubernetes?

See our Adopters. If your organization or project uses OVN-Kubernetes, please file a PR and update this list. Say hi on Slack too!


ovn-kubernetes's Issues

Multiple network support.

There is talk in the K8s community of supporting more than one network interface per container, for example a separate network for storage.

Doing such a thing is very easy with OVN, but we should figure out a nice workflow for how we want to support this even if there is no upstream K8s support. If upstream support comes about, we can transition to it.

One way to do this is to create a "network" directly in the OVN database and then provide that network as an annotation in the pod spec. When the pod is created, the watcher notices the additional network, creates a logical port in it, and then annotates the pod with the IP and MAC addresses for that additional network.

The CNI plugin should then set up both network interfaces.

For north-south connectivity, the admin can then add gateway routers to that switch.
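A minimal sketch of that flow, assuming a hypothetical annotation key and illustrative names (the real key and watcher behaviour would still have to be designed):

# Create the additional "storage" network directly in the OVN NB database.
ovn-nbctl ls-add storage-net
ovn-nbctl lsp-add storage-net storage-net_default_mypod
ovn-nbctl lsp-set-addresses storage-net_default_mypod "0a:00:00:00:10:05 192.168.100.5"

# Point the pod at the extra network via an annotation; the key below is
# hypothetical -- the watcher would have to recognize it, create the logical
# port, and write the assigned MAC/IP back onto the pod.
kubectl annotate pod mypod ovn.kubernetes.io/extra-network=storage-net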

go-controller does not sync deleted events

Reproduction:

  • Start go-controller
  • Create a pod -> This creates a logical port in OVN
  • Stop go-controller
  • Delete a pod
  • Start go-controller after some time -> This does not delete the logical port in OVN.

go-controller should delete the logical port in OVN.
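Until such a sync exists, a hedged sketch of the manual equivalent after a restart: compare the logical ports on a node's switch against the pods the API server still has, and delete the leftovers (node and port names illustrative):

NODE=k8sminion1
# Logical ports currently on this node's switch
ovn-nbctl lsp-list "$NODE"
# Pods the API server still knows about on that node
kubectl get pods --all-namespaces -o wide | grep "$NODE"
# Any port with no matching pod is stale and can be removed, e.g.:
ovn-nbctl lsp-del default_old-pod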

Use OVN native DNS.

Patches that add native distributed DNS support are currently out for review on the OVS mailing list. Look at adding it to the Kubernetes integration.

OVN controller cpu hogging

Greetings,

I have been trying to set up ovn-kubernetes for a 1-master, 2-minion scenario.
The ovn-controller on the master node is always running at ~99% CPU, causing packet drops even for ping requests to one of the minions.

Side note: the 2nd minion, although started and discovered successfully, cannot ping the master node.

Is the CPU hogging expected behaviour?

Thank you,

Stavros

Underlay mode solve double overlay?

Hi @shettyg and team,
First, thanks for your amazing work; it's valuable to me.
I'm looking for a network solution for a k8s cluster running on OpenStack VMs (OpenStack with a VXLAN overlay), and the big issue is the double overlay. Will ovn-k8s in underlay mode solve that when I run k8s on OpenStack VMs?

Thanks

Bug: reference pod targetPort by name doesn't work

Having the following descriptors...

iis-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: iis
  labels:
    component: iis-http-server
spec:
  containers:
  - name: iis
    image: microsoft/iis:nanoserver
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      name: iis-http
  nodeSelector:
    beta.kubernetes.io/os: windows

iis-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: iis
spec:
  clusterIP: 10.100.28.123
  selector:
    component: iis-http-server
  ports:
  - name: http
    port: 80
    targetPort: iis-http

...works in Kubernetes, because the endpoints controller in controller-manager can map the service's targetPort: iis-http to the pod's containerPort: 80 (matched by name: iis-http).

However, OVN doesn't follow:

$ sudo ovn-nbctl list Load_Balancer
_uuid               : a53b0274-aeb9-4ec0-9668-acbb09077297
external_ids        : {"k8s-cluster-lb-tcp"=yes}
protocol            : []
vips                : {"10.100.0.10:53"="172.17.0.2:53", "10.100.0.1:443"="10.142.0.2:443", "10.100.28.123:80"="10.244.5.234:iis-http"}

_uuid               : 99f881dd-b0cf-492e-8008-2cc1be371829
external_ids        : {"k8s-cluster-lb-udp"=yes}
protocol            : udp
vips                : {"10.100.0.10:53"="172.17.0.2:53"}

The issue is "10.244.5.234:iis-http".

The CNI plugin should use the api server secure port

Leveraging the insecure port (default 8080) is easy and simplifies the implementation.
In order to support scenarios a bit closer to production, the CNI plugin should use the secure port (default 6443).

This issue will deal with identifying the authentication methods the CNI plugin must support and defining the changes in the CNI plugin itself.
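For reference, the per-node OVS configuration already carries the API server location and a token (see the ovs-vsctl call in the ovnkube gateway issue further down); a hedged sketch of pointing it at the secure port instead, where the CA key name is an assumption:

ovs-vsctl set Open_vSwitch . \
    external_ids:k8s-api-server="https://10.142.0.10:6443" \
    external_ids:k8s-api-token="$TOKEN" \
    external_ids:k8s-ca-certificate="/etc/kubernetes/pki/ca.crt"  # key name assumed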

How to debug ovn load balancer?

On a cluster with 40 nodes, the service is stored in the OVN load balancer table, but it does not work on some of the nodes. How do I debug what is wrong?

[root@netdev75-2 ~]# ovn-nbctl --db="tcp:10.254.72.1:6641" find load_balancer
_uuid               : e9e9c620-ffa3-4560-b761-9f31a7c15674
external_ids        : {"k8s-cluster-lb-tcp"=yes}
name                : ""
protocol            : []
vips                : {"172.30.0.1:443"="10.254.72.1:8443", "172.30.0.1:53"="10.254.72.1:8053"}

_uuid               : 5318319a-3f38-40a9-8909-cc0166007fcd
external_ids        : {"k8s-cluster-lb-udp"=yes}
name                : ""
protocol            : udp
vips                : {"172.30.0.1:53"="10.254.72.1:8053"}
[root@netdev75-2 ~]# curl -k https://10.254.72.1:8443
{
  "paths": [
    "/api",
    "/api/v1",
...
...
[root@netdev75-2 ~]# curl -k https://172.30.0.1:443/
curl: (7) Failed to connect to 172.30.0.1 port 443: Connection timed out
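A hedged checklist for narrowing this down on one of the broken nodes (database addresses taken from the output above, switch name illustrative):

# Is the cluster TCP load balancer attached to this node's logical switch?
ovn-nbctl --db="tcp:10.254.72.1:6641" get logical_switch <node-switch> load_balancer

# Is the node's chassis registered and are its tunnels present?
ovn-sbctl --db="tcp:10.254.72.1:6642" show

# Are the geneve tunnel ports on the node healthy (no "could not add ..." errors)?
ovs-vsctl show | grep -A3 'type: geneve'

ovn-trace against the southbound database can then simulate a pod-to-VIP packet to see where it is dropped.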


vagrant: services do not expose ports as expected

Using the vagrant environment, I ran through the k8s startup tutorial to test the basic functionality and ran into a few problems around connectivity.

$ vagrant up

[truncate deployment/kubernetes-bootcamp creation]

# starts, but never replies to queries
$ kubectl proxy

# starts a service, but doesn't route packets to the pods
$ kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
$ kubectl get services
NAME                  CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE       SELECTOR
kubernetes-bootcamp   192.168.200.103   <nodes>       8080:32035/TCP   16m       run=kubernetes-bootcamp
$ curl k8smaster:8080 # never responds

# After scaling to 4 pods:
kubectl get pods -o wide
NAME                                  READY     STATUS    RESTARTS   AGE       IP            NODE
kubernetes-bootcamp-390780338-cv6m0   1/1       Running   0          12m       192.168.3.4   k8sminion2
kubernetes-bootcamp-390780338-nss00   1/1       Running   0          12m       192.168.3.3   k8sminion2
kubernetes-bootcamp-390780338-qbm3v   1/1       Running   0          12m       192.168.2.4   k8sminion1
kubernetes-bootcamp-390780338-zxj0s   1/1       Running   0          21m       192.168.2.3   k8sminion1
$ kubectl exec -it kubernetes-bootcamp-390780338-qbm3v /bin/bash
$> curl 192.168.2.4:8080 # self
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-390780338-qbm3v | v=1
$> curl 192.168.2.3:8080 # pod on same node
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-390780338-zxj0s | v=1
$> curl 192.168.3.4:8080 # pod on other node, fails
^C

No way to remove gateway routers

There should be a utility to undo a 'gateway-init' on a node. This is specifically useful when nodes are taken out of the cluster.
For example, breth0 should be destroyed and eth0 restored as a regular interface on the host.
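Until such a utility exists, a hedged sketch of the manual teardown, using the naming conventions visible elsewhere in these issues (GR_<node>, ext_<node>, jtor-GR_<node>, breth0):

NODE=ip-10-5-46-236
# Northbound side: remove the gateway router, its external switch, and its join-switch port
ovn-nbctl lr-del "GR_${NODE}"
ovn-nbctl ls-del "ext_${NODE}"
ovn-nbctl lsp-del "jtor-GR_${NODE}"
# On the node itself: delete the gateway bridge and move the IP/route back onto eth0
ovs-vsctl del-br breth0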

master node was NotReady

After I deploy the ovn-k8s plugin, the master node's status is NotReady while the minion node's status is Ready. In this case, can the k8s cluster work well?

ovn-nbctl: Logical_Router does not contain a column whose name matches "load_balancer"

commit 902ae64 is broken:

ovn-nbctl --timeout=5 -vconsole:off --db=tcp:172.17.8.101:6641 set logical_router GR_172.17.8.101 load_balancer=ea966794-ca55-4db7-95a9-eff3cd4739e9 
ovn-nbctl: Logical_Router does not contain a column whose name matches "load_balancer"

And ovn-nb really doesn't have this column:

ovn-nbctl --timeout=5 -vconsole:off --db=tcp:172.17.8.101:6641 list logical_router
_uuid               : cb65892f-b62c-43ce-a78b-daf952abc6e5
enabled             : []
external_ids        : {}
name                : "GR_172.17.8.101"
nat                 : []
options             : {chassis="7d5ea627-475c-425c-ac1b-705ec1c160a9"}
ports               : [e0f1ff69-b8d4-447d-b285-fdc13112faa3]
static_routes       : [ce6968f9-a26a-4472-9c0c-2b62331d6f3c]
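This looks like a version mismatch: the commit expects a Logical_Router.load_balancer column that the running northbound schema does not have yet. A hedged way to confirm what is actually deployed:

# Schema version served by the northbound database
ovsdb-client get-schema-version tcp:172.17.8.101:6641 OVN_Northbound
# Version of the local OVN tools
ovn-nbctl --version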

Document non gateway modes

Gateway nodes are needed for north-south connectivity. OVN does have support for multiple gateway nodes, but the documentation only talks about one gateway node.

Other use cases can be interesting too.

Dependencies should specify required package versions

Environment:
Centos7
Python 2.7, pip 8.1.2

Action:
Trying to create the following pod: https://github.com/Boostport/kubernetes-vault/blob/master/deployments/quick-start/vault.yaml

kubectl gives a plain "FailedSync Error syncing pod"; kubelet is more specific, though.

Traceback (most recent call last):
  File "/opt/cni/bin/ovn_cni", line 28, in <module>
    from ovn_k8s.common import kubernetes
  File "/usr/lib/python2.7/site-packages/ovn_k8s/common/kubernetes.py", line 16, in <module>
    import requests
  File "/usr/lib/python2.7/site-packages/requests/__init__.py", line 58, in <module>
    from . import utils
  File "/usr/lib/python2.7/site-packages/requests/utils.py", line 32, in <module>
    from .exceptions import InvalidURL
  File "/usr/lib/python2.7/site-packages/requests/exceptions.py", line 10, in <module>
    from .packages.urllib3.exceptions import HTTPError as BaseHTTPError
  File "/usr/lib/python2.7/site-packages/requests/packages/__init__.py", line 95, in load_module
    raise ImportError("No module named '%s'" % (name,))
ImportError: No module named 'requests.packages.urllib3'
E0719 15:49:56.378117 12558 cni.go:312] Error deleting network: netplugin failed but error parsing its diagnostic message "": unexpected end of JSON input

A simple yum install -y python-requests solves this, but it is quite annoying to have to dig for the root cause, and the OVN-specific logs give no information about it.

go-controller - ovnkube node-init fails when -init-gateways true

When running ovnkube --init-node with -init-gateways true (full --init-node command below):

./ovnkube --init-node bsteciuk-worker-linux -ca-cert "/etc/kubernetes/pki/ca.crt" \
 -token "${TOKEN}" -apiserver "https://10.142.0.10:6443" \
 -ovn-north-db "tcp://10.142.0.10:6641" -ovn-south-db "tcp://10.142.0.10:6642" \
 -init-gateways true -gateway-interface ens4 -nodeport true

ovnkube fails with the following:

INFO[0000] Node bsteciuk-worker-linux ready for ovn initialization with subnet 10.111.2.0/24 
Feb 13 16:29:40 bsteciuk-worker-linux systemd[1]: Started LSB: Open vSwitch switch.
Feb 13 16:29:40 bsteciuk-worker-linux ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl set Open_vSwitch . "external_ids:ovn-nb=\"tcp:10.142.0.10:6641\"" "external_ids:ovn-remote=\"tcp:10.142.0.10:6642\"" external_ids:ovn-encap-ip=10.142.0.11 "external_ids:ovn-encap-type=\"geneve\"" "external_ids:k8s-api-server=\"https://10.142.0.10:6443\"" "external_ids:k8s-api-token=\"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJvdm4tY29udHJvbGxlci10b2tlbi10aDQ2cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJvdm4tY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZmOTY5NzdiLTEwY2UtMTFlOC1hNDM1LTQyMDEwYThlMDAwYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpvdm4tY29udHJvbGxlciJ9.YRYutvUR1uBSAVA227r1ThgJ8GrwGwMdR9o8cIgRdpqQc_ScFn9dKaji-rAMhcxyqDOXeYUi1UZBY_CHIhoIimZiHC3TN-9evsyKvlRmFR9OG_kvQ9zVhwy3gq-VqA2uDbzEQxH-mGDmytyFdQc4zTxVeHAzWl375GHhf3KmHY0ZyobG4aG4MHbOvbgQpsqrlwbEUFB8XUdcr81EBklNcotzhP8YoGO1Ryo0ObMx4UMGNzXBNO9cAYFmBUMwW57ZR_qYFcZ243FSv51TsaV3qjebVKs0I89m3K7nZtaDRFdbTTXSAG7WU8nNZ6czaDHozf4JUYzAGGkO5Ednamf_2w\""
Feb 13 16:29:40 bsteciuk-worker-linux systemd[1]: Stopping LSB: OVN host components...
Feb 13 16:29:40 bsteciuk-worker-linux ovn-host[7357]:  * Exiting ovn-controller (3818)
Feb 13 16:29:40 bsteciuk-worker-linux kernel: [ 4624.020155] device genev_sys_6081 left promiscuous mode
Feb 13 16:29:40 bsteciuk-worker-linux systemd[1]: Stopped LSB: OVN host components.
Feb 13 16:29:40 bsteciuk-worker-linux systemd[1]: Starting LSB: OVN host components...
Feb 13 16:29:40 bsteciuk-worker-linux ovn-host[7385]:  * Starting ovn-controller
INFO[0000] Open configuration file /etc/openvswitch/ovn_k8s.conf error: open /etc/openvswitch/ovn_k8s.conf: no such file or directory, use default values 
Feb 13 16:29:40 bsteciuk-worker-linux systemd[1]: Started LSB: OVN host components.
Feb 13 16:29:40 bsteciuk-worker-linux ovn-nbctl: ovs|00001|nbctl|INFO|Called as /usr/bin/ovn-nbctl --db=tcp:10.142.0.10:6641 --timeout=5 -- --may-exist ls-add bsteciuk-worker-linux -- set logical_switch bsteciuk-worker-linux other-config:subnet=10.111.2.0/24 external-ids:gateway_ip=10.111.2.1/24
Feb 13 16:29:40 bsteciuk-worker-linux kernel: [ 4624.168322] device genev_sys_6081 entered promiscuous mode
Feb 13 16:29:40 bsteciuk-worker-linux ovn-nbctl: ovs|00001|nbctl|INFO|Called as /usr/bin/ovn-nbctl --db=tcp:10.142.0.10:6641 --timeout=5 -- --may-exist lsp-add bsteciuk-worker-linux stor-bsteciuk-worker-linux -- set logical_switch_port stor-bsteciuk-worker-linux type=router options:router-port=rtos-bsteciuk-worker-linux "addresses=\"00:00:00:3E:0F:34\""
Feb 13 16:29:40 bsteciuk-worker-linux systemd-udevd[7429]: Could not generate persistent MAC address for genev_sys_6081: No such file or directory
Feb 13 16:29:40 bsteciuk-worker-linux ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=5 -- --may-exist add-br br-int
Feb 13 16:29:40 bsteciuk-worker-linux ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=5 -- --may-exist add-port br-int k8s-bsteciuk-wo -- set interface k8s-bsteciuk-wo type=internal mtu_request=1400 external-ids:iface-id=k8s-bsteciuk-worker-linux
Feb 13 16:29:40 bsteciuk-worker-linux ovn-nbctl: ovs|00001|nbctl|INFO|Called as /usr/bin/ovn-nbctl --db=tcp:10.142.0.10:6641 --timeout=5 -- --may-exist lsp-add bsteciuk-worker-linux k8s-bsteciuk-worker-linux -- lsp-set-addresses k8s-bsteciuk-worker-linux "e2:4d:21:c3:f6:15 10.111.2.2"
Feb 13 16:29:41 bsteciuk-worker-linux ovn-nbctl: ovs|00001|nbctl|INFO|Called as /usr/bin/ovn-nbctl --db=tcp:10.142.0.10:6641 --timeout=5 set logical_switch bsteciuk-worker-linux load_balancer=cf9f654e-aa26-4b42-ba7b-69499507fa34
Feb 13 16:29:41 bsteciuk-worker-linux ovn-nbctl: ovs|00001|nbctl|INFO|Called as /usr/bin/ovn-nbctl --db=tcp:10.142.0.10:6641 --timeout=5 add logical_switch bsteciuk-worker-linux load_balancer 418b52f5-6d19-4611-a991-65b8f0b0d8b2
Feb 13 16:29:41 bsteciuk-worker-linux ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=5 -- --may-exist add-br brens4 -- br-set-external-id brens4 bridge-id brens4 -- set bridge brens4 fail-mode=standalone other_config:hwaddr=42:01:0a:8e:00:0b -- --may-exist add-port brens4 ens4
INFO[0000] Successfully created OVS bridge "brens4"     
INFO[0000] Successfully saved addr "10.142.0.11/32 brens4" to bridge "brens4" 
Feb 13 16:29:41 bsteciuk-worker-linux kernel: [ 4624.246838] device brens4 entered promiscuous mode
Feb 13 16:29:41 bsteciuk-worker-linux kernel: [ 4624.247086] device ens4 entered promiscuous mode
ERRO[0000] Add route to bridge "brens4" failed: network is unreachable 
ERRO[0000] Failed to convert nic ens4 to OVS bridge (network is unreachable) 
panic: Failed to convert nic ens4 to OVS bridge (network is unreachable)

goroutine 1 [running]:
main.main()
	/home/bsteciuk/go/src/github.com/ovn-kubernetes/go-controller/_output/go/src/github.com/openvswitch/ovn-kubernetes/go-controller/cmd/ovnkube/ovnkube.go:177 +0x181c

It looks like the failure occurs in nicstobridge.go:51 with "network is unreachable" while trying to add a route to the OVS bridge. I don't see this failure when -init-gateways true is not set.

Worth noting: this leaves the node in a state where it can no longer be accessed over the network, nor can it make outbound connections. This was on a cluster running in GCE.

SSH access to vagrant VMs is denied

Hello, I have been trying to set up the Vagrant script for ovn-kubernetes. Everything works fine until the script tries to access the VM:

k8s-master: Forwarding ports...
k8s-master: 22 (guest) => 2222 (host) (adapter 1)
==> k8s-master: Running 'pre-boot' VM customizations...
==> k8s-master: Booting VM...
==> k8s-master: Waiting for machine to boot. This may take a few minutes...
k8s-master: SSH address: 127.0.0.1:2222
k8s-master: SSH username: ubuntu
k8s-master: SSH auth method: password
k8s-master: Warning: Remote connection disconnect. Retrying...
k8s-master: Warning: Remote connection disconnect. Retrying...

This goes on for several seconds, and if you run vagrant up two more times you have your VMs ready.

However, when you try to access them via ssh, you cannot, whether using ubuntu/ubuntu or vagrant/vagrant, or by changing the Vagrantfile, setting a username and password, and reloading the configuration.

Thank you,

Akis

Random crashes in watcher

From time to time, I see this in the ovn-k8s-watcher logs:

ovn-k8s-watcher[10746]: ovs| 2533| watcher  (GreenThread-3) | ERR | Failure in watcher EndpointWatcher
                                                        Traceback (most recent call last):
                                                          File "/usr/local/lib/python2.7/dist-packages/ovn_k8s/watcher/watcher.py", line 59, in _process_func
                                                            watcher.process()
                                                          File "/usr/local/lib/python2.7/dist-packages/ovn_k8s/watcher/endpoint_watcher.py", line 80, in process
                                                            self._process_endpoint_event)
                                                          File "/usr/local/lib/python2.7/dist-packages/ovn_k8s/common/util.py", line 77, in process_stream
                                                            line = next(data_stream)
                                                        StopIteration
ovn-k8s-watcher[10746]: ovs| 2534| watcher  (GreenThread-3) | WARN | Regenerating watcher because of "" and reconnecting to stream using function _create_k8s_endpoint_watcher
gcp-next-master ovn-k8s-watcher[10746]: 2017-02-24T16:14:09Z | 2533| watcher  (GreenThread-3) | ERR | Failure in watcher EndpointWatcher
 ovn-k8s-watcher[10746]: Traceback (most recent call last):
ovn-k8s-watcher[10746]:   File "/usr/local/lib/python2.7/dist-packages/ovn_k8s/watcher/watcher.py", line 59, in _process_func
ovn-k8s-watcher[10746]:     watcher.process()
ovn-k8s-watcher[10746]:   File "/usr/local/lib/python2.7/dist-packages/ovn_k8s/watcher/endpoint_watcher.py", line 80, in process
ovn-k8s-watcher[10746]:     self._process_endpoint_event)
ovn-k8s-watcher[10746]:   File "/usr/local/lib/python2.7/dist-packages/ovn_k8s/common/util.py", line 77, in process_stream
ovn-k8s-watcher[10746]:     line = next(data_stream)
ovn-k8s-watcher[10746]: StopIteration
ovn-k8s-watcher[10746]: 2017-02-24T16:14:09Z | 2534| watcher  (GreenThread-3) | WARN | Regenerating watcher because of "" and reconnecting to stream using function _create_k8s_endpoint_watcher

Is this normal?

can't seem to get east west communication on windows on AWS working

Can anyone help me debug?

On the master

root@ip-10-5-35-142:~# ovn-sbctl list chassis
_uuid               : b386b220-6353-4dc8-9635-fe643981af8a
encaps              : [23fd3c37-021d-414d-910d-285f034d39c0]
external_ids        : {datapath-type="", iface-types="geneve,gre,internal,lisp,patch,stt,system,vxlan", ovn-bridge-mappings=""}
hostname            : "EC2AMAZ-3S8E4H7"
name                : "364fb59d-0213-4192-a8cf-40cebfc75144"
nb_cfg              : 9
vtep_logical_switches: []

_uuid               : 43d5a849-5495-4908-bcfe-4d875f5a6578
encaps              : [31b30b1a-72e5-4b71-b99c-81dead030915]
external_ids        : {datapath-type="", iface-types="geneve,gre,internal,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
hostname            : "ip-10-5-53-47.eu-west-1.compute.internal"
name                : "95ef860a-8dfd-44ac-a332-8c8b3376c84c"
nb_cfg              : 9
vtep_logical_switches: []

_uuid               : fb5ba6fd-2c6e-4ff3-9034-cff68482eca8
encaps              : [849b6530-aa76-4463-904e-2387a0668b92]
external_ids        : {datapath-type="", iface-types="geneve,gre,internal,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
hostname            : "ip-10-5-35-142.eu-west-1.compute.internal"
name                : "f06c10d0-8e44-42d1-aa65-c2785b66e172"
nb_cfg              : 9
vtep_logical_switches: []

_uuid               : b9bf5a56-79e0-4f0b-869c-070be97f6682
encaps              : [ca980755-7582-4537-8127-5b31feb1bdd9]
external_ids        : {datapath-type="", iface-types="geneve,gre,internal,lisp,patch,stt,system,vxlan", ovn-bridge-mappings=""}
hostname            : "EC2AMAZ-04IFR3J"
name                : "b037bfa2-bd40-413b-b2f9-8c6384f581e9"
nb_cfg              : 0
vtep_logical_switches: []

_uuid               : f18d1ace-3f29-41f9-864c-5bac3eef60a3
encaps              : [a3922962-b7a7-4638-adfb-a6a8640af92f]
external_ids        : {datapath-type="", iface-types="geneve,gre,internal,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
hostname            : "ip-10-5-46-236.eu-west-1.compute.internal"
name                : "ba7e1d8a-6059-42aa-9e19-4bc2bcec717d"
nb_cfg              : 9
vtep_logical_switches: []

_uuid               : 63de3d3f-0e23-4517-9a5b-505b0fd2d8a3
encaps              : [f6b5e2b3-350c-4409-888f-08b1f56f1819]
external_ids        : {datapath-type="", iface-types="geneve,gre,internal,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
hostname            : "ip-10-5-38-109.eu-west-1.compute.internal"
name                : "239a9799-4ccf-42d7-8ba0-c3391ff1a001"
nb_cfg              : 9
vtep_logical_switches: []

_uuid               : 79c4fdbf-1fe3-474f-b7fe-9c8ec31d795a
encaps              : [b7e84f60-0200-4ea1-b133-3884892fba10]
external_ids        : {datapath-type="", iface-types="geneve,gre,internal,lisp,patch,stt,system,vxlan", ovn-bridge-mappings=""}
hostname            : "EC2AMAZ-04IFR3J"
name                : "a312f894-4a8a-4776-b3b7-3880e2b999a4"
nb_cfg              : 0
vtep_logical_switches: []
root@ip-10-5-35-142:~# ovn-sbctl show
Chassis "364fb59d-0213-4192-a8cf-40cebfc75144"
    hostname: "EC2AMAZ-3S8E4H7"
    Encap geneve
        ip: "10.5.54.78"
        options: {csum="true"}
    Port_Binding "k8s-EC2AMAZ-3S8E4H7"
    Port_Binding "guestbook-2566412744-w487z"
    Port_Binding "guestbook-2566412744-f0spj"
Chassis "95ef860a-8dfd-44ac-a332-8c8b3376c84c"
    hostname: "ip-10-5-53-47.eu-west-1.compute.internal"
    Encap geneve
        ip: "10.5.53.47"
        options: {csum="true"}
    Port_Binding "default_linux-deployement-4104990959-4px6c"
    Port_Binding "default_linux-deployement-4104990959-lcsrs"
    Port_Binding "k8s-ip-10-5-53-47"
Chassis "f06c10d0-8e44-42d1-aa65-c2785b66e172"
    hostname: "ip-10-5-35-142.eu-west-1.compute.internal"
    Encap geneve
        ip: "10.5.35.142"
        options: {csum="true"}
    Port_Binding "etor-GR_ip-10-5-46-236"
    Port_Binding "rtoe-GR_ip-10-5-46-236"
    Port_Binding "rtoj-GR_ip-10-5-46-236"
    Port_Binding "jtor-GR_ip-10-5-46-236"
    Port_Binding "k8s-ip-10-5-35-142"
Chassis "b037bfa2-bd40-413b-b2f9-8c6384f581e9"
    hostname: "EC2AMAZ-04IFR3J"
    Encap geneve
        ip: "10.5.62.104"
        options: {csum="true"}
Chassis "ba7e1d8a-6059-42aa-9e19-4bc2bcec717d"
    hostname: "ip-10-5-46-236.eu-west-1.compute.internal"
    Encap geneve
        ip: "10.5.46.236"
        options: {csum="true"}
    Port_Binding "brens3_ip-10-5-46-236"
Chassis "239a9799-4ccf-42d7-8ba0-c3391ff1a001"
    hostname: "ip-10-5-38-109.eu-west-1.compute.internal"
    Encap geneve
        ip: "10.5.38.109"
        options: {csum="true"}
    Port_Binding "k8s-ip-10-5-38-109"
    Port_Binding "kube-system_kube-dns-887593761-xs9qz"
    Port_Binding "default_linux-deployement-4104990959-fsxs7"
Chassis "a312f894-4a8a-4776-b3b7-3880e2b999a4"
    hostname: "EC2AMAZ-04IFR3J"
    Encap geneve
        ip: "10.5.62.104"
        options: {csum="true"}
 root@ip-10-5-35-142:~# ovn-nbctl show
    switch 22875ec4-3dd8-4c10-b1fd-31b430d4d824 (ip-10-5-38-109)
        port k8s-ip-10-5-38-109
            addresses: ["32:57:f8:5d:ad:80 10.244.10.2"]
        port stor-ip-10-5-38-109
            addresses: ["00:00:00:EF:9E:EA"]
        port kube-system_kube-dns-887593761-xs9qz
            addresses: ["dynamic"]
        port default_linux-deployement-4104990959-fsxs7
            addresses: ["dynamic"]
    switch 21ff312c-103a-4933-8b27-92fad75cbd7e (EC2AMAZ-04IFR3J)
        port stor-EC2AMAZ-04IFR3J
            addresses: ["00:00:00:F7:1B:8C"]
    switch 02c34319-fb1b-460b-9d92-14b4fd8347bc (ip-10-5-53-47)
        port default_linux-deployement-4104990959-lcsrs
            addresses: ["dynamic"]
        port k8s-ip-10-5-53-47
            addresses: ["e2:6c:ee:63:2a:da 10.244.11.2"]
        port default_linux-deployement-4104990959-4px6c
            addresses: ["dynamic"]
        port stor-ip-10-5-53-47
            addresses: ["00:00:00:97:E8:AE"]
    switch 517f9775-9baf-4ec8-b4b0-ddc8541039b4 (EC2AMAZ-3S8E4H7)
        port guestbook-2566412744-mnx6s
            addresses: ["00:15:5d:4b:63:9f 10.244.39.190"]
        port guestbook-2566412744-f0spj
            addresses: ["00:15:5d:4b:6e:71 10.244.39.199"]
        port guestbook-2566412744-w487z
            addresses: ["00:15:5d:4b:62:41 10.244.39.177"]
        port guestbook-2566412744-khh2t
            addresses: ["00:15:5d:4b:6a:d8 10.244.39.38"]
        port guestbook-2566412744-fddkq
            addresses: ["00:15:5d:4b:62:f0 10.244.39.202"]
        port stor-EC2AMAZ-3S8E4H7
            addresses: ["00:00:00:85:BA:F3"]
        port k8s-EC2AMAZ-3S8E4H7
            addresses: ["00:15:5D:00:CA:03 10.244.39.2"]
        port guestbook-2566412744-zmw07
            addresses: ["00:15:5d:4b:63:cc 10.244.39.110"]
    switch ca730f96-6258-488d-bb02-681acfbaa7a7 (join)
        port jtor-GR_ip-10-5-48-0
            addresses: ["00:00:00:07:8E:5A"]
        port jtor-GR_ip-10-5-46-236
            addresses: ["00:00:00:3F:A7:A1"]
        port jtor-ip-10-5-35-142
            addresses: ["00:00:00:29:B7:31"]
    switch dd4988ee-3ea7-40ad-bf69-922d57ffd967 (ext_ip-10-5-46-236)
        port etor-GR_ip-10-5-46-236
            addresses: ["06:d1:75:95:88:9c"]
        port brens3_ip-10-5-46-236
            addresses: ["unknown"]
    switch d0499686-96ad-450f-a4c9-5a15628be0d1 (ip-10-5-35-142)
        port kube-system_kube-controller-manager-ip-10-5-35-142
            addresses: ["dynamic"]
        port stor-ip-10-5-35-142
            addresses: ["00:00:00:49:37:B0"]
        port kube-system_kube-scheduler-ip-10-5-35-142
            addresses: ["dynamic"]
        port k8s-ip-10-5-35-142
            addresses: ["ce:24:b8:67:52:60 10.244.1.2"]
        port kube-system_kube-apiserver-ip-10-5-35-142
            addresses: ["dynamic"]
    router c026ab4a-1c82-4db9-a249-e148b1adb212 (GR_ip-10-5-46-236)
        port rtoe-GR_ip-10-5-46-236
            mac: "06:d1:75:95:88:9c"
            networks: ["10.5.46.236/19"]
        port rtoj-GR_ip-10-5-46-236
            mac: "00:00:00:3F:A7:A1"
            networks: ["100.64.1.2/24"]
    router b75ff363-9a25-44df-b138-e0704bc53091 (ip-10-5-35-142)
        port rtos-ip-10-5-53-180
            mac: "00:00:00:A8:57:D6"
            networks: ["10.244.2.1/24"]
        port rtos-EC2AMAZ-3S8E4H7
            mac: "00:00:00:85:BA:F3"
            networks: ["10.244.39.1/24"]
        port rtos-ip-10-5-38-109
            mac: "00:00:00:EF:9E:EA"
            networks: ["10.244.10.1/24"]
        port rtoj-ip-10-5-35-142
            mac: "00:00:00:29:B7:31"
            networks: ["100.64.1.1/24"]
        port rtos-EC2AMAZ-04IFR3J
            mac: "00:00:00:F7:1B:8C"
            networks: ["10.244.30.1/24"]
        port rtos-ip-10-5-35-142
            mac: "00:00:00:49:37:B0"
            networks: ["10.244.1.1/24"]
        port rtos-ip-10-5-53-47
            mac: "00:00:00:97:E8:AE"
            networks: ["10.244.11.1/24"]
root@ip-10-5-35-142:~# ovs-vsctl show
7421115b-6ffe-4aca-8cde-25994f6c1c17
    Bridge br-int
        fail_mode: secure
        Port "k8s-ip-10-5-35-"
            Interface "k8s-ip-10-5-35-"
                type: internal
        Port "ovn-239a97-0"
            Interface "ovn-239a97-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.5.38.109"}
        Port br-int
            Interface br-int
                type: internal
        Port "ovn-a312f8-0"
            Interface "ovn-a312f8-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.5.62.104"}
        Port "ovn-364fb5-0"
            Interface "ovn-364fb5-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.5.54.78"}
        Port "ovn-b037bf-0"
            Interface "ovn-b037bf-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.5.62.104"}
                error: "could not add network device ovn-b037bf-0 to ofproto (File exists)"
        Port "ovn-95ef86-0"
            Interface "ovn-95ef86-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.5.53.47"}
        Port "ovn-ba7e1d-0"
            Interface "ovn-ba7e1d-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.5.46.236"}
    ovs_version: "2.7.2"

Not sure what else you might want to know?

cannot create pod successfully

After I deploy the ovn-k8s plugin and create a pod, the pod's status is always ContainerCreating. Using the command "systemctl status kubelet -l", these errors are shown:
ovs ovn-k8s-cni-overlay | ERR | failed to get pod annotation:('Connection aborted,' Connection refused)
ovn-k8s-cni-overlay | ERR | failed to get pod annotation:(Connection error(111, 'Connection refused'))

ovs ovn-k8s-cni-overlay | ERR | failed to get pod annotation:(Connection refused)
error adding network:
error while adding to cni network:
runpodsandbox from runtime service failed:networkplugin cni failed to set up pod "kube-dns-545bc4bfd4-6lh4l_kube-system"netwok:
createpodsandbox for pod "kube-dns- "failed: rpc error: code = unknown desc = networkplugin cni failed to set up "kube-system"netwok:
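The plugin cannot reach the API server to read back the annotation that the master-side watcher writes on the pod. A hedged sketch of the two things to check on the node (pod name taken from the error above):

# 1. Which API server address is the plugin configured with, and is it reachable?
ovs-vsctl get Open_vSwitch . external_ids:k8s-api-server
curl -k "$(ovs-vsctl get Open_vSwitch . external_ids:k8s-api-server | tr -d '"')/version"  # may need a token on the secure port

# 2. Has the watcher written the OVN MAC/IP annotation onto the pod yet?
kubectl -n kube-system get pod kube-dns-545bc4bfd4-6lh4l -o jsonpath='{.metadata.annotations}'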

Document rbac

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ovn-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ovn-controller
rules:
  - apiGroups:
      - ""
      - networking.k8s.io
    resources:
      - pods
      - services
      - endpoints
      - namespaces
      - networkpolicies
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
      - pods
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ovn-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ovn-controller
subjects:
- kind: ServiceAccount
  name: ovn-controller
  namespace: kube-system
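After applying the manifest (assuming it is saved as ovn-rbac.yaml), the service-account token can be extracted and handed to the OVN components; this sketch assumes pre-1.24 Kubernetes behaviour, where a token Secret is auto-created for the ServiceAccount:

kubectl apply -f ovn-rbac.yaml

# Pull the auto-generated token for the ovn-controller service account
SECRET=$(kubectl -n kube-system get serviceaccount ovn-controller \
    -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n kube-system get secret "$SECRET" \
    -o jsonpath='{.data.token}' | base64 -d)

# Then pass it along, as in the ovnkube invocations elsewhere in these issues:
#   ./ovnkube --init-node <node> -token "$TOKEN" -apiserver "https://<master>:6443" ...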

why is ovn-kubernetes implemented in python?

It seems that this project has nothing to do with OpenStack, so why use the Python stack? With Python, there are so many dependencies to resolve; with Go, a single binary is enough. I can't find any advantage to using Python over Go. Can anybody answer this question? Thank you!

OVN watcher daemon exits with error "pipe2: too many open files"

The OVN Kubernetes watcher daemon exits a few days after it started, with an error complaining about "pipe2: too many open files". It seems we didn't close some files in our code.

I0118 18:50:58.058644 29438 reflector.go:286] github.com/openvswitch/ovn-kubernetes/go-controller/pkg/factory/factory.go:81: forcing resync
INFO[197400] Deleting network policy policy1 in namespace ns1
DEBU[197400] deleteAddressSet a15401878422219876457
ERRO[197400] failed to destroy address set a15401878422219876457 (pipe2: too many open files)
DEBU[197400] deleteAddressSet a15401877322708248246
ERRO[197400] failed to destroy address set a15401877322708248246 (pipe2: too many open files)
ERRO[197400] find failed to get the allow rule for namespace=ns1, policy=policy1 (pipe2: too many open files)
INFO[197400] Adding network policy policy1 in namespace ns1
DEBU[197400] Network policy ingress is {Ports:[{Protocol:0xc42472aff0 Port:80} {Protocol:0xc42472b000 Port:1234}] From:[{PodSelector:nil NamespaceSelector:&LabelSelector{MatchLabels:map[string]string{project: myproject1,},MatchExpressions:[],} IPBlock:nil} {PodSelector:&LabelSelector{MatchLabels:map[string]string{role: frontend1,},MatchExpressions:[],} NamespaceSelector:nil IPBlock:nil}]}
DEBU[197400] createAddressSet with ns1.policy1.ingress.0 and []
ERRO[197400] find failed to get address set (pipe2: too many open files) Selector{MatchLabels:map[string]string{proj
DEBU[197400] Network policy ingress is {Ports:[{Protocol:0xc42472b060 Port:80}] From:[{PodSelector:nil NamespaceSelector:&LabelSelector{MatchLabels:map[string]stelector:nil IPBlock:nil}]} ring{project: myproject2,},MatchExpressions:[],} IPBlock:nil} {PodSelector:&LabelSelector{MatchLabels:map[string]string{role: frontend2,},MatchExpressions:[],} NamespaceSelector:nil IPBlock:nil}]}
DEBU[197400] createAddressSet with ns1.policy1.ingress.1 and [] ring{project: myproject2,},MatchExpressions
ERRO[197400] find failed to get address set (pipe2: too many open files)
I0118 18:50:58.059500 29438 reflector.go:202] Starting reflector *v1.Namespace (0s) from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:706
I0118 18:50:58.059514 29438 reflector.go:240] Listing and watching *v1.Namespace from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:706706
I0118 18:50:58.059574 29438 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:514
I0118 18:50:58.059585 29438 reflector.go:240] Listing and watching *v1.Pod from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:514
I0118 18:50:58.059611 29438 reflector.go:202] Starting reflector *v1.Namespace (0s) from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:706706
I0118 18:50:58.059624 29438 reflector.go:240] Listing and watching *v1.Namespace from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:706
I0118 18:50:58.059672 29438 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:590
I0118 18:50:58.059684 29438 reflector.go:240] Listing and watching *v1.Pod from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:590
I0118 18:50:58.059819 29438 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:590
I0118 18:50:58.059848 29438 reflector.go:240] Listing and watching *v1.Pod from github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:590
E0118 18:50:58.060158 29438 reflector.go:205] github.com/openvswitch/ovn-kubernetes/go-controller/pkg/ovn/policy.go:514: Failed to list *v1.Pod: Get http://localhost:8080/api/v1/namespaces/ns1/pods?labealhost:8080/api/v1/namespaces/ns1/pods?labelSelector=name%3Dapache&limit=500&resourceVersion=0: dial tcp [::1]:8080: socket: too many open files
log: exiting because of error: log: cannot create log: open /tmp/ovnkube.ovn-master.root.log.ERROR.20180118-185058.29438: too many open files

Logical port is down after creation

I have the following Vagrant setup:
a 1-node cluster with the Kubernetes master and Open vSwitch running on the same node. CLUSTER_IP_SUBNET=10.3.0.0/16, MASTER_SWITCH_SUBNET=10.3.1.0/24.
Everything has started and the interfaces exist:

br-int: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::48d9:3bff:fedc:aa4c  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:06:b9:de  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 62  bytes 13666 (13.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:b7ff:fe5d:b203  prefixlen 64  scopeid 0x20<link>
        ether 02:42:b7:5d:b2:03  txqueuelen 0  (Ethernet)
        RX packets 3002  bytes 269534 (263.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3012  bytes 939730 (917.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fecb:cb16  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:cb:cb:16  txqueuelen 1000  (Ethernet)
        RX packets 433934  bytes 613898403 (585.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 114035  bytes 6555659 (6.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 20.0.0.1  netmask 255.255.255.0  broadcast 20.0.0.255
        inet6 fe80::a00:27ff:fe06:b9de  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:06:b9:de  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 3211 (3.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.8.101  netmask 255.255.255.0  broadcast 172.17.8.255
        inet6 fe80::a00:27ff:fe59:e994  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:59:e9:94  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27  bytes 3301 (3.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

k8s-172.17.8.10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.3.1.2  netmask 255.255.255.0  broadcast 0.0.0.0
        ether c6:c1:8d:41:03:59  txqueuelen 1000  (Ethernet)
        RX packets 2  bytes 112 (112.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 35  bytes 10370 (10.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 69855  bytes 26598786 (25.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 69855  bytes 26598786 (25.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovs-system: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::8013:f4ff:fe51:3a0f  prefixlen 64  scopeid 0x20<link>
        ether 82:13:f4:51:3a:0f  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 61  bytes 13334 (13.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

However, when I create a new pod I see that its logical port is down:

ovn-nbctl list logical_switch_port default_redis-master-2463511816-vaq6k
_uuid               : e078e54a-0534-4d6c-81da-a993796318a4
addresses           : [dynamic]
dhcpv4_options      : []
dhcpv6_options      : []
dynamic_addresses   : "0a:00:00:00:00:07 10.3.1.9"
enabled             : []
external_ids        : {}
name                : "default_redis-master-2463511816-vaq6k"
options             : {}
parent_name         : []
port_security       : []
tag                 : []
type                : ""
up                  : false

Moreover, the pod is getting its network configuration from the docker0 bridge.

Have I missed something in the documentation?
Thank you in advance for any help you can provide.
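For what it's worth, up: false generally means no chassis has claimed the port binding, i.e. no OVS interface with a matching external_ids:iface-id ever appeared; and getting an address from docker0 suggests kubelet is not invoking the CNI plugin at all. A hedged pair of checks:

# Has any chassis claimed the binding for this logical port?
ovn-sbctl find Port_Binding logical_port=default_redis-master-2463511816-vaq6k

# On the node: is there an OVS interface tagged with that iface-id?
ovs-vsctl --columns=name find Interface external_ids:iface-id=default_redis-master-2463511816-vaq6k

On a setup of this vintage it is also worth checking that kubelet was started with --network-plugin=cni and that the plugin config is present under /etc/cni/net.d; falling back to the default Docker bridge would explain the docker0 address.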

Watcher outputs an error when there are no pods

The watcher outputs an error when there are no pods:

2018-01-03T07:59:44.185Z |  0  | watcher | ERR | failed in _sync_k8s_pods ('NoneType' object is not iterable)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovn_k8s/watcher/watcher.py", line 82, in _sync_k8s_pods
    mode.sync_pods(pods)
  File "/usr/lib/python2.7/site-packages/ovn_k8s/modes/overlay.py", line 499, in sync_pods
    for pod in pods:
TypeError: 'NoneType' object is not iterable

Vagrant: apiserver fails to come up on k8s-master

F0305 19:27:42.990716 3331 controller.go:128] Unable to perform initial IP allocation check: unable to refresh the service IP block: client: etcd cluster is unavailable or misconfigured

Looks like the etcd Docker container failed to start, since this worked on manual startup.

I didn't run vagrant up in debug mode, so I don't have the provisioning logs.

TypeError on running "ovn-k8s-overlay master-init"

Hi,
Followed the guide to set up kubernetes with ovs (https://github.com/openvswitch/ovn-kubernetes/blob/master/README.md)
In the section for "k8s master node initialisation", on running "ovn-k8s-overlay master-init", I get the following error:
Failed operation.
(a bytes-like object is required, not 'str')
Traceback (most recent call last):
  File "/usr/local/bin/ovn-k8s-overlay", line 802, in <module>
    main()
  File "/usr/local/bin/ovn-k8s-overlay", line 797, in main
    args.func(args)
  File "/usr/local/bin/ovn-k8s-overlay", line 408, in master_init
    fetch_ovn_nb()
  File "/usr/local/bin/ovn-k8s-overlay", line 41, in fetch_ovn_nb
    "external_ids:ovn-nb").strip('"')
TypeError: a bytes-like object is required, not 'str'

It would be great if someone could help me out :-) Thanks

go-controller: ovnkube init master/node failed

ovnkube depends on systemctl to start the openvswitch and OVN services, but those service units do not exist on all systems (e.g. openvswitch 2.8.0 on Ubuntu 16.04). In this case, ovnkube --init-master and ovnkube --init-node will both fail because openvswitch/OVN can't be started.

Any ideas on this? @shettyg

ovn-kubernetes doesn't handle nodeport services correctly

It appears that ovn-kubernetes only creates a NodePort entry on the gateway node(s); it doesn't set up a load balancer for the cluster IP that is created for the service.

You can see the code here:

https://github.com/openvswitch/ovn-kubernetes/blob/6c87f60ba883b5e879c9e16f1a9310cb54ab61fb/ovn_k8s/modes/overlay.py#L420-L429

See the Kubernetes documentation on NodePort:

https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport

"Note that this Service will be visible as both :spec.ports[].nodePort and spec.clusterIp:spec.ports[].port."

It should be the equivalent of a case statement with a fall-through from NodePort to cluster IP (and if we ever support the LoadBalancer type, that would be the top of the fall-through). This means restructuring the code a bit, since the port needs to be determined individually within each of the case statements.
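In OVN terms, a NodePort service should therefore appear both in the cluster-wide TCP/UDP load balancers (keyed by cluster IP) and in the gateway load balancers (keyed by node IP and node port). A hedged sketch of the desired end state, borrowing addresses from the Vagrant issues above (load-balancer UUIDs and the node IP are illustrative):

# clusterIP:port -> pod endpoints, on the cluster load balancer
ovn-nbctl set load_balancer "$CLUSTER_LB_TCP" \
    vips:'"192.168.200.103:8080"'='"192.168.2.4:8080,192.168.3.4:8080"'

# nodeIP:nodePort -> the same endpoints, on each gateway's load balancer
ovn-nbctl set load_balancer "$GATEWAY_LB_TCP" \
    vips:'"10.10.0.11:32035"'='"192.168.2.4:8080,192.168.3.4:8080"'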

Switching ovn-kubernetes to go

The meta issue of switching ovn-kubernetes to go.

  • Enable travis CI for golang #115
  • Switch components to golang
    • ovn-k8s-util #113
    • ovn-k8s-cni-overlay
    • ovn-k8s-gateway-helper
    • ovn-k8s-overlay
    • ovnkube-setup-master
    • ovnkube-setup-master
  • Cleanup old python codes
  • Support network policy
  • Remove python version
  • Move go-controller to top-level

cc/ @shettyg

Bug: Access variable that may not have been initialized

When ovn-k8s-cni-overlay tries to retrieve pod annotations and fails, e.g. because the API server is protected and the plug-in is not properly configured, the annotations dict is accessed even though it hasn't been initialized, resulting in a crash.

The problem seems to be that annotations is declared inside a try-except block, and the code then tries to access it from outside of that block. While this works when no exception occurs, it doesn't in the scenario described above.

As a side note, the exception message is empty in my failure test cases.

Multi-gateway does not work in certain cases

In certain cases multi-gateway setups do not work. They need the following 2 fixes.

  1. In the ovn-kubernetes repo, we are currently creating a single load balancer for north-south connectivity and then adding $IP:$NODEPORT of all the gateways to that load balancer. This causes issues with ARP responses, since a client can now see the MAC addresses available on all the hosts, resulting in some packets entering via different gateways. This can cause issues with DNATs (see the sketch after this list).

  2. The FIN handshake of a TCP connection is not getting NATted after the DNAT and SNAT in a router. This is mainly because of a bug in ovn-controller in the way conntrack zones are assigned.
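A hedged sketch of the per-gateway direction for fix 1, reusing the set logical_router ... load_balancer pattern seen in the issues above (the external_ids key is illustrative):

# One north-south load balancer per gateway router instead of a single shared one
for GW in GR_ip-10-5-46-236 GR_ip-10-5-48-0; do
    LB=$(ovn-nbctl create load_balancer external_ids:lb-gateway="$GW")
    ovn-nbctl set logical_router "$GW" load_balancer="$LB"
done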

Support service type Load-Balancer

It wasn't clear to me while experimenting with this that type: LoadBalancer is not supported, so I'm leaving this here in the hope of helping someone who hits the same issue.

Vagrant sample service is not working

I followed the instructions https://github.com/openvswitch/ovn-kubernetes/tree/master/vagrant

I am able to see the pods and services:

root@k8smaster:~# k get pods
NAME         READY     STATUS    RESTARTS   AGE
apachetwin   1/1       Running   0          50s
nginxtwin    1/1       Running   0          43s
root@k8smaster:~# k get svc
NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
apacheexternal   192.168.200.252   <nodes>       8800:31269/TCP   10s
apacheservice    192.168.200.98    <none>        8800/TCP         21s
kubernetes       192.168.200.1     <none>        443/TCP          3h

But doing curl 10.10.0.11:31269 does not work.

root@k8smaster:~# curl 10.10.0.11:31269
curl: (7) Failed to connect to 10.10.0.11 port 31269: Connection refused

I even tried SSHing into a pod and using the service IP, but it only works half the time (with the nginx pod). It does not work when it balances to the Apache pod:

[root@apachetwin /]# curl 192.168.200.252:8800
^C
[root@apachetwin /]#

Documentation on routes and switches

I think I have it almost figured out, but it would make it a lot easier if there was a diagram showing the topology of the switches and routers in a k8s cluster. I have SFC working for the vagrant setup, but it was mostly trial and error to figure out what was connected to each node. Also, some documentation on the naming convention would help :-)

Regards

John
