Comments (6)
@shettyg
Actually, running the core daemons is not a big issue. They run with `hostNetwork: true`, so they get host IP addresses and not OVN private IP addresses.
On the other hand, `kube-dns` and `kube-dashboard` do get OVN private IP addresses.
Basically, what should be done is the following:

1. Create manifests for the core services (`kube-apiserver.yml`, `kube-controller-manager.yml`, `kube-proxy.yml`, `kube-scheduler.yml`) and place them in the `/etc/kubernetes/manifests` folder (a hypothetical manifest sketch follows after this list).
2. Create a custom image on top of the hyperkube one with openvswitch, CNI and ovn-kubernetes installed.
3. Run the kubelet container with this configuration:
```sh
/usr/bin/docker run -d \
  --net=host \
  --pid=host \
  --privileged \
  --restart=unless-stopped \
  --name kubelet \
  --volume /etc/cni/net.d:/etc/cni/net.d \
  --volume /etc/kubernetes:/etc/kubernetes \
  --volume /sys:/sys:rw \
  --volume /var/run:/var/run:rw \
  --volume /run:/run:rw \
  --volume /var/lib/docker:/var/lib/docker:rw \
  --volume /var/lib/kubelet:/var/lib/kubelet:shared \
  --volume /var/log/containers:/var/log/containers:rw \
  <custom_hyperkube_image>:v1.3.6 \
  /hyperkube kubelet \
  --allow-privileged \
  --api-servers=http://127.0.0.1:8080 \
  --config=/etc/kubernetes/manifests \
  --cluster-dns=<kube_dns_service_ip> \
  --cluster-domain=certascale.local \
  --network-plugin=cni \
  --network-plugin-dir=/etc/cni/net.d \
  --hostname-override=<eth_ip_address> \
  --v=2
```
4. Run the OVS commands inside the container:
```sh
docker exec kubelet /usr/share/openvswitch/scripts/ovs-ctl start
...
docker exec kubelet ovn-k8s-watcher --overlay
```
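For step 1, a minimal sketch of what one such static pod manifest might look like; the flags, ports and addresses below are illustrative assumptions, not taken from this thread, and need to be adapted:

```sh
# Hypothetical static pod manifest for step 1; flags and addresses are assumptions.
cat > /etc/kubernetes/manifests/kube-apiserver.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true          # the core daemon keeps the host IP, not an OVN IP
  containers:
  - name: kube-apiserver
    image: <custom_hyperkube_image>:v1.3.6
    command:
    - /hyperkube
    - apiserver
    - --insecure-bind-address=0.0.0.0
    - --insecure-port=8080
    - --etcd-servers=http://127.0.0.1:2379
    - --service-cluster-ip-range=<service_cidr>
    - --allow-privileged=true
EOF
```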
One caveat is that `/etc/cni/net.d` should contain some configuration file. It won't be used, but it should be present.
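Assuming, as stated above, that only the file's presence matters and not its content, a placeholder such as a loopback-only CNI config could be dropped in; the file name here is an arbitrary choice:

```sh
# Hypothetical placeholder; per the comment above it only needs to exist,
# it is not the configuration pods will actually be wired up with.
mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/99-loopback.conf <<'EOF'
{
  "cniVersion": "0.2.0",
  "name": "lo",
  "type": "loopback"
}
EOF
```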
Another caveat is that this is not the CoreOS-suggested way of running Kubernetes. They suggest using `kubelet-wrapper`, but that runs the kubelet in `rkt`, and you won't be able to manage OVS in it.
from ovn-kubernetes.
Which core daemons are you referring to?
I am not using CoreOS, but on Ubuntu I have no problem running the kubelet on the master node as well. I agree you probably should not do that in production, but in dev environments it gives you an extra node!
From what I recall, the kubelet does not invoke network plugins for pods with special security-context attributes such as `hostNetwork`.
from ovn-kubernetes.
The current documentation and code expect the master not to run any containers. There are multiple reasons for this, one of them being containers mucking around with the north-bound database. So you likely ran just master-init? That script does not create a CNI plugin, and hence the containers started will get IP addresses from the docker bridge.
By the way, thanks for trying it out. We are still a little poor on documentation (and have a couple of bugs). I just noticed that there is no mention of how to start the kubelet with the OVN CNI plugin.
It should be something like:
```sh
./kubelet --api-servers=http://10.33.74.22:8080 --v=2 --address=0.0.0.0 --enable-server=true --network-plugin=cni
```
from ovn-kubernetes.
On that note, it should not be a big problem to run pods on the master as well.
I do in fact have a two-node setup where the kubelet also runs on the master node (mostly to save the hassle of using a third VM to achieve a two-node testbed).
To do so, I think I ran both master-init and minion-init on the master, specifying the same subnet. If I recall correctly, minion-init skipped the steps of creating the node logical switch and connecting it to the cluster logical router, as they had already been performed by master-init, and simply configured the CNI plugin.
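For reference, a rough sketch of that sequence on the master; the `ovn-k8s-overlay` sub-commands come from the ovn-kubernetes scripts of that era, but the exact flag names and the example subnets are assumptions from memory rather than from this thread:

```sh
# Hypothetical invocation; flag names and subnets are assumptions.
# master-init creates the cluster logical router and the node's logical switch.
ovn-k8s-overlay master-init \
    --cluster-ip-subnet="192.168.0.0/16" \
    --master-switch-subnet="192.168.1.0/24" \
    --node-name="kube-master"

# minion-init on the same host with the same subnet: the logical switch and
# router steps are skipped (already done above) and only the CNI plugin is configured.
ovn-k8s-overlay minion-init \
    --cluster-ip-subnet="192.168.0.0/16" \
    --minion-switch-subnet="192.168.1.0/24" \
    --node-name="kube-master"
```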
from ovn-kubernetes.
It finally works.
Moreover, it is OK to run master-init and minion-init on the same node with the same subnet.
I struggled mostly with running this solution on CoreOS, where everything is containerized.
If anyone would like to repeat this, the dockerized kubelet should contain the CNI binaries, the openvswitch binaries and the ovn-kubernetes scripts.
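A minimal sketch of such an image, assuming a Debian-based hyperkube base image; the base-image tag, package names and paths below are assumptions and will need adjusting:

```sh
# Hypothetical build for the dockerized kubelet described above.
cat > Dockerfile <<'EOF'
FROM gcr.io/google_containers/hyperkube-amd64:v1.3.6
# Open vSwitch userspace tools (provide /usr/share/openvswitch/scripts/ovs-ctl)
RUN apt-get update && \
    apt-get install -y openvswitch-switch python-pip && \
    rm -rf /var/lib/apt/lists/*
# CNI plugin binaries (conventional location)
COPY cni-bin/ /opt/cni/bin/
# ovn-kubernetes scripts: ovn-k8s-overlay, ovn-k8s-watcher, the CNI plugin, ...
COPY ovn-kubernetes/ /opt/ovn-kubernetes/
RUN pip install /opt/ovn-kubernetes/
EOF
docker build -t <custom_hyperkube_image>:v1.3.6 .
```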
Thank you for the help.
from ovn-kubernetes.
@RostyslavFridman
It looks like you are familiar enough with both k8s and OVN to have gotten this working with CoreOS. One challenge of having the CNI plugin running on the master would be that the core daemons running inside containers would also get OVN private IP addresses?
from ovn-kubernetes.