
MULTUS CNI plugin

  • Multus is the Latin word for “multi”

  • As the name suggests, it acts as a multi-plugin in Kubernetes and provides multiple network interface support in a pod

  • It is generic enough to run with any CNI plugin, such as Calico, Weave, SR-IOV, Cilium, Canal, or Flannel, with different IPAM configurations and networks

  • It is a contract between the container runtime and other plugins; it has no network configuration of its own and calls other plugins, such as flannel or calico, to do the real network configuration job

  • Multus reuses the concept of invoking delegates from flannel: it groups the other plugins into delegates and invokes each one in sequential order, according to the JSON schema in the CNI configuration

  • The number of plugins supported depends on the number of delegates in the configuration file

  • The master plugin provides the "eth0" interface in the pod; the remaining plugins (minion plugins, e.g. sriov, ipam) provide interfaces named "net0", "net1", … "netn"

  • The "masterplugin" is the only network configuration option introduced by Multus CNI itself; it identifies the primary network, and the default route will point to that primary network

Please read CNI for more information on container networking.

Multi-Homed pod

Build

This plugin requires Go 1.8 to build.

Go 1.5 users will need to set GO15VENDOREXPERIMENT=1 to get vendored dependencies. This flag is set by default in 1.6.

# ./build
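
For example, a typical build-and-install flow might look like the sketch below (the bin/ output path and the /opt/cni/bin target directory are assumptions and may differ in your environment):

# ./build
# cp bin/multus /opt/cni/bin/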

Workflow

Network configuration reference
  • name (string, required): the name of the network
  • type (string, required): "multus"
  • kubeconfig (string, optional): kubeconfig file for out-of-cluster communication with the kube-apiserver; refer to the doc
  • delegates ([]map, required): the delegate details for Multus; ignored if kubeconfig is specified
  • masterplugin (bool, required): the master plugin, which reports the IP address and DNS back to the container
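
Putting these fields together, a minimal delegates-based configuration might look like the following sketch (the plugin types and the if0 name are illustrative; a complete working example is shown in the “Using the Multus conf file” section below):

{
    "name": "minimal-multus-network",
    "type": "multus",
    "delegates": [
        {
                "type": "sriov",
                "if0": "enp12s0f0"
        },
        {
                "type": "flannel",
                "masterplugin": true,
                "delegate": {
                        "isDefaultGateway": true
                }
        }
    ]
}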

Usage with Kubernetes TPR based Network Objects

For more details, please refer to the Kubernetes Network SIG - Multiple Network PoC proposal: K8s Multiple Network proposal

Creating the “Network” third party resource in Kubernetes

  1. Create a third party resource file “tprnetwork.yaml” for the network object, as shown below:
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: network.kubernetes.com
description: "A specification of a Network obj in the kubernetes"
versions:
- name: v1
  2. Run the kubectl create command for the Third Party Resource:
# kubectl create -f ./tprnetwork.yaml
thirdpartyresource "network.kubernetes.com" created
  3. Run the kubectl get command to check that the Network TPR was created:
# kubectl get thirdpartyresource
NAME                     DESCRIPTION                                          VERSION(S)
network.kubernetes.com   A specification of a Network obj in the kubernetes   v1

Creating custom “Network” objects in Kubernetes

  1. After the ThirdPartyResource object has been created, you can create network objects. Network objects should contain network fields in JSON format. In the following example, the plugin and args fields are set on an object of kind Network. The kind Network is derived from the metadata.name of the ThirdPartyResource object we created above.

  2. Save the following YAML to flannel-network.yaml:

apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: flannel-conf
plugin: flannel
args: '[
        {
                "delegate": {
                        "isDefaultGateway": true
                }
        }
]'
  3. Run the kubectl create command for the TPR Network object:
# kubectl create -f ./flannel-network.yaml 
network "flannel-conf" created
  4. Manage the Network objects using kubectl:
# kubectl get network
NAME                         KIND
flannel-conf                 Network.v1.kubernetes.com
  5. You can also view the raw data. Here you can see that it contains the custom plugin and args fields from the YAML you used to create it:
# kubectl get network flannel-conf -o yaml
apiVersion: kubernetes.com/v1
args: '[ { "delegate": { "isDefaultGateway": true } } ]'
kind: Network
metadata:
  creationTimestamp: 2017-06-28T14:20:52Z
  name: flannel-conf
  namespace: default
  resourceVersion: "5422876"
  selfLink: /apis/kubernetes.com/v1/namespaces/default/networks/flannel-conf
  uid: fdcb94a2-5c0c-11e7-bbeb-408d5c537d27
plugin: flannel
  6. The plugin field should be the name of the CNI plugin, and args should hold the plugin's arguments in JSON format, as shown above. Users can create network objects for Calico, Weave, Romana, and Cilium and test Multus with them; a sketch of a ptp network object is given after this list.
  7. Save the following YAML to sriov-network.yaml. Refer to Intel - SR-IOV CNI or contact @kural on the Intel-Corp Slack for running DPDK-based workloads in Kubernetes.
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: sriov-conf
plugin: sriov
args: '[
       {
                "if0": "enp12s0f1",
                "ipam": {
                        "type": "host-local",
                        "subnet": "10.56.217.0/24",
                        "rangeStart": "10.56.217.171",
                        "rangeEnd": "10.56.217.181",
                        "routes": [
                                { "dst": "0.0.0.0/0" }
                        ],
                        "gateway": "10.56.217.1"
                }
        }
]'
  8. Save the following YAML to sriov-vlanid-l2enable-network.yaml:
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: sriov-vlanid-l2enable-conf
plugin: sriov
args: '[
       {
                "if0": "enp2s0",
                "vlan": 210,
                "if0name": "north",
                "l2enable": true
        }
]'
  9. Follow step 3 to create the network objects “sriov-conf” and “sriov-vlanid-l2enable-conf”.
  10. Manage the Network objects using kubectl:
# kubectl get network
NAME                         KIND
flannel-conf                 Network.v1.kubernetes.com
sriov-vlanid-l2enable-conf   Network.v1.kubernetes.com
sriov-conf                   Network.v1.kubernetes.com
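
As noted in step 6, network objects for other CNI plugins follow the same pattern. For example, a hypothetical ptp network object (the name ptp-conf and the IPAM values are illustrative, mirroring the ptp delegate used later in this document) could be written as:

apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: ptp-conf
plugin: ptp
args: '[
        {
                "ipam": {
                        "type": "host-local",
                        "subnet": "10.168.1.0/24"
                }
        }
]'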

Configuring Multus to use the kubeconfig

  1. Create the Multus CNI configuration file /etc/cni/net.d/multus-cni.conf with the content below on the minions. Use an absolute path to point to the kubeconfig file (it may change depending on your cluster environment) and make sure all CNI binary files are in the /opt/cni/bin directory.
{
    "name": "minion-cni-network",
    "type": "multus",
    "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml"
}
  2. Restart the kubelet service:
# systemctl restart kubelet
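
After the restart, you can confirm that the configuration file and the CNI plugin binaries are in place (a quick sanity check; the exact set of binaries depends on the plugins you use):

# cat /etc/cni/net.d/multus-cni.conf
# ls /opt/cni/bin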

Configuring a pod to use the TPR Network objects

  1. Save the following YAML to pod-multi-network.yaml. In this case the flannel-conf network object acts as the primary network.
# cat pod-multi-network.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: multus-multi-net-poc
  annotations:
    networks: '[  
        { "name": "flannel-conf" },
        { "name": "sriov-conf"},
        { "name": "sriov-vlanid-l2enable-conf" } 
    ]'
spec:  # specification of the pod's contents
  containers:
  - name: multus-multi-net-poc
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true
  2. Create the multiple-network pod from the master node:
# kubectl create -f ./pod-multi-network.yaml
pod "multus-multi-net-poc" created
  3. Get the details of the running pod from the master:
# kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
multus-multi-net-poc   1/1       Running   0          30s

Verifying Pod network

  1. Run the “ifconfig” command inside the container:
# kubectl exec -it multus-multi-net-poc -- ifconfig
eth0      Link encap:Ethernet  HWaddr 06:21:91:2D:74:B9  
          inet addr:192.168.42.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::421:91ff:fe2d:74b9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0      Link encap:Ethernet  HWaddr D2:94:98:82:00:00  
          inet addr:10.56.217.171  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::d094:98ff:fe82:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:120 (120.0 B)  TX bytes:648 (648.0 B)

north     Link encap:Ethernet  HWaddr BE:F2:48:42:83:12  
          inet6 addr: fe80::bcf2:48ff:fe42:8312/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1420 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1276 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:95956 (93.7 KiB)  TX bytes:82200 (80.2 KiB)
Interface name   Description
lo               loopback
eth0             Flannel network tap interface
net0             VF0 of NIC 1, assigned to the container by the Intel - SR-IOV CNI plugin
north            VF0 of NIC 2, assigned to the container with VLAN ID 210 by the SR-IOV CNI plugin
  2. Check the VLAN ID of the NIC 2 VFs:
# ip link show enp2s0
20: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 24:8a:07:e8:7d:40 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, vlan 210, spoof checking off, link-state auto
    vf 1 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
    vf 2 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
    vf 3 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
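
Since flannel-conf is the master plugin in this example, the pod's default route should point to the flannel interface eth0. An illustrative route check is sketched below (the gateway address and exact routes are assumptions based on the addresses shown above):

# kubectl exec -it multus-multi-net-poc -- ip route
default via 192.168.42.1 dev eth0
10.56.217.0/24 dev net0  src 10.56.217.171
192.168.42.0/24 dev eth0  src 192.168.42.3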

Using the Multus conf file

Given the following network configuration (note that the if0 and ipam fields in the first delegate are part of the sriov plugin's own configuration):

# tee /etc/cni/net.d/multus-cni.conf <<-'EOF'
{
    "name": "multus-demo-network",
    "type": "multus",
    "delegates": [
        {
                "type": "sriov",
                #part of sriov plugin conf
                "if0": "enp12s0f0", 
                "ipam": {
                        "type": "host-local",
                        "subnet": "10.56.217.0/24",
                        "rangeStart": "10.56.217.131",
                        "rangeEnd": "10.56.217.190",
                        "routes": [
                                { "dst": "0.0.0.0/0" }
                        ],
                        "gateway": "10.56.217.1"
                }
        },
        {
                "type": "ptp",
                "ipam": {
                        "type": "host-local",
                        "subnet": "10.168.1.0/24",
                        "rangeStart": "10.168.1.11",
                        "rangeEnd": "10.168.1.20",
                        "routes": [
                                { "dst": "0.0.0.0/0" }
                        ],
                        "gateway": "10.168.1.1"
                }
        },
        {
                "type": "flannel",
                "masterplugin": true,
                "delegate": {
                        "isDefaultGateway": true
                }
        }
    ]
}
EOF
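
Note that CNI configuration files must be plain JSON, so comments are not allowed in them. Assuming jq is installed, a quick way to check that the file parses is:

# jq . /etc/cni/net.d/multus-cni.conf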

Testing the Multus CNI with multiple Flannel networks

GitHub user YYGCui has used multiple Flannel networks with the Multus CNI plugin. Please refer to this closed issue for multiple overlay network support with Multus CNI.

Testing the Multus CNI with Docker

Make sure that the multus, sriov, flannel, and ptp binaries are in the /opt/cni/bin directory, and follow the steps as mentioned in the CNI documentation.
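
As a sketch, assuming a checkout of the CNI repository (which provides reference helper scripts such as scripts/docker-run.sh), a Docker container can be started with the Multus configuration applied roughly as follows ($CNI_REPO_PATH is a placeholder for wherever you cloned that repository):

# cd $CNI_REPO_PATH
# CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d ./scripts/docker-run.sh --rm busybox ifconfig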

Testing the Multus CNI with Kubernetes

Refer to the Kubernetes User Guide and the network plugin documentation.

The kubelet must be configured to run with --network-plugin=cni, using the following configuration. Edit the /etc/default/kubelet file and add KUBELET_OPTS:

KUBELET_OPTS="...
--network-plugin-dir=/etc/cni/net.d
--network-plugin=cni
"

Restart the kubelet

# systemctl restart kubelet.service
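
To confirm that the kubelet picked up the CNI settings after the restart, the kubelet logs can be inspected on systemd-based hosts, for example:

# journalctl -u kubelet | grep -i cni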

Launching workloads in Kubernetes

Launch the workload using a YAML file on the Kubernetes master. With the above Multus CNI configuration, each pod should have multiple network interfaces.

Note: To verify that the Multus CNI plugin is working correctly, create a pod containing one “busybox” container and execute the “ip link” command to check that interface management follows the configuration.

  1. Create a “multus-test.yaml” file containing the configuration below. The created pod will consist of one “busybox” container running the “top” command.
apiVersion: v1
kind: Pod
metadata:
  name: multus-test
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: test1
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true

  2. Create the pod using the command:
# kubectl create -f multus-test.yaml
pod "multus-test" created
  3. Run the “ip link” command inside the container:
# kubectl exec -it multus-test -- ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 26:52:6b:d8:44:2d brd ff:ff:ff:ff:ff:ff
20: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
    link/ether f6:fb:21:4f:1d:63 brd ff:ff:ff:ff:ff:ff
21: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
    link/ether 76:13:b1:60:00:00 brd ff:ff:ff:ff:ff:ff 
Interface name   Description
lo               loopback
eth0@if41        Flannel network tap interface
net0             VF assigned to the container by the SR-IOV CNI plugin
net1             ptp localhost interface

Contacts

For any questions about Multus CNI, please open a GitHub issue or feel free to contact the developer @kural on our Intel-Corp Slack.
