cilium / cilium

eBPF-based Networking, Security, and Observability

Home Page: https://cilium.io

License: Apache License 2.0

Go 87.35% Makefile 0.39% Shell 2.24% C 9.59% Ruby 0.01% Python 0.06% TeX 0.01% Dockerfile 0.16% HCL 0.01% SmPL 0.07% Smarty 0.11% Mustache 0.01% Lua 0.01%
containers bpf security kubernetes kubernetes-networking cni kernel loadbalancing monitoring troubleshooting

cilium's Introduction

Cilium Logo


Cilium is a networking, observability, and security solution with an eBPF-based dataplane. It provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay mode. It is L7-protocol aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.

Cilium implements distributed load balancing for traffic between pods and to external services, and is able to fully replace kube-proxy, using efficient hash tables in eBPF allowing for almost unlimited scale. It also supports advanced functionality like integrated ingress and egress gateway, bandwidth management and service mesh, and provides deep network and security visibility and monitoring.

A new Linux kernel technology called eBPF is at the foundation of Cilium. It supports dynamic insertion of eBPF bytecode into the Linux kernel at various integration points such as network IO, application sockets, and tracepoints to implement security, networking, and visibility logic. eBPF is highly efficient and flexible. To learn more about eBPF, visit eBPF.io.

Overview of Cilium features for networking, observability, service mesh, and runtime security

Stable Releases

The Cilium community maintains minor stable releases for the last three minor Cilium versions. Older Cilium stable versions from minor releases prior to that are considered EOL.

For upgrades to new minor releases please consult the Cilium Upgrade Guide.

Listed below are the actively maintained release branches along with their latest patch release, corresponding image pull tags and their release notes:

v1.15 2024-03-26 quay.io/cilium/cilium:v1.15.3 Release Notes
v1.14 2024-03-26 quay.io/cilium/cilium:v1.14.9 Release Notes
v1.13 2024-03-26 quay.io/cilium/cilium:v1.13.14 Release Notes

Architectures

Cilium images are distributed for AMD64 and AArch64 architectures.

Software Bill of Materials

Starting with Cilium version 1.13.0, all images include a Software Bill of Materials (SBOM). The SBOM is generated in SPDX format. More information on this is available on Cilium SBOM.

Development

For development and testing purposes, the Cilium community publishes snapshots, early release candidates (RC) and CI container images built from the main branch. These images are not for use in production.

For testing upgrades to new development releases please consult the latest development build of the Cilium Upgrade Guide.

Listed below are branches for testing along with their snapshots or RC releases, corresponding image pull tags and their release notes where applicable:

main daily quay.io/cilium/cilium-ci:latest N/A
v1.16.0-pre.1 2024-04-02 quay.io/cilium/cilium:v1.16.0-pre.1 Release Candidate Notes

Functionality Overview

Protect and secure APIs transparently

Ability to secure modern application protocols such as REST/HTTP, gRPC and Kafka. Traditional firewalls operate at Layer 3 and 4. A protocol running on a particular port is either completely trusted or blocked entirely. Cilium provides the ability to filter on individual application protocol requests such as:

  • Allow all HTTP requests with method GET and path /public/.*. Deny all other requests.
  • Allow service1 to produce on Kafka topic topic1 and service2 to consume on topic1. Reject all other Kafka messages.
  • Require the HTTP header X-Token: [0-9]+ to be present in all REST calls.

See the section Layer 7 Policy in our documentation for the latest list of supported protocols and examples on how to use it.
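As an illustration, the first rule above might be written as a CiliumNetworkPolicy roughly like the sketch below. The policy name and the app label are hypothetical; field names follow the v2 CRD schema, so consult the Layer 7 Policy documentation for the authoritative format.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-public-get        # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: my-service           # hypothetical label
  ingress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public/.*"    # requests not matching this rule are denied
```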

Secure service to service communication based on identities

Modern distributed applications rely on technologies such as application containers to facilitate agility in deployment and scale out on demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination ports. This concept requires the firewalls on all servers to be manipulated whenever a container is started anywhere in the cluster.

In order to avoid this situation, which limits scale, Cilium assigns a security identity to groups of application containers which share identical security policies. The identity is then associated with all network packets emitted by the application containers, allowing the identity to be validated at the receiving node. Security identity management is performed using a key-value store.

Secure access to and from external services

Label-based security is the tool of choice for cluster-internal access control. In order to secure access to and from external services, traditional CIDR-based security policies for both ingress and egress are supported. This allows limiting access to and from application containers to particular IP ranges.

Simple Networking

A simple flat Layer 3 network with the ability to span multiple clusters connects all application containers. IP allocation is kept simple by using host scope allocators. This means that each host can allocate IPs without any coordination between hosts.

The following multi node networking models are supported:

  • Overlay: Encapsulation-based virtual network spanning all hosts. Currently, VXLAN and Geneve are baked in but all encapsulation formats supported by Linux can be enabled.

    When to use this mode: This mode has minimal infrastructure and integration requirements. It works on almost any network infrastructure as the only requirement is IP connectivity between hosts which is typically already given.

  • Native Routing: Use of the regular routing table of the Linux host. The network is required to be capable of routing the IP addresses of the application containers.

    When to use this mode: This mode is for advanced users and requires some awareness of the underlying networking infrastructure. This mode works well with:

    • Native IPv6 networks
    • Cloud network routers
    • Existing routing daemons
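The choice between these two modes is made at install time. A hedged sketch of the corresponding Helm values follows; the key names match recent Cilium Helm charts but may differ between versions, and the CIDR shown is illustrative.

```yaml
# Overlay (default): VXLAN encapsulation between nodes
routingMode: tunnel
tunnelProtocol: vxlan

# Native routing alternative: disable encapsulation and tell Cilium
# which destination range the underlying network can route directly.
# routingMode: native
# ipv4NativeRoutingCIDR: 10.0.0.0/8   # illustrative CIDR
```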

Load Balancing

Cilium implements distributed load balancing for traffic between application containers and to external services and is able to fully replace components such as kube-proxy. The load balancing is implemented in eBPF using efficient hash tables allowing for almost unlimited scale.

For north-south type load balancing, Cilium's eBPF implementation is optimized for maximum performance, can be attached to XDP (eXpress Data Path), and supports direct server return (DSR) as well as Maglev consistent hashing if the load balancing operation is not performed on the source host.

For east-west type load balancing, Cilium performs efficient service-to-backend translation right in the Linux kernel's socket layer (e.g. at TCP connect time) such that per-packet NAT operations overhead can be avoided in lower layers.
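Taken together, a kube-proxy-free setup with these load-balancing features enabled might look like the following Helm values sketch. The option names follow recent Cilium Helm charts and may vary by version; DSR and XDP acceleration additionally depend on NIC driver support.

```yaml
kubeProxyReplacement: true       # Cilium's eBPF load balancer replaces kube-proxy
loadBalancer:
  acceleration: native           # attach to XDP where the NIC driver supports it
  mode: dsr                      # direct server return for north-south traffic
  algorithm: maglev              # Maglev consistent hashing
```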

Bandwidth Management

Cilium implements bandwidth management through efficient EDT-based (Earliest Departure Time) rate-limiting with eBPF for container traffic that is egressing a node. This allows significantly reducing transmission tail latencies for applications and avoiding locking under multi-queue NICs, compared to traditional approaches such as HTB (Hierarchy Token Bucket) or TBF (Token Bucket Filter) as used in the bandwidth CNI plugin, for example.
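With the Bandwidth Manager enabled, egress limits are typically declared per pod through the standard Kubernetes bandwidth annotation. A minimal sketch, with hypothetical pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo                     # hypothetical name
  annotations:
    # Cilium's Bandwidth Manager enforces this egress limit via EDT rate-limiting.
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
  - name: app
    image: nginx                 # hypothetical image
```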

Monitoring and Troubleshooting

The ability to gain visibility and troubleshoot issues is fundamental to the operation of any distributed system. While we learned to love tools like tcpdump and ping and while they will always find a special place in our hearts, we strive to provide better tooling for troubleshooting. This includes tooling to provide:

  • Event monitoring with metadata: When a packet is dropped, the tool doesn't just report the source and destination IP of the packet; it also provides the full label information of both the sender and receiver, among a lot of other information.
  • Metrics export via Prometheus: Key metrics are exported via Prometheus for integration with your existing dashboards.
  • Hubble: An observability platform specifically written for Cilium. It provides service dependency maps, operational monitoring and alerting, and application and security visibility based on flow logs.

Getting Started

What is eBPF and XDP?

Berkeley Packet Filter (BPF) is a Linux kernel bytecode interpreter originally introduced to filter network packets, e.g. for tcpdump and socket filters. The BPF instruction set and surrounding architecture have recently been significantly reworked with additional data structures such as hash tables and arrays for keeping state as well as additional actions to support packet mangling, forwarding, encapsulation, etc. Furthermore, a compiler back end for LLVM allows for programs to be written in C and compiled into BPF instructions. An in-kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the BPF bytecode to CPU architecture-specific instructions for native execution efficiency. BPF programs can be run at various hooking points in the kernel such as for incoming packets, outgoing packets, system calls, kprobes, uprobes, tracepoints, etc.

BPF continues to evolve and gain additional capabilities with each new Linux release. Cilium leverages BPF to perform core data path filtering, mangling, monitoring and redirection, and requires BPF capabilities that are in any Linux kernel version 4.8.0 or newer (the latest current stable Linux kernel is 4.14.x).

Many Linux distributions including CoreOS, Debian, Docker's LinuxKit, Fedora, openSUSE and Ubuntu already ship kernel versions >= 4.8.x. You can check your Linux kernel version by running uname -a. If you are not yet running a recent enough kernel, check the documentation of your Linux distribution on how to run Linux kernel 4.9.x or later.
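The version check can be scripted. A minimal sketch that parses `uname -r` and compares it against the 4.8 minimum stated above (adjust the thresholds for the release you deploy):

```shell
required_major=4
required_minor=8
kernel=$(uname -r)            # e.g. "5.15.0-105-generic"
major=${kernel%%.*}           # text before the first dot
rest=${kernel#*.}
minor=${rest%%.*}
minor=${minor%%-*}            # strip any "-flavor" suffix
if [ "$major" -gt "$required_major" ] || \
   { [ "$major" -eq "$required_major" ] && [ "$minor" -ge "$required_minor" ]; }; then
  echo "kernel $kernel is recent enough for Cilium"
else
  echo "kernel $kernel is too old; need >= ${required_major}.${required_minor}"
fi
```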

To read up on the necessary kernel versions to run the BPF runtime, see the section Prerequisites.


XDP is a further step in evolution and enables running a specific flavor of BPF programs from the network driver with direct access to the packet's DMA buffer. This is, by definition, the earliest possible point in the software stack at which programs can be attached, allowing for a programmable, high-performance packet processor in the Linux kernel networking data path.

Further information about BPF and XDP targeted for developers can be found in the BPF and XDP Reference Guide.

To learn more about Cilium, its extensions, and use cases around Cilium and BPF, take a look at the Further Readings section.

Community

Slack

Join the Cilium Slack channel to chat with Cilium developers and other Cilium users. This is a good place to learn about Cilium, ask questions, and share your experiences.

Special Interest Groups (SIG)

See Special Interest groups for a list of all SIGs and their meeting times.

Developer meetings

The Cilium developer community hangs out on Zoom to chat. Everybody is welcome.

eBPF & Cilium Office Hours livestream

We host a weekly community YouTube livestream called eCHO which (very loosely!) stands for eBPF & Cilium Office Hours. Join us live, catch up with past episodes, or head over to the eCHO repo and let us know your ideas for topics we should cover.

Governance

The Cilium project is governed by a group of Maintainers and Committers. How they are selected and govern is outlined in our governance document.

Adopters

A list of adopters of the Cilium project who are deploying it in production, and of their use cases, can be found in file USERS.md.

Roadmap

Cilium maintains a public roadmap. It gives a high-level view of the main priorities for the project, the maturity of different features and projects, and how to influence the project direction.

License

The Cilium user space components are licensed under the Apache License, Version 2.0. The BPF code templates are dual-licensed under the General Public License, Version 2.0 (only) and the 2-Clause BSD License (you can use the terms of either license, at your option).

cilium's People

Contributors

aalemayhu, aanm, aditighag, borkmann, brb, christarazi, dependabot[bot], eloycoto, gandro, giorio94, jibi, joamaki, joestringer, jrajahalme, jrfastab, julianwiedmann, mhofstetter, nebril, pchaigno, pippolo84, qmonnet, raybejjani, renovate-bot, rlenglet, rolinh, sayboras, tgraf, ti-mo, tklauser, vadorovsky


cilium's Issues

Disabling `Conntrack` and then `Policy` on an endpoint causes an error

After creating an endpoint and running these commands:

cilium daemon config DropNotification=false Debug=false
cilium endpoint config ID DropNotification=false Debug=false
cilium endpoint config ID ConntrackAccounting=false
cilium endpoint config ID Conntrack=false

The endpoint becomes unreachable. After running this command:

cilium endpoint config ID Policy=false

I got:
Unable to update endpoint 24493: server error for interface: (uint16) "24493", (500) an unexpected internal error has occurred: "error: "exit status 1" command output: "Join EP id=24493_update ifname=lxca4920\nIn file included from /usr/lib/cilium/bpf_lxc.c:39:\n/usr/lib/cilium/lib/nat46.h:33:2: warning: \"ENABLE_NAT46 requires ENABLE_IPv4 and CONNTRACK\" [-W#warnings]\n#warning \"ENABLE_NAT46 requires ENABLE_IPv4 and CONNTRACK\"\n ^\n1 warning generated.\nIn file included from /usr/lib/cilium/bpf_lxc.c:39:\n/usr/lib/cilium/lib/nat46.h:33:2: warning: \"ENABLE_NAT46 requires ENABLE_IPv4 and CONNTRACK\" [-W#warnings]\n#warning \"ENABLE_NAT46 requires ENABLE_IPv4 and CONNTRACK\"\n ^\n1 warning generated.\n\nProg section '1/0x5fad' rejected: Permission denied (13)!\n - Type: 3\n - Instructions: 483 (0 over limit)\n - License: GPL\n\nVerifier analysis:\n\n0: (bf) r7 = r1\n1: (b7) r0 = 2\n2: (61) r1 = *(u32 *)(r7 +52)\n3: (61) r2 = *(u32 *)(r7 +16)\n4: (15) if r2 == 0x8 goto pc+7\n R0=imm2 R1=inv R2=inv R7=ctx R10=fp\n5: (55) if r2 != 0xdd86 goto pc+459\n R0=imm2 R1=inv R2=inv R7=ctx R10=fp\n6: (61) r3 = *(u32 *)(r7 +80)\n7: (61) r8 = *(u32 *)(r7 +76)\n8: (bf) r2 = r8\n9: (07) r2 += 54\n10: (3d) if r3 >= r2 goto pc+8\n R0=imm2 R1=inv R2=pkt(id=0,off=54,r=0) R3=pkt_end R7=ctx R8=pkt(id=0,off=0,r=0) R10=fp\n11: (05) goto pc+453\n465: (95) exit\n\nfrom 10 to 19: R0=imm2 R1=inv R2=pkt(id=0,off=54,r=0) R3=pkt_end R7=ctx R8=pkt(id=0,off=0,r=0) R10=fp\n19: (7b) *(u64 *)(r10 -40) = r1\n20: (b7) r1 = 0\n21: (63) *(u32 *)(r7 +56) = r1\n22: (b7) r2 = 40\n23: (71) r5 = *(u8 *)(r8 +20)\ninvalid access to packet, off=20 size=1, R8(id=0,off=0,r=0)\n\nError filling program arrays!\nFailed to retrieve (e)BPF data!\n""

I had to enable Conntrack again and then I could disable the Policy

EDIT: I noticed this now: ENABLE_NAT46 requires ENABLE_IPv4 and CONNTRACK but this should be seen when disabling Conntrack and not the Policy

Dumping endpoint results in null log entries

@aanm Can we help this?

  "status": {
    "log": [
      {
        "status": {
          "code": 0,
          "msg": "Regenerated BPF code"
        },
        "timestamp": "2016-09-23T01:55:09.639295681-07:00"
      },
      {
        "status": {
          "code": 0,
          "msg": "Policy regenerated"
        },
        "timestamp": "2016-09-23T01:55:09.701288704-07:00"
      },
      null,
      null,
      null,
      null,
      null,
      null,
      null,
      null,
      null,
      null,
[...]

How does routing happen?

Hey Cilium team,

This is great, just fascinating. I couldn't attend LinuxCon NA 2016 and missed this entirely. I did the "compare networking options for ultra-low-latency" presentation in Tokyo and Berlin, so it was particularly appealing.

How does Cilium do routing? I get that you assign an IPv6 address to each container, of which the host may or may not be aware. BPF gives you the ability to control ingress to and egress from that container's address.

So a packet leaves a container x on host a, targeted at container y on host b. How do you know to route the x->y packet from a->b? Are you using the host's routing tables like Calico? Or is it something else? And if so, is there any type of route aggregation?

Really impressed.

GCE on OSX

$ ./01-infrastructure-gcp.sh
Updated property [compute/region].
Updated property [compute/zone].
ERROR: (gcloud.compute.networks.create) The required property [project] is not currently set.
You may set it for your current workspace by running:

  $ gcloud config set project VALUE

or it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT]
$ gcloud -v
Google Cloud SDK 138.0.0
bq 2.0.24
bq-nix 2.0.24
core 2016.12.09
core-nix 2016.11.07
gcloud
gsutil 4.22
gsutil-nix 4.18

GET /lb/services "invalid service ID 0"

Starting cilium, I see the following in the log:

2017-01-08T22:51:53.054-08:00 cilium-master DEBU 666 processServerError > Processing error 500: an unexpected internal error has occurred: "invalid service ID 0"
2017-01-08T22:51:53.054-08:00 cilium-master ERRO 667 processServerError > Error while processing request '&{Method:POST URL:/lb/service?rev-nat=true Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[User-Agent:[go-resty v0.8 - https://github.com/go-resty/resty] Connection:[close] Content-Length:[89] Content-Type:[application/json; charset=utf-8]] Body:0xc821ab1c40 ContentLength:89 TransferEncoding:[] Close:true Host: Form:map[] PostForm:map[] MultipartForm:<nil> Trailer:map[] RemoteAddr:@ RequestURI:/lb/service?rev-nat=true TLS:<nil> Cancel:<nil>}': "invalid service ID 0"```

Unable to run cilium with clang-3.9

Steps to reproduce.

  1. Replace clang in root's Dockerfile with 3.9.0.
  2. Run the docker-compose example provided here: https://github.com/cilium/cilium/tree/master/examples/docker-compose
  3. The error happens on the first container started. For example: docker run -d --name wine --net cilium --label io.cilium.service.wine noironetworks/nettools sleep 30000
cilium_1         | Join EP id=29898 ifname=lxcd7915
cilium_1         | 
cilium_1         | Prog section 'from-container' rejected: Permission denied (13)!
cilium_1         |  - Type:         3
cilium_1         |  - Instructions: 3276 (0 over limit)
cilium_1         |  - License:      GPL
cilium_1         | 
cilium_1         | Verifier analysis:
cilium_1         | 
cilium_1         | Skipped 18658 bytes, use 'verb' option for the full verbose log.
cilium_1         | [...]
cilium_1         | r1
cilium_1         | 112: (15) if r2 == 0x0 goto pc+9
cilium_1         |  R0=inv R1=inv63 R2=inv R3=imm2 R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 113: (b7) r1 = 2
cilium_1         | 114: (63) *(u32 *)(r6 +48) = r1
cilium_1         | 115: (63) *(u32 *)(r6 +52) = r8
cilium_1         | 116: (bf) r1 = r6
cilium_1         | 117: (18) r2 = 0x14b75f00
cilium_1         | 119: (b7) r3 = 2
cilium_1         | 120: (85) call 12
cilium_1         | 121: safe
cilium_1         | 
cilium_1         | from 112 to 122: safe
cilium_1         | 
cilium_1         | from 1296 to 1388: R0=inv R6=ctx R7=imm0 R8=imm29898 R9=inv48 R10=fp
cilium_1         | 1388: (b7) r1 = 58
cilium_1         | 1389: (73) *(u8 *)(r10 -116) = r1
cilium_1         | 1390: (63) *(u32 *)(r10 -120) = r7
cilium_1         | 1391: (71) r1 = *(u8 *)(r10 -115)
cilium_1         | 1392: (bf) r2 = r1
cilium_1         | 1393: (47) r2 |= 2
cilium_1         | 1394: (73) *(u8 *)(r10 -115) = r2
cilium_1         | 1395: (47) r1 |= 14850
cilium_1         | 1396: (61) r2 = *(u32 *)(r6 +8)
cilium_1         | 1397: (55) if r2 != 0x0 goto pc+1
cilium_1         |  R0=inv R1=inv R2=inv R6=ctx R7=imm0 R8=imm29898 R9=inv48 R10=fp
cilium_1         | 1398: (61) r2 = *(u32 *)(r6 +68)
cilium_1         | 1399: (b7) r3 = 2
cilium_1         | 1400: (73) *(u8 *)(r10 -40) = r3
cilium_1         | 1401: (b7) r3 = 8
cilium_1         | 1402: (73) *(u8 *)(r10 -39) = r3
cilium_1         | 1403: (b7) r3 = 29898
cilium_1         | 1404: (6b) *(u16 *)(r10 -38) = r3
cilium_1         | 1405: (63) *(u32 *)(r10 -36) = r2
cilium_1         | 1406: (b7) r2 = 0
cilium_1         | 1407: (63) *(u32 *)(r10 -32) = r2
cilium_1         | 1408: (63) *(u32 *)(r10 -28) = r1
cilium_1         | 1409: (63) *(u32 *)(r10 -24) = r2
cilium_1         | 1410: (bf) r4 = r10
cilium_1         | 1411: (07) r4 += -40
cilium_1         | 1412: (bf) r1 = r6
cilium_1         | 1413: (18) r2 = 0x2c8ef300
cilium_1         | 1415: (18) r3 = 0xffffffff
cilium_1         | 1417: (b7) r5 = 20
cilium_1         | 1418: (85) call 25
cilium_1         | 1419: (bf) r2 = r10
cilium_1         | 1420: (07) r2 += -136
cilium_1         | 1421: (bf) r3 = r10
cilium_1         | 1422: (07) r3 += -80
cilium_1         | 1423: (18) r1 = 0x14b75780
cilium_1         | 1425: (b7) r4 = 0
cilium_1         | 1426: (85) call 2
cilium_1         | 1427: (67) r0 <<= 32
cilium_1         | 1428: (c7) r0 s>>= 63
cilium_1         | 1429: (bf) r8 = r0
cilium_1         | 1430: (57) r8 &= -155
cilium_1         | 1431: (65) if r0 s> 0xffffffff goto pc+148
cilium_1         |  R0=inv R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1432: (05) goto pc-1330
cilium_1         | 103: safe
cilium_1         | 
cilium_1         | from 1431 to 1580: R0=inv R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1580: (61) r1 = *(u32 *)(r10 -136)
cilium_1         | 1581: (55) if r1 != 0xdf0 goto pc+104
cilium_1         |  R0=inv R1=inv R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1582: (61) r1 = *(u32 *)(r10 -132)
cilium_1         | 1583: (67) r1 <<= 32
cilium_1         | 1584: (77) r1 >>= 32
cilium_1         | 1585: (55) if r1 != 0x0 goto pc+100
cilium_1         |  R0=inv R1=inv32 R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1586: (61) r1 = *(u32 *)(r10 -128)
cilium_1         | 1587: (bf) r2 = r1
cilium_1         | 1588: (57) r2 &= 65535
cilium_1         | 1589: (55) if r2 != 0xa8c0 goto pc+96
cilium_1         |  R0=inv R1=inv R2=inv48 R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1590: (55) if r1 != 0xb22a8c0 goto pc+93
cilium_1         |  R0=inv R1=inv R2=inv48 R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1591: (61) r1 = *(u32 *)(r6 +80)
cilium_1         | 1592: (61) r2 = *(u32 *)(r6 +76)
cilium_1         | 1593: (69) r3 = *(u16 *)(r10 -122)
cilium_1         | 1594: (b7) r4 = 65280
cilium_1         | 1595: (2d) if r4 > r3 goto pc+2
cilium_1         |  R0=inv R1=pkt_end R2=pkt(id=0,off=0,r=0) R3=inv48 R4=imm65280 R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1596: (57) r3 &= 255
cilium_1         | 1597: (15) if r3 == 0xff goto pc+194
cilium_1         |  R0=inv R1=pkt_end R2=pkt(id=0,off=0,r=0) R3=inv56 R4=imm65280 R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1598: (18) r8 = 0xffffff7a
cilium_1         | 1600: (bf) r3 = r2
cilium_1         | 1601: (07) r3 += 54
cilium_1         | 1602: (2d) if r3 > r1 goto pc-1500
cilium_1         |  R0=inv R1=pkt_end R2=pkt(id=0,off=0,r=54) R3=pkt(id=0,off=54,r=54) R4=imm65280 R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1603: (b7) r1 = 0
cilium_1         | 1604: (63) *(u32 *)(r6 +56) = r1
cilium_1         | 1605: (71) r3 = *(u8 *)(r10 -116)
cilium_1         | 1606: (7b) *(u64 *)(r10 -160) = r3
cilium_1         | 1607: (61) r2 = *(u32 *)(r2 +50)
cilium_1         | 1608: (57) r2 &= -65536
cilium_1         | 1609: (dc) (u32) r2 endian (u32) r0
cilium_1         | 1610: (63) *(u32 *)(r10 -8) = r2
cilium_1         | 1611: (61) r3 = *(u32 *)(r6 +8)
cilium_1         | 1612: (55) if r3 != 0x0 goto pc+1
cilium_1         |  R0=inv R1=imm0 R2=inv R3=inv R4=imm65280 R6=ctx R7=imm0 R8=inv R9=inv48 R10=fp
cilium_1         | 1613: (61) r3 = *(u32 *)(r6 +68)
cilium_1         | 1614: (b7) r7 = 2
cilium_1         | 1615: (73) *(u8 *)(r10 -80) = r7
cilium_1         | 1616: (73) *(u8 *)(r10 -79) = r7
cilium_1         | 1617: (b7) r4 = 29898
cilium_1         | 1618: (6b) *(u16 *)(r10 -78) = r4
cilium_1         | 1619: (63) *(u32 *)(r10 -76) = r3
cilium_1         | 1620: (63) *(u32 *)(r10 -72) = r2
cilium_1         | 1621: (b7) r2 = 259
cilium_1         | 1622: (63) *(u32 *)(r10 -68) = r2
cilium_1         | 1623: (63) *(u32 *)(r10 -64) = r1
cilium_1         | 1624: (bf) r4 = r10
cilium_1         | 1625: (07) r4 += -80
cilium_1         | 1626: (bf) r1 = r6
cilium_1         | 1627: (18) r2 = 0x2c8ef300
cilium_1         | 1629: (18) r3 = 0xffffffff
cilium_1         | 1631: (b7) r5 = 20
cilium_1         | 1632: (85) call 25
cilium_1         | 1633: (bf) r2 = r10
cilium_1         | 1634: (07) r2 += -8
cilium_1         | 1635: (18) r1 = 0x2c8ef900
cilium_1         | 1637: (85) call 1
cilium_1         | 1638: (18) r8 = 0xffffff68
cilium_1         | 1640: (7b) *(u64 *)(r10 -152) = r0
cilium_1         | 1641: (15) if r0 == 0x0 goto pc-1539
cilium_1         |  R0=map_value(ks=4,vs=104) R6=ctx R7=imm2 R8=inv R9=inv48 R10=fp fp-152=map_value_or_null
cilium_1         | 1642: (79) r2 = *(u64 *)(r10 -152)
cilium_1         | 1643: (79) r1 = *(u64 *)(r2 +8)
cilium_1         | R2 invalid mem access 'map_value_or_null'
cilium_1         | 
cilium_1         | Error fetching program/map!
cilium_1         | Failed to retrieve (e)BPF data!

Feature: Policy verification testsuite

  • Tests covering correctness of policy model and corresponding enforcement
  • Tests covering precedence conflict potential and resolving capability
  • Tests covering translation into BPF enforcement layer

Getting Started with Vagrant is not working

Hello

I am following the getting started with Vagrant

21:04 $ vagrant -v
Vagrant 1.8.5
21:06 $ VBoxManage --version
4.3.38r106717

With the command

git clone git@github.com:cilium/cilium.git ~/cilium-test
cd ~/cilium-test
NUM_NODES=1 ./contrib/vagrant/start.sh

The master is good but the node is failing

==> cilium-node-2: ls -d ./* | grep -vE Makefile | xargs rm -rf
==> cilium-node-2: make[1]: Leaving directory `/home/vagrant/go/src/github.com/cilium/cilium/contrib/packaging/docker'
==> cilium-node-2: make[1]: Entering directory `/home/vagrant/go/src/github.com/cilium/cilium/plugins'
==> cilium-node-2: make[2]: Entering directory `/home/vagrant/go/src/github.com/cilium/cilium/plugins/cilium-docker'
==> cilium-node-2: go build -ldflags "-X "github.com/cilium/cilium/common".Version=0.1.0.dev" -o cilium-docker
==> cilium-node-2: # github.com/cilium/cilium/plugins/cilium-docker
==> cilium-node-2: compile: reading input: EOF
==> cilium-node-2: make[2]: Leaving directory `/home/vagrant/go/src/github.com/cilium/cilium/plugins/cilium-docker'
==> cilium-node-2: make[2]: *** [cilium-docker] Error 2
==> cilium-node-2: make[1]: Leaving directory `/home/vagrant/go/src/github.com/cilium/cilium/plugins'
==> cilium-node-2: make[1]: *** [cilium-docker] Error 2
==> cilium-node-2: make: *** [plugins] Error 2
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

What am I doing wrong?
Thanks

Proposal: Write Prometheus exporter

Write a Prometheus exporter that integrates with cilium monitor and exposes a set of Cilium metrics to be scraped by a Prometheus server.

Policy is accepted when it shouldn't

vagrant@k8s1:~/go/src/github.com/noironetworks/cilium-net$ sudo cilium endpoint list
ENDPOINT ID   LABEL ID   LABELS (source:key[=value])        IPv6                   IPv4
29898         263        cilium:io.cilium.service.client    f00d::c0a8:2115:74ca   10.21.247.232
                         cilium:io.cilium.service.client5
                         cilium:io.cilium.service.client6
33115         259        cilium:io.cilium.service.wine      f00d::c0a8:2115:815b   10.21.242.54
35542         257        cilium:io.cilium.service.bar       f00d::c0a8:2115:8ad6   10.21.28.238
vagrant@k8s1:~/go/src/github.com/noironetworks/cilium-net$ sudo cilium policy dump
{
  "name": "io.cilium",
  "children": {
    "service": {
      "name": "service",
      "rules": [
        {
          "coverage": [
            {
              "key": "wine",
              "source": "cilium"
            }
          ],
          "allow": [
            {
              "action": "accept",
              "label": {
                "key": "bar",
                "source": "cilium"
              }
            },
            {
              "action": "accept",
              "label": {
                "key": "host",
                "source": "reserved"
              }
            }
          ]
        },
        {
          "coverage": [
            {
              "key": "bar",
              "source": "cilium"
            }
          ],
          "allow": [
            {
              "action": "accept",
              "label": {
                "key": "client",
                "source": "cilium"
              }
            },
            {
              "action": "accept",
              "label": {
                "key": "host",
                "source": "reserved"
              }
            }
          ]
        }
      ]
    }
  }
}
vagrant@k8s1:~/go/src/github.com/noironetworks/cilium-net$ sudo cilium policy allowed -s 263 -d 257
Resolving policy for context &{Trace:1 Logging:0xc8218ce660 From:[cilium:io.cilium.service.client cilium:io.cilium.service.client5 cilium:io.cilium.service.client6] To:[cilium:io.cilium.service.bar]}
Root rules decision: undecided
Covered by child: io.cilium.service
Rule &{[cilium:wine] [{accept cilium:bar} {accept reserved:host}]} has no coverage
Matching coverage for rule &{Coverage:[cilium:bar] Allow:[{Action:accept Label:cilium:client} {Action:accept Label:reserved:host}]} 
Label cilium:io.cilium.service.client matched in rule &{Action:accept Label:cilium:client}
No match in allow rule &{Action:accept Label:reserved:host}
... no conclusion after io.cilium.service rules, current decision: accept
No matching children in io.cilium.service
Root children decision: accept
Final tree decision: accept

Cilium causes host applications to use IPv6 instead of IPv4

I had to delete cilium_host as a workaround.

[aanm@AM-laptop ~]$ ping irc.freenode.net
PING irc.freenode.net(leguin-admin.acc.umu.se (2001:6b0:e:2a18::118)) 56 data bytes
^C
--- irc.freenode.net ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1021ms
[aanm@AM-laptop ~]$ sudo ip l d cilium_host
[aanm@AM-laptop ~]$ ping irc.freenode.net
PING chat.freenode.net (130.239.18.119) 56(84) bytes of data.
64 bytes from leguin.acc.umu.se (130.239.18.119): icmp_seq=1 ttl=45 time=92.1 ms
^C
--- chat.freenode.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 92.176/92.176/92.176/0.000 ms

Feature: Make KV store optional

The KV store backend is only required when policy enforcement is enabled. Make the KV store backend optional when policy enforcement is disabled.

cilium-docker: provide real subnet/gateway if IPv4 is enabled

We still provide a stub IPAMConfig even if IPv4 is enabled:

        "IPAM": {
            "Driver": "cilium",
            "Options": null,
            "Config": [
                {
                    "Subnet": "0.0.0.0/0",
                    "Gateway": "1.1.1.1/32"
                },
                {
                    "Subnet": "f00d::c0a8:210b:0:0/112",
                    "Gateway": "f00d::c0a8:210b:0:0/128"
                }
            ]
        },

Endpoint list shows Status OK even if bpf program can't be built

When running cilium endpoint list to check endpoint status, the status presented can be deceiving because of the "Policy regenerated" messages.

Endpoint status: 
2016-12-01T15:42:40Z - OK - Policy regenerated
2016-12-01T15:42:05Z - OK - Policy regenerated
2016-12-01T15:41:38Z - OK - Policy regenerated
2016-12-01T15:40:39Z - Failure - error: "exit status 1" command output: "Join EP id=29898_update ifname=lxcefaff

We should prioritize the status messages: in this case, the bpf recompilation failure is more important than "Policy regenerated", so the endpoint list should show Status Failure.

Alternatively, stop regenerating policy for the endpoint until the bpf program has been successfully recompiled.
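The suggested prioritization could be implemented by reporting the worst status in the recent log rather than the most recent entry. A minimal sketch in Go, with assumed types rather than Cilium's actual endpoint model:

```go
package main

import "fmt"

// Illustrative status model (assumed, not Cilium's actual types): a
// Failure anywhere in the recent status log outranks later OK entries.
type statusCode int

const (
	statusOK statusCode = iota
	statusWarning
	statusFailure
)

type statusEntry struct {
	Code statusCode
	Msg  string
}

// overallStatus returns the highest-priority (worst) code in the log.
func overallStatus(log []statusEntry) statusCode {
	worst := statusOK
	for _, e := range log {
		if e.Code > worst {
			worst = e.Code
		}
	}
	return worst
}

func main() {
	log := []statusEntry{
		{statusOK, "Policy regenerated"},
		{statusOK, "Policy regenerated"},
		{statusFailure, `error: "exit status 1" while joining endpoint`},
	}
	if overallStatus(log) == statusFailure {
		fmt.Println("Status: Failure") // what `cilium endpoint list` would show
	}
}
```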

vagrant/docker example: cilium monitor doesn't show anything

Hello!
I am still in trouble with my tests with cilium :) Nothing too bad I am just trying to write something down to help the community to discover this tool but I am still having some problem.

Steps to reproduce:

$ git clone [email protected]:cilium/cilium.git ~/cilium-test
$ cd ~/cilium-test
$ NUM_NODES=1 ./contrib/vagrant/start.sh
$ vagrant ssh
$ sudo su root
$ curl -L "https://github.com/docker/compose/releases/download/1.8.1/docker-compose-$(uname -s)-$(uname -m)" > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ exit
$ docker rm -f cilium-consul
$ docker rm -f cilium-etcd
$ docker network create --ipv6 --subnet ::1/112 --ipam-driver cilium --driver cilium cilium
$ curl -SL https://raw.githubusercontent.com/cilium/cilium/master/examples/docker-compose/docker-compose.yml > ~/docker-compose.yml
$ cd ~/
$ IFACE=eth1 docker-compose up -d 

$ docker run -d --name server --net cilium --label io.cilium.service.server alpine sleep 30000
$ docker run -d --name client --net cilium --label io.cilium.service.client alpine sleep 30000

$ cilium endpoint list

All is fine at this point: I have a list of the 2 endpoints created before, and I can now ping server from client. Everything is working! 👍 Let's try to understand why with the command sudo cilium monitor. But if I ping again, I can't see anything in the stream; the command shows no output.
What am I missing?
Thanks

[KNOWN ISSUE] Clang 3.7.0 bug when compiling bpf_lb

Works fine with clang versions >= 3.7.1. Kept here for reference.

clang -Iinclude -D__NR_CPUS__=8 -O2 -target bpf -I. -Wall -Werror -c bpf_lb.c -o bpf_lb.o
fatal error: error in backend: Cannot select: 0x558bfae5b6d0: ch = brind 0x558bfae5bdc0:1, 0x558bfae5bdc0 [ORD=1] [ID=9]
  0x558bfae5bdc0: i64,ch = load 0x558bfae5e030:1, 0x558bfae759f0, 0x558bfae5d5c8<LD8[JumpTable]> [ORD=1] [ID=8]
    0x558bfae759f0: i64 = add 0x558bfae5ca58, 0x558bfae56430 [ORD=1] [ID=7]
      0x558bfae5ca58: i64 = shl 0x558bfae5e030, 0x558bfae57210 [ORD=1] [ID=6]
        0x558bfae5e030: i64,ch = CopyFromReg 0x558bfada5cc0, 0x558bfae75b18 [ORD=1] [ID=5]
          0x558bfae75b18: i64 = Register %vreg443 [ID=1]
        0x558bfae57210: i64 = Constant<3> [ID=4]
      0x558bfae56430: i64 = JumpTable<0> [ID=2]
    0x558bfae5d5c8: i64 = Constant<0> [ID=3]
In function: from_netdev
clang: error: clang frontend command failed with exit code 70 (use -v to see invocation)
clang version 3.7.0 (tags/RELEASE_370/final)
Target: bpf
Thread model: posix
clang: note: diagnostic msg: PLEASE submit a bug report to http://llvm.org/bugs/ and include the crash backtrace, preprocessed source, and associated run script.
clang: note: diagnostic msg: 
********************

PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
Preprocessed source(s) and associated run script(s) are located at:
clang: note: diagnostic msg: /tmp/bpf_lb-13877b.c
clang: note: diagnostic msg: /tmp/bpf_lb-13877b.sh
clang: note: diagnostic msg: 

********************
Makefile:22: recipe for target 'bpf_lb.o' failed

bpf_lb-13877b.c.txt
bpf_lb-13877b.sh.txt

Better feedback loop when compilation fails

Because labels aren't available at network plumbing time, we have to request the labels asynchronously and then generate and compile the code after the container has already started. If generation or compilation fails for some reason, we have no choice but to log the error in the logfile, which the user often never sees. This leaves the user with an unconnected container.

Provide better feedback:

  • Work on getting labels earlier and do generation synchronously when orchestration system asks for networking to be setup.
  • Status field in cilium endpoint structure indicating the failure
  • Consider delaying returning from networking call until labels have been retrieved

Large amount of log messages when cilium can't connect to kubernetes.

Since we use the Kubernetes client code as a dependency to contact the Kubernetes master, and that code contains its own logger, the Cilium log file fills up whenever the connection between the Kubernetes master and Cilium stops.

ERROR: logging before flag.Parse: E0923 10:53:38.381441   15863 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/endpoints?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:39.382688   15863 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/services?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:39.382702   15863 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/endpoints?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:40.406308   15863 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/services?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:40.406339   15863 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/endpoints?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:41.410083   15863 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/services?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:41.410328   15863 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/endpoints?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:42.411031   15863 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/services?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:42.411090   15863 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/endpoints?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:43.412465   15863 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/endpoints?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused
ERROR: logging before flag.Parse: E0923 10:53:43.412468   15863 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://[f00d::c0a8:210b:0:ffff]:8080/api/v1/services?resourceVersion=0: dial tcp [f00d::c0a8:210b:0:ffff]:8080: getsockopt: connection refused

bpf: kernel crypto support e.g. ktls

Options:

  • ktls for socket progs (depends on the RX side being available first for ktls; high prio)
  • Provide BPF helper API to interact with cryptographic subsystems in the kernel
  • Encrypt/decrypt with xfrm and/or MACSec
  • Key must be selectable based on BPF logic

Add 'configuration' action to policy

Allow endpoint options (IPv4/IPv6/NAT46/Conntrack/...) to be set via policy

"coverage": ["io.foo.groupA"]
"configuration": [{ "IPv4": "true", "NAT46": "false" }]

Feature: BPF fragmentation handling

  • Capability to do IP fragment reassembly in front of a BPF program
  • Ability for a BPF program to queue skbs for reassembly
  • Metadata for a BPF to know whether reassembly was performed
  • Prevent an skb from being queued into reassembly multiple times
  • Don't require MTU of net_device to cover maximum reassembled packet size

Support k8s Ingress resources

If a user sets up their cluster in a similar way:

+--------+       +--------------------+         +-----------------+
|        |       |cilium --lb -d eth0 |         |  cilium -t vxlan|
|internet|       |         A          |         |          B      |
|        +------->eth0            eth1<--------->eth0             |
+--------+       +--------------------+         +-----------------+

The internet packets are received in B from A, as expected, but unfortunately cilium in -t vxlan mode is not ready to read those packets received from A.

Until this feature is implemented, we suggest the following setup as a workaround:

+--------+       +--------------------+         +-----------------+
|        |       |cilium --lb -d eth0 |         |  cilium -d eth0 |
|internet|       |         A          |         |          B      |
|        +------->eth0            eth1<--------->eth0             |
+--------+       +--------------------+         +-----------------+

Thus, cilium on machine B will be able to receive packets from A and forward them to the proper backend container running on B.

Duplicated log entries

2016-07-01T12:24:45.303-07:00 node1 INFO 001 initEnv > Generated IPv6 prefix: beef::c0a8:e60b:0
2016-07-01T12:24:45.303-07:00 node1 INFO 002 initEnv > Generated IPv4 range: 10.11.0.0/16
2016-07-01T12:24:45.726-07:00 node1 INFO 003 createConsulClient > Consul client ready
2016-07-01T12:24:45.726-07:00 node1 INFO 004 SyncState > Recovering old running endpoints...
2016-07-01T12:24:45.727-07:00 node1 INFO 005 run > UI is disabled
2016-07-01T12:24:45.728-07:00 node1 INFO 006 Start > Listening backend on "/var/run/cilium/cilium.sock"
2016-07-01T12:24:45.770-07:00 node1 WARN 007 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 1
2016-07-01T12:24:45.800-07:00 node1 WARN 008 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 1
2016-07-01T12:24:46.770-07:00 node1 WARN 009 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 2
2016-07-01T12:24:46.800-07:00 node1 WARN 00a updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 2
2016-07-01T12:24:48.727-07:00 node1 WARN 00b 1 > Unable to intall k8s watcher for URL http://[beef::c0a8:e60b:ffff]:8080/apis/extensions/v1beta1/networkpolicies: Get http://[beef::c0a8:e60b:ffff]:8080/apis/extensions/v1beta1/networkpolicies: dial tcp [beef::c0a8:e60b:ffff]:8080: getsockopt: no route to host
2016-07-01T12:24:48.770-07:00 node1 WARN 00c updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 3
2016-07-01T12:24:48.801-07:00 node1 WARN 00d updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 3
2016-07-01T12:24:51.770-07:00 node1 WARN 00e updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 4
2016-07-01T12:24:51.801-07:00 node1 WARN 00f updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 4
2016-07-01T12:24:55.770-07:00 node1 ERRO 010 createContainer > It was impossible to store the SecLabel 256 for docker endpoint ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e'
2016-07-01T12:24:55.801-07:00 node1 ERRO 011 createContainer > It was impossible to store the SecLabel 256 for docker endpoint ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e'
2016-07-01T12:25:15.763-07:00 node1 WARN 012 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 1
2016-07-01T12:25:16.764-07:00 node1 WARN 013 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 2
2016-07-01T12:25:18.764-07:00 node1 WARN 014 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 3
2016-07-01T12:25:21.764-07:00 node1 WARN 015 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 4
2016-07-01T12:25:25.764-07:00 node1 ERRO 016 createContainer > It was impossible to store the SecLabel 256 for docker endpoint ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e'
2016-07-01T12:25:45.771-07:00 node1 WARN 017 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 1
2016-07-01T12:25:46.771-07:00 node1 WARN 018 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 2
2016-07-01T12:25:48.772-07:00 node1 WARN 019 updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 3
2016-07-01T12:25:51.772-07:00 node1 WARN 01a updateContainer > Something went wrong, the docker ID 'bf8c3beac1ef790a6cb59f667224f81d7ee7094c004b72083bc815e3acefa52e' was not locally found. Attempt... 4

When adding the same policy more than once its rules aren't merged

When a user adds the same policy more than once, the rules of the policy are duplicated instead of merged.

$ cilium policy dump
{
  "name": "io.cilium"
}
$ cilium policy import ./examples/docker-compose/docker.policy  
$ cilium policy dump
{
  "name": "io.cilium",
  "children": {
    "service": {
      "name": "service",
      "rules": [
        {
          "coverage": [
            {
              "key": "wine",
              "source": "cilium"
            }
          ],
          "allow": [
            {
              "action": "accept",
              "label": {
                "key": "bar",
                "source": "cilium"
              }
            },
            {
              "action": "accept",
              "label": {
                "key": "host",
                "source": "reserved"
              }
            }
          ]
        },
        {
          "coverage": [
            {
              "key": "bar",
              "source": "cilium"
            }
          ],
          "allow": [
            {
              "action": "accept",
              "label": {
                "key": "client",
                "source": "cilium"
              }
            },
            {
              "action": "accept",
              "label": {
                "key": "host",
                "source": "reserved"
              }
            }
          ]
        }
      ]
    }
  }
}
$ cilium policy import ./examples/docker-compose/docker.policy
$ cilium policy dump
{
  "name": "io.cilium",
  "children": {
    "service": {
      "name": "service",
      "rules": [
        {
          "coverage": [
            {
              "key": "wine",
              "source": "cilium"
            }
          ],
          "allow": [
            {
              "action": "accept",
              "label": {
                "key": "bar",
                "source": "cilium"
              }
            },
            {
              "action": "accept",
              "label": {
                "key": "host",
                "source": "reserved"
              }
            }
          ]
        },
        {
          "coverage": [
            {
              "key": "bar",
              "source": "cilium"
            }
          ],
          "allow": [
            {
              "action": "accept",
              "label": {
                "key": "client",
                "source": "cilium"
              }
            },
            {
              "action": "accept",
              "label": {
                "key": "host",
                "source": "reserved"
              }
            }
          ]
        },
        {
          "coverage": [
            {
              "key": "wine",
              "source": "cilium"
            }
          ],
          "allow": [
            {
              "action": "accept",
              "label": {
                "key": "bar",
                "source": "cilium"
              }
            },
            {
              "action": "accept",
              "label": {
                "key": "host",
                "source": "reserved"
              }
            }
          ]
        },
        {
          "coverage": [
            {
              "key": "bar",
              "source": "cilium"
            }
          ],
          "allow": [
            {
              "action": "accept",
              "label": {
                "key": "client",
                "source": "cilium"
              }
            },
            {
              "action": "accept",
              "label": {
                "key": "host",
                "source": "reserved"
              }
            }
          ]
        }
      ]
    }
  }
}

Suggestions:

  1. Make the Allow attribute of PolicyRuleConsumers a map:

type PolicyRuleConsumers struct {
    Coverage []Label            `json:"coverage,omitempty"`
-   Allow    []AllowRule        `json:"allow"`
+   Allow    map[AllowRule]bool `json:"allow"`
}

  2. Add a SHA256Sum() function to the PolicyRule interface. This makes it possible to compute a unique sha256sum for each PolicyRule and to add a PolicyRule only if no existing rule has the same sum:
type PolicyRule interface {
    Allows(ctx *SearchContext) ConsumableDecision
    Resolve(node *PolicyNode) error
    SHA256Sum() string
}

I prefer 1) since it's cleaner.

ping @tgraf
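Suggestion 2 can be sketched in Go. The types below are minimal stand-ins modeled on the structs quoted above, not the actual Cilium sources:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// Minimal stand-in policy types (field names follow the suggestion above).
type Label struct {
	Key    string `json:"key"`
	Source string `json:"source"`
}

type AllowRule struct {
	Action string `json:"action"`
	Label  Label  `json:"label"`
}

type PolicyRuleConsumers struct {
	Coverage []Label     `json:"coverage,omitempty"`
	Allow    []AllowRule `json:"allow"`
}

// SHA256Sum returns a digest of the rule's canonical JSON encoding.
func (r PolicyRuleConsumers) SHA256Sum() string {
	b, _ := json.Marshal(r)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

// addRule appends rule only if no existing rule has the same digest.
func addRule(rules []PolicyRuleConsumers, rule PolicyRuleConsumers) []PolicyRuleConsumers {
	for _, existing := range rules {
		if existing.SHA256Sum() == rule.SHA256Sum() {
			return rules // duplicate: keep the list unchanged
		}
	}
	return append(rules, rule)
}

func main() {
	r := PolicyRuleConsumers{
		Coverage: []Label{{Key: "bar", Source: "cilium"}},
		Allow:    []AllowRule{{Action: "accept", Label: Label{Key: "client", Source: "cilium"}}},
	}
	var rules []PolicyRuleConsumers
	rules = addRule(rules, r)
	rules = addRule(rules, r) // second import of the same policy
	fmt.Println(len(rules))   // 1
}
```

With this in place, importing docker.policy twice would leave the rule list unchanged instead of doubling it as shown in the dump above.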

Policy evaluation for empty source context is performed

2016-08-11T09:00:28.927-07:00 ubuntu DEBU 089 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[reserved:host] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.927-07:00 ubuntu DEBU 08a evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[reserved:world] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.928-07:00 ubuntu DEBU 08b evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.930-07:00 ubuntu DEBU 08c evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.937-07:00 ubuntu DEBU 08d evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.940-07:00 ubuntu DEBU 08e evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.946-07:00 ubuntu DEBU 08f evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.947-07:00 ubuntu DEBU 090 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.948-07:00 ubuntu DEBU 091 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.949-07:00 ubuntu DEBU 092 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.957-07:00 ubuntu DEBU 093 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.958-07:00 ubuntu DEBU 094 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.965-07:00 ubuntu DEBU 095 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.967-07:00 ubuntu DEBU 096 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.968-07:00 ubuntu DEBU 097 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.970-07:00 ubuntu DEBU 098 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.971-07:00 ubuntu DEBU 099 evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.972-07:00 ubuntu DEBU 09a evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.974-07:00 ubuntu DEBU 09b evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.976-07:00 ubuntu DEBU 09c evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[cilium:io.cilium.server] To:[cilium:io.cilium.client]}
2016-08-11T09:00:28.976-07:00 ubuntu DEBU 09d evaluateConsumerSource > Evaluating policy for &{Trace:0 Logging:<nil> From:[cilium:io.cilium.client] To:[cilium:io.cilium.client]}
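A cheap fix would be to short-circuit the evaluation when the source label set is empty, since an empty From can never match an allow rule. A minimal sketch with assumed types (evaluateConsumerSource here is a stand-in, not the actual function signature):

```go
package main

import "fmt"

// Stand-in for the search context from the trace output above.
type searchContext struct {
	From []string
	To   []string
}

// evaluateConsumerSource skips evaluation entirely when the source
// label set is empty: there is nothing to match against, and nothing
// worth logging.
func evaluateConsumerSource(ctx searchContext) string {
	if len(ctx.From) == 0 {
		return "undecided" // empty source context: skip
	}
	// ... real rule matching would go here ...
	return "evaluated"
}

func main() {
	fmt.Println(evaluateConsumerSource(searchContext{To: []string{"cilium:io.cilium.client"}}))
	fmt.Println(evaluateConsumerSource(searchContext{From: []string{"reserved:host"}, To: []string{"cilium:io.cilium.client"}}))
}
```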

netperf fails to run if the container's MTU is 50 bytes less than the external interface's

Setup:

  • External interface with a MTU of 1460
  • container's MTU set to 1410 (50 bytes less)
  • cilium running with IPv4 and VXLAN mode on
$ ip l 
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 42:01:ac:10:00:14 brd ff:ff:ff:ff:ff:ff
6: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ce:f9:a6:c3:81:98 brd ff:ff:ff:ff:ff:ff
7: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8a:c4:b0:1e:67:7b brd ff:ff:ff:ff:ff:ff
8: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 06:18:46:0d:cc:03 brd ff:ff:ff:ff:ff:ff
102: lxc92f04@if101: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 8e:42:fa:41:87:b3 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Running netperf -l 30 -t TCP_STREAM -H SOME_IPV4, the test fails after 4 to 7 seconds.

With an MTU of 1420 everything runs fine, and IPv6 (in VXLAN mode) with 1410 also runs fine; this only happens when VXLAN packets aren't being fragmented (IPv4 in VXLAN mode).
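The 50-byte figure matches the standard VXLAN-over-IPv4 encapsulation overhead, which is why 1410 is exactly the largest inner MTU that avoids fragmentation on a 1460 underlay. A quick sketch of the arithmetic (standard header sizes, assuming no IP options):

```go
package main

import "fmt"

// vxlanOverheadIPv4 returns the per-packet cost of VXLAN-over-IPv4
// encapsulation, counted against the underlay MTU: the outer IPv4 and
// UDP headers, the VXLAN header, and the encapsulated inner Ethernet
// frame header.
func vxlanOverheadIPv4() int {
	const (
		outerIPv4     = 20 // outer IPv4 header, no options
		outerUDP      = 8  // outer UDP header
		vxlanHeader   = 8  // VXLAN header
		innerEthernet = 14 // encapsulated inner Ethernet header
	)
	return outerIPv4 + outerUDP + vxlanHeader + innerEthernet
}

func main() {
	overhead := vxlanOverheadIPv4()
	fmt.Println(overhead)        // 50
	fmt.Println(1460 - overhead) // 1410: largest inner MTU that fits the underlay
}
```

With an inner MTU of 1420, encapsulated packets exceed 1460 and take the fragmentation path, which works; at 1410 they fit exactly and exercise the non-fragmented path where the failure shows up.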

[bug] Add initial apt-get update to Vagrantfile

In order to ensure that the repositories are updated, there should be a statement like:

sudo apt-get update

This should be added before the first "sudo apt-get install" statement in the Vagrantfile.

doc: slack link

README.md says:

If you have any questions feel free to contact us on Slack!

Should this be a public Slack group or a private one? It seems to be configured as invite-only.

Reproducible issue with vagrant

Hello

In my laptop, I have this behavior every time that I run NUM_NODES=1 ./contrib/vagrant/start.sh

==> cilium-master:              for f in `find $dir -maxdepth 1 -type f`; do \
==> cilium-master:                      install -m 0644 -t /usr/lib/cilium/$dir $f; \
==> cilium-master:              done; \
==> cilium-master:      done
==> cilium-master: make[1]: Leaving directory `/home/vagrant/go/src/github.com/cilium/cilium/bpf'
==> cilium-master: for i in daemon integration; do make -C $i install; done
==> cilium-master: make[1]: Entering directory `/home/vagrant/go/src/github.com/cilium/cilium/daemon'
==> cilium-master: groupadd -f cilium
==> cilium-master: for dir in `find ui -type d`; do \
==> cilium-master:              install -m 0755 -o root -g cilium -d /usr/lib/cilium/$dir; \
==> cilium-master:              for f in `find $dir -maxdepth 1 -type f`; do \
==> cilium-master:                      install -m 0644 -o root -g cilium -t /usr/lib/cilium/$dir $f; \
==> cilium-master:              done; \
==> cilium-master:      done
==> cilium-master: make[1]: Leaving directory `/home/vagrant/go/src/github.com/cilium/cilium/daemon'
==> cilium-master: make[1]: Entering directory `/home/vagrant/go/src/github.com/cilium/cilium/integration'
==> cilium-master: make[1]: Nothing to be done for `install'.
==> cilium-master: make[1]: Leaving directory `/home/vagrant/go/src/github.com/cilium/cilium/integration'
==> cilium-master: make: Leaving directory `/home/vagrant/go/src/github.com/cilium/cilium'
==> cilium-master: Running provisioner: shell...
    cilium-master: Running: /tmp/vagrant-shell20170123-20171-mzc9wx.sh
==> cilium-master: stdin: is not a tty
==> cilium-master: stop: Unknown instance: 
==> cilium-master: cilium-net-daemon start/running, process 4099
==> cilium-master: Running provisioner: load-policy (shell)...
    cilium-master: Running: inline script
==> cilium-master: Could not import policy directory /home/vagrant/go/src/github.com/cilium/cilium/examples/policy/default/: error while connecting to daemon: Post http://%2Fpolicy%2Fio.cilium/policy/io.cilium: dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
The SSH command responded with a non-zero exit status. Vagrant

Am I the only one? The daemon has some problem and isn't up yet when the provisioner loads the policy.
I am running the current master 57dcb8d6
Thanks

Question: Are you planning on doing releases using tags?

I ask because I need some sort of package support, e.g. apt install cilium, so that when I bootstrap swarm nodes I'm not compiling from source every time. I can set up a Jenkins builder to build a repo and maintain releases. I just need to know if I should poll for changes periodically and build master, or whether we'll be seeing official releases.

Feature: Integration with rkt metadata service

The current code implements container runtime integration without much abstraction.

As we start supporting rkt, we should do the following.

Regarding metadata (label) integration:

  • Investigate if rkt requires separate code to retrieve labels or if k8s annotations are the only source of metadata
    • If labels handling is separate, ensure that "reserved" labels aren't configurable via rkt (See #2595)
  • Consider moving the k8s / container / (rkt) metadata code behind an interface
