
LoadBalancer Controller

About the project

A LoadBalancer, consisting of a proxy and multiple providers, provides external traffic load balancing for Kubernetes applications.

A proxy is an ingress controller that watches ingress resources and allows inbound connections to reach the cluster's services.

A provider is the entrance of the cluster, providing high availability for connections to the proxy (ingress controller).
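The proxy/provider split can be pictured as two families of plugins behind a common interface. The sketch below is illustrative only: the names `Plugin`, `register`, and `OnSync` are assumptions, not the project's actual API.

```go
package main

import "fmt"

// Plugin is a hypothetical common interface for proxy and provider
// plugins; the real controller wires these up through its own types.
type Plugin interface {
	Name() string
	// OnSync would reconcile the plugin's resources for a LoadBalancer.
	OnSync(lbName string) error
}

type nginxProxy struct{}

func (nginxProxy) Name() string           { return "nginx" }
func (nginxProxy) OnSync(lb string) error { fmt.Println("sync proxy for", lb); return nil }

type ipvsdrProvider struct{}

func (ipvsdrProvider) Name() string           { return "ipvsdr" }
func (ipvsdrProvider) OnSync(lb string) error { fmt.Println("sync provider for", lb); return nil }

// registry maps plugin names to implementations, mirroring the
// proxies/ and providers/ subdirectories in the layout below.
var registry = map[string]Plugin{}

func register(p Plugin) { registry[p.Name()] = p }

func main() {
	register(nginxProxy{})
	register(ipvsdrProvider{})
	for _, name := range []string{"nginx", "ipvsdr"} {
		registry[name].OnSync("lb-example")
	}
}
```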

Status

Work in progress

This project is still in alpha.

Design

Learn more about the loadbalancer in the design doc

See also

Getting started

Layout

├── cmd
│   └── controller
├── config
├── controller
├── docs
│   └── images
├── hack
│   └── license
├── pkg
│   ├── apis
│   │   └── networking
│   │       └── v1alpha1
│   ├── informers
│   │   ├── internalinterfaces
│   │   └── networking
│   │       └── v1alpha1
│   ├── listers
│   │   └── networking
│   │       └── v1alpha1
│   ├── toleration
│   ├── tprclient
│   │   └── networking
│   │       └── v1alpha1
│   └── util
│       ├── controller
│       ├── lb
│       ├── strings
│       ├── taints
│       └── validation
├── provider
│   └── providers
│       └── ipvsdr
├── proxy
│   └── proxies
│       └── nginx
└── version

A brief description:

  • cmd contains the main packages; each subdirectory of cmd is a main package.
  • docs contains project documentation.
  • hack contains scripts used to manage this repository.
  • pkg contains the APIs, informers, listers, clients, and utilities for the LoadBalancer TPR.
  • provider contains provider plugins; each subdirectory is one kind of provider.
  • proxy contains proxy plugins; each subdirectory is one kind of proxy.
  • version is a placeholder that is filled in at compile time.

TODO

  • readjust the directory structure
  • update api to v1alpha2
  • separate api from the project to clientset
  • auto generate clients and informers

loadbalancer-controller's People

Contributors

bbbmj, caicloud-bot, ddysher, kdada, li-ang, pendoragon, scorpiocph, vincentguocq, whalecold, zoumo


loadbalancer-controller's Issues

[bug]: garbage of ingress-controller-leader-kube-system.lb-xxx configmap

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:

There are a lot of configmaps named 'ingress-controller-leader-kube-system.lb-xxx' in 'kube-system'. As far as I know, they are created by the ingress controller, so each loadbalancer instance generates one. We need to delete them when the loadbalancer instance is deleted.
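A sketch of the cleanup this implies: on loadbalancer deletion, select the leader-election configmaps that belong to it by name prefix. The helper below is illustrative; the real controller would issue the deletes through the Kubernetes client.

```go
package main

import (
	"fmt"
	"strings"
)

// staleLeaderConfigMaps returns the configmap names that the ingress
// controller created for a given loadbalancer and that should be deleted
// along with it. The naming scheme follows the issue report:
// "ingress-controller-leader-kube-system.<lb-name>".
func staleLeaderConfigMaps(all []string, lbName string) []string {
	prefix := "ingress-controller-leader-kube-system." + lbName
	var stale []string
	for _, name := range all {
		if strings.HasPrefix(name, prefix) {
			stale = append(stale, name)
		}
	}
	return stale
}

func main() {
	cms := []string{
		"ingress-controller-leader-kube-system.lb-foo",
		"ingress-controller-leader-kube-system.lb-bar",
		"kube-root-ca.crt",
	}
	fmt.Println(staleLeaderConfigMaps(cms, "lb-foo"))
}
```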

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

/cc @hanxueluo

Update api to v1alpha2

According to Kubernetes's changes:

  • update TPR to CRD

API update:

  • move api to https://github.com/caicloud/clientset
  • automatically generate clients and informers
  • update api to v1alpha2 and remove unused fields
  • change api group from net.alpha.caicloud.io to loadbalance.caicloud.io
  • adjust the default ports so that they are below 1024

Repo convention:

  • readjust directory structure

Bug fix:

  • fix #25
  • update nginx ingress controller to 0.9.0-beta.15, fix #33
  • adjust proxy read/write timeout to 10min

NodesSpec.Effect has json tag "dedicated"

// NodesSpec is a description of nodes
type NodesSpec struct {
	// Replica is only used when Provider's type is service now
	// you can not use replica and names at the same time
	// +optional
	Replicas *int32 `json:"replicas,omitempty"`
	// Names is a name list of nodes selected to run proxy
	// It MUST be filled in when loadbalancer's type is external
	// +optional
	Names []string `json:"names,omitempty"`
	// +optional
	Effect *apiv1.TaintEffect `json:"dedicated,omitempty"`
}

[BUG] controller will crash when I change provider from ipvs to external

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:
I changed a running loadbalancer from

  spec:
    providers:
      ipvsdr:
        scheduler: rr
        vip: 192.168.19.222

to

  spec:
    providers:
      external:
        vip: 192.168.19.222

the controller crashed
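A plausible cause (an assumption, not confirmed from the code) is an unguarded dereference of the provider spec that was removed: after the edit, the ipvsdr field is nil while the controller still reads through it. A defensive sketch:

```go
package main

import "fmt"

// providersSpec loosely mirrors the YAML above: at most one provider
// field is set, and switching providers leaves the other one nil.
type ipvsdrSpec struct {
	VIP, Scheduler string
}

type externalSpec struct {
	VIP string
}

type providersSpec struct {
	Ipvsdr   *ipvsdrSpec
	External *externalSpec
}

// vip returns the configured VIP regardless of which provider is active,
// guarding every pointer so a provider switch cannot panic the controller.
func vip(p providersSpec) (string, bool) {
	switch {
	case p.Ipvsdr != nil:
		return p.Ipvsdr.VIP, true
	case p.External != nil:
		return p.External.VIP, true
	default:
		return "", false
	}
}

func main() {
	before := providersSpec{Ipvsdr: &ipvsdrSpec{VIP: "192.168.19.222", Scheduler: "rr"}}
	after := providersSpec{External: &externalSpec{VIP: "192.168.19.222"}}
	fmt.Println(vip(before))
	fmt.Println(vip(after))
}
```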

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

[bug]: two deployments of provider of the same loadbalancer bounce

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:
In a special circumstance, the loadbalancer controller generates two provider deployments for the same loadbalancer. We expect the code to choose one deployment and set the other's replicas to '0', where it should stay forever. But what we got is:

  1. round 1: set the first deployment to '0'
  2. round 2: set the second deployment to '0', set the first deployment to '1'
  3. round 3: set the second one to '1', set the first one to '0'
  4. the cycle repeats forever...
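The oscillation suggests the two reconcile passes disagree on which deployment is canonical. One possible fix (a sketch, not the project's actual resolution) is to pick the survivor deterministically, e.g. by sorted name, so every round computes the same answer:

```go
package main

import (
	"fmt"
	"sort"
)

// desiredReplicas deterministically keeps one deployment at the desired
// replica count and parks all duplicates at 0. Because the choice is a
// pure function of the name set, repeated reconciles converge instead of
// bouncing between the two deployments.
func desiredReplicas(deployments []string, replicas int32) map[string]int32 {
	sorted := append([]string(nil), deployments...)
	sort.Strings(sorted)
	out := make(map[string]int32, len(sorted))
	for i, name := range sorted {
		if i == 0 {
			out[name] = replicas // canonical deployment keeps serving
		} else {
			out[name] = 0 // duplicates stay at 0 forever
		}
	}
	return out
}

func main() {
	// Same answer regardless of the order deployments are listed in.
	fmt.Println(desiredReplicas([]string{"lb-provider-b", "lb-provider-a"}, 1))
	fmt.Println(desiredReplicas([]string{"lb-provider-a", "lb-provider-b"}, 1))
}
```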

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

FullNat

fullnat

Under fullnat mode, LVS replaces the source IP and port with its own IP and port, and replaces the destination IP and port with the real server's IP and port. In order not to lose the client IP, fullnat adds an option to the TCP packet to store it. When the real server receives the packet, it recovers the client IP through the kernel's TOA module. It seems to meet the requirement? @zoumo

I will set up fullnat with Kubernetes in the next couple of days to test whether it works.
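The address rewriting described above can be illustrated with a toy 4-tuple transform. This is purely illustrative: real FULLNAT happens inside the LVS kernel module, and the client address travels in a TCP option that the TOA module reads back on the real server.

```go
package main

import "fmt"

type endpoint struct {
	IP   string
	Port int
}

type packet struct {
	Src, Dst endpoint
}

// fullnat rewrites both sides of the connection: the source becomes the
// LVS local address and the destination becomes the chosen real server.
// The original client address is returned separately, standing in for
// the TCP option (TOA) that preserves it on the wire.
func fullnat(p packet, lvs, realServer endpoint) (out packet, clientInTOA endpoint) {
	clientInTOA = p.Src // stashed in the TCP option so the RS can recover it
	out = packet{Src: lvs, Dst: realServer}
	return
}

func main() {
	in := packet{Src: endpoint{"10.0.0.7", 54321}, Dst: endpoint{"192.168.19.222", 80}}
	out, toa := fullnat(in, endpoint{"192.168.19.10", 4000}, endpoint{"172.16.0.5", 80})
	fmt.Println(out, toa)
}
```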


[BUG] error tls secret make nginx proxy crash

The root cause: after ingress-controller:nginx-0.9.0-beta.11 fetches a TLS secret from the cluster, it performs no error checking when validation of that secret fails; it then dereferences a nil pointer and crashes.
The problem is fixed in ingress-controller:nginx-0.9.0-beta.12 and later.

Node selection

Refactor(taint):

What happened:
The loadbalancer-controller lets users or the loadbalancer admin decide which nodes the proxies are placed on. This is not practical, because only cluster admins know which nodes are suitable.

What you expected to happen:
Cluster admins taint suitable nodes in advance; users or the loadbalancer admin then only need to set the number of proxies they want, not which nodes to use.
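Under this model, admins taint nodes up front and the proxy pods carry a matching toleration plus a replica count. A sketch of the matching logic (the taint key used here is an illustrative assumption, and real Kubernetes toleration matching also supports operators like Exists):

```go
package main

import "fmt"

type taint struct {
	Key, Value, Effect string
}

type toleration struct {
	Key, Value, Effect string
}

// tolerates reports whether a proxy pod's toleration matches a node
// taint with exact-match semantics. This is the core of taint-based
// node selection: admins taint suitable nodes, and users only choose
// how many proxies to run.
func tolerates(t toleration, n taint) bool {
	return t.Key == n.Key && t.Value == n.Value && t.Effect == n.Effect
}

func main() {
	nodeTaint := taint{Key: "node-role.kubernetes.io/loadbalancer", Effect: "NoSchedule"}
	proxyTol := toleration{Key: "node-role.kubernetes.io/loadbalancer", Effect: "NoSchedule"}
	fmt.Println(tolerates(proxyTol, nodeTaint))
}
```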

Could you please give me a concrete design to implement this? @zoumo

And I think the refactor may affect the code in:

[BUG] the configmap is wrong when I delete a config from the nginx proxy

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:

  1. add whitelist-source-range: 192.168.10.1 to loadbalancer.proxy.config
  2. the config appears in the configmap lbname-proxy-nginx-config
  3. delete whitelist-source-range: 192.168.10.1 from loadbalancer.proxy.config
  4. the config still exists in the configmap
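This behavior is what a merge-style update would produce. The sketch below (assuming, not confirming, that the controller merges desired config into the existing configmap data) shows why a deleted key survives, and the replace-style alternative:

```go
package main

import "fmt"

// mergeUpdate copies desired keys into the existing data but never
// removes anything, so a deleted option such as whitelist-source-range
// lingers in the configmap (the reported symptom).
func mergeUpdate(existing, desired map[string]string) map[string]string {
	for k, v := range desired {
		existing[k] = v
	}
	return existing
}

// replaceUpdate rebuilds the data from the desired state alone, so
// removing an option from loadbalancer.proxy.config removes it here too.
func replaceUpdate(_, desired map[string]string) map[string]string {
	out := make(map[string]string, len(desired))
	for k, v := range desired {
		out[k] = v
	}
	return out
}

func main() {
	existing := map[string]string{"whitelist-source-range": "192.168.10.1"}
	desired := map[string]string{} // the user deleted the option
	fmt.Println(mergeUpdate(map[string]string{"whitelist-source-range": "192.168.10.1"}, desired)) // stale key survives
	fmt.Println(replaceUpdate(existing, desired))                                                  // clean
}
```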

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

New branch requirement

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST

/kind feature

What happened:
Need to create a new branch called release-0.6/kaide to complete the upgrade task of project KAIDE; we will copy the functionality of KAIDE version 2.8.3 into it.
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
