
community's Introduction

Kubernetes Community

Welcome to the Kubernetes community!

This is the starting point for joining and contributing to the Kubernetes community - improving docs, improving code, giving talks, etc.

To learn more about the project structure and organization, please refer to Project Governance information.

Communicating

The communication page lists communication channels like chat, issues, mailing lists, conferences, etc.

For more specific topics, try a SIG.

Governance

Kubernetes has the following types of groups that are officially supported:

  • Committees are named sets of people that are chartered to take on sensitive topics. This group is encouraged to be as open as possible while achieving its mission but, because of the nature of the topics discussed, private communications are allowed. Examples of committees include the steering committee and the security and code of conduct committees.
  • Special Interest Groups (SIGs) are persistent open groups that focus on a part of the project. SIGs must have open and transparent proceedings. Anyone is welcome to participate and contribute provided they follow the Kubernetes Code of Conduct. The purpose of a SIG is to own and develop a set of subprojects.
    • Subprojects: Each SIG can have a set of subprojects. These are smaller groups that can work independently. Some subprojects will be part of the main Kubernetes deliverables, while others will be more speculative and live in the kubernetes-sigs GitHub org.
  • Working Groups are temporary groups that are formed to address issues that cross SIG boundaries. Working groups do not own any code or other long term artifacts. Working groups can report back and act through involved SIGs.

See the full governance doc for more details on these groups.

A SIG can have its own policy for contribution, described in a README or CONTRIBUTING file in the SIG folder in this repo (e.g. sig-cli/CONTRIBUTING.md), and its own mailing list, Slack channel, etc.

If you want to edit details about a SIG (e.g. its weekly meeting time or its leads), please follow these instructions that detail how our docs are auto-generated.

Learn to Build

Links in contributors/devel/README.md lead to many relevant technical topics.

Contribute

A first step to contributing is to pick from the list of Kubernetes SIGs. Start attending SIG meetings, join the Slack channel, and subscribe to the mailing list. SIGs will often have a set of "help wanted" issues that can help new contributors get involved.

The Contributor Guide provides detailed instructions on how to get your ideas and bug fixes seen and accepted, including:

  1. How to file an issue
  2. How to find something to work on
  3. How to open a pull request

Membership

We encourage all contributors to become members. We aim to grow an active, healthy community of contributors, reviewers, and code owners. Learn more about requirements and responsibilities of membership in our Community Membership page.


community's Issues

Project Governance Umbrella issue

There have been a number of discussions about project governance.

Relevant issues:

One thing that is clear is that people are trying to solve different problems. The purpose of this issue is to surface those problems and (ideally) come to agreement about which problems we're going to tackle first. Then we can move on to proposals for how we're going to solve them.

Some problems that have been mentioned, culled from the above, in no particular order:

  • The structure of the project is opaque to newcomers.
  • There is no clear technical escalation path / procedure.
  • There aren't consistent / official decision-making procedures for ~anything: consensus, lazy consensus, consensus-seeking, CIVS voting, etc.
  • There are no official processes for adding org members, reviewers, approvers, maintainers, project leaders, etc.
  • There is no official / regularly meeting body to drive overall technical vision of the project.
  • We don't agree on the right level/types of engagement for leaders. Some feel that leaders should be recused from responsibilities such as SIG leadership, while others feel they need to be deeply involved in releases, etc.
  • There aren't official technical leads for most subareas of the project.
  • There is no centralized / authoritative means of resolving non-technical problems on the project, including staffing gaps (engineering, docs, test, release, ...), effort gaps (tragedy of the commons), expertise mismatches, priority conflicts, personnel conflicts, etc.
  • In particular, there is insufficient effort on contributor experience (e.g., github tooling, project metrics), code organization and build processes, documentation, test infrastructure, and test health. Some on the project have argued that there is insufficient backpressure and/or incentives for people employed to deliver customer-facing features to spend time on issues important to overall project health.
  • A related issue is counterbalancing technical- and product-based decision-making.
  • Visibility across the entire project is lacking.
  • Metrics, metrics, metrics, metrics. We're flying blind.
  • There is no documented proposal process.
  • There isn't a documented process for advancing APIs through alpha, beta, stable stages of development.
  • Project technical leaders (de facto or otherwise) are not available via office hours.
  • We don't have processes or documentation for onboarding new contributors.
  • There are no official safeguards to prevent control over the project by a single company.
  • Project leadership lacks diversity.
  • There is no conflict of interest policy regarding leading/directing both the open-source project and commercialization efforts around the project.
  • There is no consistent, documented process for rolling out new processes and major project changes (e.g., requiring two-factor auth, adding the approvers mechanism, moving code between repositories).
  • Nobody has taken responsibility to think about and improve the structure of the project, processes, values, etc. A few people have been working on this part time, but it needs more and more consistent attention given the rate of growth of the project.
  • We're also lacking people to drive, implement, communicate, and roll out improvements (and test, measure, rollback, etc.).
  • There isn't a sufficiently strong feedback loop between technical contributors/leadership and the PM group.
  • Nobody has taken responsibility for legal issues, license validation, trademark enforcement, etc.
  • Technical conventions/principles are not sufficiently documented.
  • Development practices and conventions across our repositories are not consistent.
  • Our communication media are highly fragmented, which makes it hard to understand past decisions.

What other problems do people think we need to solve? Let's brainstorm first, then prioritize and cull.

cc @sarahnovotny @brendandburns @countspongebob @sebgoa @pmorie @jbeda @smarterclayton @thockin @idvoretskyi @calebamiles @philips @shtatfeld @craigmcl

Proposal: Allow containers to specify optional ConfigMaps for environment variables

From @kad
If we are already touching this part of the API spec, it might be good to also add one thing: a referenced ConfigMap might be optional. By default this option would be false, so if a ConfigMap is referenced and it is missing, it will lead to an error. If the optional parameter is set to true and the referenced ConfigMap is not found, the container will continue normally. Use case for such behavior:

An application developer can prepare an application deployment that references a ConfigMap with default values. The developer can also reference an optional ConfigMap with parameters that can be overridden in, e.g., a testing cluster or a regional cluster. If those overrides are not found or not used, the application will get all the remaining default values from the ConfigMap of defaults shipped with the Deployment spec.
Same deployment in public and private clouds: *_proxy variables. A developer can write a spec that references a ConfigMap with some agreed name, e.g. "cluster-proxies". When this application is deployed in a private cluster, that ConfigMap would contain http_proxy, https_proxy, no_proxy, and other variables specific to the cluster. The same deployment sent to a cluster in a public cloud with direct internet access would reference a non-existing optional ConfigMap, so containers would run without pre-populated proxy variables.
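
A minimal sketch, purely for illustration, of how such an opt-out could be expressed on the env-var ConfigMap reference (the type and field names here are hypothetical, not the shipped API):

// Illustrative only: a ConfigMap reference for environment variables with a
// proposed Optional flag. When Optional is true and the ConfigMap is missing,
// the container starts without those variables instead of failing.
type ConfigMapEnvReference struct {
	Name     string `json:"name"`
	Optional *bool  `json:"optional,omitempty"` // defaults to false: a missing ConfigMap is an error
}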
From @thockin
We may yet end up with "if present" configmap volumes - it would actually simplify a whole swath of scenarios. But we should do that and this in tandem.

Unable to run unit tests as specified

The documentation here says "the k8s.io/kubernetes prefix is added automatically"; however, I have not found that to be the case.

Example with:

$ make test WHAT=k8s.io/kubernetes/pkg/apis/extensions KUBE_COVER=y
WARNING: ulimit -n (files) should be at least 1000, is 256, may cause test failure
Running tests for APIVersion: v1,apps/v1beta1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1beta1,autoscaling/v1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1
+++ [0202 15:41:46] Saving coverage output in '/tmp/k8s_coverage/v1,apps/v1beta1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1beta1,autoscaling/v1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1/20170202-154146'
skipped	k8s.io/kubernetes/cmd/libs/go2idl/generator
skipped	k8s.io/kubernetes/vendor/k8s.io/client-go/1.4/rest
ok  	k8s.io/kubernetes/pkg/apis/extensions	0.069s	coverage: 0.9% of statements
+++ [0202 15:41:49] Combined coverage report: /tmp/k8s_coverage/v1,apps/v1beta1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1beta1,autoscaling/v1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1/20170202-154146/combined-coverage.html

Example without:

$ make test WHAT=pkg/apis/extensions KUBE_COVER=y
WARNING: ulimit -n (files) should be at least 1000, is 256, may cause test failure
Running tests for APIVersion: v1,apps/v1beta1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1beta1,autoscaling/v1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1
+++ [0202 15:42:28] Saving coverage output in '/tmp/k8s_coverage/v1,apps/v1beta1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1beta1,autoscaling/v1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1/20170202-154228'
skipped	k8s.io/kubernetes/cmd/libs/go2idl/generator
skipped	k8s.io/kubernetes/vendor/k8s.io/client-go/1.4/rest
can't load package: package pkg/apis/extensions: cannot find package "pkg/apis/extensions" in any of:
	/usr/local/opt/go/libexec/src/pkg/apis/extensions (from $GOROOT)
	/Users/207383/Code/go/src/k8s.io/kubernetes/_output/local/go/src/pkg/apis/extensions (from $GOPATH)
can't load package: package pkg/apis/extensions: cannot find package "pkg/apis/extensions" in any of:
	/usr/local/opt/go/libexec/src/pkg/apis/extensions (from $GOROOT)
	/Users/207383/Code/go/src/k8s.io/kubernetes/_output/local/go/src/pkg/apis/extensions (from $GOPATH)
+++ [0202 15:42:28] Combined coverage report: /tmp/k8s_coverage/v1,apps/v1beta1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1beta1,autoscaling/v1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1/20170202-154228/combined-coverage.html
make: *** [test] Error 1

Kubernetes Bylaws

I have been meaning to write some comments related to:

#28 #295 and #286

Specifically, I think that the Kubernetes projects needs "Bylaws". The Bylaws can encompass most of what is in the Governance being worked on, the Elders proposal and the Three branches.

As a member of the ASF and member of the Kubernetes project, I think that a middle ground can be found between clear Bylaws that show how the project is organized and governed and simple ways to take decisions without too much bureaucracy.

Couple general comments:


The Kubernetes Way in the Governance document is very close to the Apache Way. One item that is not there is the concept of "non-affiliation". This means that people speak on behalf of themselves and do not represent corporate interests. Non-affiliation is useful to avoid the perception that a project is governed by companies. It helps build a true sense of community, where people's involvement is measured on merit.


Right now the community is fragmented between the mailing lists, the SIGs, Slack, Stack Overflow, etc. Apache puts the emphasis on mailing lists. What happens on the mailing list is what happens in the community; it is where the community lives and where decisions are taken. Any meeting reports back to the mailing lists for archiving and further discussion by members who could not attend. The emphasis is put on consensus.

A purely consensus driven project is arguably difficult to operate. I believe that we can find a good balance of consensus and benevolent dictator model.

As an Apache guy, I find kubernetes-dev surprisingly low volume, and decisions or important discussions often happen out of band. This does not promote inclusion.

GitHub is not a great way to promote inclusion or bring attention to important proposals. If you are not tagged in an issue/PR, you don't know what is happening. DISCUSS threads should be created on the proper mailing lists of each project/sub-project/incubator. If consensus is not reached, then our technical leaders will break the ties.


Taking an approach that mimics the Apache structure we could organize the project with:

  • A Board (the Kubernetes board, similar to what is described in the Elders proposal)

  • The Members (could be the current set of GitHub members)

  • The Committers (folks with write access to some repos in the kubernetes and kubernetes-incubator orgs)

  • The Contributors (anyone involved in the project but without commit access yet)

  • The Incubator (the set of projects in kubernetes-incubator)


Basic working mechanisms would be:

  • The Board is elected once a year by the members (could be initialized by the current root OWNERS in kubernetes/kubernetes).
  • The members nominate new members once a year. New members get in via vote.
  • Create "Project Committees" per repo (e.g. the Helm Project Committee), with a "Vice President" who reports status to the Board. The PC of a repo is made up of committers to that repo (could be mapped to a GitHub sub-team for initialization).
  • The set of committers is made up of the committers of all project committees (not all would be members).
  • "Promotion" is based on merit, and committers get voted in after nomination from existing PC members.
  • The Incubator needs to be properly created with its own PC and Chair. Graduation needs to be a vote on the Incubator's main mailing list.
  • Each project has a project-dev@ mailing list (we currently kind of have that, it just needs to be cleaned up).

Once we have these basic mechanisms in place (in the form of short/succinct bylaws), agreed on by VOTE on kubernetes-dev, we can move towards implementing a clear leadership structure as highlighted in the "three branches" (which BTW would start differentiating from Apache).

Improve documentation about community norms and processes

The incubator process is currently very light on details about a number of facets of the community. As an example, the role of the Champion is defined as:

Potential Champions come from the set of all Kubernetes approvers and reviewers with the hope that they will be able to teach the Incubator Project about Kubernetes community norms and processes. The Champion is the primary point of contact for the Incubator Project team; and will help guide the team through the process. The majority of the mentorship, review, and advice will come from the Champion. Being a Champion is a significant amount of work and active participation in the sponsored project is encouraged.

The 'community norms and processes' referred to are very sparsely defined.

Since we are attempting to control growth and sprawl in kubernetes/kubernetes, it is logical to assume that in the future many people will make their entrance into the development part of our community via the incubator process. We should ensure that newcomers have a way to understand the community norms and processes in a way that:

  1. Doesn't depend on the personal knowledge level of an incubator repo Champion
  2. Doesn't depend on the personal bandwidth of an incubator repo Champion
  3. Is accurate, high-fidelity, and maintained by the community

Our experience from SIG service catalog has been that:

  1. Newcomers to the community who make contact first with an incubator do not have an easy way to familiarize themselves with the community norms and processes
  2. Because the norms and processes knowledge exists across the minds of many different individuals and has been transmitted via oral history, few people fully know what the established norms and processes are
  3. Knowledge transmission via retelling and answering individual questions is extremely inefficient and time consuming
  4. Motivation and rationale behind norms are not always clear to people who are familiar with the norms and processes

I think it would improve things if we:

  1. Clearly define the most important norms and process that should be followed; there are probably different ones that become important at different levels of maturity
  2. Clearly define which norms and processes must be adopted as exit criteria to incubation
  3. Document each norm and process from the above clearly, with onboarding instructions for things that require setup (bots, CI, etc)

Specific norms and processes from our experience in SIG service catalog:

  1. Code review / approval process:
    1. Whose approval is needed for a particular change?
    2. Pointers to automation for reviews and approvals
  2. Overall approach to generated code:
    1. What code is generated?
    2. Which generated code is committed to source control?
  3. Dependency management and vendoring code:
    1. Which dependency management tool should be used?
    2. Are dependencies vendored?
  4. Overall system architecture patterns:
    1. What is the lifecycle of an API request?
    2. How does one construct an API server?
    3. How does one create a new controller?
  5. Testing
    1. What level of testing is expected/required?
    2. What dependencies can a repository require for testing? Viz: must an incubator repo be testable on a laptop without an internet connection? With an internet connection?

Though this issue has already mentioned the following, I want to note again for emphasis that in addition to the facts for these, it is also critical to document the rationale behind the facts.

Specific asks for this issue:

  1. Let's add some specificity about the areas I've cited that we had some trouble with in the service catalog SIG
    1. Note: it's totally possible that we won't wind up wanting to be prescriptive in certain areas -- we should be explicit about those too
  2. Let's try to have (1) done by the time 1.6 is released

Three Branches of Governance - Proposal

The long-term health of any open source project depends on the people that contribute. Just as importantly, those people need a framework for working together effectively. Kubernetes continues to enjoy rapid growth and adoption and has been transitioning rapidly from an exciting new project to one with large numbers of production systems, many at scale.

One of the most important balances our project must navigate is between the speed of innovation and feature evolution, and the needs of production users for boring, dependable infrastructure.

As the project has grown in usage, it has also grown in contributors. The more informal practices that suited our project in the early days are proving unsuitable, in part because those informal practices have been strongly based on personal relationships between the early contributors. Those practices and norms are opaque to new contributors and had become a barrier to attracting new contributors and ensuring they can be productive members of the community.

As production deployments of Kubernetes have expanded, the need for longer-range planning by those users has become more urgent. As Kubernetes becomes more central to the infrastructure of any company, the need for roadmaps, goals, and planning transparency has also become more central.

This has led to one very significant (currently in place) change to the Kubernetes Way (PM group), and another major one is proposed - the Council of Elders. I propose that a third is needed.

A significant event in the Kubernetes community was the formation of the Product Management Group. The value of this group to aid in future planning, roadmaps, and in the feature-level planning of the project is undeniable.

Teams that tilt the power structures towards engineering teams tend to build great architectures, but struggle with user experience and to build features users really want. Teams that tilt the power structures towards product management are often feature-rich with stove-piped architectures that struggle to scale and meet their SLOs. One of the biggest challenges leaders have is balancing these views. The Kubernetes project has this same challenge.

Introduction of the Council of Elders is not just a positive step for the project but an utter necessity, and I support creation immediately. Critically, though, this Council must have wide enough representation and diversity, and sufficiently engaged members to directly guide and influence the main technical and architectural issues of the project. I echo the sentiments of many in the community: #28

Today, the Product Management Group has also become the de facto process management and project policy owner for the project. Just as with features vs architecture tradeoffs, project policies have to balance the needs and desires of the product management view of the project with the developer view, and manage the tradeoffs between predictability, reporting, agility, and productivity. The speed of change of processes can have a profound effect on project productivity - both positive and negative - and are not best managed from the product management view alone.

Project policies play a key role in how these tradeoffs are discussed, and how decisions are made and communicated. These policies are not meant to make those tradeoffs, but to ensure that balanced, judged, and rational decisions are made in a transparent way, that the policies of the project are documented, communicated and followed.

I propose a model that has a track record of success: three groups with clear separation of empowerment and responsibilities:

  • Product Management Group (Features, Roadmaps, Releases, Documentation)
  • Council of Elders (Architecture, Design, Owner of Technical Standards)
  • Project Policy Board (Processes, SIGs, Transparency, Traceability, and Legal/Licensing)

For the separation-of-powers model to work well it is especially important that these three groups have non-overlapping members.

Project policies should be treated with equal respect and care to features and architecture, especially in a large open source project, where coordinating large groups of people from many companies is such a challenge.

This third empowered group in combination will help Kubernetes succeed over the long run.

I ask for your support in the community for this proposal.

-Bob Wise

hack/e2e.go fails on OS X

Are e2e tests only meant to run on Linux (or CI)?

Running it on my mac gives:

$ go run hack/e2e.go --v --build
2017/02/07 11:14:14 e2e.go:946: Running: make quick-release
+++ [0207 11:14:15] Verifying Prerequisites....
+++ [0207 11:14:15] Using Docker for MacOS
chown: 733413744.1490099278: illegal user name
!!! [0207 11:14:15] Call tree:
!!! [0207 11:14:15]  1: build/release.sh:35 kube::build::build_image(...)
make: *** [quick-release] Error 1
2017/02/07 11:14:15 e2e.go:948: Step 'make quick-release' finished in 735.810178ms
2017/02/07 11:14:15 e2e.go:230: Something went wrong: error building: error building kubernetes: exit status 2
exit status 1

If I update build/common.sh to use : instead of . in the chown call, it is able to proceed.

vagrant ssh-config is not showing node-2 entry

I am following https://github.com/kubernetes/community/blob/master/contributors/devel/local-cluster/vagrant.md.
I downloaded release v1.5.2
export KUBERNETES_PROVIDER=vagrant
export NUM_NODES=2
./cluster/kube-up.sh

I am not able to vagrant ssh node-2

m-C02RX0W6G8WN:kubernetes p0s00el$ vagrant ssh-config
Host master
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/p0s00el/VagrantFiles/k8s/kubernetes/.vagrant/machines/master/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node-1
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/p0s00el/VagrantFiles/k8s/kubernetes/.vagrant/machines/node-1/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL

m-C02RX0W6G8WN:kubernetes p0s00el$ vagrant ssh node-2
The machine with the name 'node-2' was not found configured for
this Vagrant environment.


LOGS:
... calling validate-cluster
Found 2 node(s).
NAME                STATUS    AGE
kubernetes-node-1   Ready     8m
kubernetes-node-2   Ready     58s
Validate output:
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
Cluster validation succeeded
Done, listing cluster services:

Proposal: Allow TerminationMessagePath to be more useful to arbitrary software

TerminationMessagePath on containers is a mechanism to allow a container to write a useful, human or machine readable message to a file in the container that will be surfaced via the pod container status after container termination. This data can then be displayed in a UI (to know what the result of a job or server crash was) or gathered by an API client to perform analysis on the reasons containers crash/stop/complete.

Today, it is hard for arbitrary software (images taken at random from DockerHub, general software web frameworks) to populate the termination message path, because the idea of writing to a known file on termination is a fairly unusual concept. Like health checks, many "normal" applications have no mechanism to perform this function, which means the value of the platform is diminished. In Kubernetes, we traditionally recognize this as an "adaptation" problem - to show value, we meet users and software where they are by providing helpful affordances to leverage the value without having to change their software.

To better realize the value of terminationMessagePath for the broad range of software, Kubernetes should:

  1. Allow log output to be a candidate for the termination message
  2. Allow a large class of users to benefit from log based termination messages without requiring all applications to specify they want log output directly
  3. Preserve behavior for terminationMessagePath-aware applications (a client using the termination message path today may expect the resulting message to contain only the contents of the file at that path).

Because of 3, we cannot change the semantics of terminationMessagePath to include log output if the terminationMessagePath file is empty after pod termination - that would result in a behavior change. We could support a new field on container that defaults to true to include logs on empty message path - that would allow a client to opt out, but may be considered a breaking change.

Proposed change:

type Container struct {
  ... 
  TerminationMessagePath string `json:"terminationMessagePath"`
  // NEW: UseLogsForTerminationMessage if true indicates that an empty termination message file will be populated with the recent logs from the container.
  UseLogsForTerminationMessage *bool
}

Users could then opt in to this behavior (and we would encourage that to happen via kubectl run, Helm, OpenShift templates and new-app, and docs) to populate UIs.
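
For illustration, here is a minimal, self-contained Go sketch of the fallback behavior described above; the function and parameter names are hypothetical, and this is not the kubelet implementation:

package main

import "fmt"

// terminationMessage sketches the proposed behavior: prefer the contents of
// the terminationMessagePath file, and fall back to recent log output only
// when the file is empty and the (proposed) opt-in flag is not explicitly
// disabled, preserving today's semantics for path-aware applications.
func terminationMessage(fileContents, recentLogs string, useLogsIfEmpty *bool) string {
	if fileContents != "" {
		return fileContents // requirement 3: existing behavior is preserved
	}
	if useLogsIfEmpty == nil || *useLogsIfEmpty {
		return recentLogs // requirements 1 and 2: logs as a default-on fallback
	}
	return ""
}

func main() {
	fmt.Println(terminationMessage("", "panic: out of memory", nil))
}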

Promote PM Group to Product Management SIG

This is not a new proposal; it has been discussed in the community and generally agreed to be appropriate. I'm writing this up to formalize the discussion. The pros/cons below are my attempt at summarizing it.

Pros:

  • PM group is no longer a "good idea", it is now integral to the functioning of the community.
  • SIG guidelines around scheduling and transparency are working well, and should be adopted.

Cons:

  • The PM group may not need to meet regularly.
    Rebuttal 1: There is plenty of ongoing work to manage.
    Rebuttal 2: It's fine to cancel meetings if there is no content. Other SIGs do this.

Proposed timing: Effective two community meetings from now, pending announcement during the upcoming meeting.

Document conventions for raw API fields

The API conventions document does not contain any information about raw API fields. My understanding of the (undocumented) convention is that these should be runtime.RawExtension in versioned APIs and runtime.Object in unversioned APIs. This issue is to document both the convention and the rationale in the API conventions document.
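
A minimal sketch of this convention, assuming the current k8s.io/apimachinery import path; the Widget types are purely illustrative:

package example

import "k8s.io/apimachinery/pkg/runtime"

// Versioned (external) type: the raw field is a RawExtension so the embedded
// payload round-trips as opaque bytes during serialization.
type WidgetSpecV1 struct {
	Template runtime.RawExtension `json:"template,omitempty"`
}

// Internal (unversioned) type: the same field is a decoded runtime.Object so
// internal code can work with the typed form.
type WidgetSpec struct {
	Template runtime.Object
}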

cc @kubernetes/api-reviewers

Develop documentation for creating a new API server

One fundamental primitive in the Kubernetes world is the API server. The process for creating a new API server should be as accessible and simple as possible, and work is being done to facilitate this currently. This issue is to begin documenting the process of creating a new API server at a conceptual and applied level (where practical). This area is in active development and the documentation will need to be kept up to date. I suggest that this issue be scoped to bootstrapping the documentation, after which, the maintainers of this area should provide upkeep.

I know that @MHBauer has some notes from his recent efforts in SIG service-catalog.

cc @kubernetes/sig-api-machinery

Need to update some links in the "devel" docs

I found that the docs at the project path kubernetes/docs/devel/ were imported into this project at community/contributors/devel/.
Some links in devel/README.md should be updated:

It assumes some familiarity with concepts in the User Guide and the Cluster Admin Guide.

The User Guide and Cluster Admin Guide links are the relative paths "(../user-guide/README.md)" and "(../admin/README.md)", but kubernetes/docs/user-guide/ has not been imported into this community repo yet.
Though it's a minor problem, I think updating the links to http://kubernetes.io/docs/user-guide/ and http://kubernetes.github.io/docs/admin/ would be an improvement.

Proposal: Add Discriminators in All Unions/OneOf APIs

Overview

We have a number of cases in the API where only one of a set of fields is allowed to be specified, aka undiscriminated union / oneof.

VolumeSource is the canonical example: it has fields such as emptyDir, gcePersistentDisk, awsElasticBlockStore, and 20 other fields. Only one of these fields can be specified.

We should add discriminators to union / oneof APIs, since doing so has several advantages.

Original issue is described in kubernetes/kubernetes#35345

Advantages

Adding discriminators to all unions/oneof cases would have multiple advantages:

  1. Clients could effectively implement a switch instead of an if-else tree to inspect the resource: look at the discriminator and look up the corresponding field in a map (though differences in capitalization of the first letter in the API convention currently prevent the discriminator value from exactly matching the field name); see the sketch after this list.

  2. The API server could automatically clear non-selected fields, which would be convenient for kubectl apply and other cases.
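
For illustration, a minimal sketch of such a client-side switch, using a trimmed-down stand-in for the union type rather than the real API:

package main

import "fmt"

type gcePD struct{ PDName string }
type awsEBS struct{ VolumeID string }

// volumeSource is an illustrative stand-in for a union API with the proposed
// discriminator field; it is not the real Kubernetes VolumeSource type.
type volumeSource struct {
	Type                 string // discriminator, e.g. "emptyDir", "gcePersistentDisk"
	EmptyDir             *struct{}
	GCEPersistentDisk    *gcePD
	AWSElasticBlockStore *awsEBS
}

// describe switches on the discriminator instead of testing every pointer field.
func describe(v volumeSource) string {
	switch v.Type {
	case "emptyDir":
		return "ephemeral empty dir"
	case "gcePersistentDisk":
		return "GCE PD " + v.GCEPersistentDisk.PDName
	case "awsElasticBlockStore":
		return "AWS EBS " + v.AWSElasticBlockStore.VolumeID
	default:
		return "unknown volume source: " + v.Type
	}
}

func main() {
	fmt.Println(describe(volumeSource{Type: "gcePersistentDisk", GCEPersistentDisk: &gcePD{PDName: "pd-1"}}))
}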

Analysis

List of Impacted APIs

In pkg/api/v1/types.go:

In pkg/authorization/types.go:

In pkg/apis/extensions/v1beta1/types.go:

Behavior

If the discriminator were set, we'd require that the field corresponding to its value were set, and the API server (registry) could automatically clear the other fields.

If the discriminator were unset, behavior would be as before -- exactly one of the fields in the union/oneof would be required to be set, and the operation would otherwise fail validation.

We should set discriminators by default. This means the server needs to update the discriminator accordingly when the corresponding union/oneof fields are set and unset. If so, clients can rely on this for purpose (1).

Proposed Changes

API

Add a discriminator field in all unions/oneof APIs.

The discriminator should be optional for backward compatibility. There is an example below in which the field Type works as a discriminator.

type PersistentVolumeSource struct {
	// +optional
	GCEPersistentDisk *GCEPersistentDiskVolumeSource `json:"gcePersistentDisk,omitempty" protobuf:"bytes,1,opt,name=gcePersistentDisk"`
	// +optional
	AWSElasticBlockStore *AWSElasticBlockStoreVolumeSource `json:"awsElasticBlockStore,omitempty" protobuf:"bytes,2,opt,name=awsElasticBlockStore"`
	
...

	// Discriminator for PersistentVolumeSource, it can be "gcePersistentDisk", "awsElasticBlockStore" and etc.
	// +optional
	Type *string `json:"type,omitempty" protobuf:"bytes,24,opt,name=type"`
}

API Server

We need to add defaulting logic described in the Behavior section.
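
As a rough illustration only (reusing the PersistentVolumeSource shape from the API snippet above, with just two of its members), the defaulting/clearing step could look like:

// defaultAndClear sketches the proposed server-side behavior: if exactly one
// union member is set and no discriminator is given, default the discriminator;
// if the discriminator is set, clear every member that does not match it.
// This is illustrative pseudologic, not the actual registry code.
func defaultAndClear(src *PersistentVolumeSource) {
	if src.Type == nil {
		if src.GCEPersistentDisk != nil && src.AWSElasticBlockStore == nil {
			t := "gcePersistentDisk"
			src.Type = &t
		} else if src.AWSElasticBlockStore != nil && src.GCEPersistentDisk == nil {
			t := "awsElasticBlockStore"
			src.Type = &t
		}
		return
	}
	switch *src.Type {
	case "gcePersistentDisk":
		src.AWSElasticBlockStore = nil // clear the non-selected member
	case "awsElasticBlockStore":
		src.GCEPersistentDisk = nil
	}
}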

kubectl

No change required on kubectl.

Example Discriminators

Discriminators Are Set by Default

Assume we have added a discriminator field as discussed in the API section above.

We first use kubectl apply -f to create a PersistentVolume using the following config:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /tmp

The subsequent kubectl get should show the discriminator field set by the API server:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
  creationTimestamp: 2016-12-22T00:56:31Z
  name: pv0001
  namespace: ""
  resourceVersion: "1059564"
  selfLink: /api/v1/persistentvolumes/pv0001
  uid: 7a57e42b-c7e1-11e6-aa89-42010a800002
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /tmp
  persistentVolumeReclaimPolicy: Recycle
  # Discriminator showing the type is "hostPath"
  type: hostPath
status:
  phase: Available

Automatically Clear Unselected Fields

Issue kubernetes/kubernetes#34292 will be fixed if spec.strategy.type is treated as a discriminator.

Create a deployment using kubectl apply -f:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Get the deployment back. Fields spec.strategy.type and spec.strategy.rollingUpdate have been defaulted.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"nginx-deployment","creationTimestamp":null},"spec":{"replicas":1,"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"name":"nginx","image":"nginx","ports":[{"containerPort":80}],"resources":{}}]}},"strategy":{}},"status":{}}
  creationTimestamp: 2016-12-22T01:23:34Z
  generation: 1
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
  resourceVersion: "1062013"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment
  uid: 416ef165-c7e5-11e6-aa89-42010a800002
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    # Defaulted by API server
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    # Defaulted by API server
    type: RollingUpdate
...

Then we update the config file by adding

  strategy:
    type: Recreate

Apply the new config by kubectl apply -f.

The operation should now succeed, because the API server knows to clear the spec.strategy.rollingUpdate field after updating spec.strategy.type.

@kubernetes/sig-api-machinery-misc @kubernetes/api-reviewers @kubernetes/kubectl

Kubernetes Elders Council - proposal

Problem: As the Kubernetes project has pivoted from a company-run open source project to a foundation-owned and community-inclusive project, there have been growing pains. The perceptions of Googlers, who have more structure and visibility into most of the operational aspects of the project, differ greatly from the perspectives of most community members - particularly new project contributors and interested project contributors. Google intentionally chose not to have a BDFL, for all the downsides that has, but a confused "power vacuum" has evolved. The organic structure appears to a newcomer to be opaque, relationship based, and full of cronyism. There is no clear starting point for progression in the project, or escalation for conflict resolution, etc. Long-standing (primarily Google-employed) contributors feel that newcomers are all focused on specific features and aren't stepping up to contribute to the more material and growing backlog of housekeeping, management, and organizational health tasks.

Goals: Establish clear cultural, technical, and team leadership, as well as act as an escalation point with a "last say" on any contentious issues.

Requirements: This group should have a long standing history with the project, good people management skills, strong technical vision for the project and empathy for the user and FOSS developer perspective.

Instantiation: The first seed incarnation of the Elders will be voted in as a slate of 3 Elders with a 2 year term (ending August 2018) through the Community Meeting and on the mailing list kubernetes-dev@. During the first year, the Elders will add 2 members (with a term ending August 2017) to their group for a total of 5 members.

Refreshes: In each subsequent August, the community will vote on the seats whose terms are ending. All positions voted on will have a 2-year term. The voting will be handled using Condorcet voting and a service such as CIVS. Voter eligibility must be defined, and will require registration during a public registration period. Nominations for candidates will be accepted for one week prior to the voting period.

Resignations: In the event that an elder resigns, the remaining elders will appoint a successor to serve until the next scheduled vote.

Known Issues: Elder nomination process. Voter eligibility.

Establish clear guidelines for who is authorized to join the organisation

Hi,

As a Kubernetes maintainer, I've got a few people talking to me on Slack, asking what they should do in order to join the Kubernetes organisation.

I haven't had a super-clear answer to them, and that's not great from an open community perspective.
We have to establish some clear guidelines that describe what a contributor should do in order to become a member.

(A known issue from before is also that we have no clear process for a member to become a maintainer, but with all the new OWNERS stuff that can turn a member listed in an OWNERS file into an approver of a PR, I think that's a lower-priority item.)

Definitions:

  • Reviewer: Anyone that has done at least one review of a PR (voluntarily)
  • Contributor: Someone who has filed at least one issue and got at least one PR merged
  • Member: Member of the kubernetes organisation. Must be a contributor and a reviewer.
  • Approver: A maintainer or a member that is listed in at least one OWNERS file and has the power to /lgtm a PR so the Submit Queue merges it.
  • Maintainer: A person that works in the project's best interest at all times and consequently gives time to answer issues, review PRs, file issues, write PRs, and organize issues/PRs by labeling and closing duplicates. Is often especially good at something, but does not work on only one small thing; instead he/she is focused on the overall health of the project. Gets flakes assigned to himself/herself.

Open question: Should a member have to at least be member of one SIG?

Why would someone want to be in the organisation (in no specific order)?

  • Retrigger tests on his PR
    • Actually, this should be fixed by the @kubernetes/sig-testing team. If a contributor of a PR has got the @k8s-bot ok to test message from someone in the org once, he/she should be able to retrigger tests in case they flake.
  • Status (have the k8s badge on his/her profile)
  • The possibility to be an assignee
  • The possibility to be an approver of a PR
  • To feel that he/she belongs to the community
  • To be able to join a sig team on Github and get sig-notifications automatically
  • etc...

We already have something here; it doesn't explicitly address these concerns, but I think it's a good starting point.

We could define some tasks a contributor should have done before he/she can join, for example:

  • Sent more than ten pull requests (PRs) in the previous three months, or more than 40 PRs in the previous year.
  • Filed more than twelve issues in the previous three months, or more than 50 issues in the previous year.
  • Reviewed (with the review feature of Github) more than 15 pull requests in the previous three months, or more than 60 pull requests in the previous year.
  • Anything else?

I guess here we want an OR between the tasks above (the contributor should have done at least one of them), and the more involved the contributor is, the higher he/she will be on the list of possible new members of the org.

We could easily program a dashboard for this in order to identify people who are possible new members, but it could be easy to abuse that system as well (by opening nearly empty issues x times just to show up on the list), so there must always be one or more persons who then approve a person to join. The persons who admit new contributors to the organisation could come from the Community Elders (#28) team.

Then we might also want some system to remove a member from the organisation after he/she has been inactive for a specific period of time; otherwise we'll end up with an unmanageable number of members. But that's a secondary priority; we have to define the rules for how to get in first.

The really hard problem with this proposal is that it's hard to draw the line; at the end of the day it's a subjective (not always objective) decision that has to be taken. But we should try to define some requirements that everyone is aware of. We don't want them to be too easy to meet (we will end up with all contributors being members => unmanageable), but at the same time not too hard either (few members, assignees, etc. reduces the velocity of the project).

At least it would be really helpful to have a link to a document that states something about this.
If someone wants me to tell my story about how I became a maintainer while still attending upper secondary school in Finland, and to get the "outside" perspective from that story, feel free to ask.

P.S. I know nothing about the current process. Correct me if I'm wrong, but it's an inside-Google-thing right now, right?

cc-ing some people: @sarahnovotny @bgrant0607 @vishh @philips @jessfraz @thockin @kubernetes/contributor-experience

Proposal: Implement stateful TPR Flock

Birds of a feather flock together

When we run stateful apps (apps that store data on disk) like GlusterFS or various databases, we face a choice of which Kubernetes object to use to provision them. Here are the requirements:

  • Stable routable network ID (stable across restarts). Must support reverse PTR records.
  • Must be safe to run applications without authentication using the stable network ID
  • Run multiple replicas with persistent storage.
  • It should be possible to run the different replicas on different nodes to achieve high availability. (Optional)

This can't be achieved on cloud providers that do not have native support for persistent storage or for which Kubernetes does not have a volume controller (e.g., DigitalOcean, Linode, etc.).

Here is my proposal on how to meet the above requirements in a cloud provider agnostic way.

StatefulSet: If the underlying cloud provider has native support for cloud disks and has built-in support in Kubernetes (AWS/GCE/Azure), then we can use StatefulSet. We can provision disks manually and bind them with claims. We might also be able to provision them using dynamic provisioning. Moreover, StatefulSets will allow using the pod name as a stable network ID. Users can also use pod placement options to ensure that pods are distributed across nodes. This allows for HA.

DaemonSet: Cloud providers that do not support built-in storage and/or have no native support in Kubernetes (e.g., DigitalOcean, Linode) can't use StatefulSets to run stateful apps. Stateful apps running in these clusters must use hostPath to store data or risk losing it when pods restart. StatefulSet can't dynamically provision hostPath-bound PVCs. In these cases, we could use DaemonSet. We have to use a hostPath or emptyDir type PV with the DaemonSet. If DaemonSets are run with the pod network, no stable ID is possible. If DaemonSets run with the host network, then they might use the node IP. Node names are generally not routable, but node IPs are not stable either, since most of the time they are allocated via DHCP. Also, for cloud providers like DigitalOcean, the host network is shared and not safe to run on without authentication.

Luckily, we can achieve something similar to StatefulSet in such providers. The underlying process is based on how named headless services work as described here: https://kubernetes.io/docs/admin/dns/ .

With these types of providers, we have to run N ReplicaSets with replicas=1. We can use a fixed hostPath. We choose N nodes and index them from 0..N-1. We apply a nodeSelector to these ReplicaSets to ensure the ReplicaSet with index i always runs on the node with index i. Since they are on separate nodes, they can safely use the same host path. For the network ID, we set both hostname and subdomain in the PodTemplate for these ReplicaSets. This will give the pods a DNS name the same way StatefulSet pods get one. Since these pods use the pod network, it should be safe to run applications without authentication. Now we have N pods with stable names, running on different nodes using hostPath. Voila!
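
A small Go sketch of the bookkeeping described above, using hypothetical helper types rather than real client-go objects: given a flock name, a headless service name, and an indexed list of nodes, it emits the per-replica name, nodeSelector, hostname, and subdomain the controller would stamp onto each single-replica ReplicaSet.

package main

import "fmt"

// replicaSpec captures only the per-replica fields discussed above; it is an
// illustrative sketch, not a real Kubernetes object.
type replicaSpec struct {
	Name         string            // e.g. "gluster-0"
	NodeSelector map[string]string // pins replica i to node i
	Hostname     string            // stable per-pod DNS name
	Subdomain    string            // the governing headless service
}

// buildFlock emits N single-replica specs, one per indexed node.
func buildFlock(flock, service string, nodes []string) []replicaSpec {
	specs := make([]replicaSpec, 0, len(nodes))
	for i, node := range nodes {
		name := fmt.Sprintf("%s-%d", flock, i)
		specs = append(specs, replicaSpec{
			Name:         name,
			NodeSelector: map[string]string{"kubernetes.io/hostname": node},
			Hostname:     name,
			Subdomain:    service,
		})
	}
	return specs
}

func main() {
	for _, s := range buildFlock("gluster", "gluster-hl", []string{"node-a", "node-b", "node-c"}) {
		fmt.Printf("%s on %s -> %s.%s\n", s.Name, s.NodeSelector["kubernetes.io/hostname"], s.Hostname, s.Subdomain)
	}
}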

To simplify the full process, we can create a new TPR called Flock. We implement GlusterFS or kube databases using this TPR. The Flock controller will be in charge of translating this into the appropriate Kubernetes object based on flags set on the controller.

Is there a simpler way to achieve this?

Pay for, or move away from Slack

Would it be possible for the CNCF to pay for Slack?

If not, has there been consideration of using another platform? I'm repeatedly finding that rooms or private messages have little-to-no history from even just a few days ago because of the volume in Slack.

Or maybe I should try to find an alternative client that stores extra scrollback locally?

I realize this is a huge can of worms and it would probably be a huge pain to move away from Slack for a lot of reasons, but I thought it might be worth discussion.

cloudera manager in kubernetes

I tried to deploy Cloudera Manager in Kubernetes.
The Kubernetes network is flannel.
Cloudera Manager server: 172.100.73.5
host1 172.100.39.3
host2 172.100.73.3
host3 172.100.76.3

Now, I found that it will connect to the broadcast address (172.100.73.0) on port 7182 if the hosts and the Cloudera Manager server are on different IP networks when adding hosts to the cluster.
So the result is "connection refused" for every host except host2.

How can I expose the broadcast port to the hosts?

Thanks for your help.

Proposal: Introduce Available Pods (MinReadySeconds in the PodSpec)

Moved to #478

Today, Deployments/ReplicaSets use MinReadySeconds in order to include an additional delay on top of readiness checks and facilitate more robust rollouts. The ReplicaSet controller decides how many available Pods a ReplicaSet runs and the Deployment controller, when rolling out new Pods, will not proceed if the minimum available Pods that are required to run are not ready for at least MinReadySeconds (if MinReadySeconds is not specified then the Pods are considered available as soon as they are ready).

Two problems that have been identified so far with the current state of things:

  1. A Pod is marked ready by the kubelet as soon as it passes its readiness check. The ReplicaSet controller runs as part of master and estimates when a Pod is available by comparing the time the Pod became ready (as seen by the kubelet) with MinReadySeconds. Clock-skew between master and nodes will affect the availability checks.
  2. PodDisruptionBudget is working with ready Pods and has no notion of MinReadySeconds when used by a Deployment/ReplicaSet.

Both problems above can be solved by moving MinReadySeconds in the PodSpec. Once kubelet observes that a Pod has been ready for at least MinReadySeconds without any of its containers crashing, it will update the PodStatus with an Available condition set to Status=True. Higher-level orchestrators running on different machines such as the ReplicaSet or the PodDisruptionBudget controller will merely need to look at the Available condition that is set in the status of a Pod.

API changes

A new field is proposed in the PodSpec:

	// Minimum number of seconds for which a newly created pod should be ready
	// without any of its container crashing, for it to be considered available.
	// Defaults to 0 (pod will be considered available as soon as it is ready)
	// +optional
	MinReadySeconds *int32 `json:"minReadySeconds,omitempty"`

and a new PodConditionType:

	// PodAvailable is added in a ready pod that has MinReadySeconds specified. The pod
	// should already be added under a load balancer and serve requests, this condition
	// lets higher-level orchestrators know that the pod is running after MinReadySeconds
	// without having any of its containers crashed.
	PodAvailable PodConditionType = "Available"

Additionally:

  • Deployments/ReplicaSets/DaemonSets already use MinReadySeconds in their spec so we should probably deprecate those fields in favor of the field in the PodSpec and remove them in a future version.
  • Deployments/ReplicaSets will not propagate MinReadySeconds from their Spec down to the pod template because that would lead to differences in the pod templates between a Deployment and a ReplicaSet, resulting in new rollouts. If MinReadySeconds is specified both in the spec and the pod template for a Deployment/ReplicaSet and it's not the same value, a validation error will be returned (tentative). API defaulting can set MinReadySeconds in the spec if it's specified only in the PodTemplate (tentative).
  • ReplicaSets that specify MinReadySeconds only in the ReplicaSetSpec, can create new Pods by specifying MinReadySeconds in their PodSpec (w/o updating the ReplicaSet pod template).

kubelet changes

For a Pod that specifies MinReadySeconds, the kubelet will need to check (after MinReadySeconds) whether any of the Pod's containers have crashed. If not, it will set the Available condition to Status=True in the status of the Pod. Pods that don't specify MinReadySeconds won't have the Available condition set in their status.
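
A minimal sketch of that check, with hypothetical inputs standing in for the state the kubelet would actually track (ready-since timestamp and container crashes in the window):

package example

import "time"

// podAvailable sketches the proposed kubelet logic: a pod that specifies
// MinReadySeconds becomes Available once it has been continuously ready for
// at least that long with no container crashes in the window. Pods without
// MinReadySeconds never get the condition, matching the text above.
func podAvailable(minReadySeconds *int32, readySince time.Time, crashesInWindow int, now time.Time) bool {
	if minReadySeconds == nil {
		return false // condition not set at all
	}
	if crashesInWindow > 0 {
		return false // a crashed container resets availability
	}
	return now.Sub(readySince) >= time.Duration(*minReadySeconds)*time.Second
}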

Controller manager changes

The ReplicaSet controller will create new Pods by setting MinReadySeconds in their PodSpec if it's specified in the ReplicaSetSpec (and not in the ReplicaSet pod template). For Pods that don't specify MinReadySeconds, it can switch to use a virtual clock and continue using the current approach of estimating availability. This will also help in keeping new servers backwards-compatible with old kubelets.

Future work

The PDB controller will need to be extended to recognize the Available condition in Pods. It may also need to get into the business of estimating availability for Deployments/ReplicaSets that already use MinReadySeconds (already-existing Pods are not going to be updated with an Available condition - see the section above about kubelet changes).

@kubernetes/sig-apps-misc @kubernetes/sig-api-machinery-misc

Proposal: Allow kubectl create configmap to accept env files

Now that a container can consume ConfigMaps as environment variables, we need to make it easier to build a ConfigMap from a file containing key/value pairs.

The use cases are varied and not entirely agreed to. Below are excerpts from #148 and kubernetes/kubernetes#37295

From @dhoer
I want k8s to be able to read in an env file like this:

Create configmap:
kubectl create configmap game-config --from-file=./env/game.env

Reference configmap:

For consistency, it would be nice for secrets to have the same behavior.

Create secret:
kubectl create secret game-config --from-file=./secret/too-many-secrets.env

Reference secret:

  • envFrom:
    secret: game-config/too-many-secrets.env

This eliminates the required preprocessing step to convert name/value pairs into a configmap or secret, e.g., https://github.com/dhoer/k8s-env-gen.

From @thockin
if there were a kubectl create configmap mode that read an env file and created the equivalent configmap (1 key per key), would you be satisfied?

From @fraenkel
The behavior is a bit tricky. In a simple world, I can point to a directory of env files and say kubectl create configmap --from-file somedir. Duplicates are where things become tricky. What order are we processing the files in the directory? Are we sorting them? Or letting the filesystem hand them in some random order?
I will also assume you can provide multiple directories to allow overrides.

From many
For a directory, I would assume whatever order it reads in today, where on duplicated keys the last one wins. Users can use multiple --from-file entries to get the order they want. Subsequent --from-file entries will overwrite any repeating key values.

From @thockin
I was thinking --from-env-file=filename and only allowing one, and not a directory.
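
To make this concrete, here is a rough sketch (the file name, keys, and values are made up for illustration, and the exact flag name was still under discussion at this point). Given an env file ./env/game.env containing:

enemies=aliens
lives=3

running

kubectl create configmap game-config --from-env-file=./env/game.env

would produce a ConfigMap roughly equivalent to:

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config
data:
  enemies: aliens
  lives: "3"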

From @kad
If we are already touching this part of the API spec, it might be good to also add one thing: a referenced ConfigMap could be optional. By default this option would be false, so if a ConfigMap is referenced and it is missing, it leads to an error. If the optional parameter is set to true and the referenced ConfigMap is not found, things continue normally. Use cases for such behavior (see the sketch after this list):

  • An application developer can prepare an application deployment that references a ConfigMap with default values. The developer can also reference an optional ConfigMap with parameters that can be overridden in, e.g., a testing cluster or a regional cluster. If those overrides are not found or not used, the application gets the rest of its default values from the ConfigMap of defaults shipped with the Deployment spec.
  • The same deployment in a public and a private cloud: *_proxy variables. A developer can write a spec that references a ConfigMap with some agreed name, e.g. "cluster-proxies". When this application is deployed in a private cluster, that ConfigMap would contain http_proxy, https_proxy, no_proxy, and other variables specific to the cluster. The same deployment sent to a cluster in a public cloud with direct internet access would reference a non-existing optional ConfigMap, so containers would run without pre-populated proxy variables.
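
A rough sketch of how an optional reference might look in a container spec (the field names below are illustrative; the exact shape of the API was not settled in this discussion):

containers:
- name: app
  image: example/app:1.0   # illustrative image
  envFrom:
  - configMapRef:
      name: cluster-proxies
      optional: true       # if the ConfigMap is missing, the container still starts (proposed behavior)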

From @thockin
We may yet end up with "if present" configmap volumes - it would actually simplify a whole swath of scenarios. But we should do that and this in tandem.

From @fraenkel
Doing some investigation on the --from-env-file option, I noticed that the current create configmap support does not allow duplicate keys. I realize we could make this behave differently, but reading the use cases, people expect to be able to override keys, yet you cannot do that today with what is currently there.

Proposal: document structure of annotation/label namespaces more completely

Since we have ever more incubated projects, and everyone loves annotations and labels, we need a spec for how those annotations are formed.

We also have extant names, such as volume.alpha.kubernetes.io, pod.alpha.kubernetes.io, beta.kubernetes.io, net.beta.kubernetes.io, as well as deployment.kubernetes.io, kubectl.kubernetes.io, and vendor or product-centric quobyte.kubernetes.io, rkt.kubernetes.io, and gluster.kubernetes.io that we need to cover.

Proposal:

  • kubernetes.io/, and *.kubernetes.io (collectively called top-level names) are managed centrally and should be considered reserved. We will document and manage them centrally.

  • *.alpha.kubernetes.io, and *.beta.kubernetes.io are reserved for alpha/beta of centrally managed names.

  • *.<repo>.x.kubernetes.io is reserved for use in that component / repo (NB, repo moves will cause pain); an illustrative sketch follows this list.

  • As a component becomes more popular, it may petition for a top-level (non-x) name.

  • All existing names will be grandfathered and not forced to change, though owners are encouraged to accept and document the "new" names.
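
For illustration, under this scheme the annotations on an object might look like the following (the names marked as hypothetical are made up to show the pattern, not real reserved names):

metadata:
  annotations:
    kubernetes.io/change-cause: "kubectl set image deployment/nginx nginx=nginx:1.8"   # centrally managed top-level name
    widget.alpha.kubernetes.io/mode: "fast"                                            # alpha of a centrally managed name (hypothetical)
    example-controller.x.kubernetes.io/state: "pending"                                # owned by the example-controller component/repo (hypothetical)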

@philips @justinsb @bgrant0607 @smarterclayton @brendandburns

Kubernetes REST API documentation link broken

https://github.com/kubernetes/community/tree/master/contributors/devel includes a link under Developing against the Kubernetes API for the Kubernetes REST API. I tried to follow it (https://github.com/kubernetes/community/blob/master/contributors/api-reference/README.md), and the one after it (https://github.com/kubernetes/community/blob/master/contributors/user-guide/annotations.md) - those links both resolve to GitHub 404s.

I tried to locate the new/updated location, but it doesn't appear to be in this community repository any longer.

Where can I find the REST API documentation? (I'd be happy to make a PR to update the link, just don't know where to send it or find it)

Proposal: Use Selector Generation for Deployments, ReplicaSet, DaemonSet, StatefulSet and ReplicationController

Selector Generation for Deployments, ReplicaSet, DaemonSet, StatefulSet and ReplicationController

Goals

Make selectors easy to use and less error-prone for Deployments, ReplicaSets, DaemonSets, StatefulSets, and ReplicationControllers. Make kubectl apply work with selectors.

Problem Description

The spec.selector field of Deployments, ReplicaSets, and DaemonSets defaults to spec.template.metadata.labels if it is unspecified,
and there is validation to make sure spec.selector always selects the pod template.
This defaulting of the selector may prevent kubectl apply from working when updating the spec.selector field.

Example

Here is an example from kubernetes/kubernetes#26202 (comment) showing how defaulting the selector prevents kubectl apply from working.
First, create a Deployment via kubectl apply:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        test: abcd
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Then remove one of the labels in the file (and optionally specify the selector), and run kubectl apply again.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

The error message looks like:

The Deployment "nginx-deployment" is invalid: spec.template.metadata.labels: Invalid value: {"app":"nginx"}: `selector` does not match template `labels`

kubectl apply would not suffer from this issue if selector generation were also used for Deployments, ReplicaSets, DaemonSets, StatefulSets, and ReplicationControllers.

Proposed changes

Selector generation works well in Job. Use the same pattern for Deployments, ReplicaSets, DaemonSets, StatefulSets, and ReplicationControllers.
If spec.selector is not derived from spec.template.metadata.labels, kubectl apply will work well for selectors.
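
For reference, this is roughly how the Job pattern works today: when spec.manualSelector is unset, the API server adds a unique controller-uid label to the pod template and generates a matching selector, for example (the job name and uid below are made up):

spec:
  selector:
    matchLabels:
      controller-uid: a8f3d00d-1234-11e7-aaaa-000000000000
  template:
    metadata:
      labels:
        controller-uid: a8f3d00d-1234-11e7-aaaa-000000000000
        job-name: pi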

API

The change will be the same as the change made to Job,
and it will be applied to Deployments, ReplicaSets, DaemonSets, StatefulSets, and ReplicationControllers.
Deployments, ReplicaSets, and DaemonSets are in extensions/v1beta1, and StatefulSets are in apps/v1beta1.
ReplicationController is in core/v1.

kubectl

No changes required.

Version skew

  1. Old client vs. new API server: the old client doesn't know the spec.manualSelector field, so it will be treated as nil on the server side and the server will auto-generate the selector.

  2. New client vs. old API server: spec.manualSelector will be ignored, since the old API server doesn't understand this field. It should behave as before.

cc: @kubernetes/sig-api-machinery @kubernetes/sig-cli @kubernetes/sig-apps

I will send a PR for the proposal after getting some feedback.

Document advice about generation/observedGeneration for controllers

Moved from kubernetes/kubernetes#35036

For some kinds of controllers, it's important to know which generation the status is valid for. I don't think it's strictly needed for all controllers, but it's worth some thought to determine whether it's easier to apply the rule for everyone or whether that would cause unnecessary controller churn for meaningless changes.

Also, if you have multiple controllers working off the same object (service accounts, for instance), then each individual controller would need to manage its own observedGeneration.
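
As a concrete sketch of the pattern, a client or a dependent controller compares metadata.generation with status.observedGeneration to decide whether the reported status corresponds to the latest spec:

metadata:
  generation: 4            # incremented by the API server whenever the spec changes
status:
  observedGeneration: 3    # the controller has only processed generation 3 so far

If observedGeneration is less than generation, the status may be stale with respect to the latest spec and the client should wait for the controller to catch up.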

Add a link checker for this repo

While looking at the API conventions docs this morning, I found that many of the links in that doc were broken. We should add a link checker for this repo (and eventually other repos in the community).

2 vagrant.md exists

We have two vagrant.md files:
contributors/devel/developer-guides/vagrant.md
contributors/devel/local-cluster/vagrant.md

They're very much alike; I think we should maintain only one and remove the other.

Cross-cluster communication between pods (AWS)

I have a question about cross-cluster communication between pods on AWS.

I am using Kubernetes to deploy clusters on AWS. Both clusters are in the same region and AZ. Each cluster is deployed in its own VPC with non-overlapping subnets. I've successfully created VPC peering to establish communication between the two VPCs, and minions (instances) in each VPC can ping each other through their private IPs.

The question is: Kubernetes pods in one cluster (VPC) cannot ping a pod in the other cluster through its internal IP. I see traffic leaving the pod and the minion but don't see it in the other VPC.

Here is IP info:

Cluster 1 (VPC 1) - subnet 172.21.0.0/16
Minion(Instance)in VPC 1 - internal IP - 172.21.0.232
Pod on Minion 1 - IP - 10.240.1.54

Cluster 2 (VPC 2) - subnet 172.20.0.0/16
Minion(instance) in VPC 2 - internal IP - 172.20.0.19
Pod on Minion 1 - IP - 10.241.2.36

I've configured VPC peering between the two VPCs, and I can ping from the minion in VPC 1 (172.21.0.232) to the minion in VPC 2 (172.20.0.19).

But when I try to ping the pod in VPC 1 on minion 1 (IP 10.240.1.54) from the pod in VPC 2 on its minion (IP 10.241.2.36), the ping fails.

Is this a supported use case on AWS? How can I achieve it? I have also configured the security groups on both instances to allow all traffic from source 10.0.0.0/8, but it did not help.

Really appreciate your help!

Proposal: Move kubedns addon code to its own repository under Kubernetes

Overview

The kube-dns pod currently consists of the kube-dns daemon, dnsmasq, and their healthcheck sidecars (exec-healthz and dnsmasq-metrics). The code is spread over both the main kubernetes/ and contrib/ repositories. Images for each of the containers are currently versioned separately. For example, the kube-dns that ships with Kubernetes v1.5 is v1.9, dnsmasq-metrics is v1.0, etc.

Note: exec-healthz is being removed from the pod shortly.

It would be more coherent to move all of the related code into the same repository. kube-dns consumes only public APIs, and the release of all Kubernetes-maintained code can be unified.

Plan of action

  • Create a new repository in kubernetes called dns
  • Copy code (keeping history) of kubernetes/pkg/dns to dns/pkg/dns
  • Copy code (keeping history) of kubernetes/cmd/kube-dns to dns/cmd/kube-dns
  • Copy code (keeping history) of contrib/dnsmasq-metrics to dns/cmd/kube-dns-sidecar, adjusting pkg and cmd appropriately
  • Copy code (keeping history) of contrib/dnsmasq to dns/dnsmasq
  • Use the standard build template for the verification and build infrastructure
  • e2e testing of DNS will remain in the Kubernetes repo. I would defer any potential movement of e2e tests until the separation of e2e from the main repo has been concretized.
  • kubernetes/cluster/addons will remain for now.
  • Add hook to run e2e against current kube-dns build. Note: this is not done today in the kubernetes repo itself.

Impacts

  • Kubernetes cluster/addon will refer to externally built image. This is already the case, as the CI system does not pick up changes to the kube-dns code. The referenced image on gcr.io is used instead.
  • kube-dns is a leaf project; the output of git grep 'k8s.io/kubernetes/\(cmd\|pkg\)/dns/' | grep -v '^\(pkg/dns\|cmd/kube-dns\)' shows only one e2e test as an external dependency, and that test will be removed.

Process for becoming lead of a sig

Hi all

We would like to assist with the AWS SIG. What is the process? I know that @justinsb would like the help, but I have not talked with @mfburnett. I know there have not been any meetings, and we would love to help!

Thanks

Chris

Incorrect link to the PR Template

The Release Notes sections of https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md and https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md have an incorrect link to the "PR template".
The problem is that the link is a reference link into the https://github.com/kubernetes/kubernetes project, which apparently worked fine before these files were moved to the community project; i.e., https://github.com/kubernetes/kubernetes/blob/master/docs/devel/cherry-picks.md now says the content has moved to the community project, which results in an incorrect link.

ssh to kube nodes? [AWS]

Hi there,
How is it possible to ssh to a particular kube node [AWS environment]? Where can I find the pem file? I looked in kubernetes/cluster/aws/util.sh but nothing pointed me to the pem file location; I only found the fingerprint used to create the AWS key.

Thanks

Finalize governance.md

We've been iterating on governance.md to document how the project has been operating and to capture discussions from multiple issues, merged and unmerged PRs, and mailing-list discussions:
https://github.com/kubernetes/community/blob/master/governance.md

I created a Google doc to facilitate comments:
https://docs.google.com/document/d/1UKfV4Rdqi8JcrDYOYw9epRcXY17P2FDc2MENkJjMcas/edit

Please join the kubernetes-dev, kubernetes-pm, or kubernetes-wg-contribex Google Groups if you want access. Sharing requests will be ignored.

Once we take comments on the doc, changes will be pushed into a PR.

deprecating wiki pages

Several of the wiki pages are out of date and should go away. Please let me know if these shouldn't go away.

Out of date

Looks abandoned

Should move to features repo?

Maybe these just need to be documented; I just don't understand what they are for.

Move to dev guide

Duplicated with community repo

Move to docs repo

Consider using Google Docs so people can comment
