
kubernetes-replicator's Introduction

ConfigMap, Secret, Role, RoleBinding and ServiceAccount replication for Kubernetes


This repository contains a custom Kubernetes controller that can be used to make secrets, config maps and other resources (roles, role bindings, service accounts) available in multiple namespaces.

Contents

  1. Deployment
    1. Using Helm
    2. Manual
  2. Usage
    1. "Role and RoleBinding replication
    2. "Push-based" replication
    3. "Pull-based" replication
      1. 1. Create the source secret
      2. 2. Create empty secret
      3. Special case: TLS secrets

Deployment

Using Helm

  1. Add the Mittwald Helm Repo:

    $ helm repo add mittwald https://helm.mittwald.de
    "mittwald" has been added to your repositories
    
    $ helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "mittwald" chart repository
    Update Complete. ⎈ Happy Helming!⎈
  2. Upgrade or install kubernetes-replicator:

    $ helm upgrade --install kubernetes-replicator mittwald/kubernetes-replicator
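     By default, Helm installs the release into whatever namespace your current kubectl context points to. As an optional variation (a sketch using Helm's standard flags; the chart itself does not require a particular namespace), the release can be placed into a dedicated namespace:

    $ helm upgrade --install kubernetes-replicator mittwald/kubernetes-replicator \
        --namespace kubernetes-replicator --create-namespace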

Manual

$ # Create roles and service accounts
$ kubectl apply -f https://raw.githubusercontent.com/mittwald/kubernetes-replicator/master/deploy/rbac.yaml
$ # Create actual deployment
$ kubectl apply -f https://raw.githubusercontent.com/mittwald/kubernetes-replicator/master/deploy/deployment.yaml
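
Once applied, the controller runs as a single deployment. As a quick sanity check (a command-line sketch assuming the defaults from deployment.yaml, which run the replicator in the kube-system namespace with the label app=replicator, as also visible in the pod descriptions quoted in the issues further below):

$ kubectl -n kube-system get pods -l app=replicator
$ kubectl -n kube-system logs -l app=replicator --tail=20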

Usage

Role and RoleBinding replication

To create a new role, your own account needs to have at least the same set of privileges as the role you're trying to create. The chart currently offers two options to grant these permissions to the service account used by the replicator:

  • Set the value grantClusterAdmin to true, which grants the service account admin privileges (see the command sketch after this list). This is set to false by default, as having a service account with that level of access might be undesirable due to the potential security risks attached.

  • Set the lists of needed api groups and resources explicitly. These can be specified using the value privileges. privileges is a list that contains pairs of api group and resource lists.

    Example:

    serviceAccount:
      create: true
      annotations: {}
      name:
      privileges:
        - apiGroups: [ "", "apps", "extensions" ]
          resources: ["secrets", "configmaps", "roles", "rolebindings",
          "cronjobs", "deployments", "events", "ingresses", "jobs", "pods", "pods/attach", "pods/exec", "pods/log", "pods/portforward", "services"]
        - apiGroups: [ "batch" ]
          resources:  ["configmaps", "cronjobs", "deployments", "events", "ingresses", "jobs", "pods", "pods/attach", "pods/exec", "pods/log", "pods/portforward", "services"]

    These settings permit the replication of Roles and RoleBindings with privileges for the api groups "", apps, batch and extensions on the resources specified.
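
Both options are ordinary chart values. Assuming grantClusterAdmin is a top-level chart value, as the wording above suggests, the cluster-admin shortcut could be enabled at install time roughly like this (a sketch only):

$ helm upgrade --install kubernetes-replicator mittwald/kubernetes-replicator \
    --set grantClusterAdmin=true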

"Push-based" replication

Push-based replication will "push out" secrets, configmaps, roles and rolebindings into namespaces when new namespaces are created or when the source secret/configmap/role/rolebinding changes.

There are two general methods for push-based replication:

  • name-based; this allows you to either specify your target namespaces by name or by regular expression (which should match the namespace name). To use name-based push replication, add a replicator.v1.mittwald.de/replicate-to annotation to your secret, role(binding) or configmap. The value of this annotation should contain a comma separated list of permitted namespaces or regular expressions. (Example: namespace-1,my-ns-2,app-ns-[0-9]* will replicate only into the namespaces namespace-1 and my-ns-2 as well as any namespace that matches the regular expression app-ns-[0-9]*).

    Example:

    apiVersion: v1
    kind: Secret
    metadata:
      annotations:
        replicator.v1.mittwald.de/replicate-to: "my-ns-1,namespace-[0-9]*"
    data:
      key1: <value>
  • label-based; this allows you to specify a label selector that a namespace should match in order for a secret, role(binding) or configmap to be replicated. To use label-based push replication, add a replicator.v1.mittwald.de/replicate-to-matching annotation to the object you want to replicate. The value of this annotation should contain an arbitrary label selector.

    Example:

    apiVersion: v1
    kind: Secret
    metadata:
      annotations:
        replicator.v1.mittwald.de/replicate-to-matching: >
          my-label=value,my-other-label,my-other-label notin (foo,bar)
    data:
      key1: <value>

When the labels of a namespace are changed, any resources that were replicated by labels into the namespace and no longer qualify for replication under the new set of labels will be deleted. Afterwards any resources that now match the updated labels will be replicated into the namespace.

It is possible to use both methods of push-based replication together in a single resource, by specifying both annotations.
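
For example, a single secret carrying both annotations (a sketch combining the two examples above; the namespace names and label selector are placeholders) would be pushed to my-ns-1, to every namespace matching namespace-[0-9]*, and to every namespace labelled my-label=value:

apiVersion: v1
kind: Secret
metadata:
  annotations:
    replicator.v1.mittwald.de/replicate-to: "my-ns-1,namespace-[0-9]*"
    replicator.v1.mittwald.de/replicate-to-matching: "my-label=value"
data:
  key1: <value>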

"Pull-based" replication

Pull-based replication makes it possible to create a secret/configmap/role/rolebinding and select a "source" resource from which the data is replicated.

Step 1: Create the source secret

If a secret or configMap needs to be replicated to other namespaces, annotations should be added to that object to permit replication.

  • Add the replicator.v1.mittwald.de/replication-allowed annotation with the value true, indicating that the object can be replicated.

  • Add the replicator.v1.mittwald.de/replication-allowed-namespaces annotation. The value of this annotation should contain a comma-separated list of permitted namespaces or regular expressions. For example namespace-1,my-ns-2,app-ns-[0-9]*: in this case replication will be performed only into the namespaces namespace-1 and my-ns-2, as well as any namespace that matches the regular expression app-ns-[0-9]*.

    apiVersion: v1
    kind: Secret
    metadata:
      annotations:
        replicator.v1.mittwald.de/replication-allowed: "true"
        replicator.v1.mittwald.de/replication-allowed-namespaces: "my-ns-1,namespace-[0-9]*"
    data:
      key1: <value>

Step 2: Create an empty destination secret

Add the annotation replicator.v1.mittwald.de/replicate-from to any Kubernetes secret or config map object. The value of that annotation should contain the name of another secret or config map (using <namespace>/<name> notation).

apiVersion: v1
kind: Secret
metadata:
  name: secret-replica
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/some-secret
data: {}

The replicator will then copy the data attribute of the referenced object into the annotated object and keep them in sync.
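
As a quick check (a command-line sketch using the example names above), inspect the replica once it has been reconciled; its data should match the source, and the replicator records the source's resource version in the replicator.v1.mittwald.de/replicated-from-version annotation:

$ kubectl get secret secret-replica -o yaml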

Special case: TLS secrets

Secrets of type kubernetes.io/tls are treated in a special way: the source secret needs to have data["tls.crt"] and data["tls.key"] properties to begin with. In the replicated secret, these properties also need to be present, but they may be empty:

apiVersion: v1
kind: Secret
metadata:
  name: tls-secret-replica
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/some-tls-secret
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""

Special case: Docker registry credentials

Secrets of type kubernetes.io/dockerconfigjson also require special treatment. These secrets are required to have a .dockerconfigjson key containing valid JSON. For this reason, a replicated secret of this type should be created as follows:

apiVersion: v1
kind: Secret
metadata:
  name: docker-secret-replica
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/some-docker-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: e30K
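
Here, e30K is simply the base64 encoding of an empty JSON object ({} followed by a newline), so the placeholder content is valid JSON until it is overwritten by the replicated data. You can reproduce the value yourself:

$ echo '{}' | base64
e30K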

Special case: Strip labels while replicating resources

Operators like https://github.com/strimzi/strimzi-kafka-operator implement their own garbage collection based on specific labels defined on resources. If the mittwald replicator replicates secrets to a different namespace, the strimzi-kafka-operator will remove the replicated secrets, because from the operator's point of view the secret is a leftover. To mitigate this, set the annotation replicator.v1.mittwald.de/strip-labels=true to remove all labels from the replicated resource.

apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/managed-by: "strimzi-kafka-operator"
  name: cluster-ca-certs
  annotations:
    replicator.v1.mittwald.de/strip-labels: "true"
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""

Special case: Resource with .metadata.ownerReferences

Sometimes, secrets are generated by external components. Such secrets are configured with an ownerReference. By default, the kubernetes-replicator will delete the ownerReference in the target namespace.

An ownerReference does not work across namespaces, and the secret at the destination would otherwise be removed by the Kubernetes garbage collection.

To keep ownerReferences at the destination, set the annotation replicator.v1.mittwald.de/keep-owner-references=true

apiVersion: v1
kind: Secret
metadata:
  name: docker-secret-replica
  annotations:
    replicator.v1.mittwald.de/keep-owner-references: "true"
  ownerReferences:
    - apiVersion: v1
      kind: Deployment
      name: owner
      uid: "1234"
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""

See also: #120


kubernetes-replicator's Issues

Settings for log level

We use the replicator in production at a customer site. We are getting complaints about too many logs from the replicator. In our defence, we have lots of apps and namespaces.

Any plans of introducing log levels?

Would you accept a merge request, let's say with a simple log library like hashicorp's logutils?

"Secret does not exist" fake news

Our dev cluster was pointed at the latest quay image; we had some cert issues and saw errors relating to secrets in the replicator logs:

2019/03/18 19:13:42 secret kube-system/sandbox-dev-some-company-tls is replicated from cert-manager/sandbox-wdev-fintechpeople-ninja-tls
2019/03/18 19:13:42 could not get secret cert-manager/sandbox-wdev-fintechpeople-ninja-tls: does not exist
2019/03/18 19:13:42 secret jenkins/sandbox-wdev-fintechpeople-ninja-tls is replicated from cert-manager/sandbox-wdev-fintechpeople-ninja-tls
2019/03/18 19:13:42 could not get secret cert-manager/sandbox-wdev-fintechpeople-ninja-tls: does not exist
2019/03/18 19:13:42 secret vault/sandbox-wdev-fintechpeople-ninja-tls is replicated from cert-manager/sandbox-wdev-fintechpeople-ninja-tls
2019/03/18 19:13:42 replication of secret cert-manager/sandbox-wdev-fintechpeople-ninja-tls is not permitted: source cert-manager/sandbox-wdev-fintechpeople-ninja-tls does not allow replication. sandbox-wdev-fintechpeople-ninja-tls will not be replicated

cert-manager/sandbox-wdev-fintechpeople-ninja-tls exists and has valid cert
kube-system and jenkins secrets exist and have the correct content, so no update should be needed
The error not allowing replication is new, which I believe is fine according to the changes I saw in the code.

1. Is the behaviour about the secret not existing expected?
2. This kind of affected our use case, as we create the secret with cert-manager and we depend on cert-manager never removing whatever annotations we manually add to that secret. Do you have a way in mind to mimic the previous behavior, not needing the origin secret to have replicator annotations?

Replicated keys do not get removed when source is deleted

Describe the bug
If a manually generated secret or resource receives keys via replicate-to, the replicated keys are not removed upon deletion of the source secret.

To Reproduce
Manually generated resource:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
  namespace: test
  annotations:
data:
  test: "45"

Source resource:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
  annotations:
    replicator.v1.mittwald.de/replicate-to: "test"
data:
  test_data: "3"

Replication:

$ kubectl create namespace test
$ kubectl apply -f manual.yaml
$ kubectl apply -f source.yaml
$ kubectl delete configmap configmap-test
$ kubectl get configmap configmap-test -n test -o yaml
apiVersion: v1
data:
  test: "45"
  test_data: "3"

Expected behavior
The key test_data gets removed upon deletion of the source resource.

Environment:

  • Kubernetes version: [1.19]
  • kubernetes-replicator version: [v2.5.1]

Multiarch support

Supporting arm/arm64 in addition to amd64 would be great for multi-arch clusters. It essentially just requires building separate images for each architecture (i.e. quay.io/mittwald/kubernetes-replicator-amd64:latest, quay.io/mittwald/kubernetes-replicator-arm:latest, etc.) and then publishing an overarching manifest at quay.io/mittwald/kubernetes-replicator:latest that points to these. https://docs.docker.com/engine/reference/commandline/manifest/ has all the details and https://billglover.me/2018/10/30/multi-architecture-docker-builds/ has an example build, but I haven't used goreleaser yet, so I'm not quite confident making the change myself. It seems pretty easy when you're using the scratch base image.
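
For reference, the manifest-list step described above would look roughly like this (a sketch only; the per-architecture image names follow the naming suggested in the paragraph and are not actually published tags):

$ docker manifest create quay.io/mittwald/kubernetes-replicator:latest \
    quay.io/mittwald/kubernetes-replicator-amd64:latest \
    quay.io/mittwald/kubernetes-replicator-arm:latest \
    quay.io/mittwald/kubernetes-replicator-arm64:latest
$ docker manifest push quay.io/mittwald/kubernetes-replicator:latest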

Replication fails when destination is created before source

Problem

When the destination secret is created before the source, kubernetes-replicator forgets about it.

The real world use case for this is the following: we use cert-manager to issue a wildcard certificate, which gets stored in a secret in the kube-system namespace. We then need to replicate this secret to several other namespaces. When configuring a new cluster, our tooling provisions cert-manager first, then kubernetes-replicator & then creates the empty secrets to accept the replicated data. This leads to a race condition where sometimes certificate issuance (and the corresponding creation of the source secret) happens after the creation of the destination secrets.

I believe the issue is this branch which exits early. This could be fixed by either:

  • updating the dependency map even if the source secret does not exist. I'm not sure if this has other undesirable implications
  • periodically reconciling state with the cluster by listing all secrets, or perhaps check for existing dependents when a source secret is created

Reproduction

# Create destination secret & annotate it for replication
kubectl create secret generic test-destination
kubectl annotate secret test-destination replicator.v1.mittwald.de/replicate-from=default/test-source

# Create source secret & annotate it for replication
kubectl create secret generic test-source
kubectl annotate secret test-source replicator.v1.mittwald.de/replication-allowed=true
kubectl annotate secret test-source replicator.v1.mittwald.de/replication-allowed-namespaces='.*'

Expected

The test-source secret is replicated into the test-destination secret

Actual

Nothing happens.

-> % kubectl -n kube-system logs replicator-deployment-766c46874f-54bs2 --tail=10   
2020/02/14 23:05:05 secret default/better-tls-3.dev.source.ai-tls is already up-to-date
2020/02/14 23:05:05 secret jenkins/better-tls-3.dev.source.ai-tls is replicated from kube-system/better-tls-3.dev.source.ai-tls
2020/02/14 23:05:05 secret jenkins/better-tls-3.dev.source.ai-tls is already up-to-date
2020/02/14 23:05:05 secret kube-system/better-tls-3.dev.source.ai-tls has 2 dependents
2020/02/14 23:05:05 updating dependent secret kube-system/better-tls-3.dev.source.ai-tls -> default/better-tls-3.dev.source.ai-tls
2020/02/14 23:05:05 secret default/better-tls-3.dev.source.ai-tls is already up-to-date
2020/02/14 23:05:05 updating dependent secret kube-system/better-tls-3.dev.source.ai-tls -> jenkins/better-tls-3.dev.source.ai-tls
2020/02/14 23:05:05 secret jenkins/better-tls-3.dev.source.ai-tls is already up-to-date
2020/02/14 23:10:20 secret default/test-destination is replicated from default/test-source
2020/02/14 23:10:20 could not get secret default/test-source: does not exist

I'd be happy to take a crack at this but I wanted to touch base & see which way you'd rather go about fixing this

Unable to replicate a RoleBinding containing a ClusterRole role reference.

Describe the bug
Unable to replicate a RoleBinding containing a ClusterRole role reference.

To Reproduce
Steps to reproduce the behavior. Please provide appropriate Kubernetes manifests for reproducing the behavior.

  • Create any ClusterRole named "existing-cluster-role". ClusterRoles exist outside of any namespaces.
  • Create namespaces "test-source", "test-destination"
  • Apply this RoleBinding in the "test-source" namespace
kind: RoleBinding  
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-role-binding
  namespace: test-source
  annotations:
    replicator.v1.mittwald.de/replicate-to: test-destination
subjects:
- kind: User
  name: some_user
roleRef:
  kind: ClusterRole
  name: existing-cluster-role
  apiGroup: rbac.authorization.k8s.io
---

Expected behavior

RoleBinding is replicated to the "test-destination" namespace

Environment:

  • Kubernetes version: 1.19
  • kubernetes-replicator version: 2.6.2

Additional context
Add any other context about the problem here.

Pre-existing secret not being updated on Kubernetes v1.20

Describe the bug
Contents of a pre-existing secret are not being overwritten/merged with the source secret on Kubernetes v1.20.
This works fine on v1.18.18 and v1.19.10.

To Reproduce

  • Deployed Kubernetes v1.20 using Minikube
  • Deployed kubernetes-replicator using manifests
  • Created two namespaces
 kubectl create ns test-secret-1
 kubectl create ns test-secret-2
  • Created secret at namespace 1
kubectl create secret docker-registry docker-registry \
  --namespace test-secret-1 \
  --docker-username=DOCKER_USERNAME \
  --docker-password=DOCKER_PASSWORD
  • Created secret at namespace 2 (same name, different content)
kubectl create secret docker-registry docker-registry \
  --namespace test-secret-2 \
  --docker-username=ANOTHER_DOCKERUSER \
  --docker-password=ANOTHER_DOCKERPASSWORD
  • Annotate secret at namespace 1
kubectl annotate secret docker-registry -n test-secret-1 "replicator.v1.mittwald.de/replicate-to"="*"
  • Check kubernetes-replicator logs
could not replicate object to other namespaces" error="Replicated test-secret-1/docker-registry to 1 out of 1 namespaces: 1 error occurred:\n\t* Failed to replicate Secret test-secret-1/docker-registry -> test-secret-2: Failed to update secret test-secret-2/docker-registry: secrets \"docker-registry\" already exists:

Expected behavior
Secret contents should be merged, as defined here

Environment:

  • Kubernetes version: [e.g. 1.20]
  • kubernetes-replicator version: [e.g. v2.6.1]

Proposal: Configure replication with CR

Acknowledgement
This idea was originally presented by @HansK-p in #40.

Issue
Sometimes, deployment tools (or something as simple as kubectl apply) mess with already deployed (and replicated) secrets or config maps. This is especially the case for secrets of type other than Opaque: These have to be created with empty initial values (like tls.crt="" for secrets of type kubernetes.io/tls), which are then overridden by the replicator (and then overridden again by the next deployment). This is the main issue in #40, and also influential in #11, #23 and #28.

Another issue is that some source secrets are created automatically (by cert-manager, for example) without the option of adding custom annotations (also in #40).

Proposed solution
Introduce a new custom resource that describes secret (or configmap) replication, instead of using annotations:

apiVersion: replicator.mittwald.systems/v1
kind: ReplicationConfig
metadata:
  name: target-secret
spec: # written by user
  source:
    namespace: some-namespace
    name: some-source-secret
    kind: Secret
status: # written by replicator
  phase: Replicated
  target:
    namespace: my-namespace
    name: target-secret
    kind: Secret

The replicator would observe these CRs and create the replicated secrets by itself. This would have the advantage that the replicated secrets themselves would be created and fully owned by the replicator itself (without any deployment tools messing with them). We could also ensure that they are created with the correct secret type to begin with.

Roadmap
To build this, we might want to consider migrating this controller to the Operator SDK. We would probably be able to port and mostly keep the main reconciliation loops as they are. Then, a new controller could be added to handle the ReplicationConfig CRs.

TLS secret doesn't replicate crt and key

I'm trying to replicate a Secret of type kubernetes.io/tls from namespace from-namespace to to-namespace and the replication doesn't work. This is pretty similar to #7 and #11, but neither of the solutions seems to be working on my side.

The original secret to copy:

apiVersion: v1
data:
  ca.crt: ""
  tls.crt: <omitted>
  tls.key: <omitted>
kind: Secret
metadata:
  annotations:
    ...
    replicator.v1.mittwald.de/replication-allowed: "true"
    replicator.v1.mittwald.de/replication-allowed-namespaces: .*
  name: from-cert
  namespace: from-namespace
type: kubernetes.io/tls

to the target secret:

apiVersion: v1
type: kubernetes.io/tls
kind: Secret
metadata:
  name: to-cert
  labels:
  annotations:
    replicator.v1.mittwald.de/replication-from: from-namespace/from-cert
data: {}

When I kubectl apply the above, I'm getting this error:

The Secret "to-cert" is invalid:
* data[tls.crt]: Required value
* data[tls.key]: Required value

which was suggested in #11 (comment), and when applying the following:

apiVersion: v1
type: kubernetes.io/tls
kind: Secret
metadata:
  name: to-cert
  labels:
  annotations:
    replicator.v1.mittwald.de/replication-from: from-namespace/from-cert
data:
  tls.crt: ""
  tls.key: ""

The tls.crt and tls.key are not copied over based on values from from-namespace/from-cert and there's no log in replicator-deployment. I've tried both master and v2.0.1 deployment in kube-system. Am I missing something here? Any idea/suggestions?

Copy ConfigMap labels if they exist in source

Is your feature request related to a problem? Please describe.
Created/Updated configmap does not contain labels from original configmap.

Describe the solution you'd like
Labels should be copied if they exist in the source configmap.

Describe alternatives you've considered
n/a

Additional context
n/a

Replicator fails to replicate rolebindings and roles using replicate-to-matching

Describe the bug
We're using Argo Workflow and some basic settings have to be copied from the namespace Argo to all namespaces running workflows.

We're trying to use the replicator to replicate the secrets, roles and rolebindings using the annotation replicator.v1.mittwald.de/replicate-to-matching. The secrets are getting replicated to the other namespaces, but the roles and rolebinding aren't replicated.

time="2021-06-23T08:03:57Z" level=error msg="error while replicating by label selector" error="Replicated argo/workflow-default-binding to 0 out of 2 namespaces: 2 errors occurred:\n\t* Failed to replicate RoleBinding argo/workflow-default-binding -> namespace1-dev: Failed to update roleBinding namespace1-dev/workflow-default-binding: roles.rbac.authorization.k8s.io \"workflow-role\" not found: Failed to update roleBinding namespace1-dev/workflow-default-binding: roles.rbac.authorization.k8s.io \"workflow-role\" not found\n\t* Failed to replicate RoleBinding argo/workflow-default-binding -> namespace2-dev: Failed to update roleBinding namespace2-dev/workflow-default-binding: roles.rbac.authorization.k8s.io \"workflow-role\" not found: Failed to update roleBinding namespace2-dev/workflow-default-binding: roles.rbac.authorization.k8s.io \"workflow-role\" not found\n\n" kind=RoleBinding resource=argo/workflow-default-binding

To Reproduce

  1. Create two namespaces.

  2. Create a secret, a role and a rolebinding in the first namespace which have the annotation:
    replicator.v1.mittwald.de/replicate-to-matching: argo-workflow-enabled=true

    The rolebinding should reference the role.

  3. Add the label "argo-workflow-enabled:true" to the second namespace.

  4. Wait till the replicator has run.

  5. Check for the secret, role and rolebinding in the second namespace.

Expected behavior

The secret, role and rolebinding will be replicated to namespaces which are having the label "argo-workflow-enabled: true"

Environment:

  • Kubernetes version: v1.18.1
  • kubernetes-replicator version: latest (Redeployed the pod with version quay.io/mittwald/kubernetes-replicator:latest)

Helm chart

Would you be open to having a helm chart to manage kubernetes-replicator? Manually managing the objects makes me nervous on production clusters. :)

Not replicating .dockerconfigjson registry secrets

Simple and common use case: Replicate the registry credentials to all namespaces.

Typically this is a secret named "regcred" and its data field is:

data:
  .dockerconfigjson: <base64 auth string here>

Seems that kubernetes-replicator does not copy these over.

Wrong ClusterRole privileges when using Helm

Describe the bug
When using the Helm deploy method, the RBAC configuration introduces unneeded privileges, some of which are dangerous and reported as not recommended in the NSA Kubernetes hardening guidance.

To Reproduce
When using the Helm deploy method, the templated ClusterRole differs from https://github.com/mittwald/kubernetes-replicator/blob/master/deploy/rbac.yaml

Expected behavior
At ClusterRole:

- apiGroups: [""] # "" indicates the core API group
  resources: ["secrets", "configmaps"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]

Instead of:

  - apiGroups: 
      - ""
      - apps
      - extensions
    resources: 
      - secrets
      - configmaps
      - roles
      - rolebindings
      - cronjobs
      - deployments
      - events
      - ingresses
      - jobs
      - pods
      - pods/attach
      - pods/exec
      - pods/log
      - pods/portforward
      - services
    verbs: ["get", "watch", "list", "create", "update", "patch", "delete", "describe"]
  - apiGroups: 
      - batch
    resources: 
      - configmaps
      - cronjobs
      - deployments
      - events
      - ingresses
      - jobs
      - pods
      - pods/attach
      - pods/exec
      - pods/log
      - pods/portforward
      - services
    verbs: ["get", "watch", "list", "create", "update", "patch", "delete", "describe"]

Environment:

  • Kubernetes version: Any
  • kubernetes-replicator version: v2.6.0 to current

Additional context
This issue was introduced by: #75
I guess the problem here is that somebody used the example ClusterRole from the issue related to this PR to create the replicator's ClusterRole, which are two totally different things.

published helm chart

@martin-helmich we really love this idea! We haven't tried using it yet, but we're about to try and use this to replicate secrets containing wildcard certs from LetsEncrypt into selected namespaces. I work on an OSS project called Jenkins X https://jenkins-x.io/ and, provided things work well, we'll make it available for all users to include easily into their own Jenkins X installs. For now we'll probably fork this repo so we can publish the helm chart, so that it's easy for us and others to install. Any changes we make we will contribute back, of course.

Awesome work :)

Namespace Whitelist for Source Objects

Is your feature request related to a problem? Please describe.
Currently there is no way to limit where a source object can be created. This allows any tenant in a multi-tenant cluster to affect other tenants' resources.
Example:
Tenant A has a regular secret "proxy-config" containing proxy configuration. They reference this secret in a Deployment to set proxy environment variables.
Tenant B creates a source secret with the same name; the replicator would then copy the source secret's content from tenant B and merge/override tenant A's secret's content. This can potentially break tenant A's configuration or may be used maliciously to make the deployment use a proxy under tenant B's control.

Deploying a kubernetes-replicator for each tenant and limiting the controller's access to only each tenant's namespaces using RBAC is a solution that would allow each tenant to take advantage of this controller for their purposes.

However, another use case for the controller, and the reason we are writing this issue, is the option to centrally manage common resources and replicate them to all namespaces while limiting the namespaces from which source objects are processed. This would prevent the scenario described in the example above.

Describe the solution you'd like

  • an allowlist of sorts could be made configurable to allow cluster-wide operation of this controller while preventing the problematic scenario described above

Describe alternatives you've considered

  • #41 would solve this, provided that going forward CRs are the ONLY way to replicate resources. A CR based solution allows RBAC control over who can create ReplicationConfig objects. However reading the discussion in #41 it seems like an add-on solution is desired

quay.io/mittwald/kubernetes-replicator

Helm chart defaults to:

image: quay.io/mittwald/kubernetes-replicator
tag: stable

https://quay.io/repository/mittwald/kubernetes-replicator?tab=tags shows that the stable tag is assigned to a year-old alpha build image (considering it shares the v1.0.0-alpha1 tag).

As a result, helm chart deployments never go healthy/ready (after some initial logs with no errors) and go to CrashLoopBackOff later.

This can be fixed by changing the default image tag to latest / v1.0.0, but I believe the real error is the tag being assigned to the incorrect image.

Documentation error

You need to add annotations on the secret specifying which namespaces to allow when you are using the method of matching labels on namespaces. Your documentation does NOT reflect this.

Image should not run as root

Is your feature request related to a problem? Please describe.

As a security best-practice, images running as root should be limited

Describe the solution you'd like
use USER ... in the Dockerfile

Can not "replicate to" a secret which is "replicated from" another one.

Describe the bug
I created a secret by using "pull-based replication" with the annotation "replicator.v1.mittwald.de/replicate-from:" and then annotated it with "replicator.v1.mittwald.de/replicate-to:" to replicate this secret to all namespaces. But it does not replicate this pulled secret to the namespaces. When I try to replicate a secret that was not pulled (an existing one), it is successfully pushed to all namespaces.

To Reproduce
Yaml file :

apiVersion: v1
kind: Secret
metadata:
  name: some-replicated-tls
  annotations:
    replicator.v1.mittwald.de/replicate-from: cert-manager/some-tls
    replicator.v1.mittwald.de/replicate-to: ''
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""

Expected behavior
The pulled secret should be pushed to the namespaces.

Environment:

  • Kubernetes version: [1.18.9]
  • kubernetes-replicator version: [v2.5.1]

Additional context
Add any other context about the problem here.

Support for pushing secrets instead of pulling

Would it be possible to configure resources in a privileged namespace to push secrets to less privileged namespaces, instead of the other way around? I think the ability to pull secrets from any namespace isn't very secure.

Secrets are not replicated when adding label to existing namespace

Describe the bug

When adding a label to an existing namespace, the desired secret is not replicated.

To Reproduce

  • Running k8s cluster
  • Create namespace pizza
  • Deploy kubernetes-replicator
  • Create k8s secret
    apiVersion: v1
    kind: Secret
    metadata:
      annotations:
        replicator.v1.mittwald.de/replicate-to-matching: |
          replicate-cheese-secret
      name: cheese
      namespace: default
    data:
      foo: bar
    type: Opaque
  • Edit namespace pizza and set label replicate-cheese-secret=1
  • Secret cheese will not be replicated
  • Restarting the kubernetes-replicator pod will trigger secret replication

Expected behavior

Setting a label on an existing namespace should result in secret replication.

Environment:

  • Kubernetes version: 1.17
  • kubernetes-replicator version: v2.5.1

Additional context

n/a

Rolebinding replication failing for newly created namespaces

Describe the bug
When namespaces are newly created, replication of RoleBindings fails, because the Role referenced in the RoleBinding can't be found.

To Reproduce

  1. Create a Role and a corresponding Rolebinding set to be replicated to a namespace, e.g. test
  2. Create the namespace test
  3. The Role gets replicated, while the RoleBinding doesn't, with the application throwing an error that the referenced role could not be found

Expected behavior
Rolebinding gets replicated without error

Environment:

  • Kubernetes version: [1.18.9]
  • kubernetes-replicator version: [build from PR #75 ]

Additional context
I found this issue after working on #58. If the namespace exists and the Role is created before the RoleBinding, replication is performed without issue. However, if the namespace is newly created, we appear to have the situation that the replicator tries to replicate the RoleBinding before the corresponding Role has been replicated to the new namespace, leading to the error.

Allow Disabling Replication of Certain Resources

Is your feature request related to a problem? Please describe.
The replicator's memory usage grows with the number of resources it watches. Its average memory usage rose to 550 Mi. CPU also spikes during startup, where 100m was not enough. We use the replicator only for replicating secrets.

Describe the solution you'd like
As a fast solution I would like to suggest a configuration parameter such as resourcesTypes="secret, role, rolebinding, configmap", which would allow disabling other resources such as role, rolebinding and configmap with resourcesTypes="secret". Or a parameter like ignoreResources="role, rolebinding, configmap".

Additional context
I've disabled the role, rolebinding and configmap replicators by commenting out their goroutines in the code. As a result, memory dropped from 310 Mi to 17 Mi, and CPU no longer spikes during startup.

Crashes on Deploy - arm64 - Need architecture detection

Describe the bug
Upon deployment, either manually or by helm chart, the pod crashes repeatedly.
I have diagnosed this to be because I am deploying to a Raspberry Pi.
Normally, deployments from registries like Docker Hub detect the architecture and pull the appropriate version of the container.
It appears your setup has a separate repository for the arm64 version (though I definitely appreciate that it exists at all).

To Reproduce
helm repo add mittwald https://helm.mittwald.de
helm repo update
helm install replicator mittwald/kubernetes-replicator --namespace cert-manager

To Fix (Poorly, if you want this to work and it hasn't been patched yet)
helm pull mittwald/kubernetes-replicator
tar xvf kuber*
edit the values.yaml and append -arm64 at the end of the repository line (No spaces)

Expected behavior
Automatically detect architecture and use the appropriate docker container (Please?).

Environment:

  • Kubernetes version: v1.21.5+k3s2
  • kubernetes-replicator version: v2.6.2
  • RPi4

Additional context
This might be a feature request depending on your point of view, but I spent like half an hour trying to figure it out, so I'll go with bug.
If there was a note in the readme.md about it I'd say it was a feature request.

Question regarding how to build the Docker image properly

Hey there!
First of all- thanks so much for building this great tool! It's really useful for us here 😄

I've been struggling with an issue since yesterday and figured it might be worth asking here.
When I build the binary locally from master (using go build) and run it pointing to a local kubeconfig file- it works well.
However, if I build the image using the main Dockerfile (not the .buildx one) and then deploy it to the same cluster- it doesn't seem to function the same way (for example, the "push-based" replication doesn't seem to work. I see in the logs when using --log-level debug that the secrets are pushed to desired namespaces, but then nothing gets really created).
I'm pretty sure I'm doing something wrong with how I'm building the Docker image, and since it seems like a process I haven't interacted with before (I noticed that buildx is a newly added experimental feature) - I'm not really sure how can I mimic the same process that is done when a release is made.
I'd love to get some pointers in the right direction on how to accomplish this task. The push-based replication is something we really look forward to.

HEAD request on helm chart repo 404

Describe the bug

I'm using gitlabracadabra to sync Helm charts to an offline repo. This tool tests for the existence of charts by issuing a HEAD request. The server hosting the charts returns 404 for HEAD requests.

From rfc2616:

9.4 HEAD

The HEAD method is identical to GET except that the server MUST NOT
return a message-body in the response. The metainformation contained
in the HTTP headers in response to a HEAD request SHOULD be identical
to the information sent in response to a GET request. This method can
be used for obtaining metainformation about the entity implied by the
request without transferring the entity-body itself. This method is
often used for testing hypertext links for validity, accessibility,
and recent modification.

The response to a HEAD request MAY be cacheable in the sense that the
information contained in the response MAY be used to update a
previously cached entity from that resource. If the new field values
indicate that the cached entity differs from the current entity (as
would be indicated by a change in Content-Length, Content-MD5, ETag
or Last-Modified), then the cache MUST treat the cache entry as
stale.

To Reproduce

HEAD request returns 404:

$ curl --head https://helm.mittwald.de/charts/kubernetes-replicator-2.7.2.tgz -vvv
[...]
> HEAD /charts/kubernetes-replicator-2.7.2.tgz HTTP/2
> Host: helm.mittwald.de
> User-Agent: curl/7.64.0
> Accept: */*
>
[...]
< HTTP/2 404
< content-type: application/json; charset=utf-8
< date: Tue, 23 Nov 2021 08:39:49 GMT
< x-request-id: 5cd2737e-3654-44a0-ad47-569a301355c3
< content-length: 21

While GET request returns 200:

$ https_proxy=http://10.46.61.9:3128 curl  https://helm.mittwald.de/charts/kubernetes-replicator-2.7.2.tgz --output kubernetes-replicator-2.7.2.tgz -v
[...]
> GET /charts/kubernetes-replicator-2.7.2.tgz HTTP/2
> Host: helm.mittwald.de
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/2 200
< content-type: application/x-tar
< date: Tue, 23 Nov 2021 08:46:18 GMT
< x-request-id: 4b9e5e19-5a21-4940-a70d-b05390c80a04
<

Expected behavior

HEAD requests should return the same HTTP status as GET requests.

Environment:

  • Kubernetes version: v1.21.5
  • kubernetes-replicator version: 2.7.2

Annotations are not replicated

Describe the bug
Using push-based replication, annotations are not replicated from source to destination.
Of course, replicator.v1.mittwald.de/replicate-to must be exempt here.

To Reproduce
Create a Secret/ConfigMap with extra annotations not related to kubernetes-replicator.
When using push-based replication, the source annotations won't reach the destination secret.

Expected behavior
When using push-based replication on a Secret/ConfigMap that has extra annotations not related to kubernetes-replicator, those annotations must be copied over to the destination Secret/ConfigMap.

Environment:

  • Kubernetes version: 1.19.14
  • kubernetes-replicator version: 2.7.0

Additional context
To add extra context: I am facing this when using ArgoCD or any other GitOps tool.
I have an operator (Kafka/Strimzi) which generates a Secret in a namespace that I later want to replicate to another namespace so it can be used there.
This operator allows annotations to be added at Secret creation; this is actually the feature that allows me to set replicator.v1.mittwald.de/replicate-to, but I would also like to add argocd.argoproj.io/sync-options: Prune=false there, so that ArgoCD sync does not try to prune the destination secret every single time, avoiding the sync loop.

My mitigation plan is to avoid push-based replication and move to pull-based replication, but of course this complicates my setup, adding the need to limit destination namespaces and so on.

TLS support

Hi,

I create a TLS secret in a dedicated Namespace, with the type: kubernetes.io/tls

When I sync with kubernetes-replicator, the replicated secret is Opaque and does not work.

How do I replicate with the correct type?

error while replicating by label selector

Describe the bug
When replicating an object (Secret) through the replicate-to-matching mechanism, matching on two labels on the namespaces, as in the example below:
E.g:

apiVersion: v1
data: {}
kind: Secret
metadata:
  name: infrastructure-certificate
  annotations:
    replicator.v1.mittwald.de/replicate-to-matching: replicate-secret-infrastructure-certificate=true,projectCustomer=yes
type: Opaque

I'm getting the following error

{
  "jsonPayload": {
    "level": "error",
    "kind": "Secret",
    "error": "Replicated liferay/infrastructure-certificate to 68 out of 126 namespaces: 58 errors occurred:\n\t* Failed to replicate Secret liferay/infrastructure-certificate -> afavkvklbdgiduzkky: Failed to update secret afavkvklbdgiduzkky/infrastructure-certificate: secrets \"infrastructure-certificate\" already exists: Failed to update secret afavkvklbdgiduzkky/infrastructure-certificate: secrets \"infrastructure-certificate\" already exists\n ....",
    "msg": "error while replicating by label selector",
    "resource": "liferay/infrastructure-certificate"
  }
}

NOTE: I have summarized the contents of the error field so that it is not too long.

However, I noticed that the Secret replication happened normally. It seemed to me that the error was a false positive.
It is also important to note that only a subset of namespaces entered the error condition, which seems even stranger to me because, for the record, all the namespaces in the cluster to which I'm synchronizing the secret do have the secret.
So it doesn't seem to be just an error because the secret already exists at the destination; rather, kubernetes-replicator failed to sync (overwrite the already existing secret).

To Reproduce
Secret:

apiVersion: v1
data: {}
kind: Secret
metadata:
  name: infrastructure-certificate
  annotations:
    replicator.v1.mittwald.de/replicate-to-matching: replicate-secret-infrastructure-certificate=true,projectCustomer=yes
type: Opaque

Namespace:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    namespace: afavkvklbdgiduzkky
    projectCustomer: "yes"
    projectId: projectId
    projectSandbox: ""
    projectTrial: "yes"
    projectType: non-production
    projectUid: projectUid
    replicate-secret-infrastructure-certificate: "true"
  name: afavkvklbdgiduzkky

Expected behavior
Expected no sync error messages to occur.

Environment:

  • Kubernetes version: [e.g. 1.19]
  • kubernetes-replicator version: [e.g. v2.6.3]

Additional context
Add any other context about the problem here.

Infinite Loop

Hi,

If I add the following in my cert-manager secret called tls-cert:

replicator.v1.mittwald.de/replication-allowed: "true"
replicator.v1.mittwald.de/replication-allowed-namespaces: ".*"

And then add this in my other secret called my-secret:
replicator.v1.mittwald.de/replicate-from: cert-manager/tls-cert

When cert-manager updates the cert, it ends up in an infinite loop, because the replicator adds new annotations to the cert-manager secret, which causes cert-manager to run again, then the replicator runs again, and so on.

The following annotations are getting added to the cert-manager cert even though it has no replicate-from:

"replicator.v1.mittwald.de/replicated-at": "2019-07-31T20:05:11Z",
 "replicator.v1.mittwald.de/replicated-from-version": "51168",

Delete replicated resources when source is deleted

Hey there,

first of all: thank you for this awesome piece of software. :) I am planning on replacing kubed with kubernetes-replicator, as it seems more stable, better maintained and supports more resource types.

There is just one thing I am currently missing from kubernetes-replicator: when deleting the source resource (for example a configmap, in my case), the replicated/dependent ones should be deleted as well if they only contain replicated fields. From reading other issues, I know you're already checking the fields, so I hope it might not be a big change/implementation.

Or is there maybe another reason (I cannot think of yet) why you didn't include that feature yet?

Namespace regexes match substrings

Describe the bug
When a secret is configured with a regex for matching allowed replication targets (for example, using an annotation like replicator.v1.mittwald.de/replication-allowed-namespaces: "namespace-[0-9]*"), the regex will also match substrings of the namespace name. This means that foonamespace-123bar will also be matched by namespace-[0-9]*.

To Reproduce

  1. Create a secret with replicator.v1.mittwald.de/replication-allowed-namespaces: "bar"
  2. Create a namespace foobarbaz
  3. Create a secret within that namespace and replicate from the original secret
  4. Profit!
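
The steps above as a minimal manifest sketch (the secret name original-secret is a placeholder; the replication-allowed annotation on the source is assumed to be set as described in the usage docs):

apiVersion: v1
kind: Secret
metadata:
  name: original-secret
  namespace: default
  annotations:
    replicator.v1.mittwald.de/replication-allowed: "true"
    replicator.v1.mittwald.de/replication-allowed-namespaces: "bar"
data:
  key1: dmFsdWU=
---
apiVersion: v1
kind: Secret
metadata:
  name: original-secret
  namespace: foobarbaz
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/original-secret
data: {}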

Expected behavior
Secret should not be replicated. Or should it!? 🤔

Environment:

  • Kubernetes version: irrelevant
  • kubernetes-replicator version: v2.3.0

Additional context
Credits to @bokysan for discovering this issue in #47

Support removing labels upon replication

Is your feature request related to a problem? Please describe.
When installing applications with ArgoCD or similar GitOps systems, detection of association to a managed application is done by a label. In our case we use a Kafka operator to create secrets (based on our GitOps settings), but then replicate those secrets. As this newly replicated secret isn't part of the GitOps-based description of the desired state, those secrets are detected as to-be-deleted.

Describe the solution you'd like
A flag that allows removing labels by key during replication.

Describe alternatives you've considered
Ignoring resources marked as to-be-deleted, not really an option.

Additional context
n/a

Target tls secret not updated after a kubectl apply deleting the certificate

Hello

I'm not sure if this is to be counted as a bug or a feature. I've validated this with Kubectl v.1.18.1 and AKS cluster v. 1.15.10. I've also validated that a K8s v.1.18.1 cluster behaves the same way.

Create a target tls secret:

apiVersion: v1
kind: Secret
metadata:
  name: test-example-com-tls
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/test-example-com-tls
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""

Apply the target TLS secret.
Wait for the secret to be replicated.
Apply the target TLS secret again.

The second kubectl apply has emptied the values of tls.key and tls.crt, but without resetting the annotation replicator.v1.mittwald.de/replicated-from-version. As a result, the now-empty target secret is not updated again until the source secret is updated.

Our current workaround is to apply a yaml file which explicitly overwrites the annotation, that is:

apiVersion: v1
kind: Secret
metadata:
  name: test-example-com-tls
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/test-example-com-tls
    replicator.v1.mittwald.de/replicated-from-version: "0"
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""

With this change, the secret is replicated once again if we empty it by applying the yaml file one more time.

I assume the easiest way to solve this issue is to update the documentation (?).

Br

Hans K.

Replication of Roles and RoleBindings not working

Describe the bug
Configured a role and a role binding to replicate to a different namespace using the push method. The resources are not replicated to the destination namespace. Logs show that the replication of the role and role binding is supposed to happen, but the resources do not show up in the destination namespace. Replication of a secret works without issue.

To Reproduce

installation options used for kubernetes replicator:

args:
  - -log-level=trace
  - -resync-period=1m

apply the following role and rolebinding to a namespace e.g. default

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  annotations:
    replicator.v1.mittwald.de/replicate-to: test1
  name: developers-role
rules:
  - apiGroups:
      - ""
      - "apps"
      - "batch"
      - "extensions"
    resources:
      - "configmaps"
      - "cronjobs"
      - "deployments"
      - "events"
      - "ingresses"
      - "jobs"
      - "pods"
      - "pods/attach"
      - "pods/exec"
      - "pods/log"
      - "pods/portforward"
      - "services"
    verbs:
      - "create"
      - "delete"
      - "describe"
      - "get"
      - "list"
      - "patch"
      - "update"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    replicator.v1.mittwald.de/replicate-to: test1
  name: developer-RoleBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developers-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: developers

Log output:

time="2020-10-23T09:20:01Z" level=info msg="Role default/developers-role to be replicated to: [test1]" kind=Role source=default/developers-role
time="2020-10-23T09:20:01Z" level=info msg="Checking if test1/developers-role exists? false" kind=Role source=default/developers-role target=test1/developers-role
time="2020-10-23T09:20:01Z" level=debug msg="Creating a new role test1/developers-role" kind=Role source=default/developers-role target=test1/developers-role
time="2020-10-23T09:20:01Z" level=info msg="RoleBinding default/developer-RoleBinding to be replicated to: [test1]" kind=RoleBinding source=default/developer-RoleBinding
time="2020-10-23T09:20:01Z" level=info msg="Checking if test1/developer-RoleBinding exists? false" kind=RoleBinding source=default/developer-RoleBinding target=test1/developer-RoleBinding
time="2020-10-23T09:20:01Z" level=debug msg="Creating a new roleBinding test1/developer-RoleBinding" kind=RoleBinding source=default/developer-RoleBinding target=test1/developer-RoleBinding
time="2020-10-23T09:20:01Z" level=info msg="Secret default/roock.test to be replicated to: [roock,r2-mono-230-review.*]" kind=Secret source=default/roock.test
time="2020-10-23T09:20:01Z" level=info msg="Checking if roock/roock.test exists? true" kind=Secret source=default/roock.test target=roock/roock.test
time="2020-10-23T09:20:01Z" level=debug msg="Secret roock/roock.test is already up-to-date" kind=Secret source=default/roock.test target=roock/roock.test
time="2020-10-23T09:21:01Z" level=info msg="Role default/developers-role to be replicated to: [test1]" kind=Role source=default/developers-role
time="2020-10-23T09:21:01Z" level=info msg="Checking if test1/developers-role exists? false" kind=Role source=default/developers-role target=test1/developers-role
time="2020-10-23T09:21:01Z" level=debug msg="Creating a new role test1/developers-role" kind=Role source=default/developers-role target=test1/developers-role
time="2020-10-23T09:21:01Z" level=info msg="RoleBinding default/developer-RoleBinding to be replicated to: [test1]" kind=RoleBinding source=default/developer-RoleBinding
time="2020-10-23T09:21:01Z" level=info msg="Checking if test1/developer-RoleBinding exists? false" kind=RoleBinding source=default/developer-RoleBinding target=test1/developer-RoleBinding
time="2020-10-23T09:21:01Z" level=debug msg="Creating a new roleBinding test1/developer-RoleBinding" kind=RoleBinding source=default/developer-RoleBinding target=test1/developer-RoleBinding
time="2020-10-23T09:21:01Z" level=info msg="Secret default/roock.test to be replicated to: [roock,r2-mono-230-review.*]" kind=Secret source=default/roock.test
time="2020-10-23T09:21:01Z" level=info msg="Checking if roock/roock.test exists? true" kind=Secret source=default/roock.test target=roock/roock.test
time="2020-10-23T09:21:01Z" level=debug msg="Secret roock/roock.test is already up-to-date" kind=Secret source=default/roock.test target=roock/roock.test
time="2020-10-23T09:22:01Z" level=info msg="Role default/developers-role to be replicated to: [test1]" kind=Role source=default/developers-role
time="2020-10-23T09:22:01Z" level=info msg="Checking if test1/developers-role exists? false" kind=Role source=default/developers-role target=test1/developers-role
time="2020-10-23T09:22:01Z" level=debug msg="Creating a new role test1/developers-role" kind=Role source=default/developers-role target=test1/developers-role
time="2020-10-23T09:22:01Z" level=info msg="RoleBinding default/developer-RoleBinding to be replicated to: [test1]" kind=RoleBinding source=default/developer-RoleBinding
time="2020-10-23T09:22:01Z" level=info msg="Checking if test1/developer-RoleBinding exists? false" kind=RoleBinding source=default/developer-RoleBinding target=test1/developer-RoleBinding
time="2020-10-23T09:22:01Z" level=debug msg="Creating a new roleBinding test1/developer-RoleBinding" kind=RoleBinding source=default/developer-RoleBinding target=test1/developer-RoleBinding
time="2020-10-23T09:22:01Z" level=info msg="Secret default/roock.test to be replicated to: [roock,r2-mono-230-review.*]" kind=Secret source=default/roock.test
time="2020-10-23T09:22:01Z" level=info msg="Checking if roock/roock.test exists? true" kind=Secret source=default/roock.test target=roock/roock.test
time="2020-10-23T09:22:01Z" level=debug msg="Secret roock/roock.test is already up-to-date" kind=Secret source=default/roock.test target=roock/roock.test
$ k create namespace test1
$ k -n test1 get role
No resources found.
$ k -n test1 get rolebinding
No resources found.

Expected behavior

developers-role and developer-RoleBinding should show up in the test1 namespace.

Environment:

  • Kubernetes version: 1.15.10 and 1.18.9 (both minikube 1.12.3)
  • kubernetes-replicator version: 2.4.0

Replicator crashes when watches expire

When watches expire (and possibly at other times), the pod crashes because it cannot write to a log file. I believe this is related to kubernetes/kubernetes#61006. The fix for this appears to have gone into 1.13, and this repository appears to depend on the 1.14 client, so I'm not sure what's going on there.

Name:               replicator-deployment-766c46874f-54bs2
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               ip-172-31-15-37.us-west-2.compute.internal/172.31.15.37
Start Time:         Fri, 14 Feb 2020 13:37:22 -0800
Labels:             app=replicator
                    pod-template-hash=766c46874f
Annotations:        <none>
Status:             Running
IP:                 100.96.6.2
Controlled By:      ReplicaSet/replicator-deployment-766c46874f
Containers:
  replicator:
    Container ID:   docker://54fa9bce17bea730e50713ab35a0ad9de93b4321ac17f4d598870fa5c4995ec8
    Image:          quay.io/mittwald/kubernetes-replicator:latest
    Image ID:       docker-pullable://quay.io/mittwald/kubernetes-replicator@sha256:9cd515802fee4859d1978a39e2a2e4278b691c54a4236df1f5e663d9c9b37c2a
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 14 Feb 2020 14:35:05 -0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Fri, 14 Feb 2020 14:22:34 -0800
      Finished:     Fri, 14 Feb 2020 14:35:03 -0800
    Ready:          True
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from replicator-token-xz6mg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  replicator-token-xz6mg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  replicator-token-xz6mg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age                From                                                 Message
  ----    ------   ----               ----                                                 -------
  Normal  Pulling  28m (x4 over 86m)  kubelet, ip-172-31-15-37.us-west-2.compute.internal  Pulling image "quay.io/mittwald/kubernetes-replicator:latest"
  Normal  Pulled   28m (x4 over 86m)  kubelet, ip-172-31-15-37.us-west-2.compute.internal  Successfully pulled image "quay.io/mittwald/kubernetes-replicator:latest"
  Normal  Created  28m (x4 over 86m)  kubelet, ip-172-31-15-37.us-west-2.compute.internal  Created container replicator
  Normal  Started  28m (x4 over 86m)  kubelet, ip-172-31-15-37.us-west-2.compute.internal  Started container replicator

^ Note how the pod has already restarted multiple times (Restart Count: 3; the image was pulled 4 times)

-> % kubectl -n kube-system logs replicator-deployment-766c46874f-54bs2 --previous --tail=10
2020/02/14 22:35:03 updating secret default/better-tls-3.dev.source.ai-tls
2020/02/14 22:35:03 updating dependent secret kube-system/better-tls-3.dev.source.ai-tls -> jenkins/better-tls-3.dev.source.ai-tls
2020/02/14 22:35:03 updating secret jenkins/better-tls-3.dev.source.ai-tls
2020/02/14 22:35:03 secret jenkins/better-tls-3.dev.source.ai-tls is replicated from kube-system/better-tls-3.dev.source.ai-tls
2020/02/14 22:35:03 updating secret jenkins/better-tls-3.dev.source.ai-tls
2020/02/14 22:35:03 secret kube-system/better-tls-3.dev.source.ai-tls has 2 dependents
2020/02/14 22:35:03 updating dependent secret kube-system/better-tls-3.dev.source.ai-tls -> default/better-tls-3.dev.source.ai-tls
2020/02/14 22:35:03 updating secret default/better-tls-3.dev.source.ai-tls
W0214 22:35:03.908603       1 reflector.go:289] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: watch of *v1.Secret ended with: too old resource version: 36417 (40656)
log: exiting because of error: log: cannot create log: open /tmp/replicator.replicator-deployment-766c46874f-54bs2.unknownuser.log.WARNING.20200214-223503.1: no such file or directory
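
Assuming the crash comes from the glog fallback trying to create its log file under /tmp (as the last log line above suggests), one possible workaround, not part of the original report, is to mount a writable emptyDir at /tmp in the replicator deployment:

---
# Hypothetical workaround sketch: give the container a writable /tmp so the
# fallback log file can be created. Names mirror the pod description above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: replicator-deployment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: replicator
  template:
    metadata:
      labels:
        app: replicator
    spec:
      serviceAccountName: replicator   # assumed from the mounted token name
      containers:
        - name: replicator
          image: quay.io/mittwald/kubernetes-replicator:latest
          volumeMounts:
            - name: tmp
              mountPath: /tmp          # writable scratch directory
      volumes:
        - name: tmp
          emptyDir: {}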

Allow filtering data to sync

Describe the solution you'd like
I'm using cert-manager to create CAs and certificates for some services to use on their listeners. The data of the kubernetes.io/tls secrets generated by cert-manager includes three keys: ca.crt, tls.crt and tls.key. To ensure an application is connecting to the right service, we need to reference ca.crt and sometimes also tls.key; for this, we copy the secret to the corresponding namespace and make it available to the application via a mount. Currently we can only copy the entire secret, making tls.key available in other namespaces as well, which is not desirable.

I would like to be able to use something like replicator.v1.mittwald.de/replicate-filter: "ca.crt,tls.key" to filter the keys that are synchronized across namespaces.
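
A sketch of how the proposed annotation could be used on a cert-manager-generated TLS secret; the replicate-filter annotation is the subject of this feature request and does not exist in the current replicator, and the secret name and data are placeholders:

---
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: my-service-tls              # hypothetical secret name
  namespace: default
  annotations:
    replicator.v1.mittwald.de/replicate-to: "app-.*"
    # Proposed annotation; only the listed keys would be replicated.
    replicator.v1.mittwald.de/replicate-filter: "ca.crt,tls.key"
data:
  ca.crt: cGxhY2Vob2xkZXI=          # base64 "placeholder"
  tls.crt: cGxhY2Vob2xkZXI=
  tls.key: cGxhY2Vob2xkZXI=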

Describe alternatives you've considered
A more complicated solution would be to implement some type of mesh using either Istio or Linkerd, but I would like to avoid such a big dependency for this use case alone.

Add a flag to disable push-based replication

Is your feature request related to a problem? Please describe.

Our team runs an in-house Kubernetes cluster with a Namespace-as-a-Service model and provides kubernetes-replicator as an add-on to let users replicate their resources across namespaces.

In this setup, users can push resources into namespaces they are not authorized to access. This could be prevented by allowing pull-based replication only, but currently there is no way to disable push-based replication.

Describe the solution you'd like

Adding a flag to disable push-based replication, so that operators can decide whether to allow it, could be a solution.
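
A rough sketch of how such an option might be enabled once it exists; the --disable-push-replication flag is only the proposal from this issue and is not available in current releases, and the deployment name and namespace depend on how the replicator was installed:

$ # Hypothetical: set the replicator container's args to the proposed flag
$ # (this replaces the args list; merge with existing args if there are any).
$ kubectl -n kube-system patch deployment replicator-deployment --type=json \
    -p='[{"op":"add","path":"/spec/template/spec/containers/0/args","value":["--disable-push-replication"]}]'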

Describe alternatives you've considered

  • #41 might resolve this issue

Adding allowedNamespaces (suggested in #41 (comment)) could be a solution, but the default replication policy should then deny all namespaces.

  • Adding a replication-allowed-from annotation to namespaces

But this will only work when the default replication policy denies all namespaces.

Additional context

Allowing all namespaces.

Hi,

What do I set in the annotation of the source secret to allow it to be replicated to all namespaces?

Regards,
Kevin
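
Based on the regular-expression handling of the replicate-to annotation shown in the issues above, a pattern that matches every namespace name should work; a minimal sketch (secret name and data are placeholders):

---
apiVersion: v1
kind: Secret
metadata:
  name: my-shared-secret
  annotations:
    # ".*" is a regular expression matching any namespace name.
    replicator.v1.mittwald.de/replicate-to: ".*"
data:
  key1: cGxhY2Vob2xkZXI=            # base64 "placeholder"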

Handle metadata.ownerReferences

Describe the bug
I have secrets decrypted by the sops operator. The secrets are created with metadata.ownerReferences set to the Custom Resource that manages the secret.

When a secret is replicated, metadata.ownerReferences is copied as well, which causes issues.

To Reproduce
Create a secret with metadata.ownerReferences set, as in the sketch below.
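
A sketch of such a source secret; the owner's group/version, names and UID are placeholders for illustration:

---
apiVersion: v1
kind: Secret
metadata:
  name: decrypted-secret
  namespace: default
  annotations:
    replicator.v1.mittwald.de/replicate-to: "app-.*"
  ownerReferences:
    # Set by the sops operator; the group/version and UID below are made up.
    - apiVersion: isindir.github.com/v1alpha3
      kind: SopsSecret
      name: my-sops-secret
      uid: 00000000-0000-0000-0000-000000000000
      controller: true
data:
  key1: cGxhY2Vob2xkZXI=            # base64 "placeholder"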

Expected behavior
The ownerReferences should be removed, as the replicator handles deletions internally. Alternatively, ownerReferences could be used instead of the built-in deletion handling.

Environment:

  • Kubernetes version: 1.19
  • kubernetes-replicator version: 2.3.0

Additional context
From reviewing the replicator code, it neither removes nor sets ownerReferences.

https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/

Replication inconsistent if both replicate-to and replicate-to-matching are used

Describe the bug
If I use both annotations, replicator.v1.mittwald.de/replicate-to and replicator.v1.mittwald.de/replicate-to-matching, the secret is sometimes replicated and sometimes not.

The behavior is that the secret gets replicated when a namespace is created with the label, but updates are not propagated (neither if I update the namespace nor if I update the secret).

To Reproduce

First I create a new secret named my-test-secret, which should be replicated into the namespace foobar and also into all namespaces with the foo: bar label.

---
apiVersion: v1
kind: Secret
metadata:
  name: my-test-secret
  annotations:
    replicator.v1.mittwald.de/replicate-to: "^foobar$"
    replicator.v1.mittwald.de/replicate-to-matching: foo=bar
data:
  key1: dmFsdWUx

Then I create a namespace with the foo: bar label:

---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    foo: bar
  name: test-namespace-1

As expected, the secret appears:

$ kubectl get -n test-namespace-1 secrets my-test-secret -oyaml
apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
# ...

Bug 1
Now I want to change a value in the secret:

---
apiVersion: v1
kind: Secret
metadata:
  name: my-test-secret
  annotations:
    replicator.v1.mittwald.de/replicate-to: "^foobar$"
    replicator.v1.mittwald.de/replicate-to-matching: foo=bar
data:
  key1: dmFsdWUy

However, the replicated secret does not change:

$ kubectl get -n test-namespace-1 secrets my-test-secret -oyaml
apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
# ...

Bug 2

Now I first create the namespace

---
apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace-2

and then add the label

---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    foo: bar
  name: test-namespace-2

The secret does not appear in that namespace:

$ kubectl get -n test-namespace-2 secrets my-test-secret -oyaml
Error from server (NotFound): secrets "my-test-secret" not found

Expected behavior

A. I would expect the secret to be replicated into the matching namespaces, just as if replicator.v1.mittwald.de/replicate-to were not set in addition.
B. If the two annotations are mutually exclusive, this should be made clear in the documentation, and replication should then not work in the namespace-creation case either.

Environment:

  • Kubernetes version: 1.19
  • kubernetes-replicator version: v2.5.1

