
percona-postgresql-operator's Introduction

Percona Operator for PostgreSQL

Percona Kubernetes Operators


Introduction

Percona Operator for PostgreSQL automates and simplifies deploying and managing open source PostgreSQL clusters on Kubernetes. Percona Operator for PostgreSQL is based on Postgres Operator developed by Crunchy Data.

Whether you need to get a simple PostgreSQL cluster up and running, need to deploy a high-availability, fault-tolerant cluster in production, or are running your own database-as-a-service, the Operator provides the essential features you need to keep your clusters healthy:

  • PostgreSQL cluster provisioning
  • High availability and disaster recovery
  • Automated user management with password rotation
  • Automated updates
  • Support for both asynchronous and synchronous replication
  • Scheduled and manual backups
  • Integrated monitoring with Percona Monitoring and Management

You interact with Percona Operator mostly via the command-line tool. If you prefer to manage the Operator and database clusters through a web interface, there is Percona Everest, an open-source web-based database provisioning tool. It automates day-to-day database management operations, reducing overall administrative overhead. Get started with Percona Everest.

Architecture

Percona Operators are based on the Operator SDK and leverage Kubernetes primitives to follow best CNCF practices.

Learn more about architecture and design decisions.

Documentation

To learn more about the Operator, check the Percona Operator for PostgreSQL documentation.

Quickstart installation

Ready to try out the Operator? Check the Quickstart tutorial for easy-to-follow steps.

Below is one of the ways to deploy the Operator using kubectl.

kubectl

  1. Deploy the operator from deploy/bundle.yaml:

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/main/deploy/bundle.yaml

  2. Deploy the database cluster itself from deploy/cr.yaml:

kubectl apply -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/main/deploy/cr.yaml

Contributing

Percona welcomes and encourages community contributions to help improve Percona Operator for PostgreSQL.

See the Contribution Guide on how you can contribute.

Communication

We would love to hear from you! Reach out to us on the Forum with your questions, feedback, and ideas.

Join Percona Kubernetes Squad!

                    %                        _____                
                   %%%                      |  __ \                                          
                 ###%%%%%%%%%%%%*           | |__) |__ _ __ ___ ___  _ __   __ _             
                ###  ##%%      %%%%         |  ___/ _ \ '__/ __/ _ \| '_ \ / _` |            
              ####     ##%       %%%%       | |  |  __/ | | (_| (_) | | | | (_| |            
             ###        ####      %%%       |_|   \___|_|  \___\___/|_| |_|\__,_|           
           ,((###         ###     %%%        _      _          _____                       _
          (((( (###        ####  %%%%       | |   / _ \       / ____|                     | | 
         (((     ((#         ######         | | _| (_) |___  | (___   __ _ _   _  __ _  __| | 
       ((((       (((#        ####          | |/ /> _ </ __|  \___ \ / _` | | | |/ _` |/ _` |
      /((          ,(((        *###         |   <| (_) \__ \  ____) | (_| | |_| | (_| | (_| |
    ////             (((         ####       |_|\_\\___/|___/ |_____/ \__, |\__,_|\__,_|\__,_|
   ///                ((((        ####                                  | |                  
 /////////////(((((((((((((((((########                                 |_|   Join @ percona.com/k8s   

You can get early access to new product features, invite-only "ask me anything" sessions with Percona Kubernetes experts, and monthly swag raffles. Interested? Fill in the form at percona.com/k8s.

Roadmap

We have an experimental public roadmap which can be found here. Please feel free to contribute and propose new features by following the roadmap guidelines.

Submitting Bug Reports

If you find a bug in Percona Docker Images or in one of the related projects, please submit a report to that project's JIRA issue tracker or create a GitHub issue in this repository.

Learn more about submitting bugs, new features ideas and improvements in the Contribution Guide.


percona-postgresql-operator's Issues

Scheduled Kubernetes pg-backup objects are not automatically cleaned up

Report

Scheduled Kubernetes pg-backup objects are not automatically cleaned up

More about the problem

Since the 2.3.0 upgrade of the Percona Postgres operator on Kubernetes, pg-backup resources are created for each scheduled backup (K8SPG-410).
The pgBackRest backups and related items saved in the storage are still cleaned up properly, following the retention rules defined in the spec.backups.pgbackrest.global.repoN-retention-full* attributes of the pg cluster definition.
BUT the related pg-backup Kubernetes resources, while created correctly, are never cleaned up, even with the retention attributes properly defined.
This problem leads to a quick accumulation of pg-backup and Job Kubernetes resources in every namespace running a pg cluster with scheduled backups.
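
For reference, a minimal sketch of the retention settings in question (the option names follow the pgBackRest repo options; the repo number and values here are illustrative):

spec:
  backups:
    pgbackrest:
      global:
        # pgBackRest retention: cleans up backups in the repository storage,
        # but not the corresponding Kubernetes pg-backup objects
        repo1-retention-full: "3"
        repo1-retention-full-type: count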

Steps to reproduce

  1. Create a pg cluster with a backup storage and schedule section defined, and a retention period
  2. At the scheduled time, a pg-backup resource is created along with the related Job, Pod, pgBackRest backup, and items in the storage
  3. At the end of the retention period, the pgBackRest backup and items are removed from the storage, but the Kubernetes resources (pg-backup and Job) are not deleted

Versions

  1. Kubernetes - v1.27.6
  2. Operator - Percona Operator for PostgreSQL 2.3.1
  3. Database - PostgreSQL 15.5

Anything else?

No response

Cannot add custom labels for PostgreSQL metrics sent to PMM

Proposal

As far as I know, the CR definition of a Percona PostgreSQL cluster does not provide any configuration attribute to add custom labels to the metrics sent to the PMM server from the pmm-client containers.
This is possible with Percona MongoDB clusters using spec.pmm.mongodParams and spec.pmm.mongosParams, and with Percona XtraDB clusters using spec.pmm.pxcParams and spec.pmm.proxysqlParams.

Use-Case

The additional flags, in my case custom labels, are added to the "pmm-admin add" options through the PMM_ADMIN_CUSTOM_PARAMS environment variable in the prerun script:

PSMDB definition:

kind: PerconaServerMongoDB
spec:
  pmm:
    mongodParams: --custom-labels=namespace=my-namespace

MongoDB pod definition:

- name: PMM_ADMIN_CUSTOM_PARAMS
  value: --custom-labels=namespace=my-namespace
- name: PMM_AGENT_PRERUN_SCRIPT
  value: |-
    cat /etc/mongodb-ssl/tls.key /etc/mongodb-ssl/tls.crt > /tmp/tls.pem;
    pmm-admin status --wait=10s;
    pmm-admin add $(DB_TYPE) $(PMM_ADMIN_CUSTOM_PARAMS) …

It would be a great feature to implement this for the Percona PostgreSQL operator as well!
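
A purely hypothetical sketch of what this could look like in the PerconaPGCluster CR, mirroring the PSMDB option above (the postgresParams field does not exist today; the name is illustrative only):

apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
spec:
  pmm:
    enabled: true
    # hypothetical field, analogous to spec.pmm.mongodParams in PSMDB
    postgresParams: --custom-labels=namespace=my-namespace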

Is this a feature you are interested in implementing yourself?

No

Anything else?

No response

Full backup by schedule does not work for a second database with the same name in a different namespace

Report

.

More about the problem

.

Steps to reproduce

helm --namespace=test-operator upgrade --install --create-namespace --set=watchAllNamespaces=true --repo=https://percona.github.io/percona-helm-charts pg-operator pg-operator
helm --namespace=test upgrade --install --create-namespace --repo=https://percona.github.io/percona-helm-charts test pg-db
helm --namespace=test2 upgrade --install --create-namespace --repo=https://percona.github.io/percona-helm-charts test pg-db

Versions

  1. Kubernetes 1.27
  2. Operator 2.3.4
  3. Database pg-db 2.3.5

Anything else?

No response

Failed to create PostgreSQL deployment because resource requests and limits were missing in the StatefulSet

Report

Hi Percona team,

I wanted to deploy a PostgreSQL service instance via your operator in my Rancher cluster. I used the example yaml config (https://github.com/percona/percona-postgresql-operator/blob/main/deploy/cr.yaml), added the resource requests and limits parameters as described in the custom resource options (https://docs.percona.com/percona-operator-for-postgresql/2.0/operator.html), and deployed it via kubectl, but the StatefulSets didn't use the specified resources. The resources of the StatefulSets were empty.

Thanks and regards,
Christian

More about the problem

Here is the deployment yaml I used:
pg_deployment.txt
and here you'll find the errors:
rancher_error.txt
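
Since the attachments are not inlined here, a simplified sketch of how the resources were specified under spec.instances, per the documented custom resource options (values illustrative, not the actual attached yaml):

spec:
  instances:
    - name: instance1
      replicas: 3
      resources:
        # per the docs, these should propagate to the instance StatefulSet containers
        requests:
          cpu: "1"
          memory: 2Gi
        limits:
          cpu: "1"
          memory: 2Gi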

Steps to reproduce

  1. Install the latest Percona PostgreSQL operator (version 2.3.1) in a Rancher cluster
  2. Start the deployment with the example config and resources specified
  3. Check the created StatefulSets and notice that the specified resources were not applied

Versions

  1. Kubernetes RKE1
  2. Operator 2.3.1
  3. Database PostgreSQL 16
  4. Rancher 2.7.6

Anything else?

No response

pgv2.percona.com/v2 PerconaPGCluster doesn't handle the replicaCertCopy sidecar resources

Report

When attempting to create a PerconaPGCluster we need to set resource requests and limits on all pods/containers. I can see that postgresclusters.postgres-operator.crunchydata.com has the ability to set the replicaCertCopy resources, and it seems to be documented: https://github.com/percona/percona-postgresql-operator/blob/main/docs/content/tutorial/resize-cluster.md

Though when attempting to add this to the PerconaPGCluster resource, I get errors of the two variants below, depending on how I try to add it.

W0409 16:26:37.388986   40652 warnings.go:70] unknown field "spec.instances[0].sidecars.replicaCertCopy"
Error: UPGRADE FAILED: failed to replace object: PerconaPGCluster.pgv2.percona.com "svc-db01" is invalid: spec.instances[0].sidecars: Invalid value: "object": spec.instances[0].sidecars in body must be of type array: "object"

Or

"error": "failed to create typed patch object (dev-01/svc-db01-cluster-7429; apps/v1, Kind=StatefulSet): .spec.template.spec.containers: duplicate entries for key [name=\"replication-cert-copy\"]",

I've checked the CRD for the PerconaPGCluster, and at the moment there isn't a way this can be set at any level in the yaml files.

I can see that the Crunchy Data operator has the
https://github.com/percona/percona-postgresql-operator/blob/main/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go#L534

While the PerconaPGCluster only implements the core.Container type from the k8s client library:
https://github.com/percona/percona-postgresql-operator/blob/main/pkg/apis/pgv2.percona.com/v2/perconapgcluster_types.go#L524

There also don't appear to be any methods on
https://github.com/percona/percona-postgresql-operator/blob/main/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go#L534
that expose similar functionality.

More about the problem

See above

Steps to reproduce


# Source: vortex-postgres-cluster/templates/cluster.yaml
apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  namespace: dev-01
  annotations:
    current-primary: scheduler-svc-db01
    rollme: "vslEs"
  labels:
    crunchy-pgha-scope: scheduler-svc-db01
    deployment-name: scheduler-svc-db01
    name: scheduler-svc-db01
    pg-cluster: scheduler-svc-db01
    pgo-version: 2.3.1
    pgouser: admin
    helm.sh/chart: vortex-postgres-cluster-0.0.10
    app.kubernetes.io/name: vortex-postgres-cluster
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "2.3.1"
    app.kubernetes.io/managed-by: Helm
  finalizers:
    null
  name: scheduler-svc-db01
spec:
  crVersion: 2.3.1
  image: percona/percona-postgresql-operator:2.3.1-ppg16-postgres
  imagePullPolicy: Always
  port: 5432
  postgresVersion: 16
  standby:
    enabled: false

  openshift: false
  users:
    - name: test
      databases:
        - testdb
      options: SUPERUSER
      password:
        type: ASCII
      secretName: test-credentials

  pause: false
  unmanaged: false

  instances:
    - name: cluster
      sidecars:
        replicaCertCopy:
          resources:
            limits:
              cpu: 200m
              memory: 128Mi
            requests:
              cpu: 200m
              memory: 128Mi
      replicas: 3
      resources:
        limits:
          cpu: 2
          memory: 4Gi
        requests:
          cpu: 2
          memory: 4Gi
      dataVolumeClaimSpec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi

  proxy:
    pgBouncer:
      image: percona/percona-postgresql-operator:2.3.1-ppg16-pgbouncer
      replicas: 3
      exposeSuperusers: true
      resources:
        requests:
          cpu: 200m
          memory: 128Mi
        limits:
          cpu: 200m
          memory: 128Mi

  pmm:
    enabled: false
    image: percona/pmm-client:2.41.0
    serverHost: monitoring-service
    secret: scheduler-svc-db01-pmm-secret

  backups:
    pgbackrest:
      image: percona/percona-postgresql-operator:2.3.1-ppg16-pgbackrest
      configuration:
        - secret:
            name: scheduler-service-db-backup
      sidecars:
        pgbackrest:
          resources:
            limits:
              cpu: 200m
              memory: 128Mi
            requests:
              cpu: 200m
              memory: 128Mi
        pgbackrestConfig:
          resources:
            limits:
              cpu: 200m
              memory: 128Mi
            requests:
              cpu: 200m
              memory: 128Mi
      jobs:
        priorityClassName: high-priority
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 200m
            memory: 128Mi
      manual:
        repoName: repo1
        options:
         - --type=full
      repos:
      - name: repo1
        schedules:
          full: 0 0 * * 6
        volume:
          volumeClaimSpec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi

Versions

  1. Kubernetes: 1.28.0
  2. Operator: Percona Pgsql 2.3.4
  3. Database: PerconaPGCluster

Anything else?

No response

version 2.3.1 doesn't support postgresql 16?

Report

pg-operator seems not to be compatible with PostgreSQL 16, which pg-db uses by default?

More about the problem

2024-01-26T20:43:43.620Z	ERROR	Reconciler error	{"controller": "perconapgcluster", "controllerGroup": "pgv2.percona.com", "controllerKind": "PerconaPGCluster", "PerconaPGCluster": {"name":"name-master","namespace":"name"}, "namespace": "name", "name": "name-master", "reconcileID": "b54a138a-b749-43a3-8550-f3f86bbe0cb6", "error": "update/create PostgresCluster: PostgresCluster.postgres-operator.crunchydata.com \"name-master\" is invalid: spec.postgresVersion: Invalid value: 16: spec.postgresVersion in body should be less than or equal to 15", "errorVerbose": "PostgresCluster.postgres-operator.crunchydata.com \"name-master\" is invalid: spec.postgresVersion: Invalid value: 16: spec.postgresVersion in body should be less than or equal to 15\nupdate/create PostgresCluster\ngithub.com/percona/percona-postgresql-operator/percona/controller/pgcluster.(*PGClusterReconciler).Reconcile\n\t/go/src/github.com/percona/percona-postgresql-operator/percona/controller/pgcluster/controller.go:241\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227

Steps to reproduce

  1. Install the pg-operator helm chart (the only configuration is enabling watchAllNamespaces)
  2. Install the pg-db helm chart (no configuration)
  3. The cluster is not created; observe the errors in the logs

Versions

  1. Kubernetes v1.28.5+k3s1
  2. Operator 1.3.1
  3. Database 16
  4. pg-operator chart 2.3.3
  5. pg-db chart 2.3.2

Anything else?

running:

    Image:          registry-1.percona.com/percona/percona-postgresql-operator:2.3.1
    Image ID:       registry-1.percona.com/percona/percona-postgresql-operator@sha256:a6495c8e13d9fe3f50df12219e9d9cf64fa610fe5680a0a78d0e5c4fb3be2456

PgBackrest: unable to create stanza

Report

pgBackRest cannot perform a backup to S3 because the endpoint address is generated incorrectly

More about the problem

The reported logs are here:

2024-05-07T19:04:54.455Z	ERROR	unable to create stanza	{"controller": "postgrescluster", "controllerGroup": "postgres-operator.crunchydata.com", "controllerKind": "PostgresCluster", "PostgresCluster": {"name":"fap-cluster-pg-db","namespace":"postgres"}, "namespace": "postgres", "name": "fap-cluster-pg-db", "reconcileID": "4b48a586-101e-492c-b832-12cd2eb6c17d", "reconciler": "pgBackRest", "error": "command terminated with exit code 49: ERROR: [049]: unable to get address for 'postgres.minio-api.domain.com': [-2] Name or service not known\n", "errorVerbose": "command terminated with exit code 49: ERROR: [049]: unable to get address for 'postgres.minio-api.domain.com': [-2] Name or service not known\n\ngithub.com/percona/percona-postgresql-operator/internal/pgbackrest.Executor.StanzaCreateOrUpgrade\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/pgbackrest/pgbackrest.go:96\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcileStanzaCreate\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:2650\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcilePGBackRest\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:1360\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).Reconcile\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/controller.go:356\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcileStanzaCreate\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:2657\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcilePGBackRest\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:1360\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).Reconcile\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/controller.go:356\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email 
protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650"}

Steps to reproduce

  1. Create a secret with S3 credentials:

apiVersion: v1
kind: Secret
metadata:
  name: a-cluster-pg-db-pgbackrest-secrets
type: Opaque
stringData:
  s3.conf: |
    [global]
    repo1-s3-key=a_key
    repo1-s3-key-secret=a_secret
    repo1-storage-verify-tls=n

  2. Create a cluster instance via Helm, specifying a custom S3 endpoint:
backups:
  pgbackrest:
    configuration:
    - secret:
        name: fap-cluster-pg-db-pgbackrest-secrets
    repos:
    - name: repo1
      schedules:
        full: "0 * * * *"
      s3:
        bucket: "postgres"
        endpoint: "https://minio-api.domain.com/"
        region: custom

Versions

  1. Kubernetes: 1.28.9+rke2r1
  2. Operator: 2.3.1
  3. Database: 16

Anything else?

According to the logs, it seems that the S3 endpoint is constructed as {bucket}.{endpoint} instead of {endpoint}/{bucket}.
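
If that is the case, forcing path-style addressing through the pgBackRest configuration secret might serve as a workaround; a sketch (untested here, using the standard pgBackRest repo1-s3-uri-style option):

stringData:
  s3.conf: |
    [global]
    repo1-s3-key=a_key
    repo1-s3-key-secret=a_secret
    repo1-storage-verify-tls=n
    # request {endpoint}/{bucket} (path style) instead of {bucket}.{endpoint}
    repo1-s3-uri-style=path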

operator crash loop due to nil pointer

Report

A user error, applying a cr.yaml that was missing the proxy section, caused the stack trace seen below. It appears there is no check for whether the proxy section is nil.

More about the problem

2024-03-21T19:32:01.194Z INFO Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference {"controller": "perconapgcluster", "controllerGroup": "pgv2.percona.com", "controllerKind": "PerconaPGCluster", "PerconaPGCluster": {"name":"rxtest","namespace":"postgres-operator"}, "namespace": "postgres-operator", "name": "rxtest", "reconcileID": "0ecffd68-d97a-4d13-af64-9eafd015dd10"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1678ace]
goroutine 459 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x1a233e0?, 0x2ddbe70?})
/usr/local/go/src/runtime/panic.go:914 +0x21f
github.com/percona/percona-postgresql-operator/pkg/apis/pgv2.percona.com/v2.(*PerconaPGCluster).Default(0xc000cdc380)
/go/src/github.com/percona/percona-postgresql-operator/pkg/apis/pgv2.percona.com/v2/perconapgcluster_types.go:179 +0x22e
github.com/percona/percona-postgresql-operator/percona/controller/pgcluster.(*PGClusterReconciler).Reconcile(0xc00045ef30, {0x1fcc410?, 0xc000d2b530}, {{{0xc00005ddb8?, 0x5?}, {0xc00083f6f6?, 0xc00044cd48?}}})
/go/src/github.com/percona/percona-postgresql-operator/percona/controller/pgcluster/controller.go:170 +0x1c5
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1fcf718?, {0x1fcc410?, 0xc000d2b530?}, {{{0xc00005ddb8?, 0xb?}, {0xc00083f6f6?, 0x0?}}})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0004e4aa0, {0x1fcc448, 0xc0003a99a0}, {0x1abf5c0?, 0xc000971140?})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316 +0x3cc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0004e4aa0, {0x1fcc448, 0xc0003a99a0})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x1c9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 89
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x565

Steps to reproduce

Apply a cr.yaml missing the proxy section. Here is a simple test case to verify that the problem was an incorrect yaml.

package v2_test

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"gopkg.in/yaml.v2"

	v2 "github.com/percona/percona-postgresql-operator/pkg/apis/pgv2.percona.com/v2"
)

func TestPerconaPGCluster_Default(t *testing.T) {
	a := assert.New(t)

	cluster := new(v2.PerconaPGCluster)

	// Decode a CR whose proxy section is present but empty (nil after decoding).
	err := yaml.Unmarshal(postgrescluster_empty_proxy, cluster)
	a.NoError(err)

	// Panics with a nil pointer dereference when spec.proxy is nil.
	cluster.Default()
}

var postgrescluster_empty_proxy []byte = []byte(`
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-15.3-2
  postgresVersion: 15
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - "ReadWriteMany"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.45-2
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - "ReadWriteMany"
              resources:
                requests:
                  storage: 1Gi
        - name: repo2
          volume:
            volumeClaimSpec:
              accessModes:
                - "ReadWriteMany"
              resources:
                requests:
                  storage: 1Gi
  proxy:
`)

Versions

  1. Kubernetes 1.2.7
  2. Operator 2.3.1 (I suspect 2.3.0 has the same issue)

Anything else?

Even though this was pure user error, it caused a serious situation: the operator went into a hard crash loop with no way I could find to break it out. The operator would not run long enough to even try to reapply the corrected yaml; a delete and restart, and even uninstalling the operator (everything except the CRDs), did not help the situation.

Thank you.

Label selector for watched namespaces

Proposal

Enable or disable reconciliation in namespace based on labels on the namespace.

Use-Case

We have a procedure where we want to scale all pods in a namespace to 0. This is hard while using the operator, because the operator will set the replicas back to what is defined in the cluster spec. We would like a way to disable reconciliation for a namespace, as sketched below. This might be possible by setting a label on the namespace and making the operator aware of which labels it requires before reconciling a namespace.
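
A sketch of what this could look like from the user side, assuming the operator gained such a check (the label key is purely illustrative; no such label exists today):

apiVersion: v1
kind: Namespace
metadata:
  name: my-databases
  labels:
    # hypothetical opt-out label the operator would evaluate before reconciling
    pgv2.percona.com/reconcile: "false"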

Is this a feature you are interested in implementing yourself?

Maybe

Anything else?

No response

SIGSEGV when missing `pmm:` block in PerconaPGCluster CR

If the pmm: block is missing from the PerconaPGCluster CR, then the Postgres Operator crashes with the following error.

If the pmm: block is present, whether enabled or disabled, there is no problem.

time="2023-11-28T09:11:23Z" level=info msg="Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference" PerconaPGCluster=postgres/analytics controller=perconapgcluster controllerGroup=pgv2.percona.com controllerKind=PerconaPGCluster name=analytics namespace=postgres reconcileID=e73c9baa-e2c2-4842-bd42-f24fbf9c7a45 version=
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x17a488c]

goroutine 451 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119 +0x1fa
panic({0x1a313a0, 0x2d59990})
	/usr/local/go/src/runtime/panic.go:884 +0x213
github.com/percona/percona-postgresql-operator/percona/controller/pgcluster.(*PGClusterReconciler).Reconcile.func1()
	/go/src/github.com/percona/percona-postgresql-operator/percona/controller/pgcluster/controller.go:204 +0x9ac
sigs.k8s.io/controller-runtime/pkg/controller/controllerutil.mutate(0xc0018b4400?, {{0xc001a012b0?, 0x0?}, {0xc001a012a0?, 0x1fdff58?}}, {0x1ff4d10, 0xc0018b4400})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/controller/controllerutil/controllerutil.go:339 +0x4f
sigs.k8s.io/controller-runtime/pkg/controller/controllerutil.CreateOrUpdate({0x1fdff58, 0xc0011b5f20}, {0x1fe8a18, 0xc00069eae0}, {0x1ff4d10?, 0xc0018b4400}, 0xc001a012b0?)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/controller/controllerutil/controllerutil.go:211 +0x274
github.com/percona/percona-postgresql-operator/percona/controller/pgcluster.(*PGClusterReconciler).Reconcile(0xc0000c84d0, {0x1fdff58, 0xc0011b5f20}, {{{0xc001a012b0?, 0x0?}, {0xc001a012a0?, 0x40de87?}}})
	/go/src/github.com/percona/percona-postgresql-operator/percona/controller/pgcluster/controller.go:155 +0x54a
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1fdff58?, {0x1fdff58?, 0xc0011b5f20?}, {{{0xc001a012b0?, 0x199a0e0?}, {0xc001a012a0?, 0x40f946?}}})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:122 +0xc8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000438f00, {0x1fdfeb0, 0xc00018e4b0}, {0x1ac82a0?, 0xc001dac680?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:323 +0x377
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000438f00, {0x1fdfeb0, 0xc00018e4b0})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:231 +0x587

The situation is recoverable by reapplying the CR with a pmm: block (even a disabled one) and restarting the Operator pod in Kubernetes.
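
Until a nil check lands, keeping an explicit pmm: block in every CR avoids the panic; a minimal sketch (whether the other pmm fields are required when disabled is not confirmed here):

spec:
  pmm:
    enabled: false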

`pgo-root-cacert` secret shared across `PerconaPGCluster` installations?

About the context:

In a single namespace named postgres, I have two PerconaPGCluster CRs, which created two different Postgres databases named archive and analytics.

I did not specify any certificates in the CRs, so the Postgres Operator generated them automatically.

My Postgres Operator runs cluster-wide in a namespace named postgres-operator.

Observations:

All secrets created by the Operator in the postgres namespace are prefixed with each cluster name. But there is a secret pgo-root-cacert which is not prefixed, and which contains two owner references (these might have been added by the kapp deployer).

I am not sure whether this is a problem, whether it means that both Postgres clusters share the same certificates, or whether it simply means that the cluster certificates are different but signed by the same CA.

NAME                                 TYPE     DATA   AGE
analytics-analytics-hcdj-certs       Opaque   4      13h
analytics-cluster-cert               Opaque   3      13h
analytics-pgbackrest                 Opaque   1      13h
analytics-pgbouncer                  Opaque   6      13h
analytics-pguser-cocolis-analytics   Opaque   12     13h
analytics-replication-cert           Opaque   3      13h
archive-archive-mww4-certs           Opaque   4      2m2s
archive-cluster-cert                 Opaque   3      2m2s
archive-pgbackrest                   Opaque   1      2m3s
archive-pgbouncer                    Opaque   6      2m1s
archive-pguser-cocolis-archive       Opaque   12     2m2s
archive-replication-cert             Opaque   3      2m3s
pgo-root-cacert                      Opaque   2      13h       <- here
[alex@adell] k8s $ kubectl -n postgres get secret/pgo-root-cacert -o yaml
apiVersion: v1
data:
  root.crt: blabla==
  root.key: blabla=
kind: Secret
metadata:
  creationTimestamp: "2023-11-27T21:09:39Z"
  name: pgo-root-cacert
  namespace: postgres
  ownerReferences:
  - apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    name: analytics                                               <- here
    uid: d0398d46-b70c-49bb-950c-75c98b6cb92c
  - apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    name: archive                                                 <- here
    uid: 70bc1488-aeb1-421c-b36b-5670025f21f5
  resourceVersion: "3699654823"
  uid: 268a0f49-aef4-416f-958d-23efa9fef550
type: Opaque

Allow custom s3 endpoints when configuring custom extensions

Proposal

The current implementation of the extension-installer command does not allow passing a custom endpoint to the AWS S3 SDK. Since there are dozens of hosters that offer S3-compatible object storage (like DigitalOcean), supporting a custom endpoint would allow people bound to other providers to install custom extensions.

Use-Case

No response

Is this a feature you are interested in implementing yourself?

Maybe

Anything else?

No response

Absent loadBalancerSourceRanges in helm chart

Hello,

I've found an issue with loadBalancerSourceRanges: it's absent from the Helm chart for the values.proxy.pgBouncer.expose section.

In the CR spec at https://github.com/percona/percona-postgresql-operator/blob/main/deploy/cr.yaml we can see that loadBalancerSourceRanges is present on line 72. However, this parameter is absent from the Helm chart. You can see it in the template file https://github.com/percona/percona-helm-charts/blob/main/charts/pg-db/templates/cluster.yaml (look at line 181 and below).
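
For reference, the parameter as it appears in the CR spec under the pgBouncer expose section (CIDR value illustrative):

proxy:
  pgBouncer:
    expose:
      type: LoadBalancer
      loadBalancerSourceRanges:
        - 10.0.0.0/8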

Please add this parameter to the Helm chart.

Thank you.
