crunchydata / postgres-operator

Production PostgreSQL for Kubernetes, from high availability Postgres clusters to full-scale database-as-a-service.

Home Page: https://access.crunchydata.com/documentation/postgres-operator/v5/

License: Apache License 2.0

Languages: Go 97.73%, Shell 1.41%, Makefile 0.83%, Dockerfile 0.03%
Topics: postgresql, kubernetes, operator, postgres, postgres-operator, high-availability, database, database-management, postgresql-clusters, disaster-recovery

postgres-operator's Introduction

PGO: The Postgres Operator from Crunchy Data


Production Postgres Made Easy

PGO, the Postgres Operator from Crunchy Data, gives you a declarative Postgres solution that automatically manages your PostgreSQL clusters.

Designed for your GitOps workflows, it is easy to get started with Postgres on Kubernetes with PGO. Within a few moments, you can have a production-grade Postgres cluster complete with high availability, disaster recovery, and monitoring, all over secure TLS communications. Even better, PGO lets you easily customize your Postgres cluster to tailor it to your workload!

From conveniences like cloning Postgres clusters to rolling updates that roll out disruptive changes with minimal downtime, PGO is ready to support your Postgres data at every stage of your release pipeline. Built for resiliency and uptime, PGO keeps your Postgres cluster in its desired state, so you do not need to worry about it.

PGO is developed with many years of production experience in automating Postgres management on Kubernetes, providing a seamless cloud native Postgres solution to keep your data always available.

Have questions or looking for help? Join our Discord group.

Installation

Crunchy Data makes PGO available as the orchestration behind Crunchy Postgres for Kubernetes. Crunchy Postgres for Kubernetes is the integrated product that includes PostgreSQL, PGO and a collection of PostgreSQL tools and extensions that includes the various open source components listed in the documentation.

We recommend following our Quickstart to install and get up and running. However, if you can't wait to try it out, here are some instructions to get Postgres up and running on Kubernetes:

  1. Fork the Postgres Operator examples repository and clone it to your host machine. For example:

YOUR_GITHUB_UN="<your GitHub username>"
git clone --depth 1 "git@github.com:${YOUR_GITHUB_UN}/postgres-operator-examples.git"
cd postgres-operator-examples

  2. Run the following commands:

kubectl apply -k kustomize/install/namespace
kubectl apply --server-side -k kustomize/install/default

For more information please read the Quickstart and Tutorial.
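
Once PGO is running, creating a cluster is a matter of applying a PostgresCluster custom resource. A minimal sketch, assuming the v5 API group postgres-operator.crunchydata.com/v1beta1 and the default postgres-operator namespace from the examples repo (the cluster name hippo, the Postgres version, and the storage sizes are illustrative placeholders):

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo                        # placeholder cluster name
  namespace: postgres-operator
spec:
  postgresVersion: 15                # adjust to a version your images support
  instances:
    - name: instance1
      replicas: 1
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi

Applying this with kubectl apply -f should give you a single-instance cluster with a volume-backed pgBackRest repository.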

These installation instructions provide the steps necessary to install PGO along with Crunchy Data's Postgres distribution, Crunchy Postgres, as Crunchy Postgres for Kubernetes. In doing so the installation downloads a series of container images from Crunchy Data's Developer Portal. For more information on the use of container images downloaded from the Crunchy Data Developer Portal or other third party sources, please see 'License and Terms' below. The installation and use of PGO outside of the use of Crunchy Postgres for Kubernetes will require modifications of these installation instructions and creation of the necessary PostgreSQL and related containers.

Cloud Native Postgres for Kubernetes

PGO, the Postgres Operator from Crunchy Data, comes with all of the features you need for a complete cloud native Postgres experience on Kubernetes!

PostgreSQL Cluster Provisioning

Create, Scale, & Delete PostgreSQL clusters with ease, while fully customizing your Pods and PostgreSQL configuration!

Safe, automated failover backed by a distributed consensus high availability solution. Uses Pod Anti-Affinity to help resiliency; you can configure how aggressive this can be! Failed primaries automatically heal, allowing for faster recovery time.

Support for standby PostgreSQL clusters that work both within and across multiple Kubernetes clusters.

Backups and restores leverage the open source pgBackRest utility and include support for full, incremental, and differential backups as well as efficient delta restores. Set how long you want to retain your backups. Works great with very large databases!
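
As a sketch of what retention configuration looks like in the v5 API, pgBackRest settings pass through spec.backups.pgbackrest.global using pgBackRest's own repoN-retention-* option names (the values here are illustrative):

spec:
  backups:
    pgbackrest:
      global:
        repo1-retention-full: "14"        # keep 14 days of full backups in repo1
        repo1-retention-full-type: time   # interpret the number as days, not a count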

Security and TLS

PGO enforces that all connections are over TLS. You can also bring your own TLS infrastructure if you do not want to use the defaults provided by PGO.
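
Bringing your own certificates is a matter of pointing the cluster at a Secret you manage. A sketch, assuming the v5 customTLSSecret field and a pre-created Secret (the Secret name hippo-tls is hypothetical):

spec:
  customTLSSecret:
    name: hippo-tls   # Secret expected to contain ca.crt, tls.crt, and tls.key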

PGO runs containers with locked-down settings and provides Postgres credentials in a secure, convenient way for connecting your applications to your data.
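
For example, PGO publishes connection credentials for each Postgres user in a Kubernetes Secret named <cluster>-pguser-<user>. A sketch of reading the connection URI with kubectl, assuming a cluster and user both named hippo:

kubectl -n postgres-operator get secret hippo-pguser-hippo \
  -o go-template='{{.data.uri | base64decode}}'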

Track the health of your PostgreSQL clusters using the open source pgMonitor library.

Safely apply PostgreSQL updates with minimal impact to the availability of your PostgreSQL clusters.

Advanced Replication Support

Choose between asynchronous and synchronous replication for workloads that are sensitive to losing transactions.

Create new clusters from your existing clusters or backups with efficient data cloning.
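
A sketch of cloning in the v5 API, creating a new cluster from an existing cluster's pgBackRest repository (the names elephant and hippo are placeholders):

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: elephant              # the new cluster
spec:
  dataSource:
    postgresCluster:
      clusterName: hippo      # existing cluster to clone from
      repoName: repo1         # which pgBackRest repo to restore from
  # postgresVersion, instances, and backups follow as in a normal spec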

Advanced connection pooling support using pgBouncer.
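
A sketch of enabling the built-in pgBouncer support in the v5 API:

spec:
  proxy:
    pgBouncer:
      replicas: 2   # run two pgBouncer Pods in front of the cluster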

Pod Anti-Affinity, Node Affinity, Pod Tolerations

Have your PostgreSQL clusters deployed to Kubernetes Nodes of your preference. Set your pod anti-affinity, node affinity, Pod tolerations, and more rules to customize your deployment topology!
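
In the v5 API, these use the standard Kubernetes scheduling fields on each instance set. A sketch (the taint key and node label are placeholders for whatever your cluster uses):

spec:
  instances:
    - name: instance1
      tolerations:
        - key: "dedicated"          # placeholder taint key
          operator: "Equal"
          value: "postgres"
          effect: "NoSchedule"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype   # placeholder node label
                    operator: In
                    values: ["ssd"]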

Choose the type of backup (full, incremental, differential) and how frequently you want it to occur on each PostgreSQL cluster.
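
A sketch of per-repository backup schedules in the v5 API (the cron expressions are illustrative):

spec:
  backups:
    pgbackrest:
      repos:
        - name: repo1
          schedules:
            full: "0 1 * * 0"           # weekly full backup, Sunday 01:00
            differential: "0 1 * * 1-6" # daily differentials the rest of the week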

Backup to Local Storage, S3, GCS, Azure, or a Combo!

Store your backups in Amazon S3 or any object storage system that supports the S3 protocol. You can also store backups in Google Cloud Storage and Azure Blob Storage.

You can also mix-and-match: PGO lets you store backups in multiple locations.
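
A sketch of mixing repositories in the v5 API, with one volume-backed repo and one S3-compatible repo (the bucket, endpoint, and region are placeholders; credentials go in the pgBackRest configuration Secret referenced elsewhere in the spec):

spec:
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
        - name: repo2
          s3:
            bucket: my-backup-bucket              # placeholder
            endpoint: s3.us-east-1.amazonaws.com  # placeholder
            region: us-east-1                     # placeholder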

PGO makes it easy to fully customize your Postgres cluster to tailor to your workload:

Deploy PGO to watch Postgres clusters in all of your namespaces, or restrict which namespaces you want PGO to manage Postgres clusters in!

Included Components

PostgreSQL containers deployed with the PostgreSQL Operator include the following components:

In addition to the above, the geospatially enhanced PostgreSQL + PostGIS container adds the following components:

PostgreSQL Operator Monitoring uses the following components:

For more information about which versions of the PostgreSQL Operator include which components, please visit the compatibility section of the documentation.

Supported Platforms

PGO, the Postgres Operator from Crunchy Data, is tested on the following platforms:

  • Kubernetes 1.25-1.28
  • OpenShift 4.10-4.13
  • Rancher
  • Google Kubernetes Engine (GKE), including Anthos
  • Amazon EKS
  • Microsoft AKS
  • VMware Tanzu

This list only includes the platforms that the Postgres Operator is specifically tested on as part of the release process: PGO works on other Kubernetes distributions as well.

Contributing to the Project

Want to contribute to the PostgreSQL Operator project? Great! We've put together a set of contributing guidelines that you can review here:

Once you are ready to submit a Pull Request, please ensure you do the following:

  1. Review the contributing guidelines and ensure that you have followed the commit message format, added testing where appropriate, documented your changes, etc.
  2. Open a pull request based upon the guidelines. If you are adding a new feature, please open the pull request on the master branch.
  3. Be as descriptive as possible in your pull request. If you are referencing an issue, please be sure to include the issue in your pull request.

Support

If you believe you have found a bug or have a detailed feature request, please open a GitHub issue and follow the guidelines for submitting a bug.

For general questions or community support, we welcome you to join our community Discord and ask your questions there.

For other information, please visit the Support section of the documentation.

Documentation

For additional information regarding the design, configuration, and operation of the PostgreSQL Operator, please see the Official Project Documentation.

Past Versions

Documentation for previous releases can be found at the Crunchy Data Access Portal.

Releases

When a PostgreSQL Operator general availability (GA) release occurs, the container images are distributed on the following platforms in order:

The image rollout can occur over the course of several days.

To stay up-to-date on when releases are made available in the Crunchy Data Developer Portal, please sign up for the Crunchy Data Developer Program Newsletter. You can also join the PGO project community Discord.

FAQs, License and Terms

For more information regarding PGO, the Postgres Operator project from Crunchy Data, and Crunchy Postgres for Kubernetes, please see the frequently asked questions.

The installation instructions provided in this repo are designed for the use of PGO along with Crunchy Data's Postgres distribution, Crunchy Postgres, as Crunchy Postgres for Kubernetes. The unmodified use of these installation instructions will result in downloading container images from Crunchy Data repositories - specifically the Crunchy Data Developer Portal. The use of container images downloaded from the Crunchy Data Developer Portal is subject to the Crunchy Data Developer Program terms.

The PGO Postgres Operator project source code is available subject to the Apache 2.0 license with the PGO logo and branding assets covered by our trademark guidelines.

postgres-operator's People

Contributors: abrightwell, andrewlecuyer, benjaminjb, cahoonpwork, cbandy, cbrianpace, cmwshang, crunchyheath, crunchyjohn, dpuckett98, dsessler7, flamingdumpster, guineveresaenger, jasonodonnell, jkatz, jmccormick2001, jmckulk, prlaurence, rimusz, roberto-mello, spron-in, stemid, stephensorriaux, szelenka, the1forte, tjmoore4, tony-landreth, valclarkson, wilybrace, xenophenes

postgres-operator's Issues

Clarify how to deploy the operator to Kubernetes

Only at the very bottom of the README.md is there mention of the postgres-operator container that executes in Kubernetes via a Deployment (its Docker image on Docker Hub is linked).

I was expecting a yaml file that contained a kubernetes Deployment referencing the postgres-operator container, but I cannot find it.

Would you please consider creating a yaml file so that I can deploy the postgres-operator via kubectl create -f postgres-operator.yaml?
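
For illustration only, a Deployment along the lines of what's being asked for might be sketched as follows; the image tag is a guess based on versions mentioned elsewhere in these issues, not something shipped by the project, and the modern apps/v1 API is used rather than the extensions/v1beta1 API of that era:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-operator
  template:
    metadata:
      labels:
        app: postgres-operator
    spec:
      containers:
        - name: postgres-operator
          image: crunchydata/postgres-operator:centos7-1.5.2   # hypothetical tag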

repeat backups error

Make sure repeating backups does not error; in some situations this currently happens.

created clusters do not have roles/passwords specified in .pgo.yaml

Here's my .pgo.yaml (located in ~/.pgo/yaml):

KUBECONFIG:  /Users/jar349/.kube/config
CLUSTER:
  CCP_IMAGE_TAG:  centos7-9.6-1.4.1
  PORT:  5432
  PG_MASTER_USER: admin
  PG_MASTER_PASSWORD:  password
  PG_USER:  admin
  PG_PASSWORD:  password
  PG_DATABASE:  test1
  PG_ROOT_PASSWORD:  password
  STRATEGY:  1
  REPLICAS:  1
  PASSWORD_AGE_DAYS:  3650
MASTER_STORAGE:
  STORAGE_CLASS:  gp2
  PVC_ACCESS_MODE:  ReadWriteOnce
  PVC_SIZE:  1Gi
  STORAGE_TYPE:  dynamic
  FSGROUP:  26
REPLICA_STORAGE:
  STORAGE_CLASS:  gp2
  PVC_ACCESS_MODE:  ReadWriteOnce
  PVC_SIZE:  1Gi
  STORAGE_TYPE:  dynamic
  FSGROUP:  26
BACKUP_STORAGE:
  STORAGE_CLASS:  gp2
  PVC_ACCESS_MODE:  ReadWriteOnce
  PVC_SIZE:  5Gi
  STORAGE_TYPE:  dynamic
  FSGROUP:  26
PGO:
  LSPVC_TEMPLATE:  /Users/jar349/.pgo.lspvc-template.json
  CO_IMAGE_TAG:  centos7-1.5.2
  DEBUG:  true

I create a test1 cluster:

$ pgo create cluster --namespace default test1
DEBU[0000] kubeconfig path is /Users/jar349/.kube/config 
DEBU[0000] namespace is                                 
DEBU[0000] ConnectToKube called                         
DEBU[0000] connected to kube. at /Users/jar349/.kube/config 
DEBU[0000] create cluster called                        
DEBU[0000] no policies are specified                    
DEBU[0000] create cluster called for test1              
DEBU[0000] pgcluster test1 not found so we will create it 
created PgCluster test1

Then I port-forward the pod...

$ kl port-forward test1-1519893712-t1421 5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432

I load up pgAdmin and can't connect. On a hunch, I try a passwordless login as postgres, and I'm in. I see:

(screenshot omitted)

Have I done something wrong? Why isn't there an admin role with the password password?

The run.sh script, in the build setup, creates 60 PVs

Script examples/operator/run.sh calls create-pv.sh, which in turn creates 60 PVs with:

for i in {1..60}
do
    echo "creating PV crunchy-pv$i"
    export COUNTER=$i
    kubectl --namespace=$NAMESPACE delete pv crunchy-pv$i
    envsubst < $DIR/crunchy-pv.json | kubectl --namespace=$NAMESPACE create -f -
done

Most of them go unbound (as shown below). What is the reason for that setup?

crunchy-pv54   1Gi        RWX           Retain          Available                                                  6m
crunchy-pv55   1Gi        RWX           Retain          Available                                                  6m
crunchy-pv56   1Gi        RWX           Retain          Available                                                  6m
crunchy-pv57   1Gi        RWX           Retain          Available                                                  6m
crunchy-pv58   1Gi        RWX           Retain          Bound       default/crunchy-pvc                            6m
crunchy-pv59   1Gi        RWX           Retain          Available                                                  6m
crunchy-pv6    1Gi        RWX           Retain          Available                                                  6m
crunchy-pv60   1Gi        RWX           Retain          Available                                                  6m
crunchy-pv7    1Gi        RWX           Retain          Available                                                  6m

Databases and clusters should be created in the same namespace as the TPR

I have two clusters defined as TPRs. One in the default namespace and one in the pgtest namespace.

$ kubectl get pgcluster --all-namespaces
NAMESPACE   NAME               KIND
default     pg-orange-jaguar   PgCluster.v1.crunchydata.com
pgtest      pg-pink-whale      PgCluster.v1.crunchydata.com

However, all deployments are created in the default namespace.

$ kubectl get deploy --all-namespaces
NAMESPACE         NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default           pg-orange-jaguar              1         1         1            0           35m
default           pg-orange-jaguar-replica      2         2         2            0           35m
default           pg-pink-whale                 1         1         1            0           15m
default           pg-pink-whale-replica         2         2         2            0           15m

It's not clear to me what the model chosen here is. Do I need one operator per pgcluster, one operator per k8s namespace, or one operator per k8s cluster? Optimally, I think the operator should only need to be deployed once per k8s cluster, and then I can create multiple pgclusters in multiple namespaces.

Failover and recovery

Hi guys!

I'm looking for information about failover and recovery of nodes in a Postgres cluster managed by this solution, but I'm not sure whether it exists. Could you please advise?

support data deletion

When deleting a cluster, add a flag to support data deletion; currently, data is NOT deleted when the cluster is deleted.

support image prefix other than default

Right now, crunchydata is assumed as the image prefix. Using a remote registry does not work with this approach. Allow CCP_IMAGE_PREFIX to be specified in the config.

refactor user command

Instead of various flags, use a more consistent syntax:

pgo create user ....
pgo delete user ...
etc.

version filter

Applying the version filter to show cluster does not work.

pgo show cluster all --version=9.6.3

I am running K8s 1.6.1 with pgo version 1.3.2 in a custom namespace.

Use Kubernetes secrets to store credentials

Currently all credentials seem to be stored in the TPRs. Is it planned to switch this to using Kubernetes secrets? This way the operator could generate credentials while deploying a new database or cluster and store them in secrets.

load - 2nd time throws error

When running load twice, an error is produced and the load job does not run; the load needs to clean up existing jobs before being re-run.

Godep restore has errors when building from source?

Using Go version 1.9.

[sarah@localhost postgres-operator]$ godep restore
godep: Dep (k8s.io/client-go/discovery) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/apps/v1beta1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/authentication/v1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/authentication/v1beta1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/authorization/v1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/authorization/v1beta1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/autoscaling/v1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/autoscaling/v2alpha1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/batch/v1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/batch/v2alpha1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/certificates/v1beta1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/core/v1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/extensions/v1beta1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/policy/v1beta1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/rbac/v1alpha1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/rbac/v1beta1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/settings/v1alpha1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/storage/v1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/kubernetes/typed/storage/v1beta1) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/plugin/pkg/client/auth/gcp) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/rest) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/tools/auth) restored, but was unable to load it with error:
Package (context) not found
godep: Dep (k8s.io/client-go/tools/clientcmd) restored, but was unable to load it with error:
Package (context) not found
godep: Error checking some deps.

Clarify that configuration of pgo cli is required, not optional

Now that I have things installed, here's my history trying to create a cluster:

$ which pgo
/usr/local/bin/pgo
$ pgo create cluster my-service
ERRO[0000] --kubeconfig flag is not set and required
$ pgo
The pgo command line interface lets you
create and manage PostgreSQL clusters.

Usage:
  pgo [command]

Available Commands:
  apply       apply a Policy
  backup      perform a Backup
  clone       perform a clone
  create      Create a Cluster or Policy
  delete      Delete a policy, database, cluster, backup, or upgrade
  scale       Scale a Cluster
  show        show a description of a cluster
  test        test a Cluster
  upgrade     perform an upgrade

Flags:
      --config string       config file (default is $HOME/.pgo.yaml)
      --debug               enable debug with true
      --kubeconfig string   kube config file
      --namespace string    kube namespace to work in (default is default)
      --selector string     label selector string
  -t, --toggle              Help message for toggle

Use "pgo [command] --help" for more information about a command.
$ ls ~/.kube/config 
/Users/jar349/.kube/config
$ pgo --kubeconfig ~/.kube/config create cluster my-service
ERRO[0000] --namespace flag is not set and required
$ pgo --kubeconfig ~/.kube/config --namespace default create cluster my-service
ERRO[0000] invalid MASTER_STORAGE.PVC_ACCESS_MODE specified

My thought was that I'd be able to type out the commands the way you do in your documentation. The documentation says "You can configure both the client and the operator", but the truth is (unless I've made another mistake, which is entirely possible) that you MUST configure the CLI before you can follow along with the documentation.

I recommend clarifying that.

Also, it's common for tools to place their configuration files in $HOME/.tool/config, but that's not in the list of places pgo looks for its config. I'd prefer not to put pgo.yaml directly in $HOME. Would it be possible to add $HOME/.pgo/ to the list of places pgo looks for config files?

seg fault - pgo show backup all scenario

When a backup pod is no longer there, the pgo show backup all command will segfault. Recreate by creating a backup, then removing the pod with kubectl, then running pgo show backup all.

Use K8s ConfigMap instead of PVC for /data

Similar to using Secrets (#6), it seems that using ConfigMaps instead of a hostPath PVC would make more sense for storing the operator configuration and templates. Or is there a reason why ConfigMaps wouldn't work here?
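
For reference, mounting a ConfigMap as a volume is standard Kubernetes. A sketch of what that might look like in the operator's Pod spec (the names and mount path are hypothetical):

spec:
  containers:
    - name: postgres-operator
      volumeMounts:
        - name: operator-conf
          mountPath: /operator-conf   # hypothetical path for templates/config
  volumes:
    - name: operator-conf
      configMap:
        name: operator-conf           # ConfigMap holding the templates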

Considerations multipile PVCs for non-shared external storage

@jmccormick2001
I got the postgres operator to work with Dell EMC's ScaleIO. Here are some considerations to help with scaling the operator for non-shared external storage like ScaleIO:

  • Create a PVC for main database data mounted at /pgdata
  • Create a PVC for backup database data mounted at /pgbackup.
  • Create a PVC for replica pods data mounted at /pgdata

This will help avoid conflicts when scheduling on multi-node Kubernetes clusters where main database pod and replica pods may get scheduled on the same instance.

Set VolumeMount to ReadOnly=true explicitly for DB replica pod for /pgdata

The documentation mentions that the replica pod for the database is read-only. However, when inspecting the pod, it shows that the readOnly flag is set to false.

Volumes:
  pgdata:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	pgcluster-pvc
    ReadOnly:	false

Would it make sense to also set the volume bind-mount to readOnly for the pod container, or would that break normal DB operation?
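
For reference, the flag in question is set per container on the volumeMounts entry. A sketch of what readOnly would look like; note that a streaming replica still writes to its data directory, so a read-only /pgdata mount would likely break it:

spec:
  containers:
    - name: database
      volumeMounts:
        - name: pgdata
          mountPath: /pgdata
          readOnly: true   # replicas are read-only at the SQL level, not the filesystem level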

Clarify usage on GCE

According to the README:

Openshift Origin 1.5.1+ or Openshift Container Platform 3.5

...it is required to have Kubernetes installed with OpenShift. Does that mean that postgres-operator will not work on Google Cloud Platform (specifically Google Container Engine)? Shouldn't the operator be cloud-provider agnostic?

policy apply - allow database choice

Allow specifying which database the policy gets applied to. It looks to be applied to the postgres database by default; we should allow the PG_DATABASE setting to be chosen for the policy as well.

Missing "thirdpartyresource.extentions pg-clone.crunchydata.com not found" error when creating cluster

I pulled the latest source code, built locally, and deployed necessary resources successfully:

kubectl get deployment,po,configMap,thirdpartyresource
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/postgres-operator   1         1         1            1           2m

NAME                                    READY     STATUS    RESTARTS   AGE
po/postgres-operator-2889167354-mjdtk   1/1       Running   0          2m

NAME               DATA      AGE
cm/operator-conf   6         1h

NAME                                             DESCRIPTION                             VERSION(S)
thirdpartyresources/pg-backup.crunchydata.com    A postgres backup ThirdPartyResource    v1
thirdpartyresources/pg-cluster.crunchydata.com   A postgres cluster ThirdPartyResource   v1
thirdpartyresources/pg-upgrade.crunchydata.com   A postgres upgrade ThirdPartyResource   v1

However, when I attempt to create a cluster, I get the following (this is true with other pgo commands):

pgo create cluster mycluster
DEBU[0000] kubeconfig path is /home/vladimir/admin.conf
DEBU[0000] namespace is default
DEBU[0000] ConnectToKube called
ERRO[0000] thirdpartyresources.extensions "pg-clone.crunchydata.com" not found
ERRO[0000] required pg-clone.crunchydata.com TPR was not found on your kube cluster

It is worth noting this works ok when I pull the pre-built binaries ver 1.2 for postgres-operator and pgo.

Why does pgo decide on node placement?

I'm curious about the motivation for fixed placement of Postgres instances. It feels like this would decrease robustness by preventing the pod and PVC from moving to a different node in the event of node failure.

thirdpartyresources.extensions "pg-clone.crunchydata.com" not found

Result of this command (or any other pgo command) with a Kubernetes cluster on CoreOS:

pgo show cluster all

Is:
ERRO[0000] thirdpartyresources.extensions "pg-clone.crunchydata.com" not found
ERRO[0000] required pg-clone.crunchydata.com TPR was not found on your kube cluster

And indeed no pg-clone.crunchydata.com TPR is created by the script in the repos as shown below.

kubectl get thirdpartyresources
NAME                         DESCRIPTION                             VERSION(S)
pg-backup.crunchydata.com    A postgres backup ThirdPartyResource    v1
pg-cluster.crunchydata.com   A postgres cluster ThirdPartyResource   v1
pg-upgrade.crunchydata.com   A postgres upgrade ThirdPartyResource   v1

Can't find any info on where to get the pg-clone TPR or how to create it.

re-implement clone operation

The clone operation was removed in v2.0.0; it needs to be re-implemented due to various issues with the previous implementation.

Deployment docs

The operator and its client deployment docs are a bit unclear.

Clarify openshift requirements

The requirements section says that Openshift Origin 1.5.1+ and Openshift Container Platform 3.5 are required, but does not explain why - and the term 'openshift' does not appear in the rest of the README.md.

Is it still required?

Does crunchydata/postgres-operator:centos7-1.5.1 create the correct thirdpartyresources?

the logs of the postgres-operator say that they have created:

  • pg-cluster.crunchydata.com
  • pg-backup.
  • pg-upgrade
  • pg-policy
  • pg-clone
  • pg-policy-log

but then it goes on to say that it can't find the requested resources (they don't have dashes in their names):

E0920 18:20:55.418172       1 reflector.go:201] github.com/crunchydata/postgres-operator/operator/backup/backup.go:100: Failed to list *tpr.PgBackup: the server could not find the requested resource (get pgbackups.crunchydata.com)
E0920 18:20:55.418308       1 reflector.go:201] github.com/crunchydata/postgres-operator/operator/cluster/cluster.go:120: Failed to list *tpr.PgCluster: the server could not find the requested resource (get pgclusters.crunchydata.com)
E0920 18:20:55.425411       1 reflector.go:201] github.com/crunchydata/postgres-operator/operator/cluster/clone.go:73: Failed to list *tpr.PgClone: the server could not find the requested resource (get pgclones.crunchydata.com)
E0920 18:20:55.443308       1 reflector.go:201] github.com/crunchydata/postgres-operator/operator/cluster/policies.go:163: Failed to list *tpr.PgPolicylog: the server could not find the requested resource (get pgpolicylogs.crunchydata.com)
E0920 18:20:55.443419       1 reflector.go:201] github.com/crunchydata/postgres-operator/operator/upgrade/upgrade.go:73: Failed to list *tpr.PgUpgrade: the server could not find the requested resource (get pgupgrades.crunchydata.com)

The examples/tpr folder has a pg-database TPR that isn't created by default by the postgres-operator container. Should it be? Or is it meant for creating a single database instance for testing purposes?

30 minutes after deploying the postgres-operator, I see the following in the logs:

Sep 20 14:57:02 postgres-operator-3770555364-5n3sl postgres-operator error time="2017-09-20T18:57:02Z" level=error msg="error in major upgrade watch closed before Until timeout" 
Sep 20 14:57:59 postgres-operator-3770555364-5n3sl postgres-operator error time="2017-09-20T18:57:59Z" level=error msg="error in ProcessJobs watch closed before Until timeout" 
Sep 20 15:10:23 postgres-operator-3770555364-5n3sl postgres-operator error time="2017-09-20T19:10:23Z" level=error msg="error in ProcessPolicies watch closed before Until timeout" 
Sep 20 15:19:03 postgres-operator-3770555364-5n3sl postgres-operator error time="2017-09-20T19:19:03Z" level=error msg="erro in clone complete watch closed before Until timeout" 

Allow postgres extensions

Users should be able to install and enable extensions such as PostGIS within Postgres instances that the operator creates.

Replace TPR with CRD

Kubernetes 1.7 has deprecated TPRs in favor of CRDs.
Are there any plans to move to CRDs soon?
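
For reference, the CRD equivalent of one of these TPRs might be sketched as follows, using today's apiextensions.k8s.io/v1 API (the schema is left open-ended for brevity; the v1beta1 CRD API of the Kubernetes 1.7 era has since been removed):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pgclusters.crunchydata.com
spec:
  group: crunchydata.com
  scope: Namespaced
  names:
    plural: pgclusters
    singular: pgcluster
    kind: PgCluster
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # accept arbitrary spec fields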

Getting started script will not work if /data already exists

When the /data directory already exists and the run.sh script is executed, it skips creating the crunchy-pvc resource. This in turn causes pgo create cluster testcluster to fail when attempting to attach /pgdata.

kubectl get pods
NAME                                   READY     STATUS              RESTARTS   AGE
postgres-operator-3650826749-59nxb     1/1       Running             0          2h
testcluster-4275003772-ddfwn           0/1       ContainerCreating   0          2h
testcluster-replica-4058347393-88mhd   0/1       ContainerCreating   0          2h
testcluster-replica-4058347393-q19v2   0/1       ContainerCreating   0          2h
FirstSeen   LastSeen   Count   From                SubObjectPath   Type      Reason        Message
---------   --------   -----   ----                -------------   ----      ------        -------
2h          32s        23      kubelet, minikube                   Warning   FailedMount   Unable to mount volumes for pod "testcluster-4275003772-ddfwn_default(d2a2f3a7-4d68-11e7-a09f-080027d45d33)": timeout expired waiting for volumes to attach/mount for pod "default"/"testcluster-4275003772-ddfwn". list of unattached/unmounted volumes=[pgdata]

Based on kubelet logs, the crunchy-pvc is not found:

Jun 10 02:02:05 minikube localkube[3372]: E0610 02:02:05.042059    3372 desired_state_of_world_populator.go:259] Error processing volume "pgdata" for pod "testcluster-replica-4058347393-88mhd_default(d2a6ecc5-4d68-11e7-a09f-080027d45d33)": error processing PVC "default"/"crunchy-pvc": failed to fetch PVC default/crunchy-pvc from API server. err=persistentvolumeclaims "crunchy-pvc" not found
Jun 10 02:02:08 minikube localkube[3372]: E0610 02:02:08.062501    3372 desired_state_of_world_populator.go:259] Error processing volume "pgdata" for pod "testcluster-4275003772-ddfwn_default(d2a2f3a7-4d68-11e7-a09f-080027d45d33)": error processing PVC "default"/"crunchy-pvc": failed to fetch PVC default/crunchy-pvc from API server. err=persistentvolumeclaims "crunchy-pvc" not found

major upgrade to pg10 not working

Until the pg10 container supports pg_audit, major upgrades require users to manually edit postgresql.conf and remove the pg_audit.so reference before the upgrade will work. When pgaudit is available for pg10, this problem will be resolved. This issue affects postgres-operator v2.0.0.
