cloudnative-pg / cloudnative-pg

CloudNativePG is a comprehensive platform designed to seamlessly manage PostgreSQL databases within Kubernetes environments, covering the entire operational lifecycle from initial deployment to ongoing maintenance

Home Page: https://cloudnative-pg.io

License: Apache License 2.0

Languages: Go 97.98%, Shell 1.58%, Makefile 0.37%, jq 0.05%, Dockerfile 0.03%
Topics: postgres, postgresql, kubernetes, k8s, database, sql, operator, database-management, high-availability, self-healing

cloudnative-pg's Introduction


Welcome to the CloudNativePG project!

CloudNativePG is a comprehensive open source platform designed to seamlessly manage PostgreSQL databases within Kubernetes environments, covering the entire operational lifecycle from initial deployment to ongoing maintenance. The main component is the CloudNativePG operator.

CloudNativePG was originally built and sponsored by EDB.


Getting Started

The best way to get started is with the "Quickstart" section in the documentation.

Scope

The goal of CloudNativePG is to increase the adoption of PostgreSQL, one of the most loved database management systems in traditional VM and bare metal environments, inside Kubernetes, thus making the database an integral part of the development process and of GitOps CI/CD automated pipelines.

In scope

CloudNativePG has been designed by Postgres experts with Kubernetes administrators in mind. Put simply, it leverages Kubernetes by extending its controller and by defining, in a programmatic way, all the actions that a good DBA would normally do when managing a highly available PostgreSQL database cluster.

Since its inception, our philosophy has been to adopt a Kubernetes-native approach to PostgreSQL cluster management, making incremental decisions that answer the fundamental question: "What would a Kubernetes user expect from a Postgres operator?"

The most important decision we made is to have the status of a PostgreSQL cluster directly available in the Cluster resource, so that it can be inspected through the Kubernetes API. We've fully embraced the operator pattern and eventual consistency, two of the core principles upon which Kubernetes is built for managing complex applications.

As a result, the operator is responsible for managing the status of the Cluster resource, keeping it up to date with the information that each PostgreSQL instance manager regularly reports back through the API server. Changes to the cluster status might trigger, for example, actions like:

  • a PostgreSQL failover where, after an unexpected failure of a cluster's primary instance, the operator itself elects the new primary, updates the status, and directly coordinates the operation through the reconciliation loop, by relying on the instance managers

  • scaling up or down the number of read-only replicas, based on a positive or negative variation in the number of desired instances in the cluster, so that the operator creates or removes the required resources to run PostgreSQL, such as persistent volumes, persistent volume claims, pods, secrets, config maps, and then coordinates cloning and streaming replication tasks

  • updates of the endpoints of the PostgreSQL services that applications rely on to interact with the database, as Kubernetes represents the single source of truth and authority

  • updates of container images in a rolling fashion, following a change in the image name, by first updating the pods where replicas are running, and then the primary, issuing a switchover first

The latter example is based on another pillar of CloudNativePG: immutable application containers - as explained in the blog article "Why EDB Chose Immutable Application Containers".

The above list can be extended. However, the gist is that CloudNativePG relies exclusively on the Kubernetes API server and the instance manager to coordinate the complex operations required by a business continuity PostgreSQL cluster, without depending on an intermediate management tool for high availability and failover, as similar open source operators do.

CloudNativePG also manages additional resources to help the Cluster resource manage PostgreSQL - currently Backup, ClusterImageCatalog, ImageCatalog, Pooler, and ScheduledBackup.

Fully embracing Kubernetes means adopting a hands-off approach during temporary failures of the Kubernetes API server. In such instances, the operator refrains from taking action, deferring decisions until the API server is operational again. Meanwhile, Postgres instances persist, maintaining operations based on the latest known state of the cluster.

Out of scope

CloudNativePG is exclusively focused on the PostgreSQL database management system maintained by the PostgreSQL Global Development Group (PGDG). We are not currently considering adding extensions or capabilities to CloudNativePG that exist only in forks of the PostgreSQL database management system, unless in the form of extensible or pluggable frameworks.

CloudNativePG doesn't intend to pursue database independence (e.g. control a MariaDB cluster).


Adopters

A list of publicly known users of the CloudNativePG operator is in ADOPTERS.md. Help us grow our community and CloudNativePG by adding yourself and your organization to this list!

Maintainers

The current maintainers of the CloudNativePG project are:

  • Gabriele Bartolini (EDB)
  • Francesco Canovai (EDB)
  • Leonardo Cecchi (EDB)
  • Jonathan Gonzalez (EDB)
  • Marco Nenciarini (EDB)
  • Armando Ruocco (EDB)
  • Philippe Scorsolini (Upbound)

They are listed in the CODEOWNERS file.


Trademarks

Postgres, PostgreSQL and the Slonik Logo are trademarks or registered trademarks of the PostgreSQL Community Association of Canada, and used with their permission.


cloudnative-pg's Issues

Fix chatops receiver

Currently, the slash-command-receiver workflow is failing: when a user runs /test in a PR, the workflow tries to pass the limit option to the continuous-delivery workflow, but that option is not available in that workflow.

Make sure the backup object store is empty when bootstrapping a new cluster

When bootstrapping a new cluster from a recovery object store, and the target cluster has a backup object store configured, invoke barman-cloud-check-wal-archive to make sure that the destination backup object store is empty.

This prevents users from erroneously overwriting an existing object store. The same concept can probably be extended to any bootstrap phase.

Creation of a new cluster with a separate volume for WAL files (`pg_wal`)

For vertical scalability, introduce a separate and dedicated volume for storing the WAL archive (pg_wal) to parallelize I/O operations between transaction logs and database pages.

This also improves business continuity as we can limit the amount of space used by the archive and dedicate the PGDATA volume for Postgres pages.

In this initial implementation, we only enable this feature at cluster creation time and disable changing this setting on existing clusters.

The second implementation (which will be tracked in a separate ticket) should support adding the separate volume to an existing cluster that lacks one. We should also enable resizing the volume, as well as removing it from an existing cluster.
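
As a rough sketch of the intended user experience, a Cluster manifest could declare the dedicated WAL volume next to the existing storage section. The walStorage stanza below is illustrative of the intended shape, not a reference for final field names:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi        # volume holding PGDATA (database pages)
  walStorage:         # hypothetical dedicated volume for pg_wal
    size: 5Gi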

Report switchover and failover time in the log

Problem

At the moment it is not trivial to know how long a switchover took to complete (meaning the time until the new primary is promoted, not until the former primary re-attaches). Moreover, by calculating the difference between the timestamps of the target and current primary, you only learn about the most recent switchover.

Solution

Please report in the log the following:

  • switchover: the new primary starts up - measure the difference between target and current timestamp and print that in the log
  • switchover: the former primary is reattached - measure the difference between the start of the standby and the target timestamp, then print it
  • failover: the new primary starts up - measure the difference between the current timestamp and the most relevant timestamp we have to detect the moment of failure.

Additional problem

How can we push the above duration metrics somehow to Prometheus so that they can be clearly shown in Grafana?

Validation

Check the log contains those timestamps via automated tests

Replication slots for HA

Cloud Native Postgres currently relies only on the WAL archive to resynchronize standby servers that have fallen behind. However, for some high-workload scenarios, this can be inefficient. We need to introduce a smart way for the operator to manage replication slots on the primary and designated primary instances, so that the operator itself regularly moves the WAL location of each slot forward based on the most delayed standby. The operator must also remove the replication slot from the former primary and add it to the new one following a failover or switchover.
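
A possible declarative knob for this feature, sketched below with illustrative field names, would let users enable operator-managed replication slots for high availability directly in the Cluster spec:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  replicationSlots:            # sketch: operator-managed physical replication slots
    highAvailability:
      enabled: true
      slotPrefix: _cnpg_       # hypothetical prefix for the slots created on the primary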

Manage errors in recovery/initdb bootstrap methods

If recovery or initdb jobs do not successfully complete, the cluster CR is stuck in Setting up Primary state - and only human intervention can unblock it.

This is currently just a placeholder. This issue requires more analysis and coverage, and might spawn a few more specific issues (for example one on recovery and one on initdb).

Cluster conditions don't change when WAL archiving is fixed

We have been able to reproduce the issue in our test environment and confirm this is a bug.
Basically, the procedure will be as follows:

  1. Create two object stores with different endpoint names.
  2. Create a CNPG cluster with a backup section that points to the first object store, specifying the endpoint.
  3. Delete the CNPG cluster.
  4. Re-create the CNPG cluster with the same backup section pointing to the first object store. The archive command will then fail, setting the cluster status as follows (check it using the kubectl describe cluster command):

     Conditions:
       Message:  unexpected failure invoking barman-cloud-wal-archive: exit status 1
       Reason:   Continuous Archiving is Failing
       Status:   False
       Type:     ContinuousArchiving

  5. Change the endpoint in the YAML file of the CNPG cluster to point to the second object store.
  6. Generate some traffic so that WAL files start being archived, and verify this by reading the primary's log.
  7. Look at the Cluster status again and verify that the Conditions section did not change.

Offline data import and major upgrades for Postgres

We need to provide a declarative way to import existing Postgres databases in CloudNativePG, starting from offline migrations - which also cover major upgrades of Postgres databases.

IMPORTANT: Offline means that with this method applications must stop their write operations to the source until the database is migrated to the new cluster. Offline migrations are in contrast to online migrations with 0 cutover time - which are implemented using native logical replication (see “Logical replica clusters” for online migrations).

Offline migrations extend the “initdb” bootstrap method of CloudNativePG to enable users to create a new PostgreSQL cluster using a snapshot of the data available in another PostgreSQL database - which can be accessed via the network through a superuser connection.

Import is from any supported version of Postgres.

The logical import relies on pg_dump and pg_restore executed from the primary of the new cluster, for all the databases that are part of the operation and, if requested, for the roles.

There are two ways to import a database:

  • microservice: select the single database to import in the new cluster as the main application database
  • monolith: select one or more databases to import (including “*” wildcard), as well as roles (superuser privileges are removed)

The microservice approach is the preferred method to import databases in CloudNativePG, while the monolith approach is closer to the shared PostgreSQL instance hosting multiple databases that DBAs are familiar with.
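
For illustration only, a microservice-type import could be declared roughly as follows, assuming the source instance is reachable through an external cluster definition named cluster-old. Field names are a sketch of the proposed initdb extension, not final API:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-new
spec:
  instances: 3
  storage:
    size: 10Gi
  bootstrap:
    initdb:
      import:
        type: microservice        # import a single database as the main application database
        databases:
          - app
        source:
          externalCluster: cluster-old
  externalClusters:
    - name: cluster-old
      connectionParameters:
        host: cluster-old-rw      # hypothetical service name of the source instance
        user: postgres
        dbname: postgres
      password:
        name: cluster-old-superuser
        key: password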

Online data import and major upgrades for Postgres

This feature is also known as “Logical replica clusters” and is the evolution of “Offline data import and major upgrades for Postgres”. It only supports migrating from an existing Postgres 10+ instance. It might benefit from prior development of “Logical replication” support.

Logical replication

Declarative way to publish changes to one or more tables using logical replication (publications), as well as to subscribe to any Postgres 10+ publication (subscriptions) and consume those events.
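
One possible declarative shape, sketched purely for illustration (resource kinds and fields are not final), could pair a publication on the source cluster with a subscription on the destination:

apiVersion: postgresql.cnpg.io/v1
kind: Publication
metadata:
  name: pub-app
spec:
  cluster:
    name: cluster-source          # cluster where the publication is created
  dbname: app
  name: pub_app
  target:
    allTables: true
---
apiVersion: postgresql.cnpg.io/v1
kind: Subscription
metadata:
  name: sub-app
spec:
  cluster:
    name: cluster-dest            # cluster that consumes the changes
  dbname: app
  name: sub_app
  publicationName: pub_app
  externalClusterName: cluster-source   # connection details defined as an external cluster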

Enhance fencing mechanism

Currently, fencing one single instance in a cluster disables any failover/switchover operation in the Postgres cluster.

We should remove this limitation so that, for example, when one of the two replica instances is fenced, failover might still be triggered on the other standby.

Additionally, if a fenced instance is deleted after an investigation, the cluster should resume by recreating that instance.

When a primary is fenced, once the fence is lifted the primary should simply start up again (without issuing a failover, as we currently do).
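
For context, fencing is requested through an annotation on the Cluster resource. A minimal sketch of fencing a single replica, assuming the cnpg.io/fencedInstances annotation (the surrounding fields are illustrative):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    cnpg.io/fencedInstances: '["cluster-example-2"]'   # fence only this replica
spec:
  instances: 3
  storage:
    size: 10Gi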

Expose conditions to provide a simple and reliable way to wait for a CNPO Cluster to be ready

Currently, waiting for the (re)deployment of a CNPO cluster to complete in a kubectl-friendly way seems to be a bit tricky.
For pods, you might

kubectl wait --for=condition=Ready -n the-cnp-cluster-namespace pods --all

but this is very race-prone. It tends to report premature success if the pods aren't created yet, or if the old pods haven't started termination. Or it can hang indefinitely if some of the old pods still exist but are terminating, because kubectl wait will wait forever for a pod that went away. And it might return premature success if only some of the pods are created yet, but others haven't started creation.
CNPO deletes and creates pods after a new or updated Cluster resource is applied. So the newly configured Cluster resource is visible, but the pods haven't been terminated or launched yet. That's fine and normal for an operator, but it poses issues if you want to wait for readiness.
AFAICS there is no Condition exposed by the Cluster CRD, so something like this will not work:

kubectl wait -n app-db --for=condition=Ready clusters/app-db

and kubectl doesn't know how to wait for arbitrary fields like State=Cluster in healthy state. It doesn’t support waiting for Phase which is considered deprecated, nor for generic wait conditions (kubectl wait on arbitrary jsonpath · Issue #83094 · kubernetes/kubernetes ) .
kubectl's --field-selector doesn't seem to help either as it's a pretty limited feature that doesn't seem to work well if at all with CRDs.
I'd like to be able to wait for a Condition that becomes true when readyInstances == pvcCount i.e. phase: Cluster in healthy state
In the meantime, all I can do is poll until the status shows Cluster in healthy state, as reported in the Status or phase fields. But is that i18n-safe? Is that a stable key I can rely on not changing?
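
A sketch of what an exposed condition could look like in the Cluster status (names are illustrative), which would make a plain kubectl wait --for=condition=Ready clusters/app-db work:

status:
  conditions:
    - type: Ready                          # hypothetical condition requested in this issue
      status: "True"
      reason: ClusterIsReady
      message: Cluster is in a healthy state
      lastTransitionTime: "2022-03-01T10:00:00Z"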

Integrate the contributing documentation with how to test a patch

One of the goals of CloudNativePG is to enable developers to contribute to the project. As maintainers, we must provide clear instructions on how to test a patch on a developer's laptop and enable them to become contributors.

We should integrate this page:

https://github.com/cloudnative-pg/cloudnative-pg/blob/main/contribute/README.md

With content from:

https://github.com/cloudnative-pg/cloudnative-pg/blob/main/hack/e2e/README.md

Add `backupOwnerReference` field to `ScheduledBackup` CRD

Currently, Backup objects do not have any owner reference. There's a historical reason for this: it dates back to when recovery was only possible through Backup objects, and the missing owner reference prevented Backup objects from being removed when a cluster was deleted, which would have de facto prevented (or at least complicated) recovery for users.

Since the introduction of recovery through external clusters, this requirement is no longer valid. The proposal is to add a field called backupOwnerReference to the ScheduledBackup CRD that accepts 3 values:

  • none: no owner reference for created backup objects (current behaviour)
  • self: set the ScheduledBackup object as owner of the backup
  • cluster: set the cluster as owner of the backup

For new installations, I'd like to set the default to "self" - but given that this introduces a change to the existing implementation, we need to explain it well in the release notes and docs. Otherwise, the conservative approach is to set it to "none" - but that would create problems for new installations (I'd rather go with "self" now).

Please use the analysis phase to rethink the above proposal. It might be enough to just limit to ScheduledBackup ownership. Feel free to change names.
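
A sketch of how the proposed field could appear on a ScheduledBackup; the field name and values come from the proposal above, while the surrounding fields are illustrative:

apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: backup-nightly
spec:
  schedule: "0 0 2 * * *"          # illustrative cron-style schedule
  cluster:
    name: cluster-example
  backupOwnerReference: self       # proposed: none | self | cluster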

The plugin command: `cnp report operator` fails if the operator was deployed with more than one replica

If we deploy the operator with more than one replica and run the command

kubectl cnp report operator

We receive the following error:

Error: could not get operator pod: number of running operator pods greater than 1: 2 pods running

This is caused by the following code block:

case len(deploymentList.Items) > 1:
    // More than one operator deployment found: give up instead of picking one
    err := fmt.Errorf("number of operator deployments != 1")
    return appsv1.Deployment{}, err

We should make the GetOperatorPod function more robust by allowing it to handle operator deployments with multiple replicas.

Publish kubectl-cnpg plugin on Krew

Currently, users need to manually download our plugin using the command line that we have in the README of the kubectl-cnpg project, but this doesn’t work well on Windows and makes it hard for users to find our plugin.

The idea is to distribute the plugin through Krew, the kubectl plugin manager, which seems to be the best option.

A couple of tasks need to be done:

  • Write the Krew plugin manifest following this documentation (one good example is here)
  • Test the install process locally
  • Manually submit the plugin to the krew-index following this documentation
  • Create the automated release process recommended by the Krew community

After completing these tasks, there's an interesting link you can follow to get some stats about the plugin: stats.krew.dev
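
A sketch of what the Krew plugin manifest could look like; the version, release URL and checksum below are placeholders that would come from the release pipeline:

apiVersion: krew.googlecontainertools.github.com/v1alpha2
kind: Plugin
metadata:
  name: cnpg
spec:
  version: v1.16.0                       # placeholder version
  homepage: https://cloudnative-pg.io
  shortDescription: Manage CloudNativePG PostgreSQL clusters
  platforms:
    - selector:
        matchLabels:
          os: linux
          arch: amd64
      uri: https://example.com/kubectl-cnpg_linux_amd64.tar.gz   # placeholder release URL
      sha256: "<sha256 of the release archive>"
      bin: kubectl-cnpg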

This issue was originally submitted in the EnterpriseDB ticketing system by @sxd.

Tablespaces, including temporary

Add declarative support for tablespaces, including temporary ones. This improves vertical scalability, in conjunction with the declarative partitioning supported by Postgres, as well as for indexes.
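
A sketch of one possible declarative shape, with illustrative field names, covering both regular and temporary tablespaces:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  tablespaces:                 # hypothetical declarative tablespace support
    - name: analytics
      storage:
        size: 20Gi
    - name: scratch
      temporary: true          # hypothetical flag for temporary tablespaces
      storage:
        size: 5Gi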

Error with permissions for WAL archiving takes long to recover

From a support incident: WAL archiving was reported as failing after adding backup configuration to an existing cluster.

Repro steps:

  1. create a cluster with no backup section, wait for it to become healthy
  2. add a backup section to the cluster
  3. wait, or write some info to the db to trigger WAL writing
  4. With the cnpg plugin, verify that WAL archiving is not working; the pod logs complain about WAL archiving errors

Root cause:

  1. We amend the cluster to include backup, this triggers creation of Secrets for backup credentials
  2. The role takes a bit to create with the permissions for the backup credentials
  3. meanwhile the instance manager tries to cache the backup info but role permissions fail — and the error is ignored
  4. the wal-archive process doesn't find the credentials in the cache and keeps trying and failing
  5. the error will recover only after the next reconciliation is triggered, which could take more than 30 minutes - 45 in my testing.
  6. A mitigation is to make a trivial modification to the cluster, so that a new reconciliation loop is triggered immediately.

Restart update method

Introduce the “primaryUpdateMethod” option and add “restart” to the existing switchover method for rolling updates.
Updating a cluster currently requires a rolling update, in which standby servers are upgraded first. When it comes to the primary, we issue a switchover; the only available knob is the "primaryUpdateStrategy" option, accepting "unsupervised" (immediate switchover, the default) and "supervised" (manual switchover through the cnp plugin for kubectl).

While this implementation is OK for most cases, it can be problematic in high-workload instances where standby servers lag behind with WAL replay. In these cases, the switchover might find even the most up-to-date standby lagging an arbitrary number of seconds behind the primary. Paradoxically, this method increases both the risk of data loss due to the premature promotion of a standby and the downtime due to the time required to promote it. In such cases, a restart of the primary might be more convenient than a switchover.

The proposal is to extend the current rollout strategy (update standby servers first, if available) and introduce another option called primaryUpdateMethod, accepting two values: "switchover" (current behavior) and "restart" (instead of issuing a switchover, restart the pod of the primary). In the extremely remote case that the restart fails, the failover procedure is triggered.
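
A minimal sketch of the proposed spec, assuming the new option sits next to the existing primaryUpdateStrategy knob (other fields are illustrative):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  primaryUpdateStrategy: unsupervised   # existing knob: unsupervised | supervised
  primaryUpdateMethod: restart          # proposed knob: switchover | restart
  storage:
    size: 10Gi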

The backup should run in a subprocess to survive online upgrades

Problem

If a backup is running when an online upgrade is performed, the backup is interrupted without being marked as failed. This means it will remain in a running state until some other event terminates the Postgres container, changing the container id in the instance pod. At that point, the backup will be marked as failed and retried.

A similar problem affects offline upgrades, but since the pod is restarted, the failure is detected immediately.

This behavior mainly impacts big database instances, where a backup can take a long time.

Proposed solution

We want to change the backup command to work as follows:

  • As usual, calling manager backup triggers a new backup via the local HTTP API
  • Instead of running barman-cloud-backup directly, the manager runs manager backup --foreground
  • The actual backup runs inside the new process
  • Make sure that logs flow through the right channel (the JSON pipe)

Alternate solution

We could delay the upgrades until the backup is finished

Zombie syslogger processes when the postmaster is restarted

While reviewing the fencing implementation, we discovered that when we restart the postmaster we leave the syslogger process (a background process spawned by the postmaster) behind as a zombie.

bash-4.4$ ps -efwww
UID          PID    PPID  C STIME TTY          TIME CMD
postgres       1       0  0 14:24 ?        00:00:01 /controller/manager instance run --log-level=info
postgres      24       1  0 14:24 ?        00:00:00 [postgres] <defunct>
postgres     351       1  0 14:26 ?        00:00:00 postgres -D /var/lib/postgresql/data/pgdata
postgres     352     351  0 14:26 ?        00:00:00 postgres: cluster-example: logger 
postgres     354     351  0 14:26 ?        00:00:00 postgres: cluster-example: checkpointer 
postgres     355     351  0 14:26 ?        00:00:00 postgres: cluster-example: background writer 
postgres     356     351  0 14:26 ?        00:00:00 postgres: cluster-example: walwriter 
postgres     357     351  0 14:26 ?        00:00:00 postgres: cluster-example: autovacuum launcher 
postgres     358     351  0 14:26 ?        00:00:00 postgres: cluster-example: archiver 
postgres     359     351  0 14:26 ?        00:00:00 postgres: cluster-example: stats collector 
postgres     360     351  0 14:26 ?        00:00:00 postgres: cluster-example: logical replication launcher 
postgres     415     351  0 14:26 ?        00:00:00 postgres: cluster-example: walsender streaming_replica 10.244.0.217(53488) streaming 0/50039B8
postgres     488       0  0 14:26 pts/0    00:00:00 bash
postgres     496     351  0 14:26 ?        00:00:00 postgres: cluster-example: walsender streaming_replica 10.244.0.218(55630) streaming 0/50039B8
postgres     535     488  0 14:27 pts/0    00:00:00 ps -efwww
bash-4.4$ exit

cloud-native-postgresql on  dev/cnp-2090 [$+?] via 🐹 v1.18 took 39s 
❯ k exec -ti cluster-example-1 -- bash
bash-4.4$ ps -efwww
UID          PID    PPID  C STIME TTY          TIME CMD
postgres       1       0  6 14:24 ?        00:00:12 /controller/manager instance run --log-level=info
postgres      24       1  0 14:24 ?        00:00:00 [postgres] <defunct>
postgres     352       1  0 14:26 ?        00:00:00 [postgres] <defunct>
postgres     556       0  0 14:27 pts/1    00:00:00 bash
postgres     566     556  0 14:27 pts/1    00:00:00 ps -efwww
bash-4.4$ exit

This happens only for the syslogger processes, which the postmaster does not wait for when it stops. It also happens when we restart a PostgreSQL instance for a configuration change, so it's not related only to fencing.

This issue was originally submitted in the EnterpriseDB ticketing system by @leonardoce.

Parse operator configmap and secrets name from deployment

We currently use fixed names for the ConfigMap and Secret in cnpg report operator. However, those names can be customized in the operator deployment instead of using the defaults, so we should parse the ConfigMap and Secret names from the operator Deployment.

CodeQL shouldn't run when no Go file was modified

The current behavior is to run CodeQL on almost every change. This generates a lot of CodeQL runs that make no sense, for example when only the docs/ directory or .yml files are modified.

We should run the CodeQL action ONLY when a .go file is modified, since CodeQL only reviews Go files.
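
A sketch of the trigger section of the CodeQL workflow with a paths filter restricting runs to Go changes (file and branch names are illustrative):

# .github/workflows/codeql.yml - trigger section only
on:
  push:
    branches: [main]
    paths:
      - '**.go'
  pull_request:
    branches: [main]
    paths:
      - '**.go'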

Add an easy way to test the operator

Currently, there's no easy way to test the operator in a local environment without reading the documentation. The idea of this issue is to:

  • Add a Makefile target to run the operator locally
    • Including the creation of a single-node kind cluster to run the operator
  • Add a way to test the latest build of main from the ghcr.io registry
    • Just deploy the latest operator build into the current Kubernetes cluster

All this needs to be documented in the README file with a section such as "how to test the operator" or "quick start"

Catalog of PostgreSQL images

From a usability standpoint, it would be great if users only had to specify the major version of PostgreSQL in an image (e.g. quay.io/enterprisedb/postgresql:14) and let the operator figure out which image to pull and when to update the operands.

Every time a new set of images is built in the postgres-containers project, update a JSON file that contains the metadata of the container images, including the major version (14, 13, …), the full version (e.g. 14.2), the tag and the digest.

This is still the stub of an idea. We need further analysis.
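
For illustration, such metadata could eventually back a declarative catalog resource like the ImageCatalog/ClusterImageCatalog mentioned in the Scope section above. The shape below is a sketch, not a final schema, and the versions and digests are placeholders:

apiVersion: postgresql.cnpg.io/v1
kind: ClusterImageCatalog
metadata:
  name: postgresql
spec:
  images:
    - major: 14
      image: quay.io/enterprisedb/postgresql:14.2@sha256:<digest>   # full version tag pinned by digest
    - major: 13
      image: quay.io/enterprisedb/postgresql:13.6@sha256:<digest>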

PKI setup should only run on the leader instance of the controller

We've noticed that PKI initialization runs unconditionally at the start of an instance, so it runs multiple times if the controller has multiple instances. To be correct, it should run only once, after the leader election.

We could move the code inside a dedicated LeaderElectionRunnable to handle the CA initialization and the refresh of the certificates. We also need to watch the certificate secrets to react to their changes in the non-leader instances.

Maintenance windows

An initial implementation of maintenance windows is to introduce their specification on a weekly basis. In the future we might provide more control, including specification of an actual day.

Maintenance windows are enabled only for the “unsupervised” primary update strategy, with “restart” method option. When a maintenance window is defined in a cluster, an update of the image is “postponed” to the first available maintenance window, which will trigger a rollout update.

A maintenance window has a minimum length of "w" hours (initially 6), during which activities cannot start later than "w - s" hours into the window, where "s" is a safety margin expressed in hours (3 by default). So, for example, with w=6 and s=3, activities can only start in the first 3 hours of the maintenance window.

Maintenance windows can be specified as an array:

maintenanceWindows:
  - dow: [0, 6]
    start_at: 02:00
    end_at: 08:00

Or:

maintenanceWindows:
  - dow: 0
    start_at: 02:00
    end_at: 08:00
  - dow: 6
    start_at: 03:00
    end_at: 09:30

Add E2E tests for restore + backup safety

We are adding logic to the operator to:

  • avoid having a cluster back up into the same location it restored from
  • avoid having a cluster back up into a location already used by another

See issues #61 and #62
Our existing E2E suite covers backup and recovery separately, but not "restored cluster has a backup spec".

Scenarios:

  • We have cluster A. We create a cluster B to restore from A, but the YAML for B specifies the same Barman object store as A. --> FAIL
  • We have cluster A. Create cluster B, which recovers from A and uses a new location for backups. Create cluster C, which recovers from B and uses A's bucket as a backup location. --> FAIL
  • We have cluster A. Create cluster B, which recovers from A and backs up into X. Get B to come up properly and perhaps force some WAL writing. Create cluster C from A, which backs up into X too. --> FAIL
  • We have cluster A. Create cluster B, which backs up to a new location X. --> OK
