Getting Started with PostgreSQL Operator 4.3 in OpenShift

James Chanco Jr.

13 min read

The first step of working with any software is getting it installed. Depending on your environment, this could be as easy as a one-click installer, or require a manual that's hundreds of pages long with thousands of config bits to tune. Deploying complex applications on Kubernetes and OpenShift often falls somewhere in the middle. For example, deploying an Operator on OpenShift can seem intimidating, but it doesn't have to be. Here we're going to walk through the quickest and easiest possible way to get started!

Below I’ll guide you through the steps for installing and using the PostgreSQL Operator with the pgo-deployer. The pgo-deployer ships with the PostgreSQL Operator and runs as a container. In this guide, I’ll be using OpenShift 4.4.3, but any version 3.11 or greater will work.

Confirming your environment

Before you get started, you will need to confirm your environment is set up to deploy the PostgreSQL Operator.

You will need an OpenShift environment where:

  • You are running OpenShift v3.11+
  • You have enough privileges to install an application, i.e. you can add a ClusterRole. If you’re a Cluster Admin, you’re all set
  • There are PersistentVolumes available
  • You can access your OpenShift cluster using oc

Install the PostgreSQL Operator

To get started we’ll first need to download the PostgreSQL Operator Installer manifest to our OpenShift environment. In this example we’ll do so with the following command:

curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/REL_4_3/installers/kubectl/postgres-operator.yml > postgres-operator.yml

Note the "REL_4_3" - this is pulling the installer from the PostgreSQL Operator 4.3.x branch. Also note that if you are installing on OpenShift 3.11, you will have to download the installer manifest from as such:

curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/REL_4_3/installers/kubectl/postgres-operator-ocp311.yml > postgres-operator.yml

It’s important to review the configuration parameters in detail. In this example we’ll use many of the default settings, but there are several variables you should change. A good example is PGO_ADMIN_PASSWORD, the password used with the pgo client to manage your PostgreSQL clusters; changing it from the default of “password” is strongly recommended. Additionally, you’ll want to specify your particular storage class. We’ll be using GCE for our instance.

- name: BACKREST_STORAGE
  value: 'gce'
- name: BACKUP_STORAGE
  value: 'gce'
- name: PRIMARY_STORAGE
  value: 'gce'
- name: REPLICA_STORAGE
  value: 'gce'
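Don’t forget the admin password. The deployer manifest contains an entry similar to the one below; the value shown here is only a placeholder, so substitute a strong password of your own:

- name: PGO_ADMIN_PASSWORD
  value: 'my-strong-password'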

All of these changes (password, storage class, etc) should be applied by editing the postgres-operator.yml file that we downloaded in the above steps.

Now that we’ve edited our manifest file we can begin installing the PostgreSQL Operator. First, we’ll need to create the pgo namespace. We can do this with the following command:

oc create namespace pgo

To launch the pgo-deployer container we’ll need to apply the postgres-operator.yml manifest. We can do this with the following command:

oc apply -f postgres-operator.yml
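The manifest launches a deployer job in the pgo namespace, and you can follow its progress before moving on. The job name used below, pgo-deploy, is what the 4.3 installer manifest uses; adjust it if yours differs:

oc -n pgo get job
oc -n pgo logs -f job/pgo-deploy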

Next we’ll download the pgo client setup script to help set up the OpenShift environment for using the PostgreSQL Operator.

curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/REL_4_3/installers/olm/install.sh > install.sh
chmod +x install.sh

Once the PostgreSQL Operator has been installed successfully, you can run the install.sh script. This will do a number of things for you, such as creating a subscription to install the operator and creating a client pod where commands can be executed. This can take a few minutes to complete, depending on your OpenShift cluster.
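For example, from the directory where you downloaded it:

./install.sh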

While the PostgreSQL Operator is being installed, you will need to set up some environment variables for ease of using the pgo command line interface.

You can do so with the following commands on your RHEL or CentOS system.

export PATH="${HOME?}/.pgo/pgo:$PATH"
export PGOUSER="${HOME?}/.pgo/pgo/pgouser"
export PGO_CA_CERT="${HOME?}/.pgo/pgo/client.crt"
export PGO_CLIENT_CERT="${HOME?}/.pgo/pgo/client.crt"
export PGO_CLIENT_KEY="${HOME?}/.pgo/pgo/client.key"

If you wish to permanently add these variables to your environment, you can append them to your ~/.bashrc and reload it:

cat <<EOF >> ~/.bashrc
export PATH="${HOME?}/.pgo/pgo:\$PATH"
export PGOUSER="${HOME?}/.pgo/pgo/pgouser"
export PGO_CA_CERT="${HOME?}/.pgo/pgo/client.crt"
export PGO_CLIENT_CERT="${HOME?}/.pgo/pgo/client.crt"
export PGO_CLIENT_KEY="${HOME?}/.pgo/pgo/client.key"
EOF
source ~/.bashrc

NOTE: For macOS users, you must use ~/.bash_profile instead of ~/.bashrc

Verify the Installation

Below are a few steps to check if the PostgreSQL Operator is up and running.

By default, the PostgreSQL Operator installs into a namespace called pgo. First, see that the OpenShift Deployment of the Operator exists and is healthy:

oc -n pgo get deployments

If successful, you should see output similar to this:

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
postgres-operator   1/1     1            1           16h

Next, see if the Pods that run the PostgreSQL Operator are up and running:

oc -n pgo get pods

If successful, you should see output similar to this:

NAME                                READY   STATUS    RESTARTS   AGE
postgres-operator-56d6ccb97-tmz7m   4/4     Running   0          2m

Connect to the PostgreSQL Operator

Finally, let’s see if we can connect to the PostgreSQL Operator from the pgo command-line client. In order to communicate with the PostgreSQL Operator API server, you will first need to set up a port forward to your local environment.

In a new console window, run the following command to set up a port forward:

oc -n pgo port-forward svc/postgres-operator 8443:8443

Back in your original console window, you can verify that you can connect to the PostgreSQL Operator using the following command:

pgo version

If successful, you should see output similar to this:

pgo client version 4.3.2
pgo-apiserver version 4.3.2

Have Some Fun - Create a Postgres Cluster

The quickstart installation method creates two namespaces that you can deploy your PostgreSQL clusters into called pgouser1 and pgouser2. Let’s create a new Postgres cluster in pgouser1:

pgo create cluster -n pgouser1 hippo

If successful, you should see output similar to this:

created Pgcluster hippo
workflow id 1cd0d225-7cd4-4044-b269-aa7bedae219b

This will create a Postgres cluster named hippo. It may take a few moments for the cluster to be provisioned. You can see the status of this cluster using the pgo test command:

pgo test -n pgouser1 hippo

When everything is up and running, you should see output similar to this:

cluster : hippo
    Services
        primary (10.97.140.113:5432): UP
    Instances
        primary (hippo-594f794476-glt9n): UP

Once everything is up and running, we can check that our pgBackRest stanza creation and initial backup completed successfully. If they did, we should see “Completed” under the “Status” column.
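We can do this by listing the pods in the pgouser1 namespace:

oc -n pgouser1 get pods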

NAME                                         READY   STATUS      RESTARTS   AGE
backrest-backup-hippo-pwpwq                  0/1     Completed   0          11m
hippo-594f794476-glt9n                       1/1     Running     0          12m
hippo-backrest-shared-repo-b5d7f8f68-6rl8k   1/1     Running     0          13m
hippo-stanza-create-5v7j4                    0/1     Completed   0          12m

Awesome! We now have a Postgres primary as well as a successful backup. We can confirm this by opening a shell in the pgBackRest repository pod with the following command:

oc exec -it -n pgouser1 hippo-backrest-shared-repo-b5d7f8f68-6rl8k -- bash

From here, you’ll be able to run the pgbackrest info command to check on the status of the backup.

bash-4.2$ pgbackrest info
stanza: db
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12-1): 000000010000000000000001/000000010000000000000004

        full backup: 20200511-171705F
            timestamp start/stop: 2020-05-11 17:17:05 / 2020-05-11 17:17:16
            wal start/stop: 000000010000000000000002 / 000000010000000000000002
            database size: 31.0MB, backup size: 31.0MB
            repository size: 3.7MB, repository backup size: 3.7MB

The pgo test command provides the basic information you need to connect to your PostgreSQL cluster from within your OpenShift environment. For more detailed information, you can use the following command:

pgo show cluster -n pgouser1 hippo

cluster : hippo (crunchy-postgres-ha:centos7-12.2-4.3.0)
    pod : hippo-594f794476-glt9n (Running) on opensh-c49lw-w-b-k4fb9.c.container-suite.internal (1/1) (primary)
    pvc : hippo
    resources : Memory: 128Mi
    storage : Primary=300M Replica=300M
    deployment : hippo
    deployment : hippo-backrest-shared-repo
    service : hippo - ClusterIP (172.30.62.61)
    labels : crunchy-pgha-scope=hippo crunchy_collect=false deployment-name=hippo pgo-backrest=true pgo-version=4.3.0 pgouser=admin workflowid=e3793de6-3d6c-4278-b37b-8b49f79b076a autofail=true crunchy-pgbadger=false name=hippo pg-cluster=hippo pg-pod-anti-affinity=

PGO CLI Commands

scale (up)

We’ve now verified that we have a single Postgres cluster with 1 primary! You’ve likely read through some of the documentation at this point and noticed the Crunchy PostgreSQL Operator also offers High Availability. Let’s scale up our existing cluster and make it so.

To create two replicas in addition to our existing primary, let’s run the following command against our “hippo” cluster:

pgo scale hippo --replica-count=2 -n pgouser1

Here, you’ll be asked if you’re sure you want to proceed. Type “yes” and you’ll see output showing that two Pgreplicas have been created:

created Pgreplica hippo-rkol
created Pgreplica hippo-egzo

Very good! At this point we can re-run our pgo show cluster command. We should see additional pods and PVCs for our newly created replicas.

cluster : hippo (crunchy-postgres-ha:centos7-12.2-4.3.0)
    pod : hippo-594f794476-glt9n (Running) on opensh-c49lw-w-b-k4fb9.c.container-suite.internal (1/1) (primary)
    pvc : hippo
    pod : hippo-egzo-7d567dc84d-sfqfr (Running) on opensh-c49lw-w-b-k4fb9.c.container-suite.internal (1/1) (replica)
    pvc : hippo-egzo
    pod : hippo-rkol-b69947d99-mr58r (Running) on opensh-c49lw-w-c-hws4x.c.container-suite.internal (1/1) (replica)
    pvc : hippo-rkol

The pgo test command now shows the additional PostgreSQL instances as well.

cluster : hippo
    Services
        primary (172.30.62.61:5432): UP
        replica (172.30.213.187:5432): UP
    Instances
        primary (hippo-594f794476-glt9n): UP
        replica (hippo-egzo-7d567dc84d-sfqfr): UP
        replica (hippo-rkol-b69947d99-mr58r): UP

You now have a high availability cluster with two replicas! Automated failover is now enabled, and pod anti-affinity settings let you configure how aggressively failed primaries should be self-healed.
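As a sketch of how that aggressiveness can be tuned, pod anti-affinity can be set when a cluster is created; the example below assumes the 4.3 client’s --pod-anti-affinity flag and uses a hypothetical cluster name:

pgo create cluster hippo2 -n pgouser1 --pod-anti-affinity=required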

scaledown

With the pgo command you can add and remove replicas as needed. Need to remove a replica because it’s just not needed? No problem! With the scaledown command, we can specify a particular replica to remove. Let’s do it, but first we need to choose a replica. We can do so by querying the available replicas with the following command:

pgo scaledown hippo -n pgouser1 --query

Cluster: hippo
REPLICA      STATUS    NODE                                                 REPLICATION LAG
hippo-egzo   running   opensh-c49lw-w-b-k4fb9.c.container-suite.internal    0 MB
hippo-rkol   running   opensh-c49lw-w-c-hws4x.c.container-suite.internal    0 MB

For this example we’ll use hippo-egzo as our replica to remove.

pgo scaledown hippo -n pgouser1 --target=hippo-egzo

Again, you’ll be prompted to confirm your decision. We’re ready to go here, so we’ll type “yes” and proceed.

WARNING: Are you sure? (yes/no): yes
deleted replica hippo-egzo

If we run our pgo test command again, we’ll now get output without the replica we chose to delete.

cluster : hippo
    Services
        primary (172.30.62.61:5432): UP
        replica (172.30.213.187:5432): UP
    Instances
        primary (hippo-594f794476-glt9n): UP
        replica (hippo-rkol-b69947d99-mr58r): UP

Failover (manual)

Thanks to distributed consensus, the PostgreSQL Operator handles automated failover with ease. This is great, but you may need to test failovers manually from time to time. With the pgo client, this is easy to do. Let’s prove that here. Just like with the replica scaledown command, let’s query the available failover targets and choose one. We’ll take note of the replica shown here, since it will soon switch places with the existing primary.

pgo failover -n pgouser1 hippo --query

Cluster: hippo
REPLICA      STATUS    NODE                                                 REPLICATION LAG
hippo-rkol   running   opensh-c49lw-w-c-hws4x.c.container-suite.internal    0 MB

Since we recently scaled down the number of replicas we have, we only see one remaining replica. We’ll choose this replica as our failover target.

pgo failover hippo -n pgouser1 --target=hippo-rkol

We’ll be asked if we’re sure we want to do this. Again, we’ll type “yes” and a failover Pgtask will be created for us.

Running the pgo test command, we can see that our old replica, hippo-rkol, is now the primary. Easy!

cluster : hippo
    Services
        primary (172.30.62.61:5432): UP
        replica (172.30.213.187:5432): UP
    Instances
        replica (hippo-594f794476-glt9n): UP
        primary (hippo-rkol-b69947d99-mr58r): UP

Labels

Do you have a need to label your PostgreSQL clusters? The Crunchy PostgreSQL Operator allows you to do so easily in the event you need to differentiate between production, UAT, SIT, or DEV environments for example. Let’s create a label for a DEV environment. This will be applied to our “hippo” cluster.

pgo label hippo -n pgouser1 --label=env=development

Now if we run our pgo show cluster command, we’ll notice that in the labels section there is an “env=development”. This can be quite useful if you need to group PostgreSQL clusters together based on what they’re used for.

labels : workflowid=e3793de6-3d6c-4278-b37b-8b49f79b076a autofail=true crunchy-pgbadger=false crunchy-pgha-scope=hippo name=hippo pg-cluster=hippo pg-pod-anti-affinity= pgo-backrest=true crunchy_collect=false deployment-name=hippo-rkol env=development pgo-version=4.3.0 pgouser=admin
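If you later need to remove a label, the pgo client also provides a delete label command; this sketch assumes its syntax mirrors the label command above:

pgo delete label hippo -n pgouser1 --label=env=development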

Creating Backups

When we first deployed the Operator, we took a look at how a stanza is created and a backup is taken. What if we need to create a backup manually, after deployment? No problem! The pgo command makes this easy by utilizing pgBackRest. Let’s run a backup.

pgo backup -n pgouser1 hippo

This will create a Pgtask for pgBackRest. Last time, we connected to the pgBackRest repository pod manually and ran the pgbackrest info command to look at our list of available backups. This time, let’s use the pgo command, since it’s a quick and easy way to get the same list. In the output, we’ll see the full backup that was taken a day before, when the cluster was deployed, as well as the manually-run incremental backup.
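We can get that list with the pgo show backup command:

pgo show backup -n pgouser1 hippo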

cluster: hippo
storage type: local

stanza: db
    status: ok
    cipher: none

    db (current)
        wal archive min/max (12-1)

        full backup: 20200511-171705F
            timestamp start/stop: 2020-05-11 17:17:05 +0000 UTC / 2020-05-11 17:17:16 +0000 UTC
            wal start/stop: 000000010000000000000002 / 000000010000000000000002
            database size: 31.0MiB, backup size: 31.0MiB
            repository size: 3.7MiB, repository backup size: 3.7MiB
            backup reference list:

        incr backup: 20200511-171705F_20200512-132803I
            timestamp start/stop: 2020-05-12 13:28:03 +0000 UTC / 2020-05-12 13:28:06 +0000 UTC
            wal start/stop: 000000020000000000000009 / 000000020000000000000009
            database size: 31.0MiB, backup size: 229.8KiB
            repository size: 3.7MiB, repository backup size: 27.1KiB
            backup reference list: 20200511-171705F

Backup types

Taking a full backup instead of an incremental is easy. All we have to do here is pass the “--backup-opts” flag and specify “--type=full”. For example:

pgo backup -n pgouser1 hippo --backup-opts="--type=full"

Backup schedule creation

You can just as easily schedule certain types of backups to run on a cron schedule with the create schedule command listed below.

pgo create schedule -n pgouser1 hippo --schedule="*/1 * * * *" --schedule-type=pgbackrest --pgbackrest-backup-type=diff

This creates a schedule to run a differential backup every minute. That’s a bit aggressive for some environments and datasets.

Backup schedule deletion

Let’s delete this schedule to ensure we don’t fill up our pgBackRest repo, shall we?

pgo delete schedule -n pgouser1 --schedule-name=hippo-pgbackrest-diff
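To confirm the schedule is gone, you can list the remaining schedules for the cluster; this assumes the 4.3 client’s show schedule subcommand:

pgo show schedule -n pgouser1 hippo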

Restore from backup

Having backups is great, and definitely necessary. However, backups without regular restore testing are like having no backups at all! It’s extremely important to test your backups by restoring them regularly. Let’s restore a backup to make sure it restores appropriately.

pgo restore -n pgouser1 hippo

It’s important to note that the primary will be stopped and recreated. The following prompt ensures you’re aware of this by requesting a “yes” or “no” input.

Warning: If currently running, the primary database in this cluster will be stopped and recreated as part of this workflow!

WARNING: Are you sure? (yes/no): yes

After selecting “yes” we can then check the status of the job by running the following command:

oc get job -n pgouser1

NAME                    COMPLETIONS   DURATION   AGE
backrest-backup-hippo   1/1           18s        62s
hippo-stanza-create     1/1           12s        133m
restore-hippo-mxrm      1/1           35s        118s

Our restore was successful! There are a number of different flags that can be passed to the restore command in order to get your desired result for the restore. More information can be found in the PGO Restore section of our documentation.
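For example, a point-in-time recovery can be requested by passing a recovery target; the timestamp below is purely illustrative, and this sketch assumes the 4.3 client’s --pitr-target flag:

pgo restore -n pgouser1 hippo --pitr-target="2020-05-12 12:00:00.000000+00" --backup-opts="--type=time"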

Delete

So we’ve deployed and tested numerous pgo command line examples. The Crunchy PostgreSQL Operator is extremely robust and has loads of features to experiment with! At this point we’ve walked through the beginning steps and have a good idea of how to get started. Let’s delete the cluster. In the event you’d like to keep the data, you can always pass the “--keep-data” flag so that the PVC is kept, as shown below. In this example, we’ll simply delete the data.
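For reference, the keep-data variant would look like this (we won’t run it here):

pgo delete cluster hippo -n pgouser1 --keep-data

To delete the cluster and its data, run: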

pgo delete cluster hippo -n pgouser1

As usual, we’ll be prompted to confirm our decision.

WARNING - This will delete ALL OF YOUR DATA, including backups. Proceed? (yes/no): yes
deleted pgcluster hippo

Voila! The cluster is now in the process of being deleted. It may take a few minutes, but you’ll be able to run the following oc commands to ensure the cluster has been deleted successfully.

oc get pod -n pgouser1
oc get pvc -n pgouser1

If you run the commands before the cluster is actually deleted, you’ll see “Terminating” in the STATUS field instead of the following:

No resources found in pgouser1 namespace.

There is much, much more to do within the Crunchy PostgreSQL Operator. This fundamental guide is only the beginning. If the above functionality was as interesting to you as we hope it was, please visit the official documentation to see what else the Operator has to offer!

Written by James Chanco Jr.

July 7, 2020