Blog | Crunchy Data

Performing a Major PostgreSQL Upgrade with pg_dumpall

November 26, 2018

PostgreSQL Upgrading

For most major upgrades, using a utility such as pg_upgrade or a replication tool such as pglogical will be the best solution. However, if those options are not available, pg_dumpall can be used to perform a major upgrade. What follows is a guide on how you can safely upgrade your database to a newer version of PostgreSQL with pg_dumpall.
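In condensed form, the process looks something like this (a minimal sketch, assuming the old cluster listens on port 5432 and the new, already-initialized cluster listens on port 5433):

# Dump all databases plus global objects (roles, tablespaces)
# from the old cluster.
pg_dumpall -p 5432 -f dumpall.sql

# Restore the dump into the new cluster.
psql -p 5433 -d postgres -f dumpall.sql

Note that pg_dumpall captures cluster-wide objects that pg_dump alone would miss, which is why it is the tool of choice here.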

Won’t You Be My Neighbor? Quickly Finding Who is Nearby

October 10, 2018

PostgreSQL Performance Indexing

Many applications these days want us to know how close we are to things:

  • What are the three closest coffee shops to my current location?
  • Which is the nearest airport to the office?
  • What are the two closest subway stops to the restaurant?

and countless more examples.

Another way of asking these questions is “who are my nearest neighbors?” This maps to a classic algorithmic problem: efficiently finding the K-nearest neighbors (or K-NN), where K is a constant. For example, the first question is a 3-NN problem, as we are trying to find the 3 closest coffee shops.

(If you are interested in learning more about K-NN problems in general, I highly recommend looking at how you can solve this using n-dimensional Voronoi diagrams, a wonderful data structure developed in the field of computational geometry.)

How can we use PostgreSQL to help us quickly find our closest neighbors? Let’s explore.
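As a quick preview, stock PostgreSQL can already answer the first question with the point type, the <-> distance operator, and a GiST index; the table and coordinates below are made up for illustration:

CREATE TABLE coffee_shops (
    id serial PRIMARY KEY,
    name text NOT NULL,
    location point NOT NULL
);

CREATE INDEX coffee_shops_location_idx ON coffee_shops USING gist (location);

-- Find the 3 coffee shops closest to a given point; the GiST index
-- lets the ORDER BY ... <-> ... LIMIT pattern avoid scanning every row.
SELECT name
FROM coffee_shops
ORDER BY location <-> point '(-73.99, 40.73)'
LIMIT 3;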

Why Covering Indexes Are Incredibly Helpful

September 17, 2018

PostgreSQL Performance

The PostgreSQL 11 release is nearly here (maybe in the next couple of weeks?!), and while a lot of the focus will be on the improvements to the overall performance of the system (and rightly so!), it's important to notice some features that, when used appropriately, will provide noticeable performance improvements to your applications.

One example of such a feature is the introduction of "covering indexes" for B-tree indexes. A covering index allows a user to perform an index-only scan if the select list in the query matches the columns that are included in the index. You can specify the additional columns for the index using the "INCLUDE" keyword, e.g.

CREATE INDEX a_b_idx ON x (a,b) INCLUDE (c);

Theoretically, this can reduce the amount of I/O your query needs to use in order to retrieve information (traditionally, I/O is the biggest bottleneck on database systems). Additionally, the data types included in a covering index do not need to be B-tree indexable; you can add any data type to the INCLUDE part of a CREATE INDEX statement.
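For example, given the index above, a query whose select list stays within (a, b, c) can be answered from the index alone; the filter values here are arbitrary placeholders:

-- Can use an index-only scan on a_b_idx, since column c is stored in
-- the index itself (assuming the visibility map is reasonably current).
EXPLAIN (ANALYZE, BUFFERS)
SELECT a, b, c
FROM x
WHERE a = 42 AND b = 7;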

However, you still need to be careful about how you deploy covering indexes: each column you add to the index still takes up space on disk, and there is still a cost to maintaining the index, for example, on row updates.

Understanding these trade-offs, you can still apply covering indexes in ways that significantly help your applications.

A Simple Example: Tracking Coffee Shop Visits

Using the CIS PostgreSQL Benchmark to Enhance Your Security

August 01, 2018

PostgreSQL Security

Crunchy Data recently announced the publication of the CIS PostgreSQL Benchmark by the Center for Internet Security, a nonprofit organization that provides publications around standards and best practices for securing technology systems. The CIS PostgreSQL Benchmark builds on earlier work that Crunchy started when it helped publish the PostgreSQL Security Technical Implementation Guide (PostgreSQL STIG), and it provides guidance and steps to help secure your PostgreSQL databases.

What is a CIS Benchmark?

A CIS Benchmark is a set of guidelines and best practices for securely configuring a target system. Authoring a CIS Benchmark is a collaborative process: CIS conducts considerable peer review and discussion before a major version is published, to ensure there is general consensus on the best practices for deploying a secure system.

Using Kubernetes Deployments for Running PostgreSQL

Running PostgreSQL databases in containerized environments is more popular than ever, moving beyond local development environments and into large-scale production environments. To answer the need to orchestrate complex database workloads, the Crunchy Data team created the PostgreSQL Operator to automate many typical database administrator tasks at scale:


  • Provisioning new PostgreSQL clusters
  • Scaling up replicas
  • Setting up and managing disaster recovery, high availability, and monitoring
  • Allocating resources (memory, CPU, etc.) and suggesting nodes for databases to run on
  • Applying user policies en masse
  • Performing major/minor upgrades

and more. To keep all of its PostgreSQL databases up and running, the PostgreSQL Operator uses Kubernetes Deployments, which provide an API for managing replicated applications. To understand why, we first need to understand how Kubernetes handles stateful applications.
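For context, a Deployment is simply a declarative specification for a set of identically replicated pods. A stripped-down manifest for a single PostgreSQL pod might look like the sketch below; this is a generic illustration using the stock postgres image, not a manifest the Operator itself generates:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-postgres
spec:
  replicas: 1            # Kubernetes keeps exactly this many pods running
  selector:
    matchLabels:
      app: example-postgres
  template:
    metadata:
      labels:
        app: example-postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10
        env:
        - name: POSTGRES_PASSWORD
          value: changeme   # illustration only; use a Secret in practice
        ports:
        - containerPort: 5432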

Troubleshooting PostgreSQL Streaming Replication Synchronization

Having replicas in a database environment can fulfill many goals: creating a high-availability environment, load balancing read-only queries across several nodes, keeping a read-only database for analytical queries, and more. Streaming replication was introduced in PostgreSQL 9.0 almost 8 years ago to help satisfy these requirements, and through the years the community has added many features to continue enhancing the replication user experience.

There are many different ways to configure streaming replication in PostgreSQL, but fundamentally you need to consider if you want to have asynchronous or synchronous replication. With asynchronous replication, the default in PostgreSQL, a transaction is considered complete when it is committed on the primary database; the replica databases will replay and consume the transaction on their own time. With synchronous replication, the transaction will not be considered complete until it has been committed on all of the replicas (though PostgreSQL 10 introduced “quorum commit” which allows you to configure how many replicas need to commit the transaction before it’s considered complete).
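As a concrete illustration of quorum commit, the following sketch (with hypothetical standby names) asks PostgreSQL 10 to wait for any two of three standbys, then checks each replica's status from the primary:

-- Require any 2 of the 3 named standbys to confirm each commit.
ALTER SYSTEM SET synchronous_standby_names = 'ANY 2 (replica1, replica2, replica3)';
SELECT pg_reload_conf();

-- On the primary: one row per connected standby, including whether it
-- is currently participating in synchronous replication.
SELECT application_name, state, sync_state
FROM pg_stat_replication;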

Choosing which form of replication you need for your environment is out of scope for this article. Rather, we will look at what happens when a replica becomes out of sync with a primary, what to do about it, and how you can avoid the situation. Credit to Doug Hunley for originally compiling this guide.

Detecting An Out-of-Sync Replica

Upgrading PostgreSQL from 9.4 to 10.3 with pglogical

April 22, 2018

Managing PostgreSQL

I recently helped a customer upgrade a PostgreSQL instance from 9.4 on RHEL to 10.x on Ubuntu. While it initially sounded daunting, the use of pglogical and some planning actually made it rather straightforward. While there's nothing new or original in this post, I still felt compelled to write it up for anyone else who might find the info useful as an example in their own endeavors.

pglogical is a logical replication system implemented entirely as a PostgreSQL extension. Fully integrated, it requires no triggers or external programs. This makes it faster than Slony, Londiste, et al. It is also (roughly) the basis upon which logical replication in PostgreSQL 10 core is built.
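At a high level, pglogical is driven entirely through SQL functions. A compressed sketch of the setup looks something like this; the node names and DSNs are placeholders, and a real migration also needs configuration changes (wal_level, shared_preload_libraries, pg_hba.conf) on both ends:

-- On the 9.4 provider:
CREATE EXTENSION pglogical;
SELECT pglogical.create_node(
    node_name := 'provider',
    dsn := 'host=old-server dbname=appdb'
);
-- Replicate every table in the public schema.
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

-- On the PostgreSQL 10 subscriber:
CREATE EXTENSION pglogical;
SELECT pglogical.create_node(
    node_name := 'subscriber',
    dsn := 'host=new-server dbname=appdb'
);
SELECT pglogical.create_subscription(
    subscription_name := 'upgrade_sub',
    provider_dsn := 'host=old-server dbname=appdb'
);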

How to Perform Failover with the PostgreSQL Kubernetes Operator

Crunchy Data recently released version 2.6 of the PostgreSQL Kubernetes Operator, a powerful controller built on the Kubernetes Operator pattern that enables users to provision and manage thousands of PostgreSQL databases at scale. We thought we would demonstrate some of the features in this latest version over the next few weeks, including support for manual database failover, storage selection, node (or server) selection preference, and many other goodies that make it easier to manage a large PostgreSQL cluster.

Fast CSV and JSON Ingestion in PostgreSQL with COPY

April 03, 2018

PostgreSQL

If you have been asked to provide a CSV that someone can open up in their favorite spreadsheet editor, chances are you have used the PostgreSQL COPY command. COPY has been around since the early open source releases of PostgreSQL back in the late 1990s and was designed to quickly get data in and out of PostgreSQL.

COPY is also incredibly helpful for ingesting data into a table, especially if you have a lot of it to ingest, and will generally outperform INSERT. Let’s explore a few ways to use COPY to load some data into a table.
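To give a flavor of both formats up front, here is the rough shape; the file paths and table definitions are hypothetical, and the JSON variant assumes one JSON document per input line:

-- Load a CSV file with a header row.
CREATE TABLE visits (shop_id int, visited_at timestamptz, amount numeric);
COPY visits FROM '/data/visits.csv' WITH (FORMAT csv, HEADER);

-- Ingest newline-delimited JSON into a jsonb column. Note that in the
-- default text format, backslashes in the input are treated as escapes.
CREATE TABLE raw_events (data jsonb);
COPY raw_events (data) FROM '/data/events.json';

Keep in mind that COPY FROM a server-side path requires appropriate privileges; from psql, \copy reads a client-side file instead.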


An Easy Recipe for Creating a PostgreSQL Cluster with Docker Swarm

One of the biggest benefits of running PostgreSQL is running your cluster in a primary-replica setup for high availability or for load balancing your read-only queries. It is not necessarily simple to deploy a primary-replica setup out of the box, but by using modern containerization technology, we can greatly simplify the process.

In this article, I will demonstrate how to easily set up and deploy a PostgreSQL primary-replica cluster using Docker and Docker Swarm.
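To preview the moving parts, the Swarm side reduces to an overlay network plus one service per role. The sketch below uses the stock postgres image as a stand-in and shows only the Swarm mechanics; the stock image will not replicate by itself without the configuration the article walks through:

# Initialize a swarm and create a shared overlay network.
docker swarm init
docker network create --driver overlay pgnet

# One service for the primary, one scalable service for replicas.
docker service create --name pg-primary --network pgnet \
  -e POSTGRES_PASSWORD=changeme postgres:10
docker service create --name pg-replica --network pgnet --replicas 2 \
  -e POSTGRES_PASSWORD=changeme postgres:10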